Researcher calls to shut down and ban AI worldwide.

Discussion in 'Off-Topic Discussions' started by Lerner, Apr 1, 2023.

  1. Lerner

    Lerner Well-Known Member

An AI researcher who has been warning about the technology for over 20 years says we should "shut it all down."

    https://www.yahoo.com/finance/news/ai-researcher-warning-technology-over-114317785.html
An AI researcher warned that "literally everyone on Earth will die" if AI development isn't shut down. iLexx/Getty Images
    • One AI researcher who has been warning about the tech for over 20 years said to "shut it all down."

    • Eliezer Yudkowsky said the open letter calling for a pause on AI development doesn't go far enough.

    • Yudkowsky, who has been described as an "AI doomer," suggested an "indefinite and worldwide" ban.
An AI researcher who has warned about the dangers of the technology since the early 2000s said we should "shut it all down" in an alarming op-ed published by Time on Wednesday.

    Eliezer Yudkowsky, a researcher and author who has been working on Artificial General Intelligence since 2001, wrote the article in response to an open letter from many big names in the tech world, which called for a moratorium on AI development for six months.
     
  2. Maniac Craniac

    Maniac Craniac Moderator Staff Member

    The rapid development of AI is concerning. I don't think any of us- ANY of us- are ready for the changes ahead. However, if there's one thing in the world that is guaranteed to be more dangerous than rapidly developing AI, it's banning the continued development of AI.

    Likelihood that North Korea, China, Russia, Iran, and Saudi Arabia would honor such a ban: 0%.
     
    Rachel83az likes this.
  3. AsianStew

    AsianStew Moderator Staff Member

AI is a good option for those who embrace it and use it for personal or professional benefit, to update their knowledge, or to get things done faster. If AI is used for something else, such as cheating or hacking, then that use should be banned. I don't think any country needs to ban it for ALL uses; it should ban specific uses. Each nation should have its own 'memo' to send out to all departments that may use AI for research, teaching, or whatever else it may be. Similar to cloning or stem cell research, there are boundaries each nation should follow...
     
    Maniac Craniac likes this.
  4. Johann

    Johann Well-Known Member

Criminals (and that includes criminal nations) do not respect boundaries. Rules we observe, they will not. What THEY do will have consequences for all of us. I really don't have a viable solution here. Maybe an AI can find one for us, before it's too late.
     
    Rachel83az and Maniac Craniac like this.
  5. Stanislav

    Stanislav Well-Known Member

Yudkowsky is an interesting, if not completely mainstream, fellow. He espouses a lot of provocative ideas and likes to speculate about future super-human AI and its dangers, among other things; he enjoys a cult-like following in online communities (LessWrong.com). Not exactly a crank, but he has limited influence in actual academic AI.

He's a good writer; if you enjoy such things, check out his Harry Potter fanfic, "Harry Potter and the Methods of Rationality". It's longer than Rowling's work, but a surprisingly good read. I loved both the alternative Harry (a rationalist and a scientist) and the non-dumb (but still evil) Voldemort. "Killing idiots is my great joy in life, and I'll thank you not to speak ill of it until you've tried it for yourself" :emoji_imp: .

I'm totally re-reading this thing. It's chock full of Yudkowsky's pet ideas (mostly taken from real popular science and well worth your while) and is a total author tract, but it also has decent characters (unusual for a fanfic), an actual plot, and is overall a far better read than his essays. Harry Potter and the Methods of Rationality | Petunia married a professor, and Harry grew up reading science and science fiction. (hpmor.com)
     
    SteveFoerster likes this.
  6. Rich Douglas

    Rich Douglas Well-Known Member

    Yada yada yada.

In the advancement of technology, there have always been winners and losers. This is no different. In the case of AI, the winners will be those who get ahead of it or on top of it. The losers will trail behind, wondering what happened.

But get this: it's not all-or-nothing. No, it will evolve right in front of you. Get with it, and who knows what you'll see? Stay behind, and you'll know: eventually, it will gobble you up.
     
    Stanislav likes this.
  7. Johann

    Johann Well-Known Member

    So -- Rich says rah-rah-rah and Yudkowsky says waa-waa-waa? I think you CAN have it both ways - but each only to a certain extent.
    And abolishing AI? Well, we didn't shut down all disease research labs world-wide because of one horrible accident, did we? Or all nuclear reactors because of Chernobyl or Three Mile Island.

    My take: proceed with requisite caution. In most things. Love is the exception.
     
    Last edited: Apr 2, 2023
  8. Stanislav

    Stanislav Well-Known Member

I agree with you. But the Yudkowskian argument is a bit different. If he follows his usual AI-doomer line of thinking, what he's afraid of is AI gaining the ability to improve itself. If it does, it'll eventually grow exponentially smarter and more powerful than the human mind, take over, and, if it turns out not to be properly "aligned", end humanity. I don't think this is true, and at any rate I don't think ChatGPT is anywhere close to that. Humans and corporate greed could do us in much faster, whether using LLMs or some other tech.
     
  9. Stanislav

    Stanislav Well-Known Member

Well, Rich has two doctorates more than Yudkowsky - you know what that means.
    (Actually, not that much. I have a relevant doctorate, but that doesn't necessarily mean I'm smarter. Something can be said for operating within the field, though.)
     
  10. SteveFoerster

    SteveFoerster Resident Gadfly Staff Member

Perhaps, although I can't help but remember that in chess, the time from when a computer could beat some humans to when it could beat every human was shorter than a lot of people expected, including a lot of experts. So when I see something like this....

    https://www.independent.co.uk/tech/chatgpt-gpt4-ai-openai-b2301523.html
     
  11. Johann

    Johann Well-Known Member

    Yeah - it means Yudkowsky is no better than I am. And no more believable. NOBODY believes me - so why should I believe HIM? :)

    Thanks for the encouraging news.
     
  13. Rich Douglas

    Rich Douglas Well-Known Member

    Well...maybe.

    My comment is about the inevitability of it all. The technology clock always runs forward, never in reverse.
     
    Maniac Craniac likes this.
  14. Rich Douglas

    Rich Douglas Well-Known Member

    Nothing as far as I'm concerned.

My degrees do not indicate whether or not I'm "smart." They may or may not mean I'm accomplished or learned. But that's an assessment I leave to others.

    Ironically, the thing I DO know the most about isn't covered by any of my degrees. Go figure.
     
  15. Stanislav

    Stanislav Well-Known Member

To be fair - the guy can show a public body of work that anyone can access to make their own judgment. You and me, less so. One should not overestimate Yudkowsky, whose prestige is highest in the LessWrong community, adjacent corners of the transhumanist Internet, and some of the tech bros. But one should not underestimate him either. He's sort of a younger and more relevant Levicoff, but with more groupies (and the ability to get funding from Peter Thiel).

Anyway - I don't think LLMs, as of now, possess agency, so I don't think the alignment problem or an AI singularity is all that imminent. But this can change rather abruptly.
     
  16. SteveFoerster

    SteveFoerster Resident Gadfly Staff Member

    Similarly, I file this under "serenity to accept the things I cannot change".
     
    Rich Douglas likes this.
  17. Rich Douglas

    Rich Douglas Well-Known Member

    In my coaching, I encourage managers to consider the "CIA" model:
• C: Change. This is the smallest of the three elements. (Think of the center of a set of three concentric circles.) There are very few things I can truly change myself--especially anything involving other people.
• I: Influence. This is where success lives: choosing what I can affect (and how), then executing on it, and knowing the limits of what I can influence and what I must simply accept.
• A: Accept. This is where survival (literally and figuratively) lives. The vast majority of elements in my life I simply must accept as-is. But "accept" doesn't mean "surrender." We can still do things for ourselves in dealing with the things we accept as they are. We're not defenseless. But we're not going to change or influence them.
Like all models, this one oversimplifies the complexities of life. But it can be useful when considering what you're dealing with and what you should do about it.

    "All models are wrong but some are useful" -- George Box
     
    SteveFoerster likes this.
  18. Johann

    Johann Well-Known Member

    Oy! Just...Oy! :(
     
  20. Grand Ma/Pa Moses

    Grand Ma/Pa Moses New Member

No to that nonsense! First, the claims of the AI doom purveyors are overblown. And how about those who have used all the power of AI to enrich themselves for the past 10 years or more? It is now obvious that the core capabilities now open to the general public have been in the hands of a powerful few for a while.
Imo, academia, business, and the medical community should embrace the new tilt toward commoditizing AI. It is about time it got out of the hands of the few who have milked it and managed to make billions.

A more worrying medical advancement is the ability to make sperm and egg cells in the lab from adult male and female muscle or other cells.
     
