Bill Nye vs. the World

Comments (1)
  • TheRealHawkeye: The prospect of true AI is darn worrisome - and apparently, I'm not quite alone in that. http://www.bbc.com/news/technology-30290540 https://futurism.com/2-expert-thinks-ai-will-undoubtably-wipe-out-humanity/ Would true AI be a _massive_ boon to humanity? Absolutely! Is it a sure thing a true AI would wipe out humanity? Of course not! But is the risk worth the reward? With the whole of humanity at stake, my answer is a resounding NO. When I hear/read an AI scientist say stuff like this: "I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realized," said Rollo Carpenter, creator of Cleverbot - I feel the wish to punch him in the face. Because what he is essentially saying is that he _believes_ AIs won't take over within the next few decades (I'd really like him to give me an estimated percentage chance of this still happening), that he is willing to bet the whole of humanity on his "believes", and that after all, this will not be an immediate problem and could only affect our children or grandchildren. F****ing idiot!
    • ZZNep (in reply): I highly doubt creating AI will put humanity at stake. The chance that we create something like Skynet out of Terminator and put the world's lives in its hands is completely unlikely. If AI starts getting ideas about wiping out humans, we just wipe its memory, or shut it down. It's that easy.
      • TheRealHawkeye (in reply): Probably, but what would you guess are the chances that the AI succeeds? 10%, 5%, 1%? If it were only the scientists'/developers' lives at stake, hey, it's your ass. But we are not _talking_ about just the devs. Are you willing to bet the whole of humanity on a 1% chance? What about 0.1%? Now, let's assume we actually manage to get a kinda-AI working. What would be the next step? We use that "kinda AI" to help develop the next, improved AI. And then we use that AI to develop the next one, again improved. And then comes the big step: the point where our new AI, on its own, develops its own improved replacement. And if over the whole process a flaw in the programming slipped through, we are screwed. So, would such a flaw slip through? How many computer programs do you know _today_ that are free of bugs? If your answer is 0, then we might have a wee bit of a problem. Note: Errors/flaws in programming don't have to show up at once. It can take a loooooong time until the right circumstances come together to trigger the bug. And by that time, the AI could control a whole lot of stuff already. Sorry, the risk-reward calculation comes _solidly_ down on the "too much risk" side for me.
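The point about flaws that stay dormant until rare circumstances align can be illustrated with a toy sketch. The function below is invented purely for illustration - it runs cleanly on every ordinary input and only fails when two unlikely conditions coincide:

```python
def next_backup_delay(day_of_month: int, free_space_gb: float) -> float:
    """Return hours to wait before the next backup run."""
    if day_of_month == 31 and free_space_gb < 1.0:
        # A latent bug: int() truncates 0.5 to 0, so this divides by zero.
        # It can survive years of testing, since it needs BOTH a 31-day
        # month's last day AND under 1 GB of free space to trigger.
        return 24 / int(free_space_gb)
    return 24.0

# Behaves correctly on every everyday input...
assert next_backup_delay(15, 500.0) == 24.0
assert next_backup_delay(31, 200.0) == 24.0

# ...but crashes (ZeroDivisionError) on the 31st with 0.5 GB free.
```

A bug like this passes any test suite that never combines both rare inputs, which is the commenter's point: absence of visible failures is not absence of flaws.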