Skype Co-Founder Is “DESPERATE” To Save Humanity From AI

by Mac Slavo | May 22, 2019 | Headline News


    The co-founder of Skype, Jaan Tallinn, is on a desperate mission to save the human race from destruction by artificial intelligence. Since 2007, Tallinn has dedicated more than $1 million to preventing super-smart AIs from replacing humans as Earth’s dominant species and destroying humanity in the process.

    According to an interesting Popular Science article, the programmer discovered AI researcher Eliezer Yudkowsky’s essay “Staring into the Singularity” in 2007, two years after cashing in his Skype shares following the startup’s sale to eBay.  That’s when Tallinn started pouring money into the cause of saving humanity from AI.

    So far, [Tallinn has] given more than $600,000 to the Machine Intelligence Research Institute, the nonprofit where Yudkowsky is a research fellow. He’s also given $310,000 to the University of Oxford’s Future of Humanity Institute, which PopSci quotes him as calling “the most interesting place in the universe.” –Futurism

    It’s a lofty goal, and the donations may not be having much effect yet, but Tallinn is strategic about them. He spreads his money among 11 organizations, each working on a different approach to AI safety, in the hope that one might stick. In 2012, he co-founded the Cambridge Centre for the Study of Existential Risk (CSER) with an initial outlay of close to $200,000.

    Tallinn says that super-intelligent AI brings unique threats to the human race. Ultimately, he hopes that the AI community might follow the lead of the anti-nuclear movement in the 1940s. In the wake of the bombings of Hiroshima and Nagasaki, scientists realized what a destructive force nuclear weapons had become and joined together to try to limit further nuclear testing. “The Manhattan Project scientists could have said, ‘Look, we are doing innovation here, and innovation is always good, so let’s just plunge ahead,’” he tells me. “But they were more responsible than that.”

    Tallinn says that we need to take responsibility for what we create, and that AI, once it reaches the singularity, has the potential to overpower and outsmart human beings. If an AI is sufficiently smart, he explains, it might understand the constraints placed on it better than its creators do. Imagine, he says, “waking up in a prison built by a bunch of blind 5-year-olds.” That, he suggests, is what confinement by humans could feel like to a super-intelligent AI.

