Theory of the End
Stephen Hawking is one of the most respected minds in science today. He often speaks on a wide range of topics both within and outside of his particular expertise in theoretical physics. Some of Hawking's most discussed topics include the search for alien life, climate change, artificial intelligence (AI), and how all of these things, and more, could spell the end of humanity once and for all.
Speaking at an event at Cambridge University last year, Hawking said, "Our earth is becoming too small for us, global population is increasing at an alarming rate and we are in danger of self-destructing."
He recognizes the pessimism of his assertions. Hawking cites the controversial "Brexit" plan for the UK to leave the EU as one reason for this enduring pessimism, saying that if the measure passes, "...I would not be optimistic about the long-term outlook for our species."
"Spot On"
Durwood Zaelke is the founder and President of the Institute for Governance & Sustainable Development (IGSD), an organization with a mission "to promote just and sustainable societies and to protect the environment by advancing the understanding, development, and implementation of effective, and accountable systems of governance for sustainable development." Zaelke spoke to Futurism about Hawking's comments.
"Mr. Hawking is spot on," he began. "We’re chasing a fast-moving—indeed an accelerating problem of climate change, with slow-moving solutions, and we're getting further behind every day." As populations continue to boom, the detrimental environmental impact of humanity will only continue to worsen.
Like Hawking, Zaelke believes that we are close to the tipping point where climate change becomes irreversible. Speaking in dire terms, Zaelke said that "...we’ll soon face climate-driven chaos that will threaten our very civilization and our democratic form of government, while the fear of chaos feeds authoritarian regimes."
In order to ensure the survival of our species, Hawking suggests humanity move beyond the confines of this planet. In June, Hawking told the BBC that "Spreading out may be the only thing that saves us from ourselves. I am convinced that humans need to leave Earth." Colonizing other areas of the Solar System could help relieve some of this population pressure and, in turn, mitigate carbon emissions. Yet whatever potential this idea has, Hawking cites another threat to humankind that there just might be no running away from.
AI-nihilation
Hawking often speaks about the development of artificial intelligence (AI) as the true perpetrator of the eventual demise of human beings. In an interview with WIRED, Hawking said, "The genie is out of the bottle. I fear that AI may replace humans altogether."
Hawking fears that we will develop AI that is too competent: "A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble." Other big names in science and tech share Hawking's trepidation.
Elon Musk, founder of OpenAI and CEO of SpaceX and Tesla, also has concerns about the destructive potential of AI. Musk's rhetoric is more moderate than Hawking's (though he did say that AI poses more of a risk than North Korea), and it focuses on the need to regulate the development of AI systems. “AI just something that I think anything that represents a risk to the public deserves at least insight from the government because one of the mandates of the government is the public well-being,” Musk said.
Hawking believes that “some form of world government” should have control of the technology to make sure the machines don't rise up and rebel like their terminating fictional counterparts.
However, these fears could be disproportionate to reality. Speaking to Futurism about all of this fear surrounding the development of AI, Pascal Kaufmann, the founder of Starmind and president of the synthetic intelligence development initiative the Mindfire Foundation, denies the likelihood of AI developing into a threat. "It is fascinating that when it comes to AI, our first thoughts often go towards being enslaved by our creations. That perspective makes for entertaining movies, but it does not mean that reality is doomed to walk the same path, or that it is the likely scenario," he explained.
Kaufmann, however, does not deny the potential for destructive AI. "There are dangers which come with the creation of such powerful and omniscient technology, just as there are dangers with anything that is powerful. This does not mean we should assume the worst and make potentially detrimental decisions now based on that fear."
Perhaps there is some way to trade the irrational fears for rational ones. Hawking certainly is right about the threat of climate change. Maybe if humans were more concerned with the scientific fact of climate change than the science fiction of killer robot overlords, we could start making real progress in getting our planet back on track.