Photo: Comfreak / Pixabay

Transcending the Turing Test

In June it was reported that a computer program known as Eugene Goostman had become the first machine in history to pass the Turing test. The test measures a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human, so Eugene’s reported pass was hailed as an important milestone in the world of artificial intelligence (AI).

However, this is just the tip of the iceberg. From Google pioneering self-driving cars to computers that are unbeatable in poker, you would be forgiven for being concerned that maybe AI is becoming, well, too intelligent. We have all seen the Terminator films – are we heading towards an apocalyptic scenario where robots and super computers overthrow mankind?

The Future of Life Institute is a volunteer-run organization, made up of numerous scientists and big thinkers including Stephen Hawking, Elon Musk and Morgan Freeman, which aims to counter existential threats that could lead to the extinction of humanity.

They are currently focusing on the risk that artificial intelligence presents to our society. In response to this threat, the institute recently published an open letter warning the public about the misuse of AI. In the letter, they note that “because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls” and suggest that expanded research be carried out so that, in their words, “our AI systems must do what we want them to do”.

The message of this open letter is clear: while AI could be hugely beneficial to the human race in tackling global issues such as poverty and disease, it could also pose a massive threat if we do not take the time to ensure these new intelligent machines remain firmly under our control. The warning appears to have resonated with the public, as the letter has attracted hundreds of signatures, including those of several researchers, scientists and professors in the fields of robotics and AI.

Photo: D J Shin / Wikimedia Commons

But is this just being a tad pessimistic? What exactly would happen if we didn’t properly research our future AI? In an interview with the BBC last December, Professor Stephen Hawking grimly warned that “The development of full artificial intelligence could spell the end of the human race”. He went on to detail a possible apocalyptic scenario in which AI “would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded”. His warning highlights the necessity of ensuring that future AI does not become an uncontrollable force.

Elon Musk, CEO of SpaceX, has also commented on the potential dangers of AI. In October, he warned that we should be careful with AI, stating that he was becoming “increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish”. Without national and international regulation, it would take only one rogue corporation creating self-replicating machines to threaten the human race. As Musk summarised: “With artificial intelligence we are summoning the demon”.

Concerns over the impact of AI are also apparent in the world of film. Upcoming titles such as Ex Machina and Chappie are the latest films to explore the potential social and cultural consequences of achieving true artificial intelligence. But while AI and sentient machines have always been a popular theme within films, it’s only recently that their potential threat has begun to be taken seriously. Perhaps that’s because it’s only a matter of time before the issues explored in such films become a reality we have to face.

However, exactly when we will achieve human-level AI is up for debate. While machines such as Eugene Goostman may have passed the Turing test, they cannot be considered intelligent in the same way human beings are. Until a machine can genuinely think and understand abstract concepts, AI is unlikely to surpass human intelligence. But if we ever do reach the stage where we can create machines that can truly think, we will be entering unknown and unpredictable territory. Before that happens, we need to follow the advice of the Future of Life Institute and ensure we can control our AI systems – or the consequences may be devastating.
