Last month, a group of the world’s most renowned scientists and technology entrepreneurs signed an open letter from The Future of Life Institute warning of the potential dangers that unchecked artificial intelligence could bring.
The institute cautioned that, while AI has the potential to do good, such as eradicating disease and poverty, the risks too often go unconsidered as private companies pour millions of dollars into rapidly accelerating research programs.
While a Terminator-style apocalypse may seem far-fetched, examples of artificial intelligence are already all around us. Siri and similar mobile personal assistants, virtual agents on customer service websites, and even non-playable characters in video games are all present-day AI applications.
However, while no one could reasonably suggest that AI is currently a threat to human existence, it is the technology’s unknown future that is concerning, particularly as artificial intelligence becomes more capable. As The Future of Life Institute warns, "Our AI systems must do what we want them to do".
While it is easy to dismiss talk of the AI threat as misinformed nonsense or scaremongering, it is a view gaining some high-profile support. Professor Stephen Hawking recently claimed that AI could "spell the end of the human race", while Bill Gates echoed his sentiment late last month.
"I am in the camp that is concerned about super intelligence", said the Microsoft founder. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern".
Already, autonomous machines are being used to aid military campaigns. While the US prohibits fully automated lethal robots, the Navy is developing semi-automated drone boats that can swarm enemy targets, and the US Army is working on automated convoys to deliver supplies in a war zone.
However, the development of AI weaponry raises a number of moral, ethical and legal problems.
"Autonomous weapons systems cannot be guaranteed to predictably comply with international law", Prof Sharkey told the BBC. "Nations aren’t talking to each other about this, which poses a big risk to humanity".
Recently, however, nations have begun talking about the issue. The United Nations held a discussion on the matter at its Convention on Certain Conventional Weapons (CCW) last year, while a recently released Oxford University paper has brought serious academic debate to the subject.
Aside from the threat of physical violence, the mass deployment of AI robots would have huge socio-economic impacts, the likes of which we are currently unprepared for. Already robots are taking the place of humans in a number of manufacturing roles, but if AI develops sufficiently, we could see the same approach taken in other areas of employment.
However, while we must exercise caution against an AI-dominated future, the present is far from threatening. Artificial intelligence as it stands today often requires human involvement, and the prospect of a fully automated robot holding a detailed conversation with a human being is still some way off. That said, progress in this area is rapidly accelerating.
Last year, it was announced that a computer program had become the first example of AI passing the Turing test. Devised by World War Two codebreaker Alan Turing, the test is a benchmark for machines displaying intelligent behavior indistinguishable from that of a human. Eugene, as the program was named, was able to convince 33 percent of judges that it was actually a 13-year-old Ukrainian boy. While critics have dismissed that persona as being far easier to mimic than a full-grown adult’s, the feat is impressive nonetheless and points towards the technology’s remarkable advancement.
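For readers wondering how such a result is scored, the sketch below is a minimal, illustrative Python example, assuming the commonly cited interpretation of Turing’s prediction that a machine "passes" if it convinces more than 30 percent of judges in short text conversations that it is human. The verdict data and the 30 percent threshold here are assumptions for illustration, not details taken from the Eugene event itself.

```python
# Illustrative sketch of how a Turing-test result might be tallied.
# Assumption: a machine "passes" if it fools more than 30% of judges
# (the threshold commonly cited in coverage of the 2014 test).

def turing_test_result(verdicts, pass_threshold=0.30):
    """Return the fraction of judges fooled and whether the machine passed.

    verdicts: list of booleans, True if a judge believed the machine was human.
    """
    fooled = sum(verdicts) / len(verdicts)
    return fooled, fooled > pass_threshold

# Hypothetical data: 10 of 30 judges are fooled (33%), mirroring the
# percentage reported for Eugene.
verdicts = [True] * 10 + [False] * 20
fraction, passed = turing_test_result(verdicts)
print(f"{fraction:.0%} of judges fooled; passed: {passed}")
```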
As with any new and developing technology, scientists are right to speculate about both the good and bad potential outcomes, whether that’s regarding nuclear power, the Internet of Things or AI. As such, the debate surrounding artificial intelligence is an important one, but at this stage of the technology’s development, many will be rightly asking if it is also premature.
Is artificial intelligence a threat to humanity? Not yet, but it is the responsibility of every scientist, researcher or engineer to ask themselves this question when developing such a transformative technology.