AI cyber attacks could be used to hijack driverless cars and spread fake news.
Don’t panic! In a Doomsday-style warning, a group of international experts has said rogue states and terrorists could turn to artificial intelligence (AI) to destabilise the world. Crikey!
What are they saying about AI cyber attacks?
A new report by 26 experts on AI, security and technology suggests that unless preparations are made against the malicious use of the technology, cybercrime will rapidly increase in years to come.
The report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, also warns of the rise of “highly believable fake videos” impersonating prominent figures or faking events in order to manipulate public opinion around political events.
It forecasts artificially intelligent bots being used to manipulate the news agenda, social media and elections, as well as the hijacking of drones and autonomous vehicles.
AI software features prominently in modern life: it powers virtual assistants and many smartphone features, underpins driverless car technology, and is used on an industrial scale to process large amounts of data.
AI is a game changer
Report co-author Dr Sean O hEigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk, said: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years.
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real.
“There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.
“For many decades hype outstripped fact in terms of AI and machine learning. No longer.
“This report looks at the practices that just don’t work anymore and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”
The report urges policy makers and researchers to work together to understand and prepare for how the technology could be used maliciously, and calls for developers to be proactive and mindful of how it could be misused.
Those who contributed to the study include the Elon Musk-founded non-profit research firm OpenAI and international digital rights group the Electronic Frontier Foundation.
Tesla and SpaceX founder Mr Musk is a prominent voice on the dangers of the misuse of artificial intelligence, warning it could threaten the existence of humans if allowed to grow too rapidly.
Several prominent technology figures, including Facebook boss Mark Zuckerberg, have, however, spoken out in favour of artificial intelligence.
Mr Zuckerberg said last year he was “optimistic” about the future application of the technology. He has an ally in Microsoft founder Bill Gates, who told college students in the US last week that “AI can be our friend” through its ability to help humans “produce a lot more goods and services with less labour”.