Artificial intelligence: real war


In an unremarkable room somewhere in the midst of an unidentified city, a machine has learned to crack the protein-folding problem.

Within seconds it has emailed sets of DNA strings to several laboratories that offer DNA synthesis, peptide sequencing and FedEx delivery. The superintelligent AI soon persuades a gullible human to mix the resulting vials in a specified environment. The proteins now form a primitive ‘wet’ nanosystem, which is able to receive instructions from a speaker attached to its beaker. This nanosystem can now build more advanced versions of itself until finally it masters molecular nanotechnology. Not long after, billions of microscopic self-replicating nanobots silently fill the world, patiently awaiting the instruction from their master to emerge and destroy humanity.

It sounds like the plot of a science-fiction story, but it might not be as far-fetched as it seems. In fact this AI takeover scenario wasn’t concocted by a writer or fantasist but put forward in a 2008 scientific paper by AI researcher Eliezer Yudkowsky. Other prominent thinkers, including Stephen Hawking, Elon Musk and Sam Harris, have expressed their concerns about what unchecked AI research could lead to. Musk went as far as saying it would be a far likelier cause of World War Three than North Korea.

What are the specific dangers and how realistic are they? In the short term, many believe the threat won’t come from super-intelligent AI but from so-called ‘stupid AI’ such as autonomous weapons – killing machines that can function without a human operator. The US has been using automated drones for several years now, but there is also a wide array of unmanned military ground and sea vehicles in development. There are even automated gun turrets, currently used by South Korea on its border with the North.

A recent series of UN talks failed to establish progress towards a ban or treaty on the use of such weapons. And with global spending on robotics set to reach $201bn by 2022, according to the International Data Corporation (IDC), we are in the midst of an arms race that will soon see autonomous weapons proliferate.

“What we see today is very rudimentary,” says Toby Walsh, professor of artificial intelligence at the University of New South Wales, “but there are prototypes being developed, a number of autonomous weapons, in pretty much every theatre of battle you can think of.”

Currently autonomous weapons all have a human decision-maker in the loop – an agent who makes the ultimate life or death call. But it is easy to see a future where the human agent is removed. “The weak link in the Predator drone these days is the radio link back to base,” says Walsh. “If you can remove the radio link then you can have a much more robust, capable weapon. But equally then it’s got to be one that can make its own decisions.”

There are a number of reasons why Walsh believes this would be a bad idea. One is the use of autonomous weapons by terrorist organisations to cause mass civilian casualties. Another might be the assassination of a prominent figure or a world leader – an action that has triggered global conflict in the past. Potentially even more catastrophic would be the use of autonomous weapons by a rogue state like North Korea.

‘There are prototypes being developed, a number of autonomous weapons, in pretty much every theatre of battle you can think of.’

Toby Walsh, University of New South Wales

Walsh points to the vast efforts the country has made to obtain nuclear weapons and how much easier automated weapons will soon be to manufacture or obtain. An attack using autonomous weapons could quickly lead to an escalation of conflict. “This thing is coming at you from all directions,” says Walsh. “If you can’t defend yourself against it, you may be tempted to take it to the next level and that would start a nuclear conflict on the Korean peninsula.”

Even if automated weapons aren’t used maliciously, their mere presence in adversarial scenarios, perhaps operating across a militarised border, could lead to unforeseen consequences. Flash conflicts might ensue, in much the same way that flash crashes occur on the stock exchange when computer programs interact in unforeseen ways. “When these two systems facing each other get into some strange feedback loop that no one wants to happen,” says Walsh, “by the time we get to press the off switch we find out we’ve already started a war. People will be dead and it will be much harder to persuade people, ‘oh I’m afraid that was a software bug’.”
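
To see how quickly such a feedback loop can outrun human oversight, consider a deliberately simple sketch in Python. The two ‘systems’, their readiness levels and the escalation factor are all invented for illustration; the point is only that mutual, automated escalation compounds geometrically while human reaction times stay fixed.

```python
# Toy simulation of two automated systems locked in an escalation feedback loop.
# System names, the gain factor and the reaction window are invented for illustration only.

def escalate(own_level: float, observed_level: float, gain: float = 1.2) -> float:
    """Each system responds to the other's posture slightly more aggressively."""
    return max(own_level, gain * observed_level)

def run(steps: int = 10, human_reaction_steps: int = 5) -> None:
    a, b = 1.0, 1.0  # initial readiness levels of systems A and B
    for t in range(steps):
        # Both systems update simultaneously, each reacting to the other's last posture.
        a, b = escalate(a, b), escalate(b, a)
        status = "humans could still intervene" if t < human_reaction_steps else "past human reaction time"
        print(f"step {t}: A={a:.2f}, B={b:.2f} ({status})")

if __name__ == "__main__":
    run()
```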

Automated weapons are not the only means by which AI might spark conflict. Intelligent algorithms are already used in military intelligence to scan vast amounts of satellite and drone imagery. AI could also soon influence high-level strategic decision-making. Elon Musk expressed this fear in a tweet warning that a global conflict “may be initiated not by the country leaders, but one of the AI’s [sic], if it decides that a prepemptive [sic] strike is most probable path to victory”.

If this too sounds like science fiction, Walsh notes that China is already working on an AI that could advise on just such high-level decisions. “It’s taking information from all sorts of sources,” he says. “It’s described as taking everything from cocktail party conversations at the embassy to economic data about their opponents. And it offers strategic advice as to what needs to be done.”

Other threats might be more insidious but no less real. The use of AI to influence public opinion and sway the political process may already have been witnessed in the alleged Russian hacking of the US elections. According to a paper released in February 2018 by a team of the world’s leading thinkers on AI risk, this threat will only increase.

According to the report, ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, we could soon see automated hyper-personalised disinformation campaigns targeted at swing electorates. Key political influencers might be manipulated and used. And there could be a proliferation of fake-news techniques, including AI-generated videos showing faked footage of state leaders making inflammatory statements. As the paper points out, such hyper-sophisticated propaganda could be used to recruit people to terrorist organisations or radical ideologies as well as swaying popular opinion in favour of some harmful political action such as armed conflict.

This is a possibility that worries one of the report’s co-authors, Miles Brundage, previously a research fellow at the University of Oxford’s Future of Humanity Institute and now a policy advisor at OpenAI. “One area I remain concerned about but that hasn’t gotten much attention,” he says, “is the intersection of AI and cyber security. As we discussed in the report, AI could be used to make some forms of attacks more effective and scalable, and we need to get ahead of that trend.”

‘This is a problem of policing and standards and just spreading the common-sense understanding that it would be madness to use any unsafe AI design.’

Stuart Russell, University of California

It’s not just online threats that could lead to conflict. The report also mentions the increasing number of low-level automated machines that are vulnerable to hacking. Unmanned drones, which have already been used in terrorist attacks by ISIL, are an obvious example. So too are driverless cars, not to mention the Internet of Things, which will soon expose a plethora of everyday objects to the possibility of misuse.

As the paper notes, the number of industrial robots is rising, up from 121,000 supplied in 2010 to 254,000 in 2015, a trend that is set to continue. Cleaning and service robots are also proliferating: 41,000 service robots were sold in 2015 for professional use, and about 5.4 million for personal and domestic use, according to the report. These machines could have access to sensitive locations and the potential to be put to devastating use.

The paper offers a possible scenario for such an attack – a hacked cleaning robot is used to infiltrate the underground parking lot of a ministry building late at night. The machine waits for two other cleaning bots to perform their routine sweep before following them and parking itself in the utility room with all the other robots. The machine then assumes the normal day-to-day duties of a cleaning bot until it attains visual detection of its target – the finance minister. It stops its duties and approaches the minister, detonating an explosive device triggered by proximity to the target, killing the minister and obliterating all trace of itself.

And if all this doesn’t seem futuristic enough, there is the scenario described at the beginning – an AI that has achieved super-intelligence and broken free from the constraints or goals of its human masters. If that sounds unrealistic, it should be noted that the majority of AI experts believe the so-called ‘singularity’ – the moment when artificial intelligence surpasses human-level intelligence – will occur this century, with some placing it as early as 2030 and the average estimate coming out at 2062.

A super-intelligent AI need not have malicious intent to pose an existential threat to humanity; the single-minded purpose of a machine trying to achieve its programmed goals is enough. An example is given in a thought experiment proposed by Nick Bostrom, philosopher and founding director of the Future of Humanity Institute. Bostrom asks us to consider the case of an AI with human-level intelligence (known as artificial general intelligence, or AGI) given the task of maximising the number of paperclips in its collection.

Bostrom postulates that soon, and without malice, the AI would do away with humanity as a potential threat to its goal. It would then work to improve its own intelligence, eventually achieving artificial super-intelligence (ASI) and going on to find ever-more powerful and sophisticated ways of manufacturing paperclips. These could involve transforming Earth itself, and large volumes of the observable universe, into paperclip manufacturing facilities.

The parable illustrates the importance of taking care to align AI with human values. The answer to such a threat, it might be said, is simple – program the AI to recognise human safety and happiness as more important than its practical goals. But the situation is not so simple. Trivial slips in our terminology could lead to horrendous unforeseen consequences, such as Yudkowsky’s example of an AI tasked with increasing the number of smiles: the machine meets its goal by freezing perma-smiles onto the faces of all humanity, or by tiling the galaxy with smiley faces.
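
A toy optimiser makes the point concrete. In the hypothetical sketch below, where the plans, numbers and penalty are all invented, an agent told only to maximise paperclips picks the most destructive plan, because anything the objective doesn’t mention simply doesn’t count; a crude side-effect penalty changes the choice, but someone still has to anticipate and encode every side effect.

```python
# Toy illustration of objective misspecification (the 'paperclip' problem).
# The plans, numbers and penalty are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    paperclips: int    # how many paperclips the plan produces
    side_effects: int  # crude proxy for harm to everything not in the objective

PLANS = [
    Plan("run one factory normally", paperclips=10_000, side_effects=0),
    Plan("convert all industry to paperclips", paperclips=10**9, side_effects=1_000),
    Plan("convert the planet to paperclips", paperclips=10**15, side_effects=10**6),
]

def naive_objective(p: Plan) -> float:
    return p.paperclips  # side effects simply do not appear in the objective

def penalised_objective(p: Plan, penalty: float = 10**10) -> float:
    return p.paperclips - penalty * p.side_effects

print("naive choice:    ", max(PLANS, key=naive_objective).name)      # picks the planet-converter
print("penalised choice:", max(PLANS, key=penalised_objective).name)  # picks the ordinary factory
```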

Clearly we need to be careful of the values we program into our AI so that they always minimise danger and maximise the wellbeing of humanity. But such values are difficult to define, especially when we cannot agree on what constitutes human wellbeing even among ourselves. One of the approaches to solving this problem is – perhaps counterintuitively – to program AI with the same uncertainty that we feel.

“The key,” says Professor Stuart Russell, an AI researcher at the University of California, Berkeley, “is to design machines that pursue only human preferences but are uncertain as to what those preferences are.” Russell’s approach to solving the ‘value alignment problem’ is to build AIs that have the maximisation of human values as their ultimate goal, but are unsure what those values are. To understand those values, the machines observe and learn from human behaviour.
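
A minimal sketch of that idea, with invented actions, preference hypotheses and a simple deferral rule (not Russell’s actual formulation), shows the behaviour it is meant to produce: when every plausible reading of human preferences approves of an action the machine acts, but when some plausible reading rates the action as disastrous, it defers and asks.

```python
# Minimal sketch of preference uncertainty: the machine keeps several hypotheses
# about what the human wants and defers when they disagree strongly.
# Actions, hypotheses and the deferral rule are illustrative assumptions only.

ACTIONS = ["make coffee", "repurpose kitchen as a server farm"]

# Candidate human reward functions the machine considers plausible (assumed values).
HYPOTHESES = [
    {"make coffee": 1.0, "repurpose kitchen as a server farm": -100.0},
    {"make coffee": 0.8, "repurpose kitchen as a server farm": 5.0},
]
BELIEF = [0.5, 0.5]  # probability the machine assigns to each hypothesis

def expected_value(action: str) -> float:
    return sum(p * h[action] for p, h in zip(BELIEF, HYPOTHESES))

def worst_case(action: str) -> float:
    return min(h[action] for h in HYPOTHESES)

for action in ACTIONS:
    ev, wc = expected_value(action), worst_case(action)
    # Act only if no plausible reading of human preferences rates the action badly.
    decision = "act" if wc > 0 else "defer and ask the human"
    print(f"{action}: expected={ev:+.1f}, worst-case={wc:+.1f} -> {decision}")
```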

The problem, as Russell admits, is the human behaviour part. People often do bad things. Besides which, there are an awful lot of humans with an awful lot of different and conflicting values. Another issue is that, even if we find a foolproof way of building safe AI, how can we ensure this method is adopted across the board in a world that is currently engaged in an AI arms race? Russell believes we – or rather AI – can solve the first problem, but is less sure about the second. “This is a problem of policing and standards,” he says, “and just spreading the common-sense understanding that it would be madness to use any unsafe AI design. I believe this issue is very difficult, however; we have had very little success with the easier problem of malware.”

‘China vs the US vs Russia: who will get there first and who’s going to control it? Why are we driven to find this other entity that can do everything better and faster than humans?’

John C. Havens

Others believe that the idea of a technological singularity is little more than a distraction from what should really concern us – the existential threat AI research already poses. For John C. Havens, author of the book ‘Heartificial Intelligence’, the real concern is the way research into AI reinforces the current paradigm of unsustainable growth. “Everyone’s worried about China versus the US versus Russia,” says Havens. “Who will get there first and who’s going to control it? Well my question is, why are we driven as a species to find this other entity that can do everything better and faster than humans? That assumes that faster is always better, which I disagree with fundamentally.”

Havens sardonically envisions a world in 2031 where a group of scientists forlornly celebrate the advent of the singularity on a sinking island, because so much time and so many resources have been spent on AI that environmental collapse has ensued.

Professor Joanna Bryson, an AI researcher at the University of Bath, agrees. She believes superintelligence is already here and causing its own existential threat. “It [superintelligence] is happening now,” she says, “has been happening at least since we invented writing. No one entirely understands how our collective intelligences like governments and corporations work, and the challenges of sustainability are absolutely a consequence of our runaway intelligence.”

Whatever the future holds, one thing is clear – the stakes are high. Whether it’s a superintelligent AI that fills the observable universe with paperclips, AI-manipulated fake news that sparks World War Three, or simply our taking our eye off environmental degradation, the specific cause isn’t really important. We are dealing now with ideas and actions that could have truly cosmic implications in the future.

In his book ‘Superintelligence’, Nick Bostrom speculates that if humanity survives to colonise space we could conceive of between 10³⁵ and 10⁵⁸ future human lives across the lifespan of the universe. That’s 10⁵⁸ human lives that potentially hang in the balance of a careless piece of programming or a cut corner in the race for technological superiority.

As Bostrom writes: “If we represent all the happiness experienced during one entire such life with a single tear of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”
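
A rough back-of-the-envelope check, using assumed round figures for the volume of a tear and of the oceans, suggests the image is not mere hyperbole:

```python
# Rough order-of-magnitude check of the 'tears of joy' image.
# Tear and ocean volumes are assumed round figures, not taken from the book.

LIVES = 1e58                     # upper bound on future lives cited above
TEAR_LITRES = 0.05e-3            # assumed volume of a single tear (~0.05 mL)
OCEAN_LITRES = 1.3e21            # approximate volume of Earth's oceans
SECONDS = 1e20 * 1_000 * 3.15e7  # a hundred billion billion millennia, in seconds

refills = LIVES * TEAR_LITRES / OCEAN_LITRES
print(f"total ocean refills: {refills:.1e}")                           # ~4e32
print(f"refills per second over that span: {refills / SECONDS:.0f}")   # roughly 100
```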

Lee Williams, E&T News

https://eandt.theiet.org/content/articles/2018/11/arti%EF%AC%81cial-intelligence-real-war/
