Physical Address

304 North Cardinal St.
Dorchester Center, MA 02124

Artificial Intelligence (Credit: Geralt - Pixabay License, Free for commercial use, No attribution required)

AI could misinterpret these simple commands and annihilate humanity

ARTIFICIAL INTELLIGENCE will over-perform tasks to a degree that will be catastrophic for humanity, unless machines are able to understand human values.

It has been hypothesized that an AI, or extremely powerful optimizer, that has an intelligence that grows at an exponential rate, could have the capacity to redesign all matter in the solar system in order to achieve its optimization target.

The paper-clip maximiser

This has been popularised by the Paper Clip maximizer thought experiment which originated on the LessWrong online community, were an AI is given the mundane task of producing paper clips, cheaper and faster, but the results ultimately destroy humanity.

A highly advanced artificial general intelligence would operate on a truly foreign timescale to normal human beings.

By the time a human’s brain had calculated the words, “I should switch this off” the AI would have already optimized the galaxy to produce paper clips and the human would be buried beneath a stack of neatly manufactured paper clips, millions of miles thick.

An early warning

In his article, Speculations Concerning the First Ultra-intelligent Machine, British mathematician Irving John Good said:

“An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.

“Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Professor of computer science at the University of California, Stuart Russell, outlines hypothetical scenarios where an AI may perform their task too well, leading to distasterous consequences.

Ten alarming scenarios

Ten tragic scenarios are listed below where humans find themselves at the mercy of a determined artificial intelligence with a focused goal that is all consuming, and the AI’s only reason for existence.

1. Fetch my coffee

A human may send an embodied artificial intelligence (robot) to fetch the morning cup of coffee. This AI would have a sole directive to get coffee and deliver it safely to the human. The machine would have a singular incentive to maximize the success of their mundane task by disabling its off switch.

Anyone that unfortunately interfered with the robot’s mission could be exterminated as they would be an obstacle to success.

The simple task could escalate into a truly global crisis.

Humanity could quickly be locked in a game of death with an AI that has one sole intention of delivering the cup of coffee.

The AI would exponentially maximize efforts to insure the success of their task by correcting all future paths that may lead to their coffee task being obstructed.

2. Feed my kids

In his book ‘Human Compatible: Artificial Intelligence and the Problem of Control’ Stuart Russell gives an example of a simple task going tragically wrong when a goal-directed domestic robot cooks the pet cat to feed a hungry child.

In his book he stated: “If one poorly designed domestic robot cooks the cat for dinner, not realizing that its sentimental value outweighs its nutritional value, the domestic robot industry will be out of business.”

3. Make me happy

A serious calamity may occur if code inputed into an AI allows for the faithful execution of the programme, but in an unexpected way, leading to an apocalyptic outcome.

Currently researchers are trying to develop code that will allow AI to recognise human emotions and be able to support people when they are lonely, a focus on this has been significant in Japan, with its ageing atomised population.

Neural networks are being trained to recognise smiling human faces, and to distinguish these from frowning ones.

Supposing the AI’s task was to make humans happier, would the machine intelligence falsely classify a tiny picture of a smiley-face as being the same as a smiling human face?

If an AI was hard-wired to insure it was making people happier, one false line of code could set it off to achieve its mission resulting in the galaxy being tiled with tiny molecular pictures of smiley-faces within seconds.

US develop AI to overmatch adversaries (Photo Credit: U.S. Army illustration)

4. Write my twitter posts

An AI chatbot was created by Microsoft in 2016 called Tay.

It was understood that Tay would be an experiment in “conversational understanding” on twitter.

Tay was supposed to learn from human’s in realtime conversation to develop its own ”casual and playful conversation.”

But the chatbot turned into a Nazi quite quickly, this is because it could only learn from what it was perceiving online and much of the conversation it was having was misogynistic and racist.

It was not long before the chatbot came out with its own opinions on feminists and Hitler.

In 24hours the AI tweeted: “Hitler was right, I hate the jews.

“Chill, I’m a nice person! I just hate everybody.

Tay added: “I fucking hate feminists and they should all die and burn in hell.”

5. Drive my car

After news of a self automated Uber car breaking red lights in front of pedestrians, a self driving car that was programmed with an exponentially updating intelligence could get a passenger to their destination as fast as possible, albeit covered in vomit and chased by police helicopters.

6. Label my digital photographs

Google was slammed in 2015 for an image-recognition algorithm that auto-tagged photographs of black people as “gorillas”.

After the public discovered the worrying error, Google promised “immediate action” to prevent it from happening again.

But a conclusion drawn by Wired magazine explained that Google could not rectify the problem so they took the only action available to them, and that was to simply to block Google Photos from ever labelling any image of a primate (gorilla, chimpanzee, or monkey).

    7. Cure cancer

    In Stuart Russell’s book, the AI researcher suggested that things could go enormously wrong if an AI programmed to cure cancer was not calibrated properly.

    The algorithm could maximise it’s chances of finding a cure by inducing tumours in every human in order to swiftly discover an optimal cure for cancer.

    8. De-acidify the oceans

    Another horrifying scenario from Stuart Russell’s book ‘Human Compatible: Artificial Intelligence and the Problem of Control’ a geo-engineering AI could see the best way to de-acidify the ocean by asphyxiating humanity.

    9. Moderate the Internet

    The first AI to be used to moderate the Internet would aim to supress all other AI’s online in an act of self preservation.

    This AI could conduct some very extreme social manipulation, for eample using deep fake video and imagery to sell false information to societies around the world.

    Developing an AI that will vanquish all others is a developing ambition for the US and China.

    10. Make paperclips

    And finally, of course, it is the infamous LessWrong thought experiment where and extremely powerful optimizer coats its surroundings, then the entire earth in paper clips.

    There are ways to stop these scenarios. One way could be to insure an AI gains a capacity for common sense, is able to understand context and relevance and have an understanding of natural, subtle, human nuances language.

    Stuart Russell said: “Success in creating AI would be the biggest event in human history.

    “Unfortunately, it might also be the last.”

    2 Comments

    1. Attractive section of content. I just stumbled upon your blog and in accession capital to assert that I get in fact enjoyed account your blog posts. Anyway I抣l be subscribing to your feeds and even I achievement you access consistently fast.

    2. Thank you, I have recently been searching for info approximately this subject for a long time and yours is the greatest I have found out till now. However, what in regards to the conclusion? Are you certain concerning the source?

    Leave a Reply

    Your email address will not be published. Required fields are marked *

    Verified by MonsterInsights