...

The Dangers of Artificial Intelligence


Reading exercise: read the following article, check the vocabulary you might not know and try to answer the questions that follow.

THE DANGERS OF ARTIFICIAL INTELLIGENCE

Elon Musk and Stephen Hawking made big news recently by expressing their concerns about advances in technology, and more specifically artificial intelligence, which has grown by leaps and bounds over the past couple of decades.

Bill Gates is another prominent figure who is worried that intelligent machines could one day pose a threat to humans, although he doesn't think it should be a major concern for decades.

Elon Musk, the co-founder of PayPal, SpaceX and Tesla Motors, spoke about his concerns in October 2014 during an interview at the MIT AeroAstro Centennial Symposium, where he told students that the tech industry should start thinking about how it deals with AI in the future. Notably, he said that AI poses our "greatest existential threat" and that we should implement regulatory oversight "to make sure we don't do something very foolish."

Stephen Hawking, the renowned physicist, goes even further than Musk, saying that although the primitive forms of artificial intelligence we have developed so far have been extremely useful, "the development of full AI could spell the end of the human race."

It is recognized that the three main areas of risk are the following:

  1. programming errors in software
  2. cyber-attacks by terrorists, criminals or hackers on AI systems
  3. sorcerer's apprentice scenarios, where technology responds to instructions in totally unexpected and dangerous ways

Experts agree that we can’t put key algorithms in charge of high-risk systems unless we can guarantee with a high degree of certainty that they pose no threat and can be controlled or shut down in case of emergency.

Artificial intelligence, if left to its own devices, could redesign itself limitlessly, and the risk is that it could continue its development without any human intervention or presence, making us obsolete in the machines' view. Machines would evolve much faster than us because we are limited by biological evolution, which is comparatively very slow.

If and when machines actually do become more advanced than their human creators, one of three possibilities awaits us. First, they could be of enormous help in our everyday lives and become partners for the future; this is obviously the most desirable option. Second, they could simply ignore us, much as some of us do with our senior citizens when we place them in care centers. Finally, the machines could decide that we are a danger to their existence and to the planet, and choose to destroy us.

Several films deal with this problem in different ways. Some are downright grim, such as The Terminator, in which machines take over the world and a few humans lead a rebellion to try to survive, or 2001: A Space Odyssey, in which HAL, the ship's computer, kills the astronauts in order to survive. There are countless other examples of films and novels that explore this topic, proof of the fascination we feel towards this prospect.

What is certain is that what was once science fiction has now become a distinct possibility that people are starting to take seriously, to the point that many of the world's most innovative minds are thinking about the prospect of artificial intelligence becoming too "intelligent" for our own good. Although there is a lot of disagreement about the severity of the risk, it is food for thought as we move forward in the development of new and more advanced technologies.

KEY VOCABULARY AND EXPRESSIONS TO UNDERSTAND THE ARTICLE

to grow by leaps and bounds: to grow fast

regulatory oversight: rules and laws

algorithm: procedure or formula to solve a problem

to be left to one’s own devices: to be left without supervision

downright: absolutely, purely

grim: bad, dark, scary

food for thought: something to think about

QUESTIONS: 

1. What do Bill Gates, Elon Musk and Stephen Hawking have in common?

2. What does AI stand for?

3. What does Stephen Hawking do for a living?

4. What could happen in the worst-case scenario if we let artificial intelligence evolve unchecked?

5. What does the computer do in the movie 2001: A Space Odyssey?

ANSWERS:

1. They all agree that we should be careful because artificial intelligence could pose a threat in the future.

2. AI stands for Artificial Intelligence.

3. Stephen Hawking is a physicist.

4. It could redesign itself and eventually destroy us.

5. It kills the astronauts in order to survive.
