Consequences of Uses of Computing: Robotics

Robot - a mechanical or virtual intelligent agent that can perform tasks automatically or with guidance, typically by remote control

Robots are becoming an increasingly important part of modern society. They work in factories, fight wars and might one day nurse you in your old age.

Artificial intelligence

Artificial intelligence - a science concerned with the general study of intelligence in all its manifestations, both in living organisms and in present and future machines


A large and important field of computer science is Artificial Intelligence (AI), the study of making intelligent machines. Many robots and computer programs are said to have AI. AI can be summarised by the definition above, with explicit examples including:

Trying to get machines to perform very specific tasks, e.g.

  • recognition of faces or other things in pictures
  • automatic translation of written or spoken words from one language to another
  • controlling processes like landing aeroplanes, optimising a chemical plant or power station
  • vacuuming rooms
  • computer opponents in video games
  • building things in factories

You might even have some ideas for how AI can be used for your A2 project next year.

Factory automation with industrial robots for material handling in flat glass industry

Some AI systems can even learn from experience, meaning that you don't have to explicitly program them to respond to each and every situation (a minimal sketch of this idea appears below). This kind of AI starts to pose some very big questions for humanity. Is there really a difference between the intelligence of a human being and that of a program? We'll look into this a little below.
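As a rough illustration of learning from experience, the sketch below shows a perceptron working out the logical AND rule purely from labelled examples; the data, learning rate and number of passes are all invented for this example rather than taken from any real system.

```python
# A minimal sketch (invented for illustration, not from any real robot) of
# 'learning from experience': a perceptron works out the logical AND rule
# from labelled examples instead of being given an explicit if/else rule.

examples = [  # (inputs, desired output)
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Learn by repeatedly nudging the weights whenever a prediction is wrong.
for _ in range(20):
    for inputs, target in examples:
        error = target - predict(inputs)
        bias += learning_rate * error
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x

for inputs, _ in examples:
    print(inputs, "->", predict(inputs))  # now matches AND without being told the rule
```

The point is that the final behaviour comes from the examples the program has seen, not from a programmer writing out a response for every case.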

What are machines good and bad at, in comparison to humans?

  1. Machines are good at doing tasks repeatedly (think about car manufacturing robots), as they don't get tired or make mistakes.
  2. Machines are seen to be bad at making judgements they haven't been built to make, showing sympathy, inventing things, and so on. But if we built machines smart enough, couldn't we build these capabilities in?

What can this tell us about the way that the human mind works?

There are many scientists and philosophers who believe that computers will one day become as intelligent as humans. But there is a question about what 'intelligence' really means. If it is just performing tasks well, then there are computers that can compose music, or sweep a road, or fly a plane, or solve maths equations better than most humans. We can even get computers to display emotions such as sympathy and anger. Does this mean that we can fully recreate how the mind works?

In 1950 Alan Turing, an early pioneer in computer science, proposed a test for machine intelligence. If an interrogator could hold conversations with both a human being and a computer AI program, and be unable to tell which was which, then the computer could be seen to be as intelligent as a human. This is known as the Turing Test.

Player C, the interrogator, is tasked with trying to determine which player - A or B - is a computer and which is a human. The interrogator is limited to written responses so that they cannot judge on appearance.
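To make the set-up concrete, here is a toy sketch of the imitation game in Python; the players, questions and canned answers are all invented for illustration, and no real chatbot is involved.

```python
# A toy sketch of the imitation game; everything here is invented for
# illustration and is not a real chatbot or a real Turing Test.
import random

def human_player(question):
    # Stand-in for a real person typing their answers.
    return "Hmm, I'd have to think about that for a moment."

def machine_player(question):
    # A very simple program: canned responses keyed on the question text.
    canned = {
        "Do you enjoy poetry?": "Yes, I enjoy poetry very much.",
        "What is 7 times 8?": "56",
    }
    return canned.get(question, "That is an interesting question.")

# Hide which player is which behind the anonymous labels A and B.
roles = [human_player, machine_player]
random.shuffle(roles)
players = dict(zip("AB", roles))

for question in ["Do you enjoy poetry?", "What is 7 times 8?"]:
    for label in "AB":
        print(label, "answers:", players[label](question))

# The interrogator (player C) now guesses which of A and B is the machine.
# If, over many such conversations, they can do no better than chance, the
# machine is said to have passed this (greatly simplified) test.
```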

In 1980 the philosopher John Searle posed a thought experiment that some see as proving machines cannot understand what they are doing. The Chinese Room is a sealed room in which a man sits. He does not speak Chinese at all, but is passed Chinese characters under the door. He has a book of Chinese characters and their matching responses. On receiving a character he looks it up in the book and sends the corresponding character in reply. At no point does he understand what he is doing; he just follows the instructions. AI can be considered to be just like this: however complex the code, all it is doing is responding to inputs with set outputs, and there is no understanding present.
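That last point can be caricatured in a few lines of code: the 'rule book' is just a lookup table, and the responder produces plausible replies while attaching no meaning to anything it handles. The characters and replies below are placeholders chosen purely for illustration.

```python
# A caricature of the Chinese Room: the 'rule book' is a lookup table, and
# the person in the room just matches symbols to replies. The entries are
# placeholders invented for illustration.
rule_book = {
    "你好": "你好！",              # a greeting answered with a greeting
    "你会说中文吗？": "会。",       # "Can you speak Chinese?" answered "Yes."
}

def person_in_room(symbols):
    # Follows the instructions mechanically: find the input in the book and
    # pass back whatever it says. No meaning is attached to either side.
    return rule_book.get(symbols, "？")

print(person_in_room("你好"))            # looks fluent from outside the room
print(person_in_room("你会说中文吗？"))  # ...yet nothing inside 'understands'
```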


A similar argument was made by Stanley Jaki in 1969. He proposed that AI is a little like a drainpipe down which two water droplets roll: at the bottom they combine to form a larger droplet, but at no point does the drainpipe understand what has happened. He argues that human beings possess this understanding, whilst machines do not.

However, many philosophers and scientists see intelligence and understanding as nothing more than complex algorithms responding to stimuli. Is there really a 'me' that 'understands' and what exactly is it? Could our mind be reduced to a set of algorithms?

Extension: Philosophy of Mind

What can we learn from machines?

As machines are expendable, they allow us to experiment on and simulate human beings without putting real people at risk. For example, machines are used by the army to simulate the damage a human being would receive from the detonation of an Improvised Explosive Device (IED). This allows us to design vehicles and clothing better able to protect soldiers.

Testing on machines allows us to better prepare for dangerous situations

What are the limitations of using machines as tools?

If you create a machine without emotions and without the ability to acquire emotions, for example a car manufacturing robot, then there are some important limitations on how it can be used.

If you were to work next to a robot in a factory and you started not feeling very well, the machine would be very unlikely to be programmed to feel any sympathy, and would most probably not change its work routine to accommodate your changing circumstances. However, a robot could in principle be programmed with such features.

Articulated industrial robot operating in a foundry

Machines in most cases lack the ability to adapt to new situations, being stuck with the code they have been given, and unable to see safety problems when carrying out their routine. The first robot-caused death was in 1979 when a robotic arm struck Robert Williams, a worker at a metal casting plant in the USA.
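As a hedged sketch of the kind of interlock such incidents prompted (not based on any real controller; every name and value below is an assumption made for this example), a robot arm might refuse to move whenever a detected person is inside its danger zone:

```python
# A simplified sketch, not a real industrial controller: refuse to move the
# arm whenever a detected person is within its danger zone. All values and
# names here are invented for illustration.

DANGER_RADIUS_M = 2.0  # illustrative figure only

def person_in_danger_zone(person_positions, arm_position):
    """Return True if any detected person is within DANGER_RADIUS_M of the arm."""
    ax, ay = arm_position
    return any(
        ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5 < DANGER_RADIUS_M
        for px, py in person_positions
    )

def move_arm(target, person_positions, arm_position=(0.0, 0.0)):
    if person_in_danger_zone(person_positions, arm_position):
        return "EMERGENCY STOP: person detected in danger zone"
    return f"moving arm to {target}"

print(move_arm((1.0, 1.0), person_positions=[(5.0, 5.0)]))  # clear -> moves
print(move_arm((1.0, 1.0), person_positions=[(0.5, 1.0)]))  # too close -> stops
```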

There might also be problems on the horizon if AI produces machines as 'intelligent' as us. In this situation they would have no such limitations; they would be just like us. Isaac Asimov saw this problem and defined three laws of robotics to make sure that robots and humans can live together in peace (a small sketch of the laws as a prioritised rule check follows the list).

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
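As a playful sketch of how the laws form a prioritised rule set (the flags describing an action below are invented purely for illustration), they could be checked in priority order, so that the First Law is always considered before the Second and Third:

```python
# A playful, simplified sketch of the Three Laws as a prioritised rule check.
# The flags describing an action are invented purely for illustration.

def violated_law(action):
    """Return the number of the highest-priority law the action breaks, or None."""
    # First Law: no harm to humans, by action or by inaction.
    if action["harms_human"] or action["allows_human_harm"]:
        return 1
    # Second Law: obey human orders, unless that conflicts with the First Law.
    if action["disobeys_human_order"]:
        return 2
    # Third Law: protect own existence, unless that conflicts with Laws 1 or 2.
    if action["endangers_self"]:
        return 3
    return None

# Because the checks run in priority order, an action that breaks several laws
# is always reported against the most important one first.
reckless = {
    "harms_human": True,
    "allows_human_harm": False,
    "disobeys_human_order": False,
    "endangers_self": True,
}
print(violated_law(reckless))  # 1
```
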
Exercise: Robotics

Name a task that a robot would be suitable to do

Answer:


  • Building cars - don't get tired doing repetitive tasks, don't make mistakes
  • Aerial attack drones - doesn't matter if they are shot down, and they can fly for longer without making mistakes

NOT calculators, doing maths, hosting websites, etc. Why do you need a robot for that?! Surely a calculator or computer program would suffice?

What are robots better at than humans, and why?

Answer:

Repetitive tasks; they don't get tired or make mistakes

Do you believe that robots can ever be as intelligent as humans?

Answer:

This is an essay question that I'd like you to give some serious consideration to

What limitations might a machine have when used to perform a task? Give an example of where something might go wrong

Answer:


It might not have been programmed to deal with a scenario it comes up against.

  • a robotic arm not realising someone is within its danger zone, resulting in that person being crushed