Future Robotic Ethics


Introduction

Because robotics is part of the larger, broader field of technology, it makes sense that its ethics grows, expands, and advances alongside it. In addition to Asimov's founding laws of robotics, new laws and ideals are being added to extend them for future applications, such as the three principles of combat robots below (a minimal sketch of how such rules might be encoded follows the list):

  1. A combat robot may kill enemies but must never kill anyone on its own side.
  2. A combat robot must follow the commands of friendly forces, but it need not follow an order that is out of line.
  3. A combat robot must defend itself as long as doing so does not violate the first and second principles.
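
To make these principles concrete, here is a minimal Python sketch of how such rules might be encoded as an order filter. The function, field names, and data structure are invented for illustration and do not come from any real weapon system.

    # Hypothetical sketch: the three combat-robot principles as an order filter.
    # All names and fields here are invented for illustration.

    def order_permitted(order, issuer_is_friendly):
        """Check a proposed order against the three principles."""
        # Principle 1: a combat robot must never kill its own side.
        if order["action"] == "engage" and order["target_side"] == "friendly":
            return False
        # Principle 2: obey friendly commanders, but refuse out-of-line orders.
        if not issuer_is_friendly or order.get("unlawful", False):
            return False
        # Principle 3: self-defense is allowed only when the checks above pass.
        return True

    # A lawful order from a friendly commander against an enemy target passes:
    print(order_permitted({"action": "engage", "target_side": "enemy"}, True))  # True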

Establishing Ethical Mechanisms

At present, there are two research approaches to ethical decision-making in artificial intelligence: "top-down" and "bottom-up"[1].

Top-Down Method

Top-down research codes ethical principles or a moral theory directly into the guidance system of an intelligent machine; a self-driving car, for instance, would make moral choices by following an embedded moral-philosophical procedure. However, as critics have pointed out, the moral standards that suit some people may not suit others, and it is difficult to choose between competing moral systems, a debate that traditional ethics has never settled. Deep Blue illustrates the underlying approach: the first shock of artificial intelligence was IBM's Deep Blue program defeating the world chess champion. Deep Blue used a computer to simulate the rational reasoning of a human chess player, demonstrating that this symbolic, rule-driven path is feasible.
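
As a concrete illustration of the top-down idea, the following minimal Python sketch hard-codes a couple of moral rules into a vehicle's action-selection logic. The rules, state fields, and actions are hypothetical examples, not any manufacturer's actual policy.

    # Top-down sketch: ethical principles are written directly into the
    # guidance logic as (condition, forbidden action) rules.

    RULES = [
        (lambda s: s["pedestrian_ahead"], "accelerate"),          # never speed toward a pedestrian
        (lambda s: s["occupant_at_risk"], "swerve_into_barrier"),  # never sacrifice the occupant
    ]

    def choose_action(state, candidates):
        """Return the first candidate action that violates no coded rule."""
        for action in candidates:
            if not any(cond(state) and action == banned for cond, banned in RULES):
                return action
        return "brake"  # fallback when every candidate is ruled out

    state = {"pedestrian_ahead": True, "occupant_at_risk": False}
    print(choose_action(state, ["accelerate", "brake"]))  # -> "brake"

The difficulty critics raise shows up directly in such code: someone must decide which rules go in the list and how conflicts between them are resolved.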

Bottom-Up Method

The bottom-up approach claims that an intelligent machine does not need any embedded moral rules or moral philosophy; it only needs to 'observe' and analyze a large amount of human behavior data in real situations to learn how to make moral decisions. This reliance on big data and machine learning has been widely adopted in the non-ethical aspects of autonomous vehicles; for example, autonomous vehicles learn to drive by analyzing dozens of hours of manual driving data. However, critics point out that a machine learning to behave ethically may pick up behavior that merely looks ethical: in the face of an unexpected situation, a driver's conditioned stress reflex may enter the training data as if it were a judicious and worthwhile moral decision. In this way, the intelligent car learns only what is common, not what is moral. Again, game-playing programs provide the example: the second shock of artificial intelligence came from the deep learning of neural networks. AlphaGo abandoned the programming methods of traditional Go programs and creatively used machine learning to acquire the experience and intuition of a player, and the result beat the world Go champion. More notable still, AlphaGo Zero started from scratch and, through 36 hours of self-learning, surpassed 3,000 years of accumulated human Go experience and beat the previous version of AlphaGo 100 games to 0. This proves that the second road is also feasible.

Both research approaches face various difficulties, which stem mainly from the internal structure of human moral philosophy rather than from the technology itself. When we consider embedding a moral norm or value standard into a smart machine, which should take priority: the moral consensus of the community, or the personal preferences of the machine's users? In 2016, a study published in Science revealed that most people support smart machines protecting the greater number of people; when it comes to the safety of themselves or their families, however, people may switch to the opposite moral standard.[2][3]
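
To make the contrast with the top-down sketch concrete, here is a minimal bottom-up sketch in Python: instead of coded rules, the system imitates whatever the human demonstrations did in the most similar recorded situation. The features and demonstration data are invented for illustration.

    # Bottom-up sketch: 1-nearest-neighbour imitation of human driving data.

    demonstrations = [
        # (distance_to_pedestrian_m, speed_kmh) -> action the human driver took
        ((5.0, 30.0), "brake"),
        ((50.0, 60.0), "maintain"),
        ((10.0, 80.0), "brake"),
        ((40.0, 40.0), "maintain"),
    ]

    def learned_action(situation):
        """Act as the closest human example did; no moral rule is coded anywhere."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, action = min(demonstrations, key=lambda d: sq_dist(d[0], situation))
        return action

    print(learned_action((8.0, 70.0)))  # -> "brake", copied from the nearest sample

The critics' objection is visible here too: the system reproduces whatever is common in the data, including panicked reflexes, with no way to distinguish a common reaction from a moral one.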

Future Development in Robotics

Robotics in the future is concentrated in three categories: androids, cyborgs, and humanoids. An android is an artificial human made to resemble a person not only in appearance but also in action and intelligence, and it is covered with artificial skin. A cyborg is a creation in which an organism, whether a human being or an animal, is incorporated into machinery. A humanoid is a robot whose shape resembles a human body, with a head, trunk, arms, and legs; it is called a humanoid robot because it is the type of robot that can best imitate human behavior. ASIMO, developed by Honda in Japan, and HUBO, developed by Korea's KAIST, are typical humanoid robots, although their exteriors are harder than an android's artificial skin.

Autonomous Systems and their Application

Autonomous system applications have made a huge impact on the world through achievements in the field of robotics. Through the creation of drones, advances have been made possible in other fields such as agriculture, marine technology, and scientific research. "Persons who authorize the use of, direct the use of or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE)".[4] Those who write such ethics policies must take many considerations into account before delivering a product to the public to ensure that no one will misuse it. Autonomous systems offer an aggressive approach to economic development, and the development of such systems has therefore progressed rapidly. They have the potential to bring about just as many negative impacts as positive ones, so laws around their development and manufacturing need to be implemented to give these systems regulation and standards. Autonomous systems clearly need to be stable, accurate, and accountable so that they are used in ways that provide the greatest good for the greatest number.

Ethics and a Code of Conduct

Along with the invention of high-tech autonomous machines, many changes to rules and regulations have had to be framed. "Moreover, stability analysis for non-autonomous systems usually requires specific and quite different tools from the autonomous ones (systems with constant coefficients)".[5] Among these machines, drones are one of the most developed and most researched areas of A.I., and researchers are finding more and more ways to use them in the day-to-day world. So far, drones have been successful in the areas in which they are used and have enriched those areas of study. Because of this success they are in growing demand, and as drones expand into new areas, many additional protocols must be followed before they can be put to use. A large percentage of drones are used by the military and for defense purposes around national borders. Since A.I. is a vast field, it must pass through many different rules and regulations, taking into consideration requirements that fit all the departments involved. Ethical oversight is necessary to make the best use of autonomous technology, particularly for drones, which are achieving greater heights day by day; to that end, their use must be filtered through aviation-ministry rules, defense-ministry protocols, and other military commands.

Autonomous Vehicles

Recently, driver-safety features rolled out by vehicle manufacturers have come to perform ever more complex tasks. Today, vehicles can alert drivers to hazards on the road, apply the brakes, steer around turns, and even park themselves in busy parking lots. Companies such as Tesla, Inc. have even released a form of Artificial Intelligence (A.I.) that will pilot a vehicle to a destination set by the driver. The self-driving vehicle was once just a fantasy of those fascinated with science fiction and the 'vehicle of tomorrow.' While a vehicle that drives itself sounds like an addition of convenience and safety, significant consideration must be given to the ethical and legal issues surrounding A.I. and autonomous driving.

Trolley Car Dilemma

The trolley car dilemma can spark discussion about how autonomous vehicles should make ethical and moral decisions. For example, if an autonomous vehicle is traveling down the road and a pedestrian suddenly steps into the street, two choices can be made. One, the vehicle could strike and injure, or possibly kill, the pedestrian in an attempt to protect the driver. Two, the vehicle could swerve and crash into a guardrail, possibly injuring or killing the driver. We must ask "who exactly should have the power to determine who lives and who dies…Should it be left up to the manufacturer of the vehicle in question?…Or, ultimately, perhaps it should rest with the individual".[6]

Liability

The Driver

In a system where a human remains involved in driving the vehicle, Tesla, Inc.'s AutoPilot being one example, the liability and the decision would lie with the driver, because the driver's continued attention is required for the AutoPilot system to be engaged and operate during the trip. The driver retains the ability to take control of the vehicle and use their own decision-making process to determine the outcome, an idea aligned with the current legal responsibility of operating a motor vehicle. While the A.I. is in control for most of the drive, in an emergency the system requires the driver to take over and operate the vehicle. This has implications that could change the adoption rate of self-driving technology: since the driver must remain aware of their surroundings and potentially make decisions, the vision of letting the vehicle navigate itself while the driver performs other tasks, such as putting on makeup, reading the news, or even sleeping, is negated. We must also consider uses for driverless technology beyond convenience, such as infrastructure for people with disabilities who would not normally be able to pilot a motor vehicle; this technology would give those who in the past relied on caretakers to reach their destinations the freedom to be mobile. A form of automation that requires driver input is not feasible for drivers who could not take over control of the vehicle, and they would instead rely on the development of completely driverless autonomy.
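
The division of responsibility described above can be pictured as a simple control loop in which the system drives only while the human supervises. The following Python sketch shows the general pattern of such driver-supervised autonomy; it is an illustration of the concept, not Tesla's actual AutoPilot logic.

    # Sketch of one tick of a driver-supervised autonomy loop (hypothetical).

    def supervised_autonomy_step(driver_attentive, emergency_detected):
        """Decide who controls the vehicle on this control-loop tick."""
        if emergency_detected:
            return "handover_to_driver"  # the human must take over and decide
        if not driver_attentive:
            return "alert_driver"        # warn, then disengage if ignored
        return "system_drives"           # A.I. steers under human supervision

    print(supervised_autonomy_step(driver_attentive=True, emergency_detected=False))
    # -> "system_drives"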

The Manufacturer

In a system with no interaction at all from the driver, where the vehicle is completely in control of the decisions made, the liability and the decision would lie with the vehicle manufacturer, since the driver has no control over the situation and a predetermined decision would be implemented by the A.I. A very important consideration here is how the vehicle would make the aforementioned decision: the manufacturer would have to code into the A.I.'s software how to react and decide in various emergency situations. The decision algorithm imposed by the manufacturer would then apply to all drivers of the vehicle, and it could make a decision that does not align with a given driver's values. When expanding the idea of the manufacturer as the liable party, consideration must be given to the concept of product liability. Under product liability, the manufacturer is liable for damage caused by its product if that damage is caused by a defect of the product, but only if the manufacturer knew of the defect, or should have known of it, and it was within its control to avoid it. Therefore, under product liability, the manufacturer would have to ensure extensive testing of the A.I. used in the autonomous system and of the other mechanical vehicle systems to avoid legal issues.
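
A hypothetical sketch of what such a manufacturer-coded emergency policy might look like follows; none of these names or thresholds comes from a real manufacturer's software, and the point is simply that one hard-coded trade-off applies to every driver.

    # Hypothetical manufacturer-coded emergency policy, applied to all vehicles.

    def emergency_decision(pedestrian_risk, occupant_risk):
        """Pick the maneuver the manufacturer decided minimizes expected harm."""
        if pedestrian_risk > occupant_risk:
            return "swerve"          # accept occupant risk to spare the pedestrian
        return "brake_straight"      # otherwise keep course and protect the occupant

    # Every owner inherits this choice, whether or not it matches their values.
    print(emergency_decision(pedestrian_risk=0.9, occupant_risk=0.3))  # -> "swerve"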

The Legal System

Lastly, we could rely on lawmakers to determine the ethical and legal responsibilities of self-driving vehicles. This would allow for the most concise and uniform creation of rules and regulations; such uniformity would remove liability from the manufacturer and provide a template for the judicial system when legal cases arise over personal injury and property damage caused by an autonomous vehicle. Ideally, the legislation would be updated to accurately reflect current trends in technology and to govern changes to autonomous technology in the future. Through the drafting of legislation around self-driving vehicles, citizens could democratically vote on and pass the regulations they agree with. This legislation would have to determine which party's best interest comes first: is it better to protect the individual on the street with no protection, or the individual within the vehicle, who has built-in safety features such as airbags? Such an approach would not escape criticism from individuals who do not wholeheartedly agree with lawmakers' views on how their vehicles should operate.

Open Robotics Initiative Survey

A survey of individuals regarding autonomous vehicles and their decisions, performed by the Open Robotics Initiative[7], shows that 64% of those surveyed would rather the vehicle harm the pedestrian than themselves (the driver). The survey also asked the same individuals who they believe should make the decision: 44% felt that, as the driver, they should make it, 33% felt that lawmakers should, 12% felt the vehicle manufacturer should, and 11% felt some "other" entity should decide. The survey noted that the individuals who voted for lawmakers to make the ethical decision were under the age of 44, while those aged 45 and over did not support government intervention.

Issues Surrounding Ethics in the Future

Safety

The most important concern is safety. Robots were once developed only for industrial and military use, but now they are used by ordinary people: robot vacuum cleaners and lawn mowers are already widespread at home, and robot toys are popular with children. As these robots become more intelligent, it is becoming unclear who attacks or harms first. Should designers be held accountable? Should the user be responsible? Must the robots themselves take responsibility? Robots include physical robots that can be touched and digital robots that cannot; digital robots are as complex as any computer program. Suppose, for example, that you make financial decisions with the help of a digital robot. If this intelligent expert software robot makes a decision that produces a huge loss, who is responsible for it?

A Robot’s Right?

The second serious problem concerns the second of Asimov's laws, under which a robot must obey human orders unconditionally; yet the ambiguity of human natural language makes it difficult for a robot to determine what it has actually been commanded to do, and by whom. Moreover, although Asimov's three laws emphasize only human safety, the problem becomes more serious if the robot has the capacity to feel. If a robot feels pain, does that grant it special rights? If robots possess emotions, should they be given the right to marry humans? Should a robot be granted personal intellectual property or ownership? These may seem like distant questions, but they are not so far removed from today's debates over animal rights and abuse prevention, and robots that can have sex with humans have already emerged and become a major issue in society.

Automation through Robotics

Factory robots fulfill a wide range of tasks, from assembling products and welding and cutting parts to moving inventory to specified locations. Robots are increasingly preferred for such jobs because they allow for round-the-clock work: these systems do not need to shut down regularly or take breaks. Another benefit of factory automation is that the autonomous systems performing these tasks do not draw wages. While there is an initial cost to purchase a robot, as well as ongoing costs to maintain it, this typically costs less than hiring and paying a human to perform the same task. Robots also produce few to no errors in production because they are tuned to do very specific jobs and to do them rhythmically; this lack of errors, combined with their speed, increases the production of goods and yields cheaper, more consistent products.
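
A back-of-the-envelope Python calculation illustrates why the economics favor automation. All of the figures below are invented for illustration; actual costs vary widely by industry and robot.

    # Hypothetical break-even calculation for a factory robot (invented figures).

    robot_purchase = 250_000        # one-time purchase cost, dollars
    robot_upkeep_per_year = 20_000  # maintenance and power
    worker_cost_per_year = 55_000   # wages and benefits for one shift
    shifts_replaced = 3             # round-the-clock work replaces three shifts

    yearly_saving = worker_cost_per_year * shifts_replaced - robot_upkeep_per_year
    print(f"Break-even after {robot_purchase / yearly_saving:.1f} years")
    # -> Break-even after 1.7 years, under these assumed numbers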

How Automation of Labor Will Affect the Job Market

People fear this change because automation could undermine job security; one widely cited forecast predicted that robots would eliminate 6% of all jobs in the US by 2021. On a positive note, the incorporation of robots will also create jobs, since there will be a need to design, manufacture, program, manage, and maintain these robots and systems. Another major benefit is the elimination of tedious, mundane, repetitive, and potentially dangerous work, allowing people to focus on more important tasks rather than being held back by time-consuming ones.

Job Displacement

Autonomous systems and machines are gradually reducing the need for low-skilled labor. There are many reasons for this, but the main factors are the money companies save by not hiring workers and the increased productivity and efficiency these systems offer. The low-skilled labor affected consists mainly of jobs that are highly routine and repetitive. Economists Daron Acemoglu and Pascual Restrepo conclude in a study that "Predictably, the major categories experiencing substantial declines are routine manual occupations, blue-collar workers, operators and assembly workers, and machinists and transport workers".[8] These jobs are at the highest risk of being replaced by automated machines and robots not necessarily because they are easy, but because they are highly uniform and repetitive, which makes machines easy to program for them: no real decisions must be made, just the same task repeated over and over.

Job Creation

Overall, though, the jobs lost to these machines will lead to other jobs being created, so the shift is more a job displacement than a job loss. As more and more companies seek to implement automated systems for low-skilled tasks, they will need more people to build, program, and maintain those systems. Kevin Maney, a best-selling author and prominent economics reporter, wrote the following in 2016:

   	"The robotization of work will eat into more knowledge-based jobs. Low-level accounting
   	will get eaten by software. So, will basic writing: Bloomberg already uses AI to write
   	company earnings reports. Robots today can be better stock traders than humans. It won’t
   	be long before you’ll be able to contact an AI doctor via your smartphone, talk to it about
   	your symptoms, use your camera to show it anything it wants to see and get a triage
   	diagnosis that tells you to either take a couple of Advil or get to a specialist".[9]

As the need for these complex automated systems increases, so will the need for the high-skilled workers who create them. These factors lead many to believe that all labor will eventually be displaced into the technology field, though all agree that low-skilled jobs will be the first to go. For now the displacement is gradual, and digital-economics researchers at MIT state that "the era of mass technological unemployment is not imminent".[10] Many companies are far from being able to implement the systems needed to automate a mass amount of labor on a large scale.

References

  1. Sun, B.-X. (2017, September 11). How artificial intelligence makes ethical decisions. Guangming Daily. Retrieved from http://epaper.gmw.cn/gmrb/html/2017-09/11/nw.D110000gmrb_20170911_2-08.htm
  2. Didier, C., Duan, W., Dupuy, J.-P., Guston, D. H., Liu, Y., Cerezo, J. A. L., … Woodhouse, E. J. (2015). Acknowledging AI’s dark side. Science, 349(6252), 1064.
  3. IBM. (n.d.). Deep Blue. Retrieved from http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
  4. Automating the right stuff? The hidden ramifications of ensuring autonomous aerial weapon systems comply with international humanitarian law. Air Force Law Review, 72, 85–122.
  5. Finite-time stability of linear non-autonomous systems with time-varying delays. Advances in Difference Equations, 2018(1), 1–10. doi:10.1186/s13662-018-1557-3
  6. Belay, N. (2015). Robot ethics and self-driving cars: How ethical determinations in software will require a new legal framework. Journal of the Legal Profession, 40(1), 119–130.
  7. Open Robotics Initiative. (2014, June 23). If death by autonomous car is unavoidable, who should die? Reader poll results. Retrieved February 19, 2018, from http://robohub.org/if-a-death-by-an-autonomous-car-is-unavoidable-who-should-die-results-from-our-reader-poll/
  8. Robots do destroy jobs and lower wages, says new study. The Verge.
  9. Maney, K. (2016). How artificial intelligence and robots will radically transform the economy. Newsweek, 167(21), 31–37.
  10. Human work in the robotic future: Policy for the age of automation. Foreign Affairs, 95(4), 139–150.