
Driverless Cars

1) Definition in context

2) How algorithms use evidence: Bayes' theorem

Rapid technological advancement is facilitating a broad movement toward driverless transportation: in June 2011 the U.S. state of Nevada passed a law permitting the operation of driverless cars within the state. Algorithms in self-driving cars use input evidence to achieve this.[1]

Surrendering primary control to an algorithm draws parallels with the practice of evidence-based healthcare, if only in that an optimal human outcome is a central aim. Both practices operate largely on the basis of Bayes' theorem (D'Agostini, G. 1995).[2] Simply put, Bayes' theorem offers a systematic way to update one's belief in a hypothesis on the basis of the evidence presented. For example, a doctor may prescribe certain medication on the basis of a repertoire of qualitative and quantitative evidence. Similarly, Google's driverless cars make use of Google Street View and artificial intelligence software that fuses evidence from cameras, a laser sensor (which measures distance) and radar sensors (which use radio waves to detect distant obstacles). The implicit challenge remains in both cases: what action is taken when the data sources disagree, for example when one source deems the next movement safe but the other does not?
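As a concrete illustration, the following Python sketch applies Bayes' theorem to a single hypothesis ("the path ahead is clear") and fuses a camera reading with a conflicting radar reading. All probabilities here are invented for illustration; they are not real sensor characteristics.

```python
# A minimal sketch of Bayesian updating for one hypothesis
# ("the path ahead is clear"), fusing two sensor readings.
# Sequential updating like this assumes the sensors are
# conditionally independent given the hypothesis.

def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Return P(H | observation) via Bayes' theorem."""
    numerator = p_obs_given_h * prior
    evidence = numerator + p_obs_given_not_h * (1.0 - prior)
    return numerator / evidence

# Start from an uninformed prior that the path is clear.
belief = 0.5

# Camera reports "clear": assume it says so 90% of the time when the
# path really is clear, but also 20% of the time when it is not.
belief = bayes_update(belief, p_obs_given_h=0.9, p_obs_given_not_h=0.2)

# Radar reports "obstacle": assume a 15% false alarm rate on a clear
# path, and an 85% detection rate when an obstacle is really there.
belief = bayes_update(belief, p_obs_given_h=0.15, p_obs_given_not_h=0.85)

print(f"P(clear | camera says clear, radar says obstacle) = {belief:.2f}")
```

With these invented numbers the posterior lands near 0.44: the conflicting radar report drags the belief back below the prior, leaving exactly the kind of intermediate, undecided state that the disagreement problem concerns.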

More recently, convolutional neural networks (CNNs) have been trained to map pixels from a front-facing camera directly to steering commands (Bojarski, M. et al. 2016).[3] The researchers argue that because the system does not operate on human-selected criteria, as earlier approaches did by using markers such as lane detection and road signs, its internal components are self-optimising rather than in disagreement.
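To give a sense of what "pixels in, steering out" means in practice, here is a compact PyTorch sketch loosely following the convolutional architecture reported in the paper (five convolutional layers feeding a small fully connected regressor). It is an illustration under stated assumptions, not the authors' implementation; training, data collection and preprocessing are all omitted.

```python
# A compact end-to-end steering network, loosely following the
# architecture reported by Bojarski et al. (2016). Illustrative only.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),  # 64x1x18 for a 66x200 input
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # one steering value, e.g. inverse turning radius
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))

# One 66x200 camera frame in, one steering command out; there are no
# hand-engineered lane or road-sign detectors anywhere in the pipeline.
frame = torch.randn(1, 3, 66, 200)
steering = SteeringNet()(frame)
print(steering.shape)  # torch.Size([1, 1])
```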


3) Policy-making and risk assessment: the precautionary principle

The definition of evidence as "that which justifies belief"[4] points to its use in informed decision- and policy-making when assessing risk. Policy-makers analyse the available data in a way that allows them to predict the potential risk of a given decision, and therefore to state what, according to their interpretation of the evidence collected and processed, the best regulations on a given issue are.

The development and implementation of self-driving cars poses inevitable questions about their safety in common use, the major concern being the number and rate of collisions and therefore the effect on the rate of human deaths in car crashes.[5] The death of Joshua Brown while using Tesla's "Autopilot" mode in 2016, the Tesla Model X crash in March 2018 and the Uber autonomous SUV accident that killed the pedestrian Elaine Herzberg[6] show that testing and implementing self-driving cars still involves risks to human safety that need to be taken into consideration when creating regulations.

One existing approach to risk assessment in policy-making is the precautionary principle. Although widely present in discussions of environmental protection, its definition is still not crystallised and remains under discussion.[7] Its core meaning, however, can be reduced to taking precautionary measures whenever there is a possible threat to human health or the environment, even under scientific uncertainty about the cause-and-effect relationship.[8] The precautionary principle can therefore serve as an example of how humans try to deal with the uncertainty of the evidence they collect.

Applied to the case of self-driving cars, and taking into account the documented collisions and deaths, the precautionary principle would dictate abstaining from implementing self-driving cars at all. Gardiner names this kind of approach "the ultraconservative precautionary approach".[9] It is criticised for not considering the potential benefits of taking an action. Analysing this and other extreme approaches, Gardiner puts forward the Rawlsian Core Precautionary Principle. Elaborating on Rawls's maximin principle (sketched in code after this list), Gardiner limits the use of the precautionary principle to three specific conditions, when decision-makers:

  • "either lack, or have reason to sharply discount, information about the probabilities of the possible outcomes of their actions"
  • "care relatively little for potential gains that might be made above the minimum that can be guaranteed by the maximin approach"
  • "face unacceptable alternatives"[10]

One example of evidence that analyses the safety of self-driving cars, and could therefore trigger the precautionary principle, is the Californian disengagement reports. These reports record the number of human interventions during self-driving car tests in California. According to this data, Google/Waymo's cars required human intervention 0.02 times per 1,000 miles during testing; if each intervention is counted as a potential crash, this implies they would cause 4-5 times more crashes than human drivers, based on U.S. Department of Transportation data.[11]
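The arithmetic behind that comparison can be reconstructed as follows. Note that the human baseline of roughly one crash per 250,000 miles is an assumption inferred from the article's own "4-5 times" figure, not an independently sourced statistic.

```python
# Back-of-the-envelope reproduction of the cited comparison. The
# disengagement rate comes from the text; the human crash rate is an
# ASSUMPTION back-inferred from the article's "4-5 times" claim.

waymo_interventions_per_mile = 0.02 / 1000               # 0.02 per 1,000 miles
miles_per_intervention = 1 / waymo_interventions_per_mile  # = 50,000 miles

human_miles_per_crash = 250_000  # assumed baseline (see note above)

# If every disengagement is counted as a potential crash:
ratio = human_miles_per_crash / miles_per_intervention
print(f"One intervention every {miles_per_intervention:,.0f} miles "
      f"-> {ratio:.0f}x the assumed human crash rate")   # -> 5x
```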


4) Is a car making the decision safer than a human? Ethics and evidence in car reactions

According to the World Health Organisation, around 1.25 million people die in road traffic crashes each year,[12] underlining the importance of autonomous cars as a way to remove the danger of human driver error and deficiencies.[13] Although autonomous cars have the potential to remove these errors and provide numerous other benefits, a barrier exists: their adoption relies heavily on the level of public trust in driverless cars.[14] Despite the fact that autonomous cars cannot commit many of the ethical and legal offences human drivers do, such as drinking and then driving, and have quicker reaction times, concerns remain in the general population regarding security, reliability and liability.[15]

Programming autonomous cars also requires addressing dilemmas and emergency situations in which the algorithm must make a decision in a so-called no-win situation or trolley-problem premise, essentially choosing which of the people involved must die. One ethical concern relating to these split-second algorithms is whether self-driving cars should always act in the interest of the passenger or of society as a whole. Although these premises are essentially thought experiments, they are used to determine how self-driving cars will ultimately react in an accident scenario where a collision is extremely likely or unavoidable.[16] There is, however, no evidence to suggest which reaction in a no-win situation is the correct, or even the relatively best, response for a self-driving car. Although multiple stakeholders with various perspectives may be involved in contingency planning, evidence cannot show that prioritising passengers is a better decision than maximising total social benefit, or vice versa. A reaction that satisfies every interest when multiple people and factors are involved is impossible, as it would produce both logical and practical contradictions.
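A toy Python sketch makes the point explicit: with invented harm numbers, the "best" reaction flips with an ethical weight that no amount of crash data can determine.

```python
# Toy sketch of why evidence alone cannot settle the passenger-vs-society
# question: the chosen action flips with an ethical weight w that no
# crash statistic determines. All harm numbers are invented.

def choose(w: float) -> str:
    """w = weight on passenger harm relative to harm to others (0..1)."""
    # Option A: swerve, risking the passenger; option B: brake, risking bystanders.
    options = {
        "swerve (risk passenger)":  w * 0.9 + (1 - w) * 0.1,
        "brake (risk bystanders)":  w * 0.1 + (1 - w) * 0.6,
    }
    return min(options, key=options.get)  # minimise expected weighted harm

print(choose(w=0.2))  # weighting society    -> "swerve (risk passenger)"
print(choose(w=0.8))  # weighting passengers -> "brake (risk bystanders)"
```

Data can estimate the harms on each branch, but choosing the weight w is an ethical decision, not an empirical one.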

5) Best of both worlds: human and algorithm

  1. Data Science: An Introduction/The Impact of Data Science#Google's Driverless Car
  2. D'Agostini, G., 1995. A multidimensional unfolding method based on Bayes' theorem. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 362(2-3), pp.487-498.
  3. Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J. and Zhang, X., 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
  4. Kelly, T. (2018). Evidence. [online] Plato.stanford.edu. Available at: https://plato.stanford.edu/entries/evidence/ [Accessed 25 Nov. 2018].
  5. Saripalli, S. (2018). Redefining 'safety' for self-driving cars. [online] The Conversation. Available at: https://theconversation.com/redefining-safety-for-self-driving-cars-87419 [Accessed 25 Nov. 2018].
  6. Burns, L. and Shulgan, C. (2018). Autonomy: The Quest to Build the Driverless Car and How It Will Reshape the World. William Collins.
  7. Gardiner, S. (2006). A Core Precautionary Principle. Journal of Political Philosophy, 14(1), pp.33-60.
  8. Wingspread Statement on the Precautionary Principle. (1998).
  9. Gardiner, S. (2006). A Core Precautionary Principle. Journal of Political Philosophy, 14(1), pp.33-60.
  10. Gardiner, S. (2006). A Core Precautionary Principle. Journal of Political Philosophy, 14(1), pp.33-60.
  11. Marchand, R. (2018). Put Driverless Cars Back in the Slow Lane. [online] Realclearpolicy.com. Available at: https://www.realclearpolicy.com/articles/2018/02/15/put_driverless_cars_back_in_the_slow_lane_110511.html [Accessed 25 Nov. 2018].
  12. Jackson, L. and Cracknell, R. (2018). Road accident casualties in Britain and the world. [online] Researchbriefings.parliament.uk. Available at: https://researchbriefings.parliament.uk/ResearchBriefing/Summary/CBP-7615 [Accessed 24 Nov. 2018].
  13. Smith, B. (2018). Human error as a cause of vehicle crashes. [online] Cyberlaw.stanford.edu. Available at: http://cyberlaw.stanford.edu/blog/2013/12/human-error-cause-vehicle-crashes [Accessed 24 Nov. 2018].
  14. Bansal, P., Kockelman, K.M. and Singh, A., 2016. Assessing public opinions of and interest in new vehicle technologies: An Austin perspective. Transportation Research Part C, 67(C), pp.1–14.
  15. Fagnant, D.J. and Kockelman, K., 2015. Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. Transportation Research Part A, 77(C), pp.167–181.
  16. Nyholm, S. & Smids, J., 2016. The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem? Ethical Theory and Moral Practice, 19(5), pp.1275–1289.