
Evaluation of Evidence Within, Surrounding and In Consequence of Self-Driving Cars

This article examines how evidence is evaluated in self-driving cars from an interdisciplinary perspective: how self-driving cars use algorithms to collect and evaluate evidence (section 'Within'; computer science and engineering), how policy-makers deal with risk and the uncertainty of evidence (section 'Surrounding'; politics, statistics and psychology), and the role of evidence as an ethical entity (section 'In Consequence'; ethics). A distinct definition of evidence is applied in each section to show the breadth of meaning this concept carries.

The interior of Google's driverless car lacks most of the usual controls of a vehicle, illustrating how little human input is required.

Within: Evidence and Bayes' Theorem

Evidence, within a driverless vehicle, is defined as the continuous information gathered from the surroundings by cameras, radar and laser sensors. Algorithms form the vehicle's central decision-making body, processing this data to perform reasoned actions. Under the SAE International classification of driving automation, a vehicle graded at Level 5 is fully autonomous in all driving modes, navigating entirely without human input.

Convolutional neural networks (CNNs) have been revolutionary in 'training' the algorithms in driverless cars, allowing them to learn automatically from recorded training drives. CNNs map raw pixels from a front-facing camera directly to steering commands.[1]
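The sketch below illustrates, in Python (PyTorch), what such an end-to-end network can look like. The layer sizes loosely follow the architecture reported in [1], but the training setup and naming here are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of an end-to-end steering CNN, loosely following
# the layer sizes in [1]. Names and usage are illustrative assumptions.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Five convolutional layers extract road features from a
        # 3x66x200 camera frame (the input size used in [1]).
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected layers regress a single steering value.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # predicted steering command
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = SteeringCNN()
frame = torch.randn(1, 3, 66, 200)  # one dummy camera frame
print(model(frame))                 # predicted steering command
```

In training, the network's predicted steering command would be compared against the human driver's recorded command, with the weights adjusted to reduce the difference; this is how the system 'learns' from training drives.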

This system operates largely on the basis of Bayes' theorem.[2] Put simply, Bayes' theorem offers a systematic way to update one's belief in a hypothesis on the basis of the evidence presented. For example, Google's driverless cars combine prior evidence from Google Street View with real-time sensor data interpreted by artificial intelligence software.
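Formally, the theorem states that P(H|E) = P(E|H)·P(H) / P(E): the updated (posterior) belief in a hypothesis H given evidence E. The sketch below is a minimal illustration of such an update for a hypothetical obstacle-detection reading; all probabilities are invented for illustration.

```python
# Minimal sketch of a Bayesian evidence update with invented numbers:
# hypothesis H = "the object ahead is a pedestrian",
# evidence  E = "the camera classifier fires on a pedestrian shape".
def bayes_update(prior, likelihood, false_alarm_rate):
    """P(H|E) = P(E|H) P(H) / P(E), with P(E) expanded over H and not-H."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

prior = 0.01             # P(H): pedestrians are rare on this road segment
likelihood = 0.95        # P(E|H): detector usually fires on real pedestrians
false_alarm_rate = 0.05  # P(E|not H): detector sometimes fires on clutter

posterior = bayes_update(prior, likelihood, false_alarm_rate)
print(f"P(pedestrian | one detection) = {posterior:.3f}")  # ~0.161

# A second, independent detection raises the belief much further:
posterior2 = bayes_update(posterior, likelihood, false_alarm_rate)
print(f"P(pedestrian | two detections) = {posterior2:.3f}")  # ~0.785
```

Each new piece of sensor evidence feeds the previous posterior back in as the prior, which is how continuous sensor streams can incrementally strengthen or weaken the vehicle's beliefs.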

Occasionally, the human operator is required to take driving control. A vehicle graded at Level 3 can monitor its environment and drive with full autonomy under certain conditions, but not, for instance, if sensors are impaired by challenging weather.[3] Additionally, external data sources can contradict one another; if the tenets of Evidentialism hold, each source is justified in its recommendation to the driverless vehicle so long as its evidence supports it.[4] To resolve such conflicts, the algorithm may redirect control to the human driver.
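A minimal sketch of what such a handover rule might look like is given below. The thresholds, source names and confidence scores are entirely hypothetical assumptions, not taken from any production system.

```python
# Hypothetical sketch of a confidence-based handover policy: when
# independent evidence sources disagree too strongly, or overall
# confidence drops (e.g. sensors impaired by weather), control is
# redirected to the human driver. All values are illustrative.
from statistics import mean

DISAGREEMENT_LIMIT = 0.3  # max allowed spread between source confidences
CONFIDENCE_FLOOR = 0.6    # min average confidence to stay autonomous

def should_hand_over(source_confidences: dict) -> bool:
    values = list(source_confidences.values())
    spread = max(values) - min(values)
    return spread > DISAGREEMENT_LIMIT or mean(values) < CONFIDENCE_FLOOR

# Camera degraded by heavy rain while radar still trusts its track:
print(should_hand_over({"camera": 0.4, "radar": 0.9, "lidar": 0.8}))   # True
print(should_hand_over({"camera": 0.85, "radar": 0.9, "lidar": 0.8}))  # False
```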

However, increasing reliance on automated systems could mean that humans fail to maintain the skills needed to operate cars competently.[5] Consequently, although algorithms arose from computer science, their future role in driverless transportation is also relevant to the social and political disciplines.

Surrounding: Evaluating Evidence in Risk Assessment

The definition of evidence as 'that which justifies belief'[6] illustrates its potential use in informed policy-making, where decisions are often justified by the assessment of potential risk.

Uber self-driving car showing damage after a fatal collision with a pedestrian, as reported by the National Transportation Safety Board.

Human deaths in crashes involving self-driving cars[7] show that the development and implementation of this technology pose safety questions. These questions are investigated through risk assessment, which involves collecting and evaluating evidence on the variety of possible hazardous events and the probability of their occurrence.[8]
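A minimal sketch of the core computation is shown below, treating risk as the probability of each hazardous event weighted by its severity. All events and figures are invented for illustration.

```python
# Minimal sketch of a risk-assessment calculation: risk as the
# probability of a hazardous event times its severity, summed over
# events. All events and figures are invented for illustration.
hazards = [
    # (event, probability per trip, severity on a 0-10 scale)
    ("sensor misreads pedestrian", 1e-6, 10),
    ("lane detection fails",       1e-4, 6),
    ("minor localisation drift",   1e-2, 1),
]

total_risk = sum(p * severity for _, p, severity in hazards)
for name, p, severity in hazards:
    print(f"{name}: contribution {p * severity:.2e}")
print(f"Total expected harm per trip: {total_risk:.2e}")
```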

Human evidence evaluation in risk assessment draws on two modes: the "analytic system", which uses normative rules (including statistics and formal logic), and the "experiential system", which relies on emotion (including associations and experiences); even the analytic system requires guidance from the experiential system.[9] Programmers might therefore be seen as using their experiential systems to decide, for example, how the algorithm should react to certain situations (see the 'In Consequence' section), while the algorithm's evaluation of evidence (e.g. data) acts as the analytic system, working in concert with the programmer's experiential system.

Limitations in obtaining and evaluating evidence

Psychological factors affect the evaluation of evidence performed by humans, who consequently make predictions and form policies. Perceptions of self-driving cars are closely tied to emotions towards this innovative technology,[10] so evidence is important for informing opinions. Another concern is possible third-party access to personal information compiled by self-driving cars.[11] The continuous data gathered about the surroundings can likely be used freely, as it is collected in public. This contributes to privacy concerns and negative feelings towards the technology.[12]

Statistics treats observed data as evidence and provides formal methods for evaluating it.[13] Evidence about fatalities and injuries involving self-driving vehicles is hard to obtain because the vehicles have not yet driven enough miles to yield clear statistical conclusions.[14] Fatalities and injuries are rare events relative to miles driven, so the cars would need to complete hundreds of millions of miles to provide reliable evidence.[14] These limitations show that it might not yet be feasible to demonstrate safety, and that uncertainty may remain, which affects policy-making.[14]
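A back-of-the-envelope version of the argument in [14] makes this concrete. Assuming fatal crashes follow a Poisson process at the US human-driver benchmark of roughly 1.09 fatalities per 100 million vehicle miles used in [14], the sketch below estimates how many fatality-free miles would be needed to demonstrate, with 95% confidence, a rate no worse than that benchmark.

```python
# Back-of-the-envelope version of the argument in [14]: assuming fatal
# crashes arrive as a Poisson process, how many fatality-free miles must
# autonomous cars drive to show, with 95% confidence, a fatality rate no
# worse than the human benchmark of ~1.09 per 100 million miles?
import math

human_rate = 1.09 / 100_000_000  # fatalities per mile (benchmark in [14])
confidence = 0.95

# With zero fatalities observed over m miles, P(0 events) = exp(-rate*m).
# We need exp(-human_rate * m) <= 1 - confidence, i.e.:
miles_needed = -math.log(1 - confidence) / human_rate
print(f"{miles_needed / 1e6:.0f} million fatality-free miles")  # ~275 million
```

The result, roughly 275 million miles, matches the order of magnitude reported in [14] and explains why "hundreds of millions of miles" are needed before the statistical evidence becomes reliable.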

Approaches to uncertain evidence

One approach to dealing with the uncertainty of evidence in policy-making is the precautionary principle. Its meaning can be reduced to adopting measures that avoid harm to human health and the environment even when the supporting evidence is uncertain.[15] For example, the USA's NHTSA safety standards assume that a human driver should always be able to control the actions of a motor vehicle in order to ensure its safety.[16]

However, taken to its extreme, the precautionary principle could lead to refraining from any action at all.[15] A more moderate approach is represented by adaptive regulations, which create new evidence (e.g. through pilot experiments) and review it in order to adapt to the evolution of the technology.[14] In the case of autonomous vehicles, adaptive regulations might become a mediator in the negotiation between risk and progress, as experience and technological change inform safety deliberations.[14]

In Consequence: Evidence as an Ethical Entity

In relation to ethics, evidence can be defined as the outcomes of driverless cars operating in accident scenarios, used to determine how their algorithms should react.

Programming autonomous cars requires addressing dilemmas in which the algorithm must make decisions in no-win, trolley-problem situations, choosing which of the people involved will be implicated, perhaps harmfully. One concern in these decisions is whether autonomous cars should act in the interest of their passengers or of society. Although these are philosophical thought experiments, they help determine how the algorithms will react in accident scenarios where collisions are unavoidable.[17]

A Waymo self-driving car on the road in Mountain View

There is, however, no evidence to suggest which reaction is the best way for a self-driving car to respond. From a utilitarian economic perspective, it should maximise total social benefit, so that the accident incurs the least total cost. From an engineering perspective, optimisation of machine functions and decisions outweighs ethical and legal considerations.[18] From a legal standpoint, optimising an algorithmic decision to kill is unjustifiable and indefensible.[19] An interdisciplinary outlook must be applied, as interests conflict and there is little evidence to suggest a clear prioritisation of factors in these trolley problems.
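The hypothetical sketch below makes the conflict concrete by scoring the same unavoidable-collision scenario under two objective functions: a utilitarian rule minimising total expected harm, and a passenger-protective rule minimising harm to occupants only. All harm values are invented; the point is only that the 'best' action changes with the objective.

```python
# Hypothetical sketch: the same unavoidable-collision scenario scored
# under two objectives. All harm values are invented for illustration.
options = {
    # action: (expected harm to passengers, expected harm to others)
    "swerve into barrier": (9, 0),
    "brake straight on":   (2, 6),
    "swerve into lane":    (1, 9),
}

def utilitarian(harms):            # minimise total social harm
    return sum(harms)

def passenger_protective(harms):   # minimise harm to occupants only
    return harms[0]

for name, rule in [("utilitarian", utilitarian),
                   ("passenger-protective", passenger_protective)]:
    best = min(options, key=lambda action: rule(options[action]))
    print(f"{name} rule chooses: {best}")
# The utilitarian rule picks "brake straight on" (total harm 8), while
# the passenger-protective rule picks "swerve into lane" (passenger harm 1).
```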

Societal cultural values, which differ across nations, shape the normative ethical beliefs of individuals within those societies.[20] Studies across a range of countries have demonstrated varying opinions on the implementation of autonomous cars,[21] revealing differences in ethical considerations. The validity of evidence is dependent on the desired outcome, and desired outcomes will vary. There is a lack of real-world evidence to guide a resolution among these variations in normative ethical ideas, as autonomous cars remain relatively untested.

Conclusion

This article has analysed how evidence is evaluated both in practical settings and in more abstract, emotional forms, relating to the disciplines of computer science, engineering, statistics, psychology, ethics, and politics.

Bibliography

  1. Bojarski M, Del Testa D, Dworakowski D, Firner B, Flepp B, Goyal P, Jackel LD, Monfort M, Muller U, Zhang J, Zhang X. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316. 2016 Apr 25. 1-4. Available from https://arxiv.org/pdf/1604.07316.pdf [Accessed 7th December 2018].
  2. D'Agostini G. A multidimensional unfolding method based on Bayes' theorem. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 1995 Aug 15;362(2-3):487-98. Available at: https://www.sciencedirect.com/science/article/pii/016890029500274X [Accessed 5th December 2018].
  3. Paden B, Čáp M, Yong SZ, Yershov D, Frazzoli E. A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on intelligent vehicles. 2016 Mar;1(1):33-55. Available at https://ieeexplore.ieee.org/abstract/document/7490340 [Accessed 1st December 2018].
  4. Feldman R. Evidentialism, higher-order evidence, and disagreement. Episteme. 2009 Oct;6(3):294-312. Available at: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/FEAB79DBDE02329F572D90BFD011E8E1/S1742360000001362a.pdf/evidentialism_higherorder_evidence_and_disagreement.pdf [Accessed on 25th November 2018].
  5. Fry H. Hello World. Doubleday; 2018. pp. 122-125. [Accessed 1st November 2018].
  6. Kelly T. Evidence. The Stanford Encyclopedia of Philosophy. Winter 2016 ed. 2016. Available from: https://plato.stanford.edu/archives/win2016/entries/evidence [Accessed 9 December 2018].
  7. Burns L, Shulgan C. Autonomy: The Quest to Build the Driverless Car – And How It Will Reshape Our World. 1st ed. HarperCollins; 2018.
  8. Ostrom L, Wilhelmsen C. Risk Assessment: Tools, Techniques, and Their Applications. Hoboken, New Jersey: John Wiley & Sons, Inc.; 2012.
  9. Slovic, P., Finucane, M., Peters, E. and MacGregor, D. Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality. Risk Analysis. 2004;24(2): 311-322. Available from: https://onlinelibrary.wiley.com/doi/full/10.1111/j.0272-4332.2004.00433.x [Accessed 3rd December 2018].
  10. König, M., Neumayr, L. Users’ resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour. 2017;44: 42-52. Available from: https://www.sciencedirect.com/science/article/abs/pii/S136984781630420X [Accessed 1st December 2018].
  11. Elmaghraby, A. S., Losavio, M. M. Cyber security challenges in Smart Cities: Safety, security and privacy. Journal of Advanced Research. 2014;5(4): 491-49. Available from: https://www.sciencedirect.com/science/article/pii/S2090123214000290 [Accessed 25th November 2018].
  12. Bloom C, Tan J, Ramjohn J, Bauer L. Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles. In: SOUPS '17: Proceedings of the 13th Symposium on Usable Privacy and Security, July 2017. USENIX. 2017. Available from: https://www.usenix.org/system/files/conference/soups2017/soups2017-bloom.pdf [Accessed 1st December 2018].
  13. Royall, R. On the Probability of Observing Misleading Statistical Evidence. Journal of the American Statistical Association. 2000;95(451): 760-768. Available from: https://www.tandfonline.com/doi/abs/10.1080/01621459.2000.10474264 [Accessed 25th November 2018].
  14. Kalra N, Paddock SM. Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transportation Research Part A: Policy and Practice. 2016;94: 182-193. Available from: https://www.sciencedirect.com/science/article/pii/S0965856416302129 [Accessed 25th November 2018].
  15. Gardiner S. A Core Precautionary Principle. Journal of Political Philosophy. 2006;14(1):33-60.
  16. U.S. Department of Transportation. Preparing for the Future of Transportation: Automated Vehicles 3.0. U.S. Department of Transportation;2018: 6-7.
  17. Nyholm S, Smids J. The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?. Ethical Theory and Moral Practice. 2016;19(5):1275-1289. Available from: https://link.springer.com/content/pdf/10.1007%2Fs10677-016-9745-2.pdf [Accessed 8th December 2018].
  18. Gogoll J, Müller J. Autonomous Cars: In Favor of a Mandatory Ethics Setting. Science and Engineering Ethics. 2017;23(3):681-700. Available from: https://link-springer-com.libproxy.ucl.ac.uk/content/pdf/10.1007%2Fs11948-016-9806-x.pdf [Accessed 7th December 2018].
  19. Coca-Vila I. Self-driving Cars in Dilemmatic Situations: An Approach Based on the Theory of Justification in Criminal Law. Criminal Law and Philosophy. 2018;12(1):59-82. Available from: https://link-springer-com.libproxy.ucl.ac.uk/content/pdf/10.1007%2Fs11572-017-9411-3.pdf [Accessed 7th December 2018].
  20. Chatterjee S, Tata R. Convergence and Divergence of Ethical Values across Nations: A Framework for Managerial Action. Journal of Human Values. 1998;4(1):5-23. Available from: https://journals-sagepub-com.libproxy.ucl.ac.uk/doi/pdf/10.1177/097168589800400102 [Accessed 8th December 2018].
  21. Kyriakidis M, Happee R, de Winter J. Public opinion on automated driving: Results of an international questionnaire among 5000 respondents. Transportation Research Part F: Traffic Psychology and Behaviour. 2015;32:127-140. Available from: https://ac-els-cdn-com.libproxy.ucl.ac.uk/S1369847815000777/1-s2.0-S1369847815000777-main.pdf?_tid=af389f4c-b60a-4677-b487-801b008f01e0&acdnat=1544380166_5a51e6744869c182952abdbe0b064d7d [Accessed 8th December 2018].