Issues in Interdisciplinarity 2018-19/Subjective and Objective Truth in AI

From Wikibooks, open books for an open world
Revision as of 16:56, 9 December 2018

Objective Truth in AI

Artificial intelligence (AI) is often thought to make objective decisions easier. Here, objectivity refers to conclusions based on critical thinking and scientific evidence, where the conclusion is indisputable and there is only one true answer[1]. Built from formulas and algorithms, AI can process vast amounts of data to reach conclusions that are significantly more accurate, and therefore more objective, than a human could achieve[2].

An example of this is machine learning applied to the task of identifying subjects in pictures. Though simple for humans, AI needs repeated training on massive amounts of data to tell the difference between drinks, or between a table and a stool. Neural networks in AI take one or more inputs, such as a picture, and process them into one or more outputs, such as whether the picture shows wine or beer. The network consists of many ‘neurons’ grouped into layers, where each layer interacts with the next through weighted connections – each neuron carries a value, which is multiplied by a weight and passed on to the neurons in the subsequent layer.[3] Bias terms, such as the estimator bias Eθ(θ̂) − θ[4], can be coded into the neural network and passed through the layers. As a result, inputs are propagated through the whole network and the machine is taught to make predictions and draw conclusions that are as accurate as possible. This continual testing can inform decisions such as where to put a new train line through London, using data on the movements of people, the costs involved, and so on.
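The layered forward pass described above can be sketched in a few lines of Python. The weights, bias values and the wine-versus-beer reading below are invented purely for illustration, not taken from any real trained model.

```python
import math

def sigmoid(x):
    # Squashes a neuron's summed input into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def layer(values, weights, biases):
    # Each output neuron sums its weighted inputs plus a bias term,
    # then applies the activation function.
    return [sigmoid(sum(v * w for v, w in zip(values, row)) + b)
            for row, b in zip(weights, biases)]

# Input: two toy features extracted from a picture.
inputs = [0.8, 0.2]

# Hidden layer: 2 inputs feed 3 neurons through weighted connections.
hidden = layer(inputs,
               [[0.5, -0.3], [0.1, 0.9], [-0.4, 0.2]],
               [0.0, 0.1, -0.1])

# Output layer: 3 hidden neurons feed 1 output, read here as P(wine).
output = layer(hidden, [[0.7, -0.2, 0.4]], [0.0])[0]
print(f"P(wine) = {output:.2f}")
```

Training would adjust the weights and biases until such outputs match labelled examples as closely as possible; here they are fixed to show only the propagation step.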

In Accenture's ‘teach and test’ framework for AI, the continual connectivity and data processing described above can be tracked, and the decisions or conclusions reached by the AI system can be questioned. The AI can even be coded to justify the decisions it reaches[5]. This can provide peace of mind that the AI is reaching human-centred, unbiased and fair conclusions – objectivity.
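One simple way a system can be ‘coded to justify its decisions’ is to record each factor's contribution to a score as the decision is made. The sketch below is hypothetical – the factors, weights and values are invented, loosely following the train-line example above – and is not Accenture's actual framework.

```python
def score_option(option, trace):
    # Score one candidate route, appending a human-readable justification
    # line for every factor that contributes to the score.
    score = 0.0
    for name, weight in [("passenger demand", 0.5),
                         ("construction cost", -0.3),
                         ("disruption", -0.2)]:
        contribution = weight * option[name]
        trace.append(f"{name}: {option[name]} x {weight:+.1f} = {contribution:+.2f}")
        score += contribution
    return score

trace = []
score = score_option({"passenger demand": 0.9,
                      "construction cost": 0.4,
                      "disruption": 0.3}, trace)
print(f"score = {score:.2f}")
for line in trace:
    print("  " + line)
```

Auditing the trace afterwards is what lets a human question, and potentially overrule, the conclusion.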

Subjective Truth in AI

It is often argued, however, that the supposedly objective decisions made by AI end up being subjective because the data sets being used are biased[6][7]. Here, subjectivity refers to a belief based on personal opinions, experiences and feelings rather than on scientific evidence[8]. As human beings we all have our own biases, and no one can be truly objective[9]. Since we create both the AI itself and the data it processes, it follows that AI is never going to be fully objective.

Gender and ethnicity biases are often unconsciously built into algorithms. A notable example of this is AI facial-recognition software identifying black women as men[10][11]. It has been suggested that this stems from the unconscious bias of computer scientists and engineers, the majority of whom are white and male[11]. Similarly, when searching for pictures on Google, the word ‘CEO’ will bring up pictures of men and the word ‘helper’ will bring up pictures of women. This is based on biased data sets about what a CEO looks like – most CEOs are indeed men, but this reflects historical patriarchal ideas that are generally considered wrong[12][13].

As AI becomes increasingly prominent in everyday life – self-driving cars, Google Home devices, advertising and so on – ethics needs to be considered. Ethics can be defined as the means of tackling questions of morality, but it can be interpreted differently according to one's opinions, beliefs and perspectives. As a result, trying to create ethical AI is likely to cause many problems, especially when its decisions are coupled with potentially biased data[14].

Issues and Contradictions

From a mathematical, objective point of view, AI provides significant computing and decision-making power that humans will never be able to match on their own, yielding greater insight into complex problems. From a subjective, ethical and philosophical standpoint, AI will never be truly objective[15], and we are likely to run into significant problems where AI ‘gets it wrong’ in its pursuit of ‘the truth’ or of a logical conclusion, as in the 2010 Flash Crash[16][17].

As an example, AI could be used in recruitment to eradicate unconscious bias in hiring[18]. However, if a machine-learning algorithm were used, data about gender, race, disability and so on could lead the AI to decide to hire white, straight, able-bodied men – who, according to biased data, are the least risky and therefore most cost-effective choice of employee[19]. It could easily pick up our own biases and amplify them[17]. Moreover, because the learning happens internally, machine learning is a black box: we put data in and get data out, and without auditing the results we could be completely unaware of which data points the AI was using to inform its decisions[19].
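A toy sketch of that failure mode, with invented numbers: a ‘model’ that predicts hiring by majority vote within each group, trained on biased historical records, simply echoes the historical disparity rather than removing it.

```python
from collections import defaultdict

# Invented historical hiring records: (group, hired-or-not).
# Group "A" was hired 80% of the time, group "B" only 30%.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict(group):
    # "Train" by majority vote: recommend hiring whenever the historical
    # hire rate for the group is at least 50%.
    hires, total = counts[group]
    return 1 if hires / total >= 0.5 else 0

print(predict("A"), predict("B"))  # the bias is learned, not removed
```

Without auditing which features drive the prediction, the disparity between the two groups would be invisible from the outside.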

AI struggles to be truly objective when presented with problems that have ethical questions tied to them[20]. However, evaluating AI from an interdisciplinary perspective ensures careful thought about the effects of AI and the decisions it has to make. Computer science and electronic engineering obviously play a huge role in creating the technology, but philosophy and social sciences such as anthropology, economics and psychology are needed in the development of AI to ensure we produce systems that ‘think’ about the wider effects of their conclusions, making AI both useful and safe for humans to use in the future.

Notes

  1. Mulder, D. H. Objectivity [Internet]. Sonoma State University, California: Internet Encyclopedia of Philosophy; [updated 2004 Sep 9; cited 2018 Dec 9]. Available from: https://www.iep.utm.edu/objectiv/
  2. ICO. Big data, artificial intelligence, machine learning and data protection [Internet]. Cheshire, UK: Information Commissioner's Office; [updated 2017 May 17; cited 2018 Dec 9]. Available from: https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf
  3. Marr, B. What Are Artificial Neural Networks – A Simple Explanation For Absolutely Anyone [Internet]. Forbes; [updated 2018 Sep 24; cited 2018 Dec 9]. Available from: https://www.forbes.com/sites/bernardmarr/2018/09/24/what-are-artificial-neural-networks-a-simple-explanation-for-absolutely-anyone/#4e593bd11245
  4. Estimation, bias, and mean squared error [Internet]. Cambridge, UK: Statistical Laboratory; [updated 2018; cited 2018 Dec 7], p. 2. Available from: http://www.statslab.cam.ac.uk/Dept/People/djsteaching/S1B-15-02-estimation-bias-4.pdf
  5. Cathelat, B. 'How much should we let AI decide for us?' In: Lasry, B. and Kobayashi, H. (eds.), UNESCO and Netexplo. Human Decisions: Thoughts on AI. Paris, France: UNESCO Publishing; 2018. p. 132–138. Available from: http://unesdoc.unesco.org/images/0026/002615/261563E.pdf
  6. Vanian, J. Unmasking AI's bias problem [Internet]. Fortune; 2018 [cited 2018 Dec 2]. Available from: http://fortune.com/longform/ai-bias-problem/
  7. Greene, T. Human bias is a huge problem for AI [Internet]. The Next Web; 2018 [cited 2018 Dec 3]. Available from: https://thenextweb.com/artificial-intelligence/2018/04/10/human-bias-huge-problem-ai-heres-going-fix/
  8. Francescotti, R. Subjectivity [Internet]. Routledge Encyclopedia of Philosophy; [updated 2017 Apr 24; cited 2018 Dec 9].
  9. Naughton, J. Don't worry about AI going bad – the minds behind it are the danger [Internet]. The Guardian; 2018 [cited 2018 Dec 4]. Available from: https://www.theguardian.com/commentisfree/2018/feb/25/artificial-intelligence-going-bad-futuristic-nightmare-real-threat-more-current
  10. Lohr, S. Facial Recognition Is Accurate, if You’re a White Guy [Internet]. The New York Times; 2018 [cited 2018 Dec 3].
  11. a b Buolamwini, J. and Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 2018;81:1–15. [cited 2018 Dec 3].
  12. Devlin, H. AI programs exhibit racial and gender biases, research reveals [Internet]. The Guardian; 2017 [cited 2018 Dec 4]. Available from: https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals
  13. Sharkey, N. The impact of gender and race bias in AI [Internet]. Humanitarian Law & Policy; 2018 [cited 2018 Dec 5]. Available from: http://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/
  14. Bostrom, N. and Yudkowsky, E. The ethics of artificial intelligence. In: Cambridge Handbook of Artificial Intelligence. Cambridge, UK: Cambridge University Press; 2011. p. 1–20. [cited 2018 Dec 5]. Available from: https://nickbostrom.com/ethics/artificial-intelligence.pdf
  15. Moor, J. H. The Nature, Importance, and Difficulty of Machine Ethics. In: Machine Ethics. Cambridge, UK: Cambridge University Press; 2011. p. 13. [cited 2018 Dec 3].
  16. Jøsang, A. Artificial Reasoning with Subjective Logic. Norwegian University of Science and Technology; 1997. [cited 2018 Dec 3].
  17. a b Newman, D. Your Artificial Intelligence Is Not Bias-Free [Internet]. Forbes; 2017 [cited 2018 Dec 3].
  18. Lee, A. J. Unconscious Bias Theory in Employment Discrimination Litigation. Harvard Civil Rights-Civil Liberties Law Review 2005;40(2):481–504. [cited 2018 Dec 3].
  19. a b Tufekci, Z. Machine intelligence makes human morals more important [video]. TEDSummit; 2016. [cited 2018 Dec 3].
  20. Polonski, V. The Hard Problem of AI Ethics – Three Guidelines for Building Morality Into Machines [Internet]. The Forum Network; 2018 [cited 2018 Dec 3].