Lentis/Algorithmic bias by gender

Algorithmic bias can arise in any program designed to learn from large amounts of data in order to produce predictions on new data. Examples include facial recognition, self-driving cars, and resume analyzers.[1]

Gender Bias in AI

Algorithmic bias occurs when an algorithm produces “systematically unfair” outcomes due to biases encoded in its data.[2] Gender bias often appears in male-dominated fields. Medical data often lacks female participants, leading to potentially inaccurate diagnoses and treatment suggestions.[3] Hiring algorithms for computer science tend to favor males because of the imbalance of males in the field.[4] These algorithms become self-reinforcing: decisions driven by the algorithm’s bias create data that reinforces the bias in future generations of the algorithm.
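
The sketch below illustrates this feedback loop with a toy Python simulation. The 80/20 starting split and the amplifying selection rule are assumptions invented for illustration; they do not come from any cited study.

    # Toy simulation of a self-reinforcing bias loop (all numbers invented).
    # A naive screening "model" over-selects the group that dominates its
    # training data, and its selections are fed back in as new training data.
    past_hires = {"male": 80, "female": 20}   # assumed historical imbalance
    hires_per_round = 100

    for generation in range(5):
        male_share = past_hires["male"] / sum(past_hires.values())
        # Assumed amplification: the model over-weights features common in
        # the majority group, so it selects that group more than its share.
        p_male = male_share**2 / (male_share**2 + (1 - male_share)**2)
        selected_male = round(hires_per_round * p_male)
        past_hires["male"] += selected_male
        past_hires["female"] += hires_per_round - selected_male
        print(f"generation {generation}: male share of data = {male_share:.0%}")

Each round, the selections made under the biased rule are added to the training pool, so the imbalance the model started with grows over successive generations.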

Sources of Bias

AI training data can include biased human decisions or reflect historical and social inequalities, and these human biases often form the basis of algorithmic bias. The result can be incorrect decision-making. For instance, researchers at MIT showed that facial analysis technologies have higher error rates for minorities and women. This was due to unrepresentative training data: most of the training images were of white men.[5]
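
A minimal sketch of how such a disparity is measured: compute the error rate separately within each demographic group and compare. The counts below are invented for illustration and do not reproduce the MIT figures.

    # Hypothetical prediction outcomes grouped by demographic attribute.
    # The counts are invented and do not reproduce the MIT study's results.
    results = {
        "lighter-skinned men":  {"correct": 990, "incorrect": 10},
        "darker-skinned women": {"correct": 660, "incorrect": 340},
    }

    for group, counts in results.items():
        total = counts["correct"] + counts["incorrect"]
        print(f"{group}: error rate = {counts['incorrect'] / total:.1%}")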

Another form of bias comes from intentionally encoded bias. Natural language processing (NLP) models use “word embeddings” to process text. A word embedding is a mechanism for identifying hidden patterns in the words of a given block of text, capturing language statistics, grammatical and semantic information, and human-like biases. Intentionally accounting for human-like bias can increase the accuracy of these models, since human writing often includes human biases, but it can have negative consequences in applications that make use of the models.[6]
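
As a rough illustration of how a word embedding can carry gender associations, the sketch below uses tiny made-up vectors and measures how close two occupation words sit to a “he minus she” direction; real embeddings learned from large corpora are much higher-dimensional versions of the same idea.

    import numpy as np

    # Tiny made-up "embeddings" for illustration only; real word embeddings
    # (e.g. word2vec or GloVe) are learned from large text corpora and can
    # absorb similar gendered associations from how words are used.
    vectors = {
        "he":       np.array([ 1.0, 0.1, 0.0]),
        "she":      np.array([-1.0, 0.1, 0.0]),
        "engineer": np.array([ 0.7, 0.6, 0.1]),
        "nurse":    np.array([-0.6, 0.7, 0.1]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Project occupation words onto the he-she direction: positive scores
    # lean toward "he", negative scores toward "she".
    gender_direction = vectors["he"] - vectors["she"]
    for word in ("engineer", "nurse"):
        score = cosine(vectors[word], gender_direction)
        print(f"{word}: gender association = {score:+.2f}")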

Case study

In 2018, Amazon stopped using a hiring algorithm after discovering that it was biased toward selecting male applicants. The problem stemmed from bias in NLP, but not from intentionally encoded biases. Rather, most of the resumes the model trained on came from men, since tech is a male-dominated field.[7] This led the algorithm to prefer words that men typically used. By not ensuring equal gender representation in the data, Amazon created an AI that further reinforced the gender imbalance in its field.
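
A minimal sketch of how this kind of skew can emerge, using a handful of made-up resume snippets and scikit-learn; the data, labels, and word choices are hypothetical and are not taken from Amazon’s system.

    # Sketch of an imbalanced training set skewing a resume screen.
    # The snippets and hire/reject labels are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "captain of men's rugby team, java developer",      # hired
        "java developer, hackathon winner",                 # hired
        "software engineer, men's chess club president",    # hired
        "software engineer, women's chess club captain",    # rejected
    ]
    hired = [1, 1, 1, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # Words that co-occur with rejected resumes pick up negative weights,
    # so "women" is penalized even though gender is never an explicit input.
    weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
    for word in ("men", "women"):
        print(word, round(weights.get(word, 0.0), 3))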

Social Perspectives

The effort to eradicate gender bias in AI algorithms is taken up by many different social groups, each with its own approach.

Corporations

Tech companies such as Google, Apple, and Facebook make most of the progress on the latest machine learning models, which are then open-sourced for the world to use. When these models carry bias, that bias propagates to many applications. For example, Google’s Switch Transformer and T5 models, which other businesses use for NLP applications, were trained on data that had “been extensively filtered” to remove Black and Hispanic authors, as well as material related to gay, lesbian, and other minority identities.[8] This affected countless NLP applications without their authors being aware. Activists therefore argue that the responsibility for building unbiased AI rests with a few engineers at the largest tech firms, not with the broader public. Corporations often publicly commit to addressing algorithmic bias.[9][10] Algorithmic bias degrades AI performance, and these companies recognize the necessity of mitigating this impact.

Feminist Data Manifest-No

The Feminist Data Manifest-No is a manifesto committed to the cause of “Data Feminism,” a movement that aims to change how data is collected, used, and understood so that it better aligns with the ideals of gender equality espoused in modern-day feminism.[11][12]

The Manifest-No consists of a series of points, each broken into a refusal and a commitment. The refusal rejects a currently established idea about data and algorithms, prefaced with the words “we refuse.” The commitment then offers a rebuttal to that idea in the form of a new standard embraced by the authors, prefaced with the words “we commit.” This language creates a sense of community within the manifesto and enables the authors to lay out clearly both their understanding of the current world and their vision for a better one. The Manifest-No, while not always explicitly used or cited, forms a basis for many modern approaches to combating gender bias in algorithms.

Berkeley Haas Center for Equity, Gender, and Leadership

The Berkeley Haas Center for Equity, Gender, and Leadership has written an extensive playbook for business leaders on mitigating algorithmic bias within their companies.[13] The playbook consists of overviews of and deep dives into the issue of bias in AI algorithms and how businesses can address it. Its existence indicates the authors’ belief that effective mitigation must come from the top down. The authors recognize, however, that grassroots organizers can provide effective incentives for change, even though these organizers often lack the power to implement the change themselves.

To this end, the Berkeley Haas Center for Equity, Gender, and Leadership maintains a list of gender-biased algorithms that companies have used over the years. By making instances of bias public, and leveraging the negative press associated with using biased algorithms, the list aims to encourage companies to correct and avoid bias.[14]

Conclusion and further steps

Tech companies and organizations should ensure that the researchers and engineers who collect data and design AI algorithms are aware of potential bias against minorities and underrepresented groups. Some companies seem to be taking steps in this direction. For example, Google has published a set of guidelines for AI use, both internally and for businesses that use its AI infrastructure.[15]

It is imperative to study the anticipated and unforeseen consequences of using artificial intelligence algorithms in a timely manner, especially because current government policies may not be sufficient to identify, mitigate, and eliminate the consequences of such subtle bias in legal relations. Solving algorithmic bias problems solely by technical means will not lead to the desired results. The world community has begun to consider standardization and the development of ethical principles to establish a framework for the equitable use of artificial intelligence in decision-making. Special rules are needed that set limits on algorithmic bias in all sectors.

References

  1. Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) ethics. Journal of Database Management, 31(2), 74–87. https://doi.org/10.4018/jdm.2020040105
  2. Akter, S., Dwivedi, Y. K., Biswas, K., Michael, K., Bandara, R. J., & Sajib, S. (2021). Addressing algorithmic bias in AI-Driven Customer Management. Journal of Global Information Management, 29(6), 1–27. https://doi.org/10.4018/jgim.20211101.o
  3. Smith, G., & Rustagi, I. (2021, March 31). When good algorithms go sexist: Why and how to advance AI gender equity (SSIR). Stanford Social Innovation Review: Informing and Inspiring Leaders of Social Change. https://ssir.org/articles/entry/when_good_algorithms_go_sexist_why_and_how_to_advance_ai_gender_equity.
  4. Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
  5. Manyika, J., Presten, B., & Silberg, J. (2019, October 25). What do we do about the biases in AI? Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
  6. Caliskan, A. (2021, May 10). Detecting and mitigating bias in natural language processing. Brookings. https://www.brookings.edu/research/detecting-and-mitigating-bias-in-natural-language-processing/
  7. Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
  8. Anderson, M. (2021, September 24). Minority voices 'filtered' out of Google natural language processing models. Unite.AI. https://www.unite.ai/minority-voices-filtered-out-of-google-natural-language-processing-models/
  9. Google. (n.d.). Our principles. Google AI. https://ai.google/principles/
  10. Meta. (2021, April 8). Shedding light on fairness in AI with a new data set. Meta AI. https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set/.
  11. Cifor, M., & Garcia, P. (n.d.). Feminist data manifest. Manifest-No. https://www.manifestno.com/.
  12. Cifor, M., & Garcia, P. (n.d.). Feminist data manifest. Full Version of Manifest-No. https://www.manifestno.com/home.
  13. Smith, G., & Rustagi, I. (2020, July). Mitigating bias - Haas School of Business. Berkeley Haas UCB Playbook. https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf.
  14. Bias in AI: Examples Tracker. https://docs.google.com/spreadsheets/d/1eyZZW7eZAfzlUMD8kSU30IPwshHS4ZBOyZXfEBiZum4/edit#gid=1838901553
  15. Responsible AI practices. Google AI. (2021, October). https://ai.google/responsibilities/responsible-ai-practices