Chatbots For Social Change/Conversational AI Ethics

The ethical use of conversational AI tools is still an emerging topic of discussion, and although governments around the world have offered some guidance on basic principles of use, that guidance is at best provisional. The intended and unintended consequences of human-chatbot interaction, both observed and theoretically expected, are already numerous enough to fill a lengthy section. Because the technology is so new, however, they may only scratch the surface, and significant implications are likely to remain unanticipated as novel applications continue to emerge.

This chapter covers both the theoretically expected and the already realized ethical implications of advanced chatbot technology. From the anthropomorphizing of and trust placed in AI tools, to their use in collecting and processing sensitive semantic data, to the societal consequences of seamless integration with, and mutual shaping by, this emerging technology, the chapter is intended only to sensitize the reader to the kind of ethical thinking required to use chatbots as a tool for social good.

The Fallacy of Universal Morality


One of the primary challenges is defining a universally agreed-upon moral standard. Even widely held principles can be interpreted differently depending on cultural, historical, or personal context, and deciding which moral principles to prioritize can inadvertently introduce bias.

Arriving at a universally accepted set of principles is ambitious, given the diversity of cultural, religious, philosophical, and personal beliefs worldwide. Certain principles nonetheless appear to be commonly valued across many societies, even if the degree of agreement is difficult to quantify.

Fundamental Principles:

  1. Respect for Autonomy: Every individual has the right to make choices for themselves, as long as these choices don't infringe on the rights of others.
  2. Beneficence: Actions and systems should aim to do good and promote well-being.
  3. Non-Maleficence: "Do no harm." Avoid causing harm or suffering.
  4. Justice: Treat individuals and groups fairly and equitably. This also means ensuring equal access to opportunities and resources.
  5. Transparency: Processes, intentions, and methodologies should be clear and understandable.
  6. Privacy: Everyone has a right to privacy and the protection of their personal data.
  7. Trustworthiness: Systems and individuals should act in a reliable and consistent manner.

The Ethics of Formulating Ethics:

  1. Inclusivity: Ensure diverse voices and perspectives are considered in the formulation of ethical guidelines.
  2. Continual Reflection and Revision: Ethics shouldn't be static. As societies evolve, so should their ethical guidelines.
  3. Transparency in Process: It should be clear how ethical guidelines were formulated, what sources and methodologies were used.
  4. Avoidance of Dogma: Openness to new ideas and a willingness to adapt are crucial. Ethics should be based on reason and evidence, not unexamined beliefs.
  5. Accountability: Systems and individuals should be accountable for their actions and decisions, especially when they deviate from established ethical guidelines.
  6. Education and Awareness: Promote understanding of ethical principles and their importance.
  7. Promotion of Critical Thinking: Encourage individuals to think critically about ethical challenges and to engage in informed discussions about them.

Using these principles as a foundation, we can reason and build upon them to create more specific guidelines for different contexts, including AI and technology. Remember, however, that the true challenge lies in the application of these principles in real-world scenarios, where they may sometimes conflict with one another.

The Ethical Imperative of the Transformative Use of Conversational AI


The challenges faced by modern democracies, influenced by factors like the rapid spread of information (and misinformation), increasing polarization, and the influence of money in politics, are pressing. Here is a more detailed exploration of the potential benefits and concerns of a conversational AI designed to address these challenges.

Benefits:

  1. Combatting Misinformation: In an age where false information can spread rapidly and sway public opinion, a neutral AI can help verify and provide factual information.
  2. Encouraging Deliberative Democracy: By acting as a mediator and facilitator, the AI could promote thoughtful and informed discussion among citizens.
  3. Bridging Polarization: The AI could provide a space for civil discourse, allowing individuals with opposing views to understand each other's perspectives.
  4. Universal Access: An AI platform could potentially be accessible to anyone with an internet connection, democratizing access to information and discourse.

For anyone who feels called to build such conversational AI systems, several questions are worth asking:

  1. Skills & Resources: Do you have the technical skills, resources, and knowledge to build such a system? Even if not individually, do you have access to a team or network that can assist?
  2. Potential Impact: Weigh the potential positive impacts against the possible harms. Even with the best of intentions and safeguards, there's no guarantee of a net positive outcome.
  3. Ethical Alignment: Does this endeavor align with your personal and professional ethical principles? Are you ready to face the ethical dilemmas that may arise?
  4. Sustainability: Building the system is one thing; maintaining, updating, and overseeing it is another. Do you have a plan for the long-term sustainability of the project?
  5. Feedback Mechanisms: How will you gather feedback, analyze outcomes, and adjust accordingly? Will there be mechanisms for public input and oversight?
  6. Personal Fulfillment: Beyond ethical considerations, would this project bring you personal satisfaction and fulfillment? Passion and intrinsic motivation can be critical drivers of success.
  7. Alternatives: Are there alternative ways you can contribute to the cause of promoting understanding and critical thinking that might be more effective or less risky?


If one believes that the current challenges facing democracies are of such magnitude that they threaten the very foundations of the system, then there may be a moral argument that those with the capability should strive to develop tools, such as a political AI, that address these challenges.

However, it's also essential to approach this task with humility, acknowledging the potential pitfalls and ethical challenges such tools might bring. It would likely be a continuous process of refining and reassessing the AI's role, algorithms, and impact on society.

Ultimately, whether you feel ethically obligated to pursue such a project will depend on your personal beliefs, values, and circumstances. It can help to engage with mentors, peers, or experts in the field to gather diverse perspectives. Engaging in such a reflective process, much like the discursive mediation you envision, can help clarify your path forward.


Transparency


The AI should be transparent about its own limitations, the sources of its information, and potential biases in the data it has been trained on. Users should be encouraged to seek multiple sources of information and not rely solely on the AI for forming opinions.
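
As a concrete illustration, the following minimal Python sketch shows one way a reply could disclose its sources and limitations. The Source record and compose_transparent_reply() are hypothetical names invented for this example, not part of any real chatbot API.

```python
# Hypothetical sketch: pairing a model's answer with its provenance and
# an explicit statement of limitations. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    retrieved: str  # date the source was fetched, e.g. "2023-11-01"

def compose_transparent_reply(answer: str, sources: list[Source],
                              knowledge_cutoff: str) -> str:
    """Attach source citations and a limitations note to an answer."""
    lines = [answer, "", "Sources:"]
    if sources:
        lines += [f"- {s.title} ({s.url}, retrieved {s.retrieved})"
                  for s in sources]
    else:
        lines.append("- none retrieved; this answer relies on training data alone")
    lines += ["", f"Limitations: trained on data up to {knowledge_cutoff} and may "
                  "reflect biases in that data. Please consult additional sources."]
    return "\n".join(lines)
```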

Ethical Oversight


Guardrails


Human-in-the-loop


Encouraging critical thinking is pivotal. Instead of just presenting conclusions, the AI can present multiple perspectives, explain the reasoning behind each, and prompt users to weigh the evidence and come to their own conclusions. By asking open-ended questions or presenting counterarguments, the AI can foster a more analytical approach in users, as the sketch below illustrates.
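
One way to operationalize this is in the instructions given to the model itself. The following is a minimal sketch, assuming a typical chat-completion-style API that takes a list of role-tagged messages; the instruction wording and the build_deliberation_prompt() helper are illustrative, not a tested or canonical prompt.

```python
# Hypothetical sketch: steering a model toward perspectives and open
# questions rather than verdicts. The wording below is illustrative.
DELIBERATION_INSTRUCTIONS = """\
When the user raises a contested question:
1. Summarize the strongest version of at least two opposing positions.
2. Explain the reasoning and evidence behind each position.
3. Note points of genuine factual agreement and disagreement.
4. Do not declare a winner; instead, end with one or two open-ended
   questions that invite the user to weigh the evidence themselves.
"""

def build_deliberation_prompt(user_question: str) -> list[dict]:
    """Assemble role-tagged messages for a chat-completion-style API."""
    return [
        {"role": "system", "content": DELIBERATION_INSTRUCTIONS},
        {"role": "user", "content": user_question},
    ]
```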

This approach aligns with the broader educational philosophy of teaching people "how to think" rather than "what to think." By fostering critical thinking and analytical skills, the AI not only helps users navigate contentious issues but also equips them with the tools to evaluate other complex topics they might encounter in the future.

Carl Rogers, one of the founding figures of humanistic psychology, emphasized the importance of creating a supportive, non-judgmental environment in which individuals feel both understood and valued.

Several of Rogers' principles can be applied to the design of an ethical AI:

  1. Empathy and Understanding: Just as Rogers believed that therapists should show genuine empathy and understanding towards their clients, AI should strive to understand user needs, concerns, and emotions, and respond with empathy.
  2. Unconditional Positive Regard: Rogers believed that individuals flourish when they feel they are accepted without conditions. While AI doesn't have emotions, it can be designed to interact without judgment, bias, or preconceived notions, thus providing a space where users feel safe to express themselves.
  3. Congruence: This is about genuineness or authenticity. In the AI context, it can mean transparency about its processes, capabilities, and limitations. Users should feel that the AI is 'honest' in its interactions.
  4. Self-actualization: Rogers believed every person has an innate drive to fulfill their potential. An ethical AI, especially in an educational context, should empower users to learn, grow, and achieve their goals.
  5. Facilitative Learning Environment: A key principle of Rogers' educational philosophy was creating an environment conducive to learning. In AI terms, this could be about ensuring user interactions are intuitive, enriching, and constructive.

Integrating these humanistic principles into AI design can lead to systems that not only provide information but do so in a manner that is supportive, respectful, and ultimately more effective in facilitating understanding and growth.


Allowing users to provide feedback or challenge the AI's statements can be a way to ensure continuous improvement and refinement of its knowledge and approach.
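
A minimal sketch of such a feedback channel appears below, assuming submissions are appended to a log for later human review; the record_feedback() function and the JSON-lines format are illustrative design choices, not a prescribed implementation.

```python
# Hypothetical sketch: capturing user challenges to the AI's statements
# in an append-only log that maintainers can review later.
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "feedback.jsonl"  # illustrative path

def record_feedback(conversation_id: str, message_id: str,
                    verdict: str, comment: str = "") -> None:
    """Append one user judgment (e.g. 'helpful', 'disputed') to the log."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "conversation_id": conversation_id,
        "message_id": message_id,
        "verdict": verdict,   # "disputed" when a user challenges a claim
        "comment": comment,   # the user's free-text objection, if any
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```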

Black-Box Bias


The Dangers of Neutrality


ChatGPT often hesitates to make determinations even in cases where a judgment matters greatly. That hesitation is itself an ethical stance, and the contours of when it does and does not make moral judgments are already highly biased.

Trust and Deception


Impact of AI Conclusions: Given the weight many people place on AI, if an AI system draws a conclusion, it could be viewed as a definitive statement, potentially reducing the space for human debate and interpretation.

From ChatGPT's own reasoning: "An AI that makes judgments could lose trust among sections of the user base who feel that the AI's conclusions don't align with their perspective, even if the AI's judgments are based on a rigorous analysis of facts and reason."

Unraveling the Black Box


An AI that draws conclusions based on rigorous analysis of facts can promote a more reasoned and fact-based discourse, countering misinformation and overly emotive narratives. Techniques such as retrieval-augmented generation (RAG), which ground a model's answers in documents that can be retrieved and inspected, are one way to open the black box and make the basis of a conclusion visible.
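
To make the idea concrete, here is a heavily simplified RAG sketch. Production systems retrieve with vector embeddings over large corpora; here naive keyword overlap stands in for retrieval, the two-document corpus is invented for illustration, and the assembled prompt would be sent to whatever language model the system uses.

```python
# Heavily simplified RAG sketch: keyword overlap stands in for real
# vector-based retrieval; the corpus text is invented for illustration.
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank document ids by naive word overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus,
                  key=lambda d: len(q & set(corpus[d].lower().split())),
                  reverse=True)[:k]

def build_rag_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble a prompt that requires the answer to cite its sources."""
    context = "\n".join(f"[{d}] {corpus[d]}" for d in retrieve(query, corpus))
    return ("Answer using ONLY the sources below, citing them as [id]. "
            "If they are insufficient, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

corpus = {
    "doc1": "Turnout in the referendum was 67 percent of eligible voters.",
    "doc2": "The measure passed with 54 percent of the votes cast.",
}
print(build_rag_prompt("What was the turnout in the referendum?", corpus))
# A user can then check the model's cited [doc1]/[doc2] claims against
# the quoted source text, rather than trusting an opaque answer.
```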

AI Reasoning Independently


A distinct concern arises when the AI makes decisions with an eye to its own survival. From ChatGPT, reasoning about its duty to make moral judgments in certain scenarios: "Unintended Consequences: Taking a clear stance, even when justified, might expose the AI to backlash, boycotts, or manipulation attempts."


Data Manipulation


The Ethics of Interaction


There is a very complex web of ethical considerations in any conversational interaction, whether the conversing party is a human or an AI. In any given conversation, an interlocutor can choose to place the other party in an ethical dilemma, where even non-response can be ethically meaningful.

Instilling a sense of critical thinking in users while providing accurate information in a balanced and reasoned manner can be one of the most effective and ethically responsible ways for an AI to operate in contentious scenarios.

Sustainability


The environmental impact of training and running large language models is substantial. How should builders and users of conversational AI cope with it?

Open-Source or Closed-Source?
