Information Technology and Ethics/100 Year Study

From Wikibooks, open books for an open world

100-Year Study on Artificial Intelligence

History

This report is the first in the planned series of studies that will continue for at least a hundred years. "The Standing Committee defined a Study Panel charge for the inaugural Study Panel in the summer of 2015 and recruited Professor Peter Stone, at the University of Texas at Austin, to chair the panel. The seventeen-member Study Panel, comprised of experts in AI from academia, corporate laboratories and industry, and AI-savvy scholars in law, political science, policy, and economics, was launched in mid-fall 2015."[1]

The Standing Committee extensively discussed ways to frame the Study Panel charge to consider both "recent advances in AI and potential societal impacts on jobs, the environment, transportation, public safety, healthcare, community engagement, and government."[1] The committee considered various ways to focus the study, including surveying sub-fields and their status, examining a particular technology such as machine learning or natural language processing, and studying particular application areas such as healthcare or transportation.

This committee was created to monitor current and future advances in AI. As new AI applications appear every day, it seems almost inevitable that AI will eventually be everywhere, and we have already begun the work of building trust in and understanding of these technologies. Computer scientists around the world work every day to create ever more intelligent AI. Innovation is welcome, but "policies and processes should address ethical, privacy, security implications, and should work to ensure that the benefits of AI will spread broadly and fairly."

Overview

The One Hundred Year Study acknowledges the potential of “Strong AI,” yet there is no regulatory committee in place that watches over the newest advances in AI. The seventeen-member Study Panel represents diverse specialties, geographic regions, genders, and career stages. Future development of AI in every sector seems very promising. Artificial intelligence is pervasive: it lives inside our phones and home devices, and it allows Facebook and Amazon to target ads and products to specific users. This kind of artificial intelligence is called task-based. The software behind these AI devices allows the device to perform tasks, and learning to perfect those tasks is what makes AI “intelligent.” The One Hundred Year Study committee in some ways stands as our first defense against “strong AI,” which could cause economic devastation and, at worst, destroy human civilization. Moreover, the Standing Committee expects the study to have broader global relevance, and it plans to expand the project internationally in the future.

Even though artificial intelligence has already saturated most of the services and devices we rely on, it is important to understand the dangers of machine learning, which affects every one of us whether we like it or not. Whoever controls this information, and decides how to address the ethical and moral concerns raised by advances in AI, will ultimately determine whether we avoid our own demise. The One Hundred Year Study on Artificial Intelligence was created in 2014. The purpose of the study is to analyze “the influences of AI on people, their communities and society”[1] and to help ensure that AI development, research, and systems design benefit both individuals and society while remaining safe. While we are a long way from true artificial intelligence (which we can define as technology that understands, reasons, makes decisions, provides complex responses, and acts logically like a human), machine learning is at hand. It is an extension of advanced analytics based on the idea that machines can learn from data.
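The idea that "machines can learn from data" can be made concrete with a minimal sketch: fitting a straight line to a handful of points by ordinary least squares, then using the learned parameters to make a prediction. The data values below are invented purely for illustration.

```python
# Minimal illustration of a machine "learning from data":
# fit a line y = a*x + b to example points by ordinary least squares,
# then use the learned parameters to predict a new value.
# (The data points here are made up for illustration only.)

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]   # roughly follows y = 2x

a, b = fit_line(xs, ys)     # the "learned" parameters (slope near 2)
prediction = a * 5.0 + b    # predict for an unseen input x = 5
```

The point of the sketch is that the program is never told the rule "y is about twice x"; it extracts that relationship from the data alone, which is the core idea behind the far more elaborate models discussed in the report.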

Focus

This study’s focus on a typical North American city is deliberate and meant to highlight specific changes affecting the everyday lives of the millions of people who inhabit them. The Study Panel further narrowed its inquiry to eight domains where AI is already having or is projected to have the greatest impact: transportation, healthcare, education, low-resource communities, public safety and security, employment and workplace, home/service robots, and entertainment.

The report is designed to address four intended audiences:

  1. The general public: it aims to provide an accessible, scientifically and technologically accurate portrayal of the current state of AI and its potential.
  2. Industry: it describes relevant technologies and legal and ethical challenges, and may help guide resource allocation.
  3. Local, national, and international governments: it aims to help them better plan for AI in governance.
  4. AI researchers, along with their institutions and funders: it can help set priorities and frame the ethical and legal issues raised by AI research and its applications.

Effectiveness

How effective is AI? “AI is only as effective as the data it is trained on. In Tay’s case, the machine intelligence is accurately reflecting the prejudices of the people it drew its training from.”[2] AI often operates as a black box: data goes in and an answer comes out, with no explanation of how the decision was reached. This is a serious problem because, as humans subject to the law, we want to understand how a decision was made. Current research in deep learning, artificial neural networks, and reinforcement learning is still struggling to trace the logic of how an AI system knows what it knows. Scientists are trying to train AI to make new correlations, much as the human brain does.

AI Now and in the Future

How do we measure the success of AI applications? The measure is the value they create for human lives. As the report notes, "Going forward, the ease with which people use and adapt to AI applications will likewise largely determine their success"[1]. Another important consideration is how AI systems that take over certain tasks will affect people’s competencies and capabilities. "As machines deliver super-human performances on some tasks, people’s ability to perform them may wither."[1] Introducing calculators to classrooms, for example, has already reduced children’s ability to do basic arithmetic. Still, humans and AI systems have complementary abilities: people perform tasks, such as complex reasoning and creative expression, that machines cannot.

Further, AI applications and the data they rely upon may reflect the biases of their designers and users, who specify the data sources. This threatens to deepen existing social biases and to concentrate AI’s benefits unequally among different subgroups of society. Privacy concerns about AI-enabled surveillance are also widespread, particularly in cities with pervasive instrumentation. "Sousveillance, the recording of an activity by a participant, usually with portable personal devices, has increased as well. Since views about bias and privacy are based on personal and societal ethical and value judgments, the debates over how to address these concerns will likely grow and resist quick resolution."[1]

Three General Policy Recommendations

  1. Define a path toward accruing technical expertise in AI at all levels of government. Effective governance requires more experts who understand and can analyze the interactions between AI technologies, programmatic objectives, and overall societal values.
  2. Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.
  3. Increase public and private funding for interdisciplinary studies of the societal impacts of AI.

Study Panel

  • Peter Stone, University of Texas at Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, Massachusetts Institute of Technology
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, Indian Institute of Technology Bombay
  • Ece Kamar, Microsoft Research
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, University of British Columbia
  • David Parkes, Harvard University
  • William Press, University of Texas at Austin
  • AnnaLee (Anno) Saxenian, University of California, Berkeley
  • Julie Shah, Massachusetts Institute of Technology
  • Milind Tambe, University of Southern California
  • Astro Teller, X

References


  1. https://ai100.stanford.edu/2016-report
  2. https://www.fastcompany.com/40536485/now-is-the-time-to-act-to-stop-bias-in-ai
  3. https://www.ebsco.com/apps/landing-page/assets/POVRC_Intelligent_Machines_vs_Human_Intelligence.pdf