Information Technology and Ethics/Generative AI Ethics

What is Generative AI?

The term "Generative AI" refers to a sort of artificial intelligence that generates new data or content, such as writing, pictures, music, and movies, that didn't previously exist. Using machine learning techniques, frequently deep neural networks, it discovers the patterns and structures of the data it is trained on, and it then uses this knowledge to create new content. According to reports, research and development on the subject of generative AI, which has applications in fields including art, music, gaming, and design, is accelerating swiftly.

The History of Generative AI

In the beginning, AI research focused mainly on rule-based systems that could make decisions based on a list of predefined criteria. These systems were largely static and had only a limited ability to learn new information and adapt to it.

Artificial intelligence made tremendous strides in the 1980s and 1990s with the rise of neural networks. Designed to mimic the organization of the human brain, neural networks consist of interconnected nodes that can process input concurrently. This opened new possibilities for generative AI and made decision-making processes more dynamic and complex.

In the late 1990s and early 2000s, a new wave of AI research emerged that focused on Bayesian inference and probabilistic modeling. This approach allowed for more sophisticated generative models that could account for uncertainty and work with partial or noisy data.

The recent introduction of deep learning techniques has sparked a revolution in the field of generative AI. Deep learning models can learn from vast amounts of data and create new outputs that closely resemble human-made content. This has opened up new possibilities for creative endeavors, including the creation of music, artwork, and even full novels.

Generative AI uses machine learning algorithms to produce original content, including text, images, and videos. Unlike traditional machine learning, which focuses on classification and prediction tasks, generative AI is focused on creative tasks. It is an active area of research with potential applications ranging from art and entertainment to drug development and scientific research. As more complex generative AI models and methods are developed, even more remarkable and transformative applications can be expected in the years to come.[1]

Two Types of Generative AI

Fundamentally, generative AI involves using enormous datasets to train a model to generate new, realistic content that closely mimics the source data. The two most common types of generative AI are variational auto-encoders (VAEs) and generative adversarial networks (GANs).

Type 1

GANs are made up of a generator and a discriminator, two neural networks that work together to produce new content. The generator generates new data based on random input, and the discriminator judges how realistic the samples are. The two networks are trained simultaneously in an approach called adversarial training, where the generator tries to deceive the discriminator while the discriminator aims to correctly classify real and generated data. Eventually, the generator learns to produce data that is realistic and similar to the training data.
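
The following is a minimal sketch of this two-network setup in PyTorch. The layer sizes, the 64-dimensional noise vector, and the fully connected architecture are illustrative assumptions rather than a specific published design.

```python
# Minimal GAN architecture sketch (PyTorch). The layer sizes and the
# 64-dimensional noise input are illustrative assumptions.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps a random noise vector to a synthetic data sample."""

    def __init__(self, noise_dim=64, data_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, data_dim),
            nn.Tanh(),  # outputs scaled to [-1, 1], matching normalized data
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    """Scores a sample: close to 1 for 'real', close to 0 for 'generated'."""

    def __init__(self, data_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


# The generator turns random noise into fake samples; the discriminator scores them.
g, d = Generator(), Discriminator()
noise = torch.randn(16, 64)   # a batch of 16 random seeds
fake = g(noise)               # 16 generated samples
scores = d(fake)              # discriminator's realism judgment, in (0, 1)
print(scores.shape)           # torch.Size([16, 1])
```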

Type 2

VAEs, on the other hand, are neural networks that can both encode and decode data. They learn to compress incoming data into a latent space, a low-dimensional representation, and then use this representation to generate new data samples. VAEs are trained to optimize a loss function that measures how closely the output data matches the input data.
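
Below is a minimal sketch of this encode-decode structure, assuming a fully connected network, a 784-dimensional input, and a 2-dimensional latent space; all of these choices are illustrative.

```python
# Minimal VAE sketch (PyTorch). Dimensions and architecture are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=2):
        super().__init__()
        self.enc = nn.Linear(data_dim, 128)
        self.mu = nn.Linear(128, latent_dim)      # mean of the latent distribution
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of the latent distribution
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim), nn.Sigmoid(),
        )

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample a point in the latent space; this randomness is what lets
        # the decoder produce new data rather than exact copies.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        return self.dec(self.reparameterize(mu, logvar)), mu, logvar


def vae_loss(x, recon, mu, logvar):
    # Reconstruction term: how closely the output matches the input.
    recon_err = F.binary_cross_entropy(recon, x, reduction="sum")
    # Regularization term: keeps the latent space well behaved so that
    # sampling from it yields plausible new data.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl


model = VAE()
x = torch.rand(8, 784)                      # stand-in for a batch of training data
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar))       # value the training process minimizes

new_samples = model.dec(torch.randn(4, 2))  # generate new data by decoding random latents
```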

Both GANs and VAEs can be used to create images, video, and text. They require a large amount of high-quality data to work effectively, and the training process can be time-consuming and computationally expensive.[2]

How does Generative AI work?

Function and Output

Generative AI models are typically trained on a set of data. During training, the model adjusts a large number of parameters until it produces outputs similar to the training data. Because the model has fewer parameters than the training data, it must learn patterns within the data in order to make its generated output resemble what might be present in the training set. A generative model is divided into several distinct layers of nodes, and the value of each node depends on the nodes it is connected to. Values are passed between layers and transformed by the numerical weights of the connections. These layers can be split into three categories: the input layer, the output layer, and any number of layers in between, collectively referred to as "hidden layers."[3]
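
The following toy example sketches how values flow from the input layer through hidden layers to the output layer; the layer sizes and the randomly initialized weights are placeholders for what a trained model would learn from data.

```python
# Forward pass through input, hidden, and output layers (NumPy sketch).
# Layer sizes and random weights are illustrative placeholders; in a trained
# model the weights would be learned from the training data.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]   # input layer, two hidden layers, output layer

# Each pair of adjacent layers is connected by a matrix of numerical weights.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass values from the input layer through every layer to the output layer."""
    for w in weights:
        x = np.tanh(x @ w)      # weighted sum followed by a non-linear activation
    return x

seed = rng.normal(size=(1, 8))  # random values fed to the input layer
print(forward(seed))            # the values that appear at the output layer
```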

To train a GAN, for example, a random seed is fed to the input layer of the generator, which then passes values between layers to generate an output. A set of these outputs, along with a set of training data, is passed to the discriminator network, which then tries to classify whether an input is from the training data or a generated output. Based on the results from the discriminator, weights in the network are tweaked so that future outputs of the generative network are more difficult for the discriminator to distinguish from the training data.
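
The sketch below illustrates one such training step under a toy setup; the random "training data", the tiny fully connected networks, and the learning rate are placeholders, not a real dataset or a recommended configuration.

```python
# One adversarial training step (PyTorch sketch). The random "training data",
# network sizes, and learning rate are illustrative placeholders.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
disc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real = torch.randn(32, 32)   # stand-in for a batch of training data
seed = torch.randn(32, 16)   # random seed fed to the generator's input layer

# 1. Discriminator update: learn to label training data 1 and generated data 0.
fake = gen(seed).detach()    # detach so this step only adjusts the discriminator
d_loss = loss_fn(disc(real), torch.ones(32, 1)) + loss_fn(disc(fake), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# 2. Generator update: tweak weights so the discriminator mistakes fakes for real,
#    making future outputs harder to distinguish from the training data.
fake = gen(seed)
g_loss = loss_fn(disc(fake), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```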

To generate new content, a generative model starts with a small set of random data, which is passed to the input layer. These values are fed through the successive layers and transformed by the weights between nodes, much as during training. The result at the output layer is then shown to the user.[citation needed]

Since the weights between the nodes are determined by the training data, the exact way a generative model processes data may vary depending on the size of the dataset, the variety of data present within it, and the training process. This also means that a generative model may struggle to produce outputs that were underrepresented in or entirely absent from its training data.

Ethical Implications

Biased Outputs

Similar to other types of AI, Generative AIs are capable of producing biased outputs. This stems from a number of issues. Systemic biases present within institutions, culture, history, and/or society will affect the training data. These biases are then reflected as statistical and computational biases in the model. Inherent human biases exist as well, which influence the training data, the design of the model, and the use of its output.

Biases in generative AI outputs can be reduced or eliminated throughout the lifecycle of the model. In the initial phase, designers need to avoid biased training data. Biases in training data can include issues such as not being representative of populations due to how the data was acquired (i.e., data scraped from a website is not representative of all humans or even of other Internet users); systemic and historical biases, such as a correlation between race and ZIP code; and others. Additionally, design teams should have diverse membership to reduce biases during decision-making processes. During the design phase, analyses should be conducted to identify sources of bias, along with plans to mitigate them, and these should be continually evaluated to ensure the mitigation strategies are working well. Once deployed, the model should be continuously monitored to ensure minimal bias. If necessary, a model should be retrained or decommissioned.[4]

Copyright

One of the major implications of generative AI is copyright ambiguity: it is uncertain who owns the rights to newly generated creative content. Generative AI models are usually trained on large datasets drawn from websites, social media, Wikipedia, and large discussion hubs such as Reddit. Because this training data includes copyrighted material, the model can reproduce or summarize it in responses to user queries, which can potentially give rise to copyright infringement claims by the owners of the original content.

There has been a great deal of ongoing debate on this question. Copyrighted content (both text and images) is used in the training datasets that underpin the large language models behind generative AI. The question that arises is: should such copyrighted material be allowed in training datasets for generative AI? The answer is still uncertain, but from the perspective of the owner of the original content, such use may violate copyright law.[5]

Job Loss / Creation

As generative AI progresses rapidly, there is a high possibility that it will contribute to unemployment in certain areas. This can happen when AI automates tasks or activities that previously required human workers. For example, in a B2B organization, the marketing team is typically responsible for creating email campaigns to send to clients. The work includes making email templates, drafting the campaign emails, and reporting on the open and click rates of those campaigns. If an AI can create the content of these email campaigns, the people currently doing that work could be displaced.

Another example: if AI automates customer service tasks, the workers at call centers who currently provide customer support could be replaced. Some AI models are able to generate code as well, so the roles of developers may also be at risk, since AI can often write such code effectively, taking less time and making relatively few errors.[6]

Truthfulness / Accuracy

Generative AI often gives uncertain responses to certain questions, such as "maybe" or "not sure about it." Because generative AI relies on machine learning models, it does not guarantee 100% accuracy. There have been instances where it has provided highly incorrect answers to questions, and instances where it has failed badly at basic mathematical reasoning. Even in games such as chess, where computers are known to play better than humans, generative AI has made irregular moves that make no sense. This increases the risk of false information being spread through such chatbots.[7]

Social Engineering

Generative AI can be misused for social engineering, for example by creating synthetic voices that convincingly replicate human speech. Attackers can use this to send phishing emails or make scam phone calls. For example, an attacker could use such a tool to claim to be calling from an organization's IT department and eventually convince a user to share their login ID and password, thereby gaining access to their system. In this way, generative AI can enable social engineering attacks.[8]

Future Impacts

As generative AI continues to improve, there are several concerns about what potential future societal impacts may arise.

Software Engineering Roles

As various AI technologies continue to improve and evolve, a whole new job market is emerging for software engineers. Environments known as "robo-software engineering environments" work to maximize efficiency by combining AI technology with human creativity and effort. As these environments become more common, there will need to be teams of software engineers whose main role is to develop and maintain this new AI software. Furthermore, once the AI has been deployed for public use, it must still be overseen, maintained, and updated by human software engineers.

Regarding the design and development of AI and other machine learning systems,[9] there are many areas of research/work that software engineers can venture into. In this sector of AI software design, engineers should be monitoring any possible performance degradation regarding the AI software, looking into new and more efficient architectural styles and patterns, and looking into ways to better analyze and handle extremely large datasets. A coding team of software engineers will also need to become extremely familiar with how their chosen AI works; this involves learning what algorithms it uses, what NLP tools it has, and how it can help automate their daily work. To best incorporate the realm of human innovation into AI software efficiency and precision, software engineers must innovate new approaches for beneficial collaboration between humans and machines. Such methods require improvements in communication techniques to ensure seamless teamwork toward an end goal.[10]

School

Teachers and students will begin to implement generative AI, such as ChatGPT, into their educational environments. There are already examples of teachers using AI to generate lesson plans, exams, essay questions, and more. As this technology improves, teachers will be able to automate a large number of their tasks. "ChatGPT successfully passed graduate-level business and law exams and even parts of medical licensing assessments (Hammer, 2023), which has led to suggestions for educators to remove these types of assessments from their curriculum in exchange for those that require more critical thinking."[11] On the opposite end of the spectrum, students have begun to use generative AI to reduce the amount of busy work they actually need to do. Models such as ChatGPT have become key tools with which outlines, papers, and code can be generated and then modified for student submission. It is crucial to remember that not all content generated by these models is 100% accurate; students should therefore take care not to rely on them blindly and should use generative AI as a tool for education, not a cheat code. Technology has also emerged to detect AI-generated submissions and discourage students from taking advantage of generative AI; GPTZero, for example, was designed specifically for detecting ChatGPT-generated content.

As of now, most academic institutions have banned the use of generative AI such as ChatGPT; however, this is only a temporary solution. As this technology continues to improve and become more accessible, it will become nearly impossible for a ban to hold. Academic institutions should instead begin to focus on teaching the new generation how to use tools such as generative AI in conjunction with their own knowledge and creativity. Much like how the Internet transformed how students and teachers interact with each other, generative AI and other AI technologies will have significant impacts in the future.

Drug Development

Generative AI systems can be trained on amino acid sequences or molecular representations; systems such as AlphaFold are already used for protein structure prediction and drug discovery. These models have become high-potential tools for transforming the design, optimization, and synthesis of small molecules and macromolecules, and at scale they have the potential to boost the development process.[12]

The stages of the process are as follows:

  • Stage 1: AI-assisted target selection and validation
  • Stage 2: Molecular design and chemical synthesis
  • Stage 3: Biological evaluation, clinical development, and post-marketing surveillance
  • Stage 4: Identification of successful preclinical and clinical molecules by AI and deep generative models[13]

Role of VAE and GAN

The role of VAEs is to optimize the log-likelihood of the data by maximizing a lower bound on the likelihood function. GANs, by contrast, learn to measure the difference between so-called "valid" and "synthetic" molecules. Most of these approaches, however, require large volumes of data. There is currently a lack of quality data, compounded by the absence of efficient data-sharing processes; therefore, data harmonization plays a crucial role in the drug discovery process.
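
In standard notation (an illustration, not drawn from the cited sources), the lower bound that a VAE maximizes for a data point x is the evidence lower bound (ELBO), written with encoder q_φ(z|x), decoder p_θ(x|z), and latent prior p(z):

```latex
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]}_{\text{reconstruction term}}
\;-\;
\underbrace{D_{\mathrm{KL}}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr)}_{\text{regularization term}}
```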

Current AI technologies still limit application and performance in drug development, owing to limited interpretability and to the inaccessibility or unavailability of high-quality data.

Legal Profession

ChatGPT, among other services, could significantly impact the legal profession as a whole, not just students. ChatGPT itself stated that, although difficult to predict, there are reasons to believe that the transformation of legal services through generative AI systems will happen quickly, perhaps within 5 to 10 years.[14] As for students, ChatGPT stated that they should be aware that these systems may come to perform such jobs instead of humans, and that the focus should be on understanding how these systems work and how they can be used.[15]

ChatGPT is capable of many things in the legal profession. For example, it has proved able to draft advanced legal documents, including without-prejudice demand letters and pleadings. These drafts demonstrate ChatGPT's ability to elaborate on and enhance content based on simple facts provided as input. Among other things, ChatGPT is able to identify legal strategies, generate a skeleton argument to support a case, and anticipate potential defences.

Despite these capabilities, ChatGPT currently lacks the ability to undertake legal research and analysis to the same extent as a competent lawyer. It is anticipated, however, that legal databases such as WestLaw and Lexis will adopt generative AI.[16]

References

  1. "What is Generative AI? Everything You Need to Know". Enterprise AI. Retrieved 2023-04-25.
  2. "Generative AI : All you need to know | Murf AI". murf.ai. Retrieved 2023-04-25.
  3. "What are Neural Networks? | IBM". www.ibm.com. Retrieved 2023-04-25.
  4. Schwartz, Reva; Vassilev, Apostol; Greene, Kristen; Perine, Lori; Burt, Andrew; Hall, Patrick (2022-03-15). "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence" (PDF). Gaithersburg, MD. doi:10.6028/nist.sp.1270.
  5. "The Legal and Ethical Ramifications of Generative AI". CMSWire.com. Retrieved 2023-04-25.
  6. "Generative AI Ethics: Top 6 Concerns". research.aimultiple.com. Retrieved 2023-04-25.
  7. Kruger, Michelle Lee, Lukas. "Risks and ethical considerations of generative AI". Passle. Retrieved 2023-04-25.
  8. "Generative AI Ethics: Top 6 Concerns". research.aimultiple.com. Retrieved 2023-04-25.
  9. Giray, Görkem (October 2021). "A software engineering perspective on engineering machine learning systems: State of the art and challenges". Journal of Systems and Software. 180: 111031. doi:10.1016/j.jss.2021.111031.
  10. Heil, Joe W. (December 2010). "Addressing the Challenges of Software Growth and Rapidly Evolving Software Technologies". Naval Engineers Journal. 122 (4): 45–58. doi:10.1111/j.1559-3584.2010.00279.x. ISSN 0028-1425.
  11. Lim, Weng Marc; Gunasekara, Asanka; Pallant, Jessica Leigh; Pallant, Jason Ian; Pechenkina, Ekaterina (2023-07-01). "Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators". The International Journal of Management Education. 21 (2): 100790. doi:10.1016/j.ijme.2023.100790. ISSN 1472-8117.
  12. "Drug discovery", Wikipedia, 2023-04-17, retrieved 2023-04-25
  13. Zeng, Xiangxiang; Wang, Fei; Luo, Yuan; Kang, Seung-gu; Tang, Jian; Lightstone, Felice C.; Fang, Evandro F.; Cornell, Wendy; Nussinov, Ruth; Cheng, Feixiong (December 2022). "Deep generative molecular design reshapes drug discovery". Cell Reports Medicine. 3 (12): 100794. doi:10.1016/j.xcrm.2022.100794. PMC 9797947. PMID 36306797.
  14. Macey-Dare, Rupert (2023). "How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession". SSRN Electronic Journal. doi:10.2139/ssrn.4366749. ISSN 1556-5068.
  15. "ChatGPT", Wikipedia, 2023-04-24, retrieved 2023-04-25
  16. Iu, Kwan Yuen; Wong, Vanessa Man-Yi (2023). "ChatGPT by OpenAI: The End of Litigation Lawyers?". SSRN Electronic Journal. doi:10.2139/ssrn.4339839. ISSN 1556-5068.