Information Technology and Ethics/Generative AI Ethics
What is Generative AI?
Generative Artificial Intelligence (Generative AI) is a type of technology that uses advanced algorithms to create new content. This can include text, images, videos, music, and even code. Unlike traditional AI, which is used mostly for analyzing or organizing information, generative AI produces original content by learning patterns from existing data.
The History and Background of Generative AI
Generative AI has come a long way, but its roots are closely tied to the broader development of artificial intelligence as a whole. In the beginning, AI systems were created to perform tasks that typically required human intelligence. One of the most influential figures in this early stage was Alan Turing. In 1950, he proposed the famous Turing Test, which aimed to measure whether a machine could "think" like a human. This idea became a foundation for future AI research. Turing’s early work helped shape what became known as symbolic AI, or “Good Old-Fashioned AI” (GOFAI), where machines followed strict, rule-based logic to make decisions. While these early systems were important, they lacked the flexibility and learning ability seen in today’s AI models. [1]
For a long time, AI could only do what it was explicitly told to do. It could follow instructions well but couldn’t learn from experience or improve over time. Neural networks offered a different approach. Inspired by how the human brain works, they use artificial “neurons” that pass data through layers to recognize patterns. One of the earliest models, the perceptron (introduced in the late 1950s), could complete basic tasks like recognizing letters or shapes, but its limitations caused interest in neural networks to fade for a while. That changed in the 1980s with the development of the backpropagation algorithm, which helped neural networks learn from their mistakes. Researchers like Geoffrey Hinton played a big role in improving these methods. Combined with better computer hardware, this allowed deep learning (neural networks with many layers) to become more powerful and capable of handling complex tasks like speech and image recognition. [2]
Generative AI started becoming more of a reality in the early 2000s. Earlier AI could spot patterns, but it couldn’t actually create anything new. Some of the first generative models, such as Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs), were used for tasks like speech recognition and simple image generation, but they had limitations in producing diverse or detailed content. A major turning point came in 2014 with the invention of Generative Adversarial Networks (GANs). These systems use two networks: one generates content, and the other checks how realistic it is. This method helped AI produce much more convincing and lifelike images, videos, and more. [3]
In 2017, another breakthrough happened with the introduction of transformers. These models changed the way AI handles language by using something called "self-attention" to process entire sentences at once, rather than word by word. This allowed AI to better understand context and relationships between words, making the responses sound more natural and human-like. [4]
One of the most well-known examples of this is GPT-3 (Generative Pre-trained Transformer 3). It’s a powerful AI model that can take a simple prompt and generate detailed, relevant responses. Tools like GPT-3 show just how far generative AI has come in creating text that feels genuinely human.
Today, generative AI models are more advanced than ever. Tools like DALL·E can take a short description—like “a surfing dog”—and turn it into an entirely new image. This ability to combine language and visuals shows how generative AI is pushing the boundaries of creativity. As these systems continue to improve, they are not just helping us make content—they are reshaping how we think about imagination, art, and human expression.
How Does Generative AI Work?
Generative AI is a form of artificial intelligence that focuses on creating entirely new content—like text, images, music, video, and more. These systems learn by analyzing large amounts of existing data, such as books, reports, images, or audio files. By finding patterns within that data, the AI can then generate original content that reflects what it has learned. Unlike traditional AI, which is typically used for tasks like sorting information or making predictions, generative AI is designed to build something new.
At the core of most generative AI systems is a deep learning model known as a neural network. These networks are inspired by the way human brains work. Just like our brains use neurons to pass signals and process information, neural networks use artificial "neurons" that pass data through layers. As this data moves through the system, the AI learns more complex patterns and structures.
For example, when a neural network is trained to work with images, it might begin by recognizing simple shapes and edges. As it processes more data and moves through more layers, it can begin to understand more detailed features—like facial structures or textures. This process allows AI to generate outputs that seem highly realistic, whether it’s a sentence that sounds human or an image that looks like it was taken by a camera.
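To make the idea of layers concrete, here is a minimal sketch in Python (using NumPy) of a two-layer network's forward pass. The layer sizes, random weights, and toy input are purely illustrative and do not correspond to any real model.

```python
# A minimal sketch (not any production model): a tiny two-layer neural
# network forward pass in NumPy, illustrating how data flows through
# layers of artificial "neurons". All sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Toy "image": a flattened 8x8 grid of pixel intensities.
x = rng.random(64)

# Layer 1: 64 inputs -> 16 hidden neurons (might come to detect edges or simple shapes).
W1, b1 = rng.normal(size=(16, 64)), np.zeros(16)
# Layer 2: 16 hidden -> 4 outputs (might represent higher-level features).
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

h = relu(W1 @ x + b1)   # early layer: low-level patterns
y = W2 @ h + b2         # later layer: more abstract combinations
print(y)
```

In a real system the weights would be adjusted during training (for example with backpropagation) so that later layers come to represent increasingly abstract features; the sketch only shows the forward pass.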
The development of generative AI has been made possible by improvements in computing power, especially hardware like advanced GPUs developed by companies such as NVIDIA. These technologies allow AI systems to process massive amounts of data quickly and efficiently, which is essential for generating high-quality content.
There are several kinds of generative models. One is the transformer model,[5] introduced by Vaswani et al. in the 2017 paper "Attention Is All You Need". Transformers use self-attention to determine which parts of an input or prompt matter most in a given context, which helps the model understand the prompt and produce a clearer, more relevant response. Another important model is the generative adversarial network, or GAN. GANs were introduced in 2014 and were for years the most common approach to generative modeling. A GAN pairs two neural networks: a generator that produces new examples and a discriminator that tries to tell real content from generated content, with each network improving by competing against the other. [6]
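As a rough illustration of the self-attention step described above, the following Python sketch computes scaled dot-product attention over a handful of token embeddings. The matrices are random placeholders rather than learned parameters, so it shows only the shape of the computation, not a working language model.

```python
# A minimal sketch of scaled dot-product self-attention, the core
# operation in transformers. Weights and inputs here are random
# placeholders, not a trained model.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8          # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))

# Learned projection matrices (random here) map tokens to queries, keys, and values.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Each token attends to every token in the sequence at once.
scores = Q @ K.T / np.sqrt(d_model)      # how relevant each token is to each other token
weights = softmax(scores, axis=-1)       # attention weights sum to 1 per token
output = weights @ V                     # context-aware token representations
print(weights.round(2))
```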
Generative AI has many uses. In healthcare, it can help scientists design new drugs by exploring possibilities a human researcher might miss, including highly targeted candidate medications. In entertainment, it can speed up special-effects work, allowing films to be produced faster and at higher quality. It can even help software engineers debug their code. With appropriate regulation, the opportunities are wide-ranging.
Current Ethical Challenges and Debates
As generative AI becomes more powerful and widely used, it raises serious ethical concerns. These debates cover a wide range of issues, including intellectual property, job displacement, and the future of creativity.
One of the biggest questions is about intellectual property and copyright. Since generative AI is trained on existing content—often created by humans—it’s difficult to determine who owns the rights to the outputs it creates. If an AI generates a painting that mimics the style of a well-known artist, who should get credit or royalties? The AI? Its developer? The original artist’s estate? Today’s copyright laws don’t clearly address these scenarios, which has led to calls for new regulations that can keep up with technology. [7]
Another concern is that AI-generated content may not be truly original. Instead, it often pulls together patterns and ideas from existing works, raising the argument that it shouldn’t receive the same legal protections as human-created content. However, others believe that giving legal rights to AI-generated work could encourage more innovation and investment in this space.
Beyond legal issues, there are also big questions about the impact of generative AI on jobs and creativity. As AI gets better at producing high-quality content, there’s growing concern that it could replace human workers in creative fields like writing, design, and software development. While AI can be a helpful tool for boosting creativity, it also poses a risk to professionals whose careers depend on producing original work. [8]
These issues lead to a larger ethical question: How do we balance innovation with responsibility? If AI is going to reshape the future of work and creativity, we need to make sure that its benefits are shared fairly. That means thinking about who controls the technology, how it's used, and how we protect the people it might affect most.
Ethical Implications
Biased Outputs
Similar to other types of AI, generative AI systems are capable of producing biased outputs. This stems from a number of issues. Systemic biases present within institutions, culture, history, and/or society will affect the training data. These biases are then reflected as statistical and computational biases in the model. Inherent human biases exist as well, which influence the training data, the design of the model, and the use of its output.
Biases in Generative AI outputs can be reduced or eliminated throughout the lifecycle of the model. In the initial phase, designers need to avoid biased training data. Biases in training data can include issues such as not being representative of populations due to how the data was acquired (i.e., data scraped from a website is not representative of all humans or even other Internet users); systemic and historical biases, such as a correlation between race and ZIP code; and others. Additionally, design teams should have a diverse variety of members to reduce biases during decision-making processes. During the design phase, analyses should be conducted to identify sources of bias as well as plans to mitigate them. These should be continually evaluated to ensure the mitigation strategies are working well. Once deployed, the model should be constantly monitored to ensure minimal bias. If necessary, a model should be retrained or decommissioned.[9]
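As one concrete (and deliberately simplified) example of post-deployment monitoring, the sketch below compares how often a model produces a positive outcome for different groups and flags large gaps. The sample data, group labels, and the 0.8 threshold are illustrative assumptions, not a complete bias audit.

```python
# A minimal sketch of one possible monitoring check: comparing how often a
# deployed model produces a "positive" outcome across demographic groups.
# The data, group labels, and the 0.8 threshold (a common rule of thumb)
# are purely illustrative.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, where decision is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flag(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the highest rate."""
    highest = max(rates.values())
    return {g: (r / highest) < threshold for g, r in rates.items()}

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(sample)
print(rates)                        # group A ~0.67, group B 0.25
print(disparate_impact_flag(rates)) # group B is flagged for review
```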
Copyright
One of the major implications of generative AI is copyright ambiguity: it is uncertain who owns the rights to newly created content. Generative AI models are usually trained on large datasets drawn from websites, social media, Wikipedia, and large discussion hubs such as Reddit. Because this training data includes copyrighted material, the model may reproduce or summarize that material in responses to user prompts, which can give rise to copyright infringement claims from the owners of the original content.
There has been a great deal of ongoing debate on this issue. Copyrighted content (both text and images) appears in the training datasets that give generative AI models much of their language and image capability. The question is whether such copyrighted material should be allowed in training datasets for generative AI. The answer is still unsettled, but from the perspective of the owners of the original content, this use can be viewed as a violation of copyright law.[10]
Job Loss / Creation
As generative AI progresses rapidly, there is a real possibility that it will contribute to unemployment in certain areas. This can happen when AI automates tasks or activities, displacing human workers. For example, in a B2B organization, the marketing team is responsible for creating email campaigns to send to clients. That work includes building email templates, drafting the campaign copy, and reporting on the open and click rates of those campaigns. If an AI can create the content of these campaigns, the people currently doing that work may be displaced.
Another example: if AI automates customer service tasks, workers at call centers who currently provide live customer support could be replaced. Some AI models can also generate code, which puts developer roles at risk as well, since AI can often produce working code quickly and with relatively few errors.[11]
Truthfulness / Accuracy
Generative AI often gives hedged or uncertain responses to certain questions ("maybe", "I'm not sure", and so on). Because generative AI relies on machine learning models, it does not guarantee 100% accuracy. There have been instances where these systems have given highly incorrect answers, and they have failed at basic mathematical and logical reasoning. Even in games such as chess, where specialized computer programs routinely outplay humans, general-purpose chatbots have been observed making moves that make no sense. This increases the risk that such chatbots will spread false information.[12]
Social Engineering
Generative AI can be misused for social engineering, for example by creating synthetic voices that convincingly mimic real people. Attackers can use these capabilities in phishing emails or scam phone calls. For example, an attacker might use such a tool to claim to be calling from the organization's IT department and convince a user to share their login ID and password, thereby gaining access to their system. In this way, generative AI can enable social engineering attacks.[13]
Fields in which Generative AI has been used
Generative AI has had a noticeable impact across many industries, leading to changes in how work is done and how information is created and shared. In education, it is being used to design personalized worksheets, flashcards, diagrams, and even games or simulations. These tools help teachers create more engaging lessons and allow students to learn in a way that suits their needs. Game-based learning, for example, uses both digital and physical games as a way to improve classroom participation and make learning more interactive and immersive.[14]
In healthcare, generative AI plays a role in improving both patient care and medical training. It is used to generate medical diagrams that compare a patient’s current condition to expected outcomes, helping patients better understand their treatment. AI-generated procedure videos and AR/VR simulations are also used to train medical professionals. One advanced technique called diffusion modeling uses AI to denoise and enhance medical images. Researchers tested this method on datasets like chest X-rays, MRIs, and CT scans, and found that it outperformed previous methods in terms of clarity and accuracy.[15]
The automotive industry uses generative AI to create technical diagrams, generate concept car designs, and explore new approaches to engineering problems. These systems can suggest innovative design options by working within specific constraints, offering ideas that may not have been considered before.[16] This concept of AI-generated design has also appeared in the tech world, where people use AI to imagine future versions of iPhones, gaming consoles, and other upcoming technologies.
In construction, generative AI is used to produce digital blueprints and explore structural concepts. It can also generate safety materials, such as training videos and posters, to improve communication on job sites and help prevent accidents.
Social media is one of the most noticeably affected areas. Many content creators now use AI-generated images, voiceovers, and music in their posts, especially on platforms like TikTok and Instagram. While this has opened the door for new kinds of creativity, it has also raised concerns. The rise of deepfakes—videos that appear real but are artificially created—has led to misinformation and ethical debates. These manipulated videos can spread false information quickly and convincingly, making generative AI a growing concern in digital spaces.[17]
How accessible is Generative AI?
Generative AI is widely accessible online. Many platforms offer free tools, while others charge for advanced features. You can run many of these tools on basic devices, including phones. Some popular examples include:
- ChatGPT – for general use
- DALL·E 3 – for AI image generation
- Grammarly – for writing
- ElevenLabs – for audio work
- Framer – for website building
- Wondershare Filmora – for video work
Links are in External Links below.
Real-World Implications and Case Studies
Generative AI (GenAI) is increasingly shaping modern industries and education systems by introducing tools that accelerate innovation, automate complex tasks, and improve accessibility. In fields like pharmaceuticals and manufacturing, GenAI is used to predict drug behaviors and optimize product designs, significantly reducing development time and costs. It also supports productivity by automating content generation, data analysis, and customer service through applications such as chatbots and document summarization. Companies are leveraging these tools not just for efficiency but to maintain a competitive edge in rapidly evolving markets.[18]
In education, the integration of AI has become more prevalent, especially after the shift to online learning during the COVID-19 pandemic. Applications such as ChatGPT have been widely adopted by students for tasks ranging from information retrieval to content creation. A study involving 586 student users revealed that while AI tools are largely helpful, their usage also correlates with several challenges, including decreased academic integrity and over-reliance on AI-generated content. Factors such as service quality, perceived stress, and educational risk were found to significantly influence students' perceptions and use of AI in learning.[19]
Despite these challenges, AI offers promising enhancements to education through personalization and intellectual collaboration. Generative models enable tailored learning experiences by adjusting content to meet individual student needs and comprehension levels. This supports more effective learning outcomes and broadens access to educational resources. However, concerns remain over AI's inconsistencies, privacy risks, and potential to encourage passive learning habits among students.[19]
In creative and cultural sectors, GenAI has inspired new modes of expression. From generating artwork and music to personalized marketing campaigns, it enhances creativity and broadens the spectrum of content creation. Notably, its use in fashion, advertising, and entertainment is reshaping the way professionals design, produce, and distribute content. As artists and brands adopt these tools, they unlock new ways to engage audiences and craft immersive, interactive experiences.[20]
Ultimately, while GenAI presents vast opportunities for growth and efficiency, it also necessitates a careful approach to ethics, accuracy, and educational integrity. Future development must strike a balance between leveraging AI's capabilities and preserving human judgment and creativity to ensure responsible and sustainable use across all sectors.[20]
Generative AI is being adopted across diverse industries to enhance user experience, streamline operations, and deliver personalized services. For instance, Wayfair introduced Decorify, an AI-powered tool allowing customers to upload photos of their living spaces and receive photorealistic interior design suggestions based on their style preferences. The AI then recommends products from Wayfair’s catalog, simplifying the redecorating process for users. Similarly, Mass General Brigham (MGB), a major healthcare provider in Massachusetts, piloted the use of Large Language Models (LLMs) to support physicians in responding to patient queries. Initial testing revealed that 82% of AI-generated responses were safe to send without misinformation, and over half required no further editing, demonstrating promising results for medical communication.
In the enterprise software space, Salesforce launched Einstein GPT in early 2023. This multi-use generative AI tool is customized for different business departments—marketing, sales, and customer service—providing capabilities like content creation, email generation, and knowledge article summarization. It is built in collaboration with OpenAI and integrated directly into Salesforce’s CRM ecosystem, making it accessible and efficient across business verticals.
Creative and brand marketing have also seen innovative applications. Coca-Cola launched its limited-edition AI-inspired drink Y3000, crafted by blending customer feedback with AI to design the flavor and visual concept of a beverage representing the year 3000. The campaign included an interactive “AI Cam” experience accessible through QR codes on the product packaging. Meanwhile, Adidas adopted AI for internal efficiency by implementing a conversational knowledge management system that allows engineers to query the company’s vast information base. This tool has helped offload administrative work and accelerated innovation in their large-scale AI projects.[21]
Ethical Frameworks & Recommendations
As generative AI becomes more integrated into daily life, it brings with it a wide range of ethical challenges. These include issues like misinformation, bias, job displacement, and the misuse of personal data. To use this technology responsibly, developers and organizations need to follow structured ethical frameworks that guide both how generative AI is built and how it’s used. Principles like transparency, accountability, public involvement, and continuous oversight are essential to making sure these systems are developed in a way that benefits society.[22]
One of the most important starting points is defining the purpose and intent behind an AI system. Developers should have a clear understanding of what the system is meant to do and how it fits within shared social values. Ethical design starts long before the AI is deployed; it begins with intentional planning.
Transparency
Transparency means being open about how an AI system works. Developers should clearly explain how the model was trained, what data was used, and how it makes decisions. This is especially important when the AI’s output can affect people in critical areas like healthcare or law. Tools should also be developed to help users and regulators understand how these systems reach their conclusions. [23]
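One common transparency practice is publishing a "model card" that documents training data, intended use, and known limitations. The sketch below shows a hypothetical, heavily simplified example; every field name and value is an illustrative assumption rather than a required standard.

```python
# A minimal, hypothetical sketch of the kind of "model card" documentation
# that supports transparency. Field names and values are illustrative only.
import json

model_card = {
    "model_name": "example-text-generator",      # hypothetical model
    "intended_use": "Drafting marketing copy for internal review",
    "out_of_scope_uses": ["medical advice", "legal advice"],
    "training_data": "Licensed web text and internal documents (summary only)",
    "known_limitations": ["may produce factual errors", "English-only"],
    "evaluation": {"toxicity_rate": "reported per release", "bias_audit": "quarterly"},
    "contact": "ai-governance@example.org",       # placeholder address
}

print(json.dumps(model_card, indent=2))
```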
Accountability
There must be clear lines of responsibility in place when AI systems produce harmful or misleading results. Whether the issue lies with the developer, the company, the user, or the system itself, there should be agreed-upon standards for who is accountable. Accountability isn't just about legal consequences; it also includes moral responsibility, like ensuring the system doesn’t spread bias or harmful behavior. [22]
Data Privacy and Consent
Data privacy is a major concern when it comes to generative AI. Any personal or sensitive information used to train AI models must be collected with clear, informed consent and protected through methods like anonymization. Developers must ensure that training data is handled responsibly and does not violate anyone’s rights. In addition, systems that generate content—especially in education, journalism, and public communication—should include authenticity tools like watermarks or labels to indicate that the content was AI-generated. This helps prevent confusion and the spread of misinformation.[22] [23]
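As a toy illustration of labeling AI-generated content, the sketch below attaches a visible disclosure and a content hash to a piece of generated text. It is not a real watermarking or provenance standard (production schemes are far more robust); the function name and fields are hypothetical.

```python
# A toy sketch of labeling AI-generated text with a visible disclosure and a
# verifiable hash. This is NOT a real watermarking or provenance standard;
# it only illustrates the idea of attaching "this was AI-generated" metadata.
import hashlib
import json
from datetime import datetime, timezone

def label_generated_text(text: str, model_name: str) -> dict:
    return {
        "content": text,
        "disclosure": f"AI-generated by {model_name}",
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

record = label_generated_text("Draft press release ...", "example-model")
print(json.dumps(record, indent=2))
```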
Fairness and Bias
AI models often learn from real-world data, and that data can contain bias. If not addressed, AI can end up reinforcing harmful patterns that already exist in society. To prevent this, developers should regularly run fairness audits and collect feedback to identify and reduce biased outputs. Independent reviews and outside evaluations are also important to make sure systems stay fair and don’t contribute to inequality. [23]
Future Impacts
As generative AI continues to improve, several concerns arise about the societal impacts it may have in the future.
Software Engineering
As AI technologies continue to improve and evolve, a whole new job market is opening up for software engineers. Environments known as "robo-software engineering environments" aim to maximize efficiency by combining AI technology with human creativity and effort. As these environments become more common, teams of software engineers will be needed whose main role is to develop and maintain this new AI software. Furthermore, once an AI system has been deployed for public use, it must still be overseen, maintained, and updated by human software engineers.
Regarding the design and development of AI and other machine learning systems,[24] there are many areas of research and work that software engineers can pursue. In this area of AI software design, engineers should monitor for performance degradation in the deployed AI software, investigate new and more efficient architectural styles and patterns, and look for better ways to analyze and handle extremely large datasets. A software engineering team will also need to become deeply familiar with how its chosen AI works: what algorithms it uses, what NLP tools it provides, and how it can help automate daily work. To combine human innovation with the efficiency and precision of AI software, engineers must develop new approaches for productive collaboration between humans and machines. Such methods require improvements in communication techniques to ensure seamless teamwork toward a shared goal.[25]
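As a small example of what monitoring for performance degradation might look like in practice, the sketch below tracks a model's rolling accuracy against a baseline and raises a flag when it drifts too far. The metric, window size, and tolerance are illustrative choices, not a prescribed method.

```python
# A minimal sketch of the kind of monitoring mentioned above: tracking a
# deployed model's rolling accuracy and alerting when it drifts below a
# baseline. The metric, window size, and tolerance are illustrative choices.
from collections import deque

class DegradationMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)   # 1 = correct prediction, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def degraded(self) -> bool:
        if not self.recent:
            return False
        rolling = sum(self.recent) / len(self.recent)
        return rolling < self.baseline - self.tolerance

monitor = DegradationMonitor(baseline_accuracy=0.90)
for correct in [True] * 80 + [False] * 20:   # simulated recent predictions
    monitor.record(correct)
print(monitor.degraded())   # True: rolling accuracy 0.80 is below 0.85
```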
Education
Teachers and students will increasingly implement generative AI, such as ChatGPT, in their educational environments. Already, there are examples of teachers using AI to generate lesson plans, exams, essay questions, and more. As the technology improves, teachers will be able to automate a growing share of these tasks. "ChatGPT successfully passed graduate-level business and law exams and even parts of medical licensing assessments (Hammer, 2023), which has led to suggestions for educators to remove these types of assessments from their curriculum in exchange for those that require more critical thinking."[26] On the other side, students have begun using generative AI to reduce the amount of busy work they need to do. Models such as ChatGPT have become key tools for generating outlines, papers, and code that students then modify and submit. It is crucial to remember that not all content generated by these models is accurate; students should therefore avoid over-relying on them and use generative AI as a learning tool rather than a cheat code. Technology has also emerged to detect AI-generated submissions; GPTZero, for example, was designed specifically to detect ChatGPT-generated content.
At present, many academic institutions have banned the use of generative AI such as ChatGPT; however, this is only a temporary solution. As the technology continues to improve and becomes more accessible, such bans will become nearly impossible to enforce. Academic institutions should instead focus on teaching the next generation how to use tools like generative AI in combination with their own knowledge and creativity. Much as the Internet transformed how students and teachers interact, generative AI and other AI technologies will have significant impacts in the future.
Pharmaceuticals
Generative AI systems can be trained on amino acid sequences or other molecular representations; a prominent related example is AlphaFold, which is used for protein structure prediction and drug discovery. These models have become high-potential tools for transforming the design, optimization, and synthesis of small molecules and macromolecules, and at scale they could substantially accelerate the drug development process.[27]
The stages of the process are as follows (a small illustrative code sketch follows the list):
- Stage 1: AI-assisted target selection and validation
- Stage 2: Molecular design and chemical synthesis
- Stage 3: Biological evaluation, clinical development, and post-marketing surveillance
- Stage 4: Several successful preclinical and clinical candidate molecules have already been identified by AI and deep generative models[28]
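As a simplified glimpse of the molecular design stage, the sketch below screens candidate molecules (written as SMILES strings) for basic chemical validity using the open-source RDKit library, which is assumed to be installed. The candidate strings are placeholders, not output from a real generative model, and real pipelines apply far more extensive filters.

```python
# A simplified sketch of one small piece of Stage 2 (molecular design): screening
# candidate molecules proposed by a generative model for basic validity. Assumes
# the open-source RDKit library is installed; the SMILES strings below are
# illustrative placeholders, not real model output.
from rdkit import Chem
from rdkit.Chem import Descriptors

candidate_smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",   # aspirin, a known-valid example
    "C1=CC=CN=C1",             # pyridine
    "not_a_molecule",          # the kind of invalid string a model might emit
]

for smiles in candidate_smiles:
    mol = Chem.MolFromSmiles(smiles)          # returns None if the SMILES is invalid
    if mol is None:
        print(f"rejected: {smiles}")
        continue
    print(f"accepted: {smiles} (MolWt={Descriptors.MolWt(mol):.1f})")
```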
The Legal Profession
ChatGPT, among other services, could significantly impact the legal profession as a whole, not just law students. ChatGPT itself has stated that, although difficult to predict, there are reasons to believe the transformation of legal services through generative AI systems will happen quickly, perhaps within 5 to 10 years.[29] As for students, ChatGPT suggested that they should be aware these systems may end up performing some of these jobs rather than humans, and that the focus should be on understanding how the systems work and how they can be used.[30]
ChatGPT is capable of many things in the legal profession. In drafting, for example, it has shown that it can produce advanced legal documents, including without-prejudice demand letters and pleadings. These drafts demonstrate ChatGPT's ability to elaborate on and enhance content based on simple facts provided as input. Among other things, ChatGPT can identify legal strategies, generate a skeleton argument to support a case, and anticipate potential defences.
Despite these capabilities, generative AI tools currently lack the ability to undertake legal research and analysis to the same extent as a competent lawyer. It is anticipated that legal databases such as Westlaw and Lexis will adopt generative AI.[31]
References
- ↑ A. S. Alalaq (2024). "The History of the Artificial Intelligence Revolution and the Nature of Generative AI Work". DS Journal of Artificial Intelligence and Robotics. 2 (4): 1–24. doi:10.59232/AIR-V2I3P101.
- ↑ Rajendra Singh; Ji Yeon Kim; Eric F. Glassy; Rajesh C. Dash; Victor Brodsky; Jansen Seheult; M. E. de Baca; Qiangqiang Gu; Shannon Hoekstra; Bobbi S. Pritt (2025). "Introduction to Generative Artificial Intelligence: Contextualizing the Future". Archives of Pathology & Laboratory Medicine. 149 (2): 112–122. doi:10.5858/arpa.2024-0221-RA.
- ↑ A. S. Alalaq (2024). "The History of the Artificial Intelligence Revolution and the Nature of Generative AI Work". DS Journal of Artificial Intelligence and Robotics. 2 (4): 1–24. doi:10.59232/AIR-V2I3P101.
- ↑ Rajendra Singh; Ji Yeon Kim; Eric F. Glassy; Rajesh C. Dash; Victor Brodsky; Jansen Seheult; M. E. de Baca; Qiangqiang Gu; Shannon Hoekstra; Bobbi S. Pritt (2025). "Introduction to Generative Artificial Intelligence: Contextualizing the Future". Archives of Pathology & Laboratory Medicine. 149 (2): 112–122. doi:10.5858/arpa.2024-0221-RA.
- ↑ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Łukasz; Polosukhin, Illia (2017-12-04). "Attention is all you need". Proceedings of the 31st International Conference on Neural Information Processing Systems. NIPS'17. Red Hook, NY, USA: Curran Associates Inc.: 6000–6010. ISBN 978-1-5108-6096-4.
- ↑ Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2020-10-22). "Generative adversarial networks". Commun. ACM. 63 (11): 139–144. doi:10.1145/3422622. ISSN 0001-0782.
- ↑ Gervais, Daniel J. (March 25, 2019). "The Machine As Author". Iowa Law Review. 105, 2019, 2053–2106 (Vanderbilt Law Research Paper No. 19-35).
- ↑ Brynjolfsson, Erik; McAfee, Andrew (2016). The second machine age: work, progress, and prosperity in a time of brilliant technologies (First published as a Norton paperback ed.). New York London: W. W. Norton & Company. ISBN 978-0-393-35064-7.
- ↑ Schwartz, Reva; Vassilev, Apostol; Greene, Kristen; Perine, Lori; Burt, Andrew; Hall, Patrick (2022-03-15). "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence" (PDF). Gaithersburg, MD. doi:10.6028/nist.sp.1270.
- ↑ "The Legal and Ethical Ramifications of Generative AI". CMSWire.com. Retrieved 2023-04-25.
- ↑ "Generative AI Ethics: Top 6 Concerns". research.aimultiple.com. Retrieved 2023-04-25.
- ↑ Kruger, Michelle Lee, Lukas. "Risks and ethical considerations of generative AI". Passle. Retrieved 2023-04-25.
- ↑ "Generative AI Ethics: Top 6 Concerns". research.aimultiple.com. Retrieved 2023-04-25.
- ↑ Su, J., & Yang, W. (2023). Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education. ECNU Review of Education, 6(3), 355–366.
- ↑ Shokrollahi, Y., Yarmohammadtoosky, S., Nikahd, M. M., Dong, P., Li, X., & Gu, L. (2023). A comprehensive review of generative AI in healthcare. arXiv preprint arXiv:2310.00795.
- ↑ Madhavaram, C. R., Sunkara, J. R., Kuraku, C., Galla, E. P., & Gollangi, H. K. (2024). The Future of Automotive Manufacturing: Integrating AI, ML, and Generative AI for Next-Gen Automatic Cars.
- ↑ Tan, Y. H., Chua, H. N., Low, Y.-C., & Jasser, M. B. (2024). Current Landscape of Generative AI: Models, Applications, Regulations and Challenges. 2024 IEEE 14th International Conference on Control System, Computing and Engineering (ICCSCE), 168–173. https://doi.org/10.1109/ICCSCE61582.2024.10696569
- ↑ a b Surjandy, A., Nathanael, D., Wijaya, & Orlando, L. (2024). The Implications of Utilizing AI for Learning on Generation Z. 2024 International Seminar on Intelligent Technology and Its Applications (ISITIA), 442–447. https://doi.org/10.1109/ISITIA63062.2024.10668233
- ↑ a b Data Axle. (2024). Generative AI: 5 Innovative Case Studies. https://www.data-axle.com/resources/blog/generative-ai-5-innovative-case-studies
- ↑ Data Axle. (2024). Generative AI: 5 Innovative Case Studies. https://www.data-axle.com/resources/blog/generative-ai-5-innovative-case-studies
- ↑ a b c Zlateva, Plamena; Steshina, Liudmila; Petukhov, Igor; Velev, Dimiter (2024-01-12), Tallón-Ballesteros, Antonio J.; Cortés-Ancos, Estefanía; López-García, Diego A. (eds.), "A Conceptual Framework for Solving Ethical Issues in Generative Artificial Intelligence", Frontiers in Artificial Intelligence and Applications, IOS Press, doi:10.3233/faia231182, ISBN 978-1-64368-480-2, retrieved 2025-04-26
- ↑ a b c Al-kfairy, Mousa; Mustafa, Dheya; Kshetri, Nir; Insiew, Mazen; Alfandi, Omar (2024-08-09). "Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective". Informatics. 11 (3): 58. doi:10.3390/informatics11030058. ISSN 2227-9709.
- ↑ Giray, Görkem (October 2021). "A software engineering perspective on engineering machine learning systems: State of the art and challenges". Journal of Systems and Software. 180: 111031. doi:10.1016/j.jss.2021.111031.
- ↑ Heil, Joe W. (December 2010). "Addressing the Challenges of Software Growth and Rapidly Evolving Software Technologies". Naval Engineers Journal. 122 (4): 45–58. doi:10.1111/j.1559-3584.2010.00279.x. ISSN 0028-1425.
- ↑ Lim, Weng Marc; Gunasekara, Asanka; Pallant, Jessica Leigh; Pallant, Jason Ian; Pechenkina, Ekaterina (2023-07-01). "Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators". The International Journal of Management Education. 21 (2): 100790. doi:10.1016/j.ijme.2023.100790. ISSN 1472-8117.
- ↑ "Drug discovery", Wikipedia, 2023-04-17, retrieved 2023-04-25
- ↑ Zeng, Xiangxiang; Wang, Fei; Luo, Yuan; Kang, Seung-gu; Tang, Jian; Lightstone, Felice C.; Fang, Evandro F.; Cornell, Wendy; Nussinov, Ruth; Cheng, Feixiong (December 2022). "Deep generative molecular design reshapes drug discovery". Cell Reports Medicine. 3 (12): 100794. doi:10.1016/j.xcrm.2022.100794. PMC 9797947. PMID 36306797.
- ↑ Macey-Dare, Rupert (2023). "How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession". SSRN Electronic Journal. doi:10.2139/ssrn.4366749. ISSN 1556-5068.
- ↑ "ChatGPT", Wikipedia, 2023-04-24, retrieved 2023-04-25
- ↑ Iu, Kwan Yuen; Wong, Vanessa Man-Yi (2023). "ChatGPT by OpenAI: The End of Litigation Lawyers?". SSRN Electronic Journal. doi:10.2139/ssrn.4339839. ISSN 1556-5068.
External links
- ChatGPT – for general use
- DALL·E 3 – for AI image generation
- Grammarly – for writing
- ElevenLabs – for audio work
- Framer – for website building
- Wondershare Filmora – for video work