
Overview of AI Ethics


Artificial intelligence offers immense potential for innovation and progress, but it also raises important ethical questions. AI ethics is the study and application of principles that guide the responsible development and use of artificial intelligence. It considers the social, legal, and moral impacts of AI technologies to help prevent harm and promote fairness, accountability, and transparency.

AI ethics is a vital part of AI literacy. As AI systems become more integrated into everyday life and decision-making, understanding these ethical challenges is essential to ensuring their responsible and equitable use. Below are a few of the key ethical considerations in AI.

Ethical Issues

AI & Academic Integrity


As tools like ChatGPT, Grammarly, and AI research assistants become more common in education, it's essential to understand how to use them ethically and responsibly. This is where AI literacy and academic integrity intersect. Knowing if, when, and how to use these tools is a key part of being an informed and ethical learner.

➡️Students are responsible for following the AI policy set by each instructor for each course.

 

What Is Academic Integrity in the Age of AI?

Academic integrity means being honest, transparent, and responsible in your academic work. When using AI tools, this includes:

  • Being clear about when and how you used AI.

  • Avoiding misrepresentation: submitting AI-generated work as your own is a form of plagiarism.

  • Ensuring your own understanding of the content.

  • Citing AI contributions when appropriate.

 

The table below outlines common academic scenarios.
Examples of Ethical vs. Unethical Use of AI for College Students

Context | Ethical Use of AI | Unethical Use of AI
Writing Assistance | Using AI to brainstorm ideas. | Submitting AI-generated text as your own work without disclosure.
Studying & Research | Using AI to clarify concepts. | Using AI to fabricate sources or citations in academic work.
Coding Assignments | Using AI to debug code. | Copy-pasting AI-generated code for assignments without understanding or attribution.
Group Work | Using AI to facilitate equitable group collaboration. | Using AI to do all the work in a group project without consent from team members.
Privacy & Consent | Being mindful of sharing personal or peer information when using AI tools. | Inputting private student data into AI systems without consent.

 

Guidelines for Using AI Responsibly

  • Use AI as a support tool, not a replacement for your own thinking.

  • Always review, revise, and reflect on AI-generated content.

  • Cite AI tools appropriately.

➡️When in doubt, ask your instructor about what’s acceptable. Remember, students are responsible for following their instructor's AI policy in each course.

 

📚 Why It Matters

Maintaining academic integrity when using AI:

  • Builds trust between students and instructors,

  • Encourages real learning and skill development,

  • Prepares you to use AI tools ethically in professional settings,

  • Helps prevent academic dishonesty and plagiarism.

➡️All VVC students must follow the Student Code of Conduct outlined in VVC’s Student Handbook.


Attribution:

  • Bowen, J. A., Association of American Colleges and Universities, & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press. https://caccl-victorvalley.primo.exlibrisgroup.com/permalink/01CACCL_VICTORVALLEY/1vu1fcj/alma991000671140205300
  • OpenAI. (2025, June 5). ChatGPT (Version 4.0) [Large language model]. https://chat.openai.com/chat
  • Portions of the code and content in this guide were created with the assistance of ChatGPT (OpenAI, 2025). All AI-generated material was reviewed and edited for accuracy, accessibility, and relevance.

Algorithmic Bias

A significant limitation of artificial intelligence, particularly in generative AI, is the potential for embedded bias in the content it produces. Large Language Models (LLMs) like ChatGPT are trained on vast amounts of publicly available internet data, and are designed to predict the most likely sequence of words in response to a prompt. As a result, these models inevitably reflect and reinforce the biases present in their training data, including social, cultural, political, and linguistic prejudices.

Another layer of potential bias stems from the use of Reinforcement Learning with Human Feedback (RLHF) in model training. While RLHF is intended to improve the quality and safety of AI responses, the human annotators providing feedback are themselves influenced by personal and cultural perspectives, which may be inconsistent, non-neutral, or unrepresentative. Consequently, generative AI tools have been documented to produce content that exhibits socio-political bias and, in some cases, output that is sexist, racist, or otherwise offensive (González Barman, Lohse, & de Regt, 2025).

Recommendations for Responsible Use of Generative AI

  • Fact-check all AI-generated information, especially when academic accuracy is critical. Always verify the authenticity and source of any citation.

  • Critically assess content for potential bias—AI outputs may unintentionally reflect harmful stereotypes or misleading perspectives.

  • Avoid asking AI for bibliographies or reference lists, as it may generate fabricated or inaccurate citations.

  • Consult the AI tool’s documentation or update notes to understand its knowledge cutoff, data sources, and intended use cases.

  • Remember that generative AI is not a search engine. It does not retrieve factual information from a database but constructs responses based on patterns in its training data.
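That last point can be illustrated with a toy example. The corpus and the counting scheme below are invented for illustration only; real LLMs use neural networks trained on billions of documents, but the core idea is the same: the model predicts likely continuations from patterns in its training text rather than retrieving facts.

```python
from collections import defaultdict, Counter

# A tiny invented "training corpus". Real models train on billions of
# documents, but the principle is the same: the model learns which words
# tend to follow which, and any skew in the data skews the output.
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was busy ."
).split()

# Count trigrams: given the two previous words, how often does each
# word appear next?
continuations = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    continuations[(w1, w2)][w3] += 1

def most_likely_next(w1, w2):
    """Return the continuation seen most often in the training data."""
    return continuations[(w1, w2)].most_common(1)[0][0]

# The model retrieves nothing and "knows" nothing about nurses or
# engineers; it simply reproduces statistical associations in its text.
print(most_likely_next("nurse", "said"))     # "she" in this skewed corpus
print(most_likely_next("engineer", "said"))  # "he" in this skewed corpus
```

This is also why fabricated citations occur: the model produces a plausible-looking sequence of words, not a record retrieved from a database.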

The Social Dilemma – Bonus Clip: The Discrimination Dilemma

How I'm fighting bias in algorithms | Joy Buolamwini



Environmental Impact

"Training a single AI model can emit as much carbon as five cars in their lifetimes." – Karen Hao

Generative AI tools require a significant amount of computational processing power to function, which is provided by high-performance servers housed in physical data centers located across the country. These centers require massive amounts of electricity to keep tools operational, as well as water to keep the servers cool. Many AI companies have not revealed just how much electricity and water are used by their tools, or how much will be needed in the future. As such, there are significant unanswered questions about the environmental costs of keeping generative AI tools functional. 

Select Resources on AI and the Environment

Image sourced on Canva.com

How AI and data centers impact climate change

CBS Mornings



Privacy in the Age of AI

There are ongoing privacy concerns and uncertainties about how AI systems harvest personal data from users. Users may not realize that the system is also harvesting information like the user’s IP address and their activity while using the service. This is an important consideration when using AI in an educational context, as some students may not feel comfortable having their personal information tracked and saved.

Additionally, OpenAI may share aggregated personal information with third parties in order to analyze usage of ChatGPT. While this information is only shared in aggregate after being de-identified (i.e., stripped of data that could identify users), users should be aware that they no longer control their personal information after providing it to a system like ChatGPT.
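To make "de-identified and aggregated" concrete, here is a minimal sketch of the idea. The field names and records are hypothetical, and real providers' pipelines are far more sophisticated; this only illustrates stripping identifying fields before sharing summary counts.

```python
from collections import Counter

# Hypothetical usage logs; the field names are made up for illustration.
raw_logs = [
    {"user_id": "u123", "ip": "203.0.113.7",  "topic": "essay help"},
    {"user_id": "u456", "ip": "203.0.113.9",  "topic": "essay help"},
    {"user_id": "u789", "ip": "198.51.100.2", "topic": "debugging"},
]

# De-identify: drop fields that could identify an individual user.
deidentified = [{"topic": rec["topic"]} for rec in raw_logs]

# Aggregate: only summary counts, not individual records, are shared.
usage_summary = Counter(rec["topic"] for rec in deidentified)
print(usage_summary)  # Counter({'essay help': 2, 'debugging': 1})
```

Even so, aggregation is not a guarantee of anonymity, which is why users should assume they lose control of data once it is submitted.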

Select Resources on AI and Privacy



Is Content Created by a Generative AI Tool Copyrightable?

Currently, copyright protection is not granted to works created solely by artificial intelligence. The U.S. Copyright Office has issued guidance that explains the requirement of human authorship for copyright protection and provides information to creators working in tandem with AI tools on how to effectively and correctly register their works.

US Copyright Office and Artificial Intelligence – "The Copyright Office has launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training."

Copyright Registration Guidance – Guidance for registering Works Containing Material Generated by Artificial Intelligence by the U.S. Copyright Office.

Generative AI Copyright Lawsuits 

Copyright Issues

The Input to Generative AI

  • Should it be considered fair use? This is widely debated.

Argument A: No, it's copyright violation

  • Copyright law is AI's 2024 battlefield – "Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside," James Grimmelmann, professor of digital and information law at Cornell, tells Axios. "Every time a new technology comes out that makes copying or creation easier, there's a struggle over how to apply copyright law to it."
This will affect not only OpenAI but also Google, Microsoft, and Meta, since they all use similar methods to train their models.
 
Argument B: Yes, it's fair use

Several corporations, including Adobe, Google, Microsoft, and Anthropic (for Claude), have offered to pay any legal bills from lawsuits against users of their tools.

The Output of Generative AI

Can you copyright something you made with AI?
OpenAI says:
"... you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether output was generated through a free or paid plan."

The U.S. Copyright Office says:
The term “author” ... excludes non-humans.

But if you select or arrange AI-generated material in a sufficiently creative way, the resulting work may qualify for limited protection. In these cases, copyright will only protect the human-authored aspects of the work. For an example, see this story of a comic book: the U.S. Copyright Office determined that the selection and arrangement of the images IS copyrightable, but not the images themselves (made with generative AI).

In other countries, different rulings may apply; see:
Chinese Court’s Landmark Ruling: AI Images Can be Copyrighted



ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?

Bloomberg Law

Transparency

The increasingly common presence of AI in day-to-day life has heightened the need for transparency in its use: people should be aware of when they are interacting with artificial intelligence, who created the AI they're using, and for what purpose.

Advances in generative AI have made transparency a particular concern. Recent versions of software like ChatGPT can create text in response to a prompt that is indistinguishable from human-produced writing. In academia, this creates concerns over academic integrity in assignments, and is leading to a reevaluation of the types of writing assigned to students. In journalism, some online outlets have already begun publishing articles generated by AI. Given the issues with accuracy in generative AI, a lack of transparency in its use in journalism leads to lower confidence that what we're reading is correct.

Resources



Labor

"As we move into a detailed analysis of AI’s role in modern society, the focus shifts to how this technology, while heralded as a tool of efficiency and progress, actually reproduces and exacerbates inequalities. This is evident in the labor practices within the tech industry, where AI development often relies on underpaid and undervalued workers from marginalized communities, perpetuating a cycle of exploitation and exclusion."

Nelson Colón Vargas 

AI At What Cost?

AI still needs human intervention to function properly, but this necessary labor is often hidden. For example, ChatGPT uses prompts entered by users to train its models. Since these prompts are also used to train its subscription model, many consider this unpaid labor.

Taylor & Francis recently signed a $10 million deal to provide Microsoft with access to data from approximately 3,000 scholarly journals. Authors in those journals were not consulted or compensated for the use of their articles. Some argue that using scholarly research to train generative AI will result in better AI tools, but authors have expressed concern about how their information will be used, including whether use by AI tools will negatively impact their citation numbers.

In a more extreme case, investigative journalists discovered that OpenAI paid workers in Kenya, Uganda and India only $1-$2 per hour to review data for disturbing, graphic and violent images. In improving their product, the company exposed their underpaid workers to psychologically scarring content. One worker referred to the work as “torture”.


Attribution:

  • University of Texas Libraries; Image sourced on Canva.com
  • Vargas, N. C. (2024). Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups. https://doi.org/10.48550/arxiv.2403.06332

Deepfakes

Deepfakes are videos, images, or audio that appear very realistic but are fake. Using AI tools, people can create deepfakes that make it seem like someone has done or said something they have not. This guide from MIT goes in-depth about deepfakes and how to spot them.

Spotting Deepfakes

Identifying deepfakes can be challenging without the assistance of emerging technologies. When watching a video that seems suspicious, follow this advice from MIT Media Lab:

  1. Pay attention to the face. High-end deepfake manipulations are almost always facial transformations.
  2. Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes? Deepfakes are often incongruent on some dimensions.
  3. Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? Deepfakes often fail to fully represent the natural physics of a scene.
  4. Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, deepfakes often fail to fully represent the natural physics of lighting.
  5. Pay attention to the facial hair or lack thereof. Does this facial hair look real? Deepfakes might add or remove a mustache, sideburns, or beard, but they often fail to make facial hair transformations fully natural.
  6. Pay attention to facial moles. Does the mole look real?
  7. Pay attention to blinking. Does the person blink enough or too much?
  8. Pay attention to the size and color of the lips. Do the size and color match the rest of the person's face?
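For viewers who want to apply the checklist systematically, the eight cues above can be encoded as a simple tally. This is a hypothetical sketch, not an automated detector: a human answers each question while watching, and the script only counts how many cues look suspicious.

```python
# The eight MIT Media Lab cues, paraphrased as yes/no questions.
CHECKLIST = [
    "Face: does the facial region look manipulated?",
    "Cheeks/forehead: is skin smoothness inconsistent with hair and eyes?",
    "Eyes/eyebrows: do shadows fall in unexpected places?",
    "Glasses: does glare behave unnaturally when the person moves?",
    "Facial hair: do additions or removals look unnatural?",
    "Moles: do facial moles look fake?",
    "Blinking: does the person blink too much or too little?",
    "Lips: do lip size and color mismatch the rest of the face?",
]

def suspicion_score(answers):
    """Given one True/False answer per cue, return (count, fraction flagged)."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist item required")
    flagged = sum(answers)
    return flagged, flagged / len(CHECKLIST)

# Example: a viewer flags unnatural glare and odd blinking.
answers = [False, False, False, True, False, False, True, False]
print(suspicion_score(answers))  # (2, 0.25)
```

A higher score suggests closer scrutiny is warranted, but no score proves a video is real or fake, which is why verification methods like SIFT still matter.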

Try it out! Can you Detect Fakes?

Remember that even if you can't identify whether the video is a deepfake just by watching it, you can use the SIFT method (Stop; Investigate the source; Find better coverage; Trace claims to the original context) to find out more information and determine if the video is trustworthy.



Harm Considerations