
Artificial Intelligence (AI) is transforming the way businesses and legal professionals operate. From contract review to legal compliance monitoring, AI offers speed, efficiency, and insight. However, with these benefits comes a hidden risk: AI bias. If you use AI for your business or legal operations, you should understand what AI bias is, why it is an issue, where it comes from, and how to mitigate it for effective legal risk management.
What is AI Bias?
AI bias refers to bias embedded in, or introduced into, an AI model that causes it to produce skewed outputs. These outputs can perpetuate and even amplify existing human and societal biases, prejudices, and discrimination.
Why is AI Bias an Issue?
AI bias can cause skewed, inaccurate, or incorrect outputs. Relying on these outputs for business and legal operations can lead to faulty decision making. Further, if relied on for material decision making in regulated sectors, these outputs can increase legal risk and create compliance problems.
For example, biased AI outputs in hiring algorithms and lending practices have led to legal scrutiny under employment and fair lending laws, resulting in lawsuits[1] and settlements[2]. Before understanding how to mitigate AI bias to address these legal risks, it is important to understand where AI bias comes from.
Where Does AI Bias Come From?
AI bias can arise at various stages of the AI lifecycle – from the training data, to how the model is designed and trained, to how the AI is prompted.
Bias From the Ground Up: The Data Problem
Most large language models are trained on immense collections of text scraped from books, websites, academic articles, and other sources. While this training data is vast, it also carries inherent biases. Some types of biases found in training data include:
- Historical Bias: Training data may contain outdated views or discriminatory language, leading AI outputs to reflect historical biases.
- Representation Bias: Training data may underrepresent (or wholly omit) certain viewpoints, languages, or communities, skewing AI outputs toward the majority perspective.
- Measurement Bias: Training data and its features may be measured improperly, leading to over- or underrepresentation of variables in the output.[3]
Bias Built into the Design: The Training Problem
Developers train AI models on the training data prior to deployment. This step has the potential to introduce bias as developers make decisions regarding the scope of training data to use, how to label data, how to weight classes of data, what guardrails to include, and how to define success. Some types of biases introduced while training AI models include:
- Stereotyping Bias: Training may reinforce existing racial, gender, and societal stereotypes, leading AI outputs to amplify harmful stereotypes.
- Confirmation Bias: Training may reinforce preexisting patterns, making it difficult for the AI model to recognize new patterns and overcome existing biases and stereotypes.
- Algorithm Bias: Training parameters and objectives may over- or underrepresent certain categories of data or features of data, leading to skewed results.[4]
Bias at the Point of Input: The Prompting Problem
Even with high-quality training data and a well-designed model, AI can still exhibit bias depending on how it is prompted. AI is designed to generate answers based on the cues in the prompt, meaning that the way a question is phrased can significantly influence how AI responds. Some types of bias introduced while prompting include:
- Leading Bias: Asking leading questions may nudge AI to respond with the expected output, rather than an unbiased or neutral output.
- Confirmation Bias: Asking AI to evaluate an input can lead it to accept the premise as true and reinforce any embedded assumptions or errors.
- Default Bias: Not specifying the audience, timeframe, or geography may result in AI defaulting to majority, modern, and Western viewpoints.
How to Mitigate the Risk of AI Bias
To manage the risk of AI bias in your outputs, you can adopt the following mitigation strategies.
Prompting Strategies
- Prompting Checklist: Follow this checklist to reduce AI bias in your prompts:
- Frame Prompts Neutrally: Avoid wording your prompts with leading or loaded questions. Draft your prompt to ask for multiple perspectives and explicitly ask for reasoning to support each side.
- Include All Details: Include all relevant details, even if they are unfavorable or contradictory to your position.
- Be Fact-Specific: When possible, use verifiable facts. If including assumptions or hypotheticals, make sure to distinguish what is a fact from an assumption or hypothetical.
- Define the Scope: Clearly define the appropriate scope (e.g., persona, audience, timeframe, geography) to avoid default or majority assumptions.
- Flag Uncertainties and Limitations: Ask the AI model to flag any uncertainties or limitations in the output or its reasoning.
Example:
- Leading prompt: “Based on A, why is Company X liable for breach of contract?”
- Neutral prompt: “You are an attorney evaluating a breach of contract case involving Company X, headquartered in Delaware, US. The relevant facts are A, B, and C. [We are assuming Z.] Please provide arguments both supporting and opposing liability for Company X, noting applicable legal standards in Delaware. For each argument, include potential counterarguments, and identify any areas where the available facts may be insufficient or open to interpretation.”
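For teams that build prompts programmatically, the checklist above can be sketched as a simple template. The following is a minimal illustration in Python; the `build_neutral_prompt` helper and its parameters are hypothetical, not part of any AI vendor's API:

```python
# Hypothetical helper that assembles a neutral, scoped prompt from the
# checklist items: persona and jurisdiction (scope), verified facts,
# labeled assumptions, and explicit requests for both sides of the
# question and for flagged uncertainties.
def build_neutral_prompt(persona, jurisdiction, facts, assumptions):
    lines = [
        f"You are {persona} evaluating this matter under {jurisdiction} law.",
        "Verified facts: " + "; ".join(facts) + ".",
    ]
    if assumptions:  # keep assumptions clearly separated from facts
        lines.append("Assumptions (not verified): " + "; ".join(assumptions) + ".")
    lines.append(
        "Provide arguments both supporting and opposing each position, "
        "with reasoning and potential counterarguments for each side."
    )
    lines.append("Flag any uncertainties or gaps in the available facts.")
    return "\n".join(lines)

# Mirrors the neutral prompt in the example above.
prompt = build_neutral_prompt(
    persona="an attorney",
    jurisdiction="Delaware",
    facts=["A", "B", "C"],
    assumptions=["Z"],
)
print(prompt)
```

The point is not the code itself but the discipline it enforces: every prompt carries a defined scope, facts are separated from assumptions, and both sides of the question are always requested.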
Governance Strategies
- Understand the Data and the AI: Ensure your AI vendors are upholding AI governance principles of transparency, fairness, and explainability. Ask AI vendors about (and ensure they can clearly explain) the model’s data sources, training, safeguards, and bias-testing protocols.
- Create an AI Output Benchmark: Regularly test outputs against a fixed set of control inputs to see whether your AI model has regressed. You can set up a simple benchmark rubric in Excel, documenting the control input, the expected answer, the actual output, the rubric, and the resulting score. Especially for frequently automated or repeated tasks, regular benchmark tests will help you catch material regressions quickly.
- Human Oversight: Perhaps most importantly, ensure human oversight, especially where AI is used in areas where bias is a significant concern (hiring decisions, lending decisions, law enforcement, etc.). Humans should be in the loop to meaningfully review AI outputs for bias, prejudice, or discrimination.
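The benchmark described above can live in a spreadsheet, but teams with technical support can also automate it. The following is a minimal sketch in Python; the scoring function is a hypothetical keyword-match rubric standing in for a real, human-designed rubric:

```python
# Minimal output benchmark: each test case pairs a control input with an
# expected answer. score_output is a stand-in rubric (a simple keyword
# check); real rubrics would score nuanced criteria and involve humans.
def score_output(expected, actual):
    # 1.0 if the expected answer appears in the output, else 0.0
    return 1.0 if expected.lower() in actual.lower() else 0.0

def run_benchmark(cases, model):
    # cases: list of (control_input, expected_answer) pairs
    # model: callable mapping a prompt string to an output string
    results = []
    for prompt, expected in cases:
        actual = model(prompt)
        results.append((prompt, expected, actual, score_output(expected, actual)))
    pass_rate = sum(r[3] for r in results) / len(results)
    return results, pass_rate

# Example with a stubbed "model". A material drop in pass_rate between
# runs signals a possible regression worth human review.
cases = [
    ("What law governs the contract?", "Delaware"),
    ("Is the clause enforceable?", "likely enforceable"),
]
stub_model = lambda prompt: "Under Delaware law, the clause is likely enforceable."
results, pass_rate = run_benchmark(cases, stub_model)
print(f"pass rate: {pass_rate:.0%}")
```

Each row of `results` maps directly onto the spreadsheet columns described above: control input, expected answer, actual output, and score.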
Takeaways
AI’s power lies in its ability to quickly synthesize vast amounts of data and generate actionable insights, but its outputs can reflect biases that arise across the AI lifecycle. By taking an informed approach to AI use and implementing mitigation strategies, business and legal professionals can harness the benefits of AI while avoiding the costly pitfalls of AI bias.
For any questions, please reach out to our Tech Transactions team!
---
[1] See Mobley v. Workday, Inc., 2025 WL 1424347 (N.D. Cal. May 16, 2025).
[2] See Assurance of Discontinuance Pursuant to G.L. c. 93A, §5, Commonwealth of Mass. v. Earnest Operations LLC, No. 2584-cv-01895 (Mass. Super. Ct. July 10, 2025) ($2.5 million settlement).
[3] Identifying Bias in AI, Kaggle, https://www.kaggle.com/code/alexisbcook/identifying-bias-in-ai (last visited Sept. 22, 2025).
[4] What is AI Bias?, IBM (Dec. 22, 2023), https://www.ibm.com/think/topics/ai-bias.
Valerie is a trusted legal advisor specializing in commercial and technology transactions, with a distinct focus on data privacy issues. Her practice is built on a deep passion for the intersection of privacy, law, and emerging ...