Washington's AI Task Force Interim Report Explained: What Public Agencies, AI Vendors, and AI Deployers Need to Know

The Washington State Artificial Intelligence Task Force just released its Interim Report, a comprehensive set of findings and recommendations that will shape the state’s approach to AI governance, transparency, and innovation in 2026 and beyond.

The Interim Report identifies the issues the Washington legislature is expected to prioritize and signals how the state may govern the development and use of AI in the years ahead. As public agencies, private enterprises, and technology vendors accelerate their development and adoption of AI systems, the report outlines principles and practices that are already influencing procurement decisions, internal governance expectations, and lifecycle oversight for AI systems. Several recommendations are positioned for legislative consideration, and the themes of trustworthiness, transparency, and risk-based management are emerging as the baseline for governance of AI use in Washington.

This article provides a clear, practical explanation of Washington’s AI Task Force Interim Report, including its eight key recommendations, the state’s direction on AI regulation, and what organizations should do to prepare for upcoming AI governance requirements.

Washington's Emerging AI Regulatory Framework: Trustworthy, Transparent, and Risk-Based

Across its eight recommendations, the Task Force advances three central policy themes:

1. Trustworthy AI Grounded in Established Standards

The Task Force recommends that Washington formally adopt the NIST AI Risk Management Framework (AI RMF) as the state’s guiding principles for the development, deployment, and oversight of AI systems. The AI RMF principles – including validity and reliability, safety, security and resilience, transparency and accountability, explainability and interpretability, privacy enhancement, and management of harmful bias – would align Washington with widely accepted national and international norms. Washington Technology Solutions (WaTech) had previously signaled the state government’s intention to follow the AI RMF principles as part of the WaTech Interim Guidelines.

2. Meaningful Transparency Across the AI Lifecycle

The report emphasizes transparency in multiple contexts, including AI development practices, public-sector use of AI, and AI-influenced decisions affecting individuals. Transparency is positioned as the foundation of public trust, ethical use, and regulatory oversight.

3. A Risk-Based Regulatory Approach and Framework

The recommendations distinguish high-risk AI systems from lower-risk systems where lighter-touch governance is more appropriate. High-risk AI refers to systems and uses with the potential to significantly affect a person’s life, health, safety, or fundamental rights. This risk-based approach calls for policies proportionate to the potential for harm, rather than a one-size-fits-all framework. It would prioritize oversight of high-risk applications (such as healthcare and finance) while avoiding burdensome regulation of the many low-risk AI uses.

Washington AI Task Force Interim Report: Summary of the Eight Recommendations

Below is a brief summary of the recommendations for specific actions the legislature should take regarding the development, deployment, and use of AI technology in Washington state. Page numbers refer to the report’s sections for those who want to dig deeper.

1. Adopt Ethical AI Principles (pp. 18-21)

Because the rapid development and deployment of AI technologies raise significant ethical, social, and safety risks, the Task Force recommended adopting the NIST AI RMF principles for ethical and trustworthy AI as the guiding public policy framework for managing those risks. Public agencies would use these principles in procurement and oversight – and business customers might follow suit – so AI developers and deployers would increasingly need to demonstrate alignment.

2. Improve Transparency in AI Development (pp. 21-24)

Developers of AI systems should be required to disclose essential details about:

  • The provenance, quality, quantity, and diversity of AI model training datasets
  • How training data is processed to mitigate errors and biases during AI model development, particularly unintended biases or discriminatory outcomes

The goal would be to promote accountability and facilitate external scrutiny of AI developers, but the Task Force recommended that the disclosure requirements not extend to trade secrets or other proprietary information protected by law.

3. Promote Responsible AI Governance (pp. 24-27)

Organizations developing or deploying high-risk AI (with potential to significantly impact people’s lives, health, safety, or fundamental rights) would be required to adopt recognized governance frameworks such as the NIST AI RMF, ISO/IEC 42001 (the AI management systems standard), and the EU General-Purpose AI Code of Practice. Those organizations would also be required to publicly disclose their risk-management strategies and practices. These requirements are designed to ensure that AI systems are developed and deployed with appropriate risk-mitigation strategies at every stage of their lifecycle.

The Task Force also recommends legislative evaluation of the risks and benefits of high-risk uses to determine whether such uses are appropriate and consideration of whether additional safeguards, restrictions, or even prohibitions are necessary to protect affected people. High-risk classifications are context-specific, but the report notes that sectors such as healthcare, finance, employment, housing, and public safety often present elevated risks because AI systems in these domains can meaningfully affect individuals’ rights, health, and economic well-being.

4. Invest in AI Literacy and Infrastructure in Education (pp. 27-30)

The Task Force recommends significant investment in statewide AI literacy and technical capacity, including expanded K–12 STEM and computing education, enhanced teacher training, improved broadband access, and the development of privacy-protective AI systems for classroom use. These proposed investments reflect a broader effort to ensure Washington’s workforce and public institutions can safely and effectively operate in an AI-driven environment.

5. Increase Accountability in Healthcare Prior Authorizations (pp. 30-32)

For AI compliance in healthcare, the Task Force recommends strengthening accountability in AI-supported prior authorization processes. AI systems would be permitted to assist with determinations, but not to replace a clinician’s judgment in adverse decisions. Any AI-driven denial would need to follow the same clinical criteria applied by human reviewers and include a clear, plain-language explanation. The recommendation also calls for payors to audit AI systems for fairness and accuracy.

6. Develop Guidelines for AI in the Workplace (pp. 33-34)

The Task Force proposes establishing a multi-stakeholder advisory group to develop statewide guiding principles for employer use of AI in the workplace, including how employers use AI to hire, manage, and evaluate employees; monitor workplace safety; and predict attrition. The Task Force also emphasizes AI risk management for employers:

  • Requirements: Employers should disclose when AI is being used in ways that directly affect employees and should provide employees with meaningful avenues to challenge automated outcomes.
  • Incentives: Employers should use AI tools to improve, not compromise, workplace safety and ergonomics, and to strengthen employee well-being.

7. Require Disclosure of AI Use by Law Enforcement (pp. 35-36)

The Task Force recommends that law enforcement agencies be required to publicly disclose the AI systems they use and that officers attest to reviewing AI-assisted reports for accuracy, establishing proper oversight and accountability.

8. Establish a Grant Program for AI Innovation (pp. 37-38)

A public-private grant fund would support small businesses and public-purpose AI projects in Washington, with an emphasis on ethical innovation and regional equity.

A Readiness Checklist: How Organizations Can Prepare Now

Whether you're deploying AI internally or providing it to customers, these actions will position you well for Washington’s emerging framework:

  • Inventory your AI models and uses, and flag those with higher potential impact.
  • Align internal practices to the NIST AI RMF (formal adoption isn’t required to benefit from its structure).
  • Create core transparency artifacts (model cards, data provenance summaries, risk notes) that you can share with partners or auditors.
  • Define when and how human oversight applies for decisions that affect people or public outcomes.
  • Tighten vendor and customer contract terms around documentation, auditing rights, data sourcing, and incident notification.
  • Build in recurring reviews for bias, drift, and performance, and log mitigation steps.
  • Monitor upcoming recommendations as the Task Force moves toward its final report in July 2026.

These steps are implementable today and can help your organization get ahead of likely compliance, procurement, and contractual expectations.
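To make the first checklist item concrete, here is a minimal sketch of an AI-system inventory with risk flagging. The class, field names, and risk tiers are illustrative assumptions, not a structure prescribed by the report; the high-risk domains follow the sectors the report identifies as often presenting elevated risk.

```python
from dataclasses import dataclass

# Illustrative sketch of the "inventory and flag" step above.
# Field names and tiers are hypothetical, not taken from the report.
HIGH_RISK_DOMAINS = {"healthcare", "finance", "employment", "housing", "public safety"}

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    domain: str                # business area where the system operates
    affects_individuals: bool  # does output influence decisions about people?
    human_in_the_loop: bool    # is a human reviewing consequential outputs?

    def risk_tier(self) -> str:
        # Mirrors the report's risk-based approach: systems that can
        # significantly affect a person's life, health, safety, or
        # rights warrant heightened governance.
        if self.domain in HIGH_RISK_DOMAINS and self.affects_individuals:
            return "high"
        return "low"

inventory = [
    AISystemRecord("resume-screener", "AcmeAI", "employment", True, True),
    AISystemRecord("doc-summarizer", "AcmeAI", "internal ops", False, False),
]

flagged = [r.name for r in inventory if r.risk_tier() == "high"]
print(flagged)  # only the resume screener is flagged for heightened oversight
```

Even a simple register like this gives procurement, legal, and IT teams a shared starting point for deciding where human oversight, audits, and vendor disclosures are needed.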

What This Means for AI Customers – Public and Private

1. Procurement Standards are Expanding

Public-sector agencies – including those already subject to WaTech’s Interim Guidelines – will see additional requirements around their use of AI. The underlying principles of transparency, governance, and risk management will also increasingly shape expectations for private-sector organizations deploying AI systems in Washington. Customers should expect vendors to provide NIST-aligned disclosures, explainability materials, data-source clarity, and evidence of governance and monitoring controls, enabling customers to conduct more rigorous vendor assessments and due diligence for AI systems.

2. Internal AI Governance Must Mature Quickly

Customers that inventory their current and planned AI uses, identify high-risk applications, embed “human-in-the-loop” oversight where decisions affect rights or benefits, and update procurement, privacy, and IT governance policies to account for AI will be better prepared for applicable new law. Beyond compliance requirements, they will also be better able to minimize and mitigate AI risks.

3. Greater Public Transparency Will Be Expected

Washington is likely to require clearer disclosure of AI use in public services and high-risk activities – including what AI systems are used, how they are validated, and how they are overseen. Organizations that begin developing explainability practices and public-facing documentation now will reduce implementation burdens later and strengthen public trust.

What This Means for Technology Vendors and Service Providers

1. Governance Documentation is Becoming a Competitive Advantage

Vendors supplying AI systems will increasingly need model development summaries, data-provenance documentation, explainability materials, risk assessments and bias-mitigation procedures, and post-deployment monitoring plans to satisfy agency procurement requirements. Expect private-sector customers to follow the public sector’s lead: if you do business in Washington, aligning your internal practices to the NIST AI RMF and developing robust AI governance documentation now will keep you from falling behind.

2. Transparency is No Longer Optional

Vendors must be able to articulate what datasets they used, how models were trained and validated, and how they monitor for drift, bias, and other ongoing risks throughout the product lifecycle, particularly for high-impact systems. While this won’t require disclosing protected IP, disclosures will need enough detail to stand up to external scrutiny. Building AI governance policies and processes now will enable you to manage to – rather than react to – those transparency requirements.
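As a rough illustration of the transparency artifacts described above, the sketch below shows a minimal "model card" style record covering training data, validation, and monitoring. All keys, values, and the system name are hypothetical assumptions for illustration; neither the report nor NIST prescribes this format.

```python
# Hypothetical minimal model documentation record; the structure and
# contents are illustrative assumptions, not a mandated format.
model_card = {
    "model": "claims-triage-v2",  # hypothetical system name
    "intended_use": "assist (not replace) human reviewers",
    "training_data": {
        "sources": ["licensed claims corpus"],     # provenance, not raw data
        "known_gaps": ["limited rural coverage"],  # documented limitations
    },
    "validation": {
        "method": "held-out test set, reviewed quarterly",
        "bias_checks": ["outcome parity across protected classes"],
    },
    "monitoring": {
        "drift": "monthly input-distribution comparison",
        "escalation": "flagged outputs routed to a human reviewer",
    },
}

# A completeness check like this can gate a release or a procurement review.
required = {"intended_use", "training_data", "validation", "monitoring"}
missing = required - model_card.keys()
print(sorted(missing))  # prints [] when all required sections are present
```

Note that the record describes data provenance and limitations without exposing the underlying datasets or model internals, which is consistent with the report's recommendation that disclosures not reach protected trade secrets.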

3. High-Risk Systems Will Face Heightened Scrutiny 

Vendors serving sectors like healthcare, employment, housing, education, and public safety will need mature governance and monitoring practices to remain compliant – and competitive – as expectations for lifecycle accountability increase.

Taken together, the Interim Report and existing WaTech guidelines suggest that Washington is moving toward a coordinated, statewide approach. Vendors that treat these principles as de facto requirements will be better positioned than those waiting for formal legislation.

What Comes Next?

The Task Force will continue its work into 2026. Additional recommendations are already on the horizon, particularly ones related to companion chatbots and AI energy impacts. The Legislature is expected to take up several AI proposals next session, and agencies will continue shaping procurement and deployment standards in real time.

Closing Thoughts

The AI Task Force Interim Report provides the roadmap for Washington’s AI policy: greater transparency, structured governance, proportionate safeguards, and alignment with recognized standards. For public agencies and private companies exploring and deploying AI, and technology companies developing and providing AI, this is the moment to operationalize trustworthy AI best practices and prepare for a future where strong AI governance is central to AI adoption.

As organizations evaluate the implications of these recommendations, proactive steps taken now can significantly reduce future compliance burdens. If you have questions about how these recommendations may affect your organization, or if you’d like help assessing your AI systems, updating contracts, or preparing governance materials, our Technology Transactions team is here to assist.

Resources

Inaugural Report Update: What to know about Washington's AI Task Force

AI Transparency: A Tale of One Task Force and Two States’ Legislative Bills

Washington AI Task Force

WaTech Interim Guidelines for Purposeful and Responsible Use of Generative Artificial Intelligence

NIST AI Risk Management Framework (AI RMF)

ISO/IEC 42001

EU General-Purpose AI Code of Practice

  • Heath Dixon
    Partner

    With over two decades of experience, including 16 years as senior in-house counsel for world-leading technology, retail, and professional services companies, Heath knows how to empower businesses that deliver and rely on ...
