US-based think tank FPF releases findings on the latest developments in artificial intelligence regulation across US states
The Future of Privacy Forum (FPF) has published a new report titled "U.S. State AI Legislation: A Look at How U.S. State Policymakers Are Approaching Artificial Intelligence Regulation." The report analyzes key bills introduced in 2023 and 2024, highlighting emerging trends in U.S. state AI legislation with a focus on the governance of AI in consequential decisions.
Key trends in state AI legislation include the adoption of risk-based, transparency-oriented frameworks. For instance, Colorado has enacted comprehensive AI laws requiring organizations to implement AI risk management programs specifically addressing the use of high-risk AI systems in consequential decisions. The emphasis is on preventing algorithmic discrimination and ensuring fairness.
Several states, including California, mandate disclosures about AI use in automated decision-making processes impacting consumers. This enhances accountability for consequential uses such as in employment, credit, or healthcare.
Utah’s AI laws regulate AI use in consumer transactions, reflecting growing attention to protecting individuals in significant interactions involving AI tools, especially generative AI systems.
Despite attempts at federal uniformity, states continue to innovate and pass varied regulations targeting AI governance. The failure of the proposed 10-year federal moratorium on state AI regulations shows that states are determined to retain authority to regulate AI locally, especially regarding consequential decision-making harms and consumer protections.
However, the 2025 White House AI Action Plan emphasizes deregulation and pro-innovation policies, aiming to reduce barriers to AI adoption and foster U.S. AI leadership with lighter regulatory oversight. Yet it warns that states enacting restrictive AI laws may risk losing federal AI research funding, reflecting a federal preference for industry-friendly governance over state-enforced guardrails.
A key goal of these legislative efforts is to mitigate the risk of algorithmic discrimination. Consistent definitions and principles across states are crucial both for safeguarding individual rights and for building an interoperable regulatory framework. Most frameworks create role-specific obligations, including separate developer and deployer requirements for transparency, risk assessment, and AI governance programs.
The report incorporates insights from civil society groups, businesses, and technical experts, offering a comprehensive examination of the nuances and challenges in advancing AI regulations. Common consumer rights around AI include the rights to notice and explanation, to correction, and to appeal or opt out of automated decisions.
In summary, U.S. state AI legislation increasingly adopts risk-based, transparency-oriented frameworks to govern AI in consequential decisions, emphasizing consumer protection and discrimination prevention. These state efforts persist despite federal moves toward deregulation and attempts (ultimately unsuccessful) to curb state-level AI rules. The emerging trends highlighted in the report point to a collaborative movement toward an interoperable framework that provides regulatory clarity in AI legislation.