Colorado AI Act — High-Risk AI System Checker

Check if Colorado's AI transparency law (SB 24-205) applies to your artificial intelligence system. Answer questions about where your system is deployed, what decisions it makes, and whether you are a developer or deployer to find out your obligations under the Colorado Artificial Intelligence Act, which takes effect on 1 February 2026. This checker helps you understand whether your AI system qualifies as high-risk and what transparency, risk management, and disclosure requirements apply to you.


What Is the Colorado Artificial Intelligence Act?

The Colorado Artificial Intelligence Act (SB 24-205), signed into law in May 2024, is one of the most significant state-level AI regulations in the United States. Taking effect on 1 February 2026, the law establishes obligations for developers and deployers of high-risk artificial intelligence systems that make, or substantially factor into, consequential decisions affecting Colorado consumers. The act focuses on preventing algorithmic discrimination, defined as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavours a consumer based on protected characteristics including age, colour, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or similar categories.

The Colorado AI Act distinguishes between two types of regulated entities: developers and deployers. Developers are persons doing business in Colorado who develop or intentionally and substantially modify an AI system. Deployers are persons doing business in Colorado who deploy a high-risk AI system. A single entity can be both a developer and a deployer if it both creates and uses a high-risk AI system. The obligations differ between developers and deployers, with developers having responsibilities focused on documentation, transparency, and providing information to deployers, while deployers have responsibilities centred on risk management, consumer disclosure, and impact assessment.

What Makes an AI System High-Risk?

Under the Colorado AI Act, an AI system is considered high-risk if it makes or is a substantial factor in making a consequential decision. A consequential decision is defined as a decision that has a material legal or similarly significant effect on a consumer's access to or the cost, terms, or conditions of employment or employment opportunities, education or educational opportunities, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services. This broad definition captures a wide range of AI applications used across industries.

The act explicitly does not apply to certain technologies and use cases. AI systems that are used solely for narrow procedural tasks, spam filtering, cybersecurity functions, or basic data processing are generally not considered high-risk unless they contribute to consequential decisions. Similarly, AI tools used purely for internal research and development purposes without affecting consumer decisions are outside the scope of the law. The key determinant is whether the AI system's output has a meaningful impact on a consumer's access to important services or opportunities in the covered categories.

Developer Obligations Under the Colorado AI Act

Developers of high-risk AI systems face several important obligations under the Colorado AI Act. They must provide deployers with a general description of the high-risk AI system, documentation regarding the known limitations of the system, the types of data used to train the system, a description of how the system was evaluated for performance and mitigation of algorithmic discrimination, and instructions for how deployers can use the system in compliance with the law. Developers must also make available to the public and the Attorney General a statement describing the types of high-risk AI systems they develop and how they manage known or reasonably foreseeable risks of algorithmic discrimination.
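The documentation items above lend themselves to a simple compliance checklist. A minimal sketch, with item names paraphrased from this summary rather than quoted from the statute:

```python
# Paraphrased checklist of documentation a developer provides to deployers,
# per the summary above. Item wording is illustrative, not statutory text.
DEVELOPER_DOCS_FOR_DEPLOYERS = [
    "general description of the high-risk AI system",
    "known limitations of the system",
    "types of data used to train the system",
    "performance and discrimination-mitigation evaluation summary",
    "instructions for compliant deployer use",
]

def missing_items(provided: set[str]) -> list[str]:
    """Return checklist items not yet covered by the provided documentation."""
    return [item for item in DEVELOPER_DOCS_FOR_DEPLOYERS if item not in provided]

# Example: only the limitations document has been prepared so far.
print(missing_items({"known limitations of the system"}))
```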

Additionally, developers who discover that a high-risk AI system they developed has caused or is reasonably likely to cause algorithmic discrimination must notify the Attorney General and all known deployers within 90 days of discovery. This disclosure obligation creates an ongoing monitoring requirement for developers even after the system has been deployed. The combination of upfront documentation and ongoing monitoring represents a significant compliance burden that developers should plan for well in advance of the February 2026 effective date.
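The 90-day window can be tracked with a simple deadline calculation. A minimal sketch, assuming the clock starts on the date the algorithmic discrimination is discovered:

```python
from datetime import date, timedelta

# 90-day notification window, per the disclosure obligation described above.
NOTIFICATION_WINDOW_DAYS = 90

def notification_deadline(discovery_date: date) -> date:
    """Latest date to notify the Attorney General and all known deployers."""
    return discovery_date + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Example: discrimination risk discovered on 1 March 2026.
print(notification_deadline(date(2026, 3, 1)))  # 2026-05-30
```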