Understanding the Colorado AI Act

With the governor’s signature, Colorado has enacted a new consumer protection law focused on artificial intelligence (“AI”) systems. The “Colorado AI Act” will go into effect on February 1, 2026. It will have a minor impact on developers and deployers of public-facing AI systems used by Colorado residents and a more significant impact on developers and deployers of AI systems deemed high-risk under the law.

Disclosure of Interaction

A “Developer” is a legal or natural person doing business in Colorado that develops or intentionally and substantially modifies an AI system. A “Deployer” is a legal or natural person doing business in Colorado that uses a High-Risk Artificial Intelligence System.

A Developer or Deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available any artificial intelligence system must disclose to each Colorado resident who interacts with the system that the resident is interacting with an artificial intelligence system. However, disclosure is not required where it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system. This disclosure obligation is the only requirement the Colorado AI Act imposes on Developers and Deployers of AI systems that are not deemed high-risk.
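To make this concrete, here is a minimal sketch of how a chat interface might surface the required disclosure. This is an illustrative assumption on our part, not statutory language; the names (AiDisclosureOptions, renderDisclosure) and the banner copy are hypothetical.

```typescript
// Hypothetical sketch of the AI-interaction disclosure in a chat UI.
// Names and copy are illustrative assumptions, not drawn from the Act.

interface AiDisclosureOptions {
  // If a reasonable person would already know they are talking to an AI
  // system, the Act's disclosure exception may apply.
  obviousToReasonablePerson: boolean;
}

function renderDisclosure(opts: AiDisclosureOptions): string | null {
  // When in doubt, disclose: the exception is narrow.
  if (opts.obviousToReasonablePerson) {
    return null;
  }
  return "You are interacting with an artificial intelligence system.";
}

// Example: show the banner before the first AI-generated message.
const banner = renderDisclosure({ obviousToReasonablePerson: false });
if (banner !== null) {
  console.log(banner);
}
```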

High-Risk Systems

The statute defines a “High-Risk Artificial Intelligence System” as any AI system that makes, or is a Substantial Factor in making, a Consequential Decision. “Substantial Factor” means a factor that (i) assists in making a Consequential Decision; (ii) is capable of altering the outcome of a Consequential Decision; and (iii) is generated by an AI system. A Substantial Factor includes any use of an AI system to generate any content, decision, prediction, or recommendation concerning a Colorado resident that is used as a basis to make a Consequential Decision.

A “Consequential Decision” is a decision that has a material legal or similarly significant effect on the provision or denial to a Colorado resident of, or the cost or terms of, (a) education enrollment or opportunity; (b) employment; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.

A robust set of systems is excluded from classification as High-Risk Artificial Intelligence Systems. Excluded AI systems are those intended to (a) perform a narrow procedural task, or (b) detect decision-making patterns or deviations from prior decision-making patterns without being intended to replace or influence a previously completed human assessment absent sufficient human review. Additionally, certain specific technologies are deemed not to be High-Risk Artificial Intelligence Systems, such as (i) anti-fraud systems, as long as they do not use facial recognition, (ii) anti-virus and anti-malware systems, (iii) AI-enabled video games, (iv) cybersecurity tools, (v) databases, (vi) spam and robocall filtering, and (vii) technology that communicates with consumers in natural language to provide information, make referrals or recommendations, and answer questions, so long as the system is subject to an acceptable use policy that prohibits generating discriminatory or harmful content.
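Taken together, the definitions and exclusions amount to a scoping test. The sketch below encodes that test as a simplified triage helper; the type names and boolean inputs are our own simplifying assumptions, and real scoping questions call for counsel, not code.

```typescript
// Illustrative triage sketch for the high-risk test. The fields are
// simplifications of the statute's definitions and exclusions.

type ConsequentialDomain =
  | "education" | "employment" | "financial-or-lending"
  | "essential-government-service" | "health-care"
  | "housing" | "insurance" | "legal-service";

interface SystemProfile {
  // True if the system makes, or is a Substantial Factor in making,
  // a Consequential Decision (assists, can alter the outcome, AI-generated).
  makesOrIsSubstantialFactor: boolean;
  // Domain of the decision, if it falls in a statutory category.
  decisionDomain?: ConsequentialDomain;
  narrowProceduralTask: boolean;            // exclusion (a)
  patternDetectionWithHumanReview: boolean; // exclusion (b)
}

function isHighRisk(p: SystemProfile): boolean {
  if (p.narrowProceduralTask || p.patternDetectionWithHumanReview) {
    return false; // excluded from the high-risk definition
  }
  return p.makesOrIsSubstantialFactor && p.decisionDomain !== undefined;
}

// Example: a resume-screening tool that ranks applicants.
console.log(isHighRisk({
  makesOrIsSubstantialFactor: true,
  decisionDomain: "employment",
  narrowProceduralTask: false,
  patternDetectionWithHumanReview: false,
})); // -> true
```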

Responsibilities of Developers

The statute imposes a duty on Developers of High-Risk Artificial Intelligence Systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of Algorithmic Discrimination arising from the intended uses of a High-Risk Artificial Intelligence System. “Algorithmic Discrimination” means any condition in which the use of an AI system results in unlawful differential treatment or an unlawful impact that disfavors an individual or group on the basis of actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected by Colorado or federal law.

Developers of High-Risk Artificial Intelligence Systems are required to make available to Deployers and other Developers using the High-Risk Artificial Intelligence System (a hypothetical way to structure this documentation is sketched after the list):

  • a general statement of the reasonably foreseeable uses and known harmful or inappropriate uses of the system;

  • documentation disclosing (i) high-level summaries of the type of data used to train the system, (ii) known or reasonably foreseeable limitations of the system, including known or reasonably foreseeable risks of Algorithmic Discrimination arising from its intended uses, (iii) the purpose of the system, (iv) the intended benefits and uses of the system, and (v) all other information necessary to allow a Deployer or other Developer to comply with the obligations described below;

  • documentation describing (i) how the system was evaluated for performance and mitigation of Algorithmic Discrimination before it was offered, sold, leased, licensed, given, or otherwise made available to the Deployer, (ii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation, (iii) the intended outputs of the system, (iv) the measures taken to mitigate known or reasonably foreseeable risks of Algorithmic Discrimination that may arise from the reasonably foreseeable deployment of the system, and (v) how the system should be used, not be used, and be monitored when it is a Substantial Factor in making a Consequential Decision; and

  • any additional documentation reasonably necessary to assist the Deployer in understanding the outputs and monitoring the performance of the system for risks of Algorithmic Discrimination.
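In practice, Developers will likely maintain this material as a structured artifact, akin to a model card, that can be versioned and shared with Deployers. The interface below is one hypothetical way to organize the required fields; the names are ours, not the statute’s.

```typescript
// Hypothetical shape for the developer-to-deployer documentation package.
// Field names are illustrative; the Act prescribes content, not format.

interface DeveloperDocumentation {
  foreseeableUses: string[];            // reasonably foreseeable uses
  knownHarmfulUses: string[];           // known harmful or inappropriate uses
  trainingDataSummary: string;          // high-level summary of training data types
  knownLimitations: string[];           // incl. foreseeable discrimination risks
  purpose: string;
  intendedBenefits: string[];
  evaluationSummary: string;            // performance and discrimination testing
  dataGovernanceMeasures: string[];     // data suitability, bias review, mitigation
  intendedOutputs: string[];
  discriminationMitigations: string[];  // mitigation measures taken
  usageAndMonitoringGuidance: string;   // how to use, not use, and monitor
}
```

Serializing such a record to JSON would give a Deployer a single, reviewable artifact to attach to its own impact assessments.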

A Developer of a High-Risk Artificial Intelligence System must make available, on its website or in a public use case inventory, a description of (i) the types of High-Risk Artificial Intelligence Systems that the Developer has developed or modified and currently makes available to Deployers or other Developers; and (ii) how the Developer manages known or reasonably foreseeable risks of Algorithmic Discrimination that arise from the development or modification of those High-Risk Artificial Intelligence Systems.

Additionally, the Developer of a High-Risk Artificial Intelligence System must disclose to the Colorado Attorney General and to all known Deployers or other Developers of the High-Risk Artificial Intelligence System any known or reasonably foreseeable risks of Algorithmic Discrimination arising from the intended uses of the system without unreasonable delay but no later than ninety days after the date on which: (a) the Developer discovers through the Developer’s ongoing testing and analysis that the High-Risk Artificial Intelligence System has been deployed and has caused or is reasonably likely to have caused Algorithmic Discrimination; or (b) the Developer receives from a Deployer a credible report that the High-Risk Artificial Intelligence System has been deployed and has caused Algorithmic Discrimination.  The Colorado Attorney General will later publish the manner for providing the required notice.

Responsibilities of Deployers

Deployers must (i) use reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of Algorithmic Discrimination, and (ii) implement a reasonable risk management program to govern the High-Risk Artificial Intelligence System. The risk management program must specify and incorporate the principles, processes, and personnel that the Deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of Algorithmic Discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a High-Risk Artificial Intelligence System.

Factors in determining the reasonableness of a Deployer’s risk management program include (i) use of the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, the ISO/IEC 42001 standard for artificial intelligence management systems, or another equivalent recognized risk management framework for artificial intelligence systems; (ii) the size and complexity of the Deployer’s organization; (iii) the nature and scope of the High-Risk Artificial Intelligence System deployed by the Deployer, including its intended uses; and (iv) the sensitivity and volume of data processed in connection with the High-Risk Artificial Intelligence System.

Deployers must complete an impact assessment for their High-Risk Artificial Intelligence Systems annually, and also within ninety days after any change that introduces a reasonably foreseeable new risk of Algorithmic Discrimination is made available.

Impact assessments must include (a hypothetical record structure is sketched after the list):

  • a statement by the Deployer disclosing the purpose, intended use cases, and deployment context of, and benefits provided by, the High-Risk Artificial Intelligence System;

  • an analysis of whether the deployment of the system poses any known or reasonably foreseeable risks of Algorithmic Discrimination and, if so, the steps that have been taken to mitigate those risks;

  • a description of the categories of data the system processes as inputs and the outputs the system produces;

  • if the Deployer used data to customize the system, an overview of the categories of data used for that customization;

  • any metrics used to evaluate the performance and known limitations of the system;

  • a description of any transparency measures taken concerning the system, including any measures taken to disclose to a consumer that the system is in use while it is in use;

  • a description of the post-deployment monitoring and user safeguards provided concerning the system, including the oversight, use, and learning process established by the Deployer to address issues arising from the deployment of the system; and

  • the extent to which the system was used in a manner that was consistent with, or varied from, the Developer’s intended uses of the system.
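A Deployer might capture each annual assessment as a structured record so that year-over-year changes are auditable. The interface below is a hypothetical sketch of such a record; the field names are ours, and the Act specifies only the content, not any particular format.

```typescript
// Hypothetical record for a deployer's annual impact assessment.
// Field names are illustrative assumptions, not statutory terms.

interface ImpactAssessment {
  assessmentDate: string;                 // ISO 8601 date, e.g. "2026-02-01"
  purposeAndUseCases: string;
  deploymentContext: string;
  benefits: string[];
  discriminationRiskAnalysis: string;     // known/foreseeable risks and mitigations
  inputDataCategories: string[];
  outputDescriptions: string[];
  customizationDataCategories: string[];  // empty if no deployer customization
  performanceMetrics: string[];
  transparencyMeasures: string[];         // incl. in-use disclosures to consumers
  postDeploymentMonitoring: string;       // oversight, use, and learning process
  varianceFromIntendedUse: string;        // consistency with developer's intended uses
}
```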

In addition to the impact assessment, the Deployer must annually evaluate the High-Risk Artificial Intelligence System to ensure that the system is not causing actual Algorithmic Discrimination.

Prior to using the system as a Substantial Factor in making a Consequential Decision about a Colorado resident, the Deployer must (a) notify the Colorado resident that the Deployer has deployed a High-Risk Artificial Intelligence System to be a Substantial Factor in making a Consequential Decision; (b) provide to the Colorado resident (i) a statement disclosing the purpose of the system and the nature of the Consequential Decision; (ii) the contact information for the Deployer; and (iii) a plain language description of the High-Risk Artificial Intelligence System; and (c) provide the Colorado resident with information, if applicable, about the right to opt out of the processing of personal data for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects under the Colorado Privacy Act. 

If a Consequential Decision is adverse to a Colorado resident, the Deployer must provide: (a) a statement disclosing the principal reason or reasons for the Consequential Decision, including: (i) the degree to which, and manner in which, the High-Risk Artificial Intelligence System contributed to the Consequential Decision; and (ii) the type and sources of data that were processed by the system in making the Consequential Decision; (b) an opportunity to correct any incorrect personal data that the High-Risk Artificial Intelligence System processed in making, or as a Substantial Factor in making, the Consequential Decision; and (c) an opportunity to appeal an adverse Consequential Decision, which must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the Colorado resident, including in instances in which any delay might pose a risk to the life or safety of the Colorado resident.
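Both consumer-facing touchpoints, the pre-decision notice and the adverse-decision statement, lend themselves to structured payloads. The sketch below models each one; the interfaces and field names are hypothetical illustrations, not statutory formats.

```typescript
// Hypothetical payloads for the two consumer-facing notices.
// All names are illustrative assumptions.

interface PreDecisionNotice {
  systemPurpose: string;             // purpose of the high-risk system
  decisionNature: string;            // nature of the Consequential Decision
  deployerContact: string;
  plainLanguageDescription: string;  // plain-language description of the system
  cpaProfilingOptOutInfo?: string;   // Colorado Privacy Act opt-out, if applicable
}

interface AdverseDecisionStatement {
  principalReasons: string[];
  systemContribution: string;        // degree and manner of the system's role
  dataTypesAndSources: string[];     // data processed in making the decision
  correctionChannel: string;         // how to correct inaccurate personal data
  appealChannel?: string;            // omitted only in the narrow statutory cases
  humanReviewAvailable: boolean;     // required if technically feasible
}
```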

Deployers are required to post, and periodically update, on their websites a statement summarizing (i) the types of High-Risk Artificial Intelligence Systems currently deployed by the Deployer; (ii) how the Deployer manages known or reasonably foreseeable risks of Algorithmic Discrimination that may arise from the deployment of each High-Risk Artificial Intelligence System; and (iii) details of the nature, source, and extent of the information collected and used by the Deployer in connection with the High-Risk Artificial Intelligence System.

Deployers are required to notify the Colorado Attorney General without unreasonable delay, and no later than ninety days, after the date of discovery that the High-Risk Artificial Intelligence System has caused Algorithmic Discrimination.
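The window runs from the date of discovery. A trivial sketch of the deadline arithmetic, assuming calendar days (the statute does not specify business days), looks like this:

```typescript
// Sketch: outside notification date, ninety calendar days from discovery.
// Uses UTC date math to avoid local-timezone off-by-one errors.
function notificationDeadline(discovery: Date): Date {
  const deadline = new Date(discovery.getTime());
  deadline.setUTCDate(deadline.getUTCDate() + 90);
  return deadline;
}

// Example: discovery on March 1, 2026 yields a May 30, 2026 deadline.
console.log(notificationDeadline(new Date("2026-03-01"))
  .toISOString().slice(0, 10)); // -> "2026-05-30"
```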

Enforcement

The Attorney General of Colorado has exclusive authority to enforce the Colorado AI Act. Violations of the Act are considered unfair trade practices, and Developers and Deployers must demonstrate compliance through documentation and adherence to recognized risk management frameworks. The Colorado AI Act also provides for rebuttable presumptions and affirmative defenses for Developers and Deployers who proactively address and mitigate risks.  For example, it is an affirmative defense in an enforcement action for a Developer or Deployer to have discovered and cured a violation of the Colorado AI Act if the Developer or Deployer is otherwise in compliance with a permitted AI risk management framework.

Conclusion

The Colorado AI Act continues a trend toward imposing meaningful requirements on AI systems deemed to be high-risk while maintaining a lighter regulatory touch for other AI systems. Additionally, the Colorado Attorney General is empowered by the statute to issue rules implementing and enforcing the Colorado AI Act, so more details may emerge as the effective date approaches.

Andy Baer is the Chair of, and Chris Dodson is a partner in, Cozen O’Connor’s Technology, Privacy and Data Security Group.
