Client Alert: Ghosts In The Machine — Virginia’s AI Bill And The Future Of AI Legislation In Other States
Date: March 19, 2025
By: Kyle R. René
Notwithstanding support for the Act among Virginia lawmakers, commentators have speculated that Governor Youngkin could veto the legislation out of concern for how it may impact regulated businesses. He has until March 24 to decide whether the Act becomes law. Regardless of the outcome, the Act is significant both in how it compares with comparable legislation in Colorado and, more broadly, in what it suggests about similar legislation that may emerge in other states.
Who is Covered by the Act
- The Act applies to “developers” and “deployers” of certain artificial intelligence (“AI”) systems, covering both those that create or substantially modify such AI systems for use by Virginia consumers and businesses that use such AI systems to make “consequential decisions” affecting Virginia consumers.
- “Consumer” is in turn defined as “a natural person who is a resident of the Commonwealth and is acting only in an individual or household context.” However, this definition “does not include a natural person acting in a commercial or employment context.” The meaning of this exclusion is somewhat ambiguous. Commentators have speculated, for example, that it may protect individuals seeking employment, but not individuals acting within the scope of existing employment (e.g., if their employer deployed AI to assess employee performance). It remains to be seen how this exclusion may be interpreted in practice if the Act ultimately becomes law.
What the Act Covers
- The Act specifically regulates those that develop, deploy, or use “high-risk” artificial intelligence, in turn defining “high-risk” with respect to AI that is “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”
- “Substantial factor” is in turn defined as a factor that “is the principal basis for making a consequential decision” and is “capable of altering” the decision’s outcome.
- “Consequential decisions” are defined with reference to determinations that have a “material legal, or similarly significant, effect on the provision or denial to any consumer” of:
- Parole, probation, a pardon, or any other release from incarceration or court supervision;
- Education enrollment or an education opportunity;
- Access to employment;
- A financial or lending service;
- Access to health care services;
- Housing;
- Insurance;
- Marital status; or
- A legal service.
The Act generally requires high-risk AI developers and deployers to exercise a “reasonable duty of care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination” relating to a given deployment or use of high-risk AI. More specifically (among other requirements):
- Developers are required to provide deployers with various statements and disclosures regarding the high-risk AI in question, including a statement of its intended uses, a summary of how the AI system was evaluated for performance before being licensed/sold, measures the developer has taken to mitigate risks of algorithmic discrimination, and how the AI system can be used and monitored.
- Deployers are required to implement a risk management policy and program for the AI system that helps identify, mitigate, and document potential risks, and to complete impact assessments concerning (among other considerations) foreseeable risks and whether post-deployment use cases demonstrate alignment with the AI’s intended use.
Exemptions
The Act carves out exemptions from coverage for certain designated classes of entities and specifically excludes several AI functions from the definition of a “high-risk” artificial intelligence system. Subject to key nuances outlined in the Act:
- Excluded entities include certain financial institutions (e.g., banks, credit unions, mortgage lenders, savings institutions), insurers, and healthcare entities, among others.
- AI systems excluded from the definition of “high-risk” systems include those performing cybersecurity and other network functions, as well as technology that communicates with consumers using natural language (provided such communications are subject to a policy prohibiting discriminatory or unlawful content).
Beyond the above, the Act also outlines sweeping exclusions for a range of additional designated activities and technologies that are not subject to its requirements.
What this Means for You
As a general matter, it is important to note that, even for entities/technologies/conduct not explicitly excluded from the scope of requirements under the Act, the Act construes protections narrowly. For example, as noted above, the Act holds developers/deployers to a “reasonable” duty of care to address risks of algorithmic discrimination, rather than prohibiting algorithmic discrimination outright. By limiting the definition of high-risk AI to systems “specifically intended” to engage in a given activity, the Act also arguably excludes certain uses of AI that, though they may otherwise meet the definition of “high-risk,” are neither “specifically intended” to do so nor the “principal basis” for a given decision. Put simply, incidental or immaterial uses of AI are less likely to trigger requirements under the Act. Such exclusions, and others included in the Act, may shield businesses/AI developers from coverage under the Act in “edge” cases where it is unclear whether AI activity is in fact covered. To the extent the Act becomes law, those deploying or using AI in a business or other context are encouraged to review the Act closely and consult with counsel regarding questions and potential requirements.
The Act may otherwise provide a lens into similar legislation that can be expected in other states. To that end, the Act contrasts with Colorado’s legislation in a handful of key ways. For example, the Colorado legislation does not carve out consumers acting in an employment or commercial capacity. Further, Colorado’s legislation requires only that AI “assist” in making a decision, rather than serve as the “principal basis” for a decision (as Virginia’s Act requires), in order to trigger applicable requirements. Notwithstanding such differences, taken together, Virginia’s proposed Act and Colorado’s enacted legislation both suggest that AI developers and businesses that deploy AI should continue to monitor for similar legislation in other states and closely review:
- What the legislation specifically addresses/prohibits (e.g., algorithmic discrimination or something similar).
- The entities to which the legislation applies (and perhaps more importantly, the entities/technologies that are excluded from coverage).
- The parties intended to be protected by the legislation (and how such parties are defined).
- What the legislation requires (evaluations, disclosures/statements of purpose, risk management policies/programs, impact assessments, etc.).
- Exemptions from coverage (including exempted conduct, classes of businesses, etc.).
- How key terms are defined within the legislation, as these definitions often carve out sweeping exemptions from coverage not otherwise referenced in the broader legislative text.
- Other flexibilities evident in the text of the law (for example, if the law imposes a “reasonableness” standard or something more/less strict).
The information contained here is not intended to provide legal advice or opinion and should not be acted upon without consulting an attorney. Counsel should not be selected based on advertising materials, and we recommend that you conduct further investigation when seeking legal representation.