It’s impossible to get through a day without hearing a reporter, a colleague, a friend or even your child mention “AI.” The technology has become ubiquitous, and we use it regularly, in many cases without knowing it is behind some of our daily routines.
Artificial intelligence has woven itself into the fabric of our existence. But who “invented” AI? How did the term come into existence? Its roots trace back to the work of Alan Turing. According to the National Artificial Intelligence Initiative Office (NAIIO), researchers in computer science from across the United States met at Dartmouth College in 1956 to discuss groundbreaking concepts in an emerging branch of computing called artificial intelligence, or “AI.” The term was, in fact, coined at that Dartmouth conference.
This historic meeting launched the platform for decades of government and industry research in AI and led to an eventual world where machines could communicate, “think,” emulate human behavior and solve problems. As the NAIIO finds, “these early investments have led to transformative advances” we now consider de rigueur, including mapping technologies, voice assistance and autocorrection on smartphones, financial trading, smart logistics, language translation, movie and product recommendations, and even document writing. The potential for AI to improve our well-being in areas such as medicine, environmental sustainability, education and public welfare seems almost limitless.
But to realize this potential—and successfully contend with its challenges—citizens, business leaders and policymakers must understand the basics of this technology, analyze its limitations and decide whether and when regulation may be necessary.
Recognizing the need for intensive education efforts on the use of AI, the state legislators and legislative staff members of NCSL’s Executive Task Force on Cybersecurity and Privacy developed NCSL’s “Approaches to Regulating Artificial Intelligence – A Primer.” The primer provides information for state legislators regarding international, federal and state activity surrounding the growth and use of AI so that state policymakers can determine how best to capture benefits and mitigate risks for their citizens.
The primer proposes language for defining AI, summarizes current state legislation and federal activity regulating AI, and showcases leading private sector frameworks for governing AI. It also encourages state legislators and legislative staff to consider and assess the appropriate role of state government in regulating AI, and to what extent states and the federal government can collaborate to create appropriate policy.
Many organizations and individuals have sought to define AI, but no consensus has emerged on a uniform meaning. The lack of an overarching definition poses a challenge for lawmakers as they seek to create a regulatory framework. Nevertheless, policymakers need to settle on some guiding principles to continue making progress.
The National Artificial Intelligence Initiative Act of 2020, the law that seeks to promote U.S. leadership in AI, defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
The European Union’s proposed Artificial Intelligence Act and the accompanying EU strategy for AI—the first comprehensive AI legislation proposed by any governing body—define it as: “Systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals.” The Organization for Economic Cooperation and Development uses a definition similar to those of the NAIIA and the EU.
Computer science courses use a textbook by Stuart Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach,” that defines AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.”
Google defines AI as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations and more.”
BSA The Software Alliance also has a definition of AI, referring to it as “systems that use machine learning algorithms that can analyze large volumes of training data to identify correlations, patterns, and other metadata that can be used to develop a model that can make predictions or recommendations based on future data inputs.”
Computer scientist Sorelle Friedler, who served in the White House Office of Science and Technology Policy under the Biden-Harris administration and whose work focuses on analyzing practical solutions for lawmakers looking to oversee AI in their constituencies, argues that a definition for AI should incorporate the notion of a “consequential decision.” The federal American Data Privacy and Protection Act (ADPPA), introduced in the House of Representatives in 2022, came closer to encapsulating this notion by defining the term “covered algorithm” as “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data.” Similarly, under 2023 CA AB 331, a consequential decision is “a decision or judgment that has a legal, material, or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of” any of the areas summarized in the list later in this primer.
This definition is comprehensive and gives lawmakers a well-thought-out, policy-area-focused framework that is practical and inclusive. While no definition is perfect, lawmakers will need to coalesce around accepted concepts and potential impacts before they can effectively intervene and create meaningful policy.
Because of the rate at which the technology is improving, there is urgency in understanding the benefits and risks of AI. The proliferation of data and the maturation of other innovations in cloud processing and computing power have resulted in the rapid acceleration of AI adoption and development. Organizations—and individuals—now have access to an unprecedented amount of data, including dark data—personal or high-risk data that would cause harm if disclosed—that they did not even realize they had until now.
While concepts of AI were first discussed in the 1950s, only in the last decade has the rate of adoption and use increased exponentially. This has necessitated a wholesale review of the AI ecosystem to determine the potential benefits of this powerful technology, as well as the hazards it can bring. In addition, national security risks have arisen as adversarial countries such as China, Russia and North Korea increase investments in AI, threatening critical cyber networks and making AI-controlled warfare, with little to no human decision-making, a sobering reality.
But it’s not just cybersecurity and national security risks that necessitate the urgent focus on AI. Generative AI technology, such as ChatGPT, uses algorithms to create new content “including audio, code, images, text, simulations, and videos,” and its use is spreading rapidly. As their use has grown, these platforms have raised issues of accuracy and bias around copyright, intellectual property, education, employment and financial services, to name but a few. Few technologies have been adopted as widely or as quickly without a full understanding of the risks.
AI seemingly has unlimited potential for changing all aspects of our society. We’ve already seen promising applications in many sectors of our economy, and AI provides an opportunity for enormous economic development. According to PricewaterhouseCoopers, “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” PwC’s estimate factors in increases of $7 trillion in China and $3.7 trillion in North America, with health, retail and financial services experiencing the most growth. And China has set an ambitious goal of becoming the global leader in this sector by 2030.
Many academic organizations, think tanks, and industry leaders have researched the potential use cases for AI. And while the applications are seemingly infinite, The Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative has engaged in major groundwork to identify overarching areas where this technology might be useful.
According to researchers Darrell West and John R. Allen in their 2018 article, “How Artificial Intelligence is Transforming the World,” AI can be used in multiple areas including the financial sector, national defense, health care, criminal justice, and e-governance.
West and Allen argue that the use of AI in the financial sector for investing, portfolio management, loan applications, mortgages and retirement planning can make these decisions more efficient, less emotional and more analytic. AI can also be used to prevent fraud and detect financial anomalies in large institutions.
AI can help shore up national defenses by allowing military leaders to analyze enormous amounts of data in faster and more efficient ways. Threat detection and signal and human intelligence can be amplified by the use of algorithms. West and Allen discuss a concept called “hyperwar,” or AI-controlled warfare, which would be waged with autonomous weapons systems capable of lethal outcomes. While hyperwar is controversial, the technology, if harnessed properly, could offer significant advantages to the military.
Medical professionals have been using algorithms for years, as this technology has proven useful in helping to diagnose and predict disease or illness. The use of AI in health care, particularly in medical imaging, interaction with patients and some administrative functions, has proven beneficial. According to Deloitte’s “Future of Health” report, algorithms can help predict potential challenges and allocate resources to patient education, sensing and proactive interventions that keep patients out of the hospital. Going beyond electronic health records, Deloitte argues that streams of health data—together with data from a variety of other relevant sources—will merge to create a multifaceted and highly personalized picture of every consumer’s well-being.
AI is also being deployed in the criminal justice arena. And while it is challenging to ensure that algorithms used to determine bail, sentencing or the likelihood of offender recidivism are free from bias, given the sensitivity of decisions regarding individual privacy, many judicial experts posit that AI programs can expedite criminal investigations through gunshot detection and crime-mapping to solve crimes more quickly and keep communities safer. When carefully deployed, predictive risk analysis, which plays to the strengths of machine learning, automated reasoning and other forms of AI, can improve law enforcement’s ability to more efficiently allocate resources, deter criminal activity, and track and capture criminals and terrorists.
AI and machine learning are also transforming the transportation sector. Cars, trucks, buses, and drone delivery systems are using AI for vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, and for real-time information analysis as safety measures and in the development of autonomous vehicles (AVs). West and Allen point out that high-performance computing and deep learning systems can help adapt to new traffic patterns, avoid crashes and improve ride-sharing services.
As laboratories of governance, state and local governments are seeing the potential of AI in helping to create smart cities and implement e-governance. A number of metropolitan areas are adopting AI applications for citizen service delivery, urban and environmental planning, energy use and crime prevention. Cities of all sizes, including Boston, San Francisco, Washington, D.C., and Columbus, Ohio, are investing in smart city technology. At the state level, AI can help governments manage large volumes of data, respond more efficiently to public requests and be proactive in providing public services. For example, the Georgia Department of Labor upgraded its customer service technology to include a website virtual assistant—the George AI chatbot. And some western states, including California, Nevada and Oregon, are using AI to monitor live footage from networks of cameras in forests and mountains for signs of smoke.
Finally, AI can be a powerful tool for businesses providing customer service to consumers through the use of chatbots and other customer service-oriented tools. “AI-enabled customer service can increase customer engagement, resulting in increased cross-sell and upsell opportunities while reducing cost-to-serve.”
Promising developments in automated decision-making, algorithms and machine learning raise important policy, regulatory and ethical issues. Removing humans from the decision-making process carries potential risks, which become more pronounced as AI technology grows more advanced over time.
Critical to understanding AI technology is that these algorithms are all based on data input by humans. The AI or machine learning pipeline consists of training data that is fed into a pattern-finding algorithm, which in turn creates a predictive model. The data collection, and the data itself, is based on human choices, responses or decisions. And because humans make decisions that are influenced by emotions, there is a risk that such algorithms can contain bias and inaccuracies.
In addition to bias, which can take many forms including historical, racial, or other discriminatory factors, ethical considerations and value choices can be embedded into algorithms as well. Policymakers need to be aware of the genesis of programming decisions to better understand how systems operate and can impact citizens.
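To make that pipeline concrete, the sketch below shows, in highly simplified form, how training data reflecting past human decisions is fed to a pattern-finding algorithm to produce a predictive model. The loan scenario, feature names and choice of library (Python’s scikit-learn) are illustrative assumptions, not a description of any system discussed in this primer.

```python
# A minimal, hypothetical sketch of the AI/machine learning pipeline:
# human-collected training data -> pattern-finding algorithm -> predictive model.
# The loan data below is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Training data: each row is [income_in_thousands, years_employed],
# recorded by humans; labels reflect past human loan decisions.
training_data = [[45, 2], [80, 10], [30, 1], [95, 7], [50, 4], [28, 0]]
past_decisions = [0, 1, 0, 1, 1, 0]  # 1 = approved, 0 = denied

# The pattern-finding algorithm learns whatever regularities are present in
# those past decisions -- including any historical bias they contain.
model = LogisticRegression().fit(training_data, past_decisions)

# The resulting predictive model is then applied to new applicants.
new_applicant = [[40, 3]]
print(model.predict(new_applicant))        # predicted decision
print(model.predict_proba(new_applicant))  # predicted probabilities
```

Because the model can only reproduce the patterns present in the data it was given, any bias in those past decisions is carried forward into its predictions.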
Integrating AI into the workforce also brings uncertainty and challenges. To what extent will AI replace jobs and cause labor market disruptions? As the U.S. Chamber of Commerce’s report from the Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation notes, workers need to both obtain a core set of technological skills and adapt to technology as it advances. This requires significant investments from business leaders and governments to build adaptability into retraining and reskilling for both technical and soft skills.
Questions arise concerning the legal liability of AI systems. Who is responsible and should be held accountable if people are harmed or discriminated against by an algorithm? Current product liability rules can cover some infractions and dictate penalties. But what about new and soon-to-be developed use cases for AI platforms? Which step of the AI pipeline is to blame when a negative outcome occurs?
Cybersecurity and national security risks bring attention to the vulnerabilities of AI systems, as reported by many experts, including in a joint report by Georgetown University’s Center for Security and Emerging Technology and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, “Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications.” In addition to the obvious risks of hyperwar, AI vulnerabilities are concerning, and traditional cybersecurity risk assessment tools are inadequate for addressing them.
Many researchers agree that one way to help ameliorate these risks and shortcomings of AI is to ensure equitable access to data. A McKinsey Global Institute study states that nations that promote open data sources and data sharing are the ones most likely to be successful in AI advances. AI depends on data that can be analyzed in real time and applied to actual situations. Without a coherent data strategy and clear protocols, data ownership is unclear. Uncertainty as to how much data resides in the public sphere can also hamper academic research.
But even this solution is not without its own complexities, as the relationship between data privacy and AI is an uneasy one. Storing data creates the risk of data breaches and unauthorized access to personal information, which is a major concern. The goal for policymakers should be to find a balance between protecting citizen privacy and not unduly restricting the development of AI systems.
Because of the risks involved with this powerful technology, many fear there will be a race to the bottom for AI if no guardrails are put in place. Proponents of regulatory frameworks raise concerns about rapidly developing, virtually untested AI tools being deployed ubiquitously without an understanding of their potential negative or dangerous impacts. Opponents of regulation argue that self-governance must come from within the private sector, as governmental oversight can stifle innovation and development.
While views range on the level of regulation that is necessary—or whether any is warranted at all—most AI proponents agree that some balance between AI innovation and basic human values must be achieved. West and Allen, in their Brookings AETI paper, believe this measured progress can be achieved by “improving data access, increasing government investment in AI, promoting AI workforce development, engaging with federal, state and local officials to ensure they enact effective policies, creating standards for broad AI objectives or risk management measures as opposed to specific algorithms, creating safeguards to combat bias, ensuring there are roles for human control and oversight, punishing malicious behavior, and safeguarding cybersecurity.”
At a recent congressional hearing on the subject of regulation, witnesses—many of whom represented larger tech companies such as OpenAI and IBM—proposed targeted federal AI regulation that focused on safety review standards, establishing a new federal agency dedicated to licensing and independent audits of existing and new systems, and appropriate funding for research.
There is also a middle-ground position. Its advocates argue that we first need to better understand the AI landscape. Laws and regulations are already on the books, and companies have built compliance with them into their products that use AI. If those existing laws and regulations are sufficient, additional, general-purpose AI regulation would unnecessarily complicate the regulatory landscape. The argument is to understand the landscape, identify any gaps and then determine how to fill those gaps.
While U.S. federal and state governments are debating and developing proposals for the oversight of AI, the European Union has led the charge in proposing a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence. Inching ever closer to actual implementation, the EU in May 2023 adopted a draft of the first rules implementing the Artificial Intelligence Act, which, according to the European Parliament press office, “would ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe” by classifying AI risk into three categories: unacceptable risk, high risk and unregulated.
These anticipated first-of-their-kind rules on AI incorporate a risk-based approach, require that providers take responsibility for their platforms and define prohibited AI practices. They even create a list of banned activities—including biometric surveillance, emotion recognition and predictive policing AI systems—while further classifying high-risk sectors and expanding transparency requirements for providers.
The adoption of a technology-neutral, risk-based approach is significant, as it sets the stage for how the rest of the world will fashion a regulatory scheme. It also showcases how the EU favors AI systems that are “overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.” And as the work of the first governing body to propose such regulation, the AI Act demonstrates Europe’s desire to be a global leader in this arena, envisioning a “global hub of excellence in AI from the lab to the market,” ensuring that AI respects European values and rules, and harnessing the potential of AI for industrial use.
In October 2022, the Biden administration took a step toward responsible innovation in its approach to AI regulatory principles by releasing a blueprint for an AI “Bill of Rights.” Its purpose is to create a guide for society that protects all citizens from the threats of AI by identifying five principles that should “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” Released through the White House Office of Science and Technology Policy, the framework posits that AI systems should be safe and effective, protect against algorithmic bias, ensure data privacy, be open and transparent, and allow for human stopgaps or alternatives for intervention.
In May 2023, the White House issued a series of announcements aimed at developing responsible, secure and comprehensive best practices for AI in the U.S. The administration’s focus areas include research and development funding from the National Science Foundation to establish seven new National AI Research Institutes which will bring the total number of Institutes nationwide to 25. The seven new institutes will concentrate on developing processes in climate, agriculture, energy, public health, education and cybersecurity.
Additionally, the administration obtained commitments from AI companies invited to the White House event to “participate in a public evaluation of AI systems consistent with responsible disclosure principles” at DEF CON 31 in August 2023. DEF CON brings together members of the technology community such as hackers, cryptographers, policymakers, law enforcement officers and even lawyers to share and learn about all things related to technology. Finally, the federal Office of Management and Budget will be releasing draft policy guidance for public comment in summer 2023 that it hopes will serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI.
Adding coherence to evolving U.S. policy on AI and further contributing to the ongoing international discourse on AI policy, the National Institute of Standards and Technology (NIST) issued Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) in January 2023, as directed by the 2021 National Defense Authorization Act (H.R. 6395/P.L. 116-283). Intended as a multi-tool for organizations to use in designing and managing trustworthy and responsible artificial intelligence, this framework follows the template of previous information risk management and governance frameworks from NIST: the Cybersecurity Framework released in 2014 and the Privacy Framework released in 2020. Considered “a living document” that is “voluntary, rights-preserving, non-sector-specific, use-case agnostic,” and adaptable to all types and sizes of organizations, the AI RMF also follows these earlier frameworks in organizing implementation into “core functions,” subcategories and implementation profiles. But unlike previous iterations, the AI RMF introduces “socio-technical” dimensions to its risk management approach and takes into consideration “societal dynamics and human behavior.”
On the private sector side, BSA The Software Alliance published its AI framework, “Confronting Bias: BSA’s Framework to Build Trust in AI.” The framework seeks to provide companies with the tools needed to reduce and even eliminate bias by performing impact assessments of AI systems; reducing bias will increase trust and mitigate harm. It identifies the ways bias can be introduced and infiltrate an AI system at various points in its lifecycle and how to manage those risks. The framework also addresses strategies for effective risk management, including an appropriate governance guide and a workable impact assessment process to identify and mitigate risks.
The U.S. Chamber of Commerce’s Technology Engagement Center also released comprehensive recommendations for AI regulation built around five pillars: efficiency, neutrality, proportionality, collegiality and flexibility. According to the U.S. Chamber, policymakers must evaluate the applicability of existing laws and regulations while ensuring new laws are technologically neutral. Laws should be flexible, adopt a risk-based approach to AI regulation and maximize federal interagency collaboration. Laws and regulations should encourage private sector approaches to risk assessment and innovation, as nonbinding, self-regulatory approaches provide the flexibility to keep up with rapidly changing technology. Understanding the urgency “to develop policies to promote responsible AI and to ensure economic and workforce growth,” the commission used these pillars to develop policy recommendations to put these priorities into action, recommending that policymakers address “preparing the workforce through education, bolstering global competitiveness in the areas of intellectual property while shoring up partnerships, and protecting national security.”
There is no shortage of think-tank contribution to the AI regulation discussion. Georgetown University’s Center for Security and Emerging Technology and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center produced a report that made many recommendations regarding how to minimize vulnerabilities in AI systems using more traditional cybersecurity risk assessment tools to the extent possible, recognizing that addressing AI risks may not align exactly with traditional software risks. Other recommendations include increasing communication and transparency between organizations that use AI, exploring current legal frameworks in cybersecurity to determine how AI fits into existing laws, and encouraging research and continued testing of AI systems. For policymakers, the report discusses recommendations on how to support “the creation of a more secure AI ecosystem.”
While the perfect balance between protecting data privacy and enabling the AI economy to thrive may seem unattainable, lawmakers who choose to pursue regulatory frameworks will need guidance on how to construct policy. Friedler, in her presentation before NCSL’s Cybersecurity and Privacy Task Force in June 2023, outlined three approaches lawmakers can use when drafting policy: sector-specific scoping, regulatory refinement, and cross-cutting or data-focused interventions.
In the sector-specific approach, lawmakers focus on narrow and specific goals for legislation that are easily defined and understandable. A current example is the U.S. Senate’s Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023 (S. 1394), which is focused only on nuclear launches and not on any broader national security issue. Another example of this approach could be crafting a bill prohibiting the use of specific AI applications in hiring practices, because Americans may not want employers to track their movements or facial expressions, or because they want hiring decisions to be made by a person and not a program. To achieve this result, a state or federal department of labor could define a list of employment-specific AI applications and then issue guidance meeting these principles.
In the regulatory refinement approach, which builds on existing statutes or agency guidance rather than creating new regulations, policymakers could identify and define “consequential decisions” and then task a state agency with updating a list of covered algorithms in those areas. As discussed in an earlier section, consequential decisions, as laid out in 2023 CA AB 331, are decisions that have “a material effect on the impact of, access to, eligibility for, cost of, terms of, or conditions” of the following, summarized below:
(1) Employment, worker management and self-employment, including hiring, pay, promotion, termination, task allocation, worker surveillance, supervision, unionization and labor relations.
(2) Education and vocational training, including assessment, proctoring, academic integrity, accreditation, certification, admissions, financial aid and scholarships.
(3) Housing and lodging, including rental and short-term housing and lodging, home appraisals, and access to rental subsidies and public housing.
(4) Essential utilities, including electricity, heat, water, municipal trash or sewage services, internet and telecommunications service and public transportation.
(5) Health care, including mental health care, family planning, adoption services, dental, and vision.
(6) Credit, banking, and other financial services, including financial services related to mortgages.
(7) Insurance, including mortgage, homeowners, health, dental, vision, rental, life and vehicle.
(8) The criminal justice system, immigration enforcement, border control, and child protective services, including risk assessments, sentencing, parole, surveillance, autonomous vehicles and machines, and predictive policing.
(9) Legal services, including public defense and other court-appointed counsel services, private arbitration, mediation and other alternative dispute resolution services.
(10) Voting, including redistricting, voter registration, detection of voter fraud, support or advocacy for a candidate for office, distribution of voting information, vote tabulation and election certification.
(11) Government benefits and services, including identity verification, fraud prevention, and assignment of penalties.
In the cross-cutting approach, Friedler suggests that policymakers can seek to ensure safety and efficacy, prevent algorithmic discrimination, require transparency, and assess the impact of technologies across all sectors. For example, an agency may be required to show that a specific technology actually works and a mechanism can be constructed where technical talent can work with agencies to ensure platforms are safe and effective. To prevent algorithmic discrimination, policymakers can draft legislation requiring impact assessments across agencies to assess whether an algorithm is causing unjustified different treatment of certain populations. If the goal is to ensure transparency of the use of AI in government, policymakers could require that public notices are made to impacted populations and explanations are provided on how decisions to use the technology were made.
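As one concrete illustration of what such an impact assessment might measure, the sketch below compares an algorithm’s favorable-outcome rates across demographic groups and flags large disparities. The sample data, group labels and the four-fifths (0.8) threshold are hypothetical assumptions for illustration, not requirements drawn from any statute or framework cited in this primer.

```python
# A minimal, hypothetical sketch of one disparate-impact check:
# compare favorable-outcome rates across groups and flag large gaps.
from collections import defaultdict

# Each record: (demographic_group, algorithm_decision), where 1 = favorable.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, outcome in decisions:
    counts[group][0] += outcome
    counts[group][1] += 1

rates = {group: favorable / total for group, (favorable, total) in counts.items()}
baseline = max(rates.values())  # highest group rate as the comparison point
for group, rate in rates.items():
    ratio = rate / baseline
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")
```

A real assessment would go further, but even a simple comparison like this gives agencies a starting point for deciding whether an algorithm warrants closer scrutiny.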
Finally, if the mission of lawmakers is to explicitly protect data or intellectual property, legislation can be drafted with data minimization in mind such as in the American Data Privacy and Protection Act of 2022 (ADPPA) and have this broadly apply to all sectors of government. Similarly, if the goal is to ensure AI doesn’t violate intellectual property (IP) protections, bills requiring permission or contracts to use such IP could be crafted and applied to all sectors.
While no federal legislation focusing on protecting people from the potential harms of AI and other automated systems appears imminent, states are moving ahead to address potential harms from these technologies. Bipartisan efforts in state legislatures seek to balance stronger protections for citizens with enabling innovation and commercial use of AI.
The focus is on oversight of the impact of the algorithm rather than on the specific tool itself, which allows innovation to flourish while keeping necessary protections future-proofed. According to a recent Brookings Institution analysis of state AI activity, the term “artificial intelligence” is a catchall notion that motivates legislative action, but legislators are actually eschewing the term when defining the scope of oversight and are “focusing instead on critical processes that are being performed or influenced by an algorithm.” According to the research, state governments are including any type of algorithm used for the covered process and are concerned with the impact on people’s civil rights, opportunities for advancement and access to critical services.
Building data transparency is another area of focus for state legislators. Brookings states that when businesses and governments use algorithms for important decisions, they should explicitly inform affected persons. Some states have also proposed making algorithmic impact assessments public to allow for another layer of accountability and governance. Brookings agrees that “requiring public disclosure about which automated tools are implicated in important decisions” is critical to enable effective governance and engender public trust. In its view, “states could require registration of such systems and further ask for more systemic information, such as details about how algorithms were used, as well as results from a system evaluation and bias assessment.”
In the 2019 legislative session, bills and resolutions dealing specifically with artificial intelligence were introduced in at least 20 states, and measures were enacted or adopted in Alabama, California, Delaware, Hawaii, Idaho, Illinois, New York, Texas and Vermont. Many of the measures proposed to create task forces or studies.
California enacted legislation regulating pretrial risk assessment tools to require each pretrial services agency that uses a pretrial risk assessment tool to validate the tool by Jan. 1, 2021, and on a regular basis at least once every three years, and to make information regarding the tool, including validation studies, publicly available.
The Illinois General Assembly enacted the Artificial Intelligence Video Interview Act. The act provides that employers must notify applicants before a videotaped interview that artificial intelligence may be used to analyze the interview and consider the applicant’s fitness for the position. Employers also must provide each applicant with information before the interview explaining how artificial intelligence will be applied and what general types of characteristics it uses to evaluate applicants. Before the interview, employers must obtain consent from the applicant to be evaluated by the artificial intelligence program. Employers also may not share applicant videos unnecessarily and they must delete an applicant’s interview upon request of the applicant.
Enacted in 2018, California’s Bolstering Online Transparency Act went into effect in 2019. This law makes it unlawful for any person to use a bot to communicate or interact online with another person in California with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication to incentivize a purchase or sale of goods or services or to influence an election.
During the 2020 state legislative session, at least 15 states introduced artificial intelligence bills and resolutions, with legislation enacted in Massachusetts and Utah. Massachusetts created a special commission to study the impact of automation, AI, global trade, access to new forms of data and the internet of things (IoT) on the workforce, businesses and the economy, with the main objective of ensuring sustainable jobs, fair benefits and workplace safety standards.
The Utah legislation created a deep technology talent initiative within higher education. “Deep technology” is defined as technology that leads to new products and innovations based on scientific discovery or meaningful engineering innovation, including those related to artificial intelligence.
Artificial intelligence bills and resolutions were introduced in at least 17 states in the 2021 legislative session, and enacted in Alabama, Colorado, and Mississippi. Alabama established the Alabama Council on Advanced Technology and Artificial Intelligence to review and advise the governor, the Legislature, and other interested parties on the use and development of advanced technology and artificial intelligence in this state. Colorado enacted legislation prohibiting insurers from using any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.
Illinois amended the 2019 Artificial Intelligence Video Interview Act to require that employers that rely solely upon artificial intelligence to determine whether an applicant will qualify for an in-person interview gather and report certain demographic information to the Department of Commerce and Economic Opportunity, and to require the department to analyze the data and report to the governor and General Assembly whether the data discloses a racial bias in the use of artificial intelligence.
Mississippi directed the State Department of Education to implement a K-12 computer science curriculum that includes instruction in robotics, artificial intelligence and machine learning.
In the 2022 legislative session, artificial intelligence bills and resolutions were introduced in at least 17 states, and enacted in Colorado, Florida, Idaho, Maine, Maryland, Vermont and Washington.
Vermont created the Division of Artificial Intelligence within the state Agency of Digital Services to review all aspects of AI developed, employed or procured by the state. The legislation required the Division of Artificial Intelligence to propose a state code of ethics on the use of AI and required the Agency of Digital Services to conduct an inventory of all the automated decision systems developed, employed or procured by the state.
In 2021, Washington provided funding for the office of the chief information officer to convene a work group to examine how automated decision-making systems can be reviewed and periodically audited to ensure they are fair, transparent and accountable. The legislation was amended in 2022 to require the chief information officer to prepare and make publicly available on its website an initial inventory of all automated decision systems being used by state agencies.
The 2023 legislative session is seeing an uptick in state legislative action with at least 25 states, Puerto Rico and the District of Columbia introducing artificial intelligence bills, and 14 states and Puerto Rico adopting resolutions or enacting legislation.
Connecticut requires the state Department of Administrative Services to conduct an inventory of all systems that employ artificial intelligence and are in use by any state agency. Beginning on Feb. 1, 2024, the department shall perform ongoing assessments of systems that employ artificial intelligence and are in use by state agencies to ensure that no such system shall result in any unlawful discrimination or disparate impact. Further, the Connecticut legislation requires the Office of Policy and Management to develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by state agencies.
Louisiana adopted a resolution requesting the Joint Committee on Technology and Cybersecurity to study the impact of artificial intelligence in operations, procurement, and policy. Maryland established the Industry 4.0 Technology Grant Program in the Department of Commerce to provide grants to certain small and medium-sized manufacturing enterprises to assist those manufacturers with implementing new industry 4.0 technology or related infrastructure. The definition of industry 4.0 includes AI.
North Dakota enacted legislation defining a person as an individual, organization, government, political subdivision, or government agency or instrumentality; providing that the term does not include environmental elements, artificial intelligence, an animal or an inanimate object. Texas created an AI advisory council to study and monitor artificial intelligence systems developed, employed, or procured by state agencies, with North Dakota, Puerto Rico and West Virginia also creating similar studies.
As lawmakers seek practical and pragmatic approaches to creating a safe digital environment for citizens, Friedler offers additional considerations based on her research and expertise in evaluating AI governance. The approaches outlined in the preceding section give lawmakers a philosophy for devising a regulatory scheme; the guidance that follows addresses how to carry it out.
First, she recommends moving beyond simply creating a task force to study AI, given how quickly the issue is gaining importance, and advocates instead for legislating around a specific issue. Second, she suggests focusing on impacts rather than technical details, as the latter change so rapidly that legislation built around a specific technology would quickly become obsolete. Next, she suggests crafting AI definitions that are limited and based on impact—hence the “consequential decision” model. She also recommends drawing on the sector-specific and technical expertise that already exists in agencies to regulate AI. Finally, she urges lawmakers to be specific when creating transparency requirements so that evaluations and assessments can be as data-based and useful as possible.
AI is transforming our economy, how citizens live and work, and how countries interact with each other. Managing the potential negative impacts of this powerful technology is at the forefront of policymakers’ agendas so as not to stifle the potential benefits of AI. State lawmakers are keenly aware that the window of opportunity is short, and the sense of urgency high.
As we move forward into the next phase of AI, the regulatory debate must answer several critical questions.
NCSL staff Erlinda Doherty, Susan Frederick, and Heather Morton would like to thank the members of the NCSL Task Force on Cybersecurity and Privacy Work Group and our NCSL Foundation sponsors for contributing to and reviewing this primer.