The ABC’s of 2026: Tech Policy Across the States

Action/Accountability/AI: State legislators continue to focus on technology policy and to take action on AI safety and transparency. The debates remain complicated, with some proposals making it over the finish line and others stalling along the way, like Florida’s proposed “AI Bill of Rights” (SB 482).

Build/Bureaucracy: State legislators are balancing many considerations when deciding which AI initiatives and infrastructure to invest in. For example, Idaho’s Legislature passed a bill (HB 687) aimed at preventing “biased” AI in government. Other states, including New Mexico, have invested in AI projects. New Mexico’s investments include:

  • $2 million allocated to the Energy, Minerals and Natural Resources Department for a wildlife mapping database and AI-enabled early detection camera network system

  • $5 million to the University of New Mexico to develop AI and quantum computing capabilities for health sciences

  • Recurring $2 million appropriation to New Mexico State University to create an institute of AI and machine learning

  • $1 million to the Department of Environment for AI-powered data systems, including document management

Chatbots/Children’s Safety: This session, many states have maintained their focus on chatbots and children’s safety. Alabama, Arizona, Oregon and Washington are among the states that recently passed minor-focused chatbot bills.

Disclosure: States continue to debate when and how generative AI use must be disclosed. Settings where these new requirements may apply include elections, health services, and online content creation and distribution.

State Policy Action

CO: Colorado’s AI working group, convened by the Governor, released a blueprint of changes agreed upon by a diverse group of stakeholders to address concerns about the first major comprehensive AI law passed by a US state legislature. In 2024, Colorado became the first state to enact a comprehensive law (SB 205) prohibiting AI systems from engaging in “algorithmic discrimination” in “consequential decisions” in areas like housing, education and financial services.

CT: Over 40 state legislators in the Connecticut General Assembly have co-sponsored an AI-focused bill (SB 5), a multi-pronged approach that would establish new requirements for AI chatbots and automated AI employment-related decisions; create an AI Policy Office; invest in an AI Academy; and create workforce development programs, among other measures. Legislators have a little more than a month left this session to decide whether to move the proposal forward.

NJ: The New Jersey Legislature is focused on the intersection of state-regulated professions and AI. One bill (AB 4731) requires the state’s professional and occupational boards to promulgate rules for allowable licensee use of generative AI. Another bill (AB 4733/SB 4088) prohibits a person or entity deploying AI in NJ from advertising or representing to the public that the generative AI system is able to practice a profession or occupation regulated by the state. Earlier this month, both proposals passed the Assembly Science, Innovation and Technology Committee and are awaiting hearings in the Assembly Regulated Professions Committee.

OH: Earlier this month, the Governor of Ohio urged state legislators to pass technology legislation, with priorities related to AI and social media platforms. In the Governor’s State of the State Address, he focused on AI-created pornography, AI safety and parental controls for cell phones and platforms.

TN: The Tennessee General Assembly passed a bill (HB 1470/SB 1580) prohibiting AI systems from advertising or representing to the public that the system is a qualified mental health professional. The Senate bill is now awaiting the Governor’s signature.

TX: The Texas Attorney General has initiated a series of lawsuits (e.g., The State of Texas v. TP-Link Systems Inc.) against Chinese-connected companies under the Texas Deceptive Trade Practices Act over data-sharing.

UT: Utah’s legislature wrapped up its business this month, with a lot of activity on the technology policy front. Last week, the Governor signed a new law (SB 267) requiring the State Board of Education to conduct a study on the use of software and digital services in public schools, with a focus on best practices related to student learning, safety and privacy. Another related bill (HB 273) signed by the Governor requires the State Board of Education to create model policies on the use of technology and AI in the classroom. This week, the Governor signed HB 276, dubbed the “Digital Voyeurism Prevention Act,” to address the non-consensual generation and distribution of counterfeit intimate images.

VA: Virginia lawmakers also wrapped up their business this month, with multiple bills awaiting the Governor’s signature deadline of April 13th. State legislators passed amendments (SB 338) to the state’s Consumer Data Protection Act to prohibit controllers from selling or offering for sale precise geolocation data. The legislature also passed a bill (SB 384/HB 797) establishing oversight of independent verification organizations that assess AI models and applications. Another bill that passed (HB 580) creates a Division of Consumer Counsel tasked with receiving and investigating complaints by the Commonwealth's consumers involving emerging technologies.

WA: Earlier this month, Washington state legislators passed a bill (HB 1170) requiring provenance data in any video, image or audio content altered by a generative AI system; the bill is awaiting the Governor’s signature.

Federal Policy Action

  • The Federal Trade Commission (FTC) announced it will not enforce the Children’s Online Privacy Protection Rule (COPPA) against digital platforms using tightly controlled data collection solely for age verification, offering temporary clarity as states expand age‑verification requirements.

  • Earlier this month, Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI signed a Trump-Administration-led “Ratepayer Protection Pledge,” which includes five strategies the companies agreed to in order to protect American consumers from price hikes driven by data center energy and infrastructure requirements, and to lower electricity costs for consumers in the long term. Regarding data center creation, the companies agreed to voluntarily negotiate new, separate rate structures with their utilities and relevant state governments.

  • The US Commerce Secretary and the Department of Commerce missed a major deadline in the federal pushback against state-enacted AI laws. Under a December 2025 Executive Order, the Secretary was required to publish an evaluation of onerous state laws related to artificial intelligence by March 11, 2026.

  • The White House released “President Trump’s Cyber Strategy for America,” which outlines six policy pillars that emphasize the need for coordination across government and the private sector with the goal of defending the safety, security and prosperity of the American People.

    1. “Shape Adversary Behavior” by working together to create risks for potential adversaries and consequences for cybercrime.

    2. “Promote Common Sense Regulations” by streamlining cyber regulations, addressing liability, and aligning regulators and industry globally, with an emphasis on privacy.

    3. “Modernize and Secure Federal Government Networks” by implementing cybersecurity best practices such as post-quantum cryptography, zero-trust architecture, and cloud transition. The focus will also be on modernizing procurement processes and adopting AI-powered cybersecurity solutions.

    4. “Secure Critical Infrastructure” like the energy grid, financial and telecommunication systems, data centers, water utilities and hospitals. The strategy notes the role of state, local, Tribal and territorial authorities to complement, not substitute, national cybersecurity efforts.

    5. “Sustain Superiority in Critical and Emerging Technologies” by swiftly implementing AI-enabled cyber tools and deploying agentic AI for network defense and disruption.

    6. “Build Talent and Capacity” by investing in America’s cyber workforce through education and training in collaboration with industry, academia, government and the military.

  • The Trump Administration released recommendations for a national AI Policy Framework focusing on children's safety, economic development, national security, copyright, digital creator support, freedom of speech, innovation, an AI-ready workforce and a federal-led strategy to preempt state AI laws that impose undue burdens. U.S. Senator Marsha Blackburn (R-Tenn.) released a discussion draft of a proposed bill dubbed the “Trump America AI Act,” focused on protecting the four Cs, “children, creators, conservatives, and communities,” from exploitation, abuse and censorship.

  • Recently, Microsoft filed an amicus brief and employees of OpenAI and Google DeepMind filed a separate brief supporting Anthropic’s lawsuit challenging the Department of War’s designation of the company as a “supply chain risk.” Yesterday, a US District Judge granted Anthropic a preliminary injunction, temporarily blocking both the designation and President Trump’s directive ordering federal agencies to stop using the company’s technology.

  • One hundred AI experts from 30 countries collaborated on the “International AI Safety Report 2026,” which synthesized growing evidence of AI-related risks involving criminal activity, manipulation, cyberattacks, and biological and chemical threats. The report aims to provide policymakers with a collaborative, international, scientific assessment of general-purpose AI capabilities and risks.

  • Anthropic

  • Anthropic

  • Brookings

  • The California Privacy Protection Agency

  • Pew Research Center

  • Pew Research Center

  • RAND

  • Science (Journal)/UC Berkeley

  • Stanford University Human-Centered Artificial Intelligence

  • Harvard Business Review

The balancing act between what is technically feasible and how far the government should intervene in the market on online children's safety is still being determined and debated.

  • Last Friday, the non-profit organization All Tech Is Human hosted a digital event highlighting privacy challenges related to proposed age assurance requirements and capabilities. To address these challenges, speakers presented alternative policy and product design options, ranging from “independent accountability to developmentally appropriate product design.”

  • Continuing the week’s focus on online safety, Pinterest’s CEO penned a Time op-ed discussing online children's safety and supporting Australia’s policy intervention, which made it the first country to ban social media for children under the age of 16.

Tech Policy & Governance Jobs

  • CA Privacy Protection Agency (closing 04/10/2026)

  • Google (closing 04/09/2026)

  • Commonwealth of PA (closing 04/07/2026)

  • Information Technology & Innovation Foundation (ongoing)

  • State of Colorado (closing 03/30/2026)
Do you have leads, tips, corrections, feedback or resources you would like to share? Send your advice to [email protected].

Disclosure: This is a human-written and driven publication. As a small business owner and mighty team of 1, I use AI tools to optimize my small business operations as a part of my admin tech stack. Regarding this publication, AI is mainly used to help with catchy titles, as a thesaurus when writing and a partner when creating cartoons. (Thanks, Canva, and not an ad!) As a secret doodler, I add my human touch using my digital pad and pen. I also use Grammarly, with AI built in, to help with copy editing/grammar (again, mighty team of one!) Thanks for reading. 😊
