

State leaders are taking different approaches to the complex considerations surrounding AI technology, including gathering information from a broad range of stakeholders with different perspectives.
#1: One way is through committee work. Over the past two days, the Illinois Senate Executive Subcommittee on AI & Social Media discussed over 51 AI- and social media-related bills, ranging from AI liability for minors and chatbots to the medical use of AI. The condensed sessions' goal was to identify commonalities among committee members. Last week, the Texas House Speaker directed committees to study specific topics during the interim, ahead of next year's legislative session. Four committees are assigned topics related to social media and artificial intelligence. Last year, Rhode Island created a new committee with jurisdiction over legislation related to emerging technologies, including artificial intelligence and its societal, ethical, and policy implications.
#2: Another way is through long-term studies and stakeholder engagement. Before the current legislative session, the Washington State AI Task Force released a preliminary report with 8 recommendations for the legislature and the governor, based on work from a coalition of leaders and experts from government, business, community, and civil rights groups. The legislature created the task force in 2024 and plans to include recommendations on AI companion chatbot safeguards and on climate and energy impacts related to AI development and use in the final report, due July 2026. The establishment of task forces has been a popular strategy across the states over the past few years.
#3: A third strategy is government partnership with the private sector and investment in academia. In the last few years, Utah has created the Office of Artificial Intelligence Policy, along with a regulatory sandbox and an AI Learning Lab. The legislature created these programs to foster partnerships between the government and private sector by eliminating barriers, testing innovation in safe environments, and learning from developing tools together. States have also allocated funding to higher education institutions to support intersectional AI research.
State Policy Action
AK: The Alaska State Legislature is advancing legislation (SB 247) that seeks to expand already established laws related to the sexual abuse, exploitation, and harassment of minors, including the possession and distribution of child sexual abuse material, to cover computer-generated material. Yesterday, after making some changes, the Senate Community and Regional Affairs Committee advanced the bill to the Senate Judiciary Committee, with the next hearing scheduled for Wednesday.
CA: Members of the California State Legislature are working together on legislation regarding children's use of AI chatbots (SB 1119/AB 2023). The legislation was last amended at the end of March, and SB 1119 has its next hearing on April 20th. The legislation focuses on required risk assessments, crisis response protocols, heightened notification settings, usage caps for AI chatbot use by minors, and new data protections, among other elements. California was the first state to pass a law focused on chatbots, in 2019.
A bill (AB 1898) that aims to create transparency around AI at work is moving through the legislature. The legislation adds a section to the Labor Code requiring employers to notify employees before deploying workplace AI tools, such as AI-based surveillance technologies and automated decision systems.
Last week, California's Governor signed an executive order targeting state procurement and the acquisition and adoption of AI tools for public services. The executive order requires the following state agencies to take specific actions within four months toward this aim: the Department of General Services, the Department of Technology, the Government Operations Agency, and the California Department of Human Resources. Agencies must submit recommendations to the Governor on topics such as increased safety considerations in procurement contracts, supply chain risks, and new contractor responsibilities. The order also emphasizes AI use in state government by identifying new pilot projects, training, and the development of best practices.
GA: Georgia's General Assembly wrapped up its business last week. The legislature enacted three AI-related bills that are awaiting the Governor's signature. One bill (SB 540) requires disclosures for chatbots and imposes requirements for minor accounts, such as preventing chatbots from producing or generating sexually explicit content. Another bill (SB 444) prohibits AI systems from making the final determination of health insurance coverage. The last piece of legislation (SR 789) would create a study committee on the impact of AI on creative industries.
ID: Last week, the Idaho Legislature enacted, and the Governor signed, an AI bill (SB 1297). The bill targets operators of "conversational AI services," requiring disclosures and protocols for specific content, such as suicidal ideation. For minors, operators must adhere to additional requirements and must allow parents to manage the privacy and platform settings of their children's accounts.
IN: The Indiana General Assembly enacted, and the Governor signed legislation (HB 1408) that includes provisions related to minor social media use.
KY: Kentucky's legislative work ends next week, and many tech-specific bills recently made their way to the Governor's desk. Two bills focused on privacy include HB 58, which addresses privacy protections for license plate readers, and HB 692, which adds a definition of "automatic content recognition" to the Kentucky Consumer Data Protection Act. Other technology bills awaiting the Governor's signature include: 1) cell phone use in schools (HB 67), 2) stalking on social media platforms (HB 521), and 3) expanding child sexual abuse material laws to include computer-generated material (HB 366).
LA: A bill (HB 459) related to AI use in elections is awaiting debate by the House of Representatives. Another bill (SB 246) establishes requirements for health insurers that use AI or automated decision systems and will soon be up for a vote in the Senate.
ME: Yesterday, the House of Representatives failed to pass a comprehensive data privacy law (LD 1822) amid disagreements over whom the law should apply to and other major elements.
MI: At the end of March, the Michigan Senate Finance, Insurance, and Consumer Protection Committee advanced a package of bills, dubbed the "Kids Over Clicks" bill package by the Senate Democrats.
OR: Last week, the Governor signed SB 1546, focused on AI companions. The new law, effective January 2027, requires platforms to adhere to specific protocols before granting access to minor users, among other requirements. The law includes a private right of action.
RI: Last week, a group of Rhode Island state representatives and state senators unveiled a package of bills with different strategies to protect kids from digital harm.
NY: New York City Public Schools, the largest school district in the US, released preliminary guidance on AI. The guidance defines when AI is allowed, when it is limited, and when it is off-limits. Public feedback is open through May 8th.
WV: Last week, the Governor signed a bill (HB 5638) that clarifies the authority and responsibilities of the state chief information security officer and outlines the process for cybersecurity program reviews.
WA: The Governor signed HB 2225, a bill focused on transparency and on responding to self-harm, suicidal ideation, or emotional crisis when minors interact with AI companion chatbots. The legislation, effective January 2027, includes a private right of action.
UT: The Governor signed HB 276, which focuses on the provenance of digital content and addresses the non-consensual generation and distribution of counterfeit intimate images.
WY: Wyoming's 2026 legislative session resulted in the enactment of a handful of tech-focused legislation, such as minors and deepfakes (HB 102), data privacy for government entities (SF 20), and school district cell phone and smart watch policies (SF 35).

The Department of Labor released the "Make America AI-Ready" initiative, a set of free artificial intelligence literacy courses designed to be delivered entirely over text message.
The US State Department settled a 2023 lawsuit brought by Republican Texas Attorney General Ken Paxton and the media outlets The Daily Wire and The Federalist. The original complaint claimed the State Department violated free speech protections through grants to counter "propaganda and disinformation."
Last week, public comment closed on the National Institute of Standards and Technology (NIST) draft guidance around practices for automated benchmark evaluations of language models.
The Children’s Parliament recently released its report, “Exploring Children’s Rights and AI.” The project, a partnership with the Scottish AI Alliance and The Alan Turing Institute, aimed to investigate children’s views of AI and its impact on their lives and rights. The findings come from three years of work with 140 children aged 8 to 12 from six schools across Scotland.
Anthropic signed a memorandum of understanding with the Australian government around AI safety and research partnerships.
The Chairman of the Select Committee on China cosponsored the Multilateral Alignment of Technology Controls on Hardware (MATCH) Act, a bipartisan bill focused on semiconductor manufacturing and national security. One of the bill's provisions would ban the sale of key chipmaking equipment to China.
The Library (Research & Reports)
Partnership for Public Services
Nava Labs/Georgetown University/Cornell University
Center for AI and Digital Policy
Anthropic
OpenAI
OpenAI
The Rithm Project

In the children's online safety debate, screen time has been a central focus, including questions of when and whether screen time is harmful and who should be responsible for overseeing screen time limits, if limits are the chosen approach. During the 2025 legislative session, the Virginia General Assembly enacted SB 854, which limits social media use to one hour per day per app for users under 16. Recently, a US District Court granted a preliminary injunction, temporarily halting enforcement of the law.
The Information Technology & Innovation Foundation penned a piece against social media time limit policies, arguing the approach would be ineffective and duplicative. As an alternative, it proposes educating parents about the minor-account settings already available on platforms.
According to the American Academy of Pediatrics guidance, screen time should center around the quality rather than quantity of content that children and teens are consuming.
The Child Mind Institute provides resources on how families can determine if their child is spending too much time with screens and how to set up reasonable screen time rules.
The American Medical Association points to evidence that too much screen time can affect physical and mental health, contributing to eye strain, sleep disruption, reduced activity, and heightened stress. This resource provides tips on how to maintain balance with screen time and minimize health impacts.
Tech Policy & Governance Jobs
| Company/Organization | Title | Closing Date |
|---|---|---|
| Recoding America Fund | | 05/08/2026 |
| Anthropic | | Ongoing |
| California Department of Technology | | Ongoing |
| Center for Democracy & Technology | | Ongoing |
| Discord | | Ongoing |
| TikTok USDS Joint Venture LLC | | Ongoing |
| Kapor Foundation | | Ongoing |
| US Tech Force | | Ongoing |
| Cambridge Boston Alignment Initiative | | 04/12/2026 |
Do you have leads, tips, corrections, feedback, or resources you would like to share? Send your advice to [email protected].
Disclosure: Inclusion of external links is for informational purposes only and does not constitute an endorsement of that perspective or organization.
