A group of state lawmakers began wading through the complex world of artificial intelligence Tuesday, providing an early glimpse at how Texas may seek to regulate the booming technology.
During a nearly four-hour hearing, the Texas Senate Business and Commerce Committee heard a wide range of concerns about the potential risks of AI, including the spread of misinformation, biased decision-making and violations of consumer privacy. By the end of the hearing, at least some of the 11 committee members appeared convinced that the state should enact laws that regulate how and when private companies use artificial intelligence.
“If you really think about it, it’s a dystopian world we could live in,” Sen. Lois Kolkhorst, R-Brenham, said during the hearing. “I think our challenge is, how do we get out there and put in those safeguards?”
Artificial intelligence is a broad term that spans a range of technologies, including chatbots that use language processing to help answer users’ questions, generative AI that creates unique content and tools that automate decisions, like how much to charge someone for home insurance or whether a job applicant should get an interview. Artificial intelligence can also be used to produce digital replicas of artists’ work.
Already, more than 100 of the 145 state agencies are using AI in some form, Amanda Crawford, chief information officer for the Texas Department of Information Resources, told lawmakers Tuesday. Crawford is a member of a new AI Council created this year by Gov. Greg Abbott, Lt. Gov. Dan Patrick and House Speaker Dade Phelan. The council is tasked with studying how state agencies use AI and assessing whether the state needs a code of ethics for AI. The council is expected to publish its report by the end of the year.
Leaders of several state agencies testified that artificial intelligence has helped them save significant time and money. Edward Serna, executive director of the Texas Workforce Commission, for example, said a chatbot the agency created in 2020 has helped answer 23 million questions. Tina McLeod, information officer in the Attorney General’s Office, said workers there have saved at least an hour a week with an AI tool that helps sift through lengthy child support cases.
But in other cases, stakeholders testified, AI technology could be used in ways that hurt Texans.
Josh Abbott, a country singer, said he worries AI could be used to replicate his voice and generate new songs that get distributed on Spotify.
“AI fakes don’t care if you’re famous,” Abbott said. “AI frauds and deep fakes affect everyone.”
Grace Gedye, a policy analyst for Consumer Reports, recounted how private companies have already used biased AI models to make critical housing and hiring decisions that hurt consumers. She said lawmakers could require companies that rely on AI for decision-making to audit that technology and disclose to consumers how they are being evaluated.
Gedye pointed to New York City, which enacted a law requiring employers that use automated employment tools to audit those tools. Few employers actually performed the audits, Gedye said.
In creating legislation, lawmakers will need to tread carefully to make sure they don’t write laws that inadvertently prohibit positive uses of artificial intelligence while they try to clamp down on harms, said Renzo Soto, an executive director of TechNet, which represents technology CEOs.
“You almost have to look at it industry by industry,” Soto said.
Texas already passed a law in 2019 that makes it a crime to fabricate a deceptive video with the intent to influence an election. Last year, lawmakers passed another law prohibiting the use of deep fake videos for pornography.
When they consider future legislation to rein in artificial intelligence, lawmakers will need to make sure they don’t violate First Amendment free speech protections, said Ben Sheffner, an attorney for the Motion Picture Association.
Throughout the hearing, lawmakers repeatedly asked whether other states or countries could serve as templates for how to craft AI policy. So far, a patchwork of state and federal regulations has tried to limit AI’s use, with limited success.
California lawmakers have introduced a bill that would require AI developers and deployers to reduce the risks of “catastrophic harm” from their technology. Tech companies are fighting to kill that bill. Colorado has also passed a law regulating the use of AI in certain “high-risk” scenarios, such as those pertaining to education, employment or health care. Colorado’s governor has already said the law needs to be amended before it goes into effect in 2026.
Copyright 2024 KERA