Mr. Hertzberg is a state Senator from California and last year, after he introduced a bail reform bill, he noticed that automated accounts on Twitter and Facebook were attacking him and his position. He is currently seeking re-election, and one of his opponents in the Democratic primary was a bail agent.

“The bail agents against me had hundreds of bots working in order to create the false impression that there were people against me,” Mr. Hertzberg said in a recent interview.

For example, he said, one account rapidly responded to tweets about the bill in real time with the message: “Unconstitutional bail reform doesn’t work and is racist.”

So Mr. Hertzberg introduced another bill this year, the first of its kind in the United States, that would compel automated social media accounts to identify themselves as bots — in other words, to disclose their non-personhood.

Because bots are only effective if they seem convincingly human. Right?

Depending on how you define them, bots have been around since before most of us were using the internet. Their presence online was considered fairly benevolent, if considered at all, until 2016, when they were among the host of factors used to explain away the election of President Trump.

Since then, bots have become, for many people, a digital boogeyman, a viral weapon that can be wielded to influence political opinions, fool advertisers, prank unknowing social media users and get bad hashtags to trend. (They’re also the lifeblood of many users we call influencers.)

Last week, Twitter announced it would remove tens of millions of suspicious accounts to crack down on the bots that can be bought (through third parties) by users who want to inflate their follower counts. The company also said last month that it had been “locking” almost 10 million suspicious accounts per week and removing others for violating anti-spam policies.

Still, bots are easy to make and widely employed, and social media companies are under no legal obligation to get rid of them. A law that discourages their use could help, but experts aren’t sure how the one Senator Hertzberg is trying to push through, in California, might work.

For starters, would bots be forced to identify themselves in every Facebook post? In their Instagram bios? In their Twitter handles?

The measure, SB-1001, a version of which has already passed the Senate and is working its way through the state Assembly, also doesn’t require tech companies to enforce the regulation. And it’s unclear how a bill that applies only to California would work on a global internet.

Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, applauded the spirit of the law but was not as sold on its letter. “This is groundbreaking legislation,” he said. “We are on a trajectory where reality, the very fabric of the information we see, can be altered in an unprecedented fashion. When that’s done, as the law says, with an intent to mislead, that’s a huge problem.”

But “you don’t want to measure twice, regulate once,” he said. “You don’t want to put the wrong laws on the books and have unintended consequences.”

Jeremy Gillula, a technologist at the Electronic Frontier Foundation who has been critical of the bill since its inception, said the first version was “a little like trying to treat the flu using chemotherapy.”

“Not only will it not fix the thing you’re trying to fix,” he said earlier this month. “It’ll cause a lot of collateral damage at the same time.”

The bill was drafted by Common Sense Media, a nonprofit that provides consumer ratings about the age-appropriateness of movies and TV shows, in collaboration with the Center for Humane Technology, a group of former employees of big tech companies including Google and Facebook who have banded together to regulate their former employers.

Neither Senator Hertzberg nor Jim Steyer, the chief executive of Common Sense Media, was overly concerned with criticism when interviewed about the bill in June. The senator called skepticism “the drivel of people who want to stop progress.” He said that the analysis he had seen had been influenced by lobbyists and was flat-out wrong.

But after they were interviewed and the bill moved through the Assembly’s committees, the content of the proposed law changed substantially. The definition of bot grew more precise (from “online account” to “automated online account on an online platform”), and language that recommended an online platform for reporting bots was scrapped. Furthermore, the bill now requires only bots that attempt to sell consumers goods or services, or to influence votes in an election, to identify themselves as bots.

But even with the changes, the bill summons significant constitutional questions, said Ryan Calo, a co-director of the Tech Policy Lab at the University of Washington, and Madeline Lamo, a former fellow at the lab.

Ms. Lamo said that language in the bill about bots “influencing a vote in an election” ran into a problem that has plagued campaign finance regulations and election-related speech laws: It can be difficult to distinguish speech about political issues from speech explicitly intended to influence voters.

Furthermore, she noted, the bill was simply not crafted to address the problem its authors had in mind. Insofar as bots have had sway over political views, they have acted at scale, with thousands of automated accounts working to spread a diverse array of messages. It’s hard to imagine, she said, that requiring individual accounts to identify themselves in a single state would do much to sap the strength of bot armies.

All parties agree that the bill illustrates the difficulty that lawmakers have in crafting legislation that effectively addresses the problems constituents confront online. As the pace of technological development has raced ahead of government, the laws on the books — not to mention some lawmakers’ understanding of technology — have remained comparatively stagnant. And, as Twitter’s action last week demonstrates, technology companies have the power to change dynamics on their platforms directly, and at the scale that those problems require. Turning a bill into a law can take a long time. And then the law runs the risk of being inexact.

Mr. Calo cited as an example a driverless car law passed by Nevada in 2012, which met with protest from luxury carmakers. They were miffed that the technologies in their cars fell under the loosely drawn definition of autonomous vehicles. The bill was revised and passed the following year.

“Embarrassingly, it was the first definition of artificial intelligence I’ve ever seen in a state statute, and they had to strike it out and rewrite it,” Mr. Calo said.

With the bot bill, he said, similar issues could crop up.

“Political commentary comes in different forms,” he said. “Imagine a concerned citizen sets up a bot to criticize a particular official for failing to act on climate change. Now say that official runs for re-election. Is the concerned citizen now in violation of California law?”

Meanwhile, Mr. Gillula said that the bill sought to address a problem that may have been somewhat overhyped.

“I haven’t seen anyone say conclusively that Russian bots swung the election,” he said, noting that he was not an expert on American political discourse. “Given that people are concerned but there is no conclusive smoking-gun proof of concrete harm, I really hesitate to jump to ‘let’s get something because it’s such an emergency.’”

But Senator Hertzberg, who also has bills in the works addressing blockchain technology and cannabis banking, is undeterred. He said that his bot bill could have a big impact if passed in California, where so many tech companies are based.

“The political industrial complex is designed to protect the status quo, not to invent the future,” he said. “Inventing the future means you have to think differently, be more inventive, be more creative.”

Jonah Bromwich is based in New York. He writes for the Style section. @jonesieman