Regulatory consultations are simply not plebiscites. They are not opportunities for the public to vote a ruling up or down; they are meant to determine whether the ruling has addressed all reasonable issues. So if a member of the public raises an issue that is new or not addressed in the ruling, that issue matters, and the agency needs to address it. If a submission simply repeats a point made by others, or one the ruling has already addressed, it is superfluous.

For this reason, there have been efforts to use text analytics to read the flood of submissions enabled by email and other electronic messaging, which can produce thousands if not millions of comments – more than a few people can read. This area of development has been called e-Rulemaking. Using text analytics, it is possible to identify a new point, and easy to identify duplicated submissions, for example. One million duplicated submissions that raise no new points add nothing to a consultation, unless people are treating them like votes, which would be inappropriate. That is why it is really not that worrisome that bots are participating in a consultation. There is a human being behind every bot, and unless the bot makes a new, valid, and reasonable point, its significance is nil.
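To make the duplicate-collapsing idea concrete, here is a minimal sketch of exact-duplicate detection. It is purely illustrative, not how any particular e-Rulemaking tool works: real systems use far more sophisticated near-duplicate clustering, while this sketch only hashes each submission after normalizing case and whitespace, so one million copies of a form letter collapse into a single entry with a tally.

```python
import hashlib
import re

def normalize(text):
    """Lowercase and collapse whitespace so trivially edited copies match."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def deduplicate(comments):
    """Group submissions by a hash of their normalized text.

    Returns (representative_text, count) pairs: each distinct point
    appears once, with a count of how many submissions made it.
    """
    groups = {}
    for comment in comments:
        key = hashlib.sha256(normalize(comment).encode("utf-8")).hexdigest()
        if key in groups:
            groups[key][1] += 1
        else:
            groups[key] = [comment, 1]
    return [(text, count) for text, count in groups.values()]

# Hypothetical inbox: three copies of a form letter and one novel point.
comments = [
    "Please reject this rule.",
    "please  reject this rule.",
    "PLEASE REJECT THIS RULE.",
    "The rule ignores small-business compliance costs.",
]
for text, count in deduplicate(comments):
    print(count, text)
```

Under this sketch's assumptions, four submissions reduce to two distinct points – which is exactly the sense in which a million duplicates carry no more weight in a consultation than one.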

Am I wrong? Let me know.

But try as you might to use comments on consultations as if they were votes, this would be a misunderstanding of their role. They should not be counted as if they were a plebiscite. Software like Texifter's DiscoverText is available and under development to filter and sort through mountains of comments to surface valid points for consideration in a consultation. Stu Shulman, the inventor of @discovertext, playfully characterizes these tools as BS filters. See a recent talk by Stu to get a sense of the history and status of work in this area: http://quello.msu.edu/event/erulemaking-a-history-a-theory-a-flood/