How text analytics may be key in soliciting feedback on proposed rules

Tom Sabo

July 29, 2020

In October 2019, Congress issued a bipartisan report finding that government “agencies do little or nothing to stop crimes and abuses committed in the systems they use to collect public comments on proposed regulations,” and that they are lax in determining which comments on proposed rules are real and which are fake. Duplicative or fake public comments, especially at the state and local level, can skew public policy decisions and delegitimize entire bodies of public commentary. The stakes are high as government agencies at all levels continue to seek public comment on COVID-19 issues that will impact rulemaking and shape pandemic response.

Agencies challenged to distinguish real comments on proposed rules from fake ones are turning to text analytics, which lets them sift through large volumes of written language data, gauge sentiment accurately and inform new policies. Natural language processing (NLP) coupled with text analytics and AI could give the public sector a workable path for analyzing publicly solicited comments and submission responses on proposed regulations or coronavirus-related grants.

How NLP, Text Analytics and AI Could Help Solve This Issue

One regulatory example – “Regulation of Flavors in Tobacco Products” – sought information on how flavors attract youth to initiate tobacco product use and on whether and how certain flavors may help adult cigarette smokers reduce cigarette use. The docket received over half a million comments, making it difficult to evaluate which were legitimate and which weren’t. Manually reviewing that many comments would take hundreds if not thousands of hours; retrieving them through an API and applying text analytics surfaces the applicable, legitimate comments, enabling the agency to adjust the proposed rule appropriately.
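As a rough illustration, the sketch below pages through public comments on a document via the Regulations.gov v4 API. The endpoint, authentication header, filter names, response attribute names and the document ID shown are all assumptions made for illustration, not a verified recipe; consult the API documentation for the actual parameters.

```python
import requests

API_KEY = "YOUR_API_KEY"  # assumed: an api.data.gov key for Regulations.gov
BASE_URL = "https://api.regulations.gov/v4/comments"  # assumed v4 endpoint

def fetch_comments(document_id, page_size=250, max_pages=20):
    """Page through public comments posted on a document (ID is hypothetical)."""
    comments = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            BASE_URL,
            headers={"X-Api-Key": API_KEY},          # assumed auth header
            params={
                "filter[commentOnId]": document_id,  # assumed filter name
                "page[size]": page_size,
                "page[number]": page,
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json().get("data", [])
        if not data:
            break
        # The "comment" attribute name is an assumption; check the API schema
        comments.extend(item.get("attributes", {}).get("comment", "") for item in data)
    return comments

# Hypothetical document ID standing in for the flavors docket
texts = fetch_comments("FDA-2017-N-6565-0001")
print(f"Retrieved {len(texts)} comments")
```

Once the raw text is in hand, the analytics steps described below can be applied to it.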

Through NLP, text analytics and AI, this data can be sorted and analyzed far more efficiently. Text analytics surfaces the patterns relevant to responding to the public and puts them in front of analysts, making it easier to spot themes that require a response and to group related comments together. It can also identify the key organizations responding to a solicitation and their stance on the regulation, deepening understanding of the responses.
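One common way to surface such themes – a minimal sketch of a generic approach, not the method any particular agency or product uses – is to convert comments to TF-IDF vectors, cluster them, and read off each cluster’s dominant terms:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def surface_themes(comments, n_themes=8, top_terms=6):
    """Cluster comments and print each cluster's dominant terms as rough themes."""
    vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vectorizer.fit_transform(comments)
    km = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit(X)
    terms = vectorizer.get_feature_names_out()
    order = km.cluster_centers_.argsort()[:, ::-1]  # heaviest terms first
    for i in range(n_themes):
        label = ", ".join(terms[j] for j in order[i, :top_terms])
        size = int((km.labels_ == i).sum())
        print(f"Theme {i} ({size} comments): {label}")

surface_themes(texts)  # 'texts' from the retrieval sketch above
```

In practice an analyst reviews these candidate themes and merges or relabels them; the clustering only proposes groupings for human judgment.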

Text analytics provides state and local governments with greater consistency, transparency and scalability: consistency, because the same analysis is applied to every comment, every time; transparency, because the software can summarize its findings and show how it groups themes together; and scalability, because it can handle a regulation that draws a hundred comments or half a million in roughly the same amount of time.

Challenges Agencies Still Face

A common objection to implementing text analytics is the worry that, because many citizens submit form letters, their comments will be discarded for their similarity to other submissions. Fortunately, analytics can detect even slight differences among form letters – names, places of origin and other personalized details – so it is possible to train models to recognize both form letters and fake comments, at the level of an individual regulation and across all regulation comments.
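A simple way to see this – a sketch under the assumption that TF-IDF cosine similarity is a reasonable proxy for textual overlap – is to group comments whose pairwise similarity exceeds a high threshold; the shared boilerplate pushes campaign letters’ similarity near 1.0, while the personalized fragments keep them from being exact duplicates:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_form_letters(comments, threshold=0.9):
    """Group comments whose pairwise cosine similarity exceeds a threshold.

    High-similarity groups are likely form-letter campaigns; personalized
    fragments (names, hometowns) keep them from being exact duplicates.
    """
    X = TfidfVectorizer(stop_words="english").fit_transform(comments)
    sim = cosine_similarity(X)
    groups, assigned = [], set()
    for i in range(len(comments)):
        if i in assigned:
            continue
        members = [j for j in range(len(comments)) if sim[i, j] >= threshold]
        if len(members) > 1:
            groups.append(members)
            assigned.update(members)
    return groups

for group in flag_form_letters(texts):
    print(f"Likely form-letter campaign: {len(group)} near-identical comments")
```

The all-pairs comparison here is O(n²) and would not scale to half a million comments; a production system would first narrow the candidates with an approximate technique such as MinHash with locality-sensitive hashing.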

Additionally, local governments face criticism from citizens worried that AI will take jobs from human workers. In fact, most text analytics is designed to supplement human judgment by highlighting patterns that still require manual review. The technology typically operates with a human in the loop, augmenting analysts’ work rather than replacing it and combining the best of what machines and human ingenuity can offer. This augmentation ultimately delivers a better response to the public in a shorter amount of time.
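A human-in-the-loop design can be as simple as routing on model confidence. The sketch below assumes a hypothetical classifier trained on labeled genuine and suspect comments (any model exposing scikit-learn’s predict_proba interface); anything the model is unsure about goes to an analyst:

```python
def triage(comment, classifier, review_threshold=0.75):
    """Route a comment to automated handling or to a human analyst.

    'classifier' is any model exposing predict_proba (e.g., a scikit-learn
    pipeline trained on labeled genuine/suspect comments) - a hypothetical
    stand-in for illustration, not a specific product's interface.
    """
    proba = classifier.predict_proba([comment])[0]
    label = int(proba.argmax())
    if proba[label] < review_threshold:
        return "human_review"  # model is unsure: a person makes the call
    return "auto_genuine" if label == 0 else "auto_suspect"
```

The threshold is a policy knob: raising it sends more comments to people, trading staff time for assurance.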

Future Outlook

Widespread deployment of text analytics will improve two-way communication between local governments and citizens and help the rulemaking process fulfill its obligations under the law. This is not a question of whether the technologies are mature, but of agency and consulting-partner adoption. Individual agencies are already starting to leverage these capabilities, either directly through their own text analytics processes or through consulting engagements.

As states and municipalities focus on resiliency in the wake of the pandemic, the impetus is there to set these processes in motion and enable better understanding of public sentiment on proposed rules going forward. Human-in-the-loop analytics applied to areas such as regulations analysis, economic development and recovery, and revenue impact will ultimately make for a more resilient government, better able to respond to the changing needs of its citizens.

Tom Sabo is a principal solutions architect with SAS who, since 2005, has been immersed in the field of text analytics as it applies to federal government challenges. He presents work on diverse topics including modeling applied to government procurement, best practices in social media, and using analytics to leverage and predict research trends.
