AI guidelines put in place for administering public benefits programs
As the use of artificial intelligence (AI) spreads to local governments, there is a growing opportunity to streamline processes and increase efficiency in government programs. The budding technology also poses risks, however, including a potential lack of human oversight, misuse, inaccuracies and discriminatory outcomes.
After issuing an executive order in October to develop AI safety, security and transparency policies, President Biden’s administration last week released initial guidelines on managing the technology in the administration of public benefits programs.
Frameworks have so far been developed by the Department of Agriculture, for the administration of its food and nutrition programs, and the Department of Health and Human Services.
The guidelines provide definitions for key terms and describe the risk factors related to the use of AI in multiple instances. Both policies recommend transparency about where and when AI is used in government programs, as well as giving users the option to opt out of AI when using the programs.
The guidelines also emphasize human oversight via a “human in the loop” process, which requires a human to approve any AI recommendations before they are enacted.
“It is important to carefully design these ‘human in the loop’ processes to ensure that human oversight provides the intended validation of AI outputs and does not result in human actions becoming a ‘rubber stamp’ without the expected scrutiny,” the USDA stated.
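The guidelines stop short of prescribing an implementation, but the pattern they describe amounts to an approval gate: the AI proposes, a person decides. Below is a minimal Python sketch of that idea; the system and all names in it (AIRecommendation, enact and so on) are hypothetical illustrations, not drawn from any agency’s actual software.

```python
# Minimal sketch of a "human in the loop" approval gate (hypothetical system).
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"


@dataclass
class AIRecommendation:
    applicant_id: str
    suggested_action: str  # e.g., "grant benefits" or "request more documents"
    rationale: str         # explanation shown to the human reviewer


def enact(rec: AIRecommendation) -> None:
    # Placeholder for whatever actually changes the applicant's case.
    print(f"Enacted '{rec.suggested_action}' for applicant {rec.applicant_id}")


def human_in_the_loop(rec: AIRecommendation, reviewer_decision: Decision) -> bool:
    """Enact an AI recommendation only after an explicit human decision.

    Nothing happens unless a person affirmatively approves, which is the
    structural safeguard against the "rubber stamp" failure mode the USDA
    warns about.
    """
    if reviewer_decision is Decision.APPROVE:
        enact(rec)
        return True
    # Rejected recommendations are never enacted; log them for audit instead.
    print(f"Reviewer rejected recommendation for applicant {rec.applicant_id}")
    return False
```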
Along with transparency, the guidelines require agencies to provide explanations to impacted parties if AI influences adverse decisions, such as denial of eligibility for benefits or the assessment of penalties. The USDA recommended not using AI to make any directly adverse decisions.
Agencies must also be able to identify the benefit of AI in each function it’s incorporated into and estimate what will be gained from its use, according to the guidelines. To avoid risks of misuse, it is also critical that AI handle only stable, well-understood tasks in the administration of public benefits programs.
“AI should be used for business functions that are well understood and where staff have the knowledge and skills to evaluate performance,” the USDA stated. “AI should not be used for immature business functions with a goal of an AI discovering new approaches or efficiencies.”
Regarding data collection, the guidelines suggest agencies collect only the data needed to effectively administer benefit programs, using high-quality, representative data sets to train AI and following best practices for privacy and security to ensure data does not leak beyond its intended use.
Likewise, should an AI not perform as expected or create a situation of risk or harm, its function “should be able to be disabled without creating unacceptable disruption to service delivery,” according to the guidelines. As an example, the USDA cited an AI-powered chatbot dispensing misinformation, an issue that recently occurred with New York’s MyCity chatbot, which had been giving business owners incorrect legal information in early April, according to a report by Reuters.
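The guidelines don’t spell out how such a shutoff should work. One common approach is a runtime flag that routes users to a non-AI fallback, sketched below in Python; the flag, function names and fallback text are all hypothetical.

```python
# Minimal sketch of a runtime "off switch" for an AI feature (hypothetical).
AI_CHATBOT_ENABLED = True  # in practice, a config value that can be flipped live


def ai_answer(question: str) -> str:
    # Placeholder for the AI-generated response.
    return f"(AI-generated answer to: {question})"


def fallback_answer(question: str) -> str:
    # Deterministic, human-curated response used while the AI is disabled,
    # so service continues without unacceptable disruption.
    return "Our automated assistant is offline. Please see the FAQ or contact staff."


def answer(question: str) -> str:
    """Route to the AI only while it is enabled; otherwise degrade gracefully."""
    if AI_CHATBOT_ENABLED:
        return ai_answer(question)
    return fallback_answer(question)
```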
“Any time you use technology, you need to put it into the real environment to iron out the kinks,” New York Mayor Eric Adams told reporters at the time.
Numerous cities have already begun adopting their own AI guidelines and values, including Washington, D.C., in February and Boston in May 2023. In October, New York put an Artificial Intelligence Action Plan in place.
In other AI-related initiatives, the Biden administration announced the launch of an AI Safety and Security Board to advise the Secretary of Homeland Security. The Department of Defense also began piloting AI tools for identifying vulnerabilities in government software systems.