How one city is proactively managing AI use—and what local governments can learn from it
AI (artificial intelligence) had a big year last year. A topic once reserved for tech circles, it became a frequent news headline and a regular dinner-table conversation piece. Governments around the world began shaping AI policies. In the U.S., President Joe Biden issued an executive order in October that created several initiatives to promote transparency, safety and security. While this is likely just the beginning of more regulation to come, for now local government agencies are largely left to craft their own nuanced approaches to AI policy.
While AI’s uses and implications are still being worked out, the technology poses a particular security concern for local governments: information entered into public AI tools can become publicly accessible, putting sensitive data, such as security protocols, at risk. And while some municipalities have banned AI outright for this reason, the city of Grove City, Ohio, is taking a different approach. It is creating a model for how other local governments can create and share an AI policy for safe and appropriate use.
Lay the groundwork
Grove City’s approach is grounded in proactive governance and risk mitigation. Before creating an AI policy, however, the city first took a hard look at its policy management practices. At a high level, it wanted to ensure that employees had a clear understanding of acceptable conduct and performance from day one of their career journey. Upon review, the city realized it lacked formal policies for several aspects of employee conduct and performance. As a result, the city adopted a policy management system already utilized and proven by the Grove City Police Department. It used the system to lay the foundation, transfer its existing policies into the cloud, and then establish and add new policies, achieving a more formalized approach to compliance and communication.
With this simple move, Grove City immediately added an enhanced layer of professionalism to its HR processes. When employees start their career at the city, they now have clear expectations and accountability for a wide range of policies, from the appropriate use of assets to the city’s drone policy. Employees acknowledge and sign off on these policies as well, so when the city must undergo an audit, it has a direct record of signatures to establish compliance.
Additionally, and perhaps most importantly, when a policy is updated or a new policy is created to adapt to an evolving technology, Grove City has a simple, straightforward way to communicate with employees in real time and ensure they have a comprehensive understanding of what the policy means for them.
Understand the nuances of new technology
According to NEOGOV’s 2024 HR Trends report, “operational efficiency is important for any organization, but especially for government agencies struggling with staffing shortages and high turnover. The majority of agencies say their operational efficiency, which is defined as the ability to deliver high-quality service at scale with few resources, is good (46 percent), and even excellent (10 percent). To improve these operational inefficiencies, agencies are turning to technology.”
AI is a good example of a technology that can be used to improve efficiencies, yet it also highlights why policies and systematic policy management are necessary. As a relatively new and rapidly evolving technology, AI is a welcome tool, especially for the HR function. Its advantages are clear; for local governments, however, it is crucial to understand that there are also risks to consider.
Take ChatGPT. It is now one of the most widely used AI tools, according to Forbes. And with good reason. It has proven to be incredibly effective for many individuals and organizations, streamlining many otherwise tedious and time-consuming tasks. But using a tool like ChatGPT comes with inherent risk.
For example, hypothetically, if Grove City’s HR team was interested in changing the city’s dress code, it could easily ask the free-to-use chatbot ChatGPT to write a policy explaining that a blue polo is now required. However, what many don’t realize is that data fed into ChatGPT is no longer under the organization’s control; it may be retained and used in ways the organization cannot oversee. This means the data could surface outside the organization, and potentially in ways that were not intended.
In this hypothetical example, the data is not sensitive, so the risk is low. However, local governments handle a large amount of sensitive information that should not be shared publicly. This is where the right policy comes into play.
According to the same HR Trends report referenced earlier, a whopping 78 percent of government agencies do not have documented policies or procedures surrounding AI, even though it is already being used to help agencies automate routine tasks, develop data-driven policies, and improve service delivery.
Create a policy that works
In Grove City, the information systems department took all of this into consideration and championed the development of a policy to ensure the city could embrace the operational efficiencies that come with AI, while ensuring its sensitive data remained protected.
First, the city took a detailed look at its data and classified it accordingly. Generally, it fell into four classification levels: public data, internal data, confidential data and restricted data. The city then determined that AI fit into the appropriate use policy and updated that policy with the core consideration that all uses must be approved by the information systems department. Employees must sign off on this policy at the time of hire, and then on an annual basis.
When considering a request to use AI, the information systems team reviews what classification of data a department is using and whether it will put the city’s data at risk. For example, any information that would need to be redacted upon reporting, such as personally identifiable information (PII) or victim information, would not be approved for AI use. Additionally, Grove City’s information systems department manages IT for five township fire departments in the city, so healthcare considerations, such as patient information and HIPAA compliance, also come into the equation.
While the above scenarios are mostly common-sense judgments about what qualifies as sensitive information, other scenarios are more nuanced, such as sharing information about the city’s infrastructure, given the potential for cybersecurity risk. According to Forbes, “global cybercrime damage costs are expected to grow by 15 percent per year over the next two years, reaching $10.5 trillion USD annually by 2025.” This is a crucial consideration. In Ohio, water, sanitary, stormwater, communication and IT infrastructure are protected, and details about them should not be disclosed, as exposure can put a municipality at high cybersecurity risk.
An evolving technology landscape
Whether local governments decide to enact a formal policy or more informal guidelines, it is important that they recognize that AI is a growing force in our technology landscape—one that must be given careful consideration.
The efforts in Grove City serve as a model for local governments grappling with the complexity of AI governance. As the nature of AI changes, and experts learn more about its many pros and cons, the policies can and will shift as well. Having a strong framework in place will help adapt to an ever-changing technology environment to continue to ensure the safety and security of employees, citizens, and municipalities as a whole—now and in the future.