How can local governments regulate generative AI? Just ask ChatGPT
Hardly a day goes by without some new warning about the dangers of artificial intelligence (AI) and, in particular, generative AI. Given the vast amounts of personal data collected and stored on government systems, state and local governments have little choice but to confront AI with proactive usage policies. The governor of Connecticut recently signed a bill into law calling for an AI Bill of Rights aimed at protecting citizens. The legislation follows a similar initiative from the White House, which presented a framework for an AI Bill of Rights at the federal level. Professors across the nation worry that students will turn to generative AI to research and write class assignments and even dissertations. Adding to the growing list of warnings and concerns, the Federal Trade Commission (FTC) has opened an investigation into OpenAI, the maker of ChatGPT, over whether the chatbot has harmed consumers.
Public managers share many of the same concerns as private-sector managers when it comes to limiting or even prohibiting workers from using generative AI on the job. The shared concern is that employees will input sensitive information in exchange for the solutions they seek. The problem is that once data is entered into an open AI system, there is no control over what happens to the content, or over where, when and in what form that data might be shared with others.
Even the companies that offer generative AI cannot say with certainty what happens to data entered into their systems. Ironically, they designed the very "black boxes" whose inner workings no one, not even the programmers, fully understands. Another growing concern is that output can look incredibly real, well laid out and free of typos, yet contain information and citations that are absolutely false. It did not take long for the term "hallucinations" to emerge as the label for this phenomenon.
Given generative AI's complexity, it is nearly impossible to identify the sources behind its data models, and all such systems lack transparency and accountability. Even sophisticated users have found it difficult, if not impossible, to assess the veracity of written reports and other forms of information. Complex algorithms sit at the heart of today's generative AI systems, where there are inherent dangers in mixing reliable source data with bad data, and a risk of propagating unintended bias or prejudice. Worries extend to the protection of intellectual property and to the possibility that bad actors (both domestic and international) could manipulate or fabricate data sources to spread propaganda, disinformation or misinformation, adversely impacting public opinion, tilting an election or turning policies upside down, all of which would further erode public trust.
With so much at stake, governments at all levels are struggling to create policies that offer legal remedies, protect privacy and address liability for unintended consequences. So as a starting point, ChatGPT was asked, "How can local governments regulate generative AI?"
According to ChatGPT: “Regulating generative AI at the local government level can be a complex task, but here are some potential approaches that local governments can consider:
- Establishing ethical guidelines: Local governments can create ethical guidelines that outline the responsible use of generative AI technology. These guidelines can address issues such as bias, privacy, transparency and accountability. They can provide a framework for developers and users to follow when creating or implementing generative AI systems.
- Licensing and certification: Governments can require individuals or organizations working with generative AI to obtain licenses or certifications. This process can involve demonstrating a certain level of understanding and adherence to ethical guidelines and best practices. Licensing and certification can help ensure that those developing and using generative AI technologies have the necessary expertise and are accountable for their actions.
- Data protection and privacy: Local governments can enact regulations to protect the privacy of individuals and the data used by generative AI systems. This can include requirements for obtaining explicit consent, secure data storage and anonymization practices. These regulations can help mitigate privacy risks associated with generative AI applications.
- Algorithmic transparency: Local governments can advocate for transparency in generative AI algorithms. This can involve requiring developers and organizations to disclose information about the algorithms and data sources used in generative AI systems. Transparency measures can help uncover biases or potential ethical concerns associated with the outputs generated by AI systems.
- Public-private partnerships: Governments can collaborate with industry experts, researchers, and AI developers to develop standards and best practices for generative AI technology. By fostering collaboration between the public and private sectors, local governments can leverage the expertise of different stakeholders and develop effective regulations that balance innovation and societal impact.
- Testing and evaluation: Governments can establish testing and evaluation procedures to assess the performance, safety and ethical implications of generative AI systems. These procedures can include audits, third-party assessments and regular reporting to ensure compliance with regulations and guidelines.
- Public awareness and education: Local governments can play a crucial role in educating the public about generative AI technology, its potential benefits and risks. They can organize workshops, public forums and awareness campaigns to help citizens understand the implications of generative AI and encourage responsible use.
It is important to note that the regulation of generative AI is a rapidly evolving field, and local governments should engage with experts, stakeholders and the wider AI community to develop informed and effective regulatory approaches. Collaboration between various levels of government and international bodies can also help create consistent and globally harmonized regulations for generative AI.”
Given ChatGPT's vast knowledge base, it is not particularly surprising that it provided the above outline of what local governments might include in AI policies and regulations. But one must ask: How was this outline developed? From a single source or from multiple sources? While the outline is useful, sound policy development requires far greater detail and specificity. Here, policy development remains very much in the hands of humans. Clearly there is much more to learn about AI and ChatGPT, but judging by its initial response to the question asked, this technology is already way ahead of humans.
Dr. Alan R. Shark is the executive director of the Public Technology Institute (PTI), a division of Fusion Learning Partners, and an associate professor at the Schar School of Policy and Government, George Mason University. He is a fellow of the National Academy of Public Administration and co-chair of the Standing Panel on Technology Leadership. A noted author, his most recent textbook, "Technology and Public Management," was published in January 2023. He is also the host of the popular bi-monthly podcast Sharkbytes.net.