Generative AI expands the race for search and meaning—Implications for society and government

Dr. Alan R. Shark

February 15, 2023

When it comes to generative artificial intelligence (AI), the public is ready, but is the technology? Early this month, Google’s parent company Alphabet lost roughly $100 billion in market value in a single day after an ad for the highly promoted rollout of its new Bard AI search engine showed a factual error. Clearly, investors were spooked by such a public display of failure. Only a day earlier, Microsoft had gone public with a demo of its Bing search engine powered by ChatGPT. Microsoft was a bit more cautious when it said, “while Bing using ChatGPT is highly accurate, it should not be relied upon as a sole source of information and its responses should be critically evaluated and confirmed with other sources before being used.” In the days leading up to these announcements, ChatGPT was being widely discussed and tried by the general public, which led major school systems to ban its use and put it at the center of countless debates at faculty meetings. Both companies are introducing generative AI.

Stepping back in time, society has come a long way from the days when well-dressed salesmen went door-to-door selling encyclopedias, expensive printed volumes that brought knowledge into the home at a time when libraries had been the principal domain of knowledge. Who would have envisioned that, many decades later, we would be accessing data and information from our ever-smarter devices, let alone chatting and speaking with them? Similarly, there were many heated debates in engineering schools over the use of handheld calculators instead of the traditional slide rule.

Today we are aided by automated spell- and grammar-checking, and the ever-irritating autofill. We can now create amazing and beautiful NFT art through AI-powered software without any previous art training.

There are, of course, serious limits to what we as humans can absorb and remember at any given time, and so having our trusty smart devices by our side (and bedside) has become indispensable. We are addicted to search. Search has become a multi-billion-dollar business and continues to grow in many significant ways. We want to search at no cost, but we pay instead by giving up much of our privacy so that search companies can monetize our searches, preferences, and whereabouts.

Today we talk to our smart TVs, asking for suggestions of things to watch or listen to. We communicate with our smart devices and with the “smart speakers” in our homes. Every holiday season, it seems, new enhancements are released, quickly making last year’s models obsolete.

Even local and state governments, which initially stood back and watched the evolution of speech recognition and chat technologies, soon began to embrace them. Google search is embedded in many local government websites, searching first in the city or county databases and, if nothing is found, taking the citizen to a rather annoying open-ended search whose results usually have little to do with the original inquiry. Local governments have also found themselves using Siri and Alexa technologies licensed from Apple or Amazon. Even before the pandemic, search and chat enhancements were growing, but during the pandemic the use of such technologies exploded, with mostly positive results. In both the private and public sectors, chatbot technology proved to be a worthy addition for staff who were overwhelmed with calls asking the same routine questions over and over. Chatbots were often deployed as a first line of communication to augment staff resources and to fill in on nights and weekends, allowing staff to focus on unique challenges requiring their expertise. Chatbots began as text only, and as speech recognition technology improved, citizens were able to converse with them, some of which could speak and recognize different languages.
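To make that first-line-of-contact pattern concrete, here is a minimal sketch of a rule-based FAQ bot that answers routine questions and hands everything else off to staff. The questions, answers and matching threshold are hypothetical placeholders, not a description of any particular vendor’s product.

```python
# Minimal sketch of a rule-based FAQ chatbot with a human fallback.
# The FAQ entries and similarity threshold are hypothetical examples.
from difflib import SequenceMatcher

FAQ = {
    "when is trash pickup": "Residential trash is collected weekly; check your zone on the city website.",
    "how do i pay my water bill": "Water bills can be paid online, by mail or at city hall.",
    "how do i report a pothole": "Potholes can be reported through the 311 portal or by calling 311.",
}

def best_answer(question, threshold=0.6):
    """Return the closest FAQ answer, or None if nothing is similar enough."""
    question = question.lower().strip()
    score, answer = max(
        (SequenceMatcher(None, question, known).ratio(), reply)
        for known, reply in FAQ.items()
    )
    return answer if score >= threshold else None

def respond(question):
    answer = best_answer(question)
    if answer is not None:
        return answer
    # Anything the bot cannot handle is routed to staff instead of guessed at.
    return "I don't have an answer for that yet; your question has been forwarded to staff."

print(respond("How do I pay my water bill?"))     # matched by the FAQ
print(respond("Can food trucks park downtown?"))  # falls back to staff
```

Real deployments add natural-language understanding, multiple languages and escalation rules, but the division of labor is the same: the bot absorbs the routine questions so staff can focus on the rest.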

So, what exactly is generative AI? Well, why not go directly to the source and ask ChatGPT. Here is “its” response (it took four seconds): “Generative AI refers to a category of artificial intelligence that focuses on the creation of new, original content. The goal of generative AI models is to generate new examples or variations of existing data, such as images, music, text or even video.

Generative AI is based on deep learning algorithms that are trained on large amounts of data. The model is designed to understand the patterns and relationships within the data, which it then uses to generate new examples that are similar to the training data. This is achieved by sampling from a probability distribution learned by the model.

Generative AI has many potential applications, ranging from creative industries, such as music and art, to more practical uses, such as data augmentation and anomaly detection. However, it also poses some challenges, such as controlling the generated content, ensuring that it is coherent, and avoiding undesirable outputs, such as biased or offensive content.”  
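To make that last point a little more concrete, the toy sketch below shows what “sampling from a probability distribution” means in practice. The five-word vocabulary and the scores are invented for illustration; a real model learns them from enormous amounts of training data.

```python
# Toy illustration of sampling the next word from a probability distribution.
# The vocabulary and scores are made up; a trained model would produce them.
import numpy as np

vocabulary = ["the", "city", "council", "meets", "tonight"]
logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])  # raw scores a model might assign

# Softmax converts the raw scores into probabilities that sum to 1.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Generating text is repeated sampling: pick the next word in proportion to its probability.
rng = np.random.default_rng(seed=0)
print(rng.choice(vocabulary, p=probs))
```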

With generative AI, we can now move beyond searching for a website or a piece of information: we can generate a report, get in-depth answers to questions and, yes, complete a homework assignment. Many see these latest advancements as a further erosion of creative and critical thought. Worse, some view this as the further dumbing down of our nation’s students.

Regardless of how you feel about generative AI, it is here to stay and is growing every moment. It satisfies the curious, helps people understand difficult concepts and helps digest information from a wide variety of sources. As with any technology, for every advantage there is a countervailing risk. Any success will depend on the quality of its data and sources. Without a more robust approach to data acquisition and validation, we will soon be facing “fake AI.” Nothing new here: garbage in, garbage out. The difference is the potential speed and reach of the misinformation.

So, getting back to the larger question of what this means for society, I was about to write what I thought would be a good predictive list, but I stopped and thought: why not go to ChatGPT and ask that very question? In perfect English and punctuation, here is what “it” said:

“As a language model created by OpenAI, I have the potential to assist society in a number of ways:

  • Education: I can help students learn and understand complex concepts by providing concise and accurate explanations. I can also assist educators by generating question-answer pairs for exams, assignments, and more.

  • Customer service: I can be integrated into websites, apps and customer service platforms to provide quick and accurate responses to customer inquiries, 24/7.

  • Health care: I can provide medical information to patients, such as the symptoms, causes, and treatments for various conditions. I can also assist health-care professionals by providing information about drugs, dosages and interactions.

  • Accessibility: I can provide information and support for people with disabilities, such as those who are visually or hearing impaired.

  • Language translation: I can be used to translate text and speech into different languages, making it easier for people to communicate with each other regardless of their native language.

These are just a few examples of how I can assist society. The possibilities are virtually endless, and I will continue to be developed and improved over time to better serve the needs of humanity.”

I’m not sure that all the above can be attributed to ChatGPT or to the larger field of generative AI, as many of the applications mentioned were being carried out long before its arrival. But it sounds good, reads well, and is not inaccurate. Still, I must ask myself: could I have done better, as opposed to simply accepting the novelty of something else spewing mediocre prose? So it appears, at least for now, that I happily give credit (perhaps blame) where it is due, and that is why I decided to share my byline. Will my students do the same, will others provide attribution, and will we succumb to intellectual laziness and simply cut and paste our way through life ahead? Only time will tell.

Dr. Alan R. Shark is associate professor at the Schar School of Policy and Government, George Mason University, and executive director of the CompTIA Public Technology Institute (PTI) in Washington, D.C. He is a fellow of the National Academy of Public Administration and co-chair of the Standing Panel on Technology Leadership. A noted author, his most recent textbook, “Technology and Public Management,” has now been published. He is also the host of the popular bi-monthly podcast, “Sharkbytes.net.”
