What AI and Large Language Models can bring to government
As artificial intelligence (AI) tools begin to transform technology in general, they are also making their mark in government.
The reactions so far from agencies and leaders to these new AI tools have been mixed, ranging from outright prohibition to cautious exploration.
At Ad Hoc, drawing on our practical experience with our federal customers and extensive prototyping and research, we believe AI tools present a valuable opportunity. We argue it’s time agencies realize this potential for several reasons:
- The consumer technology world is swiftly integrating AI into its products. AI is already enhancing existing software such as office productivity suites, and companies are also building AI-native tools from the ground up that offer fundamentally new experiences. Soon, the public's judgment of what counts as a high-quality experience or service will rest in part on whether that service takes advantage of AI. Government agencies must be intentional about using AI to improve their customer experience if they don't want to widen the gap between public expectations and what agencies can deliver.
- When applied appropriately, AI sets itself apart from other emerging technologies by showing real potential to bring value to government. Technologies that have received significant hype in the past, like blockchain, have yet to prove their worth in solving real government customer experience challenges. By contrast, the early capabilities of new AI tools like LLMs have already demonstrated impressive alignment with common government problems. For example, as we mention below, LLMs are well-suited to helping agencies refine and organize large amounts of unstructured data, a challenge facing almost every government agency.
- While artificial intelligence as a field isn't new, recent advancements, particularly in the performance of LLMs, have been significant. These improvements make AI not just an exciting field, but a practical one for enhancing user experiences and the capabilities of software stacks. Unlike emergent technologies such as blockchain, recent progress in AI allows for simpler, less intrusive integration into existing systems and applications, thanks to the mature tools and resources available.
The key to responsibly and effectively using AI tools remains the same as for any technology: agencies must use AI to solve real problems for real people. Those people may be internal staff looking to make sense of unstructured data, or the public looking to find plain-language answers about benefits eligibility. At every turn, agencies should be asking themselves whether the use of an AI tool will help their agency improve the experience of the customers who use its services. That should be the metric they use to choose technologies, how they apply them, and the partners they work with to implement those technologies.
There are still many unsolved policy and implementation issues with AI tools in government, as we’ll address in this post. However, we think that a pragmatic approach to applying AI tools to customer experience problems, steering clear of the riskier and more experimental uses of AI still under heavy research, is a golden opportunity for government to improve public services.
Government and the consumer technology industry are still in the early days of applying AI tools to their services, but we see a number of valuable opportunities for agencies to use AI to improve their customers’ experience.
Potential use cases of AI in government
Enhanced search experience
LLMs are receiving so much attention right now because the experience of using services such as ChatGPT, Google Bard, and Microsoft Bing is so different from traditional web search. Users have expressed a clear preference for asking questions, chatting back and forth, and refining their queries to find the information they're seeking. It's a much more natural and familiar interface than trying to type just the right keyword and then browsing through pages of results.
Government agencies should be evaluating AI-powered semantic search for their own use. Semantic search is a method of searching content that understands a user’s query within its context. Instead of merely looking for keywords, semantic search takes into account the intent and meaning behind the query, providing more relevant and precise results. Coupled with LLMs’ ability to summarize, directly answer questions, and even explain in plain language, semantic search can dramatically shorten and simplify a user’s experience of finding and understanding important information.
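As a rough illustration of the ranking idea behind semantic search, the sketch below scores documents by vector similarity to the query rather than by exact keyword match. The `embed()` function here is a toy bag-of-words stand-in so the example runs on its own; a real system would call an actual embedding model at that point.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # A production system would call an embedding API here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by similarity to the query vector, not keyword match.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "How to apply for retirement benefits",
    "Eligibility rules for disability benefits",
    "Office locations and hours",
]
print(semantic_search("am I eligible for disability benefits", docs, top_k=1))
```

With a real embedding model in place of the toy `embed()`, the same ranking loop would also match queries that share no words with the document, which is the practical advantage semantic search has over keyword search.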
Federal agencies provide a wealth of critical information to the public. Adopting LLM tools to support how people find information, both within federal websites and in difficult-to-access text currently in PDFs and other formats, could vastly improve the discoverability and accessibility of information. Agencies can view the potential inaccuracies of LLMs not as a setback, but as a chance to pioneer strategies in the industry. They can focus on providing more reliable, dependable experiences for the public seeking information, setting a high standard for others to follow.
Insights out of messy data
As Ad Hoc has already seen with one of our government customers, LLMs can give agencies insights into existing unstructured data in ways that are not possible, or not practical, with other tools. Unstructured data refers to information in a database without any indicators about what it means or how it is to be interpreted, such as large text fields. With our customer, we used an LLM to deduplicate a database with tens of thousands of records, matching text with very subtle variations that were missed by other methods. With that clean data, the team was able to extract new insights about program performance that helped them make more informed decisions. Then we hooked the LLM up to an API that nudges future users to input data in a way that matches current writing conventions, helping keep the database clean.
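The grouping logic behind that kind of deduplication can be sketched without an LLM at all. In the sketch below, `similar()` uses Python's `difflib` as a stand-in for the judgment call; in the LLM-based approach described above, that is the step an LLM or embedding model would perform, catching variations that simple string comparison misses.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    # Stand-in similarity check. In an LLM-based approach, this is where
    # a model would judge whether two free-text entries refer to the same
    # thing despite subtle variations in wording.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio() >= threshold

def deduplicate(records: list[str]) -> list[str]:
    # Keep the first occurrence of each group of near-duplicate entries.
    canonical: list[str] = []
    for record in records:
        if not any(similar(record, kept) for kept in canonical):
            canonical.append(record)
    return canonical

records = [
    "Dept. of Veterans Affairs",
    "dept of veterans affairs",
    "Small Business Administration",
]
print(deduplicate(records))
```

Note the pairwise comparison makes this quadratic in the number of records; at tens of thousands of rows, a real pipeline would typically block records into candidate groups (for example, by embedding similarity) before making the expensive comparisons.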
This type of data problem is present in almost every large program in almost every agency in government. Government forms and databases are full of unstructured text fields or text fields that were once governed by one set of rules and now play by different rules. That system inevitably leads to unstructured data, which reduces the value of that data and the ability for leaders to make data-driven decisions. Today’s LLMs can extract meaning from any piece of text and transform it, increasing the value latent in large data systems.
LLMs provide a new tool agencies can carefully apply to their specific data challenges as a way to learn more about their customers, how people use their systems, and what changes would have the most value for the public.
Plain language translations
Even though many federal agencies have made great strides in meeting plain language requirements, many nuances of benefit eligibility, process documentation, and regulations are tied up in legal language that is difficult for even experts to parse. LLMs are remarkably well suited to translating the style and tone of a piece of text. This capacity isn’t just for creating amusing versions of the State of the Union address in a Shakespearean style, but can be a powerful tool in government as well.
Paired with their capability to summarize large amounts of text, LLMs could be used to convert complex information into more understandable, plain language. This has immense potential to aid individuals in grasping how federal laws, regulations, and programs apply to them.
Additionally, the interactive nature of LLMs, which allows them to field clarifying questions and provide updated responses, could enable users to find precisely the information they're seeking. This makes government services and benefits more accessible, promoting a more efficient and inclusive public service delivery system.
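In practice, plain-language translation with an LLM often comes down to careful prompt construction. The sketch below assembles a hypothetical rewrite instruction; the template wording and the `reading_level` parameter are illustrative assumptions, not a prescribed format, and the returned string would be sent to whichever LLM service an agency uses.

```python
def plain_language_prompt(passage: str, reading_level: str = "8th grade") -> str:
    # Assemble an instruction asking an LLM to rewrite legal or regulatory
    # text in plain language. The guardrail sentence matters: the rewrite
    # must preserve every requirement, not just simplify the tone.
    return (
        f"Rewrite the following text at a {reading_level} reading level. "
        "Keep every requirement and condition intact; do not add or drop rules.\n\n"
        f"Text:\n{passage}"
    )

prompt = plain_language_prompt(
    "An individual shall be deemed eligible upon attainment of the age of 65."
)
print(prompt)
```

Keeping the prompt in a single function like this also makes it easy for a team to iterate on the wording and test that outputs still preserve eligibility conditions.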
A call center representative’s new best friend
While increasing the accuracy of responses from LLMs will be a central challenge for both agencies and industry, LLMs can also be invaluable tools for power users of government programs and services, serving as intelligent assistants for important tasks.
For example, call center representatives could have LLM tools available on their workstations that have been trained on an agency's policies and documentation. They could use those tools to look up information for callers, using the specific needs of an individual caller rather than memorized keywords, and have the LLM inform their answers. As people who are deeply knowledgeable about the particulars of an agency, call center representatives would be able to spot inaccuracies and either refine prompts or use other sources for information.
An LLM tool like this doesn’t have to be accurate 100% of the time in order to boost the efficiency of call center representatives, the information they share, and the experience customers have when calling in for help.
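One common pattern for this kind of assistant is to ground the LLM in retrieved agency documentation rather than relying on what the model memorized during training. The sketch below shows only the prompt-assembly step of that pattern; the function name and instruction wording are illustrative assumptions, and the snippets would come from a search over the agency's policy documents.

```python
def build_grounded_prompt(question: str, policy_snippets: list[str]) -> str:
    # Combine retrieved agency policy text with the caller's question so the
    # LLM answers from the supplied documentation rather than from memory.
    # Instructing the model to admit gaps reduces confident wrong answers,
    # which the representative can then resolve from other sources.
    context = "\n".join(f"- {snippet}" for snippet in policy_snippets)
    return (
        "Answer using ONLY the policy excerpts below. "
        "If the excerpts do not cover the question, say so.\n\n"
        f"Policy excerpts:\n{context}\n\n"
        f"Caller question: {question}"
    )

prompt = build_grounded_prompt(
    "Can my spouse be added to my coverage mid-year?",
    ["Dependents may be added within 60 days of a qualifying life event."],
)
print(prompt)
```

Because the representative sees both the excerpts and the model's answer, an inaccurate response is easy to catch, which is why a tool like this can add value well before it is accurate 100% of the time.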
Expanding beyond text to audio and images
Just as text-based AI has improved, so have tools focused on audio and images. Audio tools have made huge strides in converting speech to text and text to speech, even with limited computing power. By combining these tools with an LLM, you could build a system that takes in audio from phone calls, converts it to text, generates a response with an LLM, and converts that text back to speech for the caller. These so-called "multi-modal" interactions with government information make experiences more accessible for people with a wider range of devices.
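The phone-call system described above is essentially a composition of three stages. The sketch below wires them together with stub components so it runs on its own; the stage names and the lambda stand-ins are assumptions for illustration, and a real system would plug in actual speech-recognition, LLM, and speech-synthesis services.

```python
from typing import Callable

def make_voice_pipeline(
    speech_to_text: Callable[[bytes], str],
    generate_reply: Callable[[str], str],
    text_to_speech: Callable[[str], bytes],
) -> Callable[[bytes], bytes]:
    # Compose the three stages: transcribe the caller's audio, generate a
    # text response, then synthesize the response back into speech.
    def handle_call(audio_in: bytes) -> bytes:
        transcript = speech_to_text(audio_in)
        reply_text = generate_reply(transcript)
        return text_to_speech(reply_text)
    return handle_call

# Stub components so the sketch runs; each lambda stands in for a real
# speech or LLM service.
pipeline = make_voice_pipeline(
    speech_to_text=lambda audio: audio.decode(),
    generate_reply=lambda text: f"You asked: {text}",
    text_to_speech=lambda text: text.encode(),
)
print(pipeline(b"when is my renewal due"))
```

Keeping the stages behind simple function interfaces like this lets a team swap out any one service, for example moving to a self-hosted model, without touching the rest of the pipeline.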
Similarly, AI models focused on images and computer vision can now do an excellent job of creating text descriptions of an image. That could provide a breakthrough in automatically detecting document formats and recognizing written characters as the government works to digitize paper records. Even if it's only a small step in a larger modernization project, AI tools have the ability to help agencies make significant advances toward their goals.
What AI is not ready to do
All of the use cases above apply the specific benefits of AI tools to problems they’re well suited to address. One area where AI tools in general are not ready is to make actual decisions on behalf of people. These are referred to conventionally as “agents” in the AI research field. There is a critical difference between asking an LLM if you’re eligible for a government benefit based on the information available on a website, and submitting your personal information to a tool based on AI that then makes an authoritative decision about your eligibility.
We should use AI tools as just that — tools. They can be tools for individuals to find or translate information, for teams to organize or clean data, or for agencies to provide clearer text to their users. We believe humans must remain in the loop in a decision-making process, and AIs should not yet complete transactions for people.
The risks of AI in government
As with any new tool or technology, there are questions and risks about applying AI to complex government systems and within government regulations. Those risks are real, and they are best addressed by exploring exactly what it's like to apply these tools to government challenges.
Some of the primary risks that we see with AI in government are safety, equity, accuracy, and control. The safety and equity questions are largely connected to whether AI tools are providing information or whether they’re making decisions. As we’ve said, we think AI tools are not ready for use in decision-making processes such as hiring, eligibility determinations, or procurement awards. They’re worth exploring as tools that provide input to people who make decisions.
Accuracy is a current issue for people who use AI tools to find definitive facts like they would with search engines. This issue is best addressed by both applying AI tools to the right problems (they are not search engines crawling the internet) and by teams using an iterative approach to constantly improve the accuracy of their AI models. The benefit of AI models and training is that you can always improve on the accuracy of the last version you used. Agencies have a real opportunity to be leaders in this space as the incentives for government are substantially different than the private sector.
Agencies are also rightly concerned with controlling how the data they send to AI tools are used and stored. Some agencies currently prohibit employees from putting any non-public information into AI tools and others are concerned with how commercial AI services store and use submitted data. We recommend agencies explore self-hosted AI tools that allow teams to take advantage of these services while retaining control of their information. Look for more posts soon from Ad Hoc on how to do this.
The risk of inaction
Along with the risks described above is the danger of government agencies waiting until every implementation detail is solved before exploring how AI tools can be applied to their customer experience challenges. The ground is shifting right now on what it means to have a smooth, easy-to-use customer experience. As federal agencies rightly prioritize security, accuracy, and privacy more than some consumer technology companies, they will also face the risk of further widening the gap between the experience people get with consumer services and the one they expect from their government.
Agencies looking to close that gap and further improve the efficiency of their teams and the customer experience of their services should begin exploring how to best apply AI tools to their challenges.
To find out how Ad Hoc can help your team explore the use of AI tools, email us directly or fill out this form to get occasional updates about our work.