Public sector data stewardship for the AI era


Artificial intelligence (AI) and generative AI (GenAI) are rapidly transforming the public sector, moving beyond theoretical possibilities to real-world applications. Proper data preparedness, stewardship, and governance will play critical roles in successful GenAI implementations. 

We recently hosted a webinar, Public sector data stewardship for the AI era, with industry experts Max Klaps, research director at IDC, and Dave Erickson, distinguished architect at Elastic. They explored the current state of GenAI adoption in government, education, and defense and dove into the data challenges and opportunities GenAI presents.

The evolution of AI in government

There’s been a significant shift in how government agencies and other public sector organizations approach AI. Initially, organizations experimented with various AI tools and pilot projects. However, the focus has now shifted toward identifying specific use cases that deliver tangible value and align with the organization's mission and key performance indicators (KPIs).

According to IDC research, about half of public sector organizations are running pilots, and 20% are implementing AI in production. The key question now is where AI can drive the most significant impact. Organizations are prioritizing use cases that enhance operational efficiency, improve resilience, reduce errors, ensure compliance, and provide better observability into their processes. Ultimately, the goal is to leverage AI, particularly GenAI, to achieve better outcomes for the public sector workforce, citizens, and students.

Prioritizing high-impact use cases

The focus has been on two sets of use cases: "horizon one" use cases, which aim for early wins and test existing capabilities, and future-oriented use cases with higher impact and an external focus.

Horizon one use cases often involve internal processes, such as critical national infrastructure protection, financial market oversight, dynamic digital legislation, public communication and notification, and AI research and writing assistance for higher education. These use cases often revolve around content access, summarization, and preparation.

Looking ahead, public sector leaders are exploring and scaling use cases that directly impact mission outcomes. These include enhancing service delivery, reducing the burden of tax compliance, ensuring payment integrity and reducing fraud, integrating natural language capabilities into 311 systems, and hyper-personalizing student recruitment and intervention in higher education.

Overcoming challenges and ensuring data readiness

Implementing GenAI is not without its challenges, with common obstacles such as: 

  • Governance

  • Risk

  • Security

  • Cost control

  • Scalability 

But one recurring theme is the critical importance of data readiness. Although there's a need for high-quality data, quantity isn't necessarily the primary concern. Public sector organizations can leverage pretrained models and focus on providing the AI with relevant, curated data for specific use cases. This approach, known as retrieval augmented generation (RAG), ensures that AI answers are grounded in authoritative information and reduces the risk of inaccurate or biased outputs. The quality of data being fed to the generative models is critical.

RAG: A key pattern for success

RAG is a crucial workflow for grounding GenAI with proper context. Instead of relying solely on the model’s pre-existing knowledge, RAG involves retrieving relevant data from an organization's proprietary data (e.g., documents, images, audio) and using that data to inform the AI's response. This approach enhances the accuracy, trustworthiness, and explainability of AI-generated answers.

Elastic plays a significant role in enabling RAG. Our vector database enables organizations to store, retrieve, and analyze vast amounts of data, making it easier to ground AI in authoritative information.
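To make the RAG workflow concrete, here is a minimal sketch of the pattern in Python. A production deployment would use a vector database (such as Elasticsearch) and a learned embedding model; here a toy bag-of-words embedding and an in-memory document list stand in for both, and the sample documents are illustrative, so only the retrieve-then-ground flow itself carries over.

```python
# Minimal sketch of the retrieval augmented generation (RAG) pattern:
# retrieve the most relevant organizational documents for a query, then
# hand them to the generative model as grounding context.
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts.
    A real system would use a dense embedding model instead."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank the organization's documents by similarity to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble what the generative model would receive: the user's
    question plus the retrieved authoritative context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )


# Illustrative stand-in for an agency's curated knowledge base.
corpus = [
    "Permit applications must be filed within 30 days of project approval.",
    "The 311 system routes non-emergency service requests to city departments.",
    "Student intervention plans are reviewed each semester by advisors.",
]

prompt = build_grounded_prompt("How are 311 service requests handled?", corpus)
print(prompt)
```

Because the model only sees curated, retrieved passages, its answer can be traced back to authoritative sources, which is the accuracy and explainability benefit described above.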

Responsible AI and risk mitigation

Responsible AI involves ensuring that AI systems are ethical, explainable, and transparent. Organizations can take several practical steps to promote responsible AI, including:

  • Assessing and categorizing the risk levels of different use cases

  • Prioritizing risk mitigation strategies, such as implementing data security protocols and detecting bias

  • Establishing clear accountability and reporting mechanisms

  • Engaging with the public to explain the risks and opportunities of AI
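The first step above, assessing and categorizing risk, can be sketched as a simple triage exercise. The sketch below scores each hypothetical use case on mission impact and data sensitivity and buckets it into a risk tier; the use cases, scores, and thresholds are illustrative assumptions, not a framework from the webinar or from NIST.

```python
# Hypothetical triage sketch: score each GenAI use case on impact and
# data sensitivity, bucket it into a risk tier, and work the highest-risk
# cases first. Scores and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    impact: int       # 1 (internal convenience) .. 5 (mission critical)
    sensitivity: int  # 1 (public data) .. 5 (regulated personal data)


def risk_tier(uc: UseCase) -> str:
    """Map an impact x sensitivity score to a coarse risk tier."""
    score = uc.impact * uc.sensitivity
    if score >= 15:
        return "high"    # full review, guardrails, human oversight
    if score >= 6:
        return "medium"  # standard controls, periodic evaluation
    return "low"         # lightweight monitoring


portfolio = [
    UseCase("internal writing assistant", impact=2, sensitivity=1),
    UseCase("311 natural-language intake", impact=4, sensitivity=2),
    UseCase("payment integrity / fraud detection", impact=5, sensitivity=5),
]

# Prioritize mitigation effort toward the riskiest use cases.
for uc in sorted(portfolio, key=lambda u: u.impact * u.sensitivity, reverse=True):
    print(f"{uc.name}: {risk_tier(uc)}")
```

Even a coarse scheme like this gives an organization a shared vocabulary for deciding which use cases need the heaviest guardrails before anything reaches production.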

It’s essential to use a common language and framework for discussing AI risks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) in the United States. Another important consideration is separating the compensating controls for responsible AI from the AI itself — in other words, maintaining control over the guardrails you need. Also, continuous evaluation of AI-generated answers is essential for ensuring ongoing public trust.

Preparing the workforce for GenAI

People are crucial to the successful implementation of GenAI. Organizations need to invest in training and development to ensure that their workforce is prepared for this shift. Key areas of focus include:

  • Establishing AI awareness (and risk) training for all employees

  • Providing technical staff with the tools and opportunities to work with AI

  • Leveraging the expertise of the partner ecosystem, such as academic research institutions and standards bodies

Create spaces where staff can experience AI's limitations and learn how to use it effectively as a tool. Emphasize moving away from the mindset of AI as an all-knowing entity and embrace a more practical approach that stresses understanding AI's capabilities and limitations.

Learn more

Tune in to Public sector data stewardship for the AI era for more insights on capitalizing on the incredible power and potential of GenAI.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

In this blog post, we may have used or referred to third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use. 

Elastic, Elasticsearch, ESRE, Elasticsearch Relevance Engine and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.