
Five Key Actions for Government Leaders to Prepare for the AI Era
As artificial intelligence (AI) continues to transform how organizations operate, Government leaders face immense opportunities alongside heightened risks. By taking the following actions, leaders can position their agencies to harness AI’s benefits while safeguarding public trust.
Establish Clear Ethical Frameworks for AI Use
Leaders need to establish clear and transparent rules that set boundaries for ethical AI use without limiting innovation. Instead of attempting to regulate fast-moving technologies directly, agency frameworks should focus on setting standards for conduct, accountability, safety, and ethical behavior. These frameworks allow agencies to apply AI technology to mission goals as AI continues to evolve.
One example is NASA’s Ethical AI Framework, which outlines six guiding principles for how the agency approaches AI use. The framework explains each principle in the context of NASA’s work and offers a set of reflective questions for users to consider when applying AI.
The framework emphasizes practical steps for the next five to ten years, while also laying early groundwork for potentially transformative shifts as AI approaches, and perhaps surpasses, human-level performance.
Invest in Federal Employee Training and Education
To help their workforce thrive in an AI-driven environment, leaders should invest in training and communication efforts to build the right mindset. Providing opportunities for hands-on learning and practice helps employees gain confidence in using AI tools while developing a clear understanding of their strengths and limitations.
The U.S. General Services Administration designed a training series to educate all Federal employees on topics such as compliance with AI-related regulations, generative AI fairness, multimodal foundation models, data privacy considerations, and AI auditing.
In addition to formal training, agencies can encourage employees to experiment with AI tools in low-stakes contexts. At the same time, agencies must build safeguards against AI-enabled threats such as cyberattacks, disinformation, and deepfakes. Training programs that help employees recognize and respond to AI-related risks not only make Government operations more secure but also reassure the public that these technologies are being used with integrity.
Build Capacity for AI Through Targeted Pilots and Feedback
Government leaders should take an iterative approach when implementing AI in their agencies. They should pilot AI tools in targeted use cases, evaluate outcomes, capture lessons learned, solicit feedback, and refine strategies before full implementation.
For instance, the Department of Veterans Affairs’ (VA) Office of the Chief Technology Officer is piloting two internal generative AI chat interfaces that assist employees with administrative tasks such as drafting emails and summarizing documents and meeting notes. According to the AI Use Case Inventory page on the VA website, the pilot currently has 40,000 users, and over 80 percent of surveyed users agree that the AI tool has helped them work more efficiently.
This iterative approach enables continuous improvement and allows agencies to tailor AI solutions to the needs of their employees and missions.
Prioritize Transparency to Promote Public Trust and Collaboration
AI adoption in Government succeeds only when the public is confident in its responsible use. Government leaders can earn that confidence by being transparent about how they are adopting AI tools in their work.
The Department of Veterans Affairs’ website links to an Excel workbook with a comprehensive list of how the agency is using AI to improve services. The workbook contains more than 200 rows of data describing each use case’s name, topic area, intended purpose and expected benefits, system outputs, and stage of development. This level of transparency shows the public how the Government is aligning new technologies with its mission to serve citizens effectively.
Increased transparency also opens the door for internal and external partners to exchange insights and collaborate on building successful AI initiatives. By publishing the VA AI Inventory, best practices can be shared across VA departments, other Federal agencies, and beyond.
Apply Strategic Foresight
Government leaders should assess possible futures and uncover prospective needs that directly inform their AI strategy, governance, and implementation plans. Agencies should begin with a foresight study to define the objectives they want to achieve with AI, scope what should be analyzed, and estimate resource requirements and schedules.
Next, agencies should gather information to frame their current AI operating environment, including workforce readiness, data infrastructure, and governance maturity. With a strong understanding of the agency’s current operating model and context, a horizon scan can then gather relevant evidence to inform futures forecasts and scenarios. This can include reviewing journals, media articles, studies, and research to identify drivers of change and emerging trends in the AI landscape. Agencies can use this research to extrapolate trends, project them into the future, and develop scenarios that test innovation and resilience.
Arc Aspicio’s strategic foresight framework provides a systematic approach for anticipating emerging AI capabilities, evaluating risks, and aligning technology investments with mission outcomes. This evidence-based approach allows AI strategies to be technically sound, socially informed, and aligned with long-term mission goals.