How the New AI Executive Order Will Impact Government’s AI Landscape

President Biden issued a landmark executive order in October that established new standards for artificial intelligence and laid the foundation for AI implementation within the federal government. Deltek Senior Vice President of Information Solutions Kevin Plexico, a GovCon Expert and four-time Wash100 Award winner, spoke with Executive Mosaic about how the executive order will shape the future of AI in the government landscape.

Be a part of the AI conversation at the Potomac Officers Club’s 5th Annual AI Summit on March 21, 2024. Leaders from the Pentagon, DARPA, NGA and DHS will deliver keynote addresses on the most pressing AI issues. Register here to save your spot at the AI event of the year.

To set the scene for our conversation, can you give us an overview of the state of the AI landscape?

Looking at the full AI landscape, I would divide AI adoption in government into traditional AI, which involves applying machine learning technologies to data, versus generative AI. The federal government is much further along in adopting traditional AI, as expected. Agencies have vast amounts of data and have applied machine learning technologies with tight control over inputs and outcomes. I think the government is very nascent in its adoption of generative AI, because generative AI relies on vast amounts of disparate data, which makes the input harder to control, and its use cases are broader and more general. Many agencies, such as DOD, VA, Treasury and HHS, have made extensive use of traditional AI. However, they are just starting to explore generative AI and manage its adoption and potential.

How does the recent AI executive order impact the AI landscape?

The executive order aspires to set up guardrails that help agencies understand how to implement generative AI while mitigating the associated risks. Notable risks include social or racial bias in training models. We also have to make sure privacy controls are in place so that AI technology does not compromise personal or citizen data. Another risk of generative AI is that it can sometimes fabricate information or fill gaps with inaccurate content, leading to errors in decision making or faulty outcomes.

A concern that’s unique to generative AI is its impact on the workforce. As agencies implement generative AI, they need a firm understanding of how it will affect the workforce.

If you think back to how agencies adopted cloud computing or cybersecurity, adoption often started with governance models that gave agencies a clear understanding of the appropriate and safe ways to implement those new technologies. Generative AI, while unique in many ways, follows some of the same patterns we saw with those technologies. There needs to be a governance model and a set of tests and processes that companies and agencies follow in order to implement AI and move it into production environments.

Can you elaborate on the timing and significance of the executive order, especially given the recent boom in generative AI tools?

Until now, there have not been any guiding principles or governance models that apply across agencies. As a result, we’ve seen agencies like DOD, DHS and HHS act on their own to implement governance models. Across government, we need consistency and commonality in how agencies approach and pursue AI. The executive order will help the many agencies that lack the technical capabilities of the more scientific and engineering-oriented agencies figure out a path forward.

This order sets the foundation for policies that will be put in place down the road, but it may take years to play out. For example, when FedRAMP was introduced as a way of certifying cloud application providers, only a handful of companies were able to get authorizations; now there are hundreds. I expect a similar rollout pace for this executive order: it may take months or years for agencies to adopt the guidance and implement the necessary governance models.

The government is requiring a lot of testing, much of which could be manual at first. Over time, I expect products to emerge that automate that testing and verify whether a model exhibits bias or compromises private data.
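To make that kind of automated check concrete, here is a minimal, hypothetical sketch in Python of one common fairness test, a demographic parity gap. The predictions, group labels and 0.1 threshold are illustrative assumptions, not requirements drawn from the executive order.

```python
# A minimal, hypothetical sketch of the kind of automated bias test
# described above. The predictions, group labels and 0.1 threshold
# are illustrative assumptions, not requirements from the order.

def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-outcome rates between groups."""
    counts = {}  # group -> (total, favorable)
    for pred, group in zip(predictions, groups):
        total, favorable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favorable + (1 if pred == 1 else 0))
    rates = [favorable / total for total, favorable in counts.values()]
    return max(rates) - min(rates)

# Illustrative model decisions (1 = favorable) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # policy-defined tolerance, assumed here
    print("FAIL: disparity exceeds the allowed threshold")
```

In a governance pipeline, a battery of checks like this one would run automatically before a model is cleared for a production environment.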

Kevin, what is your vision for the future of AI in the public sector, considering both traditional and generative AI models? How does the executive order contribute to this vision?

I see a convergence of traditional and generative AI models, where organizations can train large language models using traditional machine learning technologies. I think of it as asking generative AI to develop a model based on a set of data using traditional machine learning capabilities. 
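As one hypothetical illustration of that convergence, the sketch below feeds embeddings that would come from a generative model into a traditional machine learning classifier. The embed() function is a stand-in for a real embedding API, and the example records are invented.

```python
# A hypothetical sketch of combining generative and traditional AI:
# embeddings from a generative model are used as features for a
# classical classifier. embed() is a placeholder for a real embedding
# API; with a real model, semantically similar texts would map to
# nearby vectors, which is what makes this approach work in practice.
import random

from sklearn.linear_model import LogisticRegression

def embed(text: str, dim: int = 16) -> list[float]:
    """Stand-in for a generative model's embedding endpoint."""
    rng = random.Random(text)  # deterministic, so the sketch runs alone
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

# Invented labeled records, e.g. triaged citizen service requests.
texts = ["reset my password", "benefits claim status",
         "password locked out", "claim appeal deadline"]
labels = ["it_support", "benefits", "it_support", "benefits"]

# Traditional machine learning model trained on generative-model features.
clf = LogisticRegression().fit([embed(t) for t in texts], labels)
print(clf.predict([embed("cannot log in, password expired")]))
```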

I also think there will be an emergence of standards that require generative AI capabilities to produce the same level of accuracy and quality that a human would have applied. We’re still going to have a lot of human involvement in these activities. But instead of thinking of AI as replacing humans, I think of it as augmenting humans and improving their productivity by automating tasks they would otherwise perform manually, while still requiring oversight and quality control to hold the results to the same standard. This will help mitigate fears of AI replacing humans and shift the focus to driving productivity in our economy through the use of AI.

It’s a really exciting time in the market and in the world for this technology. AI has the potential to have the same magnitude of impact that the internet had years ago. The importance of the executive order is that it lays a foundation for government agencies to start implementing AI in a way that’s safe, effective and free of negative consequences.
