Over the past decade, artificial intelligence has become a key element within defense operations and a major priority for the Department of Defense, which has noted the increasing importance of the technology within the modern battlespace.
For federal organizations, the road to reaping the full benefits of AI is long and complex. Applying the technology to government operations and the warfighter brings about a unique set of challenges that require a strong understanding of any AI tool used to ensure that risk is properly managed.
According to experts in the field, understanding is necessary for trust, which serves as the driving force behind successful AI use in public sector activities.
“You need to understand trust from where it is secured all the way to how it is deployed. We have to think about where the data is coming from and how the models are being consumed,” Sek Chai, co-founder and chief technology officer of Latent AI, said during a discussion at the Potomac Officers Club’s 4th Annual AI Summit last week.
According to Chai, who was one of four participating panelists in the event's DOD Emerging Needs to Accelerate Decision Advantage panel, building trust in a model begins in the development phase, as early as the supply chain, and continues through deployment.
Jay Meil, chief data scientist and managing director of AI at SAIC, said that “explainability, interpretability and auditability” are key factors in establishing confidence in an AI model.
“Model uncertainty comes from a lack of understanding of what the model is functionally doing,” he emphasized.
This understanding, Meil said, starts with identifying the goal of a model in a given use case, which determines the level of accuracy required. Maintaining human oversight is also necessary, he said.
Reiterating the importance of context, Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology, said that without understanding the unique characteristics of a given scenario, operators will lack the information needed to properly manage risk. Bang emphasized that in a warfighting scenario, lives are at stake.
“As you move to the warfighting side, things like provenance and understanding the implications of false positives and false negatives are really important,” said Greg Little, deputy chief digital and AI officer for business analytics.
Trust and understanding are of crucial importance when using industry-developed AI tools, Bang said. While the warfighter plays a role in data supply, he said, industry needs to provide data observability for the algorithms it has developed, including training data sets, validation data sets and parameters for tuning the technology.
Having this level of traceability is key for trusting and verifying AI models, he stressed.
According to Little, there is still a language gap between public and private sector organizations. The way industry looks at data and risk, he said, is completely different from the way the federal government does. He stressed that both sides need to work to “meet in the middle” to better understand how they can collaborate.
“In the DOD, we have Mount Everest types of problems, and I think people in the technology space want to solve Mount Everest-like problems,” Little said.
“Many of these problems have immense purpose, not only for specific civilians and military members in the department, but for the entire world,” he added.