Matt Tarascio and Chris Benson of Lockheed Martin (NYSE: LMT) said agencies should start with unbiased, secure datasets to build public trust in artificial intelligence applications and comply with legal requirements.
“Algorithms and other foundational datasets must be monitored during training and inference to counter attack vectors such as data corruption,” Tarascio and Benson wrote.
They discussed the human-machine teaming concept and the importance of integrating the human-machine interface early in the training process to establish trust between the user and an AI-based system. “Humans need to learn how to partner with a machine to supplement their ability with the machine’s capabilities,” they added.
Benson and Tarascio cited the efforts of the Pentagon’s Defense Advanced Research Projects Agency (DARPA), the Defense Innovation Board (DIB) and the Joint AI Center (JAIC) to advance ethical AI principles. They also touched on the application of AI to predictive maintenance and other use cases, and on the role of humans in the decision cycle.
“We still have a lot of work to do to develop robust AI solutions that meet or exceed the high standards of our legal, regulatory and ethical frameworks — so other use cases on the battlespace will require that humans continue to be a critical part of the decision cycle,” they noted.
Tarascio is chief data and analytics officer at Lockheed and Benson is the company’s principal AI strategist.