Author: Jane Edwards | Date Published: October 22, 2024
Rijul Gupta, founder and CEO of Deep Media, said the disruption of truth due to deepfakes and other unethical uses of generative artificial intelligence presents a systemic risk to the U.S. military and other government agencies.
In this article published on Carahsoft.com, Gupta wrote that the ease of use and accessibility of generative AI tools have transformed the nature of disinformation.
The Deep Media chief executive cited how the company’s AI models help detect deepfakes and other media manipulations.
“Such technology will never be 100% accurate because that’s not how it works, but we regularly achieve more than 95% accuracy on identifying the use of generative AI in images, audio and videos. That alone is a force multiplier for analysts,” he noted.
Gupta discussed the company’s partnership with academia and government agencies, such as the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology, to promote the ethical use of AI.
“Ensuring the ethical use of AI is a complex challenge that can’t be resolved by one organization, so we’re doing our best to build a community to address it,” he added.
He also mentioned the company’s work with various partners to integrate its technology into open source intelligence platforms and advance the use of AI to analyze images, videos and audio in support of analysts and other users.