Author: Naomi Cooper | Date Published: June 12, 2023
The General Services Administration has issued an interim security policy directing employees and contractors to limit their use of generative artificial intelligence systems built on large language models when working on the GSA network or government-owned equipment.
GSA cited the risk that LLMs, which train on public data sources and user-submitted inputs, could leak government information to unauthorized platforms.
The rule is valid until June 30, 2024.
Generative AI tools, including OpenAI's ChatGPT, Google's Bard and Salesforce Einstein, use LLMs to generate text-based content based on patterns learned from their training data.
Craig Martell, chief digital and artificial intelligence officer at the Department of Defense and a 2023 Wash100 awardee, previously warned that such language models could be used by adversaries for disinformation.