Author: Naomi Cooper | Date Published: June 12, 2023
The General Services Administration has issued an interim security policy directing employees and contractors to limit the use of generative artificial intelligence tools built on large language models on the GSA network and government-owned equipment.
GSA cited the risk that LLMs, which train on public data sources and user-submitted inputs, could leak government information to unauthorized platforms. The interim policy remains in effect through June 30, 2024.
Generative AI tools such as OpenAI's ChatGPT, Google's Bard and Salesforce's Einstein use LLMs to generate text-based content from the data patterns learned during training.
Craig Martell, chief digital and artificial intelligence officer at the Department of Defense and a 2023 Wash100 awardee, previously warned that such language models could be used by adversaries for disinformation.