Author: Naomi Cooper | Date Published: June 12, 2023
The General Services Administration has issued an interim security policy directing employees and contractors to limit the use of generative artificial intelligence systems built on large language models on the GSA network and on government-owned equipment.
GSA cited the risk that LLMs, which train on public data sources and on user-submitted inputs, could leak government information to unauthorized platforms.
The rule is valid until June 30, 2024.
Generative AI tools, including OpenAI's ChatGPT, Google's Bard and Salesforce's Einstein, use LLMs to generate text-based content from the data patterns learned during training.
Craig Martell, chief digital and artificial intelligence officer at the Department of Defense and a 2023 Wash100 awardee, previously warned that such language models could be used by adversaries for disinformation.