John Sourk, director of federal sales at Abnormal AI, said artificial intelligence could help agencies strengthen email security by identifying behavioral anomalies and detecting email-based threats that traditional defenses miss.
“AI tools can determine a baseline for how users typically behave and then identify behavioral anomalies and potential misuse, which traditional signature-based defenses aren’t able to detect,” Sourk wrote in an article published on Carahsoft.com.
“Further, AI enables defenders to work at a speed and scale that’s not otherwise possible,” he added.
Why Are Signature-Based Defenses Falling Short?
According to Sourk, signature-based tools are no longer sufficient because they rely on known threat patterns that quickly become outdated. He said adversaries continuously evolve their tactics, making it difficult for static defenses to detect new and emerging threats.
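The limitation Sourk describes can be seen in a minimal sketch: a signature-based check matches only exact, previously catalogued payloads, so even a trivially modified variant evades it. The payload strings and hash set here are illustrative, not from any real threat feed.

```python
import hashlib

def signature_match(payload: bytes, known_hashes: set[str]) -> bool:
    """Return True only if the payload's hash exactly matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in known_hashes

# Hypothetical signature database seeded with one known malicious sample.
known_sample = b"malicious payload v1"
known_hashes = {hashlib.sha256(known_sample).hexdigest()}

print(signature_match(known_sample, known_hashes))             # True: known sample is caught
print(signature_match(b"malicious payload v2", known_hashes))  # False: a one-byte change slips through
```

Because every new variant produces a new hash, static signature sets lag behind adversaries who continuously retool, which is the gap behavioral approaches aim to close.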
Sourk added that this shift has driven the need for more adaptive approaches, including behavioral analysis and endpoint detection and response.
How Does Behavioral Anomaly Detection Work?
Sourk said behavioral anomaly detection uses AI to establish a baseline of normal user activity and identify deviations that may indicate malicious behavior.
He explained that this approach enables agencies to detect threats that do not match known signatures, such as social engineering attacks and account takeovers.
The technology analyzes how users typically communicate, who they interact with and how they access systems to flag suspicious activity in real time.
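The baseline-and-deviation idea can be sketched with simple statistics. This toy example, which uses login hours as a stand-in for the richer behavioral signals Sourk describes, flags activity that falls far outside a user's historical pattern; the data and the three-sigma threshold are illustrative assumptions, not Abnormal AI's actual method.

```python
from statistics import mean, stdev

def build_baseline(login_hours: list[int]) -> tuple[float, float]:
    """Summarize a user's historical login hours as a mean and standard deviation."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour: int, baseline: tuple[float, float], threshold: float = 3.0) -> bool:
    """Flag a login whose hour deviates more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(hour - mu) > threshold * sigma

# Historical logins cluster around business hours.
history = [9, 10, 10, 11, 13, 14, 15, 16, 17, 9, 10, 12]
baseline = build_baseline(history)

print(is_anomalous(10, baseline))  # typical working hour -> False
print(is_anomalous(3, baseline))   # 3 a.m. login -> True
```

Production systems model many more dimensions (communication partners, message content, access patterns), but the principle is the same: learn what normal looks like, then flag departures from it, including threats that match no known signature.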
How Does AI-Powered Behavioral Analysis Block Anomalous Activities?
Sourk said AI-driven technologies such as Abnormal AI's help agencies establish behavioral baselines that safeguard networks from email threats.
“That includes understanding how people typically operate, the relationships they have within and outside the organization, and how and where they work. With that information, the technology continually monitors for actions that fall outside those patterns,” he wrote.
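One of the relationship signals Sourk mentions can be approximated with a simple contact graph: record who each user has historically exchanged mail with, then flag messages from senders outside that established set. The addresses and lookalike domain below are hypothetical, and this is a sketch of the general idea rather than Abnormal AI's implementation.

```python
from collections import defaultdict

def build_contact_graph(email_log: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Map each recipient to the set of senders they have historically received mail from."""
    contacts: dict[str, set[str]] = defaultdict(set)
    for sender, recipient in email_log:
        contacts[recipient].add(sender)
    return contacts

def flag_message(contacts: dict[str, set[str]], sender: str, recipient: str) -> bool:
    """Flag mail from a sender the recipient has no established relationship with."""
    return sender not in contacts[recipient]

# Hypothetical mail history for one user.
history = [
    ("alice@agency.gov", "bob@agency.gov"),
    ("vendor@partner.com", "bob@agency.gov"),
]
contacts = build_contact_graph(history)

print(flag_message(contacts, "alice@agency.gov", "bob@agency.gov"))    # known contact -> False
print(flag_message(contacts, "ceo@agencv-gov.com", "bob@agency.gov"))  # lookalike domain -> True
```

A lookalike-domain sender impersonating an executive has no prior relationship with the recipient, so it falls outside the baseline even though no signature exists for the message.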
He noted that Abnormal AI’s FedRAMP-authorized, cloud-native tool aligns with zero trust principles and integrates with Google Workspace and Microsoft 365 to accelerate analysis of behavior, identity and content data, helping distinguish anomalous messages from legitimate ones.
Sourk explained that the company’s AI tool could help agencies detect sophisticated social engineering attacks.
“When federal agencies can easily identify activities that deviate from established baselines, they strengthen their defense against these high-impact, socially engineered attacks while allowing their employees to continue being productive,” he added.