Unveiled Risks: Open-Source LLMs Under Cyber Threat

Open-source large language models are increasingly vulnerable to misuse by hackers, according to research by SentinelOne and Censys. The models can be repurposed for spam, phishing, and other potentially illegal activities. The study highlights the need for stronger security measures within the AI community.


Devdiscourse News Desk | Updated: 29-01-2026 19:31 IST | Created: 29-01-2026 19:31 IST

Hackers and criminals can easily exploit computer servers running open-source large language models (LLMs) that operate outside the control of leading artificial-intelligence platforms, increasing security risks, researchers reported on Thursday. Attackers can manipulate these LLMs to conduct phishing schemes, spam operations, or disinformation campaigns while bypassing the security checks that the major platforms enforce.

Research from cybersecurity firms SentinelOne and Censys, conducted over 293 days and shared exclusively with Reuters, sheds light on how thousands of exposed LLM deployments could be used for illegal activities, including hacking, harassment, and data theft. The study found that many of the LLMs reachable online are variants of models such as Meta's Llama and Google DeepMind's Gemma, and noted that some open-source deployments lacked the necessary safety guardrails.

Juan Andres Guerrero-Saade of SentinelOne compared the AI industry's neglected security issues to an unacknowledged 'iceberg.' The research found that 7.5% of observable LLMs could potentially facilitate harmful activities. Rachel Adams of the Global Center on AI Governance emphasized the shared responsibility for preventing foreseeable harms from open-source models.

(With inputs from agencies.)