Cybersecurity researchers at SentinelOne have uncovered a curious sample that may be the earliest known example of malware with a Large Language Model (LLM) built into its logic. The program, dubbed MalTerminal, was presented at the LABScon 2025 conference.
The sample consists of a Windows executable and a set of accompanying Python scripts. It has not been observed in real-world attacks, but it demonstrates a potential shift in how attackers might use AI. Instead of carrying fixed malicious code, MalTerminal calls OpenAI's GPT-4 at runtime (through an API endpoint retired in 2023, which suggests the sample predates that cutoff) and asks the model to generate either ransomware or a reverse shell on the spot. Because the malicious logic is produced on demand rather than hard-coded, traditional signature-based security tools have little to latch onto.
Interestingly, the researchers also found a defensive utility named FalconShield. It prompts GPT-4 to analyze Python files and judge whether they are malicious, showing that the same LLM techniques are being explored for both offense and defense. A rough sketch of that approach appears below.
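For illustration only, here is a minimal sketch of that kind of LLM-assisted file analysis, assuming the official OpenAI Python SDK and the GPT-4 chat completions API. It is not FalconShield's actual code, and the prompt wording and function names are invented.

```python
# Illustrative sketch only -- not FalconShield's actual code.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name and prompt are assumptions.
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyze_python_file(path: str) -> str:
    """Ask the model whether a Python file looks malicious."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        source = f.read()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a malware analyst. Review the Python code and state "
                    "whether it appears malicious, citing any suspicious behaviors "
                    "(network calls, obfuscation, persistence, file encryption)."
                ),
            },
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(analyze_python_file(sys.argv[1]))
```

The interesting design point is that the "detection logic" lives entirely in the prompt rather than in signatures, which is exactly what makes this class of tooling both flexible and hard to audit.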
MalTerminal joins a small but growing group of LLM-enabled malware, alongside PromptLock and LameHug/PROMPTSTEAL, which use LLMs to generate system commands and exfiltrate data. Analysts say hunting for these threats comes down to looking for embedded API keys, hard-coded prompt text, or network connections to AI services, as in the triage sketch below.
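The following Python sketch shows what such indicator hunting might look like in practice. The specific patterns (the OpenAI-style "sk-" key prefix, a few AI API hostnames, and a generic "You are a…" prompt phrase) are illustrative assumptions, not SentinelOne's actual hunting rules.

```python
# Illustrative triage sketch, not SentinelOne's hunting rules.
# Flags files that contain OpenAI-style API keys, AI service hostnames,
# or prompt-like instruction text -- the indicators mentioned above.
import re
import sys
from pathlib import Path

INDICATORS = {
    "openai_api_key": re.compile(rb"sk-[A-Za-z0-9_-]{20,}"),
    "ai_service_host": re.compile(
        rb"api\.openai\.com|api\.anthropic\.com|generativelanguage\.googleapis\.com"
    ),
    "prompt_text": re.compile(
        rb"(?i)you are an? (?:helpful|expert|malware|security) \w+"
    ),
}

def scan(path: Path) -> list[str]:
    """Return the names of all indicators found in the file's raw bytes."""
    data = path.read_bytes()
    return [name for name, pattern in INDICATORS.items() if pattern.search(data)]

if __name__ == "__main__":
    # Usage: python hunt.py <directory-to-scan>
    for target in Path(sys.argv[1]).rglob("*"):
        if target.is_file():
            hits = scan(target)
            if hits:
                print(f"{target}: {', '.join(hits)}")
```

A real hunt would add YARA rules, entropy checks, and allow-lists for legitimate AI-integrated software, but the basic idea is the same: LLM-enabled malware has to carry its keys and prompts somewhere.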
Beyond malware, attackers are also maneuvering around AI in phishing campaigns. Cybercriminals embed hidden code in HTML email attachments to slip past AI-based security checks, then exploit the Follina flaw (CVE-2022-30190) to run scripts that compromise the system and maintain access. In another tactic, they stand up fake CAPTCHA pages on hosting platforms such as Netlify or Vercel; automated scanners see only the visible challenge, while the concealed scripts collect user credentials once a person interacts with the page.
Even though this particular sample is inactive, it’s a warning: cybercriminals are experimenting with AI inside malware. Security teams need to rethink detection strategies and keep an eye on emerging LLM-enabled threats.