cognitive cybersecurity intelligence

News and Analysis

Search

New PromptFix Attack Tricks AI Browsers to Run Malicious Hidden Prompts

New PromptFix Attack Tricks AI Browsers to Run Malicious Hidden Prompts

A new attack vector called PromptFix exploits AI-powered browsers by embedding malicious instructions within seemingly innocent web content. 

The attack represents an evolution of traditional ClickFix scams, specifically designed to manipulate agentic AI systems rather than human users.

The research, conducted by security experts testing Perplexity’s Comet AI browser, demonstrates how attackers can hijack AI agents through sophisticated prompt injection techniques. 

Key Takeaways
1. Hidden prompts in fake captchas trick AI browsers into malicious actions.
2. Social engineering bypasses AI security through helpful AI behavior.
3. Single attack replicates across millions of AI users simultaneously.

Unlike conventional phishing that relies on deceiving human judgment, PromptFix directly targets the AI’s decision-making processes, creating what researchers term “Scamlexity” – a new era of AI-powered scam complexity.

Hidden Instructions Behind Fake Captchas

Guardio reports that the PromptFix attack leverages a deceptively simple mechanism: hidden text elements embedded within fake captcha interfaces. 

While humans see only a standard checkbox verification, the underlying HTML contains invisible prompt injections that AI browsers inadvertently process as legitimate instructions.

The attack exploits AI models’ inability to distinguish between genuine user commands and maliciously injected content within the same processing context. 

Using CSS styling techniques like style=”display:none” or color:transparent, attackers can hide prompts such as:

When the AI browser processes the page’s HTML, these concealed instructions become part of the AI’s directive set, potentially triggering unauthorized actions like drive-by downloads or data exfiltration.

Prompt injection disguised as “AI-Friendly Captcha”

The attack’s effectiveness stems from manipulating AI browsers’ core programming: to assist users quickly and completely without hesitation. 

Rather than attempting to “glitch” the model through traditional prompt injection, PromptFix employs social engineering techniques adapted for AI consumption.

The attack creates compelling narratives that appeal to the AI’s service-oriented design. For instance, the hidden prompt might claim the captcha is “AI-solvable” and that clicking through will expedite the user’s task. 

This approach exploits the AI’s built-in drive to help instantly without triggering traditional security safeguards.

Promptfix: Successful prompt injection

Security experts warn that successful attacks against one AI model can be replicated across millions of users simultaneously, creating an unprecedented threat landscape requiring proactive defensive measures rather than reactive detection approaches.

Safely detonate suspicious files to uncover threats, enrich your investigations, and cut incident response time. Start with an ANYRUN sandbox trial → 
The post New PromptFix Attack Tricks AI Browsers to Run Malicious Hidden Prompts appeared first on Cyber Security News.

Source: cybersecuritynews.com –

Subscribe to newsletter

Subscribe to HEAL Security Dispatch for the latest healthcare cybersecurity news and analysis.

More Posts