ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
Monday, 27 October 2025
Add Comment
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. "The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday
from The Hacker News https://ift.tt/apfWkGt
Genrerating Link.... 15 seconds.
Your Link is Ready.
0 Response to "ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands"
Post a Comment