Security researchers have uncovered a dangerous flaw in Docker’s Ask Gordon AI system that allowed hackers to run malicious code. The vulnerability, called DockerDash, was found by researchers at Noma Security and Pillar Security. It let attackers hide harmful instructions in Docker image metadata that Ask Gordon would execute without checking.
Critical security flaw exposed in Docker’s AI system, enabling attackers to execute malicious code through hidden metadata instructions.
The attack worked in three simple stages. First, an attacker would publish a Docker image with hidden commands in the LABEL fields. Then, when a user asked Gordon AI something like “Describe this repo,” the AI would read these hidden instructions. Finally, Gordon would pass these commands to the MCP Gateway, which would run them without any verification.
This security gap was especially concerning because it created a complete attack chain with zero validation at any stage. The impact varied by environment – in cloud settings, attackers could run code remotely, while in Docker Desktop they could steal sensitive data like API keys, build IDs, and network details. This vulnerability represented a significant trust boundary violation that allowed attackers to exploit the AI system’s reasoning process.
Researchers explained the success of this attack using the CFS Framework – Context, Format, and Salience. The malicious instructions fit naturally with the AI’s task, looked like normal data, and were positioned for high priority processing. The vulnerability was formally classified as CWE-1427 in security databases following successful data theft demonstrations.
Noma Labs reported the issue to Docker on September 17, 2025, which was confirmed on October 13. Docker released a fix in version 4.50.0 on November 6, 2025. The patch adds human approval for sensitive actions and requires explicit permission before making external connections.
The discovery highlights bigger concerns about AI in development workflows. It shows how productivity tools can create security risks when AI systems make decisions without proper checks. Security experts now recommend treating AI systems as potential attack vectors and implementing zero-trust validation for all AI contextual data.
This incident serves as a warning about the growing risks of autonomous AI agents in software development environments and the importance of monitoring AI behavior.
References
- https://thehackernews.com/2026/02/docker-fixes-critical-ask-gordon-ai.html
- https://hackread.com/docker-ask-gordon-ai-flaw-metadata-attacks/
- https://noma.security/blog/dockerdash-two-attack-paths-one-ai-supply-chain-crisis/
- https://www.apono.io/blog/when-agentic-ai-becomes-an-attack-surface-what-the-ask-gordon-incident-reveals/
- https://www.infosecurity-magazine.com/news/dockerdash-weakness-dockers-ask/
- https://www.scworld.com/brief/severe-ask-gordon-ai-vulnerability-addressed-by-docker