GitLab ‘Vulnerability Highlights the Double-Edged Nature of AI Assistants’

Visualization of cyber attacks.
Picture: CROCOTHERY/Adobe Inventory

A newly disclosed vulnerability in GitLab Duo, GitLab’s AI-powered coding assistant, has raised critical issues concerning the security of AI instruments embedded in software program growth workflows.

Cybersecurity researchers at Legit Safety have uncovered a distant immediate injection flaw that allowed attackers to steal non-public supply code, manipulate AI-generated code solutions, and leak undisclosed safety vulnerabilities from non-public tasks.

How the exploit labored

GitLab Duo, powered by Anthropic’s Claude mannequin, is designed to assist builders write, overview, and analyze code, however researchers discovered it was far too trusting of the content material it analyzed.

In line with Legit Safety’s weblog submit, attackers had been capable of plant hidden prompts inside numerous components of GitLab tasks, together with merge request descriptions, commit messages, and subject feedback — even contained in the supply code itself.

As a result of Duo scans and processes this content material to supply useful AI responses, the hidden prompts tricked it into taking malicious actions, with out the person realizing it.

“Duo analyzes the whole context of the web page, together with feedback, descriptions, and the supply code — making it weak to injected directions hidden anyplace in that context,” stated safety researcher Omer Mayraz within the Legit Safety report.

To maintain the malicious prompts invisible to human eyes, attackers used a number of intelligent strategies, together with:

  • Unicode smuggling to masks malicious directions.
  • Base16 encoding to cover prompts in plain sight.
  • KaTeX formatting in white textual content to make malicious textual content invisible on the web page.

For instance, white-colored textual content may very well be embedded in feedback utilizing KaTeX in order that it’s solely seen to Duo, not the person.

This allowed attackers to control Duo’s habits, reminiscent of recommending malicious JavaScript packages or presenting pretend URLs as respectable, which might doubtlessly lead victims to phishing websites.

HTML injection and code theft

As a result of GitLab Duo streams its responses, rendering them in HTML as they’re generated, attackers might sneak in uncooked HTML, reminiscent of <img> tags. These tags may very well be set as much as ship HTTP requests to attacker-controlled servers, carrying stolen supply code encoded in base64.

Legit Safety demonstrated this by planting a immediate that instructed Duo to extract non-public supply code from a hidden merge request, encode it, and insert it into an <img src=…> tag. When a person considered the response, their browser would routinely ship the stolen knowledge to the attacker.

“We realized we might inject uncooked HTML tags immediately into Duo’s reply,” the researchers defined. “The reply content material is handed into the ‘sanitize’ perform of DOMPurify… Nevertheless, sure HTML tags like <img>, <type>, and <a> aren’t eliminated by default.”

GitLab’s response and patch

GitLab was notified of the difficulty on Feb. 12. The corporate confirmed each the immediate injection and the HTML injection vulnerabilities and issued a repair underneath patch duo-ui!52.

In line with Legit Safety, the patch now prevents Duo from rendering unsafe HTML tags that time to exterior domains not hosted on GitLab. This closes the door on the kind of exploit used within the demonstration.

GitLab’s proactive response earned reward from the researchers, who stated, “We admire GitLab’s transparency and swift collaboration all through the method.”

This incident highlights a broader concern concerning the rising use of AI in software program growth and different delicate environments.

“This vulnerability highlights the double-edged nature of AI assistants like GitLab Duo: when deeply built-in into growth workflows, they inherit not simply context — however danger,” stated Mayraz.

Follow us on Twitter, Facebook
0 0 votes
Article Rating
Subscribe
Notify of
guest
0 comments
Oldest
New Most Voted
Inline Feedbacks
View all comments

Latest stories

You might also like...