Security Debt Looms—GitHub Copilot Autofix Steps In


According to IDC, 69% of developers say frequent security-related context-switching hinders their work, leading to security oversights and lost productivity.

To address this, GitHub today announced an update to Copilot Autofix. Just as GitHub Copilot helps developers code more quickly, Copilot Autofix accelerates remediation so security teams can make real progress on the backlog of existing vulnerabilities, commonly known as security debt.

This new feature supports integration with various third-party tools and security campaigns, enabling security teams and developers to address vulnerabilities at scale using their preferred tools. This includes ESLint, JFrog SAST, and Black Duck’s Polaris™ platform powered by Coverity®, so developers can streamline security workflows with their code scanning tooling of choice.

This new feature is available today in public preview.

For instance, the integration between JFrog and GitHub combines JFrog’s Advanced Security SAST and runtime security with Copilot Autofix, bringing automated vulnerability remediation and real-time runtime monitoring into GitHub workflows for a more seamless DevSecOps experience.

As noted at GitHub Universe, this integration eliminates context-switching by allowing developers to “write, debug, and secure their code simultaneously,” addressing industry pain points of productivity and security oversight.

Since its introduction in public beta in March 2024, developers have used Copilot Autofix in pull requests to quickly fix vulnerabilities in new code before it is merged into production, where it could impact customers.

Copilot Autofix in Action

Behind the scenes, Copilot Autofix uses the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs to generate code suggestions. It builds an LLM prompt from sources including the CodeQL analysis and short snippets of code around the flow path.
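To make the described flow concrete, here is a minimal, hypothetical sketch of how a prompt might be assembled from a CodeQL-style alert plus surrounding code context. The class and function names (`CodeQLAlert`, `build_prompt`, `snippet_around`) are illustrative assumptions, not GitHub's actual implementation or API.

```python
# Illustrative sketch only: assembling an LLM prompt from a code scanning
# alert and nearby source lines, in the spirit of the approach described
# above. All names here are hypothetical, not GitHub's real internals.
from dataclasses import dataclass


@dataclass
class CodeQLAlert:
    rule_id: str      # e.g. "py/sql-injection"
    message: str      # the analyzer's description of the finding
    file: str
    start_line: int   # 1-indexed flagged region
    end_line: int


def snippet_around(source: str, start: int, end: int, context: int = 3) -> str:
    """Return the flagged lines plus a few lines of surrounding context."""
    lines = source.splitlines()
    lo = max(0, start - 1 - context)
    hi = min(len(lines), end + context)
    # Number lines so the model can refer to exact locations.
    return "\n".join(f"{i + 1}: {line}" for i, line in enumerate(lines[lo:hi], start=lo))


def build_prompt(alert: CodeQLAlert, source: str) -> str:
    """Combine the alert metadata and code snippet into one prompt string."""
    return (
        f"A code scanning alert ({alert.rule_id}) was found in {alert.file}:\n"
        f"{alert.message}\n\n"
        "Relevant code:\n"
        f"{snippet_around(source, alert.start_line, alert.end_line)}\n\n"
        "Suggest a minimal fix that resolves the vulnerability."
    )
```

In practice the real system draws on far richer inputs (data-flow paths, query metadata, repository context); the sketch only shows the general shape of turning a static-analysis finding into a remediation prompt.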

AI Testing AI

A recent GitHub study found that 97% of developers use AI coding tools, yet using AI to assess AI-generated fixes raises questions. While Copilot Autofix employs automated testing, red-team scrutiny, and filtering to mitigate risks, experts point to the limitations of self-verifying AI systems, suggesting that relying on another AI model for review can introduce redundancy and cost challenges.

“It’s hard to use AI to trust AI for the same reason people often miss their own mistakes,” said David Timothy Strauss, CTO at Pantheon.

Closing the Loop on Vulnerabilities

Developers are now deploying software at an unprecedented pace, frequently rolling out new features. However, despite their commitment to secure coding, vulnerabilities still find their way into production, remaining a major cause of breaches. This challenge is intensified by the complexity of security requirements, which many developers struggle to grasp and apply effectively.

As a result, achieving robust security remains difficult, and more vulnerabilities make it into production. GitHub argues that code scanning tools identify vulnerabilities but don’t solve the core issue: fixing them requires specialised security knowledge and time, both of which are scarce. The challenge isn’t finding vulnerabilities, but resolving them.

That is where Copilot Autofix comes into play. GitHub previously reported that during the public beta, developers fixed code vulnerabilities more than three times faster than with manual efforts, demonstrating how AI agents can significantly streamline and accelerate secure software development.

The post Security Debt Looms—GitHub Copilot Autofix Steps In appeared first on AIM.

