
Will AI Code Generators Overcome Their Insecurities by 2025?


In 2024, adoption of large language models (LLMs) for code generation increased significantly, with many developers using tools such as OpenAI’s ChatGPT, GitHub Copilot, Google Gemini, and JetBrains AI Assistant to streamline their coding.

Despite this rise in usage, concerns about the security of generated code persist, leaving many developers wary. Research published in September found that more than 5% of the code produced by commercial models, and nearly 22% of the code from open-source models, referenced non-existent package names. Further studies in November found that at least 48% of AI-generated code snippets contained vulnerabilities.
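Hallucinated package names are more than a nuisance: an attacker who registers a fabricated name on a public registry can deliver malware to anyone who installs an assistant’s suggestion verbatim. One possible safeguard, shown here as a minimal Python sketch (the helper function and package list are our own illustration, not part of any cited study), is to verify that every suggested dependency actually exists on PyPI before installing it:

```python
import urllib.error
import urllib.request

# PyPI's public JSON metadata endpoint returns 404 for unpublished names.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if `name` is a published PyPI package."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # never published: a likely hallucination
            return False
        raise                 # rate limits or outages need human review

# Vet an AI-suggested dependency list before running `pip install`.
suggested = ["requests", "numpy", "totally-made-up-llm-pkg"]
for pkg in suggested:
    status = "ok" if package_exists(pkg) else "NOT FOUND - possible hallucination"
    print(f"{pkg}: {status}")
```

A check like this only catches names that were never published; it cannot detect a hallucinated name an attacker has already claimed, so pinned versions and registry allow-lists remain necessary.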

Ryan Salva, senior director of product at Google, emphasizes that as developers adopt these AI tools, they must reform their coding practices to keep code secure. “We cannot rely blindly on these models; they should be complemented by careful human oversight,” he says.

A major concern is the phenomenon of AI systems generating misleading information, known as hallucinations, which can lead to serious flaws in code. The State of Enterprise Open-Source AI report indicates that 60% of IT leaders recognize the significant impact that errors from AI tools can have on their operations.

Peter Wang, co-founder of Anaconda, asserts that AI should enhance developer capabilities rather than replace them. He cautions, “It is crucial for users to meticulously scrutinize AI-generated code before using it, as these tools can inadvertently introduce harmful code.”

Developers Pursue Efficiency Gains

According to GitHub’s 2024 Open Source Survey, around 73% of developers engaged in open-source projects leverage AI tools for coding and documentation, while another survey revealed that 97% of developers from the US, Brazil, Germany, and India have integrated AI coding tools into their workflows.

This trend has led to a marked increase in code production: according to Salva, roughly 25% of the code generated at Google is now attributable to AI, and GitHub’s Octoverse 2024 report found that developers actively using GitHub Copilot produce 12% to 15% more code.

The efficiency gains are significant: nearly half of developers (49%) report saving at least two hours each week thanks to AI tools, according to JetBrains’ annual “State of Developer Ecosystem Report.”

Although many AI products emphasized versatility in their early stages, experts expect precision to improve in the coming year. Vladislav Tankov, director of AI at JetBrains, notes that specialized models dominated before the LLM boom; LLMs brought flexibility, but sometimes at the cost of accuracy. He anticipates a new breed of models that balances the two.

Recently, JetBrains unveiled Mellum, an LLM specifically designed for coding tasks. Tankov explained that the model underwent various training phases, starting from basic comprehension and advancing to more intricate coding assignments, allowing it to maintain a general perspective while excelling at specific tasks.

JetBrains also implemented feedback systems and additional filtering to minimize the risk of generating vulnerable code, according to Tankov.
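Tankov did not detail how that filtering works. Purely as an illustration (a hypothetical sketch, not JetBrains’ implementation), a post-generation filter can screen each completion against patterns that security linters commonly flag before the suggestion ever reaches the editor:

```python
import re

# Illustrative deny-list; a production filter would rely on a real static
# analyzer rather than regular expressions.
RISKY_PATTERNS = {
    r"\beval\s*\(": "eval() on dynamic input enables code injection",
    r"\bpickle\.loads?\s*\(": "unpickling untrusted data can execute code",
    r"shell\s*=\s*True": "shell=True invites command injection",
    r"verify\s*=\s*False": "disables TLS certificate verification",
}

def flag_completion(code: str) -> list[str]:
    """Return a warning for each risky construct found in generated code."""
    return [reason for pattern, reason in RISKY_PATTERNS.items()
            if re.search(pattern, code)]

completion = "subprocess.run(cmd, shell=True)"
for warning in flag_completion(completion):
    print("flagged:", warning)
```

The feedback half of such a system would route accepted and rejected suggestions back into training, a loop that depends on telemetry the vendor controls.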

Security Remains a Concern

Trust in LLM-generated code appears to be on the rise: although 59% of developers still express security concerns about AI-generated output, more than 76% believe that AI-powered tools can produce more secure code than manual effort alone.

Used wisely, AI tools can accelerate the development of secure code. Wang estimates that productivity could double with these tools, albeit with observed error rates of 10% to 30%.

He suggests that senior developers can regard AI tools as a skilled intern assisting with basic tasks, while junior developers could utilize these tools to learn and reduce research time, though caution is advised. “It’s essential for junior developers to avoid relying on AI for code they do not fully understand,” he states.

AI also plays a role in mitigating security issues. Wales of GitHub points to services like Copilot Autofix, which helps developers resolve vulnerabilities more than three times faster than fixing them manually.

Wales adds that since GitHub began offering Copilot Autofix to open-source developers for free, remediation rates have climbed dramatically, from roughly 50% to nearly 100%.

AI tools are clearly improving, with acceptance rates for code suggestions rising by roughly 5% per year. Still, Salva notes that current rates hover around 35%, which he views as insufficient.

Salva explains that this plateau results from tools being predominantly tied to limited context related to the cursor in an integrated development environment (IDE). “Expanding the contextual understanding beyond the IDE will facilitate a major advancement in the quality of generated outputs,” he asserts.
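One way to act on that observation: instead of sending the model only the text around the cursor, a tool can assemble repository-wide context (imports, sibling files, configuration) into the prompt. The sketch below shows that assembly step under assumed conventions; the character budget and path-order traversal are arbitrary choices for illustration, not any vendor’s actual strategy:

```python
from pathlib import Path

def build_prompt_context(repo_root: str, active_file: str,
                         budget_chars: int = 8_000) -> str:
    """Gather cross-file context for a completion prompt, stopping at a
    character budget. Purely illustrative."""
    root = Path(repo_root)
    active = Path(active_file)
    chunks = [active.read_text()]            # the file being edited comes first
    used = len(chunks[0])
    for path in sorted(root.rglob("*.py")):
        if path.resolve() == active.resolve():
            continue
        text = path.read_text()
        if used + len(text) > budget_chars:  # budget spent: stop adding files
            break
        chunks.append(f"# --- {path.relative_to(root)} ---\n{text}")
        used += len(text)
    return "\n\n".join(chunks)

# prompt = build_prompt_context("./my_repo", "./my_repo/app.py")
```

Real assistants typically rank candidate files by relevance (recently edited, imported by the active file) rather than taking them in path order.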

Discrete AIs for Developers’ Pipelines

AI assistants are becoming increasingly specialized, catering to various phases of the development pipeline. As developers continue to integrate AI tools both within development environments and as standalone applications, the need for specialized roles may emerge to ensure the security of generated code.

Wales from GitHub states, “The introduction of AI is already changing our approach to cybersecurity.” He projects that by 2025, the role of the AI engineer will become prominent, causing shifts in the composition of security teams.

However, as adversaries become more adept at utilizing code-generation tools, the potential for novel attack vectors is likely to grow, warns Tankov. “As AI-driven code generation normalizes, security challenges will become paramount, with coding agents themselves being prime targets for attacks.”

As the use of AI in code generation becomes a standard practice by 2025, developers will need to prioritize identifying vulnerabilities and ensuring that their AI tools uphold security protocols effectively.

Source
www.darkreading.com
