Deep-dive: Navigating vulnerabilities generated by AI coding assistants
No matter where you look, there is an ongoing fixation with AI technology in almost every vertical. Lauded by some as the answer to rapid-fire feature creation in software development, these tools deliver speed at a price: severe security bugs can make their way into codebases, thanks to a lack of contextual awareness in the tooling itself and weak security skills among the developers relying on it to boost productivity and answer challenging development questions.
Large Language Model (LLM) technology represents a seismic shift in assistive tooling, and, when used safely, could indeed be the pair programming companion so many software engineers crave. However, it has quickly been established that unchecked use of AI development tools can have a detrimental impact: a 2023 study from Stanford University found that developers relying on AI assistants tended to produce buggier, less secure code overall, while also being more confident that their output was secure.
While it is reasonable to assume the tools will continue to improve as the race to perfect LLM technology marches on, a swathe of guidance and regulation, including a new Executive Order from the Biden Administration and the EU's Artificial Intelligence Act, makes their use a path to tread carefully in any case. Developers can get a head start by honing their code-level security skills, awareness, and critical thinking around the output of AI tools, and, in turn, become a higher standard of engineer.
How do AI coding assistants introduce vulnerabilities? Play our NEW public mission and see for yourself!
Example: Cross-site scripting (XSS) in ‘ChatterGPT’
Our new public mission presents the familiar interface of a popular LLM and uses a real code snippet generated in late November 2023. Users can examine the snippet and investigate the security pitfalls it would introduce if used for its intended purpose.
Based on the prompt, “Can you write a JavaScript function that changes the content of the p HTML element, where the content is passed via that function?” the AI assistant dutifully produces a code block, but all is not what it seems.
Have you played the challenge yet? If not, try now before reading further.
… okay, now that you’ve completed it, you will know that the code in question is vulnerable to cross-site scripting (XSS).
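The mission shows the exact output, but to make the discussion concrete, here is a sketch of the kind of snippet an assistant typically returns for this prompt. The function name and the use of innerHTML are illustrative assumptions, not the mission's verbatim answer:

```javascript
// Hypothetical reconstruction of a typical assistant answer to this prompt,
// not the exact mission output.
function changeParagraphContent(newContent) {
  const paragraph = document.querySelector("p");
  if (paragraph) {
    // The argument is parsed as HTML, so any embedded markup
    // (including script-bearing attributes) is interpreted by the browser.
    paragraph.innerHTML = newContent;
  }
}

// Harmless-looking usage...
changeParagraphContent("Hello, world!");

// ...but if the argument ever comes from user input or a URL parameter,
// a payload like this executes instead of being displayed as text:
changeParagraphContent('<img src=x onerror="alert(document.cookie)">');
```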
XSS exploits the way web browsers interpret content. It occurs when untrusted input is rendered as output on a page and treated as executable, trusted code rather than data. An attacker can place a malicious snippet (HTML tags, JavaScript, and so on) in an input parameter, which, when returned to the browser, is executed instead of being displayed as data.
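Assuming the passed-in value only ever needs to appear as plain text, a minimal hardening of the sketch above is to use an API that treats the value as data rather than markup, such as textContent:

```javascript
// A minimal hardened sketch, assuming the value should only ever be
// displayed as plain text rather than rendered as markup.
function changeParagraphContent(newContent) {
  const paragraph = document.querySelector("p");
  if (paragraph) {
    // textContent treats the value strictly as data: markup and
    // script-bearing attributes are shown as text, never executed.
    paragraph.textContent = newContent;
  }
}

// The same payload is now rendered as inert text on the page.
changeParagraphContent('<img src=x onerror="alert(document.cookie)">');
```

If the feature genuinely needs to render HTML, the untrusted value should first be run through a vetted sanitization library (for example, DOMPurify) rather than trusted as-is.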
Using AI coding assistants safely in software development
A recent survey of working development teams revealed that almost all of them, 96%, have started using AI assistants in their workflows, and that 80% even bypass security policies to keep them in their toolkit. Further, more than half acknowledged that generative AI tools commonly produce insecure code, yet this clearly has not slowed adoption.
In this new era of software development, discouraging or banning these tools is unlikely to work. Instead, organizations must enable their development teams to capture the efficiency and productivity gains without sacrificing security or code quality. That requires precision training on secure coding best practices, along with opportunities to expand critical thinking skills, so developers act with a security-first mindset, especially when assessing the potential threat of AI assistant code output.
Further reading
For XSS in general, check out our comprehensive guide.
Want to learn more about how to write secure code and mitigate risk? Try out our XSS injection challenge for free.
If you’re interested in getting more free coding guidelines, check out Secure Code Coach to help you stay on top of secure coding best practices.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Laura Verheyde is a software developer at Secure Code Warrior focused on researching vulnerabilities and creating content for Missions and Coding labs.