Reaping the benefits of AI innovation depends on starting with secure code
Software developers have shown they are ready and willing to use generative artificial intelligence (AI) to write code, and the results have generally been favorable. But there are also plenty of signs that they could be playing a dangerous game.
According to a recent survey by GitHub, more than 90% of U.S. developers are using AI coding tools, citing advantages such as faster completion times, quicker incident resolution, and a more collaborative environment. Working with AI tools lets developers hand off routine tasks so they can focus on more creative work that benefits their companies and, not incidentally, reduces the chances of on-the-job burnout.
However, studies have also shown that AI tools have a propensity to introduce flaws when writing code. A survey by Snyk found that although 75.8% of respondents said that AI code is more secure than human code, 56.4% admitted that AI sometimes introduces coding issues. Alarmingly, 80% of respondents said they bypass AI code security policies during development.
Since OpenAI's ChatGPT debuted in November 2022, the use of generative AI models has spread with lightning speed throughout the code development process in financial services, as it has in many other fields. The rapid emergence of other tools, such as GitHub Copilot and OpenAI Codex, suggests that we have only scratched the surface of what generative AI can do and the impact it can have. But for that impact to be positive, we need to ensure that the code it generates is secure.
Coding bugs can spread quickly
Whether created by human developers or AI models, code is going to contain some errors. With AI helping to accelerate code development to meet the ever-increasing demands in highly distributed, cloud-based computing environments, the chances of bad code propagating widely before it is caught could increase.
AI models trained to write code ingest thousands of code examples that perform various tasks, and then draw on those examples to create their own code. But if the samples they work from contain flaws or vulnerabilities (whether originally written by a human or another AI), the model can carry those flaws into a new environment.
Because research has shown that AI models can't reliably recognize flaws in the code they draw on, there is little built-in defense against the spread of flaws and vulnerabilities. AI will not only make mistakes in coding, but it will repeat its own mistakes and those of other sources until the vulnerability is identified somewhere down the line, perhaps in the form of a successful breach of a company using the software it created.
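To make that risk concrete, here is a minimal, hypothetical Python sketch of the kind of pattern a model can pick up from its training data: the first function builds a SQL query by string concatenation, a classic injection flaw that appears in countless public code samples, while the second shows the parameterized alternative a security-trained reviewer would insist on. The table and column names are invented purely for illustration.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Flawed pattern common in public code samples: the query is assembled by
    # string concatenation, so input like "' OR '1'='1" changes the SQL itself
    # (SQL injection). A model trained on such samples may reproduce it.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately, so the
    # input can never alter the structure of the statement.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The two versions behave identically for benign input, which is exactly why the flaw is easy to miss in a quick review of AI-generated code.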
The real defense against the proliferation of coding flaws is for humans and AI models to work together. Human developers should oversee AI code writing and serve as a check against insecure coding practices and vulnerable code. But for that to happen, developers must be thoroughly trained in secure coding best practices so they can identify mistakes an AI might make and quickly correct them.
The challenges of AI code creation and remediation
The sudden explosion of large language models (LLMs) like ChatGPT has been something of a double-edged sword. On one side, companies and everyday users have seen tremendous productivity gains from using AI to handle time-consuming, onerous, or difficult chores. On the other side, there have been plenty of examples of what can go wrong when blindly trusting AI to handle the work.
AI models have made glaring mistakes, demonstrated bias, and produced flat-out hallucinations. In many cases, the root of the problem was inadequate or irresponsibly sourced training data. Any AI model is only as good as the data it's trained on, so it's essential that training data be comprehensive and carefully vetted. Even then, however, some mistakes will be made.
The use of AI for coding faces many of the same hurdles. Code generated by AI has been shown to contain a range of flaws, such as vulnerabilities to cross-site scripting and code injection, as well as attacks specific to AI and machine learning (ML), such as prompt injection. AI models also operate in a black box because their processes aren’t transparent, which prevents a security or development team from seeing how an AI reaches its conclusions. As a result, the model can repeat the same mistakes over and over. The same shortcomings that can affect code writing also carry over to using AI for code remediation and meeting compliance requirements.
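As an illustration of the first category, the short Python sketch below shows a reflected cross-site scripting (XSS) flaw of the kind that has turned up in AI-generated code, alongside the output-encoding fix. The function names are hypothetical, and the example uses only the standard library.

```python
from html import escape

def render_greeting_insecure(name: str) -> str:
    # Untrusted input is interpolated straight into HTML, so a value such as
    # "<script>alert(1)</script>" would run in the browser (reflected XSS).
    return f"<h1>Hello {name}</h1>"

def render_greeting_secure(name: str) -> str:
    # HTML-encoding the untrusted value neutralizes any injected markup.
    return f"<h1>Hello {escape(name)}</h1>"

print(render_greeting_secure("<script>alert(1)</script>"))
# <h1>Hello &lt;script&gt;alert(1)&lt;/script&gt;</h1>
```

A developer trained to spot missing output encoding can catch this in review even when the surrounding AI-generated code looks plausible.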
The potential for flaws being created or repeated by AI models has grown to the point that LLMs now have their own Open Web Application Security Project (OWASP) Top 10 list of vulnerabilities.
Developers and AI can work together to create secure code
Concerns about the potential flaws in AI-generated code might give some organizations pause, albeit briefly, about moving ahead with the technology. But the potential benefits are too great to ignore, especially as AI developers continue to innovate and improve their models. The financial services industry, for one, is unlikely to put the genie back in the bottle. Banks and financial services companies are technology-driven already, and they operate in a field where they’re always looking for a competitive advantage.
The key is to implement AI models in a way that minimizes risk. And that means having developers who are security-aware and thoroughly trained in secure coding best practices—so they can write secure code themselves and closely monitor the code AI models produce. By having AI engines and human developers working together in a tight partnership, with developers having the final say, firms can reap the benefits of improved productivity and efficiency while also improving security, limiting risk and ensuring compliance.
For a comprehensive overview of how secure coding can help ensure success, security, and profits for financial services companies, read the newly released Secure Code Warrior guide: The ultimate guide to security trends in financial services.
Check out the Secure Code Warrior blog pages for more insight into cybersecurity, the increasingly dangerous threat landscape, and how you can employ innovative technology and training to better protect your organization and your customers.