Blog

Reaping the benefits of AI innovation depends on starting with secure code

Secure Code Warrior
Published Apr 09, 2024

Software developers have shown they are ready and willing to make use of generative artificial intelligence (AI) to write code, and they have generally seen some favorable results. But there are also plenty of signs that they could be playing a dangerous game.

According to a recent survey by GitHub, more than 90% of U.S. developers are using AI coding tools, citing advantages such as faster completion times, quick resolution of incidents, and a more collaborative environment, which is something they feel is important. Working with AI tools allows developers to hand off routine tasks, allowing them to work on more creative jobs that benefit their companies and, not incidentally, reduces the chances of on-the-job burnout.

However, studies have also shown that AI tools have a propensity to introduce flaws when writing code. A survey by Snyk found that although 75.8% of respondents said that AI code is more secure than human code, 56.4% admitted that AI sometimes introduces coding issues. Alarmingly, 80% of respondents said they bypass AI code security policies during development.

Since OpenAI's ChatGPT debuted in November 2022, the use of generative AI models has spread with lightning speed throughout the code development process in financial services, as it has in many other fields. The fast emergence of other models, such as GitHub Copilot, OpenAI Codex, and a growing list of others, suggests that we have only scratched the surface of what generative AI can do and the impact it can have. But for that impact to be positive, we need to ensure that the code it generates is secure.

Coding bugs can spread quickly

Whether created by human developers or AI models, code is going to contain some errors. With AI helping to accelerate code development to meet the ever-increasing demands in highly distributed, cloud-based computing environments, the chances of bad code propagating widely before it is caught could increase. 

AI models being trained to write code will ingest thousands of examples of code that perform various tasks, and then they can draw on those examples to create their own code. But if the samples that it’s working from contain flaws or vulnerabilities—whether they were originally created by a human or another AI—the model could transfer those flaws to a new environment. 

Considering research has shown that AI models aren’t capable of reliably recognizing flaws in the code it’s using, there is little built-in defense against the spread of flaws and vulnerabilities. AI will not only make mistakes in coding, but it will repeat its own mistakes and those of other sources until the vulnerability is identified somewhere down the line—perhaps in the form of a successful breach of a company using the software it created.

The real defense against the proliferation of coding flaws is for humans and AI models to work together. Human developers should oversee AI code writing and serve as a check against unsecure coding practices and vulnerable code. But for that to happen, developers must be thoroughly trained in the best practices of secure code writing so they can identify coding mistakes an AI might make and quickly correct them.

The challenges of AI code creation and remediation

The sudden explosion of large language models (LLMs) like ChatGPT has been something of a double-edged sword. On one side, companies and everyday users have seen tremendous productivity gains from using AI to handle time-consuming, onerous, or difficult chores. On the other side, there have been plenty of examples of what can go wrong when blindly trusting AI to handle the work.

AI models have made glaring mistakes, demonstrated bias and produced flat-out hallucinations. In many cases, the root of the problem was inadequate or irresponsible training data. Any AI model is only as good as the data it’s trained on, so it’s essential that training data be comprehensive and carefully vetted. Even then, however, some mistakes will be made.

The use of AI for coding faces many of the same hurdles. Code generated by AI has been shown to contain a range of flaws, such as vulnerabilities to cross-site scripting and code injection, as well as attacks specific to AI and machine learning (ML), such as prompt injection. AI models also operate in a black box because their processes aren’t transparent, which prevents a security or development team from seeing how an AI reaches its conclusions. As a result, the model can repeat the same mistakes over and over. The same shortcomings that can affect code writing also carry over to using AI for code remediation and meeting compliance requirements. 

The potential for flaws being created or repeated by AI models has grown to the point that LLMs now have their own Open Web Application Security Project (OWASP) list of top ten vulnerabilities.

Developers and AI can work together to create secure code

Concerns about the potential flaws in AI-generated code might give some organizations pause, albeit briefly, about moving ahead with the technology. But the potential benefits are too great to ignore, especially as AI developers continue to innovate and improve their models. The financial services industry, for one, is unlikely to put the genie back in the bottle. Banks and financial services companies are technology-driven already, and they operate in a field where they’re always looking for a competitive advantage. 

The key is to implement AI models in a way that minimizes risk. And that means having developers who are security-aware and thoroughly trained in secure coding best practices—so they can write secure code themselves and closely monitor the code AI models produce. By having AI engines and human developers working together in a tight partnership, with developers having the final say, firms can reap the benefits of improved productivity and efficiency while also improving security, limiting risk and ensuring compliance. 

For a comprehensive overview of how secure coding can help ensure success, security, and profits for financial services companies, you can read the newly-released Secure Code Warrior guide: The ultimate guide to security trends in financial services.

Check out the Secure Code Warrior blog pages for more insight about cybersecurity, the increasingly dangerous threat landscape, and to learn about how you can employ innovative technology and training to better protect your organization and your customers.

View Resource
View Resource

Generative AI offers financial services companies a lot of advantages, but also a lot of potential risk. Training developers in security best practices and pairing them with AI models can help create secure code from the start.

Interested in more?

Secure Code Warrior makes secure coding a positive and engaging experience for developers as they increase their skills. We guide each coder along their own preferred learning pathway, so that security-skilled developers become the everyday superheroes of our connected world.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demo
Share on:
Author
Secure Code Warrior
Published Apr 09, 2024

Secure Code Warrior makes secure coding a positive and engaging experience for developers as they increase their skills. We guide each coder along their own preferred learning pathway, so that security-skilled developers become the everyday superheroes of our connected world.

Secure Code Warrior builds a culture of security-driven developers by giving them the skills  to code securely. Our flagship Agile Learning Platform delivers relevant skills-based pathways,  hands-on missions, and contextual tools for developers to rapidly learn, build, and apply  their skills to write secure code at speed.

Share on:

Software developers have shown they are ready and willing to make use of generative artificial intelligence (AI) to write code, and they have generally seen some favorable results. But there are also plenty of signs that they could be playing a dangerous game.

According to a recent survey by GitHub, more than 90% of U.S. developers are using AI coding tools, citing advantages such as faster completion times, quick resolution of incidents, and a more collaborative environment, which is something they feel is important. Working with AI tools allows developers to hand off routine tasks, allowing them to work on more creative jobs that benefit their companies and, not incidentally, reduces the chances of on-the-job burnout.

However, studies have also shown that AI tools have a propensity to introduce flaws when writing code. A survey by Snyk found that although 75.8% of respondents said that AI code is more secure than human code, 56.4% admitted that AI sometimes introduces coding issues. Alarmingly, 80% of respondents said they bypass AI code security policies during development.

Since OpenAI's ChatGPT debuted in November 2022, the use of generative AI models has spread with lightning speed throughout the code development process in financial services, as it has in many other fields. The fast emergence of other models, such as GitHub Copilot, OpenAI Codex, and a growing list of others, suggests that we have only scratched the surface of what generative AI can do and the impact it can have. But for that impact to be positive, we need to ensure that the code it generates is secure.

Coding bugs can spread quickly

Whether created by human developers or AI models, code is going to contain some errors. With AI helping to accelerate code development to meet the ever-increasing demands in highly distributed, cloud-based computing environments, the chances of bad code propagating widely before it is caught could increase. 

AI models being trained to write code will ingest thousands of examples of code that perform various tasks, and then they can draw on those examples to create their own code. But if the samples that it’s working from contain flaws or vulnerabilities—whether they were originally created by a human or another AI—the model could transfer those flaws to a new environment. 

Considering research has shown that AI models aren’t capable of reliably recognizing flaws in the code it’s using, there is little built-in defense against the spread of flaws and vulnerabilities. AI will not only make mistakes in coding, but it will repeat its own mistakes and those of other sources until the vulnerability is identified somewhere down the line—perhaps in the form of a successful breach of a company using the software it created.

The real defense against the proliferation of coding flaws is for humans and AI models to work together. Human developers should oversee AI code writing and serve as a check against unsecure coding practices and vulnerable code. But for that to happen, developers must be thoroughly trained in the best practices of secure code writing so they can identify coding mistakes an AI might make and quickly correct them.

The challenges of AI code creation and remediation

The sudden explosion of large language models (LLMs) like ChatGPT has been something of a double-edged sword. On one side, companies and everyday users have seen tremendous productivity gains from using AI to handle time-consuming, onerous, or difficult chores. On the other side, there have been plenty of examples of what can go wrong when blindly trusting AI to handle the work.

AI models have made glaring mistakes, demonstrated bias and produced flat-out hallucinations. In many cases, the root of the problem was inadequate or irresponsible training data. Any AI model is only as good as the data it’s trained on, so it’s essential that training data be comprehensive and carefully vetted. Even then, however, some mistakes will be made.

The use of AI for coding faces many of the same hurdles. Code generated by AI has been shown to contain a range of flaws, such as vulnerabilities to cross-site scripting and code injection, as well as attacks specific to AI and machine learning (ML), such as prompt injection. AI models also operate in a black box because their processes aren’t transparent, which prevents a security or development team from seeing how an AI reaches its conclusions. As a result, the model can repeat the same mistakes over and over. The same shortcomings that can affect code writing also carry over to using AI for code remediation and meeting compliance requirements. 

The potential for flaws being created or repeated by AI models has grown to the point that LLMs now have their own Open Web Application Security Project (OWASP) list of top ten vulnerabilities.

Developers and AI can work together to create secure code

Concerns about the potential flaws in AI-generated code might give some organizations pause, albeit briefly, about moving ahead with the technology. But the potential benefits are too great to ignore, especially as AI developers continue to innovate and improve their models. The financial services industry, for one, is unlikely to put the genie back in the bottle. Banks and financial services companies are technology-driven already, and they operate in a field where they’re always looking for a competitive advantage. 

The key is to implement AI models in a way that minimizes risk. And that means having developers who are security-aware and thoroughly trained in secure coding best practices—so they can write secure code themselves and closely monitor the code AI models produce. By having AI engines and human developers working together in a tight partnership, with developers having the final say, firms can reap the benefits of improved productivity and efficiency while also improving security, limiting risk and ensuring compliance. 

For a comprehensive overview of how secure coding can help ensure success, security, and profits for financial services companies, you can read the newly-released Secure Code Warrior guide: The ultimate guide to security trends in financial services.

Check out the Secure Code Warrior blog pages for more insight about cybersecurity, the increasingly dangerous threat landscape, and to learn about how you can employ innovative technology and training to better protect your organization and your customers.

View Resource
View Resource

Fill out the form below to download the report

We would like your permission to send you information on our products and/or related secure coding topics. We’ll always treat your personal details with the utmost care and will never sell them to other companies for marketing purposes.

Submit
To submit the form, please enable 'Analytics' cookies. Feel free to disable them again once you're done.

Software developers have shown they are ready and willing to make use of generative artificial intelligence (AI) to write code, and they have generally seen some favorable results. But there are also plenty of signs that they could be playing a dangerous game.

According to a recent survey by GitHub, more than 90% of U.S. developers are using AI coding tools, citing advantages such as faster completion times, quick resolution of incidents, and a more collaborative environment, which is something they feel is important. Working with AI tools allows developers to hand off routine tasks, allowing them to work on more creative jobs that benefit their companies and, not incidentally, reduces the chances of on-the-job burnout.

However, studies have also shown that AI tools have a propensity to introduce flaws when writing code. A survey by Snyk found that although 75.8% of respondents said that AI code is more secure than human code, 56.4% admitted that AI sometimes introduces coding issues. Alarmingly, 80% of respondents said they bypass AI code security policies during development.

Since OpenAI's ChatGPT debuted in November 2022, the use of generative AI models has spread with lightning speed throughout the code development process in financial services, as it has in many other fields. The fast emergence of other models, such as GitHub Copilot, OpenAI Codex, and a growing list of others, suggests that we have only scratched the surface of what generative AI can do and the impact it can have. But for that impact to be positive, we need to ensure that the code it generates is secure.

Coding bugs can spread quickly

Whether created by human developers or AI models, code is going to contain some errors. With AI helping to accelerate code development to meet the ever-increasing demands in highly distributed, cloud-based computing environments, the chances of bad code propagating widely before it is caught could increase. 

AI models being trained to write code will ingest thousands of examples of code that perform various tasks, and then they can draw on those examples to create their own code. But if the samples that it’s working from contain flaws or vulnerabilities—whether they were originally created by a human or another AI—the model could transfer those flaws to a new environment. 

Considering research has shown that AI models aren’t capable of reliably recognizing flaws in the code it’s using, there is little built-in defense against the spread of flaws and vulnerabilities. AI will not only make mistakes in coding, but it will repeat its own mistakes and those of other sources until the vulnerability is identified somewhere down the line—perhaps in the form of a successful breach of a company using the software it created.

The real defense against the proliferation of coding flaws is for humans and AI models to work together. Human developers should oversee AI code writing and serve as a check against unsecure coding practices and vulnerable code. But for that to happen, developers must be thoroughly trained in the best practices of secure code writing so they can identify coding mistakes an AI might make and quickly correct them.

The challenges of AI code creation and remediation

The sudden explosion of large language models (LLMs) like ChatGPT has been something of a double-edged sword. On one side, companies and everyday users have seen tremendous productivity gains from using AI to handle time-consuming, onerous, or difficult chores. On the other side, there have been plenty of examples of what can go wrong when blindly trusting AI to handle the work.

AI models have made glaring mistakes, demonstrated bias and produced flat-out hallucinations. In many cases, the root of the problem was inadequate or irresponsible training data. Any AI model is only as good as the data it’s trained on, so it’s essential that training data be comprehensive and carefully vetted. Even then, however, some mistakes will be made.

The use of AI for coding faces many of the same hurdles. Code generated by AI has been shown to contain a range of flaws, such as vulnerabilities to cross-site scripting and code injection, as well as attacks specific to AI and machine learning (ML), such as prompt injection. AI models also operate in a black box because their processes aren’t transparent, which prevents a security or development team from seeing how an AI reaches its conclusions. As a result, the model can repeat the same mistakes over and over. The same shortcomings that can affect code writing also carry over to using AI for code remediation and meeting compliance requirements. 

The potential for flaws being created or repeated by AI models has grown to the point that LLMs now have their own Open Web Application Security Project (OWASP) list of top ten vulnerabilities.

Developers and AI can work together to create secure code

Concerns about the potential flaws in AI-generated code might give some organizations pause, albeit briefly, about moving ahead with the technology. But the potential benefits are too great to ignore, especially as AI developers continue to innovate and improve their models. The financial services industry, for one, is unlikely to put the genie back in the bottle. Banks and financial services companies are technology-driven already, and they operate in a field where they’re always looking for a competitive advantage. 

The key is to implement AI models in a way that minimizes risk. And that means having developers who are security-aware and thoroughly trained in secure coding best practices—so they can write secure code themselves and closely monitor the code AI models produce. By having AI engines and human developers working together in a tight partnership, with developers having the final say, firms can reap the benefits of improved productivity and efficiency while also improving security, limiting risk and ensuring compliance. 

For a comprehensive overview of how secure coding can help ensure success, security, and profits for financial services companies, you can read the newly-released Secure Code Warrior guide: The ultimate guide to security trends in financial services.

Check out the Secure Code Warrior blog pages for more insight about cybersecurity, the increasingly dangerous threat landscape, and to learn about how you can employ innovative technology and training to better protect your organization and your customers.

Access resource

Click on the link below and download the PDF of this resource.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

View reportBook a demo
Download PDF
View Resource
Share on:
Interested in more?

Share on:
Author
Secure Code Warrior
Published Apr 09, 2024

Secure Code Warrior makes secure coding a positive and engaging experience for developers as they increase their skills. We guide each coder along their own preferred learning pathway, so that security-skilled developers become the everyday superheroes of our connected world.

Secure Code Warrior builds a culture of security-driven developers by giving them the skills  to code securely. Our flagship Agile Learning Platform delivers relevant skills-based pathways,  hands-on missions, and contextual tools for developers to rapidly learn, build, and apply  their skills to write secure code at speed.

Share on:

Software developers have shown they are ready and willing to make use of generative artificial intelligence (AI) to write code, and they have generally seen some favorable results. But there are also plenty of signs that they could be playing a dangerous game.

According to a recent survey by GitHub, more than 90% of U.S. developers are using AI coding tools, citing advantages such as faster completion times, quick resolution of incidents, and a more collaborative environment, which is something they feel is important. Working with AI tools allows developers to hand off routine tasks, allowing them to work on more creative jobs that benefit their companies and, not incidentally, reduces the chances of on-the-job burnout.

However, studies have also shown that AI tools have a propensity to introduce flaws when writing code. A survey by Snyk found that although 75.8% of respondents said that AI code is more secure than human code, 56.4% admitted that AI sometimes introduces coding issues. Alarmingly, 80% of respondents said they bypass AI code security policies during development.

Since OpenAI's ChatGPT debuted in November 2022, the use of generative AI models has spread with lightning speed throughout the code development process in financial services, as it has in many other fields. The fast emergence of other models, such as GitHub Copilot, OpenAI Codex, and a growing list of others, suggests that we have only scratched the surface of what generative AI can do and the impact it can have. But for that impact to be positive, we need to ensure that the code it generates is secure.

Coding bugs can spread quickly

Whether created by human developers or AI models, code is going to contain some errors. With AI helping to accelerate code development to meet the ever-increasing demands in highly distributed, cloud-based computing environments, the chances of bad code propagating widely before it is caught could increase. 

AI models being trained to write code will ingest thousands of examples of code that perform various tasks, and then they can draw on those examples to create their own code. But if the samples that it’s working from contain flaws or vulnerabilities—whether they were originally created by a human or another AI—the model could transfer those flaws to a new environment. 

Considering research has shown that AI models aren’t capable of reliably recognizing flaws in the code it’s using, there is little built-in defense against the spread of flaws and vulnerabilities. AI will not only make mistakes in coding, but it will repeat its own mistakes and those of other sources until the vulnerability is identified somewhere down the line—perhaps in the form of a successful breach of a company using the software it created.

The real defense against the proliferation of coding flaws is for humans and AI models to work together. Human developers should oversee AI code writing and serve as a check against unsecure coding practices and vulnerable code. But for that to happen, developers must be thoroughly trained in the best practices of secure code writing so they can identify coding mistakes an AI might make and quickly correct them.

The challenges of AI code creation and remediation

The sudden explosion of large language models (LLMs) like ChatGPT has been something of a double-edged sword. On one side, companies and everyday users have seen tremendous productivity gains from using AI to handle time-consuming, onerous, or difficult chores. On the other side, there have been plenty of examples of what can go wrong when blindly trusting AI to handle the work.

AI models have made glaring mistakes, demonstrated bias and produced flat-out hallucinations. In many cases, the root of the problem was inadequate or irresponsible training data. Any AI model is only as good as the data it’s trained on, so it’s essential that training data be comprehensive and carefully vetted. Even then, however, some mistakes will be made.

The use of AI for coding faces many of the same hurdles. Code generated by AI has been shown to contain a range of flaws, such as vulnerabilities to cross-site scripting and code injection, as well as attacks specific to AI and machine learning (ML), such as prompt injection. AI models also operate in a black box because their processes aren’t transparent, which prevents a security or development team from seeing how an AI reaches its conclusions. As a result, the model can repeat the same mistakes over and over. The same shortcomings that can affect code writing also carry over to using AI for code remediation and meeting compliance requirements. 

The potential for flaws being created or repeated by AI models has grown to the point that LLMs now have their own Open Web Application Security Project (OWASP) list of top ten vulnerabilities.

Developers and AI can work together to create secure code

Concerns about the potential flaws in AI-generated code might give some organizations pause, albeit briefly, about moving ahead with the technology. But the potential benefits are too great to ignore, especially as AI developers continue to innovate and improve their models. The financial services industry, for one, is unlikely to put the genie back in the bottle. Banks and financial services companies are technology-driven already, and they operate in a field where they’re always looking for a competitive advantage. 

The key is to implement AI models in a way that minimizes risk. And that means having developers who are security-aware and thoroughly trained in secure coding best practices—so they can write secure code themselves and closely monitor the code AI models produce. By having AI engines and human developers working together in a tight partnership, with developers having the final say, firms can reap the benefits of improved productivity and efficiency while also improving security, limiting risk and ensuring compliance. 

For a comprehensive overview of how secure coding can help ensure success, security, and profits for financial services companies, you can read the newly-released Secure Code Warrior guide: The ultimate guide to security trends in financial services.

Check out the Secure Code Warrior blog pages for more insight about cybersecurity, the increasingly dangerous threat landscape, and to learn about how you can employ innovative technology and training to better protect your organization and your customers.

Table of contents

Download PDF
View Resource
Interested in more?

Secure Code Warrior makes secure coding a positive and engaging experience for developers as they increase their skills. We guide each coder along their own preferred learning pathway, so that security-skilled developers become the everyday superheroes of our connected world.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demoDownload
Share on:
Resource hub

Resources to get you started

More posts
Resource hub

Resources to get you started

More posts