Code out of the cage: Why secure developers can ship without limits
This article originally appeared in SD Times. It has been updated and syndicated here.
Skills verification has been a facet of modern life for generations, granting us credibility and opening doors that wouldn’t otherwise be available. Driving, for example, is an important rite of passage for most of us, and we’re expected to pass a set of standardized assessments to confirm that we can be trusted with a four-thousand-pound machine capable of traveling at over a hundred miles an hour. Mistakes, especially at speed, can cost you that privilege, or even a human life.
But what if, for some, driving becomes more than a day-to-day convenience and turns into an elite profession? A person can continue their upskilling journey and potentially become an F1 driver, permitted to operate machines far faster than any civilian could realistically handle without a huge likelihood of error.
With that in mind, it seems baffling that most developers who work on code powering critical infrastructure, automobiles, medical tech, and everything in between do so without first verifying their security prowess. On the other hand, why do security-skilled developers, who have repeatedly proven that they understand how to build software securely, need to queue with everyone else in ever-slowing development pipelines because of all the security gates? The industry doesn’t see this as an oversight; it’s simply the norm.
We know from extensive research that most developers simply do not prioritize security in their code, and they lack the regular education required to navigate many common security bugs. They tend to be part of the reason that security at speed seems like a pipe dream, and many security-skilled developers feel as if they are stuck in the slow lane of the Autobahn behind a bunch of learner drivers.
Despite this, the security world is slowly lurching forward, and there is increasing demand for developers with verified security skills who can hit the ground running. The Biden Administration’s Executive Order on Improving the Nation’s Cybersecurity specifically calls for evaluating the security practices of vendors, and of their development cohorts, for any supplier in the US government’s software supply chain. It stands to reason that the emphasis on developer security skills will only grow across most sectors, but with little on offer in the way of industry-standard assessments, how can organizations prove their security program is building verifiable developer security skills in a way that won’t bring delivery to its knees, or stop security-aware developers from spreading their wings?
Merit-based access control: Could it work?
Least-privilege security controls are a mainstay in many organizations: each role is granted access to the software, data, and systems it needs to do its job, and nothing more. This method, especially when paired with zero-trust authorization principles, helps rein in the attack surface. And, really, we should apply the same strategy to API permissions and other software-based use cases as standard.
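To make that concrete, here is a minimal sketch of a least-privilege, deny-by-default access check. The roles, resources, and actions are hypothetical placeholders; a real deployment would back this with an IAM service or policy engine rather than an in-memory table, but the principle is the same: anything not explicitly granted is denied.

```python
# Illustrative sketch of least-privilege, deny-by-default access checks.
# Roles, resources, and actions below are hypothetical examples.
from typing import Dict, Set, Tuple

# Each role is granted only the (resource, action) pairs it needs, nothing more.
POLICY: Dict[str, Set[Tuple[str, str]]] = {
    "payments-engineer": {("payments-repo", "read"), ("payments-repo", "write")},
    "support-analyst": {("payments-repo", "read")},
}


def is_allowed(role: str, resource: str, action: str) -> bool:
    """Zero-trust style check: anything not explicitly granted is denied."""
    return (resource, action) in POLICY.get(role, set())


# Write access exists only where it was explicitly granted.
assert is_allowed("payments-engineer", "payments-repo", "write")
assert not is_allowed("support-analyst", "payments-repo", "write")  # denied by default
```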
Most of us in the security business are hyper-aware that software is eating the world, and in terms of exploitability, the embedded systems code running your air fryer is really no different from the code keeping the power grid up and running. Our lives and critical data are at the mercy of threat actors, and every developer, once properly educated, must understand the power they have to fortify their code. It requires a serious upgrade to an organization’s security culture, but for true DevSecOps-style shared responsibility, developers need a reason to care more about the role they play, and perhaps the fastest way to shift their mindset would be to tie code repository access to secure coding learning outcomes.
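As a thought experiment, the sketch below shows what that tie-in could look like in practice: look up a developer’s latest secure-coding assessment score and grant write or read-only repository access accordingly. The training-score endpoint, threshold, organization, repository, and username are hypothetical placeholders; the collaborator call follows GitHub’s documented REST API, but any Git platform with a permissions API could play the same role.

```python
# Illustrative sketch only: gate repository write access on a developer's
# secure-coding assessment score. The training API and threshold are
# hypothetical; the GitHub collaborator endpoint is the documented one,
# but verify the details against your own platform and tooling.
import os
import requests

TRAINING_API = "https://training.example.com/api/v1/scores"  # hypothetical endpoint
GITHUB_API = "https://api.github.com"
REQUIRED_SCORE = 80  # example threshold for "security-verified" status


def secure_coding_score(username: str) -> int:
    """Fetch a developer's latest secure-coding assessment score (hypothetical API)."""
    resp = requests.get(f"{TRAINING_API}/{username}", timeout=10)
    resp.raise_for_status()
    return resp.json()["score"]


def grant_repo_access(owner: str, repo: str, username: str, permission: str) -> None:
    """Add a collaborator with the given permission via the GitHub REST API."""
    resp = requests.put(
        f"{GITHUB_API}/repos/{owner}/{repo}/collaborators/{username}",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"permission": permission},
        timeout=10,
    )
    resp.raise_for_status()


def provision(owner: str, repo: str, username: str) -> None:
    # Developers who have verified their skills get write access to the
    # sensitive repository; everyone else stays read-only until their
    # learning outcomes catch up.
    permission = "push" if secure_coding_score(username) >= REQUIRED_SCORE else "pull"
    grant_repo_access(owner, repo, username, permission)


if __name__ == "__main__":
    provision("example-org", "payments-service", "some-developer")
```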
Take an organization in the BFSI space, for example: chances are good that it has highly sensitive repositories containing customer data or storing valuable information like credit card numbers. Why, then, should we assume that every engineer who has been granted access is security-aware, compliant with stringent PCI DSS requirements, and able to make changes to the master branch quickly and without incident? While that may be true of some, it would be far safer to restrict access to these delicate systems until that knowledge is proven.
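Restricting who can touch a sensitive repository’s protected branch is already well supported by mainstream tooling. The sketch below assumes GitHub’s documented branch protection endpoint and a hypothetical “security-verified-devs” team populated from training results; it limits direct pushes to members of that team and requires a reviewed pull request from everyone else. The organization, repository, branch, and team names are illustrative only.

```python
# Minimal sketch: restrict direct pushes on a sensitive repository's protected
# branch to a hypothetical "security-verified-devs" team and require reviewed
# pull requests otherwise. Uses GitHub's documented branch protection endpoint;
# org, repo, branch, and team names are placeholders.
import os
import requests

GITHUB_API = "https://api.github.com"


def protect_branch(owner: str, repo: str, branch: str = "main") -> None:
    payload = {
        "required_status_checks": None,
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        # Only members of the security-verified team may push directly.
        "restrictions": {"users": [], "teams": ["security-verified-devs"], "apps": []},
    }
    resp = requests.put(
        f"{GITHUB_API}/repos/{owner}/{repo}/branches/{branch}/protection",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    protect_branch("example-org", "cardholder-data-service")
```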
The challenge is that in most companies, enacting a “license to code” scenario would be arduous and, depending on the training solution, a little too manual to support any security-at-speed objectives. However, the right combination of integrated education and tooling can form the core of a developer-driven, defensive security strategy.
Effective training integration is not impossible
Finding developer upskilling solutions that complement both high-velocity business objectives and developers’ workflows is half the battle, but going the extra mile to move past “one-and-done” compliance training is the only way we will start to see a meaningful reduction in code-level vulnerabilities. And for developers who successfully prove themselves? Well, the coding world is their oyster, and they don’t have to be hamstrung by security controls that assume they can’t navigate the basics.
Hands-on skills advancement that integrates seamlessly with the development environment provides the context engineers need to truly understand and apply secure coding concepts. Those same integrations can be used to manage access to critical systems, ensuring that developers who excel in their learning outcomes can work on the highest-priority sensitive tasks without hindrance. This also makes it easier to implement rewards and recognition, so that security-skilled developers are seen as aspirational within their cohort.
Like many things in life, fortune favors the brave, and breaking the status quo to adopt an out-of-the-box approach to developer skills verification is exactly what we need to raise tomorrow’s standards of acceptable code quality without sacrificing speed.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.