Software Security is in the Wild West (and it's going to get us killed)
Originally published in CSO Online
As a (semi-retired) ethical hacker, security professional and all-round computer geek, you might say I care a lot about technology. I care about how it was crafted, what it does and how it's going to make some aspect of our lives better or more efficient. I look "under the hood" of devices all the time, seeing some of the best (and worst) examples of code out there. Recently, I looked at how my air-con system could be controlled remotely with an Android app (imagine my surprise when I discovered that anyone on the WiFi network could control it without any authentication whatsoever).
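That air-con flaw is a textbook case of a missing authentication check on a local API. Here is a minimal Python sketch of the anti-pattern and its simplest fix; the function names, header format and token are all hypothetical, not the actual device's API:

```python
# Hypothetical sketch of the flaw: a local device API that executes any
# command it receives, with no authentication check at all.
def handle_request_insecure(headers: dict, command: str) -> str:
    # Anyone on the Wi-Fi network can send this request and be obeyed.
    return f"executing {command}"


# The minimal fix: require a shared secret before acting on a command.
API_TOKEN = "example-token"  # in practice, provisioned per device at pairing


def handle_request_secure(headers: dict, command: str) -> str:
    if headers.get("Authorization") != f"Bearer {API_TOKEN}":
        return "401 Unauthorized"
    return f"executing {command}"
```

The point is not the specific mechanism (a bearer token is just one option); it's that the insecure version trusts network position alone, and anyone who joins the WiFi is "authorized" by default.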
Software security is always front-of-mind for me, as is the very real danger posed by our increasingly digital, personal information-sharing lifestyles. After all, we are in a largely unregulated, unsupervised and blissfully ignored territory. We're in the Wild West.
As a collective society, we're not looking under the hood of the technology we use every day. Although popular and highly acclaimed TV series like Mr. Robot help with general awareness, we're not security-minded. In fact, most of us have no idea how secure the software is within the myriad applications, services and increasingly connected things we purchase and use. It's not even that we inherently trust them - we simply don't think about them at all.
Sony PlayStation Network (PSN), Ticketmaster, Yahoo!, Facebook, Target: every single one of these widely-used companies has been a victim of a data breach. Their software vulnerabilities were exploited, and millions upon millions of customer records were exposed. These examples represent a fraction of the global breaches that have taken place in the last ten years. They are a costly consequence of poor software security that allows the bad guys to steal our precious information.
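Many breaches of this era traced back to well-understood vulnerability classes, with SQL injection among the most common. A minimal sketch (using an in-memory SQLite table with made-up data) shows how a single line of string concatenation turns a lookup into a full data dump, and how a parameterized query prevents it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")


# Vulnerable: user input is concatenated straight into the SQL, so a
# crafted value like "' OR '1'='1" dumps every record instead of one.
def find_user_insecure(name: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()


# Safe: a parameterized query treats the input strictly as data, not SQL.
def find_user_secure(name: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Against the payload `' OR '1'='1`, the insecure version returns every row in the table; the secure version returns nothing, because no user is literally named that.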
When most people consider data breaches, they think about information security breaches. They are understood to be a nightmare for the company breached and inconvenient for those whose personal details have been exposed - but seriously, what's the big deal? If security continues to be ignored, are the consequences really that big? Nothing that major has happened so far. Data breaches have severe impacts for the companies responsible for them, but it's their problem, right? They lose business, they lose consumer trust; it's ultimately their job to sort it out and pay for the damage.
Software security should be every organization's priority.
There's a fairly simple reason why software security isn't the number one concern for every organization out there with a dev team: not enough people have lost their lives yet, and there's not enough knowledge about the risks.
Morbid? Perhaps. But it's the honest truth. Regulation, building standards and law-changing attention (from government agencies, for example) tend to arrive only when there is a real human cost.
Take a bridge, for example. Civil engineers (a line of work that is hundreds of years old) consider safety a core part of constructing a bridge. Their approach goes far beyond aesthetics and basic functionality. Every bridge built is expected to adhere to stringent safety regulations, with both the civil engineering profession and society as a whole having learned over time to expect a high level of safety. A bridge today that does not meet the safety requirements is considered dangerous and unusable. Software engineering has yet to undergo this evolution.
By way of a more modern example, let's take a look at the toy industry. The 1950s saw a huge surge in toy production and sales thanks to the post-war baby boom. Interestingly, the number of emergency room visits involving toy-related incidents was also reported to have increased during this time. Toy arrows were causing eye injuries, small toys (and detachable pieces from larger toys) were being ingested, and toy ovens marketed to little girls were capable of heating to higher temperatures than regular home ovens.
It was somewhat of a "Wild West" out there, with little regulation except for a few isolated bans and product recalls in the most dire of circumstances. Toy makers were essentially free to produce whatever toys they wished, with any concerns for safety typically flagged only after several reported incidents had already occurred. It wasn't until bills like the 1969 Toy Safety Act, signed into law by Richard Nixon, that the testing and subsequent banning of hazardous toys became standard, in the US and around the world. While accidents will still happen, today the general process of toy manufacturing adopts a "safety first" policy, and our kids are in far less potential danger.
In software security right now, we're in the Wild West. Apart from obvious laws and regulations around privacy (especially recently, with the GDPR) and the protection of customer data, plus mandatory breach-reporting legislation in some countries, very little is said and done in mainstream business or the wider community about the level of security built into software. Even those laws relate more to company responsibility than to the software itself being regulated or held to a compulsory standard of security.
We will get there, but it may require a path of destruction first.
Gartner estimated that 8.4 billion internet-connected devices would be in use in 2017, a 31 percent increase over 2016. This includes consumer electronics, as well as things like medical devices and industry-specific equipment. That's a heck of a lot of opportunities for a hacker.
Imagine the software running someone's pacemaker is insecure. A hacker could break in and potentially stop the heart of their victim (think that's ridiculous? Doctors disabled the WiFi in Dick Cheney's pacemaker to thwart potential assassination via hacking). A connected microwave or kettle could be blown up remotely (along with all manner of Internet of Things devices we enjoy in our homes), or a connected electric car could have its brakes disabled. This might sound like a far-fetched Hollywood action movie, but if the software of any of these advanced pieces of connected technology can be breached, we really do have a potential disaster on our hands - just like the threats we've already covered with the explosive ramifications of cyberattacks in the oil and gas industry.
We can preempt the dire consequences of malicious hacking as our lives become more and more digitally dependent. It all starts with getting developers more excited about secure coding, and getting serious about growing a strong security mindset and culture in development teams.
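A security mindset often comes down to internalizing a handful of habits. One of the most basic: never store passwords in plaintext. A short sketch of the standard approach, using Python's standard library (the iteration count and salt size here are illustrative choices, not a universal recommendation):

```python
import hashlib
import hmac
import os


# Derive a salted hash with a deliberately slow key-derivation function,
# so a stolen database can't be trivially reversed into passwords.
def hash_password(password: str) -> tuple:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


# Recompute the hash and compare in constant time to avoid timing leaks.
def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

A developer who reaches for this pattern by reflex, rather than `password == stored_value`, is exactly the kind of culture shift the paragraph above is describing.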
Your software revolution starts here. The banking industry is leading the way in embracing gamified training in the fight against bad code, in a truly innovative approach that turns traditional (read: boring) training on its head. In fact, each of the top six banks in Australia is currently engaging its developers this way, igniting their security mindset. Check out what our client, IAG Group, did with their next-level tournament.
Chief Executive Officer, Chairman, and Co-Founder
Pieter Danhieux is a globally recognized security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.