COVID-19 contact tracing: What's the secure coding situation?
A version of this article originally appeared in DevOps Digest. It has been updated and syndicated here.
At this point, I'm sure we're all getting a little tired of the phrase "in these unprecedented times"... but these really are unprecedented times. Who'd have thought at the end of last year that we would be racing to defeat a globally destructive pandemic this year, throwing everything we possibly could at it? It would have seemed almost laughable, more like the premise of a new Netflix sci-fi series than part of our worldwide reality. COVID-19 has completely transformed our social lives, economy, and job security, not to mention political priorities.
One of the counter-attacks against COVID-19 has come through technology, with many countries rolling out contact tracing apps. Australia has COVIDSafe, modeled on Singapore's TraceTogether. Hong Kong, Taiwan, China, South Korea, Israel, and Germany all have contact tracing technology implemented or on the way. The UK has been the hardest-hit region in Europe, with tens of thousands of virus-related deaths and a high infection rate, and the release of its app is imminent. The USA, also deeply affected, with many people tragically losing their lives, has technology rolling out as well, but its state-by-state approach to contact tracing makes the situation quite complex.
With the exception of more state-controlled countries like China and Taiwan, the use of these apps is voluntary, requiring citizens to download and use the technology of their own accord. Adoption rates vary widely; Singapore's TraceTogether app, for example, reached just 25%, rendering it largely ineffective for its intended purpose.
The idea behind contact tracing apps is sound. This technology, when functioning well, would ensure hotspots are quickly revealed and comprehensive testing can occur - both essential components of fighting the spread of a contagious virus. However, the words "government" and "tracing" don't exactly sound very inviting, and it's natural that people are cautious about what downloading something like this would actually mean for them.
So, what are the chief concerns of users? If online commentary is anything to go by, some of these misgivings include:
- Lack of trust in the government to use collected data responsibly
- Apprehension over how well personal data will be protected from cyberattacks
- Lack of clarity in what data is actually being collected, where it is stored, and with whom
- ... and for the developers/geeks among us, how solidly the apps are actually built.
It is always a bit of a worry when apps are built quickly, and these contact tracing apps are having to be rolled out in record time. It's a nightmare for developers, security people, and government agencies.
So, is mistrust a valid reaction? And what should we consider as a priority in our assessment of COVID-19 contact tracing apps and end-user safety? As a security guy, my instinct is, of course, to drill down into the cybersecurity elements of the program, namely how secure the codebase is for an app we're all (out of the best of intentions) being pushed to install.
Many of the apps are copies of each other (and inherit the same problems).
Australia's COVIDSafe app is essentially based on OpenTrace, as is Singapore's TraceTogether software. The problem, however, is that TraceTogether had a range of reported issues and a poor uptake, with just 25% of the population opting in as users - far short of the 75% required for it to be effective. There have been complaints regarding its general performance, especially on iOS, including batteries being drained very quickly. COVIDSafe has a potential UX flaw in its iOS version, requiring the phone to be unlocked and the app running in the foreground to record all data properly.
While the above issues are annoying, the more pressing concern is that Bluetooth vulnerabilities are rife, and neither TraceTogether nor Australia's COVIDSafe is immune to them. On May 14th, NIST reported that COVIDSafe had a denial-of-service vulnerability allowing an attacker within Bluetooth range to remotely crash the app. This would allow an organized attack to disrupt contact tracing in densely populated areas, exactly where it is most useful - something explained in detail by security researcher Richard Nelson. The flaw is known to affect COVIDSafe, TraceTogether, Poland's ProteGO, and Canada's ABTraceTogether - all of which inherit it from OpenTrace's faulty manuData.subdata call.
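The class of bug behind that crash is simple to illustrate. The payload following the Bluetooth manufacturer data header was sliced out without first checking that the advertisement was long enough; in Swift, an out-of-range `Data.subdata(in:)` call traps, so a short, attacker-crafted packet brings down the app. Here is a minimal defensive sketch - written in Python for brevity, with illustrative names rather than OpenTrace's actual API:

```python
from typing import Optional

HEADER_LEN = 2  # a 16-bit Bluetooth company identifier precedes the payload

def parse_manufacturer_data(manu_data: bytes) -> Optional[bytes]:
    """Extract the payload that follows the 2-byte company identifier.

    The fix for this class of crash is to validate the length of
    attacker-controlled bytes *before* slicing, so a truncated or
    malformed advertisement is dropped instead of crashing the app.
    """
    if len(manu_data) < HEADER_LEN:
        return None  # too short: ignore it rather than trap
    return manu_data[HEADER_LEN:]
```

The point is not the slicing syntax but the order of operations: validate untrusted input first, then parse.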
There are other privacy and security issues relating to Bluetooth functionality in general as well. The fact that this technology is being used to trace human movement through a unique ID (TempID) and collect meaningful data will inevitably attract attackers probing for weaknesses, at which point exactly what is being collected, where it is stored, and for how long must be scrutinized.
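To put TempID in context: in the BlueTrace protocol these apps derive from, the broadcast ID is deliberately short-lived (rotated roughly every 15 minutes) precisely so a passive listener cannot link sightings of the same device over time. The sketch below approximates the rotation idea with an HMAC; the real protocol uses server-side encryption of the user reference and validity window, so treat this as a hedged illustration, not the actual implementation:

```python
import hashlib
import hmac

ROTATION_SECONDS = 15 * 60  # BlueTrace rotates TempIDs roughly every 15 minutes

def temp_id(user_ref: bytes, server_key: bytes, now_epoch: int) -> str:
    """Derive the pseudonymous ID broadcast over Bluetooth for this time window.

    A device seen in two different windows produces unlinkable IDs unless
    you hold the server key - which is what limits long-term tracking by
    anyone sniffing nearby Bluetooth traffic.
    """
    window = now_epoch // ROTATION_SECONDS
    msg = user_ref + window.to_bytes(8, "big")
    return hmac.new(server_key, msg, hashlib.sha256).hexdigest()[:32]
```

This design is exactly why the long-term tracking flaw discussed below mattered so much: it undermined the one property the rotation scheme exists to provide.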
Some apps are already showing signs of simple errors that cause complex weaknesses.
Australian software engineer Geoffrey Huntley has been studying the source code of COVIDSafe, and sadly, there are issues that are not necessarily being highlighted to us, the end-users.
One critical example was a privacy-breaching logic error that would allow an attacker to perform long-term tracking of devices; something that poses enormous risk for vulnerable users, not to mention that it contravenes the app's own Privacy Policy.
It's important to note that these logic vulnerabilities have been patched as of May 14th, but the more pressing issue is that this was left unpatched, in the wild, for 17 days after Mr. Huntley reported it. He and other members of the awesome security community are tracking CVEs relating to the COVIDSafe app here.
One thing Huntley points out, post-patch, is that even the fix shows signs of, well, incompetence. In his public log, he notes the patch involved adding logic rather than simply deleting a flawed cache, with the latter being a far more robust remedy. Both work, but the live solution lacks finesse - a concern with such an important application.
Although we have diligent members of society using their own time and expertise to pore through source code and highlight issues, their job is made much harder than it would be if the code were open source in the first place. As it stands, 28 apps remain closed off to security researchers.
Secure coding continues to trip us up at the finish line.
While I can certainly sympathize with overworked developers -- as well as the highly unusual situation of having to churn out a life-saving app in the midst of a pandemic -- the above should highlight that a few simple vulnerabilities in what is essentially a communal codebase could spell significant issues for millions of users.
I'd like to think most people want to be good citizens, support the app, and give everyone the best possible chance of contact tracing and controlling outbreaks of this horrific virus. I too am in support of technology that can help achieve this, but in many ways, this episode has unearthed the general lack of secure coding skills among developers all over the world.
In any situation where software has to be written quickly, mistakes are not exactly unexpected. However, common security vulnerabilities like logic flaws, misconfigurations, and code injection errors are the kind of thing that should be prevented as code is written, not caught after volunteer white hats pick the codebase apart.
And it's not the developers' fault, by the way. They leave their tertiary education with few secure coding skills, and in their careers, their KPIs almost always relate to feature functionality and speed of delivery; the security part is for someone else to deal with once they're done. We need to get to an end-state of secure coding at speed. While now is not the time to make seismic culture shifts in the departments building these apps, it's a timely reminder that our digital risk area is expanding, and developers are in pole position to make a difference if they're given the tools and knowledge to share responsibility for security best practices.
Is it safe to download the app?
Here's the thing: for me, a security guy, I've come to the conclusion that the benefits of the app outweigh the issues. It's not ideal that the above vulnerabilities are - or have been - present in this software, but the implications of these being weaponized are worst-case scenarios. At the moment, contact tracing is a vital component of assisting our medical heroes all around the world in controlling the spread, stemming the flow of hospital admissions, and keeping each other as safe as possible.
It serves to highlight that we have a long way to go when it comes to enacting security best practices by default in a software build, and it's important that the public has the information needed to make informed decisions.
My family and I will continue to use it, though we remain vigilant with staying up-to-date with our Android patches, as we all should.
Chief Executive Officer, Chairman, and Co-Founder
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Pieter Danhieux is a globally recognized security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.
A version of this article originally appeared in DevOps Digest. It has been updated and syndicated here.
At this point, I'm sure we're all getting a little tired of the phrase, "in these unprecedented times"... but, these really are unprecedented times. Who'd have thought at the end of last year, that we would be racing to defeat a globally destructive pandemic this year, and throwing everything we possibly could at it? It would have seemed almost laughable, and more along the lines of a new Netflix sci-fi series than part of our worldwide reality. COVID-19 has completely transformed our social lives, economy, and job security, not to mention political priorities.
One of the counter-attacks against COVID-19 has been through technology, with many countries rolling out contact tracing apps. Australia has COVIDSafe, modeled from Singapore's TraceTogether. Hong Kong, Taiwan, China, South Korea, Israel and Germany all have contact-tracing technology implemented, or on the way. The UK has been the hardest-hit region in Europe, with tens of thousands of virus-related deaths and a high infection rate. The release of their app is imminent. The USA -- also deeply affected with many people tragically losing their lives -- also has technology rolling out, but their state-by-state approach to contact tracing makes their situation quite complex.
With the exception of more state-controlled countries like China and Taiwan, the use of these apps is voluntary, requiring citizens to download and use the technology of their own accord. Some adoption rates are more successful than others; for example, Singapore's TraceTogether app had an adoption rate of 25%, rendering it quite ineffective for its desired purpose.
The idea behind contact tracing apps is sound. This technology, when functioning well, would ensure hotspots are quickly revealed and comprehensive testing can occur - both essential components of fighting the spread of a contagious virus. However, the words "government" and "tracing" don't exactly sound very inviting, and it's natural that people are cautious about what downloading something like this would actually mean for them.
So, what are the chief concerns of users? If online commentary is anything to go by, some of these misgivings include:
- Lack of trust in the government to use collected data responsibly
- Apprehension over how well personal data will be protected from cyberattacks
- Lack of clarity in what data is actually being collected, where it is stored, and with whom
- ... and for the developers/geeks among us, how solidly the apps are actually built.
It is always a bit of a worry when apps are built quickly, and these contact tracing apps are having to be rolled out in record time. It's a nightmare for developers, security people, and government agencies.
So, is mistrust a valid reaction? And what should we consider as a priority in our assessment of COVID-19 contact tracing apps and end-user safety? As a security guy, my instinct is, of course, to drill down into the cybersecurity elements of the program, namely how secure the codebase is for an app we're all (out of the best of intentions) being pushed to install.
Many of the apps are copies of each other (and inherit the same problems).
Australia's COVIDSafe app is essentially based on OpenTrace, as is Singapore's TraceTogether software. The problem, however, is that TraceTogether had a range of reported issues and a poor uptake, with just 25% of the population opting in as users - far short of the 75% required for it to be effective. There have been complaints regarding its general performance, especially on iOS, including batteries being drained very quickly. COVIDSafe has a potential UX flaw in its iOS version, requiring the phone to be unlocked and the app running in the foreground to record all data properly.
While the above issues are annoying, the more pressing concern is that Bluetooth vulnerabilities are rife and neither TraceTogether, nor Australia's COVIDSafe, are immune to them. On May 14th, NIST reported that COVIDSafe had a Denial of Service vulnerability, allowing an attacker to remotely crash the app if they are in Bluetooth handshake distance. This would allow an organized attack to disrupt contact tracing in densely populated areas, where it is most useful - something explained in detail by security researcher Richard Nelson. It is known to affect COVIDSafe, TraceTogether, Poland's ProteGO and Canada's ABTraceTogether - all inheriting the issue from OpenTrace's faulty manuData.subdata call.
There are other privacy and security issues relating to Bluetooth functionality in general, as well. The fact that this technology is being used to trace human movement through a unique ID (TempID) and collect meaningful data will inevitably mean a spiked interest in attackers testing for weaknesses, at which point exactly what is being collected, where it is being stored, and for how long, must be scrutinized.
Some apps are already showing signs of simple errors that cause complex weaknesses.
Australian software engineer Geoffrey Huntley has been studying the source code of COVIDSafe, and sadly, there are issues that are not necessarily being highlighted to us, the end-users.
One critical example was a privacy-breaching logic error that would allow an attacker to perform long-term tracking of devices; something that poses an enormous amount of risk for vulnerable users, not to mention it contravenes the Privacy Policy of the app itself.
It's important to note that these logic vulnerabilities have been patched as of May 14th, but the more pressing issue is that this was left unpatched, in the wild, for 17 days after Mr. Huntley reported it. He and other members of the awesome security community are tracking CVEs relating to the COVIDSafe app here.
One thing Huntley points out, post-patch, is that even the fix shows signs of, well, incompetence. In his public log, he notes the patch involved adding logic rather than simply deleting a flawed cache, with the latter being a far more robust remedy. Both work, but the live solution lacks finesse - a concern with such an important application.
Although we have diligent members of society using their own time and expertise to pore through source code and highlight issues, their job is made much harder than if the code was open source in the first place. As it stands, 28 apps are still closed off to security researchers.
Secure coding continues to trip us up at the finish line.
While I can certainly sympathize with overworked developers -- as well as the highly unusual situation of having to churn out a life-saving app in the midst of a pandemic -- the above should highlight that a few simple vulnerabilities in what is essentially a communal codebase could spell significant issues for millions of users.
I'd like to think most people want to be good citizens, support the app, and give everyone the best possible chance of contact tracing and controlling outbreaks of this horrific virus. I too am in support of technology that can help achieve this, but in many ways, this has unearthed the general lack of secure coding principles inherent in developers all over the world.
In any situation where software has to be written quickly, mistakes are not exactly unexpected. However, common security vulnerabilities like logic flaws, misconfigurations, and code injection errors should be something that can be prevented as code is written, not after volunteer white hats pick the codebase apart.
And it's not the developers'fault, by the way. They leave their tertiary education with little skills in secure coding, and in their careers, their KPIs almost always relate to feature functionality and speed of delivery - the security part is for someone else to deal with once they're done. We need to get to an end-state of secure coding at speed, and while now is not the time to make seismic culture shifts in the departments building these apps, it's a timely reminder that our digital risk area is expanding, and they are in pole position to make a difference if they're given the tools and knowledge to share the responsibility for security best practices.
Is it safe to download the app?
Here's the thing: for me, a security guy, I've come to the conclusion that the benefits of the app outweigh the issues. It's not ideal that the above vulnerabilities are - or have been - present in this software, but the implications of these being weaponized are worst-case scenarios. At the moment, contact tracing is a vital component of assisting our medical heroes all around the world in controlling the spread, stemming the flow of hospital admissions, and keeping each other as safe as possible.
It serves to highlight that we have a long way to go when it comes to enacting security best practices by default in a software build, and it's important the public does have the information needed to make informed decisions.
My family and I will continue to use it, though we remain vigilant with staying up-to-date with our Android patches, as we all should.
A version of this article originally appeared in DevOps Digest. It has been updated and syndicated here.
At this point, I'm sure we're all getting a little tired of the phrase, "in these unprecedented times"... but, these really are unprecedented times. Who'd have thought at the end of last year, that we would be racing to defeat a globally destructive pandemic this year, and throwing everything we possibly could at it? It would have seemed almost laughable, and more along the lines of a new Netflix sci-fi series than part of our worldwide reality. COVID-19 has completely transformed our social lives, economy, and job security, not to mention political priorities.
One of the counter-attacks against COVID-19 has been through technology, with many countries rolling out contact tracing apps. Australia has COVIDSafe, modeled from Singapore's TraceTogether. Hong Kong, Taiwan, China, South Korea, Israel and Germany all have contact-tracing technology implemented, or on the way. The UK has been the hardest-hit region in Europe, with tens of thousands of virus-related deaths and a high infection rate. The release of their app is imminent. The USA -- also deeply affected with many people tragically losing their lives -- also has technology rolling out, but their state-by-state approach to contact tracing makes their situation quite complex.
With the exception of more state-controlled countries like China and Taiwan, the use of these apps is voluntary, requiring citizens to download and use the technology of their own accord. Some adoption rates are more successful than others; for example, Singapore's TraceTogether app had an adoption rate of 25%, rendering it quite ineffective for its desired purpose.
The idea behind contact tracing apps is sound. This technology, when functioning well, would ensure hotspots are quickly revealed and comprehensive testing can occur - both essential components of fighting the spread of a contagious virus. However, the words "government" and "tracing" don't exactly sound very inviting, and it's natural that people are cautious about what downloading something like this would actually mean for them.
So, what are the chief concerns of users? If online commentary is anything to go by, some of these misgivings include:
- Lack of trust in the government to use collected data responsibly
- Apprehension over how well personal data will be protected from cyberattacks
- Lack of clarity in what data is actually being collected, where it is stored, and with whom
- ... and for the developers/geeks among us, how solidly the apps are actually built.
It is always a bit of a worry when apps are built quickly, and these contact tracing apps are having to be rolled out in record time. It's a nightmare for developers, security people, and government agencies.
So, is mistrust a valid reaction? And what should we consider as a priority in our assessment of COVID-19 contact tracing apps and end-user safety? As a security guy, my instinct is, of course, to drill down into the cybersecurity elements of the program, namely how secure the codebase is for an app we're all (out of the best of intentions) being pushed to install.
Many of the apps are copies of each other (and inherit the same problems).
Australia's COVIDSafe app is essentially based on OpenTrace, as is Singapore's TraceTogether software. The problem, however, is that TraceTogether had a range of reported issues and a poor uptake, with just 25% of the population opting in as users - far short of the 75% required for it to be effective. There have been complaints regarding its general performance, especially on iOS, including batteries being drained very quickly. COVIDSafe has a potential UX flaw in its iOS version, requiring the phone to be unlocked and the app running in the foreground to record all data properly.
While the above issues are annoying, the more pressing concern is that Bluetooth vulnerabilities are rife and neither TraceTogether, nor Australia's COVIDSafe, are immune to them. On May 14th, NIST reported that COVIDSafe had a Denial of Service vulnerability, allowing an attacker to remotely crash the app if they are in Bluetooth handshake distance. This would allow an organized attack to disrupt contact tracing in densely populated areas, where it is most useful - something explained in detail by security researcher Richard Nelson. It is known to affect COVIDSafe, TraceTogether, Poland's ProteGO and Canada's ABTraceTogether - all inheriting the issue from OpenTrace's faulty manuData.subdata call.
There are other privacy and security issues relating to Bluetooth functionality in general, as well. The fact that this technology is being used to trace human movement through a unique ID (TempID) and collect meaningful data will inevitably mean a spiked interest in attackers testing for weaknesses, at which point exactly what is being collected, where it is being stored, and for how long, must be scrutinized.
Some apps are already showing signs of simple errors that cause complex weaknesses.
Australian software engineer Geoffrey Huntley has been studying the source code of COVIDSafe, and sadly, there are issues that are not necessarily being highlighted to us, the end-users.
One critical example was a privacy-breaching logic error that would allow an attacker to perform long-term tracking of devices; something that poses an enormous amount of risk for vulnerable users, not to mention it contravenes the Privacy Policy of the app itself.
It's important to note that these logic vulnerabilities have been patched as of May 14th, but the more pressing issue is that this was left unpatched, in the wild, for 17 days after Mr. Huntley reported it. He and other members of the awesome security community are tracking CVEs relating to the COVIDSafe app here.
One thing Huntley points out, post-patch, is that even the fix shows signs of, well, incompetence. In his public log, he notes the patch involved adding logic rather than simply deleting a flawed cache, with the latter being a far more robust remedy. Both work, but the live solution lacks finesse - a concern with such an important application.
Although we have diligent members of society using their own time and expertise to pore through source code and highlight issues, their job is made much harder than if the code was open source in the first place. As it stands, 28 apps are still closed off to security researchers.
Secure coding continues to trip us up at the finish line.
While I can certainly sympathize with overworked developers -- as well as the highly unusual situation of having to churn out a life-saving app in the midst of a pandemic -- the above should highlight that a few simple vulnerabilities in what is essentially a communal codebase could spell significant issues for millions of users.
I'd like to think most people want to be good citizens, support the app, and give everyone the best possible chance of contact tracing and controlling outbreaks of this horrific virus. I too am in support of technology that can help achieve this, but in many ways, this has unearthed the general lack of secure coding principles inherent in developers all over the world.
In any situation where software has to be written quickly, mistakes are not exactly unexpected. However, common security vulnerabilities like logic flaws, misconfigurations, and code injection errors should be something that can be prevented as code is written, not after volunteer white hats pick the codebase apart.
And it's not the developers'fault, by the way. They leave their tertiary education with little skills in secure coding, and in their careers, their KPIs almost always relate to feature functionality and speed of delivery - the security part is for someone else to deal with once they're done. We need to get to an end-state of secure coding at speed, and while now is not the time to make seismic culture shifts in the departments building these apps, it's a timely reminder that our digital risk area is expanding, and they are in pole position to make a difference if they're given the tools and knowledge to share the responsibility for security best practices.
Is it safe to download the app?
Here's the thing: for me, a security guy, I've come to the conclusion that the benefits of the app outweigh the issues. It's not ideal that the above vulnerabilities are - or have been - present in this software, but the implications of these being weaponized are worst-case scenarios. At the moment, contact tracing is a vital component of assisting our medical heroes all around the world in controlling the spread, stemming the flow of hospital admissions, and keeping each other as safe as possible.
It serves to highlight that we have a long way to go when it comes to enacting security best practices by default in a software build, and it's important the public does have the information needed to make informed decisions.
My family and I will continue to use it, though we remain vigilant with staying up-to-date with our Android patches, as we all should.
Click on the link below and download the PDF of this resource.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
View reportBook a demoChief Executive Officer, Chairman, and Co-Founder
Pieter Danhieux is a globally recognized security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.
A version of this article originally appeared in DevOps Digest. It has been updated and syndicated here.
At this point, I'm sure we're all getting a little tired of the phrase, "in these unprecedented times"... but, these really are unprecedented times. Who'd have thought at the end of last year, that we would be racing to defeat a globally destructive pandemic this year, and throwing everything we possibly could at it? It would have seemed almost laughable, and more along the lines of a new Netflix sci-fi series than part of our worldwide reality. COVID-19 has completely transformed our social lives, economy, and job security, not to mention political priorities.
One of the counter-attacks against COVID-19 has been through technology, with many countries rolling out contact tracing apps. Australia has COVIDSafe, modeled from Singapore's TraceTogether. Hong Kong, Taiwan, China, South Korea, Israel and Germany all have contact-tracing technology implemented, or on the way. The UK has been the hardest-hit region in Europe, with tens of thousands of virus-related deaths and a high infection rate. The release of their app is imminent. The USA -- also deeply affected with many people tragically losing their lives -- also has technology rolling out, but their state-by-state approach to contact tracing makes their situation quite complex.
With the exception of more state-controlled countries like China and Taiwan, the use of these apps is voluntary, requiring citizens to download and use the technology of their own accord. Some adoption rates are more successful than others; for example, Singapore's TraceTogether app had an adoption rate of 25%, rendering it quite ineffective for its desired purpose.
The idea behind contact tracing apps is sound. This technology, when functioning well, would ensure hotspots are quickly revealed and comprehensive testing can occur - both essential components of fighting the spread of a contagious virus. However, the words "government" and "tracing" don't exactly sound very inviting, and it's natural that people are cautious about what downloading something like this would actually mean for them.
So, what are the chief concerns of users? If online commentary is anything to go by, some of these misgivings include:
- Lack of trust in the government to use collected data responsibly
- Apprehension over how well personal data will be protected from cyberattacks
- Lack of clarity in what data is actually being collected, where it is stored, and with whom
- ... and for the developers/geeks among us, how solidly the apps are actually built.
It is always a bit of a worry when apps are built quickly, and these contact tracing apps are having to be rolled out in record time. It's a nightmare for developers, security people, and government agencies.
So, is mistrust a valid reaction? And what should we consider as a priority in our assessment of COVID-19 contact tracing apps and end-user safety? As a security guy, my instinct is, of course, to drill down into the cybersecurity elements of the program, namely how secure the codebase is for an app we're all (out of the best of intentions) being pushed to install.
Many of the apps are copies of each other (and inherit the same problems).
Australia's COVIDSafe app is essentially based on OpenTrace, as is Singapore's TraceTogether software. The problem, however, is that TraceTogether had a range of reported issues and a poor uptake, with just 25% of the population opting in as users - far short of the 75% required for it to be effective. There have been complaints regarding its general performance, especially on iOS, including batteries being drained very quickly. COVIDSafe has a potential UX flaw in its iOS version, requiring the phone to be unlocked and the app running in the foreground to record all data properly.
While the above issues are annoying, the more pressing concern is that Bluetooth vulnerabilities are rife, and neither TraceTogether nor Australia's COVIDSafe is immune to them. On May 14th, NIST reported that COVIDSafe had a Denial of Service vulnerability, allowing an attacker within Bluetooth handshake range to remotely crash the app. This would allow an organized attack to disrupt contact tracing in densely populated areas, where it is most useful - something explained in detail by security researcher Richard Nelson. It is known to affect COVIDSafe, TraceTogether, Poland's ProteGO, and Canada's ABTraceTogether - all inheriting the issue from OpenTrace's faulty manuData.subdata call.
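The underlying bug class here is a familiar one: parsing attacker-controlled Bluetooth advertisement bytes at a fixed offset without first checking the payload's length. The real OpenTrace code is Swift, where an out-of-range `subdata` call traps and kills the app; the sketch below is a purely illustrative Python analogue (the function names and payload layout are my own, not from any of these codebases), contrasting the fragile pattern with a length-checked one.

```python
import struct
from typing import Optional

def parse_manufacturer_data_unsafe(payload: bytes) -> bytes:
    # Mirrors the flawed pattern: blindly assumes the advertisement
    # always carries a 2-byte company ID plus at least one data byte.
    (company_id,) = struct.unpack_from("<H", payload)  # raises on < 2 bytes
    return payload[2:]

def parse_manufacturer_data_safe(payload: bytes) -> Optional[bytes]:
    # Hardened version: validate the length before touching the bytes,
    # and ignore malformed advertisements instead of crashing.
    if len(payload) < 3:
        return None
    return payload[2:]

# A truncated payload, as a hostile advertiser in radio range could broadcast:
hostile = b"\x4c"
try:
    parse_manufacturer_data_unsafe(hostile)
except struct.error:
    print("unsafe parser raised - in the app, an unhandled crash")
print(parse_manufacturer_data_safe(hostile))  # None: advert safely ignored
```

The fix costs one `if` statement; the omission costs a remotely triggerable crash, which is exactly why input validation at trust boundaries needs to be habitual rather than retrofitted.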
There are other privacy and security issues relating to Bluetooth functionality in general, as well. The fact that this technology is being used to trace human movement through a unique ID (TempID) and collect meaningful data will inevitably attract attackers probing for weaknesses, which makes it essential to scrutinize exactly what is being collected, where it is stored, and for how long.
Some apps are already showing signs of simple errors that cause complex weaknesses.
Australian software engineer Geoffrey Huntley has been studying the source code of COVIDSafe, and sadly, there are issues that are not necessarily being highlighted to us, the end-users.
One critical example was a privacy-breaching logic error that would allow an attacker to perform long-term tracking of devices; something that poses an enormous amount of risk for vulnerable users, not to mention it contravenes the Privacy Policy of the app itself.
It's important to note that these logic vulnerabilities have been patched as of May 14th, but the more pressing issue is that this was left unpatched, in the wild, for 17 days after Mr. Huntley reported it. He and other members of the awesome security community are tracking CVEs relating to the COVIDSafe app here.
One thing Huntley points out, post-patch, is that even the fix shows signs of, well, incompetence. In his public log, he notes the patch involved adding logic rather than simply deleting a flawed cache, with the latter being a far more robust remedy. Both work, but the live solution lacks finesse - a concern with such an important application.
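Huntley's point about patch quality can be sketched abstractly. The Python below is purely illustrative (the class names and `build_payload` hook are hypothetical, not from the COVIDSafe codebase): one fix keeps the problematic cache and bolts expiry logic on top of it; the other deletes the cache entirely, so the advertised payload is rebuilt - embedding whatever the current TempID is - every time it's needed.

```python
import time
from typing import Callable, Optional

class PayloadCacheWithExpiry:
    """The 'add more logic' style of fix: keep the cache, add a TTL check."""
    def __init__(self, build_payload: Callable[[], bytes], ttl_seconds: float = 900.0):
        self.build_payload = build_payload
        self.ttl = ttl_seconds
        self._cached: Optional[bytes] = None
        self._cached_at = 0.0

    def get(self) -> bytes:
        # Extra state (two fields) and a clock comparison, all of which
        # must be reasoned about and can themselves harbor bugs.
        if self._cached is None or time.monotonic() - self._cached_at > self.ttl:
            self._cached = self.build_payload()
            self._cached_at = time.monotonic()
        return self._cached

class NoCache:
    """The simpler remedy: delete the cache. The payload is rebuilt on
    every request, so a stale TempID can never be re-broadcast."""
    def __init__(self, build_payload: Callable[[], bytes]):
        self.build_payload = build_payload

    def get(self) -> bytes:
        return self.build_payload()
```

Both approaches stop the long-term tracking, but the second removes the bug class outright instead of managing it - the "finesse" Huntley found lacking.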
Although we have diligent members of society using their own time and expertise to pore through source code and highlight issues, their job is made much harder than it would be if the code were open source in the first place. As it stands, 28 apps are still closed off to security researchers.
Secure coding continues to trip us up at the finish line.
While I can certainly sympathize with overworked developers -- as well as the highly unusual situation of having to churn out a life-saving app in the midst of a pandemic -- the above should highlight that a few simple vulnerabilities in what is essentially a communal codebase could spell significant issues for millions of users.
I'd like to think most people want to be good citizens, support the app, and give everyone the best possible chance of contact tracing and controlling outbreaks of this horrific virus. I too am in support of technology that can help achieve this, but in many ways, this has unearthed the general lack of secure coding principles inherent in developers all over the world.
In any situation where software has to be written quickly, mistakes are not exactly unexpected. However, common security vulnerabilities like logic flaws, misconfigurations, and code injection errors should be something that can be prevented as code is written, not after volunteer white hats pick the codebase apart.
And it's not the developers' fault, by the way. They leave tertiary education with few secure coding skills, and in their careers, their KPIs almost always relate to feature functionality and speed of delivery - the security part is someone else's problem once they're done. We need to reach an end-state of secure coding at speed. Now is not the time for seismic culture shifts in the departments building these apps, but this is a timely reminder that our digital attack surface is expanding, and developers are in pole position to make a difference if they're given the tools and knowledge to share responsibility for security best practices.
Is it safe to download the app?
Here's the thing: for me, a security guy, I've come to the conclusion that the benefits of the app outweigh the issues. It's not ideal that the above vulnerabilities are - or have been - present in this software, but the implications of these being weaponized are worst-case scenarios. At the moment, contact tracing is a vital component of assisting our medical heroes all around the world in controlling the spread, stemming the flow of hospital admissions, and keeping each other as safe as possible.
It serves to highlight that we have a long way to go when it comes to enacting security best practices by default in a software build, and it's important the public does have the information needed to make informed decisions.
My family and I will continue to use it, though we remain vigilant about keeping our Android patches up to date, as we all should.