When Encryption Fails: Lessons in Visibility, Governance, and Readiness
- Giuliana Bruni

- Nov 13
- 2 min read

Encryption is often treated as the final line of defence and the one safeguard that turns a data breach into a harmless event. Yet history shows that encryption itself can fail, not because mathematics stops working, but because governance and visibility fall short. Most organisations simply do not know what encryption they use, where it is implemented, or whether it still meets current standards. The result is predictable: digital locks that quietly get picked over time, leaving systems exposed long before anyone realises.
In 2018, Under Armour’s MyFitnessPal breach exposed 150 million user accounts. The company had adopted strong bcrypt password hashing, but coverage was incomplete: a portion of its systems still relied on the long-deprecated SHA-1 algorithm, leaving those credentials far easier to crack once stolen. This was not a failure of technology but of visibility. A complete inventory of cryptographic functions would have revealed the inconsistent protection before attackers did.
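To see why the mix of algorithms mattered, here is a minimal sketch contrasting a fast, unsalted legacy hash with a deliberately slow, salted key-derivation function. The breach involved bcrypt; PBKDF2 from Python’s standard library stands in for it here to keep the example dependency-free, and the password and iteration count are illustrative.

```python
import hashlib
import os
import time

password = b"correct horse battery staple"  # illustrative only

# Legacy approach: a single unsalted SHA-1 digest. Attackers holding
# stolen hashes can test billions of guesses per second against this.
weak = hashlib.sha1(password).hexdigest()

# Modern approach: a salted, deliberately slow key-derivation function.
# (MyFitnessPal used bcrypt; PBKDF2 illustrates the same principle.)
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Each guess against the slow KDF costs a noticeable fraction of a
# second, which is exactly the point.
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
elapsed = time.perf_counter() - start
print(f"SHA-1 digest: {weak[:16]}…  PBKDF2 time per guess: {elapsed:.2f}s")
```

The difference is not secrecy but cost: the slow, salted function multiplies the attacker’s work per guess by orders of magnitude, while the unsalted SHA-1 hashes fell quickly once exfiltrated.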
A similar lack of oversight weakened the foundations of public safety itself. In 2023, researchers uncovered severe flaws in the TETRA radio standard used by police and emergency services worldwide. The supposedly secure TEA1 algorithm embedded within the protocol turned out to have an intentionally reduced key length, cutting its effective strength from 80 to roughly 32 bits. For decades, millions trusted that encryption without independent verification or public disclosure; no one had a transparent view of what that trust was built upon.
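The gap between 80 and 32 bits is hard to grasp in the abstract, so a rough brute-force calculation helps. The guessing rate of one billion keys per second is an assumption for illustration, not a claim about any specific attacker.

```python
# Back-of-the-envelope brute-force cost for the two key sizes above.
# Assumes a hypothetical attacker testing 1e9 keys per second.
GUESSES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (80, 32):
    keyspace = 2 ** bits            # total number of possible keys
    seconds = keyspace / GUESSES_PER_SECOND
    years = seconds / SECONDS_PER_YEAR
    print(f"{bits}-bit key: {keyspace:.3e} keys ≈ {years:.3g} years to exhaust")
```

At that rate, the full 80-bit keyspace takes tens of millions of years to search, while the weakened 32-bit keyspace falls in seconds. The reduction did not merely shave off a safety margin; it moved the cipher from practically unbreakable to trivially breakable.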
And sometimes, encryption fails simply because it was never properly applied. The 2015 breach of the US Office of Personnel Management compromised highly sensitive employee data that had not been encrypted at all. Investigations found legacy systems, incomplete controls, and a lack of cryptographic governance. This was not a zero-day exploit or an advanced quantum attack. It was preventable neglect.
These three incidents share a single pattern: invisibility. Each could have been avoided through a systematic inventory of cryptographic assets and governance enforcing algorithm lifecycle management. Without this visibility, organisations are left assuming that encryption “must be strong enough.”
This is where good cryptographic visibility changes the equation. Our Encryption Dataset enables organisations to identify, classify, and track the cryptographic functions embedded in their software, across source code, dependencies, and third-party components. It detects not only which algorithms are present but whether they are outdated, deprecated, or non-standard. By integrating this intelligence into existing DevSecOps pipelines, SCANOSS gives leaders a clear picture of their cryptographic posture before failures occur.
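The core idea of a cryptographic inventory can be sketched in a few lines. This is only a toy illustration of the concept, not SCANOSS’s actual detection logic, and the watch list of algorithm names below is a hypothetical, deliberately incomplete example.

```python
import re
from pathlib import Path

# Hypothetical watch list of identifiers widely regarded as deprecated
# or weak. A real inventory would map findings to current standards.
DEPRECATED = re.compile(r"\b(md5|sha1|des|rc4|tea1)\b", re.IGNORECASE)

def scan_tree(root: str) -> dict[str, list[tuple[int, str]]]:
    """Return {file: [(line_no, algorithm), ...]} for matches under root."""
    findings: dict[str, list[tuple[int, str]]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for no, line in enumerate(text.splitlines(), 1):
            for match in DEPRECATED.finditer(line):
                findings.setdefault(str(path), []).append(
                    (no, match.group(1).lower())
                )
    return findings
```

Even a crude pass like this surfaces the kind of inconsistency that sank MyFitnessPal: one file hashing with bcrypt, another quietly calling SHA-1. Production tooling goes much further, covering binaries, dependencies, and third-party components, but the principle is the same: you cannot retire an algorithm you cannot see.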
Cryptography will continue to evolve, and quantum computing will eventually force a global transition to new standards. But that transition begins with visibility. Knowing what you have today is the first step towards securing what you will need tomorrow.
The history of encryption failures teaches one clear lesson: prevention begins with seeing the unseen. SCANOSS provides that visibility, giving organisations the confidence to manage their cryptographic lifecycle proactively, before the next breach becomes the next headline.


