A startup's entire user database got decrypted by a researcher in an afternoon. The root cause wasn't a misconfigured server or a leaked .env file — it was a custom AES implementation where the developer had reused the IV (initialization vector) across every single encryption call. The code looked reasonable to anyone without deep cryptographic knowledge. The tests passed. It shipped to production. And it was completely broken.
Use an established library. That's the recommendation, stated plainly. The rest of this article explains why that's harder to ignore than it sounds, what specifically goes wrong when you implement your own, and which libraries actually deserve your trust.
The Problem Isn't Intelligence, It's Surface Area
The standard response to "don't roll your own crypto" is "cryptography is hard." That undersells the real issue. The problem isn't that developers who attempt this are unintelligent — it's that cryptographic security depends on dozens of independent implementation decisions, each of which must be correct simultaneously. Missing any one of them can nullify the entire system.
Consider what a correct AES-GCM implementation actually requires:
- A cryptographically secure random number generator for the IV (not Math.random(), not random.random())
- A fresh IV for every single encryption operation, never reused under the same key
- Proper authentication tag verification before decryption — and verifying it in constant time to prevent timing attacks
- Correct key derivation if you're deriving a key from a password (PBKDF2, bcrypt, or Argon2 — not SHA-256)
- Safe key storage, rotation handling, and padding behavior
A senior developer implementing this from scratch will probably get most of these right. "Most" is not acceptable in cryptography.
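As a sketch of what those requirements look like when a vetted library handles them, here's AES-GCM via the third-party pyca `cryptography` package (the plaintext and variable names are illustrative):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # pulled from the OS CSPRNG
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # fresh 96-bit IV for every message, never reused per key
ciphertext = aesgcm.encrypt(nonce, b"account balance: 42", None)

# decrypt() verifies the authentication tag before returning any
# plaintext, and raises InvalidTag if the ciphertext was modified
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
```

Store the nonce alongside the ciphertext: it isn't secret, it just must never repeat under the same key.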
Timing Attacks Are Invisible to Normal Testing
Here's the gotcha that catches experienced developers: string comparison leaks timing information. When you compare an HMAC signature or an authentication tag using a standard equality check (===, ==, .equals()), the comparison short-circuits as soon as it finds the first mismatched byte. An attacker making thousands of requests can measure response times and statistically determine the correct value byte by byte.
In practice, byte-by-byte timing leakage lets an attacker recover a secret tag with far fewer requests than brute force would require.
This class of vulnerability won't appear in your unit tests, won't be caught by static analysis, and won't show up in code review unless your reviewer specifically knows to look for it. The fix is constant-time comparison, which every serious crypto library already implements internally.
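A minimal sketch of the difference, using Python's standard library (the key and message here are illustrative):

```python
import hashlib
import hmac

key = b"server-side-secret"  # illustrative; load from a secret store in practice
message = b"user_id=17&role=admin"
expected_tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_naive(provided_tag: str) -> bool:
    # == short-circuits at the first mismatched character, leaking
    # the position of the mismatch through response time
    return provided_tag == expected_tag

def verify_constant_time(provided_tag: str) -> bool:
    # hmac.compare_digest examines every byte regardless of where
    # mismatches occur, so timing reveals nothing about the secret
    return hmac.compare_digest(provided_tag, expected_tag)
```

Both functions return the same booleans; the vulnerability lives entirely in how long they take to return them, which is exactly why tests never catch it.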
The same principle applies to padding oracles. If your decryption function returns different errors for "bad padding" versus "bad authentication tag" — or worse, takes measurably different amounts of time — you've handed attackers a cryptographic oracle they can exploit to decrypt arbitrary ciphertext without the key. Vaudenay demonstrated this attack class in 2002 and implementations are still getting it wrong today.
What "Established Library" Actually Means
Not all libraries are equal. Here's what you should be reaching for:
For symmetric encryption (AES-GCM): Use the primitives built into your runtime's standard library, or a well-audited wrapper. These have received years of public scrutiny and are maintained by people whose entire job is cryptography.
For password hashing: bcrypt, Argon2, or scrypt — never SHA-anything directly. SHA-256 of a password can be brute-forced with commodity GPU hardware because it's designed to be fast. Password hashing functions are intentionally slow.
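A hedged sketch of the salt-hash-verify pattern using `hashlib.scrypt` from Python's standard library (the cost parameters `n`, `r`, `p` below are common example values; tune them for your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.scrypt(password.encode(),
                            salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(),
                               salt=salt, n=2**14, r=8, p=1)
    # constant-time comparison, even for password hashes
    return hmac.compare_digest(candidate, digest)
```

The deliberate slowness and memory cost are the point: each guess costs an attacker real compute, unlike a single fast SHA-256 round.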
For key generation and random secrets: Use your OS's CSPRNG. Not seeded PRNGs, not timestamps, not UUIDs.
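In Python, the `secrets` module wraps the OS CSPRNG; a few illustrative calls:

```python
import secrets

aes_key = secrets.token_bytes(32)          # 256-bit key material
session_token = secrets.token_urlsafe(32)  # URL-safe secret for cookies/links
reset_code = secrets.token_hex(16)         # hex string, e.g. for email links
```

The `random` module, by contrast, is a seeded Mersenne Twister: observing enough outputs lets an attacker predict every future value.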
The Framework Wrapper Trap
One pattern that catches developers who know better: writing a thin wrapper around a low-level primitive and getting the wrapper wrong. You reach for the right underlying algorithm (AES-256) but wire it up incorrectly — hardcoding an IV as a constant string, storing the key in plaintext in a config file, or discarding the authentication tag on encrypt and not checking it on decrypt.
AES without authentication (AES-CBC without a MAC, or AES-ECB) is technically "using AES" but is vulnerable to bit-flipping attacks. An attacker who can modify ciphertext in transit can predictably alter the decrypted plaintext without knowing the key. This is exactly how padding oracle attacks work in practice — the library isn't broken, but the way it's being used is.
AES-GCM is authenticated by default, which is why it's the right default. If you're using AES-CBC, you need to add an HMAC over the ciphertext yourself, in the correct order (encrypt-then-MAC, not MAC-then-encrypt), with constant-time verification. AES-GCM handles all of this for you.
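A small sketch of that guarantee in action, again assuming the third-party pyca `cryptography` package: flipping a single ciphertext bit makes GCM's tag check fail loudly instead of silently corrupting the plaintext.

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = bytearray(AESGCM(key).encrypt(nonce, b"amount=10", None))

ciphertext[0] ^= 0x01  # attacker flips one bit in transit

try:
    AESGCM(key).decrypt(nonce, bytes(ciphertext), None)
    tampering_detected = False
except InvalidTag:
    # with unauthenticated CBC, this flip would predictably alter
    # the decrypted plaintext instead of raising an error
    tampering_detected = True
```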
The Real Cost of Getting It Wrong
Sony's PlayStation 3 code signing reused a static value where ECDSA requires a fresh random nonce for every signature, letting researchers recover the private signing key in 2010 and sign arbitrary firmware; the following year, the PlayStation Network breach exposed 77 million accounts. LinkedIn's 2012 breach exposed 117 million passwords stored as unsalted SHA-1 hashes. These weren't amateur mistakes made by developers who didn't care. They were implementation details that required specific cryptographic expertise to get right.
The regulatory consequences compound the technical ones. A breach caused by a known-bad cryptographic practice (ECB mode, unsalted hashes, static IVs) is extremely difficult to defend in a GDPR audit or a HIPAA investigation. "We implemented our own encryption" is not a defense — it's an aggravating factor.
Generate your secrets, IVs, and keys using a proper CSPRNG, starting now. If you're not sure whether your current implementation uses one, trace back through your codebase to find every place you're generating keys or IVs, then verify that each one actually calls the OS's CSPRNG rather than a seeded PRNG, a timestamp, or a UUID.