Cryptography Software Security

Soatok’s Guide to Side-Channel Attacks

If you’re ever tasked with implementing a cryptography feature–whether a high-level protocol or a low-level primitive–you will have to take special care to ensure you’re not leaking secret information through side-channels.

The descriptions of algorithms you learn in a classroom or textbook are not sufficient for real-world use. (Yes, that means your toy RSA implementation based on GMP from your computer science 101 class isn’t production-ready. Don’t deploy it.)

But what are these elusive side-channels exactly, and how do you prevent them? And in cases where you cannot prevent them, how can you mitigate the risk to your users?

Soatok Explains it All
Art by Swizz.


Cryptographic Side-Channels

The concept of a side-channel isn’t inherently cryptographic, as Taylor Hornby demonstrates, but a side-channel can be a game over vulnerability in a system meant to maintain confidentiality (even if only for its cryptography keys).

Cryptographic side-channels allow an attacker to learn secret data from your cryptography system. To accomplish this, the attacker doesn’t necessarily study the system’s output (i.e. ciphertext); instead, they observe some other measurement, such as how much time or power was spent performing an operation, or what kind of electromagnetic radiation was emitted.

Important: While being resistant to side-channels is a prerequisite for implementations to be secure, it isn’t in and of itself sufficient for security. The underlying design of the primitives, constructions, and high-level protocols needs to be secure first, and that requires a clear and specific threat model for what you’re building.

Constant-time ECDSA doesn’t help you if you reuse k-values like it’s going out of style, but variable-time ECDSA still leaks your secret key to anyone who cares to probe your response times. Secure cryptography is very demanding.

CRYPTO IS HARD / LET'S GO SHOPPING (in the spirit of the Barbie meme)
Art by Riley.

Timing Leaks

Timing side-channels leak secrets through how much time it takes for an operation to complete.

There are many different flavors of timing leakage, including:

  • Fast-failing comparison functions (memcmp() in C)
  • Cache-timing vulnerabilities (e.g. software AES)
  • Memory access patterns
  • Conditional branches controlled by secrets

The bad news about timing leaks is that they’re almost always visible to an attacker over the network (including over the Internet).

The good news is that most of them can be prevented or mitigated in software.
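To make the first bullet concrete, here's a sketch of what a fast-failing comparison looks like in PHP (the function name is hypothetical, purely for illustration):

```php
<?php
// DO NOT use this to compare secrets: it returns as soon as a byte differs,
// so the running time reveals how many leading bytes of the secret the
// attacker's guess matched.
function leaky_compare(string $known, string $guess): bool
{
    if (strlen($known) !== strlen($guess)) {
        return false;
    }
    for ($i = 0; $i < strlen($known); $i++) {
        if ($known[$i] !== $guess[$i]) {
            return false; // Early exit: timing leak
        }
    }
    return true;
}
```

An attacker who can measure response times can guess the secret one byte at a time: the guess with the longest response time shares the longest prefix with the secret.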

Art by Kyume.

Power Usage

Different algorithms or processor operations may require different amounts of power.

For example, squaring a large number may take less power than multiplying two different large numbers. This observation has led to the development of power analysis attacks against RSA.

Power analysis is especially relevant for embedded systems and smart cards, which are easier to extract a meaningful signal from than your desktop computer.

Some information leakage through power usage can be prevented through careful engineering (for example: BearSSL, which uses Montgomery multiplication instead of square-and-multiply).

But that’s not always an option, so generally these risks are mitigated.

Soatok has a eureka moment
My reaction when I first learned of power leaks: WATT (Art by Swizz)

Electromagnetic Emissions

Your computer is a reliable source of electromagnetic emissions (such as radio waves). Some of these emissions may reveal information about your cryptographic secrets, especially to an attacker with physical proximity to your device.

The good news is that research into EM emission side-channels isn’t as mature as the research into timing or power-analysis side-channels. The bad news is that, when breakthroughs do happen, the mitigations will generally require hardware (e.g. electromagnetic shielding).

Aren’t computers terrifying? (Art by Swizz)

Side-Channel Prevention and Mitigation

Now that we’ve established a rough sense of some of the types of side-channels that are possible, we can begin to identify what causes them and aspire to prevent the leaks from happening–and where we can’t, to mitigate the risk to a reasonable level.

Note: To be clear, I didn’t cover all of the types of side-channels.

Prevention vs. Mitigation

Preventing a side-channel means eliminating the conditions that allow the information leak to occur in the first place. For timing leaks, this means making all algorithms constant-time.

There are entire classes of side-channel leaks that aren’t possible or practical to mitigate in software. When you encounter one, the best you can hope to do is mitigate the risk.

Ideally, you want to make the attack more expensive to pull off than the reward an attacker will gain from it.

What is Constant-Time?

Toto, I don’t think we’re in Tanelorn Kansas anymore.

When an implementation is said to be constant-time, what we mean is that the execution time of the code is not a function of its secret inputs.

Vulnerable AES implementations use table look-ups to implement the S-Box, which leaks secrets through cache timing. Constant-time AES is either implemented in hardware, or is bitsliced.

Malicious Environments and Algorithmic Constant-Time

One of the greatest challenges with writing constant-time code is distinguishing between algorithmic constant-time and provably constant-time. The main difference between the two is that you cannot trust your compiler (especially a JIT compiler), which may attempt to optimize your code in a way that reintroduces the side-channel you aspired to remove.

A sufficiently advanced compiler optimization is indistinguishable from an adversary.

John Regehr, possibly with apologies to Arthur C. Clarke

For compiled languages, this is a tractable but expensive problem to solve: You simply have to formally verify everything from the source code to the compiler to the silicon chips that the code will be deployed on, and then audit your supply chain to prevent malicious tampering from going undetected.

For interpreted languages (e.g. PHP and JavaScript), this formal verification strategy isn’t really an option, unless you want to formally verify the runtime that interprets scripts and prove that the operations remain constant-time on top of all the other layers of distrust.

Is this level of paranoia really worth the effort?

For our cases, anyway! (Art by Khia.)

For that reason, we’re going to assume that algorithmic constant-time is adequate for the duration of this blog post.

If your threat model prevents you from accepting this assumption, feel free to put in the extra effort yourself and tell me how it goes. After all, as a furry who writes blog posts in my spare time for fun, I don’t exactly have the budget for massive research projects in formal verification.

Mitigation with Blinding Techniques

The best mitigation for some side-channels is called blinding: Obfuscating the inputs with some random data, then deobfuscating the outputs with the same random data, such that your keys are not revealed.

Two well-known examples include RSA decryption and Elliptic Curve Diffie-Hellman. I’ll focus on the latter, since it’s not as widely covered in the literature (although several cryptographers I’ve talked with were somehow knowledgeable about it; I suspect gatekeeping is involved).

Blinded ECDH Key Exchange

In typical ECDH implementations, point arithmetic on a Weierstrass curve is performed in Jacobian coordinates (X, Y, Z), which are converted back to affine coordinates (x, y) at the end.

The exact conversion formula is (x = X / Z^{2}, y = Y / Z^{3}). The conversion almost makes intuitive sense.

Where does Z come from though?

It turns out, the choice for Z is totally arbitrary. Libraries typically set it equal to 1 (for best performance), but you can also set it to a random number. (You cannot set it to 0, however, for obvious reasons.)

Choosing a random number means the calculations performed over Jacobian coordinates will be obscured by a randomly chosen factor Z (and thus, if Z is only used once per scalar multiplication, the bitwise signal the attackers rely on will be lost).
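As a rough sketch of the idea (not a complete ECDH implementation; the function names and the small test values are made up for illustration), here is how randomizing Z might look in PHP with the GMP extension:

```php
<?php
// Sketch: lift an affine point into Jacobian coordinates with a random Z,
// so intermediate values during scalar multiplication are blinded.
function to_blinded_jacobian(\GMP $x, \GMP $y, \GMP $p): array
{
    // Pick a random non-zero Z in [1, p - 1]. Z = 0 is forbidden.
    $z = gmp_random_range(1, gmp_sub($p, 1));

    // Affine (x, y) corresponds to Jacobian (x * Z^2, y * Z^3, Z).
    $X = gmp_mod(gmp_mul($x, gmp_pow($z, 2)), $p);
    $Y = gmp_mod(gmp_mul($y, gmp_pow($z, 3)), $p);
    return [$X, $Y, $z];
}

// Converting back (x = X / Z^2, y = Y / Z^3) recovers the original point.
function to_affine(array $point, \GMP $p): array
{
    [$X, $Y, $Z] = $point;
    $zInv = gmp_invert($Z, $p); // 1 / Z mod p
    $x = gmp_mod(gmp_mul($X, gmp_pow($zInv, 2)), $p);
    $y = gmp_mod(gmp_mul($Y, gmp_pow($zInv, 3)), $p);
    return [$x, $y];
}
```

The round trip always lands back on the original affine point, but the internal (X, Y, Z) representation differs on every call, which is exactly what denies the attacker a stable signal.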

Evil Laugh
Blinding techniques are cool. (Art by Khia.)

I think it’s really cool how one small tweak to the runtime of an algorithm can make it significantly harder to attack.

Design Patterns for Algorithmic Constant-Time Code

Mitigation techniques are cool, but preventing side-channels is a better value-add for most software.

To that end, let’s look at some design patterns for constant-time software. Some of these are relatively common; others, not so much.

Constant-Time String Comparison

Rather than using string comparison (== in most programming languages, memcmp() in C), you want to compare cryptographic secrets and/or calculated integrity checks with a secure compare algorithm, which looks like this:

  1. Initialize a variable (let’s call it D) to zero.
  2. For each byte of the two strings:
    1. Calculate (lefti XOR righti)
    2. Bitwise OR the current value of D with the result of the XOR, store the output in D
  3. When the loop has concluded, D will be equal to 0 if and only if the two strings are equal.

In code form, it looks like this:

function ct_compare(string $left, string $right): bool
{
    $d = 0;
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        return false; // Lengths differ
    }
    for ($i = 0; $i < $length; ++$i) {
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        $d |= ($leftCharCode ^ $rightCharCode);
    }
    return $d === 0;
}

In this example, I’m using PHP’s unpack() function to avoid cache-timing leaks with ord() and chr(). Of course, you can simply use hash_equals() instead of writing it yourself (PHP 5.6.0+).

Alternative: “Double HMAC” String Comparison

If the previous algorithm won’t work (i.e. because you’re concerned your JIT compiler will optimize it away), there is a popular alternative to consider. It’s called “Double HMAC” because it was traditionally used with Encrypt-Then-HMAC schemes.

The algorithm looks like this:

  1. Generate a random 256-bit key, K. (This can be cached between invocations, but it should be unpredictable.)
  2. Calculate HMAC-SHA256(K, left).
  3. Calculate HMAC-SHA256(K, right).
  4. Return true if the outputs of step 2 and 3 are equal.

This is provably secure, so long as HMAC-SHA256 is a secure pseudo-random function and the key K is unknown to the attacker.

In code form, the Double HMAC compare function looks like this:

function hmac_compare(string $left, string $right): bool
{
    static $k = null;
    if (!$k) {
        $k = random_bytes(32);
    }
    return (
        hash_hmac('sha256', $left, $k)
            ===
        hash_hmac('sha256', $right, $k)
    );
}

Constant-Time Conditional Select

I like to imagine a conversation between a cryptography engineer and a Zen Buddhist, that unfolds like so:

  • CE: “I want to eliminate branching side-channels from my code.”
  • ZB: “Then do not have branches in your code.”

And that is precisely what we intend to do with a constant-time conditional select: Eliminate branches by conditionally returning between one of two strings, without an IF statement.

Mind. Blown. (Art by Khia.)

This isn’t as tricky as it sounds. We’re going to use XOR and two’s complement to achieve this.

The algorithm looks like this:

  1. Convert the selection bit (TRUE/FALSE) into a mask value (-1 for TRUE, 0 for FALSE). Bitwise, -1 looks like 111111111…1111111111, while 0 looks like 00000000…00000000.
  2. Copy the right string into a buffer, call it tmp.
  3. Calculate left XOR right, call it x.
  4. Return (tmp XOR (x AND mask)).

Once again, in code this algorithm looks like this:


function ct_select(
    bool $returnLeft,
    string $left,
    string $right
): string {
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new Exception('ct_select() expects two strings of equal length');
    }
    // Mask byte: 0xff if $returnLeft, 0x00 otherwise
    $mask = (-$returnLeft) & 0xff;
    // X = Left XOR Right
    $x = (string) ($left ^ $right);
    // Output = Right XOR (X AND Mask)
    $output = '';
    for ($i = 0; $i < $length; $i++) {
        $rightCharCode = unpack('C', $right[$i])[1];
        $xCharCode = unpack('C', $x[$i])[1];
        $output .= pack('C', $rightCharCode ^ ($xCharCode & $mask));
    }
    return $output;
}

You can test this code for yourself here. The function was designed to read intuitively like a ternary operator.

A Word of Caution on Cleverness

In some languages, it may seem tempting to use bitwise trickery to swap pointers instead of returning a new buffer. Do not fall for this Siren song.

If, instead of returning a new buffer, you just swap pointers, what you’ll end up doing is creating a timing leak through your memory access patterns. This can culminate in a timing vulnerability, but even if your data is too big to fit in a processor’s cache line (I dunno, Post-Quantum RSA keys?), there’s another risk to consider.

Virtual memory addresses are just beautiful lies. Where your data lives on the actual hardware memory is entirely up to the kernel. You can have two blobs with contiguous virtual memory addresses that live on separate memory pages, or even separate RAM chips (if you have multiple).

If you’re swapping pointers around, and they point to two different pieces of hardware, and one is slightly faster to read from than the other, you can introduce yet another timing attack through which pointer is being referenced by the processor.

Soatok angrily grasping computer monitor
It’s timing leaks all the ways down! (Art by Swizz)

If you’re swapping between X and Y before performing a calculation, where:

  • X lives on RAM chip 1, which takes 3 ns to read
  • Y lives on RAM chip 2, which takes 4 ns to read

…then the subsequent use of the swapped pointers reveals whether you’re operating on X or Y in the timing: It will take slightly longer to read from Y than from X.

The best way to mitigate this problem is to never design your software to have it in the first place. Don’t be clever on this one.

Constant-Time String Inequality Comparison

Sometimes you don’t just need to know if two strings are equal, you also need to know which one is larger than the other.

To accomplish this in constant-time, we need to maintain two state variables:

  1. gt (initialized to 0, will be set to 1 at some point if left > right)
  2. eq (initialized to 1, will be set to 0 at some point if left != right)

Endian-ness will dictate the direction our algorithm goes, but we’re going to perform two operations in each cycle:

  1. gt should be bitwise ORed with (eq AND ((right - left) right shifted 8 times))
  2. eq should be bitwise ANDed with (((right XOR left) - 1) right shifted 8 times)

If right and left are ever different, eq will be set to 0.

If the first time they differ the value of lefti is greater than the value of righti, then the subtraction (righti - lefti) will produce a negative number. Right shifting a negative number 8 places yields all 1 bits, and bitwise ANDing that result with eq (which is 1 until two bytes differ, and 0 henceforth) yields eq itself. Thus, if (righti - lefti) is negative while eq is still 1, gt will be set to 1. Otherwise, it remains 0.

At the end of this loop, return (gt + gt + eq) – 1. This will result in the following possible values:

  • left < right: -1
  • left == right: 0
  • left > right: 1

The arithmetic based on the possible values of gt and eq should be straightforward.

  • Different (eq == 0) but not greater (gt == 0) means left < right, -1.
  • Different (eq == 0) and greater (gt == 1) means left > right, 1.
  • If eq == 1, no bytes ever differed, so left == right, 0.

A little endian implementation is as follows:

function str_compare(string $left, string $right): int
{
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new Exception('str_compare() expects two strings of equal length');
    }
    $gt = 0;
    $eq = 1;
    $i = $length;
    while ($i > 0) {
        --$i;
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        $gt |= (($rightCharCode - $leftCharCode) >> 8) & $eq;
        $eq &= (($rightCharCode ^ $leftCharCode) - 1) >> 8;
    }
    return ($gt + $gt + $eq) - 1;
}

Demo for this function is available here.

Constant-Time Integer Multiplication

Multiplying two integers is one of those arithmetic operations that should be constant-time. But on many older processors, it isn’t.

Of course there’s a microarchitecture timing leak! (Art by Khia.)

Fortunately, there is a workaround. It involves an algorithm called Ancient Egyptian Multiplication in some places or Peasant Multiplication in others.

Multiplying two numbers x and y this way looks like this:

  1. Determine the number of operations you need to perform. Generally, this is either known ahead of time or max(ceil(\log_2 x), ceil(\log_2 y)).
  2. Set z to 0.
  3. Until the operation count reaches zero:
    1. If the lowest bit of y is set, add x to z.
    2. Left shift x by 1.
    3. Right shift y by 1.
  4. Return z.

The main caveat here is that you want to use bitwise operators in step 3.1 to remove the conditional branch.

Rather than bundle example code in our blog post, please refer to the implementation in sodium_compat (a pure PHP polyfill for libsodium).

For big number libraries, implementing Karatsuba on top of this integer multiplying function should be faster than attempting to multiply bignums this way.
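That said, if you want a feel for the shape of the loop, here's a minimal sketch of the idea for machine-sized integers (a simplified illustration, not the sodium_compat implementation; it assumes both operands fit in 32 bits so the product fits in a 64-bit PHP integer):

```php
<?php
// Sketch: branchless Egyptian/peasant multiplication of two unsigned
// integers. The loop runs a fixed 32 iterations regardless of the inputs.
function ct_multiply(int $x, int $y): int
{
    $z = 0;
    for ($i = 0; $i < 32; ++$i) {
        // Mask is -1 (all ones) if the lowest bit of $y is set, else 0.
        $mask = -($y & 1);
        // Branchless version of "if ($y & 1) { $z += $x; }"
        $z += ($x & $mask);
        $x <<= 1;
        $y >>= 1;
    }
    return $z;
}
```

The mask-and-add on each iteration is the bitwise replacement for the conditional branch in step 3.1.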

Constant-Time Integer Division

Although some cryptography algorithms call for integer division, division isn’t usually expected to be constant-time.

However, if you look up a division algorithm for unsigned integers with a remainder, you’ll likely encounter this algorithm, which is almost constant-time:

if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0                     
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator
  if R ≥ D then
    R := R − D
    Q(i) := 1

If we use the tricks we learned from implementing constant-time string inequality with constant-time conditional selection, we can implement this algorithm without timing leaks.

Our constant-time version of this algorithm looks like this:

if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0                     
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator
  compared := ct_compare(R, D) -- Use constant-time inequality
  -- if R > D  then compared ==  1, swap = 1
  -- if R == D then compared ==  0, swap = 1
  -- if R < D  then compared == -1, swap = 0
  swap := (1 - ((compared >> 31) & 1))

  -- R' = R - D
  -- Q' = Q, Q[i] = 1
  Rprime := R - D
  Qprime := Q
  Qprime(i) := 1 -- The i'th bit is set to 1

  -- Replace (R with R', Q with Q') if swap == 1
  R = ct_select(swap, Rprime, R)
  Q = ct_select(swap, Qprime, Q)

It’s approximately twice as slow as the original, but it’s constant-time.
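For the curious, here's a hypothetical PHP rendering of the constant-time pseudocode above, specialized to non-negative integers that fit in 32 bits (so the loop count is fixed and doesn't depend on the inputs):

```php
<?php
// Sketch: constant-time unsigned division with remainder for 32-bit
// operands. Returns [quotient, remainder].
function ct_divide(int $n, int $d): array
{
    if ($d === 0) {
        throw new InvalidArgumentException('Division by zero');
    }
    $q = 0;
    $r = 0;
    for ($i = 31; $i >= 0; --$i) {
        // Shift the next bit of the numerator into the remainder.
        $r = ($r << 1) | (($n >> $i) & 1);
        // swap = 1 if $r >= $d, else 0, computed without a branch:
        // ($r - $d) is negative iff $r < $d, so its sign bit is the answer.
        $swap = 1 - ((($r - $d) >> 63) & 1);
        $mask = -$swap; // all ones if swap == 1, else 0
        // Branchless version of "if ($r >= $d) { $r -= $d; $q |= 1 << $i; }"
        $r -= ($d & $mask);
        $q |= ($swap << $i);
    }
    return [$q, $r];
}
```

Here the sign-bit extraction plays the role of ct_compare, and the mask-and-subtract plays the role of ct_select.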

(Art by Khia.)

Constant-Time Modular Inversion

Modular inversion is the calculation of 1/x \pmod{p} for some prime p. This is used in a lot of places, but especially in elliptic curve cryptography and RSA.

Daniel J. Bernstein and Bo-Yin Yang published a paper on fast constant-time GCD and Modular Inversion in 2019. The algorithm in question is somewhat straightforward to implement (although determining whether or not that implementation is safe is left as an exercise to the rest of us).

A simpler technique is to use Fermat’s Little Theorem: 1/x = x^{p-2} mod p for some prime p. This only works with prime fields, and is slower than a Binary GCD (which isn’t necessarily constant-time, as OpenSSL discovered).
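To illustrate the Fermat's Little Theorem approach (and only the math: PHP's gmp_powm() makes no constant-time guarantees, so don't treat this sketch as side-channel-safe):

```php
<?php
// Sketch: modular inversion via Fermat's Little Theorem:
//     1/x = x^(p-2) mod p, valid when p is prime and x is not a multiple of p.
// WARNING: gmp_powm() is not guaranteed to run in constant time.
function fermat_invert(\GMP $x, \GMP $p): \GMP
{
    return gmp_powm($x, gmp_sub($p, 2), $p);
}
```

A constant-time deployment would need a modular exponentiation routine with data-independent timing (e.g. a fixed-window Montgomery ladder), not GMP's general-purpose one.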

BearSSL provides an implementation (and accompanying documentation) for a constant-time modular inversion algorithm based on Binary GCD.

(In the future, I may update this section of this blog post with an implementation in PHP, using the GMP extension.)

Further Reading and Online Resources

If you’re at all interested in cryptographic side-channels, your hunger for knowledge probably won’t be sated by a single blog post. Here’s a collection of articles, papers, books, etc. worth reading.


  • 2020-08-27: The original version of this blog post incorrectly attributed Jacobian coordinate blinding to ECDSA hardening, rather than ECDH hardening. This error was brought to my attention by Thai Duong. Thanks Thai!
  • 2020-08-27: Erin correctly pointed out that omitting memory access timing was a disservice to developers, who might not be aware of the risks involved. I’ve updated the post to call this risk out specifically (especially in the conditional select code, which some developers might try to implement with pointer swapping without knowing the risks involved). Thanks Erin!

I hope you find this guide to side-channels helpful.

Soatok heart sticker
Thanks for reading!

Follow my blog for more Defense Against the Bark Arts posts in the future.

By Soatok

Security engineer with a fursona. Ask me about dholes or Diffie-Hellman!

