Side-channel attack

An attempt to decode RSA key bits using power analysis. The left pulse represents the CPU power variations during a step of the algorithm without multiplication; the broader right pulse represents a step with multiplication, allowing an attacker to read the bits 0 and 1.

In computer security, a side-channel attack is any attack based on extra information that can be gathered because of the fundamental way a computer protocol or algorithm is implemented, rather than flaws in the design of the protocol or algorithm itself (e.g. flaws found in a cryptanalysis of a cryptographic algorithm) or minor, but potentially devastating, mistakes or oversights in the implementation. (Cryptanalysis also includes searching for side-channel attacks.) Timing information, power consumption, electromagnetic leaks, and sound are examples of extra information which could be exploited to facilitate side-channel attacks.

Some side-channel attacks require technical knowledge of the internal operation of the system, although others such as differential power analysis are effective as black-box attacks. The rise of Web 2.0 applications and software-as-a-service has also significantly raised the possibility of side-channel attacks on the web, even when transmissions between a web browser and server are encrypted (e.g. through HTTPS or WiFi encryption), according to researchers from Microsoft Research and Indiana University.[1]

Attempts to break a cryptosystem by deceiving or coercing people with legitimate access are not typically considered side-channel attacks: see social engineering and rubber-hose cryptanalysis.

General classes of side-channel attack include:

  • Cache attack — attacks based on the attacker's ability to monitor cache accesses made by the victim in a shared physical system, such as a virtualized environment or a type of cloud service.
  • Timing attack — attacks based on measuring how much time various computations (such as, say, comparing an attacker's given password with the victim's unknown one) take to perform.
  • Power-monitoring attack — attacks that make use of varying power consumption by the hardware during computation.
  • Electromagnetic attack — attacks based on leaked electromagnetic radiation, which can directly provide plaintexts and other information. Such measurements can be used to infer cryptographic keys using techniques equivalent to those in power analysis or can be used in non-cryptographic attacks, e.g. TEMPEST (aka van Eck phreaking or radiation monitoring) attacks.
  • Acoustic cryptanalysis — attacks that exploit sound produced during a computation (rather like power analysis).
  • Differential fault analysis — in which secrets are discovered by introducing faults in a computation.
  • Data remanence — in which sensitive data are read after supposedly having been deleted. (e.g. Cold boot attack)
  • Software-initiated fault attacks — currently a rare class of side channels; Row hammer is an example, in which off-limits memory can be changed by accessing adjacent memory too frequently (causing loss of state retention).
  • Allowlist — attacks based on the fact that an allowlisting device behaves differently when communicating with allowlisted devices (sending back responses) and non-allowlisted devices (not responding at all). An allowlist-based side channel may be used to track Bluetooth MAC addresses.
  • Optical — attacks in which secrets and sensitive data are read by visual recording using a high-resolution camera or other devices with such capabilities (see examples below).

In all cases, the underlying principle is that physical effects caused by the operation of a cryptosystem (on the side) can provide useful extra information about secrets in the system, for example, the cryptographic key, partial state information, full or partial plaintexts and so forth. The term cryptophthora (secret degradation) is sometimes used to express the degradation of secret key material resulting from side-channel leakage.

Examples


A cache side-channel attack works by monitoring security-critical operations such as AES T-table entry accesses[2][3][4] or modular exponentiation, multiplication, or memory accesses.[5] The attacker is then able to deduce the encryption key from the accesses made (or not made) by the victim. Also, unlike some other side-channel attacks, this method does not create a fault in the ongoing cryptographic operation and is invisible to the victim.

In 2017, two CPU vulnerabilities (dubbed Meltdown and Spectre) were discovered, which can use a cache-based side channel to allow an attacker to leak memory contents of other processes and the operating system itself.

A timing attack watches data movement into and out of the CPU or memory on the hardware running the cryptosystem or algorithm. Simply by observing variations in how long it takes to perform cryptographic operations, it might be possible to determine the entire secret key. Such attacks involve statistical analysis of timing measurements and have been demonstrated across networks.[6]
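To illustrate the principle, the following is a minimal, self-contained sketch (not taken from the cited work) of how a hypothetical early-exit comparison leaks information through timing; the per-byte delay is exaggerated so the effect is visible without careful statistics.

```python
import time

def insecure_compare(secret: bytes, guess: bytes) -> bool:
    """Hypothetical early-exit comparison: it returns as soon as a byte
    differs, so its running time grows with the length of the matching prefix."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
        time.sleep(0.001)  # exaggerated per-byte cost to make the leak visible
    return True

def average_time(secret: bytes, guess: bytes, runs: int = 50) -> float:
    """Average wall-clock time of the comparison over several runs."""
    start = time.perf_counter()
    for _ in range(runs):
        insecure_compare(secret, guess)
    return (time.perf_counter() - start) / runs

secret = b"hunter2!"
# Guesses with longer correct prefixes take measurably longer, letting an
# attacker recover the secret one byte at a time.
for guess in (b"zzzzzzzz", b"hzzzzzzz", b"huzzzzzz", b"hunzzzzz"):
    print(guess, f"{average_time(secret, guess):.4f} s")
```

A real attack replaces the exaggerated delay with statistical analysis over many noisy measurements, as in the remote timing attacks cited above.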

A power-analysis attack can provide even more detailed information by observing the power consumption of a hardware device such as CPU or cryptographic circuit. These attacks are roughly categorized into simple power analysis (SPA) and differential power analysis (DPA). One example is Collide+Power, which affects nearly all CPUs.[7][8][9] Other examples use machine learning approaches.[10]

Fluctuations in current also generate radio waves, enabling attacks that analyze measurements of electromagnetic (EM) emanations. These attacks typically involve similar statistical techniques as power-analysis attacks.

A deep-learning-based side-channel attack[11][12][13] using power and EM information across multiple devices has been demonstrated with the potential to break the secret key of a different but identical device with as little as a single trace.

Historical analogues to modern side-channel attacks are known. A recently declassified NSA document reveals that as far back as 1943, an engineer with Bell Telephone observed decipherable spikes on an oscilloscope associated with the decrypted output of a certain encrypting teletype.[14] According to former MI5 officer Peter Wright, the British Security Service analyzed emissions from French cipher equipment in the 1960s.[15] In the 1980s, Soviet eavesdroppers were suspected of having planted bugs inside IBM Selectric typewriters to monitor the electrical noise generated as the type ball rotated and pitched to strike the paper; the characteristics of those signals could determine which key was pressed.[16]

Power consumption of devices causes heating, which is offset by cooling effects. Temperature changes create thermally induced mechanical stress. This stress can create low level acoustic emissions from operating CPUs (about 10 kHz in some cases). Recent research by Shamir et al. has suggested that information about the operation of cryptosystems and algorithms can be obtained in this way as well. This is an acoustic cryptanalysis attack.

If the surface of the CPU chip, or in some cases the CPU package, can be observed, infrared images can also provide information about the code being executed on the CPU, known as a thermal-imaging attack.[citation needed]

Examples of optical side-channel attacks range from gleaning information from the hard disk activity indicator[17] to reading a small number of photons emitted by transistors as they change state.[18]

Allocation-based side channels also exist and refer to the information that leaks from the allocation (as opposed to the use) of a resource such as network bandwidth to clients that are concurrently requesting the contended resource.[19]

Countermeasures


Because side-channel attacks rely on the relationship between information emitted (leaked) through a side channel and the secret data, countermeasures fall into two main categories: (1) eliminate or reduce the release of such information and (2) eliminate the relationship between the leaked information and the secret data, that is, make the leaked information unrelated, or rather uncorrelated, to the secret data, typically through some form of randomization of the ciphertext that transforms the data in a way that can be undone after the cryptographic operation (e.g., decryption) is completed.

Under the first category, displays with special shielding to lessen electromagnetic emissions, reducing susceptibility to TEMPEST attacks, are now commercially available. Power line conditioning and filtering can help deter power-monitoring attacks, although such measures must be used cautiously, since even very small correlations can remain and compromise security. Physical enclosures can reduce the risk of surreptitious installation of microphones (to counter acoustic attacks) and other micro-monitoring devices (against CPU power-draw or thermal-imaging attacks).

Another countermeasure (still in the first category) is to jam the emitted channel with noise. For instance, a random delay can be added to deter timing attacks, although adversaries can compensate for these delays by averaging multiple measurements (or, more generally, using more measurements in the analysis). When the amount of noise in the side channel increases, the adversary needs to collect more measurements.
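The effect of averaging can be illustrated with a small synthetic sketch (an assumed model, not a real measurement): a fixed true duration is hidden behind a uniformly random added delay, and the spread of the attacker's estimate shrinks roughly as the square root of the number of traces collected.

```python
import random
import statistics

def noisy_measurement(true_time: float, jitter: float) -> float:
    """One synthetic timing sample: the real duration plus a random delay
    inserted as a countermeasure (modelled here as uniform noise)."""
    return true_time + random.uniform(0.0, jitter)

def averaged_estimate(true_time: float, jitter: float, n: int) -> float:
    """The attacker's estimate after averaging n independent traces."""
    return statistics.mean(noisy_measurement(true_time, jitter) for _ in range(n))

random.seed(0)
true_time, jitter = 1.00, 0.50
for n in (1, 16, 256, 4096):
    estimates = [averaged_estimate(true_time, jitter, n) for _ in range(200)]
    print(f"traces={n:5d}  spread of estimate = {statistics.stdev(estimates):.4f}")
# The spread shrinks roughly as 1/sqrt(n): added random delay raises the
# number of traces the attacker needs but does not eliminate the leak.
```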

Another countermeasure under the first category is to use security analysis software to identify certain classes of side-channel attacks that can be found during the design stages of the underlying hardware itself. Timing attacks and cache attacks are both identifiable through certain commercially available security analysis software platforms, which allow for testing to identify the attack vulnerability itself, as well as the effectiveness of the architectural change to circumvent the vulnerability. The most comprehensive method to employ this countermeasure is to create a Secure Development Lifecycle for hardware, which includes utilizing all available security analysis platforms at their respective stages of the hardware development lifecycle.[20]

In the case of timing attacks against targets whose computation times are quantized into discrete clock cycle counts, an effective countermeasure is to design the software to be isochronous, that is, to run in an exactly constant amount of time independently of secret values. This makes timing attacks impossible.[21] Such countermeasures can be difficult to implement in practice, since even individual instructions can have variable timing on some CPUs.
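As a software illustration of the isochronous idea, the sketch below compares two byte strings without an early exit. Note that pure Python cannot actually guarantee constant time (interpreter and allocator effects remain), so real code should rely on a purpose-built primitive such as hmac.compare_digest or a constant-time routine in a lower-level language.

```python
import hmac

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without an early exit: every byte is
    visited and differences are accumulated with XOR/OR, so control flow
    does not depend on where the first mismatch occurs."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# In practice, prefer the standard-library primitive written for this purpose.
assert constant_time_equal(b"secret", b"secret")
assert not constant_time_equal(b"secret", b"secreT")
assert hmac.compare_digest(b"secret", b"secret")
```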

One partial countermeasure against simple power attacks, but not differential power-analysis attacks, is to design the software so that it is "PC-secure" in the "program counter security model". In a PC-secure program, the execution path does not depend on secret values. In other words, all conditional branches depend only on public information. (This is a more restrictive condition than isochronous code, but a less restrictive condition than branch-free code.) Even though multiply operations draw more power than NOP on practically all CPUs, using a constant execution path prevents such operation-dependent power differences (differences in power from choosing one branch over another) from leaking any secret information.[21] On architectures where the instruction execution time is not data-dependent, a PC-secure program is also immune to timing attacks.[22][23]
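A common building block of PC-secure code is a branch-free conditional select: both candidate results are computed, and a mask derived from the secret bit chooses between them, so the executed instruction sequence is identical whichever value the bit has. The following is an illustrative 64-bit sketch.

```python
MASK64 = (1 << 64) - 1

def ct_select(bit: int, a: int, b: int) -> int:
    """Return a if bit == 1 else b, without a secret-dependent branch.
    mask is all ones when bit is 1 and all zeros when bit is 0."""
    mask = (-(bit & 1)) & MASK64
    return ((a & mask) | (b & ~mask)) & MASK64

# Both "branches" are always evaluated; only the masking differs, so the
# execution path is the same for bit = 0 and bit = 1.
print(hex(ct_select(1, 0xAAAA, 0x5555)))  # 0xaaaa
print(hex(ct_select(0, 0xAAAA, 0x5555)))  # 0x5555
```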

Another way in which code can be non-isochronous is that modern CPUs have a memory cache: accessing infrequently used information incurs a large timing penalty, revealing some information about the frequency of use of memory blocks. Cryptographic code designed to resist cache attacks attempts to use memory in only a predictable fashion (like accessing only the input, outputs and program data, and doing so according to a fixed pattern). For example, data-dependent table lookups must be avoided because the cache could reveal which part of the lookup table was accessed.
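One way to avoid secret-dependent table lookups, sketched below under the assumption of a 256-entry byte table, is to read every entry on every lookup and keep the desired one with a branch-free mask, so the memory access pattern is independent of the secret index. Python is used only for illustration; real implementations do this in constant-time C or assembly, or restructure the algorithm (e.g. bit-sliced AES).

```python
TABLE = [(i * 7 + 3) & 0xFF for i in range(256)]  # stand-in for a secret-indexed table

def lookup_leaky(index: int) -> int:
    """Touches a single entry; the cache line accessed reveals the index."""
    return TABLE[index]

def lookup_full_scan(index: int) -> int:
    """Reads every entry on every call and selects the wanted one with a
    mask, so the sequence of memory accesses does not depend on the index."""
    result = 0
    for i, value in enumerate(TABLE):
        diff = i ^ index
        eq_mask = ((diff - 1) >> 8) & 0xFF  # 0xFF when i == index, else 0x00
        result |= value & eq_mask
    return result

assert all(lookup_full_scan(i) == lookup_leaky(i) for i in range(256))
```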

Other partial countermeasures attempt to reduce the amount of information leaked from data-dependent power differences. Some operations use power that is correlated to the number of 1 bits in a secret value. Using a constant-weight code (such as using Fredkin gates or dual-rail encoding) can reduce the leakage of information about the Hamming weight of the secret value, although exploitable correlations are likely to remain unless the balancing is perfect. This "balanced design" can be approximated in software by manipulating both the data and its complement together.[21]
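The software approximation mentioned above can be sketched as follows: each secret byte is stored together with its bitwise complement, and every operation updates both halves, so the pair always contains exactly eight one-bits regardless of the data. This is an illustrative model of dual-rail encoding, not a drop-in secure implementation.

```python
def to_balanced(x: int) -> tuple:
    """Represent an 8-bit value together with its complement; the pair
    always has a combined Hamming weight of exactly 8."""
    return x & 0xFF, (~x) & 0xFF

def balanced_xor(a: tuple, b: tuple) -> tuple:
    """XOR in the balanced representation: both rails are updated, so the
    combined Hamming weight of the result is still 8."""
    av, ac = a
    bv, bc = b
    value = (av & bc) | (ac & bv)   # a XOR b
    comp = (av & bv) | (ac & bc)    # complement of (a XOR b)
    return value, comp

a, b = to_balanced(0b10110010), to_balanced(0b01010101)
v, c = balanced_xor(a, b)
assert v == (0b10110010 ^ 0b01010101) and c == (~v) & 0xFF
assert bin(v).count("1") + bin(c).count("1") == 8
```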

Several "secure CPUs" have been built as asynchronous CPUs; they have no global timing reference. While these CPUs were intended to make timing and power attacks more difficult,[21] subsequent research found that timing variations in asynchronous circuits are harder to remove.[24]

A typical example of the second category (decorrelation) is a technique known as blinding. In the case of RSA decryption with secret exponent d and corresponding encryption exponent e and modulus m, the technique applies as follows (for simplicity, the modular reduction by m is omitted in the formulas): before decrypting, that is, before computing the result of y^d for a given ciphertext y, the system picks a random number r and encrypts it with public exponent e to obtain r^e. Then, the decryption is done on y · r^e to obtain (y · r^e)^d = y^d · r^(e·d) = y^d · r. Since the decrypting system chose r, it can compute its inverse modulo m to cancel out the factor r in the result and obtain y^d, the actual result of the decryption. For attacks that require collecting side-channel information from operations with data controlled by the attacker, blinding is an effective countermeasure, since the actual operation is executed on a randomized version of the data, over which the attacker has no control or even knowledge.
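A minimal sketch of RSA base blinding follows, with deliberately tiny toy parameters (p = 61, q = 53) chosen only so the arithmetic is easy to follow; it relies on Python's built-in pow for modular exponentiation and modular inverses.

```python
import secrets
from math import gcd

# Toy RSA parameters for illustration only; real keys are thousands of bits.
p, q = 61, 53
m = p * q                              # modulus
e = 17                                 # public (encryption) exponent
d = pow(e, -1, (p - 1) * (q - 1))      # secret (decryption) exponent

def decrypt_blinded(y: int) -> int:
    """RSA decryption with blinding: the secret-exponent operation runs on
    r^e * y rather than on the attacker-chosen ciphertext y."""
    while True:
        r = secrets.randbelow(m - 2) + 2
        if gcd(r, m) == 1:
            break
    blinded = (pow(r, e, m) * y) % m            # y * r^e        (mod m)
    blinded_plain = pow(blinded, d, m)          # y^d * r        (mod m)
    return (blinded_plain * pow(r, -1, m)) % m  # remove r, keep y^d (mod m)

plaintext = 42
ciphertext = pow(plaintext, e, m)
assert decrypt_blinded(ciphertext) == plaintext
```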

A more general countermeasure (in that it is effective against all side-channel attacks) is the masking countermeasure. The principle of masking is to avoid manipulating any sensitive value y directly, but rather manipulate a sharing of it: a set of variables (called "shares") y_1, ..., y_d such that y = y_1 ⊕ y_2 ⊕ ... ⊕ y_d (where ⊕ is the XOR operation). An attacker must recover all the values of the shares to get any meaningful information.[25]
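The sharing itself is simple to illustrate. The sketch below shows only the splitting and recombination steps; a full masked implementation must also carry the shares through every operation of the cipher, which is the hard part.

```python
import secrets
from functools import reduce

def share(y: int, n_shares: int, bits: int = 8) -> list:
    """Split y into n XOR shares: n-1 uniformly random values plus one
    value chosen so that the XOR of all shares equals y."""
    shares = [secrets.randbits(bits) for _ in range(n_shares - 1)]
    shares.append(reduce(lambda a, b: a ^ b, shares, y))
    return shares

def unshare(shares: list) -> int:
    """Recombine the shares; all of them are needed."""
    return reduce(lambda a, b: a ^ b, shares, 0)

parts = share(0xA7, 3)
assert unshare(parts) == 0xA7
# Any proper subset of the shares is uniformly distributed and therefore
# reveals nothing about y on its own.
```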

Recently, white-box modeling was utilized to develop a low-overhead generic circuit-level countermeasure[26] against both EM and power side-channel attacks. To minimize the effect of the higher-level metal layers in an IC acting as more efficient antennas,[27] the idea is to embed the crypto core within a signature suppression circuit,[28][29] routed locally within the lower-level metal layers, providing immunity to both power and EM side-channel attacks.


References

  1. ^ Shuo Chen; Rui Wang; XiaoFeng Wang & Kehuan Zhang (May 2010). "Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow" (PDF). Microsoft Research. IEEE Symposium on Security & Privacy 2010. Archived (PDF) from the original on 2016-06-17. Retrieved 2011-12-16.
  2. ^ Ashokkumar C.; Ravi Prakash Giri; Bernard Menezes (2016). "Highly Efficient Algorithms for AES Key Retrieval in Cache Access Attacks". 2016 IEEE European Symposium on Security and Privacy (EuroS&P). pp. 261–275. doi:10.1109/EuroSP.2016.29. ISBN 978-1-5090-1751-5. S2CID 11251391.
  3. ^ Gorka Irazoqui; Mehmet Sinan Inci; Thomas Eisenbarth; Berk Sunar, Wait a minute! A fast, Cross-VM attack on AES (PDF), archived (PDF) from the original on 2017-08-11, retrieved 2018-01-07
  4. ^ Yuval Yarom; Katrina Falkner, Flush+Reload: a High Resolution, Low Noise, L3 Cache Side-Channel Attack (PDF), archived (PDF) from the original on 2017-07-05, retrieved 2018-01-07
  5. ^ Mehmet S. Inci; Berk Gulmezoglu; Gorka Irazoqui; Thomas Eisenbarth; Berk Sunar, Cache Attacks Enable Bulk Key Recovery on the Cloud (PDF), archived (PDF) from the original on 2016-07-17, retrieved 2018-01-07
  6. ^ David Brumley; Dan Boneh (2003). "Remote timing attacks are practical" (PDF). Archived (PDF) from the original on 2011-07-28. Retrieved 2010-11-05.
  7. ^ Kovacs, Eduard (2023-08-01). "Nearly All Modern CPUs Leak Data to New Collide+Power Side-Channel Attack". SecurityWeek. Archived from the original on 2024-07-11. Retrieved 2023-08-02.
  8. ^ Claburn, Thomas. "Another CPU data-leak flaw found. Luckily, it's impractical". www.theregister.com. Retrieved 2023-08-02.
  9. ^ Collide+Power, Institute of Applied Information Processing and Communications (IAIK), 2023-08-01, archived from the original on 2023-08-01, retrieved 2023-08-02
  10. ^ Lerman, Liran; Bontempi, Gianluca; Markowitch, Olivier (1 January 2014). "Power analysis attack: an approach based on machine learning". International Journal of Applied Cryptography. 3 (2): 97–115. doi:10.1504/IJACT.2014.062722. ISSN 1753-0563. Archived from the original on 25 January 2021. Retrieved 25 September 2020.
  11. ^ Timon, Benjamin (2019-02-28). "Non-Profiled Deep Learning-based Side-Channel attacks with Sensitivity Analysis". IACR Transactions on Cryptographic Hardware and Embedded Systems: 107–131. doi:10.13154/tches.v2019.i2.107-131. ISSN 2569-2925. S2CID 4052139. Archived from the original on 2021-11-12. Retrieved 2021-11-19.
  12. ^ "X-DeepSCA: Cross-Device Deep Learning Side Channel Attack" Archived 2020-02-22 at the Wayback Machine by D. Das, A. Golder, J. Danial, S. Ghosh, A. Raychowdhury and S. Sen, in 56th ACM/IEEE Design Automation Conference (DAC) 2019.
  13. ^ "Practical Approaches Toward Deep-Learning-Based Cross-Device Power Side-Channel Attack" Archived 2024-07-11 at the Wayback Machine by A. Golder, D. Das, J. Danial, S. Ghosh, A. Raychowdhury and S. Sen, in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 27, Issue 12, 2019.
  14. ^ "Declassified NSA document reveals the secret history of TEMPEST". Wired. Wired.com. April 29, 2008. Archived from the original on May 1, 2008. Retrieved May 2, 2008.
  15. ^ "An Introduction to TEMPEST | SANS Institute". Archived from the original on 2017-09-05. Retrieved 2015-10-06.
  16. ^ Church, George (April 20, 1987). "The Art of High-Tech Snooping". Time. Archived from the original on June 4, 2011. Retrieved January 21, 2010.
  17. ^ Eduard Kovacs (February 23, 2017), "Hard Drive LED Allows Data Theft From Air-Gapped PCs", Security Week, archived from the original on 2017-10-07, retrieved 2018-03-18
  18. ^ J. Ferrigno; M. Hlaváč (September 2008), "When AES blinks: introducing optical side channel", IET Information Security, 2 (3): 94–98, doi:10.1049/iet-ifs:20080038, archived from the original on 2018-01-11, retrieved 2017-03-16
  19. ^ S. Angel; S. Kannan; Z. Ratliff, "Private resource allocators and their Applications" (PDF), Proceedings of the IEEE Symposium on Security and Privacy (S&P), 2020., archived (PDF) from the original on 2020-06-24, retrieved 2020-06-23
  20. ^ Tortuga Logic (2018). "Identifying Isolation Issues in Modern Microprocessor Architectures". Archived from the original on 2018-02-24. Retrieved 2018-02-23.
  21. ^ a b c d "A Network-based Asynchronous Architecture for Cryptographic Devices" Archived 2011-09-29 at the Wayback Machine by Ljiljana Spadavecchia 2005 in sections "3.2.3 Countermeasures", "3.4.2 Countermeasures", "3.5.6 Countermeasures", "3.5.7 Software countermeasures", "3.5.8 Hardware countermeasures", and "4.10 Side-channel analysis of asynchronous architectures".
  22. ^ "The Program Counter Security Model: Automatic Detection and Removal of Control-Flow Side Channel Attacks" Archived 2009-04-19 at the Wayback Machine by David Molnar, Matt Piotrowski, David Schultz, David Wagner (2005).
  23. ^ ""The Program Counter Security Model: Automatic Detection and Removal of Control-Flow Side Channel Attacks" USENIX Work-in-Progress presentation of paper" (PDF). Archived (PDF) from the original on 2017-08-14. Retrieved 2014-10-04.
  24. ^ Jeong, C.; Nowick, S. M. (January 2007). "Optimization of Robust Asynchronous Circuits by Local Input Completeness Relaxation". 2007 Asia and South Pacific Design Automation Conference. pp. 622–627. doi:10.1109/ASPDAC.2007.358055. ISBN 978-1-4244-0629-6. S2CID 14219703.
  25. ^ "Masking against Side-Channel Attacks: A Formal Security Proof" Archived 2017-08-11 at the Wayback Machine by Emmanuel Prouff, Matthieu Rivain in Advances in Cryptology – EUROCRYPT 2013.
  26. ^ "EM and Power SCA-Resilient AES-256 in 65nm CMOS Through >350× Current-Domain Signature Attenuation" Archived 2020-08-07 at the Wayback Machine by D. Das et al., in IEEE International Solid-State Circuits Conference (ISSCC), 2020,
  27. ^ "STELLAR: A Generic EM Side-Channel Attack Protection through Ground-Up Root-cause Analysis" Archived 2020-02-22 at the Wayback Machine by D. Das, M. Nath, B. Chatterjee, S. Ghosh, and S. Sen, in IEEE International Symposium on Hardware Oriented Security and Trust (HOST), Washington, DC, 2019.
  28. ^ "ASNI: Attenuated Signature Noise Injection for Low-Overhead Power Side-Channel Attack Immunity" Archived 2020-02-22 at the Wayback Machine by D. Das, S. Maity, S.B. Nasir, S. Ghosh, A. Raychowdhury and S. Sen, in IEEE Transactions on Circuits and Systems I: Regular Papers, 2017, Vol. 65, Issue 10.
  29. ^ "High efficiency power side-channel attack immunity using noise injection in attenuated signature domain" Archived 2020-02-22 at the Wayback Machine by D. Das, S. Maity, S.B. Nasir, S. Ghosh, A. Raychowdhury and S. Sen, in IEEE International Symposium on Hardware Oriented Security and Trust (HOST), Washington, DC, 2017.

Further reading



Articles

  • Differential Power Analysis, P. Kocher, J. Jaffe, B. Jun, in Proceedings of CRYPTO '99.
  • Side channel attack: an approach based on machine learning, L. Lerman, G. Bontempi, O. Markowitch, 2011.
  • Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems, P. Kocher.
  • Introduction to Differential Power Analysis and Related Attacks, P. Kocher, J. Jaffe, B. Jun, 1998.
  • Nist.gov, A Cautionary Note Regarding Evaluation of AES Candidates on Smart Cards, S. Chari, C. Jutla, J. R. Rao, P. Rohatgi, 1999.
  • DES and Differential Power Analysis, L Goubin and J Patarin, in Proceedings of CHES'99, Lecture Notes in Computer Science Nr 1717, Springer-Verlag
  • Grabher, Philipp; et al. (2007). "Cryptographic Side-Channels from Low-power Cache Memory". In Galbraith, Steven D. (ed.). Cryptography and coding: 11th IMA International Conference, Cirencester, UK, December 18-20, 2007: proceedings, Volume 11. Springer. ISBN 978-3-540-77271-2.
  • Kamal, Abdel Alim; Youssef, Amr M. (2012). "Fault analysis of the NTRUSign digital signature scheme". Cryptography and Communications. 4 (2): 131–144. doi:10.1007/s12095-011-0061-3. S2CID 2901175.
  • Daniel Genkin; Adi Shamir; Eran Tromer (December 18, 2013). "RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis". Tel Aviv University. Retrieved October 15, 2014.