Tamper-resistant Smart Cards – Too Much To Ask For?


Ville Taponen

Helsinki University of Technology

Telecommunications Software and Multimedia Laboratory
Ville.Taponen@hut.fi

Abstract

Smart cards are convenient and reasonably secure cryptographic tokens. We will therefore see an increasing number of smart card systems such as electronic purses, personal health information storage, pay-TV, etc. This paper discusses the difficulty of making smart cards tamper-resistant to persons with different levels of expertise, resources and time. Some general design guidelines are given and various common attack types are introduced along with their respective countermeasures. Despite the fact that Differential Power Analysis (DPA) is highly effective in extracting keys from almost any system, commercially reasonable tamper-resistance is available.

Keywords: tamper-resistance, smart cards, standards, certification, physical attack, differential power analysis, DPA, countermeasures

1 Introduction

Smart cards have existed in various forms and applications since the beginning of the 1970’s. Since then, smart cards have received a great amount of attention in the control device market. Rapid advances in the design and manufacture of components have driven the growth of the smart card industry. Smart card technology will grow into what analysts estimate will be a $16 billion market by the year 2005 [7].

Picture this: At a supermarket checkout counter, you pay for your merchandise with money stored on a small electronic card. To reload your electronic purse afterward, you plug the card into your Global System for Mobile Communications (GSM) phone and dial up your bank account. Then, on the metro, the same card gives you access to the track area because it holds your prepaid monthly ticket. Once you get home, the card identifies you as a subscriber of your favorite TV channel as you plug it into the set-top box.

Portions of this multi-functional smart card scenario are reality today. Unfortunately, there is little analysis of the security risks specific to smart cards, and the unique threat environments they face. A smart card might seem to be an intrinsically secure device. It provides a safe place to store valuable information such as private keys, passwords, or personal information such as health maintenance data. It is also a secure place to perform processes that one does not want exposed to the world, such as performing a public key or private key encryption. However, smart cards offer some level of tamper-resistance but are not tamper-proof. Today, many cryptography experts claim that they have already found ways of hacking smart cards. This paper will offer general guidelines for smart card design and present various theoretical and real-world attacks. In section 5 we will discuss some important countermeasures in the literature.

2 Smart Card Technology

Smart cards contain a miniature computer system built on a single chip. This computer system has important similarities to and differences from other kinds of computers. Like others, it has a central processing unit (CPU) and various kinds of memory. Unlike others, cost is a major constraint, as the final chips must be sold for a few dollars rather than the tens to thousands of dollars other computer chips sell for. The chips must also be as small as possible for both cost and reliability reasons.

2.1 Physical Structure

The physical structure of a smart card is specified by the International Organization for Standardization (ISO) and usually conforms to ISO 7810, 7813 and 7816 [15]. Generally a smart card is made up of three elements. The plastic card is the most basic one and has the dimensions of 85.60 mm × 53.98 mm × 0.80 mm but may have the smaller size of a GSM subscriber identification module, known as a SIM. A printed circuit and an integrated circuit (IC) are embedded on the card. Figure 1 shows an overview of the physical structure of a smart card.

Figure 1: Physical structure of a smart card.

The printed circuit provides five connection points for power and data. It is hermetically fixed in the recess provided on the card and is burned onto the circuit chip, filled with a conductive material, and sealed with contacts protruding [4]. The printed circuit protects the circuit chip from mechanical stress and static electricity. Communication with the chip is accomplished through contacts that overlay the printed circuit.

The capability of a smart card is defined by its IC chip. Most current smart cards use 8-bit microprocessors. The 32-bit microprocessor is very likely to replace the 8-bit microprocessor in the near future. This increase in capacity allows application developers to implement operations directly on 32-bit integers without breaking them down into 8-bit parts. It can accept more advanced applications without constraints and hence enables a Java smart card to handle more complicated jobs for humans [7]. Although more powerful chips will be available shortly, none as yet have multi-threading and other powerful features that are common in standard computers.

Smart card chips use several types of memory, all implemented on a single chip. Memory sizes range from as little as 1 kB of programmable non-volatile memory to as much as 24 kB. Non-volatile memory size is similarly limited. Programmable non-volatile memory is generally EEPROM (Electrically Erasable Programmable Read-Only Memory). It can be programmed after chip manufacture, which is both its strength and its weakness. Non-volatile memory is generally ROM (Read-Only Memory), which is put in the chip hardware when it is manufactured. It cannot be changed, although its operation can be logically blocked. Volatile memory is generally RAM (Random Access Memory), used as temporary storage for interim operations.

2.2 General Life Cycle Model

There is no single typical smart card life cycle. Minimally, a smart card must meet some user requirements, without which there will be no market for the final product. It must be designed, manufactured, issued, used, and taken out of use [4][15]. Each of these steps requires careful consideration.

2.2.1 User Requirements

All smart card deployments begin with user requirements. These requirements drive the rest of the process. Among the issues that may be noted in the user requirements are the following [15]:

• whether there should be one or more than one application

• the level(s) of security needed

• whether the requirements are clearly or only partially known

• what level and kind of IT skills the potential users are expected to have

• how much flexibility the product should afford the user

• acceptable price ranges and how many cards are anticipated

2.2.2 Software And Hardware Design

Smart cards put applications and operating systems into integrated circuits that are embedded in a printed card. Each of these (application, OS, IC, and card) is generally designed and manufactured by a different company, although the application and the operating system are sometimes done by the card company [15].

Operating system, chip and card designers frequently try to anticipate several users' requirements and design general-purpose operating systems and chips. If the application has lower security requirements or anticipates a small number of cards, or if all the details of the successful application are not yet known, a decision may be made to use one of these general-purpose products. Whether the card is a proprietary one or not, there will be several designers, often working for different companies.

2.2.3 Manufacturing

Integrated circuits are tested after the wafer¹ is manufactured and at several other points during development. One potential attack is to place the chip into test mode; this should not be possible after the chip has passed beyond the test phase of the life cycle.

Applications may be added during or after the chip manufacturing stage. The chip manufacturer may not even know what applications are being added to his chip if the application is added during card manufacture or after the card has been issued.

2.2.4 Issuance

The application issuer may or may not also be the card issuer. The card issuer may authorize other applications to be placed on the card, each with their own life cycles and requirements. The card may or may not be personalized to an individual, depending on the application's requirements. A banking application is issued by a financial institution, which has a contract with the end user that governs use of the application. Debit and credit applications typically are used to access an account at the issuing institution.

2.2.5 Utilization

This is the phase for the normal use of the card by the card holder. The application is used in a transaction, which requires supporting software in a card acceptance device (CAD), often called a terminal. Most attacks are anticipated to occur at this stage of the life cycle. Much of the security required in the development environment is designed to protect against these attacks.

2.2.6 End of Card Life

Typically, cards have an expiration date. The terminal will not accept a transaction from a card that has expired. The end user card is seldom returned to the issuer but is usually simply discarded. Attackers may obtain discarded cards and use them to study the security features of similar cards still in use. If cryptographic keys provide some of the security, they must be securely managed. Generally, smart cards do not have the ability to destroy keys, other than session keys [15]. Therefore expiration of keys must be carefully considered.

¹ A thin slice of semiconducting material, such as a silicon crystal, upon which microcircuits are constructed by diffusion and deposition of various materials.


3 Standardized Smart Card Security Design

The ultimate goal of smart card security is proven robustness and correct functioning of every single card delivered to the card user. Chip security and card life cycle security are the key links in this chain. Chip and card life cycle security are non-competitive issues, which means that these properties should not and cannot be separated in the design process.

3.1 Background

The market for smart cards is highly cost sensitive; differences of a few cents per card matter when millions of units are involved. This means that any defensive measures must meet very stringent cost effectiveness tests that are unusual with other IT products. Attacks that involve multiple parts of a security system are difficult to predict and model. If cipher designers, software developers, and hardware engineers do not understand or review each other's work, security assumptions made at each level of a system's design may be incomplete or unrealistic. As a result, security faults often involve unanticipated interactions between components designed by different people. For example, the National Institute of Standards and Technology (NIST) emphasizes the importance of computer security awareness and of making information security a management priority that is communicated to all employees [6].

3.2 Smart Card Security Evaluation

Currently, Financial Payment Systems, i.e. credit card brands, individually do smart card evaluations – unstandardized and possibly conflicting [3]. Vendors' products may therefore be subject to conflicting requirements and to repeated, expensive evaluations by different users. ISO 15408 – Common Criteria for Information Technology Security Evaluation, the "CC", represents the outcome of efforts to develop criteria for evaluation of IT security that are widely useful within the international community. It is an alignment and development of a number of source criteria: the existing European, US, and Canadian criteria (ITSEC, TCSEC and CTCPEC respectively). The Common Criteria resolves the conceptual and technical differences between the source criteria. It is a contribution to the development of an international standard, and opens the way to worldwide mutual recognition of evaluation results. Version 1.0 of the CC was published for comment in January 1996. Version 2.1, the current version, was published in December 1999. If independent third party evaluation should become mandatory, it would require sharing test methods and information about vulnerabilities between private companies and independent institutions [11]. A public acceptance of an evaluation scheme could even require an open discussion and disclosure of information about risks and vulnerabilities to the public. It is therefore unfortunate if smart card security really depends on confidentiality of CPU design and specifications.

3.2.1 Common Criteria Important Concepts

1. Common structure and language for expressing product or system IT security requirements [8].


2. Catalogs of standardized IT security requirement components and packages. The CC presents requirements for the IT security of a product or system under the distinct categories of functional requirements [9] and assurance requirements [10]. The CC functional requirements define desired security behavior. Assurance requirements are the basis for gaining confidence that the claimed security measures are effective and implemented correctly.

The CC envisages the definition of Protection Profiles (PPs): standardized and well understood sets of implementation-independent security requirements developed by a user group to specify their security functionality needs for a particular product; [15] is an example. This allows a manufacturer or product developer to build a product according to the requirements of a PP. They can then have it evaluated and claim conformance to the PP. The product is still evaluated against a security target (ST), but the contents of the ST mirror the requirements laid down in the PP. A security target is created by the product vendor and is therefore implementation specific.

The smart card protection profile presented in this study is a joint effort of the Smart Card Security User Group (SCSUG). SCSUG is a global, financially oriented industry group formed specifically to represent the security needs of the user community and, at the time of revision of [15], comprises American Express, Europay, JCB, MasterCard, Mondex, Visa, NIST and NSA.

3.2.2 CC versus ITSEC

A study in [14] gives an overview of how certification works under the ITSEC and Common Criteria schemes. It points out the difficulty of comparing these two schemes. There are several issues which favor the CC over ITSEC. An overview is given below.

The advantage of the second important concept in the CC is that the security functionality will be expressed in an explicit, unambiguous way. The wording is well understood and [8] includes detailed guidance for interpretation and application. The first important concept in the CC makes comparison of certifications by users and mutual recognition by certification bodies more practical. The detailed guidance in the CC on calculating attack potential aims at removing some of the subjectivity from this difficult assessment task, and it may offer more clarity than the ITSEC. The Smart Card Security User Group protection profile emphasizes that a vulnerability to certain types of threats can only be ascertained by examining the IC, operating system and applications as an integrated whole because effective security relies on a synergistic contribution of these three layers.

It was further noted in [14] that all the examined ITSEC certifications claimed a high Strength of Mechanisms (SoM) but the scope of each evaluation was also limited in some way, either to particular phases of the card life cycle, by exclusion of the chip from the Target of Evaluation or by specifically excluding relevant threats. It can be questioned whether a high SoM would have been attained if all threats were considered in the context of the integrated product, as it is issued to the user in its actual mode of use.


4 Attacking Tamper-resistant Devices

Section two presented the basic structure of a smart card and introduced the general life cycle model to highlight the stages where security issues must be dealt with. Section three discussed some fundamental properties of smart card design and suggested the use of the Common Criteria as a means to standardize and certify smart card security evaluation. The CC has existed since 1996, and other schemes existed years before that. It is now the year 2000; why do we still read articles about successful attacks on smart card systems? To understand better which countermeasures are of practical value, let us first identify the capabilities an attacker may have, then discuss some trade-offs in design and finally advance to real-world attacks.

4.1 Attacker Capabilities

Attackers are assumed to have various levels of expertise, resources, and motivation. Motivation may include economic reward or the satisfaction and notoriety of defeating expert security. Relevant expertise may be in general semiconductor technology, software engineering, hacking techniques, or in the specific smart card. We will adopt the taxonomy of attackers proposed by IBM to guide designers of security systems that rely to some extent on tamper-resistance [1]:

Class I (clever outsiders): They are often very intelligent but may have insufficient knowledge of the system. They may have access to only moderately sophisticated equipment. They often try to take advantage of an existing weakness in the system, rather than try to create one.

Class II (knowledgeable insiders): They have substantial specialized technical education and expertise. They have varying degrees of understanding of parts of the system but potential access to most of it. They often have highly sophisticated tools and instruments for analysis.

Class III (funded organizations): They are able to assemble teams of specialists with related and complementary skills backed by great funding resources. They are capable of in-depth analysis of the system, designing sophisticated attacks, and using the most advanced analysis tools. They may use Class II adversaries as part of the attack team.

4.2 Time and Cost Trade-off

Tamper-resistance is not absolute. It is generally believed that, given sufficient investment, any chip-sized tamper-resistant device can be compromised [1]. So the level of tamper-resistance offered by any particular product can be measured by the time and cost penalty that the protective mechanisms impose on the attacker. Estimating these penalties is clearly an important problem, but it is one to which security researchers, evaluators and engineers have paid less attention than it deserves.

The cost and inconvenience of the kind of protection used in the (military) nuclear industry are orders of magnitude greater than even major banks would be prepared to tolerate. If we really wish to prevent the loss of a cryptographic key to a class III opponent, we had better use explosives and guard the device [2]. The obvious downside is that these protection methods are not applicable to smart cards.

4.3 Attack Techniques

The critical question is always whether an attacker can obtain unsupervised access to the device [2]. If the answer is no, then relatively simple measures may suffice. But in an increasing number of applications, the attacker can obtain completely unsupervised access. This is the case that most interests us: it includes pay-TV smart cards, prepayment meter tokens, remote locking devices for cars and mobile phone SIM cards. We can distinguish four major attack categories [13][15]:

• Physical attacks like microprobing can be used to access the chip surface directly, thus we can observe, manipulate, and interfere with the IC.

• Logical attacks use the normal communication interface of the processor and exploit security vulnerabilities found in the protocols, cryptographic algorithms, or their implementation.

• Information monitoring techniques monitor, with high time resolution, the analog characteristics of all supply and interface connections and any other electromagnetic radiation produced by the processor during normal operation.

• Fault generation techniques use abnormal environmental conditions to generate malfunctions in the processor that provide additional access.

All physical attacks are invasive attacks. They require hours or weeks in a specialized laboratory and in the process they destroy the packaging. The other three are non-invasive attacks. After we have prepared such an attack for a specific processor type and software version, we can usually reproduce it within seconds on another card of the same type. The attacked card is not physically harmed and the equipment used in the attack can usually be disguised as a normal smart card reader.

Non-invasive attacks are particularly dangerous in some applications because the owner of the compromised card might not notice that the cryptographic keys have been stolen; therefore it is unlikely that the compromised keys will be revoked before any abuse. The design of most non-invasive attacks requires a class II attacker. On the other hand, invasive techniques require very little initial knowledge and usually work with a similar set of techniques on a wide range of products. Attacks therefore often start with invasive reverse engineering, the results of which then help to develop cheaper and faster non-invasive attacks [13].

4.3.1 Physical Attacks

Invasive tampering on microcontrollers is typical and for some chips almost trivial [2][4]. Physical attacks start with the removal of the chip package. The plastic card is heated until it becomes flexible. This softens the glue and the chip module can then be removed easily by bending the card and using a knife. The module is placed in fuming nitric acid as shown in Figure 2. After a while the black epoxy resin has dissolved and the chip is washed with acetone in an ultrasonic bath. The smart card processor is now glued into a test package, whose pins are then connected to the contact pads of the chip with fine aluminum wires in a manual bonding machine (Figure 3).

Figure 2: Hot fuming nitric acid is used to dissolve the encapsulating epoxy.

Figure 3: The processor is glued into a test package and its pins are connected with a manual bonding machine.


Functional tests with pay-TV and prepaid phone smart cards have shown that EEPROM content is not affected by hot nitric acid. No knowledge beyond school chemistry is necessary; the materials are easily available in any chemistry lab, and several undergraduate students have already reported the successful application of this method some time ago on an Internet mailing list dedicated to amateur smart card hacking [2].

With semiautomatic image-processing methods, significant portions of a processor can be reverse-engineered within a few days [13]. The resulting polygon data can then be used to automatically generate transistor and gate-level net lists for circuit simulation. However, this requires class II expertise.

4.3.2 Logical Attacks

Poorly designed protocols are a more common source of attacks than many people think. Many of them also require only very simple and cheap equipment to exploit. For example, satellite TV decoders typically have a hardware crypto processor that decrypts the video signal, and a microcontroller which passes messages between the crypto processor and the customer smart card that contains the key material. If a customer stops paying his subscription, the system typically sends a message over the air which instructs the decoder to disable the card. By covering specific contacts on the card, or by clamping it inside the decoder using a diode, subscribers could prevent these signals from affecting the card. They could then cancel their subscription without the vendor being able to cancel their service [1].

4.3.3 Differential Power Analysis (DPA)

Power analysis techniques fall into the category of information monitoring and they are of great concern because a very large number of vulnerable products are on the market today. The attacks are easy to implement, have a very low cost per device, and are non-invasive. A class I attacker is able to perform DPA because the attack can be automated. A computer will locate correlated regions in a device's power consumption.

DPA is based on the phenomenon that storing a 1-bit in a flip-flop consumes typically more power than a 0-bit. Also, state changes typically cause extra power consumption. In addition to large-scale power variations due to the instruction sequence, there are effects correlated to data values being manipulated. These variations tend to be smaller and are sometimes overshadowed by measurement errors and other noise. In such cases, it is still often possible to compromise the system using statistical functions tailored to the target algorithm.
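To make the statistical step concrete, the sketch below shows the classic difference-of-means form of DPA. It is an illustration only: the trace format, the selection function predict_bit and the NumPy-based analysis are assumptions for this example, not code from the paper or from [12]. Traces are partitioned according to a predicted intermediate bit for each candidate key byte, and the guess whose difference trace shows the largest spike is taken as the correct one.

import numpy as np

# Minimal difference-of-means DPA sketch (illustrative assumptions only).
# Each row of `traces` holds the power samples recorded during one encryption;
# `predict_bit` models an intermediate bit that depends on a known input byte
# and a guessed key byte.
def predict_bit(input_byte: int, key_guess: int) -> int:
    # Hypothetical selection function: least significant bit of input XOR guess.
    return (input_byte ^ key_guess) & 1

def dpa_difference(traces: np.ndarray, inputs: list, key_guess: int) -> np.ndarray:
    """Return the difference-of-means trace for one key guess."""
    bits = np.array([predict_bit(x, key_guess) for x in inputs])
    set1 = traces[bits == 1]          # traces where the predicted bit is 1
    set0 = traces[bits == 0]          # traces where the predicted bit is 0
    return set1.mean(axis=0) - set0.mean(axis=0)

def best_guess(traces: np.ndarray, inputs: list) -> int:
    # The key guess with the most pronounced spike is the most likely value.
    return max(range(256),
               key=lambda g: np.abs(dpa_difference(traces, inputs, g)).max())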

Public key algorithms can be analyzed using DPA by correlating candidate values for computation intermediates with power consumption measurements. For modular exponentiation operations, it is possible to test exponent bit guesses by testing whether predicted intermediate values are correlated to the actual computation [12]. In general, signals leaking during asymmetric operations tend to be much stronger than those from many symmetric algorithms, for example because of the relatively high computational complexity of multiplication operations. As a result, implementing effective Simple Power Analysis or DPA countermeasures can be challenging.
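As a sketch of why exponent bits leak (again an illustration, not code from [12]), the left-to-right square-and-multiply loop below performs an extra multiplication only for the 1-bits of the exponent, so the observable sequence of squarings and multiplications directly encodes the secret exponent unless countermeasures are applied.

# Illustrative left-to-right square-and-multiply exponentiation. The sequence
# of operations depends directly on the exponent bits, which is why their
# per-operation power signatures (or DPA correlations on intermediates)
# reveal the key.
def modexp_trace(base: int, exponent: int, modulus: int):
    result = 1
    ops = []                              # observable operation sequence
    for bit in bin(exponent)[2:]:         # most significant bit first
        result = (result * result) % modulus
        ops.append("S")                   # square: performed for every bit
        if bit == "1":
            result = (result * base) % modulus
            ops.append("M")               # multiply: only for 1-bits
    return result, ops

value, ops = modexp_trace(7, 0b101101, 1009)
print("".join(ops))                       # the pattern of S and SM groups encodes 101101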


DPA can be used to break implementations of almost any symmetric or asymmetric algorithm. For example, a 128-bit Twofish secret key, which is considered to be safe, was recovered from a smart card after observing 100 independent encryptions [5]. In this case we can easily see that DPA reveals 1 to 2 bits of information per encryption. A study in [12] shows that it is possible to reverse-engineer even unknown algorithms and protocols.

4.3.4 Fault Generation

Power and clock transients can be used in some processors to affect the decoding and execution of individual instructions. In a glitch attack, we deliberately generate a malfunction that causes one or more transistors to adopt the wrong state. The aim is usually to replace a single critical machine instruction with an almost arbitrary other one. Glitches can also aim to corrupt data values as they are transferred between registers and memory. So if we apply a clock glitch² or a power glitch³, this will affect only some transistors in the chip. Although we do not know in advance which glitch will cause wrong instructions in a specific chip, it can be found relatively easily by a systematic search [2]. Glitch attacks seem to be most useful in practical attacks [13].

A typical subroutine found in security processors is a loop that writes the contents of a limited memory range to the serial port:

1  b = answer_address
2  a = answer_length
3  if (a == 0) goto 8
4  transmit(*b)
5  b = b + 1
6  a = a - 1
7  goto 3
8  …

We can look for a glitch that increases the program counter as usual but transforms either the conditional jump in line 3 or the loop variable decrement in line 6 into something else. Conditional jumps create windows of vulnerability in the processing stages of many security applications that often allow us to bypass sophisticated cryptographic barriers by simply preventing the execution of the code that signals that an authentication attempt was unsuccessful.
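To make the effect concrete, the following simulation (an illustration with assumed memory contents, not code from the paper) models the loop above: if the termination check in line 3 is turned into a no-op by a glitch, the loop runs past answer_length and transmits everything that follows the intended answer.

# Illustrative simulation of the transmit loop above. `memory` stands in for
# the processor's address space: a 3-byte answer followed by data that should
# never leave the chip.
memory = [0x41, 0x42, 0x43] + [0xAA] * 13
ANSWER_ADDRESS, ANSWER_LENGTH = 0, 3

def run_loop(glitch_line3: bool) -> list:
    transmitted = []
    b, a = ANSWER_ADDRESS, ANSWER_LENGTH
    while b < len(memory):
        if not glitch_line3 and a == 0:   # line 3: termination check,
            break                         # skipped when the glitch hits
        transmitted.append(memory[b])     # line 4: transmit(*b)
        b += 1                            # line 5
        a -= 1                            # line 6
    return transmitted

print(len(run_loop(glitch_line3=False)))  # 3  - only the intended answer
print(len(run_loop(glitch_line3=True)))   # 16 - the entire memory is dumped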

4.3.5 Fault Generation And Differential Fault Analysis

There exists an attack on DES based on 200 ciphertexts in which 1-bit errors have been induced by environmental stress [1]. It is assumed that, by exposing a processor to a low level of ionizing radiation or some other comparable method, 1-bit errors can be induced in the data used and specifically in the key material fed into the successive rounds.

² A clock pulse much shorter than normal.
³ A rapid transient in supply voltage.


It is further shown how this method could be extended to reverse-engineer unknown algorithms [1]. In each case, the critical observation is that errors that occur in the last few rounds of the cipher leak information about the key or the algorithm structure, respectively. The problem with these proposed attacks is that no one has demonstrated the feasibility of the fault model. Indeed, with many security processors, the key material is held in EEPROM together with several kilobytes of executable code, so a random 1-bit error that did have an effect on the device's behavior would be more likely to crash the processor.

5 Countermeasures

The nuclear business supplies the only known examples of tamper-resistant packages designed to withstand a class III attacker who can obtain unsupervised physical access [2]. These are the missile sensors developed to verify the SALT II treaty and the seismic sensor package developed for nuclear test ban treaty verification. In this latter system, the seismic sensors are fitted in a steel tube and inserted into a drill hole that is backfilled with concrete. The whole assembly is so solid that the seismometers themselves can be relied upon to detect tampering events with a fairly high probability. This physical protection is reinforced by random challenge inspections.

5.1 Physical Attacks (Invasive Attacks)

There is no really effective short-term protection against carefully planned invasive tampering attacks involving Focused Ion-Beam (FIB) tools [13].

5.1.1 Top-layer Sensor Meshes

Metallization layers that form a sensor mesh above the actual circuit and that do not carry any critical signals remain one of the more effective annoyances to a physical attacker [13]. A sensor mesh in which all paths are continuously monitored for interruptions and short-circuits while power is available prevents laser cutter or selective etching access to the bus lines. Mesh alarms should immediately zeroize the non-volatile memory for all plaintext cryptographic keys and other unprotected critical security parameters [6][13]. A well-designed mesh can make attacks by manual microprobing alone rather difficult, and more sophisticated FIB editing procedures will be required to bypass it.

5.1.2 Destruction of Test Circuitry

Microcontroller production has a yield of typically around 95%, so each chip has to be thoroughly tested after production. Test engineers, like microprobing attackers, have to get full access to a complex circuit with a small number of probing needles. On normal processors, the test circuitry is left fully intact after the test. In smart card processors, it is common practice to blow polysilicon fuses that disable access to these test circuits. However, attackers have been able to reconnect these with microprobes or FIB editing, and then simply used the test circuitry to dump the entire memory content [13]. Therefore, it is essential that any test circuitry is not merely disabled but structurally destroyed by the manufacturer.

5.2 General Non-invasive Attacks

A microprocessor is basically a set of a few hundred flip-flops (registers, latches, etc.) that define its current state, and additional combinatorial logic that calculates the next state from the current state during every clock cycle. Some analog effects in such a system can be used in non-invasive attacks, as was seen in sections 4.3.3 and 4.3.4. This section presents two countermeasures that can reduce the success of certain non-invasive attacks. The first method is low-cost and easy to implement. The latter method is more theoretical and its practical impact is yet to be discovered.

5.2.1 Randomized Clock Signal

Many non-invasive attacks require the attacker to predict the time at which a certain instruction is executed. A strictly deterministic processor that executes the same instruction n clock cycles after each reset, if provided with the same input at every cycle, makes this easy.

The obvious countermeasure is to insert random time delays between any observable reaction and critical operations that might be subject to an attack. If the serial port were the only observable channel, then a few random delay routine calls controlled by a hardware noise source would seem sufficient. However, since attackers can use cross-correlation techniques to determine in real time, from the current fluctuations, the currently executed instruction sequence, a few localized delays will not suffice.

A study in [13] therefore strongly recommends introducing timing randomness at the clock-cycle level. A random bit-sequence generator that is operated with the external clock signal should be used to generate an internal clock signal.
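A minimal model of this idea (an abstract software simulation, not a hardware design from [13]): each external clock cycle advances execution only when a random gating bit from the noise source is set, so the external-time position of every instruction varies from run to run and simple alignment of power traces fails.

import random

# Illustrative model of a randomized internal clock: the processor executes
# one instruction per *internal* cycle, but an internal cycle occurs only when
# the hardware noise bit is 1 during an external cycle.
def run_with_randomized_clock(program_length: int, gate_probability: float = 0.5) -> list:
    """Return the external clock cycle at which each instruction executes."""
    schedule = []
    external_cycle, executed = 0, 0
    while executed < program_length:
        external_cycle += 1
        if random.random() < gate_probability:   # noise source gating bit
            executed += 1
            schedule.append(external_cycle)
    return schedule

# Two runs of the same 10-instruction program line up differently in time,
# which is what defeats naive trace alignment:
print(run_with_randomized_clock(10))
print(run_with_randomized_clock(10))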

5.2.2 Randomized Multi-threading

To introduce even more non-determinism into the execution of algorithms, it is conceivable to design a multi-threaded processor architecture that schedules the processor by hardware between two or more threads of execution randomly at a per-instruction level [13]. Such a processor would have multiple copies of all registers, and the combinatorial logic would be used in a randomly alternating way to progress the execution state of the threads represented by these respective register sets.

5.3 Differential Power Analysis

The only reliable solution to DPA involves designing cryptosystems with realistic assumptions about the underlying hardware. DPA highlights the design principle in section three where different people at different levels are expected to review each other's work for a better whole. However, there are techniques for preventing DPA and related attacks. These countermeasure techniques fall roughly into three categories.

Firstly, we can reduce the signal size, for example by using constant execution path code, choosing operations that leak less information in their power consumption, or adding extra gates to compensate for the power consumption [12]. Unfortunately, such measures cannot reduce the signal size to zero, and an attacker with an infinite number of samples will still be able to perform DPA on the signal.

Secondly, we may introduce noise into power consumption measurements but, as in the previous case, an infinite number of samples will still enable statistical analysis. In addition, execution timing and order can be randomized [12]. Designers and reviewers must approach temporal obfuscation with great caution because many techniques can be used to bypass or compensate for these effects.

A final approach involves using non-linear key update procedures. For example, hashing a 160-bit key with SHA should effectively lose all partial information an attacker might have gathered about the key [12]. Similarly, aggressive use of exponent and modulus modification processes in public key schemes can be used to prevent attackers from gathering data across large numbers of operations. Key use counters can prevent attackers from obtaining large numbers of samples.
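A minimal sketch of such a non-linear key update (assuming, purely for illustration, that the card and the terminal share an initial key and a synchronized use counter; this is not code from [12]):

import hashlib

# Illustrative non-linear key update: after every use the current key is
# replaced by a one-way hash of itself, so partial information an attacker
# has gathered about earlier keys does not carry over to later ones.
# SHA-1 is used here because it produces the 160-bit output mentioned above.
def update_key(key: bytes) -> bytes:
    return hashlib.sha1(key).digest()

def key_after_n_uses(initial_key: bytes, n: int) -> bytes:
    key = initial_key
    for _ in range(n):
        key = update_key(key)
    return key

# Both sides can independently derive the key for use number 42:
k42 = key_after_n_uses(b"\x00" * 20, 42)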

6 Conclusions

Designing and implementing security in the various stages of a smart card’s life cycle model can be fairly difficult. It is generally believed that, given sufficient investment, any chip-sized tamper-resistant device can be compromised. The level of tamper-resistance offered by any particular product can be measured by the time and cost penalty that the protective mechanisms impose on the attacker.

Smart cards may be used in on-line as well as off-line operation. Therefore, countermeasures which depend on network monitoring alone or assume solely off-line operation will be generally ineffective. The mixture of operation must be carefully considered in the development and specification of countermeasures.

Physical attacks utilizing techniques derived from semiconductor engineering must be evaluated or the evaluation process is inadequate. Invasive tampering is a typical attack and for some chips it was shown to be almost trivial.

Differential Power Analysis was shown to be particularly dangerous because of its ease of use, a very low cost per device and non-invasive character. DPA can be used to extract keys from almost any physical form factor. An example was given where a 128-bit Twofish symmetric key was extracted by observing 100 different encryptions.

The cost and inconvenience of the kind of protection used in the nuclear industry are orders of magnitude greater than even major banks would be prepared to tolerate. However, according to this study, commercially reasonable tamper-resistance can be achieved in more than one way: a well-designed top-layer sensor mesh combined with key zeroization circuitry can make attacks by manual microprobing alone rather difficult, and introducing timing randomness at the clock-cycle level considerably complicates non-invasive attacks such as DPA.


References

[1] Anderson, R. & Kuhn, M.G., Low Cost Attacks on Tamper Resistant Devices, in M. Lomas et al. (eds.), Security Protocols, 5th International Workshop, LNCS 1361, pp. 125-136, Springer–Verlag, 1997

<http://www.cl.cam.ac.uk/~mgk25/tamper2.pdf>

[2] Anderson, R. & Kuhn, M.G., Tamper Resistance – a Cautionary Note, Second USENIX Workshop on Electronic Commerce Proceedings, pp. 1-11, Oakland, California, 18-21.11.1996

<http://www.cl.cam.ac.uk/~mgk25/tamper.html>

[3] CardTech/SecurTech 2000 Presentation, Smart Card Security for the New Millennium, Miami Beach, 3.5.2000

<http://csrc.nist.gov/cc/sc/troy-ctst2000.pdf>

[4] Chan, S.C., An overview of Smart Card security, 1997

<http://home.hkstar.com/~alanchan/papers/smartCardSecurity/>

[5] Chari, S. & Jutla, C. & Rao, J.R. & Rohatgi, P., A Cautionary Note Regarding Evaluation of AES Candidates on Smart-Cards, AES Second Candidate Conference, Rome, Italy, 22-23.3.1999

<http://csrc.nist.gov/encryption/aes/round1/conf2/papers/chari.pdf>

[6] FIPS PUB 140-1, Security Requirements for Cryptographic Modules, National Institute of Standards and Technology, U.S. Department of Commerce, 11.1.1994
<http://csrc.nist.gov/fips/fips1401.htm>

[7] Hofland, P. & Janowski, L., Smarter Smartcards, Byte Magazine, February 1998
<http://www.byte.com/art/9802/sec17/art1.htm>

[8] ISO 15408:1999, Common Criteria for Information Technology Security Evaluation, Part 1: Introduction and general model, Version 2.1, August 1999

<http://csrc.ncsl.nist.gov/cc/ccv20/p1-v21.pdf>

[9] ISO 15408:1999, Common Criteria for Information Technology Security Evaluation, Part 2: Security functional requirements, Version 2.1, August 1999

<http://csrc.ncsl.nist.gov/cc/ccv20/p2-v21.pdf>

[10] ISO 15408:1999, Common Criteria for Information Technology Security Evaluation, Part 3: Security assurance requirements, Version 2.1, August 1999

<http://csrc.ncsl.nist.gov/cc/ccv20/p3-v21.pdf>

[11] Jøsang, A., The difficulty of standardizing smart card security evaluation, Computer Standards and Interfaces Journal, pp. 333-341, 17.9.1995

<http://www.item.ntnu.no/~ajos/papers/cardeval.ps>

[12] Kocher, P. & Jaffe, J. & Jun, B., Differential Power Analysis, CRYPTO '99 Proceedings, Springer–Verlag, pp. 388-397, 1999


[13] Kömmerling, O. & Kuhn, M.G., Design Principles for Tamper-Resistant Smartcard Processors, USENIX Workshop on Smartcard Technology, Chicago, IL, 10-11.5.1999

<http://www.cl.cam.ac.uk/~mgk25/sc99-tamper.pdf>

[14] Reid, J. & Looi, M., Making Sense of Smart Card Security Certifications, Fourth Working Conference on Smart Card Research and Advanced Applications, 20-22.9.2000, Bristol, UK

[15] Smart Card Security User Group, Smart Card Protection Profile Draft, Version 2.0, 1.5.2000
