
ed by German copyright law. You may copy and distribute this article for your personal use only. Other use is only allowed with written permission by the copyright holder.

Attacker Models for Wireless Sensor Networks

Angreifermodelle für drahtlose Sensornetze

Zinaida Benenson, University of Mannheim,

Erik-Oliver Blaß, EURECOM, Sophia Antipolis,

Felix Freiling, University of Mannheim

Summary Assumptions about attackers critically influence the security and efficiency of protocols. Without precisely defined attacker models, proving the security of protocols becomes impossible. The area of sensor network security in particular lacks well-established and precise attacker models. We present a general framework for attacker models in wireless sensor networks that structures and orders typical assumptions about attackers.

Zusammenfassung Precise assumptions about possible attackers are essential for the development of security solutions. Nevertheless, there are no established attacker models in the area of sensor networks. This work presents a framework for defining and structuring attacker assumptions in sensor networks.

Keywords C.2.0 [Computer Systems Organization: Computer-Communication Networks: General] Security and Protection; C.2.4 [Computer Systems Organization: Computer-Communication Networks: Distributed Systems] Distributed Applications; security, design, wireless sensor network, attacker model, adversary model

Schlagwörter security, design, wireless sensor network, attacker model

“[A] system without an adversary definition cannot be insecure. It can only be astonishing.”

Virgil Gligor [12]

1 Introduction

Designing security solutions should always be preceded by an analysis of the possible threats to the system. A precise description of these threats is usually called an attacker model, threat model or adversary definition. A rigorous and formal attacker model allows detailed system validation that raises the confidence in the security of the solution. Imprecise attacker models make it hard to assess system security.

The downside of rigor is textual complexity, i.e., formal attacker models are usually rather lengthy and hard to understand. This complexity can be tackled by defining and naming useful classes of attacker models that fix certain parameters in a precise way. The result is a “common language” to describe and communicate an attacker's abilities in a precise and yet compact way.

In the literature on dependability and security, there are some examples of such common languages. For example, the area of fault-tolerant systems enjoys a hierarchy of functional failure models, of which the well-known Byzantine failure model [15] allows arbitrary behavior of a subset of components in the system. As another example, in the area of cryptographic protocols, Dolev and Yao [9] formalized an attacker that has full control over the communication channels and can interact with system participants by means of message exchange. The Dolev-Yao and the Byzantine models can be combined to form a rich set of attacker models that have been used to analyze cryptographic protocols [18].

So far, attacker modeling for sensor networks has not received similar attention. There is no common language, and many papers still contain attacker models that are ambiguous and imprecise. In this work, we provide such


a “language” of attack classes that can be used to combine brevity with precision. This language captures a major aspect of an attacker model for sensor networks, namely the change in the functional behavior of sensor nodes that are under the control of the adversary. We also sketch how each individual class can be rigorously formalized.

2 Attacks on Sensor Nodes

An important prerequisite to attacker modeling is knowledge of the characteristics of wireless sensor networks (WSNs) that make them different from all other kinds of computer networks. The most important of these characteristics are unattended operation and resource-limited cheap devices.

Unattended operation means that after the sensor nodes are deployed in the field, they should work for months or even years in a self-organized manner without detailed maintenance. That is, human operators will not permanently pay close attention to every single sensor node. Given this lack of close human attention, an attacker can find numerous occasions to access sensor nodes, even physically. In particular, the attacker can destroy nodes, analyze their hard- and software in the field, or remove nodes from the field for subsequent analysis. Due to cheap, unreliable node hardware, such attacks cannot be distinguished from node failures and, consequently, have to be tolerated by the sensor network.

Typical sensor node hardware features some basic sensor capabilities (light, temperature, acceleration sensing, etc.), a radio interface, an EEPROM chip for logging sensor data, an 8- or 16-bit microcontroller with 48–128 kB of flash memory for program storage, and 2–10 kB RAM for program execution [13]. Power is provided by batteries. In general, no tamper-proof packaging is provided because of the high cost of this protection.

Each of the above system components can be attacked [2]. Nodes can be destroyed, forced to reboot, or covered in order to deny communication, and sensor data can be altered by locally changing the environment (e.g., by warming the temperature sensor). Moreover, the attacker can gain full control over the node or its components, from the sensors to the radio interface to the external flash and the microcontroller, resulting in attacks on the integrity, confidentiality, and availability of program and data memory.

Unattended operation and easy physical access to unprotected, resource-limited hardware are characteristics that are unique to sensor networks. In other kinds of networks, including ad hoc networks, individual devices are better protected or have more resources available. These consequences of the WSN system model are reflected in our framework.

3 The Intervention Attribute

In general, an attacker model can be structured by a set of attributes. An attribute describes a particular aspect of the attacker model, e.g., where in the sensor network the attacker is present or the attacker's computational power [3]. In the following, we focus on one particular attribute that we call intervention. It describes what the attacker can do with a node, i.e., it describes the possible impact of the attacker on the behavior of an individual node.

Intervention is the attribute that most significantly emphasizes the difference between attacker models for sensor networks and any previously known attacker model.

3.1 Theoretical Basis for Intervention

The intervention attribute is based on a theoretical model of system properties that has its roots in linear temporal logic. Using such a formal model, it is possible to distinguish three orthogonal types of system properties: safety, liveness [1; 14], and information flow [19]. This is in accordance with related work, cf. Rushby [22]. Briefly put, safety properties describe what is not allowed to happen, liveness properties demand what eventually must happen, and information flow properties specify the type and amount of information flow in multi-level security environments.
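In the trace-based view of Alpern and Schneider [1], a property P is a set of infinite execution traces; the first two classes can then be stated formally (a standard sketch, where σ ranges over infinite traces, α over finite traces, σ[..i] denotes a finite prefix, and juxtaposition denotes concatenation):

```latex
\text{Safety:}\quad \sigma \notin P \;\Rightarrow\; \exists i \,.\, \forall \tau \,.\, \sigma[..i]\,\tau \notin P
\qquad
\text{Liveness:}\quad \forall \alpha \,.\, \exists \tau \,.\, \alpha\,\tau \in P
```

Informally: a violated safety property is witnessed by a finite, irremediable prefix, while a liveness property can never be irremediably violated by any finite prefix.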

Considering an individual sensor node as a finite state machine, we now describe how the attacker can change the system properties of that node.

Safety. We distinguish three incremental ways in which the attacker can influence the safety properties of a node:

1. The attacker can change certain parts of the state of the node. We call this limited state perturbation. In sensor networks, this corresponds to manipulation of sensor readings, for example, by covering a light sensor or heating a temperature sensor. Another example is the replay or injection of messages, since the input registers of the sensor node can also be considered local state.

2. The attacker can change the entire state of the node. We call this full state perturbation. This allows full modification of data, as is usual in the area of self-stabilization [8].

3. The attacker can change the entire state as well as the entire code of the node. We call this full code perturbation. Usually, this implies full access of the attacker to the sensor node and results in arbitrary behavior, a worst-case assumption also made in the Byzantine failure model [15].¹

¹ Note that limited code perturbation is also possible in systems where the operating system allows different levels of protection. However, such protection measures are usually quite resource-intensive and, to our knowledge, are not used on sensor nodes.

Liveness. The attacker has two different ways to influence the liveness properties of a node:

1. The attacker can prevent the node from executing further processing steps forever. We call this behavior crash. Crash can be regarded as the effect of physically


destroying the node or removing it totally from the sensor network.

2. The attacker can temporarily prevent the node from executing steps. This means that the crashed node can later recover and continue executing steps. We call this behavior crash-recovery. In crash-recovery, a process may recover from a crash and restart its execution from a defined state. Since recovery is not mandatory, the crash-recovery model includes the crash model. Multiple crashes and recoveries may delay processing in similar ways as a single and final crash. However, the uncertainty whether or not a process recovers adds additional difficulty in tolerating this behavior. Therefore, crash-recovery is considered more severe than crash. For example, if a process crashes during a transaction and later recovers, effort must be spent to bring the recovered process up to date with respect to the outcome of the transaction.

Information Flow. Regarding the information flow properties of the node, we distinguish two incremental types of information flow useful to the attacker:

1. The attacker can learn the contents of the messages in transit from or to a node. This means that the attacker can observe the network.

2. Additionally, the attacker can also learn the contents of the node's memory. The data memory is especially interesting because it may contain cryptographic keys. Here, the attacker can look into the participant.

3.2 Lattice of Intervention

The three distinct classes of properties (safety, liveness, information flow) form the basis of a three-dimensional property space that can be used to define useful levels of the intervention aspect of our attacker model [4]. The different “grades” of these dimensions open up a lattice of

Figure 1 Three dimensions of intervention and possible useful combinations. (Safety axis: limited state perturbation, full state perturbation, full code perturbation; liveness axis: crash, crash-recovery; information flow axis: network, participants; marked combinations: crashing, eavesdropping, X-raying, disturbing, modifying, reprogramming.)

intervention levels. Individual points in this lattice identify particular instantiations of intervention (see Fig. 1).

We find a particular set of instantiations more useful than others. In Fig. 1 these points are marked by a black dot with the name of the intervention level. We now briefly explain each of these instantiations, which can also be ordered within the sublattice structure shown in Fig. 2.

Crashing. The crashing attacker cannot change safety and information flow properties, but it completely changes the liveness properties of the node: the node may stop executing steps at any time and may resume executing steps at any time. Nothing else is allowed as additional behavior. Since physical access to sensor nodes is possible for the attacker, physical destruction, or forcing the node to reboot by removing and reinserting the batteries, must be included in many attacker models. Therefore, the crashing attacker is included in almost all of the other instantiations of intervention.

Eavesdropping. The eavesdropping attacker results from no change in safety, no change in liveness, but network-level information flow, i.e., the attacker can learn the contents of the messages in transit.

X-raying. The X-raying attacker combines crashing with complete information flow to the attacker. This attacker can read the full memory contents (data and code) of the sensor node. However, it cannot modify the node's state. This attacker comes close to what is usually termed a passive adversary in many cryptographic protocols: such an attacker can globally eavesdrop on all communication and inspect the memory of all compromised nodes. However, a passive adversary usually cannot influence the liveness of the node, which an X-raying attacker can, due to the physical accessibility of sensor nodes.

Figure 2 Lattice of intervention levels, where X → Y means Y includes all attacker behavior of X. (Nodes, bottom to top: null; crashing, eavesdropping; disturbing, X-raying; modifying; reprogramming.)

Disturbing. The disturbing attacker is the combination of crashing, eavesdropping and limited state perturbation. For example, such an attacker can influence a node's routing table by sending fake routing table updates, or influence a temperature reading by warming the node. The disturbing attacker can also delete, inject, replay and spoof packets, as the effect of these actions can also be modeled as changing the contents of the node's communication registers. However, a disturbing attacker does not have any direct read or write access to the node's other data or program memory. Note that a disturbing attacker neither implies an X-raying attacker nor vice versa.

Modifying. The modifying attacker covers the combination of X-raying and full state perturbation. Here, the attacker can influence the data of a node but not the code. This distinction is motivated, e.g., by the difference between data and program space in Harvard architecture devices such as MICA2 [6] and MICAz [7] sensor nodes.

Reprogramming. Finally, the reprogramming attacker is the maximum of the lattice. The attacker can influence safety, liveness and information flow without restriction. The reprogramming attacker is basically another name for a Byzantine failure [15].
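The sublattice of Fig. 2 can be made concrete by assigning each intervention level the set of query types it may issue (anticipating the formalization of Section 5) and ordering levels by set inclusion. The following is a minimal Python sketch, our own illustration rather than the paper's formalism; the capability names follow the queries of Section 5:

```python
# Sketch (our illustration): each intervention level of Fig. 2 as the set
# of Section 5 query types it may use; the lattice order "Y includes all
# attacker behavior of X" is then plain subset inclusion.

CAPS = {
    "null": frozenset(),
    "crashing": frozenset({"Stop", "Resume"}),
    "eavesdropping": frozenset({"Eavesdrop"}),
}
CAPS["disturbing"] = CAPS["crashing"] | CAPS["eavesdropping"] | {"Intercept", "Send", "SetReading"}
CAPS["X-raying"] = CAPS["crashing"] | CAPS["eavesdropping"] | {"Open"}
CAPS["modifying"] = CAPS["disturbing"] | CAPS["X-raying"] | {"WriteData"}
CAPS["reprogramming"] = CAPS["modifying"] | {"WriteCode"}

def includes(y, x):
    """True iff attacker level y includes all behavior of level x."""
    return CAPS[x] <= CAPS[y]

# The edges of Fig. 2 hold under this encoding:
assert includes("reprogramming", "modifying")
assert includes("modifying", "disturbing") and includes("modifying", "X-raying")
assert includes("X-raying", "crashing") and includes("disturbing", "eavesdropping")
# ...and disturbing and X-raying are incomparable, as noted in the text:
assert not includes("disturbing", "X-raying") and not includes("X-raying", "disturbing")
```

Encoding levels as capability sets makes the partial order a derived fact rather than a hand-drawn diagram, so claims such as the incomparability of disturbing and X-raying can be checked mechanically.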

4 Benefits

In order to assess the adequacy and the benefits of our framework, we investigated over 70 papers on WSN security with respect to their attacker definitions [3].²

Probably the most important concern revealed by our analysis is that it is often exceptionally difficult to pinpoint the exact attacker model used in the protocols, due to the ambiguity of the attacker descriptions.

For example, one of the first security solutions for sensor networks, a key establishment protocol by Eschenauer and Gligor [10] (called the EG scheme in the

² Due to space limits, we cannot present our analysis here.

following)³, considers a globally eavesdropping attacker and, additionally, an X-raying attacker that can compromise a certain number of nodes chosen at random [5; 10]. However, due to the imprecise attacker description, the EG attacker was frequently misunderstood to be of the reprogramming class.

Whereas the absence of an exact attacker model is understandable for a pioneering work, problems arise when the EG protocol is used as a building block, for example in secure positioning [23], localization [16], or time synchronization [11]. All three protocols assume a reprogramming attacker and thereby exhibit serious security problems due to relying on EG. The EG scheme is susceptible, e.g., to man-in-the-middle attacks [20] and to node replication attacks [21].

5 Rigorous Formalization

The intervention levels described above give descriptions of possible attackers in natural language. In order to prove the security of protocols, the details of the attacker model must be rigorously formalized, as we now explain using an example. We formalize intervention levels by allowing the attacker to send different queries to the compromised nodes. This type of modeling is well-established in traditional computer security.

The attacker A can interact with the sensor network by means of the following queries:

1. Stop(s): A “pauses” node s's operation.
2. Resume(s): A “resumes” node s's operation.
3. Eavesdrop(s, s′, m): A intercepts message m sent to node s by node s′. Node s also receives the message.
4. Intercept(s, s′, m): A intercepts message m sent to node s by node s′. Node s does not receive the message.
5. Send(s, s′, m): A delivers message m to sensor node s, pretending that it comes from node s′. This query gives the attacker the means to delay and replay legitimate messages, and also to falsify messages.
6. SetReading(s, x, v): A sets the value measured by node s using its sensor x to value v.
7. Open(s): A reads out all of node s's memory content.
8. WriteData(s, d): A writes data d into s's data memory.
9. WriteCode(s, c): A writes code c into s's code memory.

The crashing attacker can only use the Stop and Resume query types, and the eavesdropping attacker can only use the Eavesdrop query. The disturbing attacker can use the Intercept, Send and SetReading queries, and also the queries of all weaker attackers, i.e., of the crashing and the eavesdropping attacker. The X-raying attacker can use Open in addition to the queries used by the crashing and the eavesdropping attackers. The modifying attacker uses the queries of the disturbing and the X-raying

³ This influential work has been cited by more than 1500 papers.


attacker, and additionally the WriteData query. The reprogramming attacker can use all query types.

Using these operations it is possible to “program” the adversary, i.e., to anticipate all possible behaviors. Subsequently, it is possible to precisely define and validate security requirements. For example, Manulis and Schwenk [17] formalize the reprogramming attacker in a similar way and formally prove the security of their sensor data aggregation protocol.
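The query-based formalization above can be mirrored in executable form. The following is a minimal sketch and our own illustration, not the paper's or [17]'s model: a simulated node, a per-level query budget matching Section 5 (only two levels shown), and an adversary object that may only issue queries of its intervention level. All names are hypothetical.

```python
# Minimal sketch (our illustration): "programming" an adversary as a
# sequence of queries against a simulated node. The adversary's
# intervention level fixes which query types it may issue (cf. Section 5).

class Node:
    def __init__(self, name):
        self.name = name
        self.running = True
        self.memory = {"key": 0x2A}   # hypothetical node state

# Query budget per intervention level (subset of the lattice, for brevity).
ALLOWED = {
    "crashing": {"Stop", "Resume"},
    "X-raying": {"Stop", "Resume", "Eavesdrop", "Open"},
}

class Adversary:
    def __init__(self, level):
        self.level = level

    def query(self, q, node):
        if q not in ALLOWED[self.level]:
            raise PermissionError(f"{self.level} attacker may not use {q}")
        if q == "Stop":
            node.running = False          # pause the node's operation
        elif q == "Resume":
            node.running = True
        elif q == "Open":
            return dict(node.memory)      # read out full memory content

s = Node("s1")
a = Adversary("X-raying")
a.query("Stop", s)                        # X-raying includes crashing behavior
assert not s.running
assert a.query("Open", s) == {"key": 0x2A}

weak = Adversary("crashing")
try:
    weak.query("Open", s)                 # a crashing attacker cannot X-ray
    raise AssertionError("should have been rejected")
except PermissionError:
    pass
```

A security proof then quantifies over all query sequences that the adversary's intervention level permits.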

6 Conclusion

We presented a novel general framework for modeling attackers in sensor networks that gives WSN security researchers a common language for attacker description. Our framework is based on the theory of trace-based system properties. Using the example of the EG key establishment scheme, we showed that the lack of precise attacker descriptions can be a source of security problems, and we outlined how our framework can be used for formal protocol analysis.

References

[1] B. Alpern and F. B. Schneider. Defining liveness. In: Information Processing Letters 21:181–185, 1985.
[2] A. Becher, Z. Benenson, and M. Dornseif. Tampering with motes: Real-world physical attacks on wireless sensor networks. In: Security in Pervasive Computing, Lecture Notes in Computer Science 3934, pp. 104–118. Springer, 2006.
[3] Z. Benenson, P. M. Cholewinski, and F. C. Freiling. Vulnerabilities and attacks in wireless sensor networks. In: J. Lopez and J. Zhou, editors, Wireless Sensors Networks Security, Cryptology & Information Security Series (CIS), pp. 22–43. IOS Press, 2008.
[4] Z. Benenson, F. C. Freiling, T. Holz, D. Kesdogan, and L. D. Penso. Safety, liveness, and information flow: Dependability revisited. In: 19th Int'l Conf. on Architecture of Computing Systems (ARCS), pp. 56–65. Lecture Notes in Informatics 81, Gesellschaft für Informatik, 2006.
[5] H. Chan, A. Perrig, and D. Song. Random key predistribution schemes for sensor networks. In: IEEE Symposium on Security and Privacy, pp. 197–213, 2003.
[6] Crossbow, Inc. MICA2 data sheet.
[7] Crossbow, Inc. MICAz data sheet.
[8] E. W. Dijkstra. Self-stabilizing systems in spite of distributed control. In: Communications of the ACM 17(11):643–644, 1974.
[9] D. Dolev and A. Yao. On the security of public key protocols. In: IEEE Transactions on Information Theory 29(2):198–208, 1983.
[10] L. Eschenauer and V. D. Gligor. A key-management scheme for distributed sensor networks. In: 9th ACM Conf. on Computer and Communications Security, pp. 41–47. ACM Press, 2002.
[11] S. Ganeriwal, C. Pöpper, S. Čapkun, and M. B. Srivastava. Secure time synchronization in sensor networks. In: ACM Transactions on Information and System Security 11(4):1–35, 2008.
[12] V. Gligor. On the evolution of adversary models in security protocols: from the beginning to sensor networks. In: ACM Symposium on Information, Computer and Communications Security (ASIACCS), p. 3. ACM, 2007.
[13] J. Hill, M. Horton, R. Kling, and L. Krishnamurthy. The platforms enabling wireless sensor networks. In: Communications of the ACM 47(6):41–46, 2004.
[14] L. Lamport. Proving the correctness of multiprocess programs. In: IEEE Transactions on Software Engineering 3(2):125–143, 1977.
[15] L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. In: ACM Transactions on Programming Languages and Systems 4(3):382–401, 1982.
[16] D. Liu, P. Ning, A. Liu, C. Wang, and W. K. Du. Attack-resistant location estimation in wireless sensor networks. In: ACM Transactions on Information and System Security 11(4):1–39, 2008.
[17] M. Manulis and J. Schwenk. Security model and framework for information aggregation in sensor networks. In: ACM Transactions on Sensor Networks 5(2):1–28, 2009.
[18] U. Maurer. Secure multi-party computation made simple. In: Discrete Applied Mathematics 154(2):370–381, 2006.
[19] J. McLean. Security models and information flow. In: IEEE Symp. on Research in Security and Privacy, pp. 180–187, 1990.
[20] T. Moore and J. Clulow. Secure path-key revocation for symmetric key pre-distribution schemes in sensor networks. In: 22nd IFIP Int'l Information Security Conf. (SEC), 2007.
[21] B. Parno, A. Perrig, and V. Gligor. Distributed detection of node replication attacks in sensor networks. In: IEEE Symp. on Security and Privacy, 2005.
[22] J. Rushby. Critical system properties: Survey and taxonomy. In: Reliability Engineering and System Safety 43(2):189–219, 1994.
[23] S. Čapkun and J.-P. Hubaux. Secure positioning of wireless devices with application to sensor networks. In: 24th Annual Joint Conf. of the IEEE Computer and Communications Societies (Infocom), vol. 3, pp. 1917–1928, 2005.

Received: June 1, 2010

Dr. Zinaida Benenson is a postdoctoral researcher at University of Mannheim, Germany. Her research interests are dependability and security issues in distributed systems, especially in wireless sensor networks and in pervasive computing.

Address: University of Mannheim, A 5, 6, D-68159 Mannheim, Germany, Tel.: +49 621 181 2556, Fax: +49 621 181 3577,

e-mail: zina@uni-mannheim.de

Dr. Erik-Oliver Blaß is a postdoctoral researcher at EURECOM's Networking & Security department. Being interested in all areas of computer and communication security, his current work focuses on secure, privacy-preserving protocols for RFID systems.

Address: EURECOM, Route des Cretes 1, F-06560 Sophia Antipolis, France, Tel.: +33 4 93 00 81 25, Fax: +33 4 93 00 82 00,

e-mail: erik-oliver.blass@eurecom.fr

Prof. Dr.-Ing. Felix C. Freiling is a full professor of computer science at University of Mannheim, Germany. He is interested in all aspects of dependable and secure computing, in theory and practice.

Address: University of Mannheim, A 5, 6, D-68159 Mannheim, Germany, Tel.: +49 621 181 2545, Fax: +49 621 181 3577
