Synthesis Lectures on Information Security, Privacy and Trust

Editor: Ravi Sandhu, University of Texas, San Antonio

Operating System Security

Trent Jaeger

2008


All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in printed reviews, without the prior permission of the publisher.

Operating System Security
Trent Jaeger

www.morganclaypool.com

ISBN: 9781598292121 (paperback)
ISBN: 9781598292138 (ebook)

DOI 10.2200/S00126ED1V01Y200808SPT001

A Publication in the Morgan & Claypool Publishers series

SYNTHESIS LECTURES ON INFORMATION SECURITY, PRIVACY AND TRUST
Lecture #1
Series Editor: Ravi Sandhu, University of Texas, San Antonio
Series ISSN: Synthesis Lectures on Information Security, Privacy and Trust, ISSN pending.


Operating System Security

Trent Jaeger

The Pennsylvania State University

SYNTHESIS LECTURES ON INFORMATION SECURITY, PRIVACY AND TRUST #1

ABSTRACT

Operating systems provide the fundamental mechanisms for securing computer processing. Since the 1960s, operating systems designers have explored how to build “secure” operating systems — operating systems whose mechanisms protect the system against a motivated adversary. Recently, the importance of ensuring such security has become a mainstream issue for all operating systems. In this book, we examine past research that outlines the requirements for a secure operating system and research that implements example systems that aim for such requirements. For system designs that aimed to satisfy these requirements, we see that the complexity of software systems often results in implementation challenges that we are still exploring to this day. However, if a system design does not aim for achieving the secure operating system requirements, then its security features fail to protect the system in a myriad of ways. We also study systems that have been retrofit with secure operating system features after an initial deployment. In all cases, the conflict between function on one hand and security on the other leads to difficult choices and the potential for unwise compromises. From this book, we hope that systems designers and implementors will learn the requirements for operating systems that effectively enforce security and will better understand how to manage the balance between function and security.

KEYWORDS

Operating systems, reference monitor, mandatory access control, secrecy, integrity, virtual machines, security kernels, capabilities, access control lists, multilevel security, policy lattice, assurance


Contents

Synthesis Lectures on Information Security, Privacy and Trust . . . iii
Contents . . . ix
Preface . . . xv

1 Introduction . . . 1
1.1 Secure Operating Systems . . . 3
1.2 Security Goals . . . 4
1.3 Trust Model . . . 6
1.4 Threat Model . . . 7
1.5 Summary . . . 8

2 Access Control Fundamentals . . . 9
2.1 Protection System . . . 9
2.1.1 Lampson’s Access Matrix . . . 9
2.1.2 Mandatory Protection Systems . . . 11
2.2 Reference Monitor . . . 14
2.3 Secure Operating System Definition . . . 16
2.4 Assessment Criteria . . . 19
2.5 Summary . . . 21

3 Multics . . . 23
3.1 Multics History . . . 23
3.2 The Multics System . . . 24
3.2.1 Multics Fundamentals . . . 25
3.2.2 Multics Security Fundamentals . . . 25
3.2.4 Multics Protection System . . . 30
3.2.5 Multics Reference Monitor . . . 31
3.3 Multics Security . . . 33
3.4 Multics Vulnerability Analysis . . . 36
3.5 Summary . . . 37

4 Security in Ordinary Operating Systems . . . 39
4.1 System Histories . . . 39
4.1.1 UNIX History . . . 39
4.1.2 Windows History . . . 40
4.2 UNIX Security . . . 41
4.2.1 UNIX Protection System . . . 41
4.2.2 UNIX Authorization . . . 43
4.2.3 UNIX Security Analysis . . . 45
4.2.4 UNIX Vulnerabilities . . . 47
4.3 Windows Security . . . 49
4.3.1 Windows Protection System . . . 50
4.3.2 Windows Authorization . . . 51
4.3.3 Windows Security Analysis . . . 53
4.3.4 Windows Vulnerabilities . . . 55
4.4 Summary . . . 56

5 Verifiable Security Goals . . . 57
5.1 Information Flow . . . 57
5.2 Information Flow Secrecy Models . . . 59
5.2.1 Denning’s Lattice Model . . . 60
5.2.2 Bell-LaPadula Model . . . 62
5.3 Information Flow Integrity Models . . . 64
5.3.1 Biba Integrity Model . . . 65
5.3.2 Low-Water Mark Integrity . . . 67
5.3.4 The Challenge of Trusted Processes . . . 69
5.4 Covert Channels . . . 70
5.4.1 Channel Types . . . 71
5.4.2 Noninterference . . . 72
5.5 Summary . . . 73

6 Security Kernels . . . 75
6.1 The Security Kernel . . . 76
6.2 Secure Communications Processor . . . 77
6.2.1 Scomp Architecture . . . 78
6.2.2 Scomp Hardware . . . 79
6.2.3 Scomp Trusted Operating Program . . . 82
6.2.4 Scomp Kernel Interface Package . . . 83
6.2.5 Scomp Applications . . . 84
6.2.6 Scomp Evaluation . . . 84
6.3 Gemini Secure Operating System . . . 86
6.4 Summary . . . 89

7 Securing Commercial Operating Systems . . . 91
7.1 Retrofitting Security into a Commercial OS . . . 91
7.2 History of Retrofitting Commercial OS’s . . . 93
7.3 Commercial Era . . . 93
7.4 Microkernel Era . . . 95
7.5 UNIX Era . . . 97
7.5.1 IX . . . 97
7.5.2 Domain and Type Enforcement . . . 98
7.5.3 Recent UNIX Systems . . . 100
7.6 Summary . . . 101

8 Case Study: Solaris Trusted Extensions . . . 103
Glenn Faden and Christoph Schuba, Sun Microsystems, Inc.
8.1 Trusted Extensions Access Control . . . 104
8.2 Solaris Compatibility . . . 105
8.3 Trusted Extensions Mediation . . . 106
8.4 Process Rights Management (Privileges) . . . 108
8.4.1 Privilege Bracketing and Relinquishing . . . 109
8.4.2 Controlling Privilege Escalation . . . 111
8.4.3 Assigned Privileges and Safeguards . . . 112
8.5 Role-based Access Control (RBAC) . . . 112
8.5.1 RBAC Authorizations . . . 112
8.5.2 Rights Profiles . . . 114
8.5.3 Users and Roles . . . 114
8.5.4 Converting the Superuser to a Role . . . 114
8.6 Trusted Extensions Networking . . . 115
8.7 Trusted Extensions Multilevel Services . . . 116
8.8 Trusted Extensions Administration . . . 118
8.9 Summary . . . 119

9 Case Study: Building a Secure Operating System for Linux . . . 121
9.1 Linux Security Modules . . . 121
9.1.1 LSM History . . . 121
9.1.2 LSM Implementation . . . 123
9.2 Security-Enhanced Linux . . . 126
9.2.1 SELinux Reference Monitor . . . 126
9.2.2 SELinux Protection State . . . 129
9.2.3 SELinux Labeling State . . . 132
9.2.4 SELinux Transition State . . . 134
9.2.5 SELinux Administration . . . 135
9.2.6 SELinux Trusted Programs . . . 136
9.2.7 SELinux Security Evaluation . . . 137
9.3 Summary . . . 139

10 Secure Capability Systems . . . 141
10.1 Capability System Fundamentals . . . 141
10.2 Capability Security . . . 142
10.3 Challenges in Secure Capability Systems . . . 143
10.3.1 Capabilities and the ⋆-Property . . . 144
10.3.2 Capabilities and Confinement . . . 144
10.3.3 Capabilities and Policy Changes . . . 145
10.4 Building Secure Capability Systems . . . 146
10.4.1 Enforcing the ⋆-Property . . . 146
10.4.2 Enforcing Confinement . . . 147
10.4.3 Revoking Capabilities . . . 149
10.5 Summary . . . 151

11 Secure Virtual Machine Systems . . . 153
11.1 Separation Kernels . . . 155
11.2 VAX VMM Security Kernel . . . 157
11.2.1 VAX VMM Design . . . 158
11.2.2 VAX VMM Evaluation . . . 160
11.2.3 VAX VMM Result . . . 162
11.3 Security in Other Virtual Machine Systems . . . 163
11.4 Summary . . . 166

12 System Assurance . . . 169
12.1 Orange Book . . . 170
12.2 Common Criteria . . . 173
12.2.1 Common Criteria Concepts . . . 174
12.2.2 Common Criteria In Action . . . 176
12.3 Summary . . . 178

Bibliography . . . 179
Biographies . . . 205


Preface

Operating system security forms the foundation of the secure operation of computer systems. In this book, we define what is required for an operating system to ensure enforcement of system security goals and evaluate how several operating systems have approached such requirements.

WHAT THIS BOOK IS ABOUT

Chapter                           Topic
2. Fundamentals                   Define an Ideal, Secure OS
3. Multics                        The First OS Designed for Security Goals
4. Ordinary OS’s                  Why Commercial OS’s Are Not Secure
5. Verifiable Security            Define Precise Security Goals
6. Security Kernels               Minimize OS’s Trusted Computing Base
7. Secure Commercial OS’s         Retrofit Security into Commercial OS’s
8. Solaris Trusted Extensions     Case Study: MLS Extension of Solaris OS
9. SELinux                        Case Study: Examine Retrofit of Linux Specifically
10. Capability Systems            Ensure Security Goal Enforcement
11. Virtual Machines              Identify Necessary Security Mechanisms
12. System Assurance              Methodologies to Verify Correct Enforcement

Figure 1: Overview of the chapters in this book.

In this book, we examine what it takes to build a secure operating system, and explore the major systems development approaches that have been applied towards building secure operating systems. This journey has several goals shown in Figure 1. First, we describe the fundamental concepts and mechanisms for enforcing security and define secure operating systems (Chapter 2). Second, we examine early work in operating systems to show that it may be possible to build systems that approach a secure operating system, but that ordinary, commercial operating systems are not secure fundamentally (Chapters 3 and 4, respectively). We next describe the formal security goals and corresponding security models proposed for secure operating systems (Chapter 5). We then survey a variety of approaches applied to the development of secure operating systems (Chapters 6 to 11). Finally, we conclude with a discussion of system assurance methodologies (Chapter 12).

The first half of the book (Chapters 2 to 5) aims to motivate the challenges of building a secure operating system. Operating systems security is so complex and broad a subject that we cannot introduce everything without considering some examples up front. Thus, we start with just the fundamental concepts and mechanisms necessary to understand the examples. Also, we take the step of showing what a system designed to the secure operating system definition (i.e., Multics in Chapter 3) looks like and what insecure operating systems (i.e., UNIX and Windows in Chapter 4) look like and why. In Chapter 5, we then describe concrete security goals and how they can be expressed once the reader has an understanding of what is necessary to secure a system.

The second half of the book surveys the major, distinct approaches to building secure operating systems in Chapters 6 to 11. Each of the chapters focuses on the features that are most important to these approaches. As a result, each of these chapters has a different emphasis. For example, Chapter 6 describes security kernel systems where the operating system is minimized and leverages hardware features and low-level system mechanisms. Thus, this chapter describes the impact of hardware features and the management of hardware access on our ability to construct effective and flexible secure operating systems. Chapter 7 summarizes a variety of ways that commercial operating systems have been extended with security features. Chapters 8 and 9 focus on retrofitting security features on existing, commercial operating systems, Solaris and Linux, respectively. Glenn Faden and Christoph Schuba from Sun Microsystems detail the Solaris (TM) Trusted Extensions. In these chapters, the challenges include modifying the system architecture and policy model to enforce security goals. Here, we examine adding security to user-level services, and extending security enforcement into the network. The other chapters examine secure capability systems and how capability semantics are made secure (Chapter 10) and secure virtual machine systems to examine the impact and challenges of using virtualization to improve security (Chapter 11).

The book concludes with the chapter on system assurance (Chapter 12). In this chapter, we discuss the methodologies that have been proposed to verify that a system is truly secure. Assurance verification is a major requirement of secure operating systems, but it is still at best a semi-formal process, and in practice an informal process for general-purpose systems.

The contents of this book derive from the work of many people over many years. Building an operating system is a major project, so it is not surprising that large corporate and/or research teams are responsible for most of the operating systems in this book. However, several individual researchers have devoted their careers to operating systems security, so they reappear throughout the book in various projects advancing our knowledge on the subject. We hope that their efforts inspire future researchers to tackle the challenges of improving operating systems security.

WHAT THIS BOOK IS NOT ABOUT

As with any book, the scope of investigation is limited and there are many related and supporting efforts that are not described. Some operating system development approaches and several representative operating systems are not detailed in the book. While we attempted to include all broad approaches to building secure systems, some may not quite fit the categorizations and there are several systems that have interesting features that could not be covered in depth.

Other operating system problems appear to be related to security, but are outside the scope of this book. For example, fault tolerance is the study of how to maintain the correctness of a computation given the failure of one or more components. Security mechanisms focus on ensuring that security goals are achieved regardless of the behavior of a process, so fault tolerance would depend on security mechanisms to be able to resurrect or maintain a computation. The area of survivability is also related, but it involves fault tolerance in the face of catastrophic failures or natural disasters. Its goals also depend on effective computer security.

There are also several areas of computer science whose advances may benefit operating system security, but which we omit in this book. For example, recent advances in source code analysis improve the correctness of system implementations by identifying bugs [82,209,49] and are even capable of proving certain properties of small programs, such as device drivers [210,18]. Further, programming languages that enable verifiable enforcement of security properties, such as security-typed languages [219,291], also would seem to be necessary to ensure that all the trusted computing base’s code enforces the necessary security goals. In general, we believe that improvements in languages, programming tools for security, and analysis of programs for security are necessary to verify the requirements of secure operating systems.

A variety of other programs also provide security mechanisms. Most notably, these include databases (e.g., Oracle) and application-level virtual machines (e.g., Java). Such programs are only relevant to the construction of a secure operating system if they are part of the trusted computing base. As this is typically not the case, we do not discuss these application-level mechanisms.

Ultimately, we hope that the reader gains a clearer understanding of the challenging problem of building a secure operating system and an appreciation for the variety of solutions applied over the years. Many past and current efforts have explored these challenges in a variety of ways. We hope that the knowledge and experiences of the many people whose work is captured in this book will serve as a basis for comprehensive and coherent security enforcement in the near future.

Trent Jaeger

The Pennsylvania State University
August 2008

CHAPTER 1

Introduction

Operating systems are the software that provides access to the various hardware resources (e.g., CPU, memory, and devices) that comprise a computer system as shown in Figure 1.1. Any program that is run on a computer system has instructions executed by that computer’s CPU, but these programs may also require the use of other peripheral resources of these complex systems. Consider a program that allows a user to enter her password. The operating system provides access to the disk device on which the program is stored, access to device memory to load the program so that it may be executed, the display device to show the user how to enter her password, and keyboard and mouse devices for the user to enter her password. Of course, there are now a multitude of such devices that can be used seamlessly, for the most part, thanks to the function of operating systems.

As shown in Figure 1.1, operating systems run programs in processes. The challenge for an operating system developer is to permit multiple concurrently executing processes to use these resources in a manner that preserves the independence of these processes while providing fair sharing of these resources. Originally, operating systems only permitted one process to be run at a time (e.g., batch systems), but as early as 1960, it became apparent that computer productivity would be greatly enhanced by being able to run multiple processes concurrently [87]. By concurrently, we mean that while only one process uses a computer’s CPU at a time, multiple other processes may be in various states of execution at the same time, and the operating system must ensure that these executions are performed effectively. For example, while the computer waits for a user to enter her password, other processes may be run and access system devices as well, such as the network. These systems were originally called timesharing systems, but they are our default operating systems today.

To build any successful operating system, we identify three major tasks. First, the operating system must provide various mechanisms that enable high performance use of computer resources. Operating systems must provide efficient resource mechanisms, such as file systems, memory management systems, network protocol stacks, etc., that define how processes use the hardware resources. Second, it is the operating system’s responsibility to switch among the processes fairly, such that the user experiences good performance from each process in concert with access to the computer’s devices. This second problem is one of scheduling access to computer resources. Third, access to resources should be controlled, such that one process cannot inadvertently or maliciously impact the execution of another. This third problem is the problem of ensuring the security of all processes run on the system.

Ensuring the secure execution of all processes depends on the correct implementation of resource and scheduling mechanisms. First, any correct resource mechanism must provide boundaries between its objects and ensure that its operations do not interfere with one another. For example, a file system must not allow a process request to access one file to overwrite the disk space allocated to another file. Also, file systems must ensure that one write operation is not impacted by the data being read or written in another operation. Second, scheduling mechanisms must ensure availability of resources to processes to prevent denial of service attacks. For example, the algorithms applied by scheduling mechanisms must ensure that all processes are eventually scheduled for execution. These requirements are fundamental to operating system mechanisms, and are assumed to be provided in the context of this book. The scope of this book covers the misuse of these mechanisms to inadvertently or, especially, maliciously impact the execution of another process.

[Figure 1.1: An operating system runs security, scheduling, and resource mechanisms to provide processes with access to the computer system’s resources (e.g., CPU, memory, and devices).]

Security becomes an issue because processes in modern computer systems interact in a variety of ways, and the sharing of data among users is a fundamental use of computer systems. First, the output of one process may be used by other processes. For example, a programmer uses an editor program to write a computer program’s source code, compilers and linkers to transform the program code into a form in which it can be executed, and debuggers to view the executing process’s image to find errors in source code. In addition, a major use of computer systems is to share information with other users. With the ubiquity of Internet-scale sharing mechanisms, such as e-mail, the web, and instant messaging, users may share anything with anyone in the world. Unfortunately, lots of people, or at least lots of email addresses, web sites, and network requests, want to share stuff with you that aims to circumvent operating system security mechanisms and cause your computer to share additional, unexpected resources. The ease with which malware can be conveyed and the variety of ways that users and their processes may be tricked into running malware present modern operating system developers with significant challenges in ensuring the security of their system’s execution.

The challenge in developing operating systems security is to design security mechanisms that protect process execution and their generated data in an environment with such complex interactions. As we will see, formal security mechanisms that enforce provable security goals have been defined, but these mechanisms do not account or only partially account for the complexity of practical systems. As such, the current state of operating systems security takes two forms: (1) constrained systems that can enforce security goals with a high degree of assurance and (2) general-purpose systems that can enforce limited security goals with a low to medium degree of assurance. First, several systems have been developed over the years that have been carefully crafted to ensure correct (i.e., within some low tolerance for bugs) enforcement of specific security goals. These systems generally support few applications, and these applications often have limited functionality and lower performance requirements. That is, in these systems, security is the top priority, and this focus enables the system developers to write software that approaches the ideal of the formal security mechanisms mentioned above. Second, the computing community at large has focused on function and flexibility, resulting in general-purpose, extensible systems that are very difficult to secure. Such systems are crafted to simplify development and deployment while achieving high performance, and their applications are built to be feature-rich and easy to use. Such systems present several challenges to security practitioners, such as insecure interfaces, dependence of security on arbitrary software, complex interaction with untrusted parties anywhere in the world, etc. But, these systems have defined how the user community works with computers. As a result, the security community faces a difficult task for ensuring security goals in such an environment.

However, recent advances are improving both the utility of the constrained systems and the security of the general-purpose systems. We are encouraged by this movement, which is motivated by the general need for security in all systems, and this book aims to capture many of the efforts in building security into operating systems, both constrained and general-purpose systems, with the aim of enabling broader deployment and use of security function in future operating systems.

1.1 SECURE OPERATING SYSTEMS

The ideal goal of operating system security is the development of a secure operating system. A secure operating system provides security mechanisms that ensure that the system’s security goals are enforced despite the context of the resource and scheduling mechanisms. Security goals define the requirements of secure operation for a system for any processes that it may execute. The security mechanisms must ensure these goals regardless of the possible ways that the system may be misused (i.e., is threatened) by attackers.

The term “secure operating system” is both considered an ideal and an oxymoron. Systems that provide a high degree of assurance in enforcement have been called secure systems, or even more frequently “trusted” systems¹. However, it is also true that no system of modern complexity is completely secure. The difficulty of preventing errors in programming and the challenges of trying to remove such errors mean that no system as complex as an operating system can be completely secure.

Nonetheless, we believe that studying how to build an ideal secure operating system is useful in assessing operating systems security. In Chapter 2, we develop a definition of secure operating system that we will use to assess several operating systems security approaches and specific implementations of those approaches. While no implementation completely satisfies this ideal definition, its use identifies the challenges in implementing operating systems that satisfy this ideal in practice. The aim is multi-fold. First, we want to understand the basic strengths of common security approaches. Second, we want to discover the challenges inherent to each of these approaches. These challenges often result in difficult choices in practical application. Third, we want to study the application of these approaches in practical environments to evaluate the effectiveness of these approaches to satisfy the ideal in practice. While it appears impractical to build an operating system that satisfies the ideal definition, we hope that studying these systems and their security approaches against the ideal will provide insights that enable the development of more effective security mechanisms in the future.

To return to the general definition of a secure operating system from the beginning of this section, we examine the general requirements of a secure operating system. To build any secure system requires that we consider how the system achieves its security goals under a set of threats (i.e., a threat model) and given a set of software, including the security mechanisms, that must be trusted² (i.e., a trust model).

1.2 SECURITY GOALS

A security goal defines the operations that can be executed by a system while still preventing unauthorized access. It should be defined at a high level of abstraction, not unlike the way that an algorithm’s worst-case complexity prescribes the set of implementations that satisfy that requirement. A security goal defines a requirement that the system’s design can satisfy (e.g., the way pseudocode can be proven to fulfill the complexity requirement) and that a correct implementation must fulfill (e.g., the way that an implementation can be proven experimentally to observe the complexity).

¹For example, the first description of criteria to verify that a system implements correct security mechanisms is called the Trusted Computer System Evaluation Criteria [304].

²We assume that hardware is trusted to behave as expected. Although the hardware devices may have bugs, the trust model that

Security goals describe how the system implements accesses to system resources that satisfy the following: secrecy, integrity, and availability. A system access is traditionally stated in terms of which subjects (e.g., processes and users) can perform which operations (e.g., read and write) on which objects (e.g., files and sockets). Secrecy requirements limit the objects that individual subjects can read because objects may contain secrets that not all subjects are permitted to know. Integrity requirements limit the objects that subjects can write because objects may contain information that other subjects depend on for their correct operation. Some subjects may not be trusted to modify those objects. Availability requirements limit the system resources (e.g., storage and CPU) that subjects may consume because they may exhaust these resources. Much of the focus in secure operating systems is on secrecy and integrity requirements, although availability may indirectly impact these goals as well.

The security community has identified a variety of different security goals. Some security goals are defined in terms of security requirements (i.e., secrecy and integrity), but others are defined in terms of function, in particular ways to limit function to improve security. An example of a goal defined in terms of security requirements is the simple-security property of the Bell-LaPadula model [23]. This goal states that a process cannot read an object whose secrecy classification is higher than the process’s. This goal limits operations based on a security requirement, secrecy. An example of a functional security goal is the principle of least privilege [265], which limits a process to only the set of operations necessary for its execution. This goal is functional because it does not ensure that the secrecy and/or integrity of a system is enforced, but it encourages functional restrictions that may prevent some attacks. However, we cannot prove the absence of a vulnerability using functional security goals. We discuss this topic in detail in Chapter 5.
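To make the simple-security property concrete, the following is a minimal sketch of the check as executable code, assuming secrecy classifications are encoded as comparable integer levels; the encoding and names are our own illustrative assumptions, not the Bell-LaPadula formalism itself (see Chapter 5).

    # A minimal sketch of the Bell-LaPadula simple-security check, assuming
    # secrecy levels are modeled as integers (e.g., 0 = unclassified, 1 = secret).
    # The names and the integer encoding are illustrative, not from the book.

    def simple_security_allows_read(subject_level: int, object_level: int) -> bool:
        """A subject may read an object only if its clearance dominates
        the object's secrecy classification ("no read up")."""
        return subject_level >= object_level

    # A secret process (level 1) may read down to an unclassified file (level 0),
    # but an unclassified process may not read up to a secret file.
    assert simple_security_allows_read(subject_level=1, object_level=0)
    assert not simple_security_allows_read(subject_level=0, object_level=1)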

The task of the secure operating system developer is to define security goals for which the security of the system can be verified, so functional goals are insufficient. On the other hand, secrecy and integrity goals prevent function in favor of security, so they may be too restrictive for some production software. In the past, operating systems that enforced secrecy and integrity goals (i.e., the constrained systems above) were not widely used because they precluded the execution of too many applications (or simply lacked popular applications). Emerging technology, such as virtual machine technology (see Chapter 11), enables multiple, commercial software systems to be run in an isolated manner on the same hardware. Thus, software that used to be run on the same system can be run in separate, isolated virtual systems. It remains to be seen whether such isolation can be leveraged to improve system security effectively. Also, several general-purpose operating systems are now capable of expressing and enforcing security goals. Whether these general-purpose systems will be capable of implementing security goals or providing sufficient assurance for enforcing such goals is unclear. However, in either case, security goals must be defined, and a practical approach for enforcing such goals that enables the execution of most popular software in reasonable ways must be identified.

1.3 TRUST MODEL

A system’s trust model defines the set of software and data upon which the system depends for correct enforcement of system security goals. For an operating system, its trust model is synonymous with the system’s trusted computing base (TCB).

Ideally, a system TCB should consist of the minimal amount of software necessary to enforce the security goals correctly. The software that must be trusted includes the software that defines the security goals and the software that enforces the security goals (i.e., the operating system’s security mechanism). Further, software that bootstraps this software must also be trusted. Thus, an ideal TCB would consist of a bootstrapping mechanism that enables the security goals to be loaded and subsequently enforced for the lifetime of the system.

In practice, a system TCB consists of a wide variety of software. Fundamentally, the enforcement mechanism is run within the operating system. As there are no protection boundaries between operating system functions (i.e., in the typical case of a monolithic operating system), the enforcement mechanism must trust all the operating system code, so it is part of the TCB.

Further, a variety of other software running outside the operating system must also be trusted. For example, the operating system depends on a variety of programs to authenticate the identity of users (e.g., login and SSH). Such programs must be trusted because correct enforcement of security goals depends on correct identification of users. Also, there are several services that the system must trust to ensure correct enforcement of security goals. For example, windowing systems, such as the X Window System [345], perform operations on behalf of all processes running on the operating system, and these systems provide mechanisms for sharing that may violate the system’s security goals (e.g., cut-and-paste from one application to another) [85]. As a result, the X Window System and a variety of other software must be added to the system’s TCB.

The secure operating system developer must prove that their systems have a viable trust model. This requires: (1) that the system TCB mediates all security-sensitive operations; (2) verification of the correctness of the TCB software and its data; and (3) verification that the software’s execution cannot be tampered with by processes outside the TCB. First, identifying the TCB software itself is a nontrivial task for reasons discussed above. Second, verifying the correctness of TCB software is a complex task. For general-purpose systems, the amount of TCB software outside the operating system is greater than that of the operating system itself, which is already impractical to verify formally. The level of trust in TCB software can vary from software that is formally-verified (partially), fully-tested, and reviewed to that which the user community trusts to perform its appointed tasks. While the former is greatly preferred, the latter is often the case. Third, the system must protect the TCB software and its data from modification by processes outside the TCB. That is, the integrity of the TCB must be protected from the threats to the system, described below. Otherwise, this software can be tampered with, and is no longer trustworthy.

1.4 THREAT MODEL

A threat model defines a set of operations that an attacker may use to compromise a system. In this threat model, we assume a powerful attacker who is capable of injecting operations from the network and may be in control of some of the running software on the system (i.e., outside the trusted computing base). Further, we presume that the attacker is actively working to violate the system security goals. If an attacker is able to find a vulnerability in the system that provides access to secret information (i.e., violate secrecy goals) or permits the modification of information that subjects depend on (i.e., violate integrity goals), then the attacker is said to have compromised the system.

Since the attacker is actively working to violate the system security goals, we must assume that the attacker may try any and all operations that are permitted to the attacker. For example, if an attacker can only access the system via the network, then the attacker may try to send any operation to any processes that provide network access. Further, if an attacker is in control of a process running on the system, then the attacker will try any means available to that process to compromise system security goals.

This threat model exposes a fundamental weakness in commercial operating systems (e.g., UNIX and Windows); they assume that all software running on behalf of a subject is trusted by that subject. For example, a subject may run a word processor and an email client, and in commercial systems these processes are trusted to behave as the user would. However, in this threat model, both of these processes may actually be under the control of an attacker (e.g., via a document macro virus or via a malicious script or email attachment). Thus, a secure operating system cannot trust processes outside of the TCB to behave as expected. While this may seem obvious, commercial systems trust any user process to manage the access of that user’s data (e.g., to change access rights to a user’s files via chmod in a UNIX system). This can result in the leakage of that user’s secrets and the modification of data that the user depends on.

The task of a secure operating system developer is to protect the TCB from the types of threats described above. Protecting the TCB ensures that the system security goals will always be enforced regardless of the behavior of user processes. Since user processes are untrusted, we cannot depend on them, but we can protect them from threats. For example, a secure operating system can prevent a user process with access to secret data from leaking that data, by limiting the interactions of that process. However, protecting the TCB is more difficult because it interacts with a variety of untrusted processes. A secure operating system developer must identify such threats, assess their impact on system security, and provide effective countermeasures for such threats. For example, a trusted computing base component that processes network requests must identify where such untrusted requests are received from the network, determine how such threats can impact the component’s behavior, and provide countermeasures, such as limiting the possible commands and inputs, to protect the component. The secure operating system developer must ensure that all the components of the trusted computing base prevent such threats correctly.

1.5 SUMMARY

While building a truly secure operating system may be infeasible, operating system security will improve immensely if security becomes a focus. To do so requires that operating systems be designed to enforce security goals, provide a clearly-identified trusted computing base that defines a trust model, define a threat model for the trusted computing base, and ensure protection of the trusted computing base under that model.

CHAPTER 2

Access Control Fundamentals

An access enforcement mechanism authorizes requests (e.g., system calls) from multiple subjects (e.g., users, processes, etc.) to perform operations (e.g., read, write, etc.) on objects (e.g., files, sockets, etc.). An operating system provides an access enforcement mechanism. In this chapter, we define the fundamental concepts of access control: a protection system that defines the access control specification and a reference monitor that is the system’s access enforcement mechanism that enforces this specification. Based on these concepts, we provide an ideal definition for a secure operating system. We use that definition to evaluate the operating systems security of the various systems examined in this book.

2.1 PROTECTION SYSTEM

The security requirements of an operating system are defined in its protection system.

Definition 2.1. A protection system consists of a protection state, which describes the operations that system subjects can perform on system objects, and a set of protection state operations, which enable modification of that state.

A protection system enables the definition and management of a protection state. A protection state consists of the specific system subjects, the specific system objects, and the operations that those subjects can perform on those objects. A protection system also defines protection state operations that enable a protection state to be modified. For example, protection state operations are necessary to add new system subjects or new system objects to the protection state.

2.1.1 LAMPSON’S ACCESS MATRIX

Lampson defined the idea that a protection state is represented, in general, by an access matrix [176].

Definition 2.2. An access matrix consists of a set of subjects s ∈ S, a set of objects o ∈ O, a set of operations op ∈ OP, and a function ops(s, o) ⊆ OP, which determines the operations that subject s can perform on object o. The function ops(s, o) is said to return the set of operations corresponding to cell (s, o).

Figure 2.1 shows an access matrix. The matrix is a two-dimensional representation where the set of subjects forms one axis and the set of objects forms the other axis. The cells of the access matrix store the operations that the corresponding subject can perform on the corresponding object. For example, subject Process 1 can perform read and write operations on object File 2.

            File 1    File 2        File 3        Process 1    Process 2
Process 1   Read      Read, Write   Read, Write   Read         -
Process 2   -         Read          Read, Write   -            Read

Figure 2.1: Lampson’s Access Matrix

If the subjects correspond to processes and the objects correspond to files, then we need protection state operations to update the protection state as new files and processes are created. For example, when a new file is created, at least the creating process should gain access to the file. In this case, a protection state operation create_file(process, file) would add a new column for the new file and add read and write operations to the cell (process, file).
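As a concrete illustration, here is a minimal sketch of an access matrix with a compound create_file protection state operation; the dict-of-sets representation and function names are our own illustrative assumptions, not notation from the model.

    # A sketch of Lampson's access matrix with one compound protection state
    # operation, create_file. The data structure is an illustrative choice.

    from collections import defaultdict

    # cell (subject, object) -> set of operations
    access_matrix = defaultdict(set)

    def ops(subject: str, obj: str) -> set:
        """Return the set of operations that subject may perform on obj."""
        return access_matrix[(subject, obj)]

    def create_file(process: str, filename: str) -> None:
        """Compound protection state operation: adds a column for the new
        file and grants the creating process read and write on it."""
        access_matrix[(process, filename)] |= {"read", "write"}

    create_file("Process 1", "File 1")
    assert ops("Process 1", "File 1") == {"read", "write"}
    assert ops("Process 2", "File 1") == set()  # no operations granted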

Lampson’s access matrix model also defines operations that determine which subjects can modify cells. For example, Lampson defined an own operation that defines ownership operations for the associated object. When a subject is permitted the own operation for an object o, that subject can modify the other cells associated with that object o. Lampson also explored delegation of ownership operations to other subjects, so others may manage the distribution of permissions.

The access matrix is used to define the protection domain of a process.

Definition 2.3. A protection domain specifies the set of resources (objects) that a process can access and the operations that the process may use to access such resources.

By examining the rows in the access matrix, one can see all the operations that a subject is authorized to perform on system resources. This determines what information could be read and modified by processes running on behalf of that subject. For a secure operating system, we will want to ensure that the protection domain of each process satisfies system security goals (e.g., secrecy and integrity).

A process at any time is associated with one or more subjects that define its protection domain. That is, the operations that it is authorized to perform are specified by one or more subjects. Systems that we use today, see Chapter 4, compose protection domains from a combination of subjects, including users, their groups, aliases, and ad hoc permissions. However, protection domains can also be constructed from an intersection of the associated subjects (e.g., Windows 2000 Restricted Contexts [303]). The reason to use an intersection of subjects’ permissions is to restrict the protection domain to permissions shared by all, rather than giving the protection domain subjects extra permissions that they would not normally possess.
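The following small sketch shows the difference between composing a protection domain by union versus by intersection of the subjects' permissions (in the spirit of the Restricted Contexts idea); the permission sets are invented examples.

    # Composing a protection domain from two subjects' permission sets.
    user_perms = {("File 1", "read"), ("File 1", "write"), ("File 2", "read")}
    restricted_group_perms = {("File 1", "read"), ("File 3", "read")}

    # Union composition grants everything any subject holds...
    union_domain = user_perms | restricted_group_perms
    # ...while intersection restricts the domain to permissions shared by all.
    intersection_domain = user_perms & restricted_group_perms

    assert intersection_domain == {("File 1", "read")}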

Because the access matrix would be a sparse data structure in practice (i.e., most of the cells would not have any operations), other representations of protection states are used in practice. One representation stores the protection state using individual object columns, describing which subjects have access to a particular object. This representation is called an access control list or ACL. The other representation stores the other dimension of the access matrix, the subject rows. In this case, the objects that a particular subject can access are stored. This representation is called a capability list or C-List.

There are advantages and disadvantages to both the C-List and ACL representations of protection states. For the ACL approach, the set of subjects and the operations that they can perform are stored with the objects, making it easy to tell which subjects can access an object at any time. Administration of permissions seems to be more intuitive, although we are not aware of any studies to this effect. For the C-List approach, the set of objects and the operations that can be performed on them are stored with the subject, making it easy to identify a process’s protection domain. The systems in use today, see Chapter 4, use ACL representations, but there are several systems that use C-Lists, as described in Chapter 10.
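Both representations can be derived from the same matrix cells, as the following minimal sketch shows; the data-structure choices are illustrative, not prescribed by the models.

    # Deriving ACL and C-List views from the same access matrix cells
    # (a subset of Figure 2.1).
    matrix = {
        ("Process 1", "File 2"): {"read", "write"},
        ("Process 2", "File 2"): {"read"},
        ("Process 1", "File 1"): {"read"},
    }

    # ACL: stored per object (column of the matrix).
    acl = {}
    for (subj, obj), op_set in matrix.items():
        acl.setdefault(obj, {})[subj] = op_set

    # C-List: stored per subject (row of the matrix).
    clist = {}
    for (subj, obj), op_set in matrix.items():
        clist.setdefault(subj, {})[obj] = op_set

    assert acl["File 2"] == {"Process 1": {"read", "write"}, "Process 2": {"read"}}
    assert clist["Process 1"] == {"File 2": {"read", "write"}, "File 1": {"read"}}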

2.1.2 MANDATORY PROTECTION SYSTEMS

This access matrix model presents a problem for secure systems: untrusted processes can tamper with the protection system. Using protection state operations, untrusted user processes can modify the access matrix by adding new subjects, objects, or operations assigned to cells. Consider Figure 2.1. Suppose Process 1 has ownership over File 1. It can then grant any other process read or write (or potentially even ownership) access over File 1. A protection system that permits untrusted processes to modify the protection state is called a discretionary access control (DAC) system. This is because the protection state is at the discretion of the users and any untrusted processes that they may execute.

The problem of ensuring that a particular protection state and all possible future protection states derivable from this state will not provide an unauthorized access is called the safety problem [130]¹. It was found that this problem is undecidable for protection systems with compound protection state operations, such as create_file above, which both adds a file column and adds the operations to the owner’s cell. As a result, it is not possible, in general, to verify that a protection state in such a system will be secure (i.e., satisfy security goals) in the future. To a secure operating system designer, such a protection system cannot be used because it is not tamperproof; an untrusted process can modify the protection state, and hence the security goals, enforced by the system.

We say that the protection system defined in Definition 2.1 aims to enforce the requirement of protection: one process is protected from the operations of another only if both processes behave benignly. If no user process is malicious, then with some degree of certainty, the protection state will still describe the true security goals of the system, even after several operations have modified the protection state. Suppose that File 1 in Figure 2.1 stores a secret value, such as a private key in a public key pair [257], and File 2 stores a high integrity value like the corresponding public key. If Process 1 is non-malicious, then it is unlikely that it will leak the private key to Process 2 through either File 1 or File 2 or by changing Process 2’s permissions to File 1. However, if Process 1 is malicious, it is quite likely that the private key will be leaked. To ensure that the secrecy of File 1 is enforced, all processes that have access to that file must not be able to leak the file through the permissions available to that process, including via protection state operations.

Similarly, the access matrix protection system does not ensure the integrity of the public key file File 2, either. In general, an attacker must not be able to modify any user’s public key because this could enable the attacker to replace this public key with one whose private key is known to the attacker. Then, the attacker could masquerade as the user to others. Thus, the integrity compromise of File 2 also could have security ramifications. Clearly, the access matrix protection system cannot protect File 2 from a malicious Process 1, as it has write access to File 2. Further, a malicious Process 2 could enhance this attack by enabling the attacker to provide a particular value for the public key. Also, even if Process 1 is not malicious, a malicious Process 2 may be able to trick Process 1 into modifying File 2 in a malicious way depending on the interface and possible vulnerabilities in Process 1. Buffer overflow vulnerabilities are used in this manner for a malicious process (e.g., Process 2) to take over a vulnerable process (e.g., Process 1) and use its permissions in an unauthorized manner.

Unfortunately, the protection approach underlying the access matrix protection state is naive in today’s world of malware and connectivity to ubiquitous network attackers. We see in Chapter 4 that today’s computing systems are based on this protection approach, so they cannot ensure enforcement of secrecy and integrity requirements. Protection systems that can enforce secrecy and integrity goals must enforce the requirement of security: a system’s security mechanisms can enforce system security goals even when any of the software outside the trusted computing base may be malicious.

In such a system, the protection state must be defined based on the accurate identification of the secrecy and integrity of user data and processes, and no untrusted processes may be allowed to perform protection state operations. Thus, the dependence on potentially malicious software is removed, and a concrete basis for the enforcement of secrecy and integrity requirements is possible.

This motivates the definition of a mandatory protection system below.

Definition 2.4. A mandatory protection system is a protection system that can only be modified by trusted administrators via trusted software, consisting of the following state representations:

• A mandatory protection state is a protection state where subjects and objects are represented by labels where the state describes the operations that subject labels may take upon object labels;

• A labeling state for mapping processes and system resource objects to labels;

• A transition state that describes the legal ways that processes and system resource objects may be relabeled.

For secure operating systems, the subjects and objects in an access matrix are represented by system-defined labels. A label is simply an abstract identifier—the assignment of permissions to a label defines its security semantics. Labels are tamperproof because: (1) the set of labels is defined by trusted administrators using trusted software and (2) the set of labels is immutable. Trusted administrators define the access matrix’s labels and set the operations that subjects of particular labels can perform on objects of particular labels. Such protection systems are mandatory access control (MAC) systems because the protection system is immutable to untrusted processes². Since the set of labels cannot be changed by the execution of user processes, we can prove the security goals enforced by the access matrix and rely on these goals being enforced throughout the system’s execution.

Of course, just because the set of labels is fixed does not mean that the set of processes and files is fixed. Secure operating systems must be able to attach labels to dynamically created subjects and objects and even enable label transitions.

A labeling state assigns labels to new subjects and objects. Figure 2.2 shows that processes and files are associated with labels in a fixed protection state. When newfile is created, it must be assigned one of the object labels in the protection state. In Figure 2.2, it is assigned the secret label. Likewise, the process newproc is also labeled as unclassified. Since the access matrix does not permit unclassified subjects access to secret objects, newproc cannot access newfile. As for the protection state, in a secure operating system, the labeling state must be defined by trusted administrators and immutable during system execution.

A transition state enables a secure operating system to change the label of a process or a system resource. For a process, a label transition changes the permissions available to the process (i.e., its protection domain), so such transitions are called protection domain transitions for processes. As an example where a protection domain transition may be necessary, consider when a process executes a different program. When a process performs an execve system call, the process image (i.e., code and data) of the program is replaced with that of the file being executed. Since a different program is run as a result of the execve system call, the label associated with that process may need to be changed as well to indicate the requisite permissions or trust in the new image.

A transition state may also change the label of a system resource. A label transition for a file (i.e., object or resource) changes the accessibility of the file to protection domains. For example, consider the file acct that is labeled trusted in Figure 2.2. If this file is modified by a process with an untrusted label, such as other, a transition state may change its label to untrusted as well. The Low-Water Mark (LOMAC) policy defines this kind of transition [101,27] (see Chapter 5). An alternative would be to change the protection state to prohibit untrusted processes from modifying trusted files, which is the case for other policies. As for the protection state and labeling state, in a secure operating system, the transition state must be defined by trusted administrators and immutable during system execution.
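To tie the three state representations together, here is a minimal sketch of a mandatory protection system with a LOMAC-style transition rule; all names, tables, and rules are invented for illustration and are not the book's formal definitions.

    # An immutable protection state over labels, a labeling state for new
    # objects, and a LOMAC-style transition state (illustrative only).

    PROTECTION_STATE = {  # (subject_label, object_label) -> operations
        ("unclassified", "unclassified"): frozenset({"read", "write"}),
        ("secret", "secret"): frozenset({"read", "write"}),
        ("secret", "unclassified"): frozenset({"read"}),
    }

    LABELING_STATE = {  # creating subject's label -> label for new objects
        "unclassified": "unclassified",
        "secret": "secret",
    }

    def transition_state(object_label: str, writer_label: str) -> str:
        """LOMAC-style transition: an object written by an untrusted
        subject is relabeled to the writer's (lower-integrity) label."""
        return writer_label if writer_label == "untrusted" else object_label

    # newproc (unclassified) gets no operations on newfile (secret).
    newfile_label = LABELING_STATE["secret"]
    assert PROTECTION_STATE.get(("unclassified", newfile_label)) is None
    # acct (trusted) drops to untrusted after an untrusted process writes it.
    assert transition_state("trusted", "untrusted") == "untrusted"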

²Historically, the term mandatory access control has been used to define a particular family of access control models, lattice-based access control models [271]. Our use of the terms mandatory protection system and mandatory access control system are meant to include historical MAC models, but our definition aims to be more general. We intend that these terms imply models whose sets of labels are immutable, including these MAC models and others, which are administered only by trusted subjects, including trusted software and administrators. We discuss the types of access control models that have been used in MAC systems in Chapter 5.

[Figure 2.2: A Mandatory Protection System: The protection state is defined in terms of labels and is immutable. The immutable labeling state and transition state enable the definition and management of labels for system subjects and objects.]

2.2 REFERENCE MONITOR

A reference monitor is the classical access enforcement mechanism [11]. Figure 2.3 presents a generalized view of a reference monitor. It takes a request as input, and returns a binary response indicating whether the request is authorized by the reference monitor’s access control policy. We identify three distinct components of a reference monitor: (1) its interface; (2) its authorization module; and (3) its policy store. The interface defines where the authorization module needs to be invoked to perform an authorization query to the protection state, a labeling query to the labeling state, or a transition query to the transition state. The authorization module determines the exact queries that are to be made to the policy store. The policy store responds to authorization, labeling, and transition queries based on the protection system that it maintains.

Reference Monitor Interface. The reference monitor interface defines where protection system queries are made to the reference monitor. In particular, it ensures that all security-sensitive operations are authorized by the access enforcement mechanism. By a security-sensitive operation, we mean an operation on a particular object (e.g., file, socket, etc.) whose execution may violate the system’s security requirements. For example, an operating system implements file access operations that would allow one user to read another’s secret data (e.g., private key) if not controlled by the operating system. Labeling and transitions may be executed for authorized operations.

[Figure 2.3: A reference monitor is a component that authorizes access requests at the reference monitor interface, defined by individual hooks that invoke the reference monitor’s authorization module to submit an authorization query to the policy store. The policy store answers authorization queries, labeling queries, and label transition queries using the corresponding states.]

The reference monitor interface determines where access enforcement is necessary and the information that the reference monitor needs to authorize that request. In a traditional UNIX file open request, the calling process passes a file path and a set of operations. The reference monitor interface must determine what to authorize (e.g., directory searches, link traversals, and finally the operations for the target file’s inode), where to perform such authorizations (e.g., authorize a directory search for each directory inode in the file path), and what information to pass to the reference monitor to authorize the open (e.g., an inode reference). Incorrect interface design may allow an unauthorized process to gain access to a file.
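The following sketch suggests where such hooks might sit on the path of a UNIX-style open request; the hook placement, names, and the always-allow authorize() helper are hypothetical simplifications, not how any real kernel (e.g., the LSM framework) actually defines its hook points.

    # Hypothetical hook placement for mediating an open request.
    def authorize(subject_label: str, object_label: str, op: str) -> bool:
        # Placeholder for a policy-store query; always allows, for illustration.
        return True

    def open_file(process_label: str, path: str, requested_ops: set):
        parts = path.strip("/").split("/")
        # Hook 1: authorize a search on each directory inode along the path.
        for directory in parts[:-1]:
            if not authorize(process_label, f"dir:{directory}", "search"):
                raise PermissionError(f"search denied on {directory}")
        # Hook 2: authorize the requested operations on the target file's inode.
        for op in requested_ops:
            if not authorize(process_label, f"file:{parts[-1]}", op):
                raise PermissionError(f"{op} denied on {parts[-1]}")
        return f"handle:{path}"  # stand-in for a real file handle

    open_file("user_t", "/home/alice/notes.txt", {"read"})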

Authorization Module The core of the reference monitor is its authorization module. The authorization module takes the interface's inputs (e.g., process identity, object references, and system call name) and converts them into a query for the reference monitor's policy store. The challenge for the authorization module is to map the process identity to a subject label, the object references to an object label, and to determine the actual operations to authorize (e.g., there may be multiple operations per interface). The protection system determines the choices of labels and operations, but the authorization module must develop a means for performing the mapping to execute the "right" query.

For the open request above, the module responds to the individual authorization requests from the interface separately. For example, when a directory in the file path is requested, the authorization module builds an authorization query. The module must obtain the label of the subject responsible for the request (i.e., the requesting process), the label of the specified directory object (i.e., the directory inode), and the protection state operations implied by the request (e.g., read or search the directory). In some cases, if the request is authorized by the policy store, the module may make subsequent requests to the policy store for labeling (i.e., if a new object were created) or label transitions.
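A sketch of this follow-up labeling behavior, reusing the hypothetical API above (policy_store_label and set_object_label are illustrative names, not from any real system):

    #include <stdbool.h>

    typedef int subject_label_t;
    typedef int object_label_t;
    typedef unsigned int op_set_t;
    #define OP_WRITE 0x2u

    struct inode { int id; };

    /* As in the earlier sketches (hypothetical API). */
    subject_label_t label_of_process(int pid);
    object_label_t  label_of_object(int object_id);
    bool policy_store_authorized(subject_label_t s, object_label_t o,
                                 op_set_t ops);
    /* Labeling query: {subject_label, resource} -> object label. */
    object_label_t policy_store_label(subject_label_t s, int resource_id);
    void set_object_label(int object_id, object_label_t label);

    int mediated_create(int pid, struct inode *dir, struct inode *new_file)
    {
        subject_label_t s = label_of_process(pid);

        /* Authorization query: may this subject write the directory? */
        if (!policy_store_authorized(s, label_of_object(dir->id), OP_WRITE))
            return -1;

        /* The create was authorized, so a labeling query gives the
         * new inode its label from the labeling state. */
        set_object_label(new_file->id, policy_store_label(s, new_file->id));
        return 0;
    }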

Policy Store The policy store is a database for the protection state, labeling state, and transition state. An authorization query from the authorization module is answered by the policy store. These queries are of the form {subject_label, object_label, operation_set} and return a binary authorization reply. Labeling queries are of the form {subject_label, resource}, where the combination of the subject and, optionally, some system resource attributes determine the resultant resource label returned by the query. For transitions, queries include the {subject_label, object_label, operation, resource}, where the policy store determines the resultant label of the resource. The resource may be either an active entity (e.g., a process) or a passive object (e.g., a file). Some systems also execute queries to authorize transitions themselves.
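These three query forms might be rendered as the following hypothetical policy store API (function names and types are illustrative):

    #include <stdbool.h>

    typedef int subject_label_t;
    typedef int object_label_t;
    typedef unsigned int op_set_t;

    /* Authorization: {subject_label, object_label, operation_set}
     * -> binary allow/deny. */
    bool policy_store_authorized(subject_label_t s, object_label_t o,
                                 op_set_t ops);

    /* Labeling: {subject_label, resource} -> label for the resource. */
    object_label_t policy_store_label(subject_label_t s, int resource_id);

    /* Transition: {subject_label, object_label, operation, resource}
     * -> the resource's new label; the resource may be an active
     * entity (a process) or a passive object (a file). */
    object_label_t policy_store_transition(subject_label_t s,
                                           object_label_t o,
                                           op_set_t operation,
                                           int resource_id);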

2.3 SECURE OPERATING SYSTEM DEFINITION

We define a secure operating system as a system with a reference monitor access enforcement mechanism that satisfies the requirements below when it enforces a mandatory protection system.

Definition 2.5. A secure operating system is an operating system whose access enforcement satisfies the reference monitor concept [11].

Definition 2.6. The reference monitor concept defines the necessary and sufficient properties of any system that securely enforces a mandatory protection system, consisting of three guarantees:

1. Complete Mediation: The system ensures that its access enforcement mechanism mediates all security-sensitive operations.

2. Tamperproof: The system ensures that its access enforcement mechanism, including its protection system, cannot be modified by untrusted processes.

3. Verifiable: The access enforcement mechanism, including its protection system, "must be small enough to be subject to analysis and tests, the completeness of which can be assured" [11]. That is, we must be able to prove that the system enforces its security goals correctly.

The reference monitor concept defines the necessary and sufficient requirements for access control in a secure operating system [145]. First, a secure operating system must provide complete mediation of all security-sensitive operations. If all these operations are not mediated, then a security requirement may not be enforced (i.e., a secret may be leaked or trusted data may be modified by an untrusted process). Second, the reference monitor system, which includes its implementation and the protection system, must all be tamperproof. Otherwise, an attacker could modify the enforcement function of the system, again circumventing its security. Finally, the reference monitor system, which includes its implementation and the protection system, must be small enough to verify the correct enforcement of system security goals. Otherwise, there may be errors in the implementation or the security policies that may result in vulnerabilities.

A challenge for the designer of a secure operating system is how to achieve these requirements precisely.

Complete Mediation Complete mediation of security-sensitive operations requires that all program paths that lead to a security-sensitive operation be mediated by the reference monitor interface. The trivial approach is to mediate all system calls, as these are the entry points from user-level processes. While this would indeed mediate all operations, it is often insufficient. For example, some system calls implement multiple distinct operations. The open system call involves opening a set of directory objects, and perhaps file links, before reaching the target file. The subject may have different permissions for each of these objects, so several different authorization queries would be necessary. Also, the directory, link, and file objects are not available at the system call interface, so the interface would have to compute them, which would result in redundant processing (i.e., since the operating system already maps file names to such objects). But worst of all, the mapping between the file name passed into an open system call and the directory, link, and file objects may be changed between the start of the system call and the actual open operation (i.e., by a well-timed rename operation). This is called a time-of-check-to-time-of-use (TOCTTOU) attack [30], and it is inherent to the open system call.

As a result, reference monitors require interfaces that are embedded in the operating system itself in order to enforce complete mediation correctly. For example, the Linux Security Modules (LSM) framework [342] (see Chapter 9), which defines the mediation interface for reference monitors in Linux, does not authorize the open system call, but rather each individual directory, link, and file open after the system object reference (i.e., the inode) has been retrieved. For LSM, tools have been built to find bugs in the complete mediation demanded of the interface [351, 149], but it is difficult to verify that a reference monitor interface is correct.
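The following sketch contrasts the two interface placements (all functions are illustrative stand-ins, not real kernel interfaces): a pathname-based check at the system call boundary leaves a TOCTTOU window, whereas an LSM-style hook placed after name resolution authorizes the same inode that the kernel will use:

    #include <stdbool.h>

    typedef unsigned int op_set_t;
    struct inode { int id; };

    /* Illustrative stand-ins for kernel internals. */
    struct inode *resolve(const char *path);        /* name -> inode */
    int  kernel_open_inode(struct inode *ino, op_set_t ops);
    bool authorize(int pid, int object_id, op_set_t ops);
    bool authorize_by_path(int pid, const char *path, op_set_t ops);

    /* Racy: the check is on the name, but the kernel resolves the
     * name again at use time, so a well-timed rename() can rebind
     * 'path' to a different object (TOCTTOU). */
    int interposed_open(int pid, const char *path, op_set_t ops)
    {
        if (!authorize_by_path(pid, path, ops))
            return -1;
        return kernel_open_inode(resolve(path), ops); /* second resolution */
    }

    /* LSM-style: the hook runs after name resolution, on the same
     * inode the kernel will use, closing the race window. */
    int lsm_style_open(int pid, const char *path, op_set_t ops)
    {
        struct inode *ino = resolve(path);            /* single resolution */
        if (!authorize(pid, ino->id, ops))
            return -1;
        return kernel_open_inode(ino, ops);
    }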


Tamperproof Verifying that a reference monitor is tamperproof requires verifying that all the reference monitor components, the reference monitor interface, authorization module, and policy store, cannot be modified by processes outside the system's trusted computing base (TCB) (see Chapter 1). This also implies that the TCB itself is high integrity, so we ultimately must verify that the entire TCB cannot be modified by processes outside the TCB. Thus, we must identify all the ways that the TCB can be modified, and verify that no untrusted processes (i.e., those outside the TCB) can perform such modifications. First, this involves verifying that the TCB binaries and data files are unmodified. This can be accomplished by multiple means, such as file system protections and binary verification programs. Note that the verification programs themselves (e.g., Tripwire [169]) must also be protected. Second, the running TCB processes must be protected from modification by untrusted processes. Again, the system access control policy may ensure that untrusted processes cannot communicate with TCB processes, but TCB processes that may accept inputs from untrusted processes must protect themselves from malicious inputs, such as buffer overflows [232, 318], format string attacks [305], and return-to-libc attacks [337]. While defenses for runtime vulnerabilities are fundamental to building tamperproof code, we do not focus on these software engineering defenses in this book. Some buffer overflow defenses, such as StackGuard [64] and stack randomization [121], are now standard in compilers and operating systems, respectively.
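As a sketch of the binary verification step, a Tripwire-style checker compares each TCB file against a protected baseline; here digest() is a hypothetical stand-in for a real cryptographic hash:

    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-in for a real cryptographic hash (e.g., SHA-256) over
     * the file's contents; a real checker would use a vetted
     * library, and the baseline itself must be protected. */
    unsigned long digest(const char *path);

    struct baseline_entry {
        const char   *path;      /* TCB binary or data file */
        unsigned long expected;  /* digest recorded at install time */
    };

    /* Any mismatch against the baseline is evidence of tampering. */
    bool tcb_files_unmodified(const struct baseline_entry *db, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (digest(db[i].path) != db[i].expected)
                return false;
        return true;
    }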

Second, the policy store contains the mandatory protection system, which is a MAC system. That is, only trusted administrators are allowed to modify its states. Unfortunately, access control policy is deployment-specific, so administrators often will need to modify these states. While administrators may be trusted, they may also use untrusted software (e.g., their favorite editor). The system permissions must ensure that no untrusted software is used to modify the mandatory protection system.

Tamperproofing will add a variety of specific security requirements to the system. These requirements must be included in the verification below.

Verifiable Finally, we must be able to verify that a reference monitor and its policy really enforce the system security goals. This requires verifying the correctness of the interface, module, and policy store software, and evaluating whether the mandatory protection system truly enforces the intended goals. First, verifying the correctness of software automatically is an unsolved problem. Tools have been developed that enable proofs of correctness for small amounts of code and limited properties (e.g., [18]), but the problem of verifying a large set of correctness properties for large codebases appears intractable. In practice, correctness is evaluated with a combination of formal and manual techniques, which adds significant cost and time to development. As a result, few systems have been developed with the aim of proving correctness, and any comprehensive correctness claims are based on some informal analysis (i.e., they have some risk of being wrong).

Second, testing that the mandatory protection system truly enforces the intended security goals appears tractable, but in practice, the complexity of systems makes the task difficult. Because the protection, labeling, and transition states are immutable, the security of these states can be assessed.


For protection states, some policy models, such as Bell-LaPadula [23] and Biba [27], specify security goals directly (see Chapter 5), but these are idealizations of practical systems. In practice, a variety of processes are trusted to behave correctly, expanding the TCB yet further and introducing the risk that the security goals cannot be enforced. For operating systems that have fine-grained access control models (i.e., many unique subjects and objects), specifying and verifying that the policy enforces the intended security goals is also possible, although the task is significantly more complex.

For the labeling and transition states, we must consider the security impact of the changes that these states enable. Any labeling state must ensure that the label associated with a system resource does not enable the leakage of data or the modification of unauthorized data. For example, if a secret process is allowed to create public objects (i.e., those readable by any process), then data may be leaked. The labeling of some objects, such as data imported from external media, presents a risk of incorrect labeling as well.
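For this two-level secrecy example, a labeling-state check might require that a new object's level dominate its creator's, as in the following minimal, hypothetical sketch:

    #include <stdbool.h>

    /* A two-level secrecy lattice, as in the example above. */
    typedef enum { UNCLASSIFIED = 0, SECRET = 1 } level_t;

    /* A new object's secrecy level must dominate its creator's;
     * otherwise a secret subject could leak data into an object
     * that unclassified subjects may read. */
    bool labeling_preserves_secrecy(level_t subject, level_t new_object)
    {
        return new_object >= subject;   /* forbid labeling "down" */
    }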

Likewise, transition states must ensure that the security goals of the system are upheld as processes and resources are relabeled. A challenge is that transition states are designed to enable privilege escalation. For example, when a user wants to update their password, they use an unprivileged process (e.g., a shell) to invoke privileged code (e.g., the passwd program) to be run with the privileged code's label (e.g., UNIX root, which provides full system access). However, such transitions may be insecure if the unprivileged process can control the execution of the privileged code. For example, unprivileged processes may be able to control a variety of inputs to privileged programs, including libraries, environment variables, and input arguments. Thus, to verify that the system's security goals are enforced by the protection system, we must examine more than just the protection system's states.
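A minimal, hypothetical sketch of such a transition state: a subject's label changes only when a listed trusted program is executed, so privilege escalation occurs only at controlled entry points:

    #include <stddef.h>
    #include <string.h>

    typedef int subject_label_t;

    struct transition_rule {
        subject_label_t from;     /* e.g., an unprivileged shell label */
        const char     *program;  /* e.g., "/usr/bin/passwd" */
        subject_label_t to;       /* e.g., a privileged passwd label */
    };

    /* Return the new subject label for executing 'program', or the
     * current label if no transition rule applies. */
    subject_label_t label_transition(subject_label_t cur,
                                     const char *program,
                                     const struct transition_rule *rules,
                                     size_t nrules)
    {
        for (size_t i = 0; i < nrules; i++)
            if (rules[i].from == cur &&
                strcmp(rules[i].program, program) == 0)
                return rules[i].to;   /* authorized label transition */
        return cur;                   /* otherwise, label is unchanged */
    }

As the text notes, such rules are only safe if the privileged program protects itself from the inputs (libraries, environment variables, arguments) that the unprivileged caller controls.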

2.4 ASSESSMENT CRITERIA

For each system that we examine, we must specify precisely how the system aims to satisfy each of the reference monitor guarantees. In doing this, it turns out to be easy to expose an insecure operating system, but it is difficult to define how close to "secure" an operating system is. Based on the analysis of reference monitor guarantees above, we list a set of dimensions that we use to evaluate the extent to which an operating system satisfies these reference monitor guarantees.

1. Complete Mediation: How does the reference monitor interface ensure that all security-sensitive operations are mediated correctly?

In this answer, we describe how the system ensures that the subjects, objects, and operations being mediated are the ones that will be used in the security-sensitive operation. This can be a problem for some approaches (e.g., system call interposition [3, 6, 44, 84, 102, 115, 171, 250]), in which the reference monitor does not have access to the objects used by the operating system. In some of these cases, a race condition may enable an attacker to cause a different object to be accessed than the one authorized by the reference monitor [30].


2. Complete Mediation: Does the reference monitor interface mediate security-sensitive operations on all system resources?

We describe how the mediation interface described above mediates all security-sensitive operations.

3. Complete Mediation: How do we verify that the reference monitor interface provides complete mediation?

We describe any formal means for verifying the complete mediation described above.

4. Tamperproof: How does the system protect the reference monitor, including its protection system, from modification?

In modern systems, the reference monitor and its protection system are protected by the operating system in which they run. The operating system must ensure that the reference monitor cannot be modified and the protection state can only be modified by trusted computing base processes.

5. Tamperproof: Does the system's protection system protect the trusted computing base programs?

The reference monitor's tamperproofing depends on the integrity of the entire trusted computing base, so we examine how the trusted computing base is defined and protected.

6. Verifiable: What is the basis for the correctness of the system's trusted computing base?

We outline the approach that is used to justify the correctness of the implementation of all trusted computing base code.

7. Verifiable: Does the protection system enforce the system’s security goals?

Finally, we examine how the system's policy correctly justifies the enforcement of the system's security goals.
