(1)

1

Integrity

Service

Excellence

INFORMATION OPERATIONS & SECURITY

SPRING REVIEW MTG

March 04, 2014

Dr. Robert Herklotz

Program Officer

INFORMATION OPERATIONS & SECURITY

Air Force Office of Scientific Research

(2)

2

DISTRIBUTION STATEMENT A – Unclassified, Unlimited Distribution

2014 AFOSR SPRING REVIEW

NAME: DR. ROBERT HERKLOTZ

BRIEF DESCRIPTION OF PORTFOLIO:

Fund science that will enable the AF and DOD to dominate cyberspace: science to develop secure information systems for our warfighters and to deny the enemy such systems.

LIST SUB-AREAS IN PORTFOLIO:

S1: SOS-Science of Security

S2: Secure Humans

S3: Secure Networks

S4: Secure Hardware

S5: Covert Channels

S6: Execute on Insecure Systems

S7: Secure Data

(3)

3


Cyber security: for AF space and air systems, C4ISR systems, and embedded cyber hardware and software

Defensive and offensive cyber at the basic research level

Key scientific areas: advanced formal methods; analysis and modeling of the cyber security properties of software, hardware, humans, and networks; discovery of predictive cyber security laws and metrics; human cognition and trust

Air Force importance: needed to dominate cyberspace and to execute AF missions securely while denying the same to enemies.

Portfolio Introduction

(4)

4


Develop science of cyber security (SOS)

Develop methods to execute the mission securely on insecure systems

Invent theory and methods to discover covert channels, side channels, hidden software, and hidden circuits (functionality) in hardware

(5)

5

Funding research to discover laws that relate and govern relationships among classes of policies, attacks, and defenses in cyber systems (defense class D enforces policy class P despite attacks from class A), to discover the theory at the intersection of cyber and electronic warfare, and to discover methods to analyze and compare approaches to achieving agility and resiliency in AF systems

DEVELOP SCIENCE OF CYBER SECURITY (SOS)

(6)

6


Key areas:

Discover how to build inherently secure systems: advanced formal analysis methods, hyperproperties, develop information flow theory, develop formal definitions of the cyber security properties of a system, discover laws of cyber security to guide system architecture design, and develop methods to compose artifacts with given properties to obtain a new artifact with predictable properties

Know how to compare architectures with respect to security properties; develop security and operations metrics, including new models of information leakage that recognize that not all information is of equal value

DEVELOP SCIENCE OF CYBER SECURITY (SOS)

(7)

7


Portfolio Subarea

3F-S1: SOS-SCIENCE OF SECURITY

Area: Theory of Cyber Security

• Discover laws that relate and govern relationships among classes of policies, attacks, and defenses in cyber systems (defense class D enforces policy class P despite attacks from class A)

Why/Payoff:

• Discover how to build inherently secure systems
• Know how to compare architectures with respect to security properties (security and operations metrics)
• Insight into how to break the attack-patch-attack cycle, in which the attacker always wins

Uniqueness

Other Agencies Investing in this Area:

• NSA - organizing research and publicity in this area
• NSA - partnered with AFOSR on grants and MURI
• OSR - first to invest in the area; selected by ASD(R&E) to develop the MURI, to which NSA sent money
• ARL - just starting a center; ONR - no specific investment
• NSF - general investment

Impact of Not Funding This Area:

• All AF/DOD systems currently at risk of cyber attack remain at risk
• Decreased mission survivability
• Increased cyber defense costs with little value added
• Loss of AF/NSA leadership and of the ability to support Cyber Command in this area

Scientific Excellence

Potential Transformational Impact/Scientific Highlights:

• First formal definition of the cyber security properties of a system
• Compose artifacts with given properties to obtain a new artifact with predictable properties
• Discovery of laws of cyber security to guide system architecture design
• Laws to guide optimal system operation: the ability to trade off cyber security against other system properties when appropriate

AF/DOD Relevance


(8)

8


Funding research to perform distributed secure calculations, to bypass an insecure OS, and to understand the true security of artificial diversity and moving-target techniques

Key areas: new methods of secret splitting (a minimal sketch follows below), hardware support for isolation, secure computation with commonly used data types and their collections, floating point arithmetic, string operations, new hardware architectures, new techniques for binary rewriting, new OS and software architectures, new advances in formal methods, new cryptographic techniques, etc.

DEVELOP METHODS TO EXECUTE MISSION SECURELY ON INSECURE SYSTEMS
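To make the secret-splitting idea above concrete, here is a minimal sketch of additive secret sharing over a prime field, in which no proper subset of shares reveals anything about the secret; the prime, share count, and function names are illustrative assumptions, not artifacts of the funded work.

# Minimal sketch of additive secret splitting over a prime field.
# The prime, share count, and names are illustrative assumptions only.
import secrets

PRIME = 2**61 - 1  # any prime larger than the secrets being protected

def split(secret: int, n: int) -> list[int]:
    """Split `secret` into n shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    last = (secret - sum(shares)) % PRIME
    return shares + [last]

def combine(shares: list[int]) -> int:
    """Recover the secret by summing all shares modulo the prime."""
    return sum(shares) % PRIME

if __name__ == "__main__":
    s = 424242
    parts = split(s, 3)          # store each part on a different, possibly insecure, host
    assert combine(parts) == s   # only the full set of shares reconstructs the secret

Distributing such shares across machines is one way a mission computation can tolerate the compromise of any single host, which is the spirit of the "execute on insecure systems" thrust.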

(9)

9

Portfolio Subarea

3F-S6: EXECUTE ON INSECURE SYSTEMS

Area: Distributed execution on insecure systems

• The cyber attacker is expected to remain superior to the defender
• Develop methods to secure mission execution on insecure systems

Why/Payoff:

• AF/DOD does not own all of the systems and networks it uses, and cannot be sure of the security of the systems it does own
• Cloud computing saves money but is insecure; new methods are needed to secure cloud use

Uniqueness

Other Agencies Investing in this Area:

• ONR, NSF, NSA, ARO/ARL - specific projects, but not a focused area of research as a whole
• OSR - 09 MURI on the topic; the Secure Data subarea supports this area; new YIP in this area

Impact of Not Funding This Area:

• AF/DOD missions executed on distributed and cloud systems will remain insecure
• Moving-target defense cannot be as robust as possible; the space to move within is reduced to secure networks and systems only

Scientific Excellence

Potential Transformational Impact/Scientific Highlights:

• Survive a malicious OS, perhaps with degraded functionality or availability, via:
• new hardware architectures
• new techniques for binary rewriting
• new OS and software architectures
• new advances in formal methods
• new cryptographic techniques

AF/DOD Relevance

[Slide cartoon: "Hardware! Protect me from this malicious OS!"]

(10)

10


Funding research to develop a theory of covert channels, improve cyber forensics theory and techniques for current and future software and hardware, and develop efficient V&V methods for software and hardware

Key areas: develop advanced binary analysis, invent semantics-preserving transforms for deobfuscation, advanced dynamic analysis, develop a theory of steganography and steganalysis (a minimal embedding sketch follows below), hardware trust metrics, PUFs, and develop understanding of the don't-care space in HW and SW

INVENT THEORY AND METHODS TO DISCOVER COVERT CHANNELS, SIDE CHANNELS, HIDDEN SOFTWARE, HIDDEN CIRCUITS IN HARDWARE
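To make the steganography/steganalysis thread concrete, below is a minimal least-significant-bit embedding sketch; the raw-byte carrier and the function names are illustrative assumptions, and the scheme is deliberately naive compared with the keyed, statistically undetectable constructions the portfolio targets.

# Minimal LSB steganography sketch: hide message bits in the low-order bit
# of carrier bytes. Carrier format and names are illustrative assumptions.
def embed(carrier: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit      # overwrite the least significant bit
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    bits = [stego[i] & 1 for i in range(length * 8)]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length))

if __name__ == "__main__":
    cover = bytes(range(256))
    assert extract(embed(cover, b"hi"), 2) == b"hi"

Steganalysis research in this area is about detecting exactly this kind of low-order statistical disturbance, and about proving when such embedding is or is not detectable.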

(11)

11

Portfolio Subarea

3F-S5: COVERT CHANNELS

Area: Covert and side channels

• Invent theory of and methods to discover covert channels, side channels, hidden software, and hidden circuits in hardware
• Cyber forensics

Why/Payoff:

• Covert and side channels are often the method of access to and exfiltration from AF/DOD systems
• Hidden software embedded in AF/DOD-procured applications is a major vulnerability
• Hidden circuits in AF/DOD hardware chips are a major vulnerability

Uniqueness

Other Agencies Investing in this Area:

• ONR, ARO, ARL, NSA, NSF - no particular emphasis
• OSR research in steganography coordinated with NSA and broadening into this focus area
• Advanced forensics theory and techniques developed: the FX cyber forensics method transitioned to national labs for further development

Impact of Not Funding This Area:

• Discoveries in this area will continue to be haphazard, after the fact, after a successful attack
• Most cryptographic security is broken through un-modeled side channels
• Most software and hardware is not produced in the US, and its security currently cannot be guaranteed

Scientific Excellence

Potential Transformational Impact/Scientific Highlights:

• Development of a theory of covert channels
• Improved cyber forensics theory and techniques for current and future software and hardware
• Efficient V&V methods for software and hardware

(12)

12


Portfolio Subarea

3F-S8: SECURE SYSTEMS

Area: Anticipate Future Cyber Attacks

• Discover potential vulnerabilities in hardware and software
• Advanced cloud and botnet research
• Anti-forensics research

Why/Payoff:

• New systems have new vulnerabilities; it is better to discover them before deployment
• Find potential zero-day attacks first
• Understand and get ahead of the cyber attackers

Uniqueness

Other Agencies Investing in this Area:

• ONR, ARL, ARO - little research in these areas
• NSF - no focused research in these areas
• NSA/OSR - research transitions from OSR to NSA through AFRL/RI

Impact of Not Funding This Area:

• AF/DOD systems remain more vulnerable to zero-day attacks
• AF/DOD ability to anticipate future attacks is impaired

Scientific Excellence

Potential Transformational Impact/Scientific Highlights:

• Discovering vulnerabilities, and theory to predict vulnerabilities, in virtual machines, operating systems, hypervisors, and hardware
• Discovering new methods to find new vulnerabilities in hardware and software

(13)

13

Current state of the art for research in these areas: all three areas had only ad-hoc investment until recently; theory and metrics in all areas still need to be discovered

Investment approach: focus on SOS, leading the theoretical thrust while others focus on more experimental and practical thrusts

Leverage teaming/collaborating opportunities: NSA, ONR, ARO, NIST, DOD, NSF

(14)

14


Objective:

Develop a new model for adversarial signal processing, focusing on "counterdeception" problems: problems in which one principal is attempting to commit malicious behavior that evades a detector.

• Identification of inherent exploitable properties of common signal processing and detection primitives
• Establishing bounds in signal processing and communication problems when noise is replaced with an intelligent adversary

Towards a General Theory of Counterdeception

Scott Craver, Binghamton University

Tel. (607) 727-7166, E-Mail: scraver@binghamton.edu

Scientific/Technical Approach:

- Identify and catalog inherent exploitable properties of common signal processing and detection primitives
- Identify specific attacks and means of robust detection
- Establish theoretical bounds on communication and detection when noise is replaced with an intelligent adversary with varying degrees of power
- Find information-theoretic quantification of adversarial environments, e.g. estimating the "reversing entropy" of secret detection regions

Accomplishments:

- Proved that in steganographic channels, error correction without a shared key, and hence key exchange, can be derailed by an adversary
- Uncovered attacks on speaker identification, watermark detection, and face recognition
- Developed steganalytically immune methods of transmission through random subchannels

Challenges:

- Developing a usable and general approach to detector structure analysis for secure system design

[Slide diagram: a signal processing chain (data, features, signal library, detect, process) annotated with exploitable properties, "reversing" entropy, adversarial capacity, countermeasures, and worst/average-case attack performance.]

(15)

15

Objective:

Exploit our knowledge of adversarial signal processing to subvert face and voice recognition systems

• Construct unusual signals that induce unexpected results in detection
• Identify and exploit weaknesses in common detection primitives used in biometric detectors
• Explore insider attacks to hide exploitable errors in biometric user databases
• Demonstrate working attacks with proof-of-concept hardware and software

Attacks on Multimodal Biometric Detectors

Scott Craver, Binghamton University

Tel. (607) 727-7166, E-Mail: scraver@binghamton.edu

Scientific/Technical Approach:

- Sophisticated biometric detectors often use complex (multi-modal) statistical models of biometric features
- Use insider attacks to affix spurious modes to users' database entries
- Exploit weaknesses of histogram comparison, as used in local-binary-pattern histogram face recognizers (see the sketch below)
- Innovation: "sidestepping" attacks, whereby an injected mode matches an attacker only when an unusual sound or image is presented to the detector

Accomplishments:

- Achieved an injection attack that allows impersonation of multiple people with printed gradient patterns
- Found multiple flaws in histogram comparison in the OpenCV library for face recognition
- Developed additive-noise attacks for fooling speaker identification systems
- The capability of the insider depends on the choice of signal processing and detection primitives

Challenges:

- Hardening detectors against attacks

- Applying sidestep attacks without insider access

[Slide diagram: a feature vector is matched against per-user models (Alice, Bob, Carol) in a database containing a faulty histogram.]
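To illustrate the histogram-comparison weakness referenced in the approach, here is a minimal chi-square histogram distance of the kind used by local-binary-pattern histogram face recognizers, together with a toy insider-injected mode; the bin values, threshold, and function names are illustrative assumptions, not the actual OpenCV code or the project's attack.

# Minimal sketch of chi-square histogram matching and an injected spurious mode.
# Bin values, threshold, and names are illustrative assumptions only.
def chi_square(h1: list[float], h2: list[float], eps: float = 1e-9) -> float:
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def matches(probe: list[float], enrolled: list[float], threshold: float = 0.5) -> bool:
    return chi_square(probe, enrolled) < threshold

if __name__ == "__main__":
    enrolled = [0.2, 0.2, 0.2, 0.2, 0.2, 0.0]          # legitimate user's histogram
    tampered = [0.02, 0.02, 0.02, 0.02, 0.02, 0.9]     # insider appends a spurious mode
    attacker = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]          # unusual image hitting only that bin
    print(matches(attacker, enrolled))   # False: attacker does not match the real entry
    print(matches(attacker, tampered))   # True: attacker matches the tampered entry

In the described attack, the spurious mode is added alongside the user's existing multi-modal model rather than re-weighting a single histogram, so the legitimate user continues to match; this toy collapses that into one histogram purely to keep the sketch short.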

(16)

16


Objective:

Advance the science base for trustworthiness by developing concepts, relationships, and laws with predictive value.

• Modeling: characterize systems, threats, and desired properties for current and future systems
• Composition: develop principles for explaining when security schemes compose, and how to achieve compositionality
• Measurement: quantify design improvements and the relative strength of defensive mechanisms; measure the efficacy of development techniques

Science of Cyber Security: Modeling, Composition, and Measurement

John Mitchell, Stanford University

Tel. (650) 723-8634, E-Mail: mitchell@cs.stanford.edu

Scientific/Technical Approach:

Combine modeling and analysis techniques from formal methods and game theory with experimental evaluation yielding quantitative measurement

• Unified approach to system modeling, focusing on security properties and threat models
• Composition based on the assume-guarantee paradigm, tailored to security analysis
• Experiments evaluating security design and evaluation practices, with surprising results
• Collaborative research developing foundational frameworks, explored using case studies

Accomplishments/Highlights:

• Information-theoretic and computational modeling of quantitative information flow
• Adaptive mitigation of timing channels with sublinear leakage
• Modeled human users of secure systems using finite-state approximations and game-theoretic incentives
• Usable and secure password schemes, based on training
• Measured the security effectiveness of code reviewers, developers, and tools; in many cases, diversity matters more than experience

Quantitative modeling of information leakage:

Measure                  | Operational characterization
Shannon entropy          | How much information leaks?
Probability of guessing  | How likely is the adversary to guess a secret correctly?
Guessing entropy         | How long does it take the adversary to guess a secret, on average?
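For reference, the three measures in the table have standard textbook definitions (notation assumed here: X is the secret, with outcome probabilities p_1 >= p_2 >= ... in non-increasing order; these are general definitions, not results of the grant):

\[
H(X) = -\sum_i p_i \log_2 p_i \qquad \text{(Shannon entropy: how much information leaks)}
\]
\[
V(X) = \max_i p_i \qquad \text{(probability of guessing the secret in one try)}
\]
\[
G(X) = \sum_i i\, p_i \qquad \text{(guessing entropy: expected number of guesses)}
\]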

(17)

17

Objective:

Develop mathematical foundations for cybersecurity, especially:

• capability to specify and verify ANY cybersecurity policy
• logical foundations of access control
• quantification of privacy

(YIP) Making Cybersecurity Quantifiable

Michael Clarkson, George Washington University

Tel. (202) 994-0718, E-Mail: clarkson@gwu.edu

Scientific/Technical Approach:

• Extend temporal logic verification techniques to work with hyperproperties, which can express all security policies
• Increase trustworthiness of authorization logic through formal semantics and metatheory
• Apply quantitative information flow to measure leakage of personal information

Accomplishments:

• Created the first temporal logic of hyperproperties and built a prototype model checker
• Defined the first formal semantics of access control logic based on beliefs; proved equivalence of belief semantics and Kripke semantics; produced the first machine-checked proof of soundness for an authorization logic

Current Challenges:

• Efficient model checking for all hyperproperties

[Slide diagram: a security policy expressed as a HyperLTL formula and a program are given to a HyperLTL model checker (built on an automata library), which answers YES, the program satisfies the policy, or NO, with a counterexample.]
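As a concrete example of a security policy written as a HyperLTL formula, here is the standard observational-determinism policy from the HyperLTL literature (shown for illustration; the variable names are assumed and the formula is not taken from the slide):

\[
\forall \pi.\; \forall \pi'.\; (\mathit{in}_\pi = \mathit{in}_{\pi'}) \;\rightarrow\; \mathbf{G}\,(\mathit{out}_\pi = \mathit{out}_{\pi'})
\]

Read: any two execution traces that agree on their observable (low) inputs must agree on their observable outputs at every step, so secret data cannot influence what an observer sees. The quantification over pairs of traces is exactly what ordinary LTL cannot express and what the prototype model checker handles.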

(18)

18


Security Policies Today

Confidentiality: "Protection of assets from unauthorized disclosure"

Integrity: "Protection of assets from unauthorized modification"

Availability: "Protection of assets from loss of use"

(19)

19


Safety and Liveness Properties

Intuition [Lamport 1977]:

Safety: "Nothing bad happens"
Liveness: "Something good happens"

Partial correctness (safety) - bad thing: the program terminates with incorrect output
Access control (safety) - bad thing: a subject completes an operation without the required rights
Termination (liveness) - good thing: termination
Guaranteed service (liveness)

(20)

20


Properties

Trace: a sequence of execution states, t = s_0 s_1 ...

Property: a set of infinite traces

Trace t satisfies property P iff t is an element of P; satisfaction depends on the trace alone

System: also a set of traces

System S satisfies property P iff all traces of S are elements of P
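For reference, the standard formalizations behind these slides (Alpern and Schneider's definitions, stated with the notation above; u <= t means the finite trace u is a prefix of t):

\[
S \models P \;\iff\; \mathrm{traces}(S) \subseteq P
\]
\[
P \text{ is a safety property} \;\iff\; \forall t \notin P.\ \exists \text{ finite } u \le t.\ \forall t'.\ (u \le t' \Rightarrow t' \notin P)
\]
\[
P \text{ is a liveness property} \;\iff\; \forall \text{ finite } u.\ \exists t \in P.\ u \le t
\]

That is, a safety violation is always witnessed by a finite "bad prefix" that cannot be repaired, while a liveness property can never be irrecoverably violated by any finite prefix.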

(21)

21


Success!

Alpern and Schneider (1985, 1987):

Theorem. Every property is the intersection of a safety property and a liveness property.

Theorem. Safety is proved by invariance.

Theorem. Liveness is proved by well-foundedness.

Theorem. Topological characterization: safety = closed sets, liveness = dense sets.

(22)

22


Back to Security Policies

Formalize and verify any property?

Formalize and verify any security policy?

(23)

23


Hyperproperties

A hyperproperty is a set of properties [Clarkson and Schneider 2008, 2010]

A system S satisfies a hyperproperty H iff S is an element of H

...a hyperproperty specifies exactly the allowed sets of traces

(24)

24


Hyperproperties

Security policies are hyperproperties!

Information flow: noninterference, relational noninterference, generalized noninterference, observational determinism, self-bisimilarity, probabilistic noninterference, quantitative leakage

Service-level agreements: average response time, time service factor, percentage uptime
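For illustration, observational determinism from the list above can be written directly as a hyperproperty, i.e. as a set of allowed trace sets (the relations \(\approx_{\mathrm{in}}\) and \(\approx_{\mathrm{out}}\), which equate traces agreeing on observable inputs and outputs respectively, are assumed notation; compare the HyperLTL form shown earlier):

\[
\mathit{OD} \;=\; \{\, T \mid \forall t, t' \in T.\ t \approx_{\mathrm{in}} t' \Rightarrow t \approx_{\mathrm{out}} t' \,\}
\]

A system S satisfies OD iff S is an element of OD; no single trace can witness a violation, which is why such policies are not ordinary trace properties.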

(25)

25


Safety and Liveness is (still) a Basis

Theorem. Every hyperproperty is the intersection of a safety hyperproperty and a liveness hyperproperty.

(26)

26


Summary so far...

Theory of hyperproperties:

- Parallels theory of properties
- Safety, liveness (basis, topological characterization)
- Verification: ???
- Expressive completeness

(27)

27


Logic and Verification

Policies are predicates... but in what logic?

Second-order logic suffices. Can we verify second-order logic? We can't! (effectively and completely)

Temporal logic: LTL, CTL*? Highly successful for trace properties, but not for security policies [McLean 1994, Alur et al. 2006]

Let's hyper-ize... with quantification over multiple traces

(28)

28


Stepping Back...

Safety and liveness?

Verification?
- Model checking (expensive)
- Reduce to trace properties (k-safety)
- Refinement (hypersafety)
- Proof system?

...verify by decomposing to safety + liveness?

(29)

29


Summary

A logical foundation for security:

- Hyperproperties: a framework for all security policies
- HyperLTL: a logic for verification of security

Potential impact: a formal, orthogonal, verifiable, and expressively complete basis for cybersecurity

(30)

30


Discovered a new class of HW trojans

Developed a generalized detection methodology for HW trojan circuits

Uncovered attacks on speaker identification, watermark detection, and face recognition

Created the first temporal logic of hyperproperties and built a prototype model checker

(31)

31

Covert Hardware Trojan Horse Implantation

Implanting a covert HTH is accomplished by subverting transitions in the unused state space of the host implementation. This reconfiguration can have minimal impact on host performance metrics.
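A minimal sketch of the implantation idea using a toy finite-state machine: the host specification only ever reaches states A, B, and C, so transitions out of state D are don't-cares that an implanter can repurpose. The state names, inputs, and trigger are illustrative assumptions, not the trojan class discovered in the funded work.

# Toy FSM illustrating a trojan hidden in unused (don't-care) state space.
# States, inputs, and the trigger condition are illustrative assumptions.
HOST_TRANSITIONS = {
    ("A", 0): "B", ("A", 1): "A",
    ("B", 0): "C", ("B", 1): "A",
    ("C", 0): "A", ("C", 1): "B",
    # "D" is unreachable under the specified host behavior; synthesis may treat
    # its transitions as don't-cares, and an implanter fills them in as payload.
    ("D", 0): "D", ("D", 1): "A",
}

def step(state: str, bit: int, trigger: bool = False) -> str:
    """Advance the FSM; the implant jumps into the unused state on a rare trigger."""
    if trigger:               # e.g. a specific rare input pattern chosen by the implanter
        return "D"            # covert payload state entered
    return HOST_TRANSITIONS[(state, bit)]

if __name__ == "__main__":
    state = "A"
    for bit in [0, 0, 1, 0]:
        state = step(state, bit)
    assert state in {"A", "B", "C"}   # normal operation never visits D,
                                      # so functional tests of the host pass unchanged

Because the added transitions live entirely in state space the host never exercises, area, timing, and functional-test behavior for normal operation can remain essentially unchanged, which matches the slide's claim of minimal impact.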

(32)

32


Develop science of cyber security (SOS)

Develop methods to execute the mission securely on insecure systems

Invent theory and methods to discover covert channels, side channels, hidden software, and hidden circuits in hardware
