Projects – Completed

Classification and understanding of attacks is the first step toward developing principled design technologies for resilient cyber-physical systems. Building on existing hybrid-system and stochastic-process theories, the project classifies a broad range of attacks on monolithic and distributed CPS, encompassing traditional attacks on computing infrastructure as well as attacks that exploit dynamical and network characteristics. There are two sides to the proposed classification: first, it provides algorithms for detecting classes of attacks that can be detected with reasonable resources; second, it provides lower-bound results characterizing classes of attacks that cannot be detected and yet can destabilize the system. At the theoretical level, the outcomes from this project will provide new techniques for reasoning about adversaries in switched and hybrid systems, combining indistinguishability-based arguments from distributed computing and control theory. In summary, this project develops the foundations of a security science for hybrid systems in which physics-based processes play principal roles alongside those of computation and communication.
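To illustrate the detection side, here is a minimal sketch of a residual-based detector for a scalar dynamical model. This is an illustrative stand-in, not the project's actual algorithms; the dynamics, threshold, and data are all hypothetical:

```python
def detect_attack(measurements, a=0.9, threshold=0.5):
    """Flag time steps where a measurement deviates from the model
    prediction by more than `threshold` (a simple residual test).
    Assumes the noise-free scalar model x[k+1] = a * x[k]."""
    alarms = []
    x_hat = measurements[0]          # initialize estimate from first sample
    for k, y in enumerate(measurements[1:], start=1):
        x_pred = a * x_hat           # model-based one-step prediction
        residual = abs(y - x_pred)
        if residual > threshold:
            alarms.append(k)         # residual too large: possible attack
            x_hat = x_pred           # distrust the measurement
        else:
            x_hat = y                # accept the measurement
    return alarms

# Nominal decaying trajectory with a constant bias injected from step 5 on.
clean = [0.9 ** k for k in range(10)]
attacked = clean[:5] + [y + 2.0 for y in clean[5:]]
print(detect_attack(attacked))       # -> [5, 6, 7, 8, 9]
# Note: an attack shaped to follow the model dynamics would produce small
# residuals and evade this test, illustrating the undetectable class.
```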

This project aims to define foundational data-driven methodologies, and the related science, for continuous and dynamic monitoring and detector placement, enabling adaptive approaches to mitigating and containing the spread of attacks. These goals must be met in the presence of changes in the underlying infrastructure as well as the growing sophistication of attackers.
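One simple way to frame detector placement is as a coverage problem solved greedily. The sketch below is an assumption-laden illustration (the vantage points, host names, and coverage sets are invented), not the project's methodology:

```python
def place_detectors(coverage, budget):
    """Greedy detector placement: repeatedly pick the candidate location
    whose detector covers the most not-yet-monitored components.
    `coverage` maps a candidate location to the set of components it
    can observe."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(coverage, key=lambda loc: len(coverage[loc] - covered))
        if not coverage[best] - covered:
            break                    # nothing new left to cover
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical infrastructure: which hosts each vantage point can watch.
cov = {
    "edge-router": {"h1", "h2", "h3"},
    "core-switch": {"h3", "h4", "h5", "h6"},
    "dmz-tap":     {"h6", "h7"},
}
chosen, covered = place_detectors(cov, budget=2)
print(chosen)                        # -> ['core-switch', 'edge-router']
```

As the infrastructure changes, re-running the placement adapts the monitoring to the new coverage sets.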

This project explored a framework for characterizing side channels based on end-to-end analysis of the side-channel process. As in covert-channel analysis, we use information-theoretic tools to identify the potential of a worst-case attack, rather than the success of a given ad hoc approach. However, instead of measuring the capacity of an information channel, which presumes optimal coding and thus overestimates the impact of the side channel, we measure the mutual information between the sensitive data and the observations available to the adversary.
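The measurement can be sketched with a plug-in estimator of mutual information over joint samples of (secret, observation). The toy side channel below, where the observation leaks only the parity of a secret byte, is a hypothetical example, not data from the project:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(S; O) in bits from joint samples
    (sensitive_value, observation)."""
    n = len(pairs)
    joint = Counter(pairs)
    s_marg = Counter(s for s, _ in pairs)
    o_marg = Counter(o for _, o in pairs)
    mi = 0.0
    for (s, o), c in joint.items():
        p_so = c / n
        # p_so / (p_s * p_o) simplifies to c * n / (count_s * count_o)
        mi += p_so * log2(c * n / (s_marg[s] * o_marg[o]))
    return mi

# Toy side channel: the observation reveals the parity of a secret byte.
samples = [(s, s % 2) for s in range(256)] * 4
print(round(mutual_information(samples), 3))  # -> 1.0 (exactly one bit leaks)
```

Here the estimate equals one bit: the adversary learns the parity and nothing more, whereas a capacity bound for the same channel could be far more pessimistic.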

The technical merit of the proposal is to bring the mathematical science of importance sampling to bear on critical problems in network security. The work is important because existing tools for validating access-control configurations are inadequate for large systems composed of multiple interacting access-control mechanisms. Our work will provide a basis for assessing how well a system meets global policy objectives, and for comparing different configurations to determine which better meets those objectives. In addition, the sampling approach provides a mathematical basis for assessing the resilience of a system’s access-control mechanisms to intrusions that create connections bypassing their intent. The immediate impact will be an increased “in the field” capability to assess a system’s access-control posture and its resilience to intrusion. The long-term impact is in providing a first basis for an engineering science of access control.
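The core idea of importance sampling can be sketched on a toy version of the problem: estimating the tiny probability that many independent access-control rules fail simultaneously. All parameters (rule counts, failure rates) below are hypothetical, chosen only to show the re-weighting mechanics:

```python
import random

def estimate_breach_prob(n_rules=20, p_fail=0.01, k=5,
                         p_bias=0.25, trials=20000, seed=1):
    """Importance-sampling estimate of the probability that at least
    `k` of `n_rules` independent rules fail at once.  Failures are
    drawn under an inflated probability `p_bias` and re-weighted by
    the likelihood ratio, so the rare event occurs often enough in
    the sample to be estimated accurately."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        fails = sum(rng.random() < p_bias for _ in range(n_rules))
        if fails >= k:
            # Likelihood ratio: P_true(sample) / P_biased(sample).
            w = ((p_fail / p_bias) ** fails *
                 ((1 - p_fail) / (1 - p_bias)) ** (n_rules - fails))
            total += w
    return total / trials

print(f"{estimate_breach_prob():.2e}")
```

Naive Monte Carlo with the same budget would almost never observe this event (its true probability is on the order of 10^-6); the biased sampler sees it constantly and the weights restore an unbiased estimate.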

This project looks at how security metrics that take into account user behavior can be used in the design process. We propose both fundamental work in modeling methodology, formalisms, and solution methods, and practical implementation in software form. When completed, this modeling approach will provide a structured and quantitative means of analyzing cyber security problems whose outcomes are influenced by user, attacker, and system interactions. This capability is a key element in creating a true science of security.

Classical methods for expressing and checking properties cannot capture many relevant global security properties. By developing a probabilistic distributed temporal logic, we can represent important global properties of systems, including distributed attacks, and reason about them probabilistically. We propose to develop the logic along with efficient, scalable methods for reasoning through decentralized monitoring and adaptive sampling. This will give confidence that certain types of global security properties are not violated in systems such as clouds or clusters.
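The sampling side of this idea can be sketched as follows: rather than polling every node, a monitor samples local states and bounds the global violation rate with high confidence. This is a rough illustration under invented parameters (node counts, sample size, confidence level), not the project's logic or algorithms:

```python
import math
import random

def confidence_no_violation(node_states, sample_size=200, seed=7):
    """Sample node states (True = local property holds) and return the
    observed violation rate plus a crude 95% Hoeffding half-width,
    giving probabilistic confidence that the global property
    'few nodes violate' holds without polling every node."""
    rng = random.Random(seed)
    sample = [rng.choice(node_states) for _ in range(sample_size)]
    violation_rate = sample.count(False) / sample_size
    # Hoeffding bound: true rate <= observed + sqrt(ln(2/delta) / (2n))
    half_width = math.sqrt(math.log(2 / 0.05) / (2 * sample_size))
    return violation_rate, violation_rate + half_width

# 10,000 nodes, 1% of which violate their local property.
states = [False] * 100 + [True] * 9900
rate, upper = confidence_no_violation(states)
print(rate <= upper < 0.25)          # -> True: violation rate bounded away
```

Adaptive sampling would then increase `sample_size` only when the upper bound is too loose to certify the property.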

The criticality of the information protection and assurance (IPA) problem has understandably sparked rich intellectual and material investment into finding a solution. Several efforts have centered on understanding, identifying, tolerating, and patching security vulnerabilities at different levels of the electronic system stack for various security attack models. Most of these approaches tend to fall into the “sand-boxing” category, whereby unusual events are sequestered until their potential impacts are identified. Such efforts tend to be directed at well-known threats, and thus require that all existing techniques be revisited as newer attack models emerge.

This project formulated a framework for designing secure computing platforms that treat security infractions as computational errors and employ error-resiliency techniques to tolerate them, while simultaneously providing the user with alert levels based on grading of the severity of the infractions. Our work on stochastic computing has been leveraged to provide a foundation for the framework.
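A minimal, hypothetical sketch of the "infractions as errors" idea: redundant replicas are reconciled by majority vote, and the degree of disagreement doubles as a graded alert level. The function, replica data, and alert labels are all invented for illustration:

```python
from collections import Counter

def vote_with_alert(replicas):
    """Majority vote over redundant replica outputs, grading an alert
    level by how many replicas disagree with the majority (a stand-in
    for grading the severity of an infraction)."""
    counts = Counter(replicas)
    value, agree = counts.most_common(1)[0]
    disagree = len(replicas) - agree
    alert = ["none", "low", "high"][min(disagree, 2)]
    return value, alert

print(vote_with_alert([7, 7, 7]))    # -> (7, 'none')
print(vote_with_alert([7, 7, 9]))    # -> (7, 'low'): tolerated, but flagged
```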

Starting from the premise that cyber attacks have quantifiable objectives, our long-term goal was to develop computational tools that quickly recover the objective associated with any given attack by observing its effects. Knowledge of this objective is critical to threat assessment. It allows us to understand the scope of an attack, to predict how the attack will proceed, and to respond in a systematic and targeted way. Our approach to recovering this objective was based on a new method for solving the problem of “inverse optimal control”. Much of our work focused on establishing the theoretical foundations for this method of solution. In this project we explored its application to threat assessment.
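As a toy illustration of inverse optimal control, consider a scalar linear-quadratic problem: the forward problem maps a cost weight to an optimal feedback gain, and the inverse problem recovers the weight from an observed gain. The system parameters and numbers below are hypothetical and much simpler than the project's setting:

```python
def lqr_gain(q, r=1.0, a=1.1, b=0.5, iters=500):
    """Optimal feedback gain K (with u = -K x) for the scalar system
    x' = a x + b u under stage cost q x^2 + r u^2, computed by
    iterating the discrete Riccati recursion to a fixed point."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

def recover_weight(k_observed, lo=1e-6, hi=1e6):
    """Inverse optimal control by bisection: find the state weight q
    whose optimal gain matches the observed gain (K grows with q)."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5       # bisect in log space over q
        if lqr_gain(mid) < k_observed:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

k_obs = lqr_gain(3.0)                # behavior generated with q = 3
print(round(recover_weight(k_obs), 3))  # -> 3.0: objective recovered
```

Observing the attacker's "gain" (the effect of its actions) and inverting the map recovers the weight it placed on the state, i.e., the attack's objective.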

The security and resilience of modern infrastructures require an interdisciplinary research effort to investigate security mechanisms at each layer and to develop tools for analyzing and designing complex systems that achieve inter- and intra-layer resilience. Game-theoretic approaches allow us to adopt a holistic viewpoint toward designing systems with multi-layer interactions. The science of resilience is a critical component of modern systems engineering. This project is a step toward developing a new, general theory of resilient systems, with potential applications to many emerging complex systems, such as those in health care, biomedicine, the smart power grid, and transportation.
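A small attacker-defender game shows the flavor of the game-theoretic viewpoint. The payoff numbers and action labels below are hypothetical; the closed-form mixing rule applies to any 2x2 zero-sum game without a pure-strategy saddle point:

```python
def mixed_equilibrium(m):
    """Defender's equilibrium mixed strategy and game value for a 2x2
    zero-sum game; m[i][j] is the defender's payoff when the defender
    plays i and the attacker plays j.  Assumes no saddle point in
    pure strategies, so the closed-form mixing rule applies."""
    (a, b), (c, d) = m
    denom = a - b - c + d
    p = (d - c) / denom              # probability of defender action 0
    value = (a * d - b * c) / denom  # expected payoff at equilibrium
    return p, value

# Hypothetical payoffs: rows = patch / monitor, cols = exploit / probe.
p, v = mixed_equilibrium([[3.0, -1.0], [-2.0, 2.0]])
print(round(p, 3), round(v, 3))      # -> 0.5 0.5
```

At equilibrium the defender randomizes so that the attacker is indifferent between its actions, which is precisely the kind of cross-layer hedging a resilient design must quantify.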

This project proposed to develop a common logical framework that accounts for two principal sources of trust in software: digital signatures and explicit proof. Such a framework will allow us to rigorously specify, enforce, and analyze security policies that rely on multiple modes and sources of trust. Based on recent work by the PI and collaborators, the framework will be cast as a modal type theory that comes equipped with a notation for programs and proofs. We exploit this relationship to guarantee composability and usability from the programmer’s point of view.