Research Experience for Undergraduates (REU)

CERIAS REU 2014

The Program…

The CERIAS Information Security REU Program provides undergraduate students the opportunity to work at the forefront of information security research on individual projects. Some project areas include:

A security-enabled interface for humanoid robot-to-robot communication, within a HARMS Model.

Inter-robot communication connects a number of robots working in a common task domain. In the case of mobile, autonomous robots, the communications medium between the robots is wireless and therefore subject to security compromises. To provide a secure interface between robots, a secure layer can be added to a HARMS Model (Humans, software Agents, Robots, Machines, Sensors). In this research project, humanoid robots will be used, and a secure module will be added to the HARMS networking and communication layers, enabling the robots to work as a team while remaining insulated from malicious intrusion or task dereliction caused by external, unsafe influence.
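As a rough illustration of what such a secure layer could provide (the names, key handling, and message format here are simplifying assumptions, not the HARMS design), a shared-key HMAC wrapper lets a receiving robot detect injected or tampered messages on the wireless medium:

```python
import hmac
import hashlib
import json

# Hypothetical sketch: key distribution and replay protection are out of
# scope here; a real deployment would need both.
SHARED_KEY = b"team-secret"  # assumed pre-shared among trusted robots

def wrap(sender, payload):
    """Attach an HMAC tag so receivers can detect tampering or injection."""
    body = json.dumps({"from": sender, "data": payload}, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def unwrap(msg):
    """Return the message if authentic, else None (message rejected)."""
    expected = hmac.new(SHARED_KEY, msg["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return None
    return json.loads(msg["body"])

msg = wrap("robot-A", {"task": "lift"})
assert unwrap(msg)["data"]["task"] == "lift"

# A tampered command is rejected rather than executed:
forged = {"body": msg["body"].replace("lift", "drop"), "tag": msg["tag"]}
assert unwrap(forged) is None
```

Authentication alone does not hide message contents; a full secure module would typically layer encryption on top of this.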

Humor perception and preference varies from person to person, and depends on the personal “sensitivity” to an event that may be joked about. This makes it possible to use humor analysis and appreciation to gauge an unsuspecting individual’s involvement in events in question or events similar to those in question. This study is an early step in this direction, so several possible variations of it will be negotiated to fit a student’s own interests and strengths. What we are particularly interested in is what information is not stated explicitly in a joke because it is supposed to be familiar to the hearer/reader, and what has to be added as new information. There are several information security applications in which humor can be used, such as breaking anonymity and (re)identification of speakers/writers, insider threat and social engineering forensics, discovery of concealed information such as data provenance, and other intelligence and due-diligence activities.

Assessing Risk of Biomedical Research PHI in Public HPC Environments

With the inclusion of the phrase “any other unique identifying number, characteristic, or code” in its description of Protected Health Information (PHI), the Health Insurance Portability and Accountability Act (HIPAA, 1996) endeavors to protect the privacy of human subjects. However, the ambiguity of this phrase, and its possible implications as the state of the art in the life sciences advances, raises concerns about the viability of community/shared High Performance Computing (HPC) resources (e.g., the Extreme Science and Engineering Discovery Environment (XSEDE) project) as “safe” environments in which to conduct “in silico” experiments or computationally perform data analyses. In this project, students will have the opportunity to conduct a risk assessment of shared HPC resources that leverages NIST’s “Guide for Conducting Risk Assessments” (NIST SP 800-30 Rev. 1).
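To give a flavor of the qualitative style of assessment in SP 800-30, the sketch below combines a likelihood level and an impact level into an overall risk level. The five-point scales follow the guide; the particular combination rule used here (overall risk cannot exceed either factor) is a simplifying assumption, not the guide's full lookup table:

```python
# Toy illustration of a SP 800-30-style qualitative risk rating.
# Scales are the guide's five levels; the min() rule is a simplification.
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_level(likelihood, impact):
    li, im = LEVELS.index(likelihood), LEVELS.index(impact)
    # Simplified convention: overall risk cannot exceed either factor.
    return LEVELS[min(li, im)]

# e.g., a likely threat event against data of moderate impact:
print(risk_level("high", "moderate"))  # moderate
```

A real assessment would also enumerate threat sources and events, and would use the guide's own combination tables rather than this shortcut.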

One of the computational areas currently receiving significant attention from researchers and practitioners is the field of data integration and analytics known colloquially as Big Data, and of particular importance to Big Data projects is information visualization. On such projects, information visualization solutions provide the main mechanism for users to consume, interpret, and communicate the findings from the analyses enabled by Big Data, and hence they are critical to the overall success of the projects. Moreover, visualization can serve as a powerful resource to users in that the opportunity to “see” the data may present dimensions and lead to inferences that otherwise would not be accessible to the users. However, the privacy and security dimensions that may be attached to the new insights culled from visualizations have not received a commensurate amount of attention. On this project, we will investigate these dimensions in the context of the leading Big Data Visualization tools and methods, with the goal of helping researchers and practitioners to foresee and address privacy and security concerns related to their Big Data Visualization solutions.

Data spillage is the situation where data of a certain classification is accidentally put on a system of lower classification. NSA has developed a procedure for handling data spillage issues that take place on a Hadoop cloud. The procedure involves determining the nodes that have been touched by the “dirty” data, then taking down, removing, and replacing nodes one by one until the dirty data is gone and the clean data has been replicated to the “clean” nodes. The task entails testing the feasibility and effectiveness of this procedure by setting up and populating a Hadoop database with data, simulating a data spillage issue, and then following the procedure for recovery. The goals of the task are to ensure that 1) it is possible to find which nodes the data has touched; 2) the procedure results in the removal of the dirty data from all nodes; and 3) the nodes that have been removed from the cluster can be used for further forensic examination. Also of interest is anything that would make the procedure more effective or automatable.
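The shape of the procedure can be sketched with a toy in-memory stand-in for replicated block storage (this is a simulation for illustration only, not real Hadoop, and all names are invented): find the nodes the spilled block touched, decommission them one by one, and re-replicate only the clean blocks they held.

```python
import random

class Cluster:
    """Toy stand-in for HDFS-style replicated block storage."""
    def __init__(self, nodes, replication=3):
        self.nodes = {n: set() for n in nodes}  # node -> block ids held
        self.replication = replication

    def write(self, block):
        for n in random.sample(sorted(self.nodes), self.replication):
            self.nodes[n].add(block)

    def touched_by(self, block):
        """Goal 1: find every node the block has touched."""
        return {n for n, held in self.nodes.items() if block in held}

    def decommission(self, node, dirty=()):
        """Remove a node, re-replicating its clean blocks elsewhere."""
        blocks = self.nodes.pop(node)
        for b in blocks:
            if b in dirty:
                continue  # never re-replicate spilled data
            if len(self.touched_by(b)) < self.replication:
                spares = [n for n in self.nodes if b not in self.nodes[n]]
                if spares:
                    self.nodes[random.choice(spares)].add(b)
        return blocks  # removed node is kept for forensic examination

random.seed(1)
c = Cluster([f"node{i}" for i in range(6)])
for b in ["clean-1", "clean-2", "dirty-1"]:
    c.write(b)

# Recover: decommission every node the dirty block touched, one by one.
for n in sorted(c.touched_by("dirty-1")):
    c.decommission(n, dirty={"dirty-1"})

assert c.touched_by("dirty-1") == set()                       # goal 2
assert c.touched_by("clean-1") and c.touched_by("clean-2")    # clean data survives
```

On a real cluster, the analogous steps would involve HDFS block-location reporting and DataNode decommissioning rather than this toy model.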

Side channel analysis of a secure communication system is an interesting technique for gaining valuable information not from the substance of the implemented cryptosystem itself, but rather from its implementation or usage characteristics. This type of cryptanalysis challenges cryptosystem designers to pay special attention to potential information leakage from cryptosystems’ practical implementations in various communication environments. The significance of the intelligence that can be derived from an encrypted system emphasizes the need for collaborative development of both encryption techniques and conversational communication protocols, in order to achieve an overall security level that minimizes potential information leakage to adversaries. This project will evaluate some current side channel analysis techniques in order to identify practical security improvements suitable for current VoIP networking implementations. The obtained results may be used as input for further communication protocol adaptation that prevents side channel analysis of encrypted voice streams and raises the overall security level of VoIP-based communication.
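One well-known VoIP side channel of this kind is packet length: with a variable-bit-rate voice codec, encryption hides packet contents but not packet sizes, so an observer may match an encrypted stream against known length profiles. The sketch below illustrates the idea with invented data and a deliberately simple distance metric (sum of absolute differences); real attacks use far more sophisticated models:

```python
# Hypothetical length profiles (bytes per packet) for two known phrases.
PROFILES = {
    "phrase-A": [52, 54, 61, 60, 52, 49],
    "phrase-B": [49, 48, 50, 63, 64, 62],
}

def identify(observed):
    """Guess which known phrase an observed length trace came from."""
    def dist(profile):
        return sum(abs(a - b) for a, b in zip(profile, observed))
    return min(PROFILES, key=lambda p: dist(PROFILES[p]))

# Encryption changed the payloads, not the sizes: a slightly noisy
# capture of phrase-A is still matched to phrase-A.
captured = [52, 55, 60, 60, 51, 49]
print(identify(captured))  # phrase-A
```

Countermeasures evaluated in this space typically involve padding or length-concealing protocol changes, which is exactly the kind of protocol adaptation the project description mentions.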

An exciting property of human language and information processing activity is our ability to aggregate separate statements into larger chunks of information that frequently recur, are familiar, and become associated with standard routines, such as going to a restaurant or boarding a plane. Current computational implementations of language and information processing do not yet have this capability. We will look at various examples of such chunks of information, often referred to as scripts, figure out how we, humans, recognize them after seeing just a phrase or two, and think about programming the computer to do the same.
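A first, naive baseline for this recognition task (the scripts and trigger vocabularies below are invented for illustration, and real script recognition requires much richer semantics) is to score a phrase by its overlap with each script's characteristic vocabulary:

```python
# Illustrative trigger vocabularies for two scripts.
SCRIPTS = {
    "restaurant": {"menu", "waiter", "order", "tip", "table"},
    "air travel": {"boarding", "gate", "seatbelt", "luggage", "takeoff"},
}

def recognize(phrase):
    """Return the script whose vocabulary best overlaps the phrase, if any."""
    words = set(phrase.lower().split())
    scores = {name: len(words & vocab) for name, vocab in SCRIPTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(recognize("the waiter brought the menu"))  # restaurant
```

The interesting research questions begin where this baseline fails: paraphrase, implicit cues, and statements that evoke a script without using any of its standard words.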

Virtual server platforms include virtual switches, integral to the hypervisor, that facilitate inter-VM network traffic. One consequence of this architecture is that this traffic is not visible to other network management elements, which complicates intrusion detection and other security management functions. Two approaches are under development to remedy this situation: Open vSwitch and Virtual Ethernet Port Aggregator (VEPA). This task will compare and contrast these two standards from a security management perspective (re-enabling visibility and control of this traffic) and from the standpoint of market viability.

This research project will engage students in developing an information security survey instrument using a risk-based approach based on security standards (ISO, COBIT, NIST), legal requirements (HIPAA, FISMA, state privacy laws, etc.), and industry rules (PCI-DSS) to help data stewards understand compliance requirements while assessing the potential threats to unencrypted data.

The threat associated with public cloud providers differs greatly from that facing a legacy application behind an enterprise boundary. This task will summarize the threats applicable to a public cloud instantiation and compare them to the security model offered by the provider.

The output will be a security usage guide. While hacking against a public cloud provider is expressly NOT included in this task, comments on the completeness and robustness of the security model, based upon advertised and observed characteristics, shall be included in the final deliverable, with an emphasis on actual hands-on observation of the cloud offerings. Amazon EC2 and Google Docs are of interest.

The study is a continuation of ongoing work on determining how new and informative a new text is in relation to the ones already processed by the system. It is part of a computational semantic approach to developing computer applications that closely emulate human language and information processing ability. The possibility of developing an Internet search that is much more intelligent than Google, because it actually understands the user’s query rather than just searching for character strings, is quite exciting, and this is only one of the numerous ways, many of them cyber security related, that an advanced notion of informativeness can improve our understanding and emulation of the human mind. We will compare new texts to old ones in order to discover what exactly makes a new text seem informative in relation to some old texts but not others. Our purpose will be to formulate computational rules and regularities that are useful for the task.
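A crude baseline for the informativeness question (the approach below is an illustrative assumption, not the project's method, which aims at genuine semantic understanding rather than string statistics) is to score a new text by how dissimilar its word distribution is from every text already seen:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty(new_text, old_texts):
    """1.0 = nothing in common with any old text; near 0.0 = near duplicate."""
    new_c = Counter(new_text.lower().split())
    sims = [cosine(new_c, Counter(t.lower().split())) for t in old_texts]
    return 1.0 - max(sims, default=0.0)

old = ["the cat sat on the mat", "dogs chase cats"]
print(round(novelty("the cat sat on the mat", old), 2))  # 0.0
print(novelty("quantum cryptography lecture", old))      # 1.0
```

The project's point is precisely that such surface measures miss the mark: a paraphrase scores as novel while a genuinely new fact stated in familiar words scores as old, which is why semantic rules and regularities are needed.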

Smart meters for the power grid are beginning to play a key role in the monitoring and analysis of power delivery. They will provide many points of power flow control, data measurement, and customer interface. A typical utility will eventually have millions of smart meters in service. Therefore, cyber-security and privacy have also become critical aspects of smart meter deployment. We are developing an architecture that will allow utilities to receive vital information on the power grid without being flooded with unnecessary raw data, and to secure this information network as simply as practical for utility concerns. A beneficial result of this architecture will be to incorporate privacy and/or anonymity for power customers, while balancing the utility’s need to read and control individual meters. Specific tasks may include: developing aggregating algorithms, developing new cryptographic methods and code, and preparing a new smart meter test facility in which to evaluate these approaches.
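One known flavor of privacy-preserving aggregating algorithm (shown here only as an illustration of the design space, not as the project's actual architecture) is additive masking: each meter adds a random mask to its reading, the masks are constructed to sum to zero, and the utility recovers the exact neighborhood total without seeing any individual reading.

```python
import random

MODULUS = 2**32  # arithmetic is done modulo a fixed modulus

def masked_readings(readings):
    """Mask each reading; the masks sum to 0 mod MODULUS."""
    masks = [random.randrange(MODULUS) for _ in range(len(readings) - 1)]
    masks.append((-sum(masks)) % MODULUS)
    return [(r + m) % MODULUS for r, m in zip(readings, masks)]

def aggregate(masked):
    """The masks cancel, leaving the true total."""
    return sum(masked) % MODULUS

readings = [12, 30, 7, 21]                 # individual meter readings (kWh)
masked = masked_readings(readings)
assert aggregate(masked) == sum(readings)  # total recovered exactly
```

In practice the mask construction would be distributed among the meters (no single party may know all masks), and the scheme would be combined with authentication so the utility can still read and control individual meters when needed.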

Other Resources for Students

Use this website to find programs such as undergraduate summer research opportunities, graduate fellowships, and postdoctoral positions, as well as resources and materials pertaining to recruitment, retention, and mentoring.