Game Theory to Combat Evasive Adversaries
Recently, the arms race between defensive measures and adversaries over the security of digital systems has gained significant pace in favor of adversaries. The digitalization of our personal information and financial assets has made them vulnerable to threats throughout the world. It is no longer surprising to see news about data breaches at world-wide organizations, even though such organizations are expected to be secure against cyber threats by deploying state-of-the-art defense mechanisms. However, attacks are becoming more sophisticated and more stealthy, with far wider attack surfaces, and signature-based defense mechanisms can no longer defend against such advanced threats effectively. The adaptation of attacks against existing defense measures, as well as brand-new attacks, has made it necessary to consider zero-day vulnerabilities in digital systems and to take precautions beyond signature-based defenses. In the following (selected) papers, we provide a cohesive analytical framework to break the vicious cycle of the cat-and-mouse-like interaction in computer security through a game-theoretical lens that anticipates the attacks' adaptation:

Deception-as-Defense Framework
Cyber-physical systems, which couple physical and cyber components, e.g., process control systems, robotics, the smart grid, and autonomous vehicles, have introduced new and distinct challenges, e.g., security-related ones, for control system design. Due to the asymmetry of information, how information flows between attackers and the control system (including its defense mechanisms) plays a significant role in their success or failure. For example, attackers need system-related information in order to learn the system dynamics, to design the best (or successful) attacks, and to evade intrusion detection systems. Therefore, defenders can strategically filter the system-related information to control the attacker's perception so that they can detect and mitigate the attacks. In the following (selected) papers, we introduce a secure sensor design framework that strategically filters the sensors' outputs, and we address the resiliency of control systems prior to the detection of any infiltration into the controllers:
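As a toy illustration of why filtering a sensor's output can blunt an attacker (not the formulation in the papers above, and with made-up numbers): for a scalar Gaussian state observed through a linear sensor y = gain·x + w, attenuating the gain and injecting designed noise provably raises the attacker's minimum mean-square estimation error.

```python
def mmse_error(var_x, gain, var_noise):
    """Attacker's minimum mean-square error when estimating a zero-mean
    Gaussian state x (variance var_x) from y = gain * x + w, where w is
    zero-mean Gaussian noise with variance var_noise."""
    cov_xy = gain * var_x
    var_y = gain**2 * var_x + var_noise
    return var_x - cov_xy**2 / var_y  # var(x) - cov(x,y)^2 / var(y)

# Undistorted sensor output: the attacker estimates x almost perfectly.
e_plain = mmse_error(var_x=1.0, gain=1.0, var_noise=0.01)
# Strategically filtered output: attenuated gain plus designed noise.
e_filtered = mmse_error(var_x=1.0, gain=0.5, var_noise=0.5)
```

Here e_filtered is roughly 0.67 versus about 0.01 for the plain output, i.e., the filtered sensor reveals far less about the state to an eavesdropping attacker.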

Deceptive Signaling Framework
Data-driven engineering applications, e.g., machine learning and artificial intelligence, build on data, i.e., information. This implies that information has tremendous power over decision making and, correspondingly, that information providers have influential power over decision makers. Importantly, an information provider can be deceptive, benefiting from strategically filtered information while the decision maker suffers. To deceive the decision maker, the information provider must anticipate the decision maker's reaction while facing a trade-off between deceiving at the current stage and retaining the ability to deceive in future stages. In the following (selected) papers, we address how a deceptive information provider can filter the information to control the decision maker's decisions:
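A stylized numerical sketch (invented for illustration, not the model in the papers): a sender observes the state x that the receiver wants to track and a private variable z that the sender wants tracked instead, commits to a linear signal y = x + r·z, and the receiver best-responds with the MMSE estimate E[x | y]. Sweeping the mixing ratio r shows that strategic blending steers the receiver's action toward z far better than either full disclosure (r = 0) or revealing nothing useful (r → ∞).

```python
import numpy as np

# x: state the receiver wants to track; z: private variable the sender
# wants the receiver to track instead. Both are i.i.d. standard Gaussian.
# Sender commits to y = x + r*z; receiver best-responds with u = E[x | y].
def sender_cost(r):
    c = 1.0 / (1.0 + r**2)          # E[x | y] = c * y for y = x + r*z
    u_x, u_z = c, c * r             # so u = u_x * x + u_z * z
    return u_x**2 + (u_z - 1.0)**2  # sender's cost E[(u - z)^2]

rs = np.linspace(0.0, 5.0, 501)
costs = np.array([sender_cost(r) for r in rs])
best_r, best_cost = rs[costs.argmin()], costs.min()
# Full disclosure (r = 0) costs 2.0; strategic mixing does much better.
```

The sender's optimal linear policy deliberately correlates the signal with z, anticipating that the receiver will apply its MMSE best response.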

Deception-Proof Mechanisms – Autonomous Intersection Control
Classical traffic lights, even adaptive ones equipped with sensors, limit the efficiency of transportation. In that respect, communication-based intersection control is a novel alternative to classical traffic lights. However, the diversity of drivers' objectives and the possibility of malicious (or selfish) drivers result in non-cooperative multi-agent environments, where incentives play a significant role in the agents' actions. In the following (selected) papers, we address these issues by designing strategy-proof mechanisms from a game-theoretical perspective:
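A canonical strategy-proof building block is the sealed-bid second-price (Vickrey) auction, sketched below for allocating crossing priority at an intersection. The vehicle names and values are hypothetical; this only illustrates strategy-proofness, not the mechanisms developed in the papers.

```python
def vickrey_priority(bids):
    """Allocate crossing priority via a sealed-bid second-price (Vickrey)
    auction: the highest bidder crosses first but pays the second-highest
    bid, which makes truthful reporting a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

# Hypothetical vehicles reporting their values of crossing first.
winner, payment = vickrey_priority({"car_A": 5.0, "car_B": 3.0, "car_C": 8.0})
# car_C wins and pays 5.0, the second-highest reported value.
```

Because a vehicle's payment does not depend on its own bid, misreporting can only lose the slot or overpay, which removes the incentive for selfish manipulation.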

Sensor Fusion – Distributed Online Learning (and Optimization)
A network of agents equipped with monitoring, processing, and communication modules brings a new dimension to information processing applications. For example, in remote sensing applications, each sensor can monitor certain phenomena, process the measurements, and enhance its processing performance by communicating with other sensors. In energy-harvesting sensor networks, distributed processing algorithms with limited computational complexity and communication load are desirable, since computation and communication can consume a substantial amount of power. In the following (selected) papers, we specifically address the issues related to excessive communication load:
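A minimal sketch of limiting communication load in distributed estimation, assuming a standard diffusion-LMS-style scheme: each sensor adapts locally at every step but exchanges estimates with its neighbors only every K steps. The network size, ring topology, and parameter values are made up for illustration and are not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])   # common parameter every sensor estimates
n_sensors, n_steps, mu, K = 3, 2000, 0.02, 10

# ring topology: each sensor averages with its two neighbors when it communicates
neighbors = {i: [(i - 1) % n_sensors, (i + 1) % n_sensors]
             for i in range(n_sensors)}
W = np.zeros((n_sensors, 2))     # row i: sensor i's current estimate

for t in range(n_steps):
    # adapt: each sensor takes a local LMS step on its own noisy measurement
    for i in range(n_sensors):
        u = rng.standard_normal(2)
        d = u @ w_true + 0.1 * rng.standard_normal()
        W[i] += mu * (d - u @ W[i]) * u
    # combine: exchange estimates only every K steps to limit communication
    if t % K == 0:
        W = np.array([(W[i] + W[neighbors[i][0]] + W[neighbors[i][1]]) / 3.0
                      for i in range(n_sensors)])

worst_err = np.linalg.norm(W - w_true, axis=1).max()
```

Even with a tenfold reduction in message exchanges, every sensor's estimate still settles near the true parameter, since the occasional combine step keeps the local estimates from drifting apart.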

Adaptive Filtering (Prediction and Estimation) Theory
Adaptive filtering (prediction and estimation) has been studied extensively in the literature and is used widely in industry due to its adaptability (or flexibility) to changes in problem parameters and its scalability to large-scale problems. Various algorithms can yield superior or inferior performance depending on the specifics of the problem, e.g., stationarity, computational complexity, or large-scale data. In the following (selected) papers, we specifically address issues related to the stability and robustness of adaptive algorithms while seeking an improved trade-off between the adaptation (i.e., convergence) rate and the steady-state performance:
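The convergence-rate versus steady-state trade-off can be seen in a few lines with the plain LMS algorithm: a larger step size mu converges faster but settles at a higher error floor. The parameters below are arbitrary illustration choices, not values from the papers.

```python
import numpy as np

def lms_weight_error(mu, n_steps=5000, seed=1):
    """Identify a scalar weight w_true with LMS; return the average squared
    weight error over the first 100 steps (transient) and the last 1000
    steps (steady state)."""
    rng = np.random.default_rng(seed)
    w_true, w = 1.0, 0.0
    sq_err = np.empty(n_steps)
    for t in range(n_steps):
        u = rng.standard_normal()                       # input sample
        d = u * w_true + 0.05 * rng.standard_normal()   # noisy desired signal
        w += mu * (d - u * w) * u                       # LMS update
        sq_err[t] = (w - w_true) ** 2
    return sq_err[:100].mean(), sq_err[-1000:].mean()

early_fast, steady_fast = lms_weight_error(mu=0.2)    # aggressive step size
early_slow, steady_slow = lms_weight_error(mu=0.005)  # conservative step size
# Large mu: short transient but higher error floor; small mu: the opposite.
```

This is exactly the tension the papers target: an adaptive algorithm would ideally combine the fast transient of the large step size with the low steady-state error of the small one.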

Muhammed O. Sayin is currently pursuing the Ph.D. degree in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC). He received the B.S. and M.S. degrees in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey, in 2013 and 2015, respectively. His research interests include game theory, mechanism design, cyber-physical systems, security, and stochastic control.

Email: sayin2@illinois.edu

Recent News

Our paper titled “Hierarchical multi-stage Gaussian signaling games in noncooperative communication and control systems” has been accepted for publication in Automatica.