ADVISE

Mobius supports multiple modeling formalisms, including ADversary VIew Security Evaluation (ADVISE). ADVISE allows modelers to create and analyze a specific adversary profile in an executable state-based security model of a system (11LEM02). Using graphical primitives, ADVISE provides a high-level modeling formalism to easily create the adversary profile and the executable state-based security model of the system under consideration.

ADVISE Primitives

ADVISE models consist of five primitive objects: attack steps, accesses, knowledges, skills, and goals. Attack steps represent the actions taken by the adversary. Accesses, knowledges, and skills represent the accesses, knowledge, and skills of the adversary, respectively. Goals represent the goals (or flags) of the adversary.

Attack Step

Attack steps represent the actions taken by the adversary. Attack steps are represented graphically as rectangles. An attack step is fully described by its inputs, outputs, and properties. An attack step can have inputs from accesses, knowledges, skills, and goals. It can also have outputs to the same four types of elements (accesses, knowledges, skills, and goals). An attack step also contains four properties: attack cost, attack execution time, preconditions, and outcomes.

The attack cost of an attack step represents the relative cost of the attack step with respect to the adversary. For example, if an attack step is difficult or expensive (such as hacking into a locked vault door), then the attack cost will be relatively large. If the attack step is easy or inexpensive (such as opening an unlocked door), then the attack cost will be relatively small. Since an adversary is more likely to take the easiest route to the goal, she will likely choose an attack step with a lower attack cost over one with a higher attack cost.

The attack execution time of an attack step represents the relative time it will take to complete the attack step.
For example, if an attack step will take a long time to complete (such as downloading several terabytes of logs), then the attack execution time will be relatively large. If the attack step will not take much time to complete (such as downloading a few kilobytes of a small subset of the logs), then the attack execution time will be relatively small. Since an adversary is more likely to take the quickest route to the goal, she will likely choose an attack step with a shorter attack execution time over one with a longer attack execution time.

The preconditions of an attack step represent the conditions that must hold before the adversary is able to attempt the attack step. The preconditions of an attack step are closely related to the inputs of the attack step since the inputs of the attack step provide the state variables of the model that can be used in the conditional expression. For example, if the attack step requires a certain access and level of skill to attempt, then the inputs of the attack step would be that access and that skill, and the precondition of the attack step would be a conditional expression such as \verb|return (access1 && (skill1 > 0.7));|. As a more concrete example, suppose the attack step is to pick the lock on a safe. The access in this case would be having close proximity to the lock, and the skill would be lock-picking.

The outcomes of an attack step represent the outcomes that occur if the attack step is successfully completed. The outcomes of an attack step are closely related to the outputs of the attack step since the outputs of the attack step provide the state variables of the model that can be modified depending on the resulting outcome. Every attack step has one or more outcomes, each with a probability of occurring. The probabilities of an attack step's outcomes always sum to 1. Each outcome also has an associated detection probability, which represents the probability that the adversary will be detected if that outcome occurs. Since an adversary likely wants to avoid detection, she will likely choose an attack step whose outcomes have a lower weighted detection probability over one whose outcomes have a higher weighted detection probability.

Access

Accesses represent the relevant accesses the adversary may have (or eventually gain) in the executable state-based security model of the system. Accesses are state variables that store whether or not the adversary has the given access. An access may represent a physical access (such as close proximity to a target or having a key to a relevant door in the model) or a more abstract access (such as administrator privileges in a target machine). Accesses are represented graphically as purple squares.

Knowledge

Knowledges represent the relevant knowledge the adversary may have (or eventually gain) in the executable state-based security model of the system. Knowledges are state variables that store whether or not the adversary has knowledge of the given information. Knowledge may represent a fact such as the adversary knowing a certain password of a target system or the type of encryption that is used between two target nodes. Knowledges are represented graphically as green circles.

Skill

Skills represent the relevant skills the adversary may have (or sometimes may eventually gain) in the executable state-based security model of the system. Skills are state variables that store the extent to which the adversary is skilled in that given area. For example, a certain adversary may have a lock-picking skill of 0.7 which could mean that she is adept at lock-picking, but not yet a master of the skill. Skills are represented graphically as blue triangles.

Goal

Goals (or flags) represent what the adversary is trying to ultimately achieve. Goals are state variables that indicate whether or not the adversary has accomplished the goal yet. Goals can represent achievements such as successfully accessing sensitive information, shutting down a target system, or escaping from a bank with stolen jewels. Goals are represented graphically as gold ovals.

Editor

This section describes the ADVISE atomic model formalism, with emphasis on the creation, editing, and manipulation of atomic models using the Mobius ADVISE editor.
The ADVISE atomic model formalism is composed of two parts. The first is the Attack Execution Graph (AEG), which is composed of several related sets: these are Access Domains, Knowledge Items, Skills, Steps, and Goals. The Steps represent attack steps an attacker would execute in order to gain new knowledge or access or to achieve new goals. Steps have preconditions which can depend on the state of access, knowledge, skills, and goals as depicted in the graph. The second component of an ADVISE model is an adversary profile. An adversary profile defines the qualities and interests of an attacker. This profile determines the relative importance of costs, payoffs, and detection avoidance to the adversary. The profile also determines what skills the adversary possesses and at what level of proficiency. Finally, the profile specifies what initial access and knowledge the adversary has and what goals (with what amount of payoff) the adversary is interested in.

Attack Execution Graph (AEG) Editor

When a new ADVISE Atomic model is created, the first thing that should be defined is the Attack Execution Graph. This is done using the editor displayed on the right of the window shown above. The left pane of the window (A) contains the graph palette, which contains items for Access, Knowledge, Skills, Attack Steps, Goals, and Connections (directed arcs). The user should click on the item in the palette that they want to add to the graph. Next, choose a place for the new item on the canvas (B). The canvas will automatically expand as the diagram grows. Users can connect two items with an arc by clicking on the connection item in the palette, then clicking the source item on the canvas followed by the target item.

The palette can be hidden by clicking on the Hide Palette Button (C). To view the palette again, a similar button can be clicked, or the border of the palette can be clicked and the palette will temporarily pop out until something is selected from it.

The cursor item in the palette will switch the mouse cursor back to its normal selection mode. While in this mode, individual components can be selected and then moved around the canvas. Selected items can be deleted from the graph by right clicking them while they are selected and clicking on the Delete menu item in the context menu.

The Adversary Editor allows the user to define various attributes of the adversary. The naming fields (A) are the same as those described in the Node Details view. The planning horizon (B) determines the number of steps into the future an adversary considers when determining the attractiveness of each of the next steps available to them at some point in time. The attack preference weights (C) are used by the adversary decision algorithm to express the relative importance to them of cost, detection, and payoff. The values in the attack preference weights (C) must sum to 1. The future discount factors (D) are used to discount the value of future attack steps. This is sometimes necessary to keep an adversary from procrastinating by repeatedly choosing the Do Nothing step because a highly attractive step lies several steps down the road.

The adversary skills (E) are the set of skills the adversary possesses. Each skill is given a proficiency value between 0 and 1000. The proficiency value is available for use in the AEG in any of the code expressions, e.g. a step precondition may require the adversary to have a proficiency of a skill above 200. The add/remove buttons (F) can be used to add AEG elements to the adversary profile or remove them from it. The initial access (G) is the access the adversary possesses at time point 0. The initial knowledge (H) is the knowledge the adversary possesses at time point 0. The goals (I) are the goals of interest for the adversary. Each goal has an associated payoff value, which is used by the adversary decision algorithm to determine the attractiveness of attack steps.

See the QEST paper for a detailed description of how the adversary decision algorithm works and what role these attributes play.

All fields of the adversary editor (including skill proficiencies and goal payoffs) can contain constant values, global variables, or any other in-line C++ expressions.

Edit

The Attack Cost Section

The Attack Cost section contains a code text box in which the user should enter an expression that returns the cost value for attempting this attack step. Like all code expressions, the model state can be accessed by referencing the code names of model components. This allows the attack cost to be state dependent. This code expression must explicitly return a value using the C++ return statement. In the example shown to the right, cost has been set to the fixed value 30.

The Attack Execution Time Section

The Attack Execution Time section contains a form that allows the user to define a time distribution for generating execution times of the attack step. First, the distribution type (A) should be chosen. Once the desired type is selected, a set of tabs (B) will appear containing the parameters required for the distribution, e.g. the normal distribution requires a mean and a variance parameter. The user should enter a code expression (C) for each of the parameters in the distribution. These code expressions must explicitly return a value using the C++ return statement.

The Preconditions Section

The Preconditions section contains a code expression box (B) for the user to define a code expression that must return a boolean value indicating whether or not the preconditions for attempting the attack step have been met. This code expression must explicitly return a value using the C++ return statement. The list (A) above the code expression box (B) is the list of items that have an incoming arc to the attack step. If the user double clicks on one of the items above, the code name for that item will be inserted at the current position in the code expression box (B). The variable name is a pointer to the Mobius state variable object that stores the underlying data value. To access that data value, the Mark() member function of the variable must be called and used in standard C++ expressions. For example, one could set the value of one variable to be the value of another plus one, like so:

The Outcomes Section

The Outcomes section defines a set of possible outcomes when an attack step is performed, e.g. Failure, Success. The quantity of outcomes can be changed using the Number of Outcomes field (A). If the quantity is increased, a new outcome will be added to the selected outcome drop down (B). If the quantity is decreased, the last items in the drop down list will be deleted. The selected outcome drop down (B) selects which outcome should be edited in the Outcome Details section below. The Name field (C) allows the user to rename the currently selected outcome. The outcome probability (D) is a code expression that defines the probability that this outcome will be chosen from the set of outcomes of the attack step when the step is executed. The detection probability (E) is a code expression that defines the risk of detection, which is used by the adversary to decide its next move. The Effects code box (G) allows the user to define what effects this outcome will have on the model when this outcome is chosen. This code expression is unlike almost every other code expression in that it does not need to return any value. The Available Objects list (F) shows any item that is connected by an arc to the attack step where the attack step is the source node. If the user double clicks an item in the list, the code name of the item will be pasted into the current position in the code box below it.

The Do Nothing Step

Every attack execution graph has a special, hidden attack step called the Do Nothing step. This step represents the behavior of an adversary when they decide that the wisest option is not to do anything for some amount of time. The details of this step can be edited by the user by clicking on the Edit -> Do Nothing Step menu item. The Node Details view will appear and display the details for the Do Nothing step. Certain properties of the Do Nothing step cannot be changed. Specifically, the name and the preconditions cannot be changed. This is necessary because the preconditions must always be true, i.e. the Do Nothing step must always be a possible option. However, the user may choose to charge the adversary some cost for doing nothing, specify how long the adversary's choice of doing nothing takes, and even alter the state of the model as a result of doing nothing.

When an item on the graph is selected, the Node Details view will appear and various attributes of the item can be defined. For Access, Knowledge, Skills, and Goals, the only attributes are the name and code name (see the next paragraph for an explanation of these fields).

When an attack step is selected on the canvas, the Node Details view contains significantly more information, as attack steps require more details to specify. The Description field (A) is the human readable name for the selected object. This name can contain spaces and special characters. The Code Name field (C) defines the name for this object when it needs to be referenced in a code segment, e.g. in the preconditions code of an attack step. This field must contain a variable name that adheres to the rules for standard C++ variable names. The user may select the Use Default Code Name checkbox, and the editor will automatically generate an acceptable Code Name based on the name in the Description field. Alternatively, the user may want a more truncated or different variable name for this object. The rest of the node details (D), (E), (F), and (G) (which are not included when an item other than a step is selected) are broken up into collapsible sections. These sections can be expanded and collapsed by clicking on the section titles.