1.8.6. Types of Access Control Systems for File Systems

1.9. Pranksters

1.9.1. Hackers who play tricks on others but do not intend to inflict any long-lasting harm.

2. Threats

2.1. Application threats

2.1.1. Buffer overflows

2.1.2. Covert channel

2.1.2.1. Timing channel

2.1.2.2. Storage channel

2.1.3. Data remanence

2.1.4. Dumpster diving

2.1.5. Eavesdropping

2.1.6. Emanations

2.1.7. Hackers

2.1.8. Impersonation

2.1.9. Internal intruders

2.1.10. Loss of processing capability

2.1.11. Malicious code

2.1.12. Masquerading/man-in-the-middle attacks

2.1.13. Mobile code

2.1.14. Object reuse

2.1.15. Password crackers

2.1.16. Physical access

2.1.17. Replay

2.1.18. Shoulder surfing

2.1.19. Sniffers

2.1.20. Social engineering

2.1.21. Spoofing

2.1.22. Spying

2.1.23. Targeted data mining

2.1.24. Trapdoor

2.1.25. Tunneling

2.2. Transmission Threats

2.2.1. Passive attacks

2.2.1.1. Involve monitoring or eavesdropping on transmissions.

2.2.2. Active attacks

2.2.2.1. Involve some modification of the data transmission or the creation of a false transmission.

2.2.3. Denial-of-Service (DoS)

2.2.3.1. Occurs when an attacker prevents legitimate users from accessing a service, e.g. by sending invalid data that confuses the server software and causes it to crash, or by exhausting the server's resources.

2.2.3.2. Examples

2.2.3.2.1. E-mail spamming

2.2.3.2.2. Distributed Denial-of-Service

2.2.3.2.3. Ping of Death

2.2.3.2.4. Smurf

2.2.3.2.5. SYN Flooding

2.2.3.3. Backhoe transmission loss

2.2.3.3.1. A backhoe cuts into the cabling system carrying transmission links

2.2.3.3.2. Smart pipes provide damage-detection information: if a cable were damaged, the smart pipe could determine the type of damage and its physical position, and transmit a damage-detection notification.

2.2.4. Distributed Denial-of-Service (DDoS)

2.2.4.1. Requires the attacker to control many compromised hosts, which overload a targeted server with packets until the server crashes.

2.2.4.2. A zombie is a computer infected with a daemon/system agent without the owner’s knowledge and subsequently controlled by an attacker.

2.2.4.3. Clients: TFN2K

2.2.4.4. Fixes

2.2.5. Ping of Death

2.2.5.1. Fixes

2.2.6. Smurfing

2.2.6.1. Fixes

2.2.7. SYN Flooding

2.2.7.1. Fixes

2.3. Malicious Code Threats

2.3.1. Virus

2.3.2. Worms

2.3.3. Trojan Horse

2.3.4. Logic Bomb

2.3.5. Fixes

2.3.5.1. Antivirus

2.3.5.2. Awareness

2.4. Password Threats

2.4.1. An unauthorized user attempts to steal the file that contains a list of the passwords.

2.4.2. Users may create weak passwords that are easily guessed.

2.4.3. Social engineering can be used to obtain passwords

2.4.4. Sniffers can be used to intercept a copy of the password as it travels from the client to the authentication mechanism.

2.4.5. Trojan horse code can be installed on a workstation that will present an unauthorized login window to the user.

2.4.6. Hardware or software keyboard intercepts can be used to record all data typed into the keyboard
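Several of the password threats above (stealing the password file, sniffing, weak passwords) are blunted by never storing or comparing plaintext passwords. A minimal sketch using PBKDF2 from Python's standard library; the iteration count and function names are illustrative, not a production recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); a stolen file yields only salted, iterated hashes."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # constant-time comparison resists timing side channels
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

The per-user random salt defeats precomputed dictionary attacks against a stolen password file; the high iteration count slows brute-force guessing.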

3. Top Level

3.1. Accountability

3.2. Access Controls

3.2.1. Discretionary Access Control

3.2.2. Mandatory Access Control

3.3. Lattices

3.4. Methods of Attack

3.4.1. Malicious Code

3.4.1.1. Virus

3.4.1.2. Worm

3.4.1.3. Trojan

3.4.1.4. Logic Bomb

3.4.1.5. Trap Doors

3.4.2. Denial of Service

3.4.2.1. Resource Exhaustion

3.4.2.1.1. Fork Bomb

3.4.2.1.2. Flooding

3.4.2.1.3. Spamming

3.4.3. Cramming

3.4.3.1. Buffer Overflow

3.4.3.1.1. Stack Smashing

3.4.3.2. Specifically crafted URLs

3.4.4. Brute Force

3.4.5. Remote Maintenance

3.4.6. TOC/TOU

3.4.6.1. Time of Check

3.4.6.2. Time of Use

3.4.6.3. Exploits time-based vulnerabilities (the race window between the time a condition is checked and the time it is used)
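The TOC/TOU race above can be sketched in a few lines: the state verified at time of check can change before time of use, so the check's answer is stale. The file path and the simulated attacker action are illustrative:

```python
import os
import tempfile

# Set up a file the victim intends to read.
path = os.path.join(tempfile.mkdtemp(), "secret.txt")
with open(path, "w") as f:
    f.write("data")

# Vulnerable pattern: check, then use.
if os.access(path, os.R_OK):          # time of check: file is readable
    os.remove(path)                   # simulated attacker wins the race here
    try:
        data = open(path).read()      # time of use: the check's answer is stale
    except FileNotFoundError:
        data = None

# Safer pattern: skip the separate check, act directly, handle failure.
try:
    with open(path) as f:
        data2 = f.read()
except FileNotFoundError:
    data2 = None
```

The safer pattern still fails here (the file is gone), but it fails atomically at the point of use instead of acting on a stale check.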

3.4.7. Interrupts

3.4.7.1. Faultline Attacks

3.4.7.2. Exploits hardware vulnerabilities

3.4.8. Code alteration

3.4.8.1. Root kits

3.4.8.2. When someone has altered your code

3.4.9. Inference

3.4.9.1. Learning something through analysis

3.4.9.2. Traffic analysis

3.4.10. Browsing

3.4.10.1. Sift through large volumes of data for information

3.5. Overview

3.5.1. Controlling who can do what

3.5.2. Access Controls protect CIA

3.5.3. Access Controls reduce Risk

3.6. Threats to Access Control

3.6.1. User distrust of biometrics

3.6.1.1. Order of Acceptance

3.6.1.1.1. Voice Pattern

3.6.1.1.2. Keystroke Pattern

3.6.1.1.3. Signature

3.6.1.1.4. Hand Geometry

3.6.1.1.5. Hand Print

3.6.1.1.6. Finger Print

3.6.1.1.7. Iris

3.6.1.1.8. Retina Pattern

3.6.2. Misuse of privilege

3.6.3. Poor administration knowledge

3.7. Current Practices

3.7.1. Implement MAC if possible

3.7.2. Use third party tools in RBAC

3.7.2.1. for NDS and AD

3.7.3. Layered defences

3.7.4. Tokens

3.7.5. Biometrics

4. Access Control Measures

4.1. Preventive

4.1.1. Try to prevent attacks from occurring

4.1.2. Can be partially effective with Defence in Depth

4.1.3. Not always effective

4.1.4. Works with Deterrent measures

4.1.5. Examples

4.1.5.1. Physical

4.1.5.1.1. Fences

4.1.5.1.2. Guards

4.1.5.1.3. Alternate Power Source

4.1.5.1.4. Fire Extinguisher

4.1.5.1.5. Badges, ID Cards

4.1.5.1.6. Mantraps

4.1.5.1.7. Turnstiles

4.1.5.1.8. Limiting access to physical resources through the use of bollards, locks, or alarms

4.1.5.2. Administrative

4.1.5.2.1. Policies and procedures

4.1.5.2.2. Security awareness training

4.1.5.2.3. Separation of duties

4.1.5.2.4. Security reviews and audits

4.1.5.2.5. Rotation of duties

4.1.5.2.6. Procedures for recruiting and terminating employees

4.1.5.2.7. Security clearances

4.1.5.2.8. Background checks

4.1.5.2.9. Alert supervision

4.1.5.2.10. Performance evaluations

4.1.5.2.11. Mandatory vacation time

4.1.5.3. Technical

4.1.5.3.1. Access control software, such as firewalls, proxy servers

4.1.5.3.2. Anti-virus software

4.1.5.3.3. Passwords

4.1.5.3.4. Smart cards/biometrics/badge systems

4.1.5.3.5. Encryption

4.1.5.3.6. Dial-up callback systems

4.1.5.3.7. Audit trails

4.1.5.3.8. Intrusion detection systems (IDSs)

4.1.6. Firewalls

4.1.6.1. Packet Filtering

4.1.6.1.1. Decision based on IP and Port

4.1.6.1.2. Does not know state

4.1.6.1.3. very fast

4.1.6.2. Stateful

4.1.6.2.1. Knows whether an incoming packet belongs to an established connection

4.1.6.2.2. Unknown packets discarded

4.1.6.3. Proxy

4.1.6.3.1. Slow

4.1.6.3.2. Never a direct connection from the outside to the internal host; the proxy terminates and re-originates each connection
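The difference between the first two firewall types above can be sketched in a few lines: a packet filter decides on address and port alone, while a stateful filter remembers outbound flows and admits only their return traffic. Addresses, ports, and the rule format are illustrative:

```python
# Stateless packet filter: allow inbound traffic to the web server only.
ALLOW_RULES = {("10.0.0.5", 80)}

def packet_filter(dst_ip, dst_port):
    """Decides on IP/port alone; knows nothing about connection state."""
    return (dst_ip, dst_port) in ALLOW_RULES

class StatefulFilter:
    """Remembers outbound flows and admits only their expected replies."""
    def __init__(self):
        self.state = set()

    def outbound(self, src, dst):
        self.state.add((dst, src))       # expect a reply from dst back to src

    def inbound(self, src, dst):
        return (src, dst) in self.state  # unknown packets are discarded
```

A proxy firewall would go further still, terminating each connection and opening a new one on the client's behalf, at the cost of speed.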

4.1.7. Network Vulnerability Scanner

4.1.7.1. Nessus

4.1.7.2. GFI LanGuard

4.1.7.3. ISS

4.1.7.4. NAI

4.1.8. Vulnerability Assessment

4.1.8.1. Scanning key servers

4.1.8.2. Looks for common known vulnerabilities

4.1.9. Penetration Tests

4.1.9.1. Simulates an attacker trying to break in

4.1.9.2. Finds weaknesses

4.1.9.3. Only as good as the attacker

4.1.9.4. Does not provide a comprehensive view

4.1.9.5. Usually done after a Vulnerability Assessment

4.1.10. Security Assessment

4.1.10.1. Comprehensive view of Network Security

4.1.10.2. Analyzes entire network from inside

4.1.10.3. Creates a complete list of risks against critical assets

4.2. Detective

4.2.1. Assumes Attack is Successful

4.2.2. Tries to detect AFTER an attack occurs

4.2.3. Time critical when an attack is occurring

4.2.4. Examples

4.2.4.1. Physical

4.2.4.1.1. Motion Detectors

4.2.4.1.2. CCTV

4.2.4.1.3. Smoke Detectors

4.2.4.1.4. Sensors

4.2.4.1.5. Alarms

4.2.4.2. Administrative

4.2.4.2.1. Audits

4.2.4.2.2. Regular performance reviews

4.2.4.2.3. Background Investigations

4.2.4.2.4. Force users to take leaves

4.2.4.2.5. Rotation of duties

4.2.4.3. Technical

4.2.4.3.1. Audits

4.2.4.3.2. Intrusion Detection Systems

4.2.5. Intrusion Detection Systems

4.2.5.1. Pattern Matching

4.2.5.2. Anomaly Detection

4.3. Other

4.3.1. Deterrent

4.3.1.1. Discourages security violations (works with Preventive measures)

4.3.1.2. Examples

4.3.1.2.1. Administrative

4.3.1.2.2. Physical

4.3.1.2.3. Technical

4.3.2. Compensating

4.3.2.1. Provide alternatives to other controls

4.3.3. Corrective

4.3.3.1. Reacts to an attack and takes corrective action for data recovery

4.3.4. Recovery

4.3.4.1. Restores the operating state to normal after an attack or system failure

4.4. Areas of Application

4.4.1. Administrative

4.4.2. Physical

4.4.3. Technical

5. Techniques

5.1. Access Management

5.1.1. Account Administration

5.1.1.1. Most important step

5.1.1.2. Verifies individual before providing access

5.1.1.3. Good time for orientation/training

5.1.2. Maintenance

5.1.2.1. Review Account data

5.1.2.2. Update periodically

5.1.3. Monitoring

5.1.3.1. Logging

5.1.3.2. Review

5.1.4. Revocation

5.1.4.1. Prompt revocation

5.2. Access Control Modes

5.2.1. Information Flow

5.2.1.1. Manages access by evaluating system as a whole

5.2.1.2. Emphasizes garbage in, garbage out

5.2.1.3. Closely related to Lattice

5.2.1.3.1. Assigned classes dictate whether an object being accessed by a subject can flow into another class

5.2.1.4. Defined:

5.2.1.4.1. A type of dependency that relates two versions of the same object, and thus transformation of one state into another, at successive points in time.

5.2.1.5. the tuple

5.2.1.5.1. subject

5.2.1.5.2. object

5.2.1.5.3. operation

5.2.1.6. related to access models

5.2.1.6.1. In the lattice model, one security class is given to each entity in the system. A flow relation among the security classes is defined to denote that information in one class (s1) can flow into another class (s2).

5.2.1.6.2. In the mandatory model, the access rule (s,o,t) is specified so that the flow relation between the subject (s) and the object (o) holds. Read and write are the only forms of operation (t) considered.

5.2.1.6.3. In the role-based model, a role is defined as a set of operations on objects. The role represents a function or job in the application. The access rule binds a subject to roles.

5.2.2. State Machine

5.2.2.1. Example: Authentication

5.2.2.1.1. Unauthenticated

5.2.2.1.2. Authentication Pending

5.2.2.1.3. Authenticated

5.2.2.1.4. Authorization Pending

5.2.2.1.5. Authorized

5.2.2.2. Captures the state of a system at a given point of time

5.2.2.3. Monitors changes introduced after the initial state

5.2.2.3.1. By chronology

5.2.2.3.2. By Event
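The authentication example above can be sketched as a tiny state machine: the states follow the outline, while the event names and transition table are illustrative. Any event not explicitly permitted from the current state is rejected, which is exactly how a state-machine model constrains changes after the initial state:

```python
# Allowed transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("Unauthenticated", "submit_credentials"): "Authentication Pending",
    ("Authentication Pending", "credentials_valid"): "Authenticated",
    ("Authentication Pending", "credentials_invalid"): "Unauthenticated",
    ("Authenticated", "request_access"): "Authorization Pending",
    ("Authorization Pending", "access_granted"): "Authorized",
    ("Authorization Pending", "access_denied"): "Authenticated",
}

def step(state, event):
    """Move to the next state, rejecting any transition not in the table."""
    next_state = TRANSITIONS.get((state, event))
    if next_state is None:
        raise ValueError(f"illegal transition: {event} from {state}")
    return next_state
```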

5.2.3. Covert Channels

5.2.3.1. Information flows from higher to lower classifications

5.2.3.2. Can be introduced deliberately

5.2.3.3. Cannot be eliminated entirely, only limited and audited

5.2.3.4. Uses normal system resources to signal information

5.2.3.5. Additional reading

5.2.3.5.1. Sans Reading Room

5.2.3.5.2. ucsb.edu
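A covert timing channel as described above can be illustrated in a few lines: the sender never transmits data directly, but modulates how long a shared operation takes, and the receiver recovers bits from timing alone. The 20 ms delay and 10 ms threshold are arbitrary illustrative values:

```python
import time

def send_bit(bit):
    """Sender: busy-loop for ~20 ms to signal a 1, return at once for a 0."""
    start = time.perf_counter()
    if bit:
        while time.perf_counter() - start < 0.02:
            pass
    return time.perf_counter() - start

def receive(elapsed, threshold=0.01):
    """Receiver: infer the bit purely from the observed elapsed time."""
    return 1 if elapsed > threshold else 0

message = [1, 0, 1, 1, 0]
received = [receive(send_bit(b)) for b in message]
```

Note that only normal system behavior (elapsed time) carries the information, which is why such channels can only be narrowed and audited, not removed outright.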

5.2.4. Non-Interference

5.2.4.1. Based on variations in the input there should be no way to predict the output

5.2.4.2. Each input processing path should be independent and have no internal relationships

6. Systems and Methodologies

6.1. Mandatory (MAC)

6.1.1. All data has classification

6.1.2. All users have clearances

6.1.3. All clearances centrally controlled and cannot be overridden

6.1.3.1. Users cannot change security attributes at request

6.1.4. Subjects can only access objects if they have the right access level (clearance)

6.1.5. Also known as Lattice Based Access Control (LBAC)

6.1.6. Examples of MAC

6.1.6.1. Linux

6.1.6.1.1. RSBAC Adamantix Project

6.1.6.1.2. SELinux (by the NSA)

6.1.6.1.3. LIDS

6.1.6.2. eTrust CA-ACF2

6.1.6.3. Multics-based Honeywell

6.1.6.4. SCOMP

6.1.6.5. Pump

6.1.6.6. Purple Penelope

6.1.7. Strengths

6.1.7.1. Controlled by system and cannot be overridden

6.1.7.2. Not subject to user error

6.1.7.3. Enforces strict controls on multi security systems

6.1.7.4. Helps prevent information leakage

6.1.8. Weaknesses

6.1.8.1. Protects only information in Digital Form

6.1.8.2. Assumes the following:

6.1.8.2.1. Trusted users/administrators

6.1.8.2.2. Proper clearances have been applied to subjects

6.1.8.2.3. Users do not share accounts or access

6.1.8.2.4. Proper physical security is in place

6.2. Discretionary (DAC)

6.2.1. User can manage

6.2.1.1. Owners can change security attributes

6.2.2. Administrators can determine access to objects

6.2.3. Examples of DAC

6.2.3.1. Windows NT4.0

6.2.3.2. Most *NIX versions

6.2.3.3. Win2K can be included when the context is limited to files and folders

6.2.4. Strengths

6.2.4.1. Convenient

6.2.4.2. Flexible

6.2.4.3. Gives users control

6.2.4.3.1. Ownership concept

6.2.4.4. Simple to understand

6.2.4.5. Software Personification

6.2.5. Weaknesses

6.2.5.1. No distinction between users and programs

6.2.5.1.1. Processes are user surrogates

6.2.5.1.2. Processes can change access

6.2.5.1.3. DAC generally assumes access is subject to the user's arbitrary discretion

6.2.5.2. Higher possibility of unintended results

6.2.5.2.1. Open to malicious software

6.2.5.2.2. Errors can lead to great damage

6.2.5.2.3. No protection against even unsophisticated attacks

6.3. Non-Discretionary

6.4. Role based (RBAC)

6.4.1. Assigns users to roles or groups based on organizational functions

6.4.2. Groups given authorization to certain data

6.4.3. Centralized Authority

6.4.4. Database Management

6.4.5. Based on Capabilities

6.4.6. Access rights established for each role

6.4.7. Examples of RBAC

6.4.7.1. Database functionality

6.4.7.1.1. Adjusting the schema

6.4.7.1.2. Default Sorting Order

6.4.7.1.3. Ability to Query (Select)

6.4.7.2. Microsoft Roles

6.4.7.2.1. Data Reader

6.4.7.2.2. Data Writer

6.4.7.2.3. DENY Data Reader

6.4.7.2.4. DENY Data Writer
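The RBAC scheme above (users assigned to roles, roles granted rights to data) can be sketched as two tables and one check; the role and permission names below are illustrative, loosely echoing the database roles listed:

```python
# Permissions attach to roles; users acquire permissions only via membership.
ROLE_PERMS = {
    "data_reader": {"select"},
    "data_writer": {"select", "insert", "update"},
    "dba": {"select", "insert", "update", "alter_schema"},
}
USER_ROLES = {
    "alice": {"data_reader"},
    "bob": {"data_writer"},
}

def is_allowed(user, permission):
    """A user may perform an operation iff some assigned role grants it."""
    return any(permission in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))
```

Centralizing rights in `ROLE_PERMS` is the point: changing a job function's access means editing one role, not every user.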

6.5. Rule-Based (RSBAC)

6.5.1. Actions based on Subjects operating on Objects

6.5.2. Based on the Generalized Framework for Access Control by Abrams and LaPadula

6.6. List Based (Access Control Lists)

6.6.1. Associates lists of users and their privileges with each object

6.6.2. Each object has a list of default privileges for unlisted users

6.7. Token Based

6.7.1. Associates a list of objects and their privileges with each User

6.7.2. Opposite of List Based
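The "opposite" relationship between list-based and token-based control can be shown with one policy expressed both ways: an ACL hangs a subject list off each object, while a capability list hangs an object list off each subject. The subjects, objects, and rights below are illustrative:

```python
# ACL view: each object carries the subjects allowed to access it.
ACL = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "memo.txt": {"bob": {"read", "write"}},
}

def acl_allows(subject, obj, right):
    return right in ACL.get(obj, {}).get(subject, set())

# Capability (token) view: each subject carries the objects it may access.
# Here it is mechanically derived from the ACL, flipping the lookup direction.
CAPS = {}
for obj, entries in ACL.items():
    for subject, rights in entries.items():
        CAPS.setdefault(subject, {})[obj] = rights

def cap_allows(subject, obj, right):
    return right in CAPS.get(subject, {}).get(obj, set())
```

Both views answer every access question identically; they differ in which question is cheap ("who can touch this object?" vs. "what can this user touch?").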

6.8. New Implementations

6.8.1. Context Based Access Control (CBAC)

6.8.1.1. XML Data Restrictions

6.8.1.2. Quotas

6.8.1.3. Preceding actions

6.8.2. Privacy Aware RBAC (PARBAC)

7. Access Control Models

7.1. Lattice

7.1.1. Deals with Information Flow

7.1.2. Formalizes network security models

7.1.3. Shows how information can or cannot flow

7.1.4. Drawn as a graph with directed arrows

7.1.5. Properties of a Lattice

7.1.5.1. A set of elements

7.1.5.2. A partial Ordering relation

7.1.5.3. The property that any two elements must have unique least upper bound and greatest lower bound
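The three lattice properties above can be made concrete with labels of the form (classification level, category set): one label dominates another iff its level is at least as high and its categories are a superset, and the least upper bound of any two labels always exists as (max level, union of categories). The level names and categories are illustrative:

```python
# Partial ordering over (level, categories) security labels.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(a, b):
    """a >= b in the lattice: higher-or-equal level AND superset of categories."""
    (la, ca), (lb, cb) = a, b
    return LEVELS[la] >= LEVELS[lb] and ca >= cb

def join(a, b):
    """Least upper bound: the smallest label dominating both inputs."""
    (la, ca), (lb, cb) = a, b
    level = la if LEVELS[la] >= LEVELS[lb] else lb
    return (level, ca | cb)

lub = join(("Secret", frozenset({"crypto"})),
           ("Confidential", frozenset({"nuclear"})))
```

Note that the ordering is only partial: two labels with incomparable category sets dominate neither each other, yet their join still exists.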

7.2. Confidentiality: Bell-LaPadula

7.2.1. Deals with confidentiality

7.2.2. Two Key principles

7.2.2.1. No Read Up (Simple Security Property)

7.2.2.2. No Write Down (*-Property)

7.2.2.2.1. Prevents write-down Trojans from declassifying data

7.2.3. Also: Strong *-Property

7.2.3.1. No read down

7.2.3.2. No write up

7.2.3.3. Can only act on a single level

7.2.4. Tranquility Properties

7.2.4.1. Weak Tranquility:

7.2.4.1.1. Security labels of subjects never change in such a way as to violate the defined security policy

7.2.4.2. Strong tranquility property:

7.2.4.2.1. Labels never change during system operation
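The two key Bell-LaPadula rules can be sketched as a toy reference monitor: reads are allowed only at or below the subject's clearance (no read up), writes only at or above it (no write down). The level names are illustrative:

```python
LEVELS = {"Public": 0, "Secret": 1, "Top Secret": 2}

def may_read(subject_level, object_level):
    """Simple Security Property: a subject may not read above its clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    """*-Property: a subject may not write below its level,
    so a write-down Trojan cannot leak data to a lower classification."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

Under the Strong *-Property both checks would become equality, confining the subject to a single level.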

7.3. Integrity: Biba

7.3.1. Deals with integrity

7.3.2. Opposite of BLP

7.3.2.1. No read down

7.3.2.2. No write up

7.3.3. Two key principles

7.3.3.1. Simple Integrity Property

7.3.3.1.1. A subject cannot read data of a lower integrity level than its own (no read down)

7.3.3.2. * (Star) Integrity Property

7.3.3.2.1. A subject cannot write data to a higher integrity level than its own (no write up)

7.3.4. Developed by Ken Biba in 1975
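Biba's rules invert the BLP comparisons: don't read lower-integrity (possibly tainted) data, don't write into higher-integrity data. A minimal sketch with illustrative integrity levels:

```python
INTEGRITY = {"Low": 0, "Medium": 1, "High": 2}

def read_ok(subject_level, object_level):
    """Simple Integrity: no read down - lower-integrity data could taint the subject."""
    return INTEGRITY[object_level] >= INTEGRITY[subject_level]

def write_ok(subject_level, object_level):
    """*-Integrity: no write up - the subject may not corrupt higher-integrity data."""
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]
```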

7.4. Commercial: Clark-Wilson

7.4.1. Deals with Integrity

7.4.2. Adapted for Commercial use

7.4.3. Two Properties

7.4.3.1. Internal Consistency

7.4.3.1.1. Properties of the internal state of the system

7.4.3.2. External Consistency

7.4.3.2.1. Relation of the internal state of a system to the outside world

7.4.4. Separation of Duties

7.4.5. Rules

7.4.5.1. Integrity Monitoring (certification)

7.4.5.1.1. Notions

7.4.5.2. Integrity Preserving (enforcement)

7.4.5.2.1. How the integrity of constrained data items is maintained

7.4.5.2.2. Subjects' identities are authenticated

7.4.5.2.3. Triples are carefully maintained

7.4.5.2.4. Transformation procedures are executed serially, not in parallel

7.4.6. Triples

7.4.6.1. subject

7.4.6.2. program

7.4.6.3. object
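The triples above are the enforcement core of Clark-Wilson: a subject may touch a constrained data item only through a certified transformation procedure, and only when that exact (subject, program, object) binding is authorized. A minimal sketch with illustrative subject, TP, and CDI names:

```python
# Authorized (subject, transformation procedure, constrained data item) triples.
TRIPLES = {
    ("clerk", "post_entry", "ledger"),
    ("auditor", "read_ledger", "ledger"),
}

def run_tp(subject, tp, cdi):
    """Mediate every access: no direct object access, only via authorized triples."""
    if (subject, tp, cdi) not in TRIPLES:
        raise PermissionError(f"{subject} may not run {tp} on {cdi}")
    return f"{tp} executed on {cdi}"
```

Separation of duties falls out naturally: the clerk can post entries but cannot hold the auditor's triple for reviewing them.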

7.5. Others

8. Identity, Authentication, and Authorization

8.1. Identity and Authentication are not the same thing

8.1.1. Identity is who you say you are

8.1.2. Authentication is the process of verifying your Identity

8.2. Identity

8.2.1. User Identity enables accountability

8.2.2. Positive Identification

8.2.3. Negative Identification

8.2.4. Weak in terms of enforcement

8.3. Authentication

8.3.1. Validates Identity

8.3.2. Involves stronger measures than identification

8.3.3. Usually requires a key piece of information only the user would know