Abstract

SURFdrive is a private cloud service created by SURF for Dutch higher education and research. It was developed to provide a secure environment where users can save, synchronize and share their files. SURFdrive is based on the enterprise version of ownCloud, an actively maintained open-source platform which can be used to create your own cloud. However, due to the open-source nature of the platform, anybody with malicious intentions can use the source code to actively look for zero-day vulnerabilities within ownCloud and target clouds built on it. Because currently existing solutions do not suffice in detecting zero-day attacks, this paper presents a novel way of using anomaly detection to detect the effects of zero-day exploits on the collective usage of an ownCloud server, and thereby to detect the exploits themselves. The implementation presented in this paper makes use of the hooks available in the ownCloud code to provide detailed usage statistics. The results are promising, and we also give some recommendations on how to extend this research to be even more useful.

1 Introduction

SURFdrive is a private cloud service created by SURF for Dutch higher education and research. It was developed by SURF after Dutch universities requested a secure platform where students and employees could store and share data, while keeping that data under their - or SURF's - control. SURFdrive as currently implemented provides a secure environment in which students, employees and researchers of affiliated universities can save, synchronize and share files. SURFdrive is hence an alternative to public cloud services such as Dropbox: it stores all data exclusively on Dutch servers controlled by SURF, and users keep ownership of the data they store in SURFdrive [1].

SURFdrive is based on (the enterprise version of) ownCloud (https://owncloud.org), an open-source - AGPLv3 licensed - self-hosted file sync and share server. It provides a platform to view, sync and share data across devices and includes an API to extend its functionality [2]. ownCloud comes in two editions, a Community and an Enterprise edition. Both share a common codebase, but the Enterprise edition has some extra features, including a file firewall, Single Sign-On (SSO) via Shibboleth/SAML and additional logging and auditing features. The SSO feature helps to integrate ownCloud into existing infrastructure and is also used in SURFdrive [3]. The Enterprise edition is licensed under the ownCloud Commercial License [4].

However, since ownCloud is an open-source project, anybody has access to the source code (https://github.com/owncloud) and could use this information to search for (zero-day) security vulnerabilities and exploits within ownCloud. All known security vulnerabilities found in ownCloud are published online. These vulnerabilities are labeled - with CVE Identifiers - using the CVE system, which makes it easier for organizations to share information about security vulnerabilities and their impact. All security vulnerabilities found are also rated using the CVSS system, which produces a score between zero and ten denoting the severity of the vulnerability.

The National Institute of Standards and Technology (NIST) has made a mapping between CVSS scores and a qualitative severity ranking for its U.S. government repository of standards-based vulnerability management data, the NVD [5]. This ranking consists of three levels - Low, Medium and High - each denoting the severity. Using this mapping, all publicly known ownCloud vulnerabilities - from 2012 until now - can be put into one of these three ranks, as seen in figure 1.1.

Figure 1.1: A histogram showing the number of publicly known ownCloud security vulnerabilities - from 2012 until now - rated using the NIST NVD Vulnerability Severity Rating.

Of the ninety-six vulnerabilities found in total, seventy-six are rated as Medium, twelve are rated as Low and the remaining eight are rated as High. The number of (severe) vulnerabilities found can be a reason for concern when trying to create a secure (private) ownCloud installation. All ownCloud-related security issues reported to ownCloud via responsible disclosure are - after they have been verified - fixed before the vulnerability is made public and added to the CVE database. So keeping an installation up-to-date ensures those vulnerabilities pose no threat. However, as stated before, anyone has access to the source code of ownCloud, and people with malicious intentions might use this information to attack and break into (private) ownCloud installations.
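To make the severity mapping behind figure 1.1 concrete, the sketch below translates a CVSS base score into the NVD qualitative ranking. It is a minimal PHP illustration - not part of the SURFdrive or ownCloud code - using the NVD CVSS v2 thresholds: 0.0-3.9 is Low, 4.0-6.9 is Medium and 7.0-10.0 is High.

<?php
// Map a CVSS v2 base score to the NVD qualitative severity ranking.
// Thresholds: 0.0-3.9 Low, 4.0-6.9 Medium, 7.0-10.0 High.
function cvssToSeverity($score)
{
    if ($score < 0.0 || $score > 10.0) {
        throw new InvalidArgumentException('CVSS scores range from 0.0 to 10.0');
    }
    if ($score < 4.0) {
        return 'Low';
    }
    if ($score < 7.0) {
        return 'Medium';
    }
    return 'High';
}

// Example: tally the severity ranks for a list of scores.
$counts = array('Low' => 0, 'Medium' => 0, 'High' => 0);
foreach (array(2.1, 5.0, 7.5, 4.3) as $score) {
    $counts[cvssToSeverity($score)]++;
}
print_r($counts); // Array ( [Low] => 1 [Medium] => 2 [High] => 1 )

Tallying the ninety-six published ownCloud scores this way reproduces the three bars of figure 1.1.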

Protecting against this kind of attack, one using zero-day vulnerabilities, is much harder due to the nature of a zero-day vulnerability: by definition there is no patch available to fix it. During the initial research it became clear that currently existing solutions to prevent attacks, such as ModSecurity (discussed in section 2.2), work very efficiently by whitelisting static resources and blocking any access to other resources. This will prevent any zero-day attack which uses unexpected parameters to create a URL, e.g. a directory traversal by appending a path to some parameter in the URL. However, unexpected but legal behavior, such as logging in from two places at the same time, is not detected, even though it can be an indication that something bad is going on. Since detecting those attacks in ownCloud is, to the best knowledge of the author of this paper, not the subject of any previous research, the subject of the paper changed from the prevention of zero-day attacks to the detection of zero-day attacks against an ownCloud installation. The method presented in this paper combines a static whitelisting approach using ModSecurity with a novel approach of using ownCloud core hooks (hooks are functions that can be used by third party developers to ensure additional code is executed when a specific event is triggered in ownCloud's core code) to gather statistical information about how the ownCloud service is being used. Combined with anomaly detection, this information can then be used to detect unusual - possibly malicious - activities which would not have been detected by using the whitelisting approach alone.

1.1 Organisation

This paper is structured in the following way: Section 1.2 gives an overview of anomaly detection. Section 1.3 describes the software components which are used to visualize and analyze the input data. Section 1.4 states the research question. Chapter 2 presents the related work on using anomaly detection to detect (unknown) web-based attacks and describes some existing solutions. Chapter 3 describes the data model used in this paper. The steps to go from input data to useful metrics that can be analyzed are described in chapter 4. The results are shown in chapter 5. In chapter 6 a conclusion is given, and chapter 7 describes recommendations and future work.

1.2 Anomaly detection

According to [6], anomaly detection refers to "the problem of finding patterns in data that do not conform to expected behavior". These patterns are the so-called anomalies or outliers. Finding anomalies in a dataset can be very useful, as the anomalies often translate to actionable information and

highlight problems or events that would otherwise have gone unnoticed [6]. Therefore anomaly detection is applied in many different domains, such as (credit card) fraud detection and (network) intrusion detection.

Types of anomalies

Anomalies, as defined by [6], are "patterns in data that do not conform to a well defined notion of normal behavior". Anomalies can be divided into three categories, which describe the nature of the anomaly [6]. The categories are described below.

Point anomalies

A subset x of dataset D is a point anomaly if x is not part of one (or more) of the subsets Ni ⊆ D which define normal behavior. Figure 1.2, taken from [6], shows an example of a two-dimensional dataset which has two subsets N1 and N2, called normal regions (the normal regions contain most of the data points from the dataset, so they define the normal behavior). The points o1 and o2 and the points in region O3 are point anomalies because they are not contained in any of the normal regions.

Figure 1.2: An example of point anomalies. Regions N1 and N2 are normal regions. The points o1, o2 and points in region O3 are point anomalies since they are not part of the normal regions.
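As an illustration of this definition, the sketch below - not taken from [6], and simplified to one-dimensional data - flags points that lie far outside the single normal region formed by the bulk of the data, using a plain z-score test.

<?php
// Flag point anomalies in a one-dimensional dataset: any point more
// than $threshold standard deviations away from the mean lies outside
// the normal region and is reported as a point anomaly.
function pointAnomalies(array $data, $threshold = 2.0)
{
    $n = count($data);
    $mean = array_sum($data) / $n;

    $variance = 0.0;
    foreach ($data as $x) {
        $variance += ($x - $mean) * ($x - $mean);
    }
    $std = sqrt($variance / $n);

    $anomalies = array();
    foreach ($data as $i => $x) {
        if ($std > 0 && abs($x - $mean) / $std > $threshold) {
            $anomalies[$i] = $x; // index => anomalous value
        }
    }
    return $anomalies;
}

// Example: 50 lies far outside the normal region around 10.
print_r(pointAnomalies(array(9, 10, 11, 10, 9, 50, 10, 11)));
// Array ( [5] => 50 )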

Contextual anomalies

A subset x of dataset D is a contextual anomaly if the value of x lies within a normal region, but due to a specific condition the value becomes anomalous. So in this case the value itself is not anomalous; it is the combination of the value and a specific condition that makes it an anomaly. An example is shown in figure 1.3, taken from [6]: the temperature measured at time t1 (winter) is the same as the temperature measured at time t2 (summer). However, at time t2 the context is different - it is summer - so the low temperature would be considered a contextual anomaly.

Figure 1.3: An example of a contextual anomaly. At time t1 it is winter, so a low temperature is expected. At time t2 it is summer, and although the temperature is the same as at time t1, the context is different and the temperature at time t2 is a contextual anomaly.
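A contextual check can be sketched in the same style. In the hypothetical example below (not taken from [6]), each temperature is compared only against measurements sharing its context - here, the calendar month - so a value that is normal for the dataset as a whole can still be flagged for its month.

<?php
// Flag contextual anomalies: a measurement is compared only against
// the other measurements in the same context (here: the month),
// not against the dataset as a whole.
function contextualAnomalies(array $samples, $threshold = 2.0)
{
    // $samples is a list of array('month' => int, 'value' => float).
    $byMonth = array();
    foreach ($samples as $s) {
        $byMonth[$s['month']][] = $s['value'];
    }

    $anomalies = array();
    foreach ($samples as $i => $s) {
        $values = $byMonth[$s['month']];
        $mean = array_sum($values) / count($values);
        $variance = 0.0;
        foreach ($values as $v) {
            $variance += ($v - $mean) * ($v - $mean);
        }
        $std = sqrt($variance / count($values));
        // Normal for the dataset overall, yet anomalous for this month?
        if ($std > 0 && abs($s['value'] - $mean) / $std > $threshold) {
            $anomalies[$i] = $s;
        }
    }
    return $anomalies;
}

// Example: 2 degrees is normal in month 1 (winter) but is flagged
// as a contextual anomaly in month 7 (summer).
$measurements = array(
    array('month' => 1, 'value' => 2),
    array('month' => 1, 'value' => 3),
    array('month' => 7, 'value' => 24),
    array('month' => 7, 'value' => 25),
    array('month' => 7, 'value' => 26),
    array('month' => 7, 'value' => 25),
    array('month' => 7, 'value' => 24),
    array('month' => 7, 'value' => 2),
);
print_r(contextualAnomalies($measurements));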

Collective anomalies

A collection of related data points is a collective anomaly if the data points are - although not anomalous by themselves - anomalous as a collection with respect to the entire dataset. Figure 1.4 shows a human ECG output from [7], an example of a collective anomaly: the low value (highlighted in red) on the ECG persists for a long time. So although the collection is an anomaly, the value of a single point does not have to be anomalous.

Figure 1.4: An example of a collective anomaly. The low (highlighted in red) value exists for a longer than normal period. As a collection, this makes those values a collective anomaly.

1.3 Graphite

As will be explained in chapter 4, an open-source visualization tool called Graphite (https://github.com/graphite-project) will be used to store (to be absolutely correct, Graphite itself will not store the data; that is the job of Whisper) and visualize the gathered input data. Graphite itself consists of three software components, each responsible for a separate task:

Carbon A daemon that acts as the storage backend for Graphite. Its task is to receive data and flush this data to disk. It does not handle the actual storage; that is the task of Whisper.

Whisper The database backend for Graphite. It is a fixed-size database, similar in design to RRD, which stores all data in archives with a different level of detail per archive. Aging thresholds can be configured in the configuration file of Carbon; as the age of the data in Whisper passes such a threshold, the data is recalculated and moved to a new archive with a lower resolution. This makes storing larger datasets very efficient, provided the aging and resolutions are set properly.

Graphite webapp This component is responsible for the actual visualization of the data. It is able to render graphs on demand using a URL API.

Besides Graphite, there is another component involved: the collector program. Because Graphite only visualizes the data it gets fed, it needs this component in order to be useful. The collector program used in this research is StatsD, a daemon that can be used to send arbitrary data to Graphite. It works by opening an interface on which it listens for UDP traffic containing the data. As the data comes in, it is aggregated and sent to Graphite at the interval at which Graphite expects data, e.g. if data gets sent to StatsD at a higher frequency than Graphite expects data points, StatsD will collect the data and send the aggregated result to Graphite at the correct frequency. There are numerous clients written for StatsD in various programming languages. In this research we will be using statsd-php-client (https://github.com/liuggio/statsd-php-client). This seemed like an actively maintained, well tested PHP client for StatsD that was easy to implement [8][9].

1.4 Research question

As explained in the introduction, the research changed from the prevention of zero-day attacks to the detection of zero-day attacks. The research question is now formulated as:

What external techniques can be applied to ownCloud to detect future zero-day exploits, and how can they be implemented?

2 Related work

Section 2.1 describes previous related research on using anomaly detection to protect against zero-day exploits. In section 2.2 some existing solutions are described.

2.1 Related research

A broad overview of the extensive research done on anomaly detection is given by Chandola et al. [6]. The research on detecting anomalies to provide protection against (unknown) web-based malicious activities can be divided by its input data: URL-based anomaly detection and network intrusion-detection based anomaly detection.

URL-based anomaly detection

Kruegel et al. [10] present an intrusion detection system which correlates the parameters in queries done by clients with the server-side programs executing those queries. Using a number of different anomaly detection techniques, the system is then able to detect attacks or misuse through detailed analysis of the application-specific parameters, reducing false positives. The benefit of this technique is that the system, by analyzing the queries, automatically derives the parameter-specific details, e.g. the length of a parameter, and needs no application-specific tuning.

To detect malicious URL messages in instant messaging, Guan et al. [11] presented a novel approach which combines anomalies in URL messages with the behavior of the sender of the messages. Their anomaly detection is able to identify known malicious URL features to speed up detection, while for unknown malicious URLs a scoring model they developed is used to evaluate each anomaly.

Network intrusion-detection based anomaly detection

Bolzoni and Etalle [12] presented an architecture, called APHRODITE, designed to reduce false positives in network intrusion-detection systems. Their system detects anomalies in the outgoing traffic and correlates these anomalies with alerts raised by the intrusion-detection system for incoming traffic. This system can be used with both signature-based network intrusion-detection

systems as well as anomaly-based intrusion-detection systems.

2.2 Existing solutions

SilentDefense Web and ICS

Both are commercial products developed by SecurityMatters, a company created by the researchers who wrote the paper about APHRODITE. SilentDefense is a product that continuously monitors the target network using Deep Protocol Behavior Inspection (DPBI). This technology can analyze and understand network traffic, which can then be used to detect anomalous behavior in the target network. It should perform better and produce fewer false positives than traditional detection methods, including black- and/or whitelisting and anomaly detection. The operation of DPBI consists of three phases: the learning phase, the tuning phase and the detection phase. The self-learning capabilities of the system (the learning phase) avoid having to manually configure the whole system to fit the target network. The tuning phase makes it possible to still adjust the behavior of the system, if desired. The final phase is letting the system monitor the application or network and raise alerts if problems arise [13].

ModSecurity

ModSecurity is a web application firewall. It is able to filter traffic, both incoming and outgoing, and to classify traffic based on customizable rules. An action can then be performed based on the classification, for example blocking traffic classified as malicious [14]. Additional rules for ModSecurity can be found online, both free and commercial. Rules can be fully customized to fit a web application, and when taking a whitelisting approach, i.e. only allowing URLs which are known and conform to the normal behavior of the web application, zero-day exploits can be blocked if they do not conform to the whitelisted behavior. This approach can be used to detect point anomalies; listing 2.1 shows an example of what a ModSecurity rule might look like when whitelisting character ranges for a username. If the username contains any character that is not whitelisted (the point anomaly), the username will not be allowed.

Listing 2.1: An example of a ModSecurity rule used to only allow certain characters in a username

# In this case we only want to allow the character ranges
# a-z (ASCII 97-122) and A-Z (ASCII 65-90)
SecRule ARGS:username "!^[a-zA-Z]+$" "deny"

Besides point anomalies, ModSecurity can also be used to detect collective anomalies, such as brute force password guessing. It can be considered normal behavior that a user sometimes forgets or mistypes his password and

access to his account is denied. However, it is uncommon for a user to try every possible password combination for his account, so as a collection the wrong passwords form an anomaly. Listing 2.2 shows an example of how ModSecurity can be used to detect and stop such a brute force attack. The initial idea for these rules came from [14].

Listing 2.2: An example of ModSecurity rules used to block brute force password guessing

# Initialize a collection of IP addresses
SecAction "initcol:ip=%{remote_addr},pass,phase:1"

# The protected resource lives under /login and each
# time it is accessed update the counter
SecRule REQUEST_URI "^/login/" "pass,phase:1,setvar:ip.attempts=+1"

# If it was a successful login i.e. in the 200 range
# then set attempts to zero
SecRule REQUEST_URI "^/login/" "chain,pass,phase:3"
SecRule RESPONSE_STATUS "^2..$" "setvar:ip.attempts=0"

# Block if more than 5 access attempts
SecRule IP:ATTEMPTS "@gt 5" "phase:1,deny"

A final example shows how ModSecurity could be used to protect against (some) zero-day attacks which use a directory traversal, i.e. accessing a file which is not located somewhere under the root of the web server serving the file. Usually, to exploit this vulnerability the attack uses a ../ in the URL; this way any file can be accessed using a relative path. Since any field in a web application that accepts unvalidated user input is vulnerable, forgetting to validate user input may cause a future zero-day attack against the application. Using ModSecurity, all directory traversals can be blocked, preventing future zero-day attacks on any field. Listing 2.3 shows the ModSecurity rule.

Listing 2.3: An example of a ModSecurity rule used to block all directory traversals

# All encodings of ../ e.g. %2e./ will be caught and blocked
SecRule REQUEST_URI "../" "t:urlDecode,deny"

As the previous examples show, ModSecurity can be used to create very detailed rules to deny or allow almost every type of request possible. It is just a matter of understanding the application and implementing all the rules, which can take a lot of time. However, as mentioned before, a problem arises when a (zero-day) exploit does conform to the whitelisted behavior: ModSecurity will not perform an action on that traffic, as it seems legitimate. The same holds when attacks are specifically crafted to avoid detection by ModSecurity, as shown by [15]

and [16]. So, to further enhance the security of an ownCloud installation, additional measures need to be taken. As described in the introduction, this paper presents a novel way of using anomaly detection to detect exploits not detected by ModSecurity. The next chapter describes the data model used as input for the anomaly detection.

3 Data model

As input for the anomaly detection, data needs to be gathered. This data should be able to define the normal behavior of the collective ownCloud server usage, yet provide enough detail that, if anomalies are detected, a decision can be made about what action to take. An existing third party application for ownCloud called eslog (https://github.com/xme/eslog) uses the available ownCloud hooks (as explained in the introduction) to gather usage data from an ownCloud server. As this approach is a very elegant solution - no existing ownCloud code has to be changed in order for the hooks to work - this paper uses the same approach to gather the input data. Some additional benefits are listed below:

Known event A hook is designed to trigger only on a specific event; this means it is clear what happened, since no other event should trigger the hook.

Human friendly The available hooks trigger on events which correspond to a single action performed by a user, so, for example, it is unnecessary to parse URLs to find out what happened.

Amount of information ownCloud is largely written in PHP, as are the plugins written for ownCloud. Within PHP, the $_SERVER array contains information such as the HTTP headers and the IP (version 4 or version 6) address of the client making the request. All entries described in RFC 3875 should be available from the array. Furthermore, the array may contain information such as the HTTP User-Agent. All this information is available from within the function that gets executed when a hook is triggered. Using all this information, it is possible to create a very detailed profile of the event that happened, including what happened (the event), when it happened (timestamp) and who performed the action (IP address in combination with the User-Agent).
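To make the hook mechanism concrete, the sketch below shows how a third party app might attach to the filesystem post_write event and collect request details from $_SERVER. It is a minimal illustration against the legacy OCP\Util::connectHook() API of the ownCloud versions from this era; the class MyApp\Hooks and its methods are hypothetical, not the actual code of the application described in chapter 4.

<?php
// Minimal sketch of hooking an ownCloud core event from a third
// party app. MyApp\Hooks and its methods are hypothetical.
namespace MyApp;

class Hooks
{
    public static function register()
    {
        // Execute postWrite() whenever the filesystem fires the
        // post_write signal, i.e. after a file has been written.
        \OCP\Util::connectHook('OC_Filesystem', 'post_write',
                               'MyApp\Hooks', 'postWrite');
    }

    public static function postWrite($params)
    {
        // $params['path'] holds the path of the written file; the
        // request context comes from the $_SERVER superglobal.
        $ip = isset($_SERVER['HTTP_X_FORWARDED_FOR'])
            ? $_SERVER['HTTP_X_FORWARDED_FOR']
            : $_SERVER['REMOTE_ADDR'];
        $agent = isset($_SERVER['HTTP_USER_AGENT'])
            ? $_SERVER['HTTP_USER_AGENT']
            : 'unknown';
        // What happened (post_write), when (time()) and who ($ip in
        // combination with $agent) can now be recorded, for example
        // as a usage counter (see chapter 4).
    }
}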

4 Implementation

This chapter describes the steps needed to go from the input data described in chapter 3 to useful metrics which could indicate possible malicious activities. This is the crucial step, since data that is not analyzed will show no correlations or anomalies. Our first attempt at analyzing the data used a similar approach to eslog (see chapter 3): data gathered using the hooks was sent to an ElasticSearch, Logstash and Kibana (ELK) stack for further analysis. However, this approach did not give the desired results. Since the eslog application gathers a lot of data, it was hard to get to the exact data that should be analyzed or visualized. This made it necessary to write custom filters within Kibana to get to the interesting data, which in turn made the metrics hard to understand when looking at them - something that should be avoided, as the hooks themselves trigger on logical events. So a different approach was taken, and a new third party application was created which ensures more intuitive metrics and better anomaly detection than the currently existing third party applications. Section 4.1 describes the general workings of the newly created application. The source code of the new third party application can be found at https://github.com/jorianvo/eslog.

4.1 Overview

From a high-level overview, the new application does the following: when a hook the app listens to gets triggered, the function which is executed in response gets the IP address of the client that initiated the event. The location - both the country and the city - from which the IP address originates is determined using the MaxMind GeoLite2 country and city databases. This data is then used to build a counter which describes both the action performed and the location the action originated from. A counter might for example look like browser.uploads.japan.tokyo and describes all file uploads using a web browser originating from Tokyo, Japan. This counter is then - using the StatsD PHP client (see section 1.3) - sent to the Carbon daemon, which will flush the updated counter to disk. Whisper will

in turn update the database with the new information, and the new metric will be visible in Graphite. From Graphite the data can be exported as a time series in the comma separated values (CSV) format. With this time series in CSV format, the Twitter AnomalyDetection package can mark any local and/or global anomalies in the data. As these anomalies may indicate malicious behavior, for example a file upload from an unexpected location or at an unexpected time, further investigation may be necessary. Using the timestamps of the anomalies marked by the AnomalyDetection package, it should be possible to get a full trace of who performed the - possibly malicious - action, so that appropriate measures can be taken.
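The counter pipeline can be sketched as follows. The fragment below is illustrative only: it emits the StatsD line protocol directly over UDP, whereas the real application uses the statsd-php-client library (section 1.3), and the location arguments are hard-coded here instead of coming from the MaxMind GeoLite2 lookups.

<?php
// Build a metric name from the event and the client's location and
// send a StatsD counter increment ("<name>:1|c") over UDP.
function sendCounter($event, $country, $city,
                     $host = '127.0.0.1', $port = 8125)
{
    // e.g. "browser.uploads.japan.tokyo"
    $metric = strtolower($event . '.' . $country . '.' . $city);

    // StatsD line protocol: a counter increment of 1.
    $payload = $metric . ':1|c';

    $socket = fsockopen('udp://' . $host, $port);
    if ($socket !== false) {
        fwrite($socket, $payload);
        fclose($socket);
    }
}

// A browser upload from Tokyo, Japan.
sendCounter('browser.uploads', 'Japan', 'Tokyo');

StatsD aggregates these increments and forwards them to Carbon, after which the counter appears in Graphite under the stats.counters prefix. From there the time series can be exported through the render URL API with a query such as /render?target=stats.counters.browser.uploads.japan.tokyo.count&format=csv&from=-7d (an illustrative URL), producing the CSV input for the AnomalyDetection package used in chapter 5.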

5 Results

To validate our implementation and show that the new application is able to properly detect all events on which hooks are set, a testing setup was created. This setup consisted of: a virtual vanilla ownCloud instance, running the ownCloud version that was stable at the moment of writing; a virtual Graphite instance, running the Graphite version that was stable at the moment of writing; and the virtual machine host, which also acts as the ownCloud client. In ownCloud our application is enabled and configured to send data from the ownCloud server to StatsD running on the Graphite host.

We ran three experiments: the first experiment tests the mapping between the IP address and the location as shown in Graphite, whilst keeping the action the same. The second experiment verifies whether the event registered in Graphite corresponds to the hook that was triggered. The third experiment tests the Twitter AnomalyDetection package against a synthesized dataset.

Mapping between IP address and location

This experiment verifies the mapping between the IP address of the client which performed the action and the location as registered in Graphite. The experiment was carried out by uploading an empty text file to the ownCloud server using a web browser. Using the ModHeader plugin for Google Chrome, a new IP address was set in the X-Forwarded-For header for each upload. This header is normally used to identify a user who visits a website through some kind of web proxy. In this experiment it was chosen because it can be easily changed using the ModHeader plugin, as opposed to the REMOTE_ADDR environment variable returned by the web server (in PHP this variable can be accessed as $_SERVER['REMOTE_ADDR']), which contains the IP address used to create the socket for the connection and is much harder to spoof. However, in our experiment it was necessary to spoof an IP address to see if the application would be able to locate the client, so the X-Forwarded-For trick was used. The IP addresses used in this experiment are addresses pointing to web servers used by universities. We used a mix of IPv4 and IPv6 addresses from all over the world to test the mapping on existing

public addresses. Local addresses were also verified, both for IPv4 and IPv6, as they should give an unknown location. For each test we set the X-Forwarded-For header to a single address and upload the text file. In Graphite, the hook which triggers on a browser upload created a stats.counters.browser.uploads.<location>.count counter. Afterwards, the location is verified against the location as determined by RIPEstat (https://stat.ripe.net/), which also uses the MaxMind GeoLite2 databases. Table 5.1 shows the results.

IP address | Real location | Graphite metric
- | Eindhoven, The Netherlands | stats.counters.browser.uploads.netherlands.eindhoven.count
- | Berkeley, USA | stats.counters.browser.uploads.united States.Berkeley.count
- | Unknown, RU | stats.counters.browser.uploads.russia.unknown.count
2607:f140:0:81::f | Unknown, USA | stats.counters.browser.uploads.united States.Unknown.count
2001:610:158:960::70 | Unknown, The Netherlands | stats.counters.browser.uploads.netherlands.amsterdam.count
- | Unknown | stats.counters.browser.uploads.unknown.count
fe80::1610:9fff:fed1:957d | Unknown | stats.counters.browser.uploads.unknown.count

Table 5.1: This table shows the mapping between the IP addresses of clients performing a file upload via the browser and the location as registered by Graphite. For reference, the location as found by RIPEstat is also added.

Mapping between trigger and recorded action

This experiment verifies the mapping between the action performed by the client and the event registered in Graphite. For this experiment three hooks are set: the post_write() hook, for when a file is uploaded via the browser; the read() hook, for when a file is read in the browser; and the webdav hook (the parentheses are omitted on purpose, since the webdav hook is technically not a hook by itself: the webdav events are hooked using a separate hook which only triggers when it sees that the webdav protocol is being used), for when a file is uploaded via WebDAV, as this is the protocol the official sync clients use. All three actions are then performed on the ownCloud server, and the results are compared to the events registered in Graphite. The results are shown in table 5.2.

Action as performed by client | Action as registered by Graphite
A single file upload using browser | stats.counters.browser.uploads.<LOCATION>.count incremented by 1
Multiple files uploaded using browser | stats.counters.browser.uploads.<LOCATION>.count incremented linearly (one to one)
A single file read in browser | stats.counters.browser.reads.<LOCATION>.count incremented by 1
A single file (<5MB) uploaded using sync client | stats.counters.webdav.putrequests.<LOCATION>.count incremented by 1
A single file (>5MB) uploaded using sync client | stats.counters.webdav.putrequests.<LOCATION>.count incremented by one per 5MB

Table 5.2: This table shows the mapping between the action performed by a user and the action as registered in Graphite.

Testing the Twitter AnomalyDetection package

The Twitter AnomalyDetection package is written in R, so the experiment loads a synthesized dataset into R and runs it through the package. The dataset is formatted as a time series (coming from Graphite); an example of how such a dataset might look (the dates and values shown here are illustrative) is:

"date","value"
2015-01-01 00:00:00,3
2015-01-01 01:00:00,4
2015-01-01 02:00:00,7

After installing the package, loading the package was the first step:

> library(AnomalyDetection)

Next, the data is read into R and the date field is converted to actual UTC dates:

> data <- read.csv("data.csv")
> data$date <- as.POSIXct(strptime(data$date, "%Y-%m-%d %H:%M", tz = "UTC"))

The next step is to execute the detection:

> anomalydetectionresult <- AnomalyDetectionTs(data, max_anoms = 0.2,
+     threshold = "None", direction = "both", plot = TRUE, e_value = TRUE)

The detected anomalies, with their timestamps, can then be inspected via anomalydetectionresult$anoms.
