An AppSec Consultant's diary
https://webomania.wordpress.com
Application Security Trends, Techniques, Tools and Remediations
Vulnerability Aggregator or Management Tools in the market
https://webomania.wordpress.com/2018/09/24/vulnerability-aggregator-or-management-tools-in-the-market/ (Mon, 24 Sep 2018)

After working in the application security sector for more than nine years, I find that most of the struggle is not in finding security vulnerabilities or in fixing them. The most common pain points are rather the following.

Having a common enterprise vulnerability repository that aggregates all vulnerabilities and makes meaningful correlations (a minimal correlation sketch follows this list).

Business-aligned risk, where an XSS issue found in a business-critical app is not given the same priority as one found in a less critical intranet app.

Innovation and automating manual tasks.

Security metrics that help the CISO's office understand what the security posture is.

Preventing repeat issues, so that the fix you make today does not break something and recreate an issue that was fixed last year.

Arbitration – the most painful task of being stuck between the security group, who think security is more important than functionality, and the business, who think security is just a bottleneck.
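To make the repository-and-correlation point concrete, here is a minimal Python sketch of how findings from different scanners might be normalised and grouped on a shared key. The field names, schema and sample data are my own assumptions for illustration, not the format of any particular tool.

```python
from collections import defaultdict

def correlate(findings):
    """Group findings from multiple scanners that point at the same weakness.

    Each finding is a dict with 'source', 'cwe', 'location' and 'parameter'
    keys (hypothetical schema). Findings sharing the same (cwe, location,
    parameter) triple are treated as one vulnerability seen by several tools.
    """
    merged = defaultdict(list)
    for f in findings:
        key = (f["cwe"], f["location"].lower(), f.get("parameter", ""))
        merged[key].append(f["source"])
    return merged

findings = [
    {"source": "SAST", "cwe": "CWE-79", "location": "/search.jsp", "parameter": "q"},
    {"source": "DAST", "cwe": "CWE-79", "location": "/Search.jsp", "parameter": "q"},
    {"source": "DAST", "cwe": "CWE-89", "location": "/login.jsp", "parameter": "user"},
]

for key, sources in correlate(findings).items():
    print(key, "reported by", sources)
```

In practice the correlation key would also need CVE/CWE mappings and fuzzier matching of locations, which is exactly where most aggregators stop short.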

There is not a single tool in the market that answers all six issues, but some tools are at least attempting to solve a few of them. The capabilities I explored are:

Vulnerability Tracking and Management – Some of these tools integrate with defect trackers and ticketing tools like ServiceNow.

Dashboards – The graphs in Kenna and Tenable.io are good when it comes to projecting meaningful information that can be acted on.

Security Orchestration – Code Dx comes with built-in scan detection and bundled open-source scanners, so even if you have no commercial scanner licence you can still scan without spending a single minute integrating the tools.

Risk Scoring – Some tools offer CVSS-based ranking that can be customized further; a rough sketch of such business-aligned scoring follows.
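As a rough illustration of how a CVSS-based ranking can be customised into the business-aligned risk mentioned earlier, here is a minimal Python sketch. The multipliers and application names are invented assumptions, not any vendor's actual scoring formula.

```python
# Hypothetical business-criticality multipliers for the applications in scope.
ASSET_CRITICALITY = {
    "internet-banking": 1.5,   # business critical, internet facing
    "intranet-portal": 0.6,    # less critical, internal only
}

def business_risk(cvss_base: float, app: str) -> float:
    """Scale a CVSS base score by the owning application's criticality,
    capped at 10. The same XSS ranks higher on a business-critical app
    than on an internal, less critical one."""
    return round(min(10.0, cvss_base * ASSET_CRITICALITY.get(app, 1.0)), 1)

print(business_risk(6.1, "internet-banking"))  # same finding, higher priority
print(business_risk(6.1, "intranet-portal"))   # same finding, lower priority
```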

Still, there is a long way to go, as most of these tools are either application security vulnerability aggregators or network vulnerability aggregators. There is not much meaningful correlation between the different kinds of detection methods, so what you get is plain aggregation and consolidation of vulnerabilities.

DASP Top 10 – 2018
https://webomania.wordpress.com/2018/09/21/dasp-top-10-2018/ (Fri, 21 Sep 2018)

Below is NCC Group's initiative to catalogue the vulnerabilities specific to smart contracts and blockchain, in ranked order.

DASP Top 10

Reentrancy – A relative of our usual race condition with multithreading issues: external contract calls are allowed to make further new calls back into the contract while an earlier execution is still in place and has not completed (a rough sketch of the flawed pattern follows this list).

Access Control – This is our age-old appsec issue, and it will not spare smart contracts either.

Unchecked Low Level Calls – First of all, avoid using low-level calls. But if you must, please check the return value, for Christ's sake!

Denial Of Service – Again, DoS is not new.

Bad Randomness – This again is not new.

Front Running – Similar to a race condition: a party qualified enough to win can be kept waiting to be mined while another, stealing party takes the prize by submitting the same action with a higher fee. Of all the issues, I think this is the most practical one and will always be exploited by users with malicious intent, because that is how it works in the real world too.

Time Manipulation – Reliance on a timestamp that someone else has control over. Why did they even allow this?

Short Addresses – Though it could be termed new, to me it looks like plain missing input validation.

Unknown Unknowns – The vaguest of all. It is the fear of the unknown, since not many people actually understand blockchain or smart contracts, even when they claim their entire country now runs on them. Some new kid on the block may stumble upon something interesting and might loot your country away.
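To make the reentrancy item above more concrete, here is a minimal Python analogy of the flawed pattern. Real exploits target smart contract languages such as Solidity; this sketch only mirrors the ordering mistake, in which state is updated after the external call, so a malicious callee can re-enter and be paid repeatedly.

```python
balances = {"attacker": 1}

def withdraw(account, send_funds):
    """Flawed withdraw: pays out before updating state (reentrancy-prone)."""
    amount = balances[account]
    if amount > 0:
        send_funds(account, amount)   # external call happens first...
        balances[account] = 0         # ...state is updated only afterwards

calls = 0
def malicious_receiver(account, amount):
    """Simulates an attacker contract whose fallback re-enters withdraw()."""
    global calls
    calls += 1
    print(f"received {amount} (call {calls})")
    if calls < 3:                     # re-enter before the balance is zeroed
        withdraw(account, malicious_receiver)

withdraw("attacker", malicious_receiver)
# Fix: zero the balance *before* making the external call
# (checks-effects-interactions), or use a reentrancy guard.
```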

Security Architecture Review – Deployment Considerations
https://webomania.wordpress.com/2018/09/21/security-architecture-review-deployment-considerations/ (21 Sep 2018)

When reviewing the deployment side of an application's architecture, questions such as the following help scope the target environment.

Would the OS where the application server is running need many open ports, and if yes, for what reasons?

If Secure Socket Layer (SSL/TLS) is used, then what are the acceptable protocols and algorithms?

Would the system use least-privileged processes for permissions?

What about the encryption keys and their storage?

Would the database server be an open source one or a commercial one, with adequate record-level encryption for sensitive data?

What trust levels would the target environment support?

How would session state be managed?

There may also be questions specific to the chosen technology stack; for example, one has to consider encrypting the VIEWSTATE in the case of a .NET application. In the case of PHP, the secure configuration needed in Apache and in the php.ini file also has to be considered; a minimal sketch of checking a few php.ini directives follows.
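As a small illustration of the PHP point, here is a minimal Python sketch that flags a handful of commonly recommended php.ini hardening directives when they are missing or set insecurely. The directive list is only a partial example of what a full deployment checklist would cover, and the file path is a placeholder.

```python
import re

# A few widely recommended php.ini settings (illustrative, not exhaustive).
EXPECTED = {
    "expose_php": "off",
    "display_errors": "off",
    "allow_url_include": "off",
    "session.cookie_httponly": "on",
    "session.cookie_secure": "on",
}

def check_php_ini(path):
    """Report directives whose values differ from the hardening baseline."""
    found = {}
    with open(path) as fh:
        for line in fh:
            line = line.split(";", 1)[0].strip()        # drop php.ini comments
            m = re.match(r"([\w.]+)\s*=\s*(\S+)", line)
            if m:
                found[m.group(1).lower()] = m.group(2).strip('"').lower()
    for key, wanted in EXPECTED.items():
        actual = found.get(key, "<not set>")
        if actual != wanted:
            print(f"{key}: expected {wanted}, found {actual}")

check_php_ini("/etc/php/7.2/apache2/php.ini")  # path is an example
```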

Application Security Architecture Review
https://webomania.wordpress.com/2018/09/15/application-security-architecture-review/ (Sat, 15 Sep 2018)

An application security architecture review is a security activity done after the application architecture has been defined and drafted, and before design starts. Most often, I get called in to do an application security architecture review only to discover that the people who want it done have no idea what they wanted in the first place. This activity does not propose the security architecture; it reviews the existing architecture.

WHEN

This activity cannot be done during your testing or development phase; it belongs in the initial stages. You need to do it even before you start coding, because the longer you delay considering security for your software, the more it costs later to fix the security issues that come out of the product.

WHY

You have put forth your business specification, drawn out the functional requirements based on that specification, and drafted the technical specification as well. For the given technical specification, how do you know that the technical controls chosen are fit for use with respect to security? For example, let's say you want to use Tomcat 7.0 as the application server. How do you know that this version of Tomcat has no known security vulnerabilities? Or let's say you want users to register on your site through a registration module. Since you would be allowing anonymous users to use this form, you may also need a CAPTCHA, even though it is neither a business need nor part of the functional specification. The CAPTCHA control is needed as a security control so that your form does not get abused by bots.

WHAT

Now that you know why we should be doing this activity, we need to define what is required to do it.

PRE-REQUISITES: A business specification (BSD), a functional specification (FSD) and a technical specification (TS) document as inputs from your customer. As a security consultant, you would also need an application security architecture review checklist for the given technology so that you can go through the specifications and review the security controls for every security domain.

HOW

Get the documents first and go through them. Note down grey areas that you either don't understand or need more clarification on.

Set up a meeting and clarify all your questions. Sample questions: what are the entry paths to the application, what will be the data validation approach, what privilege levels (roles) are being defined, what is the nature of the data used in the application (data classification), etc.

Analyze the entire specification and requirements based on the clarifications.

Review the architecture against the application security architecture review checklist for the given technology; a minimal checklist sketch follows.
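Here is a minimal Python sketch of how such a checklist could be represented and walked per security domain. The domains and questions shown are illustrative placeholders, not a complete checklist for any technology.

```python
# Illustrative excerpt of an architecture review checklist, grouped by domain.
CHECKLIST = {
    "Authentication": [
        "Is there an account lockout / CAPTCHA for anonymous entry points?",
        "Are credentials transported only over TLS?",
    ],
    "Data Protection": [
        "Is sensitive data classified and encrypted at rest?",
        "Where are encryption keys stored and who can access them?",
    ],
    "Infrastructure": [
        "Is the chosen server version (e.g. Tomcat 7.0) free of known CVEs?",
    ],
}

def review(architecture_notes: dict):
    """Walk every domain and record whether the architecture answers each item."""
    for domain, questions in CHECKLIST.items():
        print(f"== {domain} ==")
        for q in questions:
            answer = architecture_notes.get(q, "NOT ADDRESSED - raise with architect")
            print(f"- {q}\n  -> {answer}")

review({"Are credentials transported only over TLS?": "Yes, TLS 1.2 enforced at the LB"})
```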

How do you refine time spent on application security scans?
https://webomania.wordpress.com/2018/03/12/how-do-you-refine-time-spent-on-application-security-scans/ (Mon, 12 Mar 2018)

A technocrat I respect asked me this question: "Year on year, you do scans using Fortify, WebInspect, AppScan etc., but the scan time is always the same. Why can't you refine it?"

I replied, "Even brushing my teeth is something I have been doing for three decades, and I still take the same amount of time. I am dead scared to automate the process." Though he took this in good humour and we laughed over it, I did tell him that scans can indeed be refined and the effort cut down. But it is not as though you did a scan in 2 hours this year and, wanting to increase productivity, you can demand that next year it finish within 30 minutes. That kind of blind refinement doesn't exist.

So, what exactly can you do to cut down the scan time? To know this, you should first understand why a certain application with X pages takes Y amount of time to scan. All the tools you use are essentially automated script engines that spider your application and replay certain rules/malicious vectors against it. So (a rough scaling sketch follows this list):

The more web pages in your scan, the longer your scan time.

The more input fields in your site, the more time taken to execute the rules, as each check has to be tried per input field.

The more complexity in your site, the more time it will take. For example, if the application has a file upload feature, a CAPTCHA or dynamically generated scripts, each of these adds a certain amount of time.
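Here is the rough scaling sketch promised above, in Python, showing why these factors multiply the scan effort. The numbers are made up purely for illustration.

```python
def estimated_requests(pages: int, avg_inputs_per_page: int,
                       vectors_per_input: int, complexity_factor: float = 1.0) -> int:
    """Very rough request count a scanner must send: every attack vector is
    replayed against every input on every page, scaled up for complex features
    such as file uploads, CAPTCHAs or dynamically generated scripts."""
    return int(pages * avg_inputs_per_page * vectors_per_input * complexity_factor)

# A 30-page app with 5 inputs per page and ~1,000 vectors per input:
print(estimated_requests(30, 5, 1000))        # 150,000 requests
# The same app with upload/CAPTCHA complexity adding ~30% overhead:
print(estimated_requests(30, 5, 1000, 1.3))   # ~195,000 requests
```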

So, these three parameters are not exactly in your hands, and tweaking them will reduce the quality of your scan output. What, then, can you reduce?

Get the best infrastructure possible. Don't expect a scan to run well on 8 GB of RAM; go for the maximum that is allowed in your organization. If you are using a dual-core processor, ask for quad core or even better.

All scan engines write temporary files and log files to the drive where the OS also resides. Change this default setting so that the system doesn't slow down as the log file gets huge. If the OS is on C:\, you can point the log file location to another drive.

Policy – WebInspect uses the 'Standard' policy by default and AppScan uses 'Complete'. If you go into these policies and inspect them, you will realize that they contain a big bundle of automated attack vectors that all need to be executed; they may include checks for a Struts vulnerability as well as a PHP/WordPress-related vulnerability. So, if you are really sure about the application you are testing, experienced enough and able to exercise sound judgement, this policy can be trimmed to cater to your application's landscape. I have tried it out on applications and had the scan time reduce by more than half.

Threading – The more threads your tool uses, the sooner it will complete your scan. But this comes at the cost of CPU usage; if it looks like the tool is about to crash, reduce the number of threads.

Scan Configuration Parameters – There are other parameters that let you test a page only once per attack, once per unique parameter, or repeatedly for every parameter. If the customer wants the scan time reduced, that is the ultimate goal, and quality can be compromised, you can try this out. But you will then miss issues that exist only in particular parameters.

Rule Suppression, Sanitization and Others – What if some code issue is already mitigated at the deployment level but the tool keeps flagging it? One good example is the parseDouble() issue. In this case, you can write a suppression rule at the rule-pack level so that the issue is suppressed and you don't have to waste time analyzing it later.

Last but not the least – Schedule your scans so that they run during non-working hours. The caveat: if the application goes down during the scan, you will have no one to support you. But if you are running against your own instance, this will work. In one project I worked on, we had to share the same instance with the performance engineering team and hence opted for a different timeslot.

Do you have any other measure that can reduce scan time?

Effort Estimation Model for DAST and SAST
https://webomania.wordpress.com/2018/03/09/effort-estimation-model-for-dast-and-sast/ (Fri, 09 Mar 2018)

Most often during my pre-sales work, I am asked to derive the effort estimate for DAST (Dynamic Application Security Testing) and SAST (Static Application Security Testing). These two testing methodologies are not new in a software development life cycle and are almost always done if a web application is internet facing. Read on if you are a customer (software owner) who requires DAST and SAST services for your application, or a service provider who wishes to offer this service.

DAST – Dynamic Application Security Testing. This is also called black-box testing or application vulnerability assessment. It involves navigating a web application at run time and mimicking a hacker by providing malicious attack vectors of all possibilities (using popular tools like AppScan, WebInspect, Acunetix or WhiteHat), doing false positive analysis and then providing a report.

SAST – Static Application Security Testing. Otherwise known as white-box testing or secure code analysis. It involves scanning the code with tools like Fortify, AppScan, Veracode, Checkmarx etc. for known weaknesses and code flaws, and listing out the security weaknesses.

Both methodologies involve tools, scan time, going over the results to reduce false positives, re-categorizing the issues based on severity, documentation and reporting. All of these steps take a certain amount of time and have to be done if you really want to weed out security issues before deploying your software. Customers often bargain for a lower price and less time ("why not give me the results in 1 hour?"), and service providers (with just the deal in mind) may ask people like me to reduce the effort as much as possible, perhaps by employing more staff. Well, quoting a good friend here, "You cannot ask a woman to deliver a baby in 1 month by providing her 10 men".

Some steps involved in both these methodologies are sequential and some are parallel. Some need a human element and some can be automated. I will list out the activities and the effort first, and in the next post mention ways to refine the effort.

Scan Time for DAST – This depends on the scan tool's efficiency, the infrastructure (RAM, CPU cores, hard disk where logs get saved), the size of the application (number of pages, input parameters per page, number of workflows and complexity), the security posture of the application and also the application's deployment infrastructure. From my experience, I have seen applications with 20-30 pages get scanned in 2 hours. I have also seen a website with 1,000 URLs take 2 weeks.

Always do three preliminary scans with applications of varying size to find out the probable scan time your tool takes on a given infrastructure, and then use that as a baseline.

Scan Time for SAST – This depends on the executable lines of code of the application, the code flow, the infrastructure (again, RAM, CPU cores etc. come into the picture) and the choice of tool. Here also, do a baseline scan to determine the time taken. In my experience, most simple applications with 10,000 LOC have taken 30-45 minutes to scan, but that may vary based on your infrastructure and tool. A small sketch of turning such baselines into a prediction follows.
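Here is a minimal Python sketch of turning those preliminary scans into a baseline you can extrapolate from. The sample timings are invented for illustration; use measurements from your own tool and infrastructure.

```python
def fit_baseline(samples):
    """Fit scan time ~= fixed overhead + rate * size from preliminary scans.

    samples: list of (size, minutes) pairs, where size is pages for DAST
    or KLOC for SAST, measured on *your* infrastructure and tool.
    """
    n = len(samples)
    mean_x = sum(s for s, _ in samples) / n
    mean_y = sum(t for _, t in samples) / n
    num = sum((s - mean_x) * (t - mean_y) for s, t in samples)
    den = sum((s - mean_x) ** 2 for s, _ in samples)
    rate = num / den
    overhead = mean_y - rate * mean_x
    return overhead, rate

# Three hypothetical preliminary DAST scans: (pages, minutes)
overhead, rate = fit_baseline([(20, 90), (50, 200), (120, 460)])
print(f"predicted minutes for a 300-page app: {overhead + rate * 300:.0f}")
```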

Issue Analysis Time for DAST and SAST – The most important factor here is the total number of issues to analyze. The second factor is the expertise level of the person analyzing them. While the total number of issues may increase roughly linearly with the size of the application, that isn't always true. A 50-page web application can give out 25 issues, while a 20-page application that had no security in mind may give 100. Likewise, 30K LOC of code may give 2,000 issues and 50K LOC only 300.

So, instead of blindly putting down an effort saying it will take one day to analyze the issues, go by the parameter 'vulnerability density': what is the expected number of issues per, say, 1K LOC? From that, you can project the total issue count for the application. An expert looking at a given issue may weed out a false positive in 5-10 minutes, whereas a fresher or uninterested analyst may take 30 minutes per issue. For example, I can check whether an XSS issue is real just by looking at the response (10-20 seconds), but to check whether a padding oracle attack really succeeded and gives rise to a DoS, I may want to execute it and see (probably 30 minutes). A minimal estimation sketch along these lines is below.
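Here is a minimal Python sketch of that vulnerability-density style of estimation. The default density, false positive ratio and per-issue triage times are assumptions that you would replace with your own baseline figures.

```python
def analysis_effort_hours(kloc: float,
                          issues_per_kloc: float = 20.0,   # from your baseline
                          false_positive_ratio: float = 0.6,
                          minutes_fp: float = 8.0,         # expert weeding out a FP
                          minutes_real: float = 25.0) -> float:
    """Estimate SAST issue-analysis effort from vulnerability density."""
    total = kloc * issues_per_kloc
    fps = total * false_positive_ratio
    real = total - fps
    return round((fps * minutes_fp + real * minutes_real) / 60, 1)

print(analysis_effort_hours(30))                      # 30K LOC at the assumed density
print(analysis_effort_hours(50, issues_per_kloc=6))   # a cleaner 50K LOC codebase
```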

Gathering Evidence – This is directly proportional to the total number of real issues you have found; you need to replicate the steps manually and take screenshots for use in the documentation.

Documentation and Reporting – I would usually allow 4-5 hours for this, assuming you have already put enough time into gathering evidence.

As you will notice, I didn't give you a thumb rule that says the estimation effort is 1:3:5 for simple, medium and complex DAST, or 2:4:6 for SAST. Even if I did, it would be wrong, as what may be right for my software and infrastructure may not be true for yours. So, always do a baseline and then get started. I will, however, provide some tips to refine these activities in the next article.

What to do when your XSS attack vector is converted into CAPITAL letters by the application?
https://webomania.wordpress.com/2017/07/21/what-to-do-when-your-xss-attack-vector-is-converted-into-capital-letters-by-the-application/ (Fri, 21 Jul 2017)

We keep encountering many kinds of unintended filters that applications apply when presenting user input. One of them is to render all user input in CAPITAL letters. Even if the application does no input validation, our normal XSS attack vector doesn't work in this scenario.

Here is an example: within script tags you would have injected an inline alert(document.cookie);

In this case, the application converts it into ALERT(DOCUMENT.COOKIE). As JavaScript is case sensitive, the alert fails to pop up. Below are the options you can try in this case.

Option 1: If VBScript is supported (legacy Internet Explorer only), try the below. Since VBScript is case insensitive, the conversion should not matter.

vbscript:msgbox("hello");

Option 2: Try loading external JavaScript. If your target application is behind a firewall, you can host your own JS file on an internal server and try loading it from there.

If the above two options don't work, you can try iframe or img src tags to inject your attack vector. There are some more ingenious tricks, like the one at the link below, but those are for rare cases. Hope this tip helped you. http://www.jsfuck.com/

OWASP Top 10 – 2017 – Release Candidate
https://webomania.wordpress.com/2017/04/12/owasp-top-10-2017-release-candidate/ (Wed, 12 Apr 2017)

The OWASP Top 10 – 2017 may be finalized in July or August this year, but I had a chance to look at the release candidate version.

Some Changes:

The category 'Unvalidated Redirects and Forwards' has been dropped.

The categories 'Insecure Direct Object References' and 'Missing Function Level Access Control' have been clubbed together. So, whether the issue is with the data or with the functionality, that difference doesn't matter any more.

Two new categories have made it into the Top 10: 'Insufficient Attack Protection', which aims at detecting and deterring automated attacks against applications, and 'Underprotected APIs', which targets issues in APIs such as REST and JSON-based services.

Acunetix Version 11
https://webomania.wordpress.com/2017/03/08/acunetix-version-11/ (Wed, 08 Mar 2017)

I got an opportunity to look into Acunetix version 11. With this version, they have moved to a web-based product, which is kind of good. Looking into it, these are the positive vibes I get.

I can exclude certain hours from the scan configuration. Say I don't want the scan to run during my night time; I can set that.

Likewise, if I need manual intervention for a CAPTCHA, I have options for that.

But that's it. I am not able to find another feature that makes me go gaga over Acunetix.

Their scanning profiles actually look scary, as I don't know which rules are part of the Complete scan and which are part of the High Risk scan, and I can't seem to customize either.

I seem to have had a lot more control over the scan and application configuration with the desktop-based product than with the web version. Though I realize that many utilities that shipped with the desktop-based version are now freebies, the web version looks kind of empty.

I really can't figure out how to pause and resume a scan. The desktop version had it.

Detailed logging, the Google hacking database and many fine-tuning options: it looks like they all went missing.

A rather disappointing build, I should say. I will probably wait for the next one.

GemFire OQL – Information Leakage
https://webomania.wordpress.com/2017/02/13/gemfire-oql-information-leakage/ (Mon, 13 Feb 2017)

GemFire is an in-memory data grid. It pools memory across multiple processes to manage application objects and behaviour. It is written in Java and has a key-value storage structure. The data is stored in something called a 'region', which can be queried using Object Query Language (OQL), much like one would use SQL for an RDBMS.

I came to know about this some time back and, since the opportunity to abuse OQL is rare, I googled a bit about it. The only journal I found that references GemFire OQL and a remote command injection is the one below.

While doing a penetration test, I tried following the examples cited in that blog. The application I was testing was doing blacklist filtering, so not all attack vectors went through properly, but the first attack vector that succeeded was something like the below.

select * from /region1 limit 10

Comments: My query returned exactly 10 rows, and I knew then that whatever data I passed in the parameter was being appended to the query code as well.

This query is similar to what is explained in the Emaze blog, and using the above construct you would be able to list out all the methods of the runtime. Getting to this stage was a little tough, as I needed to tweak the payload and find out which parameters were getting accepted and which weren't. For some reason, the invoke method wasn't working at all. Before calling it a day, I passed this query on to my colleague Prashanth, working in a different timezone, who cracked the shell.