One of the problems we have with the 'software-driven applications' that we create every day is that, after a while, there is nobody who really understands how the whole system actually behaves.

And the reason is simple: Too much Complexity.

Unless an application is built in Assembly, the code written by the programmers is executed against a number of abstraction layers, each with its own behaviour, reality and side effects.

And since we currently don't have a way to model that behaviour, we end up in the current situation where we 'code and execute it to see what happens' (i.e. see if it does what the programmer/manager/architect/buyer thinks it will do).

SAST technology and run-time analysis are the key here, since we need to be able to model an application's behaviour and create rules that describe the expected (or unexpected) traces, activities, practices, etc...

But for that we need to approach application behaviour analysis (which is what SAST is doing) in a different way.

We need to apply the Trillions concepts and look at a piece of software as something that has trillions of nodes (i.e. code blocks). And as the MAYA guys like to say, this has already been done by nature, we just need to apply the same concepts :)

Btw, I really like the idea of applying UUIDs to bits of code. This is one of the key missing pieces of the current Sandboxing puzzle and one of the ways we can scale.
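One cheap way to sketch the idea (an assumption on my part, using content-addressing rather than random UUIDs, which has the nice property that identical code blocks get identical IDs) is to derive each block's identifier from its content:

```shell
# Sketch: give a 'bit of code' a stable, unique ID derived from its content.
# Identical blocks share an ID; any change to the block produces a new ID.
printf 'public int Add(int a, int b) { return a + b; }' > block.cs
git hash-object block.cs   # prints a 40-character SHA-1 identifying this exact block
```

A random UUID would also work as a label, but content-derived IDs let a sandbox or analysis engine recognise a block it has already seen.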

Saturday, 29 September 2012

It should be obvious that security consultants are one of the key target markets, since they are the ones who are able to make the most use of these products and can show clients how to get value from them.

What should happen is that it should be us pushing companies away, saying 'not now ... maybe later'. Instead, we have to beg for licenses so that we can create PoCs or test their products.

Some companies even ask you to sign NDAs to test/evaluate their products; it's crazy.

The key concept is that it should be as easy as possible to access their technology and products, and of course, if the product is used in a commercial environment/engagement, the correct license should be paid.

The problem is that most (sales teams) know that they will only get one sale! So they play the scarcity game and (almost) prevent full access to the technology.

What they should do is play the Productivity/Empowerment game, where the customer cannot live without it.

And a good example of the side effects of this strategy is the 'Product Engine Rules'. Most vendors treat their rules as a massive proprietary 'thing' that is not easy to share, maintain, customise or publish.

But what customers really want is what we did with TeamMentor: easy access to the application + content + source code, and an environment that is easy to install, deploy, duplicate and change.

And I'm very happy to say that the feedback we're getting from customers on this level of openness is really good.

If you are designing an application whose code you will share on GitHub (open source or not), the most important advice I can give you is to 'Design for Fork'.

What I mean is that the idea of 'creating a fork (or clone) and starting a new instance of the app' should not be something left to the Open Source Gods, as in '...hey, the code is available, so you can go out and run your own instance...'.

If your app is not 'Fork/Clone Friendly', it will always be a pain to deploy and use, and you will be missing a key mechanism for keeping complexity at bay. Deployments should be measured in seconds/minutes, with only a couple of clicks required.
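As a minimal sketch of what 'Fork/Clone Friendly' means in practice (the repo name and run.sh entry point here are made up for illustration; the only real requirement is that the repo carries everything the app needs to run):

```shell
# Simulate a fork-friendly app: the repo itself contains everything needed to run it.
mkdir -p upstream-app && cd upstream-app
git init -q .
printf '#!/bin/sh\necho "app running"\n' > run.sh && chmod +x run.sh
git add run.sh
git -c user.email=demo@example.com -c user.name=Demo commit -q -m "runnable app"
cd ..
# 'Deployment' is just clone-and-run: no database, no installer, no config step.
git clone -q upstream-app deployed-app
sh deployed-app/run.sh
```

When the whole app (code, content, startup script) travels inside the repo, every clone is a working instance, which is exactly what makes forking cheap.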

MAYA talks about Information Liquidity in their Trillions concept, where information is supposed to flow like a liquid, able to move over rough, uneven or new surfaces. Note that although there is a common carrier in any liquid (water/H2O), there are lots of different types of liquids (i.e. information) that can be carried, packaged, exchanged, consumed, etc...

This liquidity is exactly what we should aim to get from open source or GitHub repositories. Both code AND content must be easily consumed, exchanged, manipulated, modified, etc...

For me, this concept is not something theoretical; it is something that I put into real practice while developing TeamMentor and the O2 Platform.

One of the areas of TeamMentor that I'm most proud of is how light the latest version is to install, deploy and set up (the only requirement is .NET 4.0 and, in some cases, IIS). There are no databases to install/configure, the source code is included with the deployment, the content/articles are all stored in XML files, and we use Git/GitHub not only for version control, BUT as a distribution medium.

The idea of storing the content on the filesystem is very important, since part of making your data liquid is to have NO database (as in MSSQL, MySQL, NoDB, etc...).

My recommendation is to store the data on the file system (as files) and use Git (and GitHub) as your database. You will not create something more powerful and flexible than Git, and it is a very scalable and robust solution for content control.
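A minimal sketch of the 'files + Git as your database' idea (the paths and XML shape are illustrative, not TeamMentor's actual layout):

```shell
# Content lives as plain XML files; Git supplies versioning, audit and distribution.
mkdir -p content-repo && cd content-repo
git init -q .
printf '<article id="guid-1"><title>Input Validation</title></article>\n' > article-1.xml
git add article-1.xml
git -c user.email=demo@example.com -c user.name=Demo commit -q -m "Add article 1"
# Edit the 'record' and commit again: every change is a revision you can diff or revert.
printf '<article id="guid-1"><title>Output Encoding</title></article>\n' > article-1.xml
git -c user.email=demo@example.com -c user.name=Demo commit -q -am "Rename article 1"
git log --oneline -- article-1.xml   # full history of this one record
```

You get per-record history, diffs, branching and replication (via push/pull) without installing or administering any database server.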

To make it easier to consume and integrate the O2 Platform APIs with other tools, all the major components are now available via NuGet:

FluentSharp - CoreLib - FluentSharp is an API that dramatically simplifies the use of .NET Framework APIs. As an example, the reflection wrapper is probably one of the most powerful .NET Reflection APIs, since it provides (via user-friendly methods) full access to all .NET classes, methods, properties, fields and enums (regardless of their public/private status).

FluentSharp - BCL - This is the FluentSharp BCL which provides support for System.Forms and Web

There are tons of changes in this new 3.2 version, and I've been working solidly on this release for the past 3 months. Some of the new features are REALLY cool and I will try to blog about them (especially the automation/backend features provided by the latest version of the O2 FluentSharp APIs).

Tuesday, 25 September 2012

Every time it takes me 5 minutes to install Chrome on a new VM/EC2 instance, it reminds me how much IE's UI and security settings are a case study in what not to do (and this was on IE9, so a recent version of IE).

What I don't understand is why the Google Chrome team doesn't give us a couple of direct download options (chrome.exe, chrome.exe.txt, chrome.exe.zip), so that it is even faster/easier to start using it.

It is amazing how IE went from being the best browser (and my preferred one) to the one I very rarely use. These days you run IE9 on a server and Bing and Google don't work, WTF!!! I understand the security issues, and I would be happy if it started some really low-privilege process, but to break the internet!!! (I wonder how many websites work under IE Protected Mode.)

One of the best quotes I heard recently was "why does IE go out of its way to break the Internet?" :)

The only time I use IE these days is when I test TeamMentor against it, which is just another reminder of how much it sucks.

The way I look at it, every user has an emotional relationship with the tools/websites they use. The happier they are, the more forgiving of little things they will be. But they will only be happy if that tool/website actually adds value and helps with what the user is trying to do.

Saturday, 22 September 2012

The Trillions video is one of the most important videos I have seen over the last couple of years, and one that gave me a nice warm feeling that I'm doing the right thing with my O2 Platform development strategy.

A key message in the video and book is that to deal with new paradigms and systems, we need completely new strategies, approaches, tools and ideas.

And that is exactly what I'm doing with the O2 Platform. Instead of doing what just about every other security tools vendor is doing (i.e. trying to create a 'blackbox' solution with some customisation features on top), I'm creating an environment/platform where scripting and customisation are first-class citizens. In fact, most of the O2 Platform is already 'scripts', and the expectation is that, when facing the target application/website, the question is not 'do we really need to customise our technology/tools/approach?' but 'how fast can we customise our technology/tools/approach so that it actually represents reality?'.

It's the customisation time delta that matters, and of course, the faster that happens, the more we (and application security knowledge) will scale :)

Back to Trillions: I see application security (and its complexity) the way they see Trillions. Each node (from source code to an app's behaviour) is something that needs to be analysed, modeled, managed, controlled and (sometimes) fixed.

In fact, a business model that has yet to take hold in our industry is 'Security Tools/Technologies/APIs Customisation Services' (with clients paying for it and service companies providing it).

Even their name is really powerful, since MAYA means Most Advanced Yet Acceptable.
Finally, if you want to explain 'What Is An API?' to a non-developer audience, point them to MAYA's latest video on Containerization (I would love to have videos like this to explain how SAST, DAST and even O2 work :) )

As you can see in the screenshot above, to run it on the Mac you will need to have Mono installed (download it from here) and execute the exe using mono Util\ -\ Text\ Based\ C#\ REPL.exe or mono "Util - Text Based C# REPL.exe" (Mono doesn't seem to register the *.exe extension).

Although there is no code completion (the idea of this REPL is to keep it as simple as possible), a good number of the O2 FluentSharp APIs seem to be working OK.

For example, here is a script that downloads a new DLL (MarkdownSharp.dll) and consumes it:

Here is a reply I sent today to a 'Hey, how can I help/join/participate in the O2 Platform?' question:

Hey .... , I would love to have you help (or join) the O2 Platform project; there are a lot of areas where you can help :)

To start I would ask you to focus on your C# skills and get your head around O2's REPL scripting environment, VisualStudio Plug-In and Cat.NET integration. We can move on to the Eclipse Plug-in later on.

I really think that we can change/improve the way developers consume security knowledge, and since you understand SAST and 'Static Analysis Technology', you can help me with the development of the next version of the Cat.NET VisualStudio Extension (for example: adding GUIs for the rules, adding support for MVC frameworks, adding support for 'offline/out-of-process' scans (on the same box or in the cloud), etc...)

Btw, have you seen/tried the real-time scanner PoCs? They are an amazing learning tool for security vulnerabilities.