There is a new JavaScript obfuscation service (www.jscrambler.com) currently open for beta-test registration. Testing the available transformations might be an interesting thing to do :) so here is an enumeration of the features, transformations and techniques that can be found there:

Using for..in loops for objects is bad because it will break code whenever an object's prototype is modified, and it is trivial to decode. You use ternary operations to obfuscate numbers??? <hugeSarcasm>Yeah that's really good</hugeSarcasm>
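To illustrate the prototype problem (a minimal sketch of my own, not code from the service): for..in walks the prototype chain, so anything added to Object.prototype leaks into every enumeration.

```javascript
// Extending a prototype leaks the new property into every for..in loop.
Object.prototype.injected = 'surprise';

var obj = { a: 1, b: 2 };
var keys = [];
for (var k in obj) {
  keys.push(k); // picks up 'injected' from the prototype chain as well
}
console.log(keys); // own keys plus the inherited 'injected'

delete Object.prototype.injected; // clean up
```

Any obfuscated code relying on the enumerated keys matching the object's own keys silently breaks the moment a library touches the prototype.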

You might want to look at this:-
http://www.businessinfo.co.uk/labs/hackvertor/hackvertor.php#PEBoYXNlZ2F3YV8wKCKqwMHCw8TGyMnKy8zNzs%2FQ0dLT1NXW2Nna29zd3t%2Fg4eLj5OXm5%2Bjp6uvs7e7v8PHy8%2FT19vj5%2Bvv8%2Ff4kXyIpPmFsZXJ0KCdXYWtlIHVwIGFuZCBzbWVsbCB0aGUgbm9uLWFscGhhbnVtZXJpYyBjb2RlJyk8QC9oYXNlZ2F3YV8wPg%3D%3D

Gareth Heyes Wrote:
-------------------------------------------------------
> Using for..in loops for objects is bad because it
> will break code whenever a object prototype is
> modified

But when? After obfuscation or before obfuscation? If it is after obfuscation, forget it, because there is no point trying to change the object prototype after obfuscation. If it is before obfuscation, no worries, because the transformation you are referring to targets only DOM objects, and you are not able to mess with internal DOM prototypes so easily. Even if doing so worked for a specific browser, it would not work for all of them, and that is what I would call breaking the code, even before the transformation is applied.

So using for..in loops to access DOM properties, by enumerating the associative arrays that represent the content of DOM objects, is not condemned to fail because of that. It would fail more easily if the differences between the content of those associative arrays across all existing browsers were not taken into consideration.
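Roughly the kind of transformation being discussed, in a toy sketch of my own (resolveName and fakeDom are made-up names, not JScrambler's): a property is reached by enumerating an object's keys instead of naming it directly, so no literal property name survives in the source. The browser-difference caveat above is exactly that, for real host objects, the set and order of enumerated keys can vary between engines.

```javascript
// Resolve a property name by its position in the enumeration order
// instead of spelling it out as a literal.
function resolveName(obj, index) {
  var i = 0;
  for (var name in obj) {
    if (i++ === index) return name;
  }
}

// Stand-in for a DOM object; with a plain object the enumeration
// order is simply insertion order.
var fakeDom = { nodeName: 'DIV', innerHTML: '', onclick: null };
var prop = resolveName(fakeDom, 1); // 'innerHTML'
console.log(prop);
```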

Anything of that size, or anything using only one obfuscation transformation, is in most cases easily de-obfuscated. But using a set of transformations (more than one) that go further than what can be called polymorphic transformations, e.g. transformations that change the execution flow or the data structures, or even the introduction of anti-debugging techniques, makes the "is trivial to decode" argument disappear.
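A toy example (my own code, not JScrambler's output) of an execution-flow transformation: the straight-line body of a function flattened into a dispatcher loop, so the original statement order has to be reconstructed from the state transitions.

```javascript
// Control-flow flattening: sequential statements become switch cases
// driven by an arbitrary state variable.
function sum(a, b) {
  var state = 2, result;
  while (state !== 0) {
    switch (state) {
      case 2: result = a;  state = 5; break;
      case 5: result += b; state = 0; break;
    }
  }
  return result;
}
console.log(sum(3, 4)); // 7
```

On its own this is still mechanically reversible, which is the point of the post: it only starts to bite when stacked with other transformations.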

Transformations like that one produce the base for others to act on. Anyone who knows what obfuscation quality means knows that this particular transformation is not resilient enough on its own. Placing (as an example) hardly predictable variables where the literals sit in the ternary operations' arguments would do the trick.
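A toy sketch of what that could look like (my own construction, not JScrambler's actual transformation): the literal is guarded by an opaque predicate over a runtime value, so a naive constant folder cannot reduce the ternary. Here the predicate is always true because squares are never congruent to 2 mod 4, but that is not obvious from the source.

```javascript
// Hide a numeric literal behind a ternary whose condition depends on
// a runtime value yet always evaluates true (an opaque predicate).
var k = Date.now() % 100;               // unpredictable at obfuscation time
var n = ((k * k) % 4 !== 2) ? 42 : k;   // always 42, but not trivially foldable
console.log(n);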

I find it hard to believe when that is said so lightly, even more when a chance to try the solution has not been given yet. <sharingWisdom> A wise man once told me that we should not express our opinion as fast as we take a shit </sharingWisdom>. That is something that always comes to my mind when reading something like "it sucks.".

Ah yes, that might be a good consideration as well. Personally I have no clue if his obfuscation is any good, I don't feel very much at home obfuscating stuff, let alone de-obfuscating it, although I do know that trying to cloak JavaScript from an attacker is pretty much always a pointless pursuit. Not to mention your valid argument that it can break all sorts of things, plus some AV scanners might go berserk on a blob of random code that changes all the time. ;-)

Yeah but my point was for your jsdecoder:-
try{delete eval;}catch(e){}

How would you get round that? And then modifying the function inside in any way will break the payload. Of course the entire function should be obfuscated, but I was just making it clearer.

Two possible ways to auto decode it:-
1. Rewrite the function, then grab the hash of the x function. This could be made more difficult by using chained functions and closures.
2. Spy on String.fromCharCode etc.; if try{delete String;}catch(e){} returned the original function, this wouldn't work.
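A rough sketch of approach 2 (my own code): wrap String.fromCharCode so that every character the obfuscated payload assembles is recorded before being handed back.

```javascript
// Spy on String.fromCharCode: log what the decoder assembles.
var realFromCharCode = String.fromCharCode;
var captured = '';
String.fromCharCode = function () {
  var s = realFromCharCode.apply(String, arguments);
  captured += s; // record every decoded character
  return s;
};

// An "obfuscated" snippet decoding "hi" character by character:
var decoded = String.fromCharCode(104) + String.fromCharCode(105);

String.fromCharCode = realFromCharCode; // restore the native function
console.log(captured); // hi
```

Which is exactly why the `try{delete String;}catch(e){}` trick matters: if the payload can recover the native function before decoding, the spy sees nothing.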

Now if both functions check themselves by hashing themselves, it becomes difficult to decode. You could even change toString/valueOf to modify one character, thus breaking the hash. The decoder would have to specifically look for the toString/valueOf of a hashed function in order to remove it.
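A toy sketch of such a self-checking function (my own code, using a deliberately trivial hash): it hashes its own source via toString and bails out if the text has been rewritten.

```javascript
// Trivial string hash, stand-in for something stronger.
function hash(s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return h;
}

var expected = null;
function payload() {
  // Check our own source text against the recorded hash.
  if (expected !== null && hash(payload.toString()) !== expected) {
    throw new Error('tampered');
  }
  return 'secret';
}
expected = hash(payload.toString());
console.log(payload()); // secret
```

Rewriting the function body, or overriding its toString to return altered source, changes the hash and trips the check, which is why a decoder would have to hunt down and strip the check itself.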