
It is beyond doubt that WinRT simplifies the development of our apps, especially when we have to access devices like webcams or gyroscopes. But is it possible to use these new APIs from our good old friend WinForms, or are they restricted to Windows Store apps?

I don’t know why Visual Studio does not allow you to use them out of the box, but don’t worry, because the procedure to enable that feature is quite simple. Just follow these steps:
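The steps boil down to telling MSBuild that the project targets Windows 8 and then referencing the Windows metadata. A minimal sketch of the project-file change (the property name and value below are the standard ones for a Windows 8 desktop project, but check them against your SDK version):

```xml
<!-- Unload the .csproj, edit it, and add this inside the first PropertyGroup -->
<PropertyGroup>
  <TargetPlatformVersion>8.0</TargetPlatformVersion>
</PropertyGroup>
```

After reloading the project, the Add Reference dialog gains a Windows tab from which the WinRT metadata (Windows.winmd) can be referenced.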

I’m one of those who always need to have the latest version of everything, regardless of the drawbacks that might entail. I’m, by definition, an early adopter. Having said that, it’s easy to figure out that my Surface RT was running Windows RT 8.1 Preview.

If the Store is still refusing to show the update, open a browser and go to this URI: ms-windows-store:WindowsUpgrade

That last URI should open the Store and show the update. Keep in mind that the process can take up to 2 hours and there is no indicator whatsoever of its progress. You can take a look at the Task Manager to see CPU and network usage and make sure something is really going on.

Hopefully, after that process you’ll have a Windows RT 8.1 RTM device ready to be used.

Windows 8 is Fast and Fluid. It boots much faster than its predecessors, but sometimes you just want to make it hibernate. Unfortunately, that doesn’t seem to be an option, because if you try, the option won’t be there.

Does this mean that Windows 8 doesn’t know how to hibernate? No. It means that the vast majority of users don’t give a damn about what’s going to happen once they press the power button. They trust the OS to do what’s best for them. However, those who know what hibernate means might be tempted to use it, so how can we enable it?

We’ll have to go to Power Options (you can search for it on the Start Screen under Settings, or execute powercfg.cpl):

From there we’ll get into “Choose what the power button does” and then “Change settings that are currently unavailable”.

That’ll allow you to modify the options, and there is one for hibernate. You only have to check it, and the next time you try to shut down your machine the option to hibernate will be present.
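If you prefer the command line, the same switch can be flipped from an elevated prompt with powercfg, the standard Windows power-configuration tool:

```shell
:: Run from an elevated command prompt
powercfg /hibernate on

:: ...and to disable it again later
powercfg /hibernate off
```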

I have to say that the vast majority of the time I just put my machines to sleep, but if you’re gonna be away from home for a while it might be useful to have the option 😉

I know it’s pretty simple to open Visual Studio and then launch the Windows Phone emulator, but let’s be honest… we’re pretty lazy, and sometimes it’s just convenient to launch it with just one click.

We could create a launcher on our taskbar by creating a shortcut to %ProgramFiles(x86)%\Microsoft XDE\8.0 and pinning it to the taskbar. However, we’ll have to tune it a little bit more: we have to change the target of the shortcut (from its properties) and set it to:
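The exact target from the original post is missing here. For the Windows Phone 8 emulator, the launcher is XDE.exe and it takes the path to the emulator image; a typical invocation looks something like the following (the VHD path is hypothetical and depends on where your SDK installed the emulator images):

```shell
"%ProgramFiles(x86)%\Microsoft XDE\8.0\XDE.exe" /vhd "C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\Emulation\Images\Flash.vhd"
```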

If you’ve used Remote Desktop you’ve noticed that the button to shut down or restart the computer in the Start Menu changes to Disconnect. This makes perfect sense, because the vast majority of the time you want to close Remote Desktop but let the remote computer keep working as usual. However, what if what you really want is to shut down the remote computer?

Windows Security

The first option is to invoke the Windows Security interface from the Start Menu, which is the one you’d get locally by pressing CTRL + ALT + DEL.

This interface allows you to shutdown, or reboot, the remote computer by pressing the buttons at the bottom right corner of the screen.

Command Line

Another option, less popular among home users, is the shutdown utility. While it gives you more flexibility (like deferred or remote shutdowns), it also requires the use of the command line.

The first step is to open the command line with administrative privileges.

The command to run would be: shutdown /s /t 0. /s requests a shutdown of the local machine and /t 0 means we want to wait 0 seconds for it, so: shut down now.
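The same utility covers the deferred and remote cases mentioned above; for instance (the machine name below is hypothetical):

```shell
:: Shut down the local machine immediately
shutdown /s /t 0

:: Reboot instead of shutting down
shutdown /r /t 0

:: Shut down a remote machine in 60 seconds
:: (you need administrative rights on it)
shutdown /s /m \\OFFICE-PC /t 60
```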

Anonymity vs. security

There is a widespread misconception that cryptographic systems achieve both security and anonymity. However, this is far from true. Cryptography has been used in systems whose aim is anonymity and security, but merely using some kind of cryptography guarantees nothing.

When two users, Alice and Bob, establish a communication, they want security to prevent anyone else from knowing the messages they are exchanging. They want anonymity to prevent anyone, inside or outside the conversation, from knowing the identity of the sender of a message.

The general consensus is to think of anonymity as the anonymity of the sender of the message; however, far too little attention has been paid to the anonymity of the responder. This might sound absurd, because the sender should know the identity of the responder in the first place, but this does not need to be true in all cases, like peer-to-peer networks, and responder anonymity is indeed achievable.

Proxies

Proxies were initially developed to solve technical problems when accessing the same resource from different clients and to hide the underlying technology from the clients. Since those days, proxies have evolved and adapted to the Internet, providing very different functionality.

One of the most common kinds of proxy on the Internet is the web-cache proxy. These are put in place by organizations and Internet Service Providers to save bandwidth by temporarily storing previously requested data.

The client wants to access some resource stored in the Target, but instead of connecting directly to the Target, it asks the Proxy to connect to the Target on its behalf. The first time the client asks for the resource, the Proxy has to connect to the Target and retrieve it. However, subsequent requests do not need to be served from the Target but from the Proxy, which saves bandwidth, assuming that the Client and the Proxy are closer than the Client and the Target.
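The caching behaviour just described is essentially memoization keyed by URL. A minimal sketch in Python (the `fetch` function stands in for the real request to the Target; names are illustrative, not from any real proxy implementation):

```python
# Minimal sketch of a web-cache proxy: the first request for a resource
# goes to the target; later requests are served from the local cache.
class CacheProxy:
    def __init__(self, fetch):
        self._fetch = fetch   # function that really contacts the target
        self._cache = {}      # url -> previously retrieved content

    def get(self, url):
        if url not in self._cache:          # first request: go to the target
            self._cache[url] = self._fetch(url)
        return self._cache[url]             # later requests: served locally

# Usage: count how many times the target is actually contacted.
calls = []
def fetch(url):
    calls.append(url)
    return f"content of {url}"

proxy = CacheProxy(fetch)
proxy.get("http://example.com/r")
proxy.get("http://example.com/r")  # second client asking: no new traffic
```

After the second `get`, `calls` still holds a single entry: the Target was contacted only once.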

Despite the simplicity of the idea, there is research suggesting that this kind of system is not efficient on current networks, due to the extensive use of dynamic content (the same resource is different depending on who is asking for it) and the increasing speed of today’s connections.

Anonymity as a side effect

As shown in the previous figure, when the client wants to access the target, it asks the proxy, and it is the proxy that accesses the target. This means the target is never contacted by the client but by the proxy, and therefore the identity of the client remains concealed.

Additionally, if two clients, A and B, contact the proxy for resource R, the proxy only retrieves R once, which means that the second client to ask for R will not generate any traffic from the proxy to the target.

For anonymity to be achieved this way, the proxy must not keep track of any requests, because it has all the information about who accessed what and when.

Tunneling

Tunneling is a technique in which one communication protocol is wrapped inside another. By itself, this technique provides neither anonymity, nor security, nor privacy; it is just a technical building block.

The original use of this technique was to run new protocols over systems that only allowed certain protocols. The new protocol could be wrapped inside the old one and then sent across the network without the old system noticing.

Circumventing firewalls

Beyond its use for backwards compatibility, tunneling can be used to circumvent certain types of censorship and security measures, like firewalls.

When the client requests a censored resource, the firewall blocks it as expected. However, if the request is tunnelled inside an accepted protocol, the firewall lets it pass through.

SSH tunnels

SSH stands for Secure Shell and was designed as a replacement for the outdated and insecure remote-login system telnet. SSH, however, soon superseded the functionality originally intended for telnet and is currently used for purposes well beyond remote login. One of these uses is tunnelling.

SSH sessions are protected cryptographically to prevent any observer from knowing the content of the messages being sent or even its type. When a protocol is tunnelled through an SSH session it is indistinguishable from a non-tunnelled SSH session. This behaviour allows circumventing firewalls.

In the figure, the Client wants to access some resource that is blocked by the firewall. However, outside the firewall there are free machines (machines not under censorship) that can access that resource.

The client then establishes an SSH connection to a free machine and creates a tunnel. All traffic between the Client and the free machine is encrypted, and the firewall knows neither its content nor its type, only that it is some kind of encrypted communication.
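With the standard OpenSSH client, such a tunnel is one flag away (the host names below are placeholders):

```shell
# Forward local port 8080, through the free machine, to port 80 of the
# blocked target. The firewall only sees an encrypted SSH session.
ssh -L 8080:blocked.example.com:80 user@freemachine
```

The browser is then pointed at http://localhost:8080 and the request travels to the target inside the SSH session.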

Almost two years ago I decided to learn a bit more about how the CLR manages reflection, anonymous types and so on. It turned out that helpful documentation about CLR internals was few and far between, so I started writing my own disassembler in an attempt to learn more. Today, I’m going to use that disassembler to circumvent Terraria’s security. Why? Because it’s fun 🙂

First things first: we want to be able to modify the abilities of our character (life, mana, objects, etc.). It’s quite obvious that this information is being stored under \My Documents\My games\Terraria\Players\; however, the file has been enciphered and any modification would result in a useless file. Therefore, the application must have a way to decipher it, modify it and then encipher it again.

I started by looking at the types. To be honest, I was expecting to find something like Terraria.Crypto; however, this is what I found:

Just looking at that, there is no obvious place to start. Instead of spending a lot of time blindly looking for anything related to cryptography, I tried something else. If the file that stores the player’s information is enciphered, the type Terraria.Player looks like a good place to put the code that deals with it:

The last two methods are really interesting: EncryptFile and DecryptFile. Both have two string arguments… the original file and the resulting file? Let’s test that assumption and execute DecryptFile over one of the saved players:

If everything has been successful, we should be able to read the saved player. Here is the difference; the left is the decrypted file, the right the original one:

That’s it. Now we can execute DecryptFile to get the info, modify what we want and then execute EncryptFile to encrypt it again. Obviously, the decrypted file is in a format we know nothing about… but c’mon, you can make some sense out of it.

For example, after every item there is a number (Int32) that represents how many items of that type the player has. Offset 0x12 is the current life (Int32), 0x16 is the maximum life (Int32), 0x1A is the current mana (Int32) and 0x1E the maximum mana (Int32).
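Those offsets can be read and patched programmatically. A sketch in Python, assuming the offsets above and the little-endian Int32 encoding that .NET’s BinaryWriter produces (the offsets may differ between Terraria versions, so treat them as illustrative):

```python
import struct

# Offsets taken from the post; Int32 values are little-endian, as written
# by .NET's BinaryWriter. They may vary between Terraria versions.
OFFSETS = {
    "current_life": 0x12,
    "max_life":     0x16,
    "current_mana": 0x1A,
    "max_mana":     0x1E,
}

def read_stats(data):
    """Read the four stats from decrypted player-file bytes."""
    return {name: struct.unpack_from("<i", data, off)[0]
            for name, off in OFFSETS.items()}

def patch_stat(data, name, value):
    """Return a copy of the bytes with one stat overwritten."""
    patched = bytearray(data)
    struct.pack_into("<i", patched, OFFSETS[name], value)
    return bytes(patched)

# Usage with synthetic bytes (a real file would come from DecryptFile):
blob = bytes(0x30)
blob = patch_stat(blob, "max_life", 400)
```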

OK, the process is manual and error-prone… or am I the only one who writes big-endian? But you can write a simple tool to perform these changes for you if you like 🙂

You might be a large corporation or a freelance developer, but if you’re in the software business you’re working for the end user, even if you don’t know a single one of them. And let me tell you something about them: they’re stupid, but they hate to realize it.

If you want to profit from your work, you have to make your users feel clever… or at least, not make them feel stupid or frustrated. A lot of users trade powerful software for crappy software that’s easy to use. I’m not saying technical quality is not important; I’m saying that if two different applications can do the job, users will use the one that makes them feel clever by doing the job with fewer irritating error messages and cryptic questions.

Why do so many users recommend using 32 bits?

Because they’re scared. They don’t know what 64 bits is, apart from being twice 32 bits… what they know is that some of their friends were using 64 bits and had to fall back to 32 because their applications weren’t working, or the computer was unstable, or… who knows what, but something wasn’t working.

But is that true, or is it just a legend?

I’m afraid it’s both. If we’re talking about Windows, 32-bit applications will work on a 64-bit Windows without issues… but (there’s always a but) 16-bit applications will not work, and neither will 32-bit drivers.

This means that if you’re running very old applications (MS-DOS and Windows 3.11, mainly) and you switch to 64 bits, you’re going to run into trouble. This is indeed a problem for businesses, but normal end users do not rely on ancient software. However, they rely on something equally bad: cheap and crappy hardware.

Windows provides drivers for the most common hardware, but it is likely that some bits of your system won’t be covered by the catalog Windows offers. That is where hardware vendors step in, providing drivers for your operating system. A lot of 64-bit Windows users sadly found out that their hardware vendors wouldn’t provide 64-bit drivers (welcome to the Linux world, where your hardware vendor will tell you “I don’t care”).

32 bits, 64 bits… who cares?

Let’s be honest: users don’t care if your application is using 64 bits, 32 bits or a giant wheel powered by a hamster. They want to do their job as fast as possible, knowing the minimum about the tool (ideally nothing), so they can move on to important things (like watching Big Brother or reading The Sun, but that’s another story).

However, as engineers we know that 64 bits is the right choice, because we’re using more and more memory every day (Chrome is actually eating memory) and relying on “tricks” like PAE is not a solution but a workaround.

The question is not whether we should move to 64 bits, but how.

Universal Binaries: The Holy Grail

The solution to having to know about the architecture and choosing the right binary is universal binaries. Apple used this solution when it switched from PowerPC to Intel processors.

The basic idea is to create a binary that works on both 32 and 64 bits by merging both versions into a single file. Besides this, due to the particular way applications are packaged on Mac OS X, applications need only one download for all architectures and all languages.

This is great for the user. It doesn’t matter whether he’s using one architecture or another, in English or in Russian… the same file is going to work smoothly and in the correct language.

The downside

Universal binaries are the holy grail because the user doesn’t need to know anything. However, they achieve this by including binaries for all supported architectures and languages within the same distribution, which means that the size of the application is multiplied by the number of architectures supported and increased by the number of languages included.

I have a 64-bit Intel processor and I use everything in English. Why should I download an application three times larger than expected (some of them ship 64-bit and 32-bit versions along with a PowerPC version) and with support for German, if I don’t have those processors and I can’t read German?

As an example, Google’s uploader for Google Music, called Music Manager, weighs 45 MB. After removing languages other than English and unused architectures (in this case, PowerPC support) the size is 11.7 MB. In other words, I have an application nearly four times the size it should be.

Fortunately, there is software, like Xslimmer, to strip the unnecessary architecture and language support from our applications. But take this with caution, because the procedure can cause some updating mechanisms to fail. Moreover, some applications won’t survive the process: on realizing their size has changed, they refuse to run, assuming they’ve been corrupted or tampered with. You’ve been warned.
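The architecture part of the stripping can also be done by hand with Apple’s lipo tool (the application path below is hypothetical):

```shell
# Show which architectures a universal binary contains
lipo -info "MusicManager.app/Contents/MacOS/MusicManager"

# Keep only the 64-bit Intel slice
lipo "MusicManager.app/Contents/MacOS/MusicManager" \
     -thin x86_64 -output MusicManager-thin
```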

Google Music allows you to store up to 7,000 songs. That’s it. No size limit, just 7,000 songs. Wouldn’t it be great if, somehow, you could upload regular files instead of just songs? Think about it: 7,000 files, of any size, stored for free!

To put it simply, files have two main sections: header and content. The header is a bunch of metadata that describes what kind of information the file contains (music, video, document…). This tool takes a real mp3 and uses its header, along with a little bit of its music, and then appends the file you want to store, making Google Music believe it’s a real music file.

Doesn’t it make my file bigger?

Yes, it does. Exactly 100kb more.

But, if the file seems to be an mp3, how the hell am I gonna use it afterwards?

Do not worry, this tool is also capable of “un-hiding” the mp3, so you’ll get the original file 😉

Will I have my data files and music files mixed in Google Music?

Yes and no. You will see all your uploaded data files inside the album “Data” of the artist “Google Data Upload”. You will be able to create playlists that mix data and real music files… but that would be a really weird thing to do, wouldn’t it?

How do I use the tool?

It’s a command-line tool. If you know nothing about the command line, you can either learn it or wait until I create a graphical tool to do it… which might be available tomorrow or never.

Fair enough. Let’s start by saying that my first attempt was to make use of the ID3 tags by injecting the desired file as the cover of the mp3. I’ve used that trick on different occasions to defeat some monitoring tools; however, Google Music looks at this kind of metadata and attempts to resize the cover. I’m guessing they do this to save space. Obviously it can’t resize a binary file, so instead of ignoring it, it deletes the metadata it does not understand, thereby making this trick useless for my purposes.

The current implementation relies on the fact that file headers can be stored at the beginning of the file (that’s why it’s called a header) but also at the end of it. The vast majority of formats have their headers at the beginning, but some of them, like zip files, have them at the end.

Google Data Uploader takes a regular mp3 file, keeps its header and some of its content (the first 100kb), and then appends the target file compressed in zip format. This makes the file structure look like:
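That merge step can be sketched in a few lines of Python (the 100 KB prefix size comes from the post; the function and file names are illustrative, not the tool’s real API):

```python
import io
import zipfile

MP3_PREFIX = 100 * 1024  # first 100 KB of a real mp3, valid header included

def merge(mp3_bytes, payload_name, payload_bytes):
    """Build: [mp3 header + ~100 KB of audio][zip containing the payload]."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr(payload_name, payload_bytes)
    return mp3_bytes[:MP3_PREFIX] + buf.getvalue()

# Usage with synthetic data (a real run would read an actual mp3 file):
fake_mp3 = b"ID3" + bytes(150 * 1024)
merged = merge(fake_mp3, "notes.txt", b"some secret data")
```

The result starts with a perfectly valid mp3 header, which is all the uploader checks.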

The tool also adds some information to the ID3 tags. To be more accurate, it sets the album to “Data”, the artist to “Google Data Uploader” and the title to whatever the name of the file was. This makes it easy to find your files within Google Music.

Google Music expects to find an MP3 file, so it starts looking at the beginning of the file. It finds a valid MP3 header and therefore decides to upload the whole thing.

Once the file is in Google Music, you can “play it”. It will play for a few seconds and then suddenly end as soon as it runs out of song and reaches the end of file (which is in fact the end of the zip file).

If you download the file (using the tool I posted before) you’ll find yourself with an mp3 slightly different from the one you uploaded: Google Music sends you the file without the ID3 tags. That’s not a problem, because we don’t need them anymore; all the information we care about from now on is inside the zip file, which remains unaffected.

You can use this tool (--unmerge option) to “split” the zip file from the mp3 and extract all its files, or you can just rename the file to “.zip” and open it with a zip program that doesn’t care about malformed files (like 7-Zip). This works because, under the assumption that the file is a zip, the program starts reading from its end, finds a valid header, continues with the content and reaches the “end of file” before it ever touches the mp3 data.
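Python’s zipfile module behaves the same way: it locates the zip’s end-of-central-directory record by scanning from the end of the file, so data prepended before the archive is simply ignored. A sketch of the unmerge step built on that (the function name is illustrative):

```python
import io
import zipfile

def unmerge(merged_bytes):
    """Extract the hidden payload from an mp3+zip file.

    zipfile finds the zip's end-of-central-directory record at the END of
    the file, so the mp3 data prepended at the start is never read.
    """
    with zipfile.ZipFile(io.BytesIO(merged_bytes)) as z:
        name = z.namelist()[0]
        return name, z.read(name)

# Usage with a synthetic merged file: fake mp3 prefix + a real zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("secret.txt", b"hello")
merged = b"fake mp3 data" + buf.getvalue()
```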

So you’re just taking advantage of a naïve implementation of Google’s uploader, aren’t you?

Definitely. This is an extremely easy trick that works just because Google is not checking the content of the whole file, assuming that if the header says the file is an mp3, it has to be an mp3.

Google Music is a new service from Google that allows you to store your music in the cloud and listen to it wherever you want 🙂 However, there is no option available at this time to download the songs you’ve previously uploaded.

I’ve created a simple app that allows you to download ALL the music you have on Google Music. It supports downloading all the songs, searches and playlists… mainly because downloading all the songs when you only want one is not very useful 😛

It is quite simple:

Open the app (just in case :P)

Login to Google Music (you’ll be asked, don’t worry)

Go to “Songs” or search something or go to a playlist

Click “Download those songs!”

Wait until it finishes and enjoy 🙂

Songs will be stored in the same directory under the name “<Artist> – <Title>.mp3”, where artist and title are the real artist and title of the song.
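Since artist and title come straight from the song’s metadata, they can contain characters that are illegal in Windows file names, so they have to be sanitized before writing. A sketch of that step in Python (the function name and the plain hyphen separator are illustrative):

```python
import re

# Characters Windows forbids in file names.
INVALID = r'[<>:"/\\|?*]'

def song_filename(artist, title):
    """Build "<Artist> - <Title>.mp3", replacing invalid characters
    with underscores."""
    name = f"{artist} - {title}.mp3"
    return re.sub(INVALID, "_", name)
```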

I’d love to hear from you and use your feedback to improve the app 😉

Updates

01/12/2011

Download link now points to the correct location… sorry about that

17/11/2011

UltraID3Lib replaced with TagLibSharp

11/11/2011

[Feature] Title, album and artist are now written to the file with ID3 tags using UltraID3Lib (Alex McChesney)

[FIX] Application now closes instead of running in background without notice (Alex McChesney)

[FIX] Invalid characters are now changed to underscores (Alex McChesney)

[FIX] When downloading multiple files with the same name, the application will now rename them to file(counter) instead of overwriting it all over again (Alex McChesney)