Recently the subject of compression algorithms was raised on Twitter, and a number of new additions to the field were mentioned in the ensuing discussion. As I’m currently working on improving load times for a project, these newcomers piqued my interest and I set out to do some evaluation of them with “game-ish” assets.

I haven’t been able to test every algorithm I wanted to get to yet (new and old), and only have PC results to show so far – but I’m releasing the source so it can be extended as people desire. See the end of the post for a download link. Future algorithms on my list to add to the test suite are ZLib, lzham, LZMA, and a few variants thereof.

UPDATE April 24, 2011: LZ4 has been open-sourced, so I’ve added it to the test suite and updated the results. Charts have also been added to make comparison easier.

The test data I chose came mainly from the DirectX SDK samples, with a few files that were kicking around my hard drive for good measure. Highly scientific! I hope to do more realistic tests soon!

Tests were performed on a low-end Core 2 Duo laptop. Without further delay, here are the numbers.

While performance is often thought of as a programmer’s problem, the truth is that even some simple changes to how art is authored can have a drastic effect on a game’s performance. Unfortunately programmers suck at divulging much of this information – something I hope to remedy somewhat in this post. As with anything performance related, this isn’t an absolute list of rules to live by – some will depend on the architecture of the engine you’re working with, others on the particular scene, and yet others on the artistic look and style of the game. It’s also by no means comprehensive; I think this will likely spill over into another blog post. It’s best to discuss with your rendering team how (and if) each of these impacts your game.

Pixel Quads and Pixie Dust

A few days ago an artist asked me for information on specular behaviour for materials, to which I sent him a link to the post Everything is Shiny over on the excellent Filmic Games blog. While browsing the site, he came across the post Where are my sub-pixel triangles???, which discusses performance issues arising from small triangles in meshes. Amazed that he’d never heard of such issues, he came back and asked me if it really was true – and it is. It’s just never really discussed beyond the programming tracks at GDC, Gamefest, and so on. By the time it gets to artists it has probably been reduced to "make sure there are LODs", with no further details on what this means or why they need to exist. After all, LODs just reduce the vertex count – that couldn’t possibly affect the pixel shading performance! Right? It turns out that this is absolutely incorrect, and LODs in fact have a dramatic effect on pixel shading performance.

What. The. Hell. Yeah, it sounds odd. Your LOD has half the vertex count of the full-resolution mesh but still takes up exactly the same number of pixels on screen as the full mesh at the same distance from the camera – yet the pixel shading performance of the full mesh is much worse than that of the LOD?! This is because there is a trade-off in the hardware between the size of triangles and pixel shader performance. The trade-off varies between hardware, and in some cases can be very extreme, but it always exists.

Being a hardware trade-off this does mean that I have to explain a bit about how the hardware works when a triangle is rendered.

What is the MAGIC step? That depends on the platform, but the general formula is that each triangle is broken into 2×2 blocks of pixels called quads, and then some (potentially large) number of quads are run through the pixel shader at once. As the X360 has a lot of great documentation on this publicly available from Gamefest, let’s take that as an example (the PS3 isn’t all that different anyway).

This might start to ring alarm bells as to why small triangles are expensive. Take the worst case: a single 1-pixel triangle. For that one pixel you might as well be executing the pixel shader 64 (2×2×16) times! Even if you have 64 single-pixel triangles – enough quads to fill a full pixel vector – you will still have at best 25% pixel shading efficiency due to the 2×2 quads. To put this another way, the lighting, texture filtering, and all-round shiny-awesomeness of the pixels could be FOUR times as complex if we avoid these cases. There’s nothing that anybody but the artist creating the mesh can do about this either (other than a programmer running some ugly automatic decimation algorithm on the mesh in the pipeline, which I’ve never seen any artist happy about – for good reason). You won’t be able to make a perfect mesh, but it’s important to make a good one.

This can be distilled down to a simple ratio for easy comparison: vertex to pixel quad density. Ask a rendering programmer to show you how to figure this out for your art, or see the presentations listed in the References section at the end of this post.
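
To make the quad cost concrete, here’s a rough back-of-the-envelope sketch. The function name and the 16-quad vector size are illustrative (from the X360 discussion above), not any particular GPU’s guaranteed numbers:

```python
def quad_shading_efficiency(covered_pixels, quads_touched):
    """Fraction of pixel shader invocations doing useful work.

    GPUs shade whole 2x2 quads: every quad a triangle touches costs
    4 pixel shader invocations, no matter how many pixels it covers.
    """
    return covered_pixels / (quads_touched * 4)

# A single 1-pixel triangle touches one quad: 1 useful pixel, 4 shaded.
print(quad_shading_efficiency(1, 1))    # 0.25

# 64 single-pixel triangles still occupy 64 quads (4 full 16-quad
# vectors), so efficiency is stuck at 25% even with the vectors full.
print(quad_shading_efficiency(64, 64))  # 0.25

# A large triangle covering 64 pixels in 16 fully-covered quads.
print(quad_shading_efficiency(64, 16))  # 1.0
```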

A few other random stats pulled from various Gamefest presentations:

Triangles that cover 2×2 pixels waste 60% or more of the pixel shading performance

Going from 2×2 pixel triangles to 4×4 can increase the pixel shading performance by 33%

Overdraw Overload

The cost of overdraw is proportional to the size of the screen and the speed at which the platform can write to memory. It may come as a surprise to many, but the PS2 actually performed better in this area than the X360 and PS3 (or even most modern PC graphics cards). As such, overdraw is significantly more expensive for this generation of consoles than it was for the previous (at least, on the PS2). We are saved somewhat by more flexible hardware with optimizations to fight the cost of overdraw, but those unfortunately don’t help when the overdraw consists of alpha-blended geometry (particles are obviously the worst for this).

So uh… that’s about it – overdraw is still a terrible problem for alpha-blended geometry, so try to reduce the number of layers of overdraw as much as possible.
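
As a rough illustration of why layers matter, here’s a back-of-the-envelope estimate of frame buffer traffic. The function and its assumptions (4 bytes per pixel, a read plus a write for each blended layer) are illustrative only, not any console’s real numbers:

```python
def fill_cost_estimate(width, height, blended_layers, bytes_per_pixel=4):
    """Very rough frame buffer traffic (bytes) for one frame.

    One write per pixel for the opaque scene, plus a destination read
    AND a write per pixel for every layer of alpha-blended overdraw.
    """
    opaque = width * height * bytes_per_pixel
    blended = blended_layers * width * height * bytes_per_pixel * 2
    return opaque + blended

# 720p with 4 layers of alpha-blended particles over the opaque scene:
# the particles alone cost 8x the traffic of the whole opaque pass.
print(fill_cost_estimate(1280, 720, 4))  # 33177600 bytes (~31.6 MB)
```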

Light, Shadow and Their Dark Interactions

Low-angle light directions are the enemy of efficient shadows, as they require far more objects to be classified as shadow casters for a frame than otherwise. The easiest way to look at this is that the length of the shadow an object casts directly relates to how expensive it will be to shadow. This isn’t about the actual cost of calculating the shadow for the particular object, but about the cost of including it in the shadow calculations: the longer a shadow an object can cast, the further outside the current camera’s view it must still be evaluated, since it could be casting a shadow onto something that is visible to the camera.

It doesn’t help that these low-angle light directions make for the most interesting lighting environments of course, so a compromise will probably have to be reached between the art and programming teams.
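
The relationship between light angle and caster count can be sketched with simple trigonometry. This is a hypothetical helper assuming a directional light and flat ground, just to show how quickly the numbers grow:

```python
import math

def shadow_length(object_height, sun_elevation_deg):
    """Length of the shadow cast on flat ground by an object of the
    given height, lit by a directional light at the given elevation."""
    return object_height / math.tan(math.radians(sun_elevation_deg))

# A 10m tree: high midday sun versus a low evening sun.
print(shadow_length(10.0, 60.0))  # ~5.8m
print(shadow_length(10.0, 10.0))  # ~56.7m -- roughly 10x the reach,
                                  # so far more off-screen casters to consider
```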

Death by Draw Calls

Everyone hates unnecessary bookkeeping; it’s red tape in the way of getting things done, and pulls you away from what you really want to be doing. Generally speaking, if you have different materials (shaders, textures, lighting values, etc.) applied to different parts of a mesh, it will have to be split up by the engine and each piece rendered individually. When you have multiple meshes on multiple objects split into multiple parts, you’re cutting down the amount of useful work that the GPU and engine can do, as they’re having to do a lot of bookkeeping instead.

Try to use texture pages wherever possible, UV some parts of a mesh to a black part of the specular texture rather than a separate material with no specular, and weld together any meshes that can be.
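
The bookkeeping argument above boils down to grouping by material. A toy sketch (the tuple-based scene representation is made up for illustration – your engine’s will differ):

```python
from collections import defaultdict

def batch_by_material(mesh_parts):
    """Group mesh parts sharing a material key, so each group can be
    submitted as one draw call instead of one call per part.

    mesh_parts: list of (material_key, part_name) tuples -- a stand-in
    for whatever the engine's real scene representation is.
    """
    batches = defaultdict(list)
    for material, part in mesh_parts:
        batches[material].append(part)
    return dict(batches)

parts = [("metal", "hull"), ("glass", "canopy"),
         ("metal", "wing_l"), ("metal", "wing_r")]
# 4 parts, but only 2 draw calls once batched.
print(batch_by_material(parts))
```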

Summary

This is about half of the items that I wanted to cover in this post, so it’ll definitely be covered further in my next blog post. But for now that should be plenty to chew on, and certainly covers a lot of the bigger issues affecting performance of art.

Personal projects are an often overlooked part of a professional career – be it games, embedded, or even web development. There’s only so much you will be exposed to in an office environment, where you can lean on others for areas that aren’t your specialty. But when it comes to personal projects, you have to be a jack of all trades.

This is the reason I ask about hobbies and personal projects of those who I interview and those I interview with (interviews go both ways, remember!). It’s not something I ask because I want to know if you’ll be my buddy – even though that is a very important factor in selecting coworkers, in my opinion – but because it is one of the clearest indicators of a curious and active mind, and that trait is very important. An engineering team with curious and active minds is the cornerstone of a solid development process. They are the ones who will be the most forward-looking, identifying the potential pitfalls ahead and working to avoid them now, because they’re thinking. They will be versatile, because they’re used to it. They will actively improve the codebase, systems and procedures, because they want to enjoy what they work with. Great work is not achieved by punching a clock, and someone who is curious will work on the problem because they want to solve it. This isn’t to say that I like curious coworkers because they do lots of overtime – my opinion on overtime is not positive at all, quite the opposite in fact. I don’t really care what hours you do if you’re doing a great job in that time and not slacking off. The unfortunate reality of the games industry is that you will be doing overtime, and that overtime will make up for any short days taken. If you’re making no progress on a problem I’d much rather see my team take off early and relax than sit around bored and frustrated – then when the solution finally comes you’ll be fresh and happy to work hard on doing a great job rather than burnt out and doing just a good job.

Of course that isn’t to say that if you don’t have any personal projects on the go you don’t have a curious and active mind – only that it’s a clear indicator in an interview; I know plenty of great engineers who don’t fit this profile, and there’s nothing wrong with that. There will be things that you’ll miss out on though, because often there isn’t time in the schedule to bring someone up to speed on a new area, meaning that if you don’t already know it, you won’t get the opportunity to learn. This isn’t for any malicious reason; it’s simply the reality of schedules.

There’s no reason that a personal project has to be directly related to your daily work. If it was, that’d be little more than homework. Not much fun there if that’s all you do. For instance, my personal projects are usually a mix of embedded (electronics and low-level programming), Android (mobile programming and OpenGL), and the occasional work-related R&D. What counts is that you’re pushing yourself to learn, and having fun doing it.

I want to start out by apologising for a lack of pretty graphics in here, which is a little odd for a post about visual quality. The answer as to why is simple though – I’m currently typing this on my laptop, watching as the Windows 7 system recovery progress bar loops on the screen of my main PC (uh… yeah, it’s behaving a little oddly. But that’s another topic entirely).

Ok, now that you know you’re in for a lot of monotonous text, let’s get on with it!

Gamma correct rendering may sound like a simple enough concept at first, but doing it correctly can be very challenging – especially once you throw hardware variations into the mix. Possibly worse yet is that it is something you must keep in mind throughout development, and educate your teammates about. Or you can ignore the issue completely and live with the consequences, but it will come back to bite you down the road. Repeatedly.

First some definitions:

<Gamma/sRGB/Linear> space
The curve mapping raw data values to represented values for the data that you’re working with (could be bytes, floats, or an RGB triple). Linear is easy as it’s an exact match. Gamma and sRGB on the other hand are not identical, and instead define a curve that gives more raw data values to the lower ranges of the represented values compared to the higher ranges. For the purposes of this post, we’ll call gamma and sRGB the same thing (though they aren’t necessarily, as sRGB refers to a very specific curve, while gamma can be anything).

To gamma a texture/value
To convert from linear space to gamma space. This will result in raw linear values (the same as their represented values) of 0.2 going to raw gamma values of ~0.5 (but still representing 0.2). The exact values depend on the curve of the gamma space that you convert to of course.

To degamma a texture/value
The opposite of the above; to convert from gamma space to linear space by applying the inverse of the gamma curve. So, with the above example you would get back the original linear raw value of 0.2 from the gamma value ~0.5.
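
As a concrete sketch of these two definitions, here are the standard piecewise sRGB encode/decode curves (function names match the post’s terminology, not any particular API):

```python
def degamma(v):
    """sRGB-encoded value in 0..1 -> linear, via the piecewise sRGB curve."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gamma(v):
    """Linear value in 0..1 -> sRGB-encoded."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# A linear 0.2 encodes to roughly 0.48 in sRGB -- the ~0.5 from the
# example above -- and the round trip recovers the original value.
print(gamma(0.2))           # ~0.484
print(degamma(gamma(0.2)))  # ~0.2
```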

So what does it really take to be gamma correct? There are 3 primary areas of concern: the pipeline, the shader, and the render target.

The Pipeline and Tools

Some of the source data that you’re given will be in gamma space, and other source data will be in linear space. The pipeline has to know what it has, and what it should do with it. Easier said than done, as this requires metadata to be present – either artist-set or automatically set based on usage. Having artists specify how everything should be interpreted is obviously the easiest choice when faced with having to shoehorn gamma-correct behaviour into an existing pipeline without the backend architecture to support it, but it does carry the consequence that user errors will be abundant.

If you’re going to perform any processing on textures in the pipeline (resizing, mipmap generation, blending the edges of cubemap faces, etc), the operations must be done in linear space. The gotcha is that you must operate with as much precision as possible throughout this process to avoid quantization issues from the conversion to linear space and back again. This usually means converting to a floating point texture immediately, and only converting back as the very last step. Yes, you’ll probably have issues achieving this with some of the external libraries you use. So go and modify them too (and diverge from what’s in SVN, making taking updates that much harder). Fun stuff.
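
A minimal sketch of why the order of operations matters here, using a pure 2.2 gamma curve as a short-hand approximation of sRGB (a real pipeline would use the exact curve and float textures throughout):

```python
def downsample_2x(pixels_srgb):
    """Average adjacent gamma-encoded pixel pairs in *linear* space.

    Degamma first, average, then regamma as the very last step.
    """
    linear = [v ** 2.2 for v in pixels_srgb]
    averaged = [(a + b) / 2 for a, b in zip(linear[::2], linear[1::2])]
    return [v ** (1 / 2.2) for v in averaged]

# Averaging black and white directly in gamma space would give 0.5
# (too dark); averaging in linear space gives the correct mid-grey.
print(downsample_2x([0.0, 1.0]))  # [~0.73]
```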

The final conversion to gamma space after you’ve done your processing may also bite you due to hardware variations. If you’re lucky you’ll only be targeting platforms that have proper support for sRGB – but many are not so lucky, and as such will be in for a world of fiddly pain. But at least it’s documented now, which has only happened in the last couple of years. Extremely fun stuff.

The Shader

Everything you do in a shader should be in linear space. Simple.

There are states that you can set on the various platforms to automatically convert textures when sampled from gamma space to linear space, but these states do live in different places for different platforms.

The Render Target

The joys of hardware variations will strike you severely here, and throw a spanner (wrench for those of North American heritage) in the works.

Frame buffers are usually stored in gamma space and you output linear space values from the pixel shader. Thus, blending the output of the pixel shader with the frame buffer can be done in linear space (correct) or gamma space (incorrect) depending on the platform. DX9 and the PS3 will do it incorrectly, but DX10+ and the X360 will do it correctly.
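
The difference can be demonstrated numerically. A toy sketch using a 2.2 gamma approximation rather than the exact sRGB curve:

```python
def blend_linear_then_encode(src_linear, dst_linear, alpha):
    """Correct: blend in linear space, then encode the result."""
    return (alpha * src_linear + (1 - alpha) * dst_linear) ** (1 / 2.2)

def blend_in_gamma_space(src_linear, dst_linear, alpha):
    """Incorrect (DX9/PS3-style): encode first, then blend the encoded values."""
    return alpha * src_linear ** (1 / 2.2) + (1 - alpha) * dst_linear ** (1 / 2.2)

# 50% white over black: blending the gamma-encoded values comes out
# visibly darker than the correct linear-space blend.
print(blend_linear_then_encode(1.0, 0.0, 0.5))  # ~0.73
print(blend_in_gamma_space(1.0, 0.0, 0.5))      # 0.5
```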

Here’s a spoiler: not everything should be gamma corrected at every step. But what does that mean? Why is that? Well, that’s exactly what this post is about!

The simplest rule for whether a texture should be treated as linear or not in the pipeline is this: if it was authored by an artist painting with values that give the look they want on screen (“I want this blue for the sky”), it should be treated as being in gamma space. Everything else should be linear. Except when it shouldn’t be. Crap.

Oh, and that means that vertex colours should also be treated as being in gamma space. Except when they shouldn’t be. Double crap.

What are these exceptions? Take a lightmap as an example; according to the above rule, it should be in linear space – and it should. However, it’s not uncommon to have very dark lightmaps, and when stored in linear space this will result in excessive banding in dark areas. If the same lightmap was stored in gamma space and converted to linear space only when it was sampled in the shader, you would have far more precision (and thus less banding) in the dark areas. The trade-off here is reduced precision in the light areas, but generally that’s less noticeable thanks to human vision being more sensitive to variations in darks than lights (undoubtedly to see creatures with big fangs lurking in the shadows). A trade-off is a trade-off though, and it’s not always what you want. Crytek, for instance, uses a metric along the lines of: if at least 15-20% of an image has values less than 96, the image is stored in gamma space; otherwise it’s stored in linear space – but that’s based on little more than what works for them.
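
The precision argument can be checked by counting how many 8-bit codes land in the darks under each storage choice. A sketch using a 2.2 gamma approximation; the 0.05 “dark” cutoff is arbitrary, just for illustration:

```python
def codes_in_dark_range(decode, dark_limit=0.05, bits=8):
    """Count distinct 8-bit codes whose decoded *linear* value falls
    below dark_limit, for a given decoding function."""
    levels = 2 ** bits
    return sum(1 for i in range(levels) if decode(i / (levels - 1)) < dark_limit)

identity = lambda v: v            # texture stored in linear space
decode_gamma = lambda v: v ** 2.2 # texture stored in gamma space

print(codes_in_dark_range(identity))      # 13 codes for the darks
print(codes_in_dark_range(decode_gamma))  # 66 codes -- ~5x less banding
```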

There are still cases when you don’t want to do this though, such as for normal maps, as the value range has absolutely nothing to do with brightness.

What about other things? If an artist picks a tint colour for a material, it has been selected in gamma space and thus should be converted to linear space. Same goes for vertex colours that are used similarly. Fog colours too. Oh, and colours a player picks for their character’s clothes. Seeing a pattern here? There’s a lot more to being gamma correct than just textures, and it impacts a lot of people on the team who probably don’t even know what gamma correction is.

That’s why it’s a difficult area to get right.

I’m hopeful that the next generation of consoles will address the remaining issues and allow us to be entirely gamma correct across all platforms, consistently.

A quick heads up that I’ve moved this blog from Blogger to WordPress. For the most part this change should be quite seamless to the readers (other than a different look), however please bear with me while I wait for the search engines to update as they’re currently pointing at old files. Until then, there’ll be some 404s if you get here via Google/Yahoo/Live/etc searches.

I’ve redirected the old RSS feed URLs to their new locations, so hopefully no feed readers will be broken by this change. If there are a lot of 404s coming from bookmarks and links to the old files, I’ll add redirects for those too.

If you notice anything broken or weird about the site however, please leave a comment and I’ll take a look!

After 6 weeks, the PCBs I ordered from BatchPCB have arrived – which was disappointingly slow even for BatchPCB. Their site gives a “worst-case” (US) delivery time of 18 days I believe, however the boards weren’t even posted until day 21, and took that long again to arrive in my mailbox here in Canada. I expected an additional 1-1.5 weeks on top of the US delivery time as Canada Post is pretty terrible, but 3 weeks for a small envelope? Ugh.

As I mentioned in my previous post, I ordered 3 copies of the same board so I’d have room to screw up a few times without having to start over with another PCB order – and that was a good thing, as one of the three boards had several manufacturing defects. The most noticeable of the defects is a large area on the bottom layer that did not have any soldermask applied to it, and thus was coated in solder like any other pad would be. Thankfully the defect only touches a component’s pad that is also connected to the ground plane (where the defect is) and is at the edge of the board, so it shouldn’t require scrapping the board. The other defect was soldermask over some of the pads, which required scraping off with a scalpel.

While neither of these defects is fatal for the board, they do show a severe lack of quality control. Combined with the long delivery time, I think it’s probably going to be enough for me to cut my losses and choose a more local PCB manufacturer next time, or at the very least try somewhere else. BatchPCB is cheaper (well, no, actually it isn’t), but nowhere near cheap enough to put up with such low quality. Depending on how cheap I’m feeling at the time, and how likely the PCB is to be working, I think I’ll either go for AP Circuits as they’re in Alberta, the next province over (http://www.apcircuits.com), or OurPCB, which is another Chinese PCB manufacturer with awesome capabilities for their prototype PCBs, like 5mil traces/space (http://www.ourpcb.com).

The next step is to order some parts that I’m missing from DigiKey, as I’d been putting it off until the boards arrived. Not that I meant to do that – I simply forgot to place the order. Of course, now a few of the parts that I need are out of stock for a few more weeks. I might place the order anyway and include some nearly-equivalent parts to use in the mean time, and then place another order when the correct parts are back in stock and rework the board when those arrive.

Up until now I have been making my PCBs at home (which you can read about in my previous posts), with varying levels of success. The issues mainly arose due to my heavy use of tiny surface-mount devices (SMDs), and obsession with using the smallest board possible for a design. Whenever I’m laying out a SMD board, I can’t help but think that doing so requires the designer to be at least somewhat obsessive/compulsive due to the high level of concentration required for extended periods of time. If you can shut out the rest of the world and get lost in the process, it’s easy.

Anyway, I can’t see myself moving away from the tiny-SMDs, tiny-PCBs mentality, as anything else feels like I’ve let myself down – whether due to being a waste of material/space/effort or for whatever other reason. This leaves me with two issues that have been causing me a lot of grief in my projects – transferring a design with 0.2mm (slightly less than 8mils, or 8/1000″) traces and trace spacing onto PCBs, and soldering components with equally tiny pins/pads without any soldermask on the PCBs. The first issue hasn’t been all that bad to deal with as it just takes more time and effort to get a good transfer, but the second issue has caused far too many boards to be scrapped. The worst part is that they get scrapped after I’ve already invested hours in them through the artwork transfer, etching, drilling, inserting vias, and soldering components.

That brings me to the subject of this post – I’ve finally decided to give a commercial PCB manufacturer a go with my latest (and greatest, of course) board. I don’t want to give away the design’s purpose yet, so I’ll just discuss its components in a non-specific way for now.

The board consists of:

44 pad QFN (7x7mm)

36 pad QFN (10x10mm)

28 pad QFN (8x8mm)

28 pad QFN (5x5mm)

8 other smaller SMD ICs (most with no leads)

Around 40-45 0402 capacitors

Around 30 0402 resistors

Around 10 0603 and 0805 capacitors

A few dozen miscellaneous SMDs like ICs, connectors, etc

The PCB is 85x60mm, double sided

For a nice picture of the sizes of some of these components, check out Curious Inventor’s guide to Surface Mount Soldering. That little speck near the tip of his little finger is an 0402 component – and I have 70-80 of those to solder!

I decided to do a bit of shopping around as this was going to be my first attempt at having a PCB commercially manufactured – not so much to find the best price, but to find a service where I could verify that my design was acceptable rather than have it rejected a few days later. This led me to try two services: Advanced Circuits, thanks to their Free DFM site, and BatchPCB (SparkFun Electronics), also thanks to their DFM bot.

Advanced Circuits

I was quite impressed by the quality of Advanced Circuits’ FreeDFM.com service, since it handled my board on my first go at running it through the verifier, and the errors (on my PCB) it provided me with were understandable and gave me everything I needed to fix them. I’m not sure if I did something wrong, but I just couldn’t seem to get a good price from FreeDFM’s automatic quote for my design unfortunately, so I was scared off by the prospect of paying a couple hundred dollars for a few boards of a design that I had no way of knowing would work. I’ll look at them again should I need to produce at least 10 boards, but it simply won’t be worth it otherwise.

BatchPCB

BatchPCB’s verifier script is unfortunately cruder than Advanced Circuits’, and many times it returned PHP errors while I was just trying to upload my design. Once that was taken care of, I was confronted with a lovely error of “aperture 36 too small: 0.0056″ (or similar). Now seriously, what are they thinking? You don’t know what aperture 36 is – it’s something the PCB program makes up when exporting the files for the PCB manufacturer. The only way to know what it’s talking about is to manually open the files, find a line similar to “%ADD36C,0.0056*%”, change the size to something huge so it’s recognisable on the board, view it in a program like ViewMate, and narrow down the problem from there. This particular error turned out to be due to a library part I was using being designed with rounded SMD pads (rather than the standard square pads) – modifying the library to use square pads fixed the problem right up.
After that was out of the way, BatchPCB did provide me with an additional error that FreeDFM did not tell me about, but I think that may have been due to FreeDFM having slightly less restrictive requirements for board layout. The final price from BatchPCB? $20 per board + $16 shipping (I ordered 3 boards, so I have a few spares just in case).
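
If you hit the same kind of aperture error, the hunt through the Gerber files can at least be scripted. A quick sketch – the %ADD pattern assumed here covers common circle/rectangle aperture definitions, and the 8mil threshold is just an example, not BatchPCB’s actual limit:

```python
import re

def find_small_apertures(gerber_text, smaller_than=0.008):
    """Pull aperture definitions (%ADDnn<shape>,<size>...) out of a
    Gerber file and flag any whose first dimension (in inches) is
    below the given minimum."""
    pattern = re.compile(r"%ADD(\d+)([A-Z]),([\d.]+)")
    flagged = []
    for number, shape, size in pattern.findall(gerber_text):
        if float(size) < smaller_than:
            flagged.append((int(number), shape, float(size)))
    return flagged

sample = "%ADD36C,0.0056*%\n%ADD37R,0.0600X0.0600*%\n"
print(find_small_apertures(sample))  # [(36, 'C', 0.0056)] -- the offender
```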

So there we go, it did take nearly a day of fiddling and Googling, but I have my first PCB order under way. As BatchPCB works as an aggregator for a Chinese PCB manufacturer, I’m not expecting to see the boards for another month due to shipping times – but maybe I’ll get a pleasant surprise.

It’s been a few weeks since my last post, but rest assured that I’ve been quite busy working on more projects!

Homemade PCB

The home-made PCB in the previous post turned into a bit of a disaster when I tried to take the toner off the traces with a Scotch-Brite pad. It turns out that brute force and abrasive materials aren’t the best way to remove toner from 0.2mm traces. Who’d have thought? So I’ve etched the board again, and took the opportunity to revise the design slightly while I was at it, and this time tried acetone (well, nail-polish remover) to remove the toner. Acetone works much, much better than Scotch-Brite pads.

In the new revision of the board, I made the following changes:

VIAs have a larger diameter for the pad-area as my drilling wasn’t terribly accurate and I destroyed many VIAs on the original board.

Clearance around many VIAs was improved to reduce the risk of solder bridges.

Added my name and URL to the top layer (it was already on the bottom layer).

Now all I have to do is get it soldered without any short circuits, and that’s proving to be a big challenge!

V7 Navigation 1000 Menu Button

Shortly after posting the ‘M’ button fix with the steps required if you’d done the USB-port hack in another of my posts, I realized that it was entirely unnecessary to do any hardware modifications to fix the bug – rather you could do it all with just Explorer and a little back-and-forth with the SD card between your GPS and PC.

Anyway, life kept me from posting an update to that, but thankfully someone else has saved me the time! If you head on over to the always-awesome instructables.com, you will find a nice article by dadefatsax which explains all the necessary steps. Unlock V7 Navigation 1000 GPS.

A coworker has been raving for a few weeks about how easy and quick it is to make PCBs (printed circuit boards) at home using what is commonly called the “toner transfer” method. Given that the only other way I’ve used to make PCBs at home (using photo-resist coated boards and a UV exposure box) had pretty terrible results and that I wasn’t too eager to drop money on an untested circuit design to get it commercially made for me, it seemed worth the effort to have a go for myself.

Tom Gootee describes the method in incredible detail (if a little disorganized) on his website, Easy PCB Fabrication. I also bought a laser printer for this purpose (well, I wanted a laser printer, and this was a good excuse…) – I decided to go for a Brother HL-5250DN with 1200×1200 DPI resolution, as I didn’t feel that a 600×600 DPI printer would be able to reproduce the SMD pads and fine traces that I would be using (0.2mm! That’s less than 8mils for those of the non-metric persuasion, or about 4 ‘dots’ at 600 DPI). I didn’t diverge very far from Tom’s instructions as this was my first attempt (ok, second – the first failed to transfer the toner completely to the PCB), with the exception of spending quite a bit more time ironing the design to make sure that the toner transferred this time.

I also picked up an etching kit, heater, and supplies from a local electronics store. Originally I was a bit annoyed when I opened the box to find a regular Tupperware container, an aquarium pump, and a piece of plastic to (poorly) hold the PCB while it etches – however I probably couldn’t have picked up the items individually for much cheaper anyway, so it wasn’t so bad. I was hoping that the kit would contain a nice tank with graduations marked for the volume of etchant, or at least depth. Instead, the ‘tank’ was quite large – far larger in fact than I’ll ever need – which also means that it requires a lot of etchant to fill it enough to cover even a small PCB. As I did not know this at the time, I picked up only a single 500ml bottle of Ferric Chloride (a common etchant) and as a result had to dilute it 1 part FC to 3 parts water in order to get it to just over the height of my PCB.

At this point I was having serious doubts that I’d see any success as I hadn’t seen anything in any of the PCB making tutorials online about diluting FC. As it turns out, there was no problem other than it taking longer to etch, about 20 minutes I believe (but I wasn’t looking at the clock). I’m sure that the heater and pump helped greatly with this, and without them I probably wouldn’t have seen any results for a much longer period.

So what were the results like for my crazy-tiny traces and SMD pads? Outstanding! Considering 0.2mm is the minimum size for many commercial PCB manufacturers, and even below the minimum for some, I’m very surprised by how well even the tightest areas of the PCB turned out. To the left you can see a closeup of a MLF32 footprint IC with 0.2mm traces and spaces.

The bottom layer of the PCB has a few quite large holes in it, but I think that was mainly due to the fact that I didn’t spend much time ironing that side during the toner-transfer step as it didn’t have many fine traces on it. The holes didn’t break any traces, so that’s fine.

Those who have a V7 Navigation 1000 have probably noticed that the Menu (M) button doesn’t actually go to the menu. It doesn’t do much of anything actually (other than behave as “up” for a few menus), which is quite disappointing since there are only really 3 buttons you can use on the front (excluding the backlight on/off button) and the other two you will usually want to keep at their default functionality (zooming in and out).

Oh hey, what do you know – if you followed the instructions in my last blog entry, you will have access to change this!

Plug the V7 Navigation 1000 into a USB port on your computer and turn it on.

Browse to \Flash Disk\MyGuide\ in Explorer (on the PC, not the GPS).

Copy the 1.3MB Data.zip to your PC’s hard drive.

Rename the Data.zip on the GPS to Data.orig (as a backup).

Open the Data.zip that you copied to your PC and extract config\keybind.txt from the archive.

Open keybind.txt in Notepad and scroll down to the [CARPOINTA1000] section.

Below this, add the following line: UP=”ROUTEINFO”. The section will now look like this:

[CARPOINTA1000]
UP=”ROUTEINFO”
39=”ZOOMIN_DISCRETE”
37=”ZOOMOUT_DISCRETE”

Save keybind.txt and replace the original keybind.txt in Data.zip with your edited keybind.txt. Make sure that you actually replace the file and that it goes into the correct directory of Data.zip.

Copy the updated Data.zip back to the GPS (put it back in \Flash Disk\MyGuide\).

Done and done! You can now start MyGuide and the Menu (M) button will behave just like going to Route > Info.

You can actually replace ROUTEINFO with many other useful commands, of which MAINMENU is a good choice (as it is obviously the intended behavior of the Menu button, but I found ROUTEINFO to be more useful personally). Have a look through keybind.txt to get an idea of what other commands you can try.

Of course, if you don’t use the zoom in/out buttons, you can change the bindings on those too.