
22 January, 2011

Today I want to write a bit about deferred rendering and its myths. Let's go...

1) Is deferred good?

Yes, deferred is great. Indeed, you should always think about it. If by "deferred" we mean doing the right computations in the right space, then deferred shading is "just" an application of a very general technique. We routinely make these kinds of decisions, and we should always be aware of all our options.

Do we do a separable, two-pass blur or a single pass one? Do we compute shadows on the objects or splat them in screen-space? What do I pass through vertices, and what through textures?

We always choose where to split our computation in multiple passes, and in which space to express the computation and its input parameters. That is fundamental!
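The first of those questions can be made concrete with a toy example. This is a minimal sketch (plain Python, not shader code) of why the separable choice matters: a box blur of radius r costs (2r+1)² taps per pixel in one pass but only 2·(2r+1) taps split into a horizontal and a vertical pass, and for separable kernels the result is identical.

```python
def blur_1d(img, r, horizontal):
    # one-dimensional box average along a row or a column, clamped at borders
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for o in range(-r, r + 1):
                yy, xx = (y, x + o) if horizontal else (y + o, x)
                if 0 <= yy < h and 0 <= xx < w:
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def blur_single_pass(img, r):
    # the brute-force version: a full 2D window per pixel, (2r+1)^2 taps
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for oy in range(-r, r + 1):
                for ox in range(-r, r + 1):
                    yy, xx = y + oy, x + ox
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def blur_two_pass(img, r):
    # same result, 2*(2r+1) taps per pixel: the "right split" of the computation
    return blur_1d(blur_1d(img, r, True), r, False)
```

Same output, a fraction of the work: that's the kind of decision we make all the time.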

Deferred shading is just the application of this technique to a specific problem: what do we do if we have many analytic lights in a dynamic scene? With traditional "forward" rendering the lights are constant inputs to the material shader, and that creates a problem when you don't know which lights will land on which shader. You have to start creating permutations, generating the same shader with support for different numbers of lights, then at runtime see how many lights influence a given object and assign the right shader variant... All this can get complicated, so people started thinking that maybe having lights as shader constants was not really the best solution.
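The permutation bookkeeping can be sketched like this (all names here are hypothetical, just to make the idea concrete): pre-generate one variant per supported light count, then at runtime count the lights touching each object and pick a variant.

```python
MAX_LIGHTS = 4  # hard cap baked into the permutation set

def compile_variants(max_lights):
    # stand-in for compiling the same shader with NUM_LIGHTS = 0..max_lights
    return {n: "material_ps_%dlights" % n for n in range(max_lights + 1)}

def sphere_overlap(a, b):
    # bounds as (x, y, radius); a cheap light-vs-object intersection test
    (ax, ay, ar), (bx, by, br) = a, b
    return (ax - bx) ** 2 + (ay - by) ** 2 <= (ar + br) ** 2

def pick_variant(variants, obj_bounds, lights):
    touching = [l for l in lights if sphere_overlap(obj_bounds, l)]
    n = min(len(touching), MAX_LIGHTS)  # clamping silently drops lights: a visual error
    return variants[n], touching[:n]
```

The `min()` clamp is exactly the pain point the text describes: coarse per-object assignment forces you to cap and drop lights.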

Bear with me. Let's say that you have a forward renderer that assigns lights to objects; it's working, but you're fed up with it. You might start noticing that it works better if the objects are not huge and you can cap the maximum number of lights per object. In theory, the finer you can split your objects the better: you don't have too many lights overlapping a given pixel, maybe 3-4 maximum, but when the objects are large compared to the lights' areas of influence things start to be painful.

What would you start thinking? Wouldn't it be natural to think that maybe you can write the indices of the lights somewhere else, not in the pixel shader's constants? Well, you might think of writing some indices to your lights into the pixels themselves... and here comes Light-Indexed Deferred Rendering.
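Light-indexed deferred, in miniature (a toy sketch on a 1D "screen", not real GPU code): a first pass rasterizes each light's footprint, writing its index into a small fixed-size per-pixel list; the shading pass then reads the list back.

```python
WIDTH, MAX_PER_PIXEL = 16, 4  # a 16-pixel screen, up to 4 light indices per pixel

def build_light_index_buffer(lights):
    # lights: list of (first_pixel, last_pixel, intensity) screen footprints
    buf = [[] for _ in range(WIDTH)]
    for idx, (x0, x1, _) in enumerate(lights):
        for x in range(max(0, x0), min(WIDTH - 1, x1) + 1):
            if len(buf[x]) < MAX_PER_PIXEL:  # fixed storage, like packing into an RGBA8
                buf[x].append(idx)
    return buf

def shade(albedo, buf, lights):
    # the material shader fetches its lights through the per-pixel index list
    return [albedo[x] * sum(lights[i][2] for i in buf[x]) for x in range(WIDTH)]
```

The key point: light assignment happens per pixel, not per object, so no permutations and no per-object clamping.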

Let's say, on the other hand, that in your forward renderer you really hated creating multiple shaders to support different numbers of lights per object. So you went all multipass instead: first you render all your objects with ambient lighting only, then for each extra light you render the object again with additive blending, feeding that light as input.

It works fine, but each and every time you're executing the vertex shader again and recomputing the texture blending to multiply your light with the albedo. As you add more textures, things really become slow. So what? Well, maybe you could write the albedo out to a buffer and avoid computing it so many times. Hey! Maybe I could write all the material attributes out, normals and specular too. Cool. But now I really don't need the original geometry at all: I can use the depth buffer to get the position I'm shading, and draw light volumes instead. Here comes the standard deferred rendering approach!
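That evolution, sketched in the same toy 1D-screen form: pass 1 writes the material attributes once per pixel (the G-buffer), pass 2 loops over lights reading those attributes back, so the geometry is never touched again.

```python
WIDTH = 16

def gbuffer_pass(objects):
    # objects: (first_pixel, last_pixel, depth, albedo); nearest depth wins,
    # so attributes are written exactly once per visible surface
    depth = [float("inf")] * WIDTH
    albedo = [0.0] * WIDTH
    for x0, x1, z, a in objects:
        for x in range(x0, x1 + 1):
            if z < depth[x]:
                depth[x], albedo[x] = z, a
    return depth, albedo

def lighting_pass(depth, albedo, lights):
    # one "light volume" draw per light, accumulated additively, no geometry needed
    out = [0.0] * WIDTH
    for x0, x1, intensity in lights:
        for x in range(x0, x1 + 1):
            if depth[x] < float("inf"):
                out[x] += albedo[x] * intensity
    return out
```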

So yes, you should think deferred. And make your own version, to suit your needs!

p.s. it would probably have been better to call deferred shading "image-space shading". It's all about the space the computations happen in. What will we call our rendering the day we combine virtual texture mapping (clipmaps, megatextures or whatever you wanna call them) with the idea of baking shading in UV space? Surface caching, I see. Well, it's ok; nowadays people call globals "singletons" and pure evil "design patterns", you always need to come up with cool names.

2) Deferred is the only way to deal with many lights.

Well if you've read what I wrote above you already know the answer. No :)

Actually I'll go further than that and say that nowadays, now that the technique has "cooled down", there is no reason for anyone to be implementing pure deferred renderers. And if you're doing deferred, chances are that you have a multipass forward technique as well, just to handle alpha. Isn't that foolish? You should at the very least leverage it on objects that are hit by a single light!

And depending on your game multipass on everything can be an option, or generating all the shader permutations, or doing a hybrid of the two, or of the three (with deferred thrown in too). Or you might want to defer only some attributes and not others, work in different spaces...

But it's not only about this kind of optimization. You should really look at your game art and understand what the best space to represent your lights is. The latest game I shipped defers only SSAO and shadows, and I experimented with many different forms of lighting, from analytic lights to spherical bases to cubemaps. It ended up using a mix of everything...

3) Deferred is an alternative to light maps.

Not really.

Many artists came to me thinking that deferred is the future because it allows you to place lights in realtime. Well, I'm telling you: with CUDA, distributed processing and so forth, if your lightmap generation is taking ages and your artists can't get decent feedback, it's a problem with your tools, not really with the lightmap technique per se.

Also, deferred handles a good number of lights only if they are unshadowed point lights. So either you get a game that looks like some bad Phong-shiny 3D Studio 4 DOS rendering, or your artists have to start placing a lot of lights with "cookies" to fake GI (and back to the 90ies we are).

Or we will need better realtime GI alternatives... like computing light maps in realtime... Still, light maps are not bad: for static scenes, or the static part of your scene, they still make a lot of sense and they always will, as they are an expression of another of these general techniques: precomputation, a.k.a. trading space for performance.

4) Deferred lighting is better than deferred shading.

Depends. The usual argument here is that deferred shading requires more memory than deferred lighting, thus more bandwidth, and thus is slower, because deferred is bandwidth limited. It turns out that's a mix of false statements with some questionable ones.

First, it's false that deferred shading uses considerably more memory. Without going into too many details: usually deferred shading engines use four RGBA8 rendertargets plus the depth buffer, plus a rendertarget to store the final shaded result. Deferred lighting needs at least an RGBA8 (normals and specular exponent) plus depth for the lighting pass. The lighting needs to be stored in two RGBA8s to have comparable quality (some squeeze it into a single RGBA8, some use two RGBA16s; it really depends on the dynamic range of your lights), plus you need the buffer for the final rendered result. So it's basically six 32-bit buffers versus five, not such a huge difference.
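Spelling that count out as arithmetic (a quick sanity check; all buffers here are 32 bits per pixel, 1080p chosen just as a convenient example resolution):

```python
BPP = 4  # bytes per pixel for an RGBA8 target or a 32-bit depth buffer

# deferred shading: four G-buffer MRTs + depth + final buffer
ds_total = 4 * BPP + BPP + BPP        # 24 bytes/pixel

# deferred lighting: normals+spec, depth, two lighting buffers, final buffer
dl_total = BPP + BPP + 2 * BPP + BPP  # 20 bytes/pixel

pixels_1080p = 1920 * 1080
ds_mb = ds_total * pixels_1080p / 2**20  # ~47.5 MB
dl_mb = dl_total * pixels_1080p / 2**20  # ~39.6 MB
```

So roughly 47.5 MB versus 39.6 MB at 1080p: a difference, but hardly the dramatic gap the usual argument implies.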

Second, more memory does not imply more bandwidth. And here is where things start to be variable. Both methods are basically overdraw-free in the "attribute writing" passes, as you can sort your objects front to back and use a bit of z-prepass (or even a full one, if you want to compute SSAO or shadows in screenspace at that point). The geometry pass in deferred lighting is also basically overdraw-free (as the hi-z has already been primed). So what really matters is the lighting pass. Now if you have a lot of overdraw there, you are in danger. You can decide whether to be bottlenecked in the blend stage (i.e. on PS3 with deferred lighting) or in the texture one (deferred shading). Or you can decide to do the right thing and do the "tiled" variant of deferred rendering.
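The "tiled" variant, in the same toy 1D-screen form used above (a sketch of the idea, not a compute-shader implementation): bin lights into screen tiles once, then every pixel shades only against its tile's short light list, paying neither blending overdraw nor a loop over all lights.

```python
TILE = 8  # pixels per tile

def bin_lights(width, lights):
    # lights: (first_pixel, last_pixel, intensity) footprints in screen space
    tiles = [[] for _ in range((width + TILE - 1) // TILE)]
    for idx, (x0, x1, _) in enumerate(lights):
        for t in range(max(0, x0) // TILE, min(width - 1, x1) // TILE + 1):
            tiles[t].append(idx)
    return tiles

def shade_tiled(albedo, tiles, lights):
    out = []
    for x, a in enumerate(albedo):
        lit = 0.0
        for i in tiles[x // TILE]:    # only this tile's (culled) light list
            x0, x1, intensity = lights[i]
            if x0 <= x <= x1:         # precise per-pixel test inside the tile
                lit += intensity
        out.append(a * lit)
    return out
```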

Last but not least, deferred might not actually be bandwidth limited. I've seen more than one engine where things were actually ALU bound. And I've seen more than one engine struggling with vertex shading, thus being limited by the two geometry passes deferred lighting has.

And I'm not alone; in the end it really depends on your scene, on your platform and on your bottlenecks. Deferred lighting is a useful technique, but it's not a clear winner over deferred shading or any other technique.

5) Deferred lighting gives you more material flexibility.

Nah, not really. It's true that you have a second geometry pass where you can pass per-vertex attributes and do some magic, but it turns out that the amount of magic you can perform with the lights already computed and fixed to the Phong model is really little. Really little.

Also consider that deferred lighting comes with a fundamental flaw: blending together specular contributions. Many implementations also allow only a monochromatic "specular light".

Now, there are some lighting hacks that work better with deferred lighting and some others that work better with deferred shading, but in general both techniques decouple lights from objects "too late". They do it at the material parameter level, that's to say deep into the BRDF. In the end all your materials will use the very same shading model, minus some functions applied to it via lookup tables.

At the opposite end is the light-indexed technique, which decouples lighting from materials as "early" as possible, that's to say at the light attribute fetching stage. Can something sit in a middle ground between the two? Maybe we could encode the lights in a way that still allows BRDF processing without needing to fetch the single analytical light attributes and integrate them one at a time? Maybe we could store the radiance instead of the irradiance... Maybe in an SH? I've heard a few discussions about this and it's in general impractical, but recently Crytek managed to do something related to it in CryEngine 2 to express anisotropic materials.

6) Deferred lighting works better with MSAA.

Yes. Sort-of.

Unless you write per-sample attributes, no deferred technique really works with MSAA without some care when fetching the screen-space attributes: that might be a bilateral filter, the "inferred lighting" discontinuity filter or other solutions. This ends up being the "preferred way" of doing MSAA and it's applicable to everything. And many nowadays don't MSAA at all and do a postfilter, like MLAA, instead.

Even if you just do shadows in screen space, as for example the first Crysis does, you will end up with aliasing if you don't filter (and Crysis does, but the shadow discontinuities are not everywhere in the scene, so it's ok-ish).

Now, with DirectX 10.1, or with some advanced trickery on previous APIs, you can read individual MSAA samples and decide where to compute shading per sample (instead of per pixel). That means that you will need to store all the samples in memory and keep them around for the lighting stage, as these attributes are not colors and can't be blended together in a meaningful way.

This enables you to compute and read per-sample attributes at discontinuities, and this is where deferred lighting has an advantage: the attributes that go into the lighting stage are packed in just a single buffer, so storing them per sample requires the same amount of memory as your final buffer (in fact the memory can be shared with your final buffer, as you won't need these attributes after the lighting stage), and the lighting buffer can be MSAA-resolved, as lighting blends properly.

Doing the same with deferred shading would be a bit crazy, as you would need to store per-sample attributes for four buffers. It is possible, even if really not ideal, to do a "manual" MSAA resolve on the G-buffer (thus not keeping all the samples for the lighting stage), where you do standard MSAA averaging for the albedo and use the nearest (to the camera) sample for the rest, and it somewhat works.
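That "manual resolve" is simple enough to sketch (here with two samples per pixel): average the albedo samples, but take the nearest-to-camera sample for depth and normal, so the lighting inputs don't blend across a silhouette.

```python
def manual_gbuffer_resolve(samples):
    # samples: list of (depth, albedo, normal) MSAA samples for one pixel.
    # Albedo is a color, so averaging it is meaningful; depth and normal
    # are not, so we keep the sample nearest to the camera instead.
    nearest = min(samples, key=lambda s: s[0])
    avg_albedo = sum(s[1] for s in samples) / len(samples)
    return nearest[0], avg_albedo, nearest[2]
```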

Update: What they don't want you to know! :)
To close this article, I'll put here some tips and less talked about things about deferred techniques. One advantage of both deferred lighting and shading is that you get cheap decals (both "volumetric" and "standard" ones) as you don't have to compute lighting multiple times, the decals lie on a surface, so you can fetch the lighting you've already computed for it.
That of course means that the decal will need to change the surface's normals if needed blending its own, so you don't really get two separate and separately lit layers but it's still a great way to add local detail without multitexturing the whole surface...
If you think about it a second, it's the very same advantage you have with lights, you don't need to assign them per mesh thus potentially wasting computations on parts of the mesh that are not lit by a given light or need to split the mesh in complicated ways.

Also, it is neat to be able to inject some one-pixel "probes" into the G-buffer here and there, and have lighting computed on them "for free" (well... cache thrashing and other penalties aside) for particles, transparencies and other effects; see for example the work Saints Row 3 did.

Another advantage, especially relevant today (with tessellation...), is that you can somewhat lessen the problem of non-full pixel quads generated by small triangles (and the absence of quad fusion on GPUs). This is especially true of deferred shading, as it employs a single G-buffer pass that is taxing on bandwidth but less so on processing (or... it could be that way). In deferred lighting you can achieve nearly perfect culling (i.e. with occlusion queries generated during the first G-buffer pass) and zero overdraw in the second geometry pass, but you still have the quad problem...

Which, unfortunately, will also affect what AMD calls "Forward plus" (which should be called light-indexed deferred instead, as that's what it is... there is still research to be done there, on how to compress light information per tile, avoiding having to store a variable light list and so on... AMD did not do much there, really).

I've always, always hated Linux. I've never even really loved the opensource movement too much: I really like the idea, but in practice most projects end up being tech-oriented piles of "features" with no clear design or user target in mind. It's somewhat ironic, even if totally reasonable, that most of the really usable, great opensource projects are usually derived from or inspired by commercial applications (i.e. Netscape - Mozilla - Firefox) or backed by big corporations.

But still, I've always been interested in it, and how can you not be... An OS that you can hack freely: it's a sexy idea for every programmer.

So from time to time I give Linux a chance, every couple of years let's say, only to be frustrated by it.

At the beginning you needed to recompile the kernel for almost everything. Then came the time when you could not find a driver for almost any non-obsolete peripheral. Then it was the KDE vs Gnome era; both were ugly and slow and crashed every few minutes, so it was really a tough choice between the two. For a while I was into the live-CD craze, with Knoppix and all the other distributions. They were never really useful to me, but I still managed to burn dozens of DVDs with them (we're talking of times before cheap and big USB keys... when people burned CDs!), and it was a way to keep an eye on the progress being made...

And then came... Ubuntu. A serious project, with money behind it and a focus. And everything apparently changed.

I'm seriously starting to think that Linux made it. This might not be a surprise for all of you who work with VIM and love GCC and so forth, but for me, and I bet many others, it's news.

I've installed Jolicloud, a netbook-oriented, webapp-centric Ubuntu spinoff, on my two netbooks (an ancient EEEPC 901 and an HP Mini 1000) and I'm impressed. My girlfriend is currently away for work and she brought the EEEPC with her. She is not a computer geek (at all) and she is even more conservative than me when it comes to changing user interfaces and trying not to be annoyed by technology, and she is liking Linux...

I didn't have to download any driver. I didn't have to download applications from websites. I have a centralized point for the updates. I have a great interface, with great font rendering. Performance is great. It does not crash.

To be totally honest, I did find a small glitch. The USB installer provided through the website did not work, and the more updated one... was stored on the ISO itself. Just plain dumb. But after extracting it, everything went smoothly.

It's the closest thing to having a Mac without paying for it! A friend of mine also suggested having a look at MeeGo; it looks cool from the screenshots but I haven't tried it yet. Already the idea of having too much choice between distributions is starting to worry me :)

I also tried Ubuntu on an oldish Acer, but it didn't work too great and I've uninstalled it. As you might have understood from the tone of this post, I'm really not into fiddling with the machine, so I didn't try to benchmark the system or fix it; it might be something specifically wrong with that machine, or just that the default Ubuntu is too heavy for it.

A key to Jolicloud's success is also that it specifically supports given hardware (netbooks, and it has a long list of compatible ones), so it can be successfully optimized for that target.

Could it be that an OS built on a mediocre kernel and outdated, server-centric concepts has evolved so much as to really be a viable alternative for consumers? And not as an embedded thing, but on PC hardware, competing with Windows? Incredible, but true.

I'm a believer. Now if only they had a decent IDE... Or if photoshop ran on it...

14 January, 2011

Unsurprisingly, the game that got most of the "best graphics" awards also won our poll: God of War III, with 28% of the votes. I don't have a PS3 (couldn't give money to Sony after they gave me such a bad SDK... and with that ugly design... now I actually like it, and I'm considering buying one) so I could not play it much, but I see its greatness.

Frostbite comes second with Battlefield: Bad Company 2, at 15%.

Then, surprisingly, there is a tie between Mass Effect 2 and Red Dead Redemption. I say surprisingly because in my opinion Red Dead is one of the best-looking games ever; I finished it mostly because I enjoyed the vistas so much. Mass Effect 2 is an awesome game, but graphically it's pretty standard Unreal Engine stuff (though very well used, especially in the cutscenes... proving also that dropping frames is not that important during cutscenes if your visuals are nice).

Among the iOS titles, Carmack clearly wins over Sweeney (3% vs 1%).

All the other games follow with just a few votes each. I didn't expect Kirby's Epic Yarn to be more popular than Black Ops among developers (6% vs 4%), Starcraft 2 deserves its 7%, and I'm also happy to see that I'm not the only one who thinks NBA 2K is not such a great achievement graphically (only one person voted for it).

---

Originally I had in mind another poll for this post. Something more technical, about programming languages... But a year just ended, all the gaming websites are running best-of-2010 specials, and I've been tempted to do something similar. Also, this year something happened while I was playing a game...

For me, games and programming have pretty much always been part of the same experience. I started on a Commodore 64 that belonged to some cousins of mine, playing games. I was a kid, around seven or eight, and my mother didn't like the idea of me playing videogames too much, so she asked one of my older cousins to teach me programming...

Now, even if I've been doing those two things for a while, on a professional level they are very distinct. I'm a rendering programmer, and I never really cared too much about the game I was making, other than trying to achieve the best visuals possible.

That's not to say that I do it on purpose; it just happened that I've always worked on game genres that were not what I like to play, and that was never a problem for me. In general I don't care too much about the overall product; I would try to help other game areas for sure, but it's not quite the same thing.

And I always thought I would be fine even working on a game mediocre in terms of gameplay, if it was striving to be the best it could be in terms of graphics. Even when I look for a job I rarely care whether a given company makes games that I enjoy or not...

...That is, at least until this year, when I played Modern Warfare 2. I have to say, this game hooked me so much with its single-player campaign that after finishing it I went straight to check if Infinity Ward had some suitable openings. It's just that amazing.

So my 2010 poll is the following: which game (among the ones shipped in 2010) would you like to have been part of (rendering-development-wise)?

As always, I'll post the results on the blog. You can vote using the widget on the right of this blog page.

12 January, 2011

DxO Labs is a company devoted to digital camera and lens testing software. Their DxOMark is a standard for lens testing, and they surely know their business.

So you might imagine that when you read an article on a well-respected website, as Luminous Landscape is, telling you that your beloved (and expensive) digital camera is cheating you, it's something to worry about.

The article is "An Open Letter To The Major Camera Manufacturers" by Mark Dubovoy. The thesis is that digital sensors, having their photosites somewhat inside a "tube", are shielded from some of the peripheral light rays (think of a honeycomb grid on a flash), so very fast lenses hit this limit and become "darker". Thus the camera manufacturers hacked around the problem by silently raising the ISO sensitivity with such lenses.

This is interesting and all, but it sounds strange. The graphs they gave Dubovoy seem to depend only on the f-stop and not on the design and focal length of the lens (the latest data is here). If the problem were the angle of the light rays, wide-angles should be more affected than teles.

Anyway, I had to test. So I bought an 85mm f/1.2 and did some shots. I first replicated the test I saw here, checking that the aperture does indeed change the shape of the circle of confusion. But how do we know that the wide-open aperture is indeed as wide as it should be? Well, I did a simple test: I shielded the lens contacts with some tape so the camera does not recognize the lens anymore. The result? Same brightness (even if I didn't shoot the images at exactly the same angles, due to having to move the camera to tape the lens), and surely not the half-EV drop that DxOMark predicted.

Myth busted (at least with an 85mm f/1.2; maybe I should try the 35mm f/1.4 too, just to be sure). Note: that doesn't mean that there is not, or could not be, some light loss (although it would be easy to measure for your camera: just compare an image exposed to a given stop via shutter speed versus the same exposure done by opening the aperture, on a taped lens), but that, at least for my camera, there seems to be no ISO cheating that I could detect.
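For reference, the arithmetic behind that sanity check: exposure scales as 2^EV, so one stop of aperture should exactly double the light, and a hidden half-stop "T-stop" loss would show up as roughly a 0.71x brightness ratio between the shutter-matched and the aperture-matched shot.

```python
def brightness_ratio(ev_delta):
    # relative brightness change for a given EV delta: one stop = factor of 2
    return 2.0 ** ev_delta
```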

I thought that I did post already this little experiment, but I didn't find it on the blog and I had to dig into my old sources...

1) Identify edges. (From color? From normals and depth? In practice I've found the latter to be better.)

2) Fit a primitive to the edges, find its parameters. A line? A line centered on the pixel? A curve? Most algorithms fit one of the first two. MLAA, for example, finds lines from the discrete edge detection, and the key of the algorithm is that it is able to find lines even if they are nearly vertical (or horizontal). These lines have long steps when rasterized, and are hard to detect because you have to "look" far away from the pixel you're considering. MLAA uses pattern matching to achieve that, and that's also why it does not fit the GPU too well (at least DX9-ish ones).

Simple edge detection is too local: it won't know if a horizontal discontinuity is a horizontal line or part of a line at a nearly horizontal angle. Also, it won't know how long that line is, so it won't know for how long the line approximation holds; in practice it will work decently only on curves and organic surfaces. This can be remedied by searching a larger neighborhood, but it's not too simple and of course it costs performance...

3) Blend along the primitive. Either identify a "foreground" and a "background" color and blend between the two using the coverage of the primitive over the pixel considered, or smooth by integrating (sampling) along the primitive. MLAA does the former; many post-filters do the latter (or a generic isotropic blur) to avoid computing the exact integral of the fitted line through the pixel.
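Steps 1 and 3 can be sketched in a toy form (plain Python on tiny lists-of-lists, not shader code): detect discontinuities from a depth buffer, then smooth only the flagged pixels. This deliberately skips the primitive-fitting of step 2 and uses the cheap isotropic blur mentioned above; real filters fit a line first and weight by its coverage.

```python
def detect_edges(depth, threshold=0.1):
    # step 1: flag both pixels of any neighbor pair whose depth difference
    # exceeds the threshold (depth works better than color in my experience)
    h, w = len(depth), len(depth[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < h and xx < w and abs(depth[y][x] - depth[yy][xx]) > threshold:
                    edges.add((y, x))
                    edges.add((yy, xx))
    return edges

def blend_edges(color, edges):
    # step 3 (crude version): isotropic cross-shaped average on edge pixels only
    h, w = len(color), len(color[0])
    out = [row[:] for row in color]
    for (y, x) in edges:
        acc, n = 0.0, 0
        for dy, dx in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                acc += color[yy][xx]
                n += 1
        out[y][x] = acc / n
    return out
```

Interior pixels are left untouched, which is the whole point: the filter only spends work (and only blurs) where geometric discontinuities were found.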

Now, how you do all that defines your post-AA filter. MLAA is currently the most popular algorithm (especially on PS3, where you can use all those SPUs... there is a GPU version, but it doesn't seem suited to run on the 360 GPU) and it seems to be almost the "only" choice nowadays, but it's quite possible to do something decent even with more "conventional" filters.

I started experimenting with this a while ago, to improve the PS3 2x quincunx resolve (which is commonly used "against" a 4x MSAA on 360, as the PS3 is way slower at MSAA).

This idea of mixing MSAA and edge filtering is not new; this paper from ATI explores it and it's a very interesting read (its citations, like this one, are a good inspiration too).

It's important to be able to do this on the samples before the MSAA downscale, because otherwise you'll lose information, and it's harder to identify geometric discontinuities while preserving detail in the already (hopefully) antialiased interior shaded regions.

Also, as you're running your post effect on the full HD framebuffer, it's fundamental to be as fast as possible. With that in mind I started experimenting with the goal of doing the simplest filter that still looked good. This is what I've ended up with (beware, it's not shader code but Adobe Pixel Bender stuff):

The results are not too bad... As you can see, MLAA is vastly better on straight, clean lines (see the bench) but it's actually a fair bit worse on curves/complex surfaces (see the leaves) and tends to mess up/blur things that change direction often (see the metal rods on the left, above the zombie head). Maybe it's a matter of tuning; I used the original MLAA sourcecode from Intel.

11 January, 2011

Intro: I've always liked the analogies between the art of computer programming and the classical arts. Many have been made by various authors, but the most famous are surely the essays "Hackers and Painters" by Paul Graham and "The Cathedral and the Bazaar" by Raymond.

And it's not about some sort of self-indulgent flattery, as being proficient in some scientific field is still seen as less culturally valid than having expertise in one of the "classical" ones. No, the fact is that I really believe that coding is an amazingly creative process and that we are still really bad at it. We should learn more from the other creative arts; it's striking how badly "engineered" our process is.

A painter gathers references (papers), draws studies (exploratory programming), then starts drawing (design), roughing in blocks and detailing areas (top-down coding). All this while keeping great control over his vision, for most of the work he does... can we say that we do the same when coding our average tasks?

Now, most of this has to do with the process itself, with our cumbersome workflow. It's harder for us to learn, to become proficient in these skills to the level that artists reach, because our process is so far removed from our products. We don't interact with our creations, nor is it as easy to visually assess their qualities.

I've written about this in some older posts, and I'm a strong advocate of live-coding and visualization techniques, but that's not what I wanted to write about today (even if I already wrote quite a bit)...

Tourism:

Today I want to focus on another aspect: how we learn our art. How did you learn to program? I guess the story is the same for everyone: we probably started by reading a book or studying a language, then we did some exercises maybe, and we started to... program. Then more reading, more programming, maybe talking to other people, working together, sharing ideas.

All this is fine, but aren't we missing something? I mean, you might bet that artists share a similar education: maybe they learn about techniques, maybe they have some mentors or teachers, and they start drawing. True. But they also do another thing. They leverage their history. They study the masters. They visit cities, look at the architecture, visit galleries, get inspired by other people's work, or by their surroundings.

I started thinking about this while reading "the ultimate code kata"; I bet when you started coding you did look at other people's source code. Maybe you got involved in a few projects, started tweaking stuff, reading and understanding other people's ideas. But it's something we often quickly dismiss once we get a bit more experienced.

Why? Well, one of the reasons is that on average we think too highly of ourselves, but another is that it is really a time-consuming process. Starting to understand the code in a project can take quite some effort, and how can we know if there is anything interesting for us to look at anyway?

We need code galleries! We need tourist information offices, guided tours, sightseeing. If you are in charge of an opensource project, or similar, you should think about this; maybe add a page to your wiki with some "entry points" (for example, see this).

Some projects are by themselves more "touristic", neatly-made examples of great code (but also, as touristic cities tend to be, they can be less livable, like Boost); some others contain great treasures, but hidden from the casual viewer (like my hometown Naples, which can be pretty hostile to tourists).

But this is not really only about opensource, I think it would be a great practice for a company too.

Usually one of the first things I do when I start working on a new project is to understand the main rendering flow, and usually there is no documentation about it, so I end up writing a wiki page with pointers on where to start and which functions are the main, most interesting ones: documenting a tour of the main "streets".

But it could be more than that... why not have a "gallery" of links and short introductions to interesting bits of code and functionality in your project? People could be interested, and even start learning across disciplines. The same goes for "notable snippets": maybe they can come from code reviews, maybe the lead or the TD can write some notes when looking at the changelists...

So where do we start? If you have suggestions, send some links to files/functions in the comments!

06 January, 2011

Off topic, but it took me a bit to figure everything out, and I hate, I hate, I hate having to tinker with technology (I love that I can, but I hate that I have to), so I'll post some tips if you, like me, got a cheap contract with Wind or Mobilicity, hated their phones, and took the bad decision of buying an Android-based Samsung Galaxy from the US instead (only the US T-Mobile T959 will work with these carriers in Canada).

The T959 officially only has Android 2.1, which sucks, so I've also ventured into the realms of firmware updating. Anyway, this is the to-do list.

2) Root the phone (easiest way is to connect it to wifi and download this app: http://forum.xda-developers.com/showthread.php?t=746129 then reboot the phone into recovery mode and select "install packages").

3) Remove the carrier lock; it's not too hard. You'll need to download the Android drivers, connect the phone with the USB cable and use ADB, a command-line Android shell that lets you log into the phone from the PC. It involves doing some trickery with nv_data and generating some keys with a program; it will take a few commands (http://forum.xda-developers.com/showthread.php?t=822008&highlight=unlock).

4) Download the ROM Manager application and flash a good ROM. I've found the Team Whiskey ones (http://www.teamwhiskey.com/DownVibrant.html) to be the best so far: clean (no weird themes, no shit), and updated with the latest firmware and the latest 2.2 modem (JL5). Update: 2.2 is now official from Samsung. Still, you can download the Team Whiskey version, which comes with all sorts of tunings and, most importantly, the "voodoo" improvements (ext4, better display and audio) built in.

The modem is very important to get decent data reception; it also affects battery use and GPS. You can even flash different modems on top of an existing firmware, but I guess it's better to have a firmware that already includes the latest modem. Note that you might need to reboot into recovery and use the "install packages" option twice in order to get the ROM Manager recovery menu installed on the phone... Note also that the firmware .zip has to be moved onto the "internal SD"; it can't stay on the external (transflash) storage.

About "lagfixes": these appear to be hacks that let older kernels use the ext4 (I think) filesystem, thus making the phone quicker and removing lags. I'm not enabling these; if you do, beware that if you flash a firmware that does not support these lagfixes the phone will not boot, as it won't recognize the FS. A flash with Odin, with the phone in download mode, will then be necessary.

5) Condition your battery. Fully charge it and then let it fully discharge a few times; the phone will "record" the power levels and calibrate the indicator. It's also advised to flash the new ROM with the battery fully charged, otherwise the battery indicator will be wrong until you recondition it.

6) If you screw up, don't panic: chances are you can still put your phone into download mode and use Odin to flash the stock Samsung ROM. Normally ROM Manager is a much better way to flash stuff; use Odin only if the phone does not work anymore.

7) See this to set the phone parameters to work with Wind. Note that it might work even without touching the service mode settings, even if it might be convenient to lock it to the 1700 and 2100 bands only, if you want to avoid roaming and have the phone lock onto the "wind home" network faster: http://forum.xda-developers.com/showthread.php?p=10241844

Hi all! After many posts "against" C++ (and OOP, and design patterns), and after the very successful collaborative engine design experiment, I think it's time for a new, more positive post.

In the end we're still tied to using C++ in our day-to-day jobs (well, if you do realtime rendering at least...), so we have to survive it...

Also, I've always wanted to write some sort of coding "flashcards": something visual (also to help with code reviews), of the form "if you see something like this, then you should think about that". So why not try to do that, and leverage the experience of you guys to make it super-awesome?

I will keep this etherpad around for a while. I hope it will see the same (or better!) participation as the last etherpad experiment I proposed. When the document grows "stable", I'll publish the results on the blog.

02 January, 2011

As expected, we're mostly developers here, so we all want a new, better and bigger console.

61% More of the same. A powerful, traditional console. DX11, 3D etc...

10% Media-less console.

Also, even if publishers would surely love that, most people still believe that the main distribution channel will be the shelves, and that we won't see a media-less console yet.

Interestingly though, if 71% (115 people) believe that, it also means there is a substantial number of people who believe the console golden era is over, and that gaming will move towards other platforms. Let's see the ranking of the contender platforms:

9% App-store like on a variety of devices.

8% Casual games, novel ways to interact with games.

5% Low-power streaming set-top box.

3% Social gaming, free to play.

I fear that these numbers are heavily biased by the fact that this is a technical blog. Social platforms and new devices are becoming huge on the market, with some companies already surpassing in value big names among the more traditional publishers. But I do think these new platforms won't replace traditional gaming, just complement it. We'll see.

Angelo Pesce. Twitter: @kenpex.
I'm a rendering technical director.
This blog is my place to jot (incoherent, disorganized) notes about various things, so I can remove them from my head and keep them safely on the internet.