All posts by Ken Slade

During a project this week, my project partner and I ran into a problem that is somewhat interesting: how do you parse a string whose delimiters are null characters (‘\0’)? For example, how do you separate something like this:

This\0is\0a\0null\0character\0separated\0string

…into something like this:

This
is
a
null
character
separated
string

The problem is, of course, that you can’t just put this into a std::string and iterate over the string to separate it out, because the std::string constructor that takes a char * stops at the first null character. So doing something like this:

…results in str containing only "This" when it is used, and iterating over it stops at the first null character.

So what can we do? Assuming our null-delimited data is of type char * and that we know its total length, we can actually use this string constructor behavior to our advantage by pointing at the beginning of each part of the string, then moving forward past each null terminator until we’ve reached the end of our character stream.


A few weeks back, someone asked me a question that, as someone who has worked with computer graphics quite a bit, I should have instantly known the answer to: how to compute a dot product of two vectors. Instead, I floundered for an answer and tried just about every way but the correct way I could conceive of because the correct way ‘didn’t look right’ to me. If you’ve ever spelled a word and then thought, ‘that doesn’t look right’ even though you have spelled it correctly, you’ll know what I mean.

In my defense, I had been running low on sleep and energy for a few weeks at that point due to working on a proposal. So not only was I tired, I was cold on writing code. Worse, I hadn’t written a line of graphics-related code in months.

These are really just excuses though. Eventually I slogged through to the correct answer, but I felt like I had completely misrepresented myself to this person.

So, without further ado, here is an example of how to compute a dot product between two vectors in C++:

There are a dozen other methods that should be added to this class as well, like a cross-product, conversion to a unit vector, or a method for determining the angle between this vector and another, but it occurred to me that there are plenty of C++ libraries out there that already do this. However, is there one for Swift? I’m not sure – there are the beginnings of a Vector2D in the Swift Programming Language Guide, but it is clearly not finished.

I decided I’d mirror the functionality of the above C++ class in a Swift class in a playground as a start in the direction of creating a reusable vector class:
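A sketch of what that playground struct might look like (names and details here are my guess, not the original code, and the syntax reflects current Swift rather than the 2014 betas):

```swift
// Minimal 3D vector mirroring the C++ functionality described in the post.
struct Vector3 {
    var x = 0.0, y = 0.0, z = 0.0

    // The dot product: multiply component-wise, then sum the products.
    func dot(_ other: Vector3) -> Double {
        return x * other.x + y * other.y + z * other.z
    }
}

let a = Vector3(x: 1, y: 2, z: 3)
let b = Vector3(x: 4, y: -5, z: 6)
print(a.dot(b)) // 12.0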


I’ve been spending a bit of time over the last couple of weeks on learning the Swift language that Apple debuted at WWDC ’14. So unfortunately, this post really doesn’t have much to do with computer graphics except that this will eventually become my entry point to working with Metal. I thought I’d post a bit about Swift – in particular, a few things that have caught my eye from the language so far. I’m not quite all of the way through the Swift iBook yet, so I’m sure I’ll miss a thing or two.

Swift switch cases do not fallthrough by default.

Swift throws away the old tradition of breaking (via the ‘break’ keyword) to prevent fallthrough and instead introduces a ‘fallthrough’ keyword. So instead of falling through by default, you must explicitly tell Swift that you would like your switch case to fall through to the next one. This should save a few headaches and make for tidier code for most people and for most usages.
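For example (the values here are arbitrary):

```swift
let value = 1
var result = ""

switch value {
case 1:
    result += "one"
    fallthrough // explicitly opt in to falling through
case 2:
    result += " two" // reached via fallthrough; no 'break' needed
default:
    break // an empty case still needs a statement
}

print(result) // "one two"
```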

Values in switch cases can be checked for inclusion within a range.

I can’t express how awesome I think this is. As someone who’s written Java and C/C++ way too much, simply being able to use ranges is great, but being able to create switch cases for them is amazing. Here’s an example of what this might look like in Swift:
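Something like this (the range operators have shifted slightly since the betas; this is current syntax, and the grading example is mine):

```swift
func describe(_ score: Int) -> String {
    switch score {
    case 0..<60:    // half-open range: 0 through 59
        return "failing"
    case 60...79:   // closed range: 60 through 79
        return "passing"
    case 80...100:
        return "excellent"
    default:
        return "out of range"
    }
}

print(describe(72)) // passing
print(describe(95)) // excellent
```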

Optional types are pretty neat.

There’s a chance you could already know (or know someone who knows) C#. From what I understand, Swift’s optional types have quite a bit in common with C#’s “nullable types”.

The general idea is that unless you specify a variable as having an optional type (with a ? modifier), the variable will be treated as though there is guaranteed to be a value stored in that location in memory. However, an optional type can have a value of nil. What’s more, there’s nothing preventing structures of optional types, resulting in ‘optional chains’. For example:

var myInt = dataStructure?.subStructure?.intValue

The variable myInt gets an inferred type of Int?, or an ‘optional Int’. If either dataStructure or subStructure is nil, myInt gets set to nil and nothing else happens. Nothing breaks, nothing throws exceptions, we just move on in our code. If both are non-nil, myInt gets set to intValue. If we chain off of myInt later in our code, we again use the ? syntax; once we are certain that myInt has a value, we can forcibly unwrap it with myInt!.
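Here’s a runnable sketch of that behavior, with made-up types standing in for dataStructure and subStructure:

```swift
// Hypothetical structures to demonstrate optional chaining.
struct SubStructure {
    var intValue: Int
}

struct DataStructure {
    var subStructure: SubStructure?
}

var dataStructure: DataStructure? =
    DataStructure(subStructure: SubStructure(intValue: 42))

// Chaining: the whole expression is nil if any link in the chain is nil.
var myInt = dataStructure?.subStructure?.intValue // inferred as Int?
print(myInt == nil) // false: both links exist, so myInt holds 42

dataStructure = nil
myInt = dataStructure?.subStructure?.intValue
print(myInt == nil) // true: no crash, no exception, we just get nil
```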

Swift will be another entry point for Metal.

When I first looked at the Metal API docs on Apple’s developer site, I was concerned. Just about everything I came across was written with Objective-C, which is (in my opinion) not the easiest language to read or write. Add to that a graphics API whose purpose is to work very close to the bare metal of a GPU, and you have a recipe for unreadable complexity.

But then when I was working in Xcode 6 Beta 2, I saw that the game template allowed me to write code in Swift and use Metal for graphics. I wanted to see what this looked like, so I created a project using the template. But then something unexpected happened, although, this being a beta, I probably should have expected it: Xcode could not find Metal. In fact, the brand-new project could not compile, let alone run, and I hadn’t even typed anything yet. What’s more, it turns out that Xcode’s simulator won’t run Metal projects (yet) – you must have a device running the iOS 8 beta to run them at all, and I don’t have a spare device at the moment. I guess the Xcode beta just isn’t ready to do it all yet.

In the meantime, I’m going to continue learning Swift, and when Xcode is ready for me to write Swift/Metal, I’ll post some tidbits.


Tonight, I added some quick and dirty soft shadows to Stained. This had two effects:

It made the sphere in the scene look more realistic by effectively diffusing its shadow.

It made the stained glass look more realistic by mixing some colors together while keeping some splashes of rich colors.

To accomplish this feat, I added some code to sample in the 6 basic axis directions relative to my light source, since my light source is technically not a point source: it has width, height, and depth as well as a center point. These 7 checks (the 6 offsets plus the center of the light) are done in a loop in my shadow function. Normally you would want to randomly sample dozens of points, but 7 was fast and effective enough to create the basic desired effect in real time in this case.

I’m actually really happy with the amount of improvement this one little adjustment has made to the scene.


For my next project, I wanted to do something a bit more challenging than simply ray casting against spheres, so I chose something deceptively simple: I chose to ray trace some stained glass. Now, this seemed like a good idea at the time, and I’m pretty happy with the result, but my weekend was eaten by this little project because once I got one part working, another idea popped into my head – “wouldn’t it be cool if…” was a recurring theme.

There were more than a few subtle (and non-subtle) challenges with this little shader:

Up until this point, I’ve been sticking to a single ray cast, with the exception of the Light and Shadow shader, which also cast rays to check if a position was in shadow. For this shader, I am doing multiple bounces.

A stained glass window does not cast a simple shadow. It transmits some of its received light.

The original thought was to simply have a “window pane” cast its light at the ground and be done with it. It turned out that it was an extremely dull and uninteresting scene. So I added a sphere, thinking “how much complexity could it add?”

I kept running into more ideas like: “I could add light attenuation!” or, “what if I made it look like my light is rising and falling like the Sun?” And who could forget, “what if I made this light a simple flare instead of a janky looking sphere?”

So, all told, I managed to try a number of new things in this latest shader, and as a result it took me far longer than the original vision. I think it was the right choice, although there are certainly nitpicks that I have that I may have to revisit someday. For example, I’m not happy with how unconvincing the “glass” material is (I didn’t refract my light!), or with how shiny the sphere is.

What this little function does is test for plane intersection via iPlane(), but it also takes in some constraints – a starting (u,v) coordinate plus a width and height from those coordinates – creating a rectangle. Granted, this only works for axis-aligned planes, but that was fine for this particular shader. The real purpose of this function, though, is to return the UV coordinate of where a ray hit, allowing me to map my color scheme for the glass to those coordinates, essentially letting me texture a quad.

Once I textured a quad, the rest was fairly simple – firing a primary ray at the quad would give me a color, which I would apply to the shadow ray, resulting in a stained glass lighting effect.

Adding a sphere to the scene caused a few headaches, such as doing shadowing, but the most pain probably came from taking a whole bunch of crazy special purpose code and simplifying it so that it would survive a ray trace loop. There’s still quite a bit of it in the shader, but I’ve tried to comment it and use meaningful variable identifiers so that it’s clear what I’m trying to do in each section.

Also new for this post, I’m going to embed a live version of the shader in this post. To rotate the scene, simply click and drag below. If you’d like to see the full listing of code, click on “Stained” in the upper left after you mouse over. Enjoy!

This is the initial intersection with the scene. We send out a ray from our origin point with a direction, and in return we get a t (ray length) and sphere (defined by a vec4) as well as an identifier for which sphere was hit. If nothing is hit, we’ll get -1.0 for this identifier.

If we determine that we’ve hit something, we’ll figure out exactly where we hit it and calculate Lambertian reflectance for that point.

// check to see if this point is in shadow
vec2 shadowT;
vec4 shadowHit;
// check for an intersection between the sphere and the light
float shadowId = intersect(pos, normalize(light - pos), shadowT, shadowHit);
// if we have a non-negative id, we've hit something other than the light
if (shadowId >= 0.0) {
    col = vec4(0.0);
}

Then, we try to intersect the scene from our original intersection point with a ray in the direction toward the light. If we hit something, our point is in shadow, so we’ll replace the color with black, which will give us a hard shadow.

// If we hit the light
else if (id == -2.0) {
    col = vec4(1.0);
}

Finally, we’ll add the light to the scene as its own special case, making it totally white. And that’s it!

Click on the image to see the “Light and Shadow” shader in action.


Spent some time today cleaning up RenderLoupe‘s theme (based on Twenty Fourteen) and the visuals for my posts to make them a little more to my liking. I think it’s a lot closer to where I want things to be now. It’s a work in progress!

In honor of Throwback Thursday, I put together a shader based on that original code, and without any fixes for things like precision issues, although I did sort of indirectly accept a challenge posted by “iq” on Shadertoy to show an aliased fractal and an antialiased fractal side by side to demonstrate the improvement. The result was actually pretty good, and it zooms in real-time – which is a lot more than I could say for the renderer I wrote in 2003 – but the precision issues really put a damper on zooming in on things to examine them. Pixels appear to grow as the zoom scaling gets larger as you can see here:

Click on the image to see a larger version. Mandelbrot precision issues (visible as pixelation)

At reduced zoom levels, the fractal is sharp, even at full-screen resolution, and antialiasing seems to make it look even more appealing.

Click on the image to see a larger version. As you can see, the aliasing on the left side is pretty rough.

In an effort to start putting concepts together, I’ve combined some parts from two of my initial efforts (the interactive sphere and Perlin noise) into a single demonstration. I’ve also been interested in understanding how Fresnel reflection and refraction work, and so today I spent my afternoon assembling a fun shader to do all of this, which carried some subtle challenges of its own.

I wanted to put something together for a simple Perlin noise shader, and I made an attempt or two before getting frustrated by GLSL’s mod() function, which did not work exactly the way I expected (integer support?). I probably could have kept going and hashed something out, but instead I decided to look up an implementation that I had seen referenced elsewhere on ShaderToy:

It worked right out of the box, and there really wasn’t much to it, so I threw in an improvised zoom effect, which I altered slightly to become more like a Ken Burns effect, although I’m really just flipping coordinates in the output buffer over time. Here’s the result:

Click on the image to see the “Perlin noise with a Ken Burns effect” shader