Technical Contributions

We modified the Render function to spawn 8 threads that split the work across the width of the image. We implemented emissive media (atmospheres), as well as the ability to use an image’s alpha channel to make parts of a face transparent (clouds, Saturn’s rings). We also implemented post-processing: the image is converted to luminosity values, which are then raised to the powers of 2 and 8. The power-2 image is used for an outward-radiating glow, while the power-8 image is used for a horizontal flare. Finally, we apply chromatic aberration, with more aberration at the edges than at the center.
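The luminosity passes can be sketched roughly as follows in Python with NumPy. This is a simplified sketch, not our renderer's actual code: the real implementation operates on the ray tracer's own image buffers and also blurs and recombines the passes.

```python
import numpy as np

def post_process_passes(image):
    """Split an RGB image (H x W x 3, values in [0, 1]) into glow and flare passes."""
    # Convert to luminosity using the standard Rec. 709 weights.
    lum = image @ np.array([0.2126, 0.7152, 0.0722])
    # Raising luminosity to a power suppresses mid-tones and keeps bright spots:
    glow = lum ** 2   # Broad pass, used for the outward-radiating glow.
    flare = lum ** 8  # Only the very brightest pixels survive; smeared horizontally.
    return glow, flare
```

The higher the exponent, the more selectively the pass isolates light sources, which is why the flare uses power 8 while the softer glow uses power 2.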

Progress

First Submission

In our first in-class demo, we had obtained meshes for our scene and done a preliminary arrangement of the objects. Unfortunately, because the initial ray tracer code does not allow multiple textures per mesh, some of the meshes on the right side were colored incorrectly. The image without post-processing is shown here:

In addition, we had implemented some post-processing effects, including lens flares and chromatic aberration. The image with post-processing is shown here:

Development

As we continued building out the image, we added more ship meshes as well as planet meshes. We implemented emissive media for the planet atmospheres and made some elements transparent, such as the rings of Saturn. In addition, we made stand meshes in Blender to use with the planets, to give the scene a board-game feel. An in-development picture is shown here:

During development, we still had numerous issues. Lens flares were too dramatic, causing the entire scene to become too bright. In addition, the scene was too blurred to see any meshes clearly.

Final Submission

In our final demo, we had fixed the blur and lens flare issues from previous versions of the image. We colored the ships differently to indicate which ships are part of which team. We also modified the color of the stands to make them more visible. The finished result is here:

Additional Work

After the final demo, we wanted to experiment with how the scene would look from different camera angles. Using the barley machines at Stanford, we submitted jobs to render frames of the scene from different angles and combined the frames into a 24 fps video. The resulting video shows how elements such as the reflections of the objects off the board and the lens flares move with the rotating camera. A video showing our scene from different camera angles can be found here.

The first two statements check that the arguments are non-null. This could be avoided in languages like Scala or Kotlin that have non-nullable types. However, I want to focus on the third and fourth statements, where the constructor arguments are assigned to fields on the object. This pattern is annoying to write and maintain: you effectively have to write every variable name four times, once in the field declaration, once in the constructor’s argument list, and twice in the assignment statement. Any time one of the fields changes, it must be updated in three different places in the code.
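A hypothetical Java constructor exhibiting this pattern might look like the following (the class and field names here are invented for illustration):

```java
class Point {
    // Each name appears in the declaration, the argument list, and the assignment.
    private final Double x;
    private final Double y;

    Point(Double x, Double y) {
        if (x == null) throw new NullPointerException("x must not be null");
        if (y == null) throw new NullPointerException("y must not be null");
        this.x = x;
        this.y = y;
    }

    Double getX() { return x; }
    Double getY() { return y; }
}
```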

This kind of code is also highly vulnerable to copy-paste errors, and I can think of several instances where I’ve made a mistake such as the one below:
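A hypothetical example of the kind of copy-paste slip this pattern invites (names invented; the bug is deliberate):

```java
class Rectangle {
    private final double width;
    private final double height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = width;  // Bug: should be `height`, copied from the line above.
    }

    double getWidth() { return width; }
    double getHeight() { return height; }
}
```

The compiler accepts this without complaint, and the mistake only surfaces at runtime.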

Thankfully, some languages have solved this problem by letting you specially mark certain constructor arguments. These arguments are then “promoted” to fields, and automatically assigned from the constructor argument. These are some of the languages that implement this quality of life improvement:

CoffeeScript

CoffeeScript lets you use the @ symbol to mark a constructor parameter so that it will be automatically assigned to an instance property. This syntax makes a lot of sense, since the @ prefix, when used in the body of a method, accesses instance properties (it is shorthand for this.).

```coffeescript
# From https://arcturo.github.io/library/coffeescript/03_classes.html
class Animal
  constructor: (@name) ->
```

Hack

Hack, a statically typed dialect of PHP, promotes constructor parameters to object properties when you add a visibility modifier to them.
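A sketch of what this looks like in Hack (the class and property names here are invented):

```hack
class User {
  // The `private` modifier promotes $name to a property and assigns it
  // automatically; no explicit `$this->name = $name;` is needed.
  public function __construct(private string $name) {}
}
```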

Wednesday, March 29th 2017

This quarter, I worked with three other students on a mobile game called Runaway Robot. The game, made in Unity, is a side-scrolling runner featuring mechanics like gravity reversal. All of the music, all of the code, and a large portion of the art were made from scratch. The code is open source and can be found at https://github.com/Kahraymer/Runner-Game.

One part of the game that I worked on is the jump mechanic, which has a couple of interesting features.

The gravity and initial jump velocity are calculated from the desired maxJumpHeight (the height at the top of the arc) and maxJumpTimeToApex (the time to reach the top of the jump arc). This approach is based on Kyle Pittman’s GDC talk. While it doesn’t result in physically accurate jumps, it lets us tune the jump arc more intuitively. For example, a “tight” jump has a small maxJumpTimeToApex and a “looser” jump has a long maxJumpTimeToApex. maxJumpHeight is determined by the size of obstacles, which in turn is constrained by the size of the screen and the size of the player on the screen.
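The two derived quantities follow from constant-acceleration kinematics: requiring the vertical velocity to reach zero at the apex (v0 + g·t = 0) and the height at the apex to equal maxJumpHeight gives g = -2h/t² and v0 = 2h/t. A quick sanity check in Python, with made-up tuning values:

```python
def jump_params(max_jump_height, max_jump_time_to_apex):
    """Derive gravity and initial jump velocity from the two tuning knobs."""
    gravity = (-2 * max_jump_height) / (max_jump_time_to_apex ** 2)
    jump_velocity = (2 * max_jump_height) / max_jump_time_to_apex
    return gravity, jump_velocity

# Example: a jump 2 units high that takes 0.5 s to reach its apex.
g, v0 = jump_params(2.0, 0.5)
# Integrate the arc to the apex and confirm it lands at the requested height.
apex = v0 * 0.5 + 0.5 * g * 0.5 ** 2
print(apex)  # 2.0
```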

The jump controller allows the player to queue up a jump while they are falling, but before they actually hit the floor. Almost all games do this, since a player that is jumping repeatedly in a pattern might try to jump a few frames before the character actually hits the ground. We need to allow for some temporal flexibility so that the player doesn’t miss a jump, thus destroying their momentum. This is implemented below through the rebound boolean which, if true, triggers a jump immediately upon landing.

The jump can be modulated depending on how long the player holds the jump button. This is implemented by temporarily increasing gravity when the button is released. Gravity is returned to normal once the player reaches the top of their arc. Unfortunately, this led to a situation in which not all “taps” had equal arcs, because some “taps” lasted for slightly more frames than others. To normalize all “taps,” I implemented a minJumpTime, where the jump cannot be terminated until airTime >= minJumpTime. This effectively gives us a min and a max jump height, so it is easy to design our obstacles around these constraints.

I also supported double jump, though we dropped it in the final game. In a pair of jumps, both jumps can be modulated, resulting in a wide variety of trajectories. Unfortunately, our player is too big on the screen. If we reduce jump height such that the player can never go off the top of the screen, then a single min-jump becomes so small that it is meaningless. Because of this, we decided to stick with a single, modulatable jump, as that still gave us a decent spread of jump arcs.

The gravity can be inverted, so the jump controller accounts for that.

```csharp
public class PlayerController : MonoBehaviour {
    // These three tuneable values affect the 'feel' of the jump.
    [Tooltip("The height of a max jump.")]
    public float maxJumpHeight;
    [Tooltip("The time to apex of a max jump.")]
    public float maxJumpTimeToApex;
    [Tooltip("The minimum amount of time before a jump can be terminated.")]
    public float minJumpTime;

    // Enable or disable double jump.
    [Tooltip("Can the player double jump?")]
    public bool canDoubleJump;

    // This mask only matches "Ground" objects, so that we can't jump off a coin.
    public LayerMask groundMask;

    // This transform is a BoxCollider parented to the player which is used to
    // check the space under the player.
    public Transform groundCheck;

    // This field tracks the amount of time the player has spent in air after
    // jumping. Should be equal to 0 if the player is grounded or if the player
    // just started their second jump.
    private float airTime = 0.0f;

    // This enum tracks the current state in the jump.
    private enum JumpPhase { Grounded, PreJump, Rising, TerminatedRising, Falling }
    private JumpPhase jumpPhase = JumpPhase.Grounded;

    // Track whether or not the player should rebound when it hits the ground.
    private bool rebound = false;

    // Check whether or not we are on the second jump of a pair.
    private bool secondJump = false;

    private Rigidbody2D rigidBody;

    void Start() {
        this.rigidBody = GetComponent<Rigidbody2D>();
    }

    // Track whether or not gravity is inverted.
    private bool inverted = false;
    public bool Inverted {
        get { return inverted; }
        set { inverted = value; }
    }

    // Update is called once per frame
    void Update() {
        bool jump = Input.GetButtonDown("Jump");

        // Note that the PreJump phase is used because physics must be applied in
        // `FixedUpdate`, but inputs have to be collected in `Update`.
        if (canDoubleJump && !secondJump && jumpPhase != JumpPhase.Grounded && jump) {
            airTime = 0.0f;                 // Reset air-time.
            rebound = false;                // Reset the rebound flag.
            secondJump = true;              // We are on our second jump.
            jumpPhase = JumpPhase.PreJump;  // Move to the PreJump phase.
        }

        if (jumpPhase == JumpPhase.Grounded && (jump || rebound)) {
            rebound = false;                // Reset the rebound flag.
            secondJump = false;             // We are on our first jump.
            jumpPhase = JumpPhase.PreJump;  // Move to the PreJump phase.
        }

        // Queue up a jump if the player tries to jump while we are falling.
        if (jumpPhase == JumpPhase.Falling && jump) {
            rebound = true;
        }
    }

    void FixedUpdate() {
        // Calculate gravity.
        float gravity = (-2 * maxJumpHeight) / (maxJumpTimeToApex * maxJumpTimeToApex);
        if (inverted) gravity *= -1;
        if (jumpPhase == JumpPhase.TerminatedRising) gravity *= 3;

        // Calculate initial jump velocity.
        float jumpVelocity = (2 * maxJumpHeight) / maxJumpTimeToApex;

        // Apply the difference between real gravity and desired gravity.
        rigidBody.AddForce(new Vector2(0, gravity) - Physics2D.gravity);

        // Increment airTime.
        if (jumpPhase == JumpPhase.Grounded) airTime = 0.0f;
        else airTime += Time.fixedDeltaTime;

        // In the PreJump phase, simply apply the initial velocity.
        if (jumpPhase == JumpPhase.PreJump) {
            rigidBody.velocity = new Vector2(rigidBody.velocity.x,
                                             inverted ? -jumpVelocity : jumpVelocity);
            jumpPhase = JumpPhase.Rising;
        }

        // During the Rising phase, check to see if the jump has been terminated,
        // upon which switch to the TerminatedRising phase where gravity is
        // temporarily stronger.
        if (jumpPhase == JumpPhase.Rising) {
            bool letgo = !Input.GetButton("Jump");
            if (airTime >= minJumpTime && letgo) jumpPhase = JumpPhase.TerminatedRising;
        }

        if (inverted ? rigidBody.velocity.y > 0 : rigidBody.velocity.y < 0)
            jumpPhase = JumpPhase.Falling;

        if (jumpPhase == JumpPhase.Falling) {
            if (groundCheck.GetComponent<Collider2D>().IsTouchingLayers(groundMask))
                jumpPhase = JumpPhase.Grounded;
        }
    }
}
```

Tuesday, March 28th 2017

This blog is a static site that’s generated from a series of markdown files. The ‘publish date’ for each of the blog posts is taken from a YAML frontmatter section in the post’s markdown source. Most blogging systems track when a post was last modified, and display that in the page, as well as in the page’s metadata. If I wanted to track the last modified date, it would be somewhat annoying to do manually, as I would have to edit the date in the frontmatter every time I wanted to change something in the post.

However, since my blog posts and code are stored in git, I can very easily extract the last modified date from the git repository itself. The snippet below extracts the last modified date of a file in the ISO format.

```shell
git log -1 --date=iso-strict --format="%ad" $(unknown)
```

Edit: A Jekyll Plugin

I just found out that there is a Jekyll plugin for this purpose (determinator.rb). It uses git log if the site is in a Git repo, and otherwise falls back to File::mtime.

Saturday, May 21st 2016

During Autumn quarter this year, all of my class lectures were recorded and available online. I initially tried to attend every lecture, but the ability to skip class and watch it later was too tempting. I soon found myself waking up at 2 PM every day and watching all my lectures through the SCPD video system.

This lifestyle presented a couple of problems. First, the SCPD player was annoying to use, since bad internet connections would force me to reload and lose my place in the video. Second, I found myself rebooting frequently to switch operating systems, which would also cause me to lose my place. I could use the download feature on the SCPD site to solve the first problem, but the second remained.

Instead of fixing my life, I decided to fix my technology. What I really needed was a video player that could remember where I left off. This would also allow me to split my lectures into parts and rotate between them (I found that this prevents me from getting bored while watching 2-hr lectures). I looked at a wide variety of video players, but I didn’t find any ideal solutions. Most players don’t remember positions, and those that do typically only remember positions on the last video viewed. Furthermore, I often switch OS’s (primarily OSX and Linux), and I want my timecodes to work on both (and ideally be synchronized). Given these constraints, I figured the only solution was to make my own tool.

I obviously didn’t create a video player from scratch; instead I used MPlayer, a GUI-less video player that is launched from the command line. What makes MPlayer awesome is that it outputs video progress to stdout as the video plays. Using this feature, I was able to write a simple wrapper script to launch MPlayer and read/store the timecodes.

MPlayer is interesting since, as the video progresses, the application actually erases the previous line in order to write the new timecode. It does so using “\r” (carriage return) to return to the beginning of the line and rewrite the previous data. From there, it was as simple as splitting the stdout stream on carriage returns and extracting the video position with a regex. The timecode is saved every five seconds. When the same video is opened again, the wrapper script uses the “-ss” flag to pass in the saved timecode.

In order to share timecodes between machines, the timecodes are stored in a folder in my home directory, indexed by the SHA-1 hash of the video file’s contents. I then set up that folder to be a symlink into my Dropbox and voila! Cross-platform synchronized video resuming in less than 100 lines of code, which you can download and use here.
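Indexing by content hash means the same file resolves to the same timecode entry on any machine, regardless of its path. Roughly (the function and directory names here are invented, not the actual script's):

```python
import hashlib
import os

def timecode_path(video_file, store=os.path.expanduser("~/.video-timecodes")):
    """Map a video file to its timecode entry by hashing the file's contents."""
    sha1 = hashlib.sha1()
    with open(video_file, "rb") as f:
        # Hash in 1 MiB chunks so large videos don't have to fit in memory.
        for block in iter(lambda: f.read(1 << 20), b""):
            sha1.update(block)
    return os.path.join(store, sha1.hexdigest())
```

Renaming or moving a video doesn't lose its saved position, since only the bytes matter.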

This simple project shows how existing tools can be recombined to solve a daily problem and boost productivity. I’ve encountered tons of these little hacks (every engineer has a couple!) and I’m always blown away by the cool ideas that people think up. Leave a comment if you have any similar projects you’d like to share!