Reimagining Adventure Games with Virtual Reality
http://catdevrandom.me/2015/01/07/reimagining-adventure-games-with-virtual-reality/
Wed, 07 Jan 2015 18:57:00 GMT

It's no secret that the King's Quest series from Sierra Entertainment is by far my most treasured video game series, with King's Quest VI holding the spot of my favorite game ever. I've always had a soft spot for classic point-and-click adventure games, and have certainly noticed their absence in modern-day gaming. I thought we might see them reemerge when the era of mobile gaming was getting big, but no real noticeable titles have come from that. I couldn't help but question why that might be, and Ken Williams of Sierra Entertainment makes a great point in a recent interview with Game Informer:

Game Informer: Point-and-click adventures may not be as popular as they once were, but they were the top-tier games in the '80s and '90s. What about the genre do you think made it so popular with audiences for so long?

Ken: Personally, I never liked being pigeon-holed as “point and click”. I like to think of Sierra’s products as interactive stories, and point/click was nothing more than the best we could do at the time to influence the story. The idea was to make you feel a part of the story, and neither text parsing or point/click were perfect answers. A perfect answer would probably be 3-d glasses and motion sensors in gloves. Then you are talking! And, that’s the right experience. It’s like a good book or a good movie. The goal is to immerse the audience in the story. A mouse or a keyboard pulls the player out of the game. I don’t know the magic solution but know that saying things like this is a “touch-game” or a “click-game” or a “parser-based-game” are all dead-ends. They lock in a point in time and miss the fact that interactive technologies are advancing faster than games are being built. The right answer is to say, “what new technologies will be around two years from now when this game releases, and how do we use it to immerse the player in a cool new universe?” Saying, “Point/Click games sold 3 million copies last year, and the market is growing 5 percent a year, so next year there will be 3.15 million point/click games sold” – that’s a sucker trap. Sure death.

tl;dr: Point-and-click was simply the best option at the time, but was never optimal. I found this response particularly interesting as I was already contemplating how VR could change adventure games before this interview came out, and Ken specifically calls out "3-d glasses". The motion-sensing gloves, however, weren't exactly the route I was going down.

Head Gestures

There's generally a small set of actions you can take in adventure games (or at least in King's Quest VI, which I'm using as a model): Touch, Talk, Walk, Use, etc. Dialog is predetermined and generally has no real input from the user (except for cases where the player uses an item on the person they are speaking with).

But that's the old-school way, isn't it?

I started pondering: if you move away from point-and-click, what becomes more important as a game mechanic? It's certainly not outside the realm of sanity to think that massively overhauling the input methods drastically changes how the game is played. I totally agree with Ken that something like motion-sensing gloves would be an ideal solution for interacting with the environment and managing an inventory system, where you could rummage through the goodies you've collected via some world-space menu, but what about that predetermined dialog? Parser-based games offered some (but limited) control over dialog, but mouse-based games such as KQ6 gave you no option to answer questions or express emotion.

This is where I thought things could go in a new direction with VR. All of the current-day VR devices offer one thing in common: head tracking. Not all offer positional tracking, but the Oculus Rift, Google Cardboard, etc. all allow the user to turn their head up and down, as well as tilt their head. The first new game mechanic that came to mind was head gestures: nodding for yes, shaking for no, the ability to bashfully look down at one's feet in embarrassment or shyness. Head tracking allows for a new portrayal of emotion via body language, and in an era where companies like Telltale Games turn a profit on games based entirely around the idea of your responses driving the plot (i.e. The Walking Dead and The Wolf Among Us), offering a similar, more immersive experience to fans of those series doesn't sound like the craziest idea in the world.

The result, as you can see above, is a brief offer of cold tea from Sir Squarington (give him a break, he's been stuck on an empty island in space for eternity with no teapot). You can accept his offer by nodding your head up and down, or deny it by shaking left and right. Additionally, Mr. S changes colors based on your head state:

Blue: Looking

Green: Nodding ("Yes")

Red: Shaking ("No")

I don't believe that a VR-centric translation of the old-school point-and-click adventure games like KQ6 will ONLY take this as input. Ken is right, there's a secondary input method needed. A mouse and keyboard isn't it, but instead of motion sensing gloves, there's some promising tech such as the Nimble VR, a hand and finger tracker that was recently even acquired by Oculus.

Distribution of the HeadGesture Asset

Even though I wasn't the first to it, I do want to package up the HeadGesture controller I'm working on into a prefab. A great thing about Unity is how easy it is to redistribute assets, and the idea with this prefab is that you'll be able to drop it into your game, link up your VR camera, be it Oculus, Cardboard or anything else (these all work off the mechanic of two normal cameras slightly offset from each other), and then have your dialog system subscribe to the HeadGesture asset, which will fire off events as the head state changes.
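None of the actual HeadGesture code appears in this post, so to make the event-driven idea concrete, here's a rough Unity sketch of the shape such a component could take. The class, method and threshold names are all mine rather than the real asset's, and the nod/shake detection is deliberately naive: it just watches the pitch and yaw velocity of whatever camera transform you link up.

using System;
using UnityEngine;

public enum HeadState { Looking, Nodding, Shaking }

// Hypothetical sketch, not the shipped asset: watch the linked VR camera's
// rotation, guess at a nod (fast pitch movement) or a shake (fast yaw
// movement), and raise an event whenever the detected state changes.
public class HeadGesture : MonoBehaviour
{
    public Transform vrCamera;            // link your Rift/Cardboard camera here
    public float gestureThreshold = 60f;  // degrees per second; an arbitrary starting point

    public event Action<HeadState> StateChanged;

    private HeadState state = HeadState.Looking;
    private Vector3 lastEuler;

    void Start()
    {
        lastEuler = vrCamera.localEulerAngles;
    }

    void Update()
    {
        Vector3 euler = vrCamera.localEulerAngles;
        float pitchSpeed = Mathf.Abs(Mathf.DeltaAngle(lastEuler.x, euler.x)) / Time.deltaTime;
        float yawSpeed = Mathf.Abs(Mathf.DeltaAngle(lastEuler.y, euler.y)) / Time.deltaTime;
        lastEuler = euler;

        HeadState newState = HeadState.Looking;
        if (pitchSpeed > gestureThreshold && pitchSpeed > yawSpeed)
            newState = HeadState.Nodding;
        else if (yawSpeed > gestureThreshold)
            newState = HeadState.Shaking;

        if (newState != state)
        {
            state = newState;
            if (StateChanged != null) StateChanged(state);
        }
    }
}

A dialog system (or Sir Squarington's color change) would then simply subscribe to StateChanged and react when the state flips to Nodding or Shaking.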

Virtual Reality: Oculus Rift and Google Cardboard
http://catdevrandom.me/2014/12/27/virtual-reality-oculus-rift-and-google-cardboard/
Sat, 27 Dec 2014 16:09:00 GMT

Yesterday I took the day to sit down and really evaluate the Oculus Rift as a user and as a developer. I'm not exactly a professional game developer by a long shot, but I've gotten to the point where I can comfortably call myself a "hobbyist". Earlier in the week, I also received my Google Cardboard and was pleasantly surprised at how well it worked, all things considered.

Using the Oculus Rift

Initial setup of the Rift was not too terribly involved. A couple of USB cables, a sync cable between the camera and the Oculus Rift, power, and I was up and running. Windows 8.1 actually picked up the Rift on its own, but we elected to pull down the latest runtime (at the time, 0.4.4-beta). Past that, I jumped right into a few things I had pre-downloaded, such as the "Oculus Tuscany Demo", the "Welcome to Oculus" experience (video), which details the paradigm shift that current VR tech can offer, and "A Chair in a Room" (video), which offers an immersive horror/scare experience.

The first thing that you notice with the Rift is that this isn't just a screen strapped to your face. Each eye is rendered separately, which gives a true feeling of 3D and allows you to properly judge space and distance. The Tuscany demo has butterflies and plant seeds floating through the air, passing by your face, making you feel like you need to brush them out of the way. This isn't the same 3D effect you get with passive glasses at the movie theater; objects have real depth to them.

Past that, what amazed me was the head tracking. I'd read that the head tracking in the DK2 was good, but especially in the tech demos, it was near perfect. I'd be standing at a railing and lean forward, looking down below me. As part of the "Welcome to Oculus" experience, you're placed in a movie theater while clips from popular films play, and I was able to lean back and look behind my chair. It wasn't just guessing; it was actually translating every movement of my head into the virtual space. Turning, leaning and tilting my head all translated perfectly (in most tests), which has a huge impact on immersion.

I proceeded to try out some full games/simulations with the Oculus, including:

iRacing - One of the closest real-to-life racing simulators that's commercially available. I was racing a Mazda MX-5 and a prototype Ford car at Laguna Seca, looking around in the car, reading the speedometer and tachometer. Combined with a force-feedback racing wheel and pedals, this was the one I kept coming back to.

Euro Truck Simulator 2 - A...truck driving simulator. Not exactly the most fun game I own, but strap on the Rift, hook up the wheel and pedals, load up some of our favorite routes, and it was great to be able to look around the truck cabin, actually turning our heads to check the mirrors as we backed the trailer up to the loading dock and eventually jackknifed the thing.

Half-Life 2 - This, to me, was the most surreal experience. Sure, the game is 10 years old and Oculus support is in beta, but I'm not alone in having this game on my Top 5 Games of All Time list. I've played through the game countless times, but standing in front of the Metro Police officer as he knocked a soda can to the ground and forced me to pick it up was the closest I've felt to actually being in the shoes of Gordon Freeman, shoes I've worn for countless hours since the late 90s.

Each of these, among a few others, offered an amazing, immersive (there's that word again) experience. I couldn't stop trying new games and tech demos, and every new one we tried had some cool feature or detail that outdid the last.

Developing for the Oculus Rift

I spent nearly the entire day playing with the Oculus, so much so that we nearly forgot the other major plan for the day, which was to actually make something for it. I was hoping to have a solid plan for what that thing would be, but by the time I loaded up Unity to begin developing, we had no idea what we could build in a night that would leverage everything the Oculus had to offer as a true VR experience. So I did what so many others before us have done: copied whatever it was that people liked and remade it. The result?

A room with a comfortable couch and a massively oversized TV. I didn't make the models or textures myself, as I was on a tight schedule, so I grabbed some pre-made furniture models and found some nice textures to go along with them. More than anything, I wanted to see how a scene built in a very traditional way translated to the Oculus, and man, did it not disappoint. I figured, what better use of a virtual world with a giant TV than to do something we can never do in real life...

...I watched The Walking Dead.

But this wasn't about what the TV played; it was about how things translated automatically from traditional development over to the Oculus. As I sat on the couch, I looked around the room and down at the coffee table. I leaned over the back of the couch and looked behind it, all of these movements translating 1:1. This was enough to show us the possibilities of what we could make with the Oculus Rift.

Google Cardboard

Google Cardboard, for those unfamiliar, is Google's solution to the current high cost of entry into the world of VR. Currently, the Oculus Rift is $350 for just the development kit; it's not even a consumer product yet. On the other hand, Google Cardboard and its third-party brothers and sisters can be had for the price of a few cups of coffee. Cardboard is literally that, a piece of cardboard (and a couple of lenses, some tape and a magnet). The trick, however, is that apps are rendered in a split view (much like on the Oculus), and when your phone is placed into the Cardboard, the lenses distort the two separate, flat images into a single 3D view. There are still plenty of issues, including:

Extreme "screen door effect" - As the resolution is much lower than the purpose-specific one of the Rift and is so close to your face, you can make out the individual pixels, which makes it look like you are looking through a screen door.

No positional tracking - This was one of the great parts of the Oculus. With Cardboard, you can turn and tilt your head and everything translates fine, but there is no leaning.

Lack of input - The magnet acts as an input device with Google Cardboard, and translates to essentially a single tap on the screen. Some versions of Cardboard actually do exactly that, use a conductive material to tap the screen. The Oculus on the other hand has a wide array of input devices inherently available, as it's meant to be used with a computer.

Apps are extremely taxing on the device - The Nexus 5 has pretty terrible battery life to begin with, but a short 10-15 minute demo of one or two Cardboard apps drains about 25% of my battery, and the device gets extremely warm.

However, for being less than 1/10th the price of an Oculus Rift, it is very impressive in its own right. You still get the 3D feel of space and distance, and the freedom of mobility gives you some interesting options as far as games go.

Cardboard Integration with Unity

As I don't currently own an Oculus Rift, I wanted to see how it was to develop for Cardboard, as Google provides a full Cardboard SDK for Unity. As it turns out, it's just as easy to develop with Cardboard in mind as it is any other 3D game. There are certain considerations to take into account, such as how input changes and how you focus your core game mechanics, but I took the same scene I built for the Oculus, simply swapped out the prefab that Oculus provides for its cameras with the one Google provides for Cardboard, and built for Android. I did have to remove the MovieTexture that was playing The Walking Dead, as Unity does not support MovieTextures on mobile devices, but other than that, it was a single-step process and my on-the-cheap VR goggles dropped me right back on that familiar couch.

I also took the same FPS Demo that I've been working on, as detailed in the previous post, and did the same process of simply swapping the cameras. Lo and behold, we have mobile VR!

These past few days really gave me a feel for the place that VR holds, not just in gaming but in countless applications. This isn't the first stab at VR, far from it even. But this is the first implementation that makes me feel like it finally has value, rather than being just some gimmick.

Unity3D: A Checkpoint
http://catdevrandom.me/2014/12/15/unity3d-a-checkpoint/
Mon, 15 Dec 2014 14:06:00 GMT

A little over a month ago, I wrote that I had begun learning the Unity Game Engine. A little over five weeks in, I wanted to share some of the lessons I've learned in this short time and demonstrate that you can achieve some pretty awesome things in Unity in an extremely short amount of time. Additionally, I've had to dive into some extremely amateur 3D modeling (which I'm less proud of).

First Stab at an FPS

This is the genre I usually default to. I'll play just about anything, but FPS seems to be my "default". I figured I should start with something I was comfortable with, something whose mechanics I knew well enough to tell when something was "off".

I felt like a shooter would hit on a lot of topics: a 3D world, flexible game mechanics, networking, and it would give me a good excuse to fudge some things into the game that might not make sense elsewhere. I learned some 3D modeling, learned what lightmapping was, picked up some basic networking and created...

...an ugly monstrosity. But it was good enough for a start! Let's break down what's behind this mess.

Photon Unity Networking

Despite the complete lack of player models and opting for simple cylinders, this game does have multiplayer. It was actually pretty easy to accomplish using a very popular third-party add-on called Photon Unity Networking. It's completely free for up to 20 concurrent players and makes what would otherwise be a very complex problem very, very easy. I've chopped out some code I had to handle things specific to my game, but to just join a lobby and a room, it's not a lot of code.
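The embedded snippet didn't survive syndication, so here is a rough stand-in in the PUN-classic style of the time; the prefab name and game version string are placeholders, and newer PUN versions have renamed some of these callbacks.

using UnityEngine;

// Minimal connect/lobby/room flow: connect with the settings asset, join a
// random room once we're in the lobby, create the single shared room if none
// exists yet, then spawn a networked player just above the ground at 0,2,0.
public class NetworkManager : MonoBehaviour
{
    void Start()
    {
        PhotonNetwork.ConnectUsingSettings("0.1");
    }

    void OnJoinedLobby()
    {
        PhotonNetwork.JoinRandomRoom();
    }

    void OnPhotonRandomJoinFailed()
    {
        // No room exists yet, so create the one room everyone will share
        PhotonNetwork.CreateRoom(null);
    }

    void OnJoinedRoom()
    {
        PhotonNetwork.Instantiate("Player", new Vector3(0f, 2f, 0f), Quaternion.identity, 0);
    }
}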

And just like that, the player is in a networked game. Granted this means there's only ever one room for all players to join, assumes there's ground under the player at coordinates 0,2,0 and I've removed the code to enable player movement scripts for only the local player, but it works nonetheless. From there, you can create a generic script that you can put on ALL components that should be updated over the network, and PUN will handle updating their location, rotation, and whatever else you include to send and receive in the script. Below is the code I wrote to not only update the local and remote player locations, but also do extremely basic smoothing over the movement to reduce jitteriness of the remote players on the local player's screen.
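That script was also lost from this copy of the post; a sketch of the same idea, again in the PUN-classic style, looks like the following. It needs to be added to the PhotonView's list of observed components, and the smoothing factor is an arbitrary starting point rather than a tuned value.

using UnityEngine;

// Owner streams its position/rotation; remote copies lerp toward the last
// received values to smooth out the jitter between network updates.
public class NetworkSync : Photon.MonoBehaviour
{
    public float smoothing = 5f;

    private Vector3 correctPosition;
    private Quaternion correctRotation;

    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            // We own this object: send our current transform
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
        }
        else
        {
            // Remote copy: remember the latest values and ease toward them
            correctPosition = (Vector3)stream.ReceiveNext();
            correctRotation = (Quaternion)stream.ReceiveNext();
        }
    }

    void Update()
    {
        if (!photonView.isMine)
        {
            transform.position = Vector3.Lerp(transform.position, correctPosition, Time.deltaTime * smoothing);
            transform.rotation = Quaternion.Lerp(transform.rotation, correctRotation, Time.deltaTime * smoothing);
        }
    }
}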

You'll see I was experimenting with some of the logic behind how it smooths the movement, which at this point is still questionable at best. The nice thing about Unity is I can place this script on ANY GameObject (players, random crates in the level, vehicles) and I don't have to do anything else to have them update their position or rotation over the network.

3D Modeling

Several times in the past I've attempted 3D modeling and it's never gone well. But, for game design, I had to have some basic level of understanding to at least prototype my ideas. I followed a few tutorials, one to make a coffee cup, which turned out great, and then I attempted to recreate a pint glass I just happened to have on my desk for...reasons. The pint glass left something to be desired, but for my first blind attempt, I'm still proud.

These were just learning experiences to see what I could apply to my games, and from there I took an extremely rough stab at modeling a level for the FPS I was working on.

Does it look good? Absolutely not, but again, this is a learning experience.

Want to play it? I've been pushing up regular development builds here.

Future Ideas

One thing that struck me was how unbelievably easy it is to integrate the Oculus Rift, the leading technology in virtual reality and the first real device that makes it feel like VR has a potential future. I debated purchasing a Development Kit 2 (DK2); however, with the next Oculus prototype, Crescent Bay, coming out within the next year, I've decided to wait. Luckily, I do have a buddy that has access to a DK2, and we've set aside a day this holiday to see what we can create using Unity and the Oculus Rift.

How does the integration work? Well, Unity ships with two pre-made "Character Controllers", objects you literally drag and drop into the game to get basic character movement and control. I say basic, but there are parameters for all types of things you can tweak, such as how steep a slope needs to be before the player can't scale it, how tall steps need to be before the character has to jump rather than walk up them, etc. The two pre-made ones are for first-person control (which I'm using in my FPS Demo) and for third-person control. The Oculus team ships a third Character Controller, an OVRCharacterController. Just as with the first-person controller, you drop it into your game and instantly get control of your character using the Oculus Rift.

I'm simplifying it a bit, as some things don't work out of the box. For example, your UI may need some tweaking, depending on how you structure it. But to get the Oculus working in your existing game with little to no work, to me, is absolutely amazing.

I'm extremely excited to develop on the DK2 later this month, and will be documenting the process as well.

Learning the Unity Game Engine
http://catdevrandom.me/2014/11/04/learning-the-unity-game-engine/
Wed, 05 Nov 2014 00:40:00 GMT

Game design is something I've toyed around with since I started programming. Actually, that's why I started learning to program. In middle school, I bought "Computer Programming in BASIC the Easy Way" from my local library and eventually made a Super Mario CYOA-style game. It was a mess of spaghetti code, filled with GOTOs. Then again, the book was published in 1989.

In high school, I took an elective course titled simply Computer Programming. In between assignments, I was instructed to work on a project of my choosing. After doing some more reading online, gathering resources, and reaching out to a couple of friends for artwork and music, we came up with our first game that consisted of more than text and buttons: Generic Space Shooter. It was, as you could guess, a Galaga clone. It was developed in VB6, and to this day I still have the busted 3.5" floppy disk that contains its binary and source code.

Fast-forward to a couple weeks ago, when a friend of mine and I half-jokingly discussed finally creating a game, like we always said we wanted to. We dabbled a bit in Microsoft's XNA in the past, and he let me know that the Xbox One has moved to Unity instead. It was a bit intimidating at first, at least compared to XNA's simplicity, but after a couple weeks of working with it I'm quite impressed.

Unity's biggest draw for me was how portable it is. Out of the box, you can target desktop (Windows, Mac and Linux), mobile (iOS, Android, Windows Phone 8 and Blackberry) and even the web using the Unity Web Player. If you're in the ID@Xbox program, have a Sony development account or are registered with Nintendo as a Wii U developer, you can also target all current-gen consoles (Xbox One, PS4 and Wii U) when building your project. On top of that there's the physics engine, the lighting technology, the built-in AI tooling (if you choose to use it) and a whole mess of other exciting features. What amazed me most, however, is how easy it is to target builds for the Oculus Rift for VR-based games.

There's still a lot to learn, but in just a couple hours of work, I was able to recreate Pong, admittedly following along with a great video tutorial. I've already uploaded this game and have made it playable over in the games section. Is it perfect? Of course not, but it was a great project to learn.

To bring things full circle, however, I am recreating my first full-fledged game, Generic Space Shooter, in the Unity Engine. It will be a 2D-perspective of a 3D world. I'm also trying to rescue some of the assets from the original version such as the music to include with the game. Most of all, though, it will be a great learning experience. Just over these past couple weeks, I've learned enough to create 95% of this game with no other reference.

Simultaneously, however, I have also been watching this great series on YouTube of the development of a First-Person Shooter, created from scratch in Unity. This already has me thinking of future projects, things to learn and just all of the possibilities of Unity.

In the future I'd like to start a series of technical blog posts to detail especially interesting things I've learned or issues I've hit while learning Unity. I've been keeping notes of specific things to cover, but in the meantime, it's been such a blast to learn. As a bonus, I'll be putting development builds of Generic Space Shooter over in the Games section as well. It's not much to look at yet, but feel free to keep an eye on it as development progresses.

Hacking on Cloud Foundry's gorouter
http://catdevrandom.me/2014/05/12/hacking-on-cloud-foundrys-gorouter/
Tue, 13 May 2014 02:39:00 GMT

Late last week, a couple of colleagues and I discovered a small bug in Cloud Foundry's gorouter in which a websocket upgrade was not completed if a comma-separated list of values was provided in the Connection header. A pull request was pieced together, submitted and is currently being looked at by Pivotal. However, I figured, why let the learning stop there?

There were several things that I was unfamiliar with:

The gorouter codebase

Running the gorouter locally

The Go language

Obviously, I had my work cut out for me. Luckily, Go proved to have a very forgiving learning curve, but that's for a future post. The only other thing in my way was getting familiar with Go itself, but first, let's explore the issue at hand.

The App

We boiled it down to an extremely simple example. So simple, in fact, that we pulled the code straight from the sinatra-websocket README. The only addition was a bit of extra logging to dump the headers on each request.

The Issue

We were observing that the sample app worked perfectly fine in Chrome, but not in Firefox. Obviously, this meant a difference in how the websocket upgrade was being handled between the two browsers. After a bit of trial and error, we noticed the difference in the headers being sent. There are two requests here: the first is the HTTP GET / to fetch the static page, and the second is the protocol switch.
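The header dumps themselves were lost when this post was syndicated, but the relevant difference on the upgrade request, trimmed down to the headers that matter, looks like this:

# Chrome's upgrade request
Connection: Upgrade
Upgrade: websocket

# Firefox's upgrade request
Connection: keep-alive, Upgrade
Upgrade: websocket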

Notice the difference? As you can see, Firefox is providing a comma-separated list of values for the Connection header. According to RFC 2616 section 4.2, this is perfectly valid:

Multiple message-header fields with the same field-name MAY be present in a message if and only if the entire field-value for that header field is defined as a comma-separated list [i.e., #(values)]. It MUST be possible to combine the multiple header fields into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field-value to the first, each separated by a comma.

I hadn't had much experience with Go prior to this, but it doesn't take much expertise to find the small issue. The gorouter was checking whether the value of the Connection header was exactly the string "Upgrade". This means the comma-separated string in our Connection header fails that comparison, and the request isn't recognized as a websocket upgrade. Easy enough mistake to make (in fact, this issue is what caused me to learn about what I quoted from the RFC above), but lucky for us, easy enough to fix!

The Fix

The last bit of lead-up before we hit the true point of this post is the actual code fix. What we want to do is split the string on commas and compare each value, rather than the entire header value itself. You can see the full change in the PR I've made to the gorouter; the shape of it is roughly this:
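(The function name below is mine, not gorouter's, and this is a sketch of the change rather than the exact diff from the PR.)

package proxy

import (
	"net/http"
	"strings"
)

// Returns true if any comma-separated token in the Connection header
// is "Upgrade", ignoring case and surrounding whitespace.
func connectionRequestsUpgrade(request *http.Request) bool {
	for _, token := range strings.Split(request.Header.Get("Connection"), ",") {
		if strings.EqualFold(strings.TrimSpace(token), "upgrade") {
			return true
		}
	}
	return false
}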

The Tests

And so we reach the true point of this article: how do we test this? There are several things we need running for this:

The test app

NATS (which Gorouter uses to know where apps are running)

Gorouter

Let's go in order. You can find the code used for testing on my GitHub. Since this isn't a tutorial on running Sinatra apps, you can simply run:

bundle install
ruby app.rb

This will start our app, available at 127.0.0.1:4567.

Next, we can reference the gorouter's README for the other two steps. Again, since this isn't a guide on Go, I'll assume your $GOPATH and $PATH environment variables are properly set. You can install and run NATS (or rather, gnatsd) and pull down the gorouter with the following:
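(The original commands were dropped from the feed; from memory of the README of that era, they were roughly the following. The gnatsd import path has moved around over the years, so double-check it.)

go get -v github.com/apcera/gnatsd
gnatsd &

go get -v github.com/cloudfoundry/gorouter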

This will pull the source of the gorouter to $GOPATH/src/github.com/cloudfoundry/gorouter. We'll make our changes here (in our example, it was the file at $GOPATH/src/github.com/cloudfoundry/gorouter/proxy/proxy.go), build it and run it:
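(Again reconstructing the lost commands; the repo layout may have changed since, so lean on the current README.)

cd $GOPATH/src/github.com/cloudfoundry/gorouter
go build
./gorouter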

Finally, we need to tell gorouter about our app. As mentioned, gorouter listens on the NATS bus for publish messages telling it where apps are running. There are a few ways to do this, but I happened to have the ruby NATS tools installed, which you can install with:

gem install nats

And finally, publish a message on the NATS bus to tell gorouter about the app:
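(The original command was lost in syndication; the registration message looks roughly like this, with the route hostname being just an example.)

nats-pub 'router.register' '{"host":"127.0.0.1","port":4567,"uris":["myapp.vcap.me"]}'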

However, this can be thought of as a heartbeat. This needs to be sent at regular intervals. There's a project in the Cloud Foundry Incubator called route-registrar to handle this, but a quick and dirty shell script will do the job as well:
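(A sketch of such a script, reusing the same registration message as above.)

while true; do
  nats-pub 'router.register' '{"host":"127.0.0.1","port":4567,"uris":["myapp.vcap.me"]}'
  sleep 10
done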

Congrats! You now have your modified version of gorouter running and serving up requests to your app.

Closing Thoughts

Going into this, I was a bit worried. I was essentially looking to run a subset of a very complex system. However, I'm happy to see how easy it was to run only the pieces needed. This was the bug that is driving me to finally learn Go as well, so I'm hoping to contribute to the Cloud Foundry codebase, as well as discover other such ways to test individual components of the system.

Cloud Foundry: The API Way
http://catdevrandom.me/2013/09/05/cloud-foundry-the-api-way/
Thu, 05 Sep 2013 04:05:00 GMT

This is a topic I've seen come up a couple of times in the last few weeks. It started with Dr. Nic Williams, when we were discussing share-my-cloudfoundry and he wanted to provide compatibility with Cloud Foundry v1 and v2 in the same application. The situation came up again with a personal project that I will detail later.

It required a bit of discussion, but I finally tracked down the answers. Although the cfoundry gem states that it is compatible with Cloud Foundry v1 and v2, after some digging, it looks like it's only compatible with v2. To make things a bit more complicated, even though the old cfoundry library was moved to a separate repository, it retained the "cfoundry" gem name, meaning I could not include both gems in a single Gemfile.

Finally, Nic pointed out cloudfoundry-client developed by Ferran "ferdy" Rodenas, another Cloud Foundry superstar. I did a bit of playing around and it seems to fit my needs for Cloud Foundry v1. Combined with the currently maintained cfoundry gem for v2 compatibility, we've found the solution to our problem.

cloudfoundry-client's tests are written just as I like to write my tests, so I was able to read through those to understand the usage of the gem. The cfoundry gem has great tests as well, but also has a piece in the Cloud Foundry Docs. Unfortunately, they don't seem to be 100% up to date. So, I've decided to provide the code I used to push an application to both Cloud Foundry v1 and v2, explaining it along the way.

Preparing the App

In both Cloud Foundry v1 and v2, the vmc and cf CLI utilities create a zip file to upload to the environment. We need to handle this ourselves, which can be done with some simple code:
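(The original snippet was lost when this post was syndicated; this rubyzip-based equivalent is a close stand-in. Note the 2013-era gem spelled these constants Zip::ZipFile, so adjust for your version.)

require 'zip'

zipfile_name = 'app.zip'
File.delete(zipfile_name) if File.exist?(zipfile_name)

# The archive's top level must mirror the app's root directory
Zip::File.open(zipfile_name, Zip::File::CREATE) do |zipfile|
  Dir.glob('**/*').reject { |f| File.directory?(f) || f == zipfile_name }.each do |file|
    zipfile.add(file, file)
  end
end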

There's some code here specific to me, but you can reference the zip gem. The important thing is that the top level of the zip file is laid out exactly as your current directory would be when pushing your application to Cloud Foundry. The other disclaimer is that this little chunk was adapted from the CF docs fairly quickly, so no guarantees this specific piece of code is production ready.

The application itself doesn't particularly matter, but in my example code, I'll be pushing a Sinatra application. Here are the relevant pieces of code:

Gemfile

source "http://rubygems.org"
gem 'sinatra'

config.ru

require './app'
run Sinatra::Application

app.rb

require 'sinatra'
get '/' do
"Hello World!"
end

The config.ru file is only needed for CFv2, but this app will work on both CFv1 as well as v2.
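Pushing to Cloud Foundry v1

The CFv1 snippet itself was lost when this post was syndicated, so the sketch below is a reconstruction from memory of the cloudfoundry-client gem. The method names and manifest keys are assumptions rather than verified calls, so lean on the gem's tests for the real usage.

require 'cloudfoundry'

# NOTE: method names and manifest keys below are assumptions; see the
# cloudfoundry-client tests for the actual API.
client = CloudFoundry::Client.new(:target_url => "http://api.cfv1.example.com")
client.login("user@example.com", "password")

manifest = {
  :name      => "testapp",
  :uris      => ["testapp.cfv1.example.com"],
  :instances => 1,
  :resources => { :memory => 128 },
  :staging   => { :framework => "sinatra", :runtime => "ruby19" }
}

client.create_app("testapp", manifest)
client.upload_app("testapp", "app.zip")

# Starting the app is done by flipping its state to "STARTED"
client.update_app("testapp", manifest.merge(:state => "STARTED"))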

Most of this code is self-explanatory, except maybe the manifest hash. You'll notice some of these fields don't match up with what you may be used to in a manifest.yml file. This is the format that the Cloud Foundry API expects, but I think once you see an example, it becomes pretty easy to know how to modify it as needed. The rest of the calls are mostly one-liners, except for actually starting the app. This is achieved in the CF API by setting the application's state to "STARTED". Now that the cloudfoundry-client gem is a part of the cloudfoundry-community project, I may spend some time adding some helper methods to the API to ease some of these things.

Pushing to Cloud Foundry v2

Pushing an application to CFv2 is a bit more complicated, so we'll break this out into chunks of code instead of looking at it all at once. First, we want to log in and create the application:
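(The snippet that was embedded here didn't survive syndication either; this is a rough, from-memory sketch of the cfoundry v2 API, and the login form and attribute names may differ slightly between gem versions.)

require 'cfoundry'

client = CFoundry::Client.get("https://api.cfv2.example.com")
client.login(:username => "user@example.com", :password => "password")

# Use the first space we have access to
space = client.spaces.first

app = client.app
app.name            = "testapp"
app.space           = space
app.total_instances = 1
app.memory          = 256
app.create!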

The v2 gem doesn't take a hash like the v1 gem does, but rather treats the application as an object. So far we haven't done anything crazy, just simply target our CFv2 endpoint, logged in, and created a shell of an application. The next step is to create the route to map to our application:

# Get a new route object
route = client.route
# In our example env, we only have one domain, so we'll use that
route.domain = client.domains.first
# We want to assign the route to the same space we created our app in
route.space = space
# As with the v1 example, the host part of the URL will be 'testapp'
route.host = 'testapp'
# Create the route
route.create!

So far, so good. Our app in CF isn't really an app yet, but all of the pieces are in place. We can now upload our application:

app.upload("app.zip")

Here we simply pass the path to the zip file of our application to the #upload method on the app object. And finally, we need to start the app:

app.start!

That's great and all, but what about all of that awesome information that CF spits out while staging and starting our application? Plus this is an async method, how can we get any idea on the progress of this? Well, the #start method actually can take in a block of code. Here's another example of how to start an app:

You'll notice that CFv2 complains about not specifying a version of Ruby to use in our Gemfile. I've left that out because in CFv1, it actually errors out if you DO specify a version of Ruby to use.

Conclusion

I'm not sure how many people will need CFv1 and CFv2 compatibility in their projects, but what we've seen is both APIs are very friendly to use, especially if you're familiar with the CF APIs. As I mentioned, cloudfoundry-client is now a part of the cloudfoundry-community organization, so if you feel like contributing to the v1 compatibility, you can find the repository on Github.

Cloud Foundry v2: What's New
http://catdevrandom.me/2013/06/14/cloud-foundry-v2-whats-new/
Sat, 15 Jun 2013 00:38:00 GMT

I've mentioned Cloud Foundry v2 a few times before, but I wanted to really get my hands dirty with the new bits and pieces before I wrote about it. Andy Piper wrote about CFv2 a bit on his blog, in which he outlines the major features for users of CF. I wanted to dig a bit deeper into the major changes in some of the components that make up CFv2.

Router

The router has been completely rewritten. It used to rely on Lua scripts tied into Nginx that called out to Ruby code, which limited the more advanced webapp technologies such as websockets, quite possibly the feature I've seen most requested around the Cloud Foundry community. In CFv2, the router has been replaced by gorouter, an implementation written entirely in Go.

Implementing a custom router in Go gives full control over every connection to the router, which makes it easier to support WebSockets and other types of traffic (e.g. via HTTP CONNECT). All routing logic is contained in a single process, removing unnecessary latency.

DEA and Stager

The DEA and Stager also received major facelifts. The biggest of these is probably the fact that the Stager is no longer a separate component, but is now a function of the DEA. There are a few other big changes to the DEA as well:

Buildpacks - I've written about this before. I'm still very excited for this and what it means for the new types of applications that can be run on top of Cloud Foundry.

Warden - Manages containers for both apps that are running, as well as those being used to stage applications. This handles isolation of CPU, memory, network usage, disk usage, etc.

The new staging process - With the Stager being absorbed into the DEA, apps are now staged on the same environment they're going to be run on, all the way down to being staged in a Warden container just like the one they'll run in. Read more in the CFv2 docs.

Stacks - Think of these sort of like "tags" for your DEAs. This allows you to run a mix of DEAs with different characteristics, from something as minor as a certain package installed on the DEA to something as extreme as a completely different OS. When the user pushes up an application, they can specify what stack they require, and CF will ensure the app runs on a DEA marked with that same stack.

Cloud Controller

The Cloud Controller now has several key changes.

Organizations and Spaces, for example, are a way to share access to applications between multiple users. This means if you have more than one operations person in charge of managing an application on Cloud Foundry, you no longer need to share credentials. Organizations and Spaces can be arranged in several ways; you may choose to have an Org for each project and a Space for each phase of the development cycle (dev, staging and prod, for example). Consider the example org/space diagram from the CFv2 docs.

The hierarchy is simple to follow: Organizations contain one or more spaces, and spaces contain zero or more applications.

Routes and Domains are another change in app management. Cloud Foundry has always had the ability to use external domains, that is, domains that are different from the domain that the API endpoint is on, but it was up to the service provider to enable this. cloudfoundry.com, for example, did not have this enabled, while AppFog did. CFv2 adds the ability for users to register custom domains and map them to one or more orgs or spaces. So while the default domain for an application on CFv2's new hosted solution might be "cfapps.io", I could easily map the "catdevrandom.me" domain to my org and host this blog on it.

To define the terms "route" and "domain": the domain is the root part of the app's URL. In the above example, both "cfapps.io" and "catdevrandom.me" would be domains. When you deploy an application, you choose a hostname that's prepended to the domain to create the full URL, or "route". So if I deploy the app "mycoolapp" and choose the domain "cfapps.io", my "route" would be the full URL "mycoolapp.cfapps.io".

The cf CLI

In CFv1, you managed all of your applications from the command line using the "vmc" CLI. It was a Ruby gem that contained all of the logic to interact with the Cloud Controller REST API and allowed you to create and manage applications. "cf" comes in and replaces vmc. It's still installed as a Ruby gem, but has been updated to handle all of the new features of CFv2. There are too many commands to list in this post, so I'll just point you to the CFv2 docs once again.

Conclusion

I've been in the CF community for quite some time now, but this is the most exciting time for us. CFv2 brings some major changes to the table, including some long-requested features. There are still more goodies for us as a community in CFv2, and I haven't even mentioned BOSH in the slightest, but that's for another blog post.


Introducing nise-bosh-vagrant
http://catdevrandom.me/2013/05/21/introducing-nise-bosh-vagrant/
Tue, 21 May 2013 13:35:00 GMT

I'll admit, there was another reason I got excited about nise_bosh. Not only did it give me the opportunity to set up Cloud Foundry v2 with ease, a first for me without access to a full BOSH environment that didn't involve paying out of pocket, but I also saw it as my opportunity to get back into developing BOSH releases. This meant I needed a quick way to iterate on BOSH releases and test them.

nise-bosh-vagrant is a quick project I started last week to help orchestrate this workflow, so that I could leverage nise-bosh but quickly stand up and tear down Vagrant VMs as well as install BOSH releases in them. The goal was to keep the workflow simple and reduce the time from downloading a BOSH release to having it up and running.

So let's see how to use it. I've distributed this as a ruby gem for ease of use, so installation is simply...
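(The snippet was lost in syndication; the gem name below is simply the project name.)

gem install nise-bosh-vagrant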

There are only two required arguments: the path to the BOSH release and the path to the manifest file. Let's use the same release as we did when I wrote about nise-bosh, the mumble-release. We'll assume the release is at /home/brian/mumble-release and we've already run bosh create release. We're also going to add two flags to help things out, --install and --start. This will automatically install the BOSH release after the machine has been prepared, and then start all the jobs in the release in the order they're described in the manifest file.
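The original invocation was also lost, but based on the arguments and flags just described it would be something along these lines; the exact flag syntax here is an assumption, so check the project's README:

nise-bosh-vagrant --install --start /home/brian/mumble-release /home/brian/mumble-release/manifest.yml

Once the VM has been prepared, a few helper scripts are dropped into the Vagrant user's home directory: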

/home/vagrant/install_release.sh -- Runs the nise-bosh commands to install all jobs described in the manifest file

/home/vagrant/start.sh -- Starts all jobs in the BOSH release

/home/vagrant/stop.sh -- Stops all jobs in the BOSH release

If you want to SSH into the Vagrant VM, the Vagrantfile is placed in the release directory, so in this example, it would be at /home/brian/mumble-release/Vagrantfile. There are plans for a nise-bosh-vagrant ssh command.

This project is, admittedly, a work in progress. It works, however every time it stands up a VM, it uses a fresh Ubuntu image and runs the prep scripts, which takes a bit of time. However, there are plans to create a Vagrant box with these preparations already in place. As of writing this, the current version of nise-bosh-vagrant is 0.2, but I'm hoping to continue to improve this, testing it by developing new BOSH releases.

So kick the tires! Break it, figure out what doesn't work and create an issue in GitHub!

nise-bosh: A New Way to BOSH
http://catdevrandom.me/2013/05/20/nise-bosh-a-new-way-to-bosh/
Mon, 20 May 2013 04:00:00 GMT

I mentioned nise_bosh in my previous post, albeit in passing. This is a topic, however, that deserves its own blog post.

nise-bosh is the product of NTT, the world's leading telecom, headquartered in Tokyo, Japan. The nise-bosh GitHub repo describes it as a lightweight BOSH emulator.

In short, it aims to take standard BOSH releases, and allow users to install and run jobs without any instance of BOSH.

Iwasaki Yudai of NTT detailed the process of getting CFv2 up and running using nise-bosh here. Most of it is prep work under the assumption of a completely clean Linux box, but the important lines are the following:
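(The commands themselves were stripped in syndication; based on the linked gist and the argument order described later in this post, they looked roughly like this, with the sudo and bin path being assumptions.)

sudo ./bin/nise-bosh ~/cf-release micro.yml micro
sudo ./bin/nise-bosh ~/cf-release micro.yml micro_ng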

This is installing the two jobs (micro and micro_ng) in the manifest file (micro.yml) included with the above linked gist from the cf-release BOSH release. It's a very simple and easy process, so much so that I was able to get CFv2 up and running in hardly any time, most of which was waiting for various packages to download and build.

Let's run through this with a simpler release. Not to self-promote, but the mumble BOSH release is very straight-forward. If you're not familiar with mumble, it's an open-source voice chat application and server. It's comparable with Ventrilo and TeamSpeak.
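The setup commands were lost from this copy of the post; based on the description that follows, they were roughly the following. The repo URLs and the init script path are assumptions, so double-check them against the nise-bosh README.

sudo apt-get install -y git curl ruby
sudo gem install bundler

git clone https://github.com/nttlabs/nise_bosh.git
git clone https://github.com/YOURNAME/mumble-release.git   # placeholder path for the release repo

cd nise_bosh
bundle install
sudo ./bin/init   # nise-bosh's environment prep script; the path is an assumption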

Nothing fancy here. Installing some prerequisites (git, curl, ruby, bundler), grabbing the two repos of interest (nise-bosh and the BOSH release we're deploying, mumble-release), and running bundle install over nise-bosh. The last command is running an init script that comes with nise-bosh that takes care of a few more prerequisites. This will set up a few directories, users and groups to run the release, as well as a few other apt packages and monit.

Before we actually install the release, let's take a look at the manifest we'll be using to deploy it
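(The manifest itself was lost in syndication; an illustrative sketch of its shape, with made-up property values, is below.)

# Illustrative only; the real property names come from the mumble-release job specs
---
jobs:
  - name: mumble
properties:
  mumble:
    port: 64738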

If you've deployed BOSH releases before, you'll notice things are a lot more trimmed down here. We're only defining the jobs and properties section, and nothing as far as networks or resource pools go. I would assume you could use your normal BOSH manifest, but I can't verify that as of writing this.

One last step before installing the release is part of the standard BOSH workflow: we need to run bosh create release.
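From the release directory (I'll assume it's checked out at ~/mumble-release), that's simply:

cd ~/mumble-release
bosh create release --force   # --force is only needed if the release repo has uncommitted changes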

nise-bosh takes three arguments: the path to the BOSH release, the path to the manifest file, and the name of the job to install. As you can see in our manifest file above, we have but one job, named "mumble", and the output of the install is very straightforward.
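The original command and its output didn't survive syndication, but given those three arguments it looks something like this (the sudo, bin path and manifest filename are assumptions):

sudo ./bin/nise-bosh ~/mumble-release ~/mumble-release/manifest.yml mumble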

It's as simple as that. Finally, we just need to start the job. nise-bosh sets up monit config files as defined in the BOSH release, so all we need to do is tell monit to start all the jobs it's configured for...
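With monit already configured by nise-bosh, that's just:

sudo monit start all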

The intent with this post was to introduce a project I've been working on named nise-bosh-vagrant, but this post is getting a bit long. Keep an eye out for another post very soon describing this project, what it does and how to do it.

Buildpacks are a convenient way of packaging framework and/or runtime support for your application. For example, Cloud Foundry doesn't support Django or Python by default. Using a buildpack for Python and Django would allow you to add support for these at the deployment stage.

These are very much the same as the buildpacks you're familiar with if you've ever used Heroku. In Cloud Foundry v1, to add a framework or runtime, you modified the actual CF source code, submitted a pull request, and it would get merged into the main repo. However, buildpacks plug right into Cloud Foundry v2. You don't need to write any code other than the buildpack itself. You don't need anyone's permission and you don't have to wait for a code review.

So how easy are buildpacks to create and use? Well, I decided to do a little experiment to figure this out for myself. I chose to pick a runtime I'm almost completely unfamiliar with: Haskell.

I say almost because I had a brief stint of learning Haskell using the above publication. I had fun, but didn't have enough time to dedicate to learning it.

So after a quick search, I found there was a Heroku buildpack for Haskell that already existed, which can be found here. Awesome, this is exactly what I was hoping for. I had no context of what was needed to build the buildpack other than knowledge of bash, and there was an existing one I could adapt as needed. I took the demo app, and tried to deploy it with no modifications...

Ok, so, it needs a little work. And I mean literally that: little work. I did the open-source coder's favorite thing and pressed the big ole' "Fork" button. You can find my fork here. As you can see from the commits, hardly any work was needed to get things working, and sure enough...

Cloud Foundry v2 is proving to refine the PaaS that I'm a huge fan of, and I can't wait to dig up other new features. Andy Piper wrote up a nice blog post today on the move from Cloud Foundry to Pivotal, as well as Cloud Foundry v2 and BOSH. It's a good read, and I highly recommend it.

Spotify and Ruby: Using the Hallon Gem
http://catdevrandom.me/2013/05/08/spotify-and-ruby-using-the-hallon-gem/
Wed, 08 May 2013 18:02:00 GMT

While still related to the Miles Platform, I wanted to take a break from the home automation topics to do a quick review of an interesting Ruby gem I found called Hallon.

For all my music needs, I would venture a guess that ~95% of my music listening is Spotify. While I work, in the car, around the house, it's all Spotify. When I looked at integrating music services with the Miles Platform, I was quick to start looking around for Ruby Spotify libraries. Some were better than others, but I ended up with what seems to be the de facto Ruby gem. Hallon is a very easy to use gem that offers a ton of flexibility and has great examples to get you started. In fact, the audio-service in the Miles Platform already has Spotify integration.

I wanted to run through a quick example of logging in, searching for a song, and playing it back. What better than Daft Punk's latest release, Get Lucky?

So first, let's do some prep work. The Hallon gem (or more accurately, Spotify’s APIs) requires you to provide three credentials: your username, your password, and an API key. API keys are binary files generated here. As I like to do with all projects, I'll go ahead and write up a quick Gemfile...

Gemfile

source 'http://rubygems.org'
gem 'hallon'
gem 'hallon-openal'

Yeah yeah, I know, versions. Anyways, these two gems do two very different things. The hallon gem provides the wrapper around libspotify, and is what interacts with the Spotify API, while hallon-openal is the audio driver for streaming music from Spotify.
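The script itself was lost when this post was syndicated; from memory of the Hallon README it looked roughly like the following, and the exact method names may differ slightly between versions.

require 'hallon'

# Log in with your username, password and the binary application key from Spotify
session = Hallon::Session.initialize(IO.read('./spotify_appkey.key'))
session.login!('username', 'password')

# Search, then load each of the first five results before reading them
search = Hallon::Search.new('Daft Punk Get Lucky')
search.load

search.tracks.to_a.take(5).each do |track|
  track.load
  puts "#{track.name} - #{track.artist.name} (#{track.album.name})"
end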

Essentially what we're doing here is creating a Search object with the query "Daft Punk Get Lucky" and telling Hallon to perform the search. Hallon returns an array of Track objects that are just bare-bones references to Spotify objects, so we iterate over the first five results and load the full object from Spotify. Now let's go ahead and run this file just to see what we find so far...

Some interesting results, but that first one is the one we're looking for. Now that just leaves us with setting up the player so we can crank up the tunes and get the party started. Like the rest of the gem, doing this is just as easy...
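(The original snippet is gone from this copy; based on the description below, it amounts to this.)

require 'hallon'
require 'hallon/openal'

player = Hallon::Player.new(Hallon::OpenAL)
player.play!(track)   # play! blocks until the track finishes; #play does not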

It's as easy as that. Create an object for the player, passing it Hallon::OpenAL, and then tell it to play a track object. You can see the whole thing put together here.

Notes

I had an issue (and vented about it on Twitter) where the script was getting hung up logging in. Hallon creates a folder named "tmp" in the directory where your script is, and I had to remove this directory to be able to log in. I have no explanation, only a possible solution.

You may notice when I played the track, I called play!. play also exists, but is non-blocking.

The rdoc for Hallon is insanely helpful. Read it, even if you're not stuck. There are some hidden gems of features in this project (e.g. volume normalization).

Home Automation: Endpoints
http://catdevrandom.me/2013/02/09/home-automation-endpoints/
Sat, 09 Feb 2013 15:03:00 GMT

One of the most important pieces in the Miles Platform, in my opinion, will be the endpoints. In the unofficial dictionary for Miles, we're defining an endpoint as the entity that connects the automation platform with physical components. Consider the blog post on automating lights: in that design, an endpoint would control the relay that switches the light on and off.

Endpoint-Arduino

For the first stab at defining an endpoint, we decided to look at the Arduino. To be specific, we're looking at an Arduino Uno with the Ethernet shield. Our goal is to define a core set of capabilities that an endpoint has that can operate over, at minimum, a REST API. From there, an endpoint could define additional capabilities to enhance the features that it offers.

For this example, we'll take a look at endpoint-arduino, which currently supports the most basic of capabilities, writing to and reading from a pin. The REST API for these calls is defined as follows...
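The API definition itself didn't survive syndication, so rather than reconstruct it, here is a hypothetical shape for the pin read/write calls, modeled on the relay interface from the lights post; the paths and payloads are mine, not endpoint-arduino's.

GET  /pins/7      -> returns the pin's current value as JSON, e.g. { "pin": 7, "value": 1 }
POST /pins/7/1    -> writes the value 1 to pin 7 and returns the new state as JSON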

While this definition is a work in progress, I think it's best if this returned set of data is consistent between both endpoints and their special capabilities.

Endpoint-specific Capabilities

In discussions with Mark Kropf, he brought up being able to use SPI. I also have an immediate use for the Virtuabotix DHT11 temperature and humidity sensor which, while it offers a very handy and easy to use library, would be difficult or impossible to operate by reading from and writing to individual pins over HTTP. This is our next step of development: Mark has said he'll be looking at SPI, and I'll be adding an interface to the DHT11.

Home Automation: Lights
http://catdevrandom.me/2013/01/31/home-automation-lights/
Thu, 31 Jan 2013 22:22:00 GMT

For the first prototype, I looked at a fairly visual, useful and simple problem: I want to be able to control my house lights. In the end, I hope to control whether they're on or off, their brightness and possibly even their color (this one might be a bit far off; still researching hardware). For starters, I figured I'd handle the simplest case: on or off.

Warning

Let me preface this by saying that I am not classically educated in circuitry at all. I have, however, been doing my homework so I don't injure myself. If you decide to do this yourself, please, please, PLEASE be aware of the dangers of working with your home's electricity. If this is a standalone circuit for learning, as mine was, keep things unplugged while you're hooking them up and make sure not to touch the live wire while interacting with it. If this is already part of a light circuit and you're applying this to your home, flip the breaker for that circuit. I'm not responsible for your injuries, but I certainly don't want anyone to get hurt.

Prototype

I knew I wanted to wire this up as if it used two three-way switches. This operates exactly the same way as having two light switches control one light, except I've replaced one switch with a relay that will be operated by an Arduino. Here's a poorly drafted design of such a circuit:

Again, not classically trained.

This is a fairly common setup in a lot of houses. As I mentioned, I've simply replaced one of the light switches with a relay. I've actually picked up a board with 4 relays and it's been great for prototyping. The neutral line runs straight to the light, and the hot wire runs to the common on the first switch, the two travelers to the second switch, and from the common on the second switch to the light.

After that, it was a matter of writing some quick code for the Arduino, which can be found here.

As described in the code, this provides a REST interface to each relay, where a POST turns the relay on, a DELETE turns the relay off, and a GET returns the state of the relay. All returned values are in JSON format, because I like to overdo things.
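The linked sketch isn't reproduced here, but a minimal Arduino sketch in the same spirit, with a single relay on pin 7 and a deliberately tiny HTTP handler, could look like the following; the pin number, IP address and JSON shape are assumptions, not the actual code.

#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192, 168, 1, 177);
EthernetServer server(80);

const int RELAY_PIN = 7;

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  Ethernet.begin(mac, ip);
  server.begin();
}

void loop() {
  EthernetClient client = server.available();
  if (!client) return;

  // Read the request line, e.g. "POST /relays/0 HTTP/1.1"
  String requestLine = client.readStringUntil('\n');

  if (requestLine.startsWith("POST")) {
    digitalWrite(RELAY_PIN, HIGH);   // turn the relay on
  } else if (requestLine.startsWith("DELETE")) {
    digitalWrite(RELAY_PIN, LOW);    // turn the relay off
  }

  // Reply with the relay's current state as JSON
  client.println("HTTP/1.1 200 OK");
  client.println("Content-Type: application/json");
  client.println("Connection: close");
  client.println();
  client.print("{\"relay\": 0, \"state\": ");
  client.print(digitalRead(RELAY_PIN));
  client.println("}");

  delay(1);
  client.stop();
}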

Conclusion

While this won't be my last prototype for the house lights, it certainly opened my eyes to things to keep in mind as I implement this throughout my house. Discussions are underway on how to improve this, both from a feature standpoint and an efficiency standpoint, and the next step will be to control the dimming of the lights.

Home Automation: Intro
http://catdevrandom.me/2013/01/28/home-automation-intro/
Mon, 28 Jan 2013 20:44:00 GMT

Home automation has been an interest of mine for some time. In fact, the backstory edges on the side of embarrassing. So we might as well start there.

Beginnings

Ever since 2008's Iron Man released (I know, big shock where the inspiration came from) and Jarvis was presented as artificial intelligence, I've had one consistent thought stuck in my mind: "How close to reality is this?"

Over the next few years, I'd take a few trips down the path of a realistic Jarvis, each subsequent attempt being more successful than the last. In fact, it would be the motivator that took me into studies of artificial intelligence, machine learning, natural language processing and other related topics while in college.

Current Day

Ok that's boring. Let's get to the details. I'm reviving my classical education in AI and machine learning and crossing it with my hobbies of circuitry and making weird things while avoiding injury.

Currently my ideas include automation and programmatic control of...

Lights

HVAC

Security

However, I want to do this as much in the public eye as I can (thus the blog), get the community involved and hopefully together we can make something awesome. So watch this space, as I'll be documenting my progress as well as providing the code I've written along the way.

Special Thanks

Josh Bartz -- This is a topic that he and I have been discussing for years, Josh has been a near limitless source of ideas and inspiration

Adron Hall -- Adron actually sparked my drive to make this a communal effort with a couple tweets

Mark Kropf -- Mark has had a strong interest in home automation and has been another great source of ideas

Alex Klein -- Alex has been my go-to guy for anything involving circuitry and wiring. Thank him for the fact I'm not in the hospital yet.

Hello

... and welcome. Yes yes, I know. Yet another blog from Brian McClain, how many does this guy need, right? Well, I actually have a plan. My other blog (old blog? I haven't decided yet) was named after my awful Twitter handle. It was hard to remember for some, and only got referral traffic. CatDevRandom, though? That's easy. I type that command at least once a week.

So what can you expect from this blog?

Tech

Code

General hackings

Even the choice of blogging engine was done to promote this. I chose Octopress, an open source blogging framework that promotes hacking, customizing, and general tinkering.

A lot of what drove this choice was really the hosting situation. It was a lot of manual work, it constantly went down, and when I did manage some traffic, it didn't handle it well. In my current position, I hack and develop on PaaS technology, so I decided to host this on AppFog, a Cloud Foundry provider. As a big Cloud Foundry fan, this was icing on the cake.

So I might move 100% of my blog here, or I might leave this be and keep my writings on my old blog, http://brianmmcclain.com, or I might keep them split. Who knows!

So what do you think? A fan of the new idea? Like the new blog? Or miss the old? The new comment system needs some traffic as well :)