OK, so everyone’s talking about the PS4 announcement. Here are my thoughts; take them with a grain of salt, since I’m not a game developer. I’ve had PlayStations since the first one, and I’ve generally liked them. That said, the PS3 had several weaknesses:

Price: no one wants to pay $600 for a game console

Incessant updates that take forever to download and run

Bad ports: the Xbox had the better version of most cross-platform games this generation, just because it was easier to develop for

Now, if rumors are correct, the next Xbox should be pretty similar to the PS4 (x86, large memory, AMD GPU). The main difference is that the rumored memory for the Xbox is 16GB of DDR3 plus a small amount of fast memory, while the PS4 will have 8GB of GDDR5: less memory, but faster. Let’s try to compare speeds: according to Wikipedia, a DDR3 channel peaks at around 17 GB/s, whereas GDDR5 starts at about 5 Gbit/s per pin, with crazy high speeds for some configurations; over the wide buses consoles use, that works out to well over 100 GB/s. So the PS4 should have roughly three times the memory bandwidth.
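To make that arithmetic concrete, here’s a back-of-the-envelope sketch. The 256-bit bus widths and the DDR3-2133 / 5.5 Gbit/s GDDR5 figures are my assumptions pulled from the rumors and announcement coverage, not confirmed specs:

```javascript
// Back-of-the-envelope peak bandwidth:
// bandwidth (GB/s) = per-pin rate (Gbit/s) * bus width (bits) / 8
function peakBandwidthGBps(gbitPerPin, busWidthBits) {
  return gbitPerPin * busWidthBits / 8;
}

var ps4 = peakBandwidthGBps(5.5, 256);    // GDDR5 at 5.5 Gbit/s per pin -> 176 GB/s
var xbox = peakBandwidthGBps(2.133, 256); // DDR3-2133 -> roughly 68 GB/s
console.log(ps4, xbox, ps4 / xbox);       // the ratio comes out around 2.6x
```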

What does all of this faster memory mean? Faster frame rates and better graphics? Maybe a little, but probably not much. Right now on PC, all the data for a fairly large chunk of the game (say, the current level) is loaded into main memory, and the data needed for rendering is moved onto the graphics card (which uses GDDR5) as needed. More graphics memory lets you draw a more complex scene at full speed, but it doesn’t otherwise speed things up. An architecture like this will work fine for the Xbox as far as graphics go: developers are used to it, and the PC, which uses it, has the best graphics of any platform. However, graphics isn’t all we use GPUs for anymore.

GPUs can be used for general computation as well as graphics. A pretty common use of GPU compute in games is doing the physics calculations for lots of small particles (does that sound like something from the PS4 debut event?). One of the main difficulties with GPU compute is transferring data back and forth between the GPU and CPU with good performance, something that should be super fast and easy on the PS4. The PS4 will be a GPU compute monster, and that will enable a lot of cool effects that are hard to do even on a high-end PC. In terms of pure polygon count a PC definitely will beat the PS4, and the Xbox might (depending on its final specs), but if the PS4 has flashier effects and enables art styles that are unfeasible on other platforms, that could be a big strength. Hopefully Sony did the hard work of making an easy API for this compute capacity, which seems possible since I hear the PS4 tooling is quite good.

So when Ars claims that the PS4 won’t best a high-end PC on anything, they’re wrong: overall memory speed will be much greater. That doesn’t just make GPU compute fast and relatively easy to use, it also removes one of the biggest bottlenecks in overall system performance. 8GB of memory will let most of a game’s assets sit in memory at once, and with unified memory there’s no need to shuffle data to the GPU. In the later years of the PS4’s lifetime, people may have to resort to crazy tricks to get maximum performance, but right now it should be very easy to make a game with good performance (assuming there are no odd bottlenecks I’ve missed). This should make the PS4 the preferred system for developers and solve one of the big problems the PS3 faced (and is still facing).

Now Sony has a good history of shooting itself in the foot, so there’s tons of ways the PS4 could screw up. For now I’m pretty optimistic: it looks a lot more appealing to me than the WiiU. We’ll find out about the new Xbox soon, but if the rumors are as accurate as they were for the PS4 then I think the PS4 will be the better gaming machine.

So my last post described
why you might want to use dependency injection and how to do it by hand.
That’s all well and good, but there’s a problem with doing dependency
injection by hand – it’s a big pain in the ass. You have to wire up
everything by yourself, you end up with tons of factories, and it’s just not
feasible once you get above a certain level of complexity. So a dependency
injection framework is super useful, and it’s also super easy to make a simple
one.

Historically, the first dependency injection frameworks came from Java and
used XML (yuck) for configuration, but the Java world has mostly moved on to
things like @Inject annotations. Statically typed languages like Java have
one advantage over dynamically typed ones like javascript: the framework
knows what type of thing to inject from the static type. So in a modern Java
DI framework, you annotate a constructor with @Inject and the framework
fills in its arguments based on their declared types.

You’re already specifying what type of thing you need in the type signature,
so you just have to tell the framework what parameters need to be injected. In
javascript, you need to come up with some other method, and the one that’s
most obvious is to base it on names. So you’ll ask for a http dependency,
and the framework will find one for you. How do you specify what dependencies
you need? Angular does some clever
magic with converting functions to strings to figure out the names
of their dependencies, but even they have several ways to specify explicitly
what you need (mostly because of code minification).

So you’ll end up with an API where you do something like the following to make
a service that can be injected:

injector.service({
  name: "myservice",
  inject: ["filesystem", "http", "json"],
  factory: function (filesystem, http, json) {
    // do stuff with your dependencies
  }
})

And you get dependencies manually (which you ideally only need to do for your
entrypoint) using a function like:

myservice = injector.injectDependencies("myservice")

It looks a lot like node modules, no? Several node DI frameworks use node
modules as their basic level of abstraction, but I prefer to keep it at a
smaller level than that. Anyway, this is simple to implement: I made one in
about an hour, in under 20 lines of code, and you can see it at
github. I’ve called it hypospray,
after the injector things they use in Star Trek :-)

It’s quite simple, you have a services object:

var services = {}

and then you have a simple function for registering your dependencies:
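A minimal sketch of what the registration and resolution functions could look like, repeating the services object for completeness. This is my guess at the shape implied by the API above, not the actual hypospray code:

```javascript
var services = {};

var injector = {
  // register a service under a name, along with the names of its dependencies
  service: function (spec) {
    services[spec.name] = spec;
  },
  // resolve a name by recursively resolving its dependencies,
  // then calling the factory with them
  injectDependencies: function (name) {
    var spec = services[name];
    var deps = (spec.inject || []).map(function (dep) {
      return injector.injectDependencies(dep);
    });
    return spec.factory.apply(null, deps);
  }
};
```

A real version would at least cache constructed services and detect dependency cycles.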

To see it in use, look at my example
script. There’s
a lot that would be needed before using this in production. For one, you
aren’t really saving any boilerplate here; however, you could build a
higher-level framework for whatever it is you’re doing on top of this. A web
framework might look something like this:
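As a hypothetical illustration (every name here is invented; this is not a real framework API), routes could declare the services their handlers need, and the framework would build them before dispatching:

```javascript
// a couple of canned services just for the example
var services = {
  db: function () { return { find: function (id) { return { id: id }; } }; }
};

var webFramework = {
  routes: {},
  // register a route along with the dependencies its handler needs
  route: function (spec) {
    this.routes[spec.path] = spec;
  },
  // look up a route, construct its dependencies, and call the handler
  dispatch: function (path) {
    var spec = this.routes[path];
    var deps = spec.inject.map(function (name) { return services[name](); });
    return spec.handler.apply(null, deps);
  }
};

webFramework.route({
  path: "/user",
  inject: ["db"],
  handler: function (db) {
    return db.find(42); // the injected db arrives as a plain argument
  }
});

console.log(webFramework.dispatch("/user")); // { id: 42 }
```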

That looks pretty clean, right? You could even use the sort of magic angular
uses and get dependencies by the function input names.

If you want to do this in production, I’d use a more battle-tested framework,
though. On the client side, I can’t recommend Angular
enough, however the server-side picture is much less clear. I’ve looked at a
few modules, and I like dough the most.
Look around npm, there are plenty of choices.

Use some DI framework, though. It makes testing much easier and faster, which
means you’ll run your tests more often.

So on a recent episode of nodeup, the topic came up of how to
test efficiently when you have to deal with an external API or something
similar. If you do the naive thing you’ll end up with tests that are fragile
and take hours to run, because you’re calling out to the API every time you
run a test. There are a couple of ways to deal with this problem. On the one
hand you can use something like nock to intercept your http calls
and return something nice for testing without actually doing any http calls;
on the other, you can set up your code in a dependency-injected style, run
90% of your tests as true unit tests that never need to call out to the
external API, and keep the remaining 10% as integration tests that make real
calls out to APIs, external databases, etc. Neither style is really better
than the other (in fact they are complementary), but dependency injection is
of general value for your codebase, so I’m going to talk about how to do it
in js.

The basic rule with dependency injection is that you never call ‘new’ (or the
equivalent in other languages) in your code that actually does work. You have
code that does stuff, and then you have glue that ties those parts together.
The glue is what allocates objects. The reason you don’t call new in your
working code is that doing so hard-codes your dependencies. Instead of
calling new, you take all of the objects you need as parameters, either to a
function or to your object’s constructor.

Let’s make this a bit more practical. If you were going to call some random
API, the code might look like:

// version 1, no DI
function getUserPosts(username, postId) {
  var request = http.request({
    'host': 'api.randomsite.com',
    'path': '/users/' + username + '/posts/' + postId
  }, function (response) {
    response.on("data", function (chunk) {
      analyzePost(chunk)
    })
  });
  request.end()
}

// version 2, java-style DI
function UserPostsGetter(requester, postAnalyzer) {
  this.requester = requester
  this.postAnalyzer = postAnalyzer
}
UserPostsGetter.prototype.get = function () {
  var postAnalyzer = this.postAnalyzer
  this.requester.request(function (response) {
    response.on("data", function (chunk) {
      postAnalyzer.analyze(chunk)
    })
  })
}
// in your main file:
function getUserPosts(username, postId) {
  var requester = new Requester({
    'host': 'api.randomsite.com',
    'path': '/users/' + username + '/posts/' + postId
  })
  var postAnalyzer = new PostAnalyzer()
  var postGetter = new UserPostsGetter(requester, postAnalyzer)
  postGetter.get()
}

However that’s a bit heavyweight for javascript. The main thing you want to do
with dependency injection is to not hard code your dependencies. A more
functional style might be something like:

// version 3, functional DI
function getUserPosts(username, postId, requestFactory, callback) {
  var request = requestFactory({
    'host': 'api.randomsite.com',
    'path': '/users/' + username + '/posts/' + postId
  }, function (response) {
    response.on("data", function (chunk) {
      callback(chunk)
    })
  });
  request.end()
}
// in your main file:
function getUserPostReal(username, postId) {
  getUserPosts(username, postId, http.request, analyzePost)
}

So dependency injection adds some overhead, but the real benefit comes in
testing. For version 1 above, you’d have to use nock or a similar tool to run
tests without actually interacting with the service. For the others you might
have to use mocks or fakes, but they can be very simple. For example a fake
requestFactory for version 3 might be:

getUserPosts("testUser", "1", fakeRequestFactory("this is a test post"), function (responseText) {
  assert.equal(responseText, "this is a test post")
})
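One possible fakeRequestFactory to go with that test (a sketch, not the only way to do it): it fakes only the tiny slice of the http.request interface that version 3 touches, namely a request with an end() method and a response that emits "data" events:

```javascript
function fakeRequestFactory(fakeBody) {
  // returns a function with the same shape as http.request:
  // (options, responseCallback) -> request object with an end() method
  return function (options, onResponse) {
    return {
      end: function () {
        onResponse({
          on: function (event, handler) {
            if (event === "data") handler(fakeBody);
          }
        });
      }
    };
  };
}
```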

So that’s the basic idea behind dependency injection. In the Java world people
use a lot of very powerful DI frameworks (like
Guice) that automate a lot of the
generation of the glue between your components. Such a thing doesn’t exist in
javascript, but it’s a lot less needed because of the power of the language.
However, anything other than version 1 above does add complexity, so before
deciding whether to use a dependency-injected style you need to decide how
much testing you’re going to do (almost certainly not 100%) and how much of
it will be integration testing versus unit testing. What I’ve
started to do is to have 2 parts to my code: one part that is basically pure
(it might store state during calculation, but it appears pure to the outside
world) and never tries to do IO even indirectly (so you don’t need mocks).
Then I have a driver that’s all IO. This means I can unit test most of my code
in total isolation, and I run integration tests for the driver. If you’ve ever
programmed in Haskell you’ll see that this is similar to the way you work with
Monads.
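A tiny illustration of that split, with invented names: the summarizing logic is pure and needs no mocks at all, while the driver owns all the IO and gets a handful of integration tests:

```javascript
// pure part: no IO anywhere, so unit tests need no mocks
function summarizePosts(posts) {
  return {
    count: posts.length,
    totalLength: posts.reduce(function (n, p) { return n + p.length; }, 0)
  };
}

// driver: all the IO lives here, behind injected functions,
// and is covered by integration tests instead
function run(fetchPosts, report) {
  fetchPosts(function (posts) {
    report(summarizePosts(posts));
  });
}
```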

Meteor and Derby are next-gen javascript web frameworks. The main thing they bring is running the same code on the server and the client: they render the page server-side first and then update it client-side using the same code and templates. This is pretty awesome, even if they’re both still quite immature. It’s hard to argue with the performance and DRY-ness of the approach these frameworks take, but is this what we’ll be using for everything in 5 years?

The short answer is probably not. However, I’d imagine that these will have the same effect rails had way back when it first came out: hybrid frameworks will become a mainstream choice among a large number of different options, like the MV* frameworks you see now. Remember that there are still people doing vanilla php development. There are a few reasons why you wouldn’t want to use one of these frameworks:

Legacy: This is by far the biggest issue: these frameworks don’t really play well with legacy technology, so they’re only good for new projects. As of right now you can’t even use a legacy datastore with one of these app frameworks (both support only Mongo), though I’m sure that will change in time.

Control: So you’re working on a new project, should you use one of these frameworks? Well, if you need to do something that doesn’t match the model of these frameworks then you’ll probably want to bypass them and use more traditional types of frameworks.

APIs: Someone might come up with a solution to this in the future, but as of right now, it looks like it would be hard to share much code with an API. You can’t use the built-in data passing these frameworks provide, because the client and server are much too coupled in these types of frameworks (the coupling is not normally much of an issue, because it’s transparent to the developer).

Languages: If you prefer another language over javascript (or coffeescript), then these types of frameworks will probably be inaccessible. Google wrote a Java-to-Javascript compiler for GWT, but even that has a lot of downsides vs. using native javascript on the client. I predict that compiling to javascript will become even more popular than it is now, and that some server-side web frameworks will be ported, but I doubt those ports will be mainstream. It will be mostly javascript (and coffeescript, which has the advantage of playing very nicely with javascript).

There are lots of advantages though:

Speed: Launch fast by using server-side rendering, and update quickly on the client side

DRY: The same code is used on the client and server (barring sensitive things that only run server-side: authentication and the like), cutting down duplication a lot.

Live updates: These frameworks build in updates to each client when the data changes (and in Derby the code as well), with little extra work needed. This makes a chat app nearly trivial.

Offline mode: This is in Derby now and probably will be in Meteor soon: data changes can be cached on the client when the connection is lost and sent when it is restored. Conflicts can be resolved in a variety of ways.

What will probably happen is that currently popular client-side frameworks will become much less popular. Why use rails+backbone (or django+angular, or whatever your favorite combination is) if you can just use Meteor or Derby? Server-side programming is probably safe for the most part, but if you want a “single page app” (I hate that term) as opposed to a more traditional webapp, I suspect people will end up using these hybrid frameworks. Right now the niche for Meteor and Derby is probably highly interactive, collaborative webapps with complex UIs and data models that need high performance. I’m excited to see what happens with these frameworks, and I’ll probably be using one of them on my next personal project.

Go has been getting a lot of attention lately. I’d like to bring up something I haven’t seen discussed much: Go’s relation to Java. Now, I don’t know anything for sure about the Go developers, but being inside Google, I’m sure they see a lot of Java code. Go certainly is in some ways a reaction to more complex classical OO languages like C++ and Java (and C# and D, etc.), but it also has a lot of respect for what Java did right, even when it uses vastly different names and syntax for its solutions.

The one obvious point of comparison is the concept of interfaces. I won’t dwell on this too long, but Go refines Java’s best idea by making interface satisfaction implicit: you don’t need to opt in to satisfy an interface, which makes things a lot more lightweight. Go also makes interfaces more central than Java does. Java has class polymorphism, abstract classes, and interfaces; in Go every type is distinct, but any type with the appropriate methods can be used through an interface.

Both Go and Java decided to punt on generics initially, using runtime-checked casts where you’d need generics. This brings us to where Go learns a lesson from Java: binary compatibility is too costly. Java has always had the idea of cross-platform binary compatibility, but in practice this isn’t very useful. Making separate builds for separate platforms isn’t much of a problem if source compatibility is good (which it is for Go), and when it came time to add generics to the Java language, they had to be crippled to support older JVMs. The thing both languages get right is releasing a useful tool knowing it will need to grow. Go is not complete, and it will probably get generics at some point, but for now you can still get work done without them. Compare this to a language like Rust, which is much more complex and still not really practical to use for a real project yet.

One thing Go gets a lot of flack for is its lack of exceptions. In reality, Go has almost the exact same distinction as Java. In Java there are regular Exceptions and RuntimeExceptions. Regular Exceptions are checked at compile time, so they’re part of the API of any library you call (even if they are often not treated that way). In theory this eliminates one of the big problems with exceptions: never knowing what exception will come from where. In practice, though, checked exceptions are extremely verbose. RuntimeExceptions, on the other hand, may be thrown and caught anywhere, and are used mainly in two cases: programmer errors (which should never be caught) and things outside of your control (out of memory, etc.); it’s pretty rare that you’ll actually want to catch these. Go likewise has two kinds of error handling: error values (returned via multiple return values, so they’re cleaner than in C) and panics, which are somewhat similar to exceptions. Panics are fundamentally like RuntimeExceptions: you use them either internally to a package or for programmer errors. Error values are for expected errors, things like files not found, network timeouts, etc. Multiple return values let you use error values to cleanly solve the same problem as checked exceptions: making errors part of the API of a particular package, while still keeping them out-of-band of the normal return value of a function or method. Both checked exceptions and error values are verbose (though error values in Go seem less verbose to me), but that’s because dealing correctly with errors is hard and involves a lot of code.

Both Go and Java have a baked-in concurrency model. It’s not really a headline feature in Java anymore, but any method can be marked synchronized, and it will then take a lock on the object it belongs to. C and C++ (until last year) didn’t bake in any threading model, leaving that to libraries. Java realized that proper concurrency requires language support, and in the 90s threads and locks were what everyone used. Nowadays the consensus seems to be that straight threads and locks don’t scale: too many people just add synchronized wherever they have a deadlock, which kills the concurrency and is usually voodoo programming. Go uses lightweight threads and message passing, which is much easier to reason about. You might argue that in 20 years we’ll be saying the same things about Go’s concurrency model that we now say about Java’s, but I doubt it: Go’s model is based on CSP, which has only become more relevant over the years.

So Go tries to solve a lot of the same problems as Java, but it ends up with very different solutions. I think Go solves these problems better than Java, but only time will tell. Java 1 was very simple, but nowadays Java (more the libraries and frameworks than the language itself) is practically synonymous with complexity.