In Part III of this series, we discussed how the Unix kernel came to life in the C programming language, along with a bit of the kernel's genesis. We also noted some issues with using C as a general-purpose programming language in modern-day use.

Surprisingly, with the exception of Windows, the popular modern-day operating systems all use Unix or Unix-like kernels. The differences between the operating systems tend to lie in how the rest of the OS is implemented. Android uses a Linux kernel with higher-level functions developed in Java. OS X and iOS use a Mach-derived kernel with Objective-C on top. Linux typically implements the higher-level functions in C itself. Each OS has some assembler, of course, and there is a little C++ sprinkled here and there.

Changes!

As we noted earlier in the series, the last ten years have brought major changes in hardware. Multi-core CPUs are commodity parts, GPUs are standard, and networking is taken for granted. The range of peripherals on offer has also expanded greatly, especially in the mobile arena. Radios, IMUs, and multi-touch screens are now prominent.

Another major change is the amount of information that programmers have available, from places like Stack Overflow. Another valuable source of information about coding comes from the open source repositories on GitHub, which hold entire treasure troves of working code solving a wide variety of computing problems. These resources are contributed and available globally. Life be good!

Well, mostly good. As it turns out, there are some issues. While hardware is much more capable, software has lagged behind. The array of new hardware needs to be supported, and as we've discussed, the current set of tools has its genesis roughly 15 years ago for C#, at least 20 years ago in the case of Java and Objective-C, and 40 for C itself. Some of the "newer" hardware capabilities, like multiple CPU cores, are just hard to program and control at this point.

While there is a lot of great code on GitHub, and some wonderful answers on Stack Overflow, we also know that there are equally poor examples of both. How do you tell the difference? I'm sure you've run across GitHub repositories that provide a good 80% answer, but would take a major rewrite to turn into code suitable for your project. People have picked up the habit of copy/pasting Stack Overflow answers into their code. How do you know if code was written by a seasoned developer, a researcher, or someone just starting out? Is the project a one-off demo? Is it a research project? Is it meant for production use?

The Commercial Approach

As it turns out, many companies face much the same issues. High-tech companies like Apple and Google have programmers with a wide range of experience. Some engineers have just been recruited out of college; others have been programming for decades. One of the major questions is, "How do you bring in new people and make them productive?" And this now happens at unprecedented scale, with billions of users of a company's products.

Companies have distinct advantages over the open source community. They get to pick who works there, and they have money to invest, infrastructure, and so on. They also have a vested interest in helping their developers build reliable software, because any issue can have a great financial impact. Let's say you're a backend programmer at Google. Today Google handles about 40K search queries a second, and each query uses around 1,000 computers to retrieve an answer in 0.2 seconds. Then there's the AdSense side, which generates actual revenue. What happens if a problem arises? How long will it take to detect it, find it, and then fix it? Time is money at an unprecedented rate.

There are several paths companies can take to help level the playing field. There are some common defense mechanisms, mostly having to do with program memory management. Here are two of the new developments from Google and Apple, who are both building new open source programming languages.

Google & Go

In the words of the Go team:

The Go programming language was conceived in late 2007 as an answer to some of the problems we were seeing developing software infrastructure at Google. The computing landscape today is almost unrelated to the environment in which the languages being used, mostly C++, Java, and Python, had been created. The problems introduced by multicore processors, networked systems, massive computation clusters, and the web programming model were being worked around rather than addressed head-on. Moreover, the scale has changed: today's server programs comprise tens of millions of lines of code, are worked on by hundreds or even thousands of programmers, and are updated literally every day. To make matters worse, build times, even on large compilation clusters, have stretched to many minutes, even hours.

Go was designed and developed to make working in this environment more productive. Besides its better-known aspects such as built-in concurrency and garbage collection, Go’s design considerations include rigorous dependency management, the adaptability of software architecture as systems grow, and robustness across the boundaries between components.
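To make that built-in concurrency concrete, here is a minimal sketch (the function name is illustrative, not from the article or the standard library): each piece of work runs in its own goroutine, and a channel collects the results.

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll fans the work out to one goroutine per input and gathers
// the results over a buffered channel.
func squareAll(nums []int) []int {
	var wg sync.WaitGroup
	results := make(chan int, len(nums))
	for _, n := range nums {
		wg.Add(1)
		go func(n int) { // each square is computed concurrently
			defer wg.Done()
			results <- n * n
		}(n)
	}
	wg.Wait()
	close(results)
	out := []int{}
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	sum := 0
	for _, s := range squareAll([]int{1, 2, 3, 4}) {
		sum += s
	}
	fmt.Println(sum) // 1+4+9+16 = 30
}
```

Note that goroutines and channels are part of the language itself, not a bolted-on threading library, which is exactly the "addressed head-on" design the quote describes.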

Apple & Swift

Apple comes at things from a different perspective. Since the introduction of the Mac, Apple has been thought of as a graphics front end for computers. For many years, Objective-C has been their ace in the hole. Objective-C integrates C with object-oriented programming in the Smalltalk tradition, provides automatic memory management (reference counting, plus an optional garbage collector for a time), and a lot of other things that make programming nice. Objective-C has been around since the early 1980s. While it has served its master well, Apple is rolling out a new programming language called Swift to replace it. Swift takes the lessons learned over the last few decades and rolls them into a new programming environment.

The Swift project describes itself this way:

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns.

The goal of the Swift project is to create the best available language for uses ranging from systems programming, to mobile and desktop apps, scaling up to cloud services. Most importantly, Swift is designed to make writing and maintaining correct programs easier for the developer. To achieve this goal, we believe that the most obvious way to write Swift code must also be:

Safe. The most obvious way to write code should also behave in a safe manner. Undefined behavior is the enemy of safety, and developer mistakes should be caught before software is in production. Opting for safety sometimes means Swift will feel strict, but we believe that clarity saves time in the long run.

Fast. Swift is intended as a replacement for C-based languages (C, C++, and Objective-C). As such, Swift must be comparable to those languages in performance for most tasks. Performance must also be predictable and consistent, not just fast in short bursts that require clean-up later. There are lots of languages with novel features — being fast is rare.

Expressive. Swift benefits from decades of advancement in computer science to offer syntax that is a joy to use, with modern features developers expect. But Swift is never done. We will monitor language advancements and embrace what works, continually evolving to make Swift even better.

As is typical with most things Apple, everything seems very happy and fluffy 😉

The Others

Of course, there are also the tried and true approaches. Most companies use Python and Java to help with their programming issues. Some use C++, with mixed results; C++ does not tend to lend itself to spreading good results across a large population of programmers.

Conclusion

This series of articles has laid out some of the thought process and background for getting started programming embedded systems on the Jetson Dev Kits.

Takeaways

First, you need to be able to efficiently interface with C routines. There’s too much existing infrastructure out there to ignore, including interfacing with the operating system kernel.

Second, you need a memory-safe language (automatic garbage collection, range checking). While you may be able to produce flawless code with manual memory management, more than likely others cannot. There will be times when you will need to use third-party libraries, and you need to reduce the risk of memory corruption as much as possible. You don't want these guys patching your code for you.
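A quick Go illustration of the range checking mentioned above (the `outOfRange` helper is hypothetical, just for demonstration): an out-of-bounds access triggers a runtime panic that can be recovered, rather than silently corrupting memory as it might in C.

```go
package main

import "fmt"

// outOfRange shows that an out-of-bounds slice access is caught by the
// runtime as a recoverable panic, not silent memory corruption.
func outOfRange(xs []int, i int) (v int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("caught: %v", r)
		}
	}()
	return xs[i], nil
}

func main() {
	_, err := outOfRange([]int{1, 2, 3}, 5)
	fmt.Println(err) // the runtime reports the bad index instead of corrupting memory
}
```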

Third, multi-core execution and concurrency are a big deal. They're also hard to get right. Make sure your programming language helps you with this.
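To see why that language help matters, consider this Go sketch (the function is illustrative only): many goroutines increment one shared counter, and the mutex is what keeps updates from being lost.

```go
package main

import (
	"fmt"
	"sync"
)

// count increments a shared counter from many goroutines. The mutex makes
// the increments safe; remove it and `go run -race` reports a data race,
// and updates can silently be lost.
func count(workers int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(count(100)) // 100
}
```

A language with concurrency primitives and a race detector built in turns this class of bug from "mysterious production failure" into something you catch during development.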

Fourth, make sure that the programming language has enough critical mass behind it that you can leverage other people's knowledge. This applies to how to actually use the language or programming environment itself, as well as to the availability of libraries and such. You can be using a kick-ass language, but if you have to figure everything out yourself and write all the libraries, that's a problem.

Next Steps

Now we're off to start working on the Jetson. To be clear, this is embedded programming in the large, not for things like base Arduinos and such. We're talking robots and vision systems and the like. The inclusion of Google's Go programming language and Apple's Swift was intentional. Remember that the iPhone runs ARM code, just like the Jetson does, which makes Swift a natural candidate for use. Android and Java could also work on the Jetson in a similar role.

From a different perspective, Go seems like a great candidate for lower-level programming for something like robots. As we've talked about, memory management and concurrency are difficult, and in distributed systems even more so.