If you are getting the too many open files error on MacOS, it could be VSCode keeping too many files open (by default, more than 10240 open files triggers the error).

You can confirm that with the following:

lsof | awk '{ print $2 " " $1; }' | sort -rn | uniq -c | sort -rn | head -20

So, what can you do about it? If the files are not important, say they are in your output folder, then you can use VSCode settings to exclude them. In the example below, I configure VSCode to ignore build folders. I would encourage making this a workspace setting, so everyone on the team gets it.
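A sketch of what that workspace setting might look like in `.vscode/settings.json` (the folder globs here are just examples; `files.watcherExclude` is the setting that stops VSCode watching those files):

```json
{
    "files.watcherExclude": {
        "**/build/**": true,
        "**/node_modules/**": true
    }
}
```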

After using a MacBook Pro for two years I thought it was time to share what utilities I found really useful to have. These are obviously weighted towards being a software developer, so your mileage might vary.

Brew

It is the missing package manager for MacOS; as with NPM, Chocolatey, or Composer, you can install what you need via the command line.

It may seem weird: what is wrong with just downloading and installing what you need?! The advantage is that you can write this stuff down, so if you need to reinstall it is easier (and also easier to share to help others get up and running).

A second advantage is updating: it takes one command to update all the tools I use.

Aerial

The AppleTV has the best screensaver I've ever seen, and some smart person ported it to MacOS with the name Aerial.

A word of warning: these videos are massive and will destroy your bandwidth. To solve that, under the settings there is a Cache section; make sure you have Cache Aerials As They Play checked. If you are on an uncapped connection, there is also a download now option, which is a must.

Fish

Fish Node Manager

Part of my job has involved working with multiple projects, which means multiple versions of Node, and that was a pain. Thankfully there is a Node Manager for Fish that lets you easily change which version of Node you are using.

Unfortunately, this isn't as easy to set up: to install it you first need Fisherman, which is like Brew but for Fish, which leads to a three-step process to install and configure it.

Amphetamine

Amphetamine is a massively useful tool for MacOS, especially in a DevOps culture where you might get up in the night and just need your machine to behave the exact way you want it. Its core use is keeping your Mac from going to sleep, and you can control what triggers that, either automatically or manually.

This post is one in a series about stuff formally trained programmers know – the rest of the series can be found here.

Binary Tree

In the previous post we looked at the tree pattern, which is a theoretical way of structuring data with many advantages. A tree is just a theory though, so what does an actual implementation of it look like? A common data structure implementation is a binary tree.

The name binary tree gives us a hint to how it is structured: each node can have at most two child nodes.

Classifications

As a binary tree has some flexibility in it, a number of classifications have emerged to provide a consistent way to discuss binary trees. Common classifications are:
- Full binary tree: Each node in a binary tree can have zero, one or two child nodes. In a full binary tree each node can only have zero or two child nodes.
- Perfect binary tree: This is a full binary tree with the additional condition that all leaf nodes (i.e. nodes with no children) are at the same level/depth.
- Complete binary tree: The complete binary tree is where each leaf node is as far left as possible.
- Balanced binary tree: A balanced binary tree is a tree where the left and right subtrees of every node differ in height by at most one, which keeps the overall height of the tree as small as possible.

Implementations

While a binary tree is more than just a pattern, there are no out-of-the-box implementations of it in C#, Java or JavaScript. The reason is that it is a very simple data structure, so if you need just the data structure you can implement it yourself; more importantly, you likely want more than the simple structure - you want a structure that optimises for traversal or data management.
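To show how little there is to the simple structure, here is a minimal sketch in JavaScript (the `TreeNode` name is just an illustration, not a standard API):

```javascript
// A binary tree node is just a value plus at most two child pointers.
class TreeNode {
  constructor(value) {
    this.value = value;
    this.left = null;  // first child (a node has at most two)
    this.right = null; // second child
  }
}

// Build a tiny tree:    1
//                      / \
//                     2   3
const root = new TreeNode(1);
root.left = new TreeNode(2);
root.right = new TreeNode(3);
```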

References

This post is one in a series about stuff formally trained programmers know – the rest of the series can be found here.

Trees

This post will look at the mighty tree, which is more a pattern than a specific data structure. The reason to understand the pattern is that so many of the data structures we will look at in the future use it that a good understanding of it provides a strong basis to work from.

As a computer user though, you already have seen and used a tree structure - you may have just not known it. The most common form of it is the file system, where you have a root (i.e. / or C:\) and that has various folders under it. Each folder itself can have folders, until you end at an empty folder or a file.

This is the way a tree structure works too: you start with a root, then move to nodes and finally end with leaves.

In the basic concept of a tree there are no rules on the nodes and the values they contain, so a node may contain zero, one, two, three or a hundred other nodes.

What makes a tree really powerful is that it is really a collection of trees: if you take any node, it is itself a tree, and so the algorithms used to work with a tree work with each node too. This enables a powerful computer science concept, recursion.

Recursion

Recursion is a concept that lacks a real world equivalent and so can be difficult to grasp initially. At its simplest for these posts, it is a method or function which calls itself, until instructed to stop. For example, you might write a function called getFiles which takes in a path to a folder and returns an array of filenames. Inside getFiles it loops over all the files in the folder and adds them to a variable to return. Then it loops over all the folders in that folder and for each folder it finds, it calls getFiles again.

Implementations

It doesn't make sense to talk about coding implementations at this point, since this is more a pattern than a structure and we would need a lot more information on what we want to achieve to actually go through a code implementation. That said, it is interesting to see where trees are used:
- File systems
- Document Object Models (like HTML or XML)

This post is one in a series about stuff formally trained programmers know – the rest of the series can be found here.

Linked List

In the previous post on Array, we saw that all read operations are Θ(1), which is awesome. An important reality of programming is that everything is a trade-off: with an array you get fast reads, but adding items when you don't know the collection size up front is expensive.

Array Growth Issue Example

Let's say you create an array of ints, named X, and set the length to 5 (currently using 20 bytes). Now we want to add a 6th item, so the solution is to create a second array, named Y, with a larger length. If we just want room for one more item, Y now takes up 24 bytes of memory. Then we need to do a bunch of copy operations as we copy items from X to Y, which is really slow. By the end of the process, just adding one item was really expensive.
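The same grow-and-copy cost can be sketched with JavaScript typed arrays, which are fixed length like the ints in the example:

```javascript
// X: a fixed-length array of five 32-bit ints, 5 * 4 = 20 bytes.
const x = new Int32Array([1, 2, 3, 4, 5]);

// To add a 6th item we must allocate a bigger block, Y (6 * 4 = 24 bytes)...
const y = new Int32Array(x.length + 1);

// ...copy every existing item across (the slow O(n) part)...
y.set(x);

// ...and only then store the new item.
y[5] = 6;
```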

Linked List to the rescue

The solution is to change the way we store the data structure in memory. With a linked list, each value is wrapped with metadata and stored separately in memory (compared to an array, which stores all values in a single continuous block of memory). The reason each item is wrapped is that it then gets a pointer to the next item in the collection, so that you can still navigate through the collection.

Pros and Cons

The big advantage of a linked list is that since the values can go anywhere in memory, the collection can be expanded indefinitely (until you run out of memory) for very little cost: either Θ(n) or Θ(1). The difference is whether the implementation keeps a pointer to the final item or not; if it does not, it needs to navigate through each item to reach the end, Θ(n), and if it knows the location of the last item, it just goes directly to it and sets its pointer to the new item, Θ(1).

Removing and reordering items is also much faster than an array since you just need to find the items before/after and change where their pointers point to.

What is the downside then? Navigation through the collection is slower than an array. For example, if we create an integer array and want to access the fifth item, it can be done with simple math: (start of array in memory) + (int size in memory * offset). That gives us the location of the integer value we want to read, basically a Θ(1) operation.

With a linked list though, I need to ask the first item where the second is, then ask the second where the third is, then the third where the fourth is, then the fourth where the fifth is. So reading is a Θ(n) operation.

Linked lists also use more memory, since you aren't just storing values; you are storing the values plus one or two pointers with each value. This is marginal when storing types without a constant size, like a class, since an array then needs to store pointers to the values anyway, but it is worth remembering.

Structures

The interesting thing about a linked list compared to an array is that it is very flexible in its implementation. The simplest version is to just have a pointer to the first item, with each item in the collection pointing to the next item. This is known as a singly linked list, as each item is linked to one other.

The linked list may also store a pointer to the last item to make adding faster.
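That simple version, with the extra tail pointer for fast appends, can be sketched in JavaScript (hypothetical names; not a built-in API):

```javascript
// Minimal singly linked list: a head pointer plus a tail pointer,
// so appending is O(1) instead of walking the whole chain.
class LinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
  }
  append(value) {
    const node = { value, next: null }; // each value is wrapped with a pointer
    if (this.tail === null) {
      this.head = node;      // first item: head and tail are the same node
    } else {
      this.tail.next = node; // link the old last item to the new one
    }
    this.tail = node;
  }
  toArray() {
    const out = [];
    for (let n = this.head; n !== null; n = n.next) out.push(n.value);
    return out;
  }
}
```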

Doubly Linked

Most common implementations though use a doubly linked list, where each item in the collection not only points to the next item, but also points to the previous item. At the trade-off of memory (for the extra pointer) and potentially more expensive operations (an insert now impacts two items, not just one), you gain the ability to navigate forwards and in reverse.

Implementations

Java has a doubly linked list implementation with LinkedList, and .NET also has a doubly linked list implementation with LinkedList. JavaScript has no native implementation of it, however there are plenty of articles on how to implement one.

References

This post is one in a series about stuff formally trained programmers know – the rest of the series can be found here.

Array

This is the first in the data structure reviews and likely the simplest: the humble array. The first issue is the term Array itself - the term differs depending on who uses it, but we will get to that a bit later.

Generally I think of an array like this:

An array is a container object that holds a fixed number of values of a single type. The length of an array is established when the array is created. After creation, its length is fixed. – Oracle Java Documentation

Seems simple enough. There are two limits placed on our container, single type & fixed length, and both relate to how the array is handled in memory. When an array is created, the type & length are used to calculate how much memory is needed to store everything. For example, if we had an array of 8 items, we would get a block of memory allocated for the array like this:

In some systems arrays can just grow by allocating more memory at the end; these are called dynamic arrays. However, many systems do not allow this because of the way memory is handled: there might not be any free space after the last item to grow into, thus the array length is fixed, as no extra memory is allocated for that array instance.

This has a major advantage for read performance, since I can quickly calculate where an item will be in memory, thus skipping having to read/navigate all the other items. For example:

If my array's values start at position 100 in memory and I want the 4th item in an int[], it would be 3 (the zero-based offset) multiplied by 4 (the int size) plus 100 (the start address), giving position 112, & boom value!
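The same offset math can be sketched in JavaScript with a typed array and a DataView (an illustration only; the endianness flag assumes a little-endian platform, as on common hardware):

```javascript
// One contiguous block big enough for 8 int32 values (8 * 4 = 32 bytes).
const buffer = new ArrayBuffer(8 * 4);
new Int32Array(buffer).set([10, 20, 30, 40, 50, 60, 70, 80]);

// Address math for the 4th item (zero-based index 3):
// start + index * size = 0 + 3 * 4 = byte offset 12.
const view = new DataView(buffer);
const fourth = view.getInt32(3 * 4, true); // true = little-endian
```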

Object[]

What happens when we can’t know the size of the items in the array, for example if we created an object[] which can hold anything?

In this scenario, when the array is created, rather than allocating memory based on length multiplied by type size, it allocates length multiplied by the size of a pointer and rather than storing the values themselves in the array memory, it stores pointers to other locations in memory where the value is.

Obviously this has slightly worse performance than an array that holds the values directly – but it is slight. Below is some output from BenchmarkDotNet comparing sequential reads of an int[] vs. object[] (code here), and it is close:

Associative Arrays/Dictionary

As mentioned above, not every array is an array – some languages (PHP & JavaScript for example) do not allocate a block of memory as described above. These languages use what is called an associative array, also known as a map (PHP likes to refer to it this way) or a dictionary.

Basically these all have a key and a value associated to them and you can lookup the value by using the key. Implementation details differ though from platform to platform.

For example, in C#, Dictionary&lt;TKey,TValue&gt; is handled with an array under the covers; in JavaScript, however, it is a normal object. When an item is added to an array in JavaScript, it merely adds a new property to the object, and that property's name is the index in the array.
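You can see this property-based behaviour directly, because the indices do not need to form a contiguous block:

```javascript
// A standard JavaScript array is backed by an object: assigning to an
// index just adds a property whose name is that index.
const arr = [];
arr[0] = "a";
arr[5] = "b"; // indices can be sparse; no contiguous block is required

// Only the assigned indices exist as properties:
const keys = Object.keys(arr); // ["0", "5"]
```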

Associative arrays do take up more memory than a traditional array (good example here of PHP where it was 18 times larger).

Multi-dimensional arrays

Multi-dimensional arrays also differ from platform to platform. The Java version is an array of arrays, which achieves the same goal and is basically implemented the same way as the object[] described above. In C# these are known as jagged arrays.

C# and other languages also have proper multi-dimensional arrays, which work differently – they take all the dimensions, multiply them together, and use that as the length of a single array. The dimensions just give different offsets.

Example:
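As a sketch of that offset math (hypothetical `set`/`get` helpers in JavaScript):

```javascript
// A "proper" multi-dimensional array as one flat block:
// flat index = row * cols + col, so the dimensions just give offsets.
const rows = 3, cols = 4;
const grid = new Int32Array(rows * cols); // 12 slots in one block

function set(row, col, value) { grid[row * cols + col] = value; }
function get(row, col) { return grid[row * cols + col]; }

set(2, 1, 99); // lands at flat index 2 * 4 + 1 = 9
```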

Jagged arrays do have one benefit over a multi-dimensional array: since each internal array is independent, they can be different sizes, whereas in a multi-dimensional array all the dimensions must be the same size.

C# – List<T>

If you are working in C#, you might be asking yourself what List&lt;T&gt; is and how it relates to Array, since it can grow forever! List&lt;T&gt; is just an array with an initial size of 4. When you call .Add to add a 5th item, it does the following:
- Creates a second array where the length is double the current array's length
- Copies all items from the first array to the second array
- Uses the second array from then on

This is SUPER expensive, which is also why there is an optional constructor where you can set the initial size, and that helps a lot. Once again using BenchmarkDotNet, you can see that it makes a nice difference (code):
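The grow-by-doubling behaviour described above can be sketched in JavaScript (a hypothetical `DynamicArray`, not List&lt;T&gt; itself):

```javascript
// A sketch of List<T>-style growth: start with capacity 4, and when full,
// allocate a new array at double the capacity and copy everything over.
class DynamicArray {
  constructor(capacity = 4) {
    this.items = new Array(capacity);
    this.count = 0;
  }
  add(value) {
    if (this.count === this.items.length) {
      const bigger = new Array(this.items.length * 2); // double the capacity
      for (let i = 0; i < this.count; i++) bigger[i] = this.items[i]; // copy: O(n)
      this.items = bigger;
    }
    this.items[this.count++] = value;
  }
}
```

Passing a larger starting capacity to the constructor is what avoids the repeated copies, which is exactly what the optional List&lt;T&gt; constructor does.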

JavaScript Arrays

As mentioned above, the standard JavaScript array is an associative array. However, JavaScript (from ES5) does have support for typed arrays. The supported methods differ, so this isn't a drop-in replacement, and it only supports a limited number of numeric types. It might make a lot of sense to use these for performance reasons, since they are implemented as actual arrays by the JavaScript runtimes that support them.

GitHub has introduced a flat rate structure for unlimited private repos and I wanted to understand how it compares to the Visual Studio Team Services (VSTS – previously Visual Studio Online (VSO)) pricing where you get that already. I drew up a quick picture and tweeted it:

I have had mostly positive feedback on it, however there has been some confusion.

Date

Yes, it says 2017. I’m too lazy to change that to 2016, really. If it bugs you, just look away. Or pretend I’m a time traveler.

VSTS is cheaper yet more confusing

The title is my summary of the pricing difference, and people have interpreted it to mean many things, including that VSTS is a more confusing platform, ignoring the fact this is about price. I only meant the pricing is confusing. For example, here is the math for GitHub vs. VSTS at 10 users:

- GitHub: $70, math: ((10 - 5) * 9) + 25
- VSTS: $30, math: (10 - 5) * 6

At this point it seems simple. GitHub is $25 for the first five users, so we subtract 5 from the total number of users, multiply the remainder by 9, and add the $25 for the first five users. VSTS is even easier: your first five are free, so we subtract those from the total and multiply the remaining users by 6, the price for that tier.

The problem is that VSTS uses tiered pricing, whereas GitHub uses fixed pricing. At 1500 users the math for GitHub stays the same, but VSTS is way more complex.

- GitHub: $13480, math: ((1500 - 5) * 9) + 25
- VSTS: $5350, math: (5 * 6) + (90 * 8) + (900 * 4) + (500 * 2)

You’ll note the VSTS math is quite different. The 5 free users come off the top first, leaving 1495 paid users. The first five of those are charged at $6 a month, the next 90 at $8 a month, the next 900 at $4 and the remaining 500 users at $2. Once added up, you get the total.
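The tier math above can be expressed as a small function (hypothetical helper names; the tiers and prices are the ones described in this post, as of when it was written):

```javascript
// Tiered VSTS pricing as described above: first 5 users free,
// next 5 at $6, next 90 at $8, next 900 at $4, the rest at $2.
function vstsMonthlyPrice(users) {
  const tiers = [
    { count: 5, price: 0 },
    { count: 5, price: 6 },
    { count: 90, price: 8 },
    { count: 900, price: 4 },
    { count: Infinity, price: 2 },
  ];
  let remaining = users, total = 0;
  for (const tier of tiers) {
    const inTier = Math.min(remaining, tier.count); // users that fall in this tier
    total += inTier * tier.price;
    remaining -= inTier;
    if (remaining === 0) break;
  }
  return total;
}

// Fixed GitHub pricing: $25 for the first five users, $9 per additional user.
function gitHubMonthlyPrice(users) {
  return users <= 5 ? 25 : 25 + (users - 5) * 9;
}
```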

And it gets more complex, because if you have an EA (Enterprise Agreement – something your company signs with Microsoft to pay differently & pay less for licensing), then none of that applies – it is a flat $4 per user.

GitHub is also easier in user types: there is one. In VSTS there are three (note, these are my names for the user types, not official):
- Dev: the paid users we have been talking about.
- MSDN: the same, except they have a TFS on-premise CAL (i.e. a user license for local TFS) or an MSDN subscription which includes VSTS.
- Stakeholder: free, but really only for work item management. This is what you give your customer who needs to prioritize the backlog but doesn't need code or build access.

How would these types impact the cost? Let's look at an example.

Example

Let us pretend we have a dev team of 40 people, split into 5 feature teams of 1x PM, 1x tester and 6x devs. In each feature team, 2 of the devs are outside consultants, and the testers & PMs do not have MSDN because the company only has MSDN for devs. Your gut might say you need 40 licenses, so $270 according to the calculator. The reality is you won't pay for the 5 PMs, as they use stakeholder licenses. You get 5 free licenses, which you assign to your testers. Your 20 devs have MSDN so they don't need anything extra. That means just the 10 consultants need licenses, so the price is $70, not $270, i.e. (5*6)+(5*8).

For GitHub, that would be $340 per month, i.e. ((40-5)*9)+25.

Platform Confusion

To answer the trolls about whether VSTS is a more confusing platform: if you are coming from GitHub, yes, I think it might be, as VSTS offers more, so there is more to learn and it will be a bit different from what you know. The core, Git repos, remains the same. If you can learn Git, you can learn VSTS, so in the medium term it is not more confusing at all.

The series post, which contains more stuff formally trained programmers know, can be found here.

Big O Notation

This one always had me confused and seemed like something out of my reach. It really is simple once I actually sat down and worked through it. Let's start with the syntax:

O(n)

The "O" is just an indicator that we are using big O notation, and the n is the cost. Cost could mean a variety of things (memory, CPU cycles), but mostly people think of it as the number of times the code will execute. The best cost would be code that never runs (i.e. `O(0)`), but that likely has no value.
To help explain it, let's look at a simple example:

Console.WriteLine("Hello 1");

The cost for that is 1, so we could write `O(1)`. If we put that in a for loop like this:
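A minimal sketch of such a loop (in JavaScript rather than the C# above):

```javascript
// Printing once is O(1); putting it in a loop multiplies the cost.
let runs = 0;
for (let i = 0; i < 10; i++) {
  console.log("Hello " + (i + 1));
  runs++; // the body executes 10 times, so the cost is O(10)
}
```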

n

Rather than having to be explicit with a number (like 10 above), we can use shorthand notation. The common one is `n`, which means it will run once per item. For our for loop example above, that means it could be written as `O(n)`, so that regardless of whether we are looping 10 times or 100 times, the relative cost is the same and can be referenced the same way. From this point on it really is just about adding math to it.

If we were to have a loop inside a loop, which will run 100 times (10 x 10), we could write this as `O(n²)`.

The other common one used with big O notation is `log`, i.e. logarithm, which could be written like this: `O(log n)`. In this case the cost per item gets smaller (relative to earlier items) as we add more items.
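A classic `O(log n)` example is binary search, where each comparison halves the remaining items (a JavaScript sketch):

```javascript
// Binary search over a sorted array: each step halves the search space,
// so the cost grows logarithmically with the number of items.
function binarySearch(sorted, target) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // discard the lower half
    else hi = mid - 1;                      // discard the upper half
  }
  return -1; // not found
}
```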

Further reading

This is going to be a series of posts where I intend to dive into the stuff which "formally trained" programmers seem to know.

What do I mean by "formally trained"?

The easy way to think of it is programmers who have a university education, or similar, where the focus on theory matters a lot. It also feels to me that the old & wise programmers all just know this, while the upcoming generation doesn't seem to have this knowledge. I don't put myself in the formally trained group, and even after 20 years, I don't know these things well enough to hold a conversation about them.

What topics will I be covering? (these will be linked as the posts go up)

Languages

The biggest pain for me in 20 years of programming is that not everyone speaks the same language. I am not referring to C# or JavaScript, but rather the terminology we use. Is an Array always an Array? How do we talk about measuring performance?

Algorithms

Algorithms are ways of working with data and data structures in a consistent way. The advantage of knowing them is two-fold: first, it helps communication since we can all use the same names, and second, it expands our thinking about programming.