
This is the 7th post in a multipart series.
If you want to read more, see our series index

Settings Sync is the first extension I always install, as it allows me to restore my settings AND extensions. It uses GitHub gists to store the config, so there is a slightly annoying initial setup, but once that is done, each time you change a setting or extension it updates the gist. Then on a new install, it pulls down the settings and installs all the extensions you had, so you can get everything set up really easily and quickly.

This is the 6th post in a multipart series.
If you want to read more, see our series index

If you work in a team where choice is important, you may find everyone has a different editor. Today our team uses VSCode, Atom & IntelliJ. EditorConfig is a set of extensions for many editors which tries to unify things like tabs vs. spaces, trailing spaces, empty lines at the end of files etc… Think of this as your editor linting as you go. Unfortunately, support is limited for what can be done, but a lot of editors and IDEs are supported.
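As a sketch of what this looks like in practice (the specific rules below are my own choices, not from the post), a minimal `.editorconfig` file at the root of a repository might be:

```ini
# Top-most EditorConfig file; editors stop searching parent folders here
root = true

# Rules applied to every file
[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true

# Makefiles require tabs, so override for them
[Makefile]
indent_style = tab
```

Supporting editors pick this file up automatically, so the whole team gets the same whitespace behaviour regardless of which editor they chose.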

This is the 3rd post in a multipart series.
If you want to read more, see our series index

Initially, this extension allows your brackets, {} [] (), to be given a unique colour per pair. This makes it really easy to spot when you have goofed up and removed a closing bracket. Behind the obvious are a lot of really awesome extras. You can have a pair highlight when you click on one of them with bracketPairColorizer.highlightActiveScope, and you can also add an icon to the gutter next to the other bracket of the pair with bracketPairColorizer.showBracketsInGutter, which makes it trivial to work out the size of the scope.

It also adds a command, bracket-pair-colorizer.expandBracketSelection, which is unbound by default but allows you to select everything within the current bracket scope. Do it again, and it will include the next scope out. For example, you can select the entire function, then the entire class.
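For reference, enabling both of the settings mentioned above is a small addition to your settings.json (the setting names come from the post):

```json
{
    "bracketPairColorizer.highlightActiveScope": true,
    "bracketPairColorizer.showBracketsInGutter": true
}
```

Since bracket-pair-colorizer.expandBracketSelection ships unbound, you would add your own entry for it in keybindings.json with whatever key combination suits you.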

This is the 2nd post in a multipart series.
If you want to read more, see our series index

The Bookmarks extension adds another feature from fat VS to Code: the ability to bookmark a place in a document/file/code and quickly navigate backwards and forwards to it. One important setting that I think you should change is bookmarks.navigateThroughAllFiles - set that to true and you can jump to any bookmark in your project; with false (the default) you can only navigate to bookmarks in the current file.
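The change suggested above is a one-line addition to settings.json (the setting name is from the post):

```json
{
    "bookmarks.navigateThroughAllFiles": true
}
```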

This is the 18th post in a multipart series.
If you want to read more, see our series index

Following on from the previous post, where we looked at operators and how to use them yourself by implementing the relevant operator methods, the first part I want to cover in this second post is the unary operators +, -, and !.

When I was learning this, the term unary jumped out as one I did not immediately recognise, but after a quick Wikipedia read it became clear. For example, if you use the negative unary operator with a positive number, it becomes a negative number... It is primary school maths with a fancy name.

One thing to remember about operators is that it is totally up to you what they mean. So, for example, let's start with a simple pet class that lets us define what type of pet we have:

```kotlin
package blogcode

enum class animal {
    dog,
    cat
}

data class pet(val type: animal)

fun main(args: Array<String>) {
    val myPet = pet(animal.dog)
    println(myPet)
}
```

This produces pet(type=dog).

Now, maybe in my domain, the reverse of a dog is a cat, so I can do this to make this reflect my domain:

```kotlin
package blogcode

enum class animal {
    dog,
    cat
}

data class pet(val type: animal) {

    operator fun not(): pet = when (this.type) {
        animal.cat -> pet(animal.dog)
        animal.dog -> pet(animal.cat)
    }
}

fun main(args: Array<String>) {
    val myPet = pet(animal.dog)
    println(!myPet)
}
```

This produces pet(type=cat).

And this is the core thing: while a unary operator normally has a specific purpose, you can totally use it in whatever way makes sense for your domain. This is really awesome and powerful, but it doesn't stop there.

Normally when we think of something like the unary not with a boolean, it goes from true to false (or vice versa), but it remains a boolean. There is nothing stating it has to be that way.
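The example this paragraph pointed at seems to have been lost from this version of the post; as a sketch of the idea (this class and its strings are my own invention), a not operator does not even have to return the same type as its receiver:

```kotlin
enum class animal { dog, cat }

data class pet(val type: animal) {
    // `!pet` returns a String rather than another pet - the return type is up to us
    operator fun not(): String = when (this.type) {
        animal.dog -> "definitely not a cat"
        animal.cat -> "definitely not a dog"
    }
}

fun main(args: Array<String>) {
    println(!pet(animal.dog))  // prints: definitely not a cat
}
```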

This is the 17th post in a multipart series.
If you want to read more, see our series index

This next post is an introduction to operators in Kotlin; not the basic "use a plus sign to add numbers together" stuff (you read this blog, you've got to be smart enough to figure that out). No, this post is about just how amazingly extensive the language is when it comes to allowing your classes to use them.

Adding things together

So let's start with the simple "adding things together with plus" that I said we wouldn't do. In the following example code, we have a way to keep track of scoring events in a rugby game, and I would like to add up those events to get the total score:
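The original code block appears to have been lost from this version of the post; based on the score class shown further down, a reconstruction might look like this (the events list is my own guess, and the failing line is commented out so the rest compiles):

```kotlin
enum class scoreType { `try`, conversion, kick }

data class score(val type: scoreType) {
    fun points() = when (this.type) {
        scoreType.`try` -> 5
        scoreType.conversion -> 2
        scoreType.kick -> 3
    }
}

fun main(args: Array<String>) {
    val events = listOf(score(scoreType.`try`), score(scoreType.conversion))
    var totalPoints = 0
    for (event in events) {
        // totalPoints = event + totalPoints  // does not compile: no plus on score
    }
    println(totalPoints)
}
```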

This obviously won't compile, you can't += an Int and my own class? Right?!

We could make this change, which is probably the better way, but for my silly example, let us say this isn't ideal.

totalPoints = event.points() + totalPoints

So to get our code to compile we just need a function named plus which has the operator keyword, and while I could call this myself, the Kotlin compiler is smart enough to just make it work:

```kotlin
data class score(val type: scoreType) {

    fun points() = when (this.type) {
        scoreType.`try` -> 5
        scoreType.conversion -> 2
        scoreType.kick -> 3
    }

    operator fun plus(other: Int) = this.points() + other
}
```

How cool is that?!

If I wanted to take it further and support, say, totalPoints += event, then I would need to add a function to Int which tells it how to add a score to it. Thankfully that is easy with extension functions:

```kotlin
operator fun Int.plus(other: score) = this + other.points()
```
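Putting the pieces together (a self-contained sketch; the score class is repeated from above so this compiles on its own), totalPoints += event now works:

```kotlin
enum class scoreType { `try`, conversion, kick }

data class score(val type: scoreType) {
    fun points() = when (this.type) {
        scoreType.`try` -> 5
        scoreType.conversion -> 2
        scoreType.kick -> 3
    }
}

// The extension function: teaches Int how to add a score to itself
operator fun Int.plus(other: score) = this + other.points()

fun main(args: Array<String>) {
    var totalPoints = 0
    totalPoints += score(scoreType.`try`)  // 0 + 5
    totalPoints += score(scoreType.kick)   // 5 + 3
    println(totalPoints)  // prints 8
}
```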

Extensive

While the above is a bit silly, imagine building classes for distances, times, weights etc... being able to have a kilogram class and a pound class and add them together! What makes Kotlin shine is how extensive the support is; just look at this list!

| Class | Operators | Method | Example Expression |
| --- | --- | --- | --- |
| Arithmetic | `+`, `+=` | `plus` | `first + second` |
| Augmented Assignments | `+=` | `plusAssign` | `first += second` |
| Unary | `+` | `unaryPlus` | `+first` |
| Increment & Decrement | `++` | `inc` | `first++` |
| Arithmetic | `-`, `-=` | `minus` | `first - second` |
| Augmented Assignments | `-=` | `minusAssign` | `first -= second` |
| Unary | `-` | `unaryMinus` | `-first` |
| Increment & Decrement | `--` | `dec` | `first--` |
| Arithmetic | `*`, `*=` | `times` | `first * second` |
| Augmented Assignments | `*=` | `timesAssign` | `first *= second` |
| Arithmetic | `/`, `/=` | `div` | `first / second` |
| Augmented Assignments | `/=` | `divAssign` | `first /= second` |
| Arithmetic | `%`, `%=` | `rem` | `first % second` |
| Augmented Assignments | `%=` | `remAssign` | `first %= second` |
| Equality | `==`, `!=` | `equals` | `first == second` |
| Comparison | `>`, `<`, `<=`, `>=` | `compareTo` | `first > second` |
| Unary | `!` | `not` | `!first` |
| Arithmetic | `..` | `rangeTo` | `first..second` |
| In | `in`, `!in` | `contains` | `first in second` |
| Index Access | `[`, `]` | `get` (returns a value) | `first[index]` |
| Index Access | `[`, `]` | `set` (sets a value) | `first[index] = second` |
| Invoke | `()` | `invoke` | `first()` |

I am going to go through some of these in more detail in future blog posts, but one I wanted to call out now:

Augmented Assignments vs. Plus or Minus

You might wonder why, when we just implemented plus above, we got support for both + and +=, and yet the table lists += under both plus and plusAssign. Why both? Because you may want to support += without supporting +.
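As a sketch of that (the scoreSheet class is my own example, not from the post): a class that only implements plusAssign supports +=, but an expression like first + second will not compile.

```kotlin
class scoreSheet {
    private val points = mutableListOf<Int>()

    // Supports `sheet += 5`; with no `plus` defined, `sheet + 5` won't compile
    operator fun plusAssign(value: Int) {
        points.add(value)
    }

    fun total() = points.sum()
}

fun main(args: Array<String>) {
    val sheet = scoreSheet()  // works on a val because nothing is reassigned
    sheet += 5
    sheet += 3
    println(sheet.total())  // prints 8
}
```

This mutating style is exactly why you might want += without +: the sheet is updated in place rather than a new object being created.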

This post is one in a series of stuff formally trained programmers know – the rest of the series can be found in the series index.

The hash table, like the tree, needs a key. That key is converted into a number using a hash function, and the result is the position in the array that backs the data structure. So when you need to look up an item, you pass in the key, get the same hash value, and that points to the same place in memory, giving you O(1) lookup performance.

This is better than a pure array, where you cannot do a keyed lookup so you have to search through it, giving O(n) performance, and better than a tree, which also needs a search, ending up at O(log n).

A hash table would be implemented with an array, which means that inserting is O(1) - unless the array is full, in which case it is O(n) - so this is a pretty good level of performance. An important note when comparing: adding to the end of an array is O(1) and adding to a hash table is O(1), but the hash table has additional computation, so while they have the same complexity, the hash table is still slower. This is why it is useful to understand that big O notation has limits on how much it can help us choose what to use.

A hash table, in concept, is easy, but in practice, it is another thing altogether. First, you need a hash algorithm that ends up giving a value inside the bounds of the array - this is often done using modulus. For example, if we get a hash value of 14213 but our array is only 8 in size, we can do 14213 % 8 to get its position, 5.

This converting of numbers brings a new problem: collisions. We are taking a large range of possible numbers and converting them to a smaller range, so what happens when we get, say, 85? It also gives 5 when taken modulo 8! Two items can't live in the same location. There is no single solution to collisions; there are many.
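To see the collision from the example above in code (array size 8, values 14213 and 85):

```kotlin
fun main(args: Array<String>) {
    val arraySize = 8
    println(14213 % arraySize)  // prints 5
    println(85 % arraySize)     // prints 5 - a collision with 14213
}
```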

One option uses linked lists in the slots, so if there is a collision you can store both items at the same index and then only have to search the, hopefully, very limited set in that slot's list. Others use offsets to shift collided items around within the existing array. A complete analysis of the options is beyond the scope of this series, though.
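Here is a minimal sketch of the linked-list (chaining) option, with names of my own choosing - each slot in the backing array holds a list of key/value pairs that hashed to the same index:

```kotlin
class chainedHashTable(private val size: Int = 8) {
    // Each bucket is a list of key/value pairs that collided at this index
    private val buckets = Array(size) { mutableListOf<Pair<String, String>>() }

    // Hash the key, then use modulus to stay inside the bounds of the array
    private fun indexFor(key: String) = Math.floorMod(key.hashCode(), size)

    operator fun set(key: String, value: String) {
        val bucket = buckets[indexFor(key)]
        val existing = bucket.indexOfFirst { it.first == key }
        if (existing >= 0) bucket[existing] = key to value else bucket.add(key to value)
    }

    // Worst case we search one, hopefully short, bucket rather than the whole table
    operator fun get(key: String): String? =
        buckets[indexFor(key)].firstOrNull { it.first == key }?.second
}

fun main(args: Array<String>) {
    val table = chainedHashTable()
    table["dog"] = "woof"
    table["cat"] = "meow"
    println(table["dog"])  // prints woof
}
```

As a bonus, the operator get and set methods here are the same index-access operators covered in the Kotlin operators posts above, which is what lets us write table["dog"] instead of table.get("dog").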