Kotlin has taken the world of Java by storm. It is in fact the default backend language at DripStat.

Kotlin on the JVM has 2 primary selling points:

Better syntactic sugar than Java

Completely seamless interop with Java code

The first point is especially relevant to the Android ecosystem, which is stuck in Java 6 land. But even for server-side usage, Kotlin offers the syntactic sugar that Java folks have wanted for decades.

But it would all be for nothing without the second point. Kotlin is 100% seamless with Java code. You can introduce a Kotlin file into a Java codebase without disrupting anything.
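As a minimal sketch of both points (all names here are hypothetical, not from any real codebase), consider a single Kotlin file added to a Java project. It uses Java's standard library directly, and Java code can call it back without any bindings:

```kotlin
// UserReport.kt - one Kotlin file dropped into an existing Java codebase.
import java.time.LocalDate

// A data class: equals(), hashCode(), toString() and copy() are generated,
// sugar that would take dozens of boilerplate lines in Java.
data class User(val name: String, val signupDate: LocalDate)

// Uses java.time types directly; Java code can call this function back
// as UserReportKt.activeSince(users, cutoff).
fun activeSince(users: List<User>, cutoff: LocalDate): List<User> =
    users.filter { it.signupDate.isAfter(cutoff) }
```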

Kotlin for JavaScript

Kotlin for JavaScript, however, has slightly different selling points:

Saner, more typesafe language than JavaScript

Interop with existing JavaScript code and libraries

Use the same language on both backend and frontend

However, all of the above is complicated due to the presence (and popularity) of TypeScript.

Saner, more typesafe language than JavaScript

TypeScript is already typesafe. While it's still JavaScript at its core, it lets you avoid most of JavaScript's blunders. Unlike Java, TypeScript already has much of the syntactic sugar that Kotlin offers, and existing libraries in the JavaScript ecosystem make use of it.

For a TypeScript user, the syntactic sugar side of Kotlin doesn't offer enough to justify the switch.

Interop with existing JavaScript code and libraries

Interop is vastly more complicated for Kotlin in the JavaScript world. On the JVM, Kotlin can use the type information in the bytecode of Java .class files to offer seamless interop. In JavaScript, no type information exists, making this much harder.

TypeScript's solution was to get library developers to write additional type definition files. Even after many years, the type definitions continue to break/mismatch across library and TypeScript versions. It is unlikely those authors will now write yet another set of type definitions for Kotlin.

Kotlin for JavaScript thus tries to read type information from the existing TypeScript definitions instead. This not only makes it depend on TypeScript, but also exposes the end user to all the fragilities of TypeScript's type definition files.
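For context, Kotlin/JS ultimately consumes a JavaScript library through external declarations, which tooling like ts2kt derives from the library's TypeScript definition file. A minimal hand-written sketch, using the left-pad npm module as an assumed example:

```kotlin
// External declaration for the 'left-pad' npm module (Kotlin/JS).
// The compiler emits no code for this; it simply trusts the declared types.
// If the underlying TypeScript .d.ts is wrong, this declaration is wrong too.
@JsModule("left-pad")
@JsNonModule
external fun leftPad(str: String, len: Int, ch: String = definedExternally): String

fun main() {
    println(leftPad("17", 5, "0"))  // prints "00017"
}
```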

Suffice it to say, Kotlin's interop with JavaScript code will never be as seamless as it is in the Java world.

Use the same language on both backend and frontend

This is the prime selling point of Kotlin for JavaScript. In practice, however, it will not ring true anytime soon.

You cannot avoid knowing JavaScript on the frontend. You will be forced to look at JavaScript code, since you will be using JavaScript libraries and doing things like debugging in the Chrome dev tools. Not only that, since Kotlin requires TypeScript type definitions for those libraries, you will also need to learn TypeScript so you can debug issues when you run into problems with those type definitions.

Conclusion

Kotlin for JavaScript faces a very different set of challenges than Kotlin on the JVM. Its position is complicated because the goalposts have moved from JavaScript to TypeScript. In fact, TypeScript has already done in the JavaScript world what Kotlin did in the Java world. To win, Kotlin will have to prove its value-add over TypeScript, and it currently doesn't offer enough.

]]>

Intel processors offer the AVX-512 instruction set to allow high performance for vectorized workloads. You would be correct in being tempted to use it on your applications/databases deployed in the cloud.

However, there is a flip side to it.

Using the AVX instructions will cause the entire processor to get clocked down! This has huge implications.

Effect on Cloud VMs

The AVX slowdown doesn't care about VM boundaries. When you rent a VM on AWS, GCP, etc., you are getting access to just a few of the many cores of a physical processor.

Let's say a processor on AWS has 4 cores, and you request 2 for your VM. Another account, B, spins up a VM and gets assigned 2 of the remaining cores from that same processor. Now B starts running some AVX-heavy workload. Well, what do you know, your VMs get slowed down too!

AVX512 is architecturally transparent to VTx (or the other way around, depending on how you view these things).

This means your own Docker containers running AVX workloads can slow down your other containers, despite resource limits being set. Not only that, a different account's Kubernetes cluster, with pods scheduled on a different VM but on the same physical processor as your VM, can impact your containers!

This was pointed out by Kelly Sommers yesterday.

So here’s a real question. What does Amazon and Microsoft and other kubernetes cloud services do to prevent your containers from losing 11ghz of performance because someone deployed some AVX optimized algorithm on the same host?

This is a Catch-22 all around. Cloud vendors want to offer VMs with AVX-512 instructions enabled so their users can get better performance, and it is in the best interest of each individual user to use them. However, doing so may impact not only their own VMs/containers but even another account's VMs.

]]>

Azure Kubernetes Service (AKS) was recently marked as GA. We decided to move our production workload to it last month. Following is an account of what its really like to use it in production.

1. Random DNS failures

We started seeing random DNS failures right away, both for domains outside Azure (eg sqs.us-east-1.amazonaws.com) and even for hostnames inside the Azure Virtual Network. While names would eventually resolve after multiple retries, it was surprising that a feature as fundamental as DNS would be broken.

Azure Support told us that resolution of DNS names pointing outside of Azure was not their problem (surprising, since that is exactly what DNS is for). They would only work on DNS failures for hostnames inside Azure.

Azure resolved the issue by blaming CPU/Memory usage. We were told not to use too much CPU/Memory if we wanted the DNS to work reliably!

Apart from the ridiculousness of this resolution, they ignored our response when we told them the issue mostly surfaces during application startup, when CPU/memory usage is minimal.

2. Required daily reboot of Kubernetes API Server

After a few days we noticed that we could no longer launch the Kubernetes Dashboard. After a harrowing time dealing with multiple Azure support personnel, the issue was acknowledged as valid, and the only resolution was to reboot the Kubernetes API Server. Since the API Server is managed by Azure, this meant opening a support ticket, escalating it to engineering, and then asking them to reboot it.

This problem would resurface daily, so we had to open a ticket every single day and escalate it to get the API Server rebooted.

I had to document this procedure in an email to Azure Support so they could escalate the daily tickets without asking the same questions over and over again.

3. Container crash would bring down entire node

If a Docker container crashed, it would bring down the entire underlying VM. The only way to recover was to log in to the Azure portal and manually reboot the VM.

The resolution by Azure Support: "Yeah this is your problem. Just make sure your containers never crash".

4. Entire cluster went down

One day I woke up to find every single node in the Kubernetes cluster was down! Rebooting the nodes from the Azure portal did nothing.

Azure support tried to bring the cluster back up, but the nodes kept going down regardless. Eventually, after more than 8 hours, they finally brought the cluster back up, but we could no longer run any containers on it! From that point on, our containers wouldn't even start; the error messages pointed to some Golang code (our application is in Java). Yet Azure support blamed it as an 'issue on your end' and closed the ticket.

We now sat with a cluster we couldn't deploy containers on, but which Azure considered fine.

5. SLA violation ignored

Even though there is no SLA for AKS itself, the individual VM nodes do have to abide by Azure's 99.9% SLA. Since the VMs were down for many, many hours, we opened a ticket to claim the SLA, and Azure simply ignored it! It remains open and ignored weeks later.

Conclusion

Azure Kubernetes Service (AKS) is an alpha product marked as GA by Microsoft.
Azure Support has been the worst support experience of my life. Not only were our P1 tickets (with <1hr response time) answered after >24 hours, the resolutions of those tickets were laughable. Ignoring the SLA violation is downright fraudulent behavior.

We have finally moved to Google Cloud, which has the best Kubernetes implementation out there.

]]>

I am fairly certain the issue described below applies to other hosted CI services too. However, since I haven't had a chance to examine them, I will not mention them in this article.

The Basics

Both Bitbucket Pipeline and CircleCI allow you to deploy to your production or dev environment by editing a config file that you check in to your git repo along with your source code.

Here is a sample Bitbucket Pipeline config file that deploys the master branch to production and all other branches to dev environments.
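A sketch of such a file (the Docker image, the deploy.sh script, and the *_SECRET_ACCESS_KEY variable names are illustrative placeholders):

```yaml
image: atlassian/default-image:2

pipelines:
  default:            # every branch without a more specific mapping
    - step:
        script:
          - export AWS_ACCESS_KEY_ID=$DEV_AWS_ACCESS_KEY_ID
          - export AWS_SECRET_ACCESS_KEY=$DEV_AWS_SECRET_ACCESS_KEY
          - ./deploy.sh dev
  branches:
    master:           # only the master branch
      - step:
          script:
            - export AWS_ACCESS_KEY_ID=$PROD_AWS_ACCESS_KEY_ID
            - export AWS_SECRET_ACCESS_KEY=$PROD_AWS_SECRET_ACCESS_KEY
            - ./deploy.sh production
```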

Note that it is the environment variables, like PROD_AWS_ACCESS_KEY_ID and DEV_AWS_ACCESS_KEY_ID in each of the above 'steps', that determine whether the deployment goes to the production or dev environment. These environment variables can be defined by the repository admin such that their actual values are not visible to anyone.

The Problem

Since the deployment is controlled completely by the config file, nothing stops any dev from modifying the config file to deploy to production.
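Eg, a change along these lines (again a sketch, reusing the placeholder names from above):

```yaml
pipelines:
  default:            # now EVERY branch deploys with production credentials
    - step:
        script:
          - export AWS_ACCESS_KEY_ID=$PROD_AWS_ACCESS_KEY_ID
          - export AWS_SECRET_ACCESS_KEY=$PROD_AWS_SECRET_ACCESS_KEY
          - ./deploy.sh production
```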

Any developer on your team with write permission can check this in to your git repo, and now any branch pushed gets deployed straight to production!!!

Potential Fix

The problem stems from the fact that neither Bitbucket Pipeline's nor CircleCI's admin interface allows limiting the visibility of defined environment variables to a specific branch.

A simple filter that said the variable PROD_AWS_ACCESS_KEY_ID is only visible while executing the master branch would solve the issue described above.

But can't you trust your devs enough?

Both Bitbucket and GitHub have a 'branch restriction' feature that limits who can push to a specific branch, eg the master branch. This issue is at odds with that security feature, since now literally anyone with write permission can deploy directly to your production environment.

This can even happen by accident, with a developer editing the config file and copy/pasting portions that reference the production environment.

Relevant Bugs

As of writing this post, Atlassian seems to have acknowledged the bug, but it appears to be low priority for them, since it's scheduled for an end-of-2018 fix. CircleCI has not even responded to the forum post. Their official support is for paid accounts only, which I currently do not use.

Conclusion

This is a huge, massive security issue with both Bitbucket Pipeline and CircleCI. While Atlassian has at least acknowledged the issue, it is surprising that it's so low on their priority list.

]]>

The signs are undeniable at this point.

No 64 bit support for Visual Studio

The very first sign was when Microsoft refused to port Visual Studio (VS) to 64 bit. While VS is indeed a large codebase, MS had no qualms doing the same for Microsoft Office. That they no longer want to invest significant resources into VS points to it being very much in maintenance mode now.

VS Community == VS Pro

Visual Studio was always paid software. But in 2014 MS introduced the Community Edition. The only real difference between it and the Pro (paid) version is the 'Code Lens' feature. Another sign that MS no longer sees Visual Studio as driving any meaningful revenue.

Language Service

Every Microsoft language release now comes with a Language Service, which allows all the IDE-style functionality like refactoring and code completion to be tied to the version of the language. Any editor can use the Language Service API to get all the IDE features one would traditionally only get from Visual Studio.

Continued Investment in VS Code

Visual Studio Code continues to release enhancements every single month, moving at a fast pace. Compare that to Visual Studio Pro, whose development now seems to consist mostly of updating its integrations with the various Language Services.

Cross Platform and focus on Azure

With Microsoft's focus shifting from Windows to Azure, it is only natural that they no longer want an IDE that runs solely on Windows. Enter VS Code: a free, cross-platform IDE that supports all modern languages.

Conclusion

Visual Studio was the very first IDE I used while growing up and learning to program. It's nostalgic to see it go. But the future looks bright in VS Code!

]]>

Imagine if, instead of the iPhone X, Apple had released an updated iPhone 5c. That wouldn't be very exciting, would it? Nor would it bode well for the future of smartphones in general.

That is the direction Oculus seems to be heading. Instead of releasing a more powerful Oculus Rift, they seem to be obsessed with lower-powered, mobile experiences like the Oculus Go.

Moving in that direction assumes:

1. Mobility is a major issue inside VR.

Not really. The current Oculus Rift, once set up, has only one cable running from the headset to the PC. The Touch controllers are already wireless. That single cable is inconsequential: you cannot really move physically inside VR anyway, as you will bump into your room's walls. Yet Oculus' efforts seem entirely focused on just getting rid of that single cable.

2. The specs of the Oculus Rift are already high enough to allow mainstream adoption, given a lower price and no cables.

Not at all. There are so many other areas to improve that would have much more impact. For example:

Headset design that makes it easy to wear glasses (PSVR nails this)

Higher resolution optics

Better refresh rate

Better field of view

Better tracking, not just 360 tracking but better accuracy overall

The current Oculus Rift was designed with the NVidia 980 card in mind. We are at the tail end of the NVidia 1080 generation now, yet there is no Oculus device that fully utilizes it.

Conclusion

The focus on mobile and the low end is killing VR. The current high-end Oculus is not good enough, neither for users viewing it nor for creators making great content. Focusing on the low end only shows people the worst of VR and turns them away from it.

It is also killing all the excitement around VR. A couple of years ago, I would be excited for the Oculus Connect conference, to see what major leap Oculus had achieved for VR. Now it all seems to be about getting mobile VR close to the Rift, which itself feels old.

]]>

4K TVs are everywhere in stores. But is it worth buying one at this time?

Hollywood movie trailers are still produced in 1080p, including for high-profile movies like Star Wars: The Last Jedi.

Music videos are still produced in 1080p, including high-profile videos like the latest from Taylor Swift.

]]>

Microsoft's Sculpt Ergonomic Mouse is one of those hidden gems that no one talks about.

The first thing you notice about this mouse is how tall it is. This is for a reason: the extended height allows your hand to rest fully on the mouse without any part of it touching your desk/mousepad. This is extremely comfortable. You can rest your arm's full weight on the mouse, and you no longer have any friction on your hand while moving it.

The mouse is also tilted slightly to the right, putting your hand at an almost vertical angle. It's tilted enough that you don't run the risk of carpal tunnel issues, but not fully vertical like some highly ergonomic mice that require a full training period to use.

Holding the mouse feels like gripping a ball that wraps perfectly around your palm. It is the most comfortable mouse I have ever used.

This is an ingeniously designed mouse from Microsoft Research. Due to its weird shape, it doesn't seem to have gotten the popularity it deserves. However, I hope more manufacturers take notice of this design and adopt it in higher-end gaming mice.

]]>

I came across an excellent review of Game of Thrones by book reviewer Matt Hillard. The review is a great critique and gets to the core of what Game of Thrones is really about, and what the issues with it are.

The predictions he made in 2010 are coming to fruition now in the HBO series:

When all is said and done, whoever is left standing in the ruins of Westeros will be swept aside by Daenerys and Jon Snow as they confront the evil out of the north, so isn’t this something of a waste of time?

Matt makes the case that GoT feels like a genius piece of work simply because George Martin does a great job of hiding what the real story is about. He just kills off characters who the reader mistakenly thinks are the main characters.

Initially, Eddard Stark and his son Robb seemed like central characters, yet with the benefit of hindsight even from a position only halfway through the series, it’s obvious they are bit players. In a typically sized fantasy novel, they’d have a page or two of screen time. In fact, the actual main characters of the story, like Daenerys, are just as bulletproof as any normal story’s protagonists.

What I was getting at, but not quite putting my finger on, was that although the political side of Game of Thrones seems to be about fighting the Lannister’s usurpation of the throne, the series is actually about restoring the Targaryen dynasty. In such a story, obviously it’s the Targaryens (Daenerys and Jon Snow) who are the protagonists.

He even delves into what is wrong with the series and why portions of it are boring, especially in the books:

However, any Targaryen restoration must wait until near the end of the series. In the meantime, the story creates tension principally through the separation of characters. Daenerys is separated from Westeros, of course, but also the Stark children are separated from their mother and each other. The Starks all want to reunite, and because we like them we want to see them do it, so we feel tension until it happens. Well, it still hasn’t happened, and that in turn contributes to the feeling that the series is wandering aimlessly. This brings us back to the series’ unpredictability. The reader is waiting for these things to happen, yet other things happen instead. When the series works, it’s because these other things also capture our interest. When they don’t, the cost on the reading experience can be high.

]]>

I tried to test 4k Netflix on my new rig. I went through the checklist Netflix provides to enable this:

Upgrade plan to 'UHD enabled'

Use the Edge browser or the Windows 10 Netflix app

Have an Intel Kaby Lake processor

Have an HDCP 2.2 compatible connection

Have a mega-fast internet connection

The requirements really boil down to HDCP 2.2 support, which is needed to play 4k DRM content in the browser. This is currently only supported in Edge on Windows 10.

So after doing all of the above, I was surprised I couldn't get over 1080p on my desktop. I used the Ctrl-Alt-Shift-S shortcut inside Netflix to verify this. Netflix support was absolutely useless in trying to solve it. After a ton of head scratching, I finally found the root cause.

It's my NVidia GTX 1080 card.

The NVidia 10-series cards do support HDCP 2.2 copy protection, but for some reason Netflix doesn't like that. In order to use 4k Netflix on the desktop, you need to use the integrated graphics of the Intel Kaby Lake CPU. Using the dedicated NVidia GPU disables the 4k support.

This is a ridiculous, completely non-technical limitation imposed by Netflix, one that only does them a disservice. The end result was me downgrading my plan back to 'HD'.

]]>

Epic Games' Unreal Engine 3 was the most widely used game engine during the X360/PS3 era. Almost every big-budget AAA title, from Mass Effect to Bioshock Infinite, used it. Things have taken a turn during the current XOne/PS4 era: almost every big studio now has its own engine, eg EA has Frostbite, Ubisoft has Anvil.

About the only big AAA titles released of late using Unreal Engine 4 (UE4) are Gears of War 4 and the upcoming Borderlands 3. Batman: Arkham Knight, while technically built on Unreal Engine, uses a highly customized build of UE3 (not UE4).

AAA game royalties used to be the bread and butter of Epic Games. With that revenue source dying, the current actions of Epic Games are easily explained:

Independent for 20 years, in 2012 Epic Games sold off almost half of its equity (48.4%) to Tencent Holdings to raise capital.

Unreal Engine was made open source with a flat 5% royalty to attract indie developers. This royalty was closer to 25% during the PS3 era.

Trying to expand use into other industries like Movies and VR.

Developing Paragon, the MOBA game, to get a piece of the revenues enjoyed by LoL and DOTA.

Neither indie games nor VR is big enough at this point to provide the kind of revenues AAA titles provided in the previous era. Paragon is still not complete and seems to have already lost to Overwatch.

Conclusion

Unreal Engine is the classic case of a business that relies on a handful of big customers. If the product is critical enough, and given enough time, those customers may decide to build their own copy of it. Then you are out of business. This is what happened to CryEngine last year. Epic Games seems to be grasping in all directions right now, but it has yet to find a moat to replace its AAA royalty business.

Unreal Engine is a marvel of technology. It is a shame that market forces have pushed it into its current situation.

]]>

I got to try the Pimax 8K VR at GDC 2017.

Current Oculus Rift and Vive systems have about 100 degrees FOV (field of view) and about 2k resolution (1k per eye). Pimax's 8K VR has 200 degrees FOV and 8k resolution (4k per eye).

Field Of View

The 200-degree FOV does make an enormous difference. You no longer feel like you are wearing ski goggles. It feels like natural vision, since your periphery is no longer blocked.

8K Resolution

The headset was running the 'Showdown' demo by Epic Games. It's an on-rails experience with soldiers firing bullets at a robot, slow-mo Matrix style. The demo was running on an NVidia GTX 1070 card.

8k resolution truly feels next-gen. There was zero screen door effect. Objects looked perfectly solid and you could see the details up close. There was no aliasing. Apart from the obvious graphical artwork of the world, there was nothing to indicate I was in a digital world. It felt like a real, solid world.

Ergonomics

The headset shown was a prototype, which you had to hold in front of your eyes with your hand. There was no indication of what the final headset's ergonomics would be like.

Conclusion

Current VR systems are designed around the compute power of the NVidia 980 card. Pimax demonstrates what the NVidia 1080 card can do for VR.

Going back to the Oculus Rift after trying the Pimax feels like wearing a last-gen headset, like using an iPhone from before the 'Retina' display. This demo proves that the tech exists today to support an 8k-resolution VR experience. I am sure both the Oculus and Vive teams know about this. The reason neither of them mass-produces such a headset is the cost of the components involved, and the need for consumers to yet again upgrade their graphics card to the latest NVidia offering.

However, using the Pimax gives you a sense of what all future VR systems will look like in a few years. It is a huge leap forward from the systems of today.

]]>

Microsoft Teams is not designed to be sold to the same people as Slack.

Slack is sold bottom-up. It's a solid product, designed to get users to try it, like it, and buy it when the trial expires.

Microsoft Teams is designed to be sold top-down, to Enterprises with Office 365 subscriptions, which is basically every Enterprise. If you are an Enterprise and you want a chat app, your 'default' option is now MS Teams, which comes with Office 365.

MS doesn't even want you to try it unless you have Office 365. The complete opposite of Slack.

MS Teams doesn't have to be a better product than Slack. Heck, if it's like most MS software, it will be buggy for years. It just has to be 'not bad enough' that you don't have to buy Slack.

]]>

Having tried all 3 VR systems for a good period of time, I can say that each one has at least one big flaw at the moment.

1. Oculus Rift

As of writing this, Oculus is the only system on the market that doesn't have hand tracking. Who would have thought the VR pioneer would end up having the most backward system.

Can't wear glasses with the headset, which is a huge issue.

The price is too high, since it requires buying a whole new PC.

Content is barebones, especially considering how long the product has been available and teased.

2. Vive

Perfect tracking. Terrible headset.

The headset puts all of its weight on the front of your face; you won't even want to look down due to all that weight. No integrated headphones means another peripheral and another set of cables to manage. Can't wear glasses with the headset.