If you have ever experienced Sublime Text overwriting your text as you type, and the only way you knew to disable it was to re-open your document, you know how frustrating it can be. I’ll show you how to toggle overwrite on/off and how to disable it completely if you never use it.

How to toggle overwrite on/off in Sublime Text

These are the default overwrite key bindings in Sublime Text:

Windows

simply press the [insert] key on your keyboard

Mac

cmd+alt+o

How to disable overwrite in Sublime Text

Overwrite can be handy in some circumstances, but if, like me, you never use it, it might just be easier to disable the key binding completely.

Sublime Text stores its key bindings in JSON format in a file which can’t be modified from within Sublime. Instead, we will copy the “toggle_overwrite” key binding and paste it into our user key bindings file, which overrides the binding specified in the default key bindings file.

Open up Sublime and go to the menu:

Sublime Text > Preferences > Key Bindings – Default

Search for “overwrite” and copy the line below:

{ "keys": ["super+alt+o"], "command": "toggle_overwrite" },

Go to the menu again:

Sublime Text > Preferences > Key Bindings – User

Paste in the “toggle_overwrite” line you just copied from the default key bindings file and change the command to “unbound”. This will unbind the overwrite key binding.
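The user key bindings file is a JSON array, so after pasting and editing, it should look something like this:

```
[
    { "keys": ["super+alt+o"], "command": "unbound" }
]
```

On Windows the “keys” value will be ["insert"] instead of ["super+alt+o"].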

Microsoft’s big shift

A lot is happening in Microsoft land, and the future looks very exciting for .NET developers with the introduction of .NET Core and ASP.NET Core, making it possible to run .NET on any platform.

Times are changing, also within Microsoft. They created the .NET Foundation and open sourced a lot of projects. For instance, .NET is now open source and developed on GitHub, making it possible for everyone to contribute.

Red Hat has partnered with Microsoft and joined the .NET Foundation alongside JetBrains and Unity. Microsoft also partnered with Canonical – the company behind Ubuntu – making it possible to run Ubuntu on Windows.

Not long ago Microsoft acquired Xamarin, open sourced its SDK under the MIT license, and made it part of the .NET Foundation. With Xamarin came Mono. Microsoft changed the license of Mono from LGPL to MIT and put it in the .NET Foundation as well.

SQL Server will be released for Linux and is scheduled for 2017.

ASP.NET 5 was scrapped and renamed to ASP.NET Core 1.0 because, as the name indicates, it’s a totally new framework. With .NET Core and ASP.NET Core, Microsoft takes a huge step towards becoming platform independent, as both run on Windows, Linux and Mac. ASP.NET 4.6 will continue to be developed and supported, and new features are yet to be added. ASP.NET Core is still fresh from the oven and lacks several of the features from ASP.NET 4.6, but we will most likely see it mature and be up to speed by version 1.3 or 1.4. Microsoft has been a bit vague about the future of ASP.NET 4.6 and onwards, but it looks like it’s slowly being phased out. My guess is that the development of ASP.NET 4.6 will stop once ASP.NET Core is fully matured.

The future of .NET

Let’s take a glance at the .NET family to get a better understanding of what the future holds and how the different frameworks will be structured. On the top layer in the diagram below we have the app models: .NET Framework, .NET Core and Xamarin, which all share the same base class library (BCL), called the .NET Standard Library. Instead of adding new features to each of the three stacks separately, they will be added to the .NET Standard Library and shared across.

On microsoft.com/net things are clarified further. Basically, the .NET Framework is meant for Windows development – that is, native, mobile and web applications running in the Windows ecosystem.

.NET Core, on the other hand, is meant for cross-platform applications, which can run on Windows, Linux or Mac.

.NET Overview | Build 2016

In the following video from Build 2016, Scott Hunter and Scott Hanselman give an overview of the future of .NET. At the end of the video, Todd Mancini from Red Hat demonstrates how to deploy an ASP.NET Core application to Red Hat Enterprise Linux, running in a Docker container.

Go check out redhatloves.net (how cool is that!?), sign up for the Red Hat developer program, download Red Hat Enterprise Linux for free, and play around with it yourself.

ASP.NET Core

ASP.NET Core 1 can run on both .NET Framework 4.6 and .NET Core 1.0, whereas ASP.NET 4.6 is meant to only run on .NET Framework 4.6.
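In a project.json-based project – the format ASP.NET Core 1.0 used at the time – the choice between the two runtimes comes down to the “frameworks” section. A sketch of what targeting both could look like (the exact package version is an assumption):

```
{
  "frameworks": {
    "net46": { },
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0"
        }
      }
    }
  }
}
```

Removing one of the two entries restricts the application to the remaining runtime.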

ASP.NET Core, even though not yet fully matured, holds a lot of exciting features. First and foremost, it allows us to run our applications on Linux and Mac. It’s modular, which means it’s stripped of most frameworks and libraries we might not even need in our application. For instance, you have to include error pages yourself, which also opens up for customization, so we can have company-specific error pages. You can choose whether to include MVC, or Nancy, should you prefer the latter.

ASP.NET Core is fast! According to Microsoft, you’ll see a 10x speed increase moving your current MVC application to ASP.NET Core.

ASP.NET/Core Overview | Build 2016

The two Scotts are at it again in this next video from Build 2016, where they show some really cool features of ASP.NET Core. For instance, the best demo in the entire world – a must-watch!

Now, go code…

We are moving towards a more platform-independent future, and it has never been more exciting to be a .NET developer than it is today. The thing I’m stoked about – and I think a lot of fellow developers are as well – is that we can now develop .NET applications directly from OS X, without having to open up Parallels. We are not fully there yet, with frameworks and libraries still missing, but it’s a start. Head over to microsoft.com/net to get going!

In a world that is rapidly evolving from mobile first to mobile only, you are out of the game if your website is not accessible on a smartphone. That means building with “mobile first” in mind: your site needs to scale to the devices your visitors are using. People browsing the web from mobile devices are typically connected to the internet through 3G, 4G/LTE, and in many cases even slower connections, which makes the speed of your website crucial. The majority of people tend to leave a site if it takes more than 4-5 seconds to load the page. As you may know, Google takes speed into consideration when ranking websites, which is another important reason to speed up your site.

How to change domain name of your WordPress site the SEO friendly way, without losing PR and link juice

I recently changed the domain for this blog from egeek.dk to egeek.io. There are two parts to changing your domain or moving your site to a new host and domain – the technical part, and the SEO part, where you try to preserve your search engine rankings. Doing it the right way will save you a lot of trouble in the future and let you keep most of your hard-earned SEO work. On the other hand, doing it wrong can have fatal consequences for your rankings in the search engines.

I’ll narrow it down and describe how I did it, and how you should do it too.

In my case I wanted to change domain from egeek.dk to egeek.io on a running WordPress site. I was not switching host/server, but just wanted to “rebrand” my site with a new domain. Whether you are in the same situation and want to rebrand your site, or want to switch host and domain, you can use these steps as a guideline.

I’m currently running this blog on IIS in a Windows Server environment, but the fundamentals are the same, and the following steps can be applied should you run your site/blog on Apache, Nginx or a LiteSpeed server in a *nix environment.
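The heart of the technical part is a permanent (301) redirect from the old domain to the new one, so that visitors and search engines are sent to matching URLs on the new domain. A sketch of what this could look like as an IIS URL Rewrite rule in web.config (this assumes the URL Rewrite module is installed; on Apache or Nginx the equivalent is a rewrite/redirect directive):

```
<rule name="Redirect to new domain" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_HOST}" pattern="^(www\.)?egeek\.dk$" />
  </conditions>
  <action type="Redirect" url="https://egeek.io/{R:1}" redirectType="Permanent" />
</rule>
```

The “Permanent” redirect type is what tells search engines to transfer the old URLs’ rankings to the new domain, rather than treating the new site as a duplicate.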

Disclaimer

Before starting out, I want to stress that changing your domain from olddomain.com to newdomain.com will affect your search engine rankings in the beginning. After you do the switch, Google will index both of your sites, after which olddomain.com will slowly disappear from the search results. I’ve had pages indexed with meta titles and descriptions chosen by Google, instead of the metadata I chose for the pages (yeah, Google can do that). After the switch these pages are currently indexed with my new domain, and yet again Google has chosen meta titles and descriptions, though different (and much worse) than before. I had number one placements in Google with feature images and snippets, which are now gone after I changed domain. It’s part of the process, but I’m positive the rankings and featured snippets will come back once everything is rolled out completely. Things like these are inevitable, as Google needs to reindex your whole site again. Don’t put too much concern into this, though, as things will smooth out, and if you play it right, you’ll be back up where you were before. That’s what this guide is about.

If you ever stumbled upon the info message “No data is available for the 2 most recent days” in Google Analytics when browsing the “Search Engine Optimization” section and wondered why this message appears, there’s a good explanation for that.

No data is available for the 2 most recent days

You will get this message after connecting your Google Analytics account to your website in Google Search Console (formerly Google Webmaster Tools). The Search Engine Optimization graphs in Google Analytics pull data from Google Search Console, and this data is often delayed by several days. People report it can be delayed by more than the 2 days originally stated.

Right after linking your accounts you won’t be able to see data until Google fetches the initial data from Google Search Console – give it a few days. After that, the message stays and simply tells you that Google Analytics is 2 days behind in fetching data from Google Search Console. Don’t worry, it’s perfectly normal, and Google Analytics will eventually catch up.

It might be a good idea to re-check that you linked the correct accounts, though. Log in to Google Search Console > click on your website > click the gear icon in the top right corner and choose “Google Analytics Property”, and you’ll see the Google Analytics account linked to your website.

Node.js is a lightweight yet extremely powerful open source, cross-platform runtime environment for hosting and running JavaScript applications. Its popularity continues to rise alongside the popularity of JavaScript. The way we are able to develop web applications today has changed drastically from only a few years back, when JavaScript could only run in the browser. Today we are able to build big and complex applications in JavaScript and host them in Node.js, where in the past these types of applications had to be written in languages like Java, PHP, ASP, etc.

It’s very easy and fast to get going with Node.js, which I’ll demonstrate.

1. Download the Node.js installer from nodejs.org

2. Follow the installer and install Node.js to the default location (C:\Program Files\nodejs). Make sure the npm package manager is installed as well, as you will need it later on. Also let Node.js be added to PATH (the default), so you can access it from anywhere.

That’s it! Open up Command Prompt and try the following:

node -v

Node will tell you which version it’s running:

In my case, I’m running v4.4.5 of Node.js.

Go ahead and try the following as well:

npm -v

It displays that I’m running npm package manager v2.15.5.

We are now confident that both Node.js and npm are working and we can build our first application.

1. Create the directory C:\Node

2. Create the file “hello.js” in C:\Node with the following content:

console.log("Hello World!");

3. In Command Prompt, go to C:\Node and run:

node hello.js

It’s pretty basic, but we built and ran our first JavaScript application in Node.js. Now let’s try to expand and host a web application.

3. Go to http://localhost:1337/ in your browser (from the same machine running Node.js of course)

We’ve now created a JavaScript application and hosted it over HTTP with Node.js. It’s still pretty basic, but the possibilities are endless. For example, Ghost – a blogging platform like WordPress, actually created by one of the former WordPress developers – is a Node.js application.

If you experience extreme typing latency and Visual Studio is using a high amount of CPU, even when idle, check whether you have “Browser Link” enabled. I started experiencing latency when typing after working just 5 minutes in Razor views. CPU usage was consistently at 37-40%. Browser Link is a new feature in Visual Studio 2013 that enables dynamic data exchange between Visual Studio and your web application.

If you are interested in knowing exactly how Browser Link works, you can read more here.

Or, if you just want to disable it straight away, open the Browser Link dropdown in the Visual Studio toolbar and uncheck “Enable Browser Link”, as shown in the picture.
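Browser Link can also be turned off for a single project through its web.config – a sketch, assuming the standard appSetting:

```
<configuration>
  <appSettings>
    <!-- Disables Browser Link for this project -->
    <add key="vs:EnableBrowserLink" value="false" />
  </appSettings>
</configuration>
```

This keeps the feature available for other projects while sparing this one the extra CPU load.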