* Publish in Debug mode to a folder
* Start the project after setting the ASPNETCORE_ENVIRONMENT variable

(Do PowerShell for powers)

* Now start your web project / console program / whatever is going to contact the service in Visual Studio
* Now back to OLD SCHOOL and ATTACH TO PROCESS

(I know, greatest window in the world)

So it's a little pain in the ass to get it to work, especially if you debug a lot: attach to process every single time. But the key is that .NET Core comes with its own webserver (Kestrel). You don't need IIS anymore, and you don't need IIS Express anymore.

I am sure there's some crazy way to get it working with IIS / Visual Studio integration, remote debugging, etc., etc., but this way works and doesn't involve downloading a half dozen things and configuring IIS (which is half the point of dotnet core, lol). It also gets you ready for the day everything is on command line and you don't need Visual Studio (yeah, right).

Sunday, May 13, 2018

Sometimes, developers get given tasks outside of their usual area of responsibility. For example, dealing with gzipping.

Gzip is a compression algorithm that has existed for over 25 years. It's a standard on the Internet, and almost everything is served gzipped if it is served properly. There are various ways to deal with this, for example just letting the webserver gzip on the fly. However, you may run into a situation where that is impossible. For example, you may have some artificial limit on file size of less than 1 MB.

(No, code splitting is not always an answer; in particular, if you have an integration between different products, code splitting creates an unstable integration between two products with different release cycles. A little bit of knowledge is a dangerous thing without the details.)

And of course, even if you manage to gzip, if the infrastructure cannot guarantee the Content-Type and Content-Encoding HTTP headers (and further headers like Vary: Accept-Encoding), then the browser may decide to download the gzipped files instead of ungzipping them itself. Or simply crash.

It is also a common ask for JavaScript developers, particularly in full-stack JavaScript (for example with Express as the webserver), to deal with gzipping manually. However, who knows where it will be served? It could be served on Apache, on IIS or on a CDN. So chances are, at some point in your career you will be asked to gzip files where

a) you cannot guarantee the HTTP headers

or have other restrictions such as

b) you cannot guarantee the file size (as of May 2018, there are 400 issues open in the webpack issue tracker for splitting by file size). Even if code splitting by file size (actually called chunking) is done, it's experimental and bug-ridden. And besides, splitting into many files is not compression... unless you serve over HTTP/2, serving many files introduces an overhead. Gzipping is a standard, it must be done, and the gains are too big to ignore. We are looking at gains of 5 to 10 times.

(In case you are wondering: no, you cannot access the browser's native ungzip facility from JavaScript -- that is only available when the HTTP headers are present, and you never have access to the raw script text anyway due to cross-origin policy, so you will be looking at an AJAX request. If you can't make an AJAX request because of a missing Access-Control-Allow-Origin header or missing whitelisting, tough shit, you've got much bigger problems.)

So what is a developer to do? Wash his hands and blame the ops guys? Who cares about gzip, right? It's not our problem, it's the server's problem. In fact, who cares about user experience at all? It can take ten seconds to load; we will just wash our hands of these stupid server troubles. We are not server guys, we are developers -- who cares about HTTP headers and how it's hosted, right?

Of course not. Let's put the Dev back in DevOps and ungzip on the fly, with or without HTTP headers, on any infrastructure (well, except for the Access-Control-Allow-Origin header that everyone has). Yeah baby! It will be dirty and messy, but it will work.

Build Process
You can gzip in many ways, for example with this plugin if you are using webpack.

You can also just use the Linux gzip utility as part of your build process.

The Client Side Code (or, the SECRET SAUCE)
We will use the library pako.js to ungzip on the fly, with or without the correct HTTP headers.

In order to make sure the JavaScript files load in the correct order, we will use JavaScript Promises (which require a shim for IE support) and the JavaScript Fetch API (which also requires a shim for IE support). These are the required libraries.

We will dynamically inject the script, again returning a JavaScript Promise on completion. Because we are setting the script's text content directly (rather than a src attribute), we don't have to use onload or onreadystatechange (IE).
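A sketch of such a loader, assuming pako is already loaded as a global and the bundles are served as .gz files (the URLs here are made-up examples):

```javascript
// Fetch a gzipped bundle, ungzip it with pako, and inject it as a script tag.
// Assumes a global `pako`, plus Promise and fetch shims for IE.
function loadGzippedScript(url) {
  return fetch(url)
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (buffer) {
      var source = pako.ungzip(new Uint8Array(buffer), { to: 'string' });
      var script = document.createElement('script');
      script.text = source; // setting text directly, so no onload needed
      document.head.appendChild(script);
    });
}

// Chain the promises so the files execute in the correct order:
// loadGzippedScript('/dist/vendor.js.gz')
//   .then(function () { return loadGzippedScript('/dist/app.js.gz'); });
```

The script executes synchronously when appended, so by the time the promise resolves the file's globals are available to the next file in the chain.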

With great power comes great responsibility: make sure you measure the performance in the browser to see the decrease not only in file size but in how long it takes to actually use the web application.
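One way to do that measurement is the browser's User Timing API (performance.mark / performance.measure); the mark names below are made up for illustration:

```javascript
// Wrap the ungzip-and-inject step in User Timing marks.
performance.mark('ungzip-start');

// ... fetch, ungzip and inject the scripts here ...

performance.mark('ungzip-end');
performance.measure('ungzip', 'ungzip-start', 'ungzip-end');

var entry = performance.getEntriesByName('ungzip')[0];
console.log('ungzip + inject took ' + entry.duration.toFixed(1) + ' ms');
```

The measures also show up on the browser's devtools timeline, so you can compare against the plain, ungzipped load side by side.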

Hopefully this helps someone.

P.S. Message to server guys: we can code on a 386 or a Raspberry Pi or a Commodore 64 or a TRS-80 or string and yarn and foodstuffs, but that doesn't mean it's a good idea or a good use of time, money or resources. Upgrade your infrastructure to allow gzipping of any arbitrary file size with the correct HTTP headers, and make the infrastructure work with the developers, not against them, because next time the problem might not be so (un)easy to solve.

Sunday, January 8, 2017

Just read an article by Robert Martin, of "Clean Code" fame (if you haven't heard of Clean Code, read it -- it's probably the seminal text for clean object-oriented programming... some of the advice in it is dated, with its test-driven development and Java, but it is worth a skim at least).

"If the ranks of programmers has doubled every five years, then it stands to reason that most programmers were hired within the last five years, and so about half the programmers would be under 28. Half of those over 28 would be less than 33. Half of those over 33 would be less than 38, and so on. Less than 0.5% of programmers would be 60 or over. So most of us old programmers are still around writing code. It's just that there never were very many of us.

What does this imply for our industry?

Maybe it's not as bad as Lord of the Flies, but the fact that juniors exponentially outnumbers seniors is concerning. As long as that growth curve continues[4] there will not be enough teachers, role models, and leaders. It means that most software teams will remain relatively unguided, unsupervised, and inexperienced. It means that most software organizations will have to endlessly relearn the lessons they learned the five years before. It means that the industry as a whole will remain dominated by novices, and exist in a state of perpetual immaturity." - Robert Martin

Basically the problem is this, more in 2017 than ever: the world has unwittingly ceded control of its financial, healthcare and private personal information, for better or worse, to computer programmers. We have a duty to create systems which are maintainable, robust and error-free, even if it costs us in the short term.

What are the problems? The problems are in-your-face, serious, and unfortunately have nothing at all to do with coding or computer programming.

Example #1 (easy): Boss asks for a deadline, and you can either take a shortcut or take 20% more time... piss them off now and make them happy later, or go for the short-term gain.

Example #2 (hard): You are in a position of great authority to pick a framework or a technology, and you can either choose what is cool and hot, and therefore good for your career (great, one more line for your resume!), or choose proven but less cool and less interesting technologies. Balance this against whether or not the technology is about to go out of the market (you can write a website in COBOL, but it is NOT a good idea!)

Example #3 (very hard): You have the ear of business people, and you must convince them of the need to create a process or build a framework or library which has no readily apparent business value and no readily apparent use cases, but will increase developer productivity ten-fold down the line, or allow you to enter emerging markets or attack potential competitors.

Example #4 (extreme): You must either sacrifice personal time, emotion and energy to create a process / library / framework for your company, or down the road you see the end of your company or business (at least tech-wise)... the tech is so bad nobody will want to work there or stay there. You see the train coming a mile away, but you are superglued to the tracks, at least at work. So you either sacrifice, in order to move the company in the right direction, and get zero credit for it, or hold your tongue and hope that the world works differently than you think.

What are the answers to these problems? I could give my answers, but they would be my answers.

The point is, there are no right answers... it depends on the situation, the market, and most of all experience. And, if Robert Martin's numbers are to be believed, experience is severely lacking.

I don't pretend to know anything about making money or business. Maybe markets are all about point in time and maybe writing spaghetti code and awful code is the way to do it -- forget about "tech" things like build processes, GitHub, Open Source, frameworks, automation, libraries and command line tools. Forget about The Art of Unix Programming and give the business people what they want, all the time, because the market wants now and only now and later will be too late because the market won't exist anymore.

Or, we could draw a line and say this far and no further... the line must be drawn here. Either take the time to do it right, or suffer the consequences.

How many "senior" developers, and technical leads and architects know this? How many programmers even care about these issues?

In the end we must all do things we can live with. We all have our own moral codes and standards. The choice is easy. Living with what we choose is the hard part.

Sunday, February 21, 2016

This is Part 1 of a short series of blog posts about practical MVVM and its usage. The goal of these posts is to give information not readily available through documentation or examples, mainly how to construct a complex data-driven application with non-trivial issues.

This post will discuss the practical theory behind MVVM and its application in real-world applications. For theory and an introduction, see Martin Fowler's article, or look at any introductory textbook on Software Engineering/Software Architecture.

We will waste no time introducing what MVVM is and simply dive into the details (with a short refresher).

What is the Problem we are Solving?
I am not a fan of learning or increasing the complexity of an application just for the sake of software purity or self-edification. Hopefully you are not either. Therefore, the question must be asked: what problems does MVVM actually solve? What is its use? Is it worth the additional complexity? In corporate/Software Architecture parlance, what is the "use case" that MVVM applies to?

I am assuming we are solving a business problem. Business problems have a specific purpose, use and scope. In particular, we are not looking to reinvent the wheel, demonstrate technology or do technology for technology's sake (although those are valid reasons, we aren't talking about technical merits here). What we are wondering is how we can solve business problems in a fast enough and maintainable enough way that the business continues to be viable, extensible and expandable.

1. Business problems are data problems. Data is the business: moving data from point A to point B. However, the days of simple data entry are gone. Dumping data from the database onto the screen and saving it is a trivial, simple task. In order to create a product of any value, the relationships between data points must be maintained, because without relationships there is no meaning to the data.

2. Increasingly, data visualization is just as important as data processing. Without reports, without diagrams, without charts, without Key Performance Indicators, data entry is useless. This is beyond the scope of this series of blog posts, but is obviously the next step after mastering data integrity.

3. More and more, the interface has to be attractive enough to provide a superior user experience. Business users are now computer experts, unlike years or decades ago, and expect the same experience from their business products as from top-notch consumer software. Things like milliseconds of delay, buggy interactions and non-standard interactions are unacceptable. In addition, this is not the heyday of the Internet -- the market is now mature, and mature markets demand superior customer experience as the key (and sometimes only) differentiator. This is, again, beyond the scope of this series of blog posts.

We will focus on problem 1, non-trivial relationships between data points. In particular, we will focus on how to represent data points with multiple relationships between them, plus a hierarchical relationship (since most business problems are hierarchical), and how to create and architect software which takes this reality into account.

What does all that mean for the Developer?
In general, what that means for the bog-standard Software Developer in his day-to-day tasks is a few things.

1. The primary task of a backend Software Developer (and increasingly frontend developers as well) is to create tools, or user interfaces. This immediately directs us to some sort of design pattern (after all, UI problems are a solved problem), and immediately to MV***, with *** being the question mark. The model is a given, since we are solving business problems and all business problems are modelled. The view is a given, unless we are creating backend data-processing software with no user interface.

2. Unless you are lucky enough to work somewhere you can do whatever you want, you are under the gun. Particularly for a business, time is money, and time is lost market share or lost revenue. Therefore, you can't take forever reinventing the wheel from the ground up. Luckily, a lot of MV*** frameworks already exist.

This is a good introduction to the differences between MVC, MVVM and MVP. In particular, we will select MVVM, because the business problems we will try to solve are non-trivial (advanced relationships between data points).

Technology
We will use technology meant to solve a business problem. This immediately leads us to -- you guessed it -- Microsoft. The solution we will select is KnockoutJS, the Microsoft-recommended way to accomplish advanced UI binding. The concepts and code samples, however, should be clear enough to port to any framework or programming language.
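To make the binding idea concrete, here is a minimal hand-rolled observable in the spirit of ko.observable (Knockout's real implementation adds dependency tracking, computeds and DOM bindings on top of this; the view model below is a made-up example):

```javascript
// A read/write accessor that notifies subscribers on change --
// the core mechanism behind MVVM two-way binding.
function observable(initialValue) {
  var value = initialValue;
  var subscribers = [];

  function accessor(newValue) {
    if (arguments.length === 0) return value; // called with no args: read
    value = newValue;                         // called with an arg: write...
    subscribers.forEach(function (fn) { fn(value); }); // ...then notify the view
    return accessor;
  }

  accessor.subscribe = function (fn) { subscribers.push(fn); };
  return accessor;
}

// A tiny view model; in Knockout, the view would data-bind to this.
var customerName = observable('Initech');
customerName.subscribe(function (v) { console.log('view updated: ' + v); });
customerName('Initrode'); // prints "view updated: Initrode"
console.log(customerName()); // prints "Initrode"
```

Note the function-call syntax for both reads and writes; that is exactly how Knockout observables behave, which is why porting this mental model to the real library is painless.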

The next blog post will start with a practical example of MVVM (the trivial/kitchen sink example) then discuss the problems developers immediately face when trying to implement realistic business solutions.

Sunday, January 3, 2016

This post discusses how to convert between a JavaScript object, XML and JSON (not the same as a JavaScript object).

(Un)Surprisingly, there is no native function to do this. There are various implementations of conversion, like JXON or jQuery's parseXML, but no official standard. So chances are every developer will have to deal with converting between these data interchange formats (and native JavaScript objects) depending on the use case or circumstances.

JavaScript object to JSON is the most straightforward. Just use the browser's native JSON.stringify (or a shim if you need to support older browsers). Or is it? Consider the following case:

var obj = ["lightsaber", "blaster", "vibroblade"];

All of these are Star Wars weapons. But this data structure has no idea what's inside it. It doesn't know that these are weapons, or Star Wars weapons.

So the conversion to XML, then, is not trivial. Care must be taken, when converting between a JavaScript object and JSON, to send the metadata of the JavaScript object along.

The notation I've seen used most is the $metadata attribute. It has the added advantage that $ is not a valid XML tag character, which means you won't accidentally create a node from the metadata.
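A sketch of the convention -- the shape of this $metadata object is a made-up example, not a standard:

```javascript
// Ship the meaning of the data alongside the data itself.
var payload = {
  $metadata: { type: 'weapon', universe: 'star-wars' },
  items: ['lightsaber', 'blaster', 'vibroblade']
};

// The JSON round trip preserves the metadata untouched...
var roundTripped = JSON.parse(JSON.stringify(payload));
console.log(roundTripped.$metadata.type); // prints "weapon"

// ...and because '$' is illegal in an XML element name, an XML
// serializer can safely map $metadata to attributes, e.g.:
// <weapon universe="star-wars">lightsaber</weapon>
```

Any consumer that doesn't understand $metadata can simply ignore the key, which is what makes the convention cheap to adopt.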