Using custom value types to protect the system from invalid data (17 February 2019)

There are plenty of resources where you can learn to use the right built-in value type, such as decimal when dealing with money. Unfortunately, picking the right value type is only the first step in preventing invalid data from corrupting the system. For example, in a certain business case, only values from 1 to 100 could be valid. In this blog post, I'll cover using custom value types to increase trust in the system. The examples are in C#, but the information can be applied to many languages.

Not trusting the input

Object-oriented programmers have a good understanding of when to create a class. If the system has the data age, firstName and lastName, it is very likely that the developer creates a class Person with those three as properties, instead of passing each of them around separately inside the software.

Once the class is created, some sanity checks can be done in the constructor, such as verifying that the first or last name is not empty or null.
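
The post's examples are in C#, but as a minimal sketch the same constructor-level sanity check can be shown in TypeScript (the property names follow the example above):

```typescript
// Minimal sketch of constructor-level sanity checks.
// Property names (age, firstName, lastName) follow the post's example.
class Person {
  constructor(
    public readonly age: number,
    public readonly firstName: string,
    public readonly lastName: string,
  ) {
    if (firstName.trim() === "") {
      throw new Error("firstName must not be empty");
    }
    if (lastName.trim() === "") {
      throw new Error("lastName must not be empty");
    }
  }
}
```

After this, any code receiving a Person can trust that the names are non-empty.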

Unfortunately, when it comes to the primitive types (boolean, decimal, double, etc.), data is often placed into a variable whose type doesn't describe the business rule. Let's take a 32-bit integer as an example. It could have 10 correct values (1-10) and around 4,294,967,286 wrong ones!

As an example, in accounting software you might have a business rule that an account number must be between 1000 and 19999. If I write a function to get the account type, I need to do a sanity check first, so that I don't do a lot of work resolving the account type when the input is already known to be invalid.
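
The original code sample is not preserved here, but a naive version of such a function might look like the sketch below (in TypeScript rather than the post's C#; the account-type classification rule is purely illustrative):

```typescript
// Hypothetical sketch: the 1000-19999 range check is baked into the
// business-logic function itself.
function getAccountType(accountNumber: number): string {
  if (
    !Number.isInteger(accountNumber) ||
    accountNumber < 1000 ||
    accountNumber > 19999
  ) {
    throw new Error(`Invalid account number: ${accountNumber}`);
  }
  // Illustrative rule only; real charts of accounts differ.
  return accountNumber < 2000 ? "Assets" : "Other";
}
```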

This creates several problems. The account number validation rule doesn't "belong" in GetAccountType, as it violates the single-responsibility principle. Duplicate code will also appear when other account-number-related functions are created.
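
A custom value type moves the validation into a single place. Here is a sketch (again in TypeScript rather than the post's C#, and with an illustrative classification rule):

```typescript
// Validation happens exactly once, in the constructor, so any function
// receiving an AccountNumber can trust that the value is valid.
class AccountNumber {
  readonly value: number;

  constructor(value: number) {
    if (!Number.isInteger(value) || value < 1000 || value > 19999) {
      throw new Error(`Invalid account number: ${value}`);
    }
    this.value = value;
  }
}

// The business-logic function no longer needs a sanity check.
function getAccountType(account: AccountNumber): string {
  return account.value < 2000 ? "Assets" : "Other"; // illustrative rule
}
```

The invalid states are now unrepresentable: there is no way to call getAccountType with an out-of-range number.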

Conclusion

Even though defining a custom value type has clear benefits, it also has downsides. For example, writing the operator overloads needed to interoperate with built-in value types doesn't feel like time well spent. Still, I am quite confident that every system has some domain-specific values that are critical to get right and are used often in the business logic; that's where you should consider defining a custom value type.

When you go back to your software project, maybe you can spot some important business values stored in integers, floats, decimals, etc., and consider whether they deserve a stricter definition.

2018 retrospective

I have written a couple of retrospectives in previous years, and this year is no different. The retrospectives have been from a freelancing perspective, but this time I wanted to add a few random things that I found interesting.

Freelancing

There are still a few working days left as I am writing this, but at the moment the hour counter shows 1099 hours, which is a bit more than in 2017 (1034 hours). I have now done four-day work weeks for four years, and I think there is no going back.

Almost 90% of the time I worked remotely either from Finland or from abroad (Hungary). More on that later.

I didn't have many clients, as I had one wonderful client and it was, and still is, pleasant to work with them. It should not be taken for granted that you always find a client with whom you have mutual trust; that trust allows remote work, which in my case leads to more time and happiness.

From a technical perspective, I went deeper into .NET Core, EF Core and React. The term full-stack developer has always been vague, but in my case it means a JavaScript frontend with a .NET backend. Keeping the backend language and framework unchanged allows me to keep up with the ever-changing frontend technologies.

Remote work

Due to quite dramatic events on my wife's side of the family, I worked a lot remotely from Hungary. I was already working remotely in Finland, so work-wise the change wasn't big. Hungary has only a one-hour time difference to Finland.

I think the remote work experiences are worth their own blog post, but I'll try to summarize this year's experience in a few bullet points:

it is valuable to meet your team members at some point, preferably at the beginning of the project

discussion on Slack (or similar service) is also part of the searchable information, unlike "water cooler chat"

I get more done in the home office than in the actual office

Blogging

2018 was really bad from a blogging perspective. I guess I momentarily lost interest in writing, and getting back into "writing mode" was hard.

But I am very happy to see that the visitor count almost doubled from 2017. Also, my blog appeared in search results a lot. Thank you, Google!

Interesting things from 2018

Programming

C#/.NET

OzCode, C# debugging on steroids. I have tested it a few times, but based on the tutorials there are so many wonderful features that I am eagerly waiting for a bug that would require going deep into debugging mode.

Dependency injection in general. This might seem a boring topic, but dependencies are a big part of software development. I try to remember that it's not about the tools, as you could do everything without any IoC (Inversion of Control) framework; it is more about software design. If you have a problem with your IoC/DI tool, you very likely have a software design problem.

Pulumi, "infrastructure as code". I have only read the documentation and listened to a podcast on the topic, but it is probably worth checking out when you need to think about infrastructure-related topics.

Visual Studio Code. I was a bit worried that VS Code would get slower or that something would change to make it less perfect, but I am glad to report that VS Code only got better in 2018, and I am not the only one saying so.

Podcasts

On the far eastern edge of Europe there is a border, 3,500 miles in length and spanning eight countries. In Edgelands, a brand new six-part podcast from The Telegraph, we explore the remote communities uniquely shaped by decades of living in the shadow of the former Soviet Union.

For Melissa Moore, 1995 was a nightmare. That’s the year the teenager learned her father, Keith Hunter Jesperson, was a serial killer. It’s also the year Melissa Moore’s doubt spiral began: When you look like your father, and you share his intelligence and charisma, how do you know you’re not a psychopath, too? Happy Face is the story of Keith Hunter Jesperson, his brutal crimes, and the cat and mouse game he played with detectives and the media. But it’s also the story of the horrific legacy he gifted his children. Join Melissa Moore as she investigates her father’s crimes, reckons with the past, and wades through her darkest fears as she hunts for a better future.

What's next?

Here are some random thoughts on what 2019 could bring.

I still find freelancing interesting, but the product business is something I would like to try at some point.

Talking at meetups has always been fun and exciting, so I guess that's something I could do more of.

I need to create (again) a habit of writing.

F# would be interesting to learn, but I am the kind of person who needs an actual project (slightly bigger than Hello World) to stay motivated in the learning process for more than a day. If you have an F# project, let's talk!

I made a few pull requests to open-source projects; contributing to a good cause always brings joy, so that's something I would like to continue.

There is no need to plan everything, so I'll keep the list relatively short.

Have a pleasant 2019 and thanks for reading!

Debugging Jest tests in Visual Studio Code while using WSL (6 December 2018)

In an ideal world, investigating why a test fails should be a trivial task, but sometimes the code under test is not easy to follow. Or even the test itself can be hard to understand. To understand what is going on, you can use console.log in the tests, but that becomes cumbersome if you don't know which variable value you want to inspect. In this blog post, I'll show how to configure Visual Studio Code as a debugging environment for Jest tests. As an extra, the text covers the WSL (Windows Subsystem for Linux) configuration.

Finding the launch configuration

Visual Studio Code has a Debug section where breakpoints, available variables and statement watches are listed.

You can start debugging by hitting F5 or the play button. In the screenshot, a debug configuration is active for launching a Chrome instance.

To add a new configuration for Jest, hit the cog on the right side of the Launch Chrome selection. It will open a new tab with the file launch.json.
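
The post's original configuration is not preserved here, but a commonly used launch.json entry for debugging Jest looks roughly like this (the configuration name and paths may differ in your project):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Jest Tests",
      "program": "${workspaceFolder}/node_modules/jest/bin/jest.js",
      "args": ["--runInBand"],
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen"
    }
  ]
}
```

The --runInBand flag keeps all tests in a single process, which is the process the debugger attaches to.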

Debugging the tests

Adding a breakpoint is easy: click next to the line number to add one. You might have customized the view, so the line numbers could be hidden or on the right side of the text, but adding a breakpoint should still be possible by clicking the edge of the text editor.

The added breakpoint can also be seen on the Debug panel.

To start debugging, open the Debug panel, choose the right configuration and press the green play symbol.

After a while, you should see the line being active where you added the breakpoint.

The Debug Console view, which can be found using Ctrl + Shift + Y, contains information on how the debugger attached to the testing process. That view is shown on the right side of the screenshot above.

Once you have a debugging session active, you can check the value of a variable by moving the mouse cursor on top of it, see available variables in various scopes (the Variables section in the Debug panel), etc.

I hope this will help you get started with unit test debugging!

Case story: NordSafety, a social approach to safety (8 June 2018)

In 2017 and also briefly in 2018, I had the pleasure to work with NordSafety. Two projects were successfully executed and, in this article, I will tell about a project called SafetyFeed. This was the smaller of the two projects but is easier to explain as the concept is familiar from social media services.

Short introduction to EHSQ and NordSafety

To understand better what was built and why, it is important to explain what NordSafety does.

NordSafety brings EHSQ management to the mobile platform

EHS stands for Environment, Health, and Safety (the Q in EHSQ adds Quality). It sounds pretty broad, so let's see how Wikipedia defines it:

Environment, health and safety (EHS) is a discipline and specialty that studies and implements practical aspects of environmental protection and safety at work. In simple terms it is what organizations must do to make sure that their activities do not cause harm to anyone.

As you can see from the definition, EHS is a very broad topic and covers almost all industries.

The idea of NordSafety is to have a modern take on capturing EHS-related issues. Once collected within the system, information about issues can be shared and activities to remedy ongoing risks can be tracked.

Safety-related issues can vary from a hole in the pavement to a chemical spill at a paper mill. Large construction companies can have hundreds or thousands of sites that they need to track. The term site refers to one physical location of different forms (a building, segment of a road, etc.).

NordSafety has several client applications, iOS and Android apps that are implemented using React Native and a web client. There is also a Portal that can be used to manage sites, user rights, reporting, etc.

The project we're now going through is part of the Portal and it was undertaken in collaboration with NordSafety's business and development teams and with the design team from Taiste.

SafetyFeed

The idea of SafetyFeed is to provide a specialized feed and commenting system for NordSafety's product. Think of Facebook's wall, but within the context of EHS (explained earlier). The feed can be filtered; for example, if you want to see events from a specific site or only certain types of events (new site created, a new incident reported, etc.). The common use-case is that a person who is responsible for a site, for example the renewal of a road called Mannerheimintie, can track what kind of activities have happened at that particular site.

Most of the users are not tech-gurus, so the user-interface has been designed taking that into account, for example when thinking about terminology and user-interface controls.

My part in the project

I did all the front-end work and some of the backend APIs, especially regarding reading the data. A lot of work had been done to collect the interesting events that can occur in the system.

The SafetyFeed front-end involved much more work and was also more significant, as it was the first project to define the more modern front-end stack that would be used in other areas of the application. Of course we chose stable technologies, but close enough to the cutting edge to attract people to continue the work.

React was chosen as the UI library, with Redux for state management together with the wonderful TypeScript FSA.

The SafetyFeed uses infinite scroll, which basically means that it automatically loads new updates when the user scrolls near the end of the current content. The front-end implementation uses react-virtualized to keep only currently visible content in the DOM, giving a huge performance boost. The back-end also responds very quickly, so the user can scroll without any lags in loading new content.

Each update type (internally called a card) could contain different data, for example a URL to an image when a person shares a photo, or coordinates for an update that has a map, and therefore extra attention was given to building a model so that new update types could be added. I was happy with the way the data was modeled without complex inheritance.

I wrote unit tests and component snapshots using Jest.

Conclusion

NordSafety had listened to the users and understood the problem: information on what was going on at a site or sites should be available in an easy-to-use view. The new feature was launched by putting it into production and informing customers via a blog post and newsletter.

Later we added a commenting system to the SafetyFeed, so that each update allows more interaction between the person who made the update and those who read it.

I haven't seen the current usage statistics, so I can just hope that many users have found the SafetyFeed useful.

To me, it is important that the client's own developers can continue adding features and fixing bugs after I am gone. It is also important that the technology is not chosen just to satisfy my own curiosity about the new shiny thing: it needs to be the right tool for the job, and people must be available to continue the work. This isn't always the case when working with consultants, independents or consulting companies.

I was very happy to hear that new additions have been made by a developer who wasn't even working there when I was implementing those features and overall feedback from the code quality standpoint has been very positive. That always warms my heart.

Big thanks to the NordSafety team, and especially to my main contact, co-founder and CTO Jani Virtala, for the pleasant collaboration.

React Finland 2018 workshop day (24 April 2018)

This is just a quick recap of the workshop day (24 April 2018). The next two days are conference days full of talks. You can find the official information about the conference on the React Finland website. Disclaimer: I had to write this recap quite quickly, and it is more of a braindump.

Overview

The most important thing first: even though we have an active web development scene in Finland, we don't have that many conferences on web development. That is why events like this are super important and exciting!

If it warms anyone's heart, the organizers will receive my eternal gratitude for organizing the conference! Is there anything more delightful than getting world-class developers/speakers "delivered almost to your doorstep"?

I think it is understandable that the workshops cannot all be in the same location. If I counted correctly, there were ten workshops, each capped at 20 persons; it is quite hard to find a venue that would be a perfect fit for all of them.

Venue & Organizing

I wasn't sure if we would be served anything during the day, so I filled my stomach with breakfast, took my fully charged laptop and headed to Valkoinen Sali.

The place was easy to find, and service was good right from the doorstep. Having a coffee boosted my already good morale.

The venue worked very well: good location, service and atmosphere.

Things to improve: these are small things, and one could say that the organizers were fast to React to situations, no pun intended.

Wifi. I guess most of the workshops require a laptop, and the laptop requires Wifi. One could use a Wifi hotspot, but our workshop lasted from 09:00 to 17:00, which would drain your battery.

The venue had Wifi, but it required a password (a complicated Finnish word) and was super slow. Even Michel, the workshop presenter, had to switch to his own Wifi hotspot.

The two workshops were a bit too close to each other, and the only thing separating us was a curtain. It would be better to have separate rooms so that we would not hear the other workshop.

Most of the workshop attendees need to recharge their laptop batteries, so the venue should have a lot of power strips.

State Management Workshop

The workshop was about MobX and mobx-state-tree. I was curious to learn other ways to manage web application state than Redux and RxJS, which I had used before. I think there is no better way to understand the thinking behind a tool than letting its creator explain it.

The workshop was well-paced, and the exercises had the right difficulty level. Michel created MobX, among many other things, so naturally he knew what he was talking about.

I am not going to write about MobX now, but in the context of a full-day (09:00-17:00) workshop, the day was enough to give an overview of the MobX philosophy, its capabilities, and some idea of the pros and cons of the approach.

If you're interested in learning MobX, I highly recommend attending Michel's workshop on the topic.

What's next?

Tomorrow will be the first conference day, and everyone will be at the same venue. I look forward to meeting old friends and, hopefully, making some new ones too! Come say hi.

And of course, the big thing is the great talks given by fabulous people from all around the world.

Adopting Prettier into an existing project (25 February 2018)

Prettier is an opinionated code formatter for different languages (such as JavaScript and TypeScript) and style sheet files. Prettier wasn't the first code formatter, but it had a new approach that made it a highly praised and popular tool: it parses the code and re-prints it, instead of just checking and nagging about spaces between tokens. Another reason for its popularity could be the opinionated approach, which makes the tool very approachable. On a green-field project, using Prettier from the get-go is a very smooth process. On a project that has been running for a while, there are things to take into account. In this blog post, I'll explain how Prettier can be adopted (relatively) painlessly into an existing project.

What to consider

The projects where I have introduced Prettier have had a front-end team of 2-4 developers. That is a small number, but enough to get merge conflicts, so version control is one thing to consider.

When working in a team, there is also the education and convincing part. Team members might not have even heard of this relatively new tool.

A big part of adopting Prettier is also the existing tools that need to be re-configured (for example, ESLint or TSLint) or disabled (other code formatting tools).

Let's look at each of these in the following chapters.

Discuss with the team

I hope that you discuss with the team before starting any work on adopting Prettier. Everybody must be on board. Prettier is quite simple to explain and demonstrate: its output can be shown to the developers. I am quite confident that most of the files in the project will look better after formatting, even if you had ESLint style checks in place.

Your team members might be even impressed.

Also, Prettier is a very automatic tool: if it has been configured properly into a pre-commit hook, people can forget about the existence of Prettier if they want, and the source code they contribute will still be formatted.

Using Prettier removes non-productive discussions, such as whether an if statement should have a space before the parenthesis. Developers can then focus on something meaningful.

Integrating with existing tools (ESLint, TSLint)

The project that I am currently working on had ESLint configured with some plugins. It is important to point out that ESLint !== Prettier. ESLint looks for possible code errors and suggests best practices; in addition, it also has rules for checking source code formatting. The latter are very likely in conflict with Prettier. For example, ESLint could be configured with eslint-config-airbnb, which has different style rules than Prettier's.

If you're adopting Prettier then, by all means, keep ESLint in the project, but disable the conflicting rules using eslint-config-prettier.
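
For example, assuming the project extends the Airbnb config (an assumption for illustration), adding "prettier" last in the extends list turns off the conflicting style rules:

```json
{
  "extends": ["airbnb", "prettier"]
}
```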

Alternatively, you could run Prettier as an ESLint plugin. The benefit of this approach is that if you have ESLint configured to run as a git pre-commit hook, you get Prettier into the development process very easily. When you commit files, the source code is formatted using Prettier and then the ESLint checks are run; if everything is fine, the commit is created.
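
One common way to wire up such a pre-commit hook at the time was husky together with lint-staged (these tools are my assumption, not mentioned in the original post), with a package.json fragment roughly like:

```json
{
  "scripts": {
    "precommit": "lint-staged"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["prettier --write", "eslint", "git add"]
  }
}
```

On each commit, staged files are formatted by Prettier, checked by ESLint, and re-staged before the commit is created.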

Timing and version control

The ideal situation would be that all feature branches are merged into the main branch, and new branches start with Prettier integrated into the project. In a small team, this is doable.

To get to this ideal situation, discuss with the team and aim for small feature branches (as everyone should anyway), and agree that on a decided day (after working hours) you'll run Prettier over the whole code base, so that the next morning everyone starts with formatted code and the Prettier tooling available.

If there were branches active before the Prettier configuration, those branches need to be rebased on top of dev/master where Prettier is already configured.

There will be conflicts, the amount depending on the nature of the branch: whether it is a bug fix that targets existing files, or a new feature with a lot of new files. New files are easy, as they won't have conflicts; they are just not formatted yet, so run Prettier on those files.

Existing files have already been reformatted in dev, so fixes need to be applied to the formatted code.

Conclusion

Some people might claim they can format code by hand to perfection, perhaps making it more readable than machine-formatted code. I am not saying they're wrong, but I put more emphasis on consistency across the overall codebase than on handcrafted sections by individuals.

On the spectrum from sloppy to precise, I consider myself to be more on the precise end. I have been positively surprised with the formatting results. You can try pasting some (non-critical and public) source code into the Prettier Playground to see the results.

I have had positive experiences using Prettier, and I hope these shared experiences will help developers adopt Prettier in their projects.

The Windows Subsystem for Linux from a web developer perspective (18 February 2018)

If a web developer were to switch to Windows from Linux or macOS, the most significant thing they would miss is a proper Unix shell. Disclaimer: that statement is only based on discussions with numerous web developers, so the sample size is rather small. Windows users have been using non-Unix commands (PowerShell) or emulated Unix tools (Cygwin), which can lead to troubleshooting issues that the rest of the team, on macOS or Linux, hasn't encountered. Windows Subsystem for Linux (WSL) can change this situation. In this blog post, I'll explain why I switched to using Ubuntu bash instead of the alternatives, and share the positive and negative experiences I have had using it.

Why Windows Subsystem for Linux?

Windows users have always had multiple choices for command line tasks, such as PowerShell, Command Prompt, Cygwin, etc. Each of them has their issues.

PowerShell is a powerful command-line shell and scripting language, but it is mostly used on Windows machines. As a side note, PowerShell Core can be run as a cross-platform tool. From a web developer's perspective, PowerShell is not widely used in JavaScript build scripts or tools.

The classic Command Prompt is always at your disposal on a Windows machine. Unfortunately, Command Prompt feels abandoned, and the overall experience is just sad, to put it mildly.

Cygwin takes a different approach, as it tries, as the project description says, to "Get that Linux feeling - on Windows". You can use many of the Unix tools, and Cygwin.dll provides POSIX compatibility. Cygwin doesn't mean that you can run Unix binaries; all libraries and tools need to be compiled from source.

The Windows Subsystem for Linux (WSL) is a new Windows 10 feature that enables you to run native Linux command-line tools directly on Windows, alongside your traditional Windows desktop and modern store apps. --WSL FAQ on Microsoft website

It is a bit difficult to find information on what WSL is, but this is the definition I came up with after reading many FAQs.

The WSL is a compatibility layer for running Linux applications. By default, WSL includes Bash, so you're ready to use commands like awk, grep, etc. A Linux distribution can be installed from the Windows Store.

When you use the Windows Subsystem for Linux, you can access your code on the Windows file system, for example C:\GitHub\YourProject, which is visible inside WSL under /mnt/c/GitHub/YourProject.

The Good

The WSL is light on resources as it isn't a virtual machine based solution.

The setup is easy as you can now install the Linux distribution, such as Ubuntu, from the Windows Store.

The WSL is not an emulation of tools, so everything works as expected because the tools are real Unix commands and not Windows applications that try to imitate the original ones.

Especially for web developers who use Node.js-based tools, WSL brings very stable native packages, as the Linux builds of packages are far better tested than their Windows counterparts. Installing the Windows build tools has become easier thanks to the windows-build-tools npm package. Still, in my experience the Linux compilation is easier and more reliable due to the larger user base. Installing build tools is easy: sudo apt-get install build-essential.

A big plus is also access to a rock-solid package manager (for example, apt-get) and a vast number of packages.

Access to the same files on the Windows file system without mounting and sharing drives.

Finally, an equivalent dev environment with the whole team. You're no longer the only person on the team seeing Windows-specific issues, with no one able to help because they can't reproduce them.

The Bad and The Ugly

Based on the previous section, one might think we're living on a perfect planet. Almost. There are two kinds of stability issues that I have encountered.

First, random file system errors when doing heavy I/O operations; in web development that is, of course, npm install.

I use Windows Bash on Ubuntu on a daily basis and everything has gone smoothly except on some operations I get random file system errors. Operations such as running tests using jest or installing npm packages with yarn. Any ideas? pic.twitter.com/pZFSqK8jmE

I have tested on two different machines with the same results: random file access errors. The first run gives an error on file X, the second run an error on file Y, and on the third run everything is fine.

The same thing happens with JavaScript unit tests.

The other issue I have encountered is that the Windows subsystem doesn't find the Linux distribution at all. I just get a blank terminal, and nothing happens. This missing-distribution error occurs rarely, but when it does, the only thing that has remedied the situation is the classic solution: a reboot.

A non-technical problem: it is sometimes hard to remember that if you run npm install on the WSL side, the binaries are in Linux format. If you then accidentally try to start your project in, for example, PowerShell, it won't work without deleting the node_modules folder and running npm install again.

Conclusion

Even with these issues, I have been super happy running a Linux environment with my Visual Studio Code setup. Microsoft has made the installation easy, and I think it is a matter of time before the stability issues are solved. It's hard to tell if I am the only one encountering them.

One thing I am looking forward to is doing some customization, starting with fonts and colors. I think the next step after that could be learning to use tmux.

Two key points to focus on when learning TypeScript (19 January 2018)

I started writing TypeScript at the end of 2015. Since then I have written three medium-sized TypeScript projects. Even though TypeScript is very pleasant to learn (good resources, online playground, smooth learning curve, etc.) it can become a painful experience if the developer a) aims too high too soon or b) tries to force TypeScript/JavaScript to something that is not idiomatic. In this blog post, instead of sharing many tiny tips, I'll try to share two aspects that new TypeScript developers could focus on.

Aim for strictness

TypeScript is a very pragmatic language from both developer and project standpoints. A developer who hasn't written a single line of TypeScript can gradually learn the language and leverage existing ECMAScript understanding.

From a project standpoint, TypeScript is a safe bet: the team can migrate to TypeScript without a complete rewrite.

TypeScript allows developers to define the strictness level; strictness, in the TypeScript context, means what is and isn't permitted by the compiler.

If I were to start a new project with TypeScript, I would add the setting strict: true to the tsconfig.json because I understand what the compiler is trying to say if I get an error.

{
  "compilerOptions": {
    "strict": true
  }
}

The setting strict combines multiple checks that the developer could otherwise enable or disable one by one. What I think is cool is that when the TypeScript compiler gets type-system improvements, it can spot even more errors. Those new checks are automatically enabled with the strict option.

An example of how strictness increases the number of errors spotted was when the strictFunctionTypes check was introduced in TypeScript 2.6. I updated TypeScript and got new errors. As it was an entirely new check, I didn't quite understand what was wrong with the code. I could turn the strictFunctionTypes option off, learn what the compiler was trying to tell me, and then decide if the check makes sense in our project.

There might be cases when you don't know how to use TypeScript to model, say, the input and output of your function, which can lead to frustration. Be kind to yourself and mark that particular code with an explicit any.

Write idiomatic JavaScript

The possibility to write classes (introduced in ECMAScript 2015, which TypeScript supports) and interfaces (TypeScript) can lead to a misunderstanding that you should do C#- or Java-style object-oriented programming (OOP). It is essential to understand the possibilities that JavaScript offers, like first-class functions, and not just write in the style you may have learned in other languages.

In the case of classes, there is nothing inherently wrong with using classes in JavaScript as long as the usage is appropriate, but it is good to understand that they might not be the only solution, and they might not work identically compared to what you have seen in other languages.

The same critical thinking should be applied to TypeScript-specific features. What I see too often in C# code is unnecessary inheritance, where the developer has moved properties to a base class with only one class deriving from it. There is a risk that TypeScript's beautiful type system is not used to its full extent and unnecessarily complicated solutions are brought over from other languages.

For example, the backend implementation might have a base class and derived classes that are just data transfer objects (DTOs). It is straightforward to convert a C# class with only properties to an interface in TypeScript and do the inheritance with the keyword extends. When doing so, the developer should consider whether there is a better way to model the same data; for example, union types could be used.
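As a sketch of that idea (the DTO names here are hypothetical, not from any real backend): instead of a base class plus derived classes, each shape can be its own interface, and a union type replaces the hierarchy entirely.

```typescript
// Hypothetical DTOs: each shape is an interface with a discriminant field.
interface PaymentEvent {
  kind: "payment";
  amountCents: number;
}

interface RefundEvent {
  kind: "refund";
  amountCents: number;
  reason: string;
}

// The union replaces a BaseEvent class and its derived classes.
type AccountEvent = PaymentEvent | RefundEvent;

function describeEvent(event: AccountEvent): string {
  // Narrowing on the `kind` discriminant gives full type-safety per branch.
  switch (event.kind) {
    case "payment":
      return `Payment of ${event.amountCents} cents`;
    case "refund":
      return `Refund of ${event.amountCents} cents (${event.reason})`;
  }
}
```

The compiler knows exactly which properties exist in each switch branch, which inheritance-based modeling would not give you as directly.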

Conclusion

I think having a few simple "instructions" in your brain can lead to better quality. The one I try to follow on a daily basis is "The Boy Scout Rule."

"Always check a module in cleaner than when you checked it out." No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result?

Maybe these two key points that I mentioned are not as powerful as the boy scout rule, but I think if you keep them in your brain while learning TypeScript there might be a positive effect.

It has been one year since the last review, and it is again time to reflect on how the year went. The way I approach life is quite simple: do more things that bring positive feelings and try to minimize those that drain energy.

Freelancing - Overview

participated in Refresh conference (Tallinn) and attended various meetups

contributed to open-source projects

Consulting and Technology

Regarding my consulting offering, I haven't made drastic changes; I still focus on web development.

I like to spend about 80-90% of my time on the front-end and the rest on the back-end. The back-end work has been implementing REST APIs on an existing backend architecture.

From a technology standpoint, TypeScript, Visual Studio Code, Jest, Prettier, React, and Redux have brought a lot of joy to the coding process. Unlike Visual Studio proper, Visual Studio Code has been very stable, and I have been very impressed with their development speed and methods.

Other activities than consulting

I am a big fan of Mark Seemann's work. Not only does he know a ton, but he is also great at presenting complicated topics in an understandable format. Even though he is an active conference speaker, I had not had a chance to participate in workshops or talks given by Mark.

That's why it was inspiring to organize a workshop that I had always personally wanted to participate in!

I enjoyed the conversations that I had with the participants and with Mark. Organizing an event is very different from consulting work, so it brings a much-needed change to the daily routines.

The event got good reviews so it seems that I did something right as an organizer.

I wish I had given more talks at meetups; the only presentation I gave this year was at the React Helsinki meetup.

This year was the second time I went to the Refresh conference in Tallinn. It was a well-organized event, and I highly recommend it to anyone who is working, for example, on a SaaS product or in the world of front-end development.

What to look for in the year 2018

For some reason, Finland has an active meetup community, but the conference space has been quite inactive, or the conferences haven't been very interesting. React Finland is a conference that I am eagerly waiting for. The conference has plenty of exciting topics and a diverse set of speakers.

I would like to organize another workshop, but I first need to find a topic that lights my internal fire. It requires much effort to hold a workshop and get it fully booked.

Giving a meetup talk is always lovely, as people tend to come and talk after the presentation. I have fond memories, for example, from a Budapest Node.js meetup where I gave a presentation.

Freelancing - Time spent

This year, I worked 1034 hours. Last year, the number was 1142 hours. As long as the decrease is intentional from my side and not because of lack of work then all is good.

I have been billing by the hour since I started freelancing, but I am interested in trying other methods. What has been nagging at my brain is the thought: "What does the client get when they buy 100 hours of Tatu?" because I don't have any answer to that.

Note: the 1034 hours include only billable work; I did spend time on other work-related activities. In my last year's review, I wrote more extensively about what counts as work time.

Blogging

I have been less active in blogging, writing only 19 blog posts in 2017. In the beginning (2015 and 2016), I had more ideas on the backlog; now I write when I have discovered something new that I find interesting enough to spend time writing about.

Even though the numbers are lower than last year, I am super happy with the results, as most of the visitors are coming through search and not just from a Hacker News hit.

Blogging has also brought leads to my consulting business, and one of the leads might soon become a customer. Updating the blog has never been about business reasons, but of course I am super happy if it also brings a monetary reward.

Conclusion

The year 2017 wasn't exceptionally good or bad, quite an average year without any significant life-changing events. I think that is actually something to be happy about.

New projects, remote work, interesting technologies, and people make me excited about what 2018 has to offer!

If you have written your 2017 retrospect, I would like to read it! Send me a Tweet with a link.

https://www.triplet.fi/blog/linting-your-typescript-project-using-tslint/ (Thu, 07 Dec 2017)

Usage of TypeScript's powerful type system decreases specific types of bugs. Linting tools provide additional static code analysis to spot common mistakes and ensure consistency across a large codebase. In each project, the team decides which of the rules generate an error. In this blog post, I'll cover how to set up TSLint in your TypeScript project and show some example errors that TSLint spotted for us.

Installing TSLint on an existing project

I am not a big fan of global npm package installations; instead, I install TSLint as a devDependency. A local TSLint allows project members to have the same version on the current project, and therefore the output is consistent across team members and build tools.

yarn add tslint --dev
yarn tslint --project ./tsconfig.json

The first line installs tslint as a devDependency. The second line runs tslint using yarn; this way I don't have to write ./node_modules/.bin/tslint. As I have an existing TypeScript project, I point tslint to the project configuration file.

Configuring TSLint

With the init command, I could have created an empty configuration for tslint. As an example, I use my current settings file instead.

The configuration is a JSON file (by default it is named tslint.json). The JSON contains several keys, extends and rules being the most used ones.

In the example, I extend tslint's built-in configuration preset that contains the latest recommended rules. When I update tslint, I might get additional errors compared to the previous version, as new checks that might be recommended have been created.

I also use Prettier, and there is an npm package containing a configuration preset that disables the stylistic checks Prettier already fixes for me.

As this tslint-config-prettier is not built-in, I need to install it first.

yarn add tslint-config-prettier --dev

In the config file, the order of the extends array is significant. The latter entry ("tslint-config-prettier") will override rules from tslint:latest.

In the example configuration, I have disabled or changed some of the rules.
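A configuration along these lines shows the shape (the rule overrides below are illustrative, not the exact ones from my project): extends pulls in the presets in order, and rules holds the project-specific changes.

```json
{
  "extends": ["tslint:latest", "tslint-config-prettier"],
  "rules": {
    "no-console": false,
    "max-classes-per-file": [true, 3]
  }
}
```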

Rules

A rule is a single check that can be either built-in or a custom rule.

Rules can have a configuration, for example, to provide exceptions to the rule or varying levels of strictness. If the rule has a configuration, then it has a schema.

prefer-object-spread is a straightforward rule that has no configuration. The prefer-object-spread check has the description "Enforces the use of the ES2015 object spread operator over Object.assign() where appropriate." and the rationale "Object spread allows for better type checking and inference."

In the picture below, I have used TSLint together with Visual Studio Code using a fantastic 3rd party extension.

Behind the scenes, the VS Code extension is using TSLint, so I get the same errors as when using the yarn tslint command.

The screenshot has a light bulb, and by clicking it, I get a helpful context menu that allows me to fix the issue automatically! Again, this can also be done from the command-line.

prefer-object-spread is one of the rules that has an automatic fix available; rules with an automatic fix can be identified from the list by looking for the keyword "Has Fixer."

If, for some reason, I would like to disable that rule, I'll add a new property to the rules:
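Disabling the rule looks like this in tslint.json (setting the rule value to false turns the check off entirely):

```json
{
  "rules": {
    "prefer-object-spread": false
  }
}
```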

Or, I can add a comment that in the next line I want to use Object.assign instead of spread:

// tslint:disable-next-line:prefer-object-spread

Conclusion

Linting tools are not only useful for finding errors and providing consistency across a codebase; they are also great for learning.

JavaScript and TypeScript allow me to do all sorts of things, but that doesn't mean it is wise to do them even if possible. An example of a rule that tells something is possible but not advisable, or has a caveat, is the no-for-in-array rule. There is also a rule that looks for particular kinds of bitwise operators, as those are very rare in JavaScript projects and can be mixed up with logical operators (for example, & vs. &&). In our codebase, those bitwise operators were correctly used, so I marked the lines as valid instead of disabling the rule.

Let me know if you have any questions.

Happy linting!

https://www.triplet.fi/blog/recap-of-react-helsinki-meetup-talk/ (Fri, 10 Nov 2017)

I gave one of the talks at the React Helsinki meetup on the 8th of November 2017. The title of the talk wasn't very imaginative, but it was quite descriptive: "Lessons learned from three React + Redux projects." These lessons were learned the hard way, aka by making mistakes. If someone in the audience could avoid some of the pitfalls, money and time would be saved. It is not only my mistakes and my learnings; the React ecosystem has also changed some of the best practices over the years. In this blog post, I'll summarize some of the ideas shared in the talk.

Overview of the meetup and my talk

The event was organized by Finitec and hosted by Forenom. Everything went very smoothly, so big thanks for the whole crew!

It was tough to pick which topics to cover, as people have different experience levels and might be solving different kinds of problems than what I have experienced, but hopefully everyone got at least something.

I divided the talk into sections called Components and Store.

Components

What is the right size for a component?

When writing React or any other UI components from scratch, it might be difficult to decide what the right size for a component is. By component size, in this context, I mean the granularity of the UI elements.

To visualize why smaller (fine-grained) interfaces lead to a better outcome, Mark used Duplo and Lego bricks as an example. If you need to build a dragon using Duplo bricks, the result is not as good as with fine-grained Lego bricks.

I used the same analogy but in the context of React components. If you have a target outcome from a designer, you can think of this as the dragon in the Lego example. By using fine-grained components, you can achieve a better outcome than by using huge building blocks (Duplo).

Another way to think about React components is the single responsibility principle, which states that "A class should have only one reason to change." In the sentence, class can be replaced with component.

Which component type should I use?

I have had good experiences having only two types of components: classic components and stateless functional components. Apart from legacy types like React.createClass, the only thing missing from my list is PureComponent.

I use a classic component when:

React life-cycle events are needed, for example componentDidMount

I need component state

I use stateless functional components for the rest.

What I normally do is write the page until it is about 95% ready and then start to optimize it. Stateless functional components (SFCs) will re-render on any props change, and sometimes this is not optimal. As an SFC doesn't have life-cycle events like shouldComponentUpdate, I use a library called recompose to provide a higher-order component that allows writing different "rules" for updates.
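To illustrate the idea, here is a simplified sketch of what a recompose-style helper such as onlyUpdateForKeys does, not the library's actual implementation: a plain render function stands in for a React component, and the wrapper re-runs it only when one of the listed props changes.

```typescript
// Props are modeled as a plain string-keyed object for this sketch.
type Props = Record<string, unknown>;

// Returns a wrapper that "re-renders" only when one of `keys` changed.
function onlyUpdateForKeys(keys: string[]) {
  return (render: (props: Props) => string) => {
    let prevProps: Props | undefined;
    let lastOutput = "";
    return (props: Props): string => {
      const changed =
        prevProps === undefined ||
        keys.some((key) => props[key] !== prevProps![key]);
      if (changed) {
        lastOutput = render(props); // the expensive "render" call
      }
      prevProps = props;
      return lastOutput; // unchanged props reuse the cached output
    };
  };
}
```

With the real library, the same shape wraps an actual component; the point is that the update rule lives outside the component itself.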

Store

Make it as flat as possible

One of the big mistakes I made during my first React app was that I didn't transform the data I received from an AJAX call into a flat data structure.

One example of this was a view that had a complex tree control with unlimited depth. I used a legacy API that provided data matching very well what I needed to render; it had branches and leaves. I stored this tree structure in my store and started building up the UI tree.

Everything was good until I had to dispatch actions that affected one of the leaves many levels down in the tree, for example, selecting a leaf or expanding a branch. I sent an event that had the ID of the element that was selected.

The problems start to occur when you need to write a reducer for a deep data structure.

How do I find an item with a particular ID if it can be in any branch at any depth?

I could pass all the parents as an array, but it makes the components and actions much more complicated.

The better approach is to have a flat list of items; then searching by ID is much faster and easier to implement.

Note: one could also use an object where the ID is the key.

I would then create an unflatten function that builds the tree structure on the fly. The Redux documentation has a chapter called Computing Derived Data that is very helpful.
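A minimal sketch of the flat shape and an unflatten helper might look like this (names such as TreeItem and parentId are my illustrations, not from the original project):

```typescript
// Flat store shape: each item knows only its parent's id.
interface TreeItem {
  id: number;
  parentId: number | null;
  label: string;
}

interface TreeNode extends TreeItem {
  children: TreeNode[];
}

// With a flat list, a reducer can find any item directly by id...
function findById(items: TreeItem[], id: number): TreeItem | undefined {
  return items.find((item) => item.id === id);
}

// ...and a selector can rebuild the tree for rendering when needed.
function unflatten(items: TreeItem[]): TreeNode[] {
  const nodes = new Map<number, TreeNode>();
  items.forEach((item) => nodes.set(item.id, { ...item, children: [] }));
  const roots: TreeNode[] = [];
  nodes.forEach((node) => {
    if (node.parentId === null) {
      roots.push(node);
    } else {
      const parent = nodes.get(node.parentId);
      if (parent) {
        parent.children.push(node);
      }
    }
  });
  return roots;
}
```

The reducer stays trivial because selection or expansion only touches one item in the flat list; the tree exists only as derived data.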

The store is a good place to start adding (TypeScript) types

If you start using TypeScript or Flow, then the Redux store is an excellent place to start. Make the application state typed, add types to the reducer functions, and proceed from there to component props.
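As a sketch of where to start (the state, action, and reducer names here are hypothetical, and the reducer is hand-rolled so the example stays self-contained):

```typescript
// Typed application state.
interface AppState {
  readonly selectedId: number | null;
}

// A discriminated union of the actions the reducer understands.
type Action =
  | { type: "SELECT_ITEM"; payload: number }
  | { type: "CLEAR_SELECTION" };

const initialState: AppState = { selectedId: null };

// The reducer signature is fully typed: wrong action shapes won't compile.
function reducer(state: AppState = initialState, action: Action): AppState {
  switch (action.type) {
    case "SELECT_ITEM":
      // The compiler knows payload is a number in this branch.
      return { ...state, selectedId: action.payload };
    case "CLEAR_SELECTION":
      return { ...state, selectedId: null };
    default:
      return state;
  }
}
```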

I also mentioned the TypeScript FSA library that allows you to write typed actions from dispatching to receiving.

https://www.triplet.fi/blog/type-system-differences-in-typescript-structural-type-system-vs-c-java-nominal-type-system/ (Sun, 15 Oct 2017)

When a developer with a C# or Java background learns TypeScript, there is a temptation to write TypeScript in the same style as C# or Java. It is easy to lose TypeScript's beauty by not understanding the type system TypeScript offers and how it differs from languages already familiar to the developer. One of the differences is that TypeScript uses structural subtyping, while C# and Java both use a nominal type system. In this blog post, I'll cover the basics of both systems and a few examples. Enjoy!

Nomen est omen

Nominal refers to the Latin word nomen, the name. Many might be familiar with the phrase nomen est omen (the name is a sign) or nomen est omnis (the name is everything), and especially the latter phrase can be applied to the programming context. The name of the type defines whether the type is compatible or not, for example, as a method argument.

Let's look at a C# example: a 3rd party UI library contains a drop-down component, and it takes a list of options.

The TypeScript example has a few more lines because I made a mock UI control plus initialization. The main thing is that companies are valid options. If I changed a property name on Company to title, there would be an error, so I still have type-safety.
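Here is a compact sketch of that idea (the option and company shapes are my assumptions of the original example): the mock drop-down accepts anything with the right shape, so Company works without ever declaring that it implements SelectOption.

```typescript
// The "UI library" only cares about the shape of its options.
interface SelectOption {
  value: number;
  name: string;
}

// Mock drop-down control: renders anything shaped like SelectOption[].
function renderDropdown(options: SelectOption[]): string {
  return options.map((o) => `${o.value}:${o.name}`).join(",");
}

// Company never mentions SelectOption, but it is structurally
// compatible; extra members like city don't break compatibility.
interface Company {
  value: number;
  name: string;
  city: string;
}

const companies: Company[] = [
  { value: 1, name: "Triplet", city: "Helsinki" },
];
const markup = renderDropdown(companies);
```

In a nominal system like C#, Company would have to explicitly implement the option interface; structurally, the matching shape is enough, and renaming name to title would still be caught by the compiler.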

A bigger real-world example could be from the React app.

Let's imagine a scenario where I need to support different modes (read/edit) for each type of component. Each component should have read/edit modes defined.
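One way to model this in TypeScript is a discriminated union over a mode field (the prop names here are illustrative, and a plain function stands in for the React component):

```typescript
// Props for the read mode: display-only.
interface ReadModeProps {
  mode: "read";
  value: string;
}

// Props for the edit mode: the change handler exists only here.
interface EditModeProps {
  mode: "edit";
  value: string;
  onChange: (next: string) => void;
}

type FieldProps = ReadModeProps | EditModeProps;

function renderField(props: FieldProps): string {
  if (props.mode === "read") {
    return `<span>${props.value}</span>`;
  }
  // In this branch, the compiler knows onChange is available.
  return `<input value="${props.value}">`;
}
```

The union makes invalid combinations (for example, an onChange handler in read mode) unrepresentable, which an inheritance-based model would not enforce as cleanly.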

Conclusion

I hope the provided examples help in understanding the difference between nominal and structural typing. Maybe in your codebase there are unnecessarily complex mappings or inheritance that could be simplified using some of the benefits that structural typing brings.

Nominal typing is useful for preventing accidental type equivalence, which allows better type-safety than structural typing. The cost is reduced flexibility, as, for example, nominal typing does not allow new super-types to be created without modification of the existing subtypes.

It's our duty as developers to understand the pros and cons of each approach. Making proper use of the pros and understanding the cons leads to elegant solutions. Happy coding!

https://www.triplet.fi/blog/two-easy-ways-to-get-started-with-typescript-and-react/ (Fri, 22 Sep 2017)

Setting up a build process for a modern web application can be a tedious process. Knowing your tools is important when you're going to maintain the software. When you want to learn a programming language or experiment with an idea, you want to get started fast without learning all the nitty-gritty details of each step in the build process. In this blog post, I'll show two easy ways to bootstrap a TypeScript project with React installed.

Create-React-App-TypeScript

Create-React-App from Facebook is a way to get started with React without any configuration. CRA is a perfect starting point for JavaScript-based React development, as it gives a tested build configuration, a development server, environment-specific settings, etc.

Unfortunately, TypeScript users lose one of the main advantages, the zero configuration, as supporting TypeScript requires opening the black box and making configuration changes.

After requiring the fuse-box package, the initialization takes a source code folder (homeDir) and an output file. Note the $name, which is a variable that refers to the bundle name. The $name allows generating multiple output files into the same output folder.

The next step is to describe the bundle. Given a TypeScript entry file, FuseBox determines the dependency graph. For example, TypeScript files could require/import SCSS stylesheet files, and those would be bundled also. Requiring something other than TypeScript files requires adding a plugin, but most of the plugins need zero configuration.

Finally, by executing run, the bundle will be created.
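Putting those steps together, a fuse.js build script might look roughly like this (the paths, bundle name, and entry file are my assumptions; check the FuseBox documentation for the exact API of the version you use):

```javascript
// Illustrative fuse.js sketch of the steps described above.
const { FuseBox } = require("fuse-box");

// Initialization: source folder and output file ($name = bundle name).
const fuse = FuseBox.init({
  homeDir: "src",
  output: "dist/$name.js",
});

// Describe the bundle: the entry file from which the dependency
// graph is resolved (the ">" prefix means execute on load).
fuse.bundle("app").instructions("> index.tsx");

// Create the bundle.
fuse.run();
```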

Other alternatives

TypeScript's home page has an article called Integrating with Build Tools which covers all the popular build tools. The article only describes the TypeScript part of the build process and doesn't cover React (files with .tsx filename extension).

Webpack is currently a popular choice for module bundling. It can be a bit overwhelming with all its plugins and configurations, but the TypeScript team has written an article that covers TypeScript and React, which should get you to a productive phase quite quickly.

If you have found a tool that makes TypeScript and React set-up a breeze, let me know!

https://www.triplet.fi/blog/what-is-the-use-of-exclamation-mark-operator-in-typescript/ (Fri, 08 Sep 2017)

You might have seen an exclamation mark after a variable name, for example person!.name. The language feature is called Non-null assertion operator. What is the purpose of the operator and when should you use it? I try to answer those questions in this blog post by giving a use-case from a real-life project.

Non-null assertion operator

A new ! post-fix expression operator may be used to assert that its operand is non-null and non-undefined in contexts where the type checker is unable to conclude that fact. Specifically, the operation x! produces a value of the type of x with null and undefined excluded.

The description contains many fancy words, but in plain English it means: when you add an exclamation mark after a variable/property name, you're telling TypeScript that you're certain the value is not null or undefined.

Next, I'll show an example where I am sure the value is non-null, but I need to tell TypeScript explicitly.

The example usage

I had a React component that received translation functionality from an HOC (Higher-Order Component). Okay, that's a mouthful already; before going forward, I'll explain the sentence.

See how in MyComponentProps the translation function t is optional? The reason for making it optional is that users of the component should not have to provide the translation property, as it comes from the HOC.

I know that t is provided, so writing null checks like t ? t('my_translation_key') : '' in every place where I want translated text is really frustrating.

Instead, I can safely write t!('my_translation_key') without null or undefined checks.

In other words, the Non-null assertion operator decreased the possible types from TranslationFunction | undefined to just TranslationFunction.
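A condensed sketch of the situation (the names are illustrative, and a plain function stands in for the React component):

```typescript
type TranslationFunction = (key: string) => string;

interface MyComponentProps {
  name: string;
  // Optional because the HOC injects it at runtime; callers never pass it.
  t?: TranslationFunction;
}

function renderGreeting(props: MyComponentProps): string {
  // We know the HOC always provides t, so assert it is non-null instead
  // of writing `props.t ? props.t("greeting") : ""` everywhere.
  return props.t!("greeting") + ", " + props.name;
}
```

Without the `!`, strict null checking would reject the call because t's type is TranslationFunction | undefined; with it, the compiler treats t as TranslationFunction.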

https://www.triplet.fi/blog/react-higher-order-components-hoc-using-typescript/ (Fri, 25 Aug 2017)

Higher-order components and monads have something in common: both sound esoteric, but at least the higher-order component (HOC) is easy to explain. It is a function that takes a component and returns a new component. The reason for doing so is to provide shared functionality to multiple components. The React documentation has a lot of content on caveats, conventions, and examples. In this blog post, I'll focus more on an implementation using TypeScript.

Overview

HOCs are used to address cross-cutting concerns, which is a fancy term for shared functionality. Examples of cross-cutting concerns in software development are logging, security, data transfer, etc. Most of the HOCs come from 3rd party libraries. Many popular React libraries use HOCs, for example, react-redux or i18next-react. Because many libraries already solve the issue at hand, custom HOCs, in my experience, are implemented quite rarely.

What does the usage of an HOC look like? A simplified example from the i18next-react website:

I had a scenario where I had to add a token as part of the POST request headers. The token, in this case, was a security token created by ASP.NET MVC called Anti-Forgery Validation Token.

After dispatching the Redux actions that do the call to the API endpoint, I noticed repetition in the source code. One of the often-repeated code blocks was getting the token and placing it in the action payload. I didn't want to use a singleton or global state to provide the token, as it would make the code harder to test. Instead, I took it from the Redux store and explicitly added it to the requests that needed it.

All components that dispatched an async action had to have the following code:

I can make sure that the wrapped component's props contain the token by forcing its props to implement the WithToken interface. The Comp parameter is the component that will receive the shared functionality. As I am not using the component's state, Comp can be either a ComponentClass or a StatelessComponent.

Using the HOC is easy. Instead of directly using a component called MyComponent, I use the wrapped version.
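The overall shape can be sketched without React as follows (a "component" here is just a function from props to output, the store lookup is stubbed, and all names are illustrative):

```typescript
// The cross-cutting concern: components that need the anti-forgery token.
interface WithToken {
  token: string;
}

// Stand-in for reading the token from the Redux store.
function getTokenFromStore(): string {
  return "fake-token";
}

// A "component" in this sketch is a function from props to output.
type Component<P> = (props: P) => string;

// withToken wraps a component whose props include WithToken and returns
// a component that no longer needs the token passed in from outside.
function withToken<P extends WithToken>(
  Comp: Component<P>
): Component<Omit<P, "token">> {
  return (props) =>
    Comp({ ...props, token: getTokenFromStore() } as unknown as P);
}

interface MyComponentProps extends WithToken {
  label: string;
}

const MyComponent: Component<MyComponentProps> = (props) =>
  `${props.label} (token: ${props.token})`;

// Callers render the wrapped version and never provide the token.
const MyComponentWithToken = withToken(MyComponent);
```

The type-level payoff is in the return type: Omit removes token from the outer props, so the compiler rejects callers who try to pass it in themselves.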

Conclusion

I highly recommend reading the React documentation's chapter on higher-order components, as several caveats might not be obvious; for example, a section called "Don't Use HOCs Inside the render Method" describes one of them. After reading the chapter, I made some changes to my code as well.

Even though I enjoy using TypeScript, HOC type definitions sometimes make my head hurt, as the usage of the type system is quite advanced.

If you have any questions don't hesitate to ask, for example, via Twitter.