When I started this blog around nine years ago, my only intention was to share technical posts. But over time I started writing about a variety of things: productivity tips that I found useful, travelogues, random thoughts, personal goals, blogging, and so on. One thing I have noticed is that many people have been inspired by these posts and the photos I share online, and have been triggered to do similar things themselves.

I’ve had my own similar inspirations to start the various things that I do today. For instance, I started running after being inspired by my friends Satish, Suresh, and Thiru, and I reached out to them for tips when I started running a year ago. From running I moved on to cycling and a bit of swimming after seeing my friend Rahul. For travel, my inspirations have been Arun Sudheendran and Deepak Suresh, who do a fair bit of exploration; I tend to reach out to them for travel ideas and places to visit. Similarly, there have been inspirations from people that I have never met, or met just once or twice.

Below is a transcript of a chat with one of my readers, whom I have never met. It’s a great feeling to wake up to such messages, and it boosts your own motivation to continue what you are doing.

Social media plays a great role in spreading information these days. When you see people in your own circle of friends start doing things that you have always wanted to do, it gives you an extra push to give it a try. There might be some people who feel you are sharing too much that doesn’t interest them; for those, there is always the option to unfollow, mute, or filter. Don’t let that thought stop you from sharing the things that you do.

Such a small act of sharing, even of things that you might have seen someone else do, can add up and have a big impact on someone else - something often referred to as the Butterfly Effect.

The Butterfly Effect: This effect grants the power to cause a hurricane in China to a butterfly flapping its wings in New Mexico. It may take a very long time, but the connection is real. If the butterfly had not flapped its wings at just the right point in space/time, the hurricane would not have happened. - Chaos Theory

Of late I have been working for multiple clients at the same time. Different clients have different development environments, which has forced me into using Virtual Machines (VMs) for my day-to-day work. I will cover my actual setup and new way of working with VMs in a different post.

When working in a VM, I often have to switch to the host machine for email, chat, and a few other programs that live only on my host machine. Minimizing the VM is time consuming and context breaking if you are working off a single screen. On a multi-monitor setup you can always have the VM on one screen and the host on the other, but this can still get tricky if you have more than one VM connected.

The Virtual Desktops feature in Windows 10 is of great help in this scenario. You can move between desktops using keyboard shortcuts (Ctrl + Win + Right/Left Arrow). But with a VM running on a separate Virtual Desktop, any key presses get picked up by the VM operating system and not by the host, which means you cannot use the keyboard shortcuts to switch desktops from inside a VM. However, you can move between desktops using the four-finger swipe gesture on your touchpad (if it is supported). Unlike the keyboard shortcuts, these swipe gestures are picked up only by the host OS, so even when you are inside a VM, a four-finger swipe tells the host OS to switch desktops. This lets you easily navigate between VMs running on different Virtual Desktops.

Coffs Harbour is one of the places I have liked the most across all my trips. It’s been almost a year since I made the trip, and the memories are still fresh. The beaches are great, especially Jetty Beach. The rainforest walk in Dorrigo was the best I have had to date, especially because of the rain the night before. Coffs Harbour is perfect for a 3-4 day trip, and there are a lot of places to visit around it.

We headed off to Port Macquarie to celebrate Gautham’s birthday. Gautham likes strawberries a lot, which is why we chose Port Macquarie. Ricardoes Tomatoes & Strawberries is located just ten minutes north of Port Macquarie and offers a unique pick-your-own-strawberries experience. You can spend around 2-3 hours here, and make sure you don’t miss the scones from the cafe. Port Macquarie is also a great place for whale watching, and we headed off on an early morning trip to be with the whales. The boat ride (PortJet) is an experience in itself, and to our luck we were able to see around three whales up close. We also went to Dooragan National Park, Kattang, Perpendicular Point, and Charles Hamsey lookout.

The Grand Pacific Drive makes a great one-day trip, as well as a multi-day trip for those who want to take their time along this stretch of coast. Starting from the Royal National Park and stretching all the way to the Sapphire Coast, it is a great drive with beautiful scenery and a lot of places to visit. The Grand Pacific Drive site has all the details you need to plan your trip, including a trip planner that makes planning easier. If you want to cover most of the places along the way in a single trip, it is best to give it 2-3 days. During my trip, I stopped over at Wollongong and only made it as far as Kiama.

Just 90 minutes from Sydney by car, the Blue Mountains have a lot of attractions worth visiting, making them a good place for an extended weekend trip. Wentworth Falls, Echo Point, and the Three Sisters are some of the popular lookouts. Scenic World offers some good rides and entertainment for kids; I liked the world’s steepest incline railway ride in particular, though the entry tickets are a bit overpriced.

Jenolan Caves is another one-hour drive from the Blue Mountains and is a must-do. It’s great for people of all ages, and if you have kids they will love it. Make sure you check the different cave options and choose one that fits the people in your group. Booking a spot in advance helps, and make sure you arrive on time - the drive up can be a bit slow, so leave enough buffer before your cave walk starts.

Nelson Bay, Hunter Valley, Orange, Port Stephens, etc. are some of the places on our list that we could not make it to yet. I moved to Brisbane at the end of last year and am not sure when I will have another chance to explore more around Sydney. But I have new places to look forward to now - exploring Brisbane!

I was given a console application written in .NET Core 2.0 and asked to set up a continuous deployment pipeline using TeamCity and Octopus Deploy. I struggled a bit with some parts, so I thought it worth putting together a post on how I went about it. If you have a better or different way of doing things, please shout out in the comments below.

At the end of this post, we will have a console application that is automatically deployed to a server and running, anytime a change is pushed to the associated source control repository.

Setting Up TeamCity

The first three build steps use the .NET CLI to Restore, Build, and Publish the application. These three steps restore the project’s dependencies, build it, and publish all the relevant DLLs into the publish folder.
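If you were to run the same steps by hand, the equivalent .NET CLI commands look roughly like this (the Release configuration and the publish output folder are example values):

dotnet restore
dotnet build --configuration Release
dotnet publish --configuration Release --output publish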

The published application now needs to be packaged for deployment. In my case, deployments are managed using Octopus Deploy. For .NET projects, the preferred way of packaging for Octopus is OctoPack. However, OctoPack does not support .NET Core projects; the recommendation is to use either dotnet pack or Octo.exe pack. Using the latter, I set up a Command Line build step to pack the contents of the publish folder into a NuGet package (a .nupkg file, which is just a zip underneath).
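The pack command I set up looks something like the below; the package id is a placeholder, and %build.number% is TeamCity’s build number parameter:

Octo.exe pack --id=MyConsoleApp --version=1.0.%build.number% --basePath=publish --outFolder=artifacts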

The NuGet package is published to the NuGet server used by Octopus. Using the Octopus Deploy: Create Release build step, a new release is triggered in Octopus Deploy.

Setting Up Octopus Deploy

Create a new project in Octopus Deploy to manage deployments. Under the Process tab, I have two steps - one to deploy the Package and another to start the application.

For the Deploy Package step, I have enabled Custom Deployment Scripts and JSON Configuration Variables. In the pre-deployment script, I stop any existing .NET applications. If multiple .NET applications run on the box, make sure you target your application explicitly.

Pre Deployment Script

Stop-Process -Name dotnet -Force -ErrorAction SilentlyContinue

Once the package is deployed, the custom script starts up the application.

Run App

cd C:\DeploymentFolder
Start-Process dotnet .\ApplicationName.dll

With all that set up, any time a change is pushed to the source control repository, TeamCity picks it up, builds it, and triggers a deployment to the configured environments in Octopus Deploy. Hope this helps!

Often when working with SQL queries, I come across the need to capitalize SQL keywords across a large query - for example, to capitalize the SELECT, WHERE, and FROM clauses. When it is a large query or stored procedure, this is faster done in a text editor. Sublime Text is my preferred editor for this kind of text manipulation.

Sublime Text comes with a few built-in text casing converters for converting text from one case to another. Combined with the simultaneous editing feature, case conversion makes it easy to manipulate large documents.

For example, let’s say I have the below SQL query. As you can see, the SELECT and FROM keywords are cased differently across the query.
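Something along these lines (a made-up query just for illustration):

select Id, FirstName
FROM Customers
where Id in (SELECT CustomerId from Orders WHERE Total > 100)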

To standardize this (preferably capitalize them all), highlight one of the ‘select’ keywords and select all occurrences of it (Alt + F3). Once all occurrences of ‘select’ are highlighted, bring up the command palette (Ctrl + Shift + P on Windows) and search for ‘Convert Case’. From the options listed, choose the case you want to convert to. All selected occurrences of the keyword will now be in the chosen case.

Hope this helps you when you have a lot of text case manipulations to do.

2017 was a transformational year. Regular exercise and healthy eating helped me lose around 20 kilos. Lots of travel and blogging made it an excellent year. Reading, photography, and open source did not go that great. Looking forward to 2018!

What went well

Blogging

It has been both a good and a bad year as far as this blog goes. Including the ‘Tip of the Week’ series, I wrote seventy-six posts this year, an average of over six posts per month. This is the good part, as it is well past the minimum of four posts a month set as a goal last year. But looking at the actual posts-per-month graph below, it is clear that I fell short on a month-by-month basis. Up until August I had a steady stream of posts, after which it started dropping off, with even a month (November) with no posts at all. Mainly it’s my laziness to blame, but I can also point to reasons like the Vietnam trip, moving to Brisbane, etc.

Running

I had started running towards the end of December 2016, and running was one of my goals for 2017. It has seen good improvement: I ran over 750 kilometers, including a half marathon. I am yet to participate in any running events and am planning to in the coming year. I have also started cycling, which is an excellent way to cross-train.

Travel

We did our first international holiday, ten days in Vietnam, and it was a great experience. We also went around Australia visiting the Blue Mountains, Canberra, Port Macquarie, and Brisbane, with lots of one-day trips around Sydney. Mandarin picking, strawberry picking, and whale watching were some of the top activities of the year.

What didn’t go well

Photography

One trip every three months, along with posting photos, was another goal. The travel part went well (see above), but my DSLR always remained in the bag. Except for a few pictures on the phone camera, there was not much photography done.

Goals for 2018

Blogging: Stick to four posts a month. Need to get back on schedule.

Running: Attend a few running events. Run a marathon.

Swimming: Having started cycling along with running has got me thinking about a triathlon. The only thing in between is swimming, and I have no clue how to swim. Learning to swim is one of the key goals for the upcoming year. The target is to be able to swim one kilometer.

Open Source: Start working on a side project. Need to find a matching project first.

I recently upgraded from my Forerunner 630 to a Garmin Fenix 3 HR. After a few runs with the Fenix 3, I realized that in Training Mode it does not do auto lap. I have a custom training workout for a 10k with no repeat steps in it. This workout was what I used on my FR630, and it used to auto lap at 1 km. That no longer happens on the Fenix 3 HR.

Software Details

Fenix 3 HR: 4.70

Forerunner 630: 7.50 (bdd586f)

After googling around, I understood that auto lap under training mode is available only on specific models/software versions. One reasoning behind this is that auto lap might create issues for people training in intervals larger than 1 km: breaking into laps at every 1 km would make it harder, nearly impossible, to compare their intervals. For workouts where you want auto lap at 1 km (or any custom distance), you can use the Repeat feature as shown below. Setting up the workout as 10 x 1 km helps to analyze the run at 1 km intervals.

Depending on the model/software version of your Garmin watch, you might have to tweak your workout plans. Hope this helps!

At one of my clients, there was a requirement to schedule various rules that send out alert messages via SMS, email, etc. A Rule consists of the below (and a few other) properties:

Stored Procedure: The stored procedure (yes, you read that correctly) to check whether an alert needs to be raised.

Polling Interval: The time interval at which a Rule needs to be checked.

Cool-Off Period: The time to wait before running a Rule again after an alert was raised.

All Rules are stored in a database. New rules can be added and existing ones updated via an external application. Since the client is not yet in the cloud, Azure Functions, Lambda, WebJobs, etc. are out of the question. It needs to be a service running on-premises, so I decided to make it a Windows Service.

Because of my past good experiences with Hangfire, I initially set off using that, only to soon discover that it can schedule jobs only down to the minute. Even though second-level scheduling is a feature that has been discussed for a long time, it is yet to be implemented. Since some of the rules are critical to the business, the client wants to be notified as soon as possible, which means having a polling interval in seconds for those rules.

After reaching out to my friends at Readify, I decided to use Quartz.NET. Many had good experiences using it in the past and recommended it highly. Another option that came up was FluentScheduler; there was no particular reason for choosing Quartz.NET over it beyond the recommendations.

Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large-scale enterprise systems.

Setting up and getting started with the Quartz scheduler is fast and easy, and the library has well-written documentation. You can update the application’s configuration file to tweak various attributes of the scheduler.

The RAMJobStore setting indicates the store used for holding jobs. Other job stores are available if you want jobs to persist across application restarts.
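As a rough sketch, the quartz section of the App.config for an in-memory setup could look like the below (the instance name and thread count are example values):

<configSections>
  <section name="quartz" type="System.Configuration.NameValueSectionHandler, System" />
</configSections>
<quartz>
  <add key="quartz.scheduler.instanceName" value="RulesScheduler" />
  <add key="quartz.threadPool.threadCount" value="5" />
  <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
</quartz>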

Setting Up Jobs

There are three jobs - an Alert Job, a Cool-Off Job, and a Refresh Job - set up for the whole application. The Alert and Refresh Jobs are scheduled on application start; the Cool-Off Job is triggered by the Alert Job as required. Any data that a job needs is passed in using a JobDataMap.
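As a minimal sketch (assuming Quartz.NET 3.x, and a Rule class whose Id and PollingIntervalSeconds names are mine, not the actual code base), scheduling the recurring Alert Jobs could look like this:

using System.Collections.Generic;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class Rule
{
    public int Id { get; set; }
    public int PollingIntervalSeconds { get; set; }
}

public static class RuleScheduler
{
    // Schedules one recurring AlertJob per rule (sketch).
    public static async Task ScheduleRules(IEnumerable<Rule> rules)
    {
        var scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        foreach (var rule in rules)
        {
            var job = JobBuilder.Create<AlertJob>()
                .WithIdentity($"AlertJob-{rule.Id}")   // job key derived from the Rule Id
                .UsingJobData("RuleId", rule.Id)       // available to the job via JobDataMap
                .Build();

            var trigger = TriggerBuilder.Create()
                .StartNow()
                .WithSimpleSchedule(s => s
                    .WithIntervalInSeconds(rule.PollingIntervalSeconds)
                    .RepeatForever())
                .Build();

            await scheduler.ScheduleJob(job, trigger);
        }
    }
}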

Alert Job

The Alert Job is responsible for checking the stored procedure and sending the alerts if required. If an alert is sent, it schedules the Cool-Off Job and pauses the current job. The DisallowConcurrentExecution attribute prevents multiple instances of a Job with the same key from executing concurrently. We explicitly set the Job Key based on the Rule Id, which prevents duplicate messages being sent if a job instance takes longer to execute than its polling interval.
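A trimmed-down sketch of the job class; the stored procedure check and alert sending are stubbed out, and the cool-off scheduling is covered in the next section:

using System.Threading.Tasks;
using Quartz;

[DisallowConcurrentExecution] // no two runs of the same job key can overlap
public class AlertJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        var ruleId = context.JobDetail.JobDataMap.GetInt("RuleId");

        // Run the rule's stored procedure and send alerts if needed (stubbed here).
        var alertSent = await CheckRuleAndSendAlertAsync(ruleId);

        if (alertSent)
        {
            // Pause this job; the Cool-Off Job resumes it after the cool-off period.
            await context.Scheduler.PauseJob(context.JobDetail.Key);
        }
    }

    private static Task<bool> CheckRuleAndSendAlertAsync(int ruleId) =>
        Task.FromResult(false); // placeholder for the real stored procedure call
}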

Cool-Off Job

The Cool-Off Job is a one-time job scheduled by the Alert Job after an alert is sent successfully. It is scheduled to start only after the cool-off time configured for the alert has passed, and it resumes the original rule job so it continues execution.
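Scheduling the one-off job with a delayed start, and the job itself, could look roughly like this (a sketch; the cool-off period is assumed to be in minutes):

// Inside AlertJob, after an alert was sent (coolOffMinutes comes from the Rule).
var coolOffJob = JobBuilder.Create<CoolOffJob>()
    .WithIdentity($"CoolOffJob-{ruleId}")
    .UsingJobData("RuleId", ruleId)
    .Build();

var coolOffTrigger = TriggerBuilder.Create()
    .StartAt(DateBuilder.FutureDate(coolOffMinutes, IntervalUnit.Minute)) // fires once
    .Build();

await context.Scheduler.ScheduleJob(coolOffJob, coolOffTrigger);

public class CoolOffJob : IJob
{
    // Resumes the paused Alert Job once the cool-off period has elapsed.
    public async Task Execute(IJobExecutionContext context)
    {
        var ruleId = context.JobDetail.JobDataMap.GetInt("RuleId");
        await context.Scheduler.ResumeJob(new JobKey($"AlertJob-{ruleId}"));
    }
}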

Refresh Job

The Refresh Job is a recurring job that polls the database for any changes to the Rules themselves. If a change is detected, it removes the existing schedule for that alert and adds the updated alert job.

With these three jobs, all the rules get scheduled at application start and run continuously. Any time a rule itself changes, the Refresh Job picks it up within the interval it is scheduled for.

Tip: If there are a lot of rules with the same polling interval, it is good to stagger their start times using a delayed start per job instance, so that all the jobs do not fire at the same time (see the sketch below).
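Extending the scheduling loop from earlier, one way to do that is to offset each trigger’s start time by a few seconds:

// Stagger start times so rules with the same interval don't all fire together (sketch).
var offsetSeconds = 0;
foreach (var rule in rules)
{
    var trigger = TriggerBuilder.Create()
        .StartAt(DateTimeOffset.UtcNow.AddSeconds(offsetSeconds += 5)) // 5 s apart per rule
        .WithSimpleSchedule(s => s
            .WithIntervalInSeconds(rule.PollingIntervalSeconds)
            .RepeatForever())
        .Build();

    // build the AlertJob as before and call scheduler.ScheduleJob(job, trigger)
}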

So far I have found the Quartz library stable and reliable and have not faced any issues with it. The library is also quite flexible and adapts well to different needs.

I was recently playing around with the MessageMedia API, trying to send an SMS and get the status of the sent SMS. Sending the SMS and getting the status of the last sent message always happened in succession when testing manually: once I sent the message, I waited for the API response, grabbed the message id from it, and used that to form the get-status request.

Postman is a useful tool if you are building or testing APIs. It allows you to create, send, and manage API requests.

I added two requests to a collection in Postman - one to send a message and the other to get the message status. I created an environment variable to hold the message id. On the request that sends a message, the below Test snippet is added; it parses the response body, extracts the message id of the last sent message, and saves it to the environment variable. A Test snippet always runs after the request completes.
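The snippet looks roughly like the below; the shape of the response (a messages array with a message_id on its first element) is an assumption based on my send request, so check yours:

// Tests script on the Send Message request (sketch).
var body = pm.response.json();
pm.environment.set("messageId", body.messages[0].message_id);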

The Get Message Status request uses the messageId from the environment variables to construct its URL, which looks like the below.

https://api.messagemedia.com/v1/messages/{{messageId}}

When this request executes, it fetches the messageId from the environment variable, which was set by the previous request. You no longer have to copy the message id manually and paste it into the URL. This is how you chain data from one request to another. Chaining requests is also useful in automated testing with Postman. Hope this helps!

At times you might need to extract data from a large piece of text. Let’s say you have a JSON response and want to extract all the id fields in it and combine them as comma-separated values. Here’s how you can easily do that using Sublime Text (or any other editor that supports simultaneous editing).
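For illustration, given a response like the below (made up for this example), the goal is to end up with 101, 102, 103:

{
  "items": [
    { "id": 101, "name": "Alpha" },
    { "id": 102, "name": "Beta" },
    { "id": 103, "name": "Gamma" }
  ]
}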

Again, the key here is to select the recurring pattern first - in this case “id”: - and then select all occurrences of it. Once all occurrences are selected, expand each selection to the whole line and extract those lines out. Repeat the same approach to remove the id text, and then follow the same steps we used earlier to combine the text.