Daniel Pamich

I had to rebuild one of my servers recently, so I decided to have a look at the wear level of its primary SSD. I was very surprised by the usage and wear stats after 7 years of use. I would have thought it would be getting close to its end of life; it turns out it's almost the opposite.

I bought the Intel X25-M SSD soon after its initial release, after a lot of positive reviews. It cost me around $1000 and was worth every penny.

Initially it was the boot drive for my development PC. I remember being blown away by just how fast this drive was; it was a game changer. As I have needed more storage space, I have swapped this drive out for more modern and larger SSDs.

Since this drive was an excellent performer, I have used it as the boot drive and SQL data drive for one of my servers. For the past 5 years it has run as a Continuous Integration/Deployment server running Windows Server, SQL Server, Jira, Bamboo and, more recently, Octopus.

Today, as I am rebuilding the server, I thought I would look at its wear stats. After 7 years of usage, I expected the SSD to be close to its end of life.

Well, it still has 96% of its life left! So after 7 years of usage it has about another 168 years of life in it!

It has written 16.16 TB of data during that time. That's over 100 times its storage capacity.
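As a quick sanity check, the 168-year figure falls out of a simple linear extrapolation (assuming, of course, that wear keeps accumulating at the same rate as the last 7 years):

```python
# Back-of-the-envelope check of the drive-life extrapolation.
# Assumes wear accumulates linearly with writes - a simplification.
years_used = 7
life_remaining = 0.96           # 96% life left, per the wear stats
life_used = 1 - life_remaining  # 4% consumed in 7 years

years_remaining = years_used * life_remaining / life_used
print(round(years_remaining))   # 168 more years at the same write rate
```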

When I first got SSDs for my computer I was worried about how long they would last. The reality is that they will last for a very long time! On a side note, all the standard hard drives I bought back then have failed.

The SSDs I have are way more stable and have better data security than any of the hard drives I have owned. I always run hard drives in RAID 5 or mirrored mode to ensure data safety, but with SSDs I don't worry about needing to do this. Of course, I still ensure my SSDs are fully backed up off site via CrashPlan.

Continuous Deployment is an incredibly stable and versatile method of designing software. The beauty of this approach is that any task, at any time, could be interrupted by an update, yet despite these interruptions the system maintains constant integrity and stability.

This is because Continuous Deployment software has been designed from the ground up to manage interruptions automatically. Building your software using Continuous Deployment methods will ensure your software is robust. Thus, whether planned interruptions take place or unexpected system failures occur (e.g. hard drive problems), the software will maintain constant integrity and stability.

A key part of Continuous Deployment is the appropriate design of long running batch tasks. A long running batch task is one that can take anywhere from minutes to hours to complete (compared to short running tasks that take seconds).

Processing Long Running Batch Tasks

Long running batch tasks have three main attributes:

Each task stops quickly on request. The quicker the better, so the system can finish the update; ideally this should be under 10 seconds. Longer running tasks will need to be terminated to allow the deployment to continue.

Each task's output is a single database transaction. This means that if the processing is stopped, the database is left in a consistent state.

If you are calling third party services, the results from those services should be logged, so if a task needs to restart it can check the status of the third party calls and avoid making them two or more times. For example, you only want to process the credit card once!
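To make that concrete, here is a minimal sketch of logging third party results so a restarted task never repeats them. The names (`charge_card`, the in-memory log) are made up for illustration; in a real system the log would live in the database:

```python
# Sketch: record third-party call results so a restarted task can reuse
# them instead of calling the service again.
calls_made = []       # tracks how often the "gateway" is actually hit
completed_calls = {}  # order_id -> logged result (would be a DB table)

def charge_card(order_id, amount):
    # Stand-in for the real payment gateway call.
    calls_made.append(order_id)
    return f"charged {amount} for {order_id}"

def charge_once(order_id, amount):
    # Check the log first: if the call already ran, reuse the recorded
    # result rather than charging the card a second time.
    if order_id in completed_calls:
        return completed_calls[order_id]
    result = charge_card(order_id, amount)
    completed_calls[order_id] = result
    return result

first = charge_once("order-42", 100)
second = charge_once("order-42", 100)  # restart scenario: no second charge
print(first == second, len(calls_made))  # True 1
```

The second call returns the logged result, so the card is only ever charged once no matter how many times the task restarts.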

For example, if you have to process 500,000 price updates, the first task would be to batch these updates into smaller chunks that can be processed quickly, e.g. in under 10 seconds. I have often found the best batch size to be 1. With larger batch sizes, if the processing is stopped you need to work out where you left off.

The basic processing loop

All the batch processing steps, from initial setup to processing to clean up, need to follow the basic processing loop below.

Since your code can restart at any time, when the code is restarted it needs to be able to do three things:

Work out where it left off

Do any clean up

Start the next processing step

The important part is the ability for the code to work out (1) where to restart the process and (2) whether any clean up needs to be done.
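Sketched in code, the restart logic looks something like this. The `state` dict stands in for a database table that would survive a real restart, and the doubling step is just a placeholder for real work:

```python
# Minimal restartable loop: progress is persisted after every step, so a
# restart can (1) work out where it left off, (2) clean up any partial
# work, and (3) carry on with the next processing step.
def run_batch(items, state):
    # (1) work out where we left off
    start = state.get("next_index", 0)
    # (2) clean up partial output from an interrupted step
    state.pop("partial", None)
    # (3) process the remaining steps, committing progress after each one
    for i in range(start, len(items)):
        state["partial"] = items[i]            # mark work in progress
        state.setdefault("done", []).append(items[i] * 2)
        state["next_index"] = i + 1            # "commit" the step
        state.pop("partial", None)
    return state

# Pretend the process was interrupted after the first two items.
state = {"next_index": 2, "done": [2, 4]}
run_batch([1, 2, 3, 4], state)
print(state["done"])  # [2, 4, 6, 8] - items 1 and 2 were not reprocessed
```

Because progress is committed after every item, restarting never produces double entries and never loses more than the in-flight step.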

The clean up stage refers to removing any partial processing from the system, so that when the process resumes no double entries occur. However, this isn't so much of a problem if all the work is on a database: it can be covered by a single database transaction, because if the processing is stopped all the changes will be automatically rolled back.

When you start using queues and files for storage, some thought needs to go into how your code will pause and resume processing. This effort will result in your code being super robust.

Batch Processing Example

Let's look at an example. You have a customer who FTPs a file to you nightly for processing. The following is an overview of the process. The first two steps are Batch Creation and Batch Item Processing.

Batch Creation

Below is a simple process for initializing a batch, to prepare it for processing.

This is the processing loop to prepare a Batch for processing. This process can be paused at any time and when restarting the system will pick up where it left off.

Since the system only deletes the FTP file once all the batch steps have been created in the database, we can be sure everything is ready to be processed once the file has been deleted. Then the system can move on to processing the batch items.
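A minimal sketch of that ordering guarantee, with the FTP folder and database simulated by plain lists (all names here are made up for illustration):

```python
# Sketch of batch creation: the uploaded file is only removed once every
# batch item exists in the "database", so a crash midway simply re-runs
# the idempotent creation step on restart.
def create_batch(ftp_files, db_items, filename, lines):
    if filename not in ftp_files:
        return  # file already consumed: batch creation finished earlier
    for line in lines:
        if line not in db_items:     # idempotent: skip rows already created
            db_items.append(line)
    ftp_files.remove(filename)       # safe: every item is now in the database

ftp = ["prices.csv"]
db = ["row-1"]  # a previous run was interrupted after one row
create_batch(ftp, db, "prices.csv", ["row-1", "row-2", "row-3"])
print(db, ftp)  # ['row-1', 'row-2', 'row-3'] []
```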

Batch Item Processing

Below is a simple process for processing a single item in the batch.

If you are using database transactions, then you will probably never need to worry about the clean up and rollback steps: if the processing was interrupted, the database will have taken care of that for you automatically.
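Here is that idea sketched with an in-memory SQLite database (the table and columns are invented for the example; the principle carries over to SQL Server):

```python
# Sketch: each batch item is a single database transaction, so an
# interruption rolls the in-flight item back automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (sku TEXT PRIMARY KEY, price REAL)")

def process_item(conn, sku, price):
    # "with conn" opens a transaction: commit on success, rollback on error.
    with conn:
        conn.execute("INSERT INTO prices (sku, price) VALUES (?, ?)",
                     (sku, price))

process_item(conn, "sku-1", 9.99)
try:
    process_item(conn, "sku-1", 9.99)  # simulated failure: duplicate key
except sqlite3.IntegrityError:
    pass  # the failed transaction rolled back; the first row is untouched

rows = conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0]
print(rows)  # 1
```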

If you follow these methods you will create robust software that can be used with continuous deployment.

I have been using Continuous Deployment for over 2 years now. It has completely changed how I write code and manage failures. The systems I design and build are now a lot more stable, thanks to Continuous Deployment.

There are two parts to continuous deployment: the updates themselves, and the design of the code to manage the temporary outages during the updates. A Continuous Deployment update is just an interruption to the code, a kind of failure. However, designing the code to handle these failures forces you to think about all the other failures too, as they could potentially occur during an update.

Some people think you can't use Continuous Deployment because the updates will break the user experience, and that therefore you shouldn't use it for sites that need to remain continuously available, e.g. shopping sites. However, I believe continuous deployment forces you to design the code in a way that makes it more stable and available overall.

Continuous Deployment forces you to design code that can cope with a service being temporarily unavailable. There are a number of design patterns and architectural techniques that allow the code to cope with these unavailable services, and thus not be affected by Continuous Deployment updates.

Therefore, Continuous Deployment is not only about rapid updates to your sites or code but also about failure management. Using Continuous Deployment forces you to think about how your code will fail. During an update, services will be temporarily unavailable, so your code needs to be stable when and if these issues arise.

For example, what if the user clicks on checkout just when your web server is updating? Uh oh! Well, if you made the call via Ajax, then on failing to reach the primary web server the code could send the transaction to a second server (perhaps for processing once everything is back up) and return the user to the confirmation screen. Interestingly, if this is coded correctly, the user would not even be aware that the primary service was unavailable due to an update!

High quality code should be able to survive a temporary interruption to service. It lets the user have an uninterrupted experience, no matter what updates are happening in the background. The Ajax example is one of many techniques you could use to create code that can withstand interruption.
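A stripped-down sketch of the fallback idea, with the two servers played by plain functions rather than real HTTP calls (everything here is hypothetical):

```python
# Sketch of the checkout fallback: try the primary server, and if it is
# mid-update, hand the transaction to a secondary server for processing
# once the primary is back.
def primary(order):
    raise ConnectionError("server is being updated")  # simulated outage

def secondary(order):
    return {"status": "queued", "order": order}  # held for later processing

def checkout(order):
    try:
        return primary(order)
    except ConnectionError:
        # The user still reaches the confirmation screen; the update
        # stays invisible to them.
        return secondary(order)

result = checkout({"id": 7})
print(result["status"])  # queued
```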

For code to be robust it needs to be able to handle all types of failures, since at the end of the day a failure will occur. Setting the code up to handle continuous deployment means all these potential failures will have been thought about already.

The only difference is how well your code manages the failure.

Some common failures:

Server Failure

Network Failure

Lost Request

Overloaded Errors

Software Updates

etc

Continuous Deployment is just another failure, but using it forces you to think about all the other failures too, as they could potentially occur during an update. This not only makes your code more robust, it also increases your uptime for clients. This is because most failures are now managed, so when the worst case failures occur your code can already cope with almost all of them.

Netflix actually take this to the next level: they have a tool called Chaos Monkey which actively breaks parts of their system by taking down servers, turning off network cards and so on, just to make sure their code can handle failures. The clients are unaware and the service is seamless.

Make the jump. Switch to Continuous Deployment and watch your code's quality and robustness increase. It's a win-win.

Thoughts on Continuous Deployment and Failure Management was last modified: January 25th, 2017 by Daniel Pamich

Which is better for modern web development: Stored Procedures or Entity Framework?

I will compare these approaches using the following criteria:

Performance

Security

Business Logic

Code Discovery

Refactoring

Amount of Code

Scaling

Performance

SQL is a language that was written to allow processing of sets of data, e.g. update all the records in a table to today's date. This is great if you have massive sets of data that need processing. The thing is, most LOB (Line of Business) apps tend to update only one row of a table at a time, so the full power of Stored Procedures is rarely used.

SQL Server can pre-compile Stored Procedures for performance. When Entity Framework (EF) sends a request via a parameterized query, this is also compiled and cached by SQL Server. The result is that Stored Procedures and EF queries have the same performance.

Also, it is just as easy to write badly performing Stored Procedures as EF queries. In the past I have managed to speed up both SPs and EF queries by 100x to 1000x.

I have seen a lot of dynamic SQL code in Stored Procedures, since Stored Procedures can be hard to write. The problem is, as soon as you use dynamic SQL in SPs they become slower, since SQL Server now has to compile the query each time.

Both can perform well, but only if you understand SQL Server and SQL queries really well, as the same SQL mistakes can make either approach slow.

Security

SQL Server provides a comprehensive security system, but I have yet to see a web system that doesn't use a single login to access the SQL Server, which renders most of the SQL security inactive.

Instead, most applications move the security model into code, which makes SPs and EF equal in terms of user security.

One issue that I have come across recently is the use of dynamic SQL in stored procedures, which introduces two problems: the first being performance, since SQL Server can no longer use a precompiled plan, and the second being that the code is now vulnerable to SQL injection attacks! The dynamic code was used because the programmer had difficulty getting SQL to do what they required, and dynamic SQL was the easiest workaround. Unfortunately, it also introduced major security problems.
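To see why dynamic SQL is so dangerous, here is a small demonstration using SQLite rather than SQL Server (the principle is identical; the `users` table is made up for the example):

```python
# Demonstration: dynamic SQL built by string concatenation is injectable,
# while the parameterized form treats the same input as a plain value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

malicious = "x' OR '1'='1"

# Dynamic SQL: the attacker's input becomes part of the query text and
# matches every row, bypassing the name check entirely.
dynamic = conn.execute(
    f"SELECT COUNT(*) FROM users WHERE name = '{malicious}'"
).fetchone()[0]

# Parameterized query: the same input is bound as a literal value and
# matches nothing. The database can also cache the plan for this form.
safe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (malicious,)
).fetchone()[0]

print(dynamic, safe)  # 1 0
```

The dynamic version "finds" the row despite the bogus name; the parameterized version correctly finds nothing.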

Business Logic

I am working on a number of applications at the moment, one based on EF and another on SPs.

With the EF app, the business logic resides in the C# code, which makes it trivial to find and navigate all the relevant pieces of code. Visual Studio and ReSharper both provide excellent tools for searching code.

The app using SPs has business logic randomly distributed through the SPs and C# code, making it next to impossible to work out what is happening, let alone make safe changes to the code.

Using EF and C# is definitely the way to go.

Code Discovery

Say you wanted to find all the bits of code that update a field in your database. This is a simple task that you often have to do when working out how some code works.

Using C# and EF with Visual Studio and CodeLens, you can quickly see all the places a field is used.

With a code base using SPs, this is just not possible in any way that isn't massively time consuming. You have to do a global search for the field, with the major problem being that the search may return hundreds of results that have nothing to do with what you are looking for.

EF and C# win this hands down; it's simply no contest.

Refactoring

I have had this issue recently on a couple of different projects.

On the one that was primarily stored procedures it was a nightmare, since the SPs were in SQL Server and the access code was stored in Visual Studio projects. The only way to find all the fields to rename was to do a global find and replace, which took ages. Even when I finished, I wasn't 100% sure I had got everything.

On the code base that was EF based, I used the "Rename" refactoring and created a new migration. The job was done in 5 minutes.

Once again, EF and C# win this hands down.

Amount of Code

This one is always important, as the fewer lines of code you have, the lower the chance of a bug.

When using EF, it only takes a few lines of code to do a query: one to open the DB connection and another to run the query.

With Stored Procedures there is a stack of scaffolding code you need: code to open the connection, set up the SQL command, pass the parameters in, and then execute the command. If you have a DAL, you will also need code to convert the results into objects. Then you need to write the Stored Procedure itself as well.
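As an illustration, here is that scaffolding sketched with Python's DB-API against SQLite; the shape is much the same as ADO.NET calling a Stored Procedure from C# (the table and `Customer` class are invented for the example):

```python
# The "stack of scaffolding" a stored-procedure-style call needs:
# open a connection, set up the command, bind parameters, execute,
# then map each row back into an object by hand.
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str

conn = sqlite3.connect(":memory:")           # 1. open the connection
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")

cursor = conn.cursor()                       # 2. set up the command
cursor.execute(
    "SELECT id, name FROM customers WHERE id = ?",
    (1,),                                    # 3. pass the parameters
)
rows = cursor.fetchall()                     # 4. execute and fetch
customers = [Customer(*row) for row in rows] # 5. map rows into objects

# An ORM collapses steps 2-5 into roughly one line, e.g. (illustrative):
#   customers = context.Customers.Where(c => c.Id == 1).ToList()
print(customers[0].name)  # Ada
```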

EF and C# win this easily as well: less code means fewer bugs, and the less code you have to write, the more productive you will be.

Scaling

When it comes time to scale your app, the less work SQL Server has to do, the better your chances of scaling.

If all the work is done via Stored Procedures and the SQL Server CPU is close to 100% most of the time, you are going to have issues very quickly. The same applies if the SQL Server's disk IO is close to 100%.

With higher loads you want to shift more and more of the work to the web and application servers and away from the SQL Server box. After all, 100 or even 10 web servers are going to have more network, CPU and memory than a single SQL Server.

With EF, since all the business logic is already in code, you have the ability to add caching layers and other techniques to remove load from SQL Server.
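A simple cache-aside sketch of that idea (the "database" is just a function that counts how often it is hit; in a real system it would be a SQL query):

```python
# Cache-aside sketch: the application layer answers repeat reads from
# memory, so the database only sees the first request for each key.
db_hits = []

def load_from_db(key):
    db_hits.append(key)  # stand-in for an expensive SQL query
    return f"value-for-{key}"

cache = {}

def get(key):
    if key not in cache:          # miss: go to the database once
        cache[key] = load_from_db(key)
    return cache[key]             # hit: SQL Server is never touched

get("product-9")
get("product-9")
get("product-9")
print(len(db_hits))  # 1 - two of the three reads never reached the database
```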

With Stored Procedures, since your business logic is inside them, your opportunities to reduce load are limited until you can move the business logic into code.

I'm going to give this round to EF and C# as well.

Closing Thoughts

For any new project you should consider using an ORM like EF. The gains it will give you in productivity will outweigh any perceived shortcomings.

If EF doesn't quite meet your needs, there are a stack of other ORMs out there, like Hibernate. There is sure to be one that will work for you.

On a side note, most NoSQL databases have no concept of Stored Procedures, since they don't really make a difference when scaling an application and they introduce a lot of unnecessary complexity.

Stored Procedures and Entity Framework Compared was last modified: January 23rd, 2017 by Daniel Pamich

One of the challenges in development is choosing a computer language that enables coding for multiple platforms. The more platforms you can target with a single language, the more productive you can be, because you can share code between the targets. Even for polyglot developers, rewriting code for different platforms is expensive and time consuming, as there will be multiple code bases to maintain.

So, with regard to the major companies, who enables the best multi-platform development?

Here is a table summarizing which platforms each company supports.

Platform      Apple    Google   Oracle   Microsoft
iOS           Yes      -        -        Yes
OSX           Yes      -        Yes      Yes
Windows       -        -        Yes      Yes
Android       -        Yes      -        Yes
Linux         -        -        Yes      Yes
Web Server    -        Yes      Yes      Yes
Web Client    -        Yes      Yes      Yes

The bit that surprised me is that Microsoft ticks all the boxes! In the last few years Microsoft has really changed direction, and now seems determined to provide the best all-around development platform with the best tools!

Not only do they now support the majority of development environments, more and more of their products are now open source, and the majority of their development tools also have free versions!

When working from home, your office setup makes the difference between success and failure.

These are the key factors in your office:

Distraction Free

Working Hours

Internet Connection

Computer Hardware

Chair

Desk

Phone

Distraction Free

This can be a deal breaker. You need an area in your house where you can work and not be disturbed.

This is even more important if you have children at home, since they will want to spend time with you, which can make working very difficult! You need to set a rule that when you are working they are not to disturb you, but make sure they know when you are finished so they can play with you.

Ideally you have a room that is dedicated to your office, with a door that can be shut, blocking out any distractions so you can get your work done in the shortest time possible. Then you have time for doing life: seeing friends, playing with your kids and so on.

If you don't have a space where you can work distraction free, consider renting an office at a co-working facility. A lot of these places have excellent facilities, are tax deductible and provide a really excellent divide between work and play.

Working Hours

Set yourself a time to start work each day; before you know it, it will become a habit, and that's half the battle won. You are now at your desk and working.

Also, it's too easy to start working 24/7 when working from home, and that is the quickest way to burn out! Studies show that when you exercise and take time away from your work you are more productive, so book in time away from your work.

Studies also show that your overall productivity drops for every hour you work over 40 hours, since your brain gets fatigued. Your 80-hour marathon weeks may not be producing more than a 40-hour week. So work smart and use the other 40 hours for having fun. Work-life balance is probably the reason you work from home, so don't forget about it.

Internet Connection

A business internet connection is your best option for working from home. Sure, they cost more, but the benefits outweigh the cost.

The most important factor for your home office internet connection is how reliable it is. Check your ISP's SLA for the time to repair a faulty connection.

Cheap home connections may take 3-4 days to get repaired!

Business connections can be 4-24 hours. That is a lot better!

Now imagine not earning for 4 days because your cheap internet connection is down, or being unable to finish that vital project. That cheap connection is suddenly very expensive.

Most business connections also come with the option of static IPs, which allow you to run servers at your office. Most home connections expressly forbid you from running servers on the connection.

Also, most ISPs will give business connection traffic priority over consumer traffic. You can never have too much speed. Oh, for a 1 Gbps connection!

Computer Hardware

There are different priorities when buying for a home office.

The most important is how quickly you can get your computer fixed when it fails. I chose an HP workstation since you can purchase a 4-hour on-site warranty for it! So if anything fails, 4 hours later you are back up and running.

Compare that to a Mac Pro: if it fails you have to post it in and wait a week or so for it to be fixed, or try to book a Genius appointment at your local Apple Store and pray they have replacement parts on hand.

When you work for someone else, it's their problem if your computer breaks, and you still get paid.

When you work for yourself, if you can't work you don't earn! Always have a plan for getting working again quickly. That could be on-site support or a spare computer.

Chair

This is usually the bit of equipment that gets the least amount of thought, but there are a lot of benefits to a great chair. Most people sit in a chair over 180 hours a month. Imagine the potential negative impact on your body from a bad chair.

Some things you want to look for in a chair:

Supports your body

Moves with your body

Removes pressure points

Head Rest

Arm Rest

Height Adjustment

Lumbar Support

A great chair has been one of the best things I have bought! It makes a day at the keyboard effortless.

Phone

These days there are so many options for a phone. The main requirement is to allow your clients to contact you, so an answerphone is essential.

Some options for a phone:

Land Line

VOIP

Mobile

Skype (with number)

VOIP is one of the best options, as it can grow with your company. You can use your mobile phone to take VOIP calls or redirect them to your mobile. You can even answer your calls from anywhere in the world!