I just got off the phone with one of my colleagues, and we had an interesting discussion about a situation we have with one of our clients.

Here is the situation:

Our customer had, for years, a piece of software installed as a standalone product. It was a single machine running both the application (.NET WinForms, in this case, but that doesn't really matter) and the database (SQL Server Express. Again, doesn't matter), all in one pretty box.

Not exactly state of the art, Micro-Service-y, or cloud-y, but it did the job, and did it quite well.

But times change, and after much thought, it was decided that a central DB makes much more sense and should replace the aging local database. In this case, the new database is going to be an Azure SQL database (but, as usual, that doesn't really matter).

And this is where the problem began.

Implementation Alternatives

From the beginning it was clear we had two schools of thought here:

On one side — the “Pure Database” believers. These developers wanted to simply start using the new database while changing (almost) nothing in the application. From their point of view, the only thing that needs to change is the connection string, pointing it at the central database instead of the local one.

Something like this: every desktop application opens its own direct connection to the central database.

The main motivation for this implementation is quite clear — it requires a minimum amount of code modification in order to work. Simply change the connection string and you’re good to go.
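In practice, that change really can be this small. Here is a minimal sketch (plain Python; the server names, database name, and user are all hypothetical placeholders, not the client's real values):

```python
# Hypothetical sketch: the only change the "Pure Database" approach
# requires is swapping the local connection string for the central one.

LOCAL_CONN = (
    "Server=.\\SQLEXPRESS;Database=AppDb;Trusted_Connection=True;"
)

CENTRAL_CONN = (
    "Server=tcp:example-server.database.windows.net,1433;"
    "Database=AppDb;User ID=app_user;Password=...;Encrypt=True;"
)

def get_connection_string(use_central_db: bool) -> str:
    """Return the connection string the desktop client should use."""
    return CENTRAL_CONN if use_central_db else LOCAL_CONN
```

Flip one flag (or one config value) and every query in the application now runs against the central database.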

On the other side, the “API” school. These fellas said that there has to be a mediator, an API layer, between the desktops and the DB. Something like this: the desktops talk to the API, and only the API talks to the database.
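To make the mediator idea concrete, here is a minimal sketch of the pattern (plain Python, with an in-memory dict standing in for the real database; all names are hypothetical):

```python
# Sketch of the API-layer idea: desktop clients call the API, and only
# the API holds a database handle. The dict stands in for the real DB.

class CustomerApi:
    def __init__(self, db):
        self._db = db  # the only component with database access

    def get_customer(self, customer_id):
        # Clients never see SQL or credentials; they get plain data back.
        row = self._db.get(customer_id)
        if row is None:
            return {"status": 404}
        return {"status": 200, "body": row}

db = {1: {"name": "Acme Corp"}}
api = CustomerApi(db)
print(api.get_customer(1))  # → {'status': 200, 'body': {'name': 'Acme Corp'}}
```

In the real system this would be a REST service sitting in front of Azure SQL, but the shape of the idea is the same: the clients know the API's contract, not the database's.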

Now, let’s compare both approaches.

Direct Connection Between the Desktops and the DB

Pros:

Easy to develop: As mentioned above, this approach requires minimal changes in the code. Change the connection string and that's all.

API Layer Between the Desktops and the DB

Pros:

DB Abstraction: When the database structure changes (new columns added, column types changed, etc.), and it will change someday, the change won't affect the clients immediately. The API can absorb the change and keep exposing the good ole interface until the time comes to roll it out to the clients. This makes the system much more stable.
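For example, suppose a future schema change splits a single name column into separate first and last name columns. A small translation function in the API can keep exposing the old shape to existing clients (a hypothetical sketch; the field names are made up):

```python
# Sketch: the DB now stores first_name/last_name separately, but the
# API reassembles the single "name" field the old desktop clients expect.

def to_api_customer(db_row: dict) -> dict:
    return {"name": f"{db_row['first_name']} {db_row['last_name']}"}

print(to_api_customer({"first_name": "Ada", "last_name": "Lovelace"}))
# → {'name': 'Ada Lovelace'}
```

With a direct connection, the same schema change would break every installed desktop at once.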

Loose Coupling: This architecture allows non-native clients to connect to the system. Currently, only clients that support a direct connection to SQL Server (such as .NET applications) can use this method. Future clients, such as Android phones, cannot connect directly to the database and would be locked out. Implementing an API allows every platform that can call a REST API (read: every platform) to access the data in the database.

Security: The database credentials are stored on a secure server in a secure data center. In the first approach, the credentials sit on every desktop, probably in clear text, and every hacker wannabe can grab them and gain access to the database.

Cost: (This one is relevant only for databases hosted in a public cloud.) Using a database in the cloud costs money, and the more connections you open to it, the more you'll pay. In the first approach, there are a lot of connections. Thousands of them. Each client uses its own connection, and that's gonna cost. In the second approach, there is only a single connection, from the API to the database; the clients themselves never connect to the database. This reduces the cost substantially.
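A toy sketch of that effect (hypothetical names; the fake connection class just counts how many connections get opened):

```python
# Sketch: a thousand client requests flow through the API, but the
# cloud database sees exactly one connection, not one per client.

class FakeDbConnection:
    opened = 0  # counts connections opened against the "cloud DB"

    def __init__(self):
        FakeDbConnection.opened += 1

    def query(self, sql):
        return "result"

class ApiLayer:
    def __init__(self):
        self._conn = FakeDbConnection()  # created once, shared by all requests

    def handle_request(self, sql):
        return self._conn.query(sql)

api = ApiLayer()
for _ in range(1000):  # a thousand client requests...
    api.handle_request("SELECT 1")

print(FakeDbConnection.opened)  # → 1
```

A production API would more likely use a small connection pool than a literal single connection, but either way the count stays tiny and fixed instead of growing with the number of installed desktops.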

Conclusion

So, which one will you pick?

As you can probably guess, my recommendation was to implement the second approach. With the API layer in the middle, we can achieve a stable, flexible, future-proof, secure, and inexpensive system. Yes, it will require a bit more coding, but in the long run it will have a far greater ROI.

My colleague, on the other hand, wanted the “Pure Database” approach, because he believed that less work now would result in a better ROI in the long term.