It should scale very well with certain kinds of traffic (reads and writes are not on shared objects).

Minimal, or no, knowledge of devops/sysadmin is necessary.

They now have decent tools to migrate your data out of it.

Why Not Firebase

Not a boring established technology.

You only get so many innovation tokens.

Your entire backend is proprietary (BaaS), owned and run by another company. If they shut down Firebase, you have to rewrite everything on their timeline instead of moving it.

This happened with a nearly identical service called Parse.

Parse was purchased by Facebook, then shut down. (Google bought Firebase but seems to be investing in it, and in its replacement.)

Google shuts things down all the time.

Firebase is somewhat deprecated in favor of Cloud Firestore.

Exceptionally expensive at scale compared to a conventional REST backend.

Not really possible to expose an API spec (e.g. Swagger) with cloud functions.

Proprietary – complete lock-in:

Migrating off means rewriting all backend tests and much backend code. This is more dangerous than “just” rewriting the code and not the tests, because the tests make sure you didn’t mess something up during the migration.

Firebase is pretty unique in the way you interact with the APIs and realtime components, making a frontend migration a massive ordeal also.

Impossible to develop the app without an internet connection.

Security and data validation setup is tricky, and it cannot be unit tested.

Security and data validations are strings in a JSON file.

Must run security integration tests against a deployed app.
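For example, Realtime Database rules are a JSON file whose logic is embedded in string expressions. The sketch below is illustrative – the messages schema is invented:

```json
{
  "rules": {
    "messages": {
      "$messageId": {
        ".read": "auth != null",
        ".write": "auth != null && newData.child('owner').val() === auth.uid",
        ".validate": "newData.hasChildren(['owner', 'body'])"
      }
    }
  }
}
```

Because the expressions are opaque strings, the only way to verify them is to deploy and exercise the live API.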

Having your main database public is a highly discouraged practice.

This may not be fully fair, but it is very easy to misconfigure things and expose data that shouldn’t be.

Normally, databases listen only on private interfaces, or at least use IP restrictions.

You must duplicate any business logic across all systems that interact with Firebase.

Strong anti-pattern for architecture and maintenance

Cloud functions are good for one-off tasks:

good for formatting a PDF invoice and sending it

good for processing a batch job

good for individual tasks delegated by a message bus

bad for a bunch of similar data update routes

bad for structuring and testing a large REST API

Unit testing Firebase backend functions is far more complicated than testing a regular REST API.

Querying and aggregating are limited compared to SQL or popular NoSQL databases like MongoDB.

Transactions have some odd behavior – they might get “rerun” in the case of conflicts.

Database migrations are not supported at all.

red flag – basic service development

a few band-aid 3rd party open source solutions exist

means hand-writing a migration framework that works with unit tests

Firebase recommends duplicating data because JOINs are unsupported.

red flag – architecture

not a problem in other NoSQL databases

this is a core competency of relational databases (SQL)
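As a sketch of the duplication this forces (the schema here is invented), a post might embed its author's name so it can be rendered without a JOIN:

```json
{
  "users": {
    "u1": { "name": "Ada Lovelace" }
  },
  "posts": {
    "p1": { "title": "Hello", "authorId": "u1", "authorName": "Ada Lovelace" }
  }
}
```

Every rename now has to update authorName on every post – exactly the bookkeeping a SQL JOIN makes unnecessary.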

Integrating with outside APIs while maintaining good testing is not as simple as with a regular server

Stripe, for example: you need to expose backend webhook routes and test them well

You are at the mercy of Firebase’s upgrade cycle.

If they decide to change or break something and force you to upgrade, you do not have a choice on the timeline, if there is one.

Optimized for realtime apps.

Only a downside if you don’t have a realtime app.

Many of the realtime benefits of Firebase can be had with a plethora of popular open source technologies.

Later services you write will likely not be written using Firebase.

It reduces the surface area of your tech stack to pick a more boring database technology that will scale a long way and can be used for multiple services.

If you built your tech stack on, say, Node.js or Go, each service would have similar paradigms.

With Firebase, now you have all the Firebase paradigms (coding, testing, build and deploying), plus your second service’s paradigms (coding, testing, build and deploying).

All these complaints are probably acceptable when you have a small mobile app, a basic website or small ecommerce store, or something that will not grow beyond one server.

Building the core of a SaaS on Firebase, though, is not going to work. There are many blog posts about companies who hit a wall with Firebase, and eventually migrated off it. If you are building a SaaS, the question doesn’t seem to be if you will move off Firebase, but when.

vscode touts itself as being easy to use for out-of-the-box Node.js debugging – which is true for single scripts – but debugging node-based executables (npm, npm scripts, mocha) takes additional setup based on your environment.

atom to vscode: faster with integrated debugger

Recently I switched from atom to vscode, after being impressed by a talk at npm conf. I only had two complaints with atom that started to weigh on me – the often sluggish performance, and the lack of solid debugging for Node.js. (There are some 3rd party packages that try, but it is not core to the editor and often breaks.)

Setting up vscode for debugging basic node scripts is supported out of the box. The editor even generates working debug defaults at .vscode/launch.json. But if you want to debug npm scripts or other node-based executables, it is not straightforward at first. It turns out to be pretty easy, though.

I exclude .vscode/ in .gitignore because it ends up having settings specific to my environment and workflow.

The trick to running npm scripts or node executables is to use a hardcoded path to npm to launch them. So, for example, in package.json:

{
  "scripts": {
    "mocha": "mocha"
  }
}

and in .vscode/launch.json:

npm run mocha: use a hardcoded path to your npm exe. You can obtain it from the terminal with which npm.

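The launch.json entry that makes this work might look like the following sketch – the program path is whatever which npm prints on your machine, so treat it as an assumption:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "npm run mocha",
      "type": "node",
      "request": "launch",
      "program": "/usr/local/bin/npm",
      "args": ["run", "mocha"],
      "cwd": "${workspaceRoot}"
    }
  ]
}
```

Pointing program at a relative path or at npm by name alone is what tends to fail; the hardcoded absolute path sidesteps that.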

After 5 years in Node land, there’s nothing sadder than taking hours to deploy your new app. It was a breeze to develop and run locally, but throwing it on staging or production can become a beast of a task if you aren’t careful. There are so many guides with complicated deploy and server setup patterns, because Node, npm, and build tools all need to be installed on the server.

Easy deploys and easy rollbacks are my goal here. I usually just don’t want to hassle with a bunch of infrastructure. Docker is probably a nice tool if you have 50 servers, but that isn’t the case for most of us.

What I am not using

nexe

wonderful tool for producing small-ish Node executables

project maintenance has become questionable and it breaks a lot

native modules don’t work, despite what they may hint at

jxcore

compile your node app into a binary

more of a node.js competitor than a node.js tool

for me, broke a lot and feels early days (for example, at the time of writing, their website is unreachable, which is not a surprise)

After a few years building platforms using (mostly) Node.js microservices, I thought I could troubleshoot problematic situations in minimal time, regardless of coding style, or lack thereof. It turns out – nope.

The two week bug hunt

There was this tiny bug where occasionally, a group-routed phone call displayed as if someone else answered. Should be easy to find. Except call logs touch several services:

service for doing the calls and mashing data from the call into call logs

internal service for CRUDding the call logs

external gateway api for pulling back calls that massages and transforms the data

native app display

2 databases

Tripup One: complex local environment setup

We skipped setting things up with docker-compose or another tool that will spin up the whole environment locally, in one command. This is a must these days. It would take 7 terminals to fire up the whole environment, plus a few databases and background services – and each service needs its own local config. There would still be phone service dependencies (these would be mocked in an ideal world) and external messaging dependencies (Respoke).
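For illustration only – the service and image names below are invented – a compose file that brings up an environment like this with a single docker-compose up might look like:

```yaml
# Invented sketch of a one-command local environment
version: "2"
services:
  call-service:
    build: ./call-service
    depends_on: [postgres, redis]
  call-log-service:
    build: ./call-log-service
    depends_on: [mongo]
  gateway-api:
    build: ./gateway-api
    depends_on: [call-log-service]
  postgres:
    image: postgres:9.4
  mongo:
    image: mongo:3.0
  redis:
    image: redis:3.0
```

External dependencies like the phone hardware would still need mocked stand-in services, but seven terminals collapse into one command.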

Not being able to spin up the whole environment means you had better have excellent logging.

Tripup Two: not enough logging

Aggregated logs are the lifeblood of microservices, especially dockerized or load-balanced ones.

We use the ELK stack for log management. Elasticsearch, Logstash, and Kibana are wonderful tools when they have consumed all server resources and blocked the user interface processing data.

For these particular bugs, there was insufficient logging, and the problems only occurred when all of the microservices talked together. Because we have some special Asterisk hardware and phone number providers, it is a lot of work (if not impossible) to spin up the entire environment locally.

Thus, at first I started by adding a few logs here and there in the service. It was a round of Add Logs – PR – Deploy – Test – Add Logs – PR – Deploy – Test. Eventually I just added a ton of logging, everywhere.

I have this fear that I will add too much logging and it will cause things to go down, or get in the way. With few exceptions, you can’t have too much logging when things break. You can have bad log search. Also, at this point I have decided that the ELK stack will always consume all resources, so you might as well log everything anyway.

Tripup Three: forgotten internal supporting library

There was an internal library, written in the early days of the project, which had:

a unique coding style

no tests

poor commit messages

no comments

generic naming of variables and methods

several basic bugs in unused code paths

As it turned out, none of the bugs in this library were causing problems because those code paths were not in use. Nonetheless, I spent a full day understanding it.

Tripup Four: code generation in functional and unit tests

I am a firm believer, now, that DRY (don’t repeat yourself) has no place in unit tests, and probably not in functional tests either. Here are common things I ran into:

test setup has multiple layers of describe() and each has beforeEach()

beforeEach() blocks used factories which assigned many values to uuid.v4(), then further manipulated the output

layers of generated test values are impossible to debug

It’s best just to be explicit. Use string literals everywhere in unit tests. Minimize or eliminate nested describe()s.

Tripup Five: too much logic in one spot

In Node.js land, there’s no reason to have functions with complexity higher than 6 or 7, because adding a utility or library is cheap. It takes little effort to extract things into smaller and smaller functions, and to use explicit and specific naming.

We had a ton of logic in Express routes/controllers. This is hard to unit test, because the only realistic way to get at that logic is using supertest over mock HTTP. It’s better to make small functions and unit test input-output on those functions.
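As a sketch of that idea (the function and its logic are invented, not from the real codebase): pull the logic out of the route into a pure function, then assert on input and output with literal values:

```javascript
// Invented example: display-name logic extracted from an Express route
// into a pure function that is trivial to unit test.
function displayNameForCall(call) {
  // Prefer the answering agent's name; fall back to a literal.
  if (call.answeredBy && call.answeredBy.name) {
    return call.answeredBy.name;
  }
  return 'Unanswered';
}
```

The test then needs no supertest, no HTTP mocking, and no layered beforeEach() factories – just literals in, literals out.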

Conclusion

Eventually, I never found the bug – I found four, after careful refactoring of code and tests to the tune of several thousand SLOC.

The actual bug could have been a one-line fix, but finding it took weeks of coding.

Situations like this are often unavoidable. I am sure that if I haven’t inflicted similar situations on colleagues in the past, I will in the future. It’s the nature of the trade-offs you face when moving fast to test a business idea. The following things might help minimize the damage, though:

agree to a single .eslintrc file and never deviate

use a lot of small functions, and unit test them

don’t make a separate module without tests

be explicit and repeat yourself in tests

be able to spin up a local dev environment with minimal commands, or run against a presetup testing environment

Right from your terminal – without a third-party service

./deploy www.example.com

There are a lot of articles about how to set up Node.js in production, but they don’t always cover the full thing in an automated, easily deployable way. We will review how to set up a one-line Node.js deploy from your local terminal (OSX or Linux), with very minimal code.

No fancy third-party deploy services here – just a little bash and an upstart script.

Deploying Node via simple shell script

The script below will:

ensure Node.js is installed

make an archive out of your code

upload it via ssh to the server

log to a file and rotate the logs regularly to prevent filling up the disk

set up auto-starting on server reboot

set up auto-starting when the app crashes

You can reuse the script again and again to deploy your app.
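The original script isn’t reproduced here, so the following is a hedged sketch of the same steps – the root user, paths, app name, and install commands are all assumptions to adjust for your setup:

```shell
# Sketch of a one-line deploy; writes ./deploy so it can be reused.
# Server user, paths, and apt packages below are assumptions.
cat > deploy <<'EOF'
#!/usr/bin/env bash
set -e
HOST="$1"
[ -n "$HOST" ] || { echo "usage: ./deploy <host>"; exit 1; }

# 1. Ensure Node.js is installed on the server (Ubuntu assumed)
ssh "root@$HOST" 'which node || (apt-get update && apt-get install -y nodejs npm)'

# 2. Make an archive out of the code, excluding dependencies
tar czf app.tar.gz --exclude node_modules .

# 3. Upload it via ssh and unpack on the server
scp app.tar.gz "root@$HOST:/opt/"
ssh "root@$HOST" 'mkdir -p /opt/myapp \
  && tar xzf /opt/app.tar.gz -C /opt/myapp \
  && cd /opt/myapp && npm install --production'

# 4. (Re)start via the upstart job, which handles reboot and crash respawn
ssh "root@$HOST" 'restart myapp 2>/dev/null || start myapp'
EOF
chmod +x deploy
```

Log rotation lives server-side (a logrotate entry for the app’s log file), and the auto-start and respawn behavior comes from the upstart .conf file covered next.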

While this isn’t a silver bullet, it lets you host Node.js apps on an extremely cheap VPS (virtual private server), if you like, without needing much knowledge of server admin. VPS hosting can be orders of magnitude cheaper than cloud hosting – and faster. You can host a simple Node.js website for a dollar or two per month in many cases – extremely cheap.

Configuration (upstart .conf file) for Node.js app on Ubuntu 14.04

From inside your app directory:

touch myapp.conf # create it with your app name
chmod +x myapp.conf # make it executable

1. Google Cloud Credentials

Click Create a new client ID, then select a new Service Account. A JSON file will download. Save it in your project as gcloud.json.

2. Verify site ownership

Google requires you to verify that you own the site in Google Webmaster Tools. There are several ways to do that. If your website is new, most likely you’ll need to create a TXT DNS record with your registrar. Webmaster Tools will guide you through it.

3. Create a special bucket

Files on Google Cloud Storage are grouped into “buckets.” A bucket is just a bunch of files that you want to store together. I think of it as its own drive. You can have folders under a bucket.

The bucket name must be the domain name of your website. So for http://symboliclogic.io, the bucket name would be symboliclogic.io. For http://www.symboliclogic.io, the bucket name would be www.symboliclogic.io.

Be sure to choose Standard storage. The other options are for backups and can take several seconds to become accessible. Standard class storage is fast and suitable for websites.

4. Set the default bucket permissions

You want to make all files public by default. Accomplish this by adding an access rule for allUsers that allows reading.

Do this for the Default bucket permissions, and the Default object permissions.

5. DNS record pointing to your site

After verifying ownership of your site, create a new DNS record that points your domain name to Google Cloud Storage.

It should be a CNAME type DNS record with the content c.storage.googleapis.com.

6. Upload files to the bucket with a Node.js script

First use npm (bundled with Node.js) to install some dependencies into the current directory:

npm install async gcloud glob

Now put the following script at deploy.js, then run it from the terminal:
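The original script didn’t survive here, so below is a hedged reconstruction using the old gcloud client – the project ID, bucket name, and public/ directory are assumptions to replace with your own:

```javascript
// deploy.js — a sketch, not the original script.
// Assumes `npm install async gcloud glob` was run (see above) and that
// gcloud.json is the service-account key downloaded in step 1.
var async = require('async');
var gcloud = require('gcloud');
var glob = require('glob');

var bucket = gcloud.storage({
  projectId: 'your-project-id', // assumption — your Google Cloud project
  keyFilename: './gcloud.json'
}).bucket('www.symboliclogic.io'); // must match your verified domain

// Find every file under public/ and upload it, 5 at a time.
glob('public/**/*', { nodir: true }, function (err, files) {
  if (err) throw err;
  async.eachLimit(files, 5, function (file, done) {
    bucket.upload(file, {
      destination: file.replace(/^public\//, '') // strip the local prefix
    }, done);
  }, function (err) {
    if (err) throw err;
    console.log('Uploaded ' + files.length + ' files');
  });
});
```

Run it with node deploy.js; since the bucket objects default to public-read (step 4), the files are live as soon as they upload.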

With so many options available for online payments, I wanted to summarize the reasons why I feel strongly that it makes sense to use Stripe when building minimum viable products.

Focus on your business, not on payments

In a recent startup, we integrated with an internal payment system.

We probably lost 6 man-months of productivity. At times, it felt like we were in the payment business – not in the business we were trying to build.

Focusing on the wrong thing is what kills startups.

Mature, easy, and well-documented APIs

Stripe has top-notch APIs, and their documentation and libraries are some of the best of any SaaS I have ever seen. On several projects we’ve used Stripe’s docs as the model, but it’s deceptively difficult to produce something so complete and so simple to digest.

Easy to find developers

Along with excellent mature APIs comes an army of developers who can help work on your Stripe integrations. If you are a non-technical founder, you will not have a hard time finding people with extensive Stripe experience (like me!).

The best user interface

People can forget that there’s a lot more to a payment provider than the APIs.

With Stripe’s UI, you can answer common business questions really easily. Things that you would otherwise have to build into your web app. Things that help you assess your startup’s burn rate. Things that help you service customers without wasting time.

What was the payment lifecycle?

How much has this customer paid me?

Do I have customers with the same email but different customer IDs?

How many customers do I have?

When is a customer’s next invoice? When was their last invoice? Did they pay it?

Who has expired credit cards?

How much revenue did I have in the past week?

How much have I paid in credit card fees?

How much money do I have in escrow with Stripe?

Try to answer all of these questions on another provider in less than a minute – with Stripe, you can.

The user interface is so good that you can just give customer service reps limited access. No technical knowledge required.

Easy international charges

Stripe does the currency conversions automatically, and you never really have to think about it. I can’t express how much time this saves over other options, and it allows your startup to charge internationally much earlier.

Bitcoin

Stripe is a mature payment provider that offers BTC integration. Other providers of Bitcoin billing are not as mature, but they do work pretty well.

Excellent webhook support

For any action on Stripe, you can get a POST webhook event to your server. This is incredibly useful for building all kinds of custom integrations with your CRM, doing additional billing, tracking internal analytics, and more.

Stripe will keep resending webhooks if your server goes down, ensuring you get the data and respond with a success code. That saves you from having to implement a message queue (MQ) for payment things.
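As a sketch (the event type strings are real Stripe event names, but the dispatcher and actions are invented), handling webhooks boils down to switching on event.type and returning a success code only once the event is processed:

```javascript
// Invented webhook dispatcher — the route plumbing (Express, body
// parsing, signature checks) is omitted; this is just per-event logic.
function handleStripeEvent(event) {
  switch (event.type) {
    case 'invoice.created':
      // our chance to add line items before the customer is charged
      return { status: 200, action: 'review-invoice', id: event.data.object.id };
    case 'customer.subscription.deleted':
      return { status: 200, action: 'deactivate-account', id: event.data.object.id };
    default:
      // acknowledge unhandled events so Stripe stops retrying them
      return { status: 200, action: 'ignore' };
  }
}
```

Returning a non-2xx (or timing out) makes Stripe retry, which is what gives you the message-queue-like durability described above.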

Recurring payments and trials

Stripe squarely handles this exceedingly complex trap-of-a-feature that plagues many SaaS services. I continue to be impressed by how easy they make recurring payments.

Because of the payment lifecycle in Stripe, you can also do advanced billing pretty easily.

Advanced billing and storing metadata

I worked with a startup that had plans with tiers of service credits. So they wanted to:

charge a monthly service fee

give X credits with the plan

track credit usage during the month

bill all customers on the 1st of the month

calculate any credit overage and add it to the invoice before the customer was charged

This whole billing lifecycle took only about 25 hours to implement from start to finish. Doing the same thing on other providers isn’t really possible, or requires hacky workarounds. With Stripe, it was a natural part of the recurring billing lifecycle – which means we could produce clean code without dangerous hacks. Trust me, you don’t want hacky asynchronous checks in your billing system.

Every “object” in Stripe – plan, charge, customer, etc. – can have stored metadata in the form of a simple JSON object hash. We just stored the plan limits inside the Stripe plan.metadata – so all the plan data was in one place. Then we used webhook events to add additional line items for plan overages. Stripe gives you a chance to update an invoice generated by recurring billing before they actually charge the customer.
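A sketch of that approach – the metadata field names includedCredits and overagePriceCents are invented, and note that Stripe metadata values arrive as strings:

```javascript
// Invented helper: compute the overage line item for an invoice from
// limits stored on the Stripe plan's metadata.
function overageLineItem(plan, creditsUsed) {
  var included = parseInt(plan.metadata.includedCredits, 10);
  var priceCents = parseInt(plan.metadata.overagePriceCents, 10);
  var over = Math.max(0, creditsUsed - included);
  if (over === 0) return null; // stayed within the plan — nothing to add
  return {
    amount: over * priceCents, // in cents, as Stripe expects
    currency: 'usd',
    description: over + ' credits over the plan limit'
  };
}
```

On an invoice.created webhook you would look up the month's usage, call something like this, and attach the returned line item to the invoice before the charge runs.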

Where it falls short

Bank account charges. There are really limited options on the internet for charging bank accounts directly, and Stripe is not one of them. eCheck payments kind of suck anyway, because you typically have to have the customer first verify their details with micro-deposits.

3rd party transfers. Stripe did away with their transfers API and is pushing Stripe Connect. Stripe Connect is an excellent service, but it’s not as easy as it used to be – a simple ACH transfer by providing the routing number and account number. I miss those days.

Fees (maybe). The fees are about average. However, in my opinion, the time savings during development, the UI tools, and the customer service (yes, it’s pretty good – I have used it) more than justify them.