We love creating tools and features that are useful for everyone, not just our paying customers. We figured there's value in offering one-off checks on our site without devaluing the product for our paying customers.

This snippet lets you quickly test any website. If you're on a site that doesn't load for you, hit that bookmarklet or the Oh Dear! browser extension and let Oh Dear! double-check the uptime from different locations.

We'll do a deeper dive into the Chrome extension at a later point, to show all the details.

Our certificate checks allow you to verify the important details of your websites' certificates. We'll check the expiration date, the intermediate & root certificates and some of the encryption algorithms used.
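If you ever want to verify those details by hand, the openssl CLI gets you most of the way there. A rough sketch (example.com is a placeholder; this is a manual equivalent, not how our checks are implemented):

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates

This prints the issuer and the validity window of the certificate the server presents.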

We actually check a few extra things that don't quite fit the screenshot; go ahead and scan your own domain to see the results.

A crucial piece of our offering is still missing from these public checks: our broken links & mixed content reporting.

These work fundamentally differently than the certificate & uptime checks. Our crawler can take anywhere between a few minutes and an hour to complete a run and report the issues with a website.

As a user, you don't want to keep your browser open and wait for that.

We'll create a solution where you can leave your e-mail address and we'll email the report to you as soon as our crawls are done. This takes a bit more work to complete (and we want to get it right - just like the rest of our app).

Every check will generate a unique URL that can be shared among colleagues, friends or on social media.

If there's downtime or an issue with a certificate, this will allow you to provide all the necessary info to those responsible to take action.

2019-03-12T11:25:44+00:00 https://ohdear.app/14
Last month we added a small but useful feature to Oh Dear! that allows you to customize the recipient of your invoices.

It's not a big feature, but small & incremental improvements make Oh Dear! better one step at a time.

You can customize the recipient per team on your team profile page. If you have multiple teams (each with its own subscription), each team's invoices can be sent to a different e-mail address.

2019-02-20T19:47:11+00:00 https://ohdear.app/13
We reached out to one of our earliest customers, Marbles, to ask why they decided to move from Pingdom and use our service instead. If you're in doubt whether or not to give us a try, maybe they can convince you.

We are Marbles, an Antwerp-based full service agency that does everything from creating identities to marketing & communications, business development and coaching. Creating websites and web applications is, of course, part of our services, and to make sure they're the best they can be, we use Oh Dear! for the monitoring.

By alerting us of issues before anyone could possibly notice them manually. This allows us to fix issues before our clients are even aware of them, or if they are, we can tell them we're already working on a fix.

Setting up notification "groups". We have clients who have an SLA with us; they need very quick solutions, so we use the SMS notifications for them. For other clients we only set up the Slack notifications.

It's bothersome to set up the SMS notifications individually for each site that needs them, and if another person needs to receive these, or the phone numbers need to get changed, we need to do this for each site manually.

In response: this is indeed a very good idea that's still missing from the app; we're taking it up in our development planning to see if we can introduce an elegant solution to this problem. Thanks for letting us know!

We'll be reaching out to more of our customers over the coming months to find out why they chose us over the (many) competitors out there. We'd love to know what persuaded them and which items we still need to address to serve our customers even better.

If you want to share your thoughts, feel free to reach out to us on Twitter at @OhDearApp.

2019-01-31T13:24:32+00:00 https://ohdear.app/12
We've partnered with Tideways to highlight application downtime within its PHP Profiler. This will allow you to easily correlate downtime events with your application exceptions being thrown.

Oh Dear! has the ability to trigger up- and downtime alerts to a webhook that you specify. The payload varies per event we fire and gives you the ability to integrate Oh Dear! into pretty much any tool or service.
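To give you an idea, a downtime event payload could look something like this (the field names below are purely illustrative; check our webhook documentation for the exact format):

{
    "type": "downtime",
    "site": "https://example.com",
    "detected_at": "2018-12-12T12:00:00+00:00"
}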

Tideways completed their Oh Dear! integration by offering our users a webhook endpoint to send downtime events to. This lets Tideways know when downtime was detected, so it can be visualised right in your Tideways monitoring screen.

The ability to immediately see cause & effect can be a powerful addition to your monitoring stack.

Integrating Oh Dear! into your Tideways account is super simple, as explained in their documentation. In short: you'll find a custom webhook endpoint in your application settings that you can add to the notification options of your application.

Once this webhook has been configured to receive both the up- and down events, Tideways can show these in their dashboard.

We strongly believe in the power of combining event information in a single display. This can help users more easily correlate events and find the root causes of downtime alerts.

This same delay is present when we perform uptime checks: the entire path between the EU & Australia contains over a dozen hops, and each hop has the potential to break the connection, be overloaded or suffer downtime.

For us to monitor websites worldwide, we need to be close to the source.

When you sign up, we determine your default uptime monitoring location based on the IP address you sign up with. It's a simple check to set a default, with Paris as the fallback location for all our EU clients.
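Conceptually, it boils down to something like this (a sketch only; the geoip() helper and the region names are hypothetical, not our actual implementation):

// Map the continent of the signup IP to the nearest monitoring region.
$continent = geoip($request->ip())->continent; // hypothetical helper

$regions = [
    'OC' => 'sydney',   // Oceania -> our new Sydney location
    'NA' => 'new-york',
];

// Paris is the fallback for all our EU clients (and anything unknown).
$defaultLocation = $regions[$continent] ?? 'paris';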

We'll use the closest available location where we have servers to do our primary probes. For our Australian users, that will now be a server based in Sydney.

Once we detect downtime, we'll use a secondary location to confirm that there is indeed a problem: a server located in another country, which is also served by another cloud provider.

Our Sydney server runs on Vultr, so we'll use one of our Digital Ocean servers to confirm the downtime. This way, we eliminate false positive alerts that are caused by connectivity issues at a single cloud provider.

2018-12-18T10:59:29+00:00 https://ohdear.app/10
We've been fairly public about the amount of testing we have for Oh Dear!. Freek has been showing bits and pieces on his Twitter to show the extent of that effort.

Having a huge test suite is nice, but integrating it into your development workflow is even better.

We use the free version of Gitlab to host our Git repositories and launch the jobs that run all our tests. However, since we run the free version (and as a startup, we're cautious about our running costs), we are limited in what kind of CI we can run on Gitlab.

As such, we've installed and run our own Gitlab Runner, which uses Docker containers to run our tests. The Gitlab.com servers essentially instruct our servers to run the entire pipeline, and we report the status back to Gitlab.

Setting up the local Gitlab Runner with Docker is pretty straightforward and their documentation covers it perfectly. The minor things we changed on our end to speed things up are related to the concurrency of the jobs.
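Those settings live in the runner's config.toml. Ours looks roughly like this (the values are examples, tune them to your own hardware):

# /etc/gitlab-runner/config.toml
concurrent = 4    # max number of jobs this runner executes in parallel

[[runners]]
  name = "docker-runner"
  executor = "docker"
  [runners.docker]
    image = "php:7.2"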

Remember that multiple containers will run at the same time, potentially pushing your server to 100% CPU usage. For this reason we decided to run these on one of our test machines, even though production servers have a healthy abundance of free CPU and memory.

But, in order for us to handle spike usage, we need to keep those resources free & available.

If you have a stage that consists of 3 jobs (like our testing stage), remember that the 3rd job might take longer to complete with a concurrency of only 2 jobs. The first 2 will run in parallel; the 3rd one will have to wait for a free job slot.

During our unit tests, we spawn a webserver to test several of our key features:

- Uptime & downtime detection
- Crawling that test server to detect mixed content & broken links
- Several of our custom HTTP header features

This test webserver serves our public website, which in turn relies on compiled JavaScript & CSS (Laravel Mix). Those assets get compiled with webpack, which is why we run the asset-building stage before our unit tests.

Without it, our tests would simply fail as we can't render our homepage.
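In .gitlab-ci.yml, that ordering is simply the order in which the stages are declared. A simplified sketch of our pipeline (the stage names are illustrative):

stages:
  - build     # compile assets with webpack, install composer dependencies
  - test      # phpunit, crawling & uptime checks against the test server
  - deploy    # not wired up in CI yet, see further below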

Crawling our own site - and various custom endpoints to simulate downtime or slow responses - has the additional benefit that we validate that (most of) our website is functioning correctly before we deploy.

Some of our stages depend on the output of the previous stage in order to work. A good example is our testing stage. When we run phpunit, it will fetch our homepage, which in turn relies on the CSS & JavaScript that got generated in the previous stage.

Gitlab CI allows you to set your dependencies per stage.

phpunit:
  [...]
  dependencies:
    - build-assets
    - composer
    - db-seeding

Setting the dependency makes sure the artifacts get downloaded from that particular job into this one, essentially copying the output of one job to another.

A cache is, as the word implies, a local copy of the data. It's available only locally on the server and is not guaranteed to be present.

An artifact is - in our own words - what you want to store in a job to pass on to the next one. Initially, artifacts were meant for whatever you wanted to survive beyond a job: generated logs, error files, PHPUnit coverage results, etc.

But this feature can be used more broadly than just exporting test results: you can use it to pass the output of one job on to the next.
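For example, the asset-building job can declare its compiled output as artifacts, which the phpunit job (through the dependencies shown above) then downloads (the paths are illustrative):

build-assets:
  stage: build
  script:
    - yarn install
    - yarn run production
  artifacts:
    paths:
      - public/css/
      - public/js/
      - public/mix-manifest.json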

We first thought to use caches for this purpose, as these get stored locally and are available faster. However, the cache isn't guaranteed to be there and about 30% of our pipelines would randomly fail because it was missing a composer vendor directory, compiled assets, ... even though those jobs completed just fine.

An obvious step missing here is - perhaps - the most important one: deploying the application.

We haven't implemented this in Gitlab CI yet. Our deployment is automated of course, but right now it's not tied to the status of our CI pipeline. We can deploy, even if the tests fail.

We're still a small team and we make the decision to deploy thoughtfully, but in a controlled way. Since many Laravel users use Envoyer for their deployments, further automation could be done to integrate that too.

We'll highlight our deployment strategy (and the reasoning for not coupling it to Gitlab CI) in a later blogpost; there are a lot of nuances there that deserve highlighting.

Now watch as Gitlab detects your config and tries to run your jobs. At this point, you might want to consider either subscribing to Gitlab CI or running your own Gitlab Runners to execute the tests.

If you spot any improvements or gotchas, or have general remarks, we'd love to hear your thoughts in the comments below!

2018-12-12T12:58:48+00:00 https://ohdear.app/8
We have just finished our transition from a websocket server based on laravel-echo-server to one that is fully driven by PHP: laravel-websockets. In this post, we'll highlight why and how we made that move.

As we're built on Laravel, we already run a fair bit of nodejs during our build phase. Our frontend JavaScript & CSS already get compiled via webpack. So in a way, our stack already includes node to make this all happen.

Part of our smooth user experience (if we say so ourselves ;-)) comes from the use of websockets, which allow us to give instant feedback to our users on their dashboard & our homepage. To make that work, we've always used laravel-echo-server, a node implementation of a websocket server.

To make that websocket server work, you can use one of 2 methods: a Redis queue, or publishing messages directly over HTTP. We used a Redis queue, which means the following events took place for us:

- Laravel publishes a message to a Redis channel
- The echo-server listens for new events being stored there
- The echo-server relays those to its subscribers/clients
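On the Laravel side, the first step just means broadcasting an event while the broadcast driver is set to Redis. A minimal sketch (the event & channel names are hypothetical):

use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class UptimeCheckCompleted implements ShouldBroadcast
{
    public $siteId;

    public function __construct(int $siteId)
    {
        $this->siteId = $siteId;
    }

    // Firing event(new UptimeCheckCompleted($id)) publishes this event to
    // Redis; the echo-server relays it to the websocket subscribers.
    public function broadcastOn()
    {
        return new Channel('sites.' . $this->siteId);
    }
}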

This has worked very well for us, without any issues.

But we found ourselves in the unique spot to test an even simpler approach: run a websocket server fully in PHP without the need for node.

Our setup is already using Nginx as a TLS proxy as well as Supervisor to keep all our workers running, so we already had the building blocks in place to add some configuration for our new websocket server.
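The Supervisor entry for the websocket server is tiny (the program name & path below are examples):

[program:websockets]
command=php /home/forge/ohdear.app/artisan websockets:serve
autostart=true
autorestart=true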

One of its biggest gains is in our development process: we now just need to run php artisan websockets:serve to get a local websocket server going, without having to deal with the (rather confusing) configuration of laravel-echo-server.

Additionally, we simplified our server setup and now fully rely on PHP without Node for running websockets. Managing less software is always a win, especially from a security angle (keeping track of the node ecosystem and the dependencies of the echo-server).

For us, it was a no-brainer to make the switch.

2018-12-04T22:21:33+00:00 https://ohdear.app/9
We've launched a fresh new look for the Oh Dear! homepage and a lot of tweaks to the overall look & feel of the public-facing pages of our site. Allow us to show those changes in more detail!

This is visually the most noticeable change we've pushed. When we first launched, our design looked like this.

We liked that design quite a bit. It stood out. You don't see many startups foolish enough to make their entire homepage screaming red. It was also sort of a reference to the red error screens you'd see in the browser when a site's certificate had expired.

But, the number one piece of feedback we received was: "aargh my eyes!".

And, well, we couldn't blame them. :-)

So here's a freshly designed homepage, easier on the eyes, with updated text to showcase what we do and what our strengths are.

There's still some screaming red involved, but it doesn't cover the entire homepage anymore. Besides highlighting what we're good at (doing more in terms of website monitoring than most of our competitors), we also have a live counter of the number of checks we've run so far, refreshed live through the use of websockets.

Everything inside Oh Dear! is powered by websockets; now a part of the public-facing website is too!

We applied the same design principles to our pricing and documentation too. We've improved readability by adding contrast between titles & text and by slightly increasing the font size and line height.

Behind the scenes, we've been extending our documentation quite a lot too, which meant our left-hand menu was growing to be a bit too big. That now auto-folds to highlight the current category.

At the same time we added a section to highlight all 3rd party integrations that make use of our API. There's already a Terraform provider, a Telegram chatbot and a CLI client available to talk to Oh Dear! - how cool is that?

If you find areas of our site or application that need improvement, we'd love to hear about them!

2018-11-30T14:15:45+00:00 https://ohdear.app/7
Black Friday.

Everyone's throwing out coupon codes with crazy discounts, right? Why on earth would we be doubling our price for just that day?

For many online services, Black Friday is a huge source of income. Webshops reportedly double or even triple their revenue that day. Many make up for a bad month in just that single day.

On your most important sales day of the year, you want your site to be online. Therefore, on that day, our service is twice as valuable.

You see, when we first built Oh Dear!, it was to fix what we considered to be wrong with the current monitoring solutions out there. Many of them just did one check (mostly the homepage). Many didn't report certificate errors. Nearly no one was crazy enough to crawl an entire site and report all broken pages.

Yet, in order to be a successful online business, you need all of that.

You can't afford downtime when you're in peak sales. You want to be certain about your uptime and place your trust in a respected monitoring provider. So that's what we'll do on Black Friday: take care of monitoring your sites, 24/7.

Now, if you want to be absolutely certain you're online, you could go ahead and subscribe to Oh Dear! in advance. If you use coupon code BLACKFRIDAY18 before Black Friday, you'll get 30% off your first month.

The code expires on Thursday, November 22nd at midnight.

Oh by the way, we're only doubling our prices on Black Friday, not permanently. We're not actually crazy, you know. :-)

2018-11-20T08:52:57+00:00 https://ohdear.app/6
Today we released our new open source package called nova-ohdear-tool. It's meant to be installed into a Laravel Nova app. Laravel Nova is a package that allows you to easily create admin panels for Laravel applications.

The benefit to you - or your users - is to be able to get a birds-eye view of the health of your entire site.

You can easily see which of your pages are broken and - since you're already in your admin panel - quickly fix the problem or the broken link. Once that's done, just request a new crawl straight from your application and watch the results come in.

2018-11-19T13:44:03+00:00 https://ohdear.app/5
We've launched a cool feature for our users on the Laravel Forge platform: automatic monitoring of any of your sites and servers managed through Laravel Forge!

Forge recently introduced a feature called tags, which allows you to add custom tags to any server or site in Forge.

We use those tags to determine which sites we should automatically add to your Oh Dear! account. Every site or server tagged with oh-dear will be added. This allows you to still pick which sites should - or should not - get monitored.

There are 2 ways you can use these tags right now:

- Add a tag to a server
- Add a tag to a site

If you add a tag to a server, we will automatically monitor every site that gets added to it. This is convenient for users that have multiple sites on a single server.

Alternatively, if you want more fine-grained control over the monitoring, you can add the tags to an individual site instead of to the server. We'll only import the sites that have the tag.

In the same settings page, you can decide which checks we should automatically enable for newly found sites. This applies to the sites that will be imported straight away, as well as the ones we will add later on as you add more sites to Forge.

When we import a website, we will assume it's HTTPS enabled. Hey, it's 2018 after all. If it's an HTTP-only site, you can add an explicit extra tag called oh-dear-http to force the use of HTTP instead.

The creator of Forge, Taylor Otwell, was kind enough to provide us with a sign-up link for Forge that will give you a 25% discount on the first bill you'll get from Forge. You can find the sign-up link on our Forge Import settings screen. Hurry up, because that link is only valid until 16 November 2018.

If you don't have an Oh Dear! account yet to view that Forge coupon code, register here. The first 10 days are free, no questions (or credit cards) asked. If you decide to subscribe to Oh Dear!, you can use this coupon code to get a 25% discount on the first bill you'll get from us: OH-DEAR-LOVES-FORGE. This coupon code is also only valid until 16 November 2018.

Most of our servers are provisioned through Laravel Forge, as well as the sites that run on them. It is an amazingly powerful tool that benefits the entire Laravel community.

We are grateful for both the API and tags support that recently got added to Forge, which allowed the development of this feature.

The best kind of monitoring is the one you don't have to think about; this is another step in that direction.

2018-11-17T01:00:40+00:00 https://ohdear.app/3

Laravel offers a convenient way to create asynchronous background tasks using its queues. We utilize those heavily at Oh Dear! for all our monitoring jobs, and in this post we'll share some of our lessons learned and what we consider to be best practices.

One strategy we first used was to make a generic separation between fast and slow jobs.

If you remember the purpose of Oh Dear! (monitoring sites, crawling them, reporting page errors etc.), it quickly becomes apparent that we are heavily dependent on 3rd parties.

If the site we're monitoring is quick to crawl, it might take only a few minutes. If it's a slow site (on which we have to apply rate limiting), it might take a very long time but consume few resources along the way.

"Slow" to us means the remote side is slow. It also means we're wasting CPU cycles just waiting for the other end to respond. We could easily spin up more workers and do more in parallel, since the load is negligible on our end.

The result was that the fast queue consumed most of the CPU & memory on our end: it quickly burned through its queue, performing all the "fast" tasks (checking uptime, certificate checks etc.).

The slow queue was running more consumers (aka workers), because we were constrained by the remote end, not ours. This meant we could easily run more of those.

While this worked for the first few weeks in beta, as our customer base grew, this also posed a few problems: while the fast queue was generally pretty fast, we were pushing a lot of different workloads on there.

It didn't take long for us to create specific queues for each job "type" we were launching. We're now in a setup where each check performed lives on its own queue. This allows us a great deal of flexibility in terms of sizing and scaling them.

We run a few very specific job queues too. Our main application runs on the following queues.

- certificate_health
- certificate_transparency
- crawler
- uptime

These queues run our monitoring checks for all our sites. On top of those, we have assisting queues.

- default
- notifications
- webhooks

The notifications and webhooks queues are pretty self-explanatory: they run all our notifications (Slack/Discord/Nexmo/...) and perform the webhook calls to 3rd party endpoints.

The default queue is used for small, one-off jobs that are just too small to warrant their own queue (like sending the mails for our monthly uptime reports).
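In practice, every job simply names its queue when it's dispatched, and each queue gets its own pool of workers, sized independently (the job class names below are illustrative):

// Each check type lands on its own dedicated queue.
RunUptimeCheck::dispatch($site)->onQueue('uptime');
CheckCertificateHealth::dispatch($site)->onQueue('certificate_health');

// Each queue is then served by its own workers, e.g.:
//   php artisan queue:work redis --queue=uptime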

When you add a site to be monitored, your initial experience looks like this.

That's the user experience we want: immediate feedback and the feeling of responsiveness when you add a site. The certificate & uptime checks happen near-instantly, the broken links checking takes a bit longer as the site needs to be crawled.

To make that work, we push the sites that have just been added to an empty queue with a custom set of workers that can immediately pick them up.

Before we had that dedicated queue, a newly added site went to the back of the queue and had to wait for all other jobs to complete first. The initial user experience gave the impression we were slow & unresponsive, because it took too long for feedback to show up in the UI.

We first tried to get away with adding them to the front of the queue instead of at the back, but that didn't entirely work either: if, for some reason, the uptime check took a few seconds to finish its current job, we still had to wait for our new priority job to be picked up.

And since we monitor & crawl sites we don't control, we have no idea how long each check might take. We need to assume the worst case & plan for that.
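That worst-case planning translates into hard timeouts on every outbound request, so a slow remote site can never hold a worker hostage. A sketch with Guzzle (the values are examples, not our production settings):

use GuzzleHttp\Client;

$client = new Client([
    'connect_timeout' => 5, // give up if no connection is made within 5s
    'timeout' => 30,        // abort any request that runs over 30s total
]);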

The on-demand runs pick up those sites that have just been added and those jobs that the user requested to be run on-demand (which you can trigger either from your Oh Dear! dashboard or through the API). If we ever reach the point where we detect saturation (i.e. the queue never empties), we will have to split the queue again to separate the initial runs from the on-demand runs.

We'll share some of our internals, how we run Oh Dear!, the challenges we face and how we decide to tackle them. Expect a mix of technical insights, some business-related posts and a few customer success stories. We want to be as open as we can be about our monitoring tool, showing the ins & outs.