Zabbix is an open-source system monitoring and alerting tool. Even a home data center requires monitoring the status of its equipment, and when there is an issue, someone needs to be alerted that things are not working correctly.

Ted Cahall uses Zabbix for Monitoring and Alerting

As I have mentioned, I run several Linux servers at home and in the AWS cloud. This is great – but it could become a nightmare to know when servers are having issues. Enter Zabbix – it is free and comes included in most Linux distributions, so it is a natural choice for monitoring Linux servers. Another great feature is that it can monitor Windows machines and Macs as well.

High Level Zabbix Overview

Zabbix is written in PHP and stores its configuration, monitoring, and alert data in a MySQL database. All of these are also free and included in Linux distributions. I would recommend adding the Zabbix repo to the package manager on each of your Linux machines. As of this post, the agent version shipped with Ubuntu 16.04 LTS is 2.4.7, whereas I selected version 3.0 from the Zabbix repository. Those Linux machines are currently running version 3.0.16 and get updated as Zabbix releases new code.

Zabbix uses a server to collect the data and store it in MySQL. It also uses “agents” to run on each of the monitored machines. The agents are further configured to monitor certain aspects of each of the Linux machines on which they run. Zabbix monitors CPU, Memory, bandwidth, context switches, etc. right out of the box for most Linux machines without configuration.
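Out of the box, each agent only needs to know where the server is and what to call itself. A minimal sketch of the agent configuration (the IP and hostname below are placeholders, not my actual setup):

```ini
# /etc/zabbix/zabbix_agentd.conf (minimal sketch; values are placeholders)
Server=192.168.1.10        # Zabbix server allowed to poll this agent
ServerActive=192.168.1.10  # server that receives active checks
Hostname=linux-box-01      # must match the host name configured in the Zabbix UI
```

Restart the zabbix-agent service after editing, and the stock Linux template picks up the CPU, memory, and network items automatically.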

Running in Cahall Labs

Currently I have the agents monitoring the MySQL DBs on some of the Linux servers, as well as the Apache web servers and Tomcat app servers. I am also monitoring my Cassandra and Hadoop clusters. An interesting open source feature I found is the ability to monitor my various APC UPS battery backups. Now I know if one is getting sick or when it goes onto battery power. This is useful for knowing the power has gone out when I am not at home. The agent can also be configured to monitor a Java JVM through its JMX gateway.
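For the UPS monitoring, the agent's UserParameter mechanism lets you expose any shell command as a Zabbix item. A sketch of how that can be wired up (this assumes the apcupsd daemon and its apcaccess tool are installed; the key names are my own, not standard ones):

```ini
# /etc/zabbix/zabbix_agentd.conf – custom items (sketch; assumes apcupsd is installed)
UserParameter=ups.status,apcaccess -p STATUS     # ONLINE / ONBATT etc.
UserParameter=ups.charge,apcaccess -p BCHARGE    # battery charge percentage
UserParameter=ups.load,apcaccess -p LOADPCT      # load as a percent of capacity
```

Each key then becomes an item you can graph and alert on like any built-in metric.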

Zabbix can scale to thousands of servers and has a proxy feature to help offload the main server. We used Zabbix at my previous company and monitored thousands of servers in AWS as well as in our private cloud. The auto-discovery feature allowed us to locate new VMs and automatically add them to the monitoring and alerting framework. Zabbix is now shipping version 3.4; I have not tested beyond 3.0 at this time.

Alerts

Zabbix can alert you when something has exceeded a pre-configured threshold. For a home data center, this can be tricky: it was not clear that Zabbix could simply use a Gmail account as the outbound sender. I overcame this by adding an SES (Simple Email Service) account to AWS. This allows my Zabbix server to connect to the AWS SES server and send outbound alert emails to my personal email accounts. See the sample email alert sent via Amazon SES below:

Zabbix Alert email sent via Amazon SES.
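Before wiring SES into the Zabbix email media type, it is worth verifying the SMTP credentials from the server itself. A quick sketch with curl (the endpoint, user, and addresses below are placeholders, not my real values):

```shell
# Verify the SES SMTP credentials from the Zabbix server (all values are placeholders)
# message.txt holds the raw email: headers, a blank line, then the body
curl --ssl-reqd \
  --url 'smtp://email-smtp.us-east-1.amazonaws.com:587' \
  --user 'SES_SMTP_USERNAME:SES_SMTP_PASSWORD' \
  --mail-from 'alerts@example.com' \
  --mail-rcpt 'me@example.com' \
  --upload-file message.txt
```

If that sends cleanly, the same host, port, and credentials go into the Zabbix media type configuration.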

It also supports sending SMS text messages as alerts. However, I have not implemented that feature due to the costs of the SMS service. Email is good enough for my home data center.

Ted Cahall highly recommends Zabbix!

In summary, I find there is very little I cannot accomplish with Zabbix for my home data center (or for the hybrid clouds at my previous employer). With some innovative thinking, I have seen everything measured, from room temperature to the number of people passing through an automated gate.

If there is a way to get the data back to a Linux server, there is a way to monitor and alert on it with Zabbix. It is the Swiss Army knife of systems monitoring tools – and it is FREE!

Synology NAS Servers

Some of the most important components of my home data center are my Network Attached Storage (NAS) servers.

Synology 1517+ Consumer NAS

I have had my old NetGear ReadyNAS unit for at least 9 or 10 years now. It has a whopping 1.3TB of storage across 3 drives in a RAID 5 configuration. NAS units are great for storing my racing videos that no one will ever watch, old photos (now that everyone with a phone collects thousands of photos a year), and copies of my important tax, mortgage, and legal documents. Some of my friends store TBs of pirated videos from the dark web. I am a Netflix and Apple TV guy, so that saves me a few TBs.

Goodbye NetGear, hello Synology

While the ReadyNAS served me well, it was long in the tooth and short on TBs. It was also missing some interesting new features that I did not even know I was living without until I bought my first Synology NAS back in 2015 – the DS1515+. These guys have done the whole consumer NAS thing really well.

Main attraction

The main feature I use and like is the immediate file sync of directories on my Linux servers (and on one of my Windows 10 desktops as well). Once I configured this option and selected the directories I wanted synced, all of those files are continuously and safely stored on the NAS. No backup jobs – it copies each file to the NAS file system immediately upon edit or save. It is also a nice way to move files from one machine to another, since the systems can all see the disk replicas across the servers.

This does not mean I do not do backups. I have Amazon Glacier storage, and my critical legal, tax, and mortgage files are sent out to Glacier from the Synology NAS once a week. The great thing is that Synology provides the service that runs on the NAS to do the Glacier backup. Really simple integration.

Built-in Servers (services)

The disk drives are even “hot swappable”. No downtime if you have a drive go bad. Aside from rock solid hardware, another amazing thing about Synology is the application ecosystem they provide on the NAS server. They want you to make this your “server” for everything and anything you do in your home. Want a VPN server? It has that. DNS? Yep. Connect with my Macs, Windows, and Linux machines over their native network protocols? Of course. It has email servers, video security servers (I bought two cameras to test and they are great), and video, photo, and audio servers. There are Active Directory, Email, Network Management, Print, Content Management, WordPress, MediaWiki, E-Commerce, Docker, Git, Web, Plex, Application (Tomcat) and Database servers! These all run natively on the NAS – not just off the disk, but in its memory and on its quad-core CPU.

I cannot possibly list all of the features and servers these new Synology NAS units supply. I have tested many of them, and they are rock solid and dependable. I never envisioned using my NAS as a “server” other than as a network attached storage server. Now it can work as so much more.

The more the merrier

The Synology product has me so hooked, I bought my second unit! A DS1517+ with 8GB of main memory and 30TB of storage (5 disks @ 6TB each). I use it for the security video storage and as a snap backup of the first unit. Had I planned it better, I could have arranged these two Synology units in an active-passive mirrored configuration, which would allow one to take over if the other crashed. Clearly I do not need that at home. But it is nice to know that simple consumer-grade products offer these features now.

Highly Recommend Synology NAS

I fully and highly recommend these Synology NAS products. They do not sell direct. I recommend finding them on Amazon after you spend hours like I did on their product site comparing models and features.

[Update] One cool thing I forgot to mention before I hit “publish”, is that this unit of course runs Linux. It is a 3.10 kernel version modified by Synology. This is the reason so many of these services (servers) are available as a stock part of the unit. Synology chose to make Linux the engine to run the NAS and brought along many of the Linux services. With simple configuration, you can ‘ssh’ into the NAS and work on it as though it were a plain old Linux box. It is really well done.

The marrspoints.com racing application recently got some SEO updates. These were long overdue in terms of getting better rankings in Google. Now drivers' season results URLs include the driver's name (example for Mike Collins), and the race results include the race name and classes (example for the 2017 MARRS 5 SM Feature race). Most importantly, the Points Leaderboard URLs now include the class name and season.

On top of all of that, I automated the sitemap to build nightly and worked with the Google Search Console to fix duplicate title tags and content descriptions.
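The nightly sitemap build is just a cron job on the server. A sketch (the script path and name are illustrative, not the real ones):

```shell
# crontab -e on the web server: rebuild the sitemap at 2:30 AM every night
30 2 * * * /opt/marrspoints/bin/build-sitemap.sh > /var/log/sitemap-build.log 2>&1
```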

Enter Tuckey – SEO URL Rescue!

This all should have been done long ago, but features were my first priority. I used the Tuckey UrlRewriteFilter for all of the friendly URL magic. It really is awesome, and I am glad I remembered it from all the way back in my CNET days, when we used it on a project there.
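The filter is driven by a WEB-INF/urlrewrite.xml file in the web app. A sketch of the kind of rule involved (the paths and parameter names here are illustrative, not the actual marrspoints.com ones):

```xml
<!-- WEB-INF/urlrewrite.xml (sketch; paths and parameters are illustrative) -->
<urlrewrite>
    <rule>
        <!-- /results/2017-marrs-5-sm-feature -> the old parameterized JSP -->
        <from>^/results/([a-z0-9-]+)$</from>
        <to>/raceResults.jsp?race=$1</to>
    </rule>
</urlrewrite>
```

The regex capture group carries the friendly URL segment back into the query parameter the existing JSP already understands, so no page code has to change.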

I still have some cleanup to do for pages selected via form drop-down menus; my sitemap tool does not include those paths. I know Google is a lot happier to no longer see parameters on the URLs. It takes a LOT of JavaScript magic to rewrite the form action to use the rewrite destination, so that may be left for another year or two until it works its way up the stack in terms of importance.

Keeping (too) busy

Since I left Digital River at the end of February, I have been working closely with Scott Scazafavo on a stealth start-up idea we had been kicking around. Most mornings I hit my office early and attempt to further the research or code base. I worked on some Java REST API code I wanted to improve from its early usage at marrspoints.com. I remembered there was a simple test site that gave canned responses to HTTP GET and POST requests, along with cookies and the like. After a tad of searching, I found it again: httpbin.org – what a nice tool. Simple yet elegant, and great for testing HTTP code where you just need a simple endpoint. Tutorials on the Internet should just use this site in their examples, as it likely will not change much.
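For anyone who has not used it, httpbin.org exposes canned endpoints that echo back whatever you send. A few examples of the sort of thing I was testing (run from any shell):

```shell
# httpbin echoes the request back as JSON – handy for checking what your code really sends
curl 'https://httpbin.org/get?demo=1'                  # query args and headers echoed back
curl -X POST -d 'name=ted' https://httpbin.org/post    # form body echoed back
curl -b 'session=abc' https://httpbin.org/cookies      # cookies echoed back
```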

The dangers of the Internet

This is where the danger began… As I finished the simple testing I was doing and was ready to move onto the next phase, I noticed the site had the author's name with a hyperlink. Since I wished I had written such a useful “demo” site myself, I wanted to learn a tad more about him. Through Kenneth Reitz, I learned that I comparatively don't have many cool hobbies or talents (I am not that great of an auto racer, and I have not written books, published music, been a professional speaker, or even an amateur photographer). That is all on top of his enormous contribution to the Open Source space. This guy is REALLY talented. Through a link on his personal values page, I saw another link stating that “Life is not a Race, but it has No Speed Limits”. Of course that deserved a click!

Through Kenneth and that link, I met (online, so to speak) Derek Sivers and read his axiom that “Life Has No Speed Limits”. And through that story, the life of Kimo Williams and why focus matters. Focus? On the Internet, with so many lessons to learn?

Saying “Hell Yeah!”

It was great to “meet” three SUPER TALENTED people on the Internet this morning – people I will likely never meet in person or even exchange emails with, yet people from whom I have already learned. While perusing Derek's site, I found another life lesson to which I truly try to adhere: No “yes.” Either “HELL YEAH!” or “no.”

Being a caveman

So what is wrong with curl? Nothing. But Postman (at getpostman.com) is simply one of the best tools I have used while developing code that consumes APIs. This is another case where I was using caveman tech (curl) to do a job handled so elegantly by Postman's desktop app, which runs on Linux, macOS, and Windows (and syncs across them).

Even a stealth API…

I am now working on a stealth start-up idea with an even more stealth cohort of mine in the financial space. The data company we have tentatively selected (and their API documentation) pointed me to Postman. It is awesome. I have deeply tested the financial access, accounts, instruments, etc. – all on my own accounts, in only a couple hours of work and research. Postman is scriptable, has variable replacement, etc. Oh, and the best part: a single developer license is FREE. My favorite price.

To think Sam Morris at Digital River talked about Postman dozens of times, and it never occurred to me to go look at it. That cost me a lot of wasted time – especially since I know Sam is “the man”. Thank you Sam – the second time I heard of it, I knew to go get a copy and learn it quickly.

Unity vs Gnome

I hate to think of myself as a tech Luddite. Being an Ubuntu Linux fan has made me familiar with the Unity desktop. Recently, I have been playing with 17.10 to see what is coming in 18.04 LTS. I never thought I would defend the Unity desktop, as my earliest Linux days were split between the Gnome and KDE desktops. But I wish I had my old Unity back. Yes, I know I can return to it in 17.10 – but it is becoming mostly unsupported. Incremental scaling is essential with today's 4K monitors. Or I need Lasik. Uber-Lasik, in my case.

Why I like LTS.1

I never actually run the first point release of an LTS version; I waited for 16.04.1 to put anything real live on 16.04 LTS. It seems the Gnome desktop has a big memory leak, and it likely will not be fixed in the initial 18.04 LTS release in April.

A Gnome future in Ubuntu

I know this is all for the good – that change thing – moving to Gnome in this case. It is far more widely supported and used across more variants of Linux. I used to be a CentOS champion as I loosened the evil grip of Red Hat subscription fees back in my AOL cost-cutting days; my home data center has since become almost exclusively Ubuntu. It seems I will be straddling Gnome and Unity for a year or so. One other word of caution: the Gnome 3.26 desktop (used in 17.10) does not truly support incremental UI scaling yet. This is a problem for people like me with a 4K laptop screen or large 4K desktops. There is a workaround. However, it is not clear whether fractional scaling will make it into Gnome 3.28, which ships with 18.04 LTS.
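For the record, the workaround I have seen referenced is mutter's experimental fractional scaling flag on Wayland sessions. Use at your own risk – this is a sketch, not a supported configuration:

```shell
# Enable experimental fractional scaling in GNOME 3.26 on a Wayland session
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
# Then choose a fractional value (e.g. 150%) under Settings -> Displays
```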

Happy times. It is really hard to see my shell windows in a non-scaled up Gnome desktop on a 4K laptop screen.

Getting my latest NUC

I am pretty psyched to get my latest Intel NUC. The NUC7i7DNKE has an 8th generation Intel® Core™ i7 vPro™ quad-core processor (4.2 GHz “Turbo”) with 32GB of DDR4 2400 MHz RAM and a 1TB SSD. Not to mention built-in 4K UHD video with HDMI ports and USB 3.0.

My home data center NUC cluster

I will use this as my main development machine. It is crazy that I tend to run out of RAM on my 16GB machines running Ubuntu.

This will be my 9th NUC. Maybe I am a little too in love with these things. They make great clusters for home research and development on distributed technologies such as Cassandra and Hadoop. I have three nodes running Cassandra and Hadoop today – and am looking to add a 4th node when I free up my current development machine NUC.

Quiet, Low Power, great for clustering!

They are whisper quiet and use very low power. There are 5 in a stack sitting on my desk next to me as I write this, and they make less noise than a single standard PC. In fact, they seem to make no noise at all.

I also run Windows 10 on one as a home theater type of PC connected to a Samsung UHD TV via HDMI. These NUCs are awesome. I gave my old i3 core media NUC to my younger brother as a gift.

Here is an old picture of my early stack of NUCs. They are each 4″ x 4″.

New Blog along with some old content

As a past media executive at companies such as CNET Networks, Microsoft's MSN, AOL, and the early social network Classmates.com, I have operated a blog here and there over the years – mostly to test out SEO ideas, cross-link my sites, etc.

Started on LiveJournal in 2004

One of my unfortunate SEO decisions was using LiveJournal.com for my tech postings. In 2004, as CTO of CNET Networks, I was fortunate enough to meet Brad Fitzpatrick, who invented LiveJournal (as well as memcached). Since we made a (failed) bid to buy the site, I decided I should use it and get to know it a bit. I used it to blog about some of my non-proprietary experiences with technology and software from time to time.

My last post there was almost two years ago to the day. I was musing on the intersection of my auto racing hobby and my technology hobby. It was the lack of automation in my auto racing league's points standings that finally brought these two passions together. This was all enabled by Open Source, Intel NUC computers (the home data center), and Amazon's AWS hosting, resulting in the creation of the marrspoints.com race points tracking web application.

LiveJournal did not seem to get the SEO juice

Compared to modern blogging platforms such as WordPress (on which this blog is built), LiveJournal never got the great SEO features it deserved. So today I am moving my LiveJournal content over to a new home here at cahall-labs.com. All of the posts have been successfully moved as of this post.

Open Source and my Home Data Center

I have a few tech topics that are of interest to me. They include:

My home data center evolution

The Open Source operating systems and application software I use at home

Cassandra and Hadoop

The marrspoints.com site was simple to build, but the back-end tools to ingest all of the race data were a lot more work. I occasionally look at ways to change the data ingestion or analytics, so I play with tools such as Cassandra and Hadoop on the NUC cluster in my home data center. In general, I will try NOT to blog about racing here. That will move to a blog at either cahallracing.com or cahall.com.

Thank you LiveJournal – hello WordPress

So thank you to LiveJournal for the tools and time. It was a good 14-year run. There is also an old, outdated racing blog on WordPress; it will likely be moving to a new home in the next month or two. It will be good to get back to using the tool Matt Mullenweg built (WordPress). I had the opportunity to work with Matt at CNET when he spent a year there on his way to becoming famous. Clearly I wish I had made a blog tool. Some day I may even blog about Gavin Hall and Alex Rudloff, who built Blogsmith – the platform that powers TMZ.com and most of the AOL blogs. I guess I have met most of the people who built blogging tools… Very, very smart and talented people.

It really gave me something useful to work on – something from which other racers could also benefit.

Standing on the shoulders of giants

What an honor to be recognized. But these things do not happen in isolation. I could not have done it without the help and guidance of Lin Toland. Lin was there providing the feature requests and feedback on the design and functionality. He also did a lot of unpaid QA for my early roll-out. You are a first class leader Lin – thank you.

Lin still helps navigate the WDCR SCCA region for me and helps me look at new feature requests including Bracket Racing with Chuck Edmondson.

Thank you for the start!

I would also like to thank Mike Collins of Meathead Racing for getting me involved in racing with the SCCA. It’s like putting cash in a coffee can and lighting it on fire!

It has been over five years since my last post about software and technology. It’s not that I stopped using it. I just stopped talking about it. Lately I have been on a bit of a streak. I have been working on the MARRS Points tracking app in AWS for over a year now. It will now be the official points tracking application for the 2016 season across all race classes in the Washington DC Region (WDCR) of the SCCA. I have actually done something mildly productive with my spare time!

An AWS Project Was In Order

It was mainly by happenstance that I got the app going. I wanted to work in the Amazon AWS cloud a bit to understand it better; I had managed teams using it for years at various companies, so it seemed like a reasonable learning experience. I could have easily chosen Microsoft Azure or the Google Cloud, but AWS has the deepest legacy, so I started there. Once I logged in and started to play with AWS, they let me know my first year was FREE if I kept my usage below specific CPU and memory levels. Sure, no problem. But what to build, what to do? I remembered I had built an old Java/JSP app as a framework for a racing site for my brothers and me, called cahallbrosracing.com. GoDaddy had taken their Java support down, and it had been throwing errors for years. So I decided that was the perfect domain to try, and grabbed the skeleton code. It would be some type of Java/JSP racing application that used a MySQL database backend. But for now, I just needed to see if I could configure AWS to let me get anything live.

EC2, RDS, a little AWS security magic…

I provisioned an EC2 node, downloaded Tomcat and Oracle Java, and went to work. In no time, I had the fragments of the old site live and decided I should put my race schedule online. The schedule would not come from a static HTML page; it would use a JSP template and a Java object to get the data from the database. Then each year I would just add new events to the database and display them by year. Quickly the MySQL DB was provisioned, network security was configured, DB connectivity was assembled, and the schedule was live. OK – AWS was EASY to use, and I now had a public-facing Java environment. I was always too cheap to pay for a dedicated host – too cheap to sort out a real public-facing Java environment that let me control the Linux services so I could start and stop Tomcat as needed. But FREE was right up my alley.
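The setup itself was nothing exotic. A rough sketch of the shape of it, assuming an Ubuntu AMI for illustration (I actually downloaded Tomcat and Oracle Java by hand; the package names and RDS endpoint below are placeholders):

```shell
# On the EC2 node: a servlet container plus a MySQL client for the RDS database
sudo apt-get update
sudo apt-get install -y tomcat8 mysql-client

# Sanity-check connectivity to the RDS instance (endpoint and user are placeholders)
mysql -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u appuser -p

# Deploy the WAR and restart
sudo cp cahallbrosracing.war /var/lib/tomcat8/webapps/
sudo systemctl restart tomcat8
```

The AWS-specific work is mostly in the security groups: the EC2 node needs inbound 80/443, and the RDS security group needs to allow 3306 from the EC2 node only.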

So there I was, developing Java, JSP and SQL code right on the “production” AWS Linux server. Who needs Maven or Ant? I was building it right in the server directories! Then I realized I did not have backups and was not using a source code repository. It could all go away, like a previous big app I wrote did when both of my RAID drives failed in the great 2005 Seattle wind storm. Not a good idea.

Intel NUCs (and GitHub) to the rescue!

Enter the NUCs!!! I had learned about the Intel NUC series and bought a handful of them to make a home server farm for Hadoop and Cassandra work. These units are mostly i5 models with 16GB of RAM running Ubuntu 14.04.4 LTS. I realized I needed to do the development at home, keep the code in a GitHub repository, and then push updates to AWS when the next version was ready for production. My main Java development NUC has been awesome. It is a great complementary setup: an AWS “production” environment in the cloud and a Linux environment at home, with the source code repository also in the cloud. I even installed VMware Workstation on my laptop so I have Linux at the track. This allows me to pull the code from GitHub down to my laptop and make changes from the track. It's almost like I have made it to 2013 or something.

Why software is never “done”

Well, once I got going, I wanted to track my points in the MARRS races, so I made some tools to allow manual entry of schedules, race results, etc. This manual process clearly did not scale well. The discovery of Race Monitor and their REST APIs solved that issue. I wrote code to pull the results back from Race Monitor, using Google's GSON parser to marshal the JSON data into the objects used in the Java code. Unfortunately, Race Monitor does not pass a piece of critical data: the SCCA ID for each racer. The next step was to work with the Washington DC Region and the fine people at MotorsportReg.com to use their REST APIs to get that data for each race. This simple Java app has become complex, with two REST APIs and tools to manage them.

The rest is history. The tool can now also import CSV files from the MyLaps Orbits software. A simple CMS was added to publish announcements and steward’s notes per race. All of the 2015 season has been pulled into the application across all of the classes and drivers. Many features, bells and whistles have been added thanks to Lin Toland’s sage advice. Check out the 2015 season SSM and SM Championship pages. A ton of data and a lot of code go into making those look simple.

Racing into the future with MARRS

I am really looking forward to being able to help all of the WDCR MARRS racers track their season starting in April. Let’s hope I can drive my car better than last year and race as well as I have coded this application.

It is kind of odd to think that my desire to play with AWS caused me to build something useful for hundreds of weekend racing warriors. Now the next question, should I make it work for every racing group across the world? I mean multi-tenant, SaaS, world domination? Hmmm… Maybe I should try to finish better than 6th this year…