I grew up on science fiction. I loved reading stories of The Future, where space travel was commonplace, where all energy was generated cleanly, and where we worked side-by-side with machine intelligences to accomplish tasks.

Of course, that wasn’t what most of the books were about; space travel, clean energy, and A.I. were just part of the background. And many of the books I read had issues– most of them failed to properly predict the role that women play in society (or often failed to credit them at all, aside from being a love interest for the hero), quite a few were somewhat racist, and all of them glossed over problems of everyday life that couldn’t simply be resolved with technology– but in the background areas of space travel, A.I., and clean energy, they were quite prescient.

“But Ryan, none of those have happened yet,” I hear you saying. And you’re right, none of these have yet reached their full potential. But in a world where SpaceX, Blue Origin, Rocket Lab, Virgin Galactic, and so many others are all working to bring affordable spaceflight to the masses (and getting pretty darn close), I think it’s safe to say that space travel will soon1 become far more commonplace than it is today. We won’t be living in Star Trek, but costs will come down and won’t be the same barrier that they are today.

And in a world where renewable energy continues to make great strides, I think it’s safe for me to predict that we are well on the way to moving the bulk of our electrical generation away from fossil fuels– there are some problems to solve with electrical transmission and energy storage, but we keep moving closer to a solution. As more companies compete for a slice of the renewable energy pie, prices will go down, efficiency will go up, and an increasing percentage of electrical generation will be from renewable resources. Energy will increasingly be stored in batteries of one kind or another, whether they be chemical (such as Tesla’s Powerpacks), mechanical (such as pumped hydropower or compressed air), or thermal (such as molten salt paired with a concentrated solar tower). Those batteries will allow more and more of the baseload supply of energy to come from renewables, reducing dependence on fossil fuels, and paving the way for those to be slowly phased out.

Lastly, we already work incredibly closely with machine intelligences, though that intelligence isn’t quite what was predicted (at least, not yet). Work is increasingly done on computers with incredibly complicated error checking algorithms, designed to reduce mistakes that cost money and lives. Our data is sorted and analyzed by powerful computers that can correlate information thousands of times faster than we can, letting us build more complicated and accurate models to help us learn even more about the world we live in. And most impressively (for me), we can now literally talk to our computers and ask them to do things for us. I know that the first two examples are far more impactful to society, but I still can’t quite believe that I can just talk to my Google Home, have it correctly interpret what I’m asking for, and then give me an answer that makes sense and fulfills my need. It still feels like magic.

The science fiction writers I read in my youth may have written about a world that was full of more straightforward adventure than the one we live in today. But what our world lacks in swashbuckling heroes with laser swords, it more than makes up for with the amount of sheer technical wonder that we have at our fingertips each day. Fifty years ago, for me to write and publish this article would have required a bulky typewriter, a mimeograph machine to create copies of the typed article, a bunch of paper, and a stapler to staple my article up on random light poles for people to read. Today, I wrote most of this post on my Chromebook connected to the hotspot on my phone, while waiting for my car’s inspection to be finished. To publish it, I hit the big blue “publish” button at the top of the page, and trusted it would be sent out to your screen when you wanted to read it. The amount of time and effort saved by modern technology for this article alone could probably cover my grocery bill for a week. And if that’s not evidence of us living in the future, I don’t know what is.

It’s no secret that I enjoy working with new technology and figuring out better ways to do things. For the last couple of years I’ve been dissatisfied with how my internal network was configured; I was using a basic, off-the-shelf, all-in-one consumer-grade router/wireless access point, and while it normally worked okay, it didn’t always give me the insight or visibility into my network that I really wanted to have. “If only there was a way to take a commercial-grade wireless networking system, and set it up in my apartment,” I complained to everyone who would listen. “Then I could configure everything the way I want, isolate devices on specific networks, and conquer the world!”

I knew I didn’t need a solution as expensive or in-depth as Cisco’s enterprise WiFi system, but I wanted to graduate beyond the basic consumer networking solutions. When I found the UniFi system in an Ars Technica review, I was hooked– but I was also still in college, and my meager budget was too small to support a more advanced networking system. It wasn’t long, however, before I graduated, moved to a new apartment, and suddenly had some disposable income I could throw at my home network.

I started my network with the smallest and most basic component: a UAP-AC-Lite, the cheapest wireless access point in the UniFi line. I plugged it into my switch, installed the controller software on my computer, set up my wireless networks, and… it worked! It was easier than I expected, which was almost disappointing. I mean, here I was, with a fancy access point, and it didn’t even require hours of tinkering to get it to work the way I wanted? Where’s the fun in that?

I left the WAP in place for a couple of weeks, and then decided I needed more. I went out and bought the UniFi Security Gateway, or USG, so I could fully replace my all-in-one with some more advanced tech. The USG required some more hand-holding to get up and running, but soon that wasn’t enough, either. I bought a Cloud Key, and then a PoE Switch, and before I knew it I was running UniFi for basically everything on my network.

“That’s all very well and good,” I hear you say. “It’s always fun to read about somebody else spending money when they technically don’t need to. But what does UniFi actually do for you? What problem does it solve?” That’s a good question. UniFi gives me a couple of things I wanted to have: first, a network that I can expand as my needs shift. If I’m not getting WiFi in an area, I can just plug in a WAP, adopt it into the system, and voila! I have signal. Secondly, everything’s managed in one place, the UniFi Dashboard. All my equipment, and anything I add to the system, can be managed through the dashboard in real-time– and I can do it from anywhere, since I connected my Cloud Key to my Ubiquiti account.

The UniFi Dashboard

This means I don’t need to worry about remembering passwords for each of my devices, which is a major plus for anyone, even if you use a password manager. UniFi also gives me some basic deep packet inspection, which lets me keep an eye on what’s talking out to the rest of the internet from my network.

It’s not as detailed as I would like, it’s true. I haven’t found a way to select a specific device and view all traffic from it, for example, but it’s mostly adequate for my current needs. If something pops up that might be a problem, it’s easy enough to explore and inspect to see if anything is truly amiss. As an example, the traffic stats show that remote access terminals have transferred nearly 1.25TB of data to somewhere off-network. If you don’t know what that might be, that’s a problem– a remote access terminal moving lots of data could be an indication of a compromised computer being used as part of a botnet, or could be something spying on you.

Looking at the specific DPI card for that category shows that that entire amount of data has been through SSH, which again could be an indication that something on the network is infected and is phoning home. UniFi lets us drill deeper, however, and I can see that almost all of the traffic is from one specific machine on my network, which is configured to perform incremental syncing to the cloud via rsync. But if this had actually been a compromised machine, the dashboard could have been my first indication that something was very wrong on my network.
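For the curious, rsync’s appeal for this kind of backup job is that it only transfers files that have changed since the last run. The file-level skip logic can be sketched in Python; this is a toy illustration of the idea, not what rsync actually does (rsync adds block-level delta transfer within files, which this ignores):

```python
import os
import shutil

def incremental_sync(src, dst):
    """Copy only files that are missing or changed in dst (toy version of rsync's skip logic)."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src)):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if not os.path.isfile(s):
            continue  # toy version: ignore subdirectories
        # a file is "unchanged" if the destination copy matches in size and mtime
        if (not os.path.exists(d)
                or os.path.getsize(s) != os.path.getsize(d)
                or os.path.getmtime(s) > os.path.getmtime(d)):
            shutil.copy2(s, d)  # copy2 preserves the modification time
            copied.append(name)
    return copied
```

Run it twice against an unchanged source and the second pass copies nothing– which is why the nightly transfer volume stays proportional to what actually changed, rather than to the total size of the backup.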

UniFi also lets me set up and configure a guest wireless portal, so I no longer need to give guests my WiFi password. They can just connect to my open network (named Ankh-Morpork in honor of Sir Terry Pratchett), accept the terms and conditions which warn them that their connection may not be private and not to carry out illegal activities using my WiFi, enter the password I have posted in my apartment, and voila! They can access the web on whatever device they may choose. If they start causing issues, adding bandwidth limits and filtering specific sites is easy, as is managing which devices are connected to the guest network.

Overall, I’m quite pleased with UniFi. I have more I’d like to do (like building out VLANs for my various servers), but for now the network is stable, speeds are faster than they were, and my WiFi coverage is great. I’ve been talking up UniFi with everyone that I know, and I’m slowly building out a network at my parents’ house which will let me troubleshoot remotely while increasing their speeds and security. It costs a bit more than my previous solution, but I’m glad I made the switch.

I host a few websites for myself and family on DigitalOcean. Up until recently, I’ve always just spun up a new droplet for each site, so they were all fully independent from each other; this was the easiest and most convenient way to get a new site up and running without jeopardizing uptime on other sites if I made a mistake in configuration, and it was drop-dead easy to map a domain to a static IP. It had some security benefits, too– if one site was compromised, it wouldn’t affect the rest.

But it was also maintenance-intensive. I needed to log in to multiple servers to run updates; adding plugins had to be redone over and over on each server; and it was obviously starting to get expensive. So I decided to consolidate my multiple sites on one server, using a fancy feature of WordPress called… “Multisite”. Imaginative name, I know.
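For anyone curious, Multisite is switched on with a single constant in wp-config.php; once it’s defined, WordPress adds a Network Setup screen under the Tools menu that walks you through the rest (and hands you a few more generated constants to paste in along the way):

```php
/* In wp-config.php, above the line that says "stop editing": */
define( 'WP_ALLOW_MULTISITE', true );
```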

The initial configuration went well, with no real hiccups (other than my accidentally rm’ing most of Apache’s configuration files– but a quick droplet rebuild took care of that1). The trouble started when I had moved over the sites I was consolidating, and switched the domains to point at my new Multisite server. I spent two hours trying to figure out why one of the domains refused to point at the new server, only to discover (drumroll, please)… it was DNS. I use Pi-Hole on my home network to block malicious sites, but it also provides a DNS caching service which usually works great. In this case, however, it was pointing me back at the old server over and over, until the TTL finally expired2. A quick flush of the DNS cache, and I was able to see that the domain was correctly configured. Fifteen minutes later, I had SSL up and my plugins configured.
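What bit me is exactly what a caching resolver is supposed to do: keep serving the stored answer until its TTL runs out, no matter what the authoritative server now says. The logic can be sketched as a toy Python cache (an illustration of the concept, not Pi-Hole’s actual implementation; the addresses are made up):

```python
import time

class DnsCache:
    """Toy TTL cache: keeps returning a cached answer until its TTL expires."""
    def __init__(self):
        self._cache = {}  # name -> (address, expiry timestamp)

    def resolve(self, name, lookup, ttl=300):
        now = time.time()
        if name in self._cache and self._cache[name][1] > now:
            return self._cache[name][0]   # still fresh: the old answer wins
        address = lookup(name)            # expired or missing: ask upstream
        self._cache[name] = (address, now + ttl)
        return address

cache = DnsCache()
cache.resolve("example.com", lambda n: "203.0.113.10")  # cached: the old server
# even after the authoritative record changes, the cache keeps answering with it:
print(cache.resolve("example.com", lambda n: "203.0.113.20"))  # → 203.0.113.10
```

Flushing the cache is just deleting those stored entries, which is why the flush immediately revealed the correctly-configured domain.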

So what’s the lesson in all this? Even when you think it’s not DNS… it’s DNS.

Yes, I could have restored the configuration without too much difficulty, but I was early enough in the build that it was faster to just start over. ↵

I did set the TTL to a very low number when I started this process, but the old value wasn’t updated until the original one expired. ↵

A new reflection attack was unveiled today which can amplify a DDoS attack as much as 51,000-fold. It uses memcached, an object caching system designed to speed up web applications, to amplify attacks against a target. This represents a substantial increase from previous attacks, which have used network time servers to amplify attacks 58-fold and DNS servers to amplify attacks 50-fold.
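The math behind these multipliers is simple: the amplification factor is just the size of the reflected response divided by the size of the spoofed request. The numbers below are illustrative rather than exact, but they show how a tiny UDP query to a memcached server can balloon:

```python
# amplification factor = bytes reflected at the victim / bytes sent by the attacker
request_bytes = 15            # a small spoofed memcached query over UDP (illustrative)
response_bytes = 750 * 1024   # a large cached value echoed back (illustrative)

factor = response_bytes / request_bytes
print(f"{factor:,.0f}x amplification")  # → 51,200x amplification
```

An attacker with a modest uplink can therefore direct enormous traffic at a victim, since every byte they send comes back tens of thousands of times larger from someone else’s server.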

Attacks seen this week have surpassed 500 Gbps, which is pretty amazing considering only a small percentage of publicly-available memcached servers are being used to launch those attacks. It’ll be interesting to see if any larger attacks are launched in the coming weeks… and what their targets will be.

Another day, another vulnerability in a widely-used software package. Today’s bug (dubbed Optionsbleed by Hanno Böck, the journalist who documented the vulnerability) can reveal passwords and other pieces of vital information to attackers. It’s not as big of a threat as Heartbleed, a similar bug which allowed attackers to snag private encryption keys for servers (which is a Bad Thing, since those keys are how servers verify they are who they say they are; for an explanation of how this works, see my Asymmetric Encryption explanation from last year), but it should still be taken seriously.

Patches are being rolled out now; patch your systems if you haven’t already.

A serious vulnerability in the WordPress REST API was patched in WordPress v4.7.2 two weeks ago, but millions of sites haven’t yet updated. This leaves them open to an exploit which can allow malicious actors to edit any post on a site.

Ars Technica has a very nice writeup on the effects of the exploit, which has resulted in the defacement of a staggering number of websites (including the websites of Glenn Beck, the Utah Office of Tourism, and even the official SUSE Linux site). Sucuri and Wordfence also have very good articles about the effects of the vulnerability.

If you have a WordPress site, you should immediately check to make sure you’re on the latest version (v4.7.2).

I’ve noticed a growing trend among more advanced computer users lately: some of them have begun advocating against using antivirus software. Instead, they suggest using browser extensions like uBlock Origin (which I use and recommend), combined with safe browsing practices, to remove the need for antivirus software altogether. Ars Technica did a very nice write-up on this trend today, and it’s worth a look.

For what it’s worth, I still use Avast as an antivirus package. But it hasn’t alerted me to any issues or found any viruses in at least a year, so perhaps it’s time to consider freeing up some memory on my computer.

I’ve finally moved to a VPS on DigitalOcean, from my previous (free) shared hosting. I did this for a couple of reasons: first, while my hosting was free for a year with my domain name, that year was almost up. To renew my hosting beyond the first year, I would have needed to pay $38.88/year; while that’s a decent price, I looked at my options and decided that moving to DigitalOcean wouldn’t cost much more (around $30 more across the year, since I use the weekly backups option), would give me much more control over my server (now I get SSH access!), and would centralize all of my VPS instances in the same place (I’ve used DigitalOcean for several years to host various projects).

Of course, as with so many things, this migration wasn’t sparked by a simple glance at the calendar. While I’ve intended to move my host for the last month or two, the timing was decided by my messing up a WordPress upgrade on the old site at the beginning of December. I used the automatic updater, ignored the warnings about making sure everything was backed up first1, and told it to apply the new version. When WordPress exited maintenance mode, I was locked out of the administration dashboard. The public part of the website was still up and running, but the backend was locked off. Since I was entering finals week at my university, I decided to just let it be until I had some time to come back and fix it. Worst-case, I had backups I could restore from, and I’d been meaning to migrate my site anyway.

Of course, things didn’t work out that way. When I finally had some time on Christmas Eve, I discovered that a complete backup hadn’t been made in months.

Turns out, if you don't verify that your backups are working properly, they might not be a viable restore medium.

Yes, I committed the cardinal sin of not verifying the state of my backups. Apparently I’d screwed something up with their configuration, and since I’d never tried to restore from them before, I hadn’t noticed until I needed them. At this point, I decided that if the backups weren’t working, there was no point in trying to recover on a host that I was going to be abandoning within a month, and I spun up a WordPress droplet on DigitalOcean to hold the rebuilt site.

But I've been meaning to move my hosting over to a VPS on @digitalocean for a while now, and this is a perfect opportunity to do that.

I still had copies of all the content that was on the site, so I’d be able to restore everything without much trouble. Some copy/pasting and time would be required, but I could get everything back to the way it was. But before I did all of that, I thought “what if I’m overlooking something really simple with the old site?” I did a little searching, and apparently W3 Total Cache, which I used to create static pages for my site and decrease load times, can cause problems with WordPress upgrades. I disabled it via FTP2, reloaded the site, and I was able to access the admin area again. Turns out the simple steps that you should take before completely rebuilding everything are actually worth it.

And of course, while I move from one host to another, I solved my original problem. C'est la vie.

Since I had already spun up and started configuring my new site, I decided to press onwards. My task was made considerably easier by my being able to access WP Clone on the original site, which let me move everything from my old site to the new one in just a few minutes. I redirected the nameservers to DigitalOcean, and ran a few last checks before calling the bulk of my work done.

The next day, when I was tidying up some loose ends and preparing to get SSL set up, I realized that my email no longer worked– my email server resided on the same server that hosted my old website, which meant I needed to find a new solution.

And we're mostly back online. Email is still acting up, but we're close to being done.

While I have been meaning to set up my own email server sometime soon, I wasn’t confident in my ability to get it up and running quickly, and email is one of those vital services I depend on working 100% of the time. In years past, I would have simply used Google Apps3 to host my email, but that is no longer the free option it once was. Luckily, I found a solution thanks to Ian Macalinao at Simply Ian, which is to use Mailgun as a free email server. Mailgun is designed to send out massive email blasts for major companies, but they also offer a free tier for people and companies that are sending out fewer than 10,000 emails per month. I send out a fraction of that number, so this was perfect for me (and their mass email prices seem quite reasonable, so I might even use them for that if the need ever arises). Ian handily provided a set of instructions for how to set up the proper routing, and, while some of the menu options have changed, I was able to get my new email up and running within a few minutes.
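The routing setup itself boils down to a handful of DNS records: MX records so inbound mail for the domain is delivered to Mailgun, and an SPF record authorizing Mailgun to send on the domain’s behalf. The sketch below shows the general shape– example.com is a placeholder, and the exact values should be copied from your own Mailgun control panel:

```
; deliver inbound mail for the domain to Mailgun
example.com.    IN  MX   10 mxa.mailgun.org.
example.com.    IN  MX   10 mxb.mailgun.org.
; authorize Mailgun to send mail as the domain (SPF)
example.com.    IN  TXT  "v=spf1 include:mailgun.org ~all"
```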

Well, I have email up and running, but ssl still isn't working. Going to give the letsencrypt tool another shot before trying something else

So I’d managed to get both the site and my email up and running, but SSL was still refusing to work. For those that don’t know, SSL stands for Secure Sockets Layer, and it’s what powers the little green padlock that you see on your address bar when you visit your bank, or PayPal, or this website. I wrote an explanation on how it works a while back, and I suggest checking that out if you want to learn more.
One of the benefits of hosting my website on a VPS is that I don’t need to use the major third-party SSL providers to get certificates saying my server is who it says it is; I can use the free and open Let’s Encrypt certificate authority instead. Unfortunately, I just couldn’t get the certificate to work correctly; the automated tool was unable to connect to my server and verify it, which meant that the auto-renewal process wouldn’t complete. I could have generated an offline certificate and used that, but the certificates only last ninety days and I wasn’t looking forward to going through the setup process every three months.4 I tried creating new virtual host files for Apache, my web server, but that just created more problems. Eventually, I figured out that I had misconfigured something somewhere along the line. Rather than try to figure out which of the dozens of edits I had made was the problem, I gave up and just reverted back to a snapshot I had made before starting down the rabbit hole.5 Once I was back to the state before my virtual host meddling, I was able to successfully run the Let’s Encrypt tool, generate my certificate, and secure my site.

…that was my own fault. Turns out I didn't actually need to rewrite all of Apache's virtual hosts, and I probably shouldn't have tried.
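One consolation once the tool finally runs cleanly: the ninety-day lifetime stops being a chore, because renewal can be handed off to a scheduled job. A crontab entry along these lines is the usual approach (the renew subcommand does nothing unless a certificate is actually close to expiring, so running it twice a day is harmless):

```
# check twice daily at an off-peak minute; --quiet suppresses output unless something fails
17 3,15 * * * certbot renew --quiet
```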

It’s a pretty straightforward process, but I know that I would forget about it at some point, the certificate would expire, and the site would have issues. If I can automate that issue away, I would much rather do that. ↵

Snapshots are essentially DigitalOcean’s version of creating disk images of your server. I absolutely love snapshots; they’ve saved my bacon more than once, and I try to always take one before I embark on any major system changes. ↵

Ars Technica did a nice job of creating an impartial write-up on why Hillary Clinton used an external email server, and how it was actually used. It sounds to me like there’s an institutional history of using private email to conduct business, largely due to obstructive or incompetent IT services (in fairness to the State Department IT team, there are likely a number of complicated policies and legal requirements that they’re trying to work around, which is difficult). Still, that’s not an excuse to use a home server to manage official communication– if you must use your own email address, at least use something like Google Apps or Microsoft Exchange Online, where you have teams of people professionally managing the email environment1.

It’s also interesting to see that the NSA basically shot down any possibility of her getting a secured mobile device; I would have thought that providing the Secretary of State– the person who comes fourth in the presidential line of succession– with secure communications at all times would be a priority for them.

Of course, there is still the issue of all email traffic being unsecured and transmitted in plaintext. But you could use a PGP solution to reduce risks there. ↵

Uncertainty, the Fed, and the Economy

The New York Times published this opinion piece recently, discussing the Fed’s continuing decision to delay raising rates. While the entire article is interesting, I believe that the final paragraph is the most insightful:

Adding to the frustration is that Fed policy is not to blame for the economy’s underperformance. Congress bears much of the blame because of its tightfisted federal budgets when more government spending is needed to offset feeble spending and investment in the private sector. Still, sound policy making by the Fed requires answering to conditions as they are, not as policy makers might wish they were.

Right now, we should be spending money to stimulate the economy– cutting back is incredibly short-sighted, and could seriously damage the economy. We should look back at other economic downturns from the past– the Great Depression, for example, was ended not by restricting government spending, but by massively increasing it (and by abolishing the gold standard, which led to the restriction in the first place)– and learn from them. Economists have studied recessions for many years, and the Fed has done an admirable job in regulating the U.S. economy through this entire mess. Politicians, however, often don’t understand the data, or are politically unable to make the best long-term policy. For this reason, they should seek to reduce uncertainty in U.S. markets.

Economic uncertainty is a larger problem in the United States than we may care to admit. John C. Williams, President and CEO of the Federal Reserve Bank of San Francisco, gave a 2012 speech in which he said that uncertainty was one of the largest problems facing the U.S. economy today:

By almost any measure, uncertainty is high. Businesses are uncertain about the economic environment and the direction of economic policy. Households are uncertain about job prospects and future incomes. Political gridlock in Washington, D.C., and the crisis in Europe add to a sense of foreboding. I repeatedly hear from my business contacts that these uncertainties are prompting them to slow investment and hiring. As one of them put it, uncertainty is causing firms to “step back from the playing field.” Economists at the San Francisco Fed calculate that uncertainty has reduced consumer and business spending so much that it has potentially added a full percentage point to the unemployment rate.

Obviously, with unemployment at 5.0% today,1 having uncertainty raise the unemployment a full percentage point is no small matter. And on average, economic uncertainty is increasing—according to data collected by Scott Baker, Nicholas Bloom and Steven J. Davis in “Measuring Economic Policy Uncertainty” over at PolicyUncertainty.com, economic uncertainty has been trending upwards for the past fifteen years.

Obviously, this trend is heavily influenced by the 2008 recession, but I find it interesting that it may be beginning to rise again. This is possibly a result of the fluctuating oil markets, combined with the slowdown of China’s economy; but no matter the cause, both the Fed and the government should seek to reduce uncertainty and continue to promote stability in the economy.

When he gave that speech, the unemployment rate was at 8.3%, and the Economic Uncertainty Index (EUI) was at 178.3; today the latest numbers for the EUI place the United States near 98.3. I was unable to find any data correlating the EUI with specific unemployment rates, so at this time I cannot estimate how much of our present unemployment is a result of uncertainty in the economy. ↵