I've previously written about creating SSL certificates. Times have changed, and ECC is the way of the future. Today I'm going to revisit that post, covering how to create ECDSA SSL certificates and how to get your certificate signed by Let's Encrypt.

Generating an ECDSA Key

Since this information doesn't seem to be readily available in many places, I'm putting it here. This is the fast track to getting an ECDSA SSL certificate.

openssl ecparam -out private.key -name prime256v1 -genkey

Generating the Certificate Signing Request

Generating the CSR is generally done interactively.

openssl req -new -sha256 -key private.key -out server.csr

Fill out the requested information. Use your two-letter country code. Use the full name of your state. Locality means city. Organization Name and Organizational Unit Name seem rather self-explanatory (they can be the same). Common Name is the fully qualified domain name of the server or virtual server you are creating a certificate for. The rest you can leave blank.

Non-interactive CSR generation

You can avoid interactive CSR creation by supplying the subject information on the command line. This will work fine as long as you're not using subjectAltNames.
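For example, the entire subject can be passed with the -subj flag. This is a sketch; the subject values below are placeholders, not from the original post.

```shell
# Generate the ECDSA key, then a CSR with the subject supplied inline.
# Replace the -subj fields with your own organization's details.
openssl ecparam -out private.key -name prime256v1 -genkey
openssl req -new -sha256 -key private.key -out server.csr \
  -subj "/C=US/ST=California/L=San Diego/O=Example Org/CN=www.example.com"
```
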

Using a traditional Certificate Authority

If that doesn't work for you because you can't run the letsencrypt client on your web server, StartSSL is also free. If you don't want a free one, you should have no trouble finding one on your own. Whichever you pick, give them your server.csr file. They'll give you back a certificate.

I have a different relationship with Star Wars than most people. Star Wars was originally released in theaters forty-seven days after I was born. The Empire Strikes Back was the first movie I saw in a cinema. I stood on the seat, transfixed by the screen from the crawl to the credits. Return of the Jedi was the first movie I remember seeing in theaters. I've seen A New Hope something on the order of two thousand times. Three times in my life I've watched either ANH or the entire trilogy at least once per day for more than a year. Then there's all the other times I've seen it outside of that. I've been known to win Star Wars Trivial Pursuit on a single turn. I can recite the dialog of the entire trilogy from memory. Star Wars was an anchor for me through a turbulent childhood.

I'm not one of those crazies, though. I'm not a collector. I have some Star Wars stuff, but it's not overwhelming. I've enjoyed the expanded universe, but it's not the same. The EU to me was, and still is I suppose, something like fanfic. A place to go to think about Star Wars when all of Star Wars had already been consumed. For over twenty years Star Wars was a constant in my life, before the dark times, before the prequels.

I was very excited for The Phantom Menace. I saw it on opening day, the first showing of the day in San Diego. Afterward, less so. The prequels are horribly bad. I took comfort in not being alone in that opinion. But now there's a new expanse for Star Wars. Disney has made statements about producing one new Star Wars movie per year. And for better or for worse, Star Wars is no longer simply a trilogy.

I am also a fan of Star Trek. I am possibly going through what many Star Trek fans went through in 1987. Having watched The Cage, I can see that Picard is much closer to Pike than Kirk is. The Next Generation is more the show that Gene Roddenberry wanted to create than the original series was. The architecture of TNG traces back to Gene's original design for Star Trek before the studios got involved. And Star Trek has now lived more without its creator than with. There is phenomenally good Trek (City on the Edge of Forever, The Measure of a Man, or The Inner Light) and there is bad Trek (most of DS9) and really bad Trek (Spock's Brain, seasons 2-4 of Enterprise). But there is a lot of Trek. There's almost 750 hours of Star Trek canon. There's approximately 12 hours (14 after this weekend) of Star Wars. I'm able to watch and rewatch Star Trek, enjoying the good episodes and lamenting or skipping the bad ones. I don't regard all of Star Trek canon as canon. Starting this week, I will be doing the same with Star Wars.

Update: As of 20150917T235937Z full support for IPv6 has been added to vmadm with the added ips and gateways parameters. If you're using SmartDataCenter, these parameters won't (yet) be added automatically, so the following may be useful to you. But if you're using SmartOS, see the updated SmartOS IPv6 configuration wiki page.

There have been a lot of requests for IPv6 support in SmartOS. I'm happy to say that there is now partial support for IPv6 in SmartOS, though it's not enabled by default and there may be some things you don't expect. This essay is specific to running stand-alone SmartOS systems on bare metal. This doesn't apply to running instances in the Joyent Cloud or for private cloud SDC.

Update: I now have a project up on GitHub that fully automates enabling SLAAC IPv6 on SmartOS. It works for global and non-global zones and automatically identifies all available interfaces, regardless of the driver name.

First, some definitions so we're all speaking the same language.

Compute Node (CN): A non-virtualized physical host.

Global Zone (GZ): The Operating System instance in control of all real hardware resources.

OS Zone: A SmartMachine zone using OS virtualization. This is the same thing as a Solaris zone.

Compute Instance (CI): A guest instance, either an OS zone or a KVM virtual machine.

There are two modes of networking with SmartOS. The default is for the global zone to control the address and routes. A static IP is assigned in the zone definition when it's created, along with a netmask and default gateway, and network access is restricted to the assigned IP to prevent tenants from causing shenanigans on your network. The other is to set the IP to DHCP, enable allow_ip_spoofing, and be done with it. The former mode is preferred for public cloud providers (such as Joyent) and the latter may be preferred for private cloud providers (i.e., enterprises) or small deployments where all tenants are trusted. For example, at home, where I have only a single CN and I'm the only operator, I just use DHCP and allow_ip_spoofing.
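As a sketch, the DHCP-plus-spoofing mode might look like this in the nics section of a vmadm zone definition (a hypothetical fragment; the nic_tag value is an assumption):

```json
{
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "dhcp",
      "allow_ip_spoofing": true
    }
  ]
}
```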

By far the easiest way to permit IPv6 in a SmartOS zone is to have router-advertisements on your network and enable allow_ip_spoofing. As long as the CI has IPv6 enabled (see below for enabling IPv6 within the zone) you're done. But some don't want to abandon the protection that anti-spoofing provides.

Whether you use static assignment or DHCP in SmartOS, the CI (and probably you too) doesn't care what the IP is. In fact, KVM zones with static IP configuration are configured for DHCP with the Global Zone acting as the DHCP server. If you have another DHCP server on your network it will never see the requests and they will not conflict. In SDC, entire networks are allocated to SDC. By default SDC itself will assign IPs to CIs. In the vast majority of cases it doesn't matter which IP a host has, just as long as it has one.

Which brings us to IPv6. It's true that in SmartOS when a NIC is defined for a CI you can't define an IPv6 address in the ip field (in my testing this is because netmask is a required parameter for static address assignment, but there's no valid way to express an IPv6 netmask that is acceptable to vmadm). But like it or not, IPv4 is still a required part of our world. A host without some type of IPv4 network access will be extremely limited. There's also no ip6 field.

But there doesn't need to be. Remembering that in almost all cases we don't care which IP so long as there is one, IPv6 can be enabled without allowing IP spoofing by adding IPv6 addresses to the allowed_ips property of the NIC. The most common method of IPv6 assignment is SLAAC. If you're using SLAAC then you neither want nor need SmartOS handing out IPv6 addresses. The global and link-local addresses can be derived from the mac property of the CI's NIC. Add these to the allowed_ips property of the NIC definition and the zone definition is fully configured for IPv6 (you don't need an IPv6 gateway definition because it will be picked up automatically from router advertisements).
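The derivation follows the standard EUI-64 rule: flip the universal/local bit of the first octet of the MAC and insert ff:fe in the middle. A minimal sketch in shell (the MAC below is a made-up example, not from any real zone):

```shell
#!/bin/sh
# Derive the SLAAC link-local address from a NIC's MAC address using
# the EUI-64 rule. The MAC here is an invented example.
mac="92:67:ab:03:2c:16"

IFS=: read -r o1 o2 o3 o4 o5 o6 <<EOF
$mac
EOF

# Flip the universal/local bit of the first octet, insert ff:fe in the
# middle, and prepend the fe80::/64 link-local prefix. Building each
# 16-bit group as a number lets printf %x drop leading zeros, as IPv6
# notation requires.
g1=$(( (0x$o1 ^ 2) * 256 + 0x$o2 ))
g2=$(( 0x$o3 * 256 + 0xff ))
g3=$(( 0xfe00 + 0x$o4 ))
g4=$(( 0x$o5 * 256 + 0x$o6 ))
printf 'fe80::%x:%x:%x:%x\n' "$g1" "$g2" "$g3" "$g4"
# prints fe80::9067:abff:fe03:2c16
```

The global address uses the same interface identifier with your network's advertised prefix in place of fe80::.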

Permitting IPv6 in a Zone

Here's an example nic from a zone I have with IPv6 addresses allowed. Note that both the derived link-local and global addresses are permitted.
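An illustrative sketch of such a NIC definition (the mac and all addresses here are invented, with 2001:db8::/32 standing in for a real global prefix; the IPv6 entries are the EUI-64 addresses derived from that mac):

```json
{
  "interface": "net0",
  "mac": "92:67:ab:03:2c:16",
  "nic_tag": "admin",
  "ip": "10.0.0.20",
  "netmask": "255.255.255.0",
  "gateway": "10.0.0.1",
  "allowed_ips": [
    "fe80::9067:abff:fe03:2c16",
    "2001:db8::9067:abff:fe03:2c16"
  ],
  "primary": true
}
```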

In my workflow, I create zones with autoboot set to false, then add IPv6 addresses based on the mac assigned by vmadm, then enable autoboot and boot the zone. This is scripted, of course, so it's a single atomic action.

Enabling IPv6 in a SmartMachine Instance

Once the zone definition has the IPv6 address(es) allowed, IPv6 needs to be enabled inside the zone. For KVM instances, most images vended by Joyent will already have IPv6 enabled (even Ubuntu Certified images in Joyent Cloud will boot with link-local IPv6 addresses, though they will be mostly useless). For SmartOS instances you will need to enable it.

In order to enable IPv6 in a SmartOS zone you need to enable ndp and use ipadm create-addr.

svcadm enable ndp
ipadm create-addr -t -T addrconf net0/v6

Instead of doing this manually I've taken the extra step and created an SMF manifest for IPv6.
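A sketch of what such a manifest might look like (the service name site/ipv6 matches the svcadm command used later; the ndp FMRI and the net0 interface name are assumptions):

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="ipv6">
  <service name="site/ipv6" type="service" version="1">
    <create_default_instance enabled="false"/>
    <single_instance/>
    <!-- Require ndp so that "svcadm enable -r" pulls it in. -->
    <dependency name="ndp" grouping="require_all" restart_on="none"
        type="service">
      <service_fmri value="svc:/network/routing/ndp:default"/>
    </dependency>
    <exec_method type="method" name="start"
        exec="/usr/sbin/ipadm create-addr -t -T addrconf net0/v6"
        timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":true"
        timeout_seconds="60"/>
    <!-- Transient: the start method runs once; no daemon to monitor. -->
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="transient"/>
    </property_group>
  </service>
</service_bundle>
```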

I have a user-script that downloads this from GitHub, saves it to /opt/custom/smf/ipv6.xml, and restarts manifest-import. After the import is finished, IPv6 can be enabled with svcadm. Using the -r flag enables all dependencies (i.e., ndp) as well.

svcadm enable -r site/ipv6

Enabling the service is also done as part of the user-script.

If you do actually want specific static IPv6 assignment, do everything I've described above. Then, in addition, use mdata-get sdc:nics to pull the NIC definition, extract the IPv6 addresses from allowed_ips, and explicitly assign them. I admit that for those who want explicit static addresses this is less than ideal, but with a little effort it can be scripted and made completely automatic.
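As a sketch of the extraction step, here's one way to pull the IPv6 entries out of an allowed_ips list with nothing but grep. The sample JSON stands in for real mdata-get sdc:nics output and is entirely made up:

```shell
# Stand-in for: nics=$(mdata-get sdc:nics)  -- values are invented.
nics='[{"interface":"net0","ip":"10.0.0.20","allowed_ips":["10.0.0.20","fe80::9067:abff:fe03:2c16","2001:db8::9067:abff:fe03:2c16"]}]'

# IPv6 addresses are the only quoted strings that start with a hex
# digit, contain only hex digits and colons, and include a colon.
# (vmadm stores addresses lowercased, so [0-9a-f] is sufficient.)
echo "$nics" | grep -o '"[0-9a-f][0-9a-f:]*:[0-9a-f:]*"' | tr -d '"'
```
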

CFEngine recently released version 3.6, which makes deploying and using CFEngine easier than ever before. The greatest improvement in 3.6, in my opinion, is by far the autorun feature.

I'm going to demonstrate how to get a policy server set up with autorun properly configured.

Installing CFEngine 3.6.2

The first step is to install the CFEngine package, which I'm not going to cover. But I will say that I recommend using an existing repository. Instructions on how to set this up are here. Or you can get binary packages here. If you're not using Linux (like myself) you can get binary packages from cfengineers.net. Or for SmartOS try my repository here (IPv6 only). If you're inclined to build from source, I expect that you don't need my help with that.

Having installed the cfengine package, the first thing to do is to generate keys. The keys may have already been generated for you, but running the command again won't harm anything.

/var/cfengine/bin/cf-key

Setting up Masterfiles and Enabling Autorun

Next you'll need a copy of masterfiles. If you downloaded a binary community package from cfengine.com you'll find a copy in /var/cfengine/share/CoreBase/masterfiles.

As of 3.6 the policy files have been decoupled from the core source code distribution, so if you're getting CFEngine from somewhere else it may not come with CoreBase. In this case you'll want to get a copy of the masterfiles repository at the tip of the branch for your version of CFEngine (in this case, 3.6.2), not from the master branch where the main development happens. There's already development going on for 3.7 in master, so for consistency and repeatability grab an archive of 3.6.2. Going this route you also need a copy of the cfengine core source code (although you do not need to build it).

Here I've enabled verbose mode. You can see in the verbose output that autorun is working.

Now, like Han Solo, I've made a couple of special modifications myself. I also like to leave the default files in as pristine a condition as possible. This helps when upgrading. This is why I've made only very few changes to the default policies. It also means that instead of using services/autorun.cf I'll create a new autorun entry point. This entry point is the only bundle executed by the default autorun.
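I won't claim this is my exact file, but a minimal sketch of such an entry point might look like this (the bundle name digitalelf_autorun comes from the discussion below; the variable name and report text are assumptions):

```cf3
# Sketch of a custom autorun entry point. The "autorun" meta tag lets
# the stock autorun machinery discover this bundle; it in turn runs
# every bundle tagged "digitalelf" whose name matches a defined class.
bundle agent digitalelf_autorun
{
  meta:
      "tags" slist => { "autorun" };

  vars:
      "bundles" slist => bundlesmatching(".*", "digitalelf");

  methods:
      "$(bundles)"
        usebundle => $(bundles),
        ifvarclass => "$(bundles)";

  reports:
    inform_mode::
      "digitalelf autorun found bundle: $(bundles)";
}
```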

This works exactly the same as autorun.cf, except that it looks for bundles matching digitalelf and only runs them if the bundle name matches a defined class. Also note that enabling inform_mode (i.e., cf-agent -I) will report which bundles have been discovered for automatic execution.
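A contrived example of a bundle picked up this way (illustrative only, not from my actual policy set):

```cf3
# A bundle tagged "digitalelf" and named "any". Because the hard class
# "any" is always defined, this bundle runs on every host.
bundle agent any
{
  meta:
      "tags" slist => { "digitalelf" };

  reports:
    inform_mode::
      "bundle 'any' was triggered by autorun";
}
```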

Since the tag is digitalelf it will be picked up by services/autorun/digitalelf.cf and because bundle name is any, it will match the class any in the methods promise, and therefore run. Again, enabling inform_mode (cf-agent -I) will report that this bundle is in fact being triggered.

You can drop in bundles that match any existing hard class and it will automatically run. Want all linux or all debian hosts to have a particular configuration? There's a bundle for that.

Extending Autorun

You may already be familiar with my CFEngine layout for dynamic bundlesequence and bundle layering. My existing dynamic bundlesequence is largely obsolete with autorun, but I still extensively use bundle stack layering. I've incorporated the classifications from bundle common classify directly into the classes: promises of services/autorun/digitalelf.cf. I can trigger bundles by discovered hard classes or with any user-defined class created in bundle agent digitalelf_autorun. By using autorun bundles based on defined classes you can define classes from any source: hostname (as I do), LDAP, DNS, the filesystem, network API calls, etc.

There's a long-running debate about which is better for SSH public key authentication, RSA or DSA keys, with "better" in this context meaning "harder to crack/spoof" the identity of the user. This generally comes down in favor of RSA because ssh-keygen can create RSA keys up to 2048 bits while the DSA keys it creates must be exactly 1024 bits.

Here's how to use openssl to create 2048-bit DSA keys that can be used with OpenSSH.
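A minimal sketch (the filenames follow OpenSSH conventions; generate the key somewhere safe and move it into ~/.ssh yourself):

```shell
# Generate 2048-bit DSA parameters, then generate a key from them.
openssl dsaparam -out dsaparam.pem 2048
openssl gendsa -out id_dsa dsaparam.pem
chmod 600 id_dsa
# Derive the OpenSSH-format public key from the private key.
ssh-keygen -y -f id_dsa > id_dsa.pub
```
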

After this, add the contents of id_dsa.pub to ~/.ssh/authorized_keys on remote hosts and remove your RSA keys (if any). I'm not recommending either RSA or DSA keys. You need to make that choice yourself. But key length is no longer an issue. We can now go back to having this debate on the merits of the math.

On my Ubuntu box I just ran an analysis of the Root CA certificates (from the ca-certificates package, which itself comes from Mozilla). This certificate list is widely used by third-party programs as an authoritative list. But other distributors (e.g., Google, Apple, Microsoft) have a substantially similar list due to the need for SSL to work in all browsers. If any one vendor shipped a substantially different list then end users would merely perceive that browser as being broken and not use it.

Back to my analysis. Mozilla includes 20 Root CA certificates that use MD5 and 2 that use MD2. This is frightening. We already know that a Microsoft certificate with MD5 was used to distribute the Flame malware and it is all but proven that Flame was created and distributed by the U.S. government.

The situation is clear. The NSA is in possession of one or more Root CA keys. It is only prudent to expect that the NSA has spoofed copies of all 22 CAs that use MD5 or MD2. It is also possible that they have exact copies (i.e., true keys, not spoofed) of other major U.S.-based certificate authorities (I shudder to think of a world where a national security letter requests a Root CA key as being relevant to an investigation).

The NSA would then use these keys to spoof SSL certificates in real time, creating Subjects identical to the target web site, becoming a completely invisible man-in-the-middle. This method would be impossible to detect for all but the most skilled users.

Edit: Turns out I was right on the money.

Edit, April 2014: Heartbleed notwithstanding, I still firmly believe the NSA is actively executing MITM attacks using genuine or spoofed Root CA keys. Why let an IDS fingerprint you when you can engage in active and undetectable surveillance?