timkennedy.net

Tuesday, September 26, 2017

In learning PowerShell, one of the hardest things to wrap my head around has been how it is, and is not, like a Unix shell. While it does let you interact with your system and perform actions, such as running commands, executing scripts, or doing the same on remote systems, there are some differences. Like... everything is an object, instead of everything being a blob of text. This has taken some getting used to, but in many ways it really simplifies a lot of activities that would take a pretty complex pipeline in a Unix shell.

People often conflate the Unix shell, such as Bash or Zsh, and the terminal, such as xterm, Gnome Terminal, iTerm, etc. The terminal is the application or hardware through which a user interacts with the Unix system. Back in the last millennium, the terminal started off as a teletypewriter, from which modern Unix and Unix-like operating systems still retain the 'tty' name. Nowadays, it's also common to see purely software implementations, used by terminal emulators, referred to as pseudo-terminals (pty), because they serve the same function but have no physical manifestation. The shell that runs in the terminal is how users truly interact with the system. Bash, Zsh, Csh, Ksh, and their many derivatives or specialized shells (such as might be used at a car dealer, or in a POS system), are the utility through which the terminal becomes useful.

The only difference between a bash shell on the system console, tty0, and a bash shell running in a terminal emulator on a remote system connected via ssh is the capabilities of the terminal to display output and control input. The bash shell itself will be the same, and generally speaking, everything in the Unix shell is text. We run a command, such as `ls`, and we get back a blob of text that we, as humans, can interpret as a list of files and directories. We can further process, or parse, that text using tools like awk, sed, and grep. These additional tools let us filter output, take an action on a string in the output, count the number of items, and so on.
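For instance, counting a subset of files is a classic pipeline job. Here's a small self-contained sketch (the file names are made up for the demo):

```shell
# Everything is text: make a small directory tree, then slice the text
# that find and ls emit with the usual filter tools.
tmpdir=$(mktemp -d)
touch "$tmpdir/notes.txt" "$tmpdir/todo.txt" "$tmpdir/script.sh"

# How many .txt files? find emits lines of text, grep filters them,
# and grep -c counts the survivors.
txt_count=$(find "$tmpdir" -type f | grep -c '\.txt$')
echo "$txt_count"   # 2

# Pull just the names back out of ls -l with awk (field 9),
# skipping the leading "total" line.
ls -l "$tmpdir" | awk 'NR > 1 { print $9 }'
```

Each tool in the chain only ever sees lines of text, which is exactly the point being made above.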

In the PowerShell ecosystem, the analog of the terminal is the Host. Microsoft includes the "Console Host", which is the terminal-like window that opens when you run "Windows PowerShell", and the "Windows PowerShell ISE Host", which opens when you run the "Windows PowerShell ISE" application. Both of these windows serve as hosts for the PowerShell shell itself, and they actually provide different behaviors, too, as far as output, debugging, etc. In PowerShell, everything is an object. When you run the date alias in PowerShell, you are presented with a string representation of the current date and time, but that's just the display. What you actually get back is a [System.DateTime] (or, shortened, [datetime]) object, which has all sorts of methods, properties, and metadata associated with it.

Monday, July 24, 2017

I've been working with Windows and VMware for a while now, and have really enjoyed learning PowerShell and PowerCLI. I've always preferred CLI tools to GUI tools, possibly just because I'm old enough that the computers I started with didn't have Windows (or even X-Windows).

The more I use PowerShell, the more I like PowerShell, so I've decided to start managing the Linux servers I have at home with it, just for funsies.

The first step is to install PowerShell. PowerShell for Linux/Mac/etc. is v6, and still in beta at the time of this writing. I use Ubuntu Linux at home, and fortunately for my lazy self, there is an APT repo with PowerShell packages for Ubuntu 16.04.

These steps were blatantly ripped off from the actual Ubuntu 16.04 installation instructions. If you aren't comfortable adding the repository, there are also instructions for manually downloading the .deb package and installing it.
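For reference, the repo-based steps look roughly like this. This is a sketch only: the key and repo URLs follow Microsoft's published Ubuntu 16.04 instructions at the time of writing and may have changed since, so check the current docs before running anything.

```shell
# Import Microsoft's signing key (URL per the published instructions).
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# Register the Microsoft repository for Ubuntu 16.04 (xenial).
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | \
    sudo tee /etc/apt/sources.list.d/microsoft.list

# Refresh the package index and install the beta PowerShell package.
sudo apt-get update
sudo apt-get install -y powershell
```

After that, `powershell` on the command line drops you into the v6 shell.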

In UNIX and Linux, everything is a file. The shell (bash, zsh, etc) essentially turns all those files into strings and makes them available on STDOUT, which can be used in a pipeline to do things like `command | sed | awk | wc`. In PowerShell, everything is an object, which can be very powerful, but which can also be overwhelming while you're getting used to having to understand each object's model. They are rarely the same.

How many files are there in this directory?

Linux:

PS /home/tkennedy> find . -type f | wc -l
13

PowerShell:

PS /home/tkennedy> (Get-ChildItem -Force -Recurse -File).Count
13

The really interesting piece here is that if I want to do something with those files, on the Unix side I have to parse the list of files that `find` gives me back, and then process each file to, say, get its `stat` results, or something. Then I have to further process all of that data, because everything is a string.
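As a concrete sketch of that string-parsing dance, here's one way to total the size of every file `find` returns. The demo files are made up, and `wc -c` stands in for a fuller `stat`:

```shell
# Build a scratch directory so the pipeline has something to chew on.
tmpdir=$(mktemp -d)
printf 'hello' > "$tmpdir/a.txt"     # 5 bytes, no trailing newline
printf 'world!\n' > "$tmpdir/b.txt"  # 7 bytes

# Total size of all regular files: find prints names as text, cat
# streams every file's bytes, and wc counts them. No objects anywhere,
# just text flowing between processes.
total=$(find "$tmpdir" -type f -exec cat {} + | wc -c)
echo "$total"   # 12
```

In PowerShell the equivalent is a property lookup on the file objects rather than a re-parse of printed text.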

With PowerShell, I can assign the results of the Get-ChildItem command to a variable, $files, and it will create a System.Array containing all the objects for the files that were found.

PS /home/tkennedy> $files = Get-ChildItem -Force -Recurse -File

Now, because everything is an object, the $files variable that I created is basically an array of all the file objects that Get-ChildItem was able to identify, and each of those file objects has all the properties that correspond to that type of object.

Wednesday, March 29, 2017

Big news this week, as the Republicans in Congress decided to scrap an FCC rule known as the Broadband Consumer Privacy Proposal which required broadband providers to get permission from subscribers before collecting and selling data collected about their users.

Since I am very interested in my online privacy (or at least, I like to have the option to choose for myself when to share my information), and since I recently upgraded my home router to a Unifi Security Gateway from Ubiquiti Networks, I wanted to know if its VPN client would be compatible with the Private Internet Access VPN that I use to protect my privacy, thereby putting my entire house behind the VPN all the time.

The only thing that posed any challenge was calculating all the routes for all the subnets outside my house, to route that traffic over the VPN. In my case, since I use RFC1918 space, here is the list of routes I needed to add to the USG, via the "subnets" menu item in the USG settings app:

0.0.0.0/1

192.169.0.0/16

192.170.0.0/15

192.172.0.0/14

192.176.0.0/12

193.0.0.0/8

194.0.0.0/7

196.0.0.0/6

200.0.0.0/5

208.0.0.0/4

224.0.0.0/3

Since hosts have a default route to the USG (192.168.1.1), all traffic will make it to the USG just fine. Now... the USG has a default route to the internet via my ISP. The default route is 0.0.0.0/0, which is the least specific route possible to have in a routing table... a route to every IP possible. In routing, more specific routes always win. So the USG also has a local route to 192.168.0.0/22, which prevents my internal traffic from following the default route. And the USG has a more specific route to its gateway than the default as well, because that is a connected network, so it won't get lost in the routes above.

The list of subnets above provides a more specific route than the default route for every possible IP that is not in my house, which forces everything to be sent across the VPN, but they are still the least specific possible routes to everything, which means they're pretty easy to override if I don't want something going over the VPN. After all, the VPN is pretty limited on bandwidth compared to going directly out FiOS.

This list is everything that I don't use in my house, and ensures that any traffic to anywhere outside my house will be routed over the VPN. And, yes, I am aware that there are other blocks of RFC1918 and RFC5737 space, but since ISPs don't route those networks, I'm not worried about them; the VPN essentially acts as a sink for any traffic to those destinations.
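The longest-prefix rule all of this relies on can be sketched in plain shell. This is a toy lookup, not what the USG actually runs, using the default route, my local /22, and two entries from the list above:

```shell
# Convert dotted-quad notation to a 32-bit integer.
ip2int() {
    echo "$1" | { IFS=. read -r a b c d
                  echo $(( (a << 24) | (b << 16) | (c << 8) | d )); }
}

# Return the most specific route in the toy table that matches $1.
best_route() {
    ip=$(ip2int "$1"); best=""; bestlen=-1
    for r in 0.0.0.0/0 192.168.0.0/22 0.0.0.0/1 224.0.0.0/3; do
        net=$(ip2int "${r%/*}"); len=${r#*/}
        mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
        # A matching route wins only if its prefix is longer (more specific).
        if [ $(( ip & mask )) -eq $(( net & mask )) ] && [ "$len" -gt "$bestlen" ]; then
            best=$r; bestlen=$len
        fi
    done
    echo "$best"
}

best_route 192.168.1.1   # -> 192.168.0.0/22 (local route beats 0.0.0.0/1)
best_route 8.8.8.8       # -> 0.0.0.0/1 (VPN route beats the 0.0.0.0/0 default)
```

The same comparison, repeated per packet, is why the /1-through-/3 covering routes pull everything external onto the VPN while 192.168.0.0/22 stays local.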

Here is how the settings go into the USG configuration in the Unifi controller application:

Specifically:

Purpose: VPN Client

VPN Client: PPTP

Enabled: check this when you want the VPN to go live

Remote Subnets: one entry for each of the subnets in the list above (modified for your own use, if you don't use 192.168.x.x in your house/business)

Server IP: get this from PIA, I used `nslookup us-east.privateinternetaccess.com`

Sunday, March 19, 2017

We (my wife and I) have been using LIFX lights in our bedroom to simulate a sunrise. They come on at sunrise, and slowly increase brightness for 30 minutes, allowing us to get used to the light, and wake up pretty gently, as opposed to being jarred out of a deep sleep by a more traditional alarm clock.

My wife asked if there was any way we could do the same with Sonos. Specifically, she wants to pick a Sirius XM channel like "15 - The Pulse" to wake up to. Have the volume start at 0, and over the same 30 minute period as the lights, ramp the volume up slowly until it's a reasonable level coinciding with the maximum brightness of our lights.

Her ideal solution would have the following features:

Pick any Sirius, Pandora, or Calm Radio station that Sonos can regularly access.

Choose a maximum volume for the alarm

Choose a length of time over which to go from 0 to Max volume

Orchestrate the details via an iOS app on iPhone or iPad.

For Extra Credit:

Do the same thing in reverse, ramping the volume from X down to 0 over time, like a slow ramp-down sleep timer.

We first tried the Alarms available in the Sonos App. These are time and content alarms, meaning I can set it to play a Sirius XM channel, at a specific time, at a specific volume. There is a fade-in, but it's only 15 seconds long. Not exactly what we're looking for. We want something more along the lines of a 30 minute fade in.

Google seems to indicate that this is a common request from Sonos users.

The SoCo (Sonos Controller) Python library would allow me to hit about 2.5 of the ideal features, and possibly the extra credit as well, if I wrote a little program to run from cron on a Linux server.

Easy to do in cron:

Run a program at a specific time

Can do with SoCo:

Set volume of a Sonos speaker, or a group of speakers

Pick a channel to play

Can't easily do with SoCo/cron/Linux:

Control via an iOS app on iPhone/iPad.

Added Feature:

Supports a file in the same directory called 'holidays.txt', where I can put dates in the format YYYY-MM-DD (one per line) on which the alarm should not run (like work holidays).

I can also log in to the server and `touch /tmp/holiday` if I want the alarm to not go off tomorrow. (example: sick day, or unplanned day off)
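Stitched together, the cron half of this can be sketched like so. `set_volume` is a hypothetical stand-in for the one-line SoCo volume call, and the numbers are illustrative:

```shell
# Sketch of the cron-driven ramp. set_volume is a hypothetical stand-in
# for a SoCo speaker.volume call; here it just echoes so the ramp is
# visible. The sleep is commented out so the demo runs instantly.
set_volume() { echo "volume=$1"; }

# Skip the alarm on holidays: a YYYY-MM-DD line in holidays.txt,
# or the ad-hoc /tmp/holiday marker.
is_holiday() {
    [ -e /tmp/holiday ] && return 0
    [ -f holidays.txt ] && grep -qx "$(date +%F)" holidays.txt
}

# Ramp the volume from 0 to $1 over $2 minutes, in $3 equal steps.
ramp() {
    max=$1; minutes=$2; steps=$3
    if is_holiday; then return 0; fi
    step=1
    while [ "$step" -le "$steps" ]; do
        set_volume $(( max * step / steps ))
        # sleep $(( minutes * 60 / steps ))
        step=$(( step + 1 ))
    done
}

ramp 40 30 4   # 0 -> 40 over 30 minutes: prints volume=10, 20, 30, 40
```

Cron fires the script at wake-up time, and the extra-credit ramp-down is just the same loop counting the other way.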

So I'm still on the lookout for an iOS app that will let me orchestrate all this, at least until Sonos or one of the other home automation apps adds this kind of feature. Here's a link to my alarm script: https://github.com/tksunw/IoT/tree/master/SONOS

Thursday, April 9, 2015

I recently had a desire to get OpenVPN working on Solaris 11.2, to allow me to connect to a Private Internet Access (PIA) VPN. For more information on using a VPN for general internet access, as well as some insight into why you might want to look into it for yourself, see:

A quick Google turned up a blog post from Stefan Reuter detailing how to set up OpenVPN on OpenSolaris 2008.11.

For Solaris 11.2 the basic steps are still pretty much the same, but some of the minor details have changed. We still need the TAP driver for Solaris, and we obviously need to download and build OpenVPN, but we don't need to edit the TUN/TAP Makefile anymore, and we don't need any patches for OpenVPN. One step I added was to download and compile the LZO compression library for OpenVPN.

Step 2: Install the LZO compression library.
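The LZO build is the usual autoconf routine. The version number and download URL here are illustrative, so grab whatever is current:

```shell
# Fetch, build, and install LZO under /usr/local (version illustrative).
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.09.tar.gz
gzip -dc lzo-2.09.tar.gz | tar xf -
cd lzo-2.09
./configure --prefix=/usr/local
make && make check
make install
```

The `make check` step runs LZO's self-tests, which is worth the extra minute before linking OpenVPN against it.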

There is fuller output from running those commands, if you are at all curious.

Step 3: Install OpenVPN.

For OpenVPN, we modify CFLAGS and LDFLAGS, to let OpenVPN find the LZO library we just installed, and we add '--enable-password-save', which will allow us to store the username and password for the VPN in a file.
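A sketch of that configure invocation, under the same assumptions (version number illustrative; `-R` sets the runtime library path for the Solaris linker):

```shell
# Let configure find the LZO headers and libraries in /usr/local,
# and keep password-save support for the credentials file.
cd openvpn-2.3.6   # version illustrative
CFLAGS="-I/usr/local/include" \
LDFLAGS="-L/usr/local/lib -R/usr/local/lib" \
./configure --prefix=/usr/local --enable-password-save
make && make install
```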

Yet again, there is even fuller output from running those commands, if you are at all curious.

Once OpenVPN is installed, configuring it for use with Solaris is relatively straightforward. Private Internet Access has a bunch of OpenVPN configuration files available, with some very useful defaults. Since I'm on the East coast of the US, I started with the "US East.ovpn" file:

The `auth-user-pass .pia.login` line tells the OpenVPN client to read your username and password from a file in the current directory called '.pia.login' (make sure your path is correct if you have issues). The contents of that file are your username by itself on line 1, and your password by itself on line 2.

supertim
MySup3rS3cr3tP@ssw0rd
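Since the credentials sit in that file in plain text, it's worth locking the file down to its owner. Shown here in a scratch directory for the demo; do the same in your OpenVPN config directory for real:

```shell
# Create the credentials file and make it readable by its owner only.
workdir=$(mktemp -d)
printf '%s\n%s\n' 'supertim' 'MySup3rS3cr3tP@ssw0rd' > "$workdir/.pia.login"
chmod 600 "$workdir/.pia.login"

# Verify: owner read/write only, and exactly two lines.
perms=$(ls -l "$workdir/.pia.login" | cut -c1-10)
lines=$(wc -l < "$workdir/.pia.login")
echo "$perms"   # -rw-------
```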

The rest of the lines all affect how routing is done for the VPN. Left to its own devices, OpenVPN doesn't have the code necessary to automatically manage routes. For example, it can't automatically determine the default gateway and modify that route to point at the VPN's default gateway.

With `script-security` set to a level that allows the OpenVPN client to run scripts, `route-delay 2` tells the client to give the VPN tunnel two full seconds to get set up before doing anything with routing, `route-noexec` tells the client not to make any direct changes to the routing tables, and `route-up route-up.sh` tells the client to run a script, which I very imaginatively called route-up.sh, during the route-up phase of client activity. The contents of the script look like:
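The original script isn't reproduced here, so the following is a plausible reconstruction rather than the exact file. It assumes the `$route_vpn_gateway` environment variable that the OpenVPN client exports to route-up scripts:

```shell
#!/bin/sh
# Plausible sketch of route-up.sh (not necessarily the original).
# Split the whole address space into the two most general halves and
# send both to the VPN; each /1 is more specific than 0.0.0.0/0, so
# the default route never has to be touched.
route add -net 0.0.0.0 -netmask 128.0.0.0 "$route_vpn_gateway"
route add -net 128.0.0.0 -netmask 128.0.0.0 "$route_vpn_gateway"
```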

Since more specific routes are always preferred over less specific routes, setting these two routes allows us to route everything over the VPN without having to make any changes to the default route, thereby working around OpenVPN's inability to manage Solaris routes. If the VPN goes down, the routes are removed, and you still have access to the internet via your existing default route. You also maintain access to your local LAN, because that route is even more specific and directly connected. You just won't have the same amount of privacy at that point.

Monday, April 6, 2015

We are a TiVo household, so a quest has been underway to build a suitable place for long-term storage of the family's favorite TV shows and movies. One indisputable requirement is that the shows and movies have to be visible via the TiVo menu. pyTivo (the William McBrine fork) is the logical tool to do this (in my house), since McBrine has been maintaining his fork more regularly than the original SourceForge package.

To get pyTivo working on Solaris 11.2, only two dependencies needed to be resolved.

I needed to build ffmpeg to support on-the-fly video transcoding, and

ffmpeg wanted yasm (an open-source rewrite of the nasm assembler) or nasm itself.

This is what happened when I tried to build ffmpeg without yasm:

bash-[121]$ ./configure --prefix=/usr/local
yasm/nasm not found or too old. Use --disable-yasm for a crippled build.
If you think configure made a mistake, make sure you are using the latest
version from Git. If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.log" produced by configure as this will help
solve the problem.
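The fix is to build yasm first and then re-run ffmpeg's configure. Version number, URL, and directory layout are illustrative:

```shell
# Build and install yasm under /usr/local (version illustrative).
wget http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
gzip -dc yasm-1.3.0.tar.gz | tar xf -
cd yasm-1.3.0
./configure --prefix=/usr/local
make && make install
cd ..

# Back in the ffmpeg source tree, make sure the new yasm is on PATH
# before configuring again.
PATH=/usr/local/bin:$PATH ./configure --prefix=/usr/local
```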

Wednesday, March 23, 2011

Having problems building the Perl DBD::mysql modules on Solaris 10 Sparc 64-bit?
The Perl 5.8.4 binary that ships with Solaris 10 is a 32-bit application. You are probably running the 64-bit version of MySQL and trying to build DBD::mysql against that db version.
What you actually need to do is download the 32-bit version of MySQL, for linking the Perl DBD::mysql libraries against. I run the 64-bit MySQL database in /opt/mysql/mysql, so I unpacked the 32-bit MySQL as /opt/mysql/mysql32.
Then run a CPAN shell, `look DBD::mysql`, and build the module.
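Spelled out, the build against the 32-bit tree looks roughly like this. `--mysql_config` is the standard DBD::mysql Makefile.PL option, and the paths follow the layout described above:

```shell
# From a CPAN shell, `look DBD::mysql` drops you into the unpacked
# module directory; then point the build at the 32-bit MySQL tree so
# the 32-bit Perl can link against matching libraries.
perl Makefile.PL --mysql_config=/opt/mysql/mysql32/bin/mysql_config
make
make test
make install
```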