Month: June 2016

We’ve got a few new people at work who don’t have any Linux experience, and I was asked to do a quick crash course on some super fundamental logging in / navigating / restarting service stuff so their first on-call rotation wouldn’t be quite so stressful. Publishing the overview here in case it is useful for anyone else.

Linux Primer:

Connecting – We use both PuTTY and Cygwin to connect to our Linux hosts via SSH (Secure Shell). Each has its own advantages and disadvantages – try them both and see which you prefer. If you need X redirection (you need the GUI ‘stuff’ to magic itself onto your computer), use Cygwin-X.

Logging In – Our Linux hosts authenticate users via cusoldap.windstream.com, so (assuming you are set up for access to the specific host) you will use your CSO userID and password to log in.

We often use a jump box – log into the jump box with your ID or using a key exchange. From there, we have key exchanges with our other boxes that allow us to connect without entering credentials again.

You can set up key exchanges on your own ID too – even from your Windows desktop – and avoid typing passwords.

Once you are logged in, you can start a screen session. Normally, anything you are running is terminated if your SSH session terminates (e.g. if you use Cygwin or PuTTY to connect to a box from your laptop that is VPN’d into the network and your VPN drops, everything you were doing in the SSH session is terminated). You can use screen to set up a persistent session – you can reconnect to that session should your SSH connection get interrupted, other people can connect to the session to monitor a long-running script, or multiple people can connect to the session and all see the same thing (screen sharing).

To start a new screen session, use screen -S SessionName where SessionName is something that identifies the screen session as yours (e.g. LJRPasswordResync was the session I used when resyncing all employee and contractor passwords for OIDM – this includes both my initials and the function I’m running in the session). To see the currently running sessions, use screen -ls

[lisa@server810 ~]# screen -ls

There is a screen on:

8210.LJR (Detached)

1 Socket in /tmp/screens/S-lisa.

The output contains both a session ID number and a session name, separated by a full stop. You can use either to connect to a screen session (the name is case sensitive!). To reconnect, use screen -x SessionName or screen -x SessionID

To determine if you are currently in a screen session, look at the upper left hand corner of your PuTTY window – the title will change to include screen when you are in a screen session. Alternatively, echo the STY environment variable: if you get nothing, you are not in a screen session; if you get output, it is the PID and name of your current screen session.

[lisa@server810 ~]# echo $STY
43116.LJR

SUDO – The sudo command lets you execute commands that your ID is not normally privileged to run. There is configuration for sudo (maintained by ITSecurity) that defines what you can run through it. If, for example, you are unable to edit a file but are permitted to sudo vim … editing the file with “vim /path/to/file.xtn” will throw an error when you attempt to save changes, but running “sudo vim /path/to/file.xtn” will allow you to save changes to the file.
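For illustration, a sudoers entry permitting that might look something like this (the group name and file path are made-up examples, not our actual ITSecurity policy – and the real file should always be edited with visudo):

```
# /etc/sudoers fragment (edit with visudo, never directly)
# Members of the unixadmins group may run vim on this one file as root
%unixadmins  ALL=(root)  /usr/bin/vim /path/to/file.xtn
```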

Substitute user – The command su lets you substitute another user’s ID for your own – this means you become that user.

Combining SUDO and SU – Once we are logged into LX810 with our user ID, we can use sudo su – root to become root without actually knowing the root password. The “space dash space” in the su command means the user’s environment is loaded. If you omit the space dash space, you’ll still be logged in as the root user, but your environment will be left in place.

Generally speaking, allowing sudo to root is a bad idea (i.e. don’t do this, even though you’ll see it on a lot of our old servers). Root has full access to everything, so running a shell as root is insecure and typos can be disastrous.

Navigating – You are in a DOS-like command line interface. The interface is known as a shell – root on LX810 is a bash shell. The default for a CUSO ID is the korn shell (/bin/ksh) – you can change your shell in your LDAP account to /bin/bash (or /bin/csh for the C shell) and subsequent logons will use the new shell. You can try each one and see which you prefer, you can use korn because it is the default from CUSO, or you can use bash because it matches the instructions I write.

From a file system navigation perspective, you will be in the logon user’s home directory. If you aren’t sure where you are in the file system, type pwd and the present working directory will be output.

To see what is in a directory, use ls … there are additional parameters you can pass (in Linux, parameters are passed with a dash or two dashes). Adding -a lists *all* files (including the hidden ones – any file whose name starts with a full stop is a hidden file). Adding -l provides a long listing (file owners, sizes, modified dates). Adding -h lists file sizes in human-readable format. You can pass each parameter separately (ls -a -l -h) or combine them (ls -alh)

You can use wc to count the number of lines either in a file (wc -l /path/to/file.xtn) or in the output of ls (ls -al | wc -l) – this is useful on our sendmail servers when you have received a queue length alert and done something to clear out some of the queue. In sendmail particularly, there are two files for each message, so you need to divide the line count by 2.
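As a quick sketch of that math (using a throwaway temp directory to stand in for the real sendmail queue directory, since you probably don’t want to practice on production):

```shell
# Sendmail keeps two files per queued message (a qf* control file and a
# df* data file), so the real queue length is the file count divided by 2.
queue_dir=$(mktemp -d)
touch "$queue_dir/qfAAA" "$queue_dir/dfAAA" "$queue_dir/qfBBB" "$queue_dir/dfBBB"

file_count=$(ls "$queue_dir" | wc -l)
queue_length=$((file_count / 2))
echo "Messages in queue: $queue_length"

rm -rf "$queue_dir"
```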

To change to a different directory, use cd – e.g. cd /etc/mail will change the working directory to /etc/mail.

To delete a file, use rm /path/to/file.xtn – this is the safe way to run it, as it will prompt for confirmation for each file being deleted. You can use wildcards (rm /path/to/files*) to delete multiple files. You can add a -f parameter to skip the prompts – which is more dangerous, as you may have typed the wrong thing and it’ll be deleted without confirmation. You can add a -r parameter for recursive (get rid of everything under a path). Not too dangerous as long as you have the prompt coming up – but if you use -r in conjunction with -f (rm -rf) … you can do a lot of damage. Absolute worst case would be a recursive force delete from / … which would mean every file on disk goes away. Don’t do that!

If you are not sure where a file you need is located, you can use either locate or find. Locate uses an index database, so it’s quicker – but it doesn’t know about files created or deleted since the index was last updated, and it is not always installed. If locate isn’t available, use find.

To use locate, use locate -i filename where filename is some part of the filename. The -i performs a case insensitive search – if you know the proper casing, you do not need to include this parameter.

To use find, you need to indicate the root of the search (if you have no clue, use ‘/’ which is the top-level directory) as well as the exact file name that you want (not a substring of the file name like locate will let you do). Finding a file named audit.log that is somewhere on the disk would be find / -name audit.log
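A runnable sketch of the same thing – the little directory tree here is made up for illustration:

```shell
# Build a throwaway directory tree, then locate audit.log within it.
search_root=$(mktemp -d)
mkdir -p "$search_root/var/log"
touch "$search_root/var/log/audit.log"

# -name wants the exact file name. When searching from /, adding
# 2>/dev/null hides the "Permission denied" noise from directories
# your ID cannot read.
result=$(find "$search_root" -name audit.log 2>/dev/null)
echo "$result"

rm -rf "$search_root"
```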

Customizing shell environment – You can customize your shell environment. The system-wide shell environment settings are in /etc and are specific to the shell. For a bash shell, it is /etc/bashrc

Individual user settings are in a hidden file within their home directory. For the bash shell, the user specific settings are in $HOME/.bashrc ($HOME is a variable for the current logon user’s home directory).

For a shared account, adding things to $HOME/.bashrc isn’t the best idea – your preferred settings can differ from someone else’s preferences. We make our own rc file in $HOME for the shared account (I actually set my .bashrc as world-readable and linked the shared ID $HOME/.ljlrc to my personal .bashrc file so I only have to remember to edit one file). You can load your personal preferences using source $HOME/.yourrc or you can load someone else’s preferences by sourcing their file in the shared account’s home directory (source $HOME/.ljlrc will load in mine).
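As an illustration, a personal rc file for a shared account might hold just a few aliases and environment settings – these particular contents are examples, not my actual file:

```shell
# $HOME/.ljlrc -- personal preferences sourced on demand in a shared account
# (run "source $HOME/.ljlrc" after logging in as the shared ID)

# shorthand for a long human-readable listing
alias ll='ls -alh'

# preferred editor for anything that respects $EDITOR
export EDITOR=vim

# a prompt showing user@host and the current directory
PS1='[\u@\h \W]\$ '
```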

Service Control – Most of our Linux systems still use SysV init (init.d scripts) to start and stop services. You can find the scripts in /etc/init.d – these are readable text scripts. All scripts will have a start and stop command, and many have restart and status as additional commands. To control a service, you can use service servicename command, /sbin/service servicename command, or /etc/init.d/servicename command – same thing any way. If you are controlling the service through sudo, though, you need to use the specific form that is permitted to your UID in the sudo configuration.

If you use a command that isn’t implemented in the script, you will get usage information. You can use a semicolon to chain commands (like the & operator in DOS) – so /etc/init.d/sendmail restart is the same thing as running /etc/init.d/sendmail stop;/etc/init.d/sendmail start
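To illustrate the chaining, here placeholder functions stand in for the real init script (these are not the actual script internals – the point is just that restart is stop followed by start, glued together with a semicolon):

```shell
# Stand-ins for /etc/init.d/sendmail stop and start
stop_service()  { echo "Shutting down sendmail: [ OK ]"; }
start_service() { echo "Starting sendmail: [ OK ]"; }

# "restart" is just the two commands chained with a semicolon --
# the second runs after the first finishes, regardless of its result
output=$(stop_service; start_service)
echo "$output"
```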

Process utilization – To see what the processor and memory utilization is like on a box (as well as which processes are causing that utilization), use top. When top has launched, the first few lines give you the overall usage. The load average tells you the load during the last one, five, and fifteen minutes – 1.00 is 100% on a single-core system, 2.00 is 100% on a two-core system, etc. A load above the core count means processes were queued waiting for CPU cycles to become available.

The process list can be sorted by whatever you need – if the box is CPU-bound, type an upper case C to sort by CPU usage. If it is memory bound, type an upper case M to sort by memory usage.

PID USER PR NI %CPU TIME+ %MEM VIRT RES SHR S COMMAND

23190 root 15 0 1 5:43.81 14.9 608m 605m 2872 S perl

14225 root 16 0 0 7:14.20 1.7 170m 69m 60m S cvd

14226 root 16 0 0 1:30.32 1.4 147m 57m 50m S EvMgrC

4585 root 16 0 0 212:01.99 1.1 230m 43m 6368 S dsm_om_connsvc3

4003 root 16 0 0 2729:44 0.6 171m 24m 3364 S dsm_sa_datamgr3

24552 root 16 0 13 0:36.16 0.3 17804 12m 2900 S perl

The first column shows the PID (process ID). Some commands as listed in top make it obvious what they actually are (httpd is the Apache web server, for instance) and others don’t (perl, above, doesn’t really tell us *what* is using the CPU). To determine what the PID actually is, use ps -efww | grep PID#

You will see the full command that is running – in this case a particular perl script. Note that you may also find your grep command in the list … depends a bit on timing if it shows up or not.

You may need to restart a service to clear something that has a memory leak, or you may need to stop a process outside of the service control (e.g. stopping the sendmail service doesn’t shut down all current threads). To stop a process, use kill PID# … this is basically asking the process nicely to stop; it will clean up its resources and shut down cleanly. Use ps -efww to see if the process is still running. If it is, use kill -9 PID# – which is not asking nicely. Most things to which a process is connected will clean up their own resources after some period of client inactivity (i.e. you aren’t causing a huge number of problems for someone else by doing this), but it is cleaner to try kill without the “do it NOW!!!” option first.
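Here’s a safe way to see the polite-then-forceful sequence in action, using a throwaway sleep process instead of a real service:

```shell
# Start a background process to stand in for a misbehaving daemon.
sleep 300 &
target_pid=$!

kill "$target_pid"                      # polite: sends SIGTERM, asks it to exit cleanly
wait "$target_pid" 2>/dev/null || true  # reap the process so the PID is really gone

# If it were still running, escalate -- kill -9 (SIGKILL) cannot be ignored
if kill -0 "$target_pid" 2>/dev/null; then
    kill -9 "$target_pid"
    wait "$target_pid" 2>/dev/null || true
fi
echo "process $target_pid is gone"
```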

Tail and Grep – Tail is a command that outputs the last n lines of a file, and it has a parameter that outputs new lines as they get appended to the file. On *n?x systems, you can use tail -F /path/to/file.xtn and lines will be output as they show up. This is particularly useful on log files where the system is continually adding new info at the bottom of the file. We put Windows ports of these utilities on our Windows servers – but the Windows port of tail does not support -F (there’s a good reason that has to do with the difference between Unix-like and Windows file systems). You can use tail -f instead – if the log file rolls (gets moved to another file and a new file is started) you won’t continue to get output like you will with -F … but you can ctrl-c to terminate the tail and start it again to see the new lines.

Grep is a command line search program. You can use grep to find lines in a file containing a string (or regex pattern, but learning regex is a question for LMGTFY.com) – to find all of the mail addressed to or from me in a sendmail log, grep -i rushworth /var/log/maillog – the dash i means case-insensitive search.

Grep will also search piped input instead of a file – this means you can send the output of tail to grep and display only the lines matching the pattern for which you search.

tail -f /var/log/maillog | grep -i rushworth will output new lines of the maillog as they come in, but only display the ones with my name.

VIM – The standard terminal text editor is vim, usually invoked as ‘vi’ – though vi is an actual program that is similar to but not exactly the same as vim (vim is “Vi IMproved”). The vim installation contains a very nice tutorial – invoked by running vimtutor

VIM has both a command mode and an editing mode. When in command mode, different keys on the keyboard have different functions. There are “quick reference” guides and “cheat sheets” online for vim – most people I know have a quick ref guide or cheat sheet taped next to their computer for quite some time before vim commands become well known.

History – Linux maintains a history of commands run in a session. This spans logons (you’ll see commands run last week even though you’ve logged on and off six times between then and now), but when there are multiple sessions for the same user, there can be multiple history files. Which is all a way of saying you may not see something you expect to see, or you may see things you don’t expect. The output of history shows the command history for the current logon session. You can pipe the output to grep and find commands in the history – for example, if you don’t remember how to start a service, you can use history | grep start and get all commands that contain the string start

[lisa@server855 ~]# history | grep start

7 service ibmslapd start

15 service ibmslapd restart

42 service ibmslapd start

56 service ibmslapd restart

71 service ibmslapd start

95 service ibmslapd start

107 service ibmslapd start

115 service ibmslapd restart

289 service ibmslapd start

303 service ibmslapd start

408 service ibmslapd start

419 service ibmslapd start

430 service ibmslapd start

443 service ibmslapd start

If a command fails, it will still be in the history (all of my typos are in there!), but if you see the same command a number of times … it’s probably correct. You can copy/paste the command if you need to edit it before running (or even to run it as-is). You can re-run the exact command by typing bang followed by the line number from the history output (!115 with the history above would re-run “service ibmslapd restart”).

Symbolic Links

Linux symbolic links are often described as Windows shortcuts, but the comparison is loose. A shortcut is an independent file containing a path to the referenced file, and only Explorer knows how to follow it. A Linux sym link also stores the path to its target, but the kernel resolves it transparently – every program treats the link as if it were the file itself, just usable from a different location. (A hard link goes further still: it is a second directory entry pointing at the very same inode.) This is a bit like memory addressing in programming — anything that reads from the memory address will get the same data, and anything that writes to it changes the data for every reader. When you do a long list (ls -al or just ll), you will see both the link name and the file to which it points.
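You can see the “the link is the file” behavior with a quick demo – the paths here are throwaway temp files:

```shell
work_dir=$(mktemp -d)
echo "original content" > "$work_dir/real_file"

# Create a symbolic link pointing at the real file
ln -s "$work_dir/real_file" "$work_dir/link_to_file"

# Writing through the link changes the real file...
echo "written via link" > "$work_dir/link_to_file"
contents=$(cat "$work_dir/real_file")
echo "$contents"

# ...and a long listing shows "link_to_file -> /path/to/real_file"
ls -l "$work_dir/link_to_file"

rm -rf "$work_dir"
```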

Scott has been setting up our OpenHAB server, and the latest project was controlling our network speakers. You can play Internet radio stations to the speakers, you can stream music from the NAS … but we also want to be able to play announcements. For that, we needed a text to speech engine.

Festival is in Fedora’s yum repository, but everything I’ve read about Festival says the output is robotic. Which is likely fun at first, but tiring after the first three or four times. Even if you have it say “beep, boop” at the end.

SVox (now part of Nuance, which a long LONG time ago was spun off from the Stanford Research Institute) has an open-source version of their text to speech product. Not in convenient package form, but close. Someone maintains a shell install script. Download the script:

Corn!!! The corn has been loving the hot weather. Our tomatoes are doing quite well too. We’ve even got half a dozen garlic plants sprouting up. Still need to get some beans planted (we’ll do the greenhouse thing again at the end of the season, so we *should* get a good number of beans even though we’ve gotten a late start of it).

The largest hop plant is really taking off too. We have four rhizomes from our original two. Although two are *really* tiny little guys with just a vine or three, they all lived through transplanting.

My husband has been setting up OpenHAB to control our home automation. Our dimmers are very direct – there’s a z-Wave binding that you set to 100 if you want it at 100%, set it to 18 if you want it at 18%, and so on. We have a handful of Zigbee bulbs, though, which are not so direct. We are controlling these bulbs through a Wink hub by running a curl command with the exec binding.

The OpenHAB exec binding runs a shell with a command string passed in from the -c parameter. Thus far, I have not found anything that runs within a shell that fails to work in the exec binding. This includes command substitution (I personally use the backtick format instead of the $(command) format, but I expect the latter to be equally functional).

What is command substitution (without having to read the Open Group Base Specifications linked above)? If you run

kill `pidof java`

the shell takes the component within the backticks, evaluates it, and then takes the standard output and places that into the command. When “pidof java” returns “938 984 1038”, the command above becomes “kill 938 984 1038”.

We want to set the value to the OpenHAB value (0-100) scaled to the Wink value (0-255 for GE Link bulbs) using command substitution with bc (an arbitrary precision calculator language). To evaluate a mathematical expression, echo the expression text and pipe it to bc. To set a bulb to 75% of its maximum brightness, our post data is “nodeId=a&attrId=aprontest -u -m9 -t2 -v`echo 2.55*75/1|bc`”.

Notice the divide by 1 at the end — that’s to turn a decimal value into an integer. If you use just 2.55*75, you post a value of 191.25, which throws an error. In bc’s language, / returns the quotient — this isn’t *rounding* but truncating the decimal portion (i.e. in bc, 9.99999/1 = 9).

We configure the OpenHAB item to take the selected value (the %2$s below), scale the value with bc, and insert the result into the curl command. We use a similar technique to read the data from Wink and present the scaled value through OpenHAB.

More than a decade ago, when my company had an office out in Thousand Oaks, I traveled to LA fairly regularly. We got a “travel day”, so Friday was a free day in LA. Then my whole group would usually stay the weekend and fly out Sunday night. One of the things I loved to do on Friday or Saturday was drive down the coast and get fish tacos. We’ve gotten a few decent fish tacos in Cleveland, but nothing close to what I remember from the West Coast.

A few weeks ago, I decided to try making some at home. They turned out really well. The ocean perch was a really small filet that needed to be skinned first … which was a LOT of work. The next time, I used a tilapia which came in a larger filet but didn’t require any prep work. The smaller pieces of fish gave us a crunchier taco, but I think I’d get the larger pieces and slice them.

Flour Tortillas:

4 cups all purpose flour

1/2 teaspoon salt

2 teaspoons baking powder

2 tablespoons liquid vegetable oil

1-1/2 cups water

Zest of one or two limes

Mix all of the dry ingredients (not the lime zest) in a bowl. Measure water in a measuring cup, add in the oil and zest, and mix well to break up the zest. Pour the water into the dry ingredients and mix to combine into a dough. Knead the dough for a few minutes. Cover with clingfilm and let set for at least an hour.

Grab a ball of dough and roll it with a pin — we roll them quite thick (1/8″ to 1/3″) and have something more like a flatbread than a traditional tortilla. I cook them on an electric griddle/grill at 350 degrees F for about three minutes, flip and cook for another two or three minutes (timing is going to depend on how thickly the tortillas are rolled).

Fish:

1 lb of light white fish (we’ve had ocean perch and tilapia)

1/4 cup vegetable oil

Juice and zest of one lime

1 teaspoon ancho chili powder

Mix the oil, lime juice, zest, and ancho powder. Place fish in a low baking dish, pour marinade over fish, cover with clingfilm and refrigerate for twenty minutes. For grilled fish tacos, grill or saute the fish at this point.

Fish Breading:

Combine zest of one or two limes with bread crumbs in a bowl (add some of the chili powder if you want a spicier completed dish)

Combine all purpose flour, salt, and pepper in a second bowl

Combine two eggs and a little milk in a third bowl

To bread fish, dip it in the egg mixture, then dredge in flour. Shake off, dip in egg mixture again, then dip in bread crumbs. Pan fry in a good bit of oil, place on paper towels to absorb some of the oil.

To serve, take a warm tortilla / flat bread. Place fish onto tortilla and top with broccoli slaw, shredded cabbage, or mixed greens with diced tomatoes and onions. My broccoli and cabbage slaws are usually Mark Bittman’s spicy slaw sauce which has no mayo. Sometimes, though, I’ll do a creamy sauce of Greek yogurt, lime juice, and celery seed.

We’ve been trying to get our BloomSky data parsed and reflected in OpenHAB — we can automatically turn the lights on when there is motion *and* the luminance is lower than some desired value. BloomSky has an API which allows us to retrieve JSON formatted data from our weather station. I had never worked with JSON before – I’d heard the term, but didn’t actually know what it was … but I needed to parse it in a JavaScript transform. Does JavaScript do JSON? D’oh! Turns out JSON is an abbreviation for JavaScript Object Notation, and JavaScript parses JSON data really well.

Still need to turn my example web code into a transform that runs from OpenHAB, but getting values out of a JSON formatted string is as easy as using the JSON.parse function:
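As a sketch – note the field names in this payload are made up for illustration, not the actual BloomSky response format:

```javascript
// Hypothetical BloomSky-style payload -- field names here are illustrative,
// not the real API response.
var apiResponse = '{"Data": {"Luminance": 820, "Night": false}}';

// JSON.parse turns the JSON string into a plain JavaScript object
var station = JSON.parse(apiResponse);

// Dotted access then pulls out the value we care about
var luminance = station.Data.Luminance;
console.log(luminance);
```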

I wish there was a decent way to file RFEs (requests for enhancement) with the federal government. I can’t do a thing about the complexity of the tax code or the annoyance of having to spend a weekend filling out forms just to get my money back. But there’s existing tax code for charitable deductions (although you can fall afoul of the AMT if you donate too much of your income … so that may need a little rewrite here). Create a new tax deductible donation categorization for government entities — then each department of the government can get itself registered as a not-for-profit-government-entity that qualifies for tax deductible charitable donations. I would feel a LOT better about paying 10k in taxes this year if I knew the money was going toward departments I support (and not going to departments I do not support). I could literally donate every dollar I owe in taxes to specific departments – then get my payroll deduction contributions completely refunded (bonus, US government, you got the interest on my payroll deductions since you held on to them). Don’t want to bother? Then don’t – your payroll deductions will get allocated out for you through the budgeting process.

With a significant adoption rate, if no one wants to fund the Department of Whatever, then the people writing the budget could well take that as a hint. Obviously that’s not a perfect rule – no one wants to fund the IRS, but you’re still going to need someone to handle tax collection & filing (at least until you manage to sort out the tax code & processes). But someone who advocates eliminating the Department of Education may be surprised how many people voluntarily earmark their taxes for Education. Or the military industrial complex may be shocked that donations don’t approach the 60% or 16% (depending on your point of view of “all spending”) of the federal budget that goes into the military and Homeland Security.

I read an article from the NY Times stating: ‘Mr. Trump suggested that all Muslim immigrants posed potential threats to America’s security and called for a ban on migrants from any part of the world with “a proven history of terrorism” against the United States or its allies’. I know there’s a lot of interpretation in journalism, and I was curious what he actually advocated.

One quick Google later, I found the speech text on the candidate’s web site:

“When I am elected, I will suspend immigration from areas of the world when there is a proven history of terrorism against the United States, Europe or our allies, until we understand how to end these threats.”

If this is what the man actually said, then there’s no third-party misinterpretation to blame. “Areas” of the world is vague, but I get not using legally accurate terms in campaign speeches. Linguistic and legal nuances are not exactly gripping (and sometimes get ridiculed a la what-the-meaning-of-is-is). But where there is a proven history of terrorism against the US, Europe, or our allies??? No delta-time qualification in there, so the Irish are right out? Bonus, though, is he inadvertently sorted a huge portion of South/Central America with this generalization too. For the most part, it’s been decades there too, but “fuera yankis” and all that.

Actually, I’d find the proposal far less incendiary if he said “the immigration system is absolutely a mess. I propose stopping all immigration for six months while we figure out how to do this properly.” Years ago, Microsoft had a run of significant bugs and took a coding holiday to perform a code review. Similar thing — yeah, it’s disruptive to shut down our business for some period of time while we make sure we’re doing the right thing … but it’s more disruptive to continue doing the wrong thing.

Problem is that what Mr. Trump probably means is more apt to be banning immigration from any country some arbitrary individual / board decides seems like it could be dangerous (or just to be safe – any individual immigrant who looks like ..). Which would be frighteningly institutionalized racism.

We built a fire pit out in our woods, and I wanted to make s’mores – so we made homemade marshmallows using Alton Brown’s recipe (with hazelnut extract instead of vanilla).

It got surprisingly thick while whipping it – I was worried that it would get too hard, and I may have stopped whipping it a little too soon.

The end result, though, was spectacular. For future reference … they melt really quickly over the fire when they’re still fresh. Supposedly if you let them dry for a few days, they toast up better.

We have a HUGE tray of marshmallows – supposedly they freeze well … so I used maybe 1/10th of them & froze the rest. This might be a lifetime supply … but I’m kind of looking forward to hot cocoa this winter with hazelnut marshmallows!