The Macintosh Guy
My technical site
https://macintoshguy.wordpress.com
Bash Completion For Pandoc Is Built In
Mon, 05 Jun 2017

This is more in the way of a note to myself. I was just starting to write a bash completion script for Pandoc when I came upon this in the Pandoc documentation:

--bash-completion

Generate a bash completion script. To enable bash completion with pandoc, add this to your .bashrc:

eval "$(pandoc --bash-completion)"

So no need for me to write one. Neat trick, generating your own bash completion script. John MacFarlane really is a god. Oh, and the completion is top quality: it knows when you’ve typed an option that takes an input or output format and completes on those, among other little tricks. I may end up using some of them for my completions.

A Little Shell Will Fix It
Thu, 01 Jun 2017

Last night I went to Lights For The Wild at Taronga Zoo. As usual I took a lot of photos with my DSLR camera, over 200, though many of them are quite similar: I often take two or three of each shot to increase the chances of getting the right one, sometimes varying the speed so that one is better exposed.

The camera saves both a RAW file and a JPEG, so I end up with over 400 images. Looking through them with Quick Look in the Finder can be painful, as the RAW images take quite a while to load. Then there is the problem that once you have decided which of the three shots to keep, you also have to delete the matching JPEG or RAW file.

The easiest solution to both problems is to go through only the JPEG files and then delete the matching NEF files (NEF being the extension Nikon uses for its RAW files).

So I open the folder and sort by ‘Kind’, which puts the JPEGs at the top. I then open the first in Quick Look by hitting space, and use the up and down arrow keys to move through the list. Command-Delete deletes a file and displays the next. Easy.

Now I have 80 JPEG files from the original 240. How to get rid of the NEF files that match the JPEG files I have deleted? A little bash programming to the rescue.

for i in *.NEF; do
    if [ ! -e "$(basename "$i" NEF)JPG" ]; then
        rm "$i"
    fi
done

The secret to this is the basename utility. It’s a neat little tool. Pass it a full file path such as /Users/tonyw/Documents/UselessRamblings.txt and it will return just the file name without the path, UselessRamblings.txt. It has a matching tool, dirname, which returns just the path portion. As you can see from my code, basename has another trick: it will happily strip a suffix from the filename if you tell it what to strip.
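A quick demonstration of the pair, plus the suffix trick (the NEF filename here is made up):

```shell
# basename strips the directory portion; dirname returns it.
basename /Users/tonyw/Documents/UselessRamblings.txt   # UselessRamblings.txt
dirname  /Users/tonyw/Documents/UselessRamblings.txt   # /Users/tonyw/Documents

# With a second argument, basename also strips that suffix.
basename IMG_0042.NEF NEF                              # IMG_0042.
```

Note the stripped result keeps the trailing dot, which is why the loop above can append JPG directly to get IMG_0042.JPG.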

More Tools For Building Tools
Thu, 18 May 2017

I’m working on more bash completions, this time for some of the command line tools Apple provides for sysadmins.

I decided there had to be a way to get a list of the options for a tool from its man page. After all, they are all in there.

So I built a command line piece by piece. As an example, let’s get a list of the options (with some caveats) for the tool pkgbuild. We start with man pkgbuild | col -b; the col -b step takes out the special characters man uses to show bold on screen. Now find all the lines containing -- with grep; I liked grep -e '--'. If you have a look at the output of that, we are getting close.

Next I decided to use sed to do a find and replace for the option itself. After some playing around I ended up with sed -e 's#.*\(--[a-zA-Z-]*\).*$#\1#'. An important note for young players: it did take some time and a few tries to get that substitution just right. Don’t be afraid, and remember Google (and Stack Exchange) are your friends.

First, I should point out an old Unix hand’s trick. Most of the time you see sed substitution commands using / as the separator, but sed can use anything except \ or newline – it uses the first character it sees after the ‘s’. I usually use # as it makes the \ used for special characters easier to spot and the string easier to read.
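For example, here is the same substitution written with both separators; the # version needs no escaping:

```shell
# Replace /usr/local with /opt in a path, two ways.
echo /usr/local/bin | sed -e 's/\/usr\/local/\/opt/'   # /opt/bin
echo /usr/local/bin | sed -e 's#/usr/local#/opt#'      # /opt/bin
```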

So now we have a look at the find portion of the sed command. The .* matches a string of zero or more characters, then the \( is the start of what we want to capture. The option starts with -- and is followed by a string of alpha characters of either case or a -. Then we end the capture with \) and finish with zero or more characters until the $, which is the end of the line. The whole line is then replaced with what we captured in the find half by the \1. Whhheeewwww.

Now we have a list, but it can contain the same option multiple times, so we pass it through sort and uniq. Our final command line is:

man pkgbuild | col -b | grep -e '--' | sed -e 's#.*\(--[a-zA-Z-]*\).*$#\1#' | sort | uniq

You may find that you end up with the odd extra line, so give the output a check. If the options only have a single - in front of them then I found that grep -e ' -' | sed -e 's/.*\(\s-[a-z]*\)/\1/' seems to work quite well, though some man pages want a \t instead of a space in that grep command.

You will notice that this leaves the - or -- in place. I like that as a check on my command line work, and it’s easily removed in the editor.

If you are a BBEdit user then you can add | bbedit at the end for the output to open straight into a new BBEdit window. Sweet. Oh, and did you know that the bbedit command line tool has a --resume flag? Put it in your EDITOR shell variable and when you close the window after editing your git commit message it will take you back to your shell window. So I have export EDITOR='bbedit -w --resume' in my .bash_profile.

Of course, in the time it took me to develop the command line and write this blog post
I could have written several bash completion scripts, but where’s the fun in that?

Now We Have bash Completion For Munki
Wed, 17 May 2017

I’m on a roll. I’ve written the bash completions for Munki.

It’s getting easier to write them. There was one little trick I used that I didn’t mention in my last post, so I thought I’d share it: how to use find and replace with regular expressions to generate some of your code.

For this I use Find... in BBEdit. I started with a list of the commands, one on each
line.

The first thing we need to do is generate a string with each command separated by
a space. This one is trivial: we just find \n and replace it with a space. The second is
harder: we want each command turned into a line of the case statement, calling its
completion function.

The “Find” is the easy part. We want to match everything on a line up to, but not
including, the newline at the end. This looks like (.*)\n – the parentheses define the
part we want to match. Now for the replace – we want a tab, then the name, then a
parenthesis and so on. You can see we need to insert the name into a template twice.
This ends up as \t\1) _autopkg_\1 ;;\n – the \1 means “the first match in the Find”.
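If you prefer to stay on the command line, the same transformation can be sketched with awk (the command names here are invented for illustration):

```shell
# Turn a list of commands, one per line, into case-statement lines.
printf 'install\ninfo\nremove\n' |
    awk '{ printf "\t%s) _autopkg_%s ;;\n", $1, $1 }'
```

Each input line becomes a tab-indented case branch such as install) _autopkg_install ;;.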

So I just enter those into the dialog and hit “Replace All” and the list of commands is
changed into the required bash code. After pasting the result into my script I can
hit “Undo” and the list is back ready for me to use it again. I can even generate
boilerplate code using a different replace.

The advantage of doing things this way is not just less typing. By generating the code
I can be sure that the switches and the function names are correct and match
each other. (BTW – notice that I have a **** in the boilerplate. This marks where I
need to alter the function and also marks it as not finished.)

Many years ago I was tutored by two of Brian Kernighan’s books, ‘Software Tools’ and
‘The Unix Programming Environment’, and this is exactly the sort of thing he evangelised.
If you can use a tool to write your code, all the better.

bash completion for autopkg
Mon, 15 May 2017

Over the weekend I was feeling a little bored so I decided to try my hand at writing a shell script to add custom completion for autopkg to bash.

I found an example for the zsh shell which lacked a couple of features and I spent some time examining the script for brew so I wasn’t totally in the dark.

There are a number of tutorials available for writing them but none are particularly detailed so that wasn’t much help.

Writing Shell Scripts

The first thing I should say is that I find writing shell scripts totally different to writing in any other language. I probably write shell scripts incredibly old school; shell and C were the two languages I was paid to write way back in the 1980s. It feels like coming home.

In shell I write tiny functions. The final script was 224 lines long and contains 22 functions; around half a dozen either do nothing or contain a single call to another function. Apart from the main function, none is longer than ten lines of code.

Even the main function is quite simple: though it runs to 40 lines or so, half of it is a single case statement with a line for each of the commands in autopkg.

In the first one we have a single case statement. This could have been an if, but it looks much neater and clearer written as a case statement. You might wonder why I bothered writing the next two functions at all. It’s done in the name of consistency. Down in the main function we have the case statement:

This statement has one line for each possible autopkg command. By creating those “useless” functions we make this case statement look clean and clear. It also makes it obvious what to do if autopkg adds a new command: add a line in this case statement, write a new function, and put the new command into a string called “opts”. By using those tiny functions we’ve made our code much cleaner.
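To give the flavour of it, here is a cut-down sketch of the pattern, not the actual script; the command and function names are illustrative:

```shell
# Each command gets its own tiny completion function; even the
# "useless" ones exist so the dispatch stays uniform.
_autopkg_run()  { echo "recipe names"; }
_autopkg_info() { echo "recipe names"; }
_autopkg_help() { :; }   # nothing to complete

_autopkg_dispatch()
{
    case "$1" in
        run)  _autopkg_run ;;
        info) _autopkg_info ;;
        help) _autopkg_help ;;
    esac
}
```

Adding a new command means one new line in the case statement, one new tiny function, and the command’s name added to the “opts” string.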

The other complexity in shell scripting is that so much of what you write ends up being calls to other tools or sometimes complex shell builtins. Writing completions is a classic example: there are two special builtins, complete and compgen, which you will need to understand. Then I also had to use grep and expr. The grep command was simple, but the expr command is a little gnarly:

It’s not really the fault of expr; that’s a regular expression after the :, and they’re often gnarly. But it does show that you need to be familiar with a wide range of small tools for shell programming.

By the way, that regular expression takes a string in the form <directorypath> (<URL>) and returns the URL without the parentheses. It will even cope with parentheses in the directory path, since expr has greedy expansion and that first .* in the expression will grab everything up to the last open parenthesis character. This is, of course, exactly the sort of detail you have to be all over when writing shell scripts.
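An expr call of that shape looks something like this (a sketch matching the description, not necessarily the exact line from the script; the path and URL are invented):

```shell
# The greedy .* eats everything up to the LAST open parenthesis,
# so parentheses inside the directory path don't confuse the match.
repo='/Users/tonyw/AutoPkg (local) recipes (https://github.com/autopkg/recipes)'
expr "$repo" : '.*(\(.*\))'   # https://github.com/autopkg/recipes
```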

That single line probably took me more than twenty times longer to write than any other line of code in the script. Given that I wanted to check it would cope with any possible directory path and URL, I wrote another shell script that took its first argument and ran it through the expr command. My final step was to write a file containing 16 possible complications in the file path and half a dozen possible complications in the URL, and loop through the file running the test script on each line. I had nothing to worry about, but it was nice to be sure. I was so happy when I finished that line that I posted a status update to Facebook and had a celebratory bourbon.

Containers Rock – Why I’m A Docker Fan
Tue, 16 Aug 2016

Docker implements a way of walling off a piece of software from the underlying operating system using a tech they call “containers”.

This is an absolute godsend for deploying services. One of the problems in system administration is the cost and complexity of spinning up a new service and then removing it from a computer once it is no longer required.

Software, when it is installed and run, can spray pieces of itself all over the computer’s file system, and getting it out again is difficult.

Previously we have used virtual machines to isolate this problem. That has its own costs: with a virtual machine you are running (at least) two complete operating systems on the hardware. It also has a cost in memory and hard disk space.

Containers lower the cost considerably. They have all the advantages of virtual machines but share the operating system kernel with each other and the underlying OS. This makes them smaller, and they consume considerably fewer resources than virtual machines. It also makes them quicker to download and deploy.

Since Docker is open source, there is now a huge community around it. Docker containers are readily available for a huge range of applications; a quick visit to Docker Hub will show you how large.

Docker containers may well be the holy grail of app deployment. They certainly tick all the boxes system administrators require.

Using Docker

So how easy is it to use? Installing it is trivial: just download the install package and copy the Docker application to your Applications folder. You might also want to download Kitematic, which provides a GUI interface to Docker; it too just requires downloading and copying the app to your Applications folder. It is just as easily installed on a Linux box.

I wish I could tell you how easy it is to build a Docker container from scratch, but every time I searched Docker Hub for a container I wanted, someone else had already built it, or built a large chunk of it.

As an example, I wanted a container running Python 3, Jupyter and the add-on for bash notebooks. Sure, I could have built it from scratch, but Continuum, the Anaconda people, already have a Docker container with Python 3 and Jupyter (along with a bunch of other useful Python libraries) installed, so:

docker run -it continuumio/anaconda3 /bin/bash

which will download and run the Python 3 version of Anaconda in a container. Then, when the container runs (the -it makes it an interactive container):

pip install bash_kernel
python -m bash_kernel.install

then exit the container, and at the terminal prompt:

docker ps -a
docker commit <container_name> tonyw/jupyter

The ps -a lists all the containers so I know which one to commit, and the commit saves the changed container under (optionally) a new name. Now we can run the new container.
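The run line itself would be something like this (a sketch; the port mapping and notebook options are my assumptions, not the original command):

```shell
# -d detaches the container (daemon mode); -p publishes the notebook
# port; the command at the end is what runs inside the container.
docker run -d -p 8888:8888 tonyw/jupyter \
    jupyter notebook --ip=0.0.0.0 --no-browser
```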

This runs the Docker container in ‘daemon’ mode and, when the container starts, runs the command given at the end of the line, in this case Jupyter in notebook mode.

Of course, if I just want to run Python 3.5 instead of Jupyter, I can replace the -d with -it and the jupyter command with bash, and I get a shell in the container.

Docker Magic

Now all the Docker gurus out there are screaming at me that I should use a Dockerfile to build my custom container, defining all sorts of magical stuff like the default command to run when the container starts and the working directory and all the rest, so I didn’t need them all in my long command line. Frankly, while that would probably be a good idea, I haven’t quite managed to learn how to do all that automated magic and it almost seems like too much work.
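For completeness, a Dockerfile doing the same build would be roughly this (untested; the working directory and notebook options are invented choices):

```dockerfile
FROM continuumio/anaconda3
RUN pip install bash_kernel && python -m bash_kernel.install
WORKDIR /notebooks
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser"]
```

With that in place, docker build replaces the run/modify/commit dance above.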

Perhaps for my next blog post.

BBEdit Really Doesn’t Suck
Mon, 08 Aug 2016

Recently, with version 11, BBEdit introduced a demo mode, so I thought I’d take another look at the big brother of TextWrangler. I have to say Bare Bones Software’s tag line for BBEdit is true: “BBEdit – It doesn’t suck!”

There are two tasks that I use an editor for, writing Python and writing Markdown, so those are the two that I looked at.

There are a number of things you can do to improve BBEdit as a Python IDE. The first is to install Dash. This is a brilliant tool for searching documentation sets and can be easily searched from BBEdit. Just select a library call and choose “Find In Reference…” under the Search menu and BBEdit will pass the search to Dash. Dash will search across all your documentation sets but it is easy to set the sort order so the Python entries are close to the top and in the Dash results window there is a little Python icon next to the Python results.

The other neat item under the Search menu is “Find Definition”, this will find where in your file a function is defined – useful if you have a long source file.

But how does that work if our project is in multiple source files? Well, Unix has long known of that problem and has a solution: a tags file, first used in vi. This is a file that lists all the function definitions and variables used in all the files in a directory tree. Not only can BBEdit use a tags file, it can generate one (using the open source utility ctags). At the top of your project directory tree, on the command line, bbedit --maketags will generate a tags file, and now “Find Definition” will work across all the Python files in the tree.

BBEdit can also run a syntax check across your source. You will find “Check Syntax” under the “#!” menu, which also allows you to run your Python code. The final entry in this menu, “Show Module Documentation”, displays a new text window with the output from running pydoc across your file. I love this; it encourages me to properly document my code as I write, with pydoc strings for each function. The output is extremely useful as a memory aid for large programs and modules.

Next up is running a lint across our Python source. BBEdit comes with another command line tool, bbresults, which turns formatted error output from Unix command-line tools into a BBEdit results window. This is an exceptionally neat trick. At the command line, flake8 example.py | bbresults will give you a window in BBEdit with each of the errors and warnings listed, and a click on one will take you to the exact spot in your source. If you don’t have flake8 installed then you can install it with conda or pip.

By the way, this works because the bbedit and bbresults command line tools understand the +n argument syntax for going to line n in a file. Sublime Text and other editors on the Mac could learn this.

A final tip for programmers, BBEdit recommends setting the $EDITOR shell variable to bbedit -w where the -w flag has the bbedit command line tool wait till you close the window before exiting. If you add the --resume flag as well then when you close the window in BBEdit it will return the Terminal to the front. Exceptionally handy.

Markdown

One complaint I would make, and I make it about a number of editors, is that the Markdown syntax highlighting is on the stupid side. This is generally due to the flaws in using nothing but regular expressions to do the highlighting. The most obvious flaw is that underscores in such things as a URL will trigger highlighting for italics.

If you want you can “lint” your prose using proselint and bbresults. Personally I find proselint rarely throws up something I actually want to change but your mileage might vary, it’s a good tool for looking at prose text.

BBEdit has no special facilities for writing Markdown, such as inserting the codes for text styles or formatting, but it does have “Clippings”: short pieces of text that can be kept in sets, and a clipping can have a keyboard shortcut. I don’t use it; I have a few Keyboard Maestro macros for such things as web links, and otherwise I just type the few extra keystrokes.

BBEdit also has “Text Filters”, which allow you to run the current selection through a script. For Markdown I have one that turns tab separated text into a Markdown table, incredibly useful for tables copied from a spreadsheet. I’m not sure where I got it, but I suspect it was from Brett Terpstra’s blog.
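A minimal filter along those lines can be sketched with awk (my own sketch, not the one from Brett Terpstra’s blog); a Text Filter receives the selection on stdin and replaces it with stdout:

```shell
# Turn tab separated text (first row = header) into a Markdown table.
tsv_to_md() {
    awk -F'\t' '
    {
        row = "|"
        for (i = 1; i <= NF; i++) row = row " " $i " |"
        print row
        if (NR == 1) {              # after the header, emit the rule row
            sep = "|"
            for (i = 1; i <= NF; i++) sep = sep " --- |"
            print sep
        }
    }'
}

printf 'Name\tCount\nfoo\t3\n' | tsv_to_md
```

Saved as a script in BBEdit’s Text Filters folder, selecting pasted spreadsheet rows and running it gives you the finished table.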

BBEdit is a good editor, well worth the $50 purchase price, and it has a number of advantages over its free little brother TextWrangler. As both a general purpose editor and an editor for programming, I’d say it is the best editor available on the Mac at the moment, though Sublime Text comes close.

Jupyter Releasing Some Nice Software
Sat, 23 Jul 2016

The Jupyter group have released an alpha version of a new notebook environment called JupyterLab.

JupyterLab is browser based, just like the old notebook system, but adds a multiple pane environment. I’m not going to go into the details of the collaboration between the large number of organisations that have gone into the development; go read the blog post announcing JupyterLab. Suffice it to say that I’m glad such a high powered group is working on my favourite Python environment.

I installed the alpha (it’s quickly done with pip) and had a look. It’s an exciting looking development and will make a brilliant Python development environment.

At the moment it seems to be suffering from minor speed problems and minor layout problems in Safari (they are minor, they don’t appear in Google Chrome, and Safari is not currently listed as a supported browser, so I’m not going to complain too loudly).

The built in editor can syntax colour Python. It even has colour themes for those, like me, who like a particular look in their editor. At the moment it indents only two characters for a tab (PEP 8 says it should be four), and if you hit return with the cursor in column 1 you get a first level indent on the next line.

These are the sorts of problems you can expect in alpha software. I think I might install the current development version from GitHub and check there before filing a couple of bug reports. I’m a bit idiosyncratic; there’s nothing I like more than spending an hour or two getting a bug down to its essentials and filing a report.

IPython 5

They have also released a new version of IPython they are calling IPython 5.0 LTS. It has some nice new features, including syntax highlighting as you type and much better multi-line support. This is due to shifting from various command line interfaces to the pure-Python readline replacement prompt_toolkit.

I think the move to prompt_toolkit is going to show major dividends as the library (currently at version 1.0.3) adds yet more functionality and that functionality moves into IPython. Jonathan Slenders, the author of the library, is also developing clones of Vim and tmux in pure Python using it, and intends to fold features from those projects back into prompt_toolkit.

They are designating this as “Long Term Support” as it will be the last IPython to run under Python 2; IPython 6 will require Python 3. Not all is lost, though: they say they will continue to support Python 2 kernels with Jupyter Notebooks (and, we assume, the new JupyterLab). As they say in their announcement: “For the 5.x series releases we are making an exception to that rule: until the end of 2017 the core team will do its best to provide fixes for critical bugs in the 5.x release series. Beyond that, we will deprioritise this work, but we will continue to accept pull requests from the community to fix bugs through 2018 and 2019, and make releases when necessary.” So it will be a while before us OS X users are forced to run Python 3 for IPython and break PyObjC and its brethren, which are written for 2.7 (we can also hope that well before the 2020 deadline Apple moves to Python 3 and does the port of PyObjC).

Easy Python Development

Taken together these two new releases improve Python development enormously for me. I have always been a fan of iterative development of my code in IPython and this just makes the explore and iterate method easier and easier.

The “Next” Human-Computer Interface
Tue, 12 Jul 2016

Earlier today I read a piece in The Atlantic entitled The Quest For the Next Human-Computer Interface, subtitled “What will come after the touch screen?”.

I’ve been interested in human-computer interfaces since the very early Eighties when I first came across the work of Niklaus Wirth, Seymour Papert and Jef Raskin. For me human-computer interfaces are split in two. The first is the interface to _build_ software and the second is to _control_ software. Wirth worked mainly on the former, Raskin on the latter and Papert in both areas, principally from work in learning.

The Atlantic article is, of course, mainly concerned with the latter. How do people control the software on their computing device, how do they enter data and how do they get results.

It also starts from a broken premise, that there will be a “next” interface. Next implies there was a previous interface and that it has now been replaced. This couldn’t be further from the truth. It was only the most primitive of computers that predated the use of a keyboard and printer, two interfaces still going strong more than sixty years later. Speech recognition was usable for serious work as far back as the early 1980’s. Touch screens date from the same time. Virtual reality and augmented reality work, including work on using gestures, also began around then.

Let’s have a look at my favourite interface, the keyboard. You might think that not much has changed, but just think about spelling correction and predictive text. If you’re a programmer using a good editor then you can even have fairly good (and improving) context sensitive predictive text – the editor knows when you are typing a variable name and predicts only those; a moment later, on the next line, it realises you are calling a function and predicts on those instead. How about an editor that “knows” when you import a bunch of functions and adds those to the list to predict on?

Even better, in Google Wave Peter Norvig demonstrated context sensitive spelling correction. His example showed the system correcting “icland is an icland” to “Iceland is an island”. He also demonstrated the system correcting a number of homonyms, such as “Are they’re parents going two the coast?” corrected to “Are their parents going to the coast?”

So while the physical keyboard has not improved (indeed keyboard junkies like me feel it has gone backwards) the intelligence of the keyboard has improved and improved the interface.

How about that voice technology?

First, let’s dismiss one of the statements in the Atlantic article. Missy Cummings (head of Duke University’s Robotics Lab) says “Of course, the problem with that is voice-recognition systems are still not good enough. I’m not sure voice recognition systems ever will get to the place where they’re going to recognize context. And context is the art of conversation.”

I’m going to break that down. Voice-recognition is actually two problems. The first is translating the noise of a voice into a text stream. The second is understanding the text stream so that our software can act upon the request. In good systems the second informs the first, but they are different problems. So when Cummings talks about recognizing context she is talking about the second problem.

For all intents and purposes the first problem has been solved. Translating the noise of your voice to a text stream is becoming more reliable, less upset by your accent and faster by the day. Siri, for example, does this superbly.

So it is the second problem where improvements still occur. This is the field of study called “natural language processing”. The problem Cummings is talking about is partly discourse analysis, text linguistics and topic segmentation. All of these sub-fields have continued to progress. Indeed progress has been amazing for natural language processing within what researchers call “limited domains”. This is where the general topic of a conversation (or discourse) is limited to a specific area.

An example might be a search of a movie database.

“Show me all Cameron Diaz’s movies.”

“I’ve got 32 movies.”

“OK, how about just her comedies?”

“Here are the six movies starring Cameron Diaz marked as comedies.”

That is a conversation which uses context. A tiny example but the computer has to understand the meaning of “her” from the context of the conversation. The next time you talk “her” might be Judi Dench or Cate Blanchett. Now this is limited in domain and the context is easy but it *is* recognizing context. So research continues on understanding more complex examples of context and across a wider domain. Siri, the Amazon Echo and their ilk are improving constantly.

We have also seen constant improvements in touch interfaces. Both the hardware, with capacitive touch screens of excellent resolution replacing earlier resistive screens, and the interface software, where tap, tap and hold, hard tap and hold, and swipe are all recognised with different meanings (and often different meanings in different contexts). Touch screen software is even getting good at recognising the difference between your finger or a pen and your hand accidentally brushing the screen.

So what will the next human-computer interface be? Mostly the old ones with improved software, hardware and interface design.

X World 2016 Was A Great Conference
Mon, 11 Jul 2016

So last Thursday and Friday was the AUC’s annual conference for Macintosh system administrators, X World.

Held at the University of Technology, it is a combination of workshops, presentations and social events.

This year it started with pre-conference drinks organised by the Sydney Mac Admins group. We meet once a month or so, and we made sure our July meeting coincided with the start of the conference.

The first keynote was from Rich Trouton on OS X security, and the first afternoon saw other presentations. I had to miss them as I was giving my workshop “Bash For Beginners”. If you want the slides and other materials from the workshop, they are in my GitHub.

The rest of the conference was equally good with a dinner on Thursday night, more presentations on the Friday and time to meet and gossip with many other Macintosh administrators.

If you are a Mac administrator in Australia or New Zealand then I recommend you start planning to attend next year’s conference. It is the best place you will find to learn and to meet others. The AUC has a YouTube channel where you can check out presentations from previous years, as well as their other conferences.