
The Jupyter group have released an alpha version of a new Notebook environment called JupyterLab

JupyterLab is browser based, just like the old notebook system, but adds a multiple-pane environment. I’m not going to go into the details of the collaboration between the large number of organisations that have gone into the development; go read the blog post announcing JupyterLab. Suffice it to say that I’m glad such a high-powered group is working on my favourite Python environment.

I installed the alpha (it’s quickly done with pip) and had a look. It’s an exciting looking development and will make a brilliant Python development environment.
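For reference, the pip route is roughly this (the extension-enable step was needed for the alpha at the time I installed it; check the announcement for current instructions):

```shell
# Install the JupyterLab alpha into the current Python environment
pip install jupyterlab
# The alpha shipped as a notebook server extension that had to be enabled
jupyter serverextension enable --py jupyterlab
# Launch it; a browser tab opens with the new multi-pane interface
jupyter lab
```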

At the moment it seems to be suffering from minor speed and layout problems in Safari (they are minor, don’t appear in Google Chrome, and Safari is not currently listed as a supported browser, so I’m not going to complain too loudly).

The built-in editor can syntax colour Python. It even has colour themes for those, like me, who like a particular look in their editor. At the moment it indents only two characters per tab (PEP 8 says it should be four), and if you hit return with the cursor in column 1 you get a first-level indent on the next line.

These are the sort of problems you can expect in alpha software. I think I might install the current development version from GitHub and check there before filing a couple of bug reports. I’m a bit idiosyncratic: there’s nothing I like more than spending an hour or two getting a bug down to its essentials and filing a report.

IPython 5

They have also released a new version of IPython they are calling IPython 5.0 LTS. It has some nice new features, including syntax highlighting as you type and much better multi-line support. This is due to shifting from various command-line interfaces to prompt_toolkit, a pure-Python readline replacement.

I think the move to prompt_toolkit is going to pay major dividends as the library (currently at version 1.0.3) adds yet more functionality and that functionality moves into IPython. Jonathan Slenders, the author of the library, is also developing clones of Vim and tmux in pure Python using it and intends to fold features from those projects back into prompt_toolkit.

They are designating this as “Long Term Support” as it will be the last IPython to run under Python 2; IPython 6 will require Python 3. Not all is lost though: they say they will continue to support Python 2 kernels with Jupyter Notebooks (and, we assume, the new JupyterLab). As they say in their announcement, “For the 5.x series releases we are making an exception to that rule: until the end of 2017 the core team will do its best to provide fixes for critical bugs in the 5.x release series. Beyond that, we will deprioritise this work, but we will continue to accept pull requests from the community to fix bugs through 2018 and 2019, and make releases when necessary.” So it will be a while before those of us on OS X are forced to run Python 3 for IPython and break PyObjC and its brethren, which are written for 2.7 (we can also hope that well before the 2020 deadline Apple moves to Python 3 and does the port of PyObjC).

Easy Python Development

Taken together these two new releases improve Python development enormously for me. I have always been a fan of iterative development of my code in IPython and this just makes the explore and iterate method easier and easier.

I’ve been interested in human-computer interfaces since the very early Eighties when I first came across the work of Niklaus Wirth, Seymour Papert and Jef Raskin. For me human-computer interfaces are split in two. The first is the interface to _build_ software and the second is to _control_ software. Wirth worked mainly on the former, Raskin on the latter and Papert in both areas, principally from work in learning.

The Atlantic article is, of course, mainly concerned with the latter: how do people control the software on their computing device, how do they enter data, and how do they get results?

It also starts from a broken premise: that there will be a “next” interface. “Next” implies there was a previous interface and that it has now been replaced. This couldn’t be further from the truth. Only the most primitive of computers predated the use of a keyboard and printer, two interfaces still going strong more than sixty years later. Speech recognition was usable for serious work as far back as the early 1980s. Touch screens date from the same time. Virtual reality and augmented reality work, including work on using gestures, also began around then.

Let’s have a look at my favourite interface, the keyboard. You might think that not much has changed, but just think about spelling correction and predictive text. If you’re a programmer using a good editor then you can even have fairly good (and improving) context-sensitive predictive text – the editor knows when you are typing a variable name and predicts only those, then on the next line realises you are calling a function and predicts those instead. How about an editor that “knows” when you import a bunch of functions and adds those to the list to predict on?
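The idea behind that kind of completion can be sketched in a few lines. This is a toy, not any real editor’s engine: a real implementation would use a parser to decide the context, and all the names below are invented for the example.

```python
# Toy context-sensitive completer: predicts from variable names in one
# context and from imported function names in another.
class Completer:
    def __init__(self):
        self.variables = set()
        self.functions = set()

    def observe_import(self, names):
        """Names seen in an import statement join the callable pool."""
        self.functions.update(names)

    def observe_assignment(self, name):
        """Names seen on the left of an assignment join the variable pool."""
        self.variables.add(name)

    def complete(self, prefix, context):
        """context is 'variable' or 'call' -- a real parser would decide."""
        pool = self.variables if context == "variable" else self.functions
        return sorted(n for n in pool if n.startswith(prefix))
```

Feed it `observe_import(["sqrt", "sin"])` and `observe_assignment("speed")`, and completing the prefix `"s"` gives different answers in the two contexts.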

Even better, in Google Wave Peter Norvig demonstrated context-sensitive spelling correction. His example had the system correcting “icland is an icland” to “Iceland is an island”. He also demonstrated the system correcting a number of homophones, such as “Are they’re parents going two the coast?” corrected to “Are their parents going to the coast?”
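A toy version of the trick makes the principle clear. This is emphatically not Norvig’s system: real correctors score candidates against language models trained on huge corpora, where here the confusion sets and the tiny bigram table are made up just for these two sentences.

```python
# Toy context-sensitive correction: pick the candidate whose neighbours
# score best in a (made-up) bigram table.
BIGRAMS = {
    ("an", "island"): 5, ("iceland", "is"): 4, ("island", "is"): 1,
    ("their", "parents"): 6, ("going", "to"): 8,
}
CANDIDATES = {
    "icland": ["iceland", "island"],
    "they're": ["their", "there"],
    "two": ["to", "two"],
}

def correct(sentence):
    words = sentence.lower().split()
    out = []
    for i, w in enumerate(words):
        cands = CANDIDATES.get(w, [w])
        def score(c):
            left = BIGRAMS.get((out[-1], c), 0) if out else 0
            right = BIGRAMS.get((c, words[i + 1]), 0) if i + 1 < len(words) else 0
            return left + right
        out.append(max(cands, key=score))  # naive: ties go to the first candidate
    return " ".join(out)
```

With this table, `correct("icland is an icland")` resolves the first occurrence to “iceland” (because “iceland is” scores higher than “island is”) and the second to “island” (because “an island” wins), exactly the distinction Norvig’s demo made.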

So while the physical keyboard has not improved (indeed keyboard junkies like me feel it has gone backwards) the intelligence of the keyboard has improved and improved the interface.

How about that voice technology?

First, let’s dismiss one of the statements in the Atlantic article. Missy Cummings (head of Duke University’s Robotics Lab) says “Of course, the problem with that is voice-recognition systems are still not good enough. I’m not sure voice recognition systems ever will get to the place where they’re going to recognize context. And context is the art of conversation.”

I’m going to break that down. Voice-recognition is actually two problems. The first is translating the noise of a voice into a text stream. The second is understanding the text stream so that our software can act upon the request. In good systems the second informs the first, but they are different problems. So when Cummings talks about recognizing context she is talking about the second problem.

For all intents and purposes the first problem has been solved. Translating the noise of your voice to a text stream is becoming more reliable, less upset by your accent and faster by the day. Siri, for example, does this superbly.

So it is the second problem where improvements still occur. This is the field of study called “natural language processing”. The problem Cummings is talking about is partly discourse analysis, text linguistics and topic segmentation. All of these sub-fields have continued to progress. Indeed progress has been amazing for natural language processing within what researchers call “limited domains”. This is where the general topic of a conversation (or discourse) is limited to a specific area.

An example might be a search of a movie database.

“Show me all Cameron Diaz’s movies.”

“I’ve got 32 movies.”

“OK, how about just her comedies?”

“Here are the six movies starring Cameron Diaz marked as comedies.”

That is a conversation which uses context. A tiny example, but the computer has to understand the meaning of “her” from the context of the conversation. The next time you talk, “her” might be Judi Dench or Cate Blanchett. This is limited in domain and the context is easy, but it *is* recognizing context. So research continues on understanding more complex examples of context and across a wider domain. Siri, the Amazon Echo and their ilk are improving constantly.
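The context carry-over in that exchange can be sketched mechanically. This is a minimal illustration with an invented three-row movie table; a real system would of course parse free text rather than take keyword arguments.

```python
# Minimal sketch of dialog context in a limited domain: the last actor
# mentioned is remembered, so a follow-up like "her comedies" resolves.
MOVIES = [
    {"title": "The Mask", "star": "Cameron Diaz", "genre": "comedy"},
    {"title": "Gangs of New York", "star": "Cameron Diaz", "genre": "drama"},
    {"title": "Skyfall", "star": "Judi Dench", "genre": "action"},
]

class DialogContext:
    def __init__(self):
        self.actor = None  # resolves "her"/"his" in follow-up questions

    def query(self, actor=None, genre=None):
        if actor is not None:
            self.actor = actor  # an explicit name updates the context
        return [m["title"] for m in MOVIES
                if m["star"] == self.actor
                and (genre is None or m["genre"] == genre)]
```

The first query names Cameron Diaz; the second supplies only a genre, and the remembered actor fills the gap – which is all “recognizing context” means in this tiny domain.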

We have also seen constant improvements in touch interfaces: both the hardware, with high-resolution capacitive touch screens replacing earlier resistive screens, and the interface software, where tap, tap and hold, hard tap and hold, and swipe are all recognised with different meanings (and often different meanings in different contexts). Touch screen software is even getting good at recognising the difference between your finger or a pen and your hand accidentally brushing the screen.

So what will the next human-computer interface be? Mostly the old ones with improved software, hardware and interface design.

So last Thursday and Friday were the AUC‘s annual conference for Macintosh system administrators, X World.

Held at the University of Technology, it is a combination of workshops, presentations and social events.

This year it started with pre-conference drinks organised by the Sydney Mac Admins group. We meet once a month or so, and we made sure our July meeting coincided with the start of the conference.

The first keynote was from Rich Trouton on OS X security. Then the first afternoon saw other presentations. I had to miss them as I was giving my workshop, “Bash For Beginners”. If you want the slides and other materials from the workshop, they are in my GitHub here.

The rest of the conference was equally good with a dinner on Thursday night, more presentations on the Friday and time to meet and gossip with many other Macintosh administrators.

If you are a Mac administrator in Australia or New Zealand then I recommend you start your planning to attend next year’s conference. It is the best place to learn and meet others that you will find. The AUC has a YouTube channel where you can check out presentations from previous years as well as their other conferences.

I’ve been using the Anaconda install of Python and IPython (now part of Jupyter) for quite a while, but I wanted to move to using conda instead of pip and virtualenv to handle module installs and environments.

So I have now converted entirely to conda, and it handles the combination of modules and virtual environments much better than the usual tools. The move to Python 3 is harder. I have a Python 3 environment installed on my Mac and do try to use it, but the lack of PyObjC in Python 3 does slow me down in making it the default.
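The conda workflow that replaces pip plus virtualenv looks roughly like this (the environment name and packages here are just examples):

```shell
# Create an isolated Python 3 environment (the name "py3" is arbitrary)
conda create --name py3 python=3
# Switch into it -- conda's equivalent of virtualenv's activate
source activate py3
# Install packages into just this environment, binaries included
conda install numpy pandas
# See what environments exist, then drop back to the default one
conda env list
source deactivate
```

The big win over pip is that conda installs pre-built binary packages, so the scientific stack goes in without a compiler in sight.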

Using Jupyter notebooks instead of the IPython console is harder still. I tend to do my Python development as a real hack and the IPython console seems easier than the semi-permanence of cells in a Jupyter notebook. A notebook is a nice way of documenting and coding side by side. It’s growing on me as a way to work on my Python scripts.

If you do any serious work in Python then let me recommend Anaconda and Jupyter.

I thought the 15th birthday of my favourite operating system was the perfect time to look at why I love it so.

I do love OS X, I certainly think it is the pinnacle of operating systems. Don’t get me wrong, I know it has faults and I am more than happy to enumerate them given the chance. It is, however, the best available operating system at the moment, and it has been for many years.

Alan Kay, the scientist who worked at Xerox PARC on the first GUI, called the Macintosh “the first computer worth criticising”. I’ve always thought of OS X as “the first OS worth criticising”. In case you’re wondering why Alan Kay, he’s one of my gods. Go read his Wikipedia page.

I set out to rebuild the OS on my constant companion, the MacBook Air. It has been running OS X 10.10 upgraded from 10.9, and since I do a lot of weird things on it the cruft level was getting pretty high.

The first step was to back it up using Carbon Copy Cloner. Such a useful tool. This included backing up my usual boot partition and the one I had built to test 10.11 onto an external drive. No need to waste time on an OS install, though if you don’t have a known good install then do it from scratch.

I then booted into the newly cloned 10.11 on the external so I could wipe the internal SSD. God, the new version of Disk Utility is a disaster. The interface has been “dumbed down” so that an expert doesn’t get the information required, yet it’s still too difficult for most. This time it failed to either wipe or repartition the internal drive, and gave so little feedback on the reason for failure that it was useless.

I booted into the backup of my 10.10 partition and used the old Disk Utility. The drive had been a bit mangled: it no longer saw both partitions, and Core Storage meant it didn’t show the SSD, just a single logical volume. Fortunately erasing the logical volume fixed that and I once again had a 256 GB SSD. Phew.
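When the Disk Utility GUI misbehaves like this, the command-line `diskutil` can usually do the same job and at least tells you why it fails. Roughly (the UUIDs are placeholders you substitute from the listing on your own machine):

```shell
# Show the Core Storage layout: logical volume groups and their volumes
diskutil cs list
# Erase just the logical volume (use the LV UUID from the listing)
diskutil cs deleteVolume <lvUUID>
# Or dissolve the whole group, returning the SSD to plain partitions
diskutil cs delete <lvgUUID>
# Then repartition as usual
diskutil partitionDisk disk0 GPT JHFS+ "Macintosh HD" 100%
```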

Carbon Copy Cloner quickly had my 10.11 partition back on the SSD and it kindly offered to create a Recovery partition on the drive. Got to love a good tool.

The whole idea behind doing this is to end up with an exceptionally clean system, so I’m not using Migration Assistant for anything and am only installing applications as I need them, preferably from a fresh download.