03 August 2017

Catalyst started up in 1997, and from humble beginnings as an outsourced services company we have grown to over 250 staff globally, with seven offices across Australia, New Zealand and the United Kingdom. Delivering open source software solutions to large and small clients is what we do. It’s what we love.

Our approach is for each office to service its own region, connecting our local team members with local projects and engagements. This is one of our strengths and points of difference. We aim to build long-term, ongoing relationships with local clients, where it's more than just a transactional project relationship.

From time to time we farm work out between offices; for example, a large project might land in Sydney that needs a bit more project muscle. This has allowed us to punch above our weight on big projects when setting up an office in a new geographic region.

Catalyst's European office is in Brighton, UK, and has a growing team of developers, business analysts and system administrators. Some of the larger Moodle LMS managed services engagements for our European university clients require 24x7 infrastructure and application monitoring. Our current global cloud platform for enterprise service delivery is Amazon Web Services. Catalyst is an AWS Partner, and we know the toolset well, having built and managed a number of large workloads.

Catalyst has been involved in High Availability (HA) application design and architecture for a while. However, even the perfect system still needs a defined escalation framework for when issues occur. We aim to detect and fix issues before our clients even notice.

Historically, Catalyst have used an 'on call' pager roster (even though pagers are almost dead) for our infrastructure team, with responsibility for out-of-hours service shared across the team. Of course we pay extra for this, but it's no one's preference that our staff are up all night dealing with alerts and breakages. In at least one case, a noisy pager has been the cause of serious marital stress: a wife with a newborn baby sending her husband, beeping pager and all, to the lounge to sleep!

In the interest of providing the highest level of service and reliability for our clients (and letting our infrastructure team get more sleep), the Australian, New Zealand and European offices decided to set up a Follow The Sun (FTS) support model. The idea is that we share responsibility for systems across time zones, so that the technician responding to and investigating an alert is ideally 'in sunlight', i.e. not waking up at 3 in the morning. This approach is becoming more and more common with technical and development teams distributed across the globe.

Our FTS programme has now been up and running for over 18 months. We started discussing it in 2015, and the first round of cross-team alerts went out in Jan 2016. It has been quite a journey.

Here are some of the things we've learned along the way.

Mandated inter-team communications

This means phone conferences and video catch-ups on a regular basis, with an agenda. These meetings will not happen by themselves. Maintaining regularity between Australia and the UK is a challenge when there are no overlapping work hours; it's either early in the morning or late at night for one side. Things have to be planned and agreed well in advance.

It's always better to talk than not to talk. Even if there's nothing pressing, we discuss what's happened recently and any event notifications or changes on either side.

Walking together technically

Catalyst is all about the application of free and open source technologies to deliver value for our clients. This means that we embrace the use of new toolsets and technologies; innovation is in our DNA.

However, when we are responsible for fixing a complicated web application hosted in AWS, the responding technician may not have built it. It's critical that all team members have a broad understanding of how things fit together, and even better if the solution architecture team has committed to building systems in a standard-ish fashion.

Given the fast-moving pace of cloud hosting services and the broad requirements of our different global clients, all the regional teams need to be free to do what's necessary for better customer outcomes. This needs to be balanced with some level of standardisation in build and deployment policy. It's a challenging problem, and not a new one.

We have learnt that too much control around change or toolsets is counterproductive, but huge deviation from standard operations is not ideal either.

There is no magic wand here. Most important is people talking to people, especially at the senior technical level. Combined with good documentation practices, this builds trust and a collaborative tone, meaning that innovations by one team are more likely to be adopted by all, not ignored or vetoed.

The right communication and alert tools

The ability to communicate across the team, from any device, is critical. It should not be hard to reach out. And there should be a clear and concise audit trail of events and actions taken.

In our case, this has meant the use of PagerDuty, Rocket.Chat and Icinga, with an ongoing review and assessment policy: are these tools still working for us?

We also need confidence that if an alert gets missed, the global escalation framework is solid and goes all the way to the CTO if required.

No hiding from mistakes

Managing enterprise applications is nothing new to Catalyst, and we have the scars and stories to prove it.

In the real world, systems break and people make mistakes... bad stuff happens. Despite this, the bigger mistake is to sweep these events under the carpet or descend into the blame game. The focus needs to be on analysing and improving the underlying system to make sure problems don't recur.

Don’t get frustrated. Get better.

The benefits

There were at least six months of planning and discussion before the first cross-team alert happened. So after all this effort, what are the real benefits to our team and clients?

Catalyst can provide better system support for our clients, with more daytime attention to systems when they need it.

Less sleep interruption for our valued sysadmins! Before we wake someone up, a capable team member in the sunshine on the other side of the planet reviews and (ideally) resolves the issue. In the past, too much night-time alert activity has caused some of our team members to find another job.

The ability to perform out-of-hours updates and upgrades for our clients. It's now very simple for us to roll out changes at 3am local time with a day or two of planning.

More flexibility in team size for project and build work. We are more able to lean on each other across regions as we work with each other more.

All-round better communication between the Catalyst offices. A good thing, and something you can't take for granted when everyone is busy on projects and dealing with endless business activity.

We consider this initiative a great success. It allows all parts of the Catalyst group to provide better services to our clients.

Special thanks to Alex Lawn from the Sydney team who is driving this initiative.

02 August 2017

It's disappointing to see Jacinda Ardern being grilled about her plans for parenthood, less than 24 hours in as the leader of the Labour party. As well as being sexist, inappropriate and intrusive, the question implies Ardern hasn't thought the issue through. As noted at various times by Metiria Turei, Judith Collins, Marama Fox and many others, female MPs work in a tough environment. Sexist lines of questioning discount female MPs' ability to make smart choices about work, career and family, and suggest that women are the only ones making such choices. This just isn't the case.

Families in 2017 come in all shapes and sizes. They are diverse. As a parent myself, I am delighted to work for Catalyst, a company that encourages diversity by being flexible.

We understand that people have busy lives outside work, whether they're parents or not. We need to be available at particular times for our clients, but it is understood here that life is busy and sometimes messy, and that if something's not working, a little creative tinkering with work arrangements can make a big difference.

So. Catalyst employees work remotely, in the office, early in the morning, during business hours, and late at night. We work full time and part time. Lots of us are parents or guardians of children, which means there are sometimes kids in the office during school holidays, and people working from home when the kids are sick. Plus, children are always welcome at our company drinks, and we work hard to make these gatherings family friendly.

As our Managing Director, Don Christie, says, "We're interested in people's brains." We're convinced that given the right conditions, the smart people we hire make smart choices that help them balance life and work well, and that by giving more choices, we're helping people do their very best work.

25 July 2017

We've been holding our Arduino Academy for several years now, but this is the first time our participants have been all female. Needless to say, we get pretty excited about seeing young women lay foundations for future careers in technology, and have fun doing it!

The Arduino Academy is for secondary school students, and is held annually over three days in the July school holidays.

This year the tutors were Darrin Hodges from our Sydney office, and Ian Beardslee.

Participants spent three productive days starting with the Freetronics Experimenter's Kit, then playing with other interesting components such as a NeoPixel ring and an LCD, before having a go at getting the classic arcade game Pong working on Arduino.

The point of the Arduino Academy (and our Open Source Academy) is to encourage young people to consider careers in technology, especially in open source technology.

It's important we encourage and mentor Wellington's next generation of technologists so there are plenty of able hands to keep growing our New Zealand tech sector. By introducing them to open source at this stage in their education, they can begin contributing to projects before they leave school, putting them way ahead of other candidates when they're looking for internships and jobs later.

As usual, we had a blast with the students who were enthusiastic and curious.

11 April 2017

Tonight was the second part of a two-part tasting of Australian Shiraz with Geoff Kelly at Regional Wines & Spirits, the first being the 1996 library tasting (see previous post). This time we blind-tasted eleven new 2013-14 Australian Shiraz wines, including the Penfolds Grange, which is north of $850 per bottle, with an Elephant Hill Hawke's Bay 2013 Syrah thrown in to keep us honest.

Each wine was very well built: young and purple, peppery and bold. Each had something to say, but unfortunately this time I exhausted my palate by the ninth and couldn't make head or tail of the last three. A shame, because although I liked them, the Lloyd Reserve I had admired in the library tasting was hiding among them.

As we poured the blind wines into glasses, the colours were all good healthy young Syrah deep purple-red, although I could tell there would be something special about No. 6 and No. 9 just from the density of colour; No. 6 looked like you could stand a spoon up in it.

For me the remarkable wines were Nos. 3, 6, and 9.

No. 3 reminded me of a big, older-style blackcurrant jam Australian Shiraz, with lots of berry, ripe toffee and a long oaky finish. The minty, freshly-crushed basil leaf on the nose, typical of South Australian Shiraz, was there as well; Geoff says if he likes it he calls it “mint”, or “eucalypt” otherwise. Someone else remarked this wine might be like Kylie crashing a Holden ute full of Foster’s into a blackberry patch. Enjoyable perhaps, but not especially subtle. No. 6 was the most beautifully dark rich purple-red, with an intoxicating, highly concentrated nose of mostly blackcurrant, but also warm florals and a whiff of rough-sawn timber. The wine itself was complex, initially spicy but with savoury meaty flavours and berries competing for space, and a longer finish. No. 9 for me was also a dense colour, with a peppery lavender on the nose and an interesting hint of baked dates or figs, not over-sweet but nicely integrated into the plum fruit flavours for a lingering complexity.

Once again we gathered some “wisdom of the crowd” data to see if as a group we could pick our wines, and this time we did a bit better; results are below.

Blind rating totals from the new 2013-14 Australian Syrah tasting.

The Penfolds Grange hiding at No. 6 was correctly identified by about half the group. I was overthinking things, and at this point was trying to re-taste the last three wines to find the rich, complex wine that would be a likely Grange candidate. Having never tasted it before, I had assumed that something as ludicrously expensive as the Grange might surely be less up in one's grill with its big bold Aussie blackcurrants, so although No. 6 was beautifully dense and concentrated, I assumed the Grange was busy being all sophisticated elsewhere. Once everyone's hands shot up, however, it became clear the cat was out of the bag! The No. 9 I liked was the Elephant Hill 2014 Syrah Reserve, which surprised me, and the Lloyd Reserve from Coriole in McLaren Vale was hiding at No. 10, which was interesting to re-taste after the Grange. It has that torn basil leaf mint and lavender on the nose, with savoury and plum, liquorice and a good long finish.

Of further note was No. 11, the Cape Mentelle 2013 Shiraz from Margaret River in Western Australia. This was a more delicate wine than the others, with an interesting and complex bouquet of jasmine, perhaps roses, a good plum fruit body and a nice mild spiciness like a hint of Christmas cake, with a good longish finish. It was certainly different enough from the others that three of us thought it was the Hawke's Bay Syrah.

30 March 2017

Tonight we went to one of Geoff Kelly's illuminating wine tastings, held as ever at Regional Wines & Spirits next to the Basin Reserve in Wellington. This was part one of a two-part tasting: a library tasting of 20-year-old Australian Shiraz wines, with a 1996 Hermitage thrown in as a yardstick. Next month, part two will be a tasting of eleven new-vintage Australian Shiraz with a good Hawke's Bay Syrah to compare. Tonight was a blind tasting, in order to gather some interesting data from participants before revealing which wines were which.

It really is quite intimidating to try twelve magnificent 20-year-old red wines and attempt to remain objective about comparing their colour and weight, nose (aroma), taste, complexity, and so on. As humans we're notoriously bad at taste and smell compared to our other senses, so even just trying to identify the different flavours is a constant challenge. They are sometimes elusive or fleeting: there at the start, but gone with the vapours a few minutes later. Sometimes they are maddeningly familiar, but the right word, recollection or label is just out of reach. Geoff, a true national treasure, runs a good show, reminding us not to speak too much aloud and cloud each other's judgements, but dropping a few helpful hints and starting points to look for in aged reds, and Australian Syrah in particular, drawing on his 40 years of wine cellaring, judging and writing.

Most of them were just as you'd imagine beautiful aged 20-year-old Syrah to be: plum or berry dominant, interesting florals, smooth, and tannins tamed by oak and time. That is, apart from No. 5, which to my nose smelled of fresh cowpat and sweaty horse. No. 7 had an unpleasant butyric bile odour, but a weird, almost salty savoury taste, like Parmigiano. My favourites were No. 3, for its sheer number and complexity of different and intriguing flavours and its beautiful long velvety finish, and No. 8, which was a standout for me. It was the most purple-red of the set, as if it were only three years old, while all the others had aged to a fairly uniform red-ruby, near garnet colour. It had a bold nose of cognac, almond and cherry, with a slight floral element of jasmine and violets. Strong dark plum fruit but with a savoury hint of truffle, and its long-lingering tannins, whilst softened with the oak, were still unwinding even after all this time, and could probably go for another ten years.

Before revealing the wines, Geoff asked us to rate a first and second favourite, a least favourite, and which we thought was the French wine hiding in the glasses. This data set is tabulated below.

No. 5 was the 1996 Cape Mentelle from Margaret River, Western Australia, which might have had a dose of brett, or it was corked. No. 3 was the 1996 d'Arenberg Dead Arm from McLaren Vale, South Australia, and No. 8, my favourite, was the 1995 Coriole Lloyd Reserve, also from McLaren Vale. No. 7 was the ludicrously expensive Hermitage (AOC Syrah from the Rhône, France), the Jaboulet Hermitage La Chapelle; Jancis Robinson writes about this wine here. Luckily for me, Regional Wines had a couple of the 2011 Lloyd Reserves in stock!

21 October 2016

A couple of days ago I experienced some difficulties using YouTube Live Events. So today, I was all prepared:

Had my phone with me for 2-factor auth so I could log into my account on a second computer in order to paste links into the chat;

Prepared a document with all the links I wanted to paste;

Had the Hangout on my presenter computer running well ahead of time.

Indeed, I was done with my prep so far in advance that I had heaps of time. The event didn't look like it was actually broadcasting, since I couldn't see anything on the screen, so I thought I needed to adjust the broadcast's start time and wanted to pause the broadcast.

That's why I stopped the broadcast, and as soon as I hit the button I knew I shouldn't have. Stopping the broadcast doesn't pause it; it ends it and kicks off the publishing process.

Yep, I panicked. I had about 10 minutes until my session and nobody could actually join it. Scrambling for a solution, I quickly set up another live event, tweeted the link and also sent it out to the Google+ group.

Then I changed the title of the just-ended broadcast to something along the lines of “Go to description for new link”, put the link to the new stream into the description field and also in the chat, as I had no other way of letting people know where I had gone and how they could join me.

I was so relieved when people showed up in the new event. That's when the panic subsided, and I still had about 3 minutes to spare before the start of the session.

The good news? We released Mahara 16.10 and Mahara Mobile today (though we had actually soft-launched the app on the Google Play store the day before, to ensure it would be live for today).

19 October 2016

Living in New Zealand, far, far away from the rest of the world (except maybe Australia), means that I’m doing a lot of online conference presentations, demonstrations, and meetings. I’ve become well-versed in a multitude of online meeting and conferencing software and know what works on Linux and what doesn’t.

The latter always gives me a fright, as I have to start up my VM and hope for the best that it will not die on me unexpectedly. Usually, closing Thunderbird and any browsers helps free some resources to let Windows start up. I can only dream of a world in which every piece of conferencing software also runs on Linux.

Lately, some providers have gotten better and make use of WebRTC technology, which only requires a browser and no fancy additional software or Flash. Only when I want to do screen sharing do I need to install a plugin, which is done quickly.

So for meetings of fewer than 10 people, I'm usually set and can propose a nice solution like Jitsi, which works well. In the past, my go-to option for simple meetings was Firefox Hello, but that was taken off the market.

But what to do when more than 10 people may want to attend a session? Then it gets tough very quickly. So I have been trialling Google Hangouts on Air recently, after seeing David Bell use them successfully. It looked easy enough, but boy, was I in for a surprise.

Finding the dashboard

At some point, my YouTube account was switched to a “Creator Studio” one, so I can run live events. Google Hangouts on Air are now YouTube Live Events and need to be scheduled in YouTube.

There is no link from the YouTube homepage to the dashboard for uploading or managing content. I'd have thought that by clicking on “My channel” I'd get somewhere, but far from it. There is nothing in the navigation.

The best choice is to click “Video Manager” to get to a subpage of the creator area. Or, as I just found out, click your profile icon and then the “Creator Studio” button.

Getting to the creator dashboard either via the “Video Manager” on your channel or via the button under your profile picture.

Scheduling an event

Setting up an event is pretty straightforward, as it's like filling in the information for a video upload, just with added fields for the event times.

Unfortunately, I haven't yet found where I can change the placeholder image that is shown in the preview of the event on social media. It seems to be set to my channel's banner image rather than allowing me to upload an event-specific image.

So once you have your event, you are good to go and can send people the link to it. The links that you get are only for the stream; they do not allow your viewers to actually join your Hangout and communicate with you in there. That's where it gets a bit bizarre, and what prompted me to write this blog post so I can refer back to it in the future.

There is the hangout link and the YouTube event link

Streaming vs. Hangout

There are actually two components to the YouTube Live event (formerly known as Google Hangout on Air):

The Hangout from which the presenter streams;

The YouTube video stream that people watch.

In order to get into the Hangout, you click the “Start Hangout on Air” button on your YouTube events page. That takes you into a Google Hangout with added buttons for the live event. You are supposed to see how many people have joined, but the count may be a bit off at times.

In that Google Hangout, you have all the usual functionality available: chat, screen sharing, effects etc. You can also invite other people to join you in there, which allows them to use the microphone. The interesting thing is that you invite them via the regular Hangout invite; you can't give them the link to the stream, as they would not find the actual Hangout from there. And if you only give people the link to the Hangout but not the stream, nobody will be in the stream.

You can also get the two different links from the hangout. Just make sure you get the correct one.

The YouTube video stream page only shows the content of the Hangout that is displayed in the video area, but not the chat. The live event has its own separate chat that you can't see in the Hangout! In order to see any comments your viewers make, you need to have the streaming page open and read the comments there.

In a way, it's nice to keep the Hangout chat private, because if other people join you in there as co-presenters, you can use that space to chat to each other without viewers seeing what you type. However, it's pretty inconvenient, as you have to remember to check the other chat, and dealing with separate windows during a presentation can be daunting. It would be nicer to see the live event chat in the Hangout window as well.

Today I even fired up another computer and had the stream showing there, which taught me another thing.

Having the stream on another computer also showed me how slow the connection was. The live event was at least 5 seconds behind, if not more. That is something to consider when taking questions.

The stream was also very grainy. I was on a fast connection, but the stream nevertheless defaulted to the lowest resolution. Fortunately, once I increased the resolution on the finished video, it did get better. I don't know if you can increase the setting during the stream.

Last but not least, I couldn't present in full-screen mode, as the window wouldn't be recognized. I'll have to try again and see if it works if I screenshare my entire desktop, as it would be nicer not to show the browser toolbars.

No sharing of links

When you are not the owner of the stream, you cannot post URLs. I'm pretty sure that is to prevent trolls from misusing public YouTube events to post links. However, it's pretty inconvenient for the rest of us who want to hold meetings and webinars and share content. Viewers can't post a single link; only I as the organizer could. Unfortunately, I found that out only after the event, as I was logged in under a different account.

Being used to many other web conferencing tools, I've come to like the backchannel and the possibility of posting additional material, which in many cases means links people can simply click on. This was impossible in the YouTube live event, as I was only a regular user. And even had I logged in with my creator account, which I'll certainly do during the next session on Friday, nobody else would have been able to post a link. That is very limiting. I wish it were possible to determine whether links are allowed or not.

Editing the stream

Once the event was over today, I went back to the video but couldn't find any editing tools. I started to get discouraged, as I had hoped to simply trim a bit of non-essential chatter from the front and back and keep the rest of the video online, rather than trimming the local recording I had made on top of the online one, encoding it and uploading it. Before I could get sadder, I had to do some other work, and once I came back to the recording, I suddenly had all my regular editing tools available and rejoiced. Apparently, it takes a while until all functionality is at your disposal.

So I trimmed the video, which was not easy, but I managed. The encoding then happened online, and after some time the shortened recording was available, and I didn't have to send out a new link to the video.

Summing up

What does that mean for the next live event with YouTube events?

Click the “Creator Studio” button under my Google / YouTube profile to get to the editor dashboard easily.

Invite people who should have audio privileges through the Hangout rather than giving them the YouTube Live link, which is displayed more prominently.

Co-presenters are invited via Hangout.

Viewers get the YouTube live link.

Open the YouTube Live event with the event creator account in order to be able to post links in the chat on YouTube. Have both the Hangout and the YouTube Live event open so you can see the online chat of those who aren’t in the Hangout.

Take into account that there is a delay until the content is shown on YouTube.

Once finished, wait a bit until all editing features are available and then go into post-production.

Remembering all these things will put me in a better position for the next webinar, which is a repeat of today's session showcasing the new features of Mahara 16.10.

14 October 2016

I've recently acquired a Wessex contrabass trombone in F. It is pretty much a knock-off of the Thein Ben van Dijk model, and compared to that gold standard of contrabass trombones, it is about an eighth of the price and a perfectly decent instrument. It plays really well throughout the range, and the slide, valves and bell are all of high build quality, unlike the notorious Chinese-made instruments of the past.

But really, this post is just an excuse to test out a nifty music notation WordPress plugin. The shorthand it uses is ABC, which is a bit quaint compared to LilyPond, but it seems to work well enough. For instance, take the first scale we might learn on a contrabass trombone:
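% A sketch in ABC shorthand, assuming an F major scale
% over one octave starting on F2:
X:1
T:F major scale
M:4/4
L:1/4
K:F clef=bass
F,, G,, A,, B,, | C, D, E, F, |]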

The contrabass trombone in F only has six positions on the open slide instead of seven. Furthermore, only the first five are actually practical, unless you are Tarzan, so we can play the G on the first (D) valve in third position. While the A is also theoretically available in first position on the D valve, it is indistinct and slightly flat; play it on the open slide in fourth. Good. Now, how about an excerpt from Eine Alpensinfonie by Richard Strauss:

Sounds good! Now, pop along to the NZSO performance in March 2017 to hear Shannon playing it live in concert! In the meantime, here's the excerpt performed by the Berlin Philharmoniker:

11 October 2016

I'm playing catch-up and working my way backwards through my events. Yesterday, I wrote a bit about the NZ MoodleMoot on 5 October 2016. Just a day before that, AUT organized a local half-day Mahara Hui, Mahara Hui @ AUT 2016. Lisa Ransom and Shen Zhang from CfLAT (Centre for Learning and Teaching) were responsible for the event, did well wrangling everything and made all attendees feel welcome.

It was great to catch up with lecturers and learning technology support staff from AUT, Unitec and the University of Waikato, and with a user from Nurseportfolio. We started the day with introductions and examples of how people use Mahara.

Mahara in New Zealand tertiaries

At AUT, the CfLAT team trained about 630 students this academic year, in particular in Public Policy, Tourism and Midwifery. Paramedics are also starting to use ePortfolios and can benefit from the long experience that Lisa and Shen have supporting other departments at AUT.

Linda reported that Mahara is now also being used in culinary studies, in elective courses as well as degree papers. They use templates to help students get started, but then let them run with it. Portfolios are well suited to culinary students, as they can showcase their work as well as document their creative process and improve their work.

She also showcased a portfolio from a new lecturer who became a student in her own area of expertise, going through a portfolio assignment alongside her students to see for herself how the portfolios worked and what she could, and wanted to, expect from her students. By going through the activity herself, she became an expert and now has a much better understanding of the portfolio work.

John, a practicum leader who is new to AUT, came along to the hui and said that they were starting to use portfolios for their lesson plans and goals. Reflections are expected from the future teachers and form an important aspect. I'm sure we'll hear more from him.

Sally from Nursing at AUT is looking at Mahara again and could form connections directly with Unitec and Nurseportfolio, which is fantastic, because that's what these hui are about: connecting people.

JJ updated the group on the activities at Unitec. Medical imaging is going digital and looking into portfolios, and they have also created a self-paced Moodle course on how to teach effectively with Mahara, so that lecturers at Unitec can get a good overview.

Stephen from the University of Waikato gave an overview of the portfolio activities at his university. Waikato still works with two systems: MyPortfolio.school.nz for education students becoming teachers, and the new Waikato-hosted Mahara site. Numerous faculties at Waikato now work with portfolios. If you'd like to find out more directly, you can watch recordings from the last WCELfest, in particular the presentations by Richard Edwards, Sue McCurdy and Stephen Bright. Portfolios will be used even more in the future, as every student will need to collect evidence from general papers in them.

We also discussed a couple of ideas from a lecturer and are interested in other people's opinions on them. One idea was to be able to share portfolios more easily on social networks, see directly when a portfolio has been updated and share that news again. The other idea was to notify people who are interested in a portfolio when new content has been added. The latter is already possible to a degree with the watchlist; however, students or lecturers still need to put specific pages on the watchlist first, rather than the changes coming to them. The enhancements that Gregor is planning for the watchlist go more in that direction.

Mahara 16.10

In the second part of the hui, I presented the new features of Mahara 16.10, and we spent a bit of time taking a closer look at SmartEvidence.

I'm very excited that this new version will be live very soon and look forward to feedback from users on how SmartEvidence works out for them. It's the initial implementation; while it doesn't contain all the bells and whistles, I think it is a great beginning for getting conversations started around use cases besides the ones we had in mind, and for seeing how flexible it is.

Next hui and online meetings

If you want to share how you are using Mahara, you’ll have the opportunity to do so in Wellington on 27 October 2016 when we’ll have another local Mahara Hui, Mahara Hui @ Catalyst. From 5 to 7 April 2017, we are planning a bigger Mahara Hui again in Auckland. More information will be shared soon on the Mahara Hui website.

There will also be two MUGOZ online meetings on 19 and 21 October 2016 in which I’ll be presenting the new Mahara 16.10 features. You are welcome to attend either of these 1-hour sessions organized by the Australian Mahara User Group. Since the sessions are online, anybody can tune in.

24 July 2016

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who reach VoiceMail to be forwarded to the callee's cellphone, where permitted. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and looks for a rule matching the special 'a' extension. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. We need to get the dialled extension into a place where we can perform extension matching on it, so instead we'll have this (the extension is passed into macro-stdexten as the first variable, ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is arrange a rule for each extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is extensions that aren't allowed to be forwarded to a cellphone. If a caller who reaches their VoiceMail hits *, the call will be hung up and I get nasty log messages about there being no rule for that extension. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:
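[vmfwd]
...
; a sketch, assuming four-digit extensions and the default voicemail
; context: any extension without a cellphone-forwarding rule goes
; straight back to VoiceMail instead of being hung up on
exten => _XXXX,1,VoiceMail(${EXTEN}@default,s)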

02 December 2014

Already attending linux.conf.au? Come a couple of days earlier and attend the mini-DebConf too! There will be a day of talks with a strong focus on the Debian project and a bug squashing day.

Debian Miniconf

After 5 years, the Debian Miniconf is back! Run as part of linux.conf.au 2015, this event will attract speakers talking on topics that suit the broader audience attending LCA. The Debian Miniconf has been one of the largest miniconfs in the history of linux.conf.au.

25 August 2014

As a follow-up to the GSoC post, I thought it might be useful to mention a few things happening with SCORM at the moment.

There are currently approximately 71 open issues related to SCORM in the Moodle tracker. Of those, 38 are classed as bugs that I should fix in the stable branches at some point; the other 33 are really feature/improvement requests.

Issues about to be fixed and under development:

MDL-46639 – External AICC packages not working correctly.

MDL-44548 – SCORM repository auto-update not working.

Issues that are high on my list of things to look at, and that I hope to get to soon:

MDL-46961 – SCORM player not launching in Firefox when a new window is used.

MDL-46782 – Re-entering a SCORM that doesn't use suspend_data or resume itself should return the learner to the first SCO that is not complete.

MDL-45949 – The TOC tree isn't quite working as it should after our conversion to YUI3; it isn't expanding/collapsing in a logical manner. There could be a bit of work here to make this behave correctly.

New improvements you might not have noticed in 2.8 (not released yet):

MDL-35870 – Performance improvements to SCORM.

MDL-37401 – SCORM auto-commit: allows Moodle to save data periodically even if the SCORM doesn't call "commit".

New improvements you might not have noticed in 2.7:

MDL-28261 – Check for live internet connectivity while using SCORM; warns the user if the SCORM is unable to communicate with the LMS.

MDL-41476 – The SCORM spec only allows a small amount of data to be stored when using SCORM 1.2 packages. We have added a setting that lets you lift this restriction within Moodle so that larger amounts of data can be stored (you may need to modify your SCORM package to send more data for this to work).

Thanks to Ian Wild, Martin Holden, Tony O’Neill, Peter Bowen, André Mendes, Matteo Scaramuccia, Ray Morris, Vignesh, Hansen Ler, Faisal Kaleem and many other people who have helped report/test and suggest fixes related to SCORM recently including the Moodle HQ Integration team (Eloy, Sam, Marina, Dan, Damyon, Rajesh) who have all been on the receiving end of reviewing some SCORM patches recently!

Another year of GSoC has just finished, and Vignesh has done a great job helping us to improve a number of areas of SCORM!
I'm really glad to finally have some changes made to the JavaScript datamodel files as part of MDL-35870. I'm hoping this will improve the performance of the SCORM player, as the JavaScript can now be cached properly by the user's browser rather than being dynamically generated by PHP.

Vignesh has made a number of general bug fixes to the SCORM code and has also tidied up the code in the 2.8 branch so that it now complies with Moodle’s coding guidelines.

These changes have involved almost every single file in the SCORM module, and significant architectural changes have been made. We've done our best to avoid regressions (thanks, Ray, for testing SCORM 2004), but due to the large number of changes (and the fact that we only have one Behat test for SCORM), it would be really great if people could test the 2.8 branch with their SCORM content before release so we can pick up any other regressions that may have occurred.

Thanks heaps to Vignesh for his hard work on SCORM during GSOC – and kudos to Google for running a great program and providing the funding to help it happen!

10 July 2014

I've spent a reasonable chunk of the past year working on a project we launched last month: Catalyst Cloud! It uses OpenStack with Ceph as the object store. It has taken a lot of work, and it is now very exciting to see the level of interest we're receiving in this new service!

The great part of this is that we can now offer private cloud services to our customers, providing all the flexibility that we've come to expect from the "cloud", but hosted in New Zealand by a New Zealand-owned company, so there are no concerns about the jurisdiction of your data! Not only are we able to offer private cloud services on our OpenStack cluster(s), but we can also deploy OpenStack onto our customers' own hardware using our ProdStack solution (I get to look directly at the dashboard shown on that page, which is pretty cool).

Next up is deploying another OpenStack cluster in our new data centre (another project I'm working on). In the near future we also hope to start using Open Compute Project hardware for our clusters.

Time to say goodbye to the “Dan Marsden Turnitin plugin”… well almost!

Turnitin have done a pretty good job of developing a new plugin to replace the code that I have been working on since Moodle 1.5!

The new version of their plugin contains 3 components:

A module (called turnitintool2) which contains the majority of the code for connecting to their new API and is a self-contained activity like their old “turnitintool” plugin

A replacement plugin for mine (plagiarism_turnitin) which allows you to use plagiarism features within the existing Moodle Assignment, Workshop and forum modules.

A new Moodle block that works with both the above plugins.

The Moodle.org plugins database entry has been updated to replace my old code with the latest version from Turnitin. We have a number of clients at Catalyst using the new plugin, and the migration has mostly gone OK so far. There are a few minor differences between my plugin and the new version from Turnitin, so I encourage everyone to test the upgrade to the new version before running it on their production sites.

I'm encouraging most of our clients to update to the new plugin at the end of this year, but I will continue to provide basic support for my version on all Moodle versions up to Moodle 2.7, and my code continues to be available from my GitHub repository here: https://github.com/danmarsden/moodle-plagiarism_turnitin

Thanks to everyone who has helped in the past with the plugin I wrote – hopefully this new version from Turnitin will meet everyone’s needs!

16 October 2012

Darla has been using Koha since 2006, for the Bering Strait School District in Alaska. This is pretty neat in itself; what is cooler is that, as far as I know, they have never had a 'support contract', doing things either by themselves or with the help of IT personnel as needed. One of Darla's first blog posts that I read was about her struggles trying to install Debian on an eMac. I totally respect anyone who is trying to reclaim hardware from the dark side.

Darla has presented on Koha at conferences and maintains a blog with useful information, including sections on what she would do differently, as well as some nice feel-good bits like this, from April 2007:

I know I had an entry titled this before, but I do love OSS programs. Yesterday I mentioned that I would look at Pines because I like the tool it has to merge MARC records. Today a Koha developer emailed me to let me know that he is working on this for Koha and it should be available soon. I can’t imagine getting that kind of service from a vendor.

Hopefully she will be able to make it to KohaCon13 in Reno, NV. It would be great to put a face to the email address.

10 October 2012

Last night on IRC, the Koha community elected a new release team for the 3.12 release. Once again it is a nicely mixed team: there are 16 people involved, from 8 different countries (India, New Zealand, USA, Norway, Germany, France, the Netherlands, Switzerland), and four of the 12 roles are filled by women.

The release team will be working super hard to bring you the best release of Koha yet, and you can help:

Reporting bugs

Testing bug fixes

Writing up enhancement requests

Using Koha

Sending cookies

Inventing time travel

Killing MARC

Winning the lottery and donating the proceeds to the trust to use for Koha work.

24 July 2012

So, Google are recruiting again. From the open source community, obviously. It's where you find all the good developers.

Here’s the suggestion I made on how they can really get in front of FOSS developers:

Hi [name]

Just a quick note to thank you for getting in touch with so many of our
Catalyst IT staff, both here and in Australia, with job offers. It comes
across as a real compliment to our company that the folks who work here
are considered worthy of Google's attention.

One thing about most of our staff is that they *love* open source. Can I
suggest, therefore, that one of the best ways for Google to demonstrate
its commitment to FOSS and FOSS developers this year would be to be a
sponsor of the NZ Open Source Awards. These have been very successful at
celebrating and recognising the achievements of FOSS developers,
projects and users. This year there is even an “Open Science” category.

Google has been a past sponsor of the event and it would be good to see
you commit to it again.

09 July 2012

Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as it changes. The latest version even has syntax and error highlighting, and bracket matching.

Finished compositions are published to a gallery with the source code attached, and can be 'forked' to create additional works. Generally the author will leave their Twitter account name in the source code.

I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.

GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantage scripting languages have over heavy compiled languages, whose long build and linking times make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

I would really like a save-draft button that saves shaders locally, so I have some place to keep works in progress; I might have to look at how I can add this.

05 June 2012

I made the following submission on the Council's Draft Long Term Plan. Some of it relates to FLOSS. This was a 3-minute slot with 2 minutes for questions from the councillors.

Introduction

I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

I have 3 points to make in 3 minutes.

1. The Long Term Plan lacks vision and is a plan for stagnation and erosion

It focuses on selling assets, such as community halls and council operations, postponing investments, reducing public services such as libraries and museums, and increasing user charges. This will not create a city where "talent wants to live". With this plan, who would have thought the citizens of the city had elected a Green mayor?

Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

2. Show faith in local companies

The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure council staff have a strong direction and mandate to procure locally. In particular, the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

A way of achieving local company participation is through disaggregation – breaking up large-scale initiatives into smaller, more manageable components – for the following reasons:

It improves project success rates, which helps the public sector be more effective.

It reduces project cost, which benefits the taxpayers.

It invites small business, which stimulates the economy.

3. Smart cities are open source cities

Use open source software as the default.

It has been clear for a long time that open source software is the most cost-effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google, and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model: free. Studies, such as the one I have here from the London School of Economics, show the cost-effectiveness and innovation that come with open source.

It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, and world-beating free alternatives exist.

This has to change. Looking around the globe, it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh, NC, to Paris to San Francisco.

05 January 2012

Gosh, it's been a while. But this site is not dead. I've just been distracted by identi.ca and Twitter.

I was going to write about Apple again, the result of unexpected and unwelcome exposure to an iPad over the Christmas holidays. But then I read Jethro Carr's excellent post, in which he describes trying to build the Android OS from Google's open source code base. He quite mercilessly exposes the lack of "open" in some key areas of that platform.

It is more useful to look at the topic as an issue of "open" vs "closed", where the iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similarly inane closed attributes – to the disadvantage of users.

I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead, a last-minute request for a new "live" blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

The plan was simple: the specialist blogger, himself a world-renowned sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

We had a problem in that the web browser on the tablet didn't work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement for the tinyMCE plugin, just about the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution, and that the real issue was a bug in the client browser.

“No problem”, I thought. “Let’s install Firefox, I know that works”.

But no, Firefox is not available to iPad users; Apple likes to "protect" its users by tightly controlling whose applications are allowed to run on the tablet. OK, what about Chrome? Same deal. You *have* to use Apple's own buggy browser; it's for your own good.

Someone suggested that the iPad's operating system needed upgrading and the new version might have a fixed browser. No, we couldn't do that, because we didn't have Apple's music-playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they download the upgrade, they had iTunes handy. The only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

Er, and the upgrade failed to fix the problem. One day gone.

So a laptop was press-ganged into action, which in the end was a blessing, because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools: keep in mind it is good to have our children *write* stuff, often.

The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

But the worst aspect of this is that, because of Apple's success in creating well-designed gadgets, other companies have decided that "closed" is also the correct approach to take with their products. This is crazy. It was an open platform, the Linux kernel with Android, that allowed them to compete with Apple in the first place, and there is no doubt that when given a choice, choice is what people want – assuming "taste" requirements are met.

Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions such as Ubuntu, to start placing restrictions on their clients and users – to decide for all of us how we should behave and operate *our* equipment.

The explosive success of the personal computer was that it was *personal*. It was your own productivity, life-enhancing device. And the explosive success of DOS and Windows came because, with some notable exceptions, Microsoft didn't try to stop users installing third-party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want "developers, developers, developers, developers" using its platforms because, at the time, it knew it didn't know everything.

Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don't know and probably never will. Similar charges are now being made against Facebook and Twitter. The really useful devices and software will come from companies and individuals who realise that while most of what we do is the same as what everyone else does, it is the stuff we do differently that makes us unique, and that is what we need to control and manage for ourselves. Allow us to do that, with taste, and you'll be a winner.

PS: I should also say "thanks" to fellow sponsors Chris Devine and Devine Computing for just making stuff work.

* I know all is not equal. Apple's competitive advantage is that it "has taste", but there is no taste in its restrictions.

18 May 2011

This last week saw the release of a fairly significant update to Gource, replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs.

A lot of the improvements are under the hood, but the first thing you'll probably notice is the elimination of banding artifacts in the bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the 'colour space' of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides only 256 different shades of grey to play with.

When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour (a 1024-pixel gradient has only 256 shades to span it, so each shade covers a run of 4 pixels), producing visible 'bands' of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of 'in between' colours and the issue becomes even more exaggerated, as seen below (contrast adjusted for emphasis):

Those aren’t compression artifacts you’re seeing!

Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity instead. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.
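In GLSL terms, the idea looks roughly like this minimal sketch (not Gource's actual shader; the gradient texture and uniform names here are assumptions for illustration):

uniform sampler2D gradient;   // bloom gradient baked into a texture (hypothetical)
uniform vec2 centre;          // directory centre in screen coordinates
uniform float radius;

// cheap per-pixel pseudo-random hash
float rand(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    float d = distance(gl_FragCoord.xy, centre) / radius;
    // fuzzy sample: jitter the lookup position by a little noise instead
    // of sampling the gradient exactly, diffusing the bands into grain
    float jitter = (rand(gl_FragCoord.xy) - 0.5) * (2.0 / 255.0);
    gl_FragColor = texture2D(gradient, vec2(clamp(d + jitter, 0.0, 1.0), 0.5));
}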

The other improvement is speed – everything is now drawn with VBOs, large batches of object geometry passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU, using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of drawing the whole scene twice.

Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by 1-by-1 pixels, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux kernel Git import commit, I'm looking at you), doing half as much work there makes a big difference.
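The two-sample trick looks something like this (again a sketch rather than the real Gource code; the uniform names are illustrative):

uniform sampler2D font;
uniform vec2 shadow_offset;   // roughly one pixel, in texture coordinates

void main() {
    vec2 uv = gl_TexCoord[0].st;
    float glyph  = texture2D(font, uv).a;                  // sample 1: the glyph itself
    float shadow = texture2D(font, uv - shadow_offset).a;  // sample 2: offset for the shadow
    // lay down a dark shadow first, then blend the text colour over it
    gl_FragColor = mix(vec4(0.0, 0.0, 0.0, shadow), gl_Color, glyph);
}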

14 August 2009

Following on from the SAML 2.0 work that I've done recently for Moodle, I thought it would be useful to do the same for the Mahara ePortfolio service while I was in the same space. Details of the first release can be found here, with tested versions for both trunk and 1.1_STABLE.

02 August 2009

Of late I have been doing a lot of SSO integration work for the NZ Ministry of Education, and during this time I came across an excellent project, FEIDE. One of the offshoots of this has been the development of a high-quality PHP library for SAML 2.0 Web SSO: SimpleSAMLphp.

For Moodle integration, Erlend Strømsvik of Ny Media AS developed an authentication plugin, to which I've made a number of changes around configuration options and Moodle session integration. This has now been documented and added to Moodle Contrib to give it better visibility to the Moodle community at large. The documentation is here and the contrib entry is here.

27 June 2009

While doing some work for a client recently, I got the opportunity to do some major performance work on sapnwrfc for Perl. The net result is that a number of memory leaks, mainly from Perl values not going out of scope properly, have been fixed.

Additionally, I've had some time to put together a proper cookbook-style set of examples in the sapnwrfc-cookbook. These examples, while specifically for Perl, are almost identical for sapnwrfc for Python, Ruby and PHP too.