<small>Tomorrow Comes Today: Tales from GMT+11, by Christopher Halse Rogers</small>

<h1>Mathematics education</h1><small>2013-09-22</small><p>In response to&nbsp;<a class="g-profile" href="http://plus.google.com/115348217455779620753" target="_blank">Jonathan Lange</a>&nbsp;poking me with <a href="http://worrydream.com/refs/Lockhart-MathematiciansLament.pdf">Lockhart's Lament</a>. It seems that I haven't addressed that in any public space yet. </p> <p>We looked at this essay in passing in the mathematics portion of my teaching degree. I agree with some of it. Parts of it match my understanding of the Australian mathematics curricula and teaching practice, parts of it don't; these parts may match American curricula and praxis, but I can't speak to that. I'm pretty sure some parts of it are objectively wrong. </p> <p>Firstly, his essay is transparently written from a <i>pure</i> mathematician's point of view. Now, that's where my sympathies lie, but I'm not sure you'd get the same rant out of an applied mathematician. Furthermore, I disagree with some of his assumptions - particularly the assertion that teaching is some kind of mystical unteachable skill. I suspect that his teaching method is exclusive; when he says <blockquote cite="http://worrydream.com/refs/Lockhart-MathematiciansLament.pdf">It’s perfectly simple. Students are not aliens. They respond to beauty and pattern, and are naturally curious like anyone else. Just talk to them! And more importantly, listen to them! </blockquote>I feel that there's an implicit &quot;naturally curious like anyone else <i>(sufficiently like me)</i>&quot; in there. It sounds laudable, but <i>may</i> be exclusionary. 
</p> <p>For an essay that's about how mathematics should be taught as an art, it seems peculiarly dissonant to deny teaching the same status. </p> <p>Also in the broad strokes, there's the little problem of the philosophical purpose of education<a id="purposeofedu-ref" href="#purposeofedu">¹</a>. I think I'm broadly in agreement with Lockhart on the purpose of education, but ours is by no means the uniquely correct view. All writing about education is embedded in such a philosophical context, and the context that applies to a given system is mostly a political matter. </p> <p>Finally, he makes an observation - children go into school all curious and excited, but the adults who come out of school are significantly less curious. This is a common observation, and it's also common to draw the causal link: school <i>eliminates the curiosity</i> of children. </p><p>I don't believe this; it seems to be a textbook <a href="http://en.wikipedia.org/wiki/Post_hoc_ergo_propter_hoc"><i>post hoc ergo propter hoc</i></a> argumentative fallacy. I'm not aware of any evidence that adults in the pre-compulsory education era were any more curious on average. Sure, there are plenty of historical artefacts showing there were people in ancient Babylon, Greece, the Islamic Caliphate and Renaissance Europe who were interested in mathematical play, but that's not answering the same question. We've got plenty of academic mathematicians - more than we have <i>jobs</i> for, in fact - who are equally interested in mathematical play. </p><p>Furthermore, most mammalian species are much less curious as adults than infants. No one is surprised that adult cats tend to be less curious than kittens; I don't see why it's surprising that adult humans are less curious than immature humans. 
</p><p>It's entirely possible that there exists a teaching and schooling method that <i>preserves</i> infant curiosity into adulthood, but I don't think we know what it is, and not being able to do that is not the same as squashing curiosity. </p> <p>In the actual mathematical meat, he's quite right that there's a lot more algorithm and definition in the Australian curricula than there is real <i>mathematics</i>. However, that doesn't mean that classroom experiences don't contain many mathematical experiences. The Australian syllabus documents are high-level descriptions of what students are tested on (and hence, are expected to learn) - for example, the <a href="http://www.boardofstudies.nsw.edu.au/syllabus_sc/pdf_doc/mathematics_710_syllabus.pdf">NSW syllabus</a> says things like <blockquote cite="http://www.boardofstudies.nsw.edu.au/syllabus_sc/pdf_doc/mathematics_710_syllabus.pdf">Data Representation DS4.1 (p 114): Constructs, reads and interprets graphs, tables, charts and statistical information </blockquote>This does not seem particularly objectionable. Being able to read graphs and tables is an important life skill, and is a natural fit for a mathematics class. </p><p>Conspicuously absent is any mention of <i>how</i> this is to be taught; just what (by the end of year 7, in this case) it's expected that students will know or be able to do. There's plenty of scope for teachers to provide authentic<a id="authentic-ref" href="#authentic">²</a> mathematical experiences. There aren't even <i>that</i> many top-level objectives that need to be reached - the whole set of objectives from year 7 to year 10 is laid out on pages 16 to 27. The crushing bureaucracy mandating rote learning of contextless data might be a peculiarly American phenomenon. Or maybe a product of hyperbole. </p> <p>Most likely it's the result of his bizarre assertion that &quot;true&quot; teaching cannot be planned. 
</p> <p>Lockhart's on firmer ground when describing the cultural understanding of mathematics. It <i>is</i> seen as a tool to other ends, as a collection of disparate arbitrary formulae, as primarily arithmetic. Few people know it for the creative art that it is. But then again, mathematics inarguably <i>is</i> a tool of immense power. As Lockhart puts it, mathematics is the music of reason; it is one of the few ways of <i>knowing</i> that you're thinking correctly<a id="axiomatic-ref" href="#axiomatic">³</a>. </p> <p>He's also more correct in practice than theory. As he says, teaching real mathematics is <i>difficult</i>. It's even more difficult when, as in the classrooms of the world-as-it-is, the teacher needs to take the whole class with them, under a certain amount of time pressure, and get to a pre-determined end state. While there's no particular impediment to exploration-driven mathematics, it requires a level of mathematical confidence on the part of the teacher that's not common, particularly in primary school where teachers often only have high-school level mathematics. </p> <p>So, it seems that my response to Lockhart's Lament is that I agree with it, except in all the details! </p> <p><a id="purposeofedu" href="#purposeofedu-ref">¹:</a> This problem occurs over and over and over. It's the halting-problem of education.</p><p><a id="authentic" href="#authentic-ref">²:</a> I <i>really</i> hated the use of the word &quot;authentic&quot; as a buzzword in the teaching literature. Here's where I get to continue that fine tradition!</p><p><a id="axiomatic" href="#axiomatic-ref">³:</a> Of course, it doesn't, and cannot, ensure that your axioms are correct. 
That's what science is for!</p>

<h1>XMir Performance</h1><small>2013-07-15</small><h3>Or: Why XMir is slower than X, and how we'll fix it</h3> <p>We've had a bunch of testing of XMir now; plenty of bugs, and plenty of missing functionality. </p><p>One of the bugs that people have noticed is a 10-20% performance drop over raw X. This is really several bits of missing functionality - we're doing a lot more work than we need to be. Oddly enough, people have also been mentioning that it feels "smoother" - which might be placebo, or unrelated updates, or might be to do with something in the Mir/XMir stack. It's hard to tell; it's hard to measure "smoother". We're not faster, but faster is not the same as smoother. </p> <p>Currently we do a lot of work in submitting rendering from an X client to the screen, most of which we can make unnecessary. </p><h4>The simple bit</h4><p>The simple part is composite bypass support for Mir - most of the time unity-system-compositor does not need to do any compositing - there's just a single full-screen XMir window, and Mir just needs to flip that to the display. This is <a href="https://blueprints.launchpad.net/ubuntu/+spec/client-1310-mir-xmir">in progress</a>. This cuts out an unnecessary fullscreen blit. </p><h4>The complicated part is in XMir itself</h4><p>The fundamental problem is the mismatch between rendering models - X wants the contents of buffers to be persistent; Mir has a GL-ish new-buffer-each-frame model. This means each time XMir gets a new buffer from Mir it needs to blit the previous frame into it first, and can't simply render straight to Mir's buffer. 
Now, we can (but don't yet) reduce the size of this blit by tracking what's changed since XMir last saw the buffer - and a lot of the time that's going to be a lot smaller than fullscreen - but there's still some overhead<a href="#singlebuffer" id="ref1">¹</a>. </p><p>Fortunately, there's a way around this. GLX matches Mir's buffer semantics nicely - each time a client calls SwapBuffers it gets a shiny new backbuffer to render into. So, rather like Compiz's unredirect-fullscreen-windows option, if we've got a fullscreen<a href="#rootless" id="ref2">²</a> GLX window we can hand the buffer received from Mir directly to the client and avoid the copy. </p><p>Even better, this doesn't apply only to fullscreen games - GNOME Shell, KWin, and Unity are all fullscreen GLX applications. </p><p>As always, there are interesting complications - applications can draw on their GL window with X calls, and applications can try to be fancy and only update a part of their frontbuffer rather than calling SwapBuffers; in either case we can't bypass. Unity does neither, but Shell and KWin might. </p><h4>Enter the cursor</h4><p>In addition to the two unnecessary fullscreen blits - X root window to Mir buffer, Mir buffer to framebuffer - XMir currently uses X's software cursor code. This causes two problems. Firstly, it means we're doing X11 drawing on top of whatever's underneath, so we can't do the SwapBuffers trick. Secondly, it causes a software fallback whenever you move the cursor, making the driver download the root window into CPU accessible memory, do some CPU twiddling, and then upload again to GPU memory. This is bad, but not terrible, for Intel chips where the GPU and CPU share the same memory but with different caches and layouts. It's terrible for cards with discrete memory. Both these problems go away once we support setting the HW cursor image in Mir. </p><p>Once those three pieces land there shouldn't be a meaningful performance difference between XMir-on-Mir and X-on-the-hardware. 
</p> <small><p><a href="#ref1" id="singlebuffer">¹</a>: If we implemented a single-buffer scheme in Mir we could get rid of this entirely at the cost of either losing vsync or blocking X rendering until vsync. That's probably not a good tradeoff. </p><p><a href="#ref2" id="rootless">²</a>: Technically, if we've got a GLX client whose size matches that of the underlying Mir buffer. For the moment, that means "fullscreen", but when we do rootless XMir for 14.04 <i>all</i> windows will be backed by a Mir buffer of the same size. </p></small>

<h1>Artistic differences</h1><small>2013-03-18</small><small>The latest entry in my critically acclaimed series on Mir and Wayland!</small> <h1>Wayland, Mir, and X - different projects</h1> <p>Apart from the architectural differences between them, which <a href="http://blog.cooperteam.net/2013/03/for-posterity.html">I've</a> <a href="http://blog.cooperteam.net/2013/03/mir-and-you.html">covered</a> <a href="http://blog.cooperteam.net/2013/03/server-allocated-buffers-in-mir.html">previously</a>, Mir and Wayland also have quite different project goals. Since a number of people seem to be confused as to what Wayland actually is - and that's not unreasonable, because it's a bit complicated - I'll give a run-down as to what all these various projects are and aim to do, throwing in X11 as a reference point. </p> <h2>X11, and X.org</h2><p>Everyone's familiar with their friendly neighbourhood X server. This is what we've currently got as the default desktop Linux display server. For the purposes of this blog post, X consists of: </p><h4>The X11 protocol</h4><p>You're all familiar with the <a href="http://www.x.org/releases/X11R7.5/doc/x11proto/proto.pdf">X11 protocol</a>, right? 
This gentle beast specifies how to talk to an X server - both the binary format of the messages you'll send and receive, and what you can expect the server to do with any given message (the semantics). There are also plenty of <a href="http://www.x.org/releases/X11R7.7/doc/">protocol extensions</a>; new messages to make the server do new things, like handle more than one monitor in a non-stupid way.</p><h4>The X11 client libraries</h4><p>No-one <i>actually</i> fiddles with X by sending raw binary data down a socket; they use the client libraries - the modern <a href="http://xcb.freedesktop.org/tutorial/">XCB</a>, or the boring old <a href="http://www.x.org/wiki/ProgrammingDocumentation">Xlib</a> (also known as <tt>libX11.so.6</tt>). They do the boring work of throwing binary data at the server and present the client with a more civilised view of the X server, one where you can just <tt>XOpenDisplay(NULL)</tt> and start doing stuff. <br/>Actually, the above is a bit of a lie. Almost all the time people don't even use XCB or Xlib; they use toolkits such as GTK+ or Qt, and <i>they</i> use Xlib or XCB. </p><h4>The Xorg server</h4><p>This would be the bit most obviously associated with X - the one, the only, the <a href="http://cgit.freedesktop.org/xorg/xserver/">X.org X server</a>! This is the actual <tt>/usr/bin/X</tt> display server we all know and love. Although there are other implementations of X11, this is all you'll ever see on the free desktop. Or on OS X, for that matter. </p> <p>So that's our baseline stack - a protocol, one or more client libraries, a display server implementation. How about Wayland and Mir?</p> <h4>The Wayland protocol</h4><p>The <a href="http://cgit.freedesktop.org/wayland/wayland/tree/protocol/wayland.xml">Wayland protocol</a> is, like the X11 protocol, a definition of the binary data you can expect to send and receive over a Wayland socket, and the semantics associated with those binary bits. 
This is handled a bit differently to X11; the protocol is specified in XML, which is processed by a <a href="http://cgit.freedesktop.org/wayland/wayland/tree/src/scanner.c">scanner</a> and turned into C code. There is a binary protocol, and you can technically<a href="#eglcaveat" id="ref1">¹</a> implement that protocol without using the <tt>wayland-scanner</tt>-generated code, but it's not what you're expected to do. </p><p>Also different from X11 is that everything's treated as an extension - you deal with all the interfaces in the core protocol the same way as you deal with any extensions you create. And you create a lot of extensions - for example, the core protocol doesn't have any buffer-passing mechanism other than SHM, so there's <a href="http://cgit.freedesktop.org/mesa/mesa/tree/src/egl/wayland/wayland-drm/wayland-drm.xml">an extension</a> for drm buffers in Mesa. The Weston reference compositor also has a bunch of extensions, both for ad-hoc things like the compositor&lt;-&gt;desktop-shell interface, and for things like XWayland. </p> <h4>The <a href="http://cgit.freedesktop.org/wayland/wayland/">Wayland client library</a></h4><p>Or <tt>libwayland</tt>. A bit like XCB and Xlib, this is basically just an IPC library. Unlike XCB and Xlib, it can also be used by a Wayland server for server→client communication. Also unlike XCB and Xlib, it's programmatically generated from the protocol definition. It's quite a nice IPC library, really. Like XCB and Xlib, you're not really expected to use this, anyway; you're expected to use a toolkit like Qt or GTK+, and EGL + your choice of Khronos drawing API if you want to be funky. <br/>There's also a library for reading X cursor themes in there. </p> <h4>The Wayland server?</h4><p>This is where it diverges; there is no Wayland server in the sense that there's an Xorg server. 
There's <a href="http://cgit.freedesktop.org/wayland/weston/">Weston</a>, the reference compositor, but that's strictly intended as a testbed to ensure the protocol works. </p><p>Desktop environments are expected to write their own Wayland server, using the protocol and client libraries. </p> <h4>The Mir protocol?</h4><p>We <a href="http://bazaar.launchpad.net/~mir-team/mir/trunk/files/head:/src/shared/protobuf/">kinda</a> have an explicit IPC protocol, but not really. We don't intend to support re-implementations of the Mir client libraries, and will make no effort to not break them if someone tries. We're using <a href="https://code.google.com/p/protobuf/">Google Protobuf</a> for our IPC format, by the way. </p> <h4>The Mir client libraries</h4><p>What toolkit writers use; it's even called <a href="http://bazaar.launchpad.net/~mir-team/mir/trunk/files/head:/include/client/mir_toolkit/">mir_toolkit</a>. Again, you're probably not going to use this directly; you're going to use a toolkit like GTK+ or Qt, and like Wayland if you want to draw directly you'll be using EGL + GL/GLES/OpenVG. </p> <h4>The Mir server?</h4><p>Kind of. Where the Wayland libraries are all about IPC, Mir is about producing a library to do the drudge work of a display-server-compositor-thing, so in this way it's more like Xorg than Wayland. In fact, it's a bit more specific - Mir is about creating a library to make the most awesome Unity display-server-compositor-thingy. We're not aiming to satisfy anyone's requirements but our own. That said, our requirements aren't likely to be <i>hugely</i> specific, so Mir will likely be generally useful. </p><p>To some extent this is why GNOME and KDE aren't amazingly enthused about Mir. They already <i>have</i> a fair bit of work invested in their own Wayland compositors, so a library to build display-server-compositor-thingies on isn't hugely valuable to them. Right now. 
</p><p>Perhaps we'll become so awesome that it'll make sense for GNOME or KDE to rebase their compositors on Mir, but that's a long way away. </p> <p><a href="#ref1" id="eglcaveat">¹:</a> Last time I saw anyone try this on <tt>#wayland</tt> there were problems around the interaction with the Mesa EGL platform which meant you couldn't really implement the protocol without using the existing C library. I'm not sure if that got resolved. </p>

<h1>Server Allocated Buffers in Mir</h1><small>2013-03-14</small><h1>…Or possibly server <i>owned</i> buffers</h1> <p>One of the significant differences in design between Mir and Wayland compositors<a id="ref1" href="#pedantry">¹</a> is the buffer allocation strategy.</p> <p>Wayland favours a client-allocated strategy. In this, the client code asks the graphics driver for a buffer to render to, and then renders to it. Once it's done rendering it gives a handle<a id="ref2" href="#bufferhandles">²</a> to this buffer to the Wayland compositor, which merrily goes about its job of actually displaying it to the screen.</p> <p>In Mir we use a server-allocated strategy. Here, when a client creates a surface, Mir asks the graphics driver for a set of buffers to render to. Mir then sends the client a reference to one of the buffers. The client renders away, and when it's done it asks Mir for the next buffer to render to. 
At this point, Mir sends the client a reference to another buffer, and displays the first buffer.</p> <p>You can see this in action - in the software-rendered case - in <a href="http://bazaar.launchpad.net/~mir-team/mir/trunk/view/head:/examples/demo_client_unaccelerated.c#L168">demo_client_unaccelerated.c</a> (I know the API here is a bit awkward; I'm agitating for a better one).<br/>The meat of it is: <pre>mir_wait_for(mir_surface_next_buffer(surface, surface_next_callback, 0));</pre>which asks the server for the next buffer and blocks until the server has replied. This <i>also</i> tells the server that the current buffer is available to display.<br/>Then <pre>mir_surface_get_graphics_region(surface, &amp;graphics_region);</pre>gets a CPU-writeable block of memory from the current buffer. You'll note that we don't have a <tt>mir_wait_for</tt> around this; that's because we've already got the buffer from the server above; this just accesses it.</p><p>The code then writes an interesting <small>or, in fact, highly boring</small> pattern to the buffer and loops back to <tt>next_buffer</tt> to display that rendering and get a new buffer, <i>ad infinitum</i>. </p> <h2>Great. So why are you doing it, again?</h2> <p>The need for server-allocated buffers is primarily driven by the ARM world and Android graphics stack, an area in which I'm blissfully unaware of the details. So, as with my second-hand <a href="http://blog.cooperteam.net/2013/03/for-posterity.html">why Mir</a> post, you, gentle reader, get my own understanding. </p> <p>The overall theme is: <i>server-allocated buffers give us more control over resource consumption</i>. </p> <p>ARM devices tend to be RAM constrained, while at the same time having higher resolution displays than many laptops. Applications also tend to be full-screen. Window buffers are on the order of 4 MB for mid-range screens, 8 MB for the fancy 1080p displays on high end phones, and 16 MB for the Nexus 10s. 
Realistically you need to at least double-buffer, and it's often worth triple-buffering, so you're eating 8-12MB on the low end and 32-48MB on the high end, per window. Have a couple of applications open and this starts to add up on devices with between 512MB and 2GB of RAM and no dedicated VRAM. </p> <p>In addition to their lack of dedicated video memory, ARM GPUs also tend to be much less flexible. On a desktop GPU you can generally ask for as many scanout-capable<a href="#scanout" id="ref3">³</a> buffers as you like. ARM GPUs tend to have several restrictions on scanout buffers. Often they need to be physically contiguous, which in practice means allocated out of a limited block of memory reserved at boot. Sometimes they can only scan out from particular physical addresses. I'm sure there are other oddities out there. </p> <p>So scanout buffers are a particularly precious resource. However, you <em>really</em> want clients to be able to use scanout-capable buffers - in the common case where you've got a single, opaque, fullscreen window, you don't have to do any compositing, so having the application draw directly to the scanout buffer saves you a copy; a significant performance win. Desktop aficionados might recognise this as the same effect as Compiz's <i>unredirect fullscreen windows</i> option. </p> <p>So, there's the problem domain. What do server-allocated buffers buy us? </p> <p><b>The server can control access to the limited scanout buffers.</b><br/>This one's pretty self-explanatory. </p> <p><b>The server can steal buffers from inactive clients.</b><br/>When applications aren't visible, they don't really need their window buffers, do they? The server can either destroy the inactive client's buffers, or clear them and hand them off to a different application. </p><p>When applications are idle for an extended period we'll want to clear up more resources - maybe destroy their EGL state, and eventually kill them entirely. 
However, applications will be able to resume faster if we've just killed their window buffers than if we've killed their whole EGL state. </p> <p><b>The server can easily throttle clients.</b><br/>There's an obvious way for the server to prevent a client from rendering faster than it needs to: delay handing out the next buffer. </p> <p>Finally: </p> <p><b>Having a single allocation model is cleaner.</b><br/>It's cleaner to have just one allocation model. We need to do at least <i>some</i> server-allocation, so that's the one allocation model. Plus, even though desktops have lots more memory than phones, people still care about memory usage ☺. </p> <h2>But doesn't X do server-allocated buffers, and isn't it terrible?</h2> Yes. But we're not doing most of the things that make server-allocated buffers terrible. In X's case <ul><li>X allocates all sorts of buffers, not just window surfaces. We leave the client to allocate pixmap-like things - that is, GL textures, framebuffer objects, etc - itself. Notably, we're not allocating ancillary buffers in the server, just the colour buffers. We shouldn't need to change protocol to support new types of buffers (HiZ springs to mind), as has happened with DRI2. <li>X isn't particularly integrated. When you resize an X window, the client gets a DRI2 <tt>Invalidate</tt> event, indicating that it needs to request new buffers. It also gets a separate <tt>ConfigureNotify</tt> event about the actual resize. We won't have this duplication; clients will automatically get buffers of the new size on the next buffer advance, and these buffers know their size. <li>X has more round-trips than necessary. After submitting rendering with <code>SwapBuffers</code>, clients receive an <tt>Invalidate</tt> event. They then need to request new, valid buffers. We won't have as many roundtrips; the server responds to a request to submit the current rendering with the new buffer. 
</ul> <p>A possibly significant disadvantage we <i>can't</i> mitigate is that the server no longer knows how big the client thinks its window is, so we may display garbage in the window if the client thinks its window is smaller than the server's window. I'm not sure how much of a problem that will be in practice. </p> <h2>Summary</h2><p>Many of the benefits of server-allocation could also be gained with sufficiently smart clients and some extra protocol messages - although that falls down once you start suspending idle clients, as it requires the client to actually be running. We think that doing it in the server will be easier, cleaner, and more reliable. </p> <p>I'm less convinced that server-allocation is the right choice than I am about the rest of Mir's architecture. I'm not convinced that it's a mistake, either, and some of the benefits are attractive. We'll see how it goes! </p><hr/> <p><a href="#ref1" id="pedantry">¹:</a> Yes, <i>I know</i> you can write a server-allocated buffer model with a Wayland protocol. All the Wayland compositors for which we have source, however, use a client-allocated model, and that's what existing Wayland client code expects. Feel free to substitute “existing Wayland compositors” whenever I say “Wayland”. </p><p><a href="#ref2" id="bufferhandles">²:</a> Actually, in the common (GPU-accelerated) case the client itself basically only gets a handle to the buffer; the GEM handle, which it gets from the kernel. </p><a href="#ref3" id="scanout">³:</a> scanout is the process of reading from video memory and sending to an output encoder, like HDMI or VGA. 
A scanout-capable buffer is the only thing the GPU is capable of putting on a physical display.

<h1>Mir and YOU!</h1><small>2013-03-13</small><small><i style="font-size: small; line-height: 18px;">This is still based on my series&nbsp;<a href="https://plus.google.com/113883146362955330174/posts/PXc93m8nKwk">of</a>&nbsp;<a href="https://plus.google.com/113883146362955330174/posts/P4GFie3VoD8">G+</a>&nbsp;<a href="https://plus.google.com/113883146362955330174/posts/QwMqCgC7c9G">posts</a></i></small><br /><h1>But Chris! I don't care about Unity. What does Mir mean for me?</h1><br /><br />The two common concerns I've seen on my G+ comment stream are:<br /><ul><li><a href="#gnome">With Canonical focusing on Mir rather than Wayland, what does this mean for GNOME/Kubuntu/Lubuntu? What about Mint?</a></li><li><a href="#driverfragmentation">Does this harm other distros by fragmenting the Linux driver space?</a></li></ul><br /><h2 id="gnome">What does this mean for GNOME/Kubuntu/etc?</h2><br />The short answer, for the short-to-mid-term, is: not much.<br /><br />We'll still be keeping X available and maintained. 
Even after we shove a Mir system-compositor underneath it, there will still be an X server available for sessions.<br /><br />Let's review the architecture diagram from <a href="http://wiki.ubuntu.com/MirSpec">MirSpec</a>:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://wiki.ubuntu.com/MirSpec?action=AttachFile&amp;do=get&amp;target=Compositor_Cascade.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="201" src="https://wiki.ubuntu.com/MirSpec?action=AttachFile&amp;do=get&amp;target=Compositor_Cascade.png" width="320" /></a></div>Here we see a single Mir system compositor - the top box - and three clients of the system compositor: a greeter (leftmost box) and two Unity-Next user sessions.<br /><br />You can replace any of the Mir boxes which contain the sessions - “Unity Next” - with an X server + everything you'd normally run on the X server. The top “System Compositor/Shell” box is just there to support the user sessions - to handle the transitions: startup splash → greeter, greeter → user session, user session → user session, and user session → shutdown splash. We'll probably also use the system compositor to provide a proper secure authentication mechanism.<br /><br />Note that the “Unity Next” boxes will also be running an X server, but in this case it will be a “rootless” X server. Basically, that means that it doesn't have a desktop window, just free standing application windows. This will be how we support legacy X11 apps in a Unity Next session. This is also how Wayland compositors, such as Weston, handle X11 compatibility - and, for that matter, how X11 works on OS X.<br /><br />The rootless X server in Unity Next will not be able to run KWin or Mutter or whatever. 
This isn't losing anything, though - you can't currently run KWin or GNOME Shell in Unity, or vice versa, even though everything runs on X.<br /><br /><h3>So, if the short-term impact is “approximately none”, how about the long-term impact?</h3><br /><br />This somewhat depends on what other projects do. Remember how we can run full X servers under the System Compositor? We can do the same with Weston. Weston has multiple backends; you can already run Weston-on-Wayland, just as you can run Weston-on-X11. Once we've got input sorted out in Mir it'll probably be a fun weekend hack to add a Mir backend to Weston, and have Weston-on-Mir. You're welcome to do that if you get to it before me, gentle reader ☺.<br /><br />So what happens if KDE or GNOME build a Wayland compositor? Well, we don't really know, because they haven't finished yet<a href="#footnote1" id="ref1">¹</a>. Neither of them is using Weston, so a Weston-on-Mir backend won't be terribly useful. If their compositor architecture allows multiple backends then writing a Mir backend for them shouldn't be terrible; in that case, we can replace any of the “Unity Next” sessions in the diagram with “GNOME Shell Next” or “KDE Next”, collect our winnings, and drive off into the sunset in our shiny new Ferrari.<br /><br />The result that requires the most work is if KDE or GNOME additionally build a system compositor. This seems quite likely, as a system compositor makes a bunch of gnarly problems go away. 
In that case you'd not only need to write a Mir backend for their compositor, you'd also need to implement whatever interfaces they use on their system compositor.<br /><br />We see this a bit already, actually, with GNOME Shell and GDM; Shell uses interfaces which LightDM doesn't provide but GDM does, so things break.<br /><br />The worst-case scenario here is that you need GNOME's stack to run GNOME, KDE's stack to run KDE, and Unity's stack to run Unity, and you need to select between them at boot rather than at the greeter.<br /><br /><b>So, in summary: Mir doesn't break GNOME/KDE/XFCE now. These projects may change in a way that's incompatible with Mir in future, but that's (a) in the future and (b) solvable.</b><br /><br /><h2 id="driverfragmentation">Does this fragment the Linux graphics driver space?</h2><br /><br />Ish.<br /><br />Mir only exists because of all the work Wayland devs have done around the Linux graphics stack. It uses exactly the same stack that Weston's drm backend does<a href="#footnote2" id="ref2">²</a>.<br /><br />The XMir drivers are basically the same as the XWayland drivers - they're stock <tt>xf86-video-intel</tt>, <tt>xf86-video-ati</tt>, <tt>xf86-video-nouveau</tt> with a small patch. They're not the same, but they're not hugely different.<br /><br />The Mir EGL platform looks almost exactly the same as the Wayland EGL platform. Which looks almost exactly the same as the GBM EGL platform, which looks almost exactly the same as the X11 EGL platform. Code reuse and interfaces are awesome - we might be submitting patches to share <i>even more</i> code here.<br /><br />Weston currently doesn't have any proprietary driver support; similarly, neither does Mir.<br /><br />We're talking with NVIDIA and AMD to get support for running Mir on their proprietary drivers, and providing an interface for proprietary drivers in general. It's early days; we don't have anything concrete; and even if we did, I probably wouldn't be able to divulge it. 
But it's likely that whatever we come up with to support Mir on NVIDIA will also support Wayland on NVIDIA.<br /><br /><b>So, driver divergence - real, but probably overblown.</b><br /><br /><a href="#ref1" id="footnote1">¹</a>: There's an old GNOME Shell branch that's seeing some love recently, and <a href="http://community.kde.org/KWin/Wayland">KDE</a> has a <a href="https://projects.kde.org/projects/kde/kde-workspace/repository/show?rev=kwin-wayland">kwin branch</a> for wayland support.<br /><a href="#ref2" id="footnote2">²</a>: On the desktop. On Android, it uses the Android stack.Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com2tag:blogger.com,1999:blog-6126192048081719824.post-17514284913632391862013-03-12T14:32:00.000+11:002013-03-14T09:03:12.624+11:00For posterity<span style="font-size: x-small;"><span style="font-family: inherit; line-height: 18px;"><i>This is based on my series <a href="https://plus.google.com/113883146362955330174/posts/PXc93m8nKwk">of</a> <a href="https://plus.google.com/113883146362955330174/posts/P4GFie3VoD8">G+</a> <a href="https://plus.google.com/113883146362955330174/posts/QwMqCgC7c9G">posts</a></i></span></span><br /><h2><i style="background-color: white; line-height: 18px;"><span style="font-family: inherit; font-size: small;">Standing on the shoulders of giants</span></i></h2><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">We've recently gone public (yay!) with the <a href="http://launchpad.net/mir">Mir</a> project that we've been working on for some months now.</span><br /><span style="font-family: inherit;"><br style="background-color: white; font-size: 13px; line-height: 18px;" /></span><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;">It's been a bit rockier than I'd hoped (boo!). 
Particularly, we offended people with incorrect information on the <a href="https://wiki.ubuntu.com/MirSpec#Why_Not_Wayland_.2BAC8_Weston.3F">wiki page</a>&nbsp;we wanted to direct the inevitable questions to.</span></span><br /><span style="font-family: inherit;"><br style="background-color: white; font-size: 13px; line-height: 18px;" /></span><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;">I had proof-read this, and didn't notice it - I'm familiar with Wayland, so even with “X's input has poor security” and “Wayland's input protocol may duplicate some of the problems of X” juxtaposed I didn't make the connection. After all, one of the nice things about Wayland is that it&nbsp;</span><i style="background-color: white; font-size: 13px; line-height: 18px;">solves</i><span style="background-color: white; font-size: 13px; line-height: 18px;">&nbsp;the X security problems! It was totally reasonable to read what was written as “Wayland's input protocol will be insecure, like X's”, which is totally wrong; sorry to all concerned for not picking that up, most especially +Kristian Høgsberg and +Daniel Stone.</span></span><br /><span style="font-family: inherit;"><br style="background-color: white; font-size: 13px; line-height: 18px;" /></span><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">Now that the mea-culpa's out of the way…</span><br /><span style="font-family: inherit;"><br style="background-color: white; font-size: 13px; line-height: 18px;" /></span><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">Although we've got a section on the wiki page “why not Wayland/Weston”, there's a bunch of speculation around about why we really created Mir, ranging from the sensible (we want to write our own display server so that we can control it) to the not-so-sensible (we're actually a front company of Microsoft to infiltrate and 
destroy Linux). I don't think the rationale on the page is inaccurate, but perhaps it's not clear.</span><br /><span style="font-family: inherit;"><br style="background-color: white; font-size: 13px; line-height: 18px;" /></span><br /><h3><span style="font-family: inherit;">Why Mir?</span></h3><span style="font-family: inherit; font-size: x-small;"><b style="background-color: white; line-height: 18px;"><br /></b></span><span style="font-family: inherit; font-size: x-small;"><b style="background-color: white; line-height: 18px;">Note:</b><span style="background-color: white; line-height: 18px;">&nbsp;I was not involved in the original decision to create Mir rather than bend Wayland to our will. While I've had discussions with those who were, this is filtered through my own understanding, so treat this as&nbsp;</span><i style="background-color: white; line-height: 18px;">my interpretation</i><span style="background-color: white; line-height: 18px;">&nbsp;of the thought-processes involved. Opinions expressed do not necessarily reflect the opinions of my employer, etc.</span></span><br /><h3><span style="font-family: inherit;"><br /></span></h3><h3><span style="font-family: inherit;">We wanted to integrate the shell with a display server</span></h3><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;">There are all sorts of frustrations involved in writing a desktop shell in X. 
See any number of Wayland videos for details :).</span></span><br /><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;"><br /></b></span><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;">We therefore want Wayland, or something like it.</b></span><br /><span style="font-family: inherit;"><br style="background-color: white; font-size: 13px; line-height: 18px;" /></span><br /><h3><span style="font-family: inherit;">We don't want to use Weston (and neither does anyone else)</span></h3><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;"><a href="http://cgit.freedesktop.org/wayland/weston/">Weston</a>, the reference Wayland compositor, is a test-bed. It's for the development of the Wayland protocol, not for being an actual desktop shell. We could have forked Weston and bent it to our will, but we're on a bit of an automated-testing run at the moment, and it's generally hard to retro-fit tests onto an existing codebase. 
Weston has some tests, but we want super-awesome-tested code.</span></span><br /><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;"><br /></span></span><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;">It's perhaps worth noting that neither GNOME nor <a href="http://community.kde.org/KWin/Wayland">KDE</a>&nbsp;has based their Wayland compositor work on Weston.</span></span><br /><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;"><br /></span></span><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;">We don't want Weston, but maybe we want Wayland?</b></span><br /><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;"><br /></b></span><br /><h3><span style="line-height: 18px;">What about input?</span></h3><div><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">At the time Mir was started, Wayland's input handling was basically non-existent. 
+Daniel Stone's done a lot of work on it since then, but at the time it would have looked like we needed to write an input stack.</span></div><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;"><br /></b></span><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;">Maybe we want Wayland, but we'll need to write the input stack.</b></span><br /><span style="font-family: inherit;"><b style="background-color: white; font-size: 13px; line-height: 18px;"><br /></b></span><br /><h3><span style="line-height: 18px;">We want server-side buffer allocation; will that work?</span></h3><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">We need server-side buffer allocation for ARM hardware; for various reasons we want server-side buffer allocation everywhere. Weston uses client-side allocation, and the Wayland EGL platform in Mesa does likewise. Although it's possible to do server-side allocation in a Wayland protocol, it's swimming against the tide.</span><br /><b style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;"><br /></b><b style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">Maybe we want Wayland, but we'll need to write an input stack and patch XWayland and the Mesa EGL platform.</b><br /><b style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;"><br /></b><br /><h3>Can we tailor this to Unity's needs?</h3><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">We want the minimum possible complexity; we ideally want something tailored exactly to our requirements, with no surplus code. 
We want different WM semantics to the existing&nbsp;</span><span style="background-color: white; font-size: 13px; line-height: 18px;"><span style="font-family: Courier New, Courier, monospace;">wl_shell</span></span><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">&nbsp;and&nbsp;</span><span style="background-color: white; font-size: 13px; line-height: 18px;"><span style="font-family: Courier New, Courier, monospace;">wl_shell_surface</span></span><span style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">, so we ideally want to throw them away and replace them with something new.</span><br /><b style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;"><br /></b><b style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;">Maybe we want Wayland, but we'll need to write an input stack, patch XWayland and the Mesa EGL platform, and redo the WM handling in all the toolkits.</b><br /><b style="background-color: white; font-family: inherit; font-size: 13px; line-height: 18px;"><br /></b><br /><h3>So, does it make sense to write a Wayland compositor?</h3><span style="font-family: inherit;"><span style="background-color: white; font-size: 13px; line-height: 18px;">At this point, it looks like we want something like Wayland, but different in almost all the details. 
It's not clear that starting with Wayland will save us all that much effort, so the upsides of doing our own thing - we can do&nbsp;</span><i style="background-color: white; font-size: 13px; line-height: 18px;">exactly</i><span style="background-color: white; font-size: 13px; line-height: 18px;">&nbsp;and&nbsp;</span><i style="background-color: white; font-size: 13px; line-height: 18px;">only</i><span style="background-color: white; font-size: 13px; line-height: 18px;">&nbsp;what we want, we can build an easily-testable code base, we can use our own infrastructure, we don't have an additional layer of upstream review - look like they'll outweigh the costs of having to duplicate effort.&nbsp;</span></span><br /><h4><span style="font-family: inherit;"><b style="background-color: white; line-height: 18px;">Therefore, Mir.</b></span></h4>Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com4tag:blogger.com,1999:blog-6126192048081719824.post-76477596051393550642009-05-12T16:20:00.003+10:002009-05-12T16:44:09.639+10:00A grab-bag of annoyanceI've been meaning to write more, or indeed at all, on this blog. In the interests of making this easier, I'll try to ease my way in with a bit of a gripe post. Always easier! So this will be deliberately more extreme than my <i>actual</i> views. With that said...<br /><br /><h4>The concept of "definition"</h4><br />Why does it seem that education academics find this so difficult? This morning's 5500 lecture featured a slide titled "Definition of School 1.0 & Web 2.0" with the text "School 1.0" and "Web 2.0" linked. Both of these links went to images, and the "definition" was derived by asking us "what sort of words do you think describe the pedagogies & teacher-student-knowledge relationships inspired by these". These are not <i>definitions</i>, damnit, and it seems that Tony doesn't really get it. 
Later in the lecture a student asked "I don't really get what you mean by 'technological determinism' and 'social determinism', could you explain?", and it didn't seem that Tony understood the question - he certainly didn't reply with a <i>definition</i> of what he meant by "technological determinism". From the context of the lecture, it seems to me that he could very well have said "technological determinism is the idea that technology is awesome, and so if you use it in some task it will make that task more awesome". However, I don't know if that's how Tony understands it, or even if he could <i>make his understanding explicit at all</i>. I find, myself, that being unable to explicitly state what I mean is indicative of my poor understanding.<br /><br />This is by no means isolated to Tony. One of our readings was an excerpt from Professor Ewing's book, the first chapter of which was titled "Towards some Definitions". In this chapter, she surveys the wide range of definitions of "curriculum" in the literature - ranging from "the list of dot-points a teacher wants to cover in the year", to "the whole sum of experience adults would like a child to receive over the course of their life". Nowhere does she suggest what <i>she</i> means by the word "curriculum", something which I feel would be useful in a book about the subject!<br /><br /><h4>On the choice of labels</h4><br />Dear academics: when choosing the label you'd like to use for your particular assessment technique, please be aware that calling it <i>authentic assessment</i> will make you look like an arrogant know-it-all. Thank you.Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com1tag:blogger.com,1999:blog-6126192048081719824.post-54400766770963367042008-10-11T16:26:00.001+11:002008-10-11T16:31:06.384+11:00The Will to Macros<br /><p>The excellent <a href="http://rptools.net/">MapTool</a> is full of useful features. 
One of which is the ability to associate macros with tokens - particularly useful in 4E D&amp;D, since the maximum number of different attacks a character can have is less than 10.</p><p><br />Sadly, the <a href="http://rptools.net/doku.php?id=parser">documentation</a> is somewhat sparse. Let's remedy that, with a worked example: writing Graham Tom's <em>basic attack</em> macro.</p><p><br />So, at its simplest, a basic attack is d20 + Str modifier + 1/2 level + proficiency vs AC, with [W] + Str mod damage. Graham Tom wields a longsword (weapon die: d8) and has Str 16 (modifier +3). This gives us:</p><p><br /></p><blockquote>Attack [d20 + 3 + 1 + 3]<br />Damage [d8 + 3]</blockquote><p></p><p><br />We can parametrise this; MapTool allows you to define the attributes of a token, so the parametrised attack looks like:</p><p><br /></p><blockquote>Attack [d20 + floor((Strength - 10)/2) + floor(Level/2) + 3]<br />Damage [d8 + floor((Strength - 10)/2)]</blockquote><p></p><p><br />This is still sub-optimal: a natural 20 on the to-hit is a critical, which does maximum damage. We can get this behaviour using MapTool's ability to assign variables, and the fact that eq(var, 20) = 1 iff var = 20 and 0 otherwise.</p><p><br />I'm not sure how to suppress output from these macros, and MapTool seems to evaluate only the first full expression in [ ]. 
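Just to double-check the branchless critical-hit arithmetic before putting it in a macro, here's the same logic as a quick Python sketch. This is plain Python, not MapTool macro syntax; the function name and parameters are my own invention, and ne is eq's complement (1 iff the values differ):

```python
import random

def attack_and_damage(strength, level, proficiency=3, weapon_max=8):
    """Mimic the macro: a saved d20 roll, an attack total, and damage
    where a natural 20 contributes max weapon damage via 0/1 factors."""
    str_mod = (strength - 10) // 2              # floor((Strength - 10)/2)
    v = random.randint(1, 20)                   # the saved roll, [v=d20]
    attack = v + str_mod + level // 2 + proficiency
    # (v == 20) plays the role of eq(v, 20), (v != 20) of ne(v, 20):
    # exactly one of the two weapon-damage terms is non-zero.
    weapon = (v == 20) * weapon_max + (v != 20) * random.randint(1, weapon_max)
    return v, attack, weapon + str_mod

# Graham Tom: Str 16 (mod +3), half-level +1, longsword (d8), proficiency +3
v, attack, damage = attack_and_damage(strength=16, level=3)
```

On a natural 20 the (v != 20) factor zeroes out the random d8, leaving the maximum 8, which is exactly what the eq/ne trick does in the macro below.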
Feel free to remedy these flaws in the macro, which ends up as:</p><p><br /></p><blockquote>Attack: Roll [v=d20] + modifiers = [v + floor((Strength-10)/2) + floor(Level/2) + 3]<br />Damage: [(eq(v,20)*8 + ne(v,20)*d8) + floor((Strength-10)/2)]</blockquote><br /><br />So we can see the unmodified roll and the final result, and the damage is calculated correctly in the event of a critical.Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com0tag:blogger.com,1999:blog-6126192048081719824.post-57965336310773791572008-09-14T21:06:00.000+10:002008-10-11T16:27:00.779+11:00Adventures in future upstream nightmares<br /><p>I clearly need to move "Write a 'How to be a good upstream' Ubuntu wiki page" closer to the top of my TODO list.</p><p><br />This piece of wrongness was seen in #ubuntu-motu:</p><p><br /><blockquote>(20.24.31| screennam)) folks, I am sent here 'cos I have a bit of software to release under a modified gpl</p><p><br />(20.24.53| screennam)) and I guess I've not done this before so I'll need some advice on making it publishable</p><p><br />(20.26.23| wgrant)) Isn't the GPL immutable?</p><p><br />(20.26.44| screennam)) yes, so what?</p><p><br />(20.27.31| wgrant)) Releasing something under a mutated version of an immutable license seems unwise.</p><p><br />(20.28.05| screennam)) not if I call it something different</p><p><br />(20.28.23| screennam)) If I call it 'custom licence' no-one will complain</p><p><br /></blockquote></p><p><br />Remember, kids: releasing your software under a modified GPL will earn you the eternal enmity of all right-thinking packagers. Smart people with law degrees argued over the wording of the GPL. Is your reworking going to be as sound?</p><p><br />Copyright is one of the most annoying and time-consuming parts of a lot of Debian packaging. 
Please, make it slightly easier for packagers to get your work into Debian and Ubuntu - use one of the wide variety of common licenses for your code!</p>Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com0tag:blogger.com,1999:blog-6126192048081719824.post-85947277664538958422007-10-18T18:51:00.000+11:002007-10-18T19:44:44.934+11:00Agriculture, Neptune, Commerce, CyclopsMy friend Bice came up to Sydney for the first time at the beginning of this week in order to burn off some of his annual leave. He's so lazy that he hadn't bothered organising any leave for the three years he's been with his company. Habits are easy to fall into and hard to break - I doubt I'd do much differently.<br /><br />Speaking of which, I have a habit of accumulating TODO items and insufficient time management to do anything about it. Currently on my plate, in rough order of priority:<br /><ul><li>Finish a paper on rational inversive geometry</li><li>Upload a fixed specto package to Debian</li><li>Add some better checks &amp; configuration to the Ubuntu Xgl package</li><li>Reverse-engineer LVDS-on-nVidia to make nouveau's XRandR 1.2 branch work on my lappy.<br /></li><li>Write a config system for specto/notifrenzy</li><li>Write an xscreensaver hack theme editor for gnome-screensaver</li><li>Package the <a href="http://www.taoframework.com/">Tao CIL OpenGL bindings</a></li><li>Update the compiz CIL plugin loader to work with current compiz</li><li>Do some work on <a href="http://launchpad.net/playtools">Playtools</a><br /></li><li>Hack on <a href="http://launchpad.net/joybot">Joybot</a><br /></li></ul>... the list goes on. 
I wish I were a <a href="http://en.wikipedia.org/wiki/Chronomancy">chronomancer</a> like Saint Germain.Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com1tag:blogger.com,1999:blog-6126192048081719824.post-83900121887458214172007-07-08T22:48:00.000+10:002007-07-09T10:41:15.054+10:00My heart is elsewhereSam's gone off to a materials science conference, where she will be presenting some of her work. This means that she's not here. Somehow, a couple of years ago this wouldn't have mattered. Wow.<br /><br />Before she left, I tried to update her Windows XP laptop. I'd forgotten how strange Windows is. I needed to install three install programs before I could actually install any updates.Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com0tag:blogger.com,1999:blog-6126192048081719824.post-85901948040058078312007-07-06T18:12:00.000+10:002007-07-06T18:43:47.458+10:00The importance of micro-optimisationsSam and I bought some wardrobes and a desk from a post-doc who's moving off to England to take up a position as lecturer. This is good: no longer will our clothes horse have to do double duty as our entire clothes storage space. The bad is, of course, that we needed to get the furniture home ourselves.<br /><br />Now, getting a removalist/furniture taxi would've cost about $140. Hiring a ute for the day cost $69. Plus $16 for insurance, which seemed prudent since I've never driven in Sydney, or driven a ute, and the last time I drove was over a year ago. Plus 1.5% stamp duty, for some reason. 
Plus petrol.<br /><br />So all up it cost around $100, an important, <i>necessary</i>, saving of $40, and now we have to lug some heavy furniture up a couple of flights of stairs.<br /><br />I commit this note to self to the boundless memory of the intertron: Next time <i>just pay someone to do it!</i>Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com0tag:blogger.com,1999:blog-6126192048081719824.post-11540234700599950662007-07-05T20:22:00.000+10:002007-07-06T18:19:26.786+10:00So, on Friday Sam and I went to see the new Transformers movie with SpockSoc. And it was good. Who'd have thought that battles between huge, city destroying robots could be so cinematic?<br /><br />I'm learning dvorak, now that I've got a lappy that I can move the keys around on. Except, curiously, for the 'b' key, which is different to every other key on the keyboard. I'm now at that awkward phase where my fingers kind of know where they're meant to go, just as long as I don't think too hard about it :)<br /><br />Things I've recently learnt:<br /><ul><br /> <li>Inversive geometry is really about the pole-polar relationship in a projective space.<br /> <li>The time taken to pack up a flat and move is dwarfed by the time taken to <b>unpack</b>.<br /> <li>My laptop bag is sufficiently waterproof to not kill my laptop in the rain. Yay!</li><br /> <li>It's not too hard to do Test Driven Development in C with the <i>check</i> package</li><br /> <li>In related news, I'm a much less proficient C programmer than I was five years ago.</li><br /> <li>Two-fingered tapping on my touchpad generates a Mouse2 click, and 3 fingered tapping generates a Mouse3 :).</li><br /></ul>Christopher Halse Rogershttps://plus.google.com/113883146362955330174noreply@blogger.com0