Blog posts tagged as 'sketching'

The project we ran in the spring with the Goldsmiths Design BA course was not ‘live’ in the sense that there was a commercial client’s needs informing the project, but it was an approximation of the approach that we take in the studio when we are working with clients around new product generation and design consultancy.

It’s a direct influence from Durrell – and techniques he used while teaching Schulze, Joe Malia and others at the RCA – and also something that is very familiar to many craftspeople – having at least some knowledge of a lot of different materials and techniques that can then inform deeper investigation, or enable more confident leaps of invention later on in the process. It also owes a lot to our friend Matt Cottam’s “What is a Switch?” brief that he’s run at RISD, Umeå, CIID and AHO…

We asked the students to engage with everyday technology and manufactured, designed goods as if they were nature.

“The Anthropocene” has been proposed by ecologists, geologists and geographers to describe the epoch marked by the domination of human influence on the Earth’s systems – seams of plastic kettles and Tesco “Bags For Life” will be discovered in millions of years’ time by the distant descendants of Tony Robinson’s Time Team.

There is no split between nature and technology in the anthropocene. So, we ask – what happens if you approach technology with the enthusiasm and curiosity of the amateur naturalist of old – the gentlemen and women who trotted the globe in the last few centuries with sturdy boots, travel trunks and butterfly nets – hunting, collecting, studying, dissecting, breeding and harnessing the nature around them?

The students did not disappoint.

Like latter-day Linnaeans, or a troop of post-digital Deptford Darwins – they headed off into New Cross and took the poundstretchers and discount DIY stores as their Galapagos.

After two weeks I returned to see what they had done and was blown away.

And – perhaps most importantly – I had the feeling that they had not only understood, but that the invention around communicating what they had learnt displayed a confidence in this ‘new nature’ that I felt would really stand them in good stead for the next part of the project, and for future projects too.

It was all great work, and lots of work – the smile didn’t leave my face for at least a week – but a few projects stood out for me.

Charlotte’s investigations of disposable cameras, Helen’s thought-provoking examination of pregnancy tests, Tom’s paper speakers (which he promised had worked!), Simon’s unholy pairings of pedometers and drills, Liboni and Adam’s thorough dissections of ultrasonic keyfinders and the brilliant effort to understand how quartz crystals regulate time by baking their own crystal, wiring it to a multimeter and whacking it with a hammer!

Hefin Jones’ deconstruction of the MagnaDoodle, and his (dramatic, hairdryer-centric) reconstruction of its workings was a particularly fine effort.

The second half of the brief asked the students to assess the insights and opportunities they had from their material exploration and begin to combine them, and place them in a product context – inventing new products, services, devices, rituals, experiences.

We’ve run this process with students before in a brief we call “Hopeful Monsters”, which begins with a kind of ‘exquisite corpse’ mixing and breeding of devices, affordances, capabilities, materials and contexts to spur invention.

We’d pinched that drawing technique way back in 2007 for Olinda from Matt Ward, head of the design course at Goldsmiths, so it only seemed fitting that he would lead that activity in a workshop in the second phase of the brief.

The students organised themselves in teams for this part of the brief, and produced some lovely varied work – what was particularly pleasing to me was that they appeared to remain nimble and experimental in this phase of the project, not seizing upon a big idea then dogmatically trying to build it, but allowing the process of making to inform the way to achieve the goals they set themselves.

We closed the project with an afternoon of presentations at The Gopher Hole (thanks to Ossie and Beatrice for making that happen!) where the teams presented back their concepts. All the teams had documented their research for the project online as they went, and many opted to explain their inventions in short films.

A special mention to the ‘Roads Mata’ team, who for me really went the extra mile in creating something that was believably-buildable and desirable – to the extent that I think my main feedback to them was they should get on Kickstarter…

There were sparks of lovely invention throughout all the student groups – some teams had more trouble recognising them than others, but as Linus Pauling once said, “To have a good idea you have to have a lot of ideas”, and that certainly wasn’t a problem.

I wonder what everyone would have come up with if we had a slightly longer second design phase to the project, or introduced a more constrained brief goal to design for. It might have enabled some of the teams to close in on something either through iteration or constraint.

Next time!

As it was, I hope that the methods the brief introduced stay with the group, and that the curiosity, energy and ability to think through making that they obviously all have grows in confidence and output through the coming years.

I mentioned in weeknotes that Denise was doing some quick character sketches for some film work we’re planning, and she kindly let me take a shot of her pencil sketches before she took them into Illustrator.

As a studio we have recently been quite preoccupied with two themes. One is new systems of time and place in interactive experiences. The second is with the emerging ecology of new artificial eyes – “The Robot Readable World”. We’re interested in the markings and shapes that attract the attention of computer vision, connected eyes that see differently to us.

We recently hit upon an idea which seems to combine both, and thought we’d talk about it today – as a ‘product sketch’ in video – hopefully to start a conversation.

Our “Clock for Robots” is something from this coming robot-readable world. It acts as dynamic signage for computers. It is an object that signals both time and place to artificial eyes.

It is a sign in a public space displaying dynamic code that is both here and now. Connected devices in this space are looking for this code, so the space can broker authentication and communication more efficiently.

The difference between fixed signage and changing LED displays is well understood for humans, but hasn’t yet been expressed for computers as far as we know. You might think about those coded digital keyfobs that come with bank accounts, except this is for places, things and smartphones.
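To make the keyfob analogy concrete – and this is purely our own illustration, not part of the sketch as filmed – the code such a sign displays could work the way those fobs do: an HMAC over a per-place secret and the current time window, TOTP-style (after RFC 6238). A minimal Python sketch, with invented names like `clock_code` and `place_secret`:

```python
import hashlib
import hmac
import struct

def clock_code(place_secret: bytes, now: float, step: int = 30) -> str:
    """TOTP-style time code (after RFC 6238): what a sign might display
    for the current 30-second window, rendered here as eight digits.
    A real sign could render it as a changing 2D barcode instead."""
    counter = int(now // step)                        # current time window
    digest = hmac.new(place_secret, struct.pack(">Q", counter),
                      hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**8:08d}"

def verify_sighting(place_secret: bytes, seen_code: str, now: float,
                    step: int = 30, drift: int = 1) -> bool:
    """Server-side check: did the phone really see the sign just now?
    Allows ±drift windows of clock slack between sign and server."""
    return any(clock_code(place_secret, now + w * step) == seen_code
               for w in range(-drift, drift + 1))
```

Line-of-sight then becomes the credential: a phone that can report the current code has demonstrably seen this sign, in this place, within roughly this 30-second window.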

Timo says about this:

One of the things I find most interesting about this is how turning a static marking like a QR code into a dynamic piece of information somehow makes it seem more relevant. Less of a visual imposition on the environment and more part of a system. Better embedded in time and space.

In a way, our clock in the cafe is kind of like holding up today’s newspaper in a photograph to prove it’s live. It is a very narrow, useful piece of data, which is relevant only because of context.

If you think about RFID technology, proximity is security, and touch is interaction. With our clocks, the line-of-sight is security and ‘seeing’ is the interaction.

Our mobiles have changed our relationship to time and place. They have radio/GPS/wifi, so we always know the time and we are never lost, but it is all wobbly and bubbly, and doesn’t have the same obvious edges we associate with places… it doesn’t happen at human scale.

Line of sight to our clock now gives us a ‘trusted’ or ‘authenticated’ place. A human-legible sense of place is matched to what the phone ‘sees’. What if digital authentication/trust was achieved through more human scale systems?

Timo again:

In the film there is an app that looks at the world but doesn’t represent itself as a camera (very different from most barcode readers for instance, that are always about looking through the device’s camera). I’d like to see more exploration of computer vision that wasn’t about looking through a camera, but about our devices interpreting the world and relaying that back to us in simple ways.

We’re interested in this for a few different reasons.

Most obviously, perhaps, because of what it might open up for quick authentication for local services. Anything that might be helped by my phone declaring ‘I am definitely here and now’ – as we’ve said, wifi access in a busy coffee shop, authentication of coupons or special offers, or foursquare event check-ins.

But, there are lots of directions this thinking could be taken in. We’re thinking about it being something of a building block for something bigger.

Spimes are an idea conceived by Bruce Sterling in his book “Shaping Things” where physical things are directly connected to metadata about their use and construction.

We’re curious as to what might happen if you start to use these dynamic signs for computer vision in connection with those ideas. For instance, what if you could make a tiny clock as a cheap solar powered e-ink sticker that you could buy in packs of ten, each with its own unique identity, that ticks away constantly. That’s all it does.

This could help make anything a bit more spime-y – a tiny bookmark of where your phone saw this thing in space and time.
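As a purely hypothetical sketch of that bookmark – a phone-side record of sticker, code, place and time (all of these names are invented for illustration):

```python
import time
from dataclasses import dataclass

@dataclass
class Sighting:
    """A tiny bookmark: where and when a phone saw a clock sticker."""
    sticker_id: str    # the sticker's unique identity
    code: str          # the time code it was showing when seen
    lat: float         # where the phone was at the time
    lon: float
    seen_at: float     # the phone's own timestamp

def bookmark(log: list, sticker_id: str, code: str,
             lat: float, lon: float) -> Sighting:
    """Append a sighting to the phone's log of spime-y encounters."""
    s = Sighting(sticker_id, code, lat, lon, time.time())
    log.append(s)
    return s
```

Accumulate enough of these sightings and any stickered object starts to carry a trail of metadata about its life – which is roughly what Sterling’s spimes ask for.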

Maybe even just out of the corner of its eye…

As I said – this is a product sketch – very much a speculation that asks questions rather than a finished, finalised thing.

We wanted to see whether we could make more of a sketch-like model, film it and publish it in a week – and put it on the blog as a stimulus to ourselves and hopefully others.

We’d love to know what thoughts it might spark – please do let us know.

Clocks for Robots has a lot of influences behind it – including but not limited to:

While parents of my acquaintance have found work-arounds, such as placing their children’s favourite apps on specific ‘pages’ of the homescreen, it’s a device bound to a MacBook or iMac, and an iTunes account – ultimately to an individual, not a small group.

While travelling last month, my wife and I managed to use the iPad as our shared device by basically signing-in and out of our Google accounts. Do-able but laborious.

Switch seems like a useful step in the direction of “non-personal computing”, allowing multiple user accounts for browsing, with a single password for each.

But I thought I’d quickly sketch something that built on the ‘magic-table’ mock-ups I’d been playing with back in the summer – looking at enhancing the passable and shareable nature of the iPad as an object in and of the household.

It’s pretty simple, and not much of a leap, frankly…

The ‘person-in-each-corner’ pattern can already be seen in iPad games such as Marble Mixer and Multipong, so this really just uses the corners of the device in tandem with the orientation sensors to select which of the – up to four* – different users wants to access their apps and settings on the device.
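For illustration only, here’s a hypothetical sketch of how that corner selection might be computed from the gravity vector the orientation sensors report – all of the names, thresholds and the flat/tilted cutoff here are invented:

```python
from typing import Optional

# Corners in a fixed order, one (up to four) user seated at each.
CORNER_ORDER = ["bottom-left", "bottom-right", "top-left", "top-right"]

def user_for_corner(gx: float, gy: float,
                    users: list[str]) -> Optional[str]:
    """Pick the user whose corner of the tablet is tilted 'down'.

    gx, gy are the gravity components along the screen's x and y axes,
    as an accelerometer would report them (roughly -1.0 to 1.0)."""
    if abs(gx) < 0.2 and abs(gy) < 0.2:
        return None                      # lying flat: nobody selected
    corner = ("bottom" if gy < 0 else "top") + \
             ("-left" if gx < 0 else "-right")
    idx = CORNER_ORDER.index(corner)
    return users[idx] if idx < len(users) else None
```

Tilt the iPad towards your own corner on the lockscreen and the device knows whose apps and settings to open.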

Activity notifications could be displayed alongside the names on the lockscreen so that you could quickly see at a glance if anything needed your attention.

And, if you wanted a little more privacy from the rest of your housemates or family, then just a standard iOS passcode dialog could be set.

That’s it really.

Just a quick sketch but something I wanted to get out of my head.

The individual nature of the UI and user-model of the iPad seems so at odds to me with its form-factor, the shareability of its screen technology and its emergent context of use that I can imagine something (much more elegant than this) coming from Apple in the near future.

Of course, they may just want to sell us all one each…

* as well as the four-user limit being a simple mapping to the number of corners the thing has, this seems like a very Apple constraint to me…