This is my A2 (systems and control) technology coursework. It's been a while since I wrote it, so I figured I should publish it as a reference, not only for those doing the course but more generally as a moderate example of report writing and of a project. I hope you enjoy it.

I was talking to an engineer at a small company the other day, and they were facing a pretty interesting project. I won't go into specifics, but a few things made it a lot more interesting: they had five weeks, they were to make around 200 units, the system had to work reliably, it couldn't be sourced off the shelf, and it had to be delivered for under a grand apiece.

Ouch. I really did not envy them having to deliver that sort of product; I'd love to tackle the problem, but not with those sorts of restrictions. I started thinking: "Ooh, I think I saw something like this on x blog" and "this might be a good application for _ sort of technology". Basically, I knew that somewhere on the internet, someone would already have started solving this problem: almost certainly not a complete solution, but ideas and systems to get it started.
So I asked: "Do you ever look at hobbyist projects when you are faced with something like this?" Answer: "Not really, no." I'm not sure how I would go about this sort of thing half as effectively without having picked up masses of random, and often useless, projects from the web, all made and carefully documented by skilled professionals or kids in their spare time.

Not only do they provide ideas, but the solutions are made under really good constraints: most hobbyists don't have access to massive amounts of fancy fab equipment, and they are shooting for low quantity and easy manufacture. They have basically *no* R&D budget and often not much time. Maybe not a perfect match for your desired end product, but I'm sure you can see where the similarities lie. The problem is that firms don't document failures and experiments; of course they don't, there is no reason why they would.

And that is exactly the problem: firms find solutions to problems and discard them for various reasons, hobbyists do too, and there is very little communication between the two sides. What I'd like is a world just a little more integrated, where smarter designs come about from everyone keeping their eyes, and their development, just a little more open.

The documentation for the upeth project is coming along. This is a handy pin-out sheet including some useful dimensions and other information. It is also a little sneak peek of the type of design going into the website.

I'm a memory-stick person. When I have it with me, it allows me to have my files, and some programs (Inkscape, PChat, OpenOffice, Notepad++, the list goes on and on), with me wherever and whenever I like. But being a small piece of plastic, metal and glass, it's rather prone to being lost or broken, necessitating frequent and thorough backups. Despite the flaws in the backup software (SyncBack SE) to do with "version control", it really does work, and I've never lost more than about a day's work, or lost the capability to work for more than a day.

People say that the cloud is the future: the ability to access, modify and globally update any files one could possibly want, anywhere, any time, over the internet. This is surely something that "omnipresent computers" (the series) should be all over. Connectivity, ease of use and reliability are surely all improved? Well, yes and no: outsourcing the storage of such sensitive data can be dangerous, especially as one has no control over deletion, duplication or proper access control. One lacks the option to run version control alongside plain drives, or to triplicate some files. I may be a little of a hypochondriac when it comes to losing data, but this sort of flexibility is something that should be available, especially if the data is your livelihood or personally important. Who says online services have any duplication of data?

Back to the point: Save As. Why do I have to navigate a file system to save anything? I've long wanted a system where one attaches files to particular tags or projects, easily searchable and not bound to a rigid file hierarchy. I could then tie a version-control system to all files under a tag or project, call them up, do cross-linked searches, and so on.
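To make the idea concrete, here is a minimal sketch of what such a tag-based store might look like, assuming a simple in-memory index (all names here are hypothetical, not any real API):

```python
# Minimal sketch of a tag-based file store: files are addressed by tags,
# not by their position in a directory tree. Purely illustrative.

class TagStore:
    def __init__(self):
        self.tags = {}  # tag -> set of file ids

    def store(self, file_id, *tags):
        # "Save As" becomes "store": attach the file to each tag.
        for tag in tags:
            self.tags.setdefault(tag, set()).add(file_id)

    def search(self, *tags):
        # Cross-linked search: files carrying every requested tag.
        sets = [self.tags.get(t, set()) for t in tags]
        return set.intersection(*sets) if sets else set()

store = TagStore()
store.store("report_v3.odt", "coursework", "systems")
store.store("pinout.svg", "upeth", "systems")
print(store.search("systems", "coursework"))  # → {'report_v3.odt'}
```

Version control would then hang off each tag's file set rather than off a directory, which is the real win of the scheme.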

This system would still need the human-computer interface to become more casual, and it's a particularly AI- and voice-control-friendly way of sorting files, but I can see it being a major change in the stagnating area of consumer file management. Combine that with a service through which one could quickly and easily access one's files from a home server or a separate server by a similar method, including, say, adding guest access to particular tags or files.

I've written previously about CelTex, and the lovely interface afforded by a stored project file; add to that the power of visually linked branches, and one can see a future where everyone's files are neatly organised without anyone having to remember to put things in the right place. Maybe soon we'll see "Save As" replaced with "Store", and hand the book-keeping over to the computer.

Although not many of my project ideas make it off the ground, I am pushing to have achieved at least something this summer. Hence I am announcing a new project I am doing as part of Carrack Measurement Technology (as well as making a new website). The project is basically a really easy and cheap way of hooking up a micro (an STM32) to the Internet over Ethernet. I want to do a couple of little things with it, mostly Internet-of-Things based. I hope to put it up for sale on Carrack, but don't hold your breath: I have a lot of getting started to do. If you want just the hardware, soldered up or bare, I can probably organise that: comment or PM me. Here is a 3D render of my PCB, which is going off to be made soonish:

Once more into the world of the future and once more into the world of terrifying but strangely desirable paths.

Imagine this: you've got a great idea for a drawing. You sit at your desk and call up some pictures to remind yourself of the exact details of the hyper-modern weapon you are equipping your space fish with (*cough* Avengers *cough*). Then you draw, and draw, and then decide that the laser-equipped, terrifying space bass is really rather good and the internet will love it. So you post.

Now, at the moment, I have a desk with screens, and I could draw, if I had any talent at it. Once I had finished, I'd turn around, walk over to the scanner, hit the button, select where I wanted to save/send/print it (NB: future OC on that whole fuss), then email it to myself (different computer and all) and then finally share it. And I'm okay with that, because I hardly ever do share things, and I can't draw… I digress.

Cameras are getting pretty decent these days: tiny image sensors capable of taking reasonably good images at resolutions which allow pretty good crops, and these sensors will only get better. At some point the processing might actually catch up, but that is a little way off. Imagine if, while you were drawing, your computer videoed all of your pen strokes and snapshotted each time you moved your hand away, so that when you decided to post, you didn't even need to scan: it knew what "the drawing" or "it" was, and went ahead and posted it on your angsty drawings blog. Really, really high resolution, beautifully lit, and with a complete "revision history" of most of the drawing's progress. You notice a problem with some of the shading? You could edit an area without actually having to rub anything out… weird stuff.

I love this idea: your work area has cameras trained on it, from enough different angles to capture 3D hand movement and to digitise objects and documents, without any trouble or special hardware. I want to turn a rough dimension into an actual measurement? I hold my hands to the size and say "measure", and it's done. I want a Skype call with decent-quality video? It can do that too!

I can think of hundreds of uses for this always-on image processing, as we are already seeing with motion-controlled televisions or puppet shows. But there are some major problems. Mainly, the processing power needed to take a 14-megapixel sensor at 50 fps, build a depth map (or even a full model), work out where the person is, recognise the gesture and react in real time (while running your favourite program) is something I doubt any single computer could pull off. Using back-of-envelope calculations, that is 6.3 gigabytes a second of data pouring in (3 cameras, 24-bit colour, 14 MPx, 50 fps). That's raw data, let alone the hundreds of filters one has to run on it. Not a chance.
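The back-of-envelope figure above can be reproduced directly:

```python
# Raw data rate for the proposed camera rig: 3 cameras, 14 megapixels,
# 24-bit colour (3 bytes per pixel), 50 frames per second.
cameras = 3
pixels = 14e6
bytes_per_pixel = 3  # 24-bit colour
fps = 50

rate = cameras * pixels * bytes_per_pixel * fps  # bytes per second
print(rate / 1e9)  # → 6.3 (gigabytes per second, before any filtering)
```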

Always-on high-definition cameras also bring big personal-privacy problems, especially with the inevitable level of networking any system using them would have. There are a few ideas I came up with to help quell those feelings of insecurity: two modes, implemented in hardware on the cameras themselves: high resolution at a slow frame rate (which also addresses the bandwidth problem), and low resolution at a fast frame rate, indicated by a light or some infallible indicator on the actual camera.

Another feature would have to be firewalling, even inside individual applications: the camera and any network-connected components would have to be separated by a strong data wall, limiting the bandwidth between the two so that no video stream could be passed. Although this wouldn't stop everything, it would help stop live surveillance. Any video-chat services or online games could have express user permission to break this, but ideally that would be clearly stated somewhere when in use. It sounds messy to implement, but in a world of crazily powerful computers, such a problem would largely dissolve.
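The "data wall" described above amounts to rate-limiting the camera-to-network channel; a token bucket is one standard way to sketch that idea (this is an illustrative model with made-up numbers, not a real firewall API):

```python
# Token-bucket sketch of the camera-to-network "data wall": the bucket
# refills slowly, so bursts the size of a video stream are refused while
# small event messages (gestures, measurements) pass. Illustrative only.

class DataWall:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes

    def tick(self, seconds):
        # Refill tokens as time passes, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def allow(self, nbytes):
        # Permit a transfer only if the bucket holds enough tokens.
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

wall = DataWall(rate_bytes_per_s=10_000, burst_bytes=50_000)
print(wall.allow(1_000))      # small gesture event: True
print(wall.allow(5_000_000))  # a second of raw video: False
```

A video-chat service granted express permission would simply bypass the wall for its own stream, which keeps the exception explicit rather than buried in software.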

Getting computers to see usefully is something I'd dearly like to see, but we have a long way to go (mainly in processing) before such a useful piece of software and hardware comes forward.

The future of technology is something that occupies me, partly because it's what will eventually be my everyday pursuit, and partly because current computers are just unsatisfactory. We should be living in the future, but we seem to be ever stuck in the slowly advancing present (a statement which makes no sense, I'll grant). Anyway, here is the first instalment in my n-part series.

The Google Glass-es prototype/first model

Google recently announced its "Glass" project: a device and system which provide augmented reality through a pair of fairly futuristic-looking glasses. Aside from the usual arguments about letting a computer giant (or in fact any company) have access to a high-resolution camera pointing at everything you see, and at your eye, while displaying content to you almost continually (an issue I'll be sure to write about in a future OC), I can see several problems:

Handy solve-all computer systems have been around for ages now, from Google Mail's automatic calendar integration to the ever-popular Siri, and they are great, and probably really useful (I've never really used them day to day, I'll admit). But they have a few big problems of their own, the main and hardest to solve being their limited responses: as the late Dr Lanning in I, Robot (the film) says, "My responses are limited, you must ask the right questions." All of these systems suffer the same problem: they use templates to decide how to go about something. Each use case is programmed in explicitly, so even if a little automatic re-phrasing of the question or request is applied, if you asked it to send an email of all the texts you had sent [insert person] to [another person], it would be stumped, unless the makers of the system had had some pretty far-out and comprehensive ideas. Funnily enough, this type of system makes the whole scenario much simpler to build: if one has a set of templates, keywords and re-phrasing techniques which all link explicitly to a method of communication or research, one can just think up a load of use cases and implement them, without much performance needed (after all, these things are meant to be used, so there is no point making them take months to get directions to your local corner shop).

“[How would I get, directions] to [the nearest] [Place, category of place, address]”

And of course such systems do know how to deal with those sorts of requests, and this is all fine for most of the things we do, until we actually want something out of the ordinary done. You shouldn't have to know the limitations of a system while using it, and this has been the bane of such systems forever (along with some other issues involving voice-to-text capability and "always-on technology", a topic I'll go into in another OC).
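The template style quoted above can be sketched as a plain pattern lookup; this is a toy illustration of the idea, not how any real assistant is implemented, and the patterns and handlers are invented:

```python
import re

# Toy template matcher in the style described above: each use case is an
# explicit pattern tied to a handler. Anything outside the templates is
# simply not understood. Illustrative only.

TEMPLATES = [
    (re.compile(r"(?:how would i get |give me )?directions to (?:the nearest )?(.+)", re.I),
     lambda m: f"routing to: {m.group(1)}"),
    (re.compile(r"what(?:'s| is) the weather in (.+)", re.I),
     lambda m: f"weather lookup: {m.group(1)}"),
]

def handle(request):
    for pattern, action in TEMPLATES:
        m = pattern.fullmatch(request.strip())
        if m:
            return action(m)
    return "my responses are limited"  # the Dr Lanning problem

print(handle("Directions to the nearest corner shop"))  # routing works
print(handle("email Alice every text I sent Bob"))      # stumped
```

However many patterns you pile in, any request the makers never anticipated falls straight through to the catch-all, which is exactly the limitation being described.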

Seemingly the only way of sorting this out is to have an intelligent, AI-driven computer system which can wheedle its way into file systems, networks, programs and APIs, giving it pretty much unlimited control over whatever you might want to access. Hardly a comforting proposition, but one which would give the best end results.

So maybe that's the way it will go: computers actually doing a good job of everything-control, rather than being able to tell you where to bury bodies but not where the nearest Michelin-starred restaurants are.

Recently I feel the blog has been host to rather too many cooking posts for a techy-meta type blog, so I've split it in two: one for food and cooking (searingwater.wordpress.com) and this one for musings.

If you liked the cooking bits, I’d strongly advise subscribing to Searing Water, or if you liked the musings, stay here…or even go with both!

Thank you for reading as ever.

Around 1970, the law known as Moore's Law was formulated: the simple observation that processing power would double, and price halve (or guidelines of that kind), every 12 to 18 months. A phenomenal prediction at the time, but one that has held amazingly well so far. We see a new range of amazing processors come out every year, but recently the push has been for parallelism rather than sheer silicon speed: for density over yield and integrity. This means that markers such as transistors per chip keep going up, but actual user experience does not necessarily follow (after all, operating systems and programs cannot be custom-fitted to a particular parallel architecture while staying general enough for consumers).

The problem is that we have hit a wall: silicon just isn't fast enough. It pretty much tops out at around 8.3 GHz (or so the overclockers seem to have proved). While that might be pretty fast, it means that to get more power, one has to have more die area. On top of this, we are hitting the limits of just how small a single transistor can be made: about 5 nm. So what the world really needs is a completely new technology base: something orders of magnitude faster, smaller and lower power… And we are starting to see such technologies emerging in labs around the world; some are even beginning to make real progress, so much so that we may see them in a decade or so…
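The compounding in Moore's Law is easy to check, assuming a clean doubling at each end of the quoted 12-to-18-month range:

```python
# Compounding under Moore's Law: a doubling every 12 to 18 months
# turns into a startling multiple over a decade.
def growth(years, months_per_doubling):
    return 2 ** (years * 12 / months_per_doubling)

print(round(growth(10, 18)))  # ≈ 102x over a decade at the slow end
print(round(growth(10, 12)))  # 1024x at the fast end
```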

A similar problem is that of space travel: it's been routine for a few decades now, but it is in no way cheap or easy; it takes thousands of tons of fuel, costing millions, to propel tons of satellite into orbit. To, say, take people to Mars and back, or pull off other such feats, we don't need a little more energy and some clever design: we need loads more energy and loads less weight, and we need systems which run off practically no power. We need an order of magnitude.

Labs at the moment, as in semiconductor design, are also discovering new materials (interestingly, based on carbon again) which offer strength, weight and other properties far beyond what aluminium and carbon fibre currently deliver. But this brings a whole new set of accessibility problems: who could drill something as strong as diamond with a hand drill? Who could grow crystals from an insanely complicated solution in their garage? What start-up could afford very-high-cost manufacturing just to keep up with the big companies? At the moment the technology industries are split, but in hardware, enthusiasts can mostly match big companies in terms of materials: maybe not in reproducibility, but in ease of prototyping, certainly.

So what will the future of engineering hold? Probably a few very able companies with patented super-materials, licensed to medium-sized manufacturers… It looks pretty bleak, especially considering how many massive developments have come from little companies.

Energy use, too, needs to be massively reduced if we are to continue living as we do. The current trend is to insulate a bit more, build some renewable power stations and charge more for fuel, and these approaches get us some way towards the targets set. But they go nowhere near the real problem: the amount of energy and material the average person in a developed country uses is way, way more than is ever sustainable. In energy we need an order-of-magnitude leap in how little the user consumes: not reducing the impact of each unit, but reducing the units. Our way of living is totally unsustainable, and we need to cut impacts by tens or even hundreds of times.

I leave you with this thought: “If humanity survives the next 300 years, we will survive the next 1500”