Category Archives: the future of

A zero carbon air travel solution. Well, most of the bits would be made of carbon materials, but it wouldn’t emit any CO2.

The pic says it all. A linear solar farm suspended in the high atmosphere to provide an IT platform for sensors, comms and other functions often accomplished by low-orbit satellites. It would float up there thanks to being fixed to a graphene foam base layer that can be made lighter than helium (my previous invention, see https://timeguide.wordpress.com/2013/01/05/could-graphene-foam-be-a-future-helium-substitute/; the foam has since been prototyped and proven to be extremely resilient to high pressures too). Ideally, it would go all the way around the world, in various inclinations at different altitudes to provide routes to many places. Carbon materials are also incredibly strong, so the line can be made as strong as can reasonably be required.

The flotation layer also supports a hypersonic linear induction motor that could provide direct propulsion to a hypersonic glider or to electric fans on a powered plane. Obviously this could also provide a means of making extremely low earth orbit satellites that continuously circumnavigate the ring.

I know you’re asking already how the planes get up there. There are a few solutions. Tethers could come all the way down to ground level at airports, and electric engines would be used to get to altitude, where the plane would pick up a sled-link.

Alternatively, stronger links to the ground would allow planes to be pulled up by sleds, though this would likely be less feasible.

Power levels? Well, each engine on a Boeing 777 generates about 8.25MW, so 16.5MW for the pair. A high-altitude solar cell, above the clouds, could generate 300W per square metre. So a 777-equivalent plane needs 55km of panels if the line is just one metre wide. That means planes need to be at least that distance apart, but since that equates to around a minute at hypersonic speed, that is no barrier at all.
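As a sanity check, the sizing arithmetic can be sketched in a few lines. The figures are my reading of the assumptions above: 8.25MW per engine with two engines, 300W per square metre of panel, a one-metre-wide line, and a cruise speed of roughly 3,300km/h (around Mach 2.7 at altitude):

```python
# Back-of-envelope sizing for the solar line. All figures are assumptions
# from the text: 8.25 MW per engine, two engines, 300 W/m^2 panel output
# above the clouds, a 1 m wide panel line, ~3,300 km/h cruise.

ENGINE_POWER_W = 8.25e6
ENGINES = 2
PANEL_W_PER_M2 = 300
LINE_WIDTH_M = 1.0
CRUISE_KM_H = 3300  # roughly Mach 2.7 at altitude (assumed)

power_needed_w = ENGINE_POWER_W * ENGINES                  # 16.5 MW
panel_area_m2 = power_needed_w / PANEL_W_PER_M2            # 55,000 m^2
line_length_km = panel_area_m2 / LINE_WIDTH_M / 1000       # 55 km per plane

minutes_between_planes = line_length_km / CRUISE_KM_H * 60  # about a minute

print(line_length_km, minutes_between_planes)
```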

If you still doubt this, the Hyperloop was just a crazy idea a century ago too.

One of the first ‘facts’ I ever learned about nature was that there were a million species of beetle. In the Google age, we know that ‘scientists estimate there are between 4 and 8 million’. Well, still lots then.

Technology lets us control them. Beetles provide a nice platform to glue electronics onto, so they tend to fall victim to cybernetics experiments. The important factor is that beetles come with a lot of built-in capability that is difficult or expensive to build using current technology. If they can be guided remotely by overriding their own impulses, or even by misleading their sensors, then they can be used to take sensors into places that are otherwise hard to penetrate. This could be for finding trapped people after an earthquake, or getting a dab of nerve gas onto a president. The former certainly tends to be the favored official purpose, but on the other hand, the fashionable word in technology circles this year is ‘nefarious’. I’ve read it more in the last year than in the previous 50, albeit I hadn’t learned to read for some of those. It’s a good word. Perhaps I just have a mad scientist brain, but almost all of the uses I can think of for remote-controlled beetles are nefarious.

The first properly publicized experiment was in 2009, though I suspect there were many unofficial experiments before then.

Big beetles make it easier to do experiments, since they can carry up to 20% of their body weight as payload, and it is easier to find and connect to things on a bigger insect. But once the techniques are well developed and miniaturization has integrated everything down to a single chip with low power consumption, we should expect great things.

For example, a cloud of redundant smart dust would make it easier to connect to various parts of a beetle just by getting it to take flight in the cloud. Bits of dust would stick to it and self-organisation principles and local positioning can then be used to arrange and identify it all nicely to enable control. This would allow large numbers of beetles to be processed and hijacked, ideal for mad scientists to be more time efficient. Some dust could be designed to burrow into the beetle to connect to inner parts, or into the brain, which obviously would please the mad scientists even more. Again, local positioning systems would be advantageous.

Then it gets more fun. A beetle has its own sensors, but signals from those could be enhanced or tweaked via cloud-based AI so that it can become a super-beetle. Beetles traditionally don’t have very large brains, so they can be added to remotely too. That doesn’t have to be using AI either. As we can also connect to other animals now, and some of those animals might have very useful instincts or skills, then why not connect a rat brain into the beetle? It would make a good team for exploring. The beetle can do the aerial maneuvers and the rat can control it once it lands, and we all know how good rats are at learning mazes. Our mad scientist friend might then swap over the management system to another creature with a more vindictive streak for the final assault and nerve gas delivery.

So, Coleoptera Nefarius then. That’s the cool new beetle on the block. And its nicer but underemployed twin Coleoptera Benignus I suppose.

Time for a second alphabetic ‘The future of’ set. Air is a good starter.

Air is mostly a mixture of gases, mainly nitrogen and oxygen, but it also contains a lot of suspended dust, pollen and other particulates, flying creatures such as insects and birds, and of course bacteria and viruses. These days we also have a lot of radio waves, optical signals, and the cyber-content carried on them. Air isn’t as empty as it seems. But it is getting busier all the time.

Internet-of-things, location-based marketing data and other location-based services and exchanges will fill the air digitally with fixed and wandering data. I called that digital air when I wrote a full technical paper on it and I don’t intend to repeat it all now a decade later. Some of the ideas have made it into reality, many are still waiting for marketers and app writers to catch up.

The most significant recent addition is drones. There are already lots of them, in a wide range of sizes from insect size to aeroplane size. Some are toys, some are airborne cameras for aerial photography, monitoring and surveillance, and increasingly they are appearing for sports photography and tracking or other leisure pursuits. We will see a lot more of them in coming years. Drone-based delivery is being explored too, though I am skeptical of its likely success in built-up domestic areas.

Personal swarms of follower drones will become common too. It’s already possible to have a drone follow you and keep you on video, mainly for sports uses, but as drones become smaller, you may one day have a small swarm of tiny drones around you, recording video from many angles, so you will be able to recreate events from any time in an entire 3D area around you, a 3D permasuperselfie. These could also be extremely useful for military and policing purposes, and it will make the decline of privacy terminal. Almost everything going on in public in a built up environment will be recorded, and a great deal of what happens elsewhere too.

We may see lots of virtual objects or creatures once augmented reality develops a bit more. Some computer games will merge with real-world environments, so we’ll have aliens, zombies and various mythical creatures from any game populating our streets and skies. People may also use avatars that fly around like fairies or witches or aliens or mythical creatures, so they won’t all be AI entities; some will have direct human control. And then there are buildings that might also have virtual appearances, some with parts that float around, or even entire floating cities, like the buildings and city areas in the game BioShock Infinite.

Further in the future, it is possible that physical structures might sometimes levitate, perhaps using magnets, or lighter than air construction materials such as graphene foam. Plasma may also be used as a building material one day, albeit far in the future.

Today when you look up during the day you typically see various weather features, the sun, maybe the moon, a few birds, insects or bats, maybe some dandelion or thistle seeds. As night falls, stars, planets, seasonal shooting stars and occasional comets may appear. To those we can add human contributions such as planes, microlights, gliders and helicopters, drones, occasional hot air balloons and blimps, helium party balloons, kites and at night-time, satellites, sometimes the space station, maybe fireworks. If you’re in some places, missiles and rockets may be unfortunate extras too, as might be the occasional parachutist or someone wearing a wing-suit or on a hang-glider. I guess we should add occasional space launches and returns too. I can’t think of any more but I might have missed some.

Drones are the most recent addition and their numbers will increase quickly, mostly for surveillance purposes. When I sit out in the garden, since we live in a quiet area, the noise from occasional microlights and small planes is especially irritating because they fly low. I am concerned that most of the discussions on drones don’t tend to mention the potential noise nuisance they might bring. With nothing between them and the ground, sound will travel well, and although some are reasonably quiet, others might not be, and the noise might add up. Surveillance, spying and prying will become the biggest nuisances though, especially as miniaturization continues to bring us many insect-sized drones that aren’t noisy and may be almost undetectable visually. Privacy in your back garden, or in the bedroom with unclosed curtains, could disappear. They will make effective distributed weapons too.

Adverts don’t tend to appear in the sky except on blimps, and those are rare visitors. A drone was used this week to drag a national flag over a football game. In the Batman films, Batman is occasionally summoned by shining a spotlight with a bat symbol onto the clouds. I forget which film used the moon to show an advert. It is possible via a range of technologies that adverts could soon be a feature of the sky, day and night, just like in Blade Runner. In the UK, we are now getting used to roadside ads, however unwelcome they were when they first arrived, though they haven’t yet reached US proportions. It will be very sad if the sky is hijacked as an advertising platform too.

I think we’ll see some high altitude balloons being used for communications. A few companies are exploring that now. Solar powered planes are a competing solution to the same market.

As well as tiny drones, we might have bubbles. Kids make bubbles all the time but they burst quickly. With graphene, a bubble could prevent helium escaping or even be filled with graphene foam, then it would float and stay there. We might have billions of tiny bubbles floating around with tiny cameras or microphones or other sensors. The cloud could be an actual cloud.
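A rough plausibility check supports this. Using sea-level densities for air and helium and the commonly quoted areal density of about 0.77 milligrams per square metre for monolayer graphene, a helium-filled graphene bubble only needs to be a couple of microns across before its buoyancy exceeds the weight of its skin:

```python
import math

# Assumed figures: sea-level densities, monolayer graphene areal density.
RHO_AIR = 1.225           # kg/m^3
RHO_HE = 0.1786           # kg/m^3
GRAPHENE_AREAL = 7.6e-7   # kg/m^2 (~0.77 mg per square metre)

def net_lift_kg(radius_m, layers=1):
    """Buoyant lift minus shell mass for a helium-filled graphene bubble."""
    volume = 4 / 3 * math.pi * radius_m**3
    area = 4 * math.pi * radius_m**2
    return (RHO_AIR - RHO_HE) * volume - GRAPHENE_AREAL * layers * area

# Smallest monolayer bubble that floats: set lift equal to shell weight,
# which gives r = 3 * areal_density / (rho_air - rho_helium).
r_min = 3 * GRAPHENE_AREAL / (RHO_AIR - RHO_HE)
print(r_min)  # a couple of microns
```

On those numbers a one-micron bubble still sinks but a ten-micron one floats, which fits the idea of billions of tiny sensor-carrying bubbles drifting about.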

And then there’s fairies. I wrote about fairies as the future of space travel.

I promised in my last blog to do one on the dimensions of cyberspace. I made this chart 15 years ago, in two parts for easy reading, but the ones it lists are still valid and I can’t think of any new ones to add right now, but I might think of some more and make an update with a third part. I changed the name to virtuality instead because it actually only talks about human-accessed cyberspace, but I’m not entirely sure that was a good thing to do. Needs work.

The chart has 14 dimensions (control has two independent parts), and I identified some of the possible points on each dimension. As dimensions are meant to be, they are all orthogonal, i.e. independent of each other, so you can pick any point on any dimension and combine it with any from each other. Standard augmented reality and pure virtual reality are just two of the 2.5 x 10^11 potential combinations above. There are many more possible; this was never meant to be exhaustive, and even two more columns makes it 10 trillion combos. At that rate, if every person in the world tried a different one every minute, it would take a whole day to visit them all even briefly. Already I can see that one more column could be ownership, another could be network implementation, another could be quality of illusion. What others have I missed?
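For anyone who wants to play with the numbers, here is a sketch of the combinatorics. The per-dimension point counts below are hypothetical, chosen only so that 14 dimensions multiply out to roughly the 2.5 x 10^11 figure; the real chart’s counts differ:

```python
from math import prod

# Hypothetical per-dimension point counts: eight dimensions with 7 points
# and six with 6, giving roughly 2.7e11 combinations over 14 dimensions.
points_per_dimension = [7] * 8 + [6] * 6
combos = prod(points_per_dimension)

WORLD_POPULATION = 7.3e9
minutes_each = combos / WORLD_POPULATION      # ~37 minutes at one per minute

# Two extra columns of ~6 points each push it towards 10 trillion, and
# then sampling them all really does take everyone about a day.
combos_16 = combos * 6 * 6
days_each = combos_16 / WORLD_POPULATION / (60 * 24)

print(combos, round(minutes_each), round(days_each, 2))
```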

I recently acquired a point-and-click thermometer for Futurizon, which gives an instant reading when you point it at something. I will soon know more about the world around me, but any personal discoveries I make are quite likely to be well known to science already. I don’t expect to win a Nobel prize by discovering breaches of the second law of thermodynamics, but that isn’t the point. The thermometer just measures the emission from a particular point in a particular frequency band, which indicates what temperature it is. It cost about £20, a pretty cheap stimulation tool to help me think about the future by understanding new things about the present. I already discovered that my computer screen doubles as a heater, but I suspected that already. Soon, I’ll know how much my head warms when I think hard, and for the futurology bit, where the best locations are to put thermal IoT stuff.
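The principle is easy to sketch. A real device measures a narrow infrared band and applies calibration corrections, but an idealised whole-spectrum version just inverts the Stefan-Boltzmann law (the 0.95 emissivity below is a typical assumed value, not my gadget’s actual setting):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_radiance(radiated_w_per_m2, emissivity=0.95):
    """Invert M = emissivity * sigma * T^4 to recover temperature in kelvin."""
    return (radiated_w_per_m2 / (emissivity * SIGMA)) ** 0.25

# A surface radiating about 450 W/m^2 at emissivity 0.95 sits near 302 K,
# i.e. a warm room-temperature object such as a computer screen.
t_kelvin = temperature_from_radiance(450)
print(t_kelvin)
```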

Now that I am discovering the joys of remote sensing, I want to know so much more though. Sure, you can buy satellites for a billion pounds that will monitor anything anywhere, and for a few tens of thousands you can buy quite sophisticated lab equipment. For a few tens, not so much is available, and I doubt the tax man will agree that Futurizon needs a high-end oscilloscope or mass spectrometer, so I have to set my sights low. The results of this blog justify the R&D tax offset for the thermometer. But the future will see drops in costs for most high technologies, so I also expect to get far more interesting kit cheaply soon.

Even starting with the frequent assumption that in the future you can do anything, you still have to think what you want to do. I can get instant temperature readings now. In the future, I may also want a full absorption spectrum, color readings, texture and friction readings, hardness, flexibility, sound absorption characteristics, magnetic field strength, chemical composition, and a full range of biological measurements, just for fun. If Spock can have one, I want one too.

But that only covers reality, and reality will only account for a small proportion of our everyday life in the future. I may also want to check on virtual stuff, and that needs a different kind of sensor. I want to be able to point at things that only exist in virtual worlds. It needs to be able to see virtual worlds that are (at least partly) mapped onto real physical locations, and those that are totally independent and separate from the real world. I guess that is augmented reality ones and virtual reality ones. Then it starts getting tricky because augmented reality and virtual reality are just two members of a cyberspace variants set that runs to more than ten trillion members. I might do another blog soon on what they are, too big a topic to detail here.

People will be most interested in sensors to pick up geographically linked cyberspace. Much of the imaginary stuff is virtual worlds in computer games or similar, and many of those have built-in sensors designed for their spaces. So, my character can detect caves or forts or shrines from about 500m away in the virtual world of Oblivion (yes, it is from ages ago but it is still enjoyable). Most games have some sort of sensors built-in to show you what is nearby and some of its properties.

Geographically linked cyberspace won’t all be augmented reality because some will be there for machines, not people, but you might want to make sensors for it all the same, for many reasons, most likely for navigating it, debugging, or for tracking and identifying digital trespass. The last one is interesting. A rival company might well construct an augmented reality presence that allows you to see their products alongside ones in a physical shop. It doesn’t have to be in a properly virtual environment, a web page is still a location in cyberspace and when loaded, that instance takes on a geographic mapping via that display so it is part of that same trespass. That is legal today, and it started many years ago when people started using Amazon to check for better prices while in a book shop. Today it is pretty ubiquitous. We need sensors that can detect that. It may be accepted today as fair competition, but it might one day be judged as unfair competition by regulators for various reasons, and if so, they’ll need some mechanism to police it. They’ll need to be able to detect it. Not easy if it is just a web page that only exists at that location for a few seconds. Rather easier if it is a fixed augmented reality and you can download a map.

If for some reason a court does rule that digital trespass is illegal, one easy (though expensive) way of solving it would be to demand that all packets carry a geographic location, which of course the site would know when the person clicks on that link. To police that, turning off location would need to be blocked, or, if it is turned off, sites would not be permitted to send you certain material that might not be permitted at that location. I feel certain there would be better, cheaper and more effective solutions.

I don’t intend to spend any longer exploring details here, but it is abundantly clear from just inspecting a few trees that making detectors for virtual worlds will be a very large and diverse forest full of dangers. Who should be able to get hold of the sensors? Will they only work in certain ‘dimensions’ of cyberspace? How should the watchers be watched?

The most interesting thing I can find though is that being able to detect cyberspace would allow new kinds of adventures and apps. You could walk through a doorway that also happens to double as a portal between many virtual universes. And you might not be able to make that jump in any other physical location. You might see future high street outlets that are nothing more than teleport chambers for cyberspace worlds. They might be stuffed with virtual internet-of-things things, and not one of them physical. Now that’s fun.

We have all had to sit through talks where the speaker thinks that using lots of points that start with the same letter is somehow impressive. During one such talk, I got bored and produced this one letter fully comprehensive MBA. Enjoy:


I occasionally do talks on future TV and I generally ignore current companies and their recent developments because people can read about them anywhere. If it is already out there, it isn’t the future. Companies make announcements of technologies they expect to bring in soon, which is the future, but they don’t tend to announce things until they’re almost ready for market so tracking those is no use for long term futurology.

Thanks to Pauline Rigby on Twitter, I saw the following article about Dolby’s new High Dynamic Range TV:

High dynamic range allows light levels to be reproduced across a high dynamic range. I love tech, terminology is so damned intuitive. So hopefully we will see the darkest blacks and the brightest lights.

It looks like a good idea! But it won’t be their last development. We hear that the best way to predict the future is to invent it, so here’s my idea: textured pixels.

As they say, there is more to vision than just resolution. There is more to vision than just light too, even though our eyes can only make images from incoming photons and human eyes can’t even differentiate their polarisation. Eyes are not just photon detectors, they also do some image pre-processing, and the brain does a great deal more processing, using all sorts of clues from the image context.

Today’s TV displays mostly use red, blue and green LCD pixels back-lit by LEDs, fluorescent tubes or other lighting. Some newer ones use LEDs as the actual pixels, demonstrating just how stupid it was to call LCD TVs with LED back-lighting LED TVs. Each pixel that results is a small light source that can vary in brightness. Even with the new HDR that will still be the case.

Having got HDR, I suggest that textured pixels should be the next innovation. Texture is a hugely important context for vision. Micromechanical devices are becoming commonplace, and some proteins are moving into nano-motor technology territory. It would be possible to change the direction of a small plate that makes up the area of the pixel. At smaller scales, ridges could be created on the pixel, or peaks and troughs. Even reversible chemical changes could be made. Technology can go right down to nanoscale, far below the ability of the eye to perceive it, so matching the eye’s capabilities to discern texture should be feasible in the near future. If a region of the display has a physically different texture than other areas, that is an extra level of reality that the eye can perceive. It could appear glossy or matt, rough or smooth, warm or cold. Linking pixels together across an area could also convey movement far better than jerky video frames. Sure, you can emulate texture to some degree using just light, but it loses the natural subtlety.
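As a toy illustration of how such tilting plates might be driven, the sketch below (entirely hypothetical; no display exposes an interface like this today) converts a surface height map into a per-pixel tilt using the local gradient:

```python
import math

def tilt_angles(height_map):
    """Derive an (x, y) tilt angle per pixel from the local gradient of a
    height map, using central finite differences (clamped at the edges)."""
    rows, cols = len(height_map), len(height_map[0])
    tilts = []
    for r in range(rows):
        row = []
        for c in range(cols):
            dx = height_map[r][min(c + 1, cols - 1)] - height_map[r][max(c - 1, 0)]
            dy = height_map[min(r + 1, rows - 1)][c] - height_map[max(r - 1, 0)][c]
            # A steep local slope maps to a big plate tilt, a flat region to none.
            row.append((math.atan(dx / 2), math.atan(dy / 2)))
        tilts.append(row)
    return tilts

# A flat height map should leave every plate untilted.
flat = [[0.0] * 3 for _ in range(3)]
print(tilt_angles(flat)[1][1])  # (0.0, 0.0)
```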


Bacteria have already taken the prize for the first synthetic organism. Craig Venter’s team claimed the first synthetic bacterium in 2010.

Bacteria are being genetically modified for a range of roles, such as converting materials for easier extraction (e.g. coal to gas, or concentrating elements in landfill sites to make extraction easier), making new food sources (alongside algae), carbon fixation, pollutant detection and other sensory roles, decorative, clothing or cosmetic roles based on color changing, special surface treatments, biodegradable construction or packing materials, self-organizing printing… There are many others, even ignoring all the military ones.

I have written many times on smart yogurt now and it has to be the highlight of the bacterial future, one of the greatest hopes as well as potential danger to human survival. Here is an extract from a previous blog:

Progress is continuing to harness bacteria to make components of electronic circuits (after which the bacteria are dissolved to leave the electronics). Bacteria can also have genes added to emit light or electrical signals. They could later be enhanced so that as well as being able to fabricate electronic components, they could power them too. We might add various other features too, but eventually, we’re likely to end up with bacteria that contain electronics and can connect to other bacteria nearby that contain other electronics to make sophisticated circuits. We could obviously harness self-assembly and self-organisation, which are also progressing nicely. The result is that we will get smart bacteria, collectively making sophisticated, intelligent, conscious entities of a wide variety, with lots of sensory capability distributed over a wide range. Bacteria Sapiens.

I often talk about smart yogurt using such an approach as a key future computing solution. If it were to stay in a yogurt pot, it would be easy to control. But it won’t. A collective bacterial intelligence such as this could gain a global presence, and could exist in land, sea and air, maybe even in space. Allowing lots of different biological properties could allow colonization of every niche. In fact, the first few generations of bacteria sapiens might be smart enough to design their own offspring. They could probably buy or gain access to equipment to fabricate them and release them to multiply. It might be impossible for humans to stop this once it gets to a certain point. Accidents happen, as do rogue regimes, terrorism and general mad-scientist type mischief.

Transhumanists seem to think their goal is the default path for humanity, that transhumanism is inevitable. Well, it can’t easily happen without going first through transbacteria research stages, and that implies that we might well have to ask transbacteria for their consent before we can develop true transhumans.

Self-organizing printing is a likely future enhancement for 3D printing. If a 3D printer can print bacteria (onto the surface of another material being laid down, as an ingredient in a suspension used as the extrusion material itself, or even as a bacterial paste), and the bacteria can then generate or modify other materials, or use self-organisation principles to form special structures or patterns, then the range of objects that can be printed will extend. In some cases, the bacteria may be involved in the construction and then die or be dissolved away.

When I started writing articles on the future, I started almost every title with the words ‘The future of’. I think I will do that again.

Let’s start with the future of the Aardvark.

An aardvark, image credit: National Geographic

They look interesting, and they play an important role in their ecosystem, so it would be a shame to lose them, and it is likely that they will be protected sufficiently to survive a good while longer.

Aardvarks eat ants and termites. Termites are one of the biggest natural sources of methane, which is well known as a greenhouse gas, so they could be thought of as assistants in prevention of global warming, except that methane’s greenhouse activities are mostly obscured by the absorption of the same frequencies by water vapor. So atmospheric water prevents aardvarks from being accidental environmental heroes. Or does it?

Having just blogged yet again on the internet of things, it’s obvious that aardvarks could easily be fitted with tracking and monitoring devices, externally and internally. Small devices near the end of their noses could do a lot of environmental monitoring. They could monitor a wide range of pollutants, local climate variations, the spread of various organisms, all sorts of things. Maybe they could even be used to spread various biological or synthetic agents to termite or ant colonies, since they don’t eat every single one. That provides a means to spread sensing even further. I suggested a few years ago that ants would make good spies, if they could be persuaded to pick up sugar crystals containing sensors such as microphones, and carry them to their nests, unseen by militants.

It is likely that genetic modification will be used to ‘improve’ a range of natural organisms by adding sensory enhancement, enhancing their ability to find mates as well as their libidos, or navigate around man-made obstacles, process new types of food, or adapt to climate change.

We’re also likely to see some robotic Aardvarks. These could be used to interact with natural populations for scientific study, or they could be used as biomimetic IED detectors in war zones. With multisensory noses to detect chemical or EM emissions or local ground absorption spectrum, and effective claws to dig up and destroy IEDs, they might be better suited than wheeled or caterpillar tracked variants used today.

Finally, virtual aardvarks will be even more ubiquitous. Being an interesting shape puts them in a good position when it comes to choosing creatures to populate virtual environments. In virtual worlds, they might talk or come in different sizes and colors, or shape shift. Far future technology could even link virtual ones to real individuals still living in the wild, so that the virtual ones behave realistically.

Well, there we are. First marker in a new ‘The future of …’ series. Bacteria next.


I D Pearson BSc DSc(hc) FWAAS CITP FBCS FWIF

About me

I’m an all-round futurist/futurologist with a sound engineering foundation and over 1800 inventions. I spend most of my time writing futures material for white papers or to accompany PR campaigns, but I’ve also delivered well over 1000 conference presentations and appeared over 750 times on TV and Radio, often following writing I’ve done for PR campaigns. I’ve written hundreds of commissioned reports, press articles and seven books, most recently Society Tomorrow, Space Anchor, Total Sustainability and You Tomorrow (2nd Edn). I sometimes undertake phone or face-to-face consultancy on any aspect of the future, usually from a technology perspective, using over 30 years experience as a futurologist and engineer. I have demonstrated about 85% accuracy when looking 10-15 years ahead.

I am a Chartered Fellow of the British Computer Society and a Fellow of the World Academy for Arts and Science and the World Innovation Foundation.