Andreas Hein is a familiar figure in these pages, having written on the subject of worldships as well as the uploading of consciousness. He is Deputy Director of the Initiative for Interstellar Studies (I4IS), as well as Director of its Technical Research Committee. He founded and leads Icarus Interstellar’s Project Hyperion: A design study on manned interstellar flight. Andreas received his master’s degree in aerospace engineering from the Technical University of Munich and is now working on a PhD there in the area of space systems engineering, having conducted part of his research at MIT. He spent a semester abroad at the Institut Superieur de l’Aeronautique et de l’Espace in Toulouse and also worked at the European Space Agency Strategy and Architecture Office on future manned space exploration. Today’s essay introduces the Initiative for Interstellar Studies’ Project Dragonfly Design Competition.

by Andreas Hein

5 April 2089: A blurry image flashes across screens around the world. The image of a coastline, waves crashing into it, an inviting spot for an evening walk at dawn. Nobody would have paid special attention were it not for one curious feature: two suns hung in the sky, two bright, hellish eyes. The first man-made object had reached another star system.

Is it plausible to assume that we could send a probe to another star within this century? One major challenge is the amount of resources needed for such a mission [1, 2]. Ships proposed in the past were mostly mammoths, weighing tens of thousands of tonnes: the fusion-propelled Daedalus probe at 54,000 tonnes and, more recently, the Project Icarus Ghost Ship at over 100,000 tonnes. All these concepts are based on the rocket principle, which means they have to carry their propellant with them in order to accelerate. This results in a very large ship.
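
The rocket principle's penalty can be made concrete with the Tsiolkovsky equation, m0/m_dry = exp(Δv/v_e); a minimal sketch, in which the exhaust velocity and delta-v figures are illustrative assumptions rather than Daedalus design values:

```python
import math

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: initial mass over dry mass."""
    return math.exp(delta_v / v_exhaust)

c = 299_792_458.0      # speed of light, m/s
v_e = 0.03 * c         # assumed fusion exhaust velocity (~3% c)
dv = 0.12 * c          # assumed one-way delta-v (~12% c, no deceleration)

r = mass_ratio(dv, v_e)
print(f"mass ratio: {r:.1f}")   # ~54.6 tonnes of launch mass per tonne of dry ship
```

The exponential is the point: doubling the delta-v squares the mass ratio, which is why rocket-based starships balloon into the tens of thousands of tonnes.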

Another issue, with fusion propulsion in particular, is scalability. Most fusion propulsion systems become more efficient as they are scaled up, and there is a critical lower threshold on how small they can be made. These factors lead to large propellant masses and large engines, which in turn require a large space infrastructure. A Solar-System-wide economy is probably needed, as the Project Daedalus report argues [3].

However, there is a different avenue for interstellar travel: going small. If you go small, you need less energy to accelerate the probe and thus fewer resources. Among the pioneers of small interstellar missions is Freeman Dyson, with his Astrochicken: a living, one-kilogram probe, bio-engineered for the space environment [4]. Robert Forward proposed the Starwisp probe in 1985 [5]: a large, ultra-thin sail riding a beam of microwaves. Furthermore, Frank Tipler and Ray Kurzweil describe how nano-scale probes could be used to transport human consciousness to the stars [6, 7].

At the Initiative for Interstellar Studies (I4IS), we wanted to take a fresh look at small interstellar probes, laser sail probes in particular. The last concepts in this area were developed years ago. How has the situation changed since then? Are there new, possibly disruptive concepts on the horizon? We think there are. The basic idea is to develop an interstellar mission by combining the following technologies:

Laser sail propulsion: The spacecraft rides on a laser beam, which is captured by an extremely thin sail [8].

Small spacecraft technology: Highly miniaturized spacecraft components of the kind already used in CubeSat missions.

Distributed spacecraft: Spreading the payload of a larger spacecraft over several smaller ones, thus reducing the laser power requirements [9, 10]. The individual spacecraft would then rendezvous at the target star system and collaborate to fulfill their mission objectives. For example, one probe would be mainly responsible for communication with the Solar System, another for planetary exploration via distributed sensor networks (smart dust) [11].

Magnetic sails: A thin superconducting ring’s magnetic field deflects the hydrogen in the interstellar medium and decelerates the spacecraft [12].

Solar power satellites: The laser system shall use space infrastructure which is likely to exist in the next 50 years. Solar power satellites would be temporarily leased to provide the laser system with power to propel the spacecraft.

Communication systems with external power supply: A critical factor for small interstellar missions is the power supply for the communication system, as small spacecraft cannot generate enough power to communicate over these vast distances. Power therefore has to be supplied externally: by laser or microwave power beamed from the Solar System during the trip, and by solar radiation within the target star system [5].
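
To get a feel for why the communication system dominates, here is a back-of-envelope optical downlink budget from Alpha Centauri; every figure below (apertures, power, wavelength) is an assumption for illustration only:

```python
# Diffraction-limited spot: even a metre-class transmitter spreads its
# beam over a large fraction of an AU after 4.37 light years.
LY = 9.4607e15                    # metres per light year
AU = 1.496e11                     # metres per astronomical unit

d = 4.37 * LY                     # Earth-Alpha Centauri distance
lam = 1.0e-6                      # 1 micron laser wavelength (assumed)
D_tx = 1.0                        # probe transmit aperture, m (assumed)
D_rx = 100.0                      # receive aperture near Earth, m (assumed)
P_tx = 100.0                      # externally supplied transmit power, W

spot = 2.44 * lam * d / D_tx      # Airy-disk spot diameter at Earth
frac = min(1.0, (D_rx / spot) ** 2)
P_rx = P_tx * frac
print(f"spot: {spot/AU:.2f} AU, received power: {P_rx:.1e} W")
```

The received power on the order of 1e-16 W corresponds to a few hundred photons per second at this wavelength, which is why photon-counting receivers, very large apertures, and externally supplied transmit power all come up in laser-sail communication studies.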

Bringing all these technologies together, it is possible to imagine a mission using technologies feasible within the next 10 years and in place within the next 50: A set of solar power satellites is leased for a couple of years for the mission. A laser system with a huge aperture has been put into a suitable orbit to propel the interstellar probes, as well as future planetary missions; the infrastructure can thus be reused for multiple purposes. The interstellar probes are launched one by one.

After decades of cruise, the probes begin to decelerate by magnetic sail. Each spacecraft charges its sail differently: the first spacecraft decelerates more gently than the follow-up probes, so that ideally all of them arrive at the target star system at the same time. The probes then start exploring the star system autonomously. They reason about exploration strategies, and exchange and share data. Once a suitable exploration target has been chosen, dedicated probes descend to the planetary surface, spreading dust-sized sensor networks onto the pristine land. The data from the network is collected by other spacecraft and transferred to the spacecraft acting as a communication hub. The hub, powered by the light of the target star, sends the data back to us. The result could be the scenario described at the beginning of this article.
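
The staggered-deceleration trick can be sketched with simple kinematics. If each probe cruises at v and then brakes at a constant rate a to rest at the target, its trip time is D/v + v/(2a), so a later launch just needs a slightly harder brake; all figures below are assumptions:

```python
c = 299_792_458.0
LY = 9.4607e15
YEAR = 3.156e7                    # seconds per year

D = 4.37 * LY                     # distance to the target star, m
v = 0.05 * c                      # cruise speed (5% c, assumed)
slack = 5.0 * YEAR                # braking-time budget v/(2a) of probe 0

arrivals = []
for i, t_launch in enumerate([0.0, 0.5 * YEAR, 1.0 * YEAR]):
    a = v / (2.0 * (slack - t_launch))        # required deceleration
    t_arrive = t_launch + D / v + v / (2.0 * a)
    arrivals.append(t_arrive)
    print(f"probe {i}: a = {a:.3f} m/s^2, arrives at t = {t_arrive/YEAR:.2f} yr")
```

All three arrival times come out identical (D/v + slack), which is the point: the launch stagger is absorbed entirely by the choice of deceleration rate.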

Of course, one of the caveats of such a mission is its complexity. The spacecraft would have to rendezvous precisely after crossing interstellar distances. Furthermore, there are several challenges with laser sail systems that have been frequently addressed in the literature, for example beam collimation and control. Nevertheless, such a mission architecture has many advantages over existing ones: it could be realized with a space infrastructure we can plausibly imagine existing within the next 50 years, and the failure of one or more spacecraft would not be catastrophic, as redundancy can easily be built in by launching two or more identical spacecraft.

The elegance of this mission architecture is that all the infrastructure elements can also be used for other purposes. A laser infrastructure, for example, could serve not only interstellar but also interplanetary missions; further applications include an asteroid defense system [20]. The solar power satellites can likewise be used to power other in-space infrastructure [18].

Image: Artist’s impression of a spacecraft swarm arriving at an exosolar system (Courtesy: Adrian Mann)

How about the feasibility of the individual technologies? Recent progress in various areas looks promising:

The increased availability of highly sophisticated miniaturized commercial components: smartphones include many of the components needed for a space system, e.g. gyros for attitude determination, a communication system, and a microchip for data handling. NASA has already flown a couple of “phone-sats”: satellites based on a smartphone [13].

Advances in distributed satellite networks: Although a single small satellite has only limited capability, several satellites cooperating can replace larger space systems. The concept of Federated Satellite Systems (FSS) is currently being explored at the Massachusetts Institute of Technology as well as at the Skolkovo Institute of Science and Technology in Russia [14]. Satellites communicate opportunistically and share data and computing capacity; it is basically a cloud computing environment in space.

Increased viability of solar sail missions. A number of recent missions are based on solar sail technology, e.g. the Japanese IKAROS probe, LightSail-1 of the Planetary Society, and NASA’s Sunjammer probe.

Greg Matloff recently proposed the use of graphene as a material for solar sails [15]. With an areal density of a fraction of a gram per square metre and high thermal resistance, this material would be truly disruptive. Currently existing materials have a much higher areal density, the figure of merit for solar sail performance.

Materials science has also advanced to the point where graphene layers only a few atoms thick can be manufactured [16]. Thus, manufacturing a solar sail based on extremely thin layers of graphene is not as far away as it seems.
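
To see why areal density is the figure of merit: an idealized perfectly reflecting sail under beam intensity I accelerates at a = 2I/(σc). Graphene as it stands is mostly transparent, so this is an upper bound, and the beam intensity below is an assumed number:

```python
c = 299_792_458.0

def sail_accel(intensity, sigma):
    """Ideal-reflector photon-pressure acceleration: a = 2*I/(sigma*c)."""
    return 2.0 * intensity / (sigma * c)

I_beam = 10_000.0                 # beam intensity at the sail, W/m^2 (assumed)
sigma_graphene = 7.6e-7           # monolayer graphene, kg/m^2 (~0.76 mg/m^2)
sigma_film = 7.0e-3               # ~5 micron metallized film, kg/m^2

a_graphene = sail_accel(I_beam, sigma_graphene)
a_film = sail_accel(I_beam, sigma_film)
print(f"graphene: {a_graphene:.0f} m/s^2, conventional film: {a_film:.4f} m/s^2")
```

Nearly four orders of magnitude in areal density translate directly into nearly four orders of magnitude in acceleration, payload not included.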

Small satellites with a mass of only a few kilograms are increasingly proposed for interplanetary missions. NASA has recently announced the Interplanetary CubeSat Challenge, in which teams are invited to develop CubeSat missions to the Moon and even deeper into space [17]. Coming advances will thus stretch the capability of CubeSats beyond low Earth orbit.

Recent proposals for solar power satellites focus on supplying space infrastructure with power instead of Earth infrastructure [18, 19]. The reason is quite simple: solar power satellites are not competitive with most Earth-based alternatives, but they are with space-based ones. A recent NASA concept by John Mankins proposed a highly modular, tulip-shaped space power satellite supplying geostationary communication satellites with power.

Large space laser systems have been proposed for asteroid defense [20].

In order to explore various mission architectures and encourage participation by a larger group of people, I4IS has recently announced the Project Dragonfly Competition in the context of the Alpha Centauri Prize [21]. We hope that with the help of this competition, we can find unprecedented mission architectures of truly disruptive capability. Once this goal is accomplished, we can concentrate our efforts on developing individual technologies and test them in near-term missions.

If this all works out, it might be the first time in history that there is a realistic possibility of exploring a nearby star system within the 21st or early 22nd century with “modest” resources.

[2] Hein, A. M. (2012). Evaluation of Technological-Social and Political Projections for the Next 100-300 Years and the Implications for an Interstellar Mission. Journal of the British Interplanetary Society, 65, 330-340.

Monolayer graphene is a great material except that a) we can’t yet manufacture it in bulk as one seamless sheet, and b) it’s transparent, so a suitable dopant needs to be found to make it highly reflective.

The Earth already produces tens of terawatts of power. Instead of waiting for solar power satellites to be developed, could that power be beamed to space during normal downtime (e.g. at night), either directly or by beaming to rectennas in orbit or on the Moon and re-beaming from there? That way, part of the challenge is relocated to the Earth, where it can be done relatively inexpensively and sooner.

@Andrew – my understanding is that graphene can absorb energy in the far IR and microwave wavelengths. So while not reflecting, there should still be momentum transfer. How much absorption there is from a monolayer (or a few layers) I don’t know, so the efficiency would need to be determined.

Graphene would be cool. There is no way, however, that it or any other material could survive near-relativistic flight through the interstellar medium. The ISM between here and Alpha Centauri amounts to a sheet of condensed matter microns thick, much thicker than most sails that are envisioned. In a head-on collision between the two, at hard radiation speed, the sail is likely to lose catastrophically.

I’m far from an expert, so pardon the naivete of these questions:
Can we accelerate the probe(s) sufficiently while they are still relatively close (a few light-hours), so that before we begin to lose significantly due to lack of collimation and pointing accuracy, the probes will have reached sufficient velocity that further pushing would not be necessary?

For the momentum-transfer idea: I imagine if we impinge neutral particles on the probe, it would cause sputtering of material so if the probe is nano, this would imply a minimum size to the probe. Wonder how big that is?

Alternatively, if we use lasers with power sufficiently high to accelerate the probe before we lose it due to tracking inaccuracy, are there materials that are reflective enough and with sufficient thermal tolerance that would survive such a high power?

This is the way to go. I would love to go back to the can-do mentality of the mid-20th century, but we don’t seem to be going that way. Here is a real plan for the can’t-do era we now have. It even has a launch date in my potential lifetime. Really brilliant work.

Arriving at the distant star system, the probes look around, finally deciding on a worthy subject for exploration. A star system is a very very big place, however, mostly empty of planets. How would these probes find the planets? Would the planet-finding techniques we use from afar be of any use?

Jack McDevitt describes the process in several of his novels: he has an AI do it! As to exactly how this happens, readers are left to wonder.

Can we accelerate the probe(s) sufficiently while they are still
relatively close (a few light hours ) so that before we begin to lose significantly due to lack of collimation & pointing accuracy , that the probes will have reached sufficient velocity such that further pushing would not be necessary?

This is a very good point, and it was one of the motivations for Jordin Kare when he went to work on the SailBeam concept, which involved tiny ‘micro-sails’ that could be accelerated to much higher velocities in a relatively short space, thus easing the collimation problem.

@Michael:
“Arriving at the distant star system, the probes look around, finally deciding on a worthy subject for exploration. A star system is a very very big place, however, mostly empty of planets. How would these probes find the planets? Would the planet-finding techniques we use from afar be of any use?”

The specific planets etc. for exploration are of course defined prior to arrival. However, there are limits to what can be predetermined as scientific objectives beforehand. Thus, the probes would have to be able to recognize scientifically interesting surface features of exoplanets and adapt their exploration strategy accordingly.

We could use a reflective material such as beryllium and apply ion-etching techniques to reduce the weight of the sail. If reactive ions are shot into the material and out the other side, the weight is reduced by a significant amount, first by sheer kinetic sputtering and second by chemical means: the reactive ions that remain embedded in the material turn into a gas on heating, further reducing the weight. Considerable mass can be removed this way.

@Eniac September 5, 2014 at 19:45

‘There is no way, however, that it or any other material could survive near-relativistic flight through the interstellar medium. The ISM between here and Alpha Centauri amounts to a sheet of condensed matter microns thick, much thicker than most sails that are envisioned. In a head-on collision between the two, at hard radiation speed, the sail is likely to lose catastrophically.’

You are correct, Eniac. Although the ISM is tenuous, it amounts to a significant erosion hurdle for thin materials. We could use very high acceleration of the probe within the Solar System to reduce the collimation issue and then cocoon the probe, i.e. it shrinks to form a protective mass and then slowly opens up again at the target system, or it could reconfigure itself into a magnetic sail for slowing down.

@Kamal Ali September 5, 2014 at 20:48

‘Can we accelerate the probe(s) sufficiently while they are still
relatively close (a few light hours ) so that before we begin to lose significantly due to lack of collimation & pointing accuracy , that the probes will have reached sufficient velocity such that further pushing would not be necessary?’

Light probes would withstand high accelerations very nicely indeed: 30,000-g electronics is normal in artillery shells, and nano-electronics can take much more, 100,000 to millions of g’s. The sail, however, may not withstand these forces.
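
A quick back-of-envelope check of what such g-loads would buy (the cruise speed here is an assumed figure): at artillery-shell accelerations the whole boost happens within a tiny fraction of an AU, which would ease the collimation and pointing problem enormously.

```python
g = 9.81
c = 299_792_458.0
AU = 1.496e11

v_cruise = 0.1 * c                # target speed, 10% c (assumed)
a = 30_000.0 * g                  # artillery-shell-class acceleration

t_burn = v_cruise / a             # time spent under the beam
d_burn = v_cruise**2 / (2.0 * a)  # distance covered during the boost
print(f"burn: {t_burn:.0f} s over {d_burn/AU:.3f} AU")
```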

‘For the momentum-transfer idea: I imagine if we impinge neutral particles on the probe, it would cause sputtering of material so if the probe is nano, this would imply a minimum size to the probe. Wonder how big that is?’

The probe, if small, would intercept few of the neutral particles, but when they do hit, erosion and secondary radiation would be an issue.

‘Alternatively, if we use lasers with power sufficiently high to accelerate the probe before we lose it due to tracking inaccuracy, are there materials that are reflective enough and with sufficient thermal tolerance that would survive such a high power?’

This is a significant issue. If we heat the material too strongly it will sublime, or simply evaporate at too high a rate. We could use that to our advantage, though: as the sail evaporates it gets lighter and radiates heat better.

My biggest issue with the very small probe concept is radiation during flight: there is simply not enough protective mass, and it gets worse as the square of the velocity.

As for the I4IS Project Dragonfly Competition I would like to see a separate one for magnetic sail technologies. My money would be on the magnetic sail/particle beam concept winning.

Wouldn’t the probes need to go into an initial orbit around the star (awkward without the word ‘solar’!), and then start hunting? Presumably they’d be in an exaggerated ellipse, watching for moving objects against the star field. They would be moving so fast that the ellipse would be quite exaggerated.

They would know how far the planet(s) are from the star, of course, and one could suppose that they would be able to establish advantageous orbits by imaging during the approach.

And they would need sufficient delta-v to leave the initial orbit, moving into orbit around planet alpha, then beta, and so on. Quite a tall order, even with in situ resources of some sort.

The problems of this mission are daunting indeed.

(I wonder, too, about the upper velocity limit for capture by the target star, in a way analogous to how current technology allows Mars or Jupiter or Saturn to capture probes.)

I think by the time something like this is possible, the first priority of any arriving probe(s) would be to find a small icy/rocky body suitable for extracting the raw materials needed to build the first “factory”, which would then (second priority) start producing and assembling the components of a giant laser or microwave array to call home. As third priority, the factory would then start producing an endless stream of all kinds of probes that would fan out across the system to map and explore it thoroughly. New and/or better sites found in this process would then be settled with daughter factories until sufficient infrastructure is present to secure a permanent presence and support complete exploration. Some decades after the first call home, high-level instructions and/or an OS upgrade might arrive from the system of origin, but clearly most decisions would have to be made autonomously. Contrary to what most seem to think, that does not really require “intelligence”. Non-intelligent mechanisms are perfectly capable of making the decisions needed to maintain themselves, as demonstrated by any simple biological organism.

In my opinion, such probes would be constructed from conventional metal/silicon technology. Miniaturized, to be sure, but still far from the mythical nano-assemblers which are much harder to design and make than most imagine. The minimum mass of an “industrial seed” of this type is anyone’s guess, but it could well be tons rather than grams. So, there is yet another reason why large probes are better than small ones, although you could, of course, imagine ways to divide the seed into smaller components that can self-assemble at the target.

The obvious fourth priority of such a probe would be to construct the infrastructure needed to launch a new fleet of interstellar probes, targeted towards other systems in the direction away from the system of origin. The result of all this for the original system of origin would be an exponentially increasing stream of data on every system within an expanding sphere of galactic territory.

The fifth priority, of course, could be to construct human habitats and build the infrastructure to receive human beings safely: as frozen bodies, or binary files, or whatever other suitable embodiment of humanity exists at that time.

‘Alternatively, if we use lasers with power sufficiently high to accelerate the probe before we lose it due to tracking inaccuracy, are there materials that are reflective enough and with sufficient thermal tolerance that would survive such a high power?’

This is a significant issue. If we heat the material too strongly it will sublime, or simply evaporate at too high a rate. We could use that to our advantage, though: as the sail evaporates it gets lighter and radiates heat better.

Indeed, this is a huge issue. The quality parameter for this is the ratio between reflected and absorbed light for a material, which should be as large as possible. I think in the current literature there are several orders of magnitude between requirement and reality that can only be bridged by hand-waving and wishful thinking at this point. Still, I think that out of all the formidable problems with interstellar travel this one seems to be one of the few with a real chance.
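
The heating limit can be sketched as a power balance: absorbed beam power α·I per unit area against thermal radiation 2εσT⁴ from the sail’s two faces. The material figures below are illustrative assumptions, not measured values:

```python
SIGMA_SB = 5.670e-8               # Stefan-Boltzmann constant, W m^-2 K^-4

def max_intensity(absorptivity, emissivity, T_max):
    """Beam intensity at which the sail reaches its temperature limit."""
    return 2.0 * emissivity * SIGMA_SB * T_max**4 / absorptivity

# e.g. a film absorbing 0.1% of the beam, emissivity 0.1, limit 1000 K:
I_max = max_intensity(1e-3, 0.1, 1000.0)
print(f"max tolerable intensity: {I_max:.1e} W/m^2")
```

The quartic dependence on T_max and the inverse dependence on absorptivity show why the reflected-to-absorbed ratio is the quality parameter: every factor of ten in absorptivity costs a factor of ten in allowable beam intensity, and hence in acceleration.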

Assuming a start in 50 years (2064), getting to Alpha Centauri by 2085 (so the signal can get back to Earth in 2089) implies a speed of around 20% of light speed. At this speed, dust impacts will be incredibly energetic – excluding relativity, I get about 1.8 x 10^15 joules per kilogram, or a nanogram dust grain hitting with the energy of a fairly powerful bullet.
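
That specific-energy figure checks out (non-relativistic, as stated):

```python
c = 299_792_458.0
v = 0.2 * c                       # 20% of light speed

e_per_kg = 0.5 * v**2             # specific kinetic energy, J/kg
e_grain = 1e-12 * e_per_kg        # a nanogram (1e-12 kg) dust grain
print(f"{e_per_kg:.2e} J/kg; nanogram grain: {e_grain:.0f} J")
```

About 1.8 kJ per nanogram grain, i.e. rifle-bullet energy delivered onto a microscopic spot.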

And if the probe hits an actual meteoroid… *shudder*

This is why I favor huge nuclear pulse worldships with massive shielding at slower speeds (1%-3%c, reaching the nearest systems in a few centuries). And even they might have trouble – sure, an Orion is designed to use nukes for propulsion, but they don’t actually go off in contact with the pusher plate. That’s probably workable with enough shielding though (I’m assuming a truly colossal vehicle).

“In my opinion, such probes would be constructed from conventional metal/silicon technology. Miniaturized, to be sure, but still far from the mythical nano-assemblers which are much harder to design and make than most imagine. The minimum mass of an “industrial seed” of this type is anyone’s guess, but it could well be tons rather than grams. So, there is yet another reason why large probes are better than small ones, although you could, of course, imagine ways to divide the seed into smaller components that can self-assemble at the target.”

I have often argued that at a certain level of miniaturization, and especially concerning self-replicating machinery, which play by definition within the rules of evolution, it is very hard to draw a clear line between artificial constructs and nature. Your nano-assembler, if we look at the examples of nature, weighs about one picogram. So it is evidently possible, even if beyond our current capabilities.

To be fair, since last time I checked – unless there is something truly overlooked about flagella – it comes without a propulsion system for crossing interstellar distances.

There is some eerie chill here, however. If the panspermia mechanism that has gotten so much attention lately turns out to be correct, and especially if the conclusion presented here holds, namely that miniaturization plus external propulsion is the far more efficient approach, this may turn out to be a very interesting explanation for the Fermi Paradox, especially with respect to Tipler’s von Neumann argument.

It may open up a door so wide that it may change our perception of the universe and our place in it forever.

… concerning self-replicating machinery, which play by definition within the rules of evolution …

This is not true. The definition of “self-replicating” does not include one important necessary condition for evolution: Imperfect replication.

Unless the machinery is specifically designed to allow imperfect replication, aka mutation, it will not evolve. Even if it were, generation times are long and evolution by random mutation would be exceedingly slow. Just don’t make the machines intelligent enough to rewrite their own code …

intercoastal: It can be argued that dust grains are so rare that there is a good chance a small probe will never encounter one. Even if it does, if the probe is a thin sail, a dust grain will penetrate and simply punch a very small hole, easily survivable.

However, 99% of the ISM is gas, which turns into hard radiation at high velocity. This is what I am most worried about. Sails will be eroded away quickly by sputtering. Whether you can win by accelerating quickly depends on your acceleration and the rate of erosion. There will be a certain velocity that can be reached before the sail is gone. It should not be too hard to calculate, but I haven’t done it. Let us hope it can be more than just a few percent of c.
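
The column calculation is indeed straightforward; the big assumption is the hydrogen number density, taken here as ~0.1 atoms/cm³ for the Local Interstellar Cloud:

```python
LY_CM = 9.4607e17                 # centimetres per light year
M_H = 1.67e-24                    # mass of a hydrogen atom, g

d = 4.37 * LY_CM                  # path length to Alpha Centauri, cm
n = 0.1                           # assumed H number density, atoms/cm^3

column = n * d * M_H              # swept-up mass per cm^2 of frontal area
thickness_nm = (column / 1.0) * 1e7   # equivalent solid sheet at 1 g/cm^3
print(f"column: {column:.1e} g/cm^2, ~{thickness_nm:.0f} nm of condensed matter")
```

On this estimate the equivalent sheet is nanometres rather than microns thick, but that is still several monolayers’ worth of material arriving as MeV-scale particles at a good fraction of c, so sputtering erosion of an atomically thin sail remains a serious concern.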

With respect to metamaterials, I fear that sputtering will lead to loss of function, which would cause the sail to evaporate from overheating well before it erodes.

“This is not true. The definition of “self-replicating” does not include one important necessary condition for evolution: Imperfect replication.”

Are you suggesting here that replication will always be perfect in any given generation? Because I would argue that only mechanisms that are positive for survival (replication success) are promoted. Any mechanism preventing the operation of a viable (if not perfect) generation is ultimately a drawback.

Here is my prediction: sooner or later this mechanism itself will be transcribed incorrectly, inactivating it. Since that is actually an advantage under the rule of selection through survival, this “version” should be able to make even more copies, plus enable the kind of adaptability that comes with natural selection.

There is no safeguard for molecular assemblers, even if, as suggested, the blueprint is kept externally. The generations, if allowed to operate long enough (which may be very long, mind you), will create local additions to the blueprint until it becomes independent, thus subverting the safeguard mechanism.

100% identical copies simply do not exist physically. There will always be small imperfections. And that can only lead down one path…

Look, these days there is a lot of talk about post-biology. How about post-technology?

” Even if it were, generation times are long and evolution by random mutation would be exceedingly slow.”

Initially. But mechanisms would arise to speed things up. That’s how life works. Don’t distinguish between biology and technology here; the rules, and therefore the mechanisms, are identical. It’s just molecular machinery. That is what life is.

“Just don’t make the machines intelligent enough to rewrite their own code …”

That is the coup de grace: you don’t need intelligence. It’s a self-organizing system to begin with.

I think that replication accuracy can certainly be high enough to prevent degradation or even evolution. DNA replication accuracy varies between organisms, and is much higher for eukaryotes than prokaryotes, with a number of correcting mechanisms. Technological replication can be designed to be far better. Not perfect, but very high. Unless there is an environment that allows for many machines and selection, the effects of replication errors will most likely be detrimental.

Are you suggesting here that replication will always be perfect in any given generation?

Yes, of course, that is exactly what I am suggesting. A self-replicating machine is completely described by its code, a sequence of bits. It is no trouble at all for a machine to make a perfect copy of a sequence of bits. The optical drive in your computer can do it. The key to this apparent miracle is error correction codes.

” Even if it were, generation times are long and evolution by random mutation would be exceedingly slow.”

Initially. But mechanisms would arise to speed things up. That’s how life works.

No, not life, evolution. Biological life comes from evolution, which is why evolution is built into it. Self-replicating machines do not have this constraint, they are designed. Evolution is simply not needed. It could probably be achieved, but only on purpose.

Anyway, even if you were right, it seems like we would be quite safe for at least the first few billion years, plenty of time to do something ….

Alex:

Technological replication can be designed to be far better. Not perfect, but very high.

Perfect for all practical purposes. Error correction coding is very, very effective. Want less than one bit out of a billion flipped in a billion billion years? Easily done.
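
For the curious, the kind of coding referred to here can be shown with the smallest classic example, a Hamming(7,4) code: four data bits are padded with three parity bits, and any single flipped bit can then be located and corrected. A minimal sketch:

```python
# Minimal Hamming(7,4) round-trip: any single flipped bit is corrected.
def encode(d):                      # d: 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4               # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4               # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):                      # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based error position
    if pos:
        c = c[:]
        c[pos - 1] ^= 1             # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
for i in range(7):                  # corrupt each position in turn
    bad = word[:]
    bad[i] ^= 1
    assert decode(bad) == data      # always recovered
print("all single-bit errors corrected")
```

Real storage systems stack far stronger codes than this, which is how bit-error rates of the kind quoted above become achievable.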

Yet, we have a very fine example of a system where replication errors are a huge boon. I am not so sure. I don’t think self-replicating automata are ultimately controllable. It may take a very long time, millions or even billions of years, for the right replication errors to occur, but ultimately, if the machines survive long enough, it’s just a question of time. And once that lock is broken… well, you could say the rest is history.

One solution to the problem of the interstellar medium occurred to me. If hundreds of probes are launched relatively close behind each other, the probes at the front would impact any debris, vapourising it with the energy of the impact. The probes thus act as a kind of ‘ablative shield’; because there are so many, the loss of even quite a few might not endanger the mission. This also fits in with the idea of the nanoprobes working as a swarm, more like some kind of ‘hive mind’ than a group of independent constructs.

And once that lock is broken… well, you could say the rest is history.

Why do people so overestimate the speed of evolution? Do you think one error is enough to turn a benevolent machine into a monster? In biology, it takes thousands of changes in the genome to produce any changes that go beyond simple tuning of parameters. And that is only counting those changes that are beneficial. It took billions of years to evolve what we have on Earth now, with a rather high mutation rate and short generation time. If the machine is constructed to make one error in a billion billion years (which is easy, as stated before), even if every such error turns the machine into a monster immediately (extremely unlikely), we would still be quite safe for the next 100 million billion years. Not really that alarming, now, is it?

Will: I suppose much depends on the distance between successive probes. If too large, the ISM will simply flow back into the cleared channel (this may happen quite quickly, considering that the average relative velocity among stars, and likely also between the ISM and the stars, is tens of km/s). If too small, it is hard to see how the leading probes would not be shaded from their propulsive beam by the trailing ones. I doubt there is any room in between, but a calculation may be in order.

In Centauri Dreams, Paul Gilster looks at peer-reviewed research on deep space exploration, with an eye toward interstellar possibilities. For the last twelve years, this site coordinated its efforts with the Tau Zero Foundation. It now serves as an independent forum for deep space news and ideas. In the logo above, the leftmost star is Alpha Centauri, a triple system closer than any other star, and a primary target for early interstellar probes. To its right is Beta Centauri (not a part of the Alpha Centauri system), with Beta, Gamma, Delta and Epsilon Crucis, stars in the Southern Cross, visible at the far right (image: Marco Lorenzi).
