Saturday, March 31, 2012

The Supreme Court recently ruled that surviving veterans of the United Kingdom's nuclear test series, conducted in Australia and the Pacific during the 1950s, will not be permitted to sue the Ministry of Defence for alleged ill-health caused by the tests.

The story, however, raises a number of interesting issues. Firstly, there is the question of the reliability of unverified personal testimony. Rose Clark, widow of Michael, is reported by The Guardian as stating that her late husband "was so close he could see the bones of the people on the beach beside him. It was like an x-ray."

This is a telling claim because x-rays are actually invisible. As a consequence, x-rays cannot be directly used to see the bones in a human body. In a medical x-ray, the x-rays pass through soft tissue, to be captured on a photographic plate. Human bones partially attenuate the x-rays, hence the bones can be seen as a shadow on the photographic image.
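The shadow-image mechanism can be illustrated with the Beer-Lambert attenuation law, I/I0 = exp(-mu*x). The attenuation coefficients below are purely illustrative values, not tabulated ones:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of x-ray intensity surviving a path: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative attenuation coefficients (not tabulated values):
tissue = transmitted_fraction(0.2, 10.0)   # 10 cm path through soft tissue
bone = transmitted_fraction(0.5, 10.0)     # the same path length through bone

# Bone transmits far less of the beam, so it registers as a shadow
# on the photographic plate behind the patient.
print(round(tissue, 3), round(bone, 4))
```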

The second issue, not reported by The Guardian, is that the health of the nuclear test veterans has already been rigorously assessed by successive epidemiological studies conducted by what was then the National Radiological Protection Board, now part of the Health Protection Agency:

"Based on this work the HPA conclude that nuclear weapons test participants had, in general, a better life expectancy than members of the general UK population. When compared with the control group, the test participant group had similar overall patterns of mortality and cancer incidence indicating no significant cause for concern.

"The statistical analyses also provided a slight indication that test participation may have caused a very small increased risk of leukaemia but there was not enough evidence to confirm this as a fact and there was evidence to suggest that this finding should be treated with caution."

The third issue is the question of new research. According to The Guardian, "the veterans had contended that they did not have proper knowledge that their illnesses were connected to the atomic tests until medical research was published in 2007."

Now, this new research turns out to be a study, conducted by the Institute of Molecular Biosciences at Massey University, of so-called chromosome aberrations amongst New Zealand veterans who attended the British tests. There are several different types of chromosome aberration, but the primary result obtained by the Massey University team analysed a type of aberration called a translocation, in which parts of different chromosomes are swapped.

The technique involves taking blood samples from the subjects of the study, and counting the number of aberrations. To infer a dose, it is then necessary to calibrate the method by exposing blood samples in vitro to known doses of radiation to obtain a dose-response curve, such as the one seen above.
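As a rough sketch of this calibration step, one can fit the standard linear-quadratic dose-response model to in-vitro data and then invert it for an unknown dose. All the numbers below are hypothetical, not the Massey University figures:

```python
import numpy as np

# Hypothetical in-vitro calibration data: known doses (Gy) and observed
# translocation frequencies per cell.
doses = np.array([0.0, 0.2, 0.5, 0.75, 1.0, 2.0])
freqs = np.array([0.005, 0.010, 0.022, 0.035, 0.052, 0.135])

# Fit the standard linear-quadratic model Y(D) = c + alpha*D + beta*D**2
# (least-squares polynomial fit; coefficients returned highest power first).
beta, alpha, c = np.polyfit(doses, freqs, 2)

def infer_dose(observed_freq):
    """Invert Y(D) = c + alpha*D + beta*D**2 for the non-negative dose D."""
    disc = alpha**2 - 4 * beta * (c - observed_freq)
    return (-alpha + np.sqrt(disc)) / (2 * beta)

# A subject whose cells show 0.03 translocations per cell:
print(round(infer_dose(0.03), 3))  # inferred dose, in Gray
```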

Everyone's DNA is subject to a continuous barrage of intrinsic damage, so any attempt to infer a radiation dose from DNA damage will need to distinguish radiogenic damage from intrinsic damage. This means that each method of counting chromosome aberrations will have a detection limit associated with it. In other words, each technique will only be able to detect radiation doses above a certain minimum level.
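A toy calculation, with assumed rates, shows how the spontaneous background sets such a limit: if the background count is Poisson-distributed, its scatter determines the smallest radiogenic excess that can be distinguished.

```python
import math

# Assumed, illustrative values -- not taken from the literature:
b = 0.008      # spontaneous translocations per cell in unirradiated donors
N = 1000       # number of cells scored
alpha = 0.02   # translocations per cell per Gray (assumed linear yield)

# The background count N*b is Poisson-distributed, with standard
# deviation sqrt(N*b); demand a 2-sigma excess before calling a dose real.
background_sd = math.sqrt(N * b)
min_excess_count = 2 * background_sd

# The dose whose expected excess count alpha*N*D equals that threshold:
detection_limit_gy = min_excess_count / (alpha * N)
print(round(detection_limit_gy, 3))  # minimum detectable dose, in Gray
```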

In the case of translocation frequencies, various estimates of the detection limit can be found in the literature. Writing in 1997, A. A. Edwards claimed that "the scoring of translocations...results in reduced sensitivity at low doses so that acute X-ray doses of about 0.3 Gy and chronic doses of about 0.4 Gy are at the limit of measurement...a final limit to these approaches exists because of the higher level of spontaneous translocations...in cells of unirradiated persons."

The unit of dose referred to here is the Gray (Gy). A Gray is a large unit of radiation: most people receive an annual background radiation dose comparable to only about 0.002 Gray. So 0.4 Gray is two hundred times or so the background dose, and that's possibly the detection limit for this technique; if the dose is any lower, it may be impossible to distinguish it from the random level of DNA damage.
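The arithmetic is easy to check:

```python
# How many annual background doses is the claimed detection limit?
annual_background_gy = 0.002   # typical annual background dose, in Gray
detection_limit_gy = 0.4       # Edwards' chronic-dose figure, in Gray

ratio = detection_limit_gy / annual_background_gy
print(round(ratio))  # roughly two hundred years' worth of background
```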

In their 2007 report they calculate (p31) that there were 37 individuals who received a dose in the range 0 – 0.49 Gy, 6 individuals in the range 0.5 – 0.99 Gy, and 5 individuals who received greater than 1 Gy. The highest dose estimates here are above the range in which the detection limit for this technique might lie. But by the same token, they're also very large doses; doses so large that, were they delivered all at once, they could lead to symptoms of radiation sickness!

These dose estimates, we learn, were obtained by comparing the aberrations counted in the veterans with those in "Blood samples from 3 healthy donors (mean age 40.5) [which] were irradiated with 60Co at a dose rate 0.835 Gy/min to different doses (0, 0.2, 0.5, 0.75, 1, 2 Gy)," (p15).

Something strange now happens when we turn to the 2008 paper, published, let us recall, in a peer-reviewed journal. We now find that "Dose estimates ranged from 0 to 0.431 Gy in the veterans (mean = 0.170 Gy)," (p85).

All of a sudden, the mean dose is down to 0.170 Gy, which lies beneath what might well be the detection limit for this technique. And how are these new dose estimates inferred? Well, the blood samples from the forty-year-olds have been discreetly disposed of, and in their stead we find "in vitro exposures of a blood sample of a normal donor of age 60."

There's no explanation of why the blood samples used to calibrate the technique have changed. Which is strange, because if the Massey University team discovered that something was wrong with their initial approach, one would expect them to explain and report this fact, so that the rest of the scientific community could learn from their research.

The Massey University team emphasise the difference between the number of translocations found amongst the veterans, and the number found amongst a control group (whose mean dose was inferred to be 0.037 Gy), but given that both dose estimates are below the detection limit for the technique, it is unclear why this should be considered of dosimetric significance.

Why were samples from several 40-year-olds used in the first paper to calibrate the technique, and then a single sample from a single 60-year-old used in the second paper? How sensitive are the dose estimates to the choice of calibration sample? Detection limits are not so much as mentioned in either the 2007 paper or the 2008 paper; why do the team from Massey University not even discuss this issue?

All these questions remain unanswered. Which is disappointing if you're a scientist rather than a lawyer.

Saturday, March 24, 2012

Sauber have an interesting slot which ducts air from the underside of the nose, and discharges it behind the step on the top surface. Craig Scarborough has a diagram of the slot, and Giorgio Piola also has an elegant drawing of the feature in this week's Autosport.

The best explanation of the slot is that it removes a portion of the boundary layer from the underside of the nose, using the low pressure created on the top surface when the air accelerates over the step. The thinner a boundary layer is, the less liable it is to detach, hence a thinner boundary layer beneath the nose could potentially enhance the quality of the airflow fed to the underbody of the car.

Monday, March 19, 2012

Ferrari have admitted that they need to re-design the sidepods on their 2012 Formula 1 car to take full advantage of the exhaust-blown effects still available within the current regulations. In fact, the re-design will be sufficiently radical to require a new side-impact crash-test.

So why exactly can't Ferrari just modify their existing sidepods? Mark Hughes explains in this week's Autosport that Ferrari need to move their exhaust exits further forward, thereby requiring a re-arrangement of the sidepod internals.

However, there's perhaps something else here which hasn't received much attention. Ferrari's original sidepods combined the radiator cooling exits with the exhaust outlets, whereas the trend on other cars, such as the Red Bull and McLaren, is to separate the two, with a single cooling exit placed at the base of the engine cover in the centre of the car.

The idea of having the cooling exit duct co-axial with the exhaust outlet was very popular in the early 2000s, and the reason is that the flow of exhaust gases can be used to increase the mass-flow rate through the cooling system, a phenomenon sometimes referred to as 'aspiration'. If you pull the flow out of the cooling system more quickly, you can get the same mass-flow rate for a smaller inlet area, and a smaller inlet provides aerodynamic benefits. This concept was studied by Parra and Kontis in their 2006 paper, Aerodynamic effectiveness of the flow of exhaust gases in a generic formula one car configuration, published in The Aeronautical Journal:

"Due to the characteristic configuration of a Formula One car, the exhaust pipes pass through the chamber located directly behind the radiators. This chamber is normally shaped so that it enhances the outflow of gases. However, an extra outflow could be generated by inserting the exhaust pipe into a bigger diameter duct to create a mixing stream. Such mixing is believed to generate an aspiration along the outer duct, based on the same principles of operation as an ejector pump. Because this enveloping duct connects the chamber behind the radiator with the atmosphere, an outflow of gases through this passage would increase the efficiency of the radiator," (p574).
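The smaller-inlet argument is just mass continuity, mdot = rho * A * v: if aspiration raises the mean velocity through the duct, the same mass-flow passes through a smaller inlet. A sketch with purely illustrative numbers (not Ferrari data):

```python
# Continuity: mass-flow rate through an inlet is mdot = rho * A * v.
rho = 1.2     # air density, kg/m^3
mdot = 1.5    # required cooling mass-flow, kg/s (illustrative)

def inlet_area(v):
    """Inlet area (m^2) needed to pass mdot at mean inlet velocity v (m/s)."""
    return mdot / (rho * v)

# The aspirated case assumes the exhaust ejector raises the duct velocity:
print(inlet_area(10.0))   # unaspirated duct velocity
print(inlet_area(15.0))   # aspirated: faster flow, smaller inlet
```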

The exhaust outlet on Ferrari's original 2012 sidepod was inside the cooling exit duct, which in turn, was inside a downwardly-inclined funnel at the rear of the sidepods. Thus, Ferrari's original 2012 solution was perhaps a very neat idea: they might have been trying to use the exhaust gas to aspirate the cooling flow, and simultaneously use both the internal cooling flow and the external sidepod flow to pull the exhaust jet downwards.

However, once this failed to achieve the desired exhaust-blown effect, it became necessary to separate the exhaust outlet from the cooling outlet, as seen in testing at Barcelona (pictured). Once the exhaust outlet is separated from the cooling outlet, the mass-flow through the cooling system is no longer aspirated by the exhaust, and it may now be necessary for Ferrari to design a sidepod with a larger cooling inlet, as well as one which brings the exhaust outlet further forwards.

Moreover, with a reduced mass-flow through the radiators, it may be necessary in the interim for Ferrari to turn their engines down slightly, and Stefano Domenicali confirmed in Melbourne that "the car at the moment is slow in a straight line."

Sunday, March 18, 2012

"On joining McLaren in 1996, [Adrian Newey] immediately had his drawing office repainted in duck egg blue, a dramatic counterpoint to the muted grey decor that dominated the building.

"Dennis strode in unannounced, took a good long look at his technical director's revisionist taste of colour, and walked out without making a comment. The matter was never raised again," (Alan Henry, Autocourse 2009-2010, p33).

"Researchers...have observed so-called alpha waves, produced when the brain is relaxed but awake...suddenly flood the right brain roughly eight seconds before an idea pops into mind...[Creativity can be encouraged by, amongst other things] the colour blue (working in a blue room tricks the brain into releasing alpha waves)," (Stephen Armstrong, The Sunday Times 18/03/2012, previewing 'How Creativity Works', by Jonah Lehrer).

Saturday, March 17, 2012

Despite the prohibition on driver-activated aerodynamics other than the DRS, Mercedes have given themselves a DRS-activated F-duct ('fluidic switch'). It seems that if a fluidic switch is activated by driver-activated DRS, then that fluidic switch does not constitute driver-activated aerodynamics. Curious.

Speculation continues, however, about the exact purpose of Mercedes' F-duct. There are slots in the undersurface of the front-wing, and Craig Scarborough suggests that the purpose of the system is to blow the front-wing.

Here's another possibility, however. One of the advantages of the active-ride Williams FW14B was that it reduced drag in a straight-line. Here's how:

"We realised in the wind-tunnel that if we lowered the rear and raised the front, you could stall the diffuser and that reduced the drag of the car significantly...I can't remember the figure but that would give them something like an extra 10 kph," (Adrian Newey, p233-234 in Williams, Maurice Hamilton, 2009).

So could Mercedes be stalling the diffuser somehow? The diffuser downforce depends upon the vortices which peel off its lateral edges, hence if one could blow these edges, one might be able to stall the diffuser.

Monday, March 12, 2012

Such small scales are low Reynolds-number environments, where viscosity dominates inertial forces, and turbulence is of diminishing significance. It is difficult to avoid concluding that the quality of the racing will be intrinsically better in follicular Formula One.

Saturday, March 10, 2012

The persistence of exhaust-blowing in contemporary Formula 1 is a significant philosophical development, for it signals a step towards a more effective unification of internal and external airflows. It is, as Mark Hughes points out in Autosport this week, a way of "using the upper-body airflow to seal the exhaust flow into going to the right place, where it in turn seals the underbody airflow into going to the right place." But as well as being an aerodynamic zip, it also constitutes a karstification of Formula 1's aerodynamic landscape.

External flows on racing cars are familiar and comforting territory, conveniently idealised as incompressible. Beneath this, however, lies the dark, disturbing realm of internal flow. In this subterranean domain the flow is often compressible, is characterised by changing temperature and density, and is bedevilled by Mephistophelean harmonics.

External and internal flow were almost treated as independent worlds for many years, but the resurgence of exhaust-blown diffusers heralds a new era in which the two regimes are becoming ever more tightly integrated. What we already have is a hydrodynamical topology matching that of a Karst landscape, with multiple sinkholes and outlets, and mysterious, hidden networks interpolating between.

External flows are swallowed by engine airboxes and radiator intakes, discharged from exhausts and cooling outlets, and then re-ingested by transgressive brake ducts and diffuser orifices; like trains on the Piccadilly line, shooting out of some fetid tunnel, briefly scuttling through a graffiti-ridden cutting under a sunless sky, then anxiously diving back into the darkness.

There's even the suggestion that Mercedes are using the higher temperature of the exhaust and radiator flows above the diffuser to create a pressure differential with the cooler, denser air below. One imagines a semi-permanent band of frontal rainfall, hanging gloomily over the driveshafts.

In combination with this is the equally secretive and speluncal world of the F-duct. Nothing is solid any more; everything is potentially hollow, permeated with channels and rills and flues and pipes.

Visible aerodynamic surfaces are no longer just external-vorticity generating solid boundaries; they are also the separation between the internal and the external.

Saturday, March 03, 2012

Red Bull revealed a fascinating upgrade to the RB8 in Barcelona this morning, which seems to have drawn upon the urban architecture of Milton Keynes for inspiration.

Although F1 exhaust exits must now be angled slightly upwards and placed 250mm above the reference plane, if those exits are made almost flush with the surface of downward-sweeping sidepods, then at high speed the downward cross-flow will pull the exhaust jet towards the diffuser, as first recommended by McCabism in October of last year.

Red Bull have now incorporated such a design, in conjunction with a sort of bridge, or ramp, which appears to direct the exhaust gases down towards the outer edges of the diffuser, (picture courtesy of a Twitter link from Jason@crucial_Xtreme).

Beneath the bridge is an underpass straight out of central Milton Keynes, which takes the air flowing along the sidepod undercut, and funnels it towards the coke-bottle, and thence over the central section of the diffuser, and through the starter-motor hole.

In effect, it's like a hydrodynamic version of the cross-over at Suzuka. It's a way of stopping these two flows from interfering with each other, now that the regulations have forced the exhaust exits to be further forward and further upwards. In retrospect, it's clear that the 'blister' which houses the exhaust exit on the new McLaren was their response to the same problem, the overhang of the blister achieving a comparable separation of the two flows.

Perhaps we can attribute the inspiration for this to the overhang at the McLaren Technology Centre...

A theory in modern mathematical physics consists of a mathematical formalism, and a set of rules linking parts of the mathematical structure to parts of the physical world. Observational and measurement data is thereby embedded into the theoretical formalism as an empirical substructure.

In terms of the philosophy of science, if you think that physics captures the nature of the world which exists beyond the empirical data, then you're a realist, whilst if you think that the theory is simply a means for organising the empirical data, and generating reliable predictions, then you're an empiricist.

One of the problems which realists face is distinguishing between physical and non-physical mathematical structure; some parts of the mathematical formalism appear to be convenient, but surplus to the physical structure.

An interesting illustration of these issues can be found by comparing classical electromagnetism with fluid mechanics. These two fields represent very different aspects of the physical world, but use the same branch of mathematics to do so.

For example, in electromagnetism, there is a vector field called the magnetic field B, and this field is defined to be the result of applying a differential operator called the curl, to a vector potential A:

B = ∇ × A

Now, in fluid mechanics, there is a vector field called the vorticity ω, and this field is defined to be the result of applying a differential operator called the curl, to the velocity vector field v:

ω = ∇ × v
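Both definitions invoke exactly the same operator. A minimal numerical sketch, using a standard rigid-rotation velocity field as the check (the field and numbers are textbook examples, not drawn from either theory's literature):

```python
# B = curl A in electromagnetism; omega = curl v in fluid mechanics.
# A central-difference curl, checked against the rigid-body rotation
# v = (-y, x, 0), whose vorticity is (0, 0, 2) everywhere.
def curl(field, x, y, z, h=1e-5):
    """Numerical curl of a vector field R^3 -> R^3 at the point (x, y, z)."""
    def d(component, axis):
        p = [x, y, z]; m = [x, y, z]
        p[axis] += h; m[axis] -= h
        return (field(*p)[component] - field(*m)[component]) / (2 * h)
    return (d(2, 1) - d(1, 2),   # dF_z/dy - dF_y/dz
            d(0, 2) - d(2, 0),   # dF_x/dz - dF_z/dx
            d(1, 0) - d(0, 1))   # dF_y/dx - dF_x/dy

v = lambda x, y, z: (-y, x, 0.0)    # rigid rotation about the z-axis
print(curl(v, 1.0, 2.0, 0.0))       # approximately (0.0, 0.0, 2.0)
```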

What's interesting about this is that in electromagnetism some people regard the object from which the curl is taken, the vector potential A, to be surplus mathematical structure, whilst in fluid mechanics, there are people who regard the object obtained by taking the curl, the vorticity, to be the surplus mathematical structure:

"In a wholly classical context, electromagnetism acts on charged particles only through the electromagnetic field...the electromagnetic potential has no independent manifestations, and seems best regarded as an element of 'surplus mathematical structure'," (Richard Healey, Gauging What's Real: The Conceptual Foundations of Contemporary Gauge Theories, OUP, 2007, p21).

"Vorticity, the quantification of the strength of such vortices, is not actually physics—vorticity is a purely mathematical definition. Indeed, vorticity is constructed from the velocity gradients described above—which are physics: the amount velocity changes over a given distance," (J. M. McDonough, Introductory Lectures on Turbulence: Physics, Mathematics and Modeling, p46).

Whether McDonough's assertion is accurate is something of a moot point; one could make a decent counter-argument for saying that vorticity is an objective, physical pattern to be found in velocity fields. For the sake of argument, however, let us assume that McDonough is correct.

Now, the theories in which these structures are embedded are non-isomorphic: electromagnetic fields must satisfy the Maxwell equations, while the velocity/vorticity fields must satisfy the Navier-Stokes equations. The electromagnetic and velocity/vorticity fields, then, differ not only in the way they are mapped to the physical world, but also by virtue of the overall structures in which they are embedded.

Max Tegmark has argued that it should be possible to infer the interpretation of a theory from its intrinsic mathematical structure: "Suppose we were given mathematical equations that completely describe the physical world, including us, but with no hints about how to interpret them...the only way in which familiar physical notions and interpretations...can emerge are as implicit properties of the structure itself that reveal themselves during the mathematical investigation," (The Mathematical Universe, Tegmark 2008, p5).

Perhaps, then, electromagnetism and fluid mechanics would be a good test-bed for this hypothesis. If you can show, from the intrinsic mathematical structures alone, why the magnetic vector potential A is surplus mathematical structure in electromagnetism, whilst the vorticity is surplus mathematical structure in fluid mechanics, without resorting to physical interpretation, then Tegmark has a viable hypothesis.