readaroundyoursubject – https://readaroundyoursubject.wordpress.com
Science blog, revision notes and pages to help bridge the gap between A Level and University in Physics and Chemistry
Last updated: Sun, 18 Mar 2018
Game on: gamers beat scientists at modelling proteins
Mon, 03 Oct 2016 – https://readaroundyoursubject.wordpress.com/2016/10/03/game-on-gamers-beat-scientists-at-modelling-proteins/

Stand back, scientists: gamers have beaten highly qualified crystallographers, undergraduates and even computer algorithms at creating an accurate model of a protein, based on its biochemical data.

A team of scientists in America pitted the participants of the study against each other. In one corner, two highly trained crystallographers, in another, 61 undergraduate students using computer modelling programs, in the third, two computer algorithms, and in the last corner, 469 gamers playing a game called Foldit.

All they had to do was create a model of the protein YPL067C by interpreting the electron density maps they were given. As is common in scientific research, the undergraduates and the crystallographers all worked independently. That may be what gave the gamers the edge – while one online player contributed the majority of the ‘moves’ towards the creation of the model, others tweaked and refined the structure.

This collaborative approach was found to work best, and shows that what is considered a labour-intensive and time-consuming process in science could in fact be crowdsourced. The researchers call the gamers ‘citizen scientists’, and say that anyone with a 3D mentality could have a huge impact on the field.

Not only that, but this style of gaming could help to get more people interested in science, and teach scientific concepts. Playing the game is said to teach students about protein modelling a lot better than simply having the concepts drilled into them, by making the process more fun and engaging. The gamers might even have found a new type of protein – one that is responsible for plaque formation. Studying this protein might lead to a better understanding of Alzheimer’s.

This isn’t the first time that scientists have considered using the general public to help with their research. Not only does involving the public increase the publicity of science and research, but it provides the researchers with a big team, which can observe much more detail than they can on their own. For example, in April 2016, the public was asked to keep an eye on a group of penguins through 75 cameras installed in Antarctica, to see how climate change was affecting them, and in August 4,500 citizen scientists analysed weather data to see if a partial eclipse had had any effect on the weather.

Next, the researchers say that they will incorporate the gamers’ tips and tricks into the software that the scientists were using. Understanding how more proteins work is important, since every function of the body uses them. Man-made proteins may even one day solve health, environmental and energy problems.

Earth 2.0 – have we found a replacement?
Sat, 01 Oct 2016 – https://readaroundyoursubject.wordpress.com/2016/10/01/earth-2-0-have-we-found-a-replacement/

Fancy jetting off somewhere new and exotic for your holidays this year? How about the newest, most expensive option out there?

Proxima b is your newly discovered, closest neighbouring habitable planet. Only 4.25 light years away, this rocky planet orbits our nearest star, Proxima Centauri. With a mass at least 1.3 times that of Earth, and a surface temperature which may – hopefully, if there’s an atmosphere – allow liquid water oceans to form, this planet may just be visit-able. A year-long holiday on this planet is the perfect length – it lasts only 11.2 Earth days, excluding the 70,000 years of travel on modern spacecraft each way.

Okay, so the average surface temperature could be anything from -33°C to the high hundreds, depending on whether there is an atmosphere (and on its nature if there is one), and since the planet and star are tidally locked, only one side ever sees its sun, which appears as a dull red glow three times the size of Earth’s sun in the sky (Proxima Centauri is a red dwarf, meaning it is smaller and burns less brightly than our sun). And yes, if there isn’t an atmosphere, the bursts of ultraviolet and X-ray radiation regularly emitted by Proxima Centauri would kill you, but there’s a small chance that the atmosphere and magnetic field will be just right to protect you, and to distribute the heat from the planet’s star evenly over the whole planet, allowing you to holiday in style.

The planet was first spotted in 2013, when astronomers saw signs of a small gravitational pull exerted by the planet on its star. It could have been a random blip in the data, so the scientists spent weeks trying to explain the signal away, testing whether it may have been caused by noise in the measurements or by the star’s own activity. Convinced that the mystery couldn’t be resolved that way, they checked the data from other telescopes and started searching further in their ‘Pale Red Dot’ campaign. Only a third of the way into their 60 planned nights of data collection, the team were convinced – the new planet is there.

Apart from the planet’s mass, and the fact that it orbits the star 7.3 million km away from it (a mere 5% of the distance between the Sun and Earth), we don’t know much about this new planet, which has been named Proxima b. Some scientists want to send a probe to the planet. Indeed, a project known as Breakthrough Starshot already exists for this purpose, but it only has $100 million of the billions of dollars of funding it would need to get there. Failing that, we will be able to find out more about the planet if it passes between its star and us, in an event called a transit. In the unlikely event that this does happen, we may be able to use the Hubble telescope to see features in Proxima b’s atmosphere by analysing the light which passes through it.
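That 11.2-day year follows directly from Kepler's third law, given the 7.3 million km orbit. A quick sketch in Python (Proxima Centauri's mass, roughly 0.12 solar masses, is my assumption – it isn't given in the post):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg

def orbital_period_days(a_m, star_mass_kg):
    """Kepler's third law for a small planet: T = 2*pi*sqrt(a^3 / (G*M))."""
    period_s = 2 * math.pi * math.sqrt(a_m**3 / (G * star_mass_kg))
    return period_s / 86400  # seconds -> days

# Proxima b: 7.3e9 m from a red dwarf of ~0.12 solar masses
period = orbital_period_days(7.3e9, 0.12 * M_SUN)
print(round(period, 1))  # ~11 days, consistent with the quoted year length
```

As a sanity check, plugging in Earth's orbit (1.496e11 m around one solar mass) returns roughly 365 days.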

If that’s not good enough though, don’t worry too much. In 2024, the European Extremely Large Telescope, or E-ELT to its friends, is due to be completed. This 40-metre-class telescope might manage an image of the planet a whole pixel in size. Pretty disappointing, but astronomers reckon that monitoring how that pixel changes over time might tell us about the cloud patterns or even continents of the planet. Alternatively, the James Webb Space Telescope, due to replace the Hubble in 2018, might be able to tell us something.

For now, the planet is a bit of a puzzle – it’s unlikely to have formed so close to the star, so how did it get there? What we do know, though, is that you might not want to start packing just yet.

Mapping the stars
Mon, 26 Sep 2016 – https://readaroundyoursubject.wordpress.com/2016/09/26/mapping-the-stars/

The biggest ever map of the sky has been created. It includes the location and brightness of 1,142 million stars, and two million of these also have their motion across the sky mapped.

Pictures from ESA’s Gaia satellite have been compiled by a team of 450 scientists and software engineers. The satellite is halfway through its five-year mission to collect data on a billion stars in the Milky Way – just 1% of all the stars in the galaxy. The pictures come from the first 14 months of the mission.

Soon, the team will release a new version of the map, including more of the stars that Gaia has imaged since the first 14 months of operation. However, there is so much data that the team is struggling to keep up and has asked the public for help. The map is already twenty times as large as the previous definitive guide, the Hipparcos Catalogue, and twice as precise.

Gaia is 10 metres wide when its solar panels are outstretched, and it spins slowly, scanning the sky, 1.5 million kilometres away from Earth. Its camera is powerful enough to work out the diameter of a human hair from a thousand kilometres away, and is coupled with two telescopes. Combining the data and images with data from the Hipparcos Catalogue is what allowed the team to work out the motion of two million stars, by separating the ‘parallax motion’ of each star – its apparent motion due to Earth orbiting the Sun – from its ‘proper motion’ – the star’s physical movement.
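That 'parallax motion' is also how stellar distances are measured: a star's distance in parsecs is simply the reciprocal of its parallax angle in arcseconds. A minimal sketch (the 0.768-arcsecond figure for Proxima Centauri is from memory, not from the post):

```python
def parallax_to_distance_pc(parallax_arcsec):
    """Distance in parsecs is the reciprocal of the parallax in arcseconds."""
    return 1.0 / parallax_arcsec

LY_PER_PARSEC = 3.2616

# Proxima Centauri's parallax is about 0.768 arcseconds
d_ly = parallax_to_distance_pc(0.768) * LY_PER_PARSEC
print(round(d_ly, 2))  # about 4.25 light years
```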

The map has already located 386 previously unknown stars, and 3,194 variable stars. Variable stars are useful as cosmic distance indicators because they shrink and swell in size, creating regular brightness changes. The Andromeda galaxy can also be seen, as can the Large and Small Magellanic Clouds, two dwarf galaxies orbiting the Milky Way. The brightest portion of the image is the Galactic Plane – the flat, spiral disc in which most of the Milky Way’s stars lie. There are also stripes and other artefacts present, but these are a result of how Gaia scans the sky, and will gradually fade as more data is added over time.

So what next? The map is just the tip of the iceberg, and shows that Gaia is operating correctly and is well on the way to charting the position, brightness and motion of one percent of the Milky Way’s stars. Over the rest of the mission the map will reach unprecedented detail and accuracy.

One equation to rule them all? This new equation might be able to link the two biggest theories describing the universe
Sat, 27 Aug 2016 – https://readaroundyoursubject.wordpress.com/2016/08/27/one-equation-to-rule-them-all-this-new-equation-might-be-able-to-link-the-two-biggest-theories-describing-the-universe/

ER = EPR. It doesn’t look like much. In fact, it doesn’t look like anything at all. But this is the equation that could unite the two most successful theories describing the universe.

We’ve all heard of quantum mechanics and general relativity. They both explain a range of complicated phenomena exceedingly well, but if you tried to combine them, you would find that the maths just doesn’t work out.

Enter ER = EPR. This new equation, put forward by Leonard Susskind, a theoretical physicist from Stanford, together with Juan Maldacena, could be the answer. The letters don’t represent numbers; they are the initials of scientists, connecting two papers from 1935, one on quantum mechanics and the other on general relativity. E is for Einstein, who worked with Rosen to produce a paper on wormholes (ER) and with Podolsky and Rosen on the topic of quantum entanglement (EPR).

Wormholes, or Einstein-Rosen bridges, are spacetime tunnels – they connect two distant locations or times. If you fell through one side of a wormhole, you would instantaneously find yourself at the other side, no matter how far away the two locations are.

Quantum entanglement describes how two particles created from the same source can remain linked. Even if these particles are light years away from each other, measuring one immediately tells you about the other. For example, if you have two entangled particles and you measure the spin of one, you instantly know the spin of the other without making any further measurements.
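The spin example can be sketched with nothing more than amplitudes and sampling. This toy model (pure bookkeeping, not a simulation of real hardware) uses a singlet-like state in which only the anti-correlated joint outcomes carry any amplitude:

```python
import random

# Amplitudes for a two-particle 'singlet' state: only the two
# anti-correlated joint outcomes have non-zero amplitude.
amp = {("up", "down"): 2 ** -0.5, ("down", "up"): -(2 ** -0.5)}

def measure_pair():
    """Sample one joint measurement outcome with probability |amplitude|^2."""
    outcomes = list(amp)
    weights = [abs(a) ** 2 for a in amp.values()]
    return random.choices(outcomes, weights)[0]

# Knowing one particle's spin immediately fixes the other's:
for _ in range(1000):
    first, second = measure_pair()
    assert first != second  # the two spins always come out opposite
print("always opposite")
```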

The two can be combined. What if you had loads of pairs of entangled particles, and you took one from each pair to one point in space and the remaining particles to another, then smashed each group of particles together to make two separate black holes? The black holes would be entangled or, if the equation is right, they would be connected by a wormhole.

The paper is still undergoing peer review, which is required before it can be published, but even now more papers are being churned out, running with the idea. If it turns out to be right, wormholes can be described using entanglement, and entangled particles could be explained by wormholes. The fight over how to interpret quantum mechanics might be cleared up. Gravity may be derivable from quantum mechanics, linking the two more deeply than was formerly thought. Spacetime may even be built from networks of quantum entanglement in a vacuum.

The electric shock to science: has a room temperature supercurrent been achieved?
Fri, 12 Aug 2016 – https://readaroundyoursubject.wordpress.com/2016/08/12/the-electric-shock-to-science-has-a-room-temperature-supercurrent-been-achieved/

Supercurrents seem like the stuff of science fiction. But they’re real: a supercurrent is what happens when particles move without any resistance, so they lose no energy. They’re usually only possible at very low temperatures, below -150°C, but now a group of scientists think they might have gone one step further and produced a supercurrent at room temperature.

According to the paper, which was published by an international team in Nature Physics, a supercurrent is “a macroscopic effect of a phase-induced collective motion of a quantum condensate”. Put simply, it’s a quantum effect seen on a much larger scale than the tiny world of quantum.

It is only possible to set up a supercurrent in a Bose-Einstein condensate (or BEC). These are collections of bosons which can be described by a single wavefunction, and include systems like charged particles moving in a superconductor, or particles moving in superfluid helium. The particles’ continuous motion isn’t maintained by a force (since no energy is lost, no force is needed), but by a gradient in the phase of the BEC’s wavefunction. Until now, supercurrents have only been set up at very low temperatures, because heat excites the particles into quantum states above the ground state. A system like that needs more than one wavefunction to describe it, so the BEC is destroyed.
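The link between phase gradient and flow can be written down directly: in a condensate, the flow velocity is v = (ħ/m)·dφ/dx. A sketch with illustrative numbers (the mass and phase values are mine, not from the paper):

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J s

def superflow_velocity(dphi_rad, dx_m, mass_kg):
    """Flow velocity from a wavefunction phase gradient: v = (hbar/m) * dphi/dx.
    No force appears anywhere - the gradient alone sets the particles moving."""
    return (HBAR / mass_kg) * dphi_rad / dx_m

# Illustrative numbers only: a phase change of pi radians across one
# micrometre, for a particle with the electron's mass.
v = superflow_velocity(math.pi, 1e-6, 9.11e-31)
print(v > 0)  # True: a phase gradient drives a flow; zero gradient, no flow
```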

In their research, the team used a condensate of magnons – a type of quasiparticle (packets, or quanta, of energy in a crystal lattice which can be treated as particles) – for the BEC. As had been done previously in 2006, they set up the BEC by injecting magnons into the ground state of a crystal film of yttrium iron garnet, using a process called parametric pumping, so that the concentration of magnons was high. Then, by shining a laser pulse at a point on the BEC, they created a temperature difference between that spot and the rest of the material. Heating the BEC causes its magnetic properties and, more importantly, its phase to change. This is what created the phase gradient in the wavefunction, crucial to the formation of the supercurrent.

The team observed the magnons moving away from the heated region, with a flow magnitude which increased when the temperature difference increased. They say that it is highly likely that this flow of particles is a supercurrent. Theoreticians in Israel and Ukraine have also constructed mathematical models of the observed magnon flow, and say a supercurrent is the only explanation.

Other scientists disagree though, saying that the phenomenon could be explained by less novel ideas. Demokritov, another researcher in the field, says that the parametric pumping may be adding energy to the system, which could mean that the magnons gain enough energy to overcome friction rather than moving without it. He says that his own team had better evidence for a supercurrent than this, but that they didn’t call it that because they couldn’t conclusively show that energy wasn’t being dissipated by friction.

If it is a supercurrent though, it could be used in devices using macroscopic quantum states, meaning big things for information storage and processing.

Magic material: the textiles which can heal themselves
Thu, 04 Aug 2016 – https://readaroundyoursubject.wordpress.com/2016/08/04/magic-material-the-textiles-which-can-heal-themselves/

How long does a piece of clothing last you before you have to throw it away and get a new one?

Clothing designers are beginning to borrow a technology already used in applications from mechanical engineering to buildings, to improve the lifetimes of clothing. In small electrical components and large buildings alike, fractures and cracks can be catastrophic. In many cases it is simply too hard to detect all of these problems, so scientists make the problems solve themselves! Using materials which can self-heal – that is, repair themselves with no external influence – can make designs last much longer and prevent many disasters.

Self-healing fabrics provide a way for clothing to become more durable, and increasingly offer more innovative functions. When a piece of clothing made from one of these fabrics is damaged, or even cut in half, it can ‘heal’, repairing the break. There are many types of self-healing fabric. Commonly, the material is made of polymers which can be repaired using an adhesive, either pumped to the break through a system of tubes, or stored inside microcapsules distributed around the material, which split open as the material breaks.

Last week scientists developed a material which can self-heal and protect the wearer from dangerous chemicals. The materials were made by coating an ordinary fabric with a series of liquids to create layers of polyelectrolyte coating (positively and negatively charged polymers), in this case based on the proteins that make up squid ring teeth. Each coating is less than a micron thick, so it barely affects the look of the garment, and could even be applied to the strands of fibre before the material is made.

The scientists found that they could add an enzyme between these layers. They added urease, an enzyme which breaks urea down into ammonia and carbon dioxide, and showed that the enzyme stopped the urea from reaching the skin of the wearer. By extension, any number of chemicals could be broken down by adding different enzymes – from the organophosphates present in herbicides and insecticides used in farming, to hazardous chemicals used in factories and in chemical warfare – stopping the chemicals from reaching and harming the skin.

Materials are also being created which, when broken, can restore their electrical properties. This is critical for safety, because damage to a material may change its resistivity and thermal conductivity, causing overheating. Scientists have added a dielectric insulator – nanosheets of two-dimensional boron nitride – to a plastic polymer base. When the material breaks, electrostatic attractions draw the broken pieces back together so that hydrogen bonds can be re-established, restoring the mechanical, thermal and electrical properties.

For now, the developments are limited to industry, such as in farming and factory work, and the teams are still looking for ways to make more conventional textiles self-healing. But who knows how soon you’ll be able to find all this, and more, in your wardrobe!

A lumpy model for a lumpy universe
Wed, 13 Jul 2016 – https://readaroundyoursubject.wordpress.com/2016/07/13/a-lumpy-model-for-a-lumpy-universe/

Computer models are used for all sorts of applications, from designing cars to predicting the weather. They are even used to simulate the whole universe, to work out how it was born, and what might happen next. Until now, the universe has been modelled using numerical simulations, which are quick and simple, but limited by the assumptions they make and their use of Newtonian gravity instead of Einstein’s general relativity.

A typical numerical simulation assumes that the universe is isotropic and homogeneous, meaning that all of its matter is distributed evenly throughout. This is true on a large scale, but on smaller scales the matter is gathered into clusters of galaxies and dark matter, and the rest of space is comparatively empty. This means that the expansion of the universe occurs at different rates in different places – regions dense with matter are pulled together by their own gravity and expand more slowly, while empty regions expand unhindered (28% faster than the average rate of expansion!).

Two sets of code have now been written independently, using this more realistic mass distribution and the full machinery of Einstein’s general relativity, to simulate how the universe evolved. One of the programs was written by Eloisa Bentivegna and Marco Bruni, from the University of Catania in Italy and the Institute of Cosmology and Gravitation at the University of Portsmouth respectively. It is already available to the public, and uses a collection of free software called the ‘Einstein Toolkit’. There is a backbone of software called ‘Cactus’, and modules called ‘thorns’ are added to perform specific tasks, like solving Einstein’s equations or calculating gravitational waves. As modules are added, new applications are created; by writing a module that prepares initial conditions for matter which is inhomogeneous on small scales but homogeneous on larger scales, the code could be used to make predictions about the whole universe. The second code was written by James Mertens and Glenn Starkman from Case Western Reserve University in Ohio and John T Giblin from Kenyon College. They will be releasing their code soon, and say that it performs better than the ‘Cactus’ code.
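The backbone-plus-thorns design is a classic plugin architecture. A toy sketch of that pattern – every name and signature here is invented for illustration, and is NOT the real Cactus/Einstein Toolkit API:

```python
# Toy sketch of the 'backbone plus modules' pattern described above.
# All names are hypothetical; the real Cactus framework works differently.
class Backbone:
    def __init__(self):
        self.thorns = {}

    def register(self, name, func):
        """Plug in a module (a 'thorn') that adds one capability."""
        self.thorns[name] = func

    def run(self, name, *args):
        return self.thorns[name](*args)

cactus = Backbone()
cactus.register("initial_conditions", lambda: "lumpy matter distribution")
cactus.register("evolve", lambda state: f"evolving {state} with general relativity")

state = cactus.run("initial_conditions")
print(cactus.run("evolve", state))
# prints: evolving lumpy matter distribution with general relativity
```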

The codes can confirm whether or not our interpretation of the structure of the universe and of cosmic expansion is correct. They are likely to be used in tandem, so that they can independently verify each other’s solutions when making important predictions. Numerical simulations won’t become obsolete for a while either – they are less accurate, but take far less computing power, so for the same resources they can cover more of the universe in more detail.

So how much does this change our perception of the universe? Many of our current assumptions might not hold in this ‘lumpy’ model. For example, cosmic distances are measured using ‘standard candles’, which rely on the propagation of light from supernovae, since we know their luminosities. Light may propagate differently through different parts of space, so the distances we think we know may all be wrong. Next, the calculated rate of expansion of the universe, described by the Hubble parameter, is also calculated assuming a homogeneous universe, and we now know that this value depends on the density of the region of space. Finally, the code rules out the ‘back reaction’ phenomenon, which was thought to explain the effects of dark energy. Time to rethink everything we think we know…

Helium shortage solution: we finally have a way to predict where to find supplies of helium
Wed, 06 Jul 2016 – https://readaroundyoursubject.wordpress.com/2016/07/06/helium-shortage-solution-we-finally-have-a-way-to-predict-where-to-find-supplies-of-helium/

More helium is used each year than we produce. That sounds strange, since helium is the second most abundant element in the universe. Unfortunately, though, it doesn’t stay on Earth long – it’s so light that it just floats away. We can only obtain it from reserves found by chance during oil and gas drilling, and supplies are being depleted fast.

This is a problem for all sorts of industries. Helium is spectacularly good at keeping things cool, and that’s useful everywhere from scientific research (to freeze out complicating factors in experiments) to spacecraft. It’s used to cool telescopes, the fuel used in the Apollo space vehicles, nuclear reactors, the Large Hadron Collider and, most prominently, MRI machines, which account for a fifth of global helium use. Helium’s low density also means it makes balloons float, from party balloons to enormous airships, and it can be used in the lasers that scan barcodes at supermarket checkouts, among many other applications.

As the known reserves of helium are depleted and privatised, helium prices fluctuate massively, and the price has gone up 500% in the last 15 years. Well, not for much longer, perhaps. Scientists from Durham and Oxford universities have teamed up with the Norwegian company Helium One to devise a method of predicting the locations of viable helium deposits. They’ve already found a reserve in the Rift Valley in Tanzania so large that it could meet global demand for several years (it could be as big as 54 billion cubic feet (Bcf), compared to a global yearly consumption of 8 Bcf).

Their new exploration method is based on the influence of volcanic activity on helium trapped deep in the Earth’s crust. The decay of radioactive substances like uranium emits alpha radiation, and an alpha particle is just a helium nucleus. Some of this helium gets trapped in cavities in the rock, and can be collected when gas is being extracted for other purposes. Volcanic activity produces heat intense enough to release helium from rocks much older and deeper down than we can reach; the helium then gets trapped in shallower gas fields, which we can access more easily. The team used seismic imaging and geochemical sampling to find areas where this may have happened.
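The scale of radiogenic helium production can be estimated from the decay chain: each uranium-238 atom that decays all the way to stable lead releases eight alpha particles, i.e. eight helium nuclei. A rough back-of-envelope sketch (all figures illustrative, not from the study):

```python
AVOGADRO = 6.022e23
HALF_LIFE_U238_YR = 4.468e9
ALPHAS_PER_CHAIN = 8  # U-238 -> Pb-206 emits eight alpha particles in total

def helium_litres(uranium_kg, age_yr):
    """Rough radiogenic helium yield, in litres at around standard conditions."""
    atoms = uranium_kg * 1000 / 238 * AVOGADRO          # U-238 atoms present
    decayed = atoms * (1 - 2 ** (-age_yr / HALF_LIFE_U238_YR))
    moles_he = ALPHAS_PER_CHAIN * decayed / AVOGADRO
    return moles_he * 22.4                              # litres per mole of gas

# Illustrative only: one kilogram of uranium locked in rock for a billion years
print(round(helium_litres(1.0, 1e9)))  # on the order of a hundred litres
```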

The next step is to ensure that the reserves aren’t too close to volcanoes, as that would mean that volcanic gases will be mixed in, contaminating and diluting the supply. The team is looking for what they call a ‘goldilocks zone’, where the supply isn’t diluted too much, but enough helium is released to make extracting it economical. Then their method can be applied to other parts of the world with similar geological history to find new resources.

Alive and dead and in two places at once: the mystery of Schrodinger’s cat
Wed, 29 Jun 2016 – https://readaroundyoursubject.wordpress.com/2016/06/29/alive-and-dead-and-in-two-places-at-once-the-mystery-of-schrodingers-cat/

We all know the story of Schrodinger’s cat – the unlucky beast that spends its days both living and dead in a box until someone lets it out. Well, now it can get out, sort of. Scientists have shown that the cat can be alive and dead, and can also be in two boxes at once.

Originally, the cat was the protagonist in a thought experiment used to probe the Copenhagen interpretation of quantum mechanics. The cat is sealed in a box along with a radioactive particle, which could decay at any time, and a vial of poison gas, which will break if the particle decays. Schrodinger argued that until we open the box and find out whether the cat is alive or dead, it is both, in what theoretical physicists like to call a “superposition of states”. In short, unobserved microscopic matter is in multiple states simultaneously until we take a measurement.

The scientists at Yale used an experiment not nearly as life-threatening as that one to show that the cat can be split in two. They connected two aluminium boxes, called microwave cavities, by a sapphire chip and an aluminium circuit, which acted as a switch that could be on or off. They introduced an electromagnetic wave oscillating in two opposite directions at once into the first box, connected the boxes, and then, once the switch was turned off and the cavities were no longer connected, counted the number of photons in each box. It sounds weird, but by showing that the total number of photons was always even, they showed that the boxes still behaved as if they were connected.

It sounds small, and maybe even obvious, but this is new information and could be used for everything from measuring the phase of light to improving quantum computers. In a quantum computer, numbers are stored as quantum bits, or qubits. A normal bit stores either a 0 or a 1, but a qubit can be in both of those states, and all states in between, at once. That makes quantum computers much faster than standard computers at many types of problem, particularly optimisation, because all possibilities can be explored simultaneously and the results compared. But qubits are delicate – they can be disturbed by the outside environment, and we can’t check them for errors without disturbing them ourselves. Conveniently, this is where the cat comes back in: if the qubits in the computer can be entangled with other qubits, those extra qubits could be used to correct the errors.
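The '0 and 1 at once' bookkeeping is just two amplitudes whose squared magnitudes sum to one. A minimal sketch of a single qubit (a description of the state, not a simulation of quantum hardware):

```python
def qubit(alpha, beta):
    """A normalised single-qubit state alpha|0> + beta|1>."""
    norm = (abs(alpha) ** 2 + abs(beta) ** 2) ** 0.5
    return (alpha / norm, beta / norm)

def prob_zero(state):
    """Probability of measuring 0: the squared magnitude of alpha."""
    return abs(state[0]) ** 2

# A classical bit is just 0 or 1; a qubit can weight both values at once:
equal = qubit(1, 1)                # equal weight on |0> and |1>
print(round(prob_zero(equal), 2))  # 0.5 - an even chance of reading 0
```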

Quantum computers may be the only way to continue to improve the power of computers without compromising the size. So good luck Tiddles, we’re relying on you…

A Super Computer for a Super-long Proof
Wed, 22 Jun 2016 – https://readaroundyoursubject.wordpress.com/2016/06/22/a-super-computer-for-a-super-long-proof/

Some maths is just too hard or complicated for humans. Some would just take too long. A new proof for the Boolean Pythagorean triples problem is too long even to be read by a human. At 200 TB (or 46,600 DVDs’ worth!) it is the longest mathematical proof ever – even the shortened version (still a hefty 68 GB) would take a computer 30,000 hours to process.
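The DVD comparison checks out if you take the 200 TB in binary units and a single-layer 4.7 GB disc – both my assumptions, not stated in the article:

```python
proof_bytes = 200 * 1024 ** 4   # 200 TB in binary units (1 TB = 1024^4 bytes)
dvd_bytes = 4.7e9               # capacity of a single-layer DVD
dvds = proof_bytes / dvd_bytes
print(round(dvds))              # roughly 46,800 - close to the figure quoted
```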

The Boolean Pythagorean triples problem asks whether it is possible to assign every natural number (positive integer) a colour, either blue or red, so that no Pythagorean triple has all three of its numbers the same colour. A Pythagorean triple is a set of three integers that satisfies the condition a² + b² = c² – the equation used for finding the lengths of the sides of a right-angled triangle.

The problem is harder than it looks. Some integers are part of many Pythagorean triples: for example, 5 is in both (3, 4, 5) and (5, 12, 13). If 5 is red in the first triple, it stays red in the second, and then either 12 or 13 (or both) must be blue. This continues into much higher numbers, and the scientists – Marijn J. H. Heule, Oliver Kullmann and Victor W. Marek – have shown that it eventually breaks down at 7,825.
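For small ranges the colouring problem can be checked directly. A naive backtracking sketch – nothing like the industrial SAT machinery actually used, and hopeless anywhere near 7,825:

```python
def pythagorean_triples(n):
    """All triples (a, b, c) with a <= b < c <= n and a^2 + b^2 = c^2."""
    roots = {i * i: i for i in range(1, n + 1)}
    return [(a, b, roots[a * a + b * b])
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            if a * a + b * b in roots]

def two_colourable(n):
    """Try to colour 1..n red/blue with no monochromatic Pythagorean triple,
    by naive backtracking. Only viable for small n."""
    triples = pythagorean_triples(n)
    colour = {}

    def solve(k):
        if k > n:
            return True
        for col in ("red", "blue"):
            colour[k] = col
            # only triples whose largest member is k become fully coloured now
            if all(len({colour[x] for x in t}) > 1
                   for t in triples if t[2] == k):
                if solve(k + 1):
                    return True
        del colour[k]
        return False

    return solve(1)

print(two_colourable(30))  # True: small ranges are easy; failure starts at 7,825
```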

They used the Cube-and-Conquer method, which splits the space of possible solutions into ‘cubes’ that can be searched in parallel. Running on the University of Texas’ Stampede supercomputer, 800 processors took two days to find the solution. There are 10^2,300 possible ways to colour the integers up to 7,825, and even after a load of mathematical tricks – from symmetry arguments to number theory techniques – there were still a trillion combinations to check.

Even now the problem has been solved, the solution doesn’t add much to our understanding, begging the question: is this a real proof? Previous computer proofs have later been verified by hand, improving our understanding. In this case, we still don’t understand why colouring the integers works up to 7,824 but fails at 7,825, so we can’t solve similar problems any more easily – meaning we’ve gained little more than a useless fact.