
Saturday, 12 September 2015

Einstein famously said that “The most incomprehensible thing about the world is that it is comprehensible.” [1] And that very discovery, that we inhabit a universe governed by unbreakable patterns or ‘laws’, is a triumph of the scientific enterprise. Unlike the laws of Australia or America, the laws of physics appear to be descriptions rather than rules; in that light, what would it mean to say that nature “obeys” or follows the “laws” we have written down in our textbooks?

This brings us to the ontological/metaphysical status of these ‘laws’. Since humans are pattern seekers who cannot help but anthropomorphise their surroundings, the question arises as to whether these principles are actually ‘written into’ the structure of reality, have an objective existence of their own (Platonism), or are simply human inventions/generalisations. To paraphrase Stephen Hawking, what is it that breathes fire into the equations? [2] Where do the laws of physics “come from”?

The concept of a ‘law’ owes its origins to geometry and theology. Geometry supplied the model of exact, necessary truths (Euclid’s axioms and postulates), while theology supplied the governor: it was God who employed such laws to direct the motions of bodies. However, as a naturalistic understanding of the world progressed, the role of God in governing the universe via such laws was largely abandoned (or made redundant). Nevertheless, the underlying metaphor of “governance” remained.

Broadly speaking, there are two competing metaphysical accounts of laws of physics; the former is a prescriptive view while the latter is descriptive:

1. The Governing View: the laws of physics are an intrinsic part of reality that govern and explain the evolution of physical systems. (Armstrong, Maudlin, Ellis, Vilenkin, Krauss)

2. The Summarising View: the laws of physics are certain theorems of the scientifically best systematic summary of the motions of particles, fields, etc. throughout spacetime (Mill, Ramsey, Lewis, Loewer, Carroll). Interestingly, this view also presupposes a 4D block-universe picture, on which the laws never "began to exist" and need not be "put in" or created as an initial condition.

Now that we have a metaphysical framework for understanding what the laws of Nature are and where they “come from” (namely the best-systems analysis, or BSA, corresponding to the summarising view above), a further query arises: why do the laws of physics take the form they do? For example, why do opposite charges attract, while like charges repel? Why is there an inverse-square law rather than an inverse-cube law? And so on…

The dynamical laws of nature at the microscopic level (including general relativity and the Standard Model of particle physics) are tightly constrained in the form that they may take, largely by symmetry principles such as gauge invariance and Lorentz invariance. The specific values of the numerical parameters of these theories are in principle arbitrary, although on naturalness grounds we would expect mass/energy scales to be roughly comparable to each other. [emphasis added]

Saturday, 26 July 2014

Welcome to Hasan's Thoughts Dissected. You will find a panoply of articles on a range of topics and questions concerning human existence, knowledge and empirical inquiry. Most posts are organised under the following major subcategories:

Click links below to view posts...

Evolution: Exploring the diversity and history of life on earth; the evidence for common ancestry, macroevolutionary transitions, phylogenetics and special topics such as the evolution of social behaviour.

Origin of Life: Piecing together a unified view of prebiotic chemistry; open problems in abiogenesis research and models of the origin of life including (but not limited to) the RNA world, the iron–sulphur world, the origin of biological information and other big ideas.

Cosmology: Probing the origins, evolution and large scale structure of the observable universe and beyond. Topics include the formation of structure, quantum gravity, general relativity, the cosmic microwave background, inflation and speculative theories of the universe.

Theoretical Physics: An inquiry into the fundamental nature of small-scale and high-energy physics: includes quantum mechanics, quantum field theory, string theory, supersymmetry and beyond. Investigation of particles, forces, symmetries and field theories with experimental and phenomenological implications.

Philosophy of Mind: Controversies regarding consciousness and mental states: a discussion of major schools of thought, physicalism vs non-physicalism, the hard problem of qualia, eliminativism, behaviourism, functionalism and more.

Philosophy of Religion: A critique of popular and scholarly arguments for the existence of God(s). Implications of the naturalism vs. theism debate and open problems in professional theology.

Psychology: An introduction to influential ideas in modern psychology and cognitive science; an encounter with major figures and thinkers. Review articles on mental development, intelligence, personality, sociology and mental disorders.

Pseudoscience: Debunking claims of creationists, the intelligent design community and popular hoaxes.

Friday, 25 July 2014

A central tenet of evolutionary biology is the notion of common ancestry. The theory of descent with modification ultimately connects all organisms to a single common ancestor. Humans, butterflies, lettuce, and bacteria all trace their lineages back to the same primordial stock. The crucial evidence for universal common ancestry includes homology.

Why Common Ancestry Matters

Common ancestry is the conceptual foundation upon which all of modern biology, including biomedical science, is built. Because we are descended from the same ancestral lineage as monkeys, mice, baker’s yeast, and even bacteria, we share with these organisms numerous homologies in the internal machinery of our cells. This is why studies of other organisms can teach us about ourselves.

Consider work on mice and yeast by Kriston McGary and colleagues (2010) in the lab of Edward Marcotte. The researchers knew that because mice and yeast are derived from a common ancestor, we find not only many of the same genes in both creatures, but many of the same groups of genes working together to carry out biological functions—what we might call gene teams. The scientists thus guessed that a good place to look for genes associated with mammalian diseases would be on mouse gene teams whose members are also teammates in yeast. Using a database of genes known to occur in both mice and yeast, McGary and colleagues first identified gene teams as sets of genes associated with a particular phenotype. In mice the phenotype might be a disease. In yeast it might be sensitivity to a particular drug. The researchers then looked for mouse and yeast gene teams with overlapping membership.

Among the pairs of overlapping teams they found was a mouse team of eight genes known to be involved in the development of blood vessels (angiogenesis) and a yeast team of 67 genes known to influence sensitivity to the drug lovastatin. These teams formed a pair because of the five genes that belonged to both. The connection between the two teams suggested that both might be larger than previously suspected, and that more than just five genes might play for both. In particular, the 62 genes from the yeast lovastatin team not already known to belong to the mouse angiogenesis team might, in fact, be members. Starting with this list of 62 candidates, the researchers conducted experiments in frogs revealing a role in angiogenesis for at least five of the genes. Three more genes on the list turned out to have been identified already as angiogenesis genes, but had not been flagged as such in the researchers’ database. Eight hits in 62 tries is a much higher success rate than would have been expected had the researchers simply chosen genes at random and tested their influence on angiogenesis. In other words, McGary and colleagues used genetic data from yeast, an organism with neither blood nor blood vessels, to identify genes in mammals that influence blood vessel growth. Researchers in Marcotte’s lab have since exploited the overlap between the yeast lovastatin team and the mouse angiogenesis team to identify an antifungal drug as an angiogenesis inhibitor that may be useful in treating cancer (Cha et al. 2012). That the theory of descent with modification is such a powerful research tool indicates that it has a thing or two going for it.

Thursday, 5 December 2013

In the beginning was the vacuum. And nature abhorred the vacuum, filling it with topology. The surprising connection between quantum field theory (QFT) and topology yields the instanton, a soliton and a cousin of the magnetic monopole. Consider the U(1) problem in QCD: if the strange, up and down quarks had identical mass, the Lagrangian would gain an extra U(3) symmetry, equivalent to a product SU(3)×U(1); removing all mass from the quarks generates an additional copy of U(3). The Eightfold Way of strongly interacting particles and the mesons generated by the spontaneous breaking account for the two SU(3) factors. One U(1) factor corresponds to conservation of baryon number, but the last remaining U(1) symmetry would demand particles that simply do not exist; the chiral symmetry is somehow spontaneously broken, but how? The solution is the instanton. But what topological features make instantons relevant to the QCD vacuum? Imagine a topologist mowing her lawn with an electric mower, facing complications as she tries to move the power cord around trees and shrubs: an intuitive analogy for homotopy. She gives each tree a ‘winding number’ of 1 for one circuit around the tree, 2 for two circuits, and so on. Topologically, a short path and a wide path around a tree are equivalent, hence homotopic, since one can be deformed into the other without cutting. But if she then moves the mower in a circular fashion back to her initial starting position, the path may look as though nothing has changed while differing in a subtle way: one extra wind around a tree. An analogous example of how the non-trivial topology of a vacuum can have physical effects is the Aharonov–Bohm effect: if an electron beam is split so that it passes both ways around a current-carrying solenoid and then recombines, the resulting interference pattern shifts with the enclosed flux.
Thus, the topology of a vacuum without a magnetic field is equivalent to a punctured plane (the puncture made by the solenoid). We can now add an extra gauge-invariant term (the instanton) to the Lagrangian; in QCD this is essentially a gluonic entity, a ripple in a gluon field. Take the initial and final field strengths: in between lies a local region of positive energy, and this is the instanton. Instantons have a ‘topological charge’ which explains their stability, much like an A4 sheet of paper taped at both ends to a coffee table with a single 180-degree twist induced before the ends are secured: the sheet remains twisted until someone cuts the tape, and that persistence is just like the topological charge.
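The winding number in the lawn-mowing analogy can be made concrete. The sketch below (purely illustrative) sums the signed angle a closed path sweeps around a point and divides by 2π; homotopic paths get the same integer, and paths with different winding numbers cannot be deformed into one another.

```python
import math

def winding_number(path, x0=0.0, y0=0.0):
    """Integer winding number of a closed polygonal path around (x0, y0).

    Sums the signed angle swept by consecutive points; the total divided
    by 2*pi is the invariant the lawn-mowing analogy assigns to circuits
    around a tree.
    """
    total = 0.0
    n = len(path)
    for i in range(n):
        x1, y1 = path[i]
        x2, y2 = path[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y0, x2 - x0)
        da = a2 - a1
        # unwrap the jump across the branch cut at +/- pi
        if da > math.pi:
            da -= 2 * math.pi
        elif da < -math.pi:
            da += 2 * math.pi
        total += da
    return round(total / (2 * math.pi))

# one anticlockwise circuit of the 'tree' at the origin, and a double circuit
circle = [(math.cos(2 * math.pi * t / 100), math.sin(2 * math.pi * t / 100))
          for t in range(100)]
double = [(math.cos(4 * math.pi * t / 100), math.sin(4 * math.pi * t / 100))
          for t in range(100)]
print(winding_number(circle))  # 1
print(winding_number(double))  # 2
```

A path that never encloses the point (say, the same circle shifted well away from the origin) comes out with winding number 0, which is the homotopy class of the trivial loop.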

In QFT, the initial and final state is the vacuum. We can envision an instanton as a sort of path that links initial and final states with different topological winding numbers, and since those winding numbers can take infinitely many values, the vacuum is not only a state of lowest energy but an aggregation of an infinite number of apparently identical yet topologically distinct vacua. The lawn-mowing analogy is again helpful: the power lead caught over trees and shrubs acts as a barrier to the mower’s movement. In field theory the analogue is an energy barrier, and instantons cross this barrier via quantum tunnelling, linking one distinct topological state to another (the mixture is measured by the θ-parameter). But how do instantons solve the U(1) problem? We could try to invoke a respectable particle to account for the symmetry breaking, such as the η meson; but it is a Goldstone boson, and the particle with the next mass up (the η′) is far too heavy. Instantons, like Goldilocks, supply just the right symmetry disturbance: a massless spiral of gluons that inverts right-handed quarks into left-handed ones. Such an inversion of handedness breaks the chiral symmetry and disposes of the additional U(1) without the need for new particles.

Saturday, 12 October 2013

The Web remains an untamed beast. Ever since its inception, routers and lines have been added continuously and without bounds, in an uncontrolled and decentralised manner; the very embodiment of digital anarchy. But is this network of networks inherently random? No. How then does order emerge from the entropy of millions of links and nodes? Let's examine the Internet and the Web in the light of network theory and statistical mechanics. The most fundamental qualitative feature of any network is its degree distribution. The degree of a node is the number of edges connected to it. Much of the Internet is an aggregation of low-degree nodes together with a few high-degree hubs. An intriguing pattern arises in the degree distribution of the Internet at large: it follows roughly a straight line on a log–log plot, implying that the proportion p(k) of nodes with degree k obeys a power law p(k) ∝ k^−a. The present value of a for the Internet is around 2.2. If the edges of a network were placed between nodes arbitrarily, the resulting degrees would instead obey a Poisson distribution (in which the majority of nodes have degrees fairly close to the mean value and high-degree hubs are absent), much like the Erdős–Rényi random graph. The fact that the Internet follows a power law makes it far from random, and hence 'scale-free'. Citation networks, where nodes symbolise papers and edges represent citations of one paper by another, are also scale-free. So why do the Web and the Internet both tend to form similar scale-free networks? Conventional graph theory assumes that the number of nodes in a network is static and that links are randomly distributed.
Such assumptions fail given that the Internet continually evolves with new routers and the Web with new pages, and that real networks feature 'preferential attachment' (new nodes have a high probability of forming connections with nodes that already have many links). Now let's imagine that some nodes in a network are abruptly removed or disappear. Around 3% of Internet routers fail at any given time, so what percentage of nodes would need to be removed to affect network performance? We can perform one test by removing nodes uniformly at random, and another by deliberately removing the nodes with the highest degree. It turns out that for a scale-free network, random node removal has little to no effect, whereas targeting the hubs can be destructive.
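Both claims, the emergence of hubs from growth plus preferential attachment and the fragility to targeted attack, can be sketched in a toy simulation (a minimal Barabási–Albert-style model, not real Internet data):

```python
import random
from collections import defaultdict, deque

random.seed(1)

def preferential_attachment(n, m=2):
    """Grow a graph where each new node attaches up to m edges to existing
    nodes chosen with probability proportional to their current degree."""
    adj = defaultdict(set)
    adj[0].add(1); adj[1].add(0)
    ends = [0, 1]                      # each node appears once per edge end
    for new in range(2, n):
        for t in {random.choice(ends) for _ in range(m)}:
            adj[new].add(t); adj[t].add(new)
            ends += [new, t]
    return adj

adj = preferential_attachment(3000)
deg = sorted((len(nbrs) for nbrs in adj.values()), reverse=True)
print("max degree:", deg[0], " median degree:", deg[len(deg) // 2])

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        q, size = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft(); size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); q.append(v)
        best = max(best, size)
    return best

nodes = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
k = len(nodes) // 10                   # delete 10% of nodes
print("targeted removal ->", largest_component(adj, nodes[:k]))
print("random removal   ->", largest_component(adj, random.sample(nodes, k)))
```

On a typical run the hub degrees dwarf the median (a heavy-tailed distribution, unlike the Poisson case), and deleting the top 10% of hubs shrinks the giant component far more than deleting a random 10% of nodes.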

The concept of 'six degrees of separation', proposed by Stanley Milgram, suggests that anyone in the world can be connected to anyone else by a chain of five or six acquaintances. Does the Internet follow this trend seen in social networks (small separation of nodes and a high degree of clustering)? Since we don't have a complete copy of the entire Web (even search engines cover only around 16% of it), we can use a small finite sample to make an inference about the whole. Using 'finite size scaling', one can estimate the mean shortest distance between two nodes (the number of clicks to get from one page to another). Given the roughly 1 billion nodes that make up the Web, this brings the 'small world' effect to about 19 'clicks of separation'. Not all pairs of nodes can be connected, however, because the Web is a directed network; a link leading from one page to another does not mean an inverse link exists, so a path of 19 clicks is not guaranteed. In most complex networks, nodes compete for links. We can model this by giving each node a 'fitness factor' which quantifies its ability to compete; energy levels can then be assigned to each node to produce a Bose gas (its lowest energy level representing the fittest node). The Bose gas evolves with time, adding new energy levels, which corresponds to the addition of new nodes to the network. Two different outcomes can arise depending on the distribution of fitnesses: (1) 'fit get rich', where higher energy levels are progressively less populated; and (2) Bose–Einstein condensation, where the fittest node gains a large percentage of all links, manifesting as a highly populated lowest energy level. Perhaps the Web is just another Bose–Einstein condensate?
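The logarithmic scaling behind estimates like '19 clicks' can be seen in a toy model (a ring with random shortcuts standing in for the Web, which it emphatically is not): multiply the number of nodes by ten and the mean click-distance grows only by a small additive amount.

```python
import math
import random
from collections import deque

random.seed(3)

def mean_shortest_path(n, k=6):
    """Average BFS distance in a sparse graph with roughly k edges per node:
    a connectivity-guaranteeing ring plus random long-range shortcuts."""
    adj = [set() for _ in range(n)]
    for i in range(n):                              # the ring
        adj[i].add((i + 1) % n); adj[(i + 1) % n].add(i)
    for _ in range(n * (k - 2) // 2):               # the shortcuts
        u, v = random.randrange(n), random.randrange(n)
        if u != v:
            adj[u].add(v); adj[v].add(u)
    total, count = 0, 0
    for s in random.sample(range(n), 30):           # sample sources for speed
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values()); count += len(dist) - 1
    return total / count

for n in (1000, 10000):
    print(n, "nodes -> mean distance", round(mean_shortest_path(n), 2),
          "(log n =", str(round(math.log(n), 2)) + ")")
```

The mean distance tracks log n rather than n, which is the 'small world' effect: a tenfold-larger network costs only a couple of extra clicks.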

Thursday, 3 October 2013

Homology is 'a word ripe for burning'. But how should it be defined? Superficially, it is often identified as similarity in morphology reflecting a common evolutionary origin; but can we give a more rigorous account? Like 'species', the many definitions of homology fall into two basic forms: developmental and taxic. The developmental approach is based on ontogeny: two characters are homologous if they share an identical set of developmental constraints. The taxic definition is based on cladistics and identifies a homologue as a synapomorphy (a trait that characterises a monophyletic group). Some complications arise with structural homology. For instance, the wings of bats and birds can be considered convergent, as they are differently arranged (and lack common ancestry as wings); yet they can be considered homologous at the level of the organism (because both evolved from the same pattern of vertebrate forelimb, traceable to a common ancestor). Circular reasoning also threatens structural homology, in that homology is used to build a phylogeny and that phylogeny is subsequently used to infer homology (notice the circularity); a phylogeny must first be constructed on independent evidence before homology is proposed. What about evo-devo? This too is unhelpful for a working definition of homology: different pathways of development can converge on the same adult form, as with the various methods of gastrulation and the many routes of developmental regeneration in the hydroid Tubularia. Even embryonic developmental origin is of little use, since it depends on subsequent interactions between cells and fails to give a conserved adult morphology. Molecular markers such as genes succumb to hierarchical disconnect (whereby homologous characters produce non-homologous traits). A classic example is the gene PAX6 in eye development, which is found and transcribed in species as diverse as insects, humans, squids and even the primitive-eyed nemertines and platyhelminths.

Grafting experiments that express Drosophila PAX6 in Drosophila limbs or wings can place eyes in incorrect positions; and when mouse PAX6 is inserted into Drosophila, the ectopic eyes that form are fly-like rather than mouse-like. These grafting tests indicate that the adjustability for change lies not in the genes themselves but in the regulatory network of genes that controls expression. The need to redefine homology at different hierarchical levels is also indicated by other characters. For a long time, arthropod compound eyes had been thought to have evolved quite independently of the vertebrate simple eye; this now seems improbable given the immense similarity between cephalopod and vertebrate eyes (previously attributed to convergence). In essence, the gene starting eye formation is homologous but its expression is not necessarily homologous. Hierarchical disconnect in the form of non-homologous traits producing homologous characters is also observed. With the exception of urodele amphibians, all tetrapods develop tissue between their primordial digits which later undergoes apoptosis. But in newts and salamanders there is no such apoptosis, and the digits take a separate developmental pathway. The evolutionary hypothesis is that salamanders and newts (or one of their ancestral species) lost interdigital apoptosis, and that their differential growth is a derived process. Novel genes swapped in for older ones can also produce the same homologous morphology (co-option of genes during evolution for very distinct functions).

Saturday, 28 September 2013

Mercury is a periodic anomaly: it is liquid at room temperature, but why? Traditionally, the answer has been its low melting point, but what explains that? The marriage of quantum chemistry and relativity allows us to demystify many deviations of the periodic table; let's begin by comparing some of mercury's next-door neighbours. Au and Hg are similar and distinct in many ways, with melting points of 1,064 °C and −38.83 °C, and densities of 19.32 and 19.30 g·cm−3, respectively. Their entropies of fusion are quite similar, in contrast to their enthalpies. As for crystalline structure: Au, Ag and Cu are cubic while Zn and Cd are hexagonal; Hg is rhombohedrally distorted. Finally, Hg is a poor conductor with weak metal–metal bonding, unlike Au, despite their similar electron configurations. Looking beyond the rare earth elements, further surprising periodic deviations arise; Hf and Zr bear an uncanny resemblance. To explain this phenomenon, the lanthanide contraction is invoked: the 4f orbitals, unlike s, p or d orbitals, shield the nuclear charge poorly. Moving across the rare earths adds 14 protons, but the 14 4f electrons only partially screen the outer electrons from this extra charge, causing the electron cloud to contract. But other questions remain unanswered by the lanthanide contraction: (1) why is Au coloured gold rather than silver? (2) what is the reason for the high electron affinity of Au? One may be tempted to invoke an inert 6s pair, but this fails to address the liquid nature of mercury. Relativity dictates that the mass of an object increases with its velocity, and from this we can derive three main relativistic effects relevant to Hg and Au.

Firstly, the s1/2 and p1/2 orbitals contract strongly, while the p3/2 orbital contracts to a lesser degree. Secondly, this contraction screens the nucleus more effectively and so pushes the d and f orbitals outward (relative to the s and p orbitals). And thirdly, the relativistic splitting of the p, d and f orbital energies manifests itself as spin–orbit coupling. These three effects cause the energy gap between the 5d5/2 and 6s1/2 orbitals to shrink. More importantly, we can now explain the colours of Au and Ag: the colour of Au is caused by the absorption of blue light, which excites 5d electrons to the 6s level, whereas Ag absorbs only in the ultraviolet and so appears colourless. The relativistically contracted 6s orbital in Hg is filled and hence, unlike in Au, the two 6s electrons play little role in metal–metal bonding, which is why mercury is liquid at room temperature.
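A back-of-the-envelope sketch shows why relativity matters so much in this row of the periodic table. For a hydrogen-like 1s electron, the mean speed is roughly Zα times the speed of light (a standard estimate, not a full Dirac–Fock calculation), so mercury's innermost electrons are distinctly relativistic:

```python
import math

Z = 80                # atomic number of mercury
alpha = 1 / 137.036   # fine-structure constant

# hydrogen-like estimate: <v>/c ~ Z * alpha for the 1s electron
v = Z * alpha                      # as a fraction of c
gamma = 1 / math.sqrt(1 - v**2)    # relativistic mass factor

print(f"1s electron speed  ~ {v:.2f} c")
print(f"mass factor gamma  ~ {gamma:.2f}")
print(f"radius contraction ~ {(1 - 1/gamma) * 100:.0f}%")
```

Since the effective Bohr radius scales inversely with the electron mass, a gamma of about 1.2 contracts the innermost (and, through orthogonality, the outer s) orbitals by roughly 20%, which is the qualitative origin of the 6s contraction discussed above.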

Sunday, 8 September 2013

Are Americans, archaea, amoebae and bacteria all genetically related? The notion of a 'universal common ancestor' (UCA) is central to evolutionary biology. Most of the traditional arguments have been confined to 'local' common ancestry (of particular phyla) rather than the totality of life; let's overcome this assumption and test the hypothesis using probability theory and phylogenetics. The problem of UCA has been compounded by horizontal gene transfer (transduction, transformation and conjugation), where early genetic material passed between entirely different species is thought by some to challenge the 'tree of life' pattern by creating reticulated lineages. The qualitative evidence for UCA spans the congruence of biogeography and phylogeny, the mutual agreement between the fossil record and phylogeny, the nested hierarchy of forms, and the correspondence between morphology and molecular genetics. Such arguments boil down to two premises: (1) the nearly universal nature of the genetic code and (2) critical similarities at the molecular level (L-amino acids, fundamental polymers, metabolic intermediates). Since these arguments are merely qualitative, they do not conclusively rule out the possibility of multiple independent ancestors. We can examine UCA quantitatively by model selection theory (without presuming that sequence similarity implies a genealogical relationship), using a set of highly conserved proteins, and we can base our test on Bayesian and likelihoodist probability (as opposed to classical frequentist null hypotheses). Keep in mind that while sequence similarity is the most likely consequence of common ancestry, similarity alone is not enough to establish homology (it may be due to convergence). It is the nested, hierarchical relationship between sequences that licenses the inference of common ancestry; similar sequences that instead produced a conflicting phylogenetic structure would force the conclusion of uncommon descent.

So are the three superkingdoms of life (archaea, bacteria, eukarya) united by a common ancestor? Douglas Theobald recently performed a test in which evolutionary networks (or trees) were built around the sequences of 23 proteins conserved across the three domains, and the probability values for a range of ancestry hypotheses were then contrasted. But does this imply that life originated only once, around 3.5 billion years ago? Not at all! It implies only that one of the primordial (original) forms of life has extant descendants; life could have arisen more than once, but the conclusion requires that all extant life share at least one common ancestor: a last universal common ancestor (LUCA). A problem, however, is that a phylogenetic tree can be built on virtually any set of data; we need to demonstrate agreement between trees across different datasets. And since such agreement can also be explained in terms of other biological processes, the Akaike Information Criterion (AIC) may be applied to compare and contrast a range of hypotheses. So what signature feature of sequence data allows us to give quantitative evidence for UCA? In a nutshell, the site-specific correlations in amino acids across a range of species; such correlations fade away as we go back in time through a lineage and species converge, but with enough data the progressive accumulation of correlations becomes statistically significant. On the other hand, if a pair of extant species have absolutely distinct origins, the site-specific amino-acid correlations between the two species disappear.
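The AIC machinery mentioned above can be illustrated on a deliberately tiny toy problem. This is not Theobald's actual computation; the 'sequences' and the two competing models below are invented purely to show the mechanic: AIC = 2k − 2 ln L penalises a model for its parameter count k, so a common-source model wins when the extra parameters of an independent-sources model buy no extra fit.

```python
import math

def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bernoulli_loglik(data, p):
    """Log-likelihood of binary data under a coin with bias p."""
    return sum(math.log(p) if x else math.log(1 - p) for x in data)

# toy data: two binary 'sequences' (hypothetical, for illustration only)
seq_a = [1, 1, 0, 1, 1, 1, 0, 1]
seq_b = [1, 0, 1, 1, 1, 1, 1, 0]

# model 1: common source, one shared bias (k = 1 parameter)
p_shared = sum(seq_a + seq_b) / len(seq_a + seq_b)
ll1 = bernoulli_loglik(seq_a + seq_b, p_shared)

# model 2: independent sources, separate biases (k = 2 parameters)
p_a = sum(seq_a) / len(seq_a)
p_b = sum(seq_b) / len(seq_b)
ll2 = bernoulli_loglik(seq_a, p_a) + bernoulli_loglik(seq_b, p_b)

print("AIC common source     :", round(aic(1, ll1), 2))
print("AIC independent sources:", round(aic(2, ll2), 2))
```

Here both sequences happen to have the same maximum-likelihood bias, so the fits are identical and the independent-sources model loses by exactly its parameter penalty; real sequence-based tests work with vastly richer likelihoods, but the comparison logic is the same.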

Friday, 6 September 2013

This carbon flatland is one wonder material. Graphene is a two-dimensional sheet of crystalline carbon just one atom thick; it is the 'mother' of all carbon-based structures: the graphite in pencils, carbon nanotubes and even buckminsterfullerene. The behaviour of electrons in its honeycomb lattice as massless Dirac fermions gives graphene its unique properties. One signature effect is its distinctive Hall effect. In the original Hall effect, an electric current in the presence of a transverse magnetic field develops a potential difference perpendicular to both the current and the field. Near absolute zero, the Hall resistivity (the ratio of this potential difference to the current flowing) in a 2-D electron gas becomes discrete (quantised), taking values h/ne² for integer n. But in graphene, a Berry phase of π shifts the quantisation to half-integer values: rotate the wave-function of graphene's Dirac fermions through a full circle and, lacking the usual symmetry, the state ends up in a different phase from the one it began with. Moreover, the quantum Hall effect in graphene can occur at room temperature and can distinguish between single and double layers (thanks to the large cyclotron energy of the electrons). Graphene could also give insight into relativistic effects on the bench-top: since Dirac fermions in graphene travel some 300 times slower than light, graphene has a much larger effective fine-structure constant (around 2). Zitterbewegung (the jerky motion that arises when it is impossible to localise the wave function of a relativistic particle) is yet another frontier for graphene: the path of a relativistic electron jitters when it interacts with a positron. This motion occurs too quickly to be observed directly in materials like solids, but when Dirac fermions are confined to graphene sheets it can be interpreted as a mixing of states.
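A quick numerical check shows why graphene's quantum Hall effect survives at room temperature. For massless Dirac fermions the Landau levels follow E_n = v_F√(2ħeBn) (the standard graphene result; the Fermi velocity of 10⁶ m/s used below is the commonly quoted approximate value):

```python
import math

hbar = 1.0546e-34    # reduced Planck constant, J s
e = 1.602e-19        # elementary charge, C
k_B = 1.381e-23      # Boltzmann constant, J/K
v_F = 1.0e6          # graphene Fermi velocity, m/s (~c/300)

def landau_level(n, B):
    """Energy of the n-th Landau level for massless Dirac fermions:
    E_n = v_F * sqrt(2 * hbar * e * B * n)."""
    return v_F * math.sqrt(2 * hbar * e * B * n)

E1 = landau_level(1, 10.0)    # first Landau level at B = 10 T
kT_room = k_B * 300           # thermal energy at 300 K

print(f"E_1 at 10 T  = {E1 / e * 1000:.0f} meV")
print(f"kT at 300 K  = {kT_room / e * 1000:.1f} meV")
```

The first level sits around a hundred meV, several times the ~26 meV of room-temperature thermal energy, so the level spacing is not washed out by heat, unlike in conventional 2-D electron gases, whose cyclotron gaps are far smaller.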

The Klein paradox in QED concerns relativistic particles meeting a potential barrier. Non-relativistically, the probability that an electron tunnels through decreases at an exponential rate with the height of the barrier; paradoxically, for relativistic particles the transmission probability increases with barrier height (since a potential barrier that acts to repel electrons will also attract positrons). Chiral symmetry breaking may also be illuminated by graphene: in graphene the right- and left-handed fermions behave the same, unlike neutrinos, which are strictly left-handed. But graphene is too conductive for some purposes, and to lower its conductivity we can take advantage of carbon's adaptability. In diamond, each carbon is bound to four others (involving all its electrons); in graphene, one electron per atom is left over, making it a good conductor. The most basic remedy is to add hydrogen (just like the conversion of ethene to ethane), turning graphene into graphane. The σ-electrons that bind carbon atoms in graphene form a band structure with an energy gap between the highest occupied and lowest vacant states, but the delocalised π-electrons cause fully occupied and vacant bands to touch one another. In graphane, the π-electrons are strongly attached to hydrogen atoms, opening an energy gap between the lowest vacant band and the highest occupied band. Conveniently, annealing causes the hydrogen to disperse, leaving the graphene backbone whole.

Sunday, 25 August 2013

We have all heard of conductors and insulators, and some of us are more familiar with magnets, semiconductors or even superconductors; all are manifestations of electronic band structure. But what about the topological insulator? Topological insulators conduct on the outside but insulate on the inside, much like a plastic wire wrapped in a metallic layer. Weirder still, they create a 'spin current': the conducting electrons sort themselves into spin-down electrons moving in one direction and spin-up electrons moving in the other. The topological insulator is an exotic state resulting from quantum mechanics: the spin–orbit interaction combined with invariance (symmetry) under time reversal. What's more, the topological insulator has topologically protected surface states which are immune to impurities. So how can we understand this 'new' physics? The insulating state has a conductivity of exactly zero near absolute zero, owing to the energy gap segregating the vacant and occupied electron states. The quantum Hall state (QHS) near absolute zero has a quantised Hall conductance (the ratio of current to the voltage orthogonal to the current flow). Unlike other materials such as ferromagnets, whose order arises from a broken symmetry, topologically ordered states are distinguished by wound-up quantum states of electrons (and this protects the surface state). The QHS, the most basic topologically ordered state, occurs when electrons trapped at a 2-D interface between a pair of semiconductors encounter a strong magnetic field. This field subjects the electrons to an orthogonal Lorentz force, making them move around in circles (like electrons confined to an atom). Quantum mechanics replaces these circular orbits with discrete energies, causing an energy gap to segregate the vacant and occupied states, as in an insulator.

However, at the boundary of the interface, the electron's circular motion can rebound off the edge, creating so-called 'skipping orbits'. At the quantum scale, such skipping orbits create electronic states that spread along the boundary in a one-way manner with energies that are not discrete; this state can conduct owing to the lack of an energy gap. In addition, the one-directional flow creates perfect electric transport: electrons have no option but to move forward, because there are no backward-moving modes. Dissipationless transport emerges because the electrons do not scatter, and hence no energy or work is lost (this also explains the quantised transport). But topological insulators occur without a magnetic field; unlike in the quantum Hall effect, the job of the magnetic field is taken over by spin–orbit coupling (the interplay between the electron's orbital motion through space and its spin). Atoms with high atomic numbers host relativistic electrons and thus produce strong spin–orbit forces, so any particle experiences a strong spin-dependent force that plays the part of the magnetic field (when the spin flips, the force reverses direction). This comparison between a spin-dependent 'magnetic field' and spin–orbit coupling allows us to introduce the most basic 2-D topological insulator: the quantum spin Hall state. It occurs when the spin-up and spin-down electrons experience equal but opposite 'magnetic fields'.

Just as in a regular insulator there is an energy gap, but there are edge states in which the spin-up and spin-down electrons propagate in opposite directions. Time-reversal invariance exchanges both the direction of spin and the direction of propagation, hence it swaps the two counter-propagating modes. The 3-D topological insulator, however, can't be explained by a spin-dependent magnetic field. The surface state of a 3-D topological insulator lets electrons move in any direction, but the direction of motion determines the spin direction. The relation between momentum and energy has the Dirac-cone structure familiar from graphene.
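That Dirac-cone dispersion is just a linear relation, E = ħ·v_F·|k|, which we can sketch numerically. The Fermi velocity below is an assumed order-of-magnitude value I have chosen for illustration, not a number from the text.

```python
# Linear (Dirac-cone) dispersion of the surface state: E = hbar * v_F * |k|.
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19   # elementary charge, C (for J -> eV conversion)
V_FERMI = 5.0e5              # ASSUMED surface Fermi velocity, m/s (illustrative)

def dirac_energy_eV(k: float) -> float:
    """Surface-state energy at wavevector magnitude k (in 1/m), in eV."""
    return HBAR * V_FERMI * k / E_CHARGE

k = 1.0e8  # 0.1 nm^-1
print(f"E(k = 0.1 nm^-1) = {dirac_energy_eV(k) * 1e3:.1f} meV")
# The cone is linear: doubling k doubles the energy, unlike the
# quadratic E ~ k^2 dispersion near an ordinary band edge.
print(f"E(2k) / E(k) = {dirac_energy_eV(2 * k) / dirac_energy_eV(k):.1f}")
```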

Wednesday, 21 August 2013

Like ripples through a rubber sheet, they squeeze and stretch spacetime and move outwards at the speed of light. Gravitational waves are still up for grabs: an exotic prediction of general relativity yet to be directly observed, but one with profound implications for cosmology and astrophysics. Picture a star in a relativistic orbit around a supermassive black hole: it may continue so for thousands of years, but never forever. Even neglecting drag due to gas, the orbit would gradually lose energy until the star spiralled into the hole; the reason for this plunge is the emission of gravitational radiation. We know that if the shape or size of an object is altered, so is the gravity surrounding it; Newton realised the sphere was an exception, since the gravitational field outside it is invariant (remains the same) if the sphere merely expands or contracts. Changes in the gravitational field can't spread out instantly, because that would convey information about the shape and size of an object at superluminal speeds (which is forbidden by relativity). If the Sun were somehow to alter its shape, and with it the gravitational field around it, about 8 minutes would elapse before the effect was 'felt' on the Earth; at very large distances this change manifests as radiation (a wave of changing gravity) moving away from its source. This is analogous to the way fluctuations in an electric field produce electromagnetic waves (a rotating bar with charged ends produces a changing electric field: the field when the bar is end-on differs from when it is sideways-on). But there are two main distinctions between gravitational and electromagnetic waves. Firstly, gravitational waves are extremely weak unless very large masses are involved. Diatomic molecules are great emitters of electromagnetic radiation but terrible emitters of gravitational waves.
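The 8-minute figure quoted above is just the Sun-Earth distance divided by the speed of light, as this one-liner confirms:

```python
# A change in the Sun's gravitational field propagates at c, so the delay
# before Earth 'feels' it is the Sun-Earth distance divided by c.
AU = 1.495978707e11      # mean Sun-Earth distance, m
C_LIGHT = 2.99792458e8   # speed of light, m/s

def gravity_delay_minutes(distance_m: float) -> float:
    """Time for a field change to propagate a given distance, in minutes."""
    return distance_m / C_LIGHT / 60.0

print(f"Sun -> Earth delay: {gravity_delay_minutes(AU):.2f} minutes")
```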
Because there is no such thing as negative mass (negative gravitational charge) to neutralise (cancel out) positive ones, as there is in electricity, gravity wins out over electromagnetism on large scales. But this lack of negative gravitational charge, while giving gravity that advantage, carries an irony: it weakens an object's capacity to produce gravitational radiation. Which brings us to the second difference between gravitational and electromagnetic waves:

The most efficient way of making electromagnetic radiation is for the 'centre of electric charge' to wobble relative to the centre of mass. Dipole radiation is an example: a spinning bar with one end positively charged and the other negatively charged. But the Equivalence Principle (which dictates that gravitation is indistinguishable from acceleration, much as a rising lift makes you feel heavier while a descending one makes you feel lighter) also implies that everything exerts a gravitational force proportional to its inertial mass, so at any point in spacetime all bodies experience the same gravitational acceleration. In plain English: the 'centre of gravitational charge' is simply the centre of mass, and since the former can't wobble relative to the latter, dipole gravitational radiation can't exist. The gravitational analogue of the spinning bar is one with positive charges at both ends, so that the 'centre of charge' stays fixed at the centre; only a small amount of radiation is produced, owing to the quadrupole moment (the only quantity that changes: it describes the distribution of shape and charge). Because of gravitational radiation, a binary system loses energy and its orbital period shrinks progressively, causing the component stars to coalesce; when two black holes meet, their event horizons combine into a larger one which, in accordance with the 'no hair' theorems, settles into a state described by the Kerr metric (the hole has only mass and spin).
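Just how feeble this quadrupole radiation is can be seen from the standard quadrupole formula for a circular binary, P = (32/5)·G⁴·(m₁m₂)²·(m₁+m₂)/(c⁵a⁵) (a textbook result, not derived in the text above). Applied to the Earth-Sun system it gives a luminosity of only a couple of hundred watts:

```python
# Power radiated in gravitational waves by two masses in a circular orbit,
# from the standard quadrupole formula.
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C_LIGHT = 2.99792458e8  # speed of light, m/s

def gw_power(m1: float, m2: float, a: float) -> float:
    """Gravitational-wave luminosity (W) of masses m1, m2 (kg) separated by a (m)."""
    return (32.0 / 5.0) * G**4 * (m1 * m2)**2 * (m1 + m2) / (C_LIGHT**5 * a**5)

m_sun, m_earth, au = 1.989e30, 5.972e24, 1.496e11
# The whole Earth-Sun system radiates about as much power as a couple of
# light bulbs -- hence the difficulty of detecting gravitational waves.
print(f"Earth-Sun GW power: {gw_power(m_sun, m_earth, au):.0f} W")
```

The steep a⁻⁵ dependence is why only very compact binaries, like merging neutron stars or black holes, radiate strongly enough to shrink their orbits appreciably.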

But the detection of such gravitational radiation (or waves) is causing a stir; it is among the last great untested predictions of Einstein's theory. Any object in the path of a gravitational wave experiences a tidal gravitational force acting transverse (perpendicular) to the wave's direction of travel. If a circular hoop intercepts a gravitational wave head-on, it will be distorted into an ellipse. In Louisiana, the LIGO detector uses laser interferometry: a laser beam is divided and reflected off mirrors attached to masses at the ends of two perpendicular arms, kilometres long (an L shape). If a gravitational wave were to arrive, it would cause the two arm lengths, X and Y, to change. To be continued...
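To get a feel for the scale of that change in X and Y: the strain (fractional length change) from a typical astrophysical source is often quoted at around 10⁻²¹, and LIGO's arms are 4 km long. Both numbers here are my assumed order-of-magnitude inputs, and polarisation factors of ~2 are ignored:

```python
# Order-of-magnitude sketch: a passing wave of strain h changes an
# interferometer arm of length L by roughly dL = h * L.
ARM_LENGTH = 4.0e3   # LIGO arm length, m
STRAIN = 1.0e-21     # ASSUMED typical strain amplitude (order of magnitude)

def arm_length_change(h: float, arm: float = ARM_LENGTH) -> float:
    """Approximate change in arm length (m) produced by strain h."""
    return h * arm

dl = arm_length_change(STRAIN)
proton_radius = 0.8e-15  # m, for scale
print(f"dL = {dl:.1e} m, i.e. {dl / proton_radius:.0e} of a proton radius")
```

A displacement thousands of times smaller than a proton is what the interferometer must resolve, which is why the laser measurement has to be so exquisitely sensitive.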

Sunday, 18 August 2013

The X and Y chromosomes are an odd couple. The Y reads like the rule-breaker of human genetics: most of it refuses to recombine, more than half of it consists of tandem repeats of satellite DNA, and it isn't a prerequisite for life (females don't have or need one). So why bother with a chromosome that tells us about only 50% of the population (assuming a 1:1 sex ratio)? Because it passes directly from father to son, its sex-determining role means it is specific to males and haploid. It contains vast numbers of unique SNPs, along with some notable exceptions: two pseudoautosomal regions that do recombine with the X, as well as euchromatin sequences (which are loosened during interphase). Largely escaping recombination, the Y bequeaths haplotypes that pass down a robust phylogeny (changing only via mutation) and can be used to trace back to the most recent patrilineal ancestor, Y-chromosomal Adam. The sex-determining role originates with a gene called SRY (sex-determining region Y), derived from SOX3, which encodes a protein that triggers the formation of the testes. We can infer that the sex chromosomes started off as a matched pair (thanks to the identical telomeric sequences at their tips, which can engage in recombination); during meiosis (the process of gamete formation) the homologous chromosomes align and exchange segments, subsequently sending a copy of each autosome and one sex chromosome to each cell. Another indication that the Y and X were once alike comes from the non-recombining regions of the Y: most genes there have corresponding counterparts on the X. What makes the Y chromosome an evolutionary curiosity is that its profound lack of recombination makes it more prone to accumulating mutations and hence to decay; something must have happened to cease the exchange of DNA between the X and Y.
The Y forfeited its ability to exchange DNA with the X in discrete stages; first, a strip of DNA flanking the SRY gene spread down the chromosome. But only the Y decayed in response to the loss of X-Y recombination, in contrast to the X, which in females still undergoes recombination when its two copies meet during meiosis. So what could explain the interruption of recombination between the X and Y?

As the early Y tended to exchange segments, a portion of its DNA underwent an inversion (effectively turning the sequence upside down) relative to the X; since a prerequisite for recombination is that analogous sequences are aligned, any inversion would prevent interaction between the two regions. Comparative genomics reveals that the autosomal precursors of the X and Y were still unbroken (intact) in reptilian species before the mammalian lineage began. Monotremes like the platypus were among the earliest mammals to speciate, and the SRY gene dates back some 300 million years. X-inactivation followed (in which the cells of a female embryo arbitrarily shut down most of the genes on one of the two X chromosomes) to compensate for the degeneration. If we reduce the whole human population to two people (one man and one woman), together this couple carries four copies of each autosome, three X chromosomes and a single Y. The effective population size of the Y can therefore be predicted to be similar to that of the haploid mtDNA: 1/3 that of the X and 1/4 that of any autosome. Hence we can expect much lower rates of diversification on the Y than in any other region of the nuclear genome. We can also expect it to be more subject to genetic drift (random changes in the frequencies of haplotypes), and such drift would act as a catalyst for differentiation between pools of Y chromosomes in different populations.
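The copy-counting argument above can be written out directly. This short sketch tallies the chromosome copies carried by a single breeding pair and reproduces the 1/4 and 1/3 ratios (treating mtDNA as one effectively transmitting maternal lineage, as in the text):

```python
# Chromosome copies carried by one breeding pair (one man, one woman).
COPIES_PER_COUPLE = {
    "autosome": 4,  # two copies in each parent
    "X": 3,         # two in the mother, one in the father
    "Y": 1,         # father only
    "mtDNA": 1,     # effectively one transmitting (maternal) lineage
}

def relative_effective_size(locus: str, baseline: str = "autosome") -> float:
    """Effective population size of a locus relative to a baseline locus."""
    return COPIES_PER_COUPLE[locus] / COPIES_PER_COUPLE[baseline]

print(f"Y vs autosome: {relative_effective_size('Y'):.2f}")            # 1/4
print(f"Y vs X:        {relative_effective_size('Y', 'X'):.2f}")       # 1/3
print(f"Y vs mtDNA:    {relative_effective_size('Y', 'mtDNA'):.2f}")   # equal
```

A smaller effective population size means fewer independent copies for mutations to spread among, hence the lower diversity and stronger drift on the Y.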