Sunday, January 29, 2012

Fake flakes

Tanguy Chouard at Nature has pointed out to me Google’s tribute to the snowflake today:
This is a beautiful example of the kind of bogus flake I collected for my spot in Nine Lessons and Carols for Godless People just before Christmas. Eight-pointed flakes like this are relatively common, because they are easier to draw than six-pointed ones:
(from a Prospect mailing)
(from an Amnesty Christmas card (occasionally sent by yours truly))

More rarely one sees five-pointed examples like this from some wrapping paper in 2010:
Or, more deliciously, this one from the Milibands last year:
I like to point out that the possible sighting of quasicrystalline ice should make us hesitant to be too dismissive of these inventive geometries. What’s more, there do exist claims of pentagonal flakes having been observed, though this seems extremely hard to credit. Of course, in truth quasicrystal ice, even if it exists in very rare circumstances, hardly has five- or eightfold snowflakes as its inevitable corollary. But it’s fun to think about, especially so soon after the quadricentenary of Kepler’s classic 1611 treatise on the snowflake, De nive sexangula.

Thursday, January 26, 2012

Forbidden chemistry

I’ve just published a feature article in New Scientist on “reactions they said could never happen” (at least, that was my brief). A fair bit of the introductory discussion had to be dropped, so here’s the original full text – sorry, a long post. I’m going to put the pdf on my web site, with a few figures added.
_____________________________________________________________________

The award of the 2011 Nobel prize for chemistry to Dan Shechtman for discovering quasicrystals allowed reporters to relish tales of experts being proved wrong. For his heretical suggestion that the packing of atoms in crystals can have a kind of fivefold (quasi)symmetry, Shechtman was ridiculed and ostracized and almost lost his job. The eminent chemist Linus Pauling derided him as a “quasi-scientist”.

Pauling of all people should have known that sometimes it is worth risking being bold and wrong, as he was himself with the structure of DNA in the 1950s. As it turned out, Shechtman was bold and right: quasicrystals do exist, and they earn their ‘impossible’ fivefold symmetry at the cost of not being fully ordered: not truly crystalline in the traditional sense. But while everyone enjoys seeing experts with egg on their faces, there’s a much more illuminating way to think about apparent violations of what is ‘possible’ in chemistry.

Here are some other examples of chemical processes that seemed to break the rules – reactions that ‘shouldn’t’ happen. They demonstrate why chemistry is such a vibrant, exciting science: because it operates on the borders of predictability and certainty. The laws of physics have an air of finality: they don’t tolerate exceptions. No one except cranks expects the conservation of energy to be violated. In biology, in contrast, ‘laws’ seem destined to have exceptions: even the heresy of inheritance of acquired characteristics is permitted by epigenetics. Chemistry sits in the middle ground between the rigidity of physics and the permissiveness of biology. Its basis in physics sets some limits and constraints, but the messy diversity of the elements can often transcend or undermine them.

That’s why chemists often rely on intuition to decide what should or shouldn’t be possible. When his postdoc Xiao-Dong Wen told Nobel laureate Roald Hoffmann that his computer calculations showed graphane – puckered sheets of carbon hexagons with hydrogens attached, with a C:H ratio of 1:1 – to be more stable than familiar old benzene, Hoffmann insisted that the calculations must be wrong. The superior stability of benzene, he said, “is sacrosanct – it’s hard to argue with it”. But eventually Hoffmann realized that his intuition was wrong: graphane is more stable, though no one has yet succeeded in proving definitively that it can be made.

You could say that chemistry flirts with its own law-breaking inclinations. Chemists often speak of reactions that are ‘forbidden’. For example, symmetry-forbidden reactions are ones that break the rules formulated by Hoffmann in his Nobel-winning work with organic chemist Robert Woodward in 1965 – rules governed by the mathematical symmetry properties of electron orbitals as they are rearranged or recombined by light or heat. Similarly, reactions that fail to conserve the total amount of ‘spin’, a quantum-mechanical property of electrons, are said to be spin-forbidden. And yet neither of these types of ‘forbidden’ reaction is impossible – they merely happen at slower rates. Hoffmann says that he (at Woodward’s insistence) even asserted in their 1965 paper that there were no exceptions to their rules, knowing that this would spur others into finding them.

So this gallery of ‘reactions they said couldn’t happen’ is not a litany of chemists’ conservatism and prejudice (although – let’s be honest – that sometimes played a part). It is a reflection of how chemistry itself exists in an unstable state, needing an intuition of right and wrong but having constantly to readjust that to the lessons of experience. That’s what makes it exciting – it’s not the case that anything might happen, but nevertheless big surprises certainly can. That’s why, however peculiar the claim, the right response in chemistry, perhaps more than any other branch of science, is not “that’s impossible”, but “prove it”.

Crazy tiling

In the early 1980s, Daniel Shechtman was bombarding metal alloys with electrons at the then National Bureau of Standards (NBS) in Gaithersburg, Maryland. Through mathematical analysis of the interference patterns formed as the beams reflected from different layers of the crystals, it was possible to determine exactly how the atoms were packed.

Among the alloys Shechtman studied, a blend of aluminium and manganese produced a beautiful pattern of sharp diffraction spots, which had always been found to be an indicator of crystalline order. But the crystal symmetry suggested by the pattern didn’t make sense. It was fivefold, like that of a pentagon. One of the basic rules of crystallography is that atoms can’t be packed into a regular, repeating arrangement with fivefold symmetry, just as pentagons can’t tile a floor in a periodic way that leaves no gaps.

Pauling wasn’t the only fierce critic of Shechtman’s claims. When he persisted with them, his boss at NBS asked him to leave the group. And a paper he submitted in the summer of 1984 was rejected immediately. Only when he found some colleagues to back him up did he get the results published at the end of that year.

Yet the answer to the riddle they posed had been found already. In the 1970s the mathematician Roger Penrose had discovered that two rhombus-shaped tiles could be used to cover a flat plane without gaps and without the pattern ever repeating. In 1981, the crystallographer Alan Mackay found that if an atom were placed at every vertex of such a Penrose tiling, it would produce a diffraction pattern with fivefold symmetry, even though the tiling itself was not perfectly periodic. Shechtman’s alloy was analogous to a three-dimensional Penrose tiling. It was not a perfect crystal, because the atomic arrangement never repeated exactly; it was a quasicrystal.

Since then, many other quasicrystalline alloys have been discovered. Structures very much like them have also turned up in polymers and in assemblies of soap-like molecules called micelles. It has even been suggested that water, when confined in very narrow slits, can freeze into quasicrystalline ice.

You can’t have it both ways

For poor Boris Belousov, vindication came too late. When he was awarded the prestigious Lenin prize by the Soviet government in 1980 for his pioneering work on oscillating chemical reactions, he had already been dead for ten years.

Still, at least Belousov lived long enough to see the scorn heaped on his initial work turn to grudging acceptance by many chemists. When he discovered oscillating chemical reactions in the 1950s, he was deemed to have violated one of the most cherished principles of science: the second law of thermodynamics.

This states that all change in the universe must be accompanied by an increase in entropy – crudely speaking, it must leave things less ordered than they were to begin with. Even processes that seem to create order, such as the freezing of water to ice, in fact promote a broader disorder – here by releasing latent heat into the surroundings. This principle is what prohibits many perpetual motion machines (others violate the first law – the conservation of energy – instead). Violations of the second law are thus something that only cranks propose.

But Belousov was no crank. He was a respectable Russian biochemist interested in the mechanisms of metabolism, and specifically in glycolysis: how enzymes break down sugars. To study this process, Belousov devised a cocktail of chemical ingredients that should act like a simplified analogue of glycolysis. He shook them up and watched as the reaction proceeded, turning from clear to yellow.

Then it did something astonishing: it went clear again. Then yellow. Then clear. It began to oscillate repeatedly between these two coloured states. The problem is that entropy can’t increase in both directions: if the change from clear to yellow raises it, the change back must lower it. So what was going on?

Belousov wasn’t actually the first to see an oscillating reaction. In 1921 American chemist William Bray reported oscillations in the reaction of hydrogen peroxide and iodate ions. But no one believed him either, even though the ecologist Alfred Lotka had shown in 1910 how oscillations could arise in a simple, hypothetical reaction. As for Belousov, he couldn’t get his findings published anywhere, and in the end he appended them to a paper in a Soviet conference proceedings on a different topic: a Pyrrhic victory, since they then remained almost totally obscure.

But not quite. In the 1960s another Soviet chemist, Anatoly Zhabotinsky, modified Belousov’s reaction mixture so that it switched between red and blue. That was pretty hard for others to ignore. The Belousov-Zhabotinsky (BZ) reaction became recognized as one of a whole class of oscillating reactions, and after it was transmitted to the West in a meeting of Soviet and Western scientists in Prague in 1967, these processes were gradually explained.

They don’t violate the second law after all, for the simple reason that the oscillations don’t last forever. Left to their own devices, they eventually die away and the reaction settles down to an unchanging state. They exist only while the reaction approaches its equilibrium state, and are thus an out-of-equilibrium phenomenon. Since thermodynamics speaks only about equilibrium states and not what happens en route to them, it is not threatened by oscillating reactions.

The oscillations are the result of self-amplifying feedback. As the reaction proceeds, one of the intermediate products (call it A) is autocatalytic: it speeds up the rate of its own production. This makes the reaction accelerate until the reagents are exhausted. But there is a second autocatalytic process that consumes A and produces another product, B, which kicks in when the first process runs out of steam. This too quickly exhausts itself, and the system reverts to the first process. It repeatedly flips back and forth between the two reactions, over-reaching itself first in one direction and then in the other. Lotka showed that the same thing can happen in populations of predators and their prey, which can get caught in alternating cycles of boom and bust.
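
For readers who like to see this feedback in action, here is a minimal numerical sketch in Python – not the real BZ chemistry, which involves bromate, bromide and cerium ions, but the bare Lotka-style caricature of two coupled self-amplifying processes (all rate constants and starting concentrations are invented for illustration):

# Lotka-Volterra-style feedback: x is the autocatalytically produced
# intermediate (the 'prey'), y is the species that consumes it (the 'predator').
#   dx/dt = a*x - b*x*y
#   dy/dt = b*x*y - c*y
a, b, c = 1.0, 0.5, 1.0      # illustrative rate constants
x, y = 2.0, 1.0              # illustrative starting concentrations
dt = 0.001                   # time step for simple Euler integration
for step in range(20001):
    dx = (a * x - b * x * y) * dt
    dy = (b * x * y - c * y) * dt
    x, y = x + dx, y + dy
    if step % 2000 == 0:
        print(f"t = {step * dt:4.1f}   x = {x:6.3f}   y = {y:6.3f}")

Run it and the two concentrations rise and fall in alternating cycles rather than settling smoothly to a fixed value – boom and bust, over and over.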

If the BZ reaction is constantly fed fresh reagents, while the final products are removed, the oscillations can be sustained indefinitely: it remains out of equilibrium. Such oscillations are now known to happen in many chemical processes, including some industrially important reactions on metal catalysts and even in real glycolysis and other biochemical processes. If the reaction takes place in an unstirred mixture, the oscillations can spread from initiating spots as chemical waves, giving rise to complex patterns. Related patterns are the probable cause of many animal pigmentation markings. BZ chemical waves are analogues of the waves of electrical excitation that pass through heart tissue and induce regular heartbeats; if they are disturbed, the waves break up and the result can be a dangerously irregular heartbeat – fibrillation.

These waves might also form the basis of a novel form of computation. Andrew Adamatzky at the University of the West of England in Bristol is using their interactions to create logic gates, which he believes can be miniaturized to make a genuine “wet” chemical computer. He and collaborators in Germany and Poland have launched a project called NeuNeu to make chemical circuits that will crudely mimic the behaviour of neurons, including a capacity for self-repair.

The quantum escape clause

It’s very cold in space. So cold that molecules encountering one another in the frigid molecular clouds that pepper the interstellar void should generally lack enough energy to react. In general, reactions proceed via the formation of high-energy intermediate molecules which then reconfigure into lower-energy products. Energy (usually thermal) is needed to get the reactants over this barrier, but in space there is next to none.

In the 1970s a Soviet chemist named Vitali Goldanski challenged that dogma. He showed that, with a bit of help from high-energy radiation such as gamma-rays or electron beams, some chemicals could react even when chilled by liquid helium to just four degrees above absolute zero – just a little higher than the coldest parts of space. For example, under these conditions Goldanski found that formaldehyde, a fairly common component of molecular clouds, could link up into polymer chains several hundred molecules long. At that temperature, conventional chemical kinetic theory suggested that the reaction should be so slow as to be virtually frozen.

Why was it possible? Goldanski argued that the reactions were getting help from quantum effects. It is well known that particles governed by quantum rules can get across energy barriers even if they don’t appear to have enough energy to do so. Instead of going over the top, they can pass through the barrier, a process known as tunnelling. It’s possible because of the smeared-out nature of quantum objects: they aren’t simply here or there, but have positions described by a probability distribution. A quantum particle on one side of a barrier has a small probability of suddenly and spontaneously turning up on the other side.

Goldanski saw the signature of quantum tunnelling in his ultracold experiments in the lab: the rate of formaldehyde polymerization didn’t steadily increase with temperature, as conventional kinetic theory predicts, but stayed much the same as the temperature rose.

Goldanski believed that his quantum-assisted reactions in space might have helped the molecular building blocks of life to have assembled there from simple ingredients such as hydrogen cyanide, ammonia and water. He even thought they could help to explain why biological molecules such as amino acids have a preferred ‘handedness’. Most amino acids have so-called chiral carbon atoms, to which four different chemical groups are attached, permitting two mirror-image variants. In living organisms these amino acids are almost always of the left-handed variety, a long-standing and still unexplained mystery. Goldanski argued that his ultracold reactions could favour one enantiomer over the other, since the tunnelling rates might be highly sensitive to tiny biasing influences such as the polarization of radiation inducing them.

Chemical reactions assisted by quantum tunnelling are now well established – not just in space, but in the living cell. Some enzymes are more efficient catalysts than one would expect classically, because they involve the movement of hydrogen ions – lone protons, which are light enough to experience significant quantum tunnelling.

This counter-intuitive phenomenon can also subvert conventional expectations about what the products of a reaction will be. That was demonstrated very recently by Wesley Allen of the University of Georgia and his coworkers. They trapped a highly reactive molecule called methylhydroxycarbene – a carbene, whose electron-deficient carbon atom predisposes it to react fast – in an inert matrix of solid argon at 11 kelvin. This molecule can in theory rearrange its atoms to form vinyl alcohol or acetaldehyde. In practice, however, it shouldn’t have enough energy to get over the barrier to these reactions under these ultracold conditions. But the carbene was transformed nonetheless – because of tunnelling.

“Tunnelling is not specifically a low-temperature phenomenon”, Allen explains. “It occurs at all temperatures. But at low temperatures the thermal activation shuts off, so tunnelling is all that is left.”

What’s more, although the formation of vinyl alcohol has a lower energy barrier, Allen and colleagues found that most of the carbene was transformed instead to acetaldehyde. That defied kinetic theory, which says that the lower the energy barrier to the formation of a product, the faster it will be produced and so the more it dominates the resulting mixture. The researchers figured that although the barrier to formation of acetaldehyde may have been higher, it was also narrower, which meant that it was easier to tunnel through.
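
That intuition about height versus width can be made semi-quantitative. In the standard WKB approximation – a textbook result, not anything specific to Allen’s calculations – the probability of tunnelling falls off roughly as P ~ exp[−(2/ħ)∫√(2m(V(x) − E)) dx], where the integral runs across the classically forbidden region of the barrier. The exponent grows with the barrier’s width and height and with the mass m of the tunnelling particle, which is why light hydrogen atoms tunnel far more readily than heavier ones, and why a tall but thin barrier can be easier to pass through than a lower, broader one.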

Tunnelling through such high barriers as these “was quite a shock to most chemists”, says Allen. He says the result shows that “tunnelling is a broader aspect of chemical kinetics than has been understood in the past”.

Not so noble

Dmitri Mendeleev’s first periodic table in 1869 didn’t just have some gaps for yet-undiscovered elements. It had a whole column missing: a whole family of chemical elements whose existence no one suspected. The lightest of them – helium – had been glimpsed in the Sun’s spectrum only the year before, and the others began to turn up in the 1890s, starting with argon. The reason they took so long to surface, even though they are abundant (helium is the second most abundant element in the universe), is that they don’t do anything: they are inert, “noble”, not reacting with other elements.

That supposed unreactivity was tested with every extreme chemists could devise. Just after the noble gas argon was discovered in 1894, the French chemist Henri Moissan mixed it with fluorine, the viciously reactive element that he had isolated in 1886, and sent sparks through the mixture. Result: nothing. By 1924, the Austrian chemist Friedrich Paneth pronounced the consensus: “the unreactivity of the noble gas elements belongs to the surest of all experimental results.” Theories of chemical bonding seemed to explain why that was: the noble gases had filled shells of electrons, and therefore no capacity for adding more by sharing electrons in chemical bonds.

Linus Pauling, the chief architect of those theories, didn’t give up. In the 1930s he blagged a rare sample of the noble gas xenon and persuaded his colleague Don Yost at Caltech to try to get it to react with fluorine. After more cooking and sparking, Yost had succeeded only in corroding the walls of his supposedly inert quartz flasks.

Against this intransigent background, it was either a brave or foolish soul who would still try to make compounds from noble gases. But the first person to do so, British chemist Neil Bartlett at the University of British Columbia in Vancouver, was not setting out to be an iconoclast. He was just following some wonderfully plain reasoning.

In 1961 Bartlett discovered that the compound platinum hexafluoride (PtF6), first made three years earlier by US chemists, was an eye-wateringly powerful oxidant. Oxidation – the removal of electrons from a chemical element or compound – is so named because its prototypical form is the reaction with oxygen gas, a substance almost unparalleled in its ability to grab electrons. But Bartlett found that PtF6 can out-oxidize oxygen itself.

In early 1962 Bartlett was preparing a standard undergraduate lecture on inorganic chemistry and happened to glance at a textbook graph of ‘ionization potentials’ of substances: how much energy is needed to remove an electron from them. He noticed that it takes almost exactly the same energy to ionize – that is, to oxidize – oxygen molecules as xenon atoms. He realised that if PtF6 can do it to oxygen, it should do it to xenon too.

So he tried the experiment, simply mixing red gaseous PtF6 and colourless xenon. Straight away, the glass was covered with a yellow material, which Bartlett found to have the formula XePtF6: the first noble-gas compound.

Since then, many other compounds of both xenon and krypton, another noble gas, have been made. Some are explosively unstable: Bartlett nearly lost an eye studying xenon trioxide. Heavy, radioactive radon forms compounds too, although it wasn’t until 2000 that the first compound of argon was reported by a group in Finland. Even now, the noble gases continue to produce surprises. Roald Hoffmann admits to being shocked when, in that same year, a compound of xenon and gold was reported by chemists in Berlin – for gold is supposed to be a noble, unreactive metal too. You can persuade elements to do almost anything, it seems.

Improper bonds

Covalent chemical bonds form when two atoms share a pair of electrons, which act as a glue that binds the union. At least, that’s what we learn at school. But chemists have come to accept that there are plenty of other ways to form bonds.

Take the hydrogen bond – the interaction of electron ‘lone pairs’ on one atom such as oxygen or nitrogen with a hydrogen atom on another molecular group with a slight positive charge. This interaction is now acknowledged as the key to water’s unusual properties and the glue that sticks DNA’s double helix together. But the formation of a second bond by hydrogen, supposedly a one-bond atom, was initially derided in the 1920s as a fictitious kind of chemical “bigamy”.

That, however, was nothing compared to the controversy that surrounded the notion, first put forward in the 1940s, that some organic molecules, such as ‘carbocations’ in which carbon atoms are positively charged, could form short-lived structures over the course of a reaction in which a pair of electrons was dispersed over three rather than two atoms. This arrangement was considered so extraordinary that it became known as non-classical bonding.

The idea was invoked to explain some reactions involving the swapping of dangling groups attached to molecules with bridged carbon rings. In the first step of the reaction, the ‘leaving group’ falls off to create an intermediate carbocation. By rights, the replacement dangling group, with an overall negative charge, should have attached at the same place, at the positively charged atom. But it didn’t: the “reactive centre” of the carbocation seemed able to shift.

Some chemists, especially Saul Winstein at the University of California at Los Angeles, argued that the intermediate carbocation contains a non-classical bond that bridges three carbon atoms in a triangular ring, with its positive charge smeared between them, giving the replacement group more than one place to dock. This bonding structure would temporarily, and rather heretically, give one of the carbon atoms five instead of the usual four bonding partners.

Such an unusual kind of bonding offended the sensibilities of other chemists, most of all Herbert Brown, who was awarded a Nobel prize in 1979 for his work on boron compounds. In 1961 he opened the “non-classical ion” war with a paper dismissing proposals for these structures as lacking “the same care and same sound experimental basis as that which is customary in other areas of experimental organic chemistry”. The ensuing arguments raged for two decades in what Brown called a “holy war”. “By the time the controversy sputtered to a halt in the early 1980s”, says philosopher of chemistry William Goodwin of Rowan University in New Jersey, “a tremendous amount of intellectual energy, resources, and invective had been invested in resolving an issue that was crucial neither to progress in physical organic chemistry generally nor to the subfield of carbocation chemistry.” Both sides accused the rival theory of being ‘soft’ – able to fit any result, and therefore not truly scientific.

Brown and his followers didn’t object in principle to the idea of electrons being smeared over more than two atomic nuclei – that happens in benzene, after all. But they considered the nonclassical ion an unnecessary and faddish imposition for an effect that could be explained by less drastic, more traditional means. The argument was really about how to interpret the experiments that bore on the matter, and it shows that, particularly in chemistry, it could and still can be very hard to apply a kind of Popperian falsification to distinguish between rival theories. Goodwin thinks that the non-classical ion dispute was provoked and sustained by ambiguities built into the way organic chemists try to understand and describe the mechanisms of their reactions. “Organic chemists have sacrificed unambiguous explanation for something much more useful – a theory that helps them make plausible, but fallible, assessments of the chemical behavior of novel, complex compounds”, he says. As a result, chemistry is naturally prone to arguments that get resolved only when one side or the other runs out of energy – or dies.

The non-classical ion argument raged for two decades, until eventually most chemists except Brown accepted that these ions were real. Ironically, in the course of the debate both Winstein and Brown implied to a young Hungarian émigré chemist, George Olah, that his claim to have isolated a relatively long-lived carbocation – a development that ultimately helped resolve the issue – was unwise. This was another ‘reaction that couldn’t happen’, they advised – the ions were too unstable. But Olah was right, and his work on carbocations earned him a Nobel prize in 1994.

Monday, January 23, 2012

Nanotheology

Belatedly, here’s my final column for the Saturday Guardian. It’s final because, in a reshuffle to ‘consolidate’ the paper (i.e. save space because they’re losing so much money), the back page and its contents have been chopped. It was kind of fun while it lasted, though I intend shortly to post a few thoughts on being exposed to (and encouraged to engage with) the Comment is Free feedback. This piece was particularly revealing in that respect, eliciting as it did a fair bit of outrage from the transhumanists. Who’d have thought there were so many people desperately and credulously hanging out for the Singularity?
____________________________________________________________

What does God think of nanotechnology? The glib answer is that, like the rest of us, he’s only just heard of it. If you think it’s a silly question anyway, consider that a 2009 study claimed “religiosity is the dominant predictor of moral acceptance of nanotechnology.” ‘Science anthropologist’ Chris Toumey has recently surveyed this moral landscape.

Nanotechnology is a catch-all term that encompasses a host of diverse efforts to manipulate matter on the very small scales of atoms and cells. There’s no single objective. Some nanotechnologists are exploring new approaches to medicine, others want to make computer circuits or new materials.

Of the rather few explicitly religious commentaries on nanotech so far, some have focused on issues that could equally be raised by secular voices: sensible concerns about safety, commercial control and accountability, and responsible application. (None seems too bothered about the strong military interest.)

Yet much of the discussion has headed down the blind alley of transhumanism. Nanotech scientists have long sought to rescue their discipline’s public image from vocal but fringe spokespersons such as Eric Drexler and the inventor and futurist Ray Kurzweil, who have painted a fantastic picture of tiny robots patching up our cells and perhaps hugely extending our longevity. Kurzweil has suggested that nanotech will play a big role in guiding us to a moment he calls the Singularity: a convergence of exponentially growing computer power and medical capability that will transform us into disembodied immortals. He has even set up a Singularity University, based at NASA’s research park in Silicon Valley, to prepare the way.

Needless to say, immortality – or its pursuit – isn’t acceptable to most religious observers of any creed, since it entails a hubristic attempt to transcend the divinely decreed limitations of the human body, and relieves us from saving our souls. But the transhumanism question isn’t unique to nanotech – it’s part of a wider debate about the ethics of human enhancement and modification.

In any case, as far as nanotech is concerned the theologians can relax. Transhumanism and Kurzweil’s Singularity are just delirious dreams and on no serious scientist’s agenda. One Christian writer admitted to being shocked by what he heard at a transhumanist conference. Quite right too: all these folks determined to freeze their heads or download their consciousness into computers are living in an infantile fantasy.

So are there any ethical issues in nanotech that really do have a religious dimension? Science-fiction writer Charles Stross has imagined the dilemmas of Muslims faced with bacon that is chemically identical to the real thing but assembled by nanotechnology rather than pigs. He wasn’t entirely serious, but some liberal Muslim scholars have debated whether the Qur’an places any constraints on the permitted rearrangements of matter. Given that chemistry was pioneered by Muslims between the eighth and twelfth centuries, this seems unlikely. Jewish scholars, meanwhile, have used the legend of the golem to think about the ethics of making life from inanimate matter, partly in reference to nanotech and artificial intelligence. In the 1960s the pre-eminent expert on the golem legends, Gershom Scholem, was sanguine about the idea, asking only that our digital golems “develop peacefully and don’t destroy the world.”

These academic discussions have so far been rather considered and tolerant. Toumey wonders whether they’d impinge on the views of, say, your average Southern Baptist, hinting tactfully at what we might suspect anyway: both sensible people and bigots adapt their religion to their temperament and prejudices rather than vice versa.

One British study of attitudes to nanotech made the point that religious groups were better able than secular ones to articulate their ethical concerns because they possessed a vocabulary and conceptual framework for them. The researchers suggested that religious groups might therefore take the lead in communicating public perceptions. I’m not so sure. Articulacy is useful, but it’s more important that you first understand the science. And just because you can couch your views eloquently in terms of souls and afterlives doesn’t make them more valid.

Tuesday, January 17, 2012

Forever young?

I was asked by the Guardian to write an online story about the new ‘youth cream’ from L’Oreal. I think they were anticipating a debunking job, but I guess I learnt here the difference between skepticism and cynicism. I’m not really interested in whether these things work or not (whatever ‘work’ can mean in this instance), but I had to admit that there was some kind of science behind this stuff, even if I see no proof yet that it has any lasting effect on wrinkles. So I was overcome by an attack of fairness (who said "gullibility"?). This is what resulted.
___________________________________________________

I don’t suppose I’m in the target group for Yves Saint Laurent’s new skin cream Forever Youth Liberator – but what if I did want to know whether it’s worth shelling out sixty quid for a 50 ml tub? I could be wowed by the (strangely similar) media reports. “It is likely to be one of the most sought after face creams ever”, says the Telegraph: “5,000 women have already pre-ordered a face cream using ingredients which scientists claimed would change the world.” Or as the Daily Mail puts it, the cream is “hailed as the ‘holy grail’ of anti-ageing.” (You have to read on to discover that it’s Amandine Ohayon, general manager of Yves Saint Laurent, who is doing the hailing here.)

But I’m hard to please. I want to know about the science supporting these claims. After all, cosmetics companies have been trying to blind us with science for years – perhaps ever since the white coats began to appear in the DuPont chemical company’s ads (“Better living through chemistry”) in the 1930s. Recently we’ve had skin creams loaded with nano-capsules, vitamins A, C and E, antioxidants and things with even longer names.

“The science behind the brand lies in the groundbreaking technology of Glycobiology”, one puff tells us. “It’s been noted as the future in the medical field, the fruit of more than 100 years of research and recognized by seven Nobel Prizes.” The Telegraph, meanwhile, parrots the PR that, “the cream has been 20 years in development, and has the backing of the Max Planck Institute in Germany.”

I rather wish that, as a chemist, I could say this is all tripe. But it’s not as simple as, say, claims by bottled-water companies to have a secret process that alters the molecular structure of water to assist hydration. For example, it’s true that glycobiology is a big deal. This field studies an undervalued and once unfashionable ingredient of living cells: sugars. Glycans are complicated sugar molecules that play many important biological roles. Attached to proteins at the surfaces of our cells, such sugars act as labels that distinguish different cell types – for example, they determine your blood group. Glycans and related biochemicals are an essential component of the way our cells recognise and communicate with one another.

Skin cells – essentially, tissue-generating cells called fibroblasts – produce glycans and other substances that form a surrounding extracellular matrix. Some of these glycans attract water and keep the skin plump and soft. But their production declines as fibroblasts age, and so the skin becomes dry and wrinkled. Skin creams routinely contain glycoproteins and glycans to redress this deficit.

Fine – but what’s so different about the new cream? It’s based on a combination of artificial glycans trademarked Glycanactif. Selfridges tells us that they “unlock the cells to reactivate their vital functions and liberate the youth potential at all levels of the skin”. Well, it would be nice if cells really were little boxes brimming with ‘youth potential’, just waiting to be ‘unlocked’, but this statement is basically voodoo.

So I contacted YSL. And – what do you know? – they sent me some useful science. It’s surrounded by gloss and puff (“Youth is a state of mind that cannot live without science” – meaning what, exactly?), and exposed as the source of that garbled soundbite from Selfridges. But it also shows that YSL has enlisted some serious scientists, most notably Peter Seeberger, a specialist in glycan chemistry at the Max Planck Institute of Colloids and Interfaces in Berlin. And it explains that, instead of just supplying a source of glycans in the extracellular matrix to make up for their reduced production in ageing cells, Glycanactif apparently binds to glycan receptors on the cell surface and stimulates them to start making the molecules (including other glycans and related compounds) needed for healthy skin.

Tough-skinned cynic that I am about the claims of cosmetics manufacturers, I am nonetheless emolliated, if not exactly rejuvenated. True, there’s nothing in the leaflet which proves that FYL does a better job than other skin creams. The science remains very sketchy in places. And (this is true of any claims for cosmetics) we’d reserve judgement until the long-term clinical trials, if it were a drug. But I’m offered a troupe of serious scientists ready to talk about the work. I’m open to persuasion.

Still, it puzzles me. How many of the thousands of advance orders, or no doubt the millions to come, will have been based on examination of the technical data? I know we lack the time, and usually the expertise, for such rigour. So what instead informs our decision to shell out sixty quid on a tiny tub of youthfulness? And if the science was all nonsense, would it make a difference?

Monday, January 16, 2012

The truth about Einstein's wife

Some weeks back I mentioned in passing in my Guardian column the far-fetched claim that Einstein’s first wife Mileva Maric was partly or even primarily responsible for the ideas behind his theory of relativity. Allen Esterson has written to me to point out that this claim is still widely circulated and accepted as established fact by some people. Indeed, he says that “the 2008-2009 EU Europa Diary for secondary school children (print run 3 million) had the following: ‘Did you know? Mileva Marić, Einstein's first wife, confidant and colleague – and co-developer of his Theory of Relativity – was born in what is now Serbia’”. Seems to me that this sort of thing (and the concomitant notion that this ‘truth’ has been long suppressed) ultimately doesn’t do the feminist cause any good. Allen has also posted on the web site Butterflies and Wheels a critique of an independent short film that tries to promote the myth – you can find it here.

Wednesday, January 11, 2012

How big is yours?

Here, then, is my column from last Saturday’s Guardian.

While writing this, I discovered that Google Scholar has an add-on that will tot up your citations to establish an h-index. From that, I gather that mine is around 29. One of the comments on the Guardian thread points out that Richard Feynman has an h of 23. As Nigel Tufnel famously said apropos Jimmy Page, “I think that says quite a lot.”

_________________________________________________________________

Many scientists worry that theirs isn’t big enough. Even those who sniff that size isn’t everything probably can’t resist taking a peek to see how they compare with their rivals. The truly desperate can google for dodgy techniques to make theirs bigger.

I’m talking about the h-index, a number that supposedly measures the quality of a researcher’s output. And if the schoolboy double entendres seem puerile, there does seem to be something decidedly male about the notion of a number that rates your prowess and ranks you in a league table. Given that, say, the 100 chemists with the highest h-index are all male, whereas 1 in 4 postdoctoral chemists is female, the h-index does seem to be the academic equivalent of a stag’s antlers.

Few topics excite more controversy among scientists. When I spoke about the h-index to the German Physical Society a few years back, I was astonished to find the huge auditorium packed. Some deplore it; some find it useful. Some welcome it as a defence against the subjective capriciousness of review and tenure boards.

The h-index is named after its inventor, physicist Jorge Hirsch, who proposed it in 2005 precisely as a means of bringing some rigour to the slippery question of who is most deserving of a grant or a post. The index measures how many highly cited papers a scientist has written: your value of h is the number of your papers that have each been cited by (included in the reference lists of) at least h other papers. So a researcher with an h of 10 has written 10 papers that have received at least 10 citations each.
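
The definition is simple enough to compute in a few lines of Python; the citation counts below are invented, purely for illustration:

def h_index(citations):
    # h is the largest number such that h papers have at least h citations each
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([50, 18, 12, 10, 9, 4, 3, 1, 0]))   # prints 5

In other words: sort your papers from most to least cited, and h is the last rank at which the citation count still matches or exceeds the rank.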

The idea is that citations are a measure of quality: if a paper reports something important, other scientists will refer to it. That’s broadly a reasonable assumption, but not airtight. There’s evidence that some papers get highly cited by chance, because of a runaway copycat effect: people cite them just because others have, in the same way that some mediocre books and songs become unaccountably popular.

But to get a big h-index, it’s not enough to write a few influential papers. You have to write a lot of them. A single paper could transform a field of science and win its author a Nobel prize, while doing little for the author’s h-index if he or she doesn’t write anything else of note. Nobel laureate chemist Harry Kroto is ranked an apparently undistinguished 264th in the h-index list of chemists because his (deserved) fame rests largely on a single breakthrough paper in 1985.

That’s one of the criticisms of the h-index – it imposes a one-size-fits-all view of scientific impact. There are many other potential faults. Young scientists with few publications score lower, however brilliant they are. The value of h can be artificially boosted – slightly but significantly – by scientists repeatedly citing their own papers. It fails to distinguish the relative contributions to the work in many-author papers. The numbers can’t be compared across disciplines, because citation habits differ.

Many variants of the h-index have been proposed to get round these problems, but there’s no perfect answer, and one great virtue of the h-index is its simplicity, which means that its pros and cons are relatively transparent. In any case, it’s here to stay. No one officially endorses the h-index for evaluation, but scientists confess that they use it all the time as an informal way of, say, assessing applicants for a job. The trouble is that it’s precisely for average scientists that the index works rather poorly: small differences in small h-indices don’t tell you very much.

The h-index is part of a wider trend in science to rely on metrics – numbers rather than opinions – for assessment. For some, that’s like assuming that book sales measure literary merit. It can distort priorities, encouraging researchers to publish all they can and follow fads (it would have served Darwin poorly). But numbers aren’t hostage to fickle whim, discrimination or favouritism. So there’s a place for the h-index, as long as we can keep it there.

Monday, January 09, 2012

No secret

Before I post my last Guardian column, here’s one that got away: I’d planned to write about a paper in PNAS (not yet online) on blind testing of new and old violins, until – as I was half-expecting – Ian Sample wrote a regular story on it. So this had to be scrapped.

Radio 4's PM programme covered the story too, but in a somewhat silly way. They got a sceptical professor from the Royal College of Music to come on and play some Bach on a new and an old instrument, and asked listeners to see if they could identify which was which. A good demonstration, I suppose, of exactly why double-blind tests were invented.
___________________________________________________________

At last we now know Antonio Stradivari’s secret. Violinists and craftsmen have long speculated about what makes the legendary Italian luthier’s instruments sound so special. Does the magic lie in the forgotten recipe for the varnish, or in a chemical pre-treatment of the wood? Or perhaps it’s the sheer passage of time that mellows the tone into such richness?

Alas, none of these. A new study by French and US researchers suggests that the reason the sound of a Stradivari is so venerated is because it has never before been properly put to the test.

Twenty-one experienced violinists were asked to blind-test six violins – three new, two Stradivaris and one made by the equally esteemed eighteenth-century instrument-maker Guarneri del Gesù. Most of the players were unable to tell if an instrument was new or old, and their preferences bore no relation to cost or age. Although their opinions varied, the favourite choice was a modern instrument, and the least favourite, by a clear margin, was a Stradivari.

OK, it’s just a small-scale test – getting hold of even three old violins (combined value $10m) was no mean feat. And you’ll have to trust me that the researchers took all the right precautions. The tests were, for example, literally double-blind – both the researchers and the players wore welders’ goggles in dim lighting to make sure they couldn’t identify the type of instrument by eye. And in case you’re thinking they just hit on a dud Stradivari (which do exist), the one with the worst rating had been owned by several well-known violinists.

This is embarrassing for the experts, both scientists and musicians. In judging quality, “the opinions of different violinists would coincide absolutely”, one acoustics expert has previously said. “Any musician will tell you immediately whether an instrument he is playing on is an antique instrument or a modern one”, claimed another. And a distinguished violinist once insisted to me that the superior sound of the most expensive old instruments is “very real”.

But acoustic scientists have struggled to identify any clear differences between the tone of antique and (good) new instruments. And as for putting belief to the test, an acoustic scientist once told me that he doubted any musicians would risk exposing themselves to a blind test, preferring the safety of the myth.

That’s why the participants in the latest study deserve credit. They’re anonymous, but they must know how much fury they could bring down on their heads. If you’ve paid $3m for one of the 500 or so remaining Strads, you don’t want to be told that a modern instrument would sound as good at a hundredth of the price.

But that’s perhaps the problem in the first place. In a recent wine-tasting study, the ‘quality’ was deemed greater when the subjects were told that the bottle cost more.

Is there a killjoy aspect to this demonstration that the mystique of the Strad evaporates under scientific scrutiny? Is it fair to tell violinists that their rapture at these instruments’ irreplaceable tone is a neural illusion? Is this an example of Keats’ famous criticism that science will “clip an Angel’s wings/Conquer all mysteries by rule and line”?

I suspect that depends on whether you want to patronize musicians or treat them as grown-ups – as well as whether you wish to deny modern luthiers the credit they are evidently due. In fact, musicians themselves sometimes chafe at the way their instruments are revered over their own skill. The famous violinist Jascha Heifetz, who played a Guarneri del Gesù, pointedly implied that it’s the player, not the instrument, who makes the difference between the sublime and the mediocre. A female fan once breathlessly complimented him after a performance on the “beautiful tone” of his violin. Heifetz turned around and bent to put his ear close to the violin lying in its case. “I don’t hear anything”, he said.

Wednesday, January 04, 2012

Science is a joke

Belatedly, here is last Saturday’s Critical Scientist column for the Guardian.
_____________________________________________________________________

Is there something funny about science? Audiences at Robin Ince’s seasonal slice of rationalist revelry, Nine Lessons and Carols for Godless People, just before Christmas seemed to think so. This annual event at the Bloomsbury Theatre in London is far more a celebration of the wonders of science than an exercise in atheistic God-baiting. In fact God gets a rather easy ride: the bad science of tabloids, fundamentalists, quacks and climate-change sceptics provides richer comic fodder.

Time was when London theatre audiences preferred to laugh at science rather than with it, most famously with Thomas Shadwell’s satire on the Royal Society, The Virtuoso, in 1676. Samuel Butler and Jonathan Swift followed suit in showering the Enlightenment rationalists with ridicule. In modern times, scientists (usually mad) remained the butt of such jokes as came their way.

They haven’t helped matters with a formerly rather feeble line in laughs. Even now there are popularizing scientists who imagine that another repetition of the ‘joke’ about spherical cows will prove them all to be jolly japers. And while allowing that much humour lies in the delivery, there are scant laughs still to be wrung from formulaic juxtapositions of the exotic with the mundane (“imagine looking for the yoghurt in an eleven-dimensional supermarket!”), or anthropomorphising the sexual habits of other animals.

Meanwhile, science has its in-jokes like any other profession. A typical example: a neutron goes into a bar and orders a drink. “How much?”, he asks the bartender, who replies: “For you, no charge”. Look, I’m just telling you. Occasionally the humour is so rarefied that its solipsism becomes virtually a part of the joke itself. Thomas Pynchon, for instance, provides a rare example of an equation gag, which I risk straining the Guardian’s typography to repeat: ∫(1/cabin) d(cabin) = log cabin + c = houseboat. This was the only calculus joke I’d ever seen until Matt Parker produced a better one at Nine Lessons and Carols. Speaking of rates of flow (OK, it was flow of poo, d(poo)/dt – some things never fail), he admitted that this part of his material was a little derivative.

The rise of stand-up has changed everything. Not only do we now have stand-ups who specialize in science, but several, such as Timandra Harkness and Helen Keen, are women, diluting the relentless blokeishness of much science humour. Some aim to be informative as well as funny. At the Bloomsbury you could watch Dr Hula (Richard Vranch) and his assistant demonstrate atomic theory and chemical bonding with hula hoops (more fun than perhaps it sounds).

As Ben Goldacre’s readers know, good jokes often have serious intent. Perhaps the most notorious scientific example was not exactly a joke at all. Certainly, when in 1996 the physicist Alan Sokal got a completely spurious paper on ‘quantum hermeneutics’ published in the journal of postmodern criticism Social Text, the postmodernists weren’t laughing. And Sokal himself was more intent on proving a point than making us giggle. Arguably funnier was the epilogue: in the early 2000s, a group of papers on quantum cosmology published in physics journals by the French brothers Igor and Grichka Bogdanov was so incomprehensible that this was rumoured to be the postmodernists’ revenge – until the indignant Bogdanovs protested that they were perfectly serious.

But my favourite example of this sort of prank was a paper submitted by computer scientists David Mazières and Eddie Kohler to one of the ‘junk science’ conferences that plague their field with spammed solicitations. The paper had a title, abstract, text, figures and captions that all consisted solely of the phrase “Get me off your fucking email list”. Mazières was keen to present the paper at the conference but was never told if it was accepted or not. Reporting the incident made me probably the first and only person to say ‘fucking’ in the august pages of Nature* – not, I admit, the most distinguished achievement, but we must take our glory where we can find it.

*Apparently not, according to Adam Rutherford on the Guardian site...

Monday, January 02, 2012

The new history

Here is the original draft of the end-of-year essay I published in the last 2011 issue of Nature.
___________________________________________________

2011 shows that our highly networked society is ever more prone to abrupt change. The future of our complex world depends on building resilience to shocks.

In the 1990s, American political scientist Francis Fukuyama, now at Stanford, predicted that the world was approaching the ‘end of history’ [1]. Like most smart ideas that prove to be wrong, Fukuyama’s was illuminating precisely for its errors. Events this year have helped to reveal why.

Fukuyama argued that after the collapse of the Soviet Union, liberal democracy could be seen as the logical and stable end point of civilization. Yet the prospect that the world will gradually replicate the US model of liberal democracy, as Fukuyama hoped, looks more remote today than it did at the end of the twentieth century.

This year we have seen proliferating protest movements in the fallout from the financial crisis – not just the cries of the marginalized and disaffected, but genuine challenges to the legitimacy of the economic system on which recent liberal democracies have been based. In the face of the grave debt crisis in Greece, the wisdom of deploying democracy’s ultimate tool – the national referendum – to solve it was questioned. The political situation in Russia and Turkey suggests that there is nothing inexorable or irreversible about a process of democratization, while North Africa and the Middle East demonstrate to politicians what political scientists could already have told them: that democratization can itself inflame conflict, especially when it is imposed in the absence of a strong pre-existing state [2,3]. Meanwhile, China continues to show that aggressive capitalism depends on neither liberalism nor democracy. As a recent report of the US National Intelligence Council admits, in the coming years “the Western model of economic liberalism, democracy, and secularism, which many assumed to be inevitable, may lose its luster” [4].

The real shortcoming behind Fukuyama’s thesis, however, was not his faith in democracy but that he considered history to be gradualist: tomorrow’s history is more (or less) of the same. The common talk among political analysts now is of ‘discontinuous change’, a notion raised by Irish philosopher Charles Handy 20 years ago [5], and alluded to by President Obama in his speech at the West Point Military Academy last year, when he spoke of ‘moments of change’. Sudden disruptive events, particularly wars, have of course always been a part of history. But they would come and go against a slowly evolving social, cultural and political backdrop. Now the potential for discontinuous social and political change is woven into the very fabric of global affairs.

Take the terrorist attack on the World Trade Center’s twin towers in 2001. This was said by many to have proved Fukuyama wrong – but on this tenth anniversary of that event we can now see more clearly in what sense that was so. It was not simply that this was a significant historical event – Fukuyama was never claiming that those would cease. Rather, it was a harbinger of the new world order, which the subsequent ‘war on terror’ failed catastrophically to acknowledge. That was a war waged in the old way, by sending armies to battlegrounds (in Afghanistan and Iraq) according to Carl von Clausewitz’s old definition, in his classic 1832 work On War, of war as a continuation of international politics by other means. But not only were those wars in no sense ‘won’, they were barely wars at all – illustrating the remark of American strategic analyst Anthony Cordesman that “one of the lessons of modern war is that war can no longer be called war” [6]. Rather, armed conflict is a diffuse, nebulous affair, no longer corralled from peacetime by declarations and treaties, no longer recognizing generals or even statehood. In its place is a network of insurgents, militias, terrorist cells, suicide bombers, overlapping and sometimes competing ‘enemy’ organizations [7]. Somewhere in this web we have had to say farewell to war and peace.

Network revolutions

The nature of discontinuous change is often misunderstood. It is sometimes said – this is literally the defence of traditional economists in their failure to predict the on-going financial and national-debt crises – that no one can be expected to foresee such radical departures from the previous quotidian. They come, like a hijacked aircraft, out of a clear blue sky. Yet social and political discontinuities are rarely if ever random in that sense, even if there is a certain arbitrary character to their immediate triggers. Rather, they are abrupt in the same way, and for the same reasons, that phase transitions are abrupt in physics. In complex systems, including social ones, discontinuities don’t reflect profound changes in the governing forces but instead derive from the interactions and feedbacks between the component parts. Thus, discontinuities in history are precisely what you'd expect if you start considering social phenomena from a complex-systems perspective.

Experience with natural and technological complex systems teaches us, for example, that highly connected networks of strong interactions create a propensity for avalanches, catastrophic failures, and systemic ruptures [8,9]: in short, for discontinuous change.

So it should come as no surprise that today’s highly networked, interconnected world, replete with cell phones, iPads and social media, is prone to abrupt changes in course. It is much more than idle analogy that connects the cascade of minor failures leading to the 2003 power blackout of eastern North America with the freezing of liquidity in the global banking network in 2007-8.

Some see the revolts in Tunisia and Egypt in this way too, dubbing them ‘Twitter revolutions’ because of the way unrest and news of demonstration were spread on social networks. Although this is an over-simplification, it is abundantly clear that networking supplied the possibility for a random event to trigger a major one. The Tunisian revolt was set in motion by the self-immolation of a street vendor, Mohammed Bouazizi, in Sidi Bouzid, in protest at harsh treatment by officials. Three months earlier there was a similar case in the city of Monastir – but no one knew about it because the news was not spread on Facebook.

It was surely not without reason that Twitter and Facebook were shut down by both the Tunisian and Egyptian authorities. The issue is not so much whether they ‘caused’ the revolutions, but that their existence – and the concomitant potential for mobilizing the young, educated populations of these countries – can alter the way things happen in the Middle East and beyond. These same tools are now vital to the Occupy protests disrupting complacent financial districts worldwide, from New York to Taipei, drawing attention to issues of social and economic inequality.

Social media seem also to have the potential to facilitate qualitatively new collective behaviours, such as the riots during the summer in the UK. These brief, destructive paroxysms are still an enigma. Unlike previous riots, they were not confined either to particular demographic subsets of the population or to areas of serious social deprivation. They had no obvious agenda, not even a release of suppressed communal fury – although there was surely a link to post-financial-crash austerity policies. One might almost call them events that grew simply because they could. Some British politicians suggested that Twitter should be disabled in such circumstances, displaying not only a loss of perspective (some of the same people celebrated the power of networking in the Arab Spring) but also a failure to understand the new order. After all, police monitoring of Twitter in some UK cities provided information that helped suppress rioting.

What all these events really point towards is the profound impact of globalization. They show how deep and dense the interdependence of economies, cultures and institutions has become, in large part thanks to the pervasive nature of information and communication technologies. And with this transformation come new, spontaneous modes of social and political organization, from terrorist and protest networks to online consumerism – modes that are especially prone to discontinuous change. Nothing will work that fails to take this new interconnectedness into account: not the economy, not policing, not democracy.

The path forwards

Such extreme interdependence makes it hard to find, or even to meaningfully define, the causes of major events. The US subprime mortgage problem caused the financial collapse only in the way Bouazizi’s immolation caused the Arab Spring – it could equally have been something else that set events in motion. The real vulnerabilities were systemic: webs of dependence that became destabilized by, say, runaway profits in the US banking industry, or rising food prices in North Africa. This means that potential solutions must lie there too.

Complex systems can rarely if ever be controlled by top-down measures. Instead, they must be managed by guiding the trajectories from the bottom up [10]. In a much simpler but instructive example, traffic lights may direct flows more efficiently if they are given adaptive autonomy and allowed to self-organize their switching, rather than having a rigid, supposedly optimal sequence imposed on them [11]. The robustness of the Internet to random server failures is precisely due to the fact that no one designed it – it grew its ‘small world’ topology spontaneously.
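
To make the traffic-light example concrete, here is a toy caricature in Python – emphatically not the published algorithm of reference [11], just the flavour of a purely local rule with no central timetable (all numbers are invented):

import random

MIN_GREEN = 5      # minimum steps before a switch is allowed
ARRIVAL_P = 0.3    # chance of a car arriving on each approach per step

queues = {"north_south": 0, "east_west": 0}
green = "north_south"
since_switch = 0

for t in range(200):
    for road in queues:                      # random arrivals
        if random.random() < ARRIVAL_P:
            queues[road] += 1
    if queues[green] > 0:                    # the green road discharges one car
        queues[green] -= 1
    red = "east_west" if green == "north_south" else "north_south"
    since_switch += 1
    # local rule: switch when the waiting queue outgrows the moving one
    if since_switch >= MIN_GREEN and queues[red] > queues[green]:
        green, since_switch = red, 0

print(queues, "currently green:", green)

The point is only that each junction responds to the traffic it actually sees rather than to a schedule imposed from above.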

This does not imply that political interventions are doomed to fail, but just that they must take other forms from those often advanced today. “Complex systems cannot be steered like a bus”, says Dirk Helbing of the Swiss Federal Institute of Technology (ETH) in Zurich, a specialist on the understanding and management of complex social systems. “Attempts to control the system from the top down may be strong enough to disturb its intrinsic self-organization but not strong enough to re-establish order. The result would be chaos and inefficiency. Modern governance typically changes the institutional framework too quickly to allow individuals and companies to adapt. This destroys the hierarchy of time scales needed to establish stable order.”

But these systems are nevertheless manageable, Helbing insists – not by imposing structures but by creating the rules needed to allow the system to find its own stable organization. “This can’t be ensured by a regulatory authority that monitors the system and tries to enforce specific individual action”, he says.

That’s why theories or ideologies are likely to be less effective at predicting or averting crises than scenario modelling. It’s why problems need to be considered at several hierarchical levels, probably with multiple, overlapping models, and why solutions must have scope for adaptation and flexibility. And although cascading crises and discontinuous changes may be unpredictable, the connections and vulnerabilities that permit them are not. Planning for the future, then, might not be so much a matter of foreseeing what could go wrong as of making our systems and institutions robust enough to withstand a variety of shocks. This is how the new history will work.

References
1. Fukuyama, F. The End of History and the Last Man (Penguin, London, 1992).
2. Mansfield, E. D. & Snyder, J. Int. Secur. 20, 5–38 (1995).
3. Cederman, L.-E., Hug, S. & Wenger, A. Democratization 15, 509–524 (2008).
4. National Intelligence Council, Global Trends 2025: A Transformed World (US Government Printing Office, Washington DC, 2008).
5. Handy, C., The Age of Unreason (Harvard Business School Press, Boston, 1990).
6. Quoted in Strachan, H., Europaeum Lecture, Geneva, 9 November 2006, p. 12.
7. Bohorquez, J. C., Gourley, S., Dixon, A. R., Spagat, M. & Johnson, N. F. Nature 462, 911–914 (2009).
8. BarabĂ¡si, A.-L. IEEE Control Syst. Mag. 27(4), 33-42 (2007).
9. Vespignani, A. Nature 464, 984-985 (2010).
10. Helbing, D. (ed.), Managing Complexity: Insights, Concepts, Applications (Springer, Berlin, 2008).
11. Lämmer, S. & Helbing, D., J. Stat. Mech. P04019 (2008).