Wednesday, July 31, 2013

Plastic fantastic

Here’s the initial version of a leader I wrote for last week’s Nature.

_____________________________________________

The transition from basic science to practical technology is rarely linear. The common view – that promising discoveries need only patience, hard work and money to shape them into commercial products – obtains only rarely. Often there are more factors at play: all kinds of technical, economic and social drivers must coincide for the time to be right. So dazzling forecasts fail and fade, but might then re-emerge when the climate is more clement.

That seems to be happening for organic electronics: the use of polymers and other organic molecules as the active materials in information processing. That traditionally insulating plastics could be made to conduct electricity was discovered serendipitously in the late 1960s by Hideki Shirakawa in Tokyo, in the form of silvery films of polyacetylene. The physicist Alan Heeger and the chemist Alan MacDiarmid collaborated with Shirakawa in 1976 to boost the conductivity of this material by doping with iodine, and went on to make a ‘polymer battery’. Other conducting polymers, especially polyaniline, were mooted for all manner of uses, such as antistatic coatings and loudspeaker membranes.

This early work was greeted enthusiastically by some industrial companies, but soon seemed to be leading nowhere fast – the polymers were too unstable and difficult to process, and their properties hard to control and reproduce reliably. That changed in the late 1980s when Richard Friend and coworkers in Cambridge found that poly(para-phenylene vinylene) not only would conduct without doping but could be electrically stimulated to emit light, enabling the fabrication of polymer light-emitting diodes. The attraction was partly that a polymer’s properties, such as emission colour and solubility, can be fine-tuned by altering its chemistry. Using such substances for making lightweight, flexible devices and circuits, via simple printing and coating techniques rather than the high-tech methods needed for inorganic semiconductor electronics, began to seem possible. The genuine potential of the field was acknowledged when the 2000 Nobel prize for chemistry went to Shirakawa, Heeger and MacDiarmid.

The synthesis of gossamer-thin organic electronic circuits reported by Martin Kaltenbrunner and colleagues in Tokyo (Nature 499, 458-463; 2013) is the latest example of the ingenuity driving this field. Their devices elegantly blend new and old materials and techniques. The substrate is a one-micron-thick plastic foil, while organic small molecules provide the semiconductor for the transistors, other organic molecules and alumina constitute the insulating layers, and the electrodes are ultrathin aluminium. The featherweight plastic films, 27 times lighter than office paper, can be crumpled like paper, and on an elastomeric substrate the circuits can be stretched more than twofold, all without impairing device performance. Adding a pressure-sensitive rubber layer produces a touch-sensing foil that could serve as an electronic skin for robotics, medical prostheses and sports applications.

Wearable and flexible electronics and optoelectronics have recently taken great strides, propelled in particular by the work of John Rogers’ group at Illinois (D.-H. Kim et al., Ann. Rev. Biomed. Eng. 14, 113-128 (2012)). Such devices can now be printed on or attached directly to human skin, and can be made from materials that biodegrade safely. Especially when coupled to wireless capability, both for powering the devices and for reporting their sensor activity, the possibilities for in situ monitoring of wound care and tissue repair, brain and heart function, and drug delivery are phenomenal; the challenge will be for medical procedures to keep pace with what the technology can offer. In any event, such applications reinforce the fact that organic electronics should not be seen as a competitor to silicon logic but as complementary, taking information processing into areas that silicon will never reach.

At the risk of inflating another premature bubble, these technologies look potentially transformative – more so, on current showing, than the much heralded graphene. The remark by Kaltenbrunner et al. that their circuits are “both virtually unbreakable and imperceptible” perhaps says more than they intended. In this regard the new work continues the trend towards the emergence of a smart environment in which all kinds of functionality are invisibly embedded. What happens when packing film (one possible use of the new foldable circuitry), clothing, money, even flesh and blood, are imbued with the ability to receive, process and send information – when more or less any fabric of daily life can be turned, unseen, into a computing and sensing device? Most narratives currently dwell on fears of surveillance or benefits of round-the-clock medical checks and diagnoses. Both might turn out to be warranted, but past experience (with information technology in particular) should teach us that technologies don’t simply get superimposed on the quotidian, but both shape and are shaped by human behaviour. Whether or not we’ll get what’s good for us, it probably won’t be what we expect.

Wednesday, July 24, 2013

Radio DNA

Another cat among the pigeons, perhaps… here is my latest Crucible column for Chemistry World.

______________________________________________________________

It has to rate as one of the most astonishing discoveries of this century, and it came from a Nobel laureate. Yet it was almost entirely ignored. In 2011 Luc Montagnier, who three years earlier was awarded the Nobel Prize in medicine for his co-discovery of the AIDS virus HIV, reported that he and his coworkers could use the polymerase chain reaction (PCR, the conventional method of amplifying strands of DNA) to synthesize DNA sequences of more than 100 base pairs, without any of the target strands present to template the process [1]. All they needed was water. Water, that is, first subjected to very-low-frequency electromagnetic waves emitted and recorded from solutions of DNA encoding the target sequence. In other words, the information in a DNA strand could be transmitted by its electromagnetic emissions and imprinted on water itself.

Maybe you’re now thinking this work was ignored for good reason, namely that it’s utterly implausible. I agree: it doesn’t even begin to make sense given what we know about the molecular ingredients. But the claims were unambiguous. The authors say they took a 104-base-pair fragment of DNA from HIV (and who knows about that better than Montagnier?) and copied it, reproducibly and with at least 98% fidelity, by adding the PCR ingredients to the irradiated water. If you choose to ignore this, are you saying Montagnier is lying?

What you’re actually saying is that science doesn’t always work as it is ‘supposed’ to, by claims being tested and then accepted or rejected depending on the result. Of course, many trivial claims never get replicated (that’s another story), but really big ones – and they don’t come much bigger than this – are immediately interrogated by other labs, right? That’s what happened with cold fusion, however implausible it seemed. True, some results can’t be replicated without highly specialized kit and expertise – no one has rushed to verify the Higgs boson sighting. But Montagnier and colleagues used nothing more than you’d find in most molecular biology labs worldwide.

So what’s going on? What we’re really seeing tested here are the unwritten social codes of science. Montagnier has long been seen as something of a maverick, but in recent years some have accused him of descending into quackery. Since claiming in 2009 that some DNA emits EM signals [2], he has suggested that such signals can be detected in the blood of children with autism and that this justifies treating autism with antibiotics. He has seemed to suggest that HIV can be defeated with diet and supplements, and commended the notorious ‘memory of water’ proposed by French immunologist Jacques Benveniste [3]. Although he is currently the head of the World Foundation for AIDS Research and Prevention in Paris, his unorthodox views have prompted some leading researchers to question his suitability to lead such projects.

But science judges the results, not the person, right? So let’s look at the paper. At face value making a simple claim, it is in fact so peppered with oddness that other researchers probably imagine any attempt at replication will be deeply unrewarding. There are hints that the EM emissions come from a baffling and bloody-minded universe: their strength doesn’t correlate with concentration, they seem to appear in some ranges of dilution and then vanish in others, and there is no rhyme or reason to which organisms or sequences produce them and which don’t. That the authors show the signals not as ordinary graphs but as a screenshot adds to the misgivings.

Then there’s the ‘explanation’. Montagnier has teamed up with Italian physicist Emilio Del Giudice and his colleagues, who in 1988 published a “theory of liquid water based on quantum field theory” [4] which proposed that water molecules can form “coherent domains” about 100 nm in size containing “almost free electrons” that can absorb electromagnetic energy and use it to create self-organized dissipative structures. These coherent domains are, however, a quantum putty to be shaped to order, not a theory to be tested. They haven’t yet been clearly detected, nor have they convincingly explained a single problem in chemical physics, but they have been invoked to account for Benveniste’s results and cold fusion, and now they can explain Montagnier’s findings on the basis that the EM signals from DNA can somehow shape the domains to stand in for the DNA itself in the PCR process.

Make of this what you will; the real issue here is that it all looks puzzling, even prejudiced, to outsiders, who understandably cannot fathom why a startling claim by a distinguished scientist is apparently just being brushed aside. Perhaps it might help to stop pretending that science works as the books say it does. Perhaps also, given that Montagnier says his findings are motivating clinical trials to “test new therapeutics” for HIV in sub-Saharan Africa, it might be wise to subject them to more scrutiny after all.

References
1. L. Montagnier et al., J. Phys. Conf. Ser. 306, 012007 (2011).
2. L. Montagnier et al., Interdiscip. Sci. Comput. Life Sci. 1, 81 (2009).
3. E. Davenas et al., Nature 338, 816 (1988).
4. E. Del Giudice, G. Preparata & G. Vitiello, Phys. Rev. Lett. 61, 1085 (1988).

Maxwell's fridge

I haven’t generally been posting here the pieces I’ve been writing for Physical Review Focus, as they tend to be a bit technical. But as I’ve been writing this and that about Maxwell’s demon elsewhere, I thought I’d post this one. The final version is here.

_______________________________________________________

In 1867 the physicist James Clerk Maxwell described a thought experiment in which the random thermal fluctuations of molecules might be rectified by intelligent manipulation, building up a temperature difference that might be used to do useful work. Now in Physical Review Letters a team at the University of Maryland outlines a theoretical scheme by which Maxwell’s nimble-fingered ‘demon’ might be constructed as an autonomous device that in effect uses computation to transfer heat from a cold substance to a hotter one, thereby acting as a refrigerator.

Maxwell believed that his demon might defy the second law of thermodynamics, which stipulates that the entropy of a closed system cannot decrease in any process of change. Because this law is statistical – an entropy increase, or increase in disorder, is simply the far more likely outcome – the demon might undermine it, for example by physically reversing the usual scrambling of hot and cold molecules and thereby preventing the diffusion of heat.

Most physicists now agree that such a demon wouldn’t defeat the second law, because of an argument developed in the 1960s by Rolf Landauer [1]. He showed that the cogitation needed to perform the selection would have a compensating entropic cost – specifically, the act of resetting the demon’s memory dissipates a certain minimal amount of heat per bit erased.
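Landauer’s figure is easy to put a number on: the minimum dissipation is k_B T ln 2 of heat per bit erased, which at room temperature comes to a few zeptojoules. A quick check:

```python
import math

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2) of heat.
k_B = 1.380649e-23      # Boltzmann constant, in J/K
T = 300.0               # room temperature, in K
print(f"minimum heat per erased bit: {k_B * T * math.log(2):.2e} J")   # about 2.9e-21 J
```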

Despite this understanding, there have been few attempts to postulate an actual physical device that might act as a Maxwell demon. Last year, Christopher Jarzynski and Dibyendu Mandal at Maryland proposed such a ‘minimal model’ of an autonomous device [2]. It consisted of a three-state device (the ‘demon’) that can extract energy from a reservoir of heat and use it to do useful work. The transitions in the demon are linked to the writing of bits into a memory register – a tape recording binary information – which moves past it, according to particular coupling rules.

In collaboration with their colleague Haitao Quan, now at Peking University, Mandal and Jarzynski have now refined their model so that the demon is a two-state device coupled to heat exchange between a hot and a cold reservoir. Again, the operation of the demon is governed by the coupling rules imposed between its transitions, the reservoirs and the memory, resulting in a mathematically solvable model whose performance depends on its parameters.

The demon can absorb heat from the hot reservoir to reach its excited state, and reverse that process, without altering the memory. But the rules say that energy may only be exchanged with the cold reservoir by coupling to the memory. The demon can absorb heat from the cold reservoir if the incoming bit is a 0, or release it if the bit is a 1. And whenever energy is exchanged with the cold reservoir, the demon reverses the bit, which affects the entropy of the outgoing bit stream. So each 0 allows the chance for energy to move from the cold reservoir into the demon – and potentially then out to the hot reservoir.

The researchers find that the behaviour of the system depends on the temperature gradient and the relative proportions of 1s and 0s in the incoming bit stream. In one range of parameters the device acts as a refrigerator, drawing heat from the cold reservoir while imprinting a memory of this operation as 1s in the outgoing bit stream. In another range it acts as an information eraser: lowering the excess of 0s in the bit stream and thus randomizing this ‘information’, while allowing heat to flow from hot to cold.
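To get a feel for how these coupling rules play out, here is a rough Monte Carlo sketch of a two-state demon of this general kind. It is not the authors’ exactly solved model, and the energy gap, temperatures, bit-stream bias, interaction time and transition rates below are all illustrative assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
E      = 1.0      # energy gap of the two-state demon
T_hot  = 2.0      # hot reservoir temperature (k_B = 1)
T_cold = 1.0      # cold reservoir temperature
p0     = 0.9      # fraction of 0s in the incoming bit stream
tau    = 5.0      # interaction time of each bit with the demon
gamma  = 1.0      # bare attempt rate
n_bits = 200_000

def channels(demon, bit):
    """Allowed transitions and rates from state (demon, bit).

    - the demon flips on its own by exchanging energy E with the HOT reservoir
    - the demon flips together with the bit by exchanging E with the COLD
      reservoir: (down, 0) -> (up, 1) absorbs E from cold,
      (up, 1) -> (down, 0) releases E to cold
    Rates obey detailed balance at the corresponding reservoir temperature.
    """
    out = []
    dE = E if demon == 0 else -E                     # demon's energy change via hot channel
    out.append(((1 - demon, bit), gamma * np.exp(-max(dE, 0.0) / T_hot), 0.0))
    if demon == 0 and bit == 0:
        out.append(((1, 1), gamma * np.exp(-E / T_cold), +E))   # heat in from cold
    elif demon == 1 and bit == 1:
        out.append(((0, 0), gamma, -E))                          # heat out to cold
    return out

heat_from_cold = 0.0
ones_out = 0
demon = 0
for _ in range(n_bits):
    bit = 0 if rng.random() < p0 else 1
    t = 0.0
    while True:                                      # Gillespie loop within one bit's window
        chans = channels(demon, bit)
        total = sum(rate for _, rate, _ in chans)
        t += rng.exponential(1.0 / total)
        if t > tau:
            break
        pick, acc = rng.random() * total, 0.0
        for new_state, rate, q in chans:
            acc += rate
            if pick < acc:
                demon, bit = new_state
                heat_from_cold += q
                break
    ones_out += bit

print("outgoing fraction of 1s:", ones_out / n_bits)
print("average heat drawn from cold reservoir per bit:", heat_from_cold / n_bits)
```

Whether a given set of made-up numbers lands in the refrigerator regime (a positive average heat drawn from the cold reservoir) or in the eraser regime is precisely the sort of thing such a toy lets you explore.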

Jarzynski says that, while the model couples heat flow and information, it doesn’t have Landauer’s condition explicitly built in. Rather, this condition emerges from the dynamics, and so the results provide some support for Landauer’s interpretation.

How might one actually build such a system? “We don’t have a specific physical implementation in mind”, Jarzynski admits, but adds that “we are exploring a fully mechanistic Rube Goldberg-like contraption where the demon and memory are represented by wheels and paddles that rotate about the same axis and interact by bumping into one another.”

Trying to figure out how a physical device might act like Maxwell’s demon is “an important task”, according to Franco Nori of the University of Michigan. “To build such a system in the future would be another story, but this is a very important step in the right direction,” he says.

Although he sees this as “an interesting theoretical model of Maxwell's demon”, Charles Bennett of IBM’s research laboratory in Yorktown Heights, New York, thinks it could be made even simpler. “It’s somewhat unrealistic and unnecessarily complicated to have the tape move at a constant velocity”, he says – the parameter describing the tape speed could be eliminated “by coupling each 0→1 tape transition to a forward step of the tape and each 1→0 transition to a backward step.”

References
1. R. Landauer, IBM J. Res. Dev. 5, 183 (1961).
2. D. Mandal & C. Jarzynski, Proc. Natl Acad. Sci. USA 109, 11641-11645 (2012).

Friday, July 19, 2013

What the bees know


I’ve written a news story for Nature on a new paper claiming that the bees’ honeycomb is made hexagonal by surface tension, rather than the engineering skills of the bees. They just make cylindrical cells, the researchers say, and physics does the rest. This isn’t a new idea, as I point out in the story: D’Arcy Thompson suggested as much, and Darwin suspected it. However, it seems to me to be potentially underestimating the role of the bees. The weird thing about the work is that it essentially freezes the honeycomb in an unfinished state, by smoking out the worker bees, and the researchers find that the incomplete cells are circular in cross-section – but there’s apparently no reason to believe that the bees had done all they were going to do, leaving the rest to surface tension. Who’s to say they wouldn’t have kept shaping the cells if left undisturbed? It may be that the authors are right, but the current work seems to me some way from a proof of that. Well, here first is the story…

__________________________________________________________________

Physical forces rather than bees’ ingenuity might create the hexagonal cells.

The perfect hexagonal array of the bees’ honeycomb, admired for millennia as an example of natural pattern formation, owes more to simple physical forces than to the skill of the bees, according to a paper published in the Journal of the Royal Society Interface [1].

Engineer Bhushan Karihaloo of Cardiff University in Wales and his coworkers say that the bees simply make cells of circular cross-section, packed together like a layer of bubbles, and that the wax, softened by the heat of the bees’ bodies, then gets pulled into hexagonal cells by surface tension.

The finding feeds into a long-standing debate about whether the honeycomb is an example of exquisite biological engineering or blind physics.

A regular geometric array of identical cells with simple polygonal cross-sections can have only one of three forms: triangular, square or hexagonal. Of these, hexagons divide up the space using the least amount of wall area, and thus the least amount of wax.
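That economy is easy to check with a little geometry: for cells of equal area, compare the wall length per cell, counting each shared wall once, for the three candidate tilings. A purely illustrative sketch:

```python
import math

# Wall length per cell of unit area, counting each wall once
# (every wall in the comb is shared between two cells).
def wall_per_unit_area(n_sides):
    # regular n-gon of unit area: area = n * s**2 / (4 * tan(pi/n)), solved for the side s
    s = math.sqrt(4 * math.tan(math.pi / n_sides) / n_sides)
    return 0.5 * n_sides * s          # half the perimeter of each cell

for name, n in [("triangle", 3), ("square", 4), ("hexagon", 6)]:
    print(f"{name:8s} {wall_per_unit_area(n):.3f}")
# hexagons come out lowest: about 1.86, against 2.00 for squares and 2.28 for triangles
```

For the same enclosed area, hexagons use roughly 7 per cent less wall than squares and about 18 per cent less than triangles.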

This economy was noted in the fourth century by the mathematician Pappus of Alexandria, who claimed that the bees had “a certain geometrical forethought”. But in the seventeenth century the Danish mathematician Erasmus Bartholin suggested that they don’t need any such foresight, since the hexagons would result automatically from the pressure of each bee trying to make its cell as large as possible, much as the pressure of bubbles packed in a single layer creates a hexagonal foam.

In 1917 the Scottish zoologist D’Arcy Thompson argued that, again by analogy with bubbles, surface tension in the soft wax will pull the cell walls into hexagonal, threefold junctions [2]. A team led by Christian Pirk of the University of Würzburg in Germany showed in 2004 that molten wax poured into the space between a regular hexagonal array of cylindrical rubber bungs will indeed retract into hexagons as it cools and hardens [3].

Karihaloo and colleagues now seem to clinch this argument by showing that bees do initially make cells with a circular cross-section – as Charles Darwin suspected – and that these develop into hexagons by the flow of wax at the junctions where three walls meet.

They interrupted honeybees in the act of making a comb by smoking them out of the hive, and found that the most recently built cells have a circular shape while those just a little older have developed into hexagons. They say the worker bees that make the comb knead and heat the wax with their bodies until it reaches about 45 °C – warm enough to flow like a viscous liquid.

Karihaloo thinks that no one thought previously to look at cells before they are completed “because no one imagined that the internal profile of the cell begins as a circle” – it was just assumed that the final cell shape is the one the bees make. He says they got the idea from experiments on a bunch of circular plastic straws which changed to the hexagonal form when heated [4].

The question is whether there is anything much left for the bees to do, given that they do seem to be expert builders. They can, for example, use their head as a plumb-line to measure the vertical, tilt the cells very slightly up from horizontal to prevent the honey from flowing out, and measure cell wall thicknesses extremely precisely. Might they not continue to play an active role in shaping the circular cells into hexagons, rather than letting surface tension do the job?

Physicist and bubble expert Denis Weaire of Trinity College Dublin in Ireland suspects they might, even though he acknowledges that “surface tension must play a role”.

“I have seen descriptions of bees steadily refining their work by stripping away wax”, he says. “So surely those junctions of cell walls must be crudely assembled then progressively refined, just as a sculptor would do?”

While Karihaloo says “I don't think the bees know how to measure angles”, he admits that further experiments are needed to rule out that possibility.

Weaire adds that “if the bee’s internal temperature is enough to melt wax, the temperature of the hive will always be close to the melting point, so the wax will be close to being fluid. This may be more of a nuisance than an advantage.”

But Karihaloo explains that not all the bees act as 'heaters'. "The ambient temperature inside the comb is just 25 °C", he says. Besides, he adds, the insects strengthen the walls over time by adding recycled cocoon silk to them, creating a kind of composite.

References
1. Karihaloo, B. L., Zhang, K. & Wang, J. J. R. Soc. Interface advance online publication doi:10.1098/rsif.2013.0299 (2013).
2. Thompson, D. W. On Growth and Form (Cambridge University Press, 1917).
3. Pirk, C. W. W., Hepburn, H. R., Radloff, S. E. & Tautz, J. Naturwissenschaften 91, 350–353 (2004).
4. Zhang, K., Zhao, X. W., Duan, H. L., Karihaloo, B. L. & Wang, J. J. Appl. Phys. 109, 084907 (2011).

Now I want to add a few further comments. It seems the authors didn’t know that Darwin had looked extensively at this issue. He felt some pressure to show how the hexagonal hive could have arisen by natural selection. He conducted experiments himself at Down House, and corresponded with bee experts, noting that bees first excavate hemispherical pits in the wax which they gradually work into the cell shapes. There is some fascinating correspondence on this in the link given above, though Darwin never found the evidence he was looking for.

One of the problems with leaving it all to surface tension, however, is what happens when you get an irregular cell, either because the bees make a mistake (as they do) or because edge effects create defects. As Denis Weaire pointed out,

“Bees do make topological mistakes, or are led into them by boundary conditions. Surface tension would entirely destroy their work, because of this, if unchecked! (five-sided cells shrink etc...):there is no equilibrium configuration!”

Another worry that Denis voiced is what happens to the excess wax if the cell walls are thinned and straightened by flow. This does seem to have an explanation: Karihaloo says that wax is not actually removed, it just begins in a somewhat loose, porous state, which gets consolidated.

I also wondered about the cell end caps. The cells in the honeycomb are made in two back-to-back layers, married by a puckered surface made from end caps that consist of three rhombi in a fragment of a rhombic dodecahedron. This turns out – as Denis showed in 1994 (Nature 367, 123) – to be the minimal surface for this configuration. So one might imagine it too could result from surface tension, if the authors’ argument is right. But when I asked about it, Karihaloo said “Pirk et al. have shown that the end caps are not rhombic at all; it is just an optical illusion.” I was surprised by this, and asked Weaire about it – he said this was the first time he’d heard that suggestion, and that he has pictures of natural combs which show that these polygonal end faces are certainly not illusory. Indeed, Darwin and his correspondents mention the rhombi, and those old gents were mighty careful natural historians. So this suggestion seems to be wrong.

Tuesday, July 09, 2013

Preparing for a new second

A bit techie, this one, but I liked the story. It’s a news piece for Nature.

________________________________________________________

A new type of atomic clock could transform the way we measure time.

The international definition of a second of time could be heading for a change, thanks to the demonstration by researchers in France that a new type of ‘atomic clock’ has the required precision and stability.

Jérôme Lodewyck of the Observatoire de Paris and his colleagues have shown that two so-called optical lattice clocks (OLCs) can remain as perfectly in step as the experimental precision can establish [1]. They say that this test of consistency is essential if OLCs are to be used to redefine the second, currently defined according to a different sort of atomic clock.

This is “very beautiful and careful work, which gives grounds for confidence in the optical lattice clock and in optical clocks generally”, says Christopher Oates, a specialist in atomic-clock time standards at the National Institute of Standards and Technology (NIST) in Boulder, Colorado.

Defining the unit of time according to the frequency of electromagnetic radiation emitted from atoms has the attraction that this frequency is fixed by the laws of quantum physics, which dictate the energy states of the atom and thus the energy and frequency of photons of light emitted when the atom switches from one state to the other.

Since 1967, one second has been defined as the duration of 9,192,631,770 oscillations of the microwave radiation absorbed or emitted when a caesium atom jumps between two particular energy states.

The most accurate way to measure this frequency at present is in an atomic fountain, in which a laser beam is used to propel caesium atoms in a gas upwards. Emission from the atoms is probed as they pass twice through a microwave beam – once on the way up, once as they fall back down under gravity.

The time standard for the United States is defined using a caesium atomic-fountain clock called NIST-F1 at NIST. Similar clocks are used for time standards elsewhere in the world, including the Observatoire de Paris.

The caesium fountain clock has an accuracy of about 3 × 10^-16, meaning that it will keep time to within one second over 100 million years. But some newer atomic clocks can do even better. Monitoring emission from individual ionized atoms trapped by an electromagnetic field can supply an accuracy of about 10^-17.
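The ‘one second in 100 million years’ figure follows directly from that fractional accuracy, since the accumulated error is just the fraction multiplied by the elapsed time:

```python
# How long a clock with fractional accuracy 3e-16 takes to drift by one second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
fractional_accuracy = 3e-16
years_to_drift_one_second = 1 / (fractional_accuracy * SECONDS_PER_YEAR)
print(f"{years_to_drift_one_second:.1e} years")   # roughly 1e8, i.e. 100 million years
```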

The clocks studied by Lodewyck and colleagues are newer still – first demonstrated under a decade ago [2]. And although they can’t yet beat the accuracy of trapped-ion clocks, they have already been shown to be comparable to caesium fountain clocks, and some researchers suspect that they’ll ultimately be the best of the lot.

That’s for two reasons. First, like trapped-ion clocks, they measure the frequency of visible light, with a frequency tens of thousands of times higher than microwaves. “Roughly speaking, this means that optical clocks divide a second into many more time intervals than microwave caesium clocks, and so can measure time with a higher precision,” Lodewyck explains.

Secondly, they measure the average emission frequency from several thousand trapped atoms rather than just one, and so the counting statistics are better. The atoms are trapped in a so-called optical lattice, rather like an electromagnetic eggbox for holding atoms.

If OLCs are to succeed, however, it’s essential to show that they are reliable: that one such clock ticks at exactly the same rate as another prepared in an identical way. This is what Lodewyck and colleagues have now shown for the first time. They prepared two optical lattice clocks, each holding about 10,000 atoms of the isotope strontium-87, and have shown that the two clocks stay in synchrony to within a precision of at least 1.5 × 10^-16, which is the detection limit of the experiment.

But if the definition of a second is to be switched from the caesium standard to the OLC standard, it’s also necessary to check that the two types of clock are in synchrony. The French team have done that too. They found that their strontium OLCs keep pace with all three of the caesium clocks in the Observatoire, to an accuracy limited only by the fundamental limit on the caesium clocks themselves.

“These sorts of comparisons have historically been critical in laying the groundwork for redefinitions of fundamental units”, says Oates.

Accurate timing is crucial to satellite positioning systems such as GPS, which is why GPS satellites have onboard atomic clocks. But their accuracy is currently limited more by other factors, such as air turbulence, than by the performance of their clocks. There are, however, other good reasons for going beyond the already astonishing accuracy of caesium clocks.

For example, in astronomy, if the arrival times of light from space could be compared extremely accurately for different places on the Earth’s surface, this could allow the position of the light’s source to be pinpointed very precisely – with a resolution that, as with current interferometric radio telescope networks, is “equivalent to a continent-sized telescope”, says Lodewyck.

Better time measurement would also enable high-precision experiments in fundamental physics: for example, to see if some of nature’s fundamental constants change over time, as some speculative theories beyond the Standard Model of physics predict.

Before switching to a new standard second, says Lodewyck, there are more hurdles to be jumped. Optical clocks are needed that can run constantly, and there must be better ways to compare the clocks operating in different institutes.

“This measurement is a significant advance towards a new definition of the second”, says Uwe Sterr of the Physikalisch-Technische Bundesanstalt in Braunschweig, Germany, which also operates an atomic-clock standard. “But to agree on a new standard for time the pros and cons of the different candidates that are in the play needs to be evaluated in more detail”, he adds.

“It’s not yet decided which atomic species nor which kind of optical clocks will be chosen as the next definition of the SI second”, Lodewyck concurs. “But we believe that strontium OLCs are a strong contender.”

References
1. Le Targat, R. et al., Nature Communications 4, 2109 (2013).
2. Takamoto, M., Hong, F. -L., Higashi, R. & Katori, H. Nature 435, 321–324 (2005).

Gangs of New York

Here’s my latest piece for BBC Future, pre-editing.

_______________________________________________________

One of the big challenges in fighting organized crime is precisely that it is organized. It is run like a business, sometimes literally, with chains of command and responsibility, different specialized ‘departments’, recruitment initiatives and opportunities for collaboration and trade. This structure can make crime syndicates and gangs highly responsive and adaptable to attempts at disruption by law-enforcement services.

That’s why police forces are keen to discover how these organizations are arranged: to map the networks that link individual members. This structure is quite fluid and informal compared to most legitimate businesses, but it’s not random. In fact, violent street gangs seem to be organized along rather similar lines to insurgent groups that stage armed resistance to political authority, such as guerrilla forces in areas of civil war – for instance, both are affiliations of cells, each with its own leaders. It’s for this reason that some law-enforcement agencies are hoping to learn from military research. A team at the US Military Academy at West Point in New York has just released details of a software package it has developed to aid intelligence-gathering by police dealing with street gangs. The program, called ORCA (Organization, Relationship, and Contact Analyzer), can use real-world data acquired from arrests and questioning of suspects to deduce the network structure of the gangs.

ORCA can figure out the likely affiliations of individuals who will not admit to being members of any specific gang, as well as the sub-structure of gangs (the ‘gang ecosystem’) and the identity of particularly influential members, who tend to dictate the behaviour of others.

There are many reasons why this sort of information would be important to the police. The ecosystem structure of a gang can reveal how it operates. For example, many gangs fund themselves through drug dealing, which tends to happen by the formation of “corner crews”: small groups that congregate on a particular street corner to sell drugs. And having some knowledge of the links and affiliations between different gangs can highlight dangers that call for more focused policing. If a gang perpetrates some violent action on a rival gang, police will often monitor the rival gang more closely because of the likelihood of retaliation. But gangs know this, and so the rivals might instead ask an allied gang to carry out the reprisal. So police need to be aware of such alliances.

The roles of highly influential members of a social network are familiar from other studies of such networks – for example, in viral marketing and the epidemiology of infectious diseases. These individuals typically have a larger than average number of links to others, and their choices and actions are quickly adopted by others. An influential gang member who is prone to risky, radicalizing or especially violent behaviour can induce others to follow suit – so it can be important to identify these individuals and perhaps to monitor them more closely.

In developing ORCA, West Point graduate Paulo Shakarian, who has a doctorate in computer sciences and has worked in the past as an adviser to the Iraqi National Police, and his coworkers have drawn on the large literature that has grown over the past decade on the mapping of social networks. These studies have shown that the way a network operates – how information and influence spread through it, for example – depends crucially on what mathematicians call its topology: the shape of the links between people. For example, spreading happens quite differently on a grid (like the street network of Manhattan, where there are many alternative routes between two points), or a tree (where points are connected by the repeated splitting of branches), or a ‘small world’ network (where there are generally many shortcuts so that any point can be reached from any other in relatively few jumps). Many studies in this new mathematical science of networks have been concerned to deduce the community structure of the network: how it can be decomposed into smaller clusters that are highly connected internally but more sparsely linked to other modules. It’s this kind of analysis that enables ORCA to figure out the ecosystems of gangs.

One of the features of ORCA is an algorithm – a set of rules – that assigns each member of the network a probability of belonging to a particular gang. If an individual admits to this, the assignment can be awarded 100% probability. But if he will not, then any known associations he has with other individuals can be used to calculate a probable ‘degree of membership’. The program can also identify ‘connectors’ who are trusted by different gangs to mediate liaisons between them, for example to broker deals that allow one gang to conduct drug sales on the territory of another.
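The paper gives the algorithm in more detail than I can reproduce here, but the flavour of a ‘degree of membership’ calculation can be conveyed with a toy label-propagation sketch. Everything below (the names, the links and the update rule) is invented for illustration and is not ORCA itself:

```python
from collections import defaultdict

# Toy example: two admitted gang members at the ends of a chain of associates.
links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]
admitted = {"A": "Gang1", "F": "Gang2"}          # people who admit an affiliation

people = sorted({p for edge in links for p in edge})
neighbours = defaultdict(set)
for a, b in links:
    neighbours[a].add(b)
    neighbours[b].add(a)

gangs = sorted(set(admitted.values()))
# admitted members start (and stay) at 100%; everyone else starts uniformly uncertain
prob = {p: {g: (1.0 if admitted.get(p) == g else 0.0) if p in admitted
            else 1.0 / len(gangs) for g in gangs} for p in people}

for _ in range(50):                               # iterate until roughly converged
    new = {}
    for p in people:
        if p in admitted:
            new[p] = prob[p]
            continue
        # a non-admitted person's membership guess is the average over their associates
        avg = {g: sum(prob[q][g] for q in neighbours[p]) / len(neighbours[p])
               for g in gangs}
        total = sum(avg.values()) or 1.0
        new[p] = {g: v / total for g, v in avg.items()}
    prob = new

for p in people:
    print(p, {g: round(v, 2) for g, v in prob[p].items()})
```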

Shakarian and colleagues tested ORCA using police data on almost 1500 individuals belonging to 18 gangs, collected from 5418 arrests in one district over three years. These gangs were known to be racially segregated, and the police told the West Point team that one racial group was known to form more centrally organized gang structures than the other. ORCA confirmed that the latter, more decentralized group tended to be composed of more small modules, rather than larger, branched networks.

Although the West Point team can’t disclose details, they say that they are working with a “major metropolitan police department” to test their program and to integrate it with information on the geographical distributions of gangs and how they change over time. One can’t help suspecting that the developers of games such as Grand Theft Auto, which unfolds in a complex netherworld of organized crime gangs, will also be taking an interest, to improve the realism of their fictional scenarios.

Reference: D. Paulo et al., preprint http://www.arxiv.org/abs/1306.6834 (2013).

Friday, July 05, 2013

Turning pearls


Here’s the previous fortnightly piece for BBC Future. That published today is coming up soon.

_________________________________________________________

Of all nature’s defence mechanisms, molluscs surely have the most stunning. If a foreign particle such as an abrasive sand grain or a parasite gets inside the soft body of a mollusc – most pearls are made by oysters, though clams and mussels will make them too – the organism coats it in nacre (mother of pearl), building up a smooth blob of this hard iridescent material. The mollusc is, of course, oblivious to the fact that this protective capsule is so gorgeous. Pearls can be white, grey, black, red, blue, green or yellow, and their attraction for humans has led to traditions of pearl-diving that are thousands of years old. Today pearls are harvested in oyster farms in the Indian Ocean, East Asia and all across the Pacific, where pearl production is stimulated artificially by inserting round beads into the molluscs to serve as seeds.

Yet in spite of the commercial value of pearl production, the formation of pearls is still imperfectly understood. Only the most highly prized pearls are perfectly spherical. Many have other shapes: elongated and ovoid, say, or the teardrop shape that works well for earrings. Some, called baroque pearls, are irregular, like blobs of solder pinched off at one end into a squiggly tail. It’s common for pearls to adopt a shape called a solid of revolution, roughly round or egg-shaped but often with bands and rings running around them latitudinally, like wooden beads or bedknobs turned on a lathe. In other words, the pearl has perfect ‘rotational symmetry’: it looks the same when rotated by any amount on its axis. When you think about it, that’s a truly odd shape to account for.

It’s recently become clear that pearls really are turned. Pearl farmers have long suspected that the pearl might rotate as it grows within the pouch that holds it inside the soft ‘mantle’ tissue of the mollusc. In 2005 that was confirmed by a report published in an obscure French-language ‘journal of perliculture’, which stated that a pearl typically rotates once every 20 days or so. This would explain the rotational symmetry: any differences in growth rate along the axis of rotation get copied around the entire circumference.

But what makes a pearl turn? Julyan Cartwright, who works for the Spanish Research Council (CSIC), and his colleagues Antonio Checa of the University of Granada and Marthe Rousseau of the CNRS Pharmacologie et Ingénierie Articulaires in Vandoeuvre les Nancy, France, have now come up with a possible explanation.

Nacre is an astonishing material in its own right. It consists mostly of aragonite, a form of calcium carbonate (the mineral fabric of chalk), which is laid down here as microscopic slabs stacked in layers and ‘glued’ with softer organic membranes of protein and chitin (the main component of the insect cuticle and shrimp shell). This composite structure, with hard layers weakly bonded together, makes nacre extremely tough and crack-resistant, which is why materials scientists seek to mimic its microstructure in artificial composites. The layered structure also reflects light in a manner that creates interference of the light waves, producing the iridescence of mother-of-pearl.

The slabs of aragonite are made from chemical ingredients secreted by the same kind of cells responsible for making the mollusc’s shell. Several layers grow at the same time, creating terraces that can be seen on a pearl’s surface when inspected under the microscope.

Cartwright and colleagues think that these terraces hold the key to a pearl’s rotation. They say that, as new molecules and ions (whether of calcium and carbonate, or chitin or protein) stick to the step of a terrace, they release energy which warms up the surface. At the same time, molecules of water in the surrounding fluid bounce off the surface, and can pick up energy as they do. The net result is that, because of the conservation of momentum, the step edge recoils: the surface receives a little push.

If terrace steps on the surface were just oriented randomly across the pearl, these pushes would average out to zero. But for pearls with a solid-of-revolution shape, it’s been found that the terraces are arrayed in parallel like lines of longitude on a globe, creating a ratchet-like profile around the circumference of the pearl. Because of this ratchet shape, the molecular impacts on the vertical faces of the steps impart a preferred direction to the little impulses the pearl receives, causing the growing pearl to rotate. The researchers’ rough estimate of the size of this effect during growth of a typical pearl shows that it should produce a rotation rate more or less equal to that observed.
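The averaging argument is simple to illustrate: N equal tangential pushes in random directions largely cancel (their sum grows only as the square root of N), whereas N pushes lined up by a ratchet add in full. A throwaway sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 10_000          # number of terrace-step edges, an arbitrary illustrative figure
push = 1.0                # one unit of tangential impulse per step

# Randomly oriented steps: the impulses point every which way and mostly cancel.
angles = rng.uniform(0, 2 * np.pi, n_steps)
random_net = np.hypot(np.sum(push * np.cos(angles)), np.sum(push * np.sin(angles)))

# Steps lined up like lines of longitude (a ratchet): the impulses all add.
aligned_net = n_steps * push

print(f"net impulse, random steps : {random_net:.0f}   (of order sqrt(N) = {np.sqrt(n_steps):.0f})")
print(f"net impulse, aligned steps: {aligned_net:.0f}   (equal to N)")
```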

In other words, the pearl can become a kind of ratchet that, by virtue of the unsymmetrical step profile of its surface, can convert random molecular motions into rotation in one direction.

The researchers can also offer an explanation for where the ratchet profile comes from in the first place. If by chance a growing pearl starts to turn in a particular direction, feedbacks in the complex crystallization process on the pearl surface will cause the step edges to line up longitudinally (that is perpendicular to the rotation), creating the ratchet that then sustains the rotation.

The researchers admit that there are still gaps to be filled in their argument, but they say that the idea might be applied to make little machines that will likewise rotate spontaneously, powered only by ambient heat. But don’t worry: they haven’t invented perpetual motion. The rotation is ultimately powered by the heat released during the chemical process of crystallization, and it will stop when there is nothing left to crystallize – when the ‘fuel’ runs out.

Reference: J. H. E. Cartwright, A. G. Checa & M. Rousseau, Langmuir, advance online publication doi:10.1021/la40142021 (2013).

More on cursive

Only when clearing out my old magazines did I notice another comment in Prospect (May issue) on my article on cursive writing. There Katy Peters says:

“[Cursive] encourages children to allow their writing to follow the flow of thought more easily… My children [who were taught the usual print and cursive] have not been taught two different systems of writing; they have been taught a single method that allows them to commit thoughts to paper.”

So cursive helps a child’s train of thought to flow better than does print? I can see the logic in that: joined up writing leads to joined up thinking, right? But, Ms Peters, what about the fact that children find spelling harder with cursive because they find it more difficult to keep track of words as being composed of a discrete sequence of letters?

How do I know that’s the case, you ask? Well, I don’t. I just made it up. But it sounds kind of plausible, doesn’t it, so I figure it is on a par with Ms Peters’ view. I should say that, because Prospect letters are kept short and don’t allow references, I can’t entirely rule out the possibility that Ms Peters is quoting the findings of an academic study. But somehow I strongly doubt that. If you want to allude to actual research but aren’t permitted the citation itself, you can do that, as I did in my original piece (though that didn’t stop some from asking why there were no references). There’s no sign that Ms Peters did so. She’s simply saying something that sounds like it might be true.

What is patently untrue is that her children were taught “a single method” of writing. No, they really weren’t. It was abundantly evident to me that my daughter was very clearly being taught two systems – as she too knew very well, stating explicitly when she was choosing to use one and when the other. Cursive was a method she had to learn afresh. So let’s not just make stuff up because we want to believe it.

In any event, I wasn’t saying that cursive is inherently absurd, but rather, that there’s no good evidence that it has any advantages (I reserve judgement in the case of some children with dyslexia). The responses to my article have been very revealing about the way folks reason on things about which they have strong views. They don’t really want to know what academic studies show, and if confronted with such studies, they find ways to ignore or dismiss them. No, people simply want to find arguments for holding on to their beliefs. It doesn’t matter if these arguments are patently absurd – when I told the audience at the Hay Festival how several people cited the “grandmother’s letters in the attic” argument, they laughed, reassuring me that this was as transparently facile as I’d always thought.

No, people prefer anecdote to scientific study. (“But cursive is quicker for me.” Well, big surprise – you stopped printing when you were six, because you were told it wasn’t “grown-up”, and so you’ve scarcely practiced it since.) This doesn’t mean that such folk are stupid; they’re simply behaving as we seem predisposed to do, which is not in an evidence-based way. I’m sure I regularly do this too.

This is one reason why scientists who insist that we must better inform the public on issues such as climate change and evolution are only getting half the point. Better information is good, but it isn’t necessarily going to win the day, because we are ingenious at finding arguments that support our preconceptions, and ignoring evidence that doesn’t. Which is why I must try to remain open to the possibility that there really is evidence out there for why teaching children to print and then to write in cursive is a sensible way to teach. I was hoping that, if it exists, my article would bring it out into the open. It hasn’t yet.