What I’ve been reading

I love this introspective post by Robert Krulwich on what it means to collect a ton of data about yourself, and whether that’s the same as being observant.  Myself, I tend to side with the poets and with Bill Bryson. (H/T Ed Yong)

“Numbers are really the only reason you’re writing your paper, and you don’t want your readers to think that you’re into something as lame as words”  –Adam “you don’t write like a scientist” Ruben, whose Experimental Error blog at Science I can stand to read only one post at a time.  He’s also the author of Surviving Your Stupid, Stupid Decision to Go to Grad School, which I have avoided on principle; I’m saving it for a crisis.  Possibly the one that happens when my love-affair with words is uncovered.

I’m enjoying the Wellcome Trust’s series on scientific writing that science writers admire.  It’s good to hear from experts on how to do a thing well.  If you’re in the UK or Ireland and have any interest in science writing, consider submitting a piece to their contest!  I’m told there is a prize.

And on the lighter side Scicurious’s jumprope rhymes: possibly worth getting a Twitter account for.


What looms ahead: how neuroscientists study individual neurons

I spent a semester in college as a teaching assistant in an animal behavior lab. One of the most frustrating failed experiments was the mating assay: we put two flies, a male and a female, in an Eppendorf tube with an air hole, then waited to see whether strain 1 and strain 2 were as likely to cross-mate as to mate with members of their own strain. These particular flies had a loooooong latency to mate, and as you can imagine, the roomful of undergrads made a whole lot of jokes about mood music. When the flies finally began courtship behavior, the observers would often lean closer to track those tiny movements better.  Bad idea.  It turns out that for flies, a looming object is the biggest mood-killer there is.

We would have known that if we’d talked to Tom Clandinin at Stanford.  His lab studies, among other aspects of fly vision, the neurons that let the fly respond to looming objects.  I got the opportunity to hear Dr Clandinin speak about his work recently, and I think the study, published in Current Biology earlier this month, is so cool that I’m going to try summarizing it in the style of Scicurious and Ed Yong.

Dr Clandinin started out interested in how vision and behavior are connected in the fly brain. This kind of problem is much easier to study if you can identify smaller components, so in an earlier paper, the lab set out to find small groups of neurons that could be studied separately from the rest of the brain. Using the knowledge that similar cells have similar patterns of gene expression, and a genetic tool called enhancer trapping, they inserted a reporter gene at random into the genome in a large group of flies, then looked for changes in behavior. This let them identify flies in which a change in only a few cells caused a major change in behavioral responses to visual stimuli.

In particular, they found five neurons (shown in green in the image) in a connective region between the visual lobe and the rest of the brain, which they dubbed Foma1 cells (the name stands for “failure of motion assay 1” and means that the flies responded weirdly to visual stimuli). The location of the cells is interesting, because the connective region is a chokepoint between two areas that each handle lots of information, but through which relatively little information can pass.  Think of it as an undersea cable between phone networks on different continents; some kind of integration or discarding of information must occur there, so that only the important stuff gets through.

But how could they figure out what kind of information was passing through these neurons?  What made the mutated flies behave differently from their wild-type brethren?  They used an ingenious set-up: put a miniaturized TV screen in front of a fly, project simple, abstract moving images, and then use electrophysiology to listen in on individual neurons. They didn’t see any action in these five neurons when they presented a series of two-dimensional stimuli: rolls, yaws, up-and-down movement, or anything else that the fly could conceivably see while staying in one place and rotating in space (here’s an idea of the kind of image they used).  Instead, the neurons fired like crazy in response to a looming stimulus: a square that got bigger and bigger on the screen. The lab used slight changes in the projected image to characterize the neurons further, finding that they fired more often when the looming object seemed to be approaching faster; that they responded the same way to objects coming from any part of the visual field and in any color; and that they didn’t respond to a general change in screen brightness without the looming illusion. This combination of features made the Foma1 neurons loom detectors: they responded to the looming itself, and they seemed to respond to anything that loomed.
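The illusion on their screen has a simple geometry behind it: an object of a given half-size approaching at constant speed subtends an angle that grows slowly when far away and then blows up near collision. Here’s a minimal Python sketch of that geometry; the sizes and speeds are illustrative numbers I’ve made up, not parameters from the paper.

```python
import math

def looming_angle(half_size, speed, time_to_collision):
    """Angular size (in radians) of an object of half-size `half_size`
    approaching at constant `speed`, measured `time_to_collision`
    seconds before it would reach the eye."""
    distance = speed * time_to_collision
    return 2 * math.atan(half_size / distance)

# The angle creeps up far from collision, then explodes near it:
for t in [1.0, 0.5, 0.1, 0.02]:
    angle = math.degrees(looming_angle(half_size=0.01, speed=1.0, time_to_collision=t))
    print(f"{t:.2f} s from collision: {angle:.1f} degrees")
```

Notice that the ratio of half-size to speed is what sets how fast the angle blows up, which fits the finding that the neurons fired more for objects that seemed to approach faster.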

Figuring out what could stimulate the Foma1 neurons was a cool piece of neuroscientific sleuthing. The next question was, were they behaviorally important?  The team knew that in response to the illusion of a looming object, 92% of flies would raise their wings and look like they’d fly away; about 75% would take off before the looming object was due to hit them.  But when the group silenced Foma1 neuron activity by expressing an inactive ion channel selectively in the Foma cells, only 30% of flies escaped. The only problem with this test was that it couldn’t tell the difference between what the fly knows (i.e., loom detection) and how the fly responds.  Maybe the entire visual lobe was yelling, “Mayday! Mayday!” and the Foma1 neurons were only important in passing the message along.

So to eliminate any confusion they used another cool technology: optogenetic stimulation. Again, this involved selective ion channel expression only in the Foma1 neurons; they expressed channelrhodopsin, which is sensitive to light, and then blinded the fly and shone a bright light on it. This set-up activates only the Foma1 neurons and anything downstream of them, and it allowed them to show that activation of Foma1 neurons is enough to make the fly fly away.

They did one more interesting experiment, which isn’t in the paper: if the fly is already flying (glued to a stick, so that it has the illusion it’s supporting itself but is also held in one place), then instead of flying to escape a looming stimulus, it throws up its legs in a landing pattern. This led them to infer that there is a downstream decision point: if the fly is not flying, a looming object means it should fly away (for instance, a fly swatter or an attentive student of animal behavior is approaching). If it is flying, the looming object means it should land—in other words, watch out for that… TREE!

More reading:

“Loom-Sensitive Neurons Link Computation to Action in the Drosophila Visual System,” Current Biology, March 6, 2012. http://www.cell.com/current-biology/abstract/S0960-9822%2812%2900008-5


“Big ideas have their origin stories.” On how starpower and branding can cause TED and related “big-ideas” franchises to make ideas smaller, not bigger. Also tackles the one-word-title book trend (which annoys me to no end).

Introductory post at the new blog Neurochambers. I like the description of fun and variety in the life of a practicing academic scientist, and I like what he says about science blogging as “neighbourhood watch” duty.

This cloned Kashmiri goat story is all kinds of interesting to me. For starters, cloning done “by hand” (that is, with a dissecting scope and a steady hand, no fancy nucleus-relocating automated pipettors) in India has some resonance with other ingenious, comparatively low-tech solutions pursued there in other fields–like text alerts for prenatal health care. Besides that, I like the conservation angle–if Drs. Shah and Hassan can save endangered Himalayan species with this technique, more power to them! Also, who knew kid goats were so very disarmingly cute?

If you post linkarounds, check out Maria Popova’s attribution guide, The Curator’s Code. I’ve been looking for a proper citation guide to the internet, and I think she’s just built it.  Plus the site graphics are keen.  (HT to Ed Yong)

New full-length post up on Saturday.


I loved Maria Popova’s take on science as whimsy, and the balance scientists strike between rationality and intuition.

For the practicing scientists among us, here’s an unexpected tidbit of recent history: how poster sessions came to be. (hat tip to the design/advice blog Better Posters)

“Just because scientific knowledge is being printed and published online does not necessarily mean that the content is being avidly consumed by the general population” –a good reminder for anybody interested in writing about science for the public.  This article also revealed to me that there are people whose job it is to research science communication and how to improve it.  I wonder, are they hiring?

Also on the topic of science communications, here’s Ed Yong with a forceful piece on accountability and accuracy in science journalism. I look forward to reading whatever account he posts of the British Royal Institution discussion on setting standards in science journalism. Also, watch this space for a review of Bad Science, a book by Ben Goldacre on intentional, unethical journalistic chicanery.

Cybernetics: information theory at the dawn of the information age

Nature published a special issue last week for Alan Turing’s centenary: were he alive, he would have turned 100 this year. Turing is widely regarded as the father of the computer. His contributions to code-breaking were key to cracking the Enigma cipher, and helped win World War II for the Allies. His untimely death at the age of 41 was a sad reminder of the power of prejudice over gratitude.

Turing’s work was fundamental to the field of cybernetics, which was as short-lived and as influential as he was. Although Turing himself never published on the subject, it drew heavily on code analysis of the kind he did during the war. Cybernetics existed because early computing existed, and all at once the passage of messages became at least as interesting as their content.  This is especially germane if you think about computer science in terms of its code-breaking background; Neal Stephenson’s Cryptonomicon tells this story, among others, beautifully.

Cybernetics made sense of information transfer by simplifying down to a system that could transduce input to output. It had a lot in common with thermodynamics, the study of heat transfer, which had solidified over the previous century or so and which also takes an input/system/output view of the world.
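The quintessential example of this input/system/output view is the negative feedback loop, in which the system’s output is compared with a goal and the difference drives the next input; Wiener’s favorite examples (thermostats, steersmen, servo motors) all reduce to loops like this. A toy Python sketch, where the setpoint and gain are made-up numbers for illustration:

```python
def feedback_step(state, setpoint, gain=0.2):
    """One pass through a negative-feedback loop: the error between
    the goal and the current output becomes the correction that is
    fed back into the system."""
    error = setpoint - state       # compare output with the goal
    return state + gain * error    # feed a fraction of the error back in

# A room temperature creeping toward its thermostat setting:
temperature = 15.0
for _ in range(50):
    temperature = feedback_step(temperature, setpoint=22.0)
print(round(temperature, 2))  # settles near 22.0
```

The gain controls how aggressively the loop corrects itself; too large a gain and the same loop overshoots and oscillates, which is exactly the kind of system-level behavior cybernetics set out to describe.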

This ground rule was elaborated as different systems were studied in cybernetic terms. One of my own favorite elaborations (it’s delightfully silly!) was the study of second-order cybernetics, or understanding how information is transferred when we try to understand information transfer. (In the picture above, the lower diagram shows second-order cybernetics; Wiener, Bateson and Mead are three of the founders of the field).

The field was at its strongest during the Macy conferences, a series of academic conferences between 1946 and 1953 dedicated to both developing and spreading the ideas of cybernetics. You could make a cybernetic study of the conferences themselves: we have the schedules, and writings from academics who reflected on the ideas shared there, but no proceedings were ever published, so we’ll never know exactly what was said. (So much has changed in science communication; today conferences are liveblogged up one side and down the other, like the SciOnline unconference series.)

Cybernetics was holistic at a very reductionist time; it played a major role in coming to understand biological systems in terms of their emergent properties. Its findings took two directions: toward the development of computers as ever more complicated information-processing machines, and toward understanding the brain, which remains the most complicated information processor we know of.

Conventional wisdom says that the movement lost momentum after the conferences ended. Its progenitors came from different fields, and despite their ambition to found a unified study of everything, they never built a lasting interdisciplinary discipline.  Still, cybernetics threw a long shadow across academia; it often turns up in astrofuturist poetry of the sixties, and modern scholars of systems and complexity point to it as their progenitor field.

I think of the 1940s and 1950s with a weird nostalgia, as a golden age in American academic science, and cybernetics plays a big part in why I think that way.  After all, American science was in no way socially or ethically ideal in that period; scientists developed the bomb, stole Henrietta Lacks’s cells, carried out the Tuskegee study, all the while excluding from their ranks many people who would have been great scientists.  But strictly in the realm of ideas, it was a time of very great discovery: the structure of DNA, the most basic signaling that controls a cell, the rules behind genetics and virology; not to mention the great strides in physics and chemistry that were made at the same time. Because discoveries made back then are fundamental to science education now, and experiments were both elegant and easy to explain, there’s an illusion that information was more manageable. But then I’m reminded it wasn’t; scholars back then founded a whole new field strictly for understanding information flow.