Three/five/seven, or: how I spent my summer vacation.


Recently I stayed at the farmhouse of a tight-knit family in eastern Montana, invited out to help and get acquainted at the wheat harvest. One afternoon, when the wheat was too green to cut after two nights of rain, the uncle who ran the farm sat me down at its big family dinner-table. Clearing away the newspapers, color-coded maps and scrawled lists of yield per field, he proposed a game. Here are three blue, five white and seven red poker chips. Oops—we’ll use a spice-jar lid for the seventh red one. You can take as many as you like of any one color; then I’ll do the same. We alternate until the chips are gone, and your object is to force me to take the last chip. We call it three-five-seven.

We started to play. I chose colors, quickly lost, and lost again a bit more slowly. “Is the trick that you have to go first?” I asked, but that wasn’t it. He let me start the next three times, to no avail. The rules were simple, the logic elusive. A couple of times, I saw the beginning of a sly smile on his face as I made what I had thought was a clever move. Without fail, a turn later the path to victory I had counted on evaporated, leaving me with no way to win even if three or four tokens were still on the table. As I learned the game, those checkmate moments became clear earlier. I began to see that there were patterns that were safe, and patterns that would always get me. I could win—if only I could pitch one of the second set of patterns at my opponent before he saw what I was up to.

My chances on that front were slim. I wasn’t the first person that favorite farmer uncle had snookered into playing and losing. Mischievously, he unrolled a long spool of stories that distracted me as I peered at the table, looking for a for-certain safe move. He had learned this game in math class as a child. On the long school bus ride home from town, he and his brother had worked out the best way to win with pencil and paper. He’d held onto the game for entertainment in bars, a non-drinker playing for quarters against his tipsy coworkers after a day in California’s pistachio orchards. He brought it back to the farm as a grown man, perching his five-year-old nephew on his knee and coaching him invisibly to win against a visitor—a story the nephew in question grinned to be reminded of. After a good half hour of playing, I was able to scrape a win while they revisited fond family memories.

Later the farmer’s two nephews, brothers who nowadays mostly see each other there on the farm, sat across from each other to play the same game for the best of three. The boys—men now—spent summers here growing up, and now both work in software. Each played very deliberately, and they folded up games a turn or two before I could tell how the endgame would look. “Wait, who won that one?” I would ask. It was like watching two clever friends play chess, only easier on the attention span. “There are just a few positions that are safe, if your opponent doesn’t make a mistake,” explained the farmer’s second nephew, the one I had just met. He’d memorized more of those positions than his brother, so he had an edge.


Pocketful of change? Or thorny strategic problem?

Back at home in the city, I suggested a round of three-five-seven over beers at a game night with the farmer’s nephew. Our friend, a student of law, had gotten bored with losing at reflex games. The farmer’s nephew dug in his change jar for three pennies, five dimes, seven quarters. He dug out his store of patterns, too, and beat our law student friend handily. (The farmer’s nephew has his own private laundry room, so unlike at my apartment, quarters were plentiful.)

The friend turned out to be a game theory aficionado, who was sure he recognized the game from somewhere—with phone in hand, he placed it as the ancient game of Nim. Sure enough, as the farmer’s other nephew had told me, there’s a table of positions you can use to win, available right there on the internet. There’s an algorithm, too, one that was published in 1901 and that Wikipedia informs me is a foundational piece of game theory. I, for one, wouldn’t want to run those computations on the fly. They involve converting the number of pieces in each heap into binary, then adding them without carrying the one (the farmer’s nephew calls this XOR logic) to calculate a nim-sum that informs your play.
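If you’d rather see that spelled out than juggle it in your head, here’s a minimal sketch of the idea as I understand it (mine, not the farmer’s nephew’s, and the names are made up), for a misère game like three-five-seven, where whoever takes the last chip loses:

```python
# A rough sketch of the strategy described above -- my own code, not anyone's
# memorized patterns. Misère Nim: the player who takes the last chip loses.
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """XOR the heap sizes together: binary addition without carrying the one."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) for a winning move, or None if the
    position is lost against perfect play."""
    ones = sum(1 for h in heaps if h == 1)
    big = [i for i, h in enumerate(heaps) if h > 1]
    if not big:
        # Only single-chip heaps left: every move is forced, and you win
        # exactly when there's an even number of them.
        return (heaps.index(1), 0) if ones and ones % 2 == 0 else None
    if len(big) == 1:
        # Endgame: shrink the one big heap so that an odd number of single
        # chips remain -- the opponent gets stuck taking the last one.
        return (big[0], 0 if ones % 2 == 1 else 1)
    # Otherwise play exactly as in ordinary Nim: move so the nim-sum is zero.
    s = nim_sum(heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        if (h ^ s) < h:
            return (i, h ^ s)

print(winning_move([3, 5, 7]))  # (0, 2): take one blue chip, leaving 2-5-7
```

The starting nim-sum of 3-5-7 isn’t zero, so if I’ve transcribed the strategy right, going first is an advantage only if you know what to do with it.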

Unfortunately, having all this wealth of human knowledge at hand didn’t help our friend as much as he’d hoped. With the slight muddlement of someone who’s sure he would get this with just one less beer in him, he lost five times in a row to the farmer’s nephew. Walking back through the moves in a round he had just lost, he explained his thinking, trying to piece together where his algorithm had failed him. “So the nim-sum here is two,” he said, gesturing at five quarters, four dimes and three pennies. “That should be 0100 in binary, right?” The farmer’s nephew guffawed. He claimed that he was just winning empirically, citing only three patterns he’d memorized.
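Purely as an aside of my own (nobody at the table actually ran this), the arithmetic checks out halfway: the nim-sum of five, four and three really is two, it’s just 0010 in binary rather than 0100.

```python
# My own back-of-the-envelope check, not anyone's table talk:
# 5 = 101, 4 = 100, 3 = 011; XOR-ing them (adding without carries) gives 010.
print(5 ^ 4 ^ 3)                 # 2
print(format(5 ^ 4 ^ 3, "04b"))  # 0010 -- not 0100
```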

Meanwhile, with the whole universe on my phone and a pretzel stick in the other hand, I learned that some of the first computer games, the early and little-remembered Nimatron and the later, aptly named Nimrod, were developed to play this very game. It delights a nerdy corner of my heart—the same corner that devoured Cryptonomicon and was tickled to find out about cybernetics—to think that I sat absorbed in a game that someone had worked out a perfect logic-playing computer for back in the 1940s.

Our friend left that evening with a promise that, next time he was back, he’d beat the farmer’s nephew. I have yet to see it happen, but I don’t doubt his stubbornness, or his ability (when sober) to memorize a whole lot of patterns or a tricky algorithm. But I like the game better when you don’t quite know the rules yet. I like the slow build, the pattern-recognition engine of human cognition being brought to bear on a problem it hasn’t encountered before. Sure, there’s an app for that—you can practice on your commute! But the mystified grin on the face of someone playing the first time, for kicks, is worth the hassle of scrounging up fifteen game pieces in the right colors. I commend it to you. Make sure you have salty snacks close at hand.

What I’ve Been Reading

Fruit bat enjoys smoothie; Laurel enjoys fruit bat. From The Featured Creature

Blogger Neuroskeptic weighs in on the greatest sins a scientist can commit… and the proper punishments. Anybody else reminded of the fabulous punishment/crime song in The Mikado?

Neuroskeptic also gets a nod in this New York Times article about the dangers of misrepresenting neuroscience in the popular press.

Here’s an interesting take on game-ifying science: how much is it worth simplifying to get people interested (and how interested will they be if your “interactive” game gives them no room for input and teaches them little)?

Seriously cute animal alert.

Dr. James Watson (DNA Nobelist and maker of dubious remarks about women in science) discusses a new edition of his memoir on discovering the structure of DNA. (H/T to Ed Yong)

Also, I fortuitously ran late the other day (how often do you get to use a phrase like that?!) and happened to hear my local radio’s broadcast of “Skywatch.” It’s put together as a collaboration between the Space Telescope Science Institute, those guys who control the Hubble, and the Maryland Science Center. As a science-interested, stargazing-inclined person who never got much beyond eighth-grade astronomy (it’s amazing how much you can learn and still remain ignorant in a lot of ways), I’d recommend Skywatch to everyone. Here’s their website.

A Playful Tone

Like a lot of my fellow early-stage grad students, I just submitted an application for a grant from the National Science Foundation. It’s a curious grant, more like a college application than your typical description of a research project in search of funding. Applicants aren’t bound to actually perform the research that they propose, and where ordinarily you might be prompted to include a list of publications, presentations, and accolades, this application instead includes a personal statement.

Here’s what a lab mate wrote on the personal statement I asked him to review: “There is a playful tone to your writing.” Coming as it did on the heels of a critique of the preceding paragraph, and a rejoinder to pay at least a little attention to the prompt, I don’t think he meant it in a good way.

It’s true. I admit it. There is a playful tone to my writing. I wrote the essay in the spirit of a college application, in hopes of being liked. I aimed for a voice, a confident and fun-loving one, instead of just a list of achievements and accomplishments. I told a story about my mentor in college, and another story about the Science Question Box in a classroom I volunteered in, and I was only narrowly dissuaded from telling a third story about a girl I met at a dance performance. The question is, was the playfulness out of place?

I hope not. I hope there’s room for a light-hearted, fun-loving approach to science in the highly competitive world of research, because that’s how I seem to be wired, and that’s where I would like to work. I guess, this time, the funding committee will decide.

What I’ve been reading

I love this introspective post by Robert Krulwich on what it means to collect a ton of data about yourself, and whether that’s the same as being observant.  Myself, I tend to side with the poets and with Bill Bryson. (H/T Ed Yong)

“Numbers are really the only reason you’re writing your paper, and you don’t want your readers to think that you’re into something as lame as words” –Adam “you don’t write like a scientist” Ruben, whose Experimental Error blog at Science I can stand to read only one post at a time. He’s also the author of Surviving Your Stupid, Stupid Decision to Go to Grad School, which I have avoided on principle; I’m saving it for a crisis. Possibly the one that happens when my love affair with words is uncovered.

I’m enjoying the Wellcome Trust’s series on scientific writing that science writers admire.  It’s good to hear from experts on how to do a thing well.  If you’re in the UK or Ireland and have any interest in science writing, consider submitting a piece to their contest!  I’m told there is a prize.

And on the lighter side, Scicurious’s jump-rope rhymes: possibly worth getting a Twitter account for.

What looms ahead: how neuroscientists study individual neurons

I spent a semester in college as a teaching assistant in an animal behavior lab. One of the most frustrating failed experiments was the mating assay: we put two flies, a male and a female, in an Eppendorf tube with an air hole, then waited to see if strain 1 and strain 2 were as likely to cross-mate as to mate with members of their own strain. These particular flies had a loooooong latency to mate, and as you can imagine, the roomful of undergrads made a whole lot of jokes about mood music. When they finally began courtship behavior, the observers would often lean closer to track those tiny movements better. Bad idea. It turns out that for flies, a looming object is the biggest mood-killer there is.

We would have known that if we’d talked to Tom Clandinin at Stanford.  His lab studies, among other parts of fly vision, the neurons that let the fly respond to looming objects.  I got the opportunity to hear Dr Clandinin speak about his work recently, and I think the study, in Current Biology earlier this month, is so cool that I’m going to try summarizing it in the style of Scicurious and Ed Yong.

Dr Clandinin started out interested in how vision and behavior are connected in the fly brain. This kind of problem is much easier to study if you can identify smaller components, so in an earlier paper, the lab identified a group of neurons with something in common that set them apart from other neurons. Using the knowledge that similar cells have similar patterns of gene expression, and a genetic tool called enhancer trapping, they inserted a reporter gene at random into the genome in a large group of flies, then looked for changes in behavior. This let them identify flies where a change in only a few cells caused a major change in behavioral responses to visual stimuli.

In particular, they found five neurons (shown in green in the image) in a connective region between the visual lobe and the rest of the brain, which they dubbed Foma1 cells (it stands for “failure of motion assay 1” and means that the flies responded weirdly to visual stimuli). The location of the cells is interesting, because the connective region is a chokepoint: the two areas it joins each handle lots of information, but relatively little can pass between them. Think of it as an undersea cable between phone networks on different continents; some kind of integration or discarding of information must occur there, so that only important stuff gets through.

But how could they figure out what kind of information was passing through these neurons?  What made the mutated flies behave differently than their wild-type brethren?  They used an ingenious set-up: put a miniaturized TV screen in front of a fly, project simple, abstract moving images, and then use electrophysiology to listen in on individual neurons. They didn’t see any action in these five neurons when they presented a series of two-dimensional stimuli: rolls, yaws, up-and-down movement, or anything else that the fly could conceivably see while staying in one place and rotating in space (here’s an idea of the kind of image they used).  Instead, the neurons fired like crazy in response to a looming stimulus: a square that got bigger and bigger on the screen. The lab used slight changes in the projected image to characterize the neurons further, finding that they fired more often when the looming object seemed to be approaching faster; they responded the same way to objects coming from any part of the visual field and in any color; and they didn’t respond to a general change in screen brightness without the looming illusion. This combination of features made the Foma1 neurons loom detectors; that is, what they recognized was the looming, and they seemed to respond to anything that loomed.

Figuring out what could stimulate the Foma1 neurons was a cool piece of neuroscientific sleuthing. The next question was, were they behaviorally important? The team knew that in response to the illusion of a looming object, 92% of flies would raise their wings and look like they’d fly away; about 75% would take off before the looming object was due to hit them. But when the group silenced Foma1 neuron activity by expressing an inactive ion channel selectively in the Foma1 cells, only 30% of flies escaped. The only problem with this test was that it couldn’t tell the difference between what the fly knows (i.e., loom detection) and how the fly responds. Maybe the entire visual lobe was yelling, “Mayday! Mayday!” and the Foma1 neurons were only important in passing the message along.

So, to eliminate any confusion, they used another cool technology: optogenetic stimulation. Again, this involved selective ion channel expression only in the Foma1 neurons; they expressed channelrhodopsin, which is sensitive to light, then blinded the fly and shone a bright light on it. This set-up activates only the Foma1 neurons and anything downstream of them, and it allowed them to show that activating the Foma1 neurons is enough to make the fly fly away.

They did one more interesting experiment, which isn’t in the paper: if the fly is already flying (glued to a stick, so that it has the illusion it’s supporting itself but also is held in one place), then instead of flying to escape a looming stimulus, it throws up its legs in a landing pattern. This led them to infer that there is a downstream decision point: if the fly is not flying, a looming object means it should fly away (for instance, a fly swatter or an attentive student of animal behavior is approaching). If it is flying, the looming object means it should land—in other words, watch out for that… TREE!

More reading:

“Loom-sensitive neurons link computation to action in the Drosophila visual system,” Current Biology, March 6, 2012. http://www.cell.com/current-biology/abstract/S0960-9822%2812%2900008-5

Cybernetics: information theory at the dawn of the information age

Nature published a special issue last week for Alan Turing’s centenary; were he alive, he would be 100. Turing is widely regarded as the father of the computer. His contributions to code-breaking were key to cracking the Enigma code, and helped win World War II for the Allies. His untimely death at the age of 41 was a sad reminder of the power of prejudice over gratitude.

Turing’s work was fundamental to the field of cybernetics, which was as short-lived and as influential as he was. Although Turing himself never published on the subject, it drew heavily on code analysis of the kind he did during the war. Cybernetics existed because early computing existed, and all at once the passage of messages became at least as interesting as their content. This is especially germane if you think about computer science in terms of its code-breaking background; Neal Stephenson’s Cryptonomicon tells this story, among others, beautifully.

Cybernetics made sense of information transfer by simplifying down to a system that could transduce input to output. It had a lot in common with thermodynamics, the study of heat transfer, which had solidified over the previous century or so and which also takes an input/system/output view of the world.

This ground rule was elaborated as different systems were studied in cybernetic terms. One of my own favorite elaborations (it’s delightfully silly!) was the study of second-order cybernetics, or understanding how information is transferred when we try to understand information transfer. (In the picture above, the lower diagram shows second-order cybernetics; Wiener, Bateson and Mead are three of the founders of the field).

The field was at its strongest during the Macy conferences, a series of academic conferences between 1946 and 1953 dedicated to both developing and spreading the ideas of cybernetics. You could make a cybernetic study of the conferences themselves: we have the schedules, and writings from academics who reflected on the ideas shared there, but no proceedings were ever published, so we’ll never know exactly what was said. (So much has changed in science communication; today conferences are liveblogged up one side and down the other, like the SciOnline unconference series.)

Cybernetics was holistic at a very reductionist time; it played a major role in coming to understand biological systems in terms of their emergent properties. Its findings took two directions: toward the development of computers as more-and-more complicated information processing machines, and toward understanding the brain, which remains the most complicated information processor we know of.

Conventional wisdom says that the movement lost momentum after the conferences ended. Its founders were from different fields, and they never made a real interdisciplinary mark despite their ambition to found a unified study of everything. However, it threw a long shadow across academia; cybernetics often turns up in astrofuturist poetry of the sixties, and modern scholars of systems and complexity point to it as their progenitor field.

I think of the 1940s and 1950s with a weird nostalgia, as a golden age in American academic science, and cybernetics plays a big part in why I think that way. Of course, American science was in no way socially or ethically ideal in that period; scientists developed the bomb, stole Henrietta Lacks’s cells, carried out the Tuskegee study, all the while excluding from their ranks many people who would have been great scientists. But strictly in the realm of ideas, it was a time of very great discovery: the structure of DNA, the most basic signaling that controls a cell, the rules behind genetics and virology; not to mention the great strides in physics and chemistry that were made at the same time. Because discoveries made back then are fundamental to science education now, and experiments were both elegant and easy to explain, there’s an illusion that information was more manageable. But then I’m reminded it wasn’t; scholars back then founded a whole new field strictly for understanding information flow.

Linkaround

“Dr. Botstein took his visitor into the lab and announced, ‘I have here a novelist.’ ” — a New York Times article on how Jeffrey Eugenides created a surprisingly accurate portrait of the working life of a yeast geneticist in the 1980s, after just one afternoon in a lab (and, I imagine, a lot of reading).  For the record, this makes me more interested in The Marriage Plot than any other buzz I’ve read yet.  Watch this space.


RIP Dr. Renato Dulbecco: Nobelist, Italian resistance fighter, virologist, and (as a friend put it) “the D in your DMEM.”


“Worst of all, my brain felt boring.” — Scicurious on crafting a work/life balance in grad school.