All Science is Anthropological at the Margins

28 February 2013

It is no secret that hard science folks (think Physics, Engineering, and Math majors) sometimes look down on the “softer” sciences for being less rigorous. There is a joke that a professor from my college once told, which I’m copping directly from a former classmate’s blog:

The library at the Princeton Institute for Advanced Study was divided into two wings. One wing was for the sciences, which included all the updated journals in math, physics, etc. The other wing was for the humanities, which had the analog journals in history, literature, etc. Given that this was an advanced institute, these scholars spent a great many hours in the library, in their respective wings. One day, Kurt Gödel had enough, gathered a stack of journals, walked to the central librarian, and stated that these journals had been shelved incorrectly. The librarian looked puzzled because all the call numbers on the anthropology journals were correct. In response, Gödel supposedly shouted, “These surely cannot belong in the science wing!”

As much as anthropology can’t get no respect, I’m starting to notice a funny trend in the classes I’m taking in grad and med school as compared to college. Introductory science and mathematics classes in college are very didactic: you learn principles and their applications. Although professors talk a little about a topic’s backstory when introducing it—e.g., how Rutherford’s experiment with gold foil led to the development of the Rutherford model of the atom, which became the Bohr model—the who and the why are secondary to grokking the setup, the principle demonstrated, and its logical consequences.

And that is in fact a wonderful thing about math and science, and a large part of why I liked it throughout grade school: rote memorization was de-emphasized to the point of remembering only a small set of principles, and the skill lay in combining and applying them. By contrast, history courses seemed to be full of details that had to be slowly committed to memory with little to tie them together except that things happened to turn out that way. Whereas knowing F = dp/dt could take you very far in solving a multitude of physics problems, knowing Moctezuma took power in 1440 would not help you guess how long he stayed in power, how he died, or much else about his life or Aztec history.

The tide is shifting, however, in my grad school courses on the biomedical sciences. Whereas the principles taught in college are usually well established, once you reach the fringes of what is known (i.e., results from just a few years ago), there are enough potholes that the multiplicative benefits of combining prior knowledge break down. For example, here’s a slide from a developmental biology lecture I had this week:

[Slide: the non-canonical Wnt pathway]

There are almost as many question marks here as named proteins! It turns out that this is the “non-canonical” Wnt pathway, which the lecturer happens to research in his own lab. That name already tells you a few things: that there is a “canonical” pathway (the one enshrined in Alberts’ MBOC, of course), that this one is less established, and that people don’t completely understand the relationship between the two yet. The fact that one pathway is “canonical” is a complete accident, since that one would have likely been considered differently had it been discovered later, or at the same time as the “non-canonical” pathway.

This is starting to sound as arbitrary as history, right? It is, and the reality is that in order to do research at the fringes of science, particularly in super sparse fields like the life sciences where there are far fewer researchers than things to study (there are possibly billions of molecular interactions that take place in the human body), you need an anthropological backstory of what you are studying. That includes: 1) who worked on it and what else they worked on, 2) why they started working on it, and 3) what their assumptions were. Not only is this helpful for understanding how things in the subfield were named—gene names have a tendency to get really fanciful1—but it is crucial for properly evaluating the claims made and the way the evidence is presented.

Speaking of naming things: odd names pop up just as frequently in Anatomy and lately, Histology. For better or worse, many of the cells in the body were named by histologists who were working in the late 19th and early 20th century and could often see a lot more than they could understand (certainly at a molecular level). Also, what they saw depended on the stains they chose. The result is that in the 21st century, we still use names like “enterochromaffin cells”, “eosinophils”, and “basophils,” which have nothing to do with the function of the cells, but reflect the stains that revealed them on microscope slides.

So lately, science classes seem to be more and more dependent on the history and people behind them. A unit on a particular cellular transport mechanism, summarized as “this person argued for this controversial idea, the more established groups vehemently responded, and a great solution was overlooked for years because the author died before he was recognized” could equally describe a history of cultural movements in 19th century France. Is there something to be learned from this? Well, a new scientific hypothesis has much in common with a cultural movement: when only a few labs are willing to come up with supportive data, how the greater scientific community will respond depends quite a bit on the kind of people behind it, how well the idea is marketed, and the responses of others with vested interests. (A 1% rule borrowed from web design is probably appropriate here: for every 100 people that hear about your great idea, 10 will actually understand it enough to care, and maybe 1 will do something about it.)

The beauty of science is that data trumps all, as my dad says. The tough news is that getting enough other people to hear your idea and generate data about it, to the point where it becomes a principle that stands on its own, is an anthropological problem. That’s why it takes two decades to get from an initial (amazing) result to a Nobel Prize, assuming of course that it is even awarded to the right person. Until then, to make sense of brand-new science, you have to learn just as much about the people behind the ideas, because that is often the only way to comprehend and contextualize what they are trying to say.

  1. A sample of actual gene names made up by goofy D. melanogaster researchers: Bride-of-Sevenless, Kevin and Barbie, Swiss Cheese, cheapdate, Mothers-Against-Dpp, Sugar Daddy, and John Wayne Bobbit. Must be all those ether fumes… 

Pebble, a watch made for iPhone/Android

26 February 2013

I ran across this interesting gadget while stumbling upon a favorable review of it on another blog.

It’s a watch with an e-Ink display that is meant to be tethered via Bluetooth to an iPhone or Android phone. Besides displaying the time, which it can obviously do in just about any format (analog/digital/words), it will discreetly display texts, caller ID data, and other alerts from your phone. When the phone is playing music, it can control playback.

Watches seem to currently occupy a tenuous and nostalgic place in our digital lives—unless you are wearing them for show, sport, or jewelry, a brick phone from 2001, functionally speaking, can outdo a Swiss chronometer in every regard except size. Sadly, that includes displaying an accurate time for the local timezone: GSM time has always been more consistent than whatever my radio-synchronized watch showed, and it required zero fiddling. Having been wristwatch-less for a while now, I’ve only missed it in two scenarios: 1) when, in the middle of a conversation, I needed to check the time and pulling out a phone seemed like a faux pas; and 2) trying to time something while my hands are occupied, e.g. taking a pulse or respiratory rate (who took all the clocks out of hospital exam rooms?).

So, the smart move by watchmakers is to have them piggyback off of a phone’s capabilities. However, the established brands have mostly ignored this potential, with only two lackluster products that I can find: Sony’s is only Android compatible, and G-Shock is still using the same LCD screens as ten years ago. Even car manufacturers like GM are ahead of the game on this one, dropping CD players from cars marketed to younger drivers and prioritizing smartphone integration. Because seriously, besides an FM radio, what can a car audio system offer that a phone with an iTunes library, Pandora, and Spotify couldn’t outdo?

Thankfully, it appears that Kickstarter projects have stepped in to fill the void. The Pebble makes a case for being permanently clasped to your wrist by offering a second interface to your phone during those times when yanking out the phone seems rude, inconvenient, or both. And when it’s not doing that, it is probably pretty good at telling the time, since it pulls that from your phone too. Unlike most geeky watches, it looks more nondescript than ridiculous, and the e-Ink display can display information all day long without sucking down battery life.

Particularly cool for the programming crowd is that they will release an SDK. Surely, hackers will find interesting uses for a 10k-pixel wrist display with a 4G data connection.

I can’t justify buying it yet, but I’ll be curious enough to follow this product type and see if it catches on. Other comparable items in this space from no-name brands have been cheap and trashy so far; by creating something that looks like a watch, designing it around seamless integration with popular smartphones, and promoting a dev environment for the geeks that might want to make apps, Pebble and its crowd-funded design team could win big.

Creating Animated GIFs with ImageMagick & ffmpeg

07 February 2013

I’ll start off this blog with something lightweight and relatively useless except for creating amusing internet gimmicks and perhaps the occasional animated diagram. Surprisingly, there are few good tools for making animated GIFs on a Mac or Linux desktop. On Windows, Paint Shop/Animation Shop used to be the go-to suite for doing this—now, the closest thing for Mac folks might be GIFBrewery, which has far fewer features and some mixed reviews. I haven’t plunked down $6 for that yet, so I’ve still been trekking about on the command line, which is certainly more tedious, but you can control every step of the process and get exactly the results you want.


You won’t be able to jump into this without a little comfort on the command line and working installations of ffmpeg (if you are working from a video source) and ImageMagick.

If you’re on a Mac and you already use MacPorts, it should be a simple sudo port install ImageMagick ffmpeg and a lot of thumb twiddling. If you don’t, try installing Homebrew, which is a little more lightweight, and run brew install imagemagick ffmpeg.

For Linuxes, I’ll leave you to figure out how to get those two packages from your package manager. Depending on how idealistic your distro is, it might require adding some non-free repositories or even compiling ffmpeg from scratch. You only really need ffmpeg if you have a video source file, though; everything else is done with ImageMagick.
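Before going further, a quick sanity check that both tools ended up on your PATH never hurts. This loop prints nothing when everything is installed:

```shell
# Print a warning for each missing tool; silence means you're good to go.
for tool in ffmpeg convert; do
  command -v "$tool" >/dev/null 2>&1 || echo "not found: $tool"
done
```

(`convert` is the ImageMagick command we'll be using for everything after frame extraction.)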

The full workflow

First, you want to start with your source video or images. If it’s video, you can crop/trim it down to the relevant clip with QuickTime X or any more advanced video-editing suite. Then, it’s time to extract the images with ffmpeg. Open a terminal, change to the directory with your movie file, and, assuming source.mov is the filename (substitute your own):

$ mkdir decode
$ ffmpeg -i source.mov decode/%d.png

Now you’ll have a sequence of images in the decode folder that correspond to each frame of your movie. At this point, it’s a good idea to open this folder and preview the sequence of images. They are likely to be far too big to be appropriate for a GIF, so we have to resize and crop them. We may also want to refine the range of frames that will be used in the GIF.

Bash can expand a sequence of numbers using the {start..end} notation, which expands in place much like a shell glob. We can use this to select the frames we want, resize and crop them with ImageMagick, and put the results into a new folder. In this example, we’ll take the first 12 frames, crop them to 405x720 starting 437 pixels from the left side, and resize them to 30% of their original size.
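If the brace notation is unfamiliar, you can watch the expansion happen with a plain echo, since the shell rewrites the braces into an explicit argument list before the command ever runs:

```shell
# The shell expands the braces into separate filenames,
# so a command like convert receives each frame as its own argument.
echo decode/{1..3}.png
# prints: decode/1.png decode/2.png decode/3.png
```

Unlike a true glob, brace expansion happens even if the files don’t exist yet.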

$ mkdir resized
$ convert decode/{1..12}.png -crop 405x720+437+0 -resize 30% resized/%d.png

Time to check the output images again; if they aren’t how you like, you might try fiddling with the parameters in the previous command until it looks better.

A typical thing you might need to do at this point is label some of the frames. For that, we can use the -annotate feature of ImageMagick.

Quick aside: It might so happen that you don’t have any civilised fonts (e.g. Impact) available for use by ImageMagick, which requires them to be in TTF format and specified in an XML configuration file. To fix that, you can follow this guide; the short version is: convert the font into TTF format, put it somewhere sane like /usr/local/share/fonts, and then write up a ~/.magick/type.xml that points to it:

<?xml version="1.0"?>
<typemap>
  <!-- adjust "glyphs" to wherever you saved the TTF file -->
  <type format="ttf" name="Impact" fullname="Impact" family="Impact"
        glyphs="/usr/local/share/fonts/Impact.ttf"/>
</typemap>
Back to text annotation. Note that we set the text and Y coordinate (from the bottom of the image) at the start of this command to avoid repetition, since the text must be painted twice for best results (once for the outline, and once for the fill). To print it near the top of the image instead, change -gravity south to -gravity north. We can copy over any images that we don’t want labeled.

$ mkdir annotated
$ TEXT="CAPTION" && YDIST="10" && convert resized/{5..8}.png -pointsize 24 \
  -font Impact -strokewidth 2 -stroke black -fill white -gravity south -annotate \
  "+0+$YDIST" "$TEXT" -stroke none -annotate "+0+$YDIST" "$TEXT" \
  -scene 5 annotated/%d.png
$ cp resized/{0..4}.png resized/{9..11}.png annotated

One last thing we may want to do is play the frames in reverse at the end of the GIF, so it appears to play seamlessly when looped. For that, a simple cp inside a for loop will do, but note that we omit copying the first and last frames to avoid playing them twice. Set LAST to the number of the last image in your animation.

$ LAST=11 && for i in $(seq $((LAST - 1))); do cp annotated/$i.png \
  annotated/$((LAST * 2 - i)).png; done
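If the index arithmetic looks opaque, tracing it with echo makes the mirroring obvious: frame i gets duplicated into slot LAST*2-i, so for LAST=11, frames 1 through 10 become frames 21 down to 12.

```shell
# Trace where each frame is copied: frame i lands in slot LAST*2-i.
LAST=11
for i in $(seq $((LAST - 1))); do
  echo "$i -> $((LAST * 2 - i))"
done
# first line: 1 -> 21, last line: 10 -> 12
```

Frames 0 and 11 are skipped precisely so they aren’t shown twice when the loop wraps around.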

Finally, it’s time to assemble this bad boy into a GIFfy little bundle of web-friendly glory.

$ convert -delay 6 -loop 0 annotated/{0..21}.png out.gif

out.gif will contain your masterpiece. Preview it in a few web browsers to get a sense for its flavor. N.B. that argument to -delay is in hundredths of a second, and using any value lower than 6 is liable to produce strange results in Internet Explorer. This has to do with legacy-compatible interpretations of what the maximum reasonable frame rate for a web browser animation is, explained in great detail elsewhere.
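If the resulting file is heavier than you’d like, ImageMagick can also optimize the finished GIF: -layers Optimize re-encodes each frame to store only the pixels that changed from the previous one, which can shrink the file considerably (how much depends on your source material).

```shell
# Re-encode the GIF so each frame stores only what changed since the last.
convert out.gif -layers Optimize out-optimized.gif
```

Compare the two files side by side in a browser before shipping; heavy optimization occasionally introduces visible artifacts with certain palettes.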