Jurassic Park Jr.

Scientists often have to measure things they have no direct access to.  Solar physicists, for instance, try to image the inside of the sun by watching acoustic waves ripple across the solar surface.

For those of us waiting for the day when we must build a Jurassic Park, it is important to know a little dinosaur psychology.  Luckily, some people are already on it, attempting to perform brain scans on the fossilized remains of these unlucky beasts.  Their basic technique, I think, is to examine the shape of the braincase where these anatomical features would sit in modern creatures.  They use “micro-tomography,” which I suppose means they can see very, very small indentations, which seems a little suspicious, but hey, it’s science, they’d never lead us astray.

What do we now know?  Well, lambeosaurs didn’t have a great sense of smell, but they might have socialized a fair bit.  T. rex had a small brain but relatively large olfactory bulbs, so its sense of smell was better than average.  It also had a large inner-ear complex, so it could either hear well or had really good balance; it’s not that important which, is it?  Also, Archaeopteryx, that dinosaur that could fly?  It could also hear and vocalize well, like birds.  And now you know.

The future is now

The future is now, and it is the cyborg beetle! Yes, I shit you not, this is as awesome as it sounds. I had a friend who tried to make a lobster prosthetic once, but apparently it gets all wet and the shell makes it more difficult than it should be. I can’t wait to make myself a cyborg ant one day. The paper is here, detailing the cyborg in all its machine glory. A −1.5 V pulse delivered to ‘the brain’ starts (or stops) the flying motion, and the left and right wings are steered with current injected into the flight muscles. They can do some other nifty things by stimulating ‘the brain’ as well. They also try to control the beetle’s flight by showing it a virtual reality screen, but apparently that is less effective. Stupid intelligent beetles.
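To make the control scheme concrete, here is a toy sketch in Python of how the commands might be sequenced. The class, channel names, voltages, and pulse widths below are my own guesses for illustration; they are not the actual interface from the paper.

```python
# Toy sketch of the control scheme as described above -- all names, voltages,
# and pulse widths are illustrative guesses, not the real hardware interface.

class MockStimulator:
    """Stand-in for the implanted radio-controlled stimulator."""
    def pulse(self, channel, volts, duration_ms):
        print(f"pulse {volts:+.1f} V for {duration_ms} ms on {channel}")

class CyborgBeetle:
    def __init__(self, stim):
        self.stim = stim

    def toggle_flight(self):
        # The post describes a single negative pulse to 'the brain'
        # that starts (or stops) wing flapping.
        self.stim.pulse("brain", volts=-1.5, duration_ms=10)

    def turn(self, side, duration_ms=500):
        # Steering: stimulate the flight muscle on one side for a while.
        assert side in ("left", "right")
        self.stim.pulse(f"{side}_flight_muscle", volts=1.0, duration_ms=duration_ms)

beetle = CyborgBeetle(MockStimulator())
beetle.toggle_flight()   # take off
beetle.turn("left")
beetle.toggle_flight()   # land (ish)
```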

Recent papers – LTD and R-F neurons

I am going to keep tabs on the interesting papers I read. I’m actually stealing this idea from Al, who uses Google Documents to take notes on papers so they are always at hand!

One of the papers is Izhikevich’s 2001 paper on resonate-and-fire neurons. The most common neural simplification is the integrate-and-fire model. This is fine, except some dynamics are lost. Most importantly, since integrate-and-fire models are one-dimensional, their rest state can only lose stability through a saddle-node bifurcation; an Andronov–Hopf bifurcation, which gives subthreshold oscillations, requires at least two dimensions. This ensures that certain phenomena, such as spiking in response to inhibitory input or a preference for resonant input frequencies, cannot be part of the model.
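To make the difference concrete, here is a minimal sketch (my own toy code, not Izhikevich’s) of the resonate-and-fire model’s subthreshold dynamics, dz/dt = (b + iω)z plus input kicks, with a spike when Im(z) crosses a threshold. Two pulses spaced one intrinsic period apart push it over threshold, while the same pulses spaced half a period apart do nothing – exactly the frequency preference a one-dimensional integrate-and-fire neuron cannot show. All parameter values are illustrative.

```python
import numpy as np

# Toy resonate-and-fire simulation (illustrative parameters, not from the paper).
# Subthreshold state is a complex number z evolving as dz/dt = (b + i*w) * z.
dt, T = 0.1, 150.0           # time step and total duration, in ms
b = -0.01                    # slow decay of the subthreshold oscillation
w = 2 * np.pi / 20.0         # intrinsic frequency: one cycle every 20 ms
thresh = 1.0                 # spike when Im(z) exceeds this

def simulate(pulse_times, kick=0.6):
    z, spikes, t = 0j, [], 0.0
    while t < T:
        z += dt * (b + 1j * w) * z                 # Euler step of the linear dynamics
        if any(abs(t - p) < dt / 2 for p in pulse_times):
            z += 1j * kick                         # brief excitatory input
        if z.imag > thresh:
            spikes.append(round(t, 1))
            z = 0j                                 # reset after a spike
        t += dt
    return spikes

print("pulses one period apart   ->", simulate([50.0, 70.0]))  # resonant: spikes
print("pulses half a period apart ->", simulate([50.0, 60.0]))  # non-resonant: silent
```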

The other paper is by Jo and Cho et al. (2008), which we read for the systems and synaptic journal club. I didn’t actually go to the journal club, but I did read the paper! Basically, the authors relate mechanisms for two separate forms of LTD. One acts through NMDA receptors and the other through mGluRs. These forms of LTD are separate, so one can be induced on top of the other to increase the depression.

Click through the link if you care more about these papers (ie, if you are me).


Building a computer like a brain

Here is a video that presents some ideas from ‘neuromorphic engineering’. Neuromorphic engineering attempts to mimic neural architectures on silicon chips. The idea, as the video communicates, is that the brain is really, really good at what it does. An equivalent computer uses vastly more power and is unable to do most of what the human brain can do, so maybe we should try to get some inspiration from the brain. A lot of people are working on this problem, including Gert Cauwenberghs here at UCSD.

Clearly, a lot of the difference has to do with the brain’s ‘software’: it is simply better than anything we have created. But it also has to do with the architecture. The reason the brain is so good at what it does is that it has been shaped by evolution for a long, long time. Much of it is close to optimal for the task at hand; that doesn’t mean it is perfect in every situation (see Brain Hacks), but it is usually very good at what it needs to do. Experiment after experiment shows the brain performing close to the mathematically optimal, most efficient solution. This makes sense, right? Why would the system evolve the way it did if there were a more efficient path? And since the body has a limited amount of energy, efficiency includes metabolic constraints: although the brain consumes roughly a quarter of the body’s energy, it is still far more efficient than a computer doing comparable work.

Obviously, then, one would want to copy the brain to get the best possible computer! However, even simple systems haven’t been fully worked out. The retina is probably the most machine-like piece of neural architecture we have, and there is still so much we don’t know about it that replicating it faithfully is currently out of reach. I am not sure how the guy in the video plans on making a functional retina without this knowledge. We know the gist of how a lot of it works, but we have a long way to go before we can fully realize the energy savings available.

[Via Balaji’s status message]

A win for theory

Score one for theoretical neuroscience! Neuroscientists already knew about three types of cells that support our internal sense of ‘space’: place cells, grid cells, and head direction cells. Place cells fire preferentially when an animal is in a certain location in an environment. For instance, if you put a rat in a circular maze, a place cell may fire every time the rat runs through one small section of the maze. Grid cells, on the other hand, respond to multiple locations arranged in a grid-like fashion: one grid cell may fire, say, every meter, at points on a hexagonal lattice. Head direction cells, obviously, respond when the animal’s head points in a certain direction.
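As an aside, the grid-cell firing pattern is easy to play with numerically: the usual toy model sums three cosine gratings oriented 60 degrees apart, which gives a rate map with bumps on a triangular lattice. The sketch below is my own throwaway version; the field spacing and arena size are arbitrary.

```python
import numpy as np

# Toy grid-cell rate map: three cosine gratings 60 degrees apart sum to a
# hexagonally arranged set of firing fields. Values here are arbitrary.
spacing = 1.0                              # ~1 m between neighboring fields
k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number giving that spacing
angles = np.deg2rad([0, 60, 120])

x, y = np.meshgrid(np.linspace(0, 3, 300), np.linspace(0, 3, 300))
rate = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles)
rate = np.maximum(rate, 0)                 # rectify: the cell only fires near the peaks

# Plot with matplotlib (plt.imshow(rate)) to see the hexagonal grid of fields.
print(rate.shape, round(float(rate.max()), 2))
```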

Computational models suggested that there was another type of cell needed to have an accurate cognitive map: border cells. Border cells respond when the animal is near some salient border, such as a wall. And lo and behold, such cells were recently discovered!

So, a win for theory in neuroscience.  Theory helps the field build coherent descriptions of what a system is trying to accomplish and how it does it.  If you can describe something coherently, you can make predictions for future experiments, such as this one.  And if the experiments come out negative, maybe you don’t understand the system as well as you thought you did.

Bad statistics? Or bad criticism of statistics?

A few weeks ago, I posted an article about bad statistics in some fMRI work in my gchat status message. Their train of logic was as follows: the reliability of measurements in fMRI experiments should be no higher than about 0.7 (where reliability is test-retest consistency). Since the observable correlation between two variables is capped by the reliabilities of their measurements (roughly, it can be no larger than the square root of the product of the two reliabilities), we have a ceiling on the correlations we should ever see. Given that, a surprising number of studies reported suspiciously large correlations. The authors believe this arises because certain researchers select the voxels that cross some threshold of correlation between behavior and brain activity, then use only those voxels for the analysis. Worryingly, this biased voxel selection inflates across-subject correlations and can even produce significant-looking effects from pure noise. This is a non-independence error – one could be selecting noise that happens to exhibit the effect being searched for.
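To see why the non-independence error is so pernicious, here is a quick toy simulation (my own, not from either paper): make pure-noise ‘voxels’ and a pure-noise behavioral score, keep only the voxels whose correlation with behavior clears a threshold, and then report the correlation of their average with behavior.

```python
import numpy as np

# Toy demonstration of the non-independence error (my own example, not from
# the papers): everything below is pure noise, yet selecting voxels by their
# correlation with behavior and then re-measuring on the same data gives a
# large "effect".
rng = np.random.default_rng(0)
n_subjects, n_voxels, threshold = 20, 10_000, 0.6

behavior = rng.standard_normal(n_subjects)             # fake behavioral scores
voxels = rng.standard_normal((n_voxels, n_subjects))   # fake voxel activations

# Correlate every voxel with behavior.
bz = (behavior - behavior.mean()) / behavior.std()
vz = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
r = vz @ bz / n_subjects

# Keep only the voxels that cross the threshold, average them, and "report".
selected = voxels[r > threshold]
signal = selected.mean(axis=0)
print(f"{len(selected)} voxels selected from pure noise; "
      f"reported correlation = {np.corrcoef(signal, behavior)[0, 1]:.2f}")
```

Even though nothing here is real signal, the reported correlation comes out large. Selecting the voxels on independent data (or with a criterion orthogonal to the effect being measured) removes the bias, which is, as far as I understand it, what the critics recommend.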

Some of the criticized authors have posted a rebuttal defending their methods. They offer a variety of arguments, most of which seem to rest on the assurance that they correct for multiple comparisons and that therefore the procedure described above is okay. Many of their remaining points, mystifyingly, are more about defending ‘social neuroscience’ and suggest that the paper gives a biased view of how fMRI studies are performed. It surely doesn’t do that – remember how I said that only a subset of researchers seemed to be using the flawed procedure? – but they are pretty defensive anyway. It’s a funny read because they are clearly very peeved throughout the whole rebuttal.

Then the original authors offered a rebuttal to the rebuttal. I don’t feel confident enough in my knowledge of statistics to properly evaluate it, but it certainly seems reasonable. For instance, choosing a threshold necessarily truncates the noise distribution, and correcting for multiple comparisons does not seem like it would fix that problem at all. Anyone out there with more knowledge than me care to weigh in on this?

The moral of the story? Be careful with statistics! It is really a shame that graduate students are not put through a rigorous, year-long series of statistics courses that they must pass. Statistics underlies all of our data analysis and is one of the most misunderstood parts of science. Hopefully this back-and-forth will spark a good discussion of the relevant techniques so the problem isn’t propagated any further.

Words are not words

Have you ever written a word down, looked at it, and thought to yourself, “What a strange word. Is that really a word? Is it actually spelled like that?” Or maybe you said a word a few times – “canoe, canoe, canoe” – and then became confused as to whether it was a real word at all? No?  Uh, me neither.  If it did happen to me, though, I’d be curious as to why.  Now I know. The effect is known as semantic satiation, and there is a lot of information in this old, purple-and-violet paper from the 1960s. A PubMed search shows that a few researchers are still working on the effect, but the field looks a little dead. I guess it’s not that sexy.

Semantic satiation can be measured in a number of ways. The 1967 paper favors indexing the self-reported intensity of meaning of a repeated word. Plotting this index over time gives an inverted U-shaped curve. The authors suggest that the rising part of the curve reflects “semantic generation”, an increase in meaning as the word is first repeated. The falling portion is satiation and inhibition, where the ‘meaning’ of the word drops off. This satiation appears less frequently, or not at all, in the very young, the elderly, and in older mentally-challenged children.

What causes the satiation? It is unclear. The older paper has a lot of speculation and no real evidence. The newer paper presents data from EEG experiments. The authors focus on the N400 component of the ERP, which is known to be affected by semantic tasks; for instance, the N400 is larger in response to words that are misplaced in their semantic context.  When subjects were asked to simply repeat a word, no change in the N400 was found. In a separate experiment, the N400 was examined after priming: following heavy repetition of a prime word, the N400 in response to a related cue word was smaller than in the low-repetition case. This would suggest that satiation – or at least some aspect of repetition – could be affecting the semantic meaning of the word.