Recent papers – LTD and R-F neurons

I am going to keep tabs on the interesting papers I read. I’m actually stealing this idea from Al who uses Google Documents to take notes on papers so they are always at hand!

One of the papers is Izhikevich’s 2001 paper on resonate-and-fire neurons. The most common neural simplification is the integrate-and-fire model. This is fine, except some dynamics are lost. Most importantly, since integrate-and-fire models are one-dimensional, they can only lose stability via a saddle-node bifurcation; the Andronov-Hopf bifurcation that gives rise to subthreshold oscillations and resonance requires at least two dimensions. This ensures that certain phenomena, such as spiking in response to inhibitory input, cannot be part of the model.
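The resonate-and-fire model represents the subthreshold state as a single complex variable obeying dz/dt = (b + iω)z + I, with a spike whenever Im(z) crosses a threshold. Here is a minimal Euler-integration sketch (parameter values are illustrative, not taken from the paper) showing the rebound spike after a purely inhibitory pulse:

```python
import numpy as np

def resonate_and_fire(I, dt=0.001, b=-0.1, omega=10.0, thresh=1.0):
    """Euler-integrate dz/dt = (b + i*omega)*z + I(t); spike when Im(z) >= thresh."""
    z = 0.0 + 0.0j
    spike_times = []
    for n, inp in enumerate(I):
        z += dt * ((b + 1j * omega) * z + inp)
        if z.imag >= thresh:
            spike_times.append(n * dt)
            z = 0.0 + 0.0j          # reset after the spike
    return spike_times

t = np.arange(0.0, 2.0, 0.001)
I = np.where((t >= 0.1) & (t < 0.15), -40.0, 0.0)   # purely inhibitory pulse
print(resonate_and_fire(I))       # a spike appears well after the pulse ends
```

Because the subthreshold dynamics rotate, the hyperpolarizing pulse swings the state around and past threshold about half an oscillation later; a one-dimensional integrate-and-fire model has no rotation, so it can never do this.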

The other paper is by Jo et al. (2008), which we read for the systems and synaptic journal club. I didn’t actually go to the journal club, but I did read the paper! Basically, the authors relate mechanisms for two separate forms of LTD. One acts through NMDA receptors and the other through mGluRs. These forms of LTD are separate, so one can be induced on top of the other to increase the depression.

Click through the link if you care more about these papers (ie, if you are me).



Building a computer like a brain

Here is a video that presents some ideas from ‘neuromorphic engineering’. Neuromorphic engineering attempts to mimic neural architectures on silicon chips. The idea, as the video communicates, is that the brain is really, really good at what it does. An equivalent computer uses vastly more power and is unable to do most of what the human brain can do, so maybe we should try to get some inspiration from the brain. A lot of people are working on this problem, including Gert Cauwenberghs here at UCSD.

Clearly, a lot of the difference has to do with the software of the brain: it is simply better than anything we have created. But it also has to do with the architecture. The reason the brain is so good at what it does is that it has evolved that way for a long, long time. Nearly everything in it is well suited to the task at hand; that doesn’t mean it is perfect in every situation (see Brain Hacks), but it is usually the best at what it needs to do. Experiment after experiment shows the brain performing close to the mathematically best and most efficient way possible. This makes sense, right? Why would the system evolve the way it did if there was a more efficient path? Since the body has a limited amount of energy, efficiency includes metabolic constraints. Although the brain consumes roughly a quarter of our body’s energy, it is still much more efficient than a computer.

Obviously, then, one would want to copy the brain to get the best possible computer! However, even simple systems haven’t been fully worked out. The retina is probably the most mechanical piece of neural architecture, and there is so much we don’t know about it that it would be impossible to replicate it. I am not sure how the guy in the video plans on making a functional retina without this knowledge. We know the gist of how a lot of it works, but we have a long way to go before we can fully realize the energy savings available.

[Via Balaji’s status message]

A win for theory

Score one for theoretical neuroscience! Neuroscientists have previously known about three types of cells that help our internal sense of ‘space’: place cells, grid cells, and head direction cells. Place cells fire preferentially when an animal is in a certain location in an environment. For instance, if you put a rat in a circular maze, a place cell may fire every time the rat runs through a small section of the maze. Grid cells, on the other hand, respond to multiple locations in a grid-like fashion. So one grid cell may fire, say, every meter at points on a hexagonal lattice. Head direction cells, obviously, respond to an animal’s head being in a certain direction.

Computational models suggested that there was another type of cell needed to have an accurate cognitive map: border cells. Border cells respond when the animal is near some salient border, such as a wall. And lo and behold, such cells were recently discovered!

So, a win for theory in neuroscience. Theory helps the field make coherent descriptions of what a system is trying to accomplish and how it is doing it. If you can describe something coherently, you can make predictions for future experiments such as this one. If the experiments turn out negative, maybe you don’t understand the system as well as you thought.

Words are not words

Have you ever written a word down, looked at it, and thought to yourself, “What a strange word. Is that really a word? Is it actually spelled like that?” Or maybe you said a word a few times – “canoe, canoe, canoe” – and then became confused as to whether it was a real word? No? Uh, me neither. If it did happen to me, I’d be curious as to why. Now I know. The effect is known as semantic satiation, and there is a lot of information in this old, purple and violet paper from the 1960s. A PubMed search shows that a few researchers are still working on the effect, but the field looks a little dead. I guess it’s not that sexy.

Semantic satiation can be measured in a number of ways. The 1967 paper favors an index based on the self-reported intensity of meaning of a repeated word. Tracking this index over time reveals an inverted U-shaped curve. The authors suggest that the rising part of the curve reflects “semantic generation”, an increase in meaning as the word is first repeated. The falling portion is satiation and inhibition, where the ‘meaning’ of the word declines. This satiation appears less frequently, or not at all, in the extremely young, the elderly, and in older mentally-challenged children.

What causes the satiation? It is unclear. The older paper has a lot of speculation and no real evidence. The newer paper presents data from EEG experiments. The authors focus on the N400 component of the ERP, which is known to be affected by semantic tasks; for instance, the N400 is larger in response to words that are misplaced in their semantic context. When subjects were asked to simply repeat a word, no change in the N400 was found. In a separate experiment, the N400 was examined after priming. In this case, after heavy repetition of a primed word, there was a decrease in the N400 following presentation of a related cue word relative to the low-repetition case. This suggests that satiation – or at least some aspect of repetition – could be affecting the semantic meaning of the word.

Brain hacks!

When people think of hackers, they often imagine 1337 h4x0rz who sit in dark rooms, listening to techno and breaking into corporate mainframes. What hackers really do is much less glamorous: they find an inconsistency in the way a program works and exploit it. For instance, suppose a program expects 128 characters of text. What if it doesn’t check how much you typed and you gave it 1024 characters instead? Those extra characters are written to memory – but who knows where? Done properly, that kind of attack can let you run arbitrary code. It’s called a buffer overflow exploit.
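Here is a toy Python sketch of the mechanism: it simulates a flat block of memory in which a 128-byte buffer sits right next to a saved value, then copies into the buffer with no bounds check. (A real overflow corrupts actual process memory; the layout and names here are invented purely for illustration.)

```python
BUF_SIZE = 128

# Simulated flat memory: the buffer, then an adjacent 8-byte "saved address".
memory = bytearray(256)
memory[BUF_SIZE:BUF_SIZE + 8] = (0xDEADBEEF).to_bytes(8, "little")

def unchecked_copy(mem, data):
    """Copies byte-by-byte with no length check, like C's strcpy or gets."""
    for i, b in enumerate(data):
        mem[i] = b          # happily writes past index BUF_SIZE - 1

unchecked_copy(memory, b"A" * 150)   # 150 > 128: the copy overflows the buffer

saved = int.from_bytes(memory[BUF_SIZE:BUF_SIZE + 8], "little")
print(hex(saved))                    # no longer 0xdeadbeef: the input landed here
```

With a crafted payload instead of 150 ‘A’s, the overwritten value can be made to point back into attacker-supplied bytes, which is how an unchecked copy turns into code execution.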

It shouldn’t be surprising that the brain has inconsistencies, too. After all, it is optimized for the tasks it evolved to do – and usually only ‘good enough’ at everything else. A whole bunch of people are interested in figuring out how to improve these ‘good enough’ tasks so they can learn, write, and work better.

But everyone’s aware of one kind of brain hack already – the optical illusion. Optical illusions use the idiosyncrasies of the visual system to screw with what you see. For other, much cooler and less common ways of hacking yourself, there’s an informative little graphic from the Boston Globe. [Via].

Neuroscience we can believe in

As a neuroscientist, I often wonder when we will be controlling everything with our minds. Well, obviously we already kind of do that. But why should I be a chump who has to use his mind to reach out his hand and pick up a ping pong ball? Why can’t I just use my mind to telepathically activate fans that make the ping pong ball float? Well, don’t worry: now, for $80, you can! Look at this innovative toy:

I know as a child, I always wanted a toy where I had to concentrate really hard and turn a knob to…make a ball go in a circle. Seriously. Hours of fun right there. Hours.

I do imagine it would be quite cool to try once, however. Maybe one day they’ll make a, you know, fun mind-control toy.