Blog

Stimulating the will to persevere

April 26th, 2015

So, this week I ran across a really cool paper published in 2013 in the journal Neuron by a team from Stanford University consisting of Josef Parvizi, Vinitha Rangarajan, William Shirer, Nikita Desai, and Michael Greicius. Actually, the first thing I ran across was this well-done article in the online magazine Quanta that featured the study when it came out. Then I went and looked at the published study.

It’s one of those studies you see sometimes, in which researchers take advantage of a learning opportunity afforded by unusual circumstances. We’ve learned a great deal about the brain from these serendipitous windows of opportunity. In fact, for most of the history of brain science the biggest advances in our knowledge have come from unusual cases, such as victims of head injury, stroke, and so on. Some of the most interesting work of this kind has been done by scientists working with people whose brains are available for direct manipulation for reasons relating to medical circumstance. For example, we have Wilder Penfield’s seminal work, in which he stimulated the exposed brains of conscious, aware epileptic patients and asked them about their subjective experiences, and Roger Sperry’s Nobel Prize-winning experiments with so-called “split-brain” patients, whose corpora callosa (that’s the plural of corpus callosum; I looked it up!) had been surgically severed to prevent the spread of epileptic seizures from one hemisphere to the other.

The study by Parvizi et al. stands directly in this tradition. They were working with two epileptic patients and had the opportunity to implant electrically stimulating probes in the anterior midcingulate cortex of each. In case you’re wondering where that is, the following image shows the location of the stimulation site (marked with a yellow ‘1-2’) superimposed on each patient’s MRI scan:


This was part of a process whereby the researchers were assessing where in the patients’ brains their epileptic seizures were originating. With the patients conscious and interacting with them, they asked the patients to share any experiences that were generated when they turned on the current.

The results were quite striking. Both patients reported a similar experience, one that was all the more remarkable because it had a complex, nuanced quality that wove together aspects of physiology, emotion, and intention. It would have been interesting enough had the stimulation resulted in a muscle twitch or seeing spots or something simple like that. What the authors described, though, was much more interesting. They entitled their paper “The Will to Persevere Induced by Electrical Stimulation of the Human Cingulate Gyrus”, and the “will to persevere” captures the patients’ experience rather well. Here’s the authors’ description of what happened:

Both patients reported autonomic symptoms including “shakiness” or “hot flashes” in the upper chest and neck region. Heart rate seemed to increase in both cases … Moreover, both patients recounted a sense of “challenge” or “worry” (also known as foreboding) but remained motivated and aware that they would overcome the challenge.

Both patients reported a sense that they were heading into a weighty and momentous challenge, and that they must bear up and persevere to get through it. It wasn’t fear, exactly, but a sense of challenge accompanied by a conviction that the patient would make it through. Here’s a snippet of video of one of the patients, published along with the journal article:

The authors noted that the stimulation site, in the anterior part of the midcingulate cortex, is a major node in what’s called the salience network, a set of mutually interconnected structures whose function seems to be to identify and register when something motivationally important happens in the environment, and to signal the need to engage other brain networks associated with effortful control (what neuroscientists call executive control) to deal with what’s happened. One of the authors on the paper, Michael Greicius, was one of the researchers who in 2007 first brought the salience network to the world’s attention and started mapping out its functions.

The authors of the current study confirmed that it was indeed the salience network that they had tweaked by identifying structures in each of the two patients’ brains whose activity levels (at rest, outside of the stimulation experiment) correlated with that of the stimulated site. The patients’ reported experiences align quite well with what the salience network is thought, on the basis of other, only slightly less cool research, to do.


I’ve got a gut feeling this could be an influential study…

February 1st, 2015

So, it turns out that you really do listen to your gut. Quite literally, in fact. It’s been known for a long time that your viscera, i.e., your guts, have signalling properties that communicate with the brain. This has been a perennial source of interest, because there is a long line of theories, running from nineteenth-century philosopher William James to modern neuroscientist Antonio Damasio, holding that our brains construct our lived moment-to-moment experience by integrating inputs from multiple body systems, making the world of our thoughts, emotions, and decision-making intimately dependent not only on our mentation, but on the state of our whole body.

Now a group of Japanese researchers led by T. Hashimoto has published a study in the journal Neuroscience in which they may have identified which part of the brain receives the most direct input from the guts. They used a technique called electrointestinography (EIG; see the word “intestine” buried in the middle of all that Greek?), which is really just like the electroencephalography (EEG) that’s near and dear to our hearts at Choratech, except that it records electrical activity from sensors placed on the abdomen, over the intestines, rather than on the scalp, over the brain. The researchers measured the oscillatory patterns of the electrical signals that were coming from the intestines, and their research subjects also just happened to be lying in an MRI scanner (what luck!), so they could correlate rising and falling activity in the guts with the rising and falling activity in the brain.
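If you’re wondering what it actually means to “correlate” two signals like that, here’s a minimal sketch of the logic. To be clear, this is my own toy illustration, not the authors’ pipeline: the signals, sampling rate, and region names below are all invented:

```python
import numpy as np

def correlate_signals(a, b):
    """Pearson correlation between two equal-length time series."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Toy data: a slow oscillation standing in for gut activity, plus a brain
# signal that partly tracks it (all numbers illustrative only).
rng = np.random.default_rng(0)
t = np.arange(0, 300, 0.5)                     # 5 minutes sampled at 2 Hz
gut = np.sin(2 * np.pi * 0.05 * t)             # ~0.05 Hz "gut" rhythm
insula = 0.7 * gut + 0.3 * rng.standard_normal(t.size)   # tracks the gut
control = rng.standard_normal(t.size)          # unrelated brain region

print(correlate_signals(gut, insula) > correlate_signals(gut, control))
```

In a real analysis there’s much more to it (hemodynamic lags, filtering, artifact rejection), but the core statistic is just this correlation, computed between the gut signal and each brain region.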

They found that the gut activity correlated with activity in the brain region called the insula (lit. Latin, “island”). This structure is invisible from the outside of the brain, because it’s buried underneath the bottom part of the frontal and parietal lobes, and the top part of the temporal lobes. The insula is intimately associated with detecting motivationally significant information in both the external and internal (bodily) environments, and passing that information along so it can influence decision-making and behaviour. The authors found that the gut activity correlated most closely with regions of the insula on the right side, in the middle and at the front, as shown in the accompanying image taken from their study.

Interestingly, those with the most gut activity were also those with the highest self-reported anxiety. The authors also showed significant relationships between anxiety scores and the linkage between the right insula and related structures (the left insula and the dorsal anterior cingulate), also known to be involved with the experience of anxiety.

We have a thousand colloquial expressions that suggest that our “gut” has input into our decision-making; I’m sure you can think of two or three right off the top of your head. It seems to be turning out that this isn’t just metaphor: our actual, physical gut does indeed have this type of influence, and its gateway to the brain is somewhere in the middle and front of the right insula.

You know that thing about how you use only ten percent of your brain?

September 13th, 2014

Well, there is a special little ten percent that does, admittedly, do a lot of heavy lifting. On the other hand, the brain has this amazing ability to make do with less than its full complement of bits.

This article reports on a recent case report out of China, published in Brain, in which a married 24-year-old mother presented to the hospital with a long history of dizziness and unsteady gait, and a recent history of nausea and vomiting extending back a couple of months. It turns out she has no cerebellum. Take a look at this:

No cerebellum. Like, at all. She’s only the ninth documented case of someone who hasn’t got one.

The cerebellum is an extremely densely packed bundle of neural tissue that sits underneath the back of the brain, right in that space that, in the above MRI image, looks forlornly empty. It’s a fascinating structure, because its anatomical features suggest that it’s extremely important, and yet no one is a hundred percent sure what it does. It takes up about ten percent of the volume inside the skull, but it contains an incredible half of all the brain’s neurons. Just that fact alone has to mean it’s pretty important, and yet its function is not terribly well understood. It’s clear that it contributes to the fine-tuning of motor output and the maintenance of balance. It has a way of popping up here and there in other research as well, though, in ways that aren’t easy to integrate into a comprehensive theory of how it works.

Interestingly, this young woman has made it to age 24, is married, and has a (neurologically normal) daughter. She apparently always did have motor and balance problems. She was four years old before she could stand on her own, and seven before she could walk unassisted. She also did not speak until age six, and now she has trouble articulating words properly, although her ability to understand language is normal. She is apparently mildly cognitively delayed.

The amazing thing is that her balance and motor control problems are what her neurologists would have considered characteristic of mild damage to the cerebellum, rather than not having a cerebellum at all. It’s another remarkable testament to the brain’s ability to find a way to do what’s asked of it, even if the equipment it’s working with is damaged, compromised, or in this case, entirely absent.

All right, everyone. Break into groups and discuss, and we’ll reconvene in, oh, about 8 hours.

August 10th, 2014

Just read a 2013 study published in NeuroImage by Enzo Tagliazucchi and colleagues, working out of Goethe University Frankfurt am Main. They used some fascinating modern analytic tools to study the way the brain’s functional network architecture changes as we move from wakefulness through the different stages of sleep, from light sleep to deep sleep.

Functional network what, you say? Right. Well. Let me give you some background. Neuroscience research in the last five to ten years has put a huge emphasis on the links between brain areas, and the participation of brain areas in collective action. Gone are the days when we ask what part of the brain does what, as if the parts are separately responsible for different functions. Instead, we ask what parts of the brain are coupled to what other parts, and in what way does their collective action relate to function (i.e., to the generation of sensory or emotional experience, the processes of thought, the execution of actions, etc.). It’s all about connectivity now. Using cool techniques, researchers have been able to show that the brain consists of a relatively small number of networks, each of which has dense connections among the structures that participate in it, and relatively sparse connections with structures from other networks. Moreover, these networks become (collectively) active at different times, depending on what the brain is up to.

Using these techniques, Tagliazucchi and his colleagues explored how the arrangement of the connections among brain areas changes as people fall asleep and move from light sleep toward deep sleep. They found that as sleep deepens, there is an increase in modularity, which means that the brain’s various networks become increasingly segregated from one another. The connections within each network remain the same or are strengthened, while the connections between networks are weakened. There’s even a little bit of swapping that happens, wherein brain regions that participate strongly in one network during wakefulness are “reassigned” to other networks during deep sleep. So, basically, as the brain moves further and further away from consciousness, its functional subdivisions tend to go off and do their own thing, while dropping their connections with one another. Why they do this is still a matter for speculation, but it does support the idea proposed by some that consciousness is a function of whole-brain integration, and departures from consciousness represent a breakdown in this integration.
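For the curious, “modularity” has a standard mathematical definition (often called Newman’s Q): it compares the weight of within-module connections to what you’d expect by chance, so higher Q means more segregated modules. Here’s a toy sketch, with two made-up six-node networks standing in for the awake and deep-sleep brain; all the numbers are invented for illustration:

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q for a weighted, undirected adjacency matrix A
    and a community label per node. Higher Q = more segregated modules."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                    # node strengths
    two_m = A.sum()                      # total weight (each edge counted twice)
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    expected = np.outer(k, k) / two_m    # chance-level connection weights
    return float(((A - expected) * same).sum() / two_m)

# Two 3-node "networks". In the awake-like matrix the modules talk to each
# other (between-module weight 0.5); in the sleep-like matrix that drops to 0.1.
within = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)

def block_net(between):
    A = np.zeros((6, 6))
    A[:3, :3] = within
    A[3:, 3:] = within
    A[:3, 3:] = between
    A[3:, :3] = between
    return A

labels = [0, 0, 0, 1, 1, 1]
awake = block_net(0.5)
deep_sleep = block_net(0.1)
print(modularity(deep_sleep, labels) > modularity(awake, labels))
```

Weakening the between-network connections while leaving the within-network ones alone is exactly what raises Q, which is the pattern the study reports as sleep deepens.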

There’s an interesting EEG connection, too, in that this movement toward increased functional segregation of networks correlates highly with the amount of EEG activity in the Delta frequency band (between 1 and 4 Hz). The amount of Delta activity effectively indexes the amount of network segregation. This raises the intriguing possibility that the presence of Delta activity even during wakefulness for some clinical populations might correspond to a weakening of long-range, cross-network communication.

Another beautiful image from brain science

August 6th, 2014

Y’all know how much I like the beautiful images coming from neuroimaging and microscopy of the brain. Here’s one that was posted recently at the website BrainFacts.org, showing (in green) cells in mouse motor cortex expressing a gene transcription factor called Fezf2. The grey shows the changes in cell structure that come about after Fezf2 is expressed: the cell puts out long, branching extensions called dendrites (from the Greek δένδρον, or “tree”). Dendrites are the part of the neuron that receives incoming signals from other neurons. When the dendritic tree expands and branches, it increases the cell’s capacity to make connections with, and be influenced by, other neurons.

But I’m really just posting this because it’s pretty. Ain’t it just?


Positive attention from fathers is associated with successful development of the frontal lobes.

July 28th, 2014

Just read this study, published in 2010 by Kosuke Narita and colleagues from Japan’s Gunma University. In it they investigated the relationships between parenting styles and brain development. The authors recruited a sample of young adults, all of whom completed the Parental Bonding Instrument (PBI), a retrospective questionnaire in which adults describe the behavior of their parents toward them up until age 16. (If you’d like to have a look at the PBI to get a feel for what they were measuring, take a look here.) The PBI has two major factors, that is, subsets of items that tend to be endorsed in the same direction by the same people. One of the factors measures “care”, and includes behavior such as smiling, treating the child warmly, and so on. The other factor measures “overprotection”, which really is just like it sounds. Everyone who completed the questionnaire was also scanned in an MRI scanner, from which the volume of grey matter was computed in a couple of regions of interest, specifically the dorsolateral prefrontal cortex (DLPFC) and some other frontal sites.

What the authors found was quite interesting. First, they found that scores on the paternal “care” factor (i.e., scores reflecting loving treatment by the father) correlated positively with grey matter volume in the left dorsolateral prefrontal cortex (DLPFC), the right ventromedial prefrontal cortex, and the right orbitolateral prefrontal cortex. The left DLPFC is important for concentration, planning, and memory, while the other two sites are involved in emotion regulation. In contrast, there was a negative correlation between paternal “overprotection” and left DLPFC grey matter, meaning that the more overprotective or smothering the fathers were, the less successfully the left DLPFCs of their children developed. Interestingly, in the main analyses there were no correlations between grey matter volume at any site and maternal care or maternal overprotection, although in some secondary analyses maternal overprotection was also negatively associated with grey matter volume at the left DLPFC. That is, the amount of maternal care the person experienced was generally unrelated to his or her subsequent grey matter volume in adulthood.

If these findings can be replicated, it would suggest that fathers have a particularly important role in shaping the brain development of their children, influencing not only their cognitive capacity, but also their propensity toward things like anxiety and depression. The way fathers can bring about the most positive results is by being a steady, warm, loving presence in the lives of their children, while at the same time allowing the children to take risks and explore their world. Mothers’ care, it seems, exerts other, apparently unrelated effects, but mothers also need to be careful not to stifle and overprotect their children if they want their children’s frontal lobes to develop well.

All of this highlights the absolutely vital importance of fathers in children’s lives. Nowadays we tend to think of adult “parental figures” as basically interchangeable units. We assume that as long as there is some well-intentioned adult or combination of adults in the home, it doesn’t matter what sex they are, or if they are biologically related to the children. Indeed, this assumption has underlain unprecedented experimentation with different combinations and permutations of adults in the home in the last generation or two, a marked departure from the way families have been structured everywhere, throughout (nearly) all of human history. The Narita study (and other studies as well, like this one) suggests that all of this social experimentation may be ill-advised. Fathers matter. Mothers matter. Children have a right to be raised by both, and to have their parents love them and accept them, while giving them enough space to allow them to learn to be independent. These are the conditions under which their brains will develop the best.

So. Let me introduce you to my friend LORETA.

July 21st, 2014

It’s not what you think. Although I do know a really nice woman named Loreta. A colleague from a ways back. But I digress. What I’m talking about is LORETA, with capitals. It’s poised to transform the field of neurofeedback completely.

LORETA stands for Low Resolution Electromagnetic Tomography – I know, the acronym doesn’t really work, but “LORETA” is way nicer than “LRET”. I mean, c’mon, you can’t even pronounce “LRET”. LORETA is what is known mathematically as an “inverse solution”: a means of mathematically reconstructing the source or sources of scalp-recorded EEG patterns deep within the three-dimensional space inside the skull. In other words, inverse solutions aim to identify where in the brain the stuff is happening that is being picked up as electrical fields on the surface of the head. I’ll explain:

Here’s what EEG looks like in its raw form. Each electrode picks up a complex, oscillating signal from the brain tissue underneath it, and the oscillations are plotted across time, like this:

This is the EEG that neurologists read. Now, for the purposes of neurofeedback, we decompose the EEG waves into their component oscillating frequencies and compare the size (amplitude) and scalp distribution of those oscillations to a normative database. For the gentleman depicted here, who is in his thirties, the EEG waves contained an abundance of activity (relative to other people his age) in the range between about 12 and 14 cycles per second, or 12 to 14 Hz. In the database output, that looked like this:

The yellow and orange areas are where, on my client’s scalp, the brain waves at these particular frequencies exceeded the amplitude that is considered normal compared to the EEG of other people his age. Notice how there seems to be something going on in the back half of his head, maybe a little more on the right side than the left.
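For the technically inclined, the frequency-analysis step can be sketched in a few lines: measure how much signal lives in the 12 to 14 Hz band and z-score it against normative values. The “EEG” and the normative numbers below are invented for illustration; real QEEG databases are far more elaborate than a single mean and standard deviation:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` in the [lo, hi] Hz band, via the FFT."""
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= lo) & (freqs <= hi)
    return float(power[band].mean())

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)              # 4 seconds of signal
# Toy "EEG": a strong 13 Hz rhythm riding on background noise.
rng = np.random.default_rng(1)
eeg = 2.0 * np.sin(2 * np.pi * 13 * t) + rng.standard_normal(t.size)

p = band_power(eeg, fs, 12, 14)
# Hypothetical normative values for this age band (invented numbers):
norm_mean, norm_sd = 5.0, 2.0
z = (p - norm_mean) / norm_sd
print(z > 2)   # flag as deviant if more than 2 SDs above the norm
```

The z-score is roughly the kind of quantity those coloured database maps are built from: how far this person’s activity in a given band sits from the norm at each electrode.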

Now, that already provides us with a lot of information—especially given accumulated clinical wisdom that says an overabundance of activity in this frequency range in the back of the head is associated with anxiety (this guy was very anxious). But it doesn’t tell us where exactly in the brain all that 12 to 14 Hz activity is really coming from.

That’s where LORETA comes in. The invention of a neuroscientist at the University of Zurich named Roberto Pascual-Marqui, LORETA is a mathematical solution that estimates—as it turns out, with a high degree of accuracy—exactly where, in three dimensions deep within the brain, the source of the activity measured on the scalp is. So, in the case of my client, here’s a depiction of which part of the brain was producing the most deviant activity at 13 Hz:

The LORETA analysis superimposes the estimated locations of activity onto a standard image of the brain, and allows us to spot the location of the abnormal activity with remarkable accuracy. The images shown here are like slices made by a big saw (not to be all macabre about it): one horizontal, one vertical along the long axis of the brain (perpendicular to a line drawn between the ears), and one vertical, crossing the long axis of the brain at ninety degrees (parallel to a line drawn between the ears). This three-dimensional “slicing” of the brain is the way all imaging techniques work. It is, in fact, where the word tomography comes from: literally, from the ancient Greek, “slice-writing”. Here’s what the same information looks like on an image of a whole, intact brain:

What LORETA allows us to do, then, is to identify with increased spatial accuracy where the patterns of brain activity observable as scalp EEG originate. Rather than looking at a smeared map of activity spread across a wide area of scalp, we can see in three dimensions where that activity actually originates in the brain. From there we can make connections to our knowledge about what locations and networks in the brain are involved in what sorts of functions. In this case, the area producing the most aberrant 13-Hz activity is the right temporoparietal junction (TPJ), which is known to be involved in responding to stimuli that are unexpected, but that have special behavioral importance or salience to the individual. Taking this information, along with the symptoms and complaints with which the individual comes to us, we can identify the structures and networks most likely to be contributing to their problems.
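If you’re mathematically inclined and want a feel for what an inverse solution does, here’s a toy sketch of LORETA’s simpler cousin, the regularized minimum-norm inverse (LORETA proper adds a spatial-smoothness constraint on top of this). The “leadfield” here is a random matrix, purely for illustration; real leadfields come from biophysical models of the head:

```python
import numpy as np

# A leadfield matrix L maps source activity inside the head to readings at
# the scalp electrodes. The inverse problem: given scalp readings, estimate
# the sources. This random L is illustrative only.
rng = np.random.default_rng(2)
n_electrodes, n_sources = 19, 60
L = rng.standard_normal((n_electrodes, n_sources))

def min_norm_inverse(L, scalp, lam=1e-2):
    """Tikhonov-regularized minimum-norm estimate:
    J = L^T (L L^T + lam*I)^-1 scalp."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, scalp)

# Simulate a single active source and see whether the inverse puts the
# largest estimated activity back at the right place.
true_sources = np.zeros(n_sources)
true_sources[17] = 1.0
scalp = L @ true_sources

estimate = min_norm_inverse(L, scalp)
print(int(np.abs(estimate).argmax()) == 17)   # does the estimate peak at the simulated source?
```

The estimate is smeared (there are far more sources than electrodes, so the problem is underdetermined), but its peak lands on the true source, which is the essential trick behind localizing scalp EEG in three dimensions.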

So, LORETA allows us to see with more precision where the sources of scalp-recorded EEG really are in the brain, even if they’re buried quite deep in the cranial vault. Want to know something even cooler? Stay tuned…

The metastable brain

June 8th, 2014

I just finished reading a fascinating theoretical paper that addresses how the brain coordinates its activity on various spatial scales from the individual neuron, to local ensembles of neurons working together, to widespread networks within the brain. The article is called “The metastable brain”, and is published in January’s issue of the journal Neuron. The authors, Emmanuelle Tognoli and Scott Kelso, address themselves to a central paradox that arises in our attempts to understand how the brain works, namely, how different areas of the brain can get together and work in concert for the purpose of doing a task or a computation, without the brain becoming locked into a rut that renders it inflexible and unresponsive to changing circumstances. This is an old theoretical problem that has gone by a number of names and been formulated in various contexts. Stephen Grossberg of Boston University has referred to something similar in his articulation of the stability-plasticity dilemma. How does a brain balance between continuity and preservation of function on the one hand, and responsiveness and adaptation on the other?

Tognoli and Kelso draw on principles from nonlinear dynamics (what used to be popularly referred to as “chaos theory”) in articulating their vision of the brain. They view the brain as containing numerous oscillating elements (think of elements as simply being physically located functional entities) that exist on different spatial scales, from individual neurons, to local ensembles of neurons working together in the same neighborhood, to functional networks reaching across the whole brain. The rising and falling of these oscillating elements can synchronize with that of other elements, or it can be unsynchronized and independent. The authors postulate that the brain is organized to be a “metastable” system, that is, a system that is perpetually balanced between integrative influences (those that pull neural elements into synchrony) and segregative influences (those that pull the elements out of association with each other). This leads to an immensely flexible regime, where neural elements can be called into collective action and form into functional units rapidly, and with minimal input of energy from outside; then, just as easily, they can fall back out of tight association with each other when the job is done, or when they’re needed for some other job.

The authors adduce empirical evidence from various spatial scales showing that the brain at each of those scales shows the properties, mathematically, of a metastable system. They note that because of the properties of these systems, what we observe as the oscillations—and coupling/uncoupling behavior or phase locking/unlocking of the oscillations—at one spatial scale doesn’t necessarily tell us everything about what’s happening at other spatial scales. For example, the oscillations of the EEG may show bursts of activity or periods of relative silence, but the silence can be misleading, because there may be plenty going on underneath (at smaller scales), but it isn’t visible as EEG because it’s either not coordinated, or it is coordinated but is out of phase and therefore self-cancelling. The authors recommend that researchers try to find ways to study brain activity at multiple spatial scales simultaneously, so that they can better characterize the relationships that form in space and time.

As someone who is fascinated by EEG (witness Choratech’s employment of QEEG and neurofeedback), I find papers like this to be exciting. I’ve long believed that the oscillating patterns observable in the EEG are reflective of something fundamental about how the brain self-organizes. Neuroscientists seem to be coming to the same conclusion.

Why we yawn

May 28th, 2014

Ever wonder why you yawn? A recent study published in the journal Physiology and Behavior supports a theory that the reason why we yawn is to cool our brains down. Turns out that yawns are preceded by warmer-than-normal brain temperature, and are followed immediately by a cooler brain. Check this out:

I had a look at the study. It was a fun experiment. The experimenters approached people in public places in the city of Vienna and asked them to participate in a survey about contagious yawning (the phenomenon wherein we yawn when we see, hear about, or think about someone else yawning), look at a series of pictures of people yawning, and then fill out a brief survey reporting how often they had yawned, or felt the urge to yawn, during the process. The experiment was conducted once during the winter, and again during the summer, and the researchers collected temperature and humidity data during each person’s participation. Sure enough, the warmer it was when someone took the survey, the more likely it was that they yawned. That was only true up to a point, though. A previous study, conducted in Tucson, AZ when the temperature was over 37°C (i.e., above body temperature), found that yawning decreased at that temperature. This also aligns with the theory, because filling the lungs with air at body temperature or above wouldn’t have any cooling effect.

As for the phenomenon of contagious yawning, researchers think it’s a way of promoting the wide distribution of peak levels of vigilance within a social group, because when the brains of all group members are working optimally, they’ll be more likely to spot predators, potential food sources, etc.

Because, of course, cooler heads always prevail.

Another reason your dad was right that you should stay in school

May 25th, 2014

This study is interesting. Published last December in the Archives of Physical Medicine and Rehabilitation, it looked at the cognitive performance of people who had suffered a traumatic brain injury (TBI). A TBI is basically any injury to the brain that results from externally applied physical forces, as would happen, for example, in a car accident. The study, authored by James Sumowski, Nancy Chiaravalloti, Denise Krch, Jessica Paxton and John DeLuca, examined a factor that might partially account for why the impact of a TBI on cognitive ability varies so much from one case to another. It’s well known in TBI circles that it’s difficult to predict, simply on the basis of the type or extent of a brain injury, just how much the injury will affect subsequent cognitive functioning.

The variable the authors examined was the number of years of schooling the individual had undergone prior to the injury. The number of years of schooling is what is known as a “proxy variable”, that is, a variable that’s assumed to reflect another variable indirectly. In the case of educational attainment, what researchers think they’re measuring is the amount of intellectual stimulation a person has received in his or her lifetime—and that’s a reasonable assumption, provided she didn’t study fine arts. Okay, okay, don’t be so touchy; I was just kidding! Anyway, what the authors were interested in was a phenomenon that’s already known in research on other conditions, namely, the fact that people with more years of education seem to have brains that are more resistant to the performance-degrading impacts of various diseases or illnesses. The most striking of these is Alzheimer’s Disease, for which it’s been well documented that people with more education tend to be spared partially or totally from dementia, despite their brains on autopsy being riddled with the same Alzheimer’s pathology that is assumed to cause the cognitive decline in other people. The term for this seemingly greater resilience characterizing better-educated brains is cognitive reserve. The researcher whose excellent work is most associated with the construct of cognitive reserve is Yaakov Stern, of Columbia University in New York.

So, most of the work on cognitive reserve has shown that people with more years of education are more resistant to the onset of dementia in old age. The researchers in the current study looked at whether the same would be the case for people who had suffered a TBI. To examine this question, they compared the cognitive performances of a group of 44 TBI sufferers, on average about a year after their injury, to a group of normal, healthy control subjects. All of the participants completed a group of cognitive tests measuring processing speed, working memory (the ability to hold information in a highly active state while using it), and episodic memory (the ability to remember events that have happened to you). These were chosen because they’re among the most consistently impaired functions following a TBI. The three tests were summarized into a single performance score, and then the joint impact of TBI status and educational attainment (expressed as number of years of schooling) on this score was tested.
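The logic of that last analysis can be sketched in a few lines of code. Everything below, including the data and the effect sizes, is invented for illustration; it’s the shape of the analysis (a regression with a group-by-education interaction), not the authors’ actual numbers:

```python
import numpy as np

# Toy data: 44 "TBI" and 44 "control" subjects with 10-20 years of schooling.
rng = np.random.default_rng(3)
n = 44
education = rng.integers(10, 21, size=2 * n).astype(float)
tbi = np.r_[np.ones(n), np.zeros(n)]           # 1 = TBI, 0 = control

# Simulate the hypothesized pattern: TBI lowers the composite cognitive
# score, but education buffers the loss (all coefficients invented).
score = (-1.5 * tbi + 0.05 * education + 0.1 * tbi * education
         + 0.3 * rng.standard_normal(2 * n))

# Ordinary least squares with an interaction term.
X = np.column_stack([np.ones(2 * n), tbi, education, tbi * education])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
intercept, b_tbi, b_edu, b_interaction = beta
print(b_interaction > 0)   # positive interaction: schooling softens the TBI deficit
```

A significantly positive interaction coefficient is the statistical signature of cognitive reserve in this design: the TBI-versus-control gap shrinks as years of schooling go up.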

The results clearly showed that, although the TBI patients performed significantly worse on the cognitive tests than the controls (not a surprising finding), those who had more years of schooling were not impaired as badly as those who had fewer years of schooling. The following figure taken from the study illustrates this difference. The red dots represent members of the TBI group, and the blue dots, members of the control group:

In fact, it’s clear from the graph that if a person had had, say, 25 years of schooling, he would perform better after a TBI than before! Okay, not really. That was a joke. But it is a pretty interesting indication of how the extensive and varied intellectual experiences that accompany higher education can actually be not only personally enriching, but also neurologically protective.

I find the concept of cognitive reserve to be fascinating, because it’s one case in which there’s a clear crossing-over between what we normally think of as mind/psychology on the one hand, and brain/neurology on the other. What the study suggests is that educational experiences not only change your psychology, but also change the physical stuff of your brain, in a way that is strikingly evident as (relatively) spared functioning following a physical injury. Cool, huh? See, I knew it was worth it to stay in school for that long!