Culture

Unexpected link found between feeding and memory brain areas

image: Dr. Zheng Sun

Image: 
Baylor College of Medicine

The search for a mechanism that could explain how the protein complex NCOR1/2 regulates memory has revealed an unexpected connection between the lateral hypothalamus and the hippocampus, the feeding and the memory centers of the brain, respectively. The findings, which were published today in the journal Nature Neuroscience by a multidisciplinary team led by researchers at Baylor College of Medicine, have implications for studies on brain function, including those related to autism spectrum disorders, intellectual disabilities and neurodegenerative disease.

"It was not known how NCOR1/2 regulates memory or other cognitive functions, but there is evidence that NCOR1/2 plays a fundamental role in the activity of many hormones," said corresponding author Dr. Zheng Sun, assistant professor of medicine and of molecular and cellular biology at Baylor and member of Baylor's Dan L Duncan Comprehensive Cancer Center and Center for Precision Environmental Health and of the Texas Medical Center Digestive Diseases Center.

In this project, the researchers worked with mice carrying mutations of NCOR1/2.

"These mice clearly present with memory deficits," said co-first author Dr. Wenjun Zhou, postdoctoral associate in the Sun lab. "The signaling involving GABA, a key inhibitory neurotransmitter in the brain, was dysfunctional in hypothalamus neurons when NCOR1/2 was disrupted."

To explore the cellular mechanism underlying the condition, Sun collaborated with Dr. Yong Xu, associate professor of pediatrics and of molecular and cellular biology at the USDA/ARS Children's Nutrition Research Center at Baylor College of Medicine.

The researchers conducted a number of electrophysiological experiments to investigate how the lack of NCOR1/2 resulted in memory deficits in mice.

"What struck us the most was that the process by which NCOR1/2 regulates memory involves a new circuit that links two brain regions: the lateral hypothalamus, known as a feeding center of the brain, and the hippocampus, a place that stores memory," Xu said. "It surprised us because the hypothalamus is not traditionally considered to be a major regulator of learning and memory."

The researchers validated the newly discovered circuits in different ways.

"We applied both optogenetics and chemogenetics techniques," said co-first author Dr. Yanlin He, postdoctoral associate in the Xu lab. "The protein complex NCOR1/2 is key to the hypothalamus-hippocampus circuit; when we knock it out the circuit becomes dysfunctional."

In addition, the researchers have connected their findings in mouse models with human conditions.

"We describe here new genetic variants of NCOR1/2 in patients with intellectual disability or neurodevelopmental defects," said co-corresponding author Dr. Pengfei Liu, assistant professor of molecular and human genetics at Baylor and laboratory director of clinical research at Baylor Genetics.

"The gene NCOR1 is located on human chromosome 17, very close to the region that has been previously implicated in the Potocki-Lupski and Smith-Magenis syndromes," Liu explains. "We have always suspected that mutations of this gene could cause intellectual disabilities or other deleterious neurological consequences. The mouse models in the current study provide the first evidence that this is indeed the case."

These findings have implications for the relationships among endocrine factors, obesity and metabolic disorders and cognitive dysfunctions such as Alzheimer's disease. It is known, for instance, that people with endocrine disruption or metabolic disorders are more susceptible to Alzheimer's disease.

"Mechanisms underlying these associations are not completely clear," Sun said. "We think that the NCOR1/2-regulated neural circuit between the feeding and the memory centers of the brain we have discovered is worth exploring further in this context."

Credit: 
Baylor College of Medicine

Leaving 2-hour gap between dinner and bedtime may not affect blood glucose

Leaving a two-hour gap between the last meal of the day and bedtime doesn't seem to be associated with any discernible difference in blood glucose levels among healthy adults over the long term, suggests Japanese research published in the online journal BMJ Nutrition, Prevention & Health.

Avoiding meals and snacks shortly before going to bed is thought to be better for long term health. In Japan, people are advised to leave a gap of at least 2 hours between dinner and bedtime on at least three days a week.

But, based on their findings, the researchers suggest that people might be better off getting enough sleep and keeping their weight, drinking, and smoking in check to stave off the risk of 'lifestyle' illnesses, such as diabetes and heart disease that are associated with high blood glucose.

In Japan, 40-74 year olds get regular health checks, to try and lower the risk of lifestyle-related ill health, which increases with age.

This check includes a blood glucose test and an assessment of lifestyle and eating habits, such as whether people leave the recommended 2-hour gap between dinner and bedtime.

But there is no clear evidence behind this practice, say the researchers. So they decided to assess its potential impact on HbA1c levels--a measure of average blood glucose over the longer term, and considered to be a reliable indicator of future health risks.

They did this by scrutinising the health check data for 1573 healthy middle-aged and older adults with no underlying conditions associated with diabetes from one city in Okayama prefecture for the years 2012, 2013, and 2014.

Two thirds of the sample were women, and two thirds were over the age of 65 and retired.

As well as eating habits, the researchers looked at how much people smoked; their physical activity levels; weight gain since the age of 20; eating style (fast or slow); how much they drank every day; and whether they skipped breakfast.

In all, 83 (16%) of the men and 70 (7.5%) of the women fell asleep within 2 hours of eating dinner.

Full data were obtained for 1531 adults for all three years. When HbA1c levels were higher to start with, these rose over time, but overall, this rise was gradual over the three years.

And average HbA1c didn't change significantly between 2012, when it was 5.2 per cent, and 2013 and 2014, when it was 5.58 per cent, which is within the normal range. There were no significant differences between men and women either.
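
To put these percentages in context, HbA1c can be converted to an estimated average glucose (eAG) using the standard ADAG regression. The short calculation below is an illustration added here, not part of the BMJ study.

```python
# Convert HbA1c (%) to estimated average glucose using the standard
# ADAG regression: eAG (mg/dL) = 28.7 * HbA1c - 46.7.
# Illustrative only; not part of the BMJ study.

def hba1c_to_eag_mg_dl(hba1c_percent: float) -> float:
    """Estimated average glucose in mg/dL from HbA1c in percent."""
    return 28.7 * hba1c_percent - 46.7

for hba1c in (5.2, 5.58):
    eag = hba1c_to_eag_mg_dl(hba1c)
    print(f"HbA1c {hba1c}% ~ eAG {eag:.0f} mg/dL ({eag / 18.0:.1f} mmol/L)")
# e.g. HbA1c 5.2% -> ~103 mg/dL (5.7 mmol/L); 5.58% -> ~113 mg/dL (6.3 mmol/L)
```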

Weight (BMI), blood pressure, blood fats (triglycerides), physical activity levels, smoking and drinking seemed to be more strongly associated with changes in HbA1c levels than was the interval between eating and sleeping, the findings showed.

This is an observational study, and therefore can't establish cause, nor were the researchers able to gather information on the precise timing or content of the evening meal, both of which may have been influential.

The traditional Japanese diet contains a lot of vegetables and soup, and the portion sizes are small. The findings might therefore not be applicable to other nations, add the researchers.

Nevertheless, they conclude: "Contrary to general belief, ensuring a short interval between the last meal of the day and bedtime did not significantly affect HbA1c levels.

"More attention should be paid to healthy portions and food components, getting adequate sleep and avoiding smoking, alcohol consumption, and overweight, as these variables had a more profound influence on the metabolic process."

Credit: 
BMJ Group

How our brains distinguish between self-touch and touch by others

image: Scientists at Linköping University have examined what happens in various parts of the nervous system when a person is touched by another person, and compared this with corresponding self-touch.

Image: 
Thor Balkhed/Linköping University

Our brains seem to reduce sensory perception from an area of our skin when we touch it ourselves, according to a new study from Linköping University published in the journal Proceedings of the National Academy of Sciences, PNAS. The finding increases our understanding of how the brain distinguishes between being touched by another person and self-touch.

We don't wonder about our concept of "self" all the time, but the ability to distinguish between self and others is extremely important. Early in life, newborn children develop an understanding of where their own body ends, mainly through being touched by those who care for them. Problems with the self-concept, such as an impaired ability to recognise one's own actions, are common in several psychiatric disorders. Most people cannot tickle themselves, but some patients with schizophrenia can, suggesting that their brain interprets sensory perceptions from their own body differently.

Scientists at Linköping University in Sweden have examined what happens in various parts of the nervous system when a person is touched by another person, and compared this with corresponding self-touch. They have shown that the brain reduces the processing of the sensory perception when it comes from self-touch.

The skin contains sensory receptors that react to touch, pressure, heat and cold. Information about touch is transmitted from these to the spinal cord and on to the brain, where the perception is processed in several steps in different regions of the brain. The researchers involved in the new study carried out several experiments in which healthy volunteers lay in a magnetic resonance scanner that recorded images of their brain activity (fMRI). The participants were asked to stroke their arm slowly with their own hand, or were told that a researcher would stroke their arm in a similar manner. The researchers then investigated how these types of touch were linked to activity in different parts of the brain.

"We saw a very clear difference between being touched by someone else and self-touch. In the latter case, activity in several parts of the brain was reduced. We can see evidence that this difference arises as early as in the spinal cord, before the perceptions are processed in the brain", says principal author Rebecca Böhme, postdoc in the Department of Clinical and Experimental Medicine and the Center for Social and Affective Neuroscience, CSAN, at Linköping University.

The results are compatible with a theory in brain research that suggests that the brain attempts to predict the sensory consequences of everything we do. This means that it does not attach as much importance to sensory perceptions that are caused by our own bodies, since the information from these is expected. In one of the experiments, the participant's arm was touched with filaments of different thickness while the arm was simultaneously stroked either by the participant or by another person. The researchers showed that the ability to perceive the simultaneous filament touch was dampened when the participants stroked their own arms. This phenomenon may explain why we rub our arm after bumping it against a table, for example.

"Our results suggest that there is a difference as early as in the spinal cord in the processing of sensory perceptions from self-touch and those from touch by another person. This is extremely interesting. In the case of the visual system, research has shown that processing of visual impressions occurs as early as in the retina, and it would be interesting to look in more detail into how the brain modulates the processing of tactile perceptions at the level of the spinal cord", says Rebecca Böhme.

Credit: 
Linköping University

Researchers for the first time identify neurons in the human visual cortex that respond to faces

image: A new study published in Neurology identifies for the first time the neurons in the human visual cortex that selectively respond to faces. The study was carried out by Dr. Vadim Axelrod, head of the Consciousness and Cognition Laboratory at the Gonda (Goldschmied) Multidisciplinary Brain Research Center at Bar-Ilan University, in collaboration with a team from Institut du Cerveau et de la Moelle Épinière and Pitié-Salpêtrière Hospital (team leader: Professor Lionel Naccache).

The researchers showed that the neurons in the visual cortex (in the vicinity of the Fusiform Face Area) responded much more strongly to faces than to city landscapes or objects. A high response was found both for faces of famous people (e.g., Charles Aznavour, Nicolas Sarkozy, Catherine Deneuve, Louis De Funes) and for faces unfamiliar to the participant in the experiment. In an additional experiment, the neurons exhibited face-selectivity to human and animal faces that appeared within a movie (a clip from Charlie Chaplin's "The Circus").

The present results provide unique insights into human brain functioning at the cellular level during face processing. These findings also help bridge the understanding of face mechanisms across species (i.e., monkeys and humans).

Image: 
Dr. Vadim Axelrod. Credit for the individual photos in the image is given at the bottom of the image.

Imagine a world where everyone has the same face. That would be a very different world than the one we know. In our world, in which faces are different, faces convey essential information. For example, most of us can recognize a celebrity's face even if it only appears for a fraction of second or the face of an old college friend even after decades of not seeing him. Many of us can sense the mood of a significant other just based on facial expression. Often, we can establish whether a person is trustworthy by just looking at his or her face. Despite intensive research, how the brain achieves all of these behaviors is still a great mystery.

A new study published in Neurology, the medical journal of the American Academy of Neurology (issue of January 22, 2019), identifies for the first time the neurons in the human visual cortex that selectively respond to faces. The study was carried out by Dr. Vadim Axelrod, head of the Consciousness and Cognition Laboratory at the Gonda (Goldschmied) Multidisciplinary Brain Research Center at Bar-Ilan University, in collaboration with a team from Institut du Cerveau et de la Moelle Épinière and Pitié-Salpêtrière Hospital (team leader: Prof. Lionel Naccache).

The researchers showed that the neurons in the visual cortex (in the vicinity of the Fusiform Face Area) responded much more strongly to faces than to city landscapes or objects (see examples: https://youtu.be/QYJCB60FhHE). A high response was found both for faces of famous people (e.g., Charles Aznavour, Nicolas Sarkozy, Catherine Deneuve, Louis De Funes) and for faces unfamiliar to the participant in the experiment. In an additional experiment, the neurons exhibited face-selectivity to human and animal faces that appeared within a movie (a clip from Charlie Chaplin's The Circus).

"In the early 1970s Prof. Charles Gross and colleagues discovered the neurons in the visual cortex of macaque monkeys that responded to faces. In humans, face-selective activity has been extensively investigated, mainly using non-invasive tools such as functional magnetic resonance imaging (fMRI) and electrophysiology (EEG)," explains the paper's lead author, Dr. Axelrod. "Strikingly, face-neurons in posterior temporal visual cortex have never been identified before in humans. In our study, we had a very rare opportunity to record neural activity in a single patient while micro-electrodes were implanted in the vicinity of the Fusiform Face Area ? the largest and likely the most important face-selective region of the human brain."

Probably the best-known neurons that respond to faces have been the so-called "Jennifer Aniston cells" -- the neurons in the medial temporal lobe that respond to different images of a specific person (e.g., Jennifer Aniston in the original study published in Nature by Quiroga and colleagues in 2005). "But the neurons in the visual cortex that we reported here are very different from the neurons in the medial temporal lobe," emphasizes Dr. Axelrod. "First, the neurons in the visual cortex respond vigorously to any type of face, regardless of the person's identity. Second, they respond much earlier. Specifically, while in our case, a strong response could be observed within 150 milliseconds of showing the image, the 'Jennifer Aniston cells' usually take 300 milliseconds or more to respond."

The present results provide unique insights into human brain functioning at the cellular level during face processing. These findings also help bridge the understanding of face mechanisms across species (i.e., monkeys and humans). "It is really exciting," Dr. Axelrod says, "that after almost half a century since the discovery of face-neurons in macaque monkeys, it is now possible to demonstrate similar neurons in humans."

Credit: 
Bar-Ilan University

Asking patients about sexual orientation, gender identity

Patients are open to being asked about their sexual orientation and gender identity in primary care, which can help make health care more welcoming, although the stage should be set for these questions and they should include a range of options, found a study published in CMAJ (Canadian Medical Association Journal).

"Understanding the social determinants of health, including gender identity and sexual orientation, is important for providing better health care as these are linked with outcomes," says Dr. Andrew Pinto, The Upstream Lab, St. Michael's Hospital, Toronto, Ontario. "Many transgender and gender-diverse people have negative experiences in health care that affect their health."

During the study, conducted at the St. Michael's Hospital Academic Family Health Team between Dec. 1, 2013, and Mar. 31, 2016, researchers asked questions on sexual orientation and gender identity. They offered a survey to 15 221 patients, and of the 14 247 who responded, 90% answered the questions about sexual orientation and gender identity. The researchers also interviewed 27 patients of diverse age, gender identity, education level, language and immigration status to fully understand their reactions to these questions. Several themes emerged:

Patients appreciated the variety of options on the surveys to indicate gender identity and sexual orientation.

Some LGBTQ2S (lesbian, gay, bisexual, queer, two-spirited) patients were uncomfortable answering the questions, as they recalled previous negative experiences related to gender or sexual orientation. Some cisgender and heterosexual patients were also uncomfortable.

Despite a variety of responses provided on the survey, some patients did not see their identities reflected and suggested additional terms to include in future surveys.

"Our findings can inform health care organizations that wish to characterize their patients through routine collection of sociodemographic data," says Dr. Pinto. "Questions on gender identity and sexual orientation should include a range of flexible response options and definitions for clarity."

Credit: 
Canadian Medical Association Journal

Fossilized slime of 100-million-year-old hagfish shakes up vertebrate family tree

image: Tethymyxine tapirostrum is a 100-million-year-old, 12-inch-long fish embedded in a slab of Cretaceous period limestone from Lebanon, believed to be the first detailed fossil of a hagfish.

Image: 
Tetsuto Miyashita, University of Chicago.

Paleontologists at the University of Chicago have discovered the first detailed fossil of a hagfish, the slimy, eel-like carrion feeders of the ocean. The 100-million-year-old fossil helps answer questions about when these ancient, jawless fish branched off the evolutionary tree from the lineage that gave rise to modern-day jawed vertebrates, including bony fish and humans.

The fossil, named Tethymyxine tapirostrum, is a 12-inch-long fish embedded in a slab of Cretaceous period limestone from Lebanon. It fills a 100-million-year gap in the fossil record and shows that hagfish are more closely related to the blood-sucking lamprey than to other fishes. This means that both hagfish and lampreys evolved their eel-like body shape and strange feeding systems after they branched off from the rest of the vertebrate line of ancestry about 500 million years ago.

"This is a major reorganization of the family tree of all fish and their descendants. This allows us to put an evolutionary date on unique traits that set hagfish apart from all other animals," said Tetsuto Miyashita, PhD, a Chicago Fellow in the Department of Organismal Biology and Anatomy at UChicago who led the research. The findings are published this week in the Proceedings of the National Academy of Sciences.

The slimy dead giveaway

Modern-day hagfish are known for their bizarre, nightmarish appearance and unique defense mechanism. They don't have eyes, or jaws or teeth to bite with, but instead use a spiky tongue-like apparatus to rasp flesh off dead fish and whales at the bottom of the ocean. When harassed, they can instantly turn the water around them into a cloud of slime, clogging the gills of would-be predators.

This ability to produce slime is what gave away the Tethymyxine fossil. Miyashita used an imaging technology called synchrotron scanning at Stanford University to identify chemical traces of soft tissue that were left behind in the limestone when the hagfish fossilized. These soft tissues are rarely preserved, which is why there are so few examples of ancient hagfish relatives to study.

The scanning picked up a signal for keratin, the same material that makes up fingernails in humans. Keratin, as it turns out, is a crucial part of what makes the hagfish slime defense so effective. Hagfish have a series of glands along their bodies that produce tiny packets of tightly-coiled keratin fibers, lubricated by mucus-y goo. When these packets hit seawater, the fibers explode and trap the water within, turning everything into shark-choking slop. The fibers are so strong that when dried out they resemble silk threads; they're even being studied as possible biosynthetic fibers to make clothes and other materials.

Miyashita and his colleagues found more than a hundred concentrations of keratin along the body of the fossil, meaning that the ancient hagfish probably evolved its slime defense when the seas included fearsome predators such as plesiosaurs and ichthyosaurs that we no longer see today.

"We now have a fossil that can push back the origin of the hagfish-like body plan by hundreds of millions of years," Miyashita said. "Now, the next question is how this changes our view of the relationships between all these early fish lineages."

Shaking up the vertebrate family tree

Features of the new fossil help place hagfish and their relatives on the vertebrate family tree. In the past, scientists have disagreed about where they belonged, depending on how they tackled the question. Those who rely on fossil evidence alone tend to conclude that hagfish are so primitive that they are not even vertebrates. This implies that all fishes and their vertebrate descendants had a common ancestor that -- more or less -- looked like a hagfish.

But those who work with genetic data argue that hagfish and lampreys are more closely related to each other. This suggests that modern hagfish and lampreys are the odd ones out in the family tree of vertebrates. In that case, the primitive appearance of hagfish and lampreys is deceptive, and the common ancestor of all vertebrates was probably something more conventionally fish-like.

Miyashita's work reconciles these two approaches, using physical evidence of the animal's anatomy from the fossil to come to the same conclusion as the geneticists: that the hagfish and lampreys should be grouped separately from the rest of fishes.

"In a sense, this resets the agenda of how we understand these animals," said Michael Coates, PhD, professor of organismal biology and anatomy at UChicago and a co-author of the new study. "Now we have this important corroboration that they are a group apart. Although they're still part of vertebrate biodiversity, we now have to look at hagfish and lampreys more carefully, and recognize their apparent primitiveness as a specialized condition.

Paleontologists have increasingly used sophisticated imaging techniques in the past few years, but Miyashita's research is one of a handful so far to use synchrotron scanning to identify chemical elements in a fossil. While it was crucial to detect anatomical structures in the hagfish fossil, he believes it can also be a useful tool to help scientists detect paint or glue used to embellish a fossil or even outright forge a specimen. Any attempt to spice up a fossil specimen leaves chemical fingerprints that light up like holiday decorations in a synchrotron scan.

"I'm impressed with what Tetsuto has marshaled here," Coates said. "He's maxed out all the different techniques and approaches that can be applied to this fossil to extract information from it, to understand it and to check it thoroughly."

Credit: 
University of Chicago Medical Center

Greenland ice melting four times faster than in 2003, study finds

COLUMBUS, Ohio - Greenland is melting faster than scientists previously thought--and the melting will likely lead to faster sea level rise--thanks to the continued, accelerating warming of the Earth's atmosphere, a new study has found.

Scientists concerned about sea level rise have long focused on Greenland's southeast and northwest regions, where large glaciers stream iceberg-sized chunks of ice into the Atlantic Ocean. Those chunks float away, eventually melting. But a new study published Jan. 21 in the Proceedings of the National Academy of Sciences found that the largest sustained ice loss from early 2003 to mid-2013 came from Greenland's southwest region, which is mostly devoid of large glaciers.

"Whatever this was, it couldn't be explained by glaciers, because there aren't many there," said Michael Bevis, lead author of the paper, Ohio Eminent Scholar and a professor of geodynamics at The Ohio State University. "It had to be the surface mass--the ice was melting inland from the coastline."

That melting, which Bevis and his co-authors believe is largely caused by global warming, means that in the southwestern part of Greenland, growing rivers of water are streaming into the ocean during summer. The key finding from their study: Southwest Greenland, which previously had not been considered a serious threat, will likely become a major future contributor to sea level rise.

"We knew we had one big problem with increasing rates of ice discharge by some large outlet glaciers," he said. "But now we recognize a second serious problem: Increasingly, large amounts of ice mass are going to leave as meltwater, as rivers that flow into the sea."

The findings could have serious implications for coastal U.S. cities, including New York and Miami, as well as island nations that are particularly vulnerable to rising sea levels.

And there is no turning back, Bevis said.

"The only thing we can do is adapt and mitigate further global warming--it's too late for there to be no effect," he said. "This is going to cause additional sea level rise. We are watching the ice sheet hit a tipping point."

Climate scientists and glaciologists have been monitoring the Greenland ice sheet as a whole since 2002, when NASA and Germany joined forces to launch GRACE. GRACE stands for Gravity Recovery and Climate Experiment, and involves twin satellites that measure ice loss across Greenland. Data from these satellites showed that between 2002 and 2016, Greenland lost approximately 280 gigatons of ice per year, equivalent to 0.03 inches of sea level rise each year. But the rate of ice loss across the island was far from steady.
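
The quoted equivalence between ice loss and sea level rise can be sanity-checked with simple arithmetic. The sketch below assumes a commonly cited value of roughly 3.6 x 10^14 m^2 for the global ocean surface area; it is a back-of-the-envelope illustration, not a calculation from the study.

```python
# Rough check of the quoted equivalence: ~280 gigatons of ice loss per year
# versus ~0.03 inches of sea level rise per year. Approximate values only.

ICE_LOSS_GT_PER_YEAR = 280.0   # gigatons per year, as quoted above
OCEAN_AREA_M2 = 3.61e14        # approximate global ocean surface area, m^2
WATER_DENSITY = 1000.0         # kg per m^3 of meltwater

mass_kg = ICE_LOSS_GT_PER_YEAR * 1e12      # 1 Gt = 1e12 kg
volume_m3 = mass_kg / WATER_DENSITY        # added water volume
rise_m = volume_m3 / OCEAN_AREA_M2         # spread evenly over the oceans
rise_inches = rise_m / 0.0254

print(f"{rise_m * 1000:.2f} mm per year (~{rise_inches:.3f} inches)")
# ~0.78 mm per year, i.e. about 0.03 inches, consistent with the figure quoted above
```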

Bevis' team used data from GRACE and from GPS stations scattered around Greenland's coast to identify changes in ice mass. The patterns they found show an alarming trend--by 2012, ice was being lost at nearly four times the rate that prevailed in 2003. The biggest surprise: This acceleration was focused in southwest Greenland, a part of the island that previously hadn't been known to be losing ice that rapidly.

Bevis said a natural weather phenomenon--the North Atlantic Oscillation, which brings warmer air to West Greenland, as well as clearer skies and more solar radiation--was building on man-made climate change to cause unprecedented levels of melting and runoff. Global atmospheric warming enhances summertime melting, especially in the southwest. The North Atlantic Oscillation is a natural--if erratic--cycle that causes ice to melt under normal circumstances. When combined with man-made global warming, though, the effects are supercharged.

"These oscillations have been happening forever," Bevis said. "So why only now are they causing this massive melt? It's because the atmosphere is, at its baseline, warmer. The transient warming driven by the North Atlantic Oscillation was riding on top of more sustained, global warming."

Bevis likened the melting of Greenland's ice to coral bleaching: Once the ocean's water hits a certain temperature, coral in that region begins to bleach. There have been three global coral bleaching events. The first was caused by the 1997-98 El Niño, and the other two events by the two subsequent El Niños. But El Niño cycles have been happening for thousands of years--so why have they caused global coral bleaching only since 1997?

"What's happening is sea surface temperature in the tropics is going up; shallow water gets warmer and the air gets warmer," Bevis said. "The water temperature fluctuations driven by an El Niño are riding this global ocean warming. Because of climate change, the base temperature is already close to the critical temperature at which coral bleaches, so an El Niño pushes the temperature over the critical threshold value. And in the case of Greenland, global warming has brought summertime temperatures in a significant portion of Greenland close to the melting point, and the North Atlantic Oscillation has provided the extra push that caused large areas of ice to melt".

Before this study, scientists understood Greenland to be one of the Earth's major contributors to sea-level rise--mostly because of its glaciers. But these new findings, Bevis said, show that scientists need to be watching the island's snowpack and ice fields more closely, especially in and near southwest Greenland.

The GPS network now in place monitors the margin of Greenland's ice sheet around most of its perimeter, but it is very sparse in the southwest, so it will need to be densified there, given these new findings.

"We're going to see faster and faster sea level rise for the foreseeable future," Bevis said. "Once you hit that tipping point, the only question is: How severe does it get?"

Credit: 
Ohio State University

Youthful cognitive ability strongly predicts mental capacity later in life

Early adult general cognitive ability (GCA) -- the diverse set of skills involved in thinking, such as reasoning, memory and perception -- is a stronger predictor of cognitive function and reserve later in life than other factors, such as higher education, occupational complexity or engaging in late-life intellectual activities, report researchers in a new study publishing January 21 in PNAS.

Higher education and late-life intellectual activities, such as doing puzzles, reading or socializing, have all been associated with reduced risk of dementia and sustained or improved cognitive reserve. Cognitive reserve is the brain's ability to improvise and find alternate ways of getting a job done and may help people compensate for other changes associated with aging.

An international team of scientists, led by scientists at University of California San Diego School of Medicine, sought to address a "chicken or egg" conundrum posed by these associations. Does being in a more complex job help maintain cognitive abilities, for example, or do people with greater cognitive abilities tend to be in more complex occupations?

The researchers evaluated more than 1,000 men participating in the Vietnam Era Twin Study of Aging. Although all were veterans, nearly 80 percent of the participants reported no combat experience. All of the men, now in their mid-50s to mid-60s, took the Armed Forces Qualification Test at an average age of 20. The test is a measure of GCA. As part of the study, researchers assessed participants' performance in late midlife, using the same GCA measure, plus assessments in seven cognitive domains, such as memory, abstract reasoning and verbal fluency.

They found that GCA at age 20 accounted for 40 percent of the variance in the same measure at age 62, and approximately 10 percent of the variance in each of the seven cognitive domains. After accounting for GCA at age 20, the authors concluded, other factors had little effect. For example, lifetime education, complexity of job and engagement in intellectual activities each accounted for less than 1 percent of variance at average age 62.
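
The comparisons above are statements about percent of variance explained (R^2). As a rough illustration of that statistic -- using simulated numbers, not the study's data -- consider the following sketch.

```python
# Illustration of "percent of variance explained" (R^2) on synthetic data.
# This is a toy example; it does not use or reproduce the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gca_age20 = rng.normal(size=n)     # simulated young-adult GCA
education = rng.normal(size=n)     # simulated education measure
# Simulated late-life GCA driven mostly by young-adult GCA:
gca_age62 = 0.63 * gca_age20 + 0.05 * education + rng.normal(scale=0.77, size=n)

def r_squared(x, y):
    """Proportion of variance in y explained by a single predictor x."""
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"GCA at 20: R^2 = {r_squared(gca_age20, gca_age62):.2f}")   # ~0.40
print(f"Education: R^2 = {r_squared(education, gca_age62):.2f}")   # near 0
```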

"The findings suggest that the impact of education, occupational complexity and engagement in cognitive activities on later life cognitive function likely reflects reverse causation," said first author William S. Kremen, PhD, professor in the Department of Psychiatry at UC San Diego School of Medicine. "In other words, they are largely downstream effects of young adult intellectual capacity."

In support of that idea, researchers found that age 20 GCA, but not education, correlated with the surface area of the cerebral cortex at age 62. The cerebral cortex is the thin, outer region of the brain (gray matter) responsible for thinking, perceiving, producing and understanding language.

The authors emphasized that education is clearly of great value and can enhance a person's overall cognitive ability and life outcomes. Comparing their findings with other research, they speculated that the role of education in increasing GCA takes place primarily during childhood and adolescence when there is still substantial brain development.

However, they said that by early adulthood, education's effect on GCA appears to level off, though it continues to produce other beneficial effects, such as broadening knowledge and expertise.

Kremen said remaining cognitively active in later life is beneficial, but "our findings suggest we should look at this from a lifespan perspective. Enhancing cognitive reserve and reducing later life cognitive decline may really need to begin with more access to quality childhood and adolescent education."

The researchers said additional investigations would be needed to fully confirm their inferences, such as a single study with cognitive testing at different times throughout childhood and adolescence.

Credit: 
University of California - San Diego

Managing gender dysphoria in adolescents: A practical guide for family physicians

As a growing number of adolescents identify as transgender, a review aims to help primary care physicians care for this vulnerable group and its unique needs. The review, published in CMAJ (Canadian Medical Association Journal), looks at emerging evidence for managing gender dysphoria, including social and medical approaches for youth who are transitioning.

"[T]he hallmark of care will remain a thoughtful, affirming, well-reasoned individualized approach that attempts to maximize support for this vulnerable population, as youth and their caregivers make complex and difficult decisions," writes Dr. Joseph Bonifacio, Department of Pediatrics, St. Michael's Hospital, Toronto, Ontario, with coauthors.

Gender dysphoria is "the distress experienced by an individual when their gender identity and their gender assigned at birth are discordant." Although precise numbers are unknown, studies from other countries indicate that 1.2% to 4.1% of adolescents identify as a different gender identify from their birth gender, a rate higher than in the adult population.

A recent Canadian study found that less than half of transgender youth are comfortable discussing their health care needs with their family doctor.

"Ideally, the approach to youth with gender dysphoria revolves around collaborative decision-making among the youth, family or guardians, and care providers," writes Dr. Bonifacio, with coauthors. "The youth's voice is always paramount."

The review follows the Endocrine Society's guideline recommendation that medication to suppress puberty, which allows youth to explore their changing gender identity, should not be used before puberty.

"Some youth find that their dysphoria abates as puberty starts, making it important to allow initial pubertal changes to occur," writes Dr. Bonifacio. "On the other hand, some youth may find their gender dysphoria increases with puberty, corroborating their need for further care."

As this is a relatively new field, there are gaps in the research base, such as data on the number of nonbinary youth who identify outside the male-female binary and on adolescents requesting surgery. The authors also note that ethnocultural diversity is underrepresented in study populations and in their clinics in the large city of Toronto, and needs to be better understood.

"[A]ccessing optimal individualized care may be difficult for certain populations, making it important that generalists are supported to increase their capacity to care for youth with gender dysphoria and to liaise with other professions to support families," write the authors.

The review also includes quick-reference boxes of definitions, criteria to diagnose gender dysphoria and resources for children, caregivers and clinicians.

Credit: 
Canadian Medical Association Journal

Mice pass on brain benefits of enriched upbringing to offspring

image: Figure 1. Ocular dominance (OD)-plasticity is preserved in primary visual cortex (V1) of old enriched environment (EE)-mice

Image: 
Kalogeraki, Yusifov, and Löwel, eNeuro (2019)

Mice growing up in a basic cage maintain lifelong visual cortex plasticity if their parents were raised in an environment that promoted social interaction and physical and mental stimulation, according to a multigenerational study published in eNeuro. The research suggests life experience may be transmitted from one generation to the next through a combination of changes in gene expression and parental caretaking behavior.

Blocking visual input to one eye of adult mice leads to a rewiring of the visual cortex to prioritize input from the open eye. Siegrid Löwel and colleagues first confirmed that this plasticity declines over time in mice housed in standard cages while it is preserved throughout life in mice raised in an enriched environment -- in this case a large, two-story cage with separate living and eating areas connected by a ladder, regularly changed mazes, and a slide.

The researchers then bred the mice to create three experimental groups of offspring, all of which were raised in standard cages. Despite being raised in the same impoverished environment, mice whose parents -- particularly mothers -- were raised in the enriched environment maintained lifelong plasticity in the visual cortex. These findings emphasize the importance of documenting rearing conditions of experimental animals across generations.

Credit: 
Society for Neuroscience

Implantable device curbs seizures and improves cognition in epileptic rats

image: GDNF releasing cells reduce epilepsy induced cell death. In a normal hippocampus (A) no overt sign of cell death can be observed, whereas many dying cells (stained in green) are observed in the epileptic hippocampus (B). GDNF releasing cells effectively attenuate cell death (C).

Image: 
Giovanna Paolone

A protein-secreting device implanted into the hippocampus of epileptic rats reduces seizures by 93 percent in three months, finds preclinical research published in JNeurosci. These results support ongoing development of this technology and its potential translation into a new treatment for epilepsy.

Motivated by an unmet need for effective and well-tolerated epilepsy therapies, Giovanna Paolone and colleagues of the University of Ferrara, Italy and of Gloriana Therapeutics, Inc. (Providence, RI) investigated the effects of the Gloriana targeted cellular delivery system for glial cell line-derived neurotrophic factor (GDNF) -- a protein recent research suggests may help suppress epileptic activity.

In addition to quickly and progressively reducing seizures in male rats -- by 75 percent within two weeks -- the researchers found their device improved rats' anxiety-like symptoms and their performance on an object recognition task, indicating improvement in cognition.

The treatment also corrected abnormalities in brain anatomy associated with epilepsy. These effects persisted even after the device was removed, indicating this approach may modify the disease progression.

Credit: 
Society for Neuroscience

Brain training app improves users' concentration, study shows

image: Decoder brain training game on Peak.

Image: 
Peak

A new 'brain training' game designed by researchers at the University of Cambridge improves users' concentration, according to new research published today. The scientists behind the venture say this could provide a welcome antidote to the daily distractions that we face in a busy world.

In their book, The Distracted Mind: Ancient Brains in a High-Tech World, Adam Gazzaley and Larry D. Rosen point out that with the emergence of new technologies requiring rapid responses to emails and texts and working on multiple projects simultaneously, young people, including students, are having more problems with sustaining attention and frequently become distracted. This difficulty in focussing attention and concentrating is made worse by stress from a global environment that never sleeps and also frequent travel leading to jetlag and poor quality sleep.

"We've all experienced coming home from work feeling that we've been busy all day, but unsure what we actually did," says Professor Barbara Sahakian from the Department of Psychiatry. "Most of us spend our time answering emails, looking at text messages, searching social media, trying to multitask. But instead of getting a lot done, we sometimes struggle to complete even a single task and fail to achieve our goal for the day. Then we go home, and even there we find it difficult to 'switch off' and read a book or watch TV without picking up our smartphones. For complex tasks we need to get in the 'flow' and stay focused."

In recent years, as smartphones have become ubiquitous, there has been a growth in the number of so-called 'brain training' apps that claim to improve cognitive skills such as memory, numerical skills and concentration.

Now, a team from the Behavioural and Clinical Neuroscience Institute at the University of Cambridge, has developed and tested 'Decoder', a new game that is aimed at helping users improve their attention and concentration. The game is based on the team's own research and has been evaluated scientifically.

In a study published today in the journal Frontiers in Behavioural Neuroscience, Professor Sahakian and colleague Dr George Savulich have demonstrated that playing Decoder on an iPad for eight hours over one month improves attention and concentration. This form of attention activates a frontal-parietal network in the brain.

In their study, the researchers divided 75 healthy young adults into three groups: one group received Decoder, one control group played Bingo for the same amount of time and a second control group received no game. Participants in the first two groups were invited to attend eight one-hour sessions over the course of a month during which they played either Decoder or Bingo under supervision.

All 75 participants were tested at the start of the trial and then after four weeks using the CANTAB Rapid Visual Information Processing test (RVP). CANTAB RVP has been demonstrated in previously published studies to be a highly sensitive test of attention/concentration.

During the test, participants are asked to detect sequences of digits (e.g. 2-4-6, 3-5-7, 4-6-8). A white box appears in the middle of the screen, in which digits from 2 to 9 appear in pseudo-random order at a rate of 100 digits per minute. Participants are instructed to press a button every time they detect a sequence. The duration of the test is approximately five minutes.
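
As a rough illustration of the detection task described above, the sketch below generates a pseudo-random digit stream and counts the target sequences a participant would have to spot; it is a simplified mock-up, not the CANTAB RVP software.

```python
# Simplified sketch of an RVP-style task: scan a pseudo-random digit stream
# for target ascending sequences such as 2-4-6, 3-5-7 and 4-6-8.
# Illustrative only; not the CANTAB implementation.
import random

TARGETS = {(2, 4, 6), (3, 5, 7), (4, 6, 8)}
RATE_PER_MINUTE = 100
DURATION_MINUTES = 5

random.seed(1)
stream = [random.randint(2, 9) for _ in range(RATE_PER_MINUTE * DURATION_MINUTES)]

# A target is "present" whenever the last three digits form one of the sequences.
hits = [i for i in range(2, len(stream))
        if tuple(stream[i - 2:i + 1]) in TARGETS]

print(f"{len(stream)} digits presented, {len(hits)} target sequences to detect")
```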

Results from the study showed a significant difference in attention as measured by the RVP: those who played Decoder performed better than those who played Bingo and those who played no game. The difference in performance was significant and meaningful, as it was comparable to the effects seen with stimulants such as methylphenidate, or with nicotine. The former, also known as Ritalin, is a common treatment for Attention Deficit Hyperactivity Disorder (ADHD).

To ensure that Decoder improved focussed attention and concentration without impairing the ability to shift attention, the researchers also tested participants on the Trail Making Test, a commonly used neuropsychological test of attentional shifting in which participants first attend to numbers, then shift their attention to letters, and then shift back to numbers. Performance on this test also improved after Decoder training. Additionally, participants enjoyed playing the game, and motivation remained high throughout the 8 hours of gameplay.

Professor Sahakian commented: "Many people tell me that they have trouble focussing their attention. Decoder should help them improve their ability to do this. In addition to healthy people, we hope that the game will be beneficial for patients who have impairments in attention, including those with ADHD or traumatic brain injury. We plan to start a study with traumatic brain injury patients this year."

Dr Savulich added: "Many brain training apps on the market are not supported by rigorous scientific evidence. Our evidence-based game is developed interactively and the games developer, Tom Piercy, ensures that it is engaging and fun to play. The level of difficulty is matched to the individual player and participants enjoy the challenge of the cognitive training."

The game has now been licensed through Cambridge Enterprise, the technology transfer arm of the University of Cambridge, to app developer Peak, who specialise in evidence-based 'brain training' apps. This will allow Decoder to become accessible to the public. Peak has developed a version for Apple devices and is releasing the game today as part of the Peak Brain Training app. Peak Brain Training is available from the App Store for free and Decoder will be available to both free and pro users as part of their daily workout. The company plans to make a version available for Android devices later this year.

"Peak's version of Decoder is even more challenging than our original test game, so it will allow players to continue to gain even larger benefits in performance over time," says Professor Sahakian. "By licensing our game, we hope it can reach a wide audience who are able to benefit by improving their attention."

Xavier Louis, CEO of Peak, adds: "At Peak we believe in an evidenced-based approach to brain training. This is our second collaboration with Professor Sahakian and her work over the years shows that playing games can bring significant benefits to brains. We are pleased to be able to bring Decoder to the Peak community, to help people overcome their attention problems."

Credit: 
University of Cambridge

Enhanced NMR reveals chemical structures in a fraction of the time

MIT researchers have developed a way to dramatically enhance the sensitivity of nuclear magnetic resonance spectroscopy (NMR), a technique used to study the structure and composition of many kinds of molecules, including proteins linked to Alzheimer's and other diseases.

Using this new method, scientists should be able to analyze in mere minutes structures that would previously have taken years to decipher, says Robert Griffin, the Arthur Amos Noyes Professor of Chemistry. The new approach, which relies on short pulses of microwave power, could allow researchers to determine structures for many complex proteins that have been difficult to study until now.

"This technique should open extensive new areas of chemical, biological, materials, and medical science which are presently inaccessible," says Griffin, the senior author of the study.

MIT postdoc Kong Ooi Tan is the lead author of the paper, which appears in Science Advances on Jan. 18. Former MIT postdocs Chen Yang and Guinevere Mathies, and Ralph Weber of Bruker BioSpin Corporation, are also authors of the paper.

Enhanced sensitivity

Traditional NMR uses the magnetic properties of atomic nuclei to reveal the structures of the molecules containing those nuclei. By using a strong magnetic field that interacts with the nuclear spins of hydrogen and other isotopically labelled atoms such as carbon or nitrogen, NMR measures a trait known as chemical shift for these nuclei. Those shifts are unique for each atom and thus serve as fingerprints, which can be further exploited to reveal how those atoms are connected.

The sensitivity of NMR depends on the nuclei's polarization -- a measure of the difference between the populations of "up" and "down" nuclear spins in each spin ensemble. The greater the polarization, the greater the sensitivity that can be achieved. Typically, researchers try to increase the polarization of their samples by applying a stronger magnetic field, up to 35 tesla.
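
For spin-1/2 nuclei, this thermal polarization follows the standard Boltzmann expression P = tanh(gamma * hbar * B / (2 * k * T)). The snippet below evaluates it for protons and for free electrons at room temperature to show why the electron polarization exploited by DNP (described next) is so much larger; the numbers are illustrative and are not taken from the paper.

```python
# Thermal (Boltzmann) polarization of spin-1/2 particles:
# P = tanh(gamma * hbar * B / (2 * k_B * T)). Illustrative numbers only.
import math

HBAR = 1.0546e-34      # J*s
K_B = 1.381e-23        # J/K
GAMMA_1H = 2.675e8     # proton gyromagnetic ratio, rad/s/T
GAMMA_E = 1.761e11     # free-electron gyromagnetic ratio, rad/s/T

def polarization(gamma: float, field_tesla: float, temp_kelvin: float) -> float:
    return math.tanh(gamma * HBAR * field_tesla / (2 * K_B * temp_kelvin))

B, T = 35.0, 300.0
print(f"1H polarization at {B} T, {T} K:       {polarization(GAMMA_1H, B, T):.2e}")  # ~1e-4
print(f"electron polarization at {B} T, {T} K: {polarization(GAMMA_E, B, T):.2e}")   # ~660x larger
# DNP transfers part of this much larger electron polarization to the nuclei.
```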

Another approach, which Griffin and Richard Temkin of MIT's Plasma Science and Fusion Center have been developing over the past 25 years, further enhances the polarization using a technique called dynamic nuclear polarization (DNP). This technique involves transferring polarization from the unpaired electrons of free radicals to hydrogen, carbon, nitrogen, or phosphorus nuclei in the sample being studied. This increases the polarization and makes it easier to discover the molecule's structural features.

DNP is usually performed by continuously irradiating the sample with high-frequency microwaves, using an instrument called a gyrotron. This improves NMR sensitivity by about 100-fold. However, this method requires a great deal of power and doesn't work well at higher magnetic fields that could offer even greater resolution improvements.

To overcome that problem, the MIT team came up with a way to deliver short pulses of microwave radiation, instead of continuous microwave exposure. By delivering these pulses at a specific frequency, they were able to enhance polarization by a factor of up to 200. This is similar to the improvement achieved with traditional DNP, but it requires only 7 percent of the power, and unlike traditional DNP, it can be implemented at higher magnetic fields.

"We can transfer the polarization in a very efficient way, through efficient use of microwave irradiation," Tan says. "With continuous-wave irradiation, you just blast microwave power, and you have no control over phases or pulse length."

Saving time

With this improvement in sensitivity, samples that would previously have taken nearly 110 years to analyze could be studied in a single day, the researchers say. In the Science Advances paper, they demonstrated the technique by using it to analyze standard test molecules such as a glycerol-water mixture, but they now plan to use it on more complex molecules.
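
That time saving follows from the usual signal-averaging argument: NMR signal-to-noise grows with the square root of the number of scans, so the measurement time needed for a fixed signal-to-noise scales inversely with the square of the sensitivity enhancement. The arithmetic below illustrates that scaling; it is not the authors' own calculation.

```python
# NMR signal averaging: SNR grows as sqrt(number of scans), so the measurement
# time for a given SNR scales as 1 / (sensitivity enhancement)^2.
# Illustrative arithmetic only.

enhancement = 200.0                 # polarization enhancement quoted above
time_without_dnp_years = 110.0      # hypothetical measurement time without DNP

speedup = enhancement ** 2          # ~40,000x fewer scans needed
time_with_dnp_days = time_without_dnp_years * 365.25 / speedup

print(f"Speedup: {speedup:,.0f}x")
print(f"~{time_with_dnp_days:.1f} day(s) instead of ~{time_without_dnp_years:.0f} years")
```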

One major area of interest is the amyloid beta protein that accumulates in the brains of Alzheimer's patients. The researchers also plan to study a variety of membrane-bound proteins, such as ion channels and rhodopsins, which are light-sensitive proteins found in bacterial membranes as well as the human retina. Because the sensitivity is so great, this method can yield useful data from a much smaller sample size, which could make it easier to study proteins that are difficult to obtain in large quantities.

Credit: 
Massachusetts Institute of Technology

Targeting 'hidden pocket' for treatment of stroke and seizure

image: A 93-series chemical compound joins with a neuron's NMDA receptor. Compounds like this have a high affinity for the receptor due to a unique motif that is drawn into a hidden pocket (illustrated by the dotted line) when in an acidic environment.

Image: 
Furukawa Lab/CSHL

Cold Spring Harbor, NY -- The ideal drug is one that only affects the exact cells and neurons it is designed to treat, without unwanted side effects. This concept is especially important when treating the delicate and complex human brain. Now, scientists at Cold Spring Harbor Laboratory have revealed a mechanism that could lead to this kind of long-sought specificity for treatments of strokes and seizures.

According to Professor Hiro Furukawa, the senior scientist who oversaw this work, "it really comes down to chemistry."

When the human brain is injured, such as during a stroke, parts of the brain begin to acidify. This acidification leads to the rampant release of glutamate.

"We suddenly get more glutamate all over the place that hits the NMDA receptor and that causes the NMDA receptor to start firing quite a lot," explains Furukawa.

In a healthy brain, the NMDA (N-methyl-D-aspartate) receptor is responsible for controlling the flow of electrically charged atoms, or ions, in and out of a neuron. The "firing" of these signals is crucial for learning and memory formation. However, overactive neurons can lead to disastrous consequences. Abnormal NMDA receptor activities have been observed in various neurological diseases and disorders, such as stroke, seizure, depression, and Alzheimer's disease, and in individuals born with genetic mutations.

Furukawa's team, in collaboration with scientists at Emory University, looked for a way to prevent over-firing NMDA receptors without affecting normal regions of the brain.

Previous work had identified promising compounds, called the 93-series, suited to this purpose. Eager to join with the NMDA receptor in an acidic environment, these compounds downregulate the receptor activity, even in the presence of glutamate, thereby preventing excessive neuronal firing.

However, the 93-series compounds sometimes cause the unwanted consequence of inhibiting the NMDA receptors in healthy parts of the brain. That's why Furukawa and his colleagues set out to determine how they could improve upon the unique features of the 93-series.

Their latest results are detailed in Nature Communications.

Using a method known as X-ray crystallography, the researchers were able to see that a motif on the 93-series compound slots into a tiny, never-before-noticed pocket within the NMDA receptor. Experimentation showed that this pocket is particularly sensitive to the pH around it.

"Now that we see the pH-sensitive pocket within NMDA receptors, we can suggest a different scaffold," Furukawa explained. "We can redesign the 93-series chemical compound--let's call it 94-series--in such a way that it can more effectively fit to that pocket and a higher pH sensitivity can be obtained. So, we're basically just starting our effort to do that."

Credit: 
Cold Spring Harbor Laboratory

Genetic variants implicated in development of schizophrenia

Genetic variants which prevent a neurotransmitter receptor from working properly have been implicated in the development of schizophrenia, according to research by the UCL Genetics Institute.

The N-methyl-D-aspartate receptor (NMDAR) is a protein which normally carries signals between brain cells in response to a neurotransmitter called glutamate. Previous research has shown that symptoms of schizophrenia can be caused by drugs which block NMDAR or by antibodies which attack it.

Genetic studies have also suggested that molecules associated with NMDAR might be involved in the development of schizophrenia.

"These results, and others which are emerging, really focus attention on abnormalities in NMDAR functioning as a risk factor for schizophrenia. Given all the pre-existing evidence it seems tempting to conclude that genetic variants which by one means or another reduce NMDAR activity could increase the risk of schizophrenia," said Professor David Curtis (UCL Genetics, Evolution & Environment), the psychiatrist who carries out the research.

For the current study, published today in Psychiatric Genetics, the DNA sequences of over 4,000 people with schizophrenia and 5,000 controls were used to study variants in the three genes which code for NMDAR (GRIN1, GRIN2A and GRIN2B) and a fourth (FYN), for a protein called Fyn which controls NMDAR functioning.

By comparing variants to the normal DNA sequence, it was possible to predict the specific rare variants which would either prevent each gene from being read or which would produce a change in the sequence of amino acids it coded for such that the protein product would not function correctly.

The investigation revealed an excess of such disruptive and damaging variants in FYN, GRIN1 and GRIN2B among the people with schizophrenia.

While the numbers of variants involved are too small for firm conclusions to be drawn, the results are consistent with previous evidence that impaired NMDAR functioning can produce symptoms of schizophrenia. They also support the hypothesis that rare genetic variants which lead to abnormal NMDAR function could increase the risk of developing schizophrenia in 0.5% of cases.

"For many years we've been aware that drugs such as phencyclidine, which blocks the receptor, can cause symptoms just like those which occur in schizophrenia. More recently it's been recognised that sometimes people produce antibodies which attack this receptor and again they have similar symptoms," said Professor Curtis.

Large genetic studies have increasingly accumulated evidence suggesting that there is an association between schizophrenia and genes associated with NMDAR but these typically involve very large numbers of genes in a rather non-specific way.

The UCL researchers focused closely on just four genes and used computer programs to predict the effects of rare variants in these genes. When they did this, they found that more of the variants predicted to impair functioning are found in the people with schizophrenia than people without schizophrenia.

For example, variants in the gene for Fyn were seen in 14 schizophrenia cases and three controls. When the team looked at the predicted effect on the protein, they saw that all three of the variants in controls affected a region with no known function whereas 10 of the variants in schizophrenia cases occurred in functional domains of the protein.
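
Carrier counts like these are commonly compared between cases and controls with a Fisher's exact test. The sketch below applies one to the FYN counts quoted above, assuming roughly 4,000 cases and 5,000 controls as described earlier; it is an illustrative calculation, not the authors' actual statistical analysis.

```python
# Illustrative case-control comparison of rare-variant carrier counts using
# Fisher's exact test. Assumes ~4,000 cases and ~5,000 controls, as stated
# above; this is a sketch, not the authors' exact analysis.
from scipy.stats import fisher_exact

cases_total, controls_total = 4000, 5000
carriers_cases, carriers_controls = 14, 3   # FYN variant carriers quoted above

table = [[carriers_cases, cases_total - carriers_cases],
         [carriers_controls, controls_total - carriers_controls]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3g}")
```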

As the variants are rare, the researchers plan to follow up by studying a larger sample set. Professor Curtis is part of a collaboration which will look at DNA sequence data from over 30,000 subjects with schizophrenia. They also plan to study the effects of these specific variants in model systems, such as cultures of nerve cells, to precisely characterise their effects on cell function.

Professor Curtis concluded, "Currently available medications for schizophrenia are not directed at NMDAR. However if we can conclusively demonstrate ways in which its function is abnormal then this should further stimulate attempts to develop new drugs which target this system, hopefully leading to safer and more effective treatments."

Credit: 
University College London