Immuno-PET precisely diagnoses IBD inflammation without invasive procedures

video: Dr. Patrick Hughes, from the University of Adelaide in Australia, discusses new research showing that inflammation in inflammatory bowel disease (IBD) can be quickly and precisely diagnosed using a new type of nuclear medicine scan, which also has potential for treating IBD and other inflammatory diseases. The research is featured in The Journal of Nuclear Medicine (read more at http://ow.ly/TOif30oXjV6).

Image: 
Dmochowska N, Tieu W, Keller MD, et al.

Inflammation in inflammatory bowel disease (IBD) can be quickly and precisely diagnosed using a new type of nuclear medicine scan, according to research published in the June issue of The Journal of Nuclear Medicine. Using immuno-positron emission tomography (immuno-PET) to image monoclonal antibodies directed against specific innate immune cell markers, investigators were able to effectively assess IBD in murine models. In addition, immuno-PET has high potential for theranostic diagnosis and precision treatment of IBD and other inflammatory diseases.

IBD, encompassing Crohn's disease and ulcerative colitis, is characterized by chronic relapsing and remitting inflammation of the lower gastrointestinal tract. According to the Centers for Disease Control and Prevention, approximately three million adults in the United States live with IBD. These diseases require constant monitoring due to the flare-up of symptoms and the increased risk of developing colon cancer.

"The diagnosis and maintenance of IBD is heavily reliant on endoscopy, which is invasive and does not provide real-time information regarding the role of specific mediators and drug targets," said Patrick A. Hughes, PhD, head of the Gastrointestinal Neuro-immune Interactions Group, Centre for Nutrition and Gastrointestinal Disease at the University of Adelaide in Australia. "There is a need to develop less invasive tools that provide quick diagnostic information for IBD. This is particularly relevant when the area of inflammation is beyond the reach of the endoscope, such as difficult-to-access regions of the small intestine, and in patient populations that have increased risk in endoscopy, including pediatrics and hemophiliacs."

Activation of the innate immune system is intimately linked to inflammation in IBD. Innate immune cells are marked by the cell surface receptor CD11b, and they secrete IL-1β to generate immune responses. In the study, the authors compared the ability of immuno-PET using 89Zr-conjugated antibodies against IL-1β and CD11b with that of 18F-FDG PET and magnetic resonance imaging (MRI) to detect inflammation in colitic mice.

To evaluate the imaging methods, mice with ulcerative colitis were assessed daily for signs of acute colitis. Healthy mice were age- and weight-matched to the colitic mice, and comparisons were made regarding body weight loss, colon shortening and epithelial barrier permeability. Researchers then measured the levels of IL-1β and CD11b concentration, determining that the colitic mice had increases in these innate immune mediators.

Immuno-PET imaging revealed that in colitic mice, distal colonic uptake of 89Zr-α-IL-1β was increased approximately three-fold, uptake of 89Zr-α-CD11b approximately five-fold and uptake of 18F-FDG approximately 3.5-fold. MRI analysis showed an approximately two-fold increase in the T2 signal intensity ratio in colitic mice. A robust positive correlation was observed between colonic uptake of 18F-FDG and percentage body weight loss, with a strong trend toward a similar effect for 89Zr-α-IL-1β, but not for 89Zr-α-CD11b. MRI measures of inflammation, however, did not correlate with percentage weight loss.
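As an illustration of the arithmetic behind fold-changes and uptake-severity correlations like these, here is a minimal Python sketch. The per-mouse values are hypothetical placeholders, not the study's data; only the fold-changes quoted above come from the article.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-mouse tracer uptake values -- NOT the study's data.
uptake_colitic = np.array([4.2, 5.1, 3.8, 4.9])  # e.g., 89Zr-labeled anti-IL-1beta, colitic mice
uptake_healthy = np.array([1.4, 1.5, 1.2, 1.6])  # age- and weight-matched healthy controls

# Fold-change: mean uptake in colitic mice relative to healthy controls.
fold_change = uptake_colitic.mean() / uptake_healthy.mean()
print(f"fold-change: {fold_change:.1f}x")

# Correlation of uptake with disease severity (percent body weight loss).
weight_loss_pct = np.array([8.0, 11.5, 6.2, 10.1])  # hypothetical severity values
r, p = pearsonr(uptake_colitic, weight_loss_pct)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```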

In addition, an ex vivo analysis indicated that uptake of 89Zr-α-IL-1β and 89Zr-α-CD11b was increased throughout the entire gastrointestinal tract in colitic mice as compared to control mice. 89Zr-α-IL-1β was distributed mainly in the gastrointestinal tract, while 89Zr-α-CD11b was distributed across more tissue types. Furthermore, 89Zr-α-IL-1β uptake correlated with colitis severity, whereas 89Zr-α-CD11b uptake did not.

The comparison demonstrates the strong potential of immuno-PET of innate immune mediators for diagnosing and monitoring IBD, Hughes explains. "These findings are important for inflammatory diseases in general, as many of the biologics used to treat these diseases are directed against specific immune mediators; however, these drugs are also associated with primary and secondary non-response," he adds. "Future refinements will lead to theranostic applications where the efficacy of drugs can be rapidly and non-invasively determined, leading to precision treatment not only in IBD, but also in other inflammatory diseases."

Credit: 
Society of Nuclear Medicine and Molecular Imaging

9,000 years ago, a community with modern urban problems

image: Excavations in a number of Neolithic buildings at Çatalhöyük.

Image: 
Scott Haddow

COLUMBUS, Ohio - Some 9,000 years ago, residents of one of the world's first large farming communities were also among the first humans to experience some of the perils of modern urban living.

Scientists studying the ancient ruins of Çatalhöyük, in modern Turkey, found that its inhabitants - 3,500 to 8,000 people at its peak - experienced overcrowding, infectious diseases, violence and environmental problems.

In a paper published June 17, 2019, in the Proceedings of the National Academy of Sciences, an international team of bioarchaeologists reports new findings built on 25 years of study of human remains unearthed at Çatalhöyük.

The results paint a picture of what it was like for humans to move from a nomadic hunting and gathering lifestyle to a more sedentary life built around agriculture, said Clark Spencer Larsen, lead author of the study and professor of anthropology at The Ohio State University.

"Çatalhöyük was one of the first proto-urban communities in the world and the residents experienced what happens when you put many people together in a small area for an extended time," Larsen said.

"It set the stage for where we are today and the challenges we face in urban living."

Çatalhöyük, in what is now south-central Turkey, was inhabited from about 7100 to 5950 B.C. First excavated in 1958, the site measures 13 hectares (about 32 acres) with nearly 21 meters of deposits spanning 1,150 years of continuous occupation.

Larsen, who began fieldwork at the site in 2004, was one of the leaders of the team that studied human remains as part of the larger Çatalhöyük Research Project, directed by Ian Hodder of Stanford University. A co-author of the PNAS paper, Christopher Knüsel of Université de Bordeaux in France, was co-leader of the bioarchaeology team with Larsen.

Fieldwork at Çatalhöyük ended in 2017 and the PNAS paper represents the culmination of the bioarchaeology work at the site, Larsen said.

Çatalhöyük began as a small settlement about 7100 B.C., likely consisting of a few mud-brick houses in what researchers call the Early period. It grew to its peak in the Middle period of 6700 to 6500 B.C., before the population declined rapidly in the Late period. Çatalhöyük was abandoned about 5950 B.C.

Farming was always a major part of life in the community. The researchers analyzed a chemical signature in the bones - called stable carbon isotope ratios - to determine that residents ate a diet heavy on wheat, barley and rye, along with a range of non-domesticated plants.

Stable nitrogen isotope ratios were used to document protein in their diets, which came from sheep, goats and non-domesticated animals. Domesticated cattle were introduced in the Late period, but sheep were always the most important domesticated animal in their diets.
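For readers unfamiliar with the notation, stable isotope ratios like these are conventionally reported in delta (δ) notation, the per-mil deviation of the sample's isotope ratio from an international standard. This is textbook geochemistry rather than a formula given in the paper:

```latex
\delta^{13}\mathrm{C}\ \text{or}\ \delta^{15}\mathrm{N}
  = \left(\frac{R_{\text{sample}}}{R_{\text{standard}}} - 1\right) \times 1000\ \text{‰},
\qquad
R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C}\ \text{or}\ {}^{15}\mathrm{N}/{}^{14}\mathrm{N}
```

Higher δ15N values generally indicate more animal protein in the diet, which is how the team tracked the residents' reliance on sheep, goats and wild game.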

"They were farming and keeping animals as soon as they set up the community, but they were intensifying their efforts as the population expanded," Larsen said.

The grain-heavy diet meant that some residents soon developed tooth decay - one of the so-called "diseases of civilization," Larsen said. Results showed that about 10 to 13 percent of adult teeth found at the site had dental cavities.

Changes over time in the shape of leg bone cross-sections showed that community members in the Late period of Çatalhöyük walked significantly more than early residents. That suggests residents had to move farming and grazing further from the community as time went on, Larsen said.

"We believe that environmental degradation and climate change forced community members to move further away from the settlement to farm and to find supplies like firewood," he said. "That contributed to the ultimate demise of Çatalhöyük."

Other research suggests that the climate in the Middle East became drier during the course of Çatalhöyük's history, which made farming more difficult.

Findings from the new study suggest that residents suffered from a high infection rate, most likely due to crowding and poor hygiene. Up to one-third of remains from the Early period show evidence of infections on their bones.

During its peak in population, houses were built like apartments with no space between them - residents came and went by ladders through openings in the roofs of the houses.

Excavations showed that interior walls and floors were re-plastered many times with clay. And while the residents kept their floors mostly debris-free, analysis of house walls and floors showed traces of animal and human fecal matter.

"They are living in very crowded conditions, with trash pits and animal pens right next to some of their homes. So there is a whole host of sanitation issues that could contribute to the spread of infectious diseases," Larsen said.

The crowded conditions in Çatalhöyük may have also contributed to high levels of violence between residents, according to the researchers.

In a sample of 93 skulls from Çatalhöyük, more than one-fourth - 25 individuals - showed evidence of healed fractures. And 12 of them had been victimized more than once, with two to five injuries over a period of time. The shape of the lesions suggested that blows to the head from hard, round objects caused them - and clay balls of the right size and shape were also found at the site.

More than half of the victims were women (13 women, 10 men). And most of the injuries were on the top or back of their heads, suggesting the victims were not facing their assailants when struck.

"We found an increase in cranial injuries during the Middle period, when the population was largest and most dense," Larsen said.

"An argument could be made that overcrowding led to elevated stress and conflict within the community."

Most people were buried in pits that had been dug into the floors of houses, and researchers believe they were interred under the homes in which they lived. That led to an unexpected finding: Most members of a household were not biologically related.

Researchers discovered this when they found that the teeth of individuals buried under the same house weren't as similar as would be expected if they were kin.

"The morphology of teeth are highly genetically controlled," Larsen said. "People who are related show similar variations in the crowns of their teeth and we didn't find that in people buried in the same houses."

More research is needed to determine the relations of people who lived together in Çatalhöyük, he said. "It is still kind of a mystery."

Overall, Larsen said the significance of Çatalhöyük is that it was one of the first Neolithic "mega-sites" in the world built around agriculture.

"We can learn about the immediate origins of our lives today, how we are organized into communities. Many of the challenges we have today are the same ones they had in Çatalhöyük - only magnified."

Credit: 
Ohio State University

Seaweed feed additive cuts livestock methane but poses questions

image: Cows in the study at the Penn State dairy barns put their heads into devices that measure the methane they belch in order to eat a sweet treat. The average dairy cow burps about 380 pounds of methane a year. Early studies show that supplementing their feed with seaweed could mitigate 80 percent of the potent greenhouse gas.

Image: 
Hristov Research Group/Penn State

Supplementing cattle feed with seaweed could result in a significant reduction in methane belched by livestock, according to Penn State researchers, but they caution that the practice may not be a realistic strategy to battle climate change.

"Asparagopsis taxiformis -- a red seaweed that grows in the tropics -- in short-term studies in lactating dairy cows decreased methane emission by 80 percent and had no effect on feed intake or milk yield, when fed at up to 0.5 percent of feed dry-matter intake," said Alexander Hristov, distinguished professor of dairy nutrition. "It looks promising, and we are continuing research."

If a seaweed feed supplement is to be a viable option for making a difference globally, the scale of production would have to be immense, Hristov noted. With nearly 1.5 billion head of cattle in the world, harvesting enough wild seaweed to add to their feed would be impossible. Even providing it as a supplement to most of the United States' 94 million cattle is unrealistic.

"To be used as a feed additive on a large scale, the seaweed would have to be cultivated in aquaculture operations," he said. "Harvesting wild seaweed is not an option because soon we would deplete the oceans and cause an ecological problem."

Still, the capability of Asparagopsis taxiformis to mitigate enteric methane as a feed supplement demands attention, said Hannah Stefenoni, a graduate student working with Hristov, who will present the research June 23 at the American Dairy Science Association's annual meeting in Cincinnati, Ohio. The findings were published recently online in the Proceedings of the 2019 American Dairy Science Association Meeting.

"We know that it is effective in the short term; we don't know if it's effective in the long term," Hristov explained. "The microbes in cows' rumens can adapt to a lot of things. There is a long history of feed additives that the microbes adapt to and effectiveness disappears. Whether it is with beef or dairy cows, long-term studies are needed to see if compounds in the seaweed continue to disrupt the microbes' ability to make methane."

There are also questions about the stability over time of the active ingredient -- bromoform -- in the seaweed. The compound is sensitive to heat and sunlight and may lose its methane-mitigating activity with processing and storage, Hristov warned.

Palatability is another question. It appears cows do not like the taste of seaweed -- when Asparagopsis was included at 0.75 percent of the diet, researchers observed a drop in the feed intake by the animals.

Also, the long-term effects of seaweed on animal health and reproduction and its effects on milk and meat quality need to be determined. A panel judging milk taste is part of ongoing research, Hristov said.

Cows burping -- often incorrectly characterized as cows farting -- methane and contributing to climate change has been the subject of considerable derision within the U.S., conceded Hristov, who is recognized as an international leader in conducting research assessing greenhouse gas emissions from animal agriculture. It is taken seriously in other countries, he explained, because the average dairy cow belches 380 pounds of the potent greenhouse gas a year.

"But methane from animal agriculture is just 5 percent of the total greenhouse gases produced in the United States -- much, much more comes from the energy and transportation sectors," Hristov said. "So, I think it's a fine line with the politics surrounding this subject. Do we want to look at this? I definitely think that we should, and if there is a way that we can reduce emissions without affecting profitability on the farm, we should pursue it."

And there may be a hidden benefit.

"It is pretty much a given that if enteric methane emissions are decreased, there likely will be an increase in the efficiency of animal production," said Hristov.

Seaweed used in the Penn State research was harvested from the Atlantic Ocean in the Azores and shipped frozen from Portugal. It was freeze-dried and ground by the researchers. Freeze-drying and grinding 4 tons of seaweed for the research was "a huge undertaking," Hristov said.

Credit: 
Penn State

Biting backfire: Some mosquitoes actually benefit from pesticide application

image: This is a predatory damselfly from the bromeliads.

Image: 
Jennifer Weathered/Utah State University

The common perception that pesticides reduce or eliminate target insect species may not always hold. Jennifer Weathered and Edd Hammill report that the impacts of agricultural pesticides on assemblages of aquatic insects varied, resulting in distinct ecological winners and losers within aquatic communities. While pesticides reduced many species, the evolution of pesticide resistance allowed the mosquito Wyeomyia abebala to actually benefit from application of the pesticide dimethoate. This benefit appeared to occur because pesticide-resistant mosquitoes were able to colonize habitats that had reduced numbers of predators and competitors due to the direct effects of dimethoate. Their results are reported in a recent issue of Oecologia (doi.org/10.1007/s00442-019-04403-2).

Weathered and Hammill, a student and faculty member from the College of Natural Resources at Utah State University, conducted extensive analyses of aquatic invertebrate communities within tropical bromeliads. They found that invertebrate biodiversity was reduced in bromeliads exposed to the pesticides compared to assemblages from pristine, non-agricultural areas. Surprisingly, however, bromeliads from areas with pesticide use exhibited high densities of W. abebala. "Our toxicity bioassays showed that W. abebala from agricultural areas had ten times the Dimethoate tolerance compared to non-agricultural W. abebala. Combining the toxicity experiments with field observations gave us a better understanding of possible mechanisms driving community patterns across landscapes," says Jenn Weathered.

Additional analyses indicated that the loss of a predatory damselfly, Mecistogaster modesta, from pesticide-treated locations allowed pesticide-resistant mosquitoes to colonize these predator-free habitats. The results were confirmed in both a laboratory and a field transplant experiment, where mosquito density was affected by pesticide use and the presence of the damselfly, but not by the original location of the bromeliads. "Our results show that the addition of novel chemicals into natural systems may lead to the opposite result of what we'd expect, and that we must think about effects on whole communities of species," says Edd Hammill.

Results of this study indicate that biodiversity of aquatic invertebrates was strongly reduced in habitats exposed to an agricultural pesticide, but that differential resistance allowed some non-intuitive increases in species that have the potential to impact human health. The authors stress that to understand the response of individual species to novel stressors, entire communities of organisms need to be assessed.

Credit: 
S.J. & Jessie E. Quinney College of Natural Resources, Utah State University

Lumping all Hispanic Americans together masks the differences in cancer outcomes

image: SDSU public health researchers Caroline Thompson and Steven Zamora studied cancer mortality statistics by separating out data for each Hispanic American ethnic group.

Image: 
San Diego State University

A San Diego State University study is among the first to describe trends in cancer mortality by specific Hispanic group for the 10 leading causes of cancer deaths nationwide.

There are subtle and sometimes significant differences in food habits, cultural mores, and lifestyles among Cubans, Mexicans, Puerto Ricans, and Central and South Americans in the U.S. This also extends to their risks of getting cancer and dying from it. Yet the subgroups tend to be bunched under the larger umbrella of Hispanic Americans, much like Asian Americans, despite the inherent diversity.

National cancer mortality statistics tell very different stories depending on whether Hispanics are grouped together or separated according to their underlying ethnic origin. This is especially true for men, whose risks were significantly higher and varied by Hispanic group, whereas women's risks were fairly similar across groups.

The study came about because a third-generation Mexican-American graduate student researcher at SDSU wanted to understand what his own risks were. Steven Zamora undertook a year-long data study that examined national cancer mortality rates from the National Center for Health Statistics (NCHS) for the period from 2003 to 2012, which led to surprising findings about each ethnic group, and about stomach and liver cancers in particular.

"I wanted to study something that would have lasting impact and this was very rewarding personally," said Zamora, who completed his master's in public health at SDSU's School of Public Health. "While Hispanic groups may have similar stories when it comes to immigration, they're very different in terms of job and education opportunities, health outcomes, and access to care."

Stomach and liver cancers a major concern

Published online first at Cancer Epidemiology, Biomarkers and Prevention, an American Association for Cancer Research journal, the study found Mexican American and Puerto Rican American males were dying at twice the rate of non-Hispanic whites from stomach and liver cancers.

"These are the two most worrisome cancers for Hispanic Americans, and both are caused by chronic infection," said Caroline Thompson, senior study author and assistant professor of public health at SDSU.

Puerto Rican American males had the highest liver cancer deaths at 16 per 100,000, followed by Mexican Americans at 14, while non-Hispanic white males were at seven. For stomach cancer, Mexican American, Puerto Rican, Central and South American males had mortality rates of eight per 100,000 while non-Hispanic whites were at four. "We also found that liver cancer mortality rates are increasing for males and females of all Hispanic groups," Thompson said.

Cuban Americans, on the other hand, tended to reflect the trends of non-Hispanic whites for these two cancers. As one of the earlier immigrant groups, they have been in the U.S. far longer than newer arrivals, and their better access to health care and better compliance with vaccinations and cancer screenings appear to have protected them from infection-related cancers more than other groups.

However, Cuban American males did have higher rates of lung cancer deaths compared to other Hispanic groups, at 50 per 100,000, while the comparable figures were 67 for non-Hispanic whites, 30 for Mexican American males, and 15 for Central and South Americans.
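To make the comparisons easier to scan, the sketch below collects the per-100,000 male mortality rates quoted in this article and expresses each as a ratio against non-Hispanic whites. The numbers come from the article; the code itself is only an illustration.

```python
# Male mortality rates per 100,000, as quoted in this article.
rates = {
    "liver":   {"Puerto Rican": 16, "Mexican": 14, "non-Hispanic white": 7},
    "stomach": {"Mexican": 8, "Puerto Rican": 8, "Central/South American": 8,
                "non-Hispanic white": 4},
    "lung":    {"Cuban": 50, "Mexican": 30, "Central/South American": 15,
                "non-Hispanic white": 67},
}

for cancer, by_group in rates.items():
    baseline = by_group["non-Hispanic white"]
    for group, rate in by_group.items():
        if group != "non-Hispanic white":
            print(f"{cancer:8s} {group:24s} {rate / baseline:.1f}x the NHW rate")
```

The liver and stomach ratios of roughly 2x are exactly the disparities the headline figures describe.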

Hispanic Paradox

The good news is that except for stomach and liver cancers, Hispanic Americans had better cancer outcomes and lower cancer mortality rates than non-Hispanic whites, results which are consistent with findings from the American Cancer Society.

"Why do Hispanic Americans have better health outcomes even though they have lower socio-economic status, poorer access to care and language barriers?" Zamora asked. "This is something we don't know yet, but it's not limited to cancer mortality. There is a health advantage for Hispanics that we can't really explain, but it's also not equal among all Hispanic groups. This benefit or paradox actually disappears the longer they've been in the U.S."

Future studies are needed to determine the relative importance of the many potential and significant risk factors driving cancer mortality in these specific Hispanic groups. A comprehensive understanding of cancer burden is essential to guide treatment and prevention strategies, especially in Hispanics as they represent a heterogeneous and growing segment of the U.S. population.

Credit: 
San Diego State University

The brain consumes half of a child's energy -- and that could matter for weight gain

EVANSTON, Ill. --- Weight gain occurs when an individual's energy intake exceeds their energy expenditure -- in other words, when calories in exceed calories out. What is less well understood is the fact that, on average, nearly half of the body's energy is used by the brain during early childhood.

In a new paper published in the journal Proceedings of the National Academy of Sciences (PNAS), "A hypothesis linking the energy demand of the brain to obesity risk," co-authors Christopher Kuzawa of Northwestern University and Clancy Blair of New York University School of Medicine propose that variation in the energy needs of brain development across kids -- in terms of the timing, intensity and duration of energy use -- could influence patterns of energy expenditure and weight gain.

"We all know that how much energy our bodies burn is an important influence on weight gain," said Kuzawa, professor of anthropology in the Weinberg College of Arts and Sciences and a faculty fellow with the Institute for Policy Research at Northwestern. "When kids are 5, their brains use almost half of their bodies' energy. And yet, we have no idea how much the brain's energy expenditure varies between kids. This is a huge hole in our understanding of energy expenditure."

"A major aim of our paper is to bring attention to this gap in understanding and to encourage researchers to measure the brain's energy use in future studies of child development, especially those focused on understanding weight gain and obesity risk."

According to the authors, another important unknown is whether programs designed to stimulate brain development through enrichment, such as preschool programs like Head Start, might influence the brain's pattern of energy use.

"We believe it plausible that increased energy expenditure by the brain could be an unanticipated benefit to early child development programs, which, of course, have many other demonstrated benefits," Kuzawa said. "That would be a great win-win."

This new hypothesis was inspired by Kuzawa and his colleagues' 2014 study showing that the brain consumes a lifetime peak of two-thirds of the body's resting energy expenditure, and almost half of total expenditure, when kids are five years old. This study also showed that ages when the brain's energy needs increase during early childhood are also ages of declining weight gain. As the energy needed for brain development declines in older children and adolescents, the rate of weight gain increases in parallel.

"This finding helped confirm a long-standing hypothesis in anthropology that human children evolved a much slower rate of childhood growth compared to other mammals and primates in part because their brains required more energy to develop," Kuzawa said.

Credit: 
Northwestern University

Repurposing existing drugs or combining therapies could help in the treatment of autoimmune diseases

Research led by the University of Birmingham has found that re-purposing existing drugs or combining therapies could help treat patients who have difficult-to-treat autoimmune diseases.

Funded by Versus Arthritis, the research was led by the University of Birmingham's Institute of Inflammation and Ageing and Institute of Cardiovascular Sciences and was published today (June 17th) in Proceedings of the National Academy of Sciences.

The research, a collaboration with the University of Oxford, University of Cambridge, University of York, Université Rennes in France, and the University of Lausanne in Switzerland, was supported by the National Institute for Health Research Birmingham Biomedical Research Centre.

Dr Saba Nayar, of the University of Birmingham, explained: "In this study, we found for the first time that fibroblasts - cells that play a critical role in healing - also play a key role in the process of the formation of tertiary lymphoid structures, which are small clusters of blood and tissue cells found at the sites of chronic inflammation.

"Inflammation is the body's process of fighting against things that harm it, such as infections, injuries, and toxins, in an attempt to heal itself. When something damages cells, our bodies releases chemicals that trigger a response from our immune system.

"This response usually lasts for a few hours or days in the case of acute inflammation, however in chronic inflammation the response lingers, leaving your body in a constant state of alert. Chronic inflammation occurs in a range of conditions from cancer to arthritis and autoimmune conditions - illnesses or disorders that occur when healthy cells get destroyed by the body's own immune system."

Dr Joana Campos, also of the University of Birmingham, added: "Tertiary lymphoid structures are believed to play a key role in the progression of autoimmune conditions such as Sjögren's Syndrome - a condition that affects parts of the body that produce fluids like tears and spit.

"Previously research has not identified the role fibroblasts play in the formation and maintenance of tertiary lymphoid structures.

"We proved that fibroblasts expand and acquire immunological features in a process that is dependent on two cytokines - substances which are secreted by cells including fibroblasts in the immune system."

Dr Francesca Barone, also of the University of Birmingham, said: "Our research has led us to conclude that, by re-purposing already existing drugs or combining therapies, we could use these medications to directly target immune cells and fibroblasts to attack these cytokines in patients who have difficult to treat autoimmune diseases in which the formation of tertiary lymphoid structures plays a critical role.

"Our findings were surprising and unexpected and have addressed functional questions that the science community has been trying to address since tertiary lymphoid structures were first discovered."

Credit: 
University of Birmingham

Researchers call for personalized approach to aging brain health

People are living longer than ever before, but brain health isn't keeping up. To tackle this critical problem, a team of researchers has proposed a new model for studying age-related cognitive decline - one that's tailored to the individual.

There's no such thing as a one-size-fits-all approach to aging brain health, says Lee Ryan, professor and head of the University of Arizona Department of Psychology. A number of studies have looked at individual risk factors that may contribute to cognitive decline with age, such as chronic stress and cardiovascular disease. However, those factors may affect different people in different ways depending on other variables, such as genetics and lifestyle, Ryan says.

In a new paper published in the journal Frontiers in Aging Neuroscience, Ryan and her co-authors advocate for a more personalized approach, borrowing principles of precision medicine in an effort to better understand, prevent and treat age-related cognitive decline.

"Aging is incredibly complex, and most of the research out there was focusing on one aspect of aging at a time," Ryan said. "What we're trying to do is take the basic concepts of precision medicine and apply them to understanding aging and the aging brain. Everybody is different and there are different trajectories. Everyone has different risk factors and different environmental contexts, and layered on top of that are individual differences in genetics. You have to really pull all of those things together to predict who is going to age which way. There's not just one way of aging."

Although most older adults - around 85% - will not experience Alzheimer's disease in their lifetimes, some level of cognitive decline is considered a normal part of aging. The majority of people in their 60s or older experience some cognitive impairment, Ryan said.

This not only threatens older adults' quality of life, it also has socioeconomic consequences, amounting to hundreds of billions of dollars in health care and caregiving costs, as well as lost productivity in the workplace, Ryan and her co-authors write.

The researchers have a lofty goal: to make it possible to maintain brain health throughout the entire adult lifespan, which today in the U.S. is a little over 78 years old on average.

In their paper, Ryan and her co-authors present a precision aging model meant to be a starting point to guide future research. It focuses primarily on three areas: broad risk categories; brain drivers; and genetic variants. An example of a risk category for age-related cognitive decline is cardiovascular health, which has been consistently linked to brain health. The broader risk category includes within it several individual risk factors, such as obesity, diabetes and hypertension.

The model then considers brain drivers, or the biological mechanisms through which individual risk factors in a category actually impact the brain. This is an area where existing research is particularly limited, Ryan said.

Finally, the model looks at genetic variants, which can either increase or decrease a person's risk for age-related cognitive decline. Despite people's best efforts to live a healthy lifestyle, genes do factor into the equation and can't be ignored, Ryan said. For example, there are genes that protect against or make it more likely that a person will get diabetes, sometimes regardless of their dietary choices.

While the precision aging model is a work in progress, Ryan and her collaborators believe that considering the combination of risk categories, brain drivers and genetic variants is key to better understanding age-related cognitive decline and how to best intervene in different patients.

Ryan imagines a future in which you can go to your doctor's office and have all of your health and lifestyle information put into an app that would then help health-care professionals guide you on an individualized path for maintaining brain health across your lifespan. We may not be there yet, but it's important for research on age-related cognitive decline to continue, as advances in health and technology have the potential to extend the lifespan even further, she said.

"Kids that are born in this decade probably have a 50% chance of living to 100," Ryan said. "Our hope is that the research community collectively stops thinking about aging as a single process and recognizes that it is complex and not one-size-fits-all. To really move the research forward you need to take an individualized approach."

Credit: 
University of Arizona

Past climate change: A warning for the future?

image: This is an aerial photograph of pre-Columbian raised fields from Llanos de Moxos, Bolivia.

Image: 
Umberto Lombardo, University of Bern, Switzerland

A new study of climate changes and their effects on past societies offers a sobering glimpse of social upheavals that might happen in the future.

The prehistoric groups studied lived in the Amazon Basin of South America hundreds of years ago, before European contact, but the disruptions that occurred may carry lessons for our time, says study coauthor Mitchell J. Power, curator of the Garrett Herbarium at the Natural History Museum of Utah, University of Utah.

The paper, "Climate change and cultural resilience in late pre-Columbian Amazonia," published on the Nature Ecology & Evolution website June 17, traces impacts in the Amazon before 1492.

Climatic conditions in the Amazon Basin underwent natural shifts during periods when much of the rest of the Earth also was impacted. These times are known as the Medieval Climate Anomaly, from about AD 900 to 1250, and the Little Ice Age, 1450-1850. In Amazonia, rainfall amounts and patterns changed, affecting agriculture and subsistence patterns.

Presently, climate change is affecting most parts of the world, but the difference now is that it's human-caused.

One of the biggest problems in the future may be that climate extremes will harm many countries, and that their "climate refugees" will be pushed from ancestral homes into more temperate and developed places not as badly affected by climate change. The migrations could cause great stresses in the host countries, Power said.

The surprising results of the study show that these types of crises occurred during and after the first millennium in the Amazon Basin.

"Were we getting a window into that in prehistoric Amazonia? I think so," said Power, who is also an associate professor of geography at the University of Utah. "So it's kind of a one-two punch: if the climate doesn't get you, it might be the thousands of bodies that show up that you have to feed because extreme drought forced them out of their homelands."

Climate was a dominant factor in the social and cultural changes in ancient Amazonia, he emphasized, but the study also shows "more nuanced" effects because of subsistence and cultural practices as well as population movements. In particular, those cultural groups that subsisted with diverse food resources or polycultures and agroforestry, avoided political hierarchies with an elite ruling class, and adopted a strategy of creating organic and charcoal-rich soil, called "Amazonian Dark Earth", were most resilient to extreme climate variations.

The scientists searched for indications of prehistoric climate and culture in six regions throughout the enormous Amazon Basin during the last few thousand years: the Guianas Coast, Llanos de Moxos, and the Eastern, Central, Southwestern and Southern Amazon. Up to 8-10 million people are estimated to have lived in the Greater Amazon region before European contact.

Researchers synthesized paleoecological, archaeological and paleoclimate studies by combining evidence of changes in natural vegetation and cultigens, changes in precipitation and disturbance regimes as well as changes in cultural practices and population movements.

Rainfall estimates were derived by measuring the percentage of titanium in sediments deposited by runoff, as well as oxygen isotopes in cave speleothem records from across Amazonia. Botanical remains, including phytoliths (microscopic silica formations in plant tissue that are long-lasting in the soil) and pollen- and other plant fossil-based evidence of cultigens, including maize, manioc, squash, peanuts and cotton, were used to reconstruct subsistence strategies through time.

Another indicator of agricultural practices by some cultures was the presence or lack of Amazonian Dark Earth (ADE) produced by the accumulation of organic materials, including charcoal, into soils through time, which provides a long-term investment in soil fertility, further buffering against extreme changes in climate.

Archaeological remains that indicated social structure and presence and absence of political hierarchies were items such as pottery, elaborate architecture and earthworks, including mounds, raised fields, elite burials, canal systems as well as evidence of fortifications and defensive structures. Whether regions were burned to support agricultural production was another consideration.

Because living plants take up an isotope of carbon called C-14 that decays at a known rate after death, researchers compiled hundreds of radiocarbon dates from occupation sites across the Amazon basin. This helped establish the chronology of cultural change and demonstrate how people responded to pressure from climate change and migration.
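The "known rate" is the standard radioactive decay law; the age of a sample follows directly from it (textbook radiocarbon dating, not a formula given in the paper):

```latex
N(t) = N_0\,e^{-\lambda t}
\quad\Longrightarrow\quad
t = \frac{1}{\lambda}\ln\frac{N_0}{N(t)},
\qquad
\lambda = \frac{\ln 2}{t_{1/2}},\quad t_{1/2}\big({}^{14}\mathrm{C}\big) \approx 5730\ \text{yr}
```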

Paleoecological data were synthesized from a network of sediment cores across Amazonia, taken from lakes, bogs and wetlands. Microfossil plant remains, including phytolith, pollen and charcoal records, provide information about which types of plants occurred at each site and whether fire was a key process.

A tool that was important to the study is the Global Charcoal Database, which is used to explore linkages among past fire histories, climate change and the role of humans around the globe. Power helped develop the database as a postdoctoral researcher at the University of Edinburgh, Scotland, and is part of an international team, the Global Paleofire Working Group, that continues to contribute to many interdisciplinary studies such as this one.

After synthesizing paleo data with archaeological information on cultures and agricultural practices, the team discovered that at least two different social system trajectories were in place, and that often they had different outcomes, based on flexibility.

"The flexibility, or lack thereof, of these systems explains the decline of some Amazonian societies and not others ..." the report says. Societies that collapsed were at the end of periods of growth, accumulation, restructuring and renewal. "Those societies had accumulated rigidities, and were less able to absorb unforeseen disturbances resulting in dramatic transformation."

Complex societies with social hierarchies and extensive earthworks, including raised fields, supported intensive agriculture of a limited number of crops, but eventually soil leaching and other factors left the villages vulnerable. Such settlements sometimes were able to make short-term improvements; but as crises grew, such as a multi-decadal drought, they fell into danger of collapse.

However, while some groups underwent major reorganization, the paper says, "others were unaffected and even flourished."

The report details migrations and conflict that took place potentially in response to extreme changes in climate. It notes that the demise of mound centers in the Guianas coast around the year 1300 CE, for example, could have occurred because of a prolonged drought that the researchers documented - or the expansion of a culture called the Koriabo "could have been responsible for conflicts leading to the ... demise, or at least accelerating a process triggered by climate change."

On the other hand, societies that depended on "polyculture agroforestry," that is, varying crops including fruit-bearing trees, "in the long term, were more resistant to climate change." These were the cultures that also tended to produce ADEs.

Still under debate is whether the formation of anthropic forests was deliberate or a result of people living in an area for centuries and disposing of nuts, seeds and waste that just happened to spread desirable plants and provide a diverse food resource. Power doesn't take a position on that, saying that developing ADEs and practicing polyculture agroforestry were both long-term solutions for mitigating the food scarcity that occurred during times of extreme climate variability, such as the Medieval Climate Anomaly.

Diverse agriculture associated with the dark soil, with inhabitants growing corn, squash, manioc and possibly trees, made some groups better able to withstand climate change. But these practices could not prevent conflicts with others who were flooding into their areas because of climate-induced collapse in adjacent regions.

The situation reminds Power of conditions in Ethiopia, a country from which he recently returned and where he is working on a similar interdisciplinary project trying to understand the rise and fall of the Aksumite Empire. Today, something like 85 percent of the population participates in agricultural production, which still relies on seasonal rainfall in many regions. Climate extremes can cause the wet season to come late some years, or not come at all.

This causes a ripple effect, encouraging young generations to migrate, mostly to Europe, he said.

Likely, a similar thing happened with migrations in Amazonia in the period before Columbus. The newcomers were "like climate refugees," Power said, "which is an interesting corollary to today's problems."

"I believe the most important aspect of the research is showing how societies respond differently to climate change depending on several factors like the size of their population, their political organization, and their economy," said the study's lead author, Jonas Gregorio de Souza of the Universitat Pompeu Fabra, Barcelona, Spain.

"We started the research expecting that climate change would have had an impact everywhere in the Amazon, but we realized that some communities were more vulnerable than others. To summarize one of the main ideas of the paper, those pre-Columbian peoples that depended heavily on intense and specialized forms of land use ended up being less capable of adapting to climatic events."

S. Yoshi Maezumi, also a coauthor of the paper, said teams of scientists from diverse backgrounds helped tackle questions from different angles, "each providing a piece of the puzzle from the past." She is a lecturer at the University of the West Indies, Mona, Jamaica; a guest researcher at the University of Amsterdam, and an honorary research fellow at the University of Exeter, United Kingdom.

"Together, we have a better understanding of the long term changes in climate and human activity," she said. "These long-term perspectives on how people responded to past climate variability, including droughts and increased fire activity may help provide insights into human adaptation and vulnerability to modern anthropogenic climate change."

Credit: 
University of Utah

Researchers question implanting IVC filters on prophylactic basis before bariatric surgery

image: Riyaz Bashir, M.D., FACC, RVT, Professor of Medicine at the Lewis Katz School of Medicine at Temple University and Director of Vascular and Endovascular Medicine at Temple University Hospital.

Image: 
Lewis Katz School of Medicine at Temple University

(Philadelphia, PA) - More than 200,000 bariatric surgeries are performed in the United States each year, according to estimates from the American Society for Metabolic and Bariatric Surgery. Blood clotting is of particular concern during and after these procedures, given that obesity and post-surgical immobility are risk factors for developing blood clots, including venous thromboembolism (VTE), which is a blood clot that starts in a vein - often in the deep veins of the leg, groin or arm. This type of VTE is known as deep vein thrombosis, or DVT. A venous clot can break off and travel to the lungs, causing a life-threatening condition called pulmonary embolism (PE). Inferior vena cava filters (IVCFs) are sometimes implanted prophylactically prior to bariatric surgery in an attempt to reduce post-surgical PE rates. IVCFs are small, basket-like devices made of wire that are inserted into the inferior vena cava, a large vein that returns blood from the lower body to the heart and lungs, to catch blood clots before they reach the lungs.

"The effectiveness of IVCF insertion prior to bariatric surgery for primary prophylaxis against PE is unknown and controversial, and is also considered off-label since it lies outside of the official recommendation by the U.S. Food and Drug Administration," said Riyaz Bashir, MD, FACC, RVT, Professor of Medicine at the Lewis Katz School of Medicine at Temple University (LKSOM) and Director of Vascular and Endovascular Medicine at Temple University Hospital.

Dr. Bashir led a research team that sought to compare the outcomes associated with patients receiving prophylactic IVCFs prior to bariatric surgery to those who did not receive IVCFs. The team's findings were published online June 17 by the journal JACC: Cardiovascular Interventions.

The research team used the National Inpatient Sample (NIS) database to identify obese patients who underwent bariatric surgery from January 2005 to September 2015.

Among the team's findings (a simple rate-ratio sketch follows the list):

258,480 patients underwent bariatric surgery (representing a national estimate of 1,250,500 over the 11-year study period) and 1,047 (0.41%) of those had prophylactic IVCF implanted.

Patients with prophylactic IVCFs, compared to those without IVCFs, had a significantly higher rate of the combined endpoint of in-hospital mortality or pulmonary embolism (1.4% vs. 0.4%).

Prophylactic IVCFs were associated with higher rates of lower extremity or caval deep vein thrombosis (1.47% vs. 0.10%).

Prophylactic IVCFs were associated with a longer length of stay (median 3 days vs. 2 days).

Prophylactic IVCFs were associated with higher hospital charges (median $61,301 vs. $36,097).
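Taking the percentages above at face value, the crude rate ratios are easy to compute. A minimal Python sketch, using only the figures quoted in this release (an illustration, not the study's adjusted analysis):

```python
# Crude rate ratios from the percentages quoted above (unadjusted illustration).
findings = {
    "in-hospital death or pulmonary embolism": (1.4, 0.4),   # % with IVCF vs. % without
    "lower-extremity or caval DVT": (1.47, 0.10),
}

for outcome, (with_ivcf, without_ivcf) in findings.items():
    ratio = with_ivcf / without_ivcf
    print(f"{outcome}: {ratio:.1f}x higher with prophylactic IVCF")
```

That works out to roughly 3.5x for the combined endpoint and about 15x for DVT, consistent with the direction of the authors' conclusion below.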

"Our results from this 11-year nationwide observational study suggest that attempting to safeguard bariatric surgery patients from PE-related morbidity and mortality with prophylactic IVCFs is ineffective and should not be performed without further evidence supporting its use," added Dr. Bashir. "Research and development in other options such as pharmacologic deep vein thrombosis prophylaxis, mechanical lower extremity compression devices and early post-operative mobility strategies specifically targeted toward the needs of the obese surgical patients may be higher yielding endeavors for protection against VTE."

Credit: 
Temple University Health System

NASA scientists find sun's history buried in moon's crust

image: An artistic conception of the early Earth, showing a surface pummeled by large impacts, resulting in extrusion of deep-seated magma onto the surface.

Image: 
Simone Marchi

The Sun is why we're here. It's also why Martians or Venusians are not.

When the Sun was just a baby four billion years ago, it went through violent outbursts of intense radiation, spewing scorching, high-energy clouds and particles across the solar system. These growing pains helped seed life on early Earth by igniting chemical reactions that kept Earth warm and wet. Yet, these solar tantrums also may have prevented life from emerging on other worlds by stripping them of atmospheres and zapping nourishing chemicals.

Just how destructive these primordial outbursts were to other worlds would have depended on how quickly the baby Sun rotated on its axis. The faster the Sun turned, the quicker it would have destroyed conditions for habitability.

This critical piece of the Sun's history, though, has bedeviled scientists, said Prabal Saxena, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Saxena studies how space weather, the variations in solar activity and other radiation conditions in space, interacts with the surfaces of planets and moons.

Now, he and other scientists are realizing that the Moon, where NASA will be sending astronauts by 2024, contains clues to the ancient mysteries of the Sun, which are crucial to understanding the development of life.

"We didn't know what the Sun looked like in its first billion years, and it's super important because it likely changed how Venus' atmosphere evolved and how quickly it lost water. It also probably changed how quickly Mars lost its atmosphere, and it changed the atmospheric chemistry of Earth," Saxena said.

The Sun-Moon Connection

Saxena stumbled into investigating the early Sun's rotation mystery while contemplating a seemingly unrelated one: Why, when the Moon and Earth are made of largely the same stuff, is there significantly less sodium and potassium in lunar regolith, or Moon soil, than in Earth soil?

This question, raised by analyses of Apollo-era Moon samples and lunar meteorites found on Earth, has also puzzled scientists for decades -- and it has challenged the leading theory of how the Moon formed.

Our natural satellite took shape, the theory goes, when a Mars-sized object smashed into Earth about 4.5 billion years ago. The force of this crash sent materials spewing into orbit, where they coalesced into the Moon.

"The Earth and Moon would have formed with similar materials, so the question is, why was the Moon depleted in these elements?" said Rosemary Killen, an planetary scientist at NASA Goddard who researches the effect of space weather on planetary atmospheres and exospheres.

The two scientists suspected that one big question informed the other -- that the history of the Sun is buried in the Moon's crust.

Killen's earlier work laid the foundation for the team's investigation. In 2012, she helped simulate the effect solar activity has on the amount of sodium and potassium that is either delivered to the Moon's surface or knocked off by a stream of charged particles from the Sun, known as the solar wind, or by powerful eruptions known as coronal mass ejections.

Saxena incorporated the mathematical relationship between a star's rotation rate and its flare activity. This insight was derived by scientists who studied the activity of thousands of stars discovered by NASA's Kepler space telescope: The faster a star spins, they found, the more violent its ejections. "As you learn about other stars and planets, especially stars like our Sun, you start to get a bigger picture of how the Sun evolved over time," Saxena said.
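The article does not give the functional form of that rotation-activity relationship, so the sketch below uses a generic power law in rotation rate, a common way such scalings are parameterized; both the exponent and the reference period are illustrative assumptions, not values from the study.

```python
# Generic power-law sketch: activity scales as (rotation rate)^a, i.e. as P^(-a).
A_EXPONENT = 2.0  # illustrative assumption; the study's actual fit is not given here

def relative_flare_activity(period_days: float, reference_period_days: float = 10.0) -> float:
    """Flare activity relative to a slow rotator.

    The 10-day reference matches the lower bound the team inferred
    for the young Sun's rotation period (see below)."""
    return (reference_period_days / period_days) ** A_EXPONENT

for label, period in [("fast (1-day spin)", 1.0), ("medium (5-day)", 5.0), ("slow (10-day)", 10.0)]:
    print(f"{label:18s} -> {relative_flare_activity(period):6.1f}x the slow rotator's activity")
```

Under this toy scaling, a fast one-day rotator is a hundred times more flare-active than a slow ten-day rotator, which conveys why the young Sun's spin rate mattered so much for the planets.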

Using sophisticated computer models, Saxena, Killen and colleagues think they may have finally solved both mysteries. Their computer simulations, which they described on May 3 in The Astrophysical Journal Letters, show that the early Sun rotated more slowly than 50% of baby stars. According to their estimates, within its first billion years, the Sun took at least 9 to 10 days to complete one rotation.

They determined this by simulating the evolution of our solar system under a slow, medium, and then a fast-rotating star. And they found that just one version -- the slow-rotating star -- was able to blast the right amount of charged particles into the Moon's surface to knock enough sodium and potassium into space over time to leave the amounts we see in Moon rocks today.

"Space weather was probably one of the major influences for how all the planets of the solar system evolved," Saxena said, "so any study of habitability of planets needs to consider it."

Life Under the Early Sun

The rotation rate of the early Sun is partly responsible for life on Earth. But for Venus and Mars -- both rocky planets similar to Earth -- it may have precluded it. (Mercury, the closest rocky planet to the Sun, never had a chance.)

Earth's atmosphere was once very different from the oxygen-dominated one we find today. When Earth formed 4.6 billion years ago, a thin envelope of hydrogen and helium clung to our molten planet. But outbursts from the young Sun stripped away that primordial haze within 200 million years.

As Earth's crust solidified, volcanoes gradually coughed up a new atmosphere, filling the air with carbon dioxide, water, and nitrogen. Over the next billion years, the earliest bacterial life consumed that carbon dioxide and, in exchange, released methane and oxygen into the atmosphere. Earth also developed a magnetic field, which helped protect it from the Sun, allowing our atmosphere to transform into the oxygen- and nitrogen-rich air we breathe today.

"We were lucky that Earth's atmosphere survived the terrible times," said Vladimir Airapetian, a senior Goddard heliophysicist and astrobiologist who studies how space weather affects the habitability of terrestrial planets. Airapetian worked with Saxena and Killen on the early Sun study.

Had our Sun been a fast rotator, it would have erupted with super flares 10 times stronger than any in recorded history, at least 10 times a day. Even Earth's magnetic field wouldn't have been enough to protect it. The Sun's blasts would have decimated the atmosphere, reducing air pressure so much that Earth wouldn't retain liquid water. "It could have been a much harsher environment," Saxena noted.

But the Sun rotated at an ideal pace for Earth, which thrived under the early star. Venus and Mars weren't so lucky. Venus was once covered in water oceans and may have been habitable. But due to many factors, including solar activity and the lack of an internally generated magnetic field, Venus lost its hydrogen -- a critical component of water. As a result, its oceans evaporated within its first 600 million years, according to estimates. The planet's atmosphere became thick with carbon dioxide, a heavy molecule that's harder to blow away. These forces led to a runaway greenhouse effect that keeps Venus a sizzling 864 degrees Fahrenheit (462 degrees Celsius), far too hot for life.

Mars, farther from the Sun than Earth is, would seem to be safer from stellar outbursts. Yet, it had less protection than did Earth. Due partly to the Red Planet's weak magnetic field and low gravity, the early Sun gradually was able to blow away its air and water. By about 3.7 billion years ago, the Martian atmosphere had become so thin that liquid water immediately evaporated into space. (Water still exists on the planet, frozen in the polar caps and in the soil.)

After influencing the course of life (or the lack thereof) on the inner planets, the aging Sun gradually slowed its pace and continues to do so. Today, it rotates once every 27 days, about three times more slowly than it did in its infancy. The slower spin makes it much less active, though the Sun still has occasional violent outbursts.

Exploring the Moon, Witness of Solar System Evolution

To learn about the early Sun, Saxena said, you need to look no further than the Moon, one of the most well-preserved artifacts from the young solar system.

"The reason the Moon ends up being a really useful calibrator and window into the past is that it has no annoying atmosphere and no plate tectonics resurfacing the crust," he said. "So as a result, you can say, 'Hey, if solar particles or anything else hit it, the Moon's soil should show evidence of that.'"

Apollo samples and lunar meteorites are a great starting point for probing the early solar system, but they are only small pieces in a large and mysterious puzzle. The samples are from a small region near the lunar equator, and scientists can't tell with complete certainty where on the Moon the meteorites came from, which makes it hard to place them into geological context.

Since the South Pole is home to the permanently shadowed craters where we expect to find the best-preserved material on the Moon, including frozen water, NASA is aiming to send a human expedition to the region by 2024.

If astronauts can get samples of lunar soil from the Moon's southernmost region, it could offer more physical evidence of the baby Sun's rotation rate, said Airapetian, who suspects that solar particles would have been deflected by the Moon's erstwhile magnetic field 4 billion years ago and deposited at the poles: "So you would expect -- though we've never looked at it -- that the chemistry of that part of the Moon, the one exposed to the young Sun, would be much more altered than the equatorial regions. So there's a lot of science to be done there."

Credit: 
NASA/Goddard Space Flight Center

More heart failure patients may benefit from CRT defibrillator

Certain groups of heart failure patients may see improved heart function with cardiac resynchronization therapy with defibrillator (CRT-D) if traditional implantable cardioverter defibrillator treatment does not work, according to research published today in the Journal of the American College of Cardiology.

There are three types of conduction disorders, and until now, most research on CRT therapy in heart failure patients has focused on the most common, called left bundle branch block (LBBB). This is a heart conduction abnormality seen on the electrocardiogram (EKG). In this condition, activation of the left ventricle of the heart is delayed, which causes the left ventricle to contract later than the right ventricle.

The new study focused on two less common conduction disorders: right bundle branch block (RBBB) and nonspecific intraventricular conduction delay (NICD)--together, often referred to as "non-LBBB"--to determine the benefit of CRT-D.

A CRT-D is a special device for heart failure patients who are also at high risk for sudden cardiac death. While functioning like a normal defibrillator (called an implantable cardioverter defibrillator, or ICD) to treat slow heart rhythms and life-threatening fast heart rhythms, a CRT-D device also delivers small electrical impulses to the left and right ventricles to help them contract at the same time. This helps the heart pump more efficiently.

"CRT is known to improve heart function in patients with LBBB, but until now we have not had enough evidence to support use of CRT in patients with either RBBB or NICD," said lead researcher Hiro Kawata, MD, PhD, of the Oregon Heart and Vascular Institute in Springfield. "Current guidelines state that when patients with heart failure and non-LBBB conduction disorder continue to suffer symptoms such as shortness of breath or fatigue even after medical therapy, CRT can be tried as a next step, even though there is a lack of evidence about its effectiveness in these patients. We wanted to find out whether CRT can help non-LBBB patients."

Kawata and colleagues evaluated data from 2010 to 2013 from the NCDR ICD Registry, the national standard for understanding patient selection, care and outcomes in patients receiving ICD therapy.

The researchers divided patients with RBBB and NICD into two groups according to the length of their QRS complex, the waves on an EKG that represent how long it takes electricity to travel through the heart's lower chambers, the ventricles. A QRS longer than 120 milliseconds represents a conduction abnormality.
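As a rough illustration of that grouping logic -- not the registry's actual analysis code -- here is a short Python sketch. The field names are invented, and using the 150-millisecond cutoff reported for NICD patients below as the boundary between the two groups is an assumption:

```python
# Illustrative sketch of the patient grouping described in the article.
# Field names and the exact group boundaries are assumptions, not taken
# from the NCDR ICD Registry's actual analysis.

from dataclasses import dataclass

@dataclass
class Patient:
    morphology: str  # "RBBB" or "NICD" (the non-LBBB conduction disorders)
    qrs_ms: float    # QRS duration in milliseconds

def qrs_group(p: Patient) -> str:
    """Assign a patient to a QRS-duration group.

    A QRS over 120 ms marks a conduction abnormality; the article reports
    that NICD patients with a QRS over 150 ms responded best to CRT-D, so
    150 ms is assumed here as the boundary between the two groups.
    """
    if p.qrs_ms <= 120:
        return "normal conduction"
    return "prolonged QRS (>150 ms)" if p.qrs_ms > 150 else "moderate QRS (120-150 ms)"

for patient in [Patient("NICD", 162), Patient("RBBB", 134), Patient("NICD", 118)]:
    print(patient.morphology, patient.qrs_ms, "->", qrs_group(patient))
```

The same two-way split, by morphology and by QRS duration, is what allows the outcome comparison below to separate NICD responders from RBBB non-responders.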

"In patients with LBBB, the longer the QRS, the more likely you are to respond to CRT," Kawata said. "We wanted to find out whether this was also true in patients with non-LBBB conduction disorders."

The study compared patients who had one of two types of defibrillator -- either a CRT-D or an ICD.

Among 5,954 Medicare-aged patients with NICD or RBBB who were implanted with a defibrillator, the study found that patients with NICD and a QRS of more than 150 milliseconds responded best to CRT-D. In these patients, CRT-D was associated with a decreased risk of death, of readmission to the hospital for any cause, and of readmission for heart-related causes, compared with a similar group of patients implanted with an ICD. Among patients with RBBB, CRT-D was not associated with better outcomes compared with ICD, regardless of the duration of their QRS.

"This means that if you have a patient with RBBB who is still suffering from heart failure symptoms after medical therapy, there is not enough data to support using CRT blindly," Kawata said. "But in NICD patients, we now know that those with a long QRS are likely to benefit from CRT."

He said more study is needed to establish whether certain RBBB patients might respond to CRT.

"While implanting a CRT-D is relatively safe, it is not without risks," Kawata said.

CRT can cause complications, including infection, a pneumothorax (punctured lung) or cardiac perforation (perforated heart muscle), he said.

In an editorial accompanying the study, Michael Gold, MD, PhD, of the Medical University of South Carolina in Charleston, noted the results from this study provide important data indicating that not all non-LBBB types, or morphologies, are the same.

"The authors should be commended for providing more detailed analysis of the electrocardiogram and not simply using what is now the conventional LBBB vs non-LBBB morphologies," he said. "These findings will need to be confirmed with further studies, either from prospective trials or pooled data from previous randomized trials. However, importantly, the results of the present study challenge our convention of lumping CRT candidates into two categories."

Credit: 
American College of Cardiology

The evolution of puppy dog eyes

image: The authors suggest that the inner eyebrow raising movement triggers a nurturing response in humans because it makes the dogs' eyes appear larger, more infant like and also resembles a movement humans produce when they are sad.

Image: 
The University of Portsmouth

Dogs have evolved new muscles around the eyes to better communicate with humans.

New research comparing the anatomy and behavior of dogs and wolves suggests dogs' facial anatomy has changed over thousands of years specifically to allow them to better communicate with humans.

In the first detailed analysis comparing the anatomy and behavior of dogs and wolves, researchers found that the facial musculature of the two species was similar, except above the eyes. Dogs have a small muscle that allows them to intensely raise their inner eyebrow; wolves do not.

The authors suggest that the inner eyebrow raising movement triggers a nurturing response in humans because it makes the dogs' eyes appear larger, more infant like and also resembles a movement humans produce when they are sad.

The research team, led by comparative psychologist Dr Juliane Kaminski, at the University of Portsmouth, included a team of behavioural and anatomical experts in the UK and USA.

It is published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

Dr Kaminski said: "The evidence is compelling that dogs developed a muscle to raise the inner eyebrow after they were domesticated from wolves.

"We also studied dogs' and wolves' behavior, and when exposed to a human for two minutes, dogs raised their inner eyebrows more and at higher intensities than wolves.

"The findings suggest that expressive eyebrows in dogs may be a result of humans unconscious preferences that influenced selection during domestication. When dogs make the movement, it seems to elicit a strong desire in humans to look after them. This would give dogs, that move their eyebrows more, a selection advantage over others and reinforce the 'puppy dog eyes' trait for future generations."

Dr Kaminski's previous research showed dogs moved their eyebrows significantly more when humans were looking at them compared to when they were not looking at them.

She said: "The AU101 movement is significant in the human-dog bond because it might elicit a caring response from humans but also might create the illusion of human-like communication."

Lead anatomist Professor Anne Burrows, at Duquesne University, Pittsburgh, USA, co-author of the paper, said: "To determine whether this eyebrow movement is a result of evolution, we compared the facial anatomy and behaviour of these two species and found the muscle that allows for the eyebrow raise in dogs was, in wolves, a scant, irregular cluster of fibres.

"The raised inner eyebrow movement in dogs is driven by a muscle which doesn't consistently exist in their closest living relative, the wolf.

"This is a striking difference for species separated only 33,000 years ago and we think that the remarkably fast facial muscular changes can be directly linked to dogs' enhanced social interaction with humans."

Dr Kaminski and co-author, evolutionary psychologist Professor Bridget Waller, also at the University of Portsmouth, previously mapped the facial muscular structure of dogs, naming the movement responsible for a raised inner eyebrow the Action Unit (AU) 101.

Professor Waller said: "This movement makes a dog's eyes appear larger, giving them a childlike appearance. It could also mimic the facial movement humans make when they're sad.

"Our findings show how important faces can be in capturing our attention, and how powerful facial expression can be in social interaction."

Co-author and anatomist Adam Hartstone-Rose, at North Carolina State University, USA, said: "These muscles are so thin that you can literally see through them - and yet the movement that they allow seems to have such a powerful effect that it appears to have been under substantial evolutionary pressure. It is really remarkable that these simple differences in facial expression may have helped define the relationship between early dogs and humans."

Co-author Rui Diogo, an anatomist at Howard University, Washington DC, USA, said: "I must admit that I was surprised to see the results myself because the gross anatomy of muscles is normally very slow to change in evolution, and this happened very fast indeed, in just some dozens of thousands of years."

Soft tissue, including muscle, doesn't tend to survive in the fossil record, making the study of this type of evolution harder.

The only dog breed in the study that did not have the muscle was the Siberian husky, which is among the more ancient dog breeds.

An alternative explanation for the human-dog bond could be that humans have a preference for other individuals that show the whites of their eyes, and that intense AU 101 movements expose the white part of the dog's eyes.

It is not known why or precisely when humans first brought wolves in from the cold and the evolution from wolf to dog began, but this research helps us understand some of the likely mechanisms underlying dog domestication.

Credit: 
University of Portsmouth

Breakthrough in understanding how human eyes process 3D motion

Scientists at the University of York have revealed that there are two separate 'pathways' for seeing 3D motion in the human brain, which allow people to perform a wide range of tasks such as catching a ball or avoiding moving objects.

The new insight could help further understanding into how to alleviate the effects of lazy eye syndrome, as well as how industry could develop better 3D visual displays and virtual reality systems.

Much of what scientists know about 3D motion comes from comparing the 'stereoscopic' signals generated by a person's eyes, but the exact way the brain processes these signals has not been fully understood in the past.

Scientists at the Universities of York, St Andrews, and Bradford have now shown that there are two ways the brain can compute 3D signals, not just one as previously thought.

They found that 3D motion signals separate into two 'pathways' in the brain at an early stage of the image transmission between the eyes and the brain.

Dr Alex Wade, from the University of York's Department of Psychology, said: "We know that we have two signals from our visual system that help the brain compute 3D motion - one is a fast signal and one is a slow signal.

"This helps us in a number of ways, with our hand-eye coordination for example, or so that we don't fall over navigating around objects. What we didn't know was what the brain did with these signals to allow us to understand what is going on in front of our eyes and react appropriately.

"Using brain imaging technology we were able to see that two 3D motion signals are separated out into two distinct pathways in the brain, allowing information to be extracted simultaneously and indicating to the visual system that it is encountering a 3D moving object."

The research team had previously shown that people with lazy eye syndrome might still be able to see 'fast' 3D motion signals, despite them having very poor 3D vision in general. Now that scientists understand how this pathway works, there is the potential to build tests to measure and monitor therapies aimed at curing the condition.

Dr Milena Kaestner, who conducted the work as part of her PhD at the University of York, said: "We were also surprised to see a link between 3D motion signals and how the brain receives information about colour. We now believe that colour might be more important in this type of visual processing than we previously thought.

"The visual pathways for colour have been thought to be independent of signals about motion and depth, but the research suggests that there could be a connection in the brain between these three visual properties."

Dr Julie Harris, from St Andrews University, said: "Knowing more about our visual system, and particularly how motion, depth and colour could all be connected in the brain, could help in a number of research areas into what happens when these pathways go wrong, resulting in visual disturbances that impact negatively on people's quality of life."

The research is published in the journal Proceedings of the National Academy of Sciences (PNAS).

Credit: 
University of York

Breakthrough paves way for new Lyme disease treatment

video: Virginia Tech biochemist Brandon Jutras has discovered the cellular component that contributes to Lyme arthritis, a debilitating and extremely painful condition that is the most common late stage symptom of Lyme disease.

Image: 
Virginia Tech

Virginia Tech biochemist Brandon Jutras has discovered the cellular component that contributes to Lyme arthritis, a debilitating and extremely painful condition that is the most common late stage symptom of Lyme disease.

Jutras found that as the Lyme-causing bacteria Borrelia burgdorferi multiplies, it sheds a cellular component called peptidoglycan that elicits a unique inflammatory response in the body.

"This discovery will help researchers improve diagnostic tests and may lead to new treatment options for patients suffering with Lyme arthritis," said Jutras, the lead author on the study. "This is an important finding, and we think that it has major implications for many manifestations of Lyme disease, not just Lyme arthritis."

Reported cases of Lyme disease, the most commonly reported vector-borne disease in the country, have increased by more than 6,000 percent in the past 15 years in the state of Virginia. The Centers for Disease Control and Prevention estimates that approximately 300,000 people are diagnosed with Lyme disease annually in the United States. Scientists predict that the number of people who become infected with Lyme will increase as the climate continues to change.

Jutras -- an assistant professor of biochemistry in the College of Agriculture and Life Sciences and an affiliated faculty member of the Fralin Life Sciences Institute -- and his collaborators recently published their findings in the Proceedings of the National Academy of Sciences.

The PNAS paper was four years in the making, and Jutras began this research during his post-doctoral fellowship in the lab of Christine Jacobs-Wagner, a Howard Hughes Medical Institute Investigator and professor at Yale University.

"Nowadays, nothing significant in science is accomplished without collaboration," Jutras said. Co-authors on this paper ranged from bench scientists to medical doctors and practicing physicians. Allen Steere, a Harvard doctor who originally identified Lyme disease in the 1970s, assisted Jutras with his research and provided access to patient samples.

This research may provide a new way to diagnose Lyme disease and Lyme arthritis in patients with vague symptoms, based on the presence of the cellular component peptidoglycan in synovial fluid.

The team found that peptidoglycan is a major contributor to Lyme arthritis in late-stage Lyme disease patients. Peptidoglycan is an essential component of bacterial cell walls. All bacteria have some form of peptidoglycan, but the form found in Borrelia burgdorferi, the bacterium that causes Lyme, has a unique chemical structure. When the bacteria multiply, they shed peptidoglycan into the extracellular environment because the bacterium's genome does not encode the proteins needed to recycle it back into the cell.

"We can actually detect peptidoglycan in the synovial fluid of the affected, inflamed joints of patients that have all the symptoms of Lyme arthritis but no longer have an obvious, active infection," Jutras said.

Peptidoglycan elicits an inflammatory response, and because the molecule persists in the synovial fluid, the body continues to respond without being able to clear it.

Receptors in our immune system sense bacterial products and, depending on the individual's genetic predispositions, may determine how strongly a patient's body reacts to peptidoglycan.

The next phase of Jutras' work is to find ways to destroy the peptidoglycan, or to intervene and prevent the response, which could eliminate Lyme disease symptoms. Jutras predicts that with either therapy, patients would start recovering sooner.

Clinical samples included in this study were obtained from patients whose Lyme disease was confirmed under CDC guidelines, but virtually none of them responded to oral and/or intravenous antibiotic treatment. The presence of peptidoglycan in these patients' synovial fluid may explain why some people experience symptoms of late-stage Lyme disease in the absence of an obvious infection. In such cases, the usual antibiotic treatments for Lyme disease would no longer be helpful, but this discovery might provide avenues for new treatments.

Members of the Jacobs-Wagner lab purified the peptidoglycan, removing all other bacterial components, and asked: is peptidoglycan, all on its own, capable of causing arthritis in a mouse model?

Within 24 hours post-injection, mice presented with dramatic joint inflammation, indicating that peptidoglycan can cause arthritis.

Jutras is continuing his research at Virginia Tech on peptidoglycan by more thoroughly studying its chemical composition to determine how it is able to persist in the human body. This will also help further the understanding of how this bacterial product contributes to other manifestations of Lyme disease.

"We are interested in understanding everything associated with how patients respond, how we can prevent that response, and how we could possibly intervene with blocking therapies or therapies that eliminate the molecule entirely," Jutras said.

Credit: 
Virginia Tech