Culture

The whisper of schizophrenia: Machine learning finds 'sound' words predict psychosis

A machine-learning method has discovered a hidden clue in people's language that predicts the later emergence of psychosis -- the frequent use of words associated with sound. The findings, by scientists at Emory University and Harvard University, were published in the journal npj Schizophrenia.

The researchers also developed a new machine-learning method to more precisely quantify the semantic richness of people's conversational language, a known indicator for psychosis.

Their results show that automated analysis of the two language variables -- more frequent use of words associated with sound and speaking with low semantic density, or vagueness -- can predict whether an at-risk person will later develop psychosis with 93 percent accuracy.

Even trained clinicians had not noticed that people at risk for psychosis use more words associated with sound than average, although abnormal auditory perception is a pre-clinical symptom.

"Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes," says Neguine Rezaii, first author of the paper. "The automated technique we've developed is a really sensitive tool to detect these hidden patterns. It's like a microscope for warning signs of psychosis."

Rezaii began work on the paper while she was a resident at Emory School of Medicine's Department of Psychiatry and Behavioral Sciences. She is now a fellow in Harvard Medical School's Department of Neurology.

"It was previously known that subtle features of future psychosis are present in people's language, but we've used machine learning to actually uncover hidden details about those features," says senior author Phillip Wolff, a professor of psychology at Emory. Wolff's lab focuses on language semantics and machine learning to predict decision-making and mental health.

"Our finding is novel and adds to the evidence showing the potential for using machine learning to identify linguistic abnormalities associated with mental illness," says co-author Elaine Walker, an Emory professor of psychology and neuroscience who researches how schizophrenia and other psychotic disorders develop.

The onset of schizophrenia and other psychotic disorders typically occurs in the early 20s, with warning signs -- known as prodromal syndrome -- beginning around age 17. About 25 to 30 percent of youth who meet criteria for a prodromal syndrome will develop schizophrenia or another psychotic disorder.

Using structured interviews and cognitive tests, trained clinicians can predict psychosis with about 80 percent accuracy in those with a prodromal syndrome. Machine-learning research is among the many ongoing efforts to streamline diagnostic methods, identify new variables, and improve the accuracy of predictions.

Currently, there is no cure for psychosis.

"If we can identify individuals who are at risk earlier and use preventive interventions, we might be able to reverse the deficits," Walker says. "There are good data showing that treatments like cognitive-behavioral therapy can delay onset, and perhaps even reduce the occurrence of psychosis."

For the current paper, the researchers first used machine learning to establish "norms" for conversational language. They fed a software program the online conversations of 30,000 users of Reddit, a social media platform where people have informal discussions about a range of topics. The program, known as Word2Vec, uses an algorithm to convert individual words into vectors, assigning each one a location in a semantic space based on its meaning. Words with similar meanings are positioned closer together than words with unrelated meanings.
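As a rough illustration of this embedding step, the snippet below trains a tiny Word2Vec model using the open-source gensim library; the toy corpus and hyperparameters are stand-ins, since the study's exact tooling and settings are not described here.

```python
# Toy Word2Vec sketch (assumes gensim 4.x); corpus and settings are
# illustrative placeholders, not the study's actual configuration.
from gensim.models import Word2Vec

# Each item is one tokenized conversational turn.
corpus = [
    ["i", "heard", "a", "strange", "noise", "yesterday"],
    ["the", "music", "was", "too", "loud", "to", "ignore"],
    ["we", "talked", "about", "the", "weather", "all", "day"],
]

model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, seed=1)

vec = model.wv["noise"]                      # the word's location in semantic space
print(model.wv.similarity("noise", "loud"))  # cosine similarity between two words
```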

The Wolff lab also developed a computer program to perform what the researchers dubbed "vector unpacking," or analysis of the semantic density of word usage. Previous work has measured semantic coherence between sentences. Vector unpacking allowed the researchers to quantify how much information was packed into each sentence.
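The authors' vector-unpacking algorithm itself is not spelled out in this article. As a loose stand-in for the underlying intuition -- word vectors that cluster tightly signal redundant, low-density speech -- one could score a sentence by the mean pairwise cosine similarity of its word vectors:

```python
# Illustrative proxy for (low) semantic density, NOT the authors'
# vector-unpacking method: higher mean pairwise similarity suggests
# more redundant, lower-density speech.
import itertools
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def redundancy(words, vectors):
    """Mean pairwise cosine similarity of the words' vectors."""
    sims = [cosine(vectors[a], vectors[b])
            for a, b in itertools.combinations(words, 2)]
    return sum(sims) / len(sims)

# Toy random vectors; in practice these would come from a trained Word2Vec model.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=100) for w in ["heard", "noise", "loud"]}
print(redundancy(["heard", "noise", "loud"], vectors))
```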

After generating a baseline of "normal" data, the researchers applied the same techniques to diagnostic interviews of 40 participants that had been conducted by trained clinicians, as part of the multi-site North American Prodrome Longitudinal Study (NAPLS), funded by the National Institutes of Health. NAPLS is focused on young people at clinical high risk for psychosis. Walker is the principal investigator for NAPLS at Emory, one of nine universities involved in the 14-year project.

The automated analyses of the participant samples were then compared to the normal baseline sample and the longitudinal data on whether the participants converted to psychosis.

The results showed that higher than normal usage of words related to sound, combined with a higher rate of using words with similar meaning, meant that psychosis was likely on the horizon.
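To make the two-variable setup concrete, here is a minimal hypothetical sketch using scikit-learn; the feature values are invented, and the authors' actual classifier and validation scheme may well differ.

```python
# Hypothetical two-feature classifier; all numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per participant: [sound-word usage rate, semantic density]
X = np.array([[0.9, 0.40], [1.1, 0.38], [2.3, 0.21], [2.0, 0.19]])
y = np.array([0, 0, 1, 1])  # 1 = participant later converted to psychosis

clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.1, 0.20]]))  # flags the high-sound, low-density profile
```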

Strengths of the study include the simplicity of using just two variables -- both of which have a strong theoretical foundation -- the replication of the results in a holdout dataset, and the high accuracy of its predictions, at above 90 percent.

"In the clinical realm, we often lack precision," Rezaii says. "We need more quantified, objective ways to measure subtle variables, such as those hidden within language usage."

Rezaii and Wolff are now gathering larger data sets and testing the application of their methods on a variety of neuropsychiatric diseases, including dementia.

"This research is interesting not just for its potential to reveal more about mental illness, but for understanding how the mind works -- how it puts ideas together," Wolff says. "Machine learning technology is advancing so rapidly that it's giving us tools to data mine the human mind."

Credit: 
Emory Health Sciences

A metal-free, sustainable approach to CO2 reduction

image: Carbon emissions.

Image: 
Chris LeBoutillier via PIXEL

Researchers in Japan present an organic catalyst for carbon dioxide (CO2) reduction that is inexpensive, readily available and recyclable. As the level of catalytic activity can be tuned by the solvent conditions, their findings could open up many new directions for converting CO2 to industrially useful organic compounds.

Sustainability is a key goal in the development of next-generation catalysts for CO2 reduction. One promising approach that many teams are focusing on is a reaction called the hydrosilylation of CO2. However, most catalysts developed to date for this purpose have the disadvantage of containing metals that are expensive, not widely available and potentially detrimental to the environment.

Now, scientists at Tokyo Institute of Technology (Tokyo Tech) and the Renewable Energy Research Center at Japan's National Institute of Advanced Industrial Science and Technology (AIST) have demonstrated the possibility of using a fully recyclable, metal-free catalyst.

By comparing how well different organic catalysts could achieve hydrosilylation of CO2, the team identified one that surpassed all others in terms of selectivity and yield. This catalyst, called tetrabutylammonium (TBA) formate, achieved 99% selectivity and produced the desired formate product with a 98% yield. The reaction occurred rapidly (within 24 hours) and under mild conditions, at a temperature of 60°C.

Remarkably, the catalyst has a turnover number of up to 1800, which is more than an order of magnitude higher than previous results.
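For readers unfamiliar with the metric, the turnover number has its standard textbook definition (general catalysis, not a formula taken from the paper):

```latex
\mathrm{TON} = \frac{n_{\mathrm{product}}}{n_{\mathrm{catalyst}}} \approx 1800
```

In other words, each molecule of TBA formate mediates up to roughly 1,800 CO2-reduction events before losing activity.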

In 2015, team leader Ken Motokura of Tokyo Tech's Department of Chemical Science and Engineering and his colleagues found that formate salts show promising catalytic activity. It was this hint that provided the basis for the current study. Motokura explains: "Although we did expect formate salts to exhibit good catalytic activity, TBA formate showed much higher selectivity, stability and activity that went beyond our expectations."

In the current study, the researchers found that the catalyst can be made reusable by using toluene as a solvent. They also showed that Lewis basic solvents such as N-methylpyrrolidone (NMP) and dimethyl sulfoxide (DMSO) can accelerate the reaction, meaning that the catalytic system is tunable.

Overall, the findings, published in the online edition of the journal ACS Sustainable Chemistry & Engineering, offer a new, environmentally friendly path to reducing CO2 while yielding industrially important formate products.

Silyl formate can be easily converted to formic acid, which can serve as an important hydrogen carrier, for example, in fuel cells. The high reactivity of silyl formate enables its conversion into intermediates for the preparation of organic compounds such as carboxylic acids, amides and alcohols.

"This efficient transformation technique of CO2 to silyl formate will expand the possibilities for CO2 utilization as a chemical feedstock," Motokura says.

Credit: 
Tokyo Institute of Technology

Special fibroblasts help pancreatic cancer cells evade immune detection

image: Imaging mass cytometry (IMC) staining of a human PDAC section, using metal-conjugated antibodies to mark different cell types, and apCAF markers. The arrows point to examples of apCAFs in the PDAC stroma.

Image: 
Tuveson lab/CSHL 2019

Cold Spring Harbor, NY -- Pancreatic ductal adenocarcinoma (PDAC) is the fourth leading cause of cancer-related deaths in the world. Mostly chemoresistant, PDAC so far has no effective treatment. Understanding the connective tissue, called stroma, that surrounds, nurtures, and even protects PDAC tumors, is key to developing effective therapeutics.

"PDAC patients are diagnosed really late, so we don't know they're sick until the very end stages," said Ela Elyada, a postdoctoral fellow in Dr. David Tuveson's lab at Cold Spring Harbor Laboratory (CSHL). "We can't diagnose patients early enough because we don't have tools, and they don't respond to drugs. One barrier to the drugs is the fibroblasts in the stroma."

PDAC is characterized by an abundance of non-malignant stromal cells, and fibroblasts are one of the most common types of stromal cells. "We have a lot of fibroblast in pancreatic cancer, unlike other cancers which are mostly cancer cells," Elyada said. These cancer-associated fibroblasts (CAFs) can help cancer cells proliferate, survive and evade detection by the immune system.

The insidious role CAFs seem to play in protecting cancer cells labels them as bad, but completely obliterating CAFs in mice also worsened their cancers. Elyada wanted to investigate the nature of CAFs: are they good or bad? To crack the case, she, Associate Professor Paul Robson at the Jackson Laboratory, and colleagues used single-cell RNA sequencing to classify the fibroblasts into three distinct sub-populations, identifying specific functions and characteristics unique to each. These include two previously identified types of CAF -- myofibroblastic CAFs (myCAFs) and inflammatory CAFs (iCAFs) -- and a newly discovered type called antigen-presenting CAFs (apCAFs). The apCAFs were present in both mouse and human PDAC. Their findings are published in the journal Cancer Discovery.

While the newly identified apCAFs had the properties of fibroblasts, Elyada and her team found that they were different from the other fibroblast sub-populations. They expressed MHC class II genes, which are usually only expressed by specialized immune cells. Cells with MHC class II molecules on their surface can present antigens, or foreign peptides from viruses and bacteria, to helper T-cells. Upon detecting an antigen, the helper T-cell activates and recruits cytotoxic T-cells and other immune elements to attack and eliminate the invader. But the apCAFs present in pancreatic tumors lack other components needed to activate T-cells. Elyada and her team hypothesize that this may result in incompletely activated T-cells that are unable to properly eliminate the cancer cells.

"We showed that apCAFs have specific capabilities of interacting with T-cells in a way that other CAFs don't," said Elyada. The research team now wants to know how the apCAFs are interacting with T-cells and the immune system. "If we can show that the apCAFs are somehow inhibiting the activity of T-cells, we can come up with therapies that specifically target that type of CAFs," Elyada proposed. "We can also combine it with other, complementary immune therapies to make them more effective."

Credit: 
Cold Spring Harbor Laboratory

Bitcoin causing CO2 emissions comparable to Hamburg

The use of Bitcoin causes around 22 megatons of CO2 emissions annually -- comparable to the total emissions of cities such as Hamburg or Las Vegas. That is the conclusion of the most detailed analysis to date of the cryptocurrency's carbon footprint. For their study, an interdisciplinary team of researchers at the Technical University of Munich (TUM) analyzed data such as the IPO filings of hardware manufacturers and the IP addresses of Bitcoin "miners".

Although Bitcoin is a virtual currency, the energy consumption associated with its use is very real. For a Bitcoin transfer to be executed and validated, a mathematical puzzle must be solved by an arbitrary computer in the global Bitcoin network. The network, which anyone can join, rewards the puzzle solvers in Bitcoin. The computing capacity used in this process - known as Bitcoin mining - has increased rapidly in recent years. Statistics show that it quadrupled in 2018 alone.

Consequently, the Bitcoin boom raises the question of whether the cryptocurrency is imposing an additional burden on the climate. Several studies have attempted to quantify the CO2 emissions caused by Bitcoin mining. "These studies are based on a number of approximations, however," says Christian Stoll, who conducts research at the Technical University of Munich (TUM) and the Massachusetts Institute of Technology (MIT).

"Detective work" to track down the power consumption

Therefore, a team of management sciences and informatics researchers at TUM has carried out the most detailed calculation of the carbon footprint of the Bitcoin system to date. Working like detectives, they proceeded step by step to gather conclusive data.

The team began by calculating the power consumption of the network. This depends primarily on the hardware used for Bitcoin mining. "Today special systems are used, known as ASIC-based miners," explains Stoll. In 2018 the three manufacturers who control the ASIC miner market planned IPOs. The team used the mandatory IPO filings to calculate the market shares of the companies' respective products. The study also had to consider whether the mining was being done by someone running just one miner at home or in one of the large-scale "farms" set up in recent years by professional operators. "In those operations, extra energy is needed just for the cooling of the data center," says Stoll. To investigate the orders of magnitude involved, the team used statistics released by a public pool of different miners showing the computing power of its members.

68 percent of computing power located in Asia

The researchers determined the annual electricity consumption by Bitcoin, as of November 2018, to be about 46 TWh. And how much CO2 is emitted when this energy is generated? Here, too, the research team wanted to go beyond mere estimates. The key question, therefore: Where are the miners located?

Once again, live tracking data from the mining pools provided the decisive information. "In these groups, miners combine their computing power in order to get a quicker turn in the reward for solving puzzles - similar to people in lottery pools," explains Stoll. The IP addresses in the statistics published by the two biggest pools showed that miners tend to join pools in or near their home countries. Based on these data, the team was able to localize 68 percent of the Bitcoin network computing power in Asian countries, 17 percent in European countries, and 15 percent in North America. The researchers cross-checked this conclusion against the results of another method by localizing the IP addresses of individual miners using an internet of things search engine. They then combined their results with statistics on the carbon intensity of power generation in the various countries.
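As a back-of-the-envelope check on how these pieces combine, the sketch below multiplies the measured consumption by a share-weighted carbon intensity. The regional intensity values are illustrative assumptions, not the study's country-level statistics.

```python
# Rough reconstruction of the footprint arithmetic; the carbon-intensity
# figures (g CO2 per kWh) are assumed for illustration only.
consumption_twh = 46                     # annual consumption as of Nov 2018
shares = {"Asia": 0.68, "Europe": 0.17, "North America": 0.15}
intensity = {"Asia": 550, "Europe": 300, "North America": 450}   # assumed

weighted = sum(shares[r] * intensity[r] for r in shares)   # g CO2 per kWh
megatons = consumption_twh * 1e9 * weighted / 1e12         # kWh x g/kWh -> Mt
print(f"{megatons:.1f} Mt CO2 per year")    # ~22.7 Mt with these assumptions
```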

"Linking large-scale mining operations to renewable energy production"

The conclusion of the study: The Bitcoin system has a carbon footprint of between 22 and 22.9 megatons per year. That is comparable to the footprint of such cities as Hamburg, Vienna or Las Vegas.

"Naturally there are bigger factors contributing to climate change. However, the carbon footprint is big enough to make it worth discussing the possibility of regulating cryptocurrency mining in regions where power generation is especially carbon-intensive," says Christian Stoll. "To improve the ecological balance, one possibility might be to link more mining farms to additional renewable generating capacity."

Credit: 
Technical University of Munich (TUM)

Cooking vegetables: healthier with extra virgin olive oil

image: The Natural Antioxidant Group of the University of Barcelona (UB) participated in this study.

Image: 
University of Barcelona

Cooking vegetables in a sofrito (sauté) with extra virgin olive oil favours the absorption and release of bioactive compounds from its traditional ingredients (garlic, onion and tomato), according to a study conducted by a research team of the Faculty of Pharmacy and Food Sciences at the University of Barcelona (UB), the Physiopathology of Obesity and Nutrition Networking Biomedical Research Centre (CIBERobn) and the Diabetes and Associated Metabolic Diseases Networking Biomedical Research Centre (CIBERDEM), led by tenured lecturer Rosa M. Lamuela.

These results, published in the journal Molecules, offer insight into the mechanisms by which gastronomy could play a relevant role in the health-improving effects of the Mediterranean diet.

The Mediterranean diet, characterized by a high intake of phytochemicals from vegetables, fruits and legumes, has been linked to improvements in cardiovascular and metabolic health. This relation is mainly based on the results of the PREDIMED study, a multicentre clinical trial carried out from 2003 to 2011 with more than 7,000 people, in which Rosa M. Lamuela also took part.

However, the health effects of the Mediterranean diet have been hard to reproduce in non-Mediterranean populations -- probably, according to the researchers, due to differences in cooking practices. The researchers therefore set out to assess whether Mediterranean gastronomy brings health benefits not only through its foods but also through the way they are cooked.

Sofrito, a key element in the Mediterranean diet

In this context, the objective of the study was to assess the effect of extra virgin olive oil on the bioactive compounds of tomato, onion and garlic -- traditional ingredients of sofrito, one of the key cooking techniques in the Mediterranean diet. According to the researchers, this sauce has forty different phenolic compounds and a high amount of carotenoids, and its consumption is associated with an improvement in cardiovascular risk parameters and insulin sensitivity.

"The main result of the study is that cooking vegetables with extra virgin olive oil favours the bioactive compounds, such as carotenoids and polyphenols that are present in vegetables we find in sofrito, to move to the olive oil, which enables the absorption and bioactivity of these compounds", says Rosa M. Lamuela, director of the Institute for Research on Nutrition and Food Safety (INSA-UB).

From vegetables to olive oil

The study also revealed a new trait of olive oil. Until now, researchers had reported that the oil and onion can produce isomers -- molecules with the same molecular formula but different properties -- from carotenoids, variants that are more bioavailable and have a higher antioxidant capacity. The study showed, however, that the oil plays an essential role in this process, not only for carotenoids but also for polyphenols, which migrate from the vegetables to the oil.

Anti-inflammatory effects

These results could explain why previous studies by this research group found that the presence of oil increases the anti-inflammatory effect of tomato sauce. "We saw that this increase can occur due to the migration of bioactive compounds (carotenoids and polyphenols) from the tomato to the oil during the cooking process, which favours the absorption of these compounds", concludes José Fernando Rinaldi de Alvarenga, INSA-UB member and first author of the study.

Credit: 
University of Barcelona

Secondhand horror: Indirect predator odor triggers reproductive changes in bank voles

image: The study was conducted at Konnevesi Research Station of the University of Jyväskylä.

Image: 
University of Jyvaskyla

Reproducing in a fearful world is tricky. How do rodents get information about the prevailing risk of death, and how do they respond to it? A research team of evolutionary biologists from the University of Jyväskylä, Finland, and the University of Vienna, Austria, reported that rodent mothers are more likely to become pregnant after smelling odors produced by conspecific males frightened by predators. The study was published on 7 June 2019 in the open access journal Ecosphere.

If your neighbor is scared, danger is near and acute. One evolutionary solution is to speed up reproduction in the hope that at least one of your offspring survives. These results bring new information about the proximate and ultimate explanations of small-mammal behavioral responses under predation risk.

Fear of being eaten has the power to shape populations and drive evolution. The effect the authors report is large: mothers exposed to scared males had a higher successful insemination rate compared to unexposed control mothers.

Chemical messages

Predation involves more than just predators consuming prey. The study shows that voles are able to determine the difference between the smell of a predator, the smell of a non-stressed vole, and the smell of a vole who encountered a predator.

The ability to distinguish between these cues allows voles to make a more nuanced risk assessment of their current environment. The authors argue that voles perceive the odor of a fearful conspecific to be more reliable as an indicator of acute danger than the pure cue of predator presence in the breeding area. This olfactory "eavesdropping" allows voles to make use of the experiences of conspecifics around them.

When the risk of death is high, it may be better to invest in offspring. Females changed their reproductive strategy from cautious to risky, towards terminal investment, after encountering the smell of a fearful vole.

Implications of the findings

Professor Peter Banks and researcher Catherine Price from the University of Sydney, both of whom work on predator-prey relationships, commented on these findings:

"The finding that indirect cues of predation can have such profound influences on animal reproduction is really very interesting. There has been recent evidence that animals can use information from other non-predator individuals to assess their predation risk. But this experiment demonstrates how olfactory "eavesdropping" on scared conspecifics within a population can lead to impacts on breeding and potentially population-level processes. Even more interesting is that this information is passed between individuals socially which has some interesting consequences for the evolution of such communication."

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

MRI plays a role in diagnosis of cocaine-related damage to the heart

OAK BROOK, Ill. - Cardiac MRI has a pivotal role to play in the diagnosis of cocaine-induced cardiovascular diseases, according to an article published in the journal Radiology: Cardiothoracic Imaging.

Cocaine abuse is a significant public health problem around the world. In the European Union alone, prevalence of cocaine consumption was estimated to be around 3.4 million among people ages 15 to 64 years in 2017, and there is evidence that consumption is increasing across every socioeconomic class. The National Survey on Drug Use and Health reported that in 2014 there were an estimated 1.5 million current cocaine users aged 12 or older in the United States.

The abuse of cocaine can have particularly devastating effects on the heart. These effects can be both acute, with severe and sudden-onset attacks, and chronic, developing and worsening over years. Examples of cocaine's acute effects include heart attacks and acute myocarditis, or inflammation of the heart muscle. A common chronic effect is cardiomyopathy, a condition in which the heart muscle becomes enlarged, thick or rigid.

Being able to distinguish between these conditions is vital for proper intervention, yet in many cases diagnosis is difficult due to overlapping symptoms. Typically, a thorough diagnostic workup is necessary, featuring clinical history, laboratory tests, an electrocardiogram (EKG), stress testing, non-invasive imaging modalities and coronary angiography.

As a non-invasive imaging method, cardiac MRI is well positioned to be central to the diagnostic workup of cocaine abuse-related heart problems. Its greatest strength, according to study senior author Marco Francone, M.D., Ph.D., from Sapienza University in Rome, is its ability to provide a microscopic view into living tissue, helping clinicians differentiate among a wide spectrum of heart diseases.

"This is a unique feature of cardiac MRI which sets it apart from other imaging modalities and places it in a pivotal role in the management of substance abuse patients and, generally speaking, of structural heart diseases," he said.

Prof. Francone and colleagues recently conducted a review of existing literature on the use of cardiac MRI in cocaine abuse evaluation. The research showed that cardiac MRI can detect cocaine's effects on the cardiovascular system and help differentiate between acute and chronic conditions.

"Cardiac MRI's ability to distinguish between the different cardiac manifestations of cocaine abuse is important because they all have different patterns," Prof. Francone said. "Even though all these pathologies have cocaine abuse as primary cause, the myocardial damage and, therefore, clinical course are completely different, ranging from complete recovery to heart failure."

Cardiac MRI has particularly important diagnostic and prognostic implications for the types of cardiomyopathies that constitute the chronic effects of cocaine abuse. It permits evaluation of ventricular function, a measure of how well the heart is pumping blood to the rest of the body. Assessing ventricular function helps identify different phases of chronic cardiomyopathy. Cardiac MRI can also provide information on the myocardial tissue, allowing for investigation of the underlying causes of suboptimal heart function.

"The real challenge is early diagnosis of cocaine-induced cardiomyopathy and, in particular, its asymptomatic stage," Prof. Francone said. "Early diagnosis can indeed have a significant impact on clinical outcome, preventing evolution to heart failure."

Diagnosis of cocaine-induced changes to the heart should integrate data such as age and gender, clinical assessment and history of drug abuse, and laboratory findings with results from cardiac MRI.

Credit: 
Radiological Society of North America

Exercise may have different effects in the morning and evening

video: Researchers from the University of Copenhagen have learned that the effect of exercise may differ depending on the time of day it is performed. In mice they demonstrate that exercise in the morning results in an increased metabolic response in skeletal muscle, while exercise later in the day increases energy expenditure for an extended period of time.

Image: 
University of Copenhagen, Faculty of Health and Medical Sciences.

Researchers from the University of Copenhagen have learned that the effect of exercise may differ depending on the time of day it is performed. In mice they demonstrate that exercise in the morning results in an increased metabolic response in skeletal muscle, while exercise later in the day increases energy expenditure for an extended period of time.

We probably all know how important a healthy circadian rhythm is. Too little sleep can have severe health consequences. But researchers are still making new discoveries confirming that the body's circadian clock affects our health.

Now, researchers from the University of Copenhagen -- in collaboration with researchers from the University of California, Irvine -- have learned that the effect of exercise may differ depending on the time of day it is performed. Studies in mice reveal that the effect of exercise performed at the beginning of the mouse's dark/active phase, corresponding to our morning, differs from the effect of exercise performed at the beginning of the light/resting phase, corresponding to our evening.

'There appears to be rather significant differences between the effect of exercise performed in the morning and evening, and these differences are probably controlled by the body's circadian clock. Morning exercise initiates gene programs in the muscle cells, making them more effective and better capable of metabolising sugar and fat. Evening exercise, on the other hand, increases whole body energy expenditure for an extended period of time', says one of the researchers behind the study, Associate Professor Jonas Thue Treebak from the Novo Nordisk Foundation Center for Basic Metabolic Research.

Morning Exercise Is Not Necessarily Better than Evening Exercise

The researchers measured a number of effects in the muscle cells, including the transcriptional response and effects on metabolites. The results show that responses are far stronger in both areas following exercise in the morning, and that this is likely to be controlled by a central mechanism involving the protein HIF1-alpha, which directly regulates the body's circadian clock.

Morning exercise appears to increase the ability of muscle cells to metabolise sugar and fat, and this type of effect interests the researchers in relation to people who are severely overweight or have type 2 diabetes.

On the other hand, the results also show that exercise in the evening increases energy expenditure in the hours after exercise. Therefore, the researchers cannot necessarily conclude that exercise in the morning is better than exercise in the evening, Jonas Thue Treebak stresses.

'On this basis we cannot say for certain which is best, exercise in the morning or exercise in the evening. At this point, we can only conclude that the effects of the two appear to differ, and we certainly have to do more work to determine the potential mechanisms for the beneficial effects of exercise training performed at these two time-points. We are eager to extend these studies to humans to identify if timed exercise can be used as a treatment strategy for people with metabolic diseases', he explains.

Credit: 
University of Copenhagen - The Faculty of Health and Medical Sciences

Hubble observes tiny galaxy with big heart

image: Nestled within this field of bright foreground stars lies ESO 495-21, a tiny galaxy with a big heart. ESO 495-21 is just 3000 light-years across, a fraction of the size of the Milky Way, but that is not stopping the galaxy from furiously forming huge numbers of stars.

Image: 
ESA/Hubble, NASA

Nestled within this field of bright foreground stars lies ESO 495-21, a tiny galaxy with a big heart. ESO 495-21 may be just 3000 light-years across, but that is not stopping the galaxy from furiously forming huge numbers of stars. It may also host a supermassive black hole; this is unusual for a galaxy of its size, and may provide intriguing hints as to how galaxies form and evolve.

Located about 30 million light-years away in the constellation of Pyxis (The Compass), ESO 495-21 is a dwarf starburst galaxy -- this means that it is small in size, but ablaze with rapid bursts of star formation. Starburst galaxies form stars at exceptionally high rates, churning out newborn stars up to 1000 times faster than the Milky Way.

Hubble has studied the bursts of activity within ESO 495-21 several times. Notably, the space telescope has explored the galaxy's multiple super star clusters, very dense regions only a few million years old and packed with massive stars. These spectacular areas can have a huge impact on their host galaxies. Studying them allows astronomers to investigate the earliest stages of their evolution, in a bid to understand how massive stars form and change throughout the Universe.

As well as hosting the cosmic fireworks that are super star clusters, ESO 495-21 also may harbour a supermassive black hole at its core. Astronomers know that almost every large galaxy hosts such an object at its centre, and, in general, the bigger the galaxy, the more massive the black hole. Our home galaxy, the Milky Way, houses a supermassive black hole, Sagittarius A*, which is over four million times as massive as the Sun. ESO 495-21 (also known as Henize 2-10) is a dwarf galaxy, only three percent the size of the Milky Way, and yet there are indications that the black hole at its core is over a million times as massive as the Sun -- an extremely unusual scenario.

This black hole may offer clues as to how black holes and galaxies evolved in the early Universe. The origin of the central supermassive black holes in galaxies is still a matter of debate -- do the galaxies form first and then crush material at their centres into black holes, or do pre-existing black holes gather galaxies around them? Do they evolve together -- or could the answer be something else entirely?

With its small size, indistinct shape, and rapid starburst activity, astronomers think ESO 495-21 may be an analogue for some of the first galaxies to have formed in the cosmos. Finding a black hole at the galaxy's heart is therefore a strong indication that black holes may have formed first, with galaxies later developing and evolving around them.

The data comprising this image were gathered by two of the instruments aboard the NASA/ESA Hubble Space Telescope: the Advanced Camera for Surveys and the now-decommissioned Wide Field Planetary Camera 2.

Credit: 
ESA/Hubble Information Centre

Antibody treatment allows transplant of mismatched stem cells, tissues in mice

A combination of six antibodies can successfully prepare mice to accept blood and immune stem cells from an immunologically mismatched donor, according to a study by researchers at the Stanford University School of Medicine.

The recipient animals can then accept an organ or tissue transplant matching that of the donor stem cells without requiring ongoing immune suppression.

If the findings are replicated in humans, the work could transform the treatment of people with immune or blood disorders while also vastly increasing the pool of available organs for those who need transplants.

The work builds on a series of recent studies conducted at Stanford that may pave the way for this type of stem cell transplant, known as a hematopoietic stem cell transplant, to safely treat a variety of disorders. The technique is now primarily used to treat cancers of the blood and immune system.

"Radiation and chemotherapy are the current standard for preparing patients for a bone marrow transplant," said Irving Weissman, MD, professor of pathology and developmental biology at Stanford. "For the past decade, we have been working to step-by-step replace these nonselective and dangerous treatments with targeted antibodies. This study is an important milestone that began with our isolation of purified blood stem cells 30 years ago."

Weissman, the director of Stanford's Institute for Stem Cell Biology and Regenerative Medicine and of the Ludwig Center for Cancer Stem Cell Research and Medicine at Stanford, is the senior author of the study, which will be published online June 13 in Cell Stem Cell. Graduate student Benson George is the lead author.

"This study indicates that it's possible to perform these transplants in mice in a much gentler way without requiring a complete match between the donor and the recipient stem cells," George said. "It also opens the door to increasing the availability of solid organs for transplant."

Harsh process

Hematopoietic stem cells are a rare type of stem cell in the bone marrow that give rise to the progenitors of all blood and immune cells throughout our lives. It's been known for some time that people with genetic blood disorders such as sickle cell anemia or thalassemia, or those with autoimmune diseases or immune deficiencies, can be cured by a transplant of healthy hematopoietic stem cells. But in order for the transplanted cells to settle in to the recipient's bone marrow -- a process known as engrafting -- it's first necessary to eliminate key components of the recipient's own blood and immune system.

Traditionally this has been accomplished with toxic doses of chemotherapy or radiation, or both. But this pretreatment, known as conditioning, is so harsh that clinicians have been hesitant to resort to hematopoietic stem cell transplantation unless the patient's life is threatened by their disease. For this reason, most transplant recipients have been people with cancers such as leukemia or lymphoma.

Hematopoietic stem cells for transplant are typically obtained by collecting them either from circulating blood (although the cells usually live in the bone marrow, they can be induced by specific drugs to enter the blood) or from the bone marrow itself, which is why the procedure is often referred to as a bone marrow transplant. In either case, the recipient receives a mixture of cells from the donor, only some of which are hematopoietic stem cells.

Unfortunately, some of those passenger cells include a type of immune cell called a T cell, which Weissman and others have shown is responsible for a life-threatening transplant complication called graft-versus-host disease. This occurs when the donor's cells recognize the recipient's tissues as foreign and begin to attack them. Clinicians try to reduce the likelihood of graft-versus-host disease by using donor stem cells that immunologically match the recipient as closely as possible. But those matches can be difficult to find, particularly for some ethnic minorities. Although it's possible to do transplants with less-than-perfect matches -- a situation known as a haploidentical transplant -- the recipient then requires ongoing treatment with strong immunosuppressive drugs to prevent rejection or graft-versus-host disease.

Although pure hematopoietic stem cell transplants avoid this outcome, they are more difficult to obtain in sufficient quantities, and they engraft less readily than whole bone marrow.

"We wanted to eliminate three major barriers: the toxicity of the conditioning procedure, the need to have an immunologically matched donor and the difficulties in transplanting purified hematopoietic stem cells," George said.

Hopes for first-line therapy

The researchers found that treating mice with a combination of six specific antibodies safely and efficiently eliminated several types of immune cells in the animals' bone marrow and allowed haploidentical pure hematopoietic stem cells to engraft and begin producing blood and immune cells without the need for continued immunosuppression.

The degree of difference in a haploidentical transplant is similar to what naturally occurs between parent and child, or between about half of siblings.
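These "half match" frequencies follow from standard Mendelian inheritance of the two parental HLA haplotypes -- textbook genetics rather than a result of this study:

```latex
P(\text{siblings share both haplotypes}) = \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{1}{4},
\qquad
P(\text{share exactly one}) = \tfrac{1}{2},
\qquad
P(\text{share none}) = \tfrac{1}{4}
```

Each child inherits one of two haplotypes from each parent, so a parent and child always share exactly one, and about half of sibling pairs are haploidentical.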

"This finding suggests that, if these results are replicated in humans, we could have a child with sickle cell anemia in the clinic and, rather than considering stem cell transplant as a last resort and contingent on finding a perfectly matching donor, we could instead turn to transplant with stem cells from one of the child's parents as a first-line therapy," George said.

Additional experiments showed that the mice treated with the six antibodies could also accept completely mismatched purified hematopoietic stem cells, such as those that might be obtained from an embryonic stem cell line.

After transplantation with the mismatched stem cells, the recipient mice developed blood and immune systems that contained cells from both the donor and the recipient. This allowed them to subsequently accept transplants of heart tissue from animals genetically identical to the donor animals.

"The immune systems exist together in a kind of a symbiosis," George said, "and they view both the donor and recipient tissue as 'self.' This suggests that it may be possible to make haploidentical stem transplants both safe and achievable in human patients without the need for either conditioning with radiation or chemotherapy or subsequent immunosuppression."

The researchers are next planning to conduct similar antibody-mediated conditioning followed by transplant with mismatched hematopoietic stem cells in large animal models.

If the technique one day clears the hurdles necessary to prove it is safe and effective in humans, the researchers envision a time when people who need transplanted organs could first undergo a safe, gentle transplant with hematopoietic stem cells derived in the laboratory from embryonic stem cells. The same embryonic stem cells could also then be used to generate an organ that would be fully accepted by the recipient without requiring the need for long-term treatment with drugs to suppress the immune system. In particular, Hiromitsu Nakauchi, MD, PhD, a professor of genetics at Stanford, is studying how to generate human organs in large animals from laboratory-grown stem cells.

"With support by the California Institute for Regenerative Medicine, we've been able to make important advances in human embryonic stem cell research," Weissman said. "In the past, these stem cell transplants have required a complete match to avoid rejection and reduce the chance of graft versus host disease. But in a family with four siblings the odds of having a sibling who matches the patient this closely are only one in four. Now we've shown in mice that a 'half match,' which occurs between parents and children or in two of every four siblings, works without the need for radiation, chemotherapy or ongoing immunosuppression. This may open up the possibility of transplant for nearly everyone who needs it. Additionally, the immune tolerance we're able to induce should in the future allow the co-transplantation of hematopoietic stem cells and tissues, such as insulin-producing cells or even organs generated from the same embryonic stem cell line."

Weissman is the Virginia & D.K. Ludwig Professor for Clinical Investigation in Cancer Research. He is a member of Stanford Bio-X, the Stanford Cardiovascular Institute and the Stanford Cancer Institute.

Credit: 
Stanford Medicine

Oxygen shapes arms and legs: Origins of a new developmental mechanism called 'interdigital cell death'

image: The balance between growth of the digits (black arrows) and interdigital regions (red arrows) determines whether the hands and feet form webbing in amphibians. Amniotes employ another strategy: their interdigital regions are actively removed by cell death (dark blue). In some species, inhibition of cell death (light blue) was important for the evolution of new limb shapes.

Image: 
©Tokyo Tech

Amphibians, such as frogs and newts, form their hands and feet by differential growth: differences in the growth rate between the digital and interdigital regions determine the final proportions of these structures. In contrast, amniotes such as birds and mammals also employ cell death to shape their limbs. Removing part of the limb tissue by cell death allowed for the evolution of a great variety of limb shapes, such as the lobed fingers of coots, and even the removal of some fingers in horses and camels (Figure 1). However, how this new mechanism appeared was still unknown. Now, researchers have identified a surprising factor that could have been crucial for the appearance of interdigital cell death in tetrapods during evolution: the amount of oxygen surrounding the embryo.

How can oxygen shape the limbs?

The study, published in Developmental Cell, is a collaboration between the teams of professors Mikiko Tanaka from Tokyo Tech, Haruki Ochi from Yamagata University and James Hanken from Harvard University. Their goal was to understand the role of environmental oxygen in the evolution of the limbs of tetrapods. Graduate student Ingrid Cordeiro and her colleagues first used the African clawed frog (Xenopus laevis) as a model, as this amphibian has interdigital membranes in its feet. When the frog tadpoles were kept under high oxygen levels, some cells in their interdigital regions died, but not in other parts of their bodies. Increasing the number of blood vessels -- the source of oxygen in the tissue -- also had the same effect. These results show how a change in the environment surrounding the tadpoles can affect their development in very specific ways.

Interestingly, production of reactive oxygen species (ROS) was key in this process. ROS are often thought of as detrimental to health, and ROS production is increased in conditions such as aging and infertility. However, ROS are not always villains -- they are a natural part of metabolism and can serve as signals to cells, including during the formation of the embryo. The researchers showed that ROS production during interdigital cell death is required in birds. Combined with previous reports in mammals, it is now possible to hypothesize that this mechanism is shared by all amniotes, including humans. In contrast, two amphibians, the African clawed frog and the Japanese fire-bellied newt, had low ROS production in their interdigital regions.

Amphibian ecology and the evolution of the limbs

Many amphibians are aquatic for part of their lives, breathing dissolved oxygen from the water. In contrast, amniotes such as chickens develop in an egg lined with membranes full of blood vessels that exchange gases with the air. Other amniotes, such as mice and humans, have a placenta, a structure that allows them to get oxygen directly from the mother instead. In these amniotes, oxygen is obtained very efficiently in comparison with amphibian tadpoles. To understand whether this factor was correlated with the presence of cell death in the limbs, the researchers looked for an amphibian that is naturally exposed to more oxygen than aquatic tadpoles are.

The answer was found in coqui frogs, which live in a lab colony at the Museum of Comparative Zoology at Harvard University. These frogs grow without a tadpole stage in terrestrial eggs, a process called direct development, and breathe oxygen from the air. Interestingly, the coqui frogs had dying cells in the interdigital region and high ROS production, like chickens and mice (Figure 2). These data revealed that ecological features -- where the embryos develop and how much oxygen surrounds them -- can have a direct effect on the presence of cell death in the limbs during development.

The first step of a new evolutionary mechanism

However, additional steps might have taken place during evolution to make cell death a necessary mechanism for shaping hands and feet. "It is not clear if the dying cells are integrated into limb development in these frogs," said Mikiko Tanaka. "What we can say is that, during evolution, removal of the interdigital region by cell death became essential to shape the limbs of amniotes." The researchers suggest that cell death in the limbs might have appeared as a byproduct of the high oxygen levels and ROS in this tissue, and was possibly a first step in this evolutionary process. They point out that further studies are necessary to investigate how cell death became an integral part of limb development in amniotes, including modulation of the pathways that respond to ROS and oxidative stress.

Credit: 
Tokyo Institute of Technology

Two hours a week is key dose of nature for health and wellbeing

Spending at least two hours a week in nature may be a crucial threshold for promoting health and wellbeing, according to a new large-scale study.

Research led by the University of Exeter, published in Scientific Reports and funded by NIHR, found that people who spend at least 120 minutes in nature a week are significantly more likely to report good health and higher psychological wellbeing than those who don't visit nature at all during an average week. However, no such benefits were found for people who visited natural settings such as town parks, woodlands, country parks and beaches for less than 120 minutes a week.

The study used data from nearly 20,000 people in England and found that it didn't matter whether the 120 minutes was achieved in a single visit or over several shorter visits. It also found that the 120-minute threshold applied to both men and women, to older and younger adults, across different occupational and ethnic groups, among those living in both rich and poor areas, and even among people with long-term illnesses or disabilities.

Dr Mat White, of the University of Exeter Medical School, who led the study, said: "It's well known that getting outdoors in nature can be good for people's health and wellbeing but until now we've not been able to say how much is enough. The majority of nature visits in this research took place within just two miles of home so even visiting local urban greenspaces seems to be a good thing. Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit."

There is growing evidence that merely living in a greener neighbourhood can be good for health, for instance by reducing air pollution. The data for the current research came from Natural England's Monitor of Engagement with the Natural Environment Survey, the world's largest study collecting data on people's weekly contact with the natural world.

Co-author of the research, Professor Terry Hartig of Uppsala University in Sweden said: "There are many reasons why spending time in nature may be good for health and wellbeing, including getting perspective on life circumstances, reducing stress, and enjoying quality time with friends and family. The current findings offer valuable support to health practitioners in making recommendations about spending time in nature to promote basic health and wellbeing, similar to guidelines for weekly physical activity".

Credit: 
University of Exeter

Monitoring educational equity

WASHINGTON - A centralized, consistently reported system of indicators of educational equity is needed to bring attention to disparities in the U.S. education system, says a new report by the National Academies of Sciences, Engineering, and Medicine. Indicators - measures used to track performance and monitor change over time - can help convey why disparities arise, identify groups most affected by them, and inform policy and practice measures to improve equity in pre-K through 12th grade education.

Societal inequities influence nearly every aspect of students' education - including their academic performance, the classes they take, their access to enrichment opportunities, and their school's approach to discipline.

The system should include indicators that fall into two categories, says the report, Monitoring Educational Equity. The first category of indicators should measure and track disparities in student outcomes such as kindergarten academic readiness, coursework performance, and on-time graduation. The second category should measure and track disparities in students' access to resources and opportunities, such as high-quality pre-K programs, effective teachers, rigorous curriculum, and non-academic supports.

The purpose of a system of equity indicators, says the report, is not to simply track progress toward educational goals, but to identify differences in critical outcomes and opportunities across key subgroups. The report discusses gender, race and ethnicity, English-language fluency, family income, and disability status. Additionally, educational disparities may not be attributable to only the school environment, but also to circumstances in students' homes and neighborhoods.

"We imagine public education to be America's great engine of upward mobility and, ultimately, equality," said Christopher Edley, Jr., professor and former dean at the University of California, Berkeley Law School, and chair of the committee that wrote the report. "A good system of indicators can help measure how much we repair--or reinforce--the great divides in opportunity. Indicators help us understand how opportunities affect outcomes, and whether we match those opportunities with student needs."

The committee argues that educational equity is as important as, if not more important than, other measures of a country's wellbeing, including economic and employment progress. The monthly Bureau of Labor Statistics' Employment Situation Summary, known as the "monthly jobs report," is well known, well publicized, and regularly used to inform policy decisions. Similarly, an annual "Education Equity Summary" could systematically inform national, state, and local stakeholders about the status of educational equity in the U.S., says the report. Such information would help target interventions, research, and policy initiatives to reduce disparities.

The committee proposes 16 indicators of educational equity.

Seven of those 16 indicators are related to disparities in student outcomes. They are grouped across the stages of K-12 education:

Kindergarten Readiness

Indicator 1: Disparities in Academic Readiness

Indicator 2: Disparities in Self-Regulation and Attention Skills

K-12 Learning and Engagement

Indicator 3: Disparities in Engagement in Schooling

Indicator 4: Disparities in Performance in Coursework

Indicator 5: Disparities in Performance on Tests

Educational Attainment

Indicator 6: Disparities in On-Time Graduation

Indicator 7: Disparities in Postsecondary Readiness

The remaining nine indicators are related to access to opportunities and resources:

Extent of Racial, Ethnic, and Economic Segregation

Indicator 8: Disparities in Students' Exposure to Racial, Ethnic, and Economic Segregation

Equitable Access to High-Quality Early Childhood Education

Indicator 9: Disparities in Access to and Participation in High-Quality Early Childhood Education

Equitable Access to High-Quality Curricula and Instruction

Indicator 10: Disparities in Access to Effective Teaching

Indicator 11: Disparities in Access to and Enrollment in Rigorous Coursework

Indicator 12: Disparities in Curricular Breadth

Indicator 13: Disparities in Access to High-Quality Academic Supports

Equitable Access to Supportive School and Classroom Environments

Indicator 14: Disparities in School Climate

Indicator 15: Disparities in Nonexclusionary Discipline Practices

Indicator 16: Disparities in Nonacademic Supports for Student Success

A national system of indicators would enable valid comparisons of school, district, and state performance across a number of important student outcomes and resources. The report recommends that the federal government -- under the guidance of an advisory board -- work with states, school districts, and educational intermediaries to develop a national equity indicator system and incorporate it into their relevant data collection.

Credit: 
National Academies of Sciences, Engineering, and Medicine

Trinity study finds over a quarter of adults aged 50+ are deficient in vitamin D

Over a quarter of adults aged 50+ are deficient in vitamin D, according to researchers from Trinity College Dublin who announced their findings today (Thursday, June 13th). Over half (57%) had inadequate serum vitamin D levels, and 26% were classed as vitamin D deficient. Vitamin D has a known role in bone health, with growing evidence for beneficial effects on muscle strength and other non-skeletal outcomes. The study was recently published in the international, peer-reviewed journal Nutrients.

Better understanding of factors that contribute to vitamin D deficiency is needed to identify people most at-risk. Determinants of deficiency identified in this new study were female gender, advanced age (80+ years), smoking, non-white ethnicity, obesity and poor self-reported health. Researchers therefore identified a profile of older people more likely to be at risk of vitamin D deficiency. Being of a healthy weight, retired, engaging in regular vigorous physical activity, vitamin D supplement use, sun travel in past 12 months and summer season were positive determinants, and therefore potentially protective factors against vitamin D deficiency in older people.

The findings were based on 6,004 midlife and older adults living at northern latitudes (England, 50-55°N), derived from the English Longitudinal Study of Ageing (ELSA). Since UVB radiation (sunlight) is a known determinant of vitamin D status, this was investigated. Interestingly, residents in the South of England had a reduced risk of deficiency compared with the North, even after adjustment for socioeconomic and other predictors of vitamin D status.

This new research demonstrates that vitamin D deficiency is prevalent in older adult populations living at Northern latitudes and highlights the importance of public health strategies throughout midlife and older age to achieve optimal vitamin D status.

Associate Professor in Nutrition at Trinity College, Maria O'Sullivan, commented: 'Our study identified factors associated with vitamin D deficiency, including being aged 80+ years, obesity and sedentary lifestyles, all of which are increasingly common traits in western populations. Furthermore, this is one of the few studies to highlight the importance of non-white ethnicity in vitamin D deficiency in a large study of ageing. The findings are valuable in developing targeted strategies to eliminate vitamin D deficiency (at 30 nmol/L) in older populations'.

First author Dr Niamh Aspell, who conducted the study as part of her PhD at Trinity, said: 'Those who used a vitamin D supplement were less likely to be vitamin D deficient, as may be expected, but supplement use was low (4.4%) and, therefore, food fortification and other strategies need to be considered at policy level for older populations'.

Co-author and Trinity Research Fellow Dr Eamon Laird said: 'The high rates of deficiency are similar to rates seen in other high-latitude countries such as Ireland. However, other more northern countries such as Finland have implemented a successful vitamin D fortification policy which has all but eliminated deficiency in the population. Such a policy could easily be implemented in the UK and Ireland'.

Credit: 
Trinity College Dublin

Smoking may impair body's blood pressure autocorrect system

HERSHEY, Pa. -- Smokers may be at a higher risk for developing hypertension, and an overactive response to normal drops in blood pressure may help explain why, according to researchers.

"The human body has a buffering system that continuously monitors and maintains a healthy blood pressure," said Lawrence Sinoway, director of the Penn State Clinical and Translational Science Institute. "If blood pressure drops, a response called muscle sympathetic nerve activity (MSNA) is triggered to bring blood pressure back up to normal levels."

An additional system -- called the baroreflex -- helps correct if blood pressure gets too high, he added.

According to Sinoway, the study found that after a burst of MSNA, the rise in blood pressure in a chronic smoker was about twice as great as in a non-smoker, pushing blood pressure to unhealthy levels. The researchers suspect that impairment of baroreflex may be why.

"When the sympathetic nervous system fires, like with MSNA, your blood pressure rises and then a series of things happen to buffer that increase, to try to attenuate it," Sinoway said. "We think that in smokers, that buffering -- the baroreflex -- is impaired."

Jian Cui, associate professor of medicine, said the results suggest that this impairment may be connected to hypertension.

"The greater rise in blood pressure in response to MSNA may contribute to a higher resting blood pressure level in smokers without hypertension," Cui said. "It's possible that this higher response to MSNA could also contribute to the eventual development of hypertension."

The researchers said that while previous research has found a link between chronic smokers and higher levels of MSNA bursts, less was known about what happened to blood pressure after these bursts. Additionally, Sinoway said other studies examined the effects of acute smoking -- a single session of being exposed to cigarette smoke -- on non-smokers, instead of habitual smokers.

For the study, the researchers recruited 60 participants -- 18 smokers and 42 non-smokers. None of the participants had hypertension. The smokers reported smoking an average of 17 cigarettes a day over a period of about 13 years.

To measure MSNA, the researchers inserted an electrode into the peroneal nerve, which sits below the kneecap, of each participant. Additionally, they measured heart rate, diastolic and systolic blood pressure, and mean arterial pressure at the brachial artery in the upper arm.

After analyzing the data, the researchers found no difference in systolic blood pressure between smokers and non-smokers. However, diastolic blood pressure, mean arterial pressure and resting heart rate were significantly higher in smokers, who also had higher levels of MSNA.

Cui said the findings, recently published in the American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, give further evidence of the harmfulness of smoking.

"Our study reveals another mechanism by which habitual smoking may contribute to the development of hypertension," Cui said. "Further studies are needed to examine if quitting smoking can decrease this accentuated response."

In the future, Sinoway said he and the other researchers will continue to investigate the link between smoking and high blood pressure.

"We're hoping to better understand just how much cigarette smoking contributes to the development of hypertension," Sinoway said. "Then, we can try to understand if there are things we can do to intervene and prevent chronic smokers from developing this condition."

Credit: 
Penn State