
Satellite galaxies can carry on forming stars when they pass close to their parent galaxies

image: Image of the simulated Local Group used for the article. Left: the dark matter distribution; right: the gas distribution. The three main galaxies of the Local Group (MW, M31 and M33) are indicated.

Image: 
CLUES simulation team

Historically, most scientists thought that once a satellite galaxy had passed close to its more massive parent galaxy, its star formation would stop: the larger galaxy would strip away its gas, leaving it shorn of the material needed to make new stars. However, a team led by Arianna di Cintio, a researcher at the Instituto de Astrofísica de Canarias (IAC), has now shown for the first time, using numerical simulations, that this is not always the case. The results of the study were recently published in the journal Monthly Notices of the Royal Astronomical Society (MNRAS).

Using sophisticated simulations of the whole of the Local Group of galaxies, including the Milky Way, the Andromeda galaxy and their respective satellites, the researchers have shown that satellites can not only retain their gas but also experience many new episodes of star formation just after passing through the pericentre of their orbit around the parent galaxy (the minimum distance they reach from its centre).

The satellite galaxies of the Local Group show a wide variety of star formation histories, whose origin has not previously been fully understood. Using hydrodynamic simulations within the project Constrained Local UniversE (CLUES) the authors studied the star formation histories of satellite galaxies similar to those of the Milky Way in a cosmological context.

In the majority of cases, the satellite's gas is pulled out by the parent galaxy's gravity and transferred to the larger galaxy in a process known as accretion, which interrupts star formation in the satellite. In some 25% of the sample, however, the team found that star formation was clearly enhanced by the interaction.

The results show that the peaks of star formation are correlated with the satellite's close pass around the parent galaxy, and occasionally with the interaction of two satellites. The researchers identified two key requirements for star formation: the satellite must approach the parent galaxy with a large reserve of cold gas, and its pericentric distance must not be too small, so that stars can form through compression of the gas. By contrast, galaxies that pass too close to the parent galaxy, or that approach a parent galaxy with little gas, are stripped of their gas and so lose the possibility of forming new stars.

"The passage of satellites also coincide with peaks in the star formation of their parent galaxies, which suggests that this mechanism causes bursts of stars equally in both parent galaxies and satellites, in agreement with recent studies of the history of star formation in our own Galaxy", explains Arianna di Cintio, the lead author on the paper.

"This is very important when we try to understand how star formation is produced in the smaller dwarf galaxies of our Local Group, an unresolved question", she adds.

This finding will shed light on the episodes of star formation observed in dwarf galaxies of the Local Group, such as Carina and Fornax, offering an attractive explanation for them. It also calls for a revision of the theoretical models used to explain star formation in dwarf galaxies.

Credit: 
Instituto de Astrofísica de Canarias (IAC)

The evolution of vinegar flies is driven by variation in male sex pheromones

image: Mating arena for vinegar flies: for the study, Mohammed Khallaf recorded the mating behaviors of 99 different species of flies of the genus Drosophila. The species differed not only in their chemical communication, which is based on different sex pheromones; the behavior of individual species can also differ astonishingly from that of closely related species.

Image: 
Anna Schroll

By analyzing the genomes of 99 species of vinegar flies and evaluating their chemical odor profiles and sexual behaviors, researchers at the Max Planck Institute for Chemical Ecology show that sex pheromones and the corresponding olfactory channels in the insect brain evolve rapidly and independently. Female flies are able to recognize conspecific males through their specific odor profiles. Interestingly, closely related species show distinct differences in odor profiles, which helps to prevent mating between different species. Males, in turn, chemically mark females during mating so that they become less attractive to other males. The results of this study are a valuable basis for understanding how pheromone production, their perception and processing in the brain, and ultimately the resulting behavior drive the evolution of new species (Nature Communications, July 2021, DOI: 10.1038/s41467-021-24395-z).

A large family with extravagant relatives

As in most animals, mate choice in vinegar flies is primarily based on chemical signals. There are several reasons why the genus Drosophila is ideally suited for studying the evolution and diversity of sex pheromones. The more than 1,500 known species of vinegar flies are found all over the world in a wide variety of different habitats: in deserts, rainforests, caves, swamps, or mountains. Often, rotting fruit and the yeasts responsible for fermentation are the primary food source. Some species also feed on fresh fruit, mushrooms, tree bark, flowers, bacterial slime, or frog spawn. In many species, especially in the model organism Drosophila melanogaster, the processing of olfactory information in the brain has already been well described. The receptors for the sex pheromones are precisely tuned to detect the odor of the conspecific mating partner.

In a new study, now published in Nature Communications, researchers led by Mohammed Khallaf and Markus Knaden investigated how sex pheromones evolved in 99 different species of the genus Drosophila. "We identified the sex pheromones and the corresponding olfactory channels in the olfactory system of flies to explore the evolution of pheromone signaling with respect to phylogenetic relationships," said Mohammed Khallaf, first author of the study. Forty-one of the species under investigation had already been fully sequenced. By sequencing the whole genomes of a further 58 species, the scientists presented the most comprehensive phylogenetic analysis of the genus Drosophila to date.

The genomic data made it possible to relate differences in chemical profiles, as well as differences in odor detection and processing, to genetic differences. The researchers collected the odors of individual flies: five males as well as virgin and mated females from each species, more than 1,500 flies in total, were analyzed. "While the comparison of male and virgin female flies informed us about sex-specific differences, variations between mated and virgin females informed us about male-specific compounds transferred to females during mating," summarized Markus Knaden, head of the project group Odor-guided Behavior in the Department of Evolutionary Neuroethology.

The difference lies in the detail

In all 99 fly species studied, 52 different odor compounds were identified. In 81 species, the scientists found pheromones that are only produced by male flies. Across these species, males produce 58 different odor blends, which can consist of up to seven individual odors. While the males attract attention with sophisticated chemistry, it is ultimately the females who decide whether mating occurs or not. "Interestingly, closely related species often show clear differences in their pheromone profile. At the same time, individual pheromones show up again and again along the phylogenetic tree. For example, males of 34 of the 99 species studied produce cis-vaccenyl acetate (cVA), a well-described Drosophila melanogaster pheromone. Most of the male-derived pheromones fulfill the same functions: firstly, they attract females, and secondly, they are transferred to the females during mating to make them smell less attractive to other males. This ensures the males' reproductive success," said Markus Knaden.

The significant differences in the pheromone profile of males of closely related species suggest that the selection pressure to prevent mating between these species that have evolved from a common ancestor is high. On the other hand, the fact that as many as 34 species produce cis-vaccenyl acetate as a sex pheromone shows that there may only be a limited number of genes responsible for pheromone production. As long as there is no pressure to mark a difference from an evolutionary perspective, different species produce the same pheromone.

A key observation is that it is mostly the male fly in the genus Drosophila that is the transmitter of the chemical signal, while the females, as receivers, recognize and interpret the signal. "The diversity and abundance of male-specific compounds compared to female ones is astonishing: Of the 52 different odor compounds, 43 are produced exclusively by males, while only 9 are produced by females. Moreover, 81 Drosophila species communicate via male pheromones, while only 15 species have specifically female pheromones," said Mohammed Khallaf.

From odor to behavior

The study is the first comprehensive analysis of the mating behaviors of a large number of Drosophila species. "While we were performing mating experiments we could already observe that some of the species showed very specific behaviors. Some of them were driven by olfaction and only started their mating dance when the partner had the right smell. Sometimes, however, the right wing pattern or the male's song also triggered the female's mating behavior," Bill Hansson, who heads the Department of Evolutionary Neuroethology where the study was performed, described the observations in the mating arena. Presumably, sex pheromones play a crucial role as the first cue to identify the right partner before mating. Once a female has been attracted and is ready to mate, other mating rituals may also be initiated, including dance, nuptial gift or song. The researchers will further evaluate the recorded mating experiments for future studies and hope that these materials motivate other research groups to take a closer look at the mating strategies of different drosophilids as well.

Credit: 
Max Planck Institute for Chemical Ecology

High risk of divorce after TBI? Not necessarily, study suggests

July 6, 2021 - Traumatic brain injury (TBI) has a major impact on the lives of affected patients and families. But it doesn't necessarily lead to an increased risk of marital instability, as two-thirds of patients with TBI are still married to the same partner 10 years after their injury, reports a study in the July/August issue of the Journal of Head Trauma Rehabilitation (JHTR). The official journal of the Brain Injury Association of America, JHTR is published in the Lippincott portfolio by Wolters Kluwer.

For marriages that do end, divorce most often occurs within the first year after TBI, according to the new research by Flora M. Hammond, MD, of Indiana University School of Medicine, Indianapolis, and colleagues. "Our data dispel myths about risk of divorce after TBI and suggest a message of hope," the researchers write.

Findings may help in assessing risk and targeting the timing of marital interventions after TBI

Dr. Hammond and colleagues analyzed long-term follow-up data on 1,423 patients with TBI, all of whom were married at the time of their injury. Patients were drawn from the Traumatic Brain Injury Model Systems (TBIMS) database enrolling persons hospitalized with TBI. Average age at the time of injury was 44 years; about three-fourths of patients were men.

Ten years after TBI, 66 percent of patients with TBI remained married to the same person, without separation or divorce. Of marriages that ended, 68 percent did so within five years after TBI, including 39 percent within the first year.

The study also looked at factors associated with a higher or lower risk of divorce or separation. "Marital stability over the 10-year period was higher for those who were older, were female, and had no problematic substance use history," the researchers write. The risk of a breakup didn't seem to be related to race/ethnicity, education, cause of injury, or injury severity.

Marital stability has a major impact on the ability to resume normal life and functioning in persons with TBI. Some reports have suggested high divorce rates after TBI. However, in previous studies, reported rates of marital instability after TBI varied widely, from 22 to 85 percent. The long-term follow-up and large sample of patients with TBI are major strengths of the new study.

The results question previous studies suggesting a high divorce rate among patients who are married at the time they sustain a TBI. The study also provides insights into risk factors for a marital breakup after TBI. The findings are consistent with the known bidirectional link between TBI and substance use. "While substance use itself may not cause marital instability, a spouse's perception that substance use is problematic may contribute to marital instability," Dr. Hammond and coauthors write.

The high risk of marital loss within the first few years after TBI suggests that early education and support might be helpful. The researchers note some important limitations of their study - including the lack of information on the quality of the marital relationship before TBI.

The findings may help to identify couples who may be at high risk of marital instability after TBI, and to guide patient and family education, relationship counseling, and other marital interventions, Dr. Hammond and colleagues believe. They conclude, "Interventions aimed at substance use prevention and functional improvement may also have relevance to facilitating marital stability."

Credit: 
Wolters Kluwer Health

Relationship between chromosomal instability and senescence revealed in the fly Drosophila

image: Nuclei are labeled in pink. Note that mitochondria (in green) accumulate in aneuploid cells that have delaminated from the epithelium and entered a senescent state.

Image: 
IRB Barcelona

Chromosomal instability is a feature of solid tumours such as carcinomas. Cellular senescence, in turn, is a process closely related to cellular ageing, and its link to cancer is becoming increasingly clear. Scientists led by ICREA researcher Dr. Marco Milán at IRB Barcelona have now revealed the link between chromosomal instability and cellular senescence.

"Chromosomal instability and senescence are two characteristics common to most tumours, and yet it was not known how one related to the other. Our studies indicate that senescence may be one of the intermediate links between chromosomal alterations and cancer," says Dr. Milan, head of the Development and Growth Control laboratory at IRB Barcelona.

"The behaviour we saw in cells with chromosomal instability made us think that they could be senescent cells and indeed that was the case!" says Dr. Jery Joy, first author of the article published in Developmental Cell.

The study has been conducted on the fly Drosophila, an animal model commonly used in biomedicine, and the mechanisms described may help to understand the contribution of chromosomal instability and senescence to cancer, and facilitate the identification of possible therapeutic targets.

Reversing the effects of chromosomal instability

The researchers from the Development and Growth Control lab have shown that, in an epithelial tissue with high levels of chromosomal instability, those cells with an altered balance of chromosome number detach from their neighbouring cells and enter senescence. Senescent cells are characterised by a permanently halted cell cycle and by the secretion of a large number of proteins. This abnormal secretion of proteins alters the surrounding tissue, alerting the immune system and causing inflammation.

If the senescent cells are not eliminated by the body immediately, they promote abnormal growth of the surrounding tissue, leading to malignant tumours. "If we identify the mechanisms by which we can lower the number of senescent cells, then we will be able to reduce the growth of these tumours," says Dr. Milan. "In fact, this study shows that this is possible, at least in Drosophila," says Dr. Joy.

Cells with an unbalanced number of chromosomes accumulate a high number of aberrant mitochondria and, therefore, a high level of oxidative stress, which in turn activates the JNK signalling pathway, triggering entry into senescence. "We have shown that reducing this high anomalous number of mitochondria, or regulating the oxidative stress that they induce, is sufficient to decrease the number of senescent cells and the negative effects of chromosomal instability," reiterates Dr. Joy.

These findings open new avenues of research to find therapeutic targets and reduce senescence levels caused by chromosomal instability in solid tumours.

Extrapolating from the fly to mammals

The vinegar fly, Drosophila melanogaster, is widely used in biomedicine. It is a valuable animal model in cancer research because of its short life cycle, the availability of a large number of genetic tools, and the presence of the same genes as in humans, but with a lower level of redundancy.

In fact, experiments designed to dissect the causal relationships between cellular behaviours or characteristics found in human tumours, such as chromosomal instability and senescence, are more easily performed and analysed in this model organism.

Future laboratory work will continue to dissect the molecular mechanisms responsible for cellular behaviours found in solid tumours of epithelial origin produced by the mere induction of chromosomal instability. "The more we understand the biology of a tissue subjected to chromosomal instability and the molecular mechanisms responsible for the cellular behaviours that emerge and give rise to malignant tumours, the greater our chances of designing effective therapies and reducing the growth and malignancy of human carcinomas," concludes Dr. Milan.

Credit: 
Institute for Research in Biomedicine (IRB Barcelona)

Digital pens provide new insight into cognitive testing results

(Boston)--During neuropsychological assessments, participants complete tasks designed to study memory and thinking. Based on their performance, the participants receive a score that researchers use to evaluate how well specific domains of their cognition are functioning.

Consider, though, two participants who achieve the same score on one of these paper-and-pencil neuropsychological tests. One took 60 seconds to complete the task and was writing the entire time; the other spent three minutes, and alternated between writing answers and staring off into space. If researchers analyzed only the overall score of these two participants, would they be missing something important?

"By looking only at the outcome, meaning what score someone gets, we lose a lot of important information about how the person performed the task that may help us to better understand the underlying problem," explains lead author Stacy Andersen, PhD, assistant professor of medicine at Boston University School of Medicine (BUSM).

Researchers with the Long Life Family Study (LLFS) used digital pens and digital voice recorders to capture differences in study participants' performance while completing a cognitive test and found that differences in 'thinking' versus 'writing' time on a symbol coding test might act as clinically relevant, early biomarkers for cognitive/motor decline.

Participants in the LLFS were chosen for having multiple siblings living to very old ages. Longevity has long been associated with an increased health span and thus these families are studied to better understand contributors to healthy aging. The participants were assessed on a number of physical and cognitive measures, including a symbol coding test called the Digit Symbol Substitution Test.

This timed test requires participants to fill in numbered boxes with corresponding symbols from a given key and assesses both cognitive (attention and processing speed) and non-cognitive factors (motor speed and visual scanning). To allow researchers to collect data about how a participant went about completing the task, the participants used a digital pen while completing the test. On the tip of this pen was a small camera that tracked what and when a participant wrote. The LLFS researchers divided the output from this digital pen into 'writing time' (the time the participant spent writing) and 'thinking time' (the time not spent writing) and looked at how these changed over the course of the 90-second test.

The researchers then identified groups of participants that had similar patterns of writing time and thinking time across the course of the test. They found that although most participants had consistent writing and thinking times, there were groups of participants who sped up or slowed down. "This method of clustering allowed us to look at other similarities among the participants in each group in terms of their health and function that may be related to differences in writing and thinking time patterns," said coauthor and lead biostatistician Benjamin Sweigart, a biostatistics doctoral student at Boston University School of Public Health. The researchers found that those who got slower at writing the symbols during the test had poorer physical function on tests of grip strength and walking speed. In contrast, those who changed speed in thinking time had poorer scores on memory and executive function tests, suggesting that writing time and thinking time capture different contributors to overall performance on the test.
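As a rough illustration of the idea, here is a minimal sketch, using entirely hypothetical stroke data and an arbitrary 15-second binning scheme (not the LLFS analysis pipeline), of how pen-stroke timestamps can be split into writing time and thinking time and tracked across the 90-second test.

```python
import numpy as np

# Hypothetical pen strokes: (start_s, end_s) pairs as a digital pen's camera might report them.
strokes = [(0.5, 1.2), (1.4, 2.0), (5.8, 6.5), (7.0, 7.6), (11.3, 12.1)]
test_duration = 90.0  # the Digit Symbol Substitution Test runs for 90 seconds

writing_time = sum(end - start for start, end in strokes)   # time the pen is on the paper
thinking_time = test_duration - writing_time                # time spent between strokes
print(f"writing: {writing_time:.1f} s, thinking: {thinking_time:.1f} s")

# Track how writing time evolves across the test (15-second bins, an arbitrary choice)
bin_edges = np.arange(0.0, test_duration + 15.0, 15.0)
writing_per_bin = np.zeros(len(bin_edges) - 1)
for start, end in strokes:
    for i in range(len(bin_edges) - 1):
        overlap = max(0.0, min(end, bin_edges[i + 1]) - max(start, bin_edges[i]))
        writing_per_bin[i] += overlap
print("writing seconds per 15 s bin:", writing_per_bin)
```

A real pipeline would then cluster participants by these per-bin trajectories, as the researchers describe, rather than by the totals alone.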

According to the researchers, these findings show the importance of capturing additional facets of test performance beyond test scores. "Identifying whether poor test performance is related to impaired cognitive function as opposed to impaired motor function is important for choosing the correct treatment for an individual patient," adds Andersen. "The incorporation of digital technologies amplifies our ability to detect subtle differences in test behavior and functional abilities, even on brief tests of cognitive function. Moreover, these metrics have the potential to be very early markers of dysfunction."

Credit: 
Boston University School of Medicine

Developing new techniques to build biomaterials

image: A schematic of a protein hydrogel showing regions of folded protein (in red) connected by regions of unfolded protein (in white).

Image: 
Lorna Dougan/Phospho Animations

Scientists at the University of Leeds have developed an approach that could help in the design of a new generation of synthetic biomaterials made from proteins.

The biomaterials could eventually have applications in joint repair or wound healing as well as other fields of healthcare and food production.  

But one of the fundamental challenges is to control and fine tune the way protein building blocks assemble into complex protein networks that form the basis of biomaterials.  

Scientists at Leeds are investigating how changes to the structure and mechanics of individual protein building blocks - changes at the nanoscale - can alter the structure and mechanics of the biomaterial at a macro level while preserving the biological function of the protein network.  

In a paper published by the scientific journal ACS Nano, the researchers report that they were able to alter the structure of a protein network by removing a specific chemical bond in the protein building blocks. They called these bonds the "protein staples". 

With the protein staples removed, the individual protein molecules unfolded more easily as they connected together and assembled into a network. The result was a network with regions of folded protein connected by regions of unfolded protein, giving the biomaterial very different mechanical properties.  

Professor Lorna Dougan, from the School of Physics and Astronomy at Leeds, who supervised the research, said: "Proteins display amazing functional properties. We want to understand how we can exploit this diverse biological functionality in materials which use proteins as building blocks. 

"But to do that we need to understand how changes at a nano scale, at the level of individual molecules, alters the structure and behaviour of the protein at a macro level."  

Dr Matt Hughes, also from the School of Physics and Astronomy and lead author of the paper, said: "Controlling the protein building block's ability to unfold by removing the 'protein staples' resulted in significantly different network architectures with markedly different mechanical behaviour. This demonstrates that unfolding of the protein building block plays a defining role in the architecture of protein networks and the subsequent mechanics." 

The researchers used facilities at the Astbury Centre for Structural Molecular Biology and the School of Physics and Astronomy at Leeds, as well as the ISIS Neutron and Muon Source at the STFC Rutherford Appleton Laboratory in Oxfordshire. Beams of neutrons allowed them to identify critical changes to the protein network's structure when the nano-staples were removed.

In conjunction with the experimental work, Dr Ben Hanson, a Research Associate in the School of Physics and Astronomy at Leeds, modelled the structural changes taking place. He found that it was specifically the act of protein unfolding during network formation that was crucial in defining the network architecture of the protein hydrogels.   

Professor Dougan added: "The ability to change the nanoscale properties of protein building blocks, from a rigid, folded state to a flexible, unfolded state, provides a powerful route to creating functional biomaterials with controllable architecture and mechanics."   

Credit: 
University of Leeds

How an unfolding protein can induce programmed cell death

image: This is Patrick van der Wel, associate professor of Solid-State NMR Spectroscopy at the University of Groningen.

Image: 
Sylvia Germes

The death of cells is well regulated. If it occurs too much, it can cause degenerative diseases; too little, and cells can become tumours. Mitochondria, the power plants of cells, play a role in this programmed cell death. Scientists from the University of Groningen (the Netherlands) and the University of Pittsburgh (U.S.) have obtained new insights into how mitochondria receive the signal to self-destruct. Their results were published in the Journal of Molecular Biology.

How does a cell kill itself? The details of this process are still unclear. Patrick van der Wel, associate professor of Solid-State NMR Spectroscopy at the University of Groningen, is working together with colleagues at the University of Pittsburgh to see how cell death is initiated on a molecular level. 'The membranes of mitochondria play a key role', he explains. Cardiolipin, a special type of membrane lipid, acts as an important signal. 'If it gets reshuffled inside the membrane and oxidized, this can trigger cell death.'

Unfolding

Another factor is the small protein cytochrome c. This plays an important role in energy production by mitochondria, but it can also bind to cardiolipin. 'We believe it may control oxidation of the cardiolipin, which is part of the initiation process for programmed cell death', explains Van der Wel. It was previously thought that the unfolding of cytochrome c on the mitochondrial membrane was important, as this allows it to oxidize cardiolipin, a step that triggers cell death. However, in a previous paper, Van der Wel and his U.S. colleagues published evidence that cytochrome c is not unfolded.

'In our new study, we have looked in even more detail at the interaction between cytochrome c and the mitochondrial membrane', says Van der Wel. They used solid-state NMR to detect the position and the status of all 105 amino acids in the protein. The NMR signals of two connected carbon atoms in an amino acid depend on how they interact with other atoms in the molecule. Therefore, the measured spectrum of the carbon atoms can show in which amino acid they are located. Such information can be used to get an idea of the protein's structure, even if it is not well ordered.

Discrepancy

'Furthermore, with this technique, an amino acid is only visible when it sits in a fixed part of the protein structure. If it is in an unfolded part, it can move more freely and becomes invisible.' Thus, solid-state NMR spectroscopy can show which parts of the protein are folded or unfolded. 'What we saw is that cytochrome c is not fully unfolded when attached to the cardiolipins in the mitochondrial membrane.'

Proteins fold in a particular order: the first fold induces a second, third, and so on. 'These steps are called foldons. What we saw in our experiments is that when binding to the membrane, different foldons in cytochrome c unfold at different stages. And some parts do not unfold at all.' This finding explains the discrepancy between the results of Van der Wel and his colleagues and those of previous papers: these studies were too coarse-grained to be able to clearly see which parts of cytochrome c were still folded.

Drugs

These results are interesting, as they provide fundamental insight into how programmed cell death is regulated at the molecular level. 'They add to our previous idea that oxidation of cardiolipin by cytochrome c is a very well-controlled, specific process.' Furthermore, knowing which parts of the protein unfold means that it will be possible to develop drugs that stabilize or destabilize them. Such drugs could be used as regulators that could increase or reduce programmed cell death. Van der Wel: 'We would like to use our data to construct a realistic computer model for this protein interaction so that it is possible to design these drugs in silico.'

Simple Science Summary

There is a continuous turnover of cells in our body. New cells are 'born', and old ones die, often through a process called 'programmed cell death'. This type of cell death must be tightly regulated: if it occurs too much, it can cause degenerative diseases; too little, and cells can become tumours. An important step in programmed cell death is the binding of a protein (called cytochrome c) to the membrane of mitochondria, the power plants inside cells. Scientists have now discovered that cytochrome c is partly unfolded during this important step, and they can pinpoint which parts of the protein are folded or unfolded. This could lead to the development of drugs that regulate programmed cell death by increasing or reducing the unfolding.

Credit: 
University of Groningen

Interleukin-6 antagonists improve outcomes in hospitalised COVID-19 patients

Findings from a study published today [6 July] in the Journal of the American Medical Association (JAMA) have prompted new World Health Organization (WHO) recommendations to use interleukin-6 antagonists in patients with severe or critical COVID-19 along with corticosteroids.

A new analysis of 27 randomised trials involving nearly 11,000 patients found that treating hospitalised COVID-19 patients with drugs that block the effects of interleukin-6 (the interleukin-6 antagonists tocilizumab and sarilumab) reduces the risk of death and the need for mechanical ventilation.

The study, which was coordinated by WHO in partnership with King's College London, University of Bristol, University College London and Guy's and St Thomas' NHS Foundation Trust, found that interleukin-6 antagonists were most effective when administered with corticosteroids. In hospitalised patients, administering one of these drugs in addition to corticosteroids reduces the risk of death by 17%, compared to the use of corticosteroids alone. In patients not on mechanical ventilation, the risk of mechanical ventilation or death is reduced by 21%, compared to the use of corticosteroids alone.

In severely ill COVID-19 patients, the immune system overreacts, generating cytokines such as interleukin-6. Clinical trials have been testing whether drugs that inhibit the effects of interleukin-6, such as tocilizumab and sarilumab, benefit hospitalised patients with COVID-19. These trials have variously reported benefit, no effect and harm.

This prompted researchers from WHO's Rapid Evidence Appraisal for COVID-19 Therapies [REACT] Working Group to examine the clinical benefit of treating hospitalised COVID-19 patients with interleukin-6 antagonists, compared with either a placebo or usual care. They combined data from 27 randomised trials conducted in 28 countries.

This analysis included information on 10,930 patients, of whom 6,449 were randomly assigned to receive interleukin-6 antagonists and 4,481 to receive usual care or placebo.

Results showed that the risk of dying within 28 days is lower in patients receiving interleukin-6 antagonists. In this group, the risk of death is 22% compared with an assumed risk of 25% in those receiving only usual care.

Importantly, improvements in outcomes were greater in patients who also received corticosteroids. In these patients, the risk of dying within 28 days is 21% in patients receiving interleukin-6 antagonists compared with an assumed 25% in patients receiving usual care. This means that for every 100 such patients, four more will survive.

The study also looked at the effect of these drugs on whether patients progressed to mechanical ventilation or death. Among patients also treated with corticosteroids, the risk was found to be 26% for those receiving interleukin-6 antagonists compared with an assumed 33% in those receiving usual care. This means that for every 100 such patients, 7 more will survive and avoid mechanical ventilation.
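For readers who want the arithmetic spelled out, the short sketch below reproduces the "per 100 patients" figures from the assumed risks quoted above; the helper function is our illustration of an absolute risk difference, not part of the study's analysis.

```python
# Worked arithmetic behind the "per 100 patients" statements above.
# The risks are those quoted in the article; the helper is illustrative only.

def extra_good_outcomes_per_100(risk_treated: float, risk_control: float) -> int:
    """Absolute risk difference, expressed per 100 patients treated."""
    return round((risk_control - risk_treated) * 100)

# Death within 28 days, both groups also receiving corticosteroids: 21% vs 25%
print(extra_good_outcomes_per_100(0.21, 0.25))  # -> 4 more survivors per 100 patients

# Progression to mechanical ventilation or death, both groups on corticosteroids: 26% vs 33%
print(extra_good_outcomes_per_100(0.26, 0.33))  # -> 7 more per 100 avoid ventilation or death
```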

Commenting on the results of the analysis Dr Janet Diaz, Lead for Clinical management, WHO Health Emergencies, said: "Bringing together the results of trials conducted around the world is one of the best ways to find treatments that will help more people survive COVID-19. We have updated our clinical care treatment guidance to reflect this latest development. While science has delivered, we must now turn our attention to access. Given the extent of global vaccine inequity, people in the lowest income countries will be the ones most at risk of severe and critical COVID-19. Those are the people these drugs need to reach."

Prof Manu Shankar-Hari, Critical Care Consultant at Guy's and St Thomas' Hospital NHS Foundation Trust, Professor of Critical Care Medicine at King's College London and a NIHR Clinician Scientist, said: "COVID-19 is a serious illness. Our research shows that interleukin-6 antagonists reduce deaths from COVID-19, i.e. save lives, and prevent progression to severe illness necessitating breathing support with a ventilator. Further, interleukin-6 antagonists appear even more effective when used alongside corticosteroids. Our research findings reflect the incredible research effort from scientists worldwide since the start of the pandemic. On a personal note, I am grateful to the patients and their families for their willingness to participate in research during these challenging times."

Jonathan Sterne, Professor of Medical Statistics and Epidemiology, University of Bristol, Deputy Director of the National Institute for Health Research Bristol Biomedical Research Centre (NIHR Bristol BRC) and Director of Health Data Research UK South West, said: "Clinical trials assessing the efficacy of monoclonal antibodies that block interleukin-6 in hospitalised patients with COVID-19 have variously reported benefit, no effect and harm. By rapidly combining 95 per cent of the worldwide data from these trials, we have shown that these drugs work consistently in reducing death and severe COVID-19 disease across countries and health care settings, and that they work better among patients who are also receiving corticosteroids."

Claire Vale, Principal Research Fellow at the MRC Clinical Trials Unit at UCL (University College London), said: "These results, which will lead to better outcomes for patients hospitalized with COVID-19, reflect a huge global effort. Bringing together this information in such a short space of time has only been possible thanks to the overwhelming commitment of all the doctors and teams who ran the trials, and of course, the patients who took part in them."

Credit: 
King's College London

Keeping bacteria under lock and key

image: Aditya Kunjapur, assistant professor of chemical and biomolecular engineering (right), and doctoral student Michaela Jones compare samples in the lab.

Image: 
Photo by Kathy Atkinson

Scientists and engineers are constantly looking for ways to better our world.

Synthetic biology is an emerging field with promise for improving our ability to manufacture chemicals, develop therapeutic medicines such as biopharmaceuticals and vaccines, and enhance agricultural production, among other things. It relies on taking natural or engineered pieces of DNA and combining them in new ways in biological systems, such as microbes, bacteria or other organisms.

According to University of Delaware's Aditya Kunjapur, assistant professor of chemical and biomolecular engineering, as these sophisticated microbial technologies are advanced, scientists need to explore ways to keep these organisms from ending up in the wrong environment.

For example, a bacterium that is good at making large amounts of chemicals is great in a bioreactor, but it isn't necessarily what we would want in the environment or in our food. Similarly, a specialized bacterium developed to help a plant be more productive is useful in agriculture, but we might not want that same bacterium to end up in our bodies.

Engineering traits directly into the microbe to create built-in safeguards to control where it can grow, known as biological containment, is one promising solution.

"Having the ability to control where microbes grow will help create technologies to treat disease and clean up the environment more safely," said Kunjapur, an emerging leader in biosecurity with expertise in teaching cells to create and harness chemical building blocks not found in nature.

Now, Kunjapur is the lead author of a new paper, published in Science Advances on Friday, July 2, that describes progress on the stability of a biocontainment strategy, first reported in 2015, that uses a microbe's dependence on a synthetic nutrient to keep it contained. The work was done in collaboration with Harvard Medical School (HMS) and includes co-authors now at HMS, UD, Johns Hopkins University and industry.

Under lock and key

In the paper, the research team studied a strain of Escherichia coli (E. coli) that was made to depend on a synthetic nutrient for its survival, a method known as synthetic auxotrophy. They found that the strain remained stable and maintained this dependence when grown for 100 days.

"You can take enzymes that the bacteria need to grow and make them depend on a man made building block, something it will never see in nature," said Kunjapur.

The work builds on previous research in the Church lab at Harvard Medical School, where most of this research was conducted and where Kunjapur completed postdoctoral work. In that study, the research team showed that if one enzyme required this synthetic building block, bacteria would quickly find a way out. But if three enzymes required the synthetic building block, the bacteria couldn't "unlock the cell door" within a 24-hour period to make a quick getaway.

"Asking the microbe to mutate its genome in precisely three different ways at the same time is too much, and it won't find that solution," said Kunjapur. "So, while the same 'key' might open all of the doors, the microbe can't be at all three doors at once to open them, so it remains contained."

The question was whether they would find a way out if given more time to grow.

In this new work, the research team extended the study, growing the E. coli strain, nicknamed DEP, continuously for 100 days while also applying extra stress by shrinking the amount of synthetic nutrient the bacteria received over time.

In three parallel experiments, the researchers found that the DEP E. coli was unable to find a way to "escape" or grow outside the cell. It did, however, adapt to a lower concentration -- 10 times less -- of the synthetic nutrient it needed to survive. Further, the research team found that proteins on the host cell's surface changed, which had not been seen before.

"We think that as we were decreasing the amount of the nutrients, the DEP E. coli modified the proteins on the surface of the cell that regulate how much synthetic nutrient is allowed inside in order to retain more nutrients inside and keep growing," said Kunjapur. "We were surprised because at first we didn't expect them to adapt, and then we didn't expect them to adapt in this way."

To validate their results, the team went back and engineered those same genetic traits into the original ancestor E. coli strain and discovered it was able to grow at the lower synthetic nutrient concentration from the start. This information could prove useful for scaling the experiment in the future, Kunjapur said.

In a follow-up experiment, Kunjapur's colleague Eriona Hysolli, a postdoctoral fellow in the Church lab, added the bacteria directly to mammalian cells. The results were intriguing. While the bacteria grew tremendously in the first few days, over the course of a week the bacteria just faded away, without the need for antibiotics.

"Our data show the first use of recoded bacterial strains with controllable growth in a mammalian cellular model. We are excited about the promising applications in human health like the gut microbiome," said Hysolli, a co-author on the paper.

Building on this work, Michaela Jones, a UD doctoral student in Kunjapur's lab and paper co-author, conducted additional experiments that showed the bacteria stopped growing when the synthetic nutrient was taken away, then resumed growth when the nutrient was added back. Kunjapur said these findings equip researchers with new tools to control how and when bacteria grow by regulating the nutrients they need to survive.

"Now, if you need bacteria to grow at a certain rate or time, we have a synthetic building block as a lever or knob to control that," he said.

In future work, Kunjapur plans to study the biocontained DEP E. coli strain, as well as other microbe designs, under a broader set of conditions to determine if the approach will hold up in the environment. Additionally, he is working on biomedical applications for live bacterial strains, which could be used as therapeutics, probiotics and vaccines.

Credit: 
University of Delaware

Fighting COVID with COVID

UNIVERSITY PARK, Pa. -- What if the COVID-19 virus could be used against itself? Researchers at Penn State have designed a proof-of-concept therapeutic that may be able to do just that. The team designed a synthetic defective SARS-CoV-2 virus that is innocuous but interferes with the real virus's growth, potentially causing the extinction of both the disease-causing virus and the synthetic virus.

"In our experiments, we show that the wild-type [disease-causing] SARS-CoV-2 virus actually enables the replication and spread of our synthetic virus, thereby effectively promoting its own decline," said Marco Archetti, associate professor of biology, Penn State. "A version of this synthetic construct could be used as a self-promoting antiviral therapy for COVID-19."

Archetti explained that when a virus attacks a cell, it attaches to the cell's surface and injects its genetic material into the cell. The cell is then tricked into replicating the virus's genetic material and packaging it into virions, which burst from the cell and go off to infect other cells.

Defective interfering (DI) viruses, which are common in nature, contain large deletions in their genomes that often affect their ability to replicate their genetic material and package it into virions. However, DI genomes can perform these functions if the cell they've infected also harbors genetic material from a wild-type virus. In this case, a DI genome can hijack a wild-type genome's replication and packaging machinery.

"These defective genomes are like parasites of the wild-type virus," said Archetti, explaining that when a DI genome utilizes of a wild-type genome's machinery, it also can impair the wild-type genome growth.

In addition, he said, "given the shorter length of their genomes as a result of the deletions, DI genomes can replicate faster than wild-type genomes in coinfected cells and quickly outcompete the wild-type."

Indeed, in their new study, which published today (July 1) in the journal PeerJ, Archetti and his colleagues found that their synthetic DI genome can replicate three times faster than the wild-type genome, resulting in a reduction of the wild-type viral load by half in 24 hours.

To conduct their study, the researchers engineered short synthetic DI genomes from parts of the wild-type SARS-CoV-2 genome and introduced them into African green monkey cells that were already infected with the wild-type SARS-CoV-2 virus. Next, they quantified the relative amounts of the DI and wild-type genomes in the cells at successive time points, which gave an indication of how much the DI genome interfered with the wild-type genome.

The team found that within 24 hours of infection, the DI genome reduced the amount of SARS-CoV-2 by approximately half compared to the amount of wild-type virus in control experiments. They also found that the DI genome increases in quantity 3.3 times as fast as the wild-type virus.

Archetti said that while the 50% reduction in viral load observed over 24 hours is not enough for therapeutic purposes, as the DI genomes increase in frequency in the cell, the continuing decline of the wild-type virus should eventually lead to the demise of both the virus and the DI genome, since the DI genome cannot persist once it has driven the wild-type virus to extinction.
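To make that dynamic concrete, here is a minimal toy simulation (our illustration, not the authors' model; every number except the 3.3x replication advantage reported above is invented) of how a faster-replicating DI genome can monopolize the replication machinery supplied by the wild-type genome and eventually drag the wild-type load down with it.

```python
# Toy competition model between wild-type (WT) and defective interfering (DI)
# genomes in a coinfected cell. Illustrative only: parameters other than the
# 3.3x replication advantage are invented.

wt, di = 1000.0, 10.0        # hypothetical starting genome copies
r_wt, r_di = 1.0, 3.3        # DI replicates ~3.3x faster (per the study)
budget = 2000.0              # new copies the cell's machinery can make per step
decay = 0.4                  # fraction of existing genomes degraded per step

for step in range(8):
    print(f"step {step}: WT={wt:8.0f}  DI={di:8.0f}  DI share={di / (wt + di):.2f}")
    if wt < 1.0:
        # no wild-type machinery left: neither genome can replicate any further
        break
    demand_wt, demand_di = r_wt * wt, r_di * di
    total_demand = demand_wt + demand_di
    # the limited replication budget is split in proportion to rate x abundance,
    # so the faster-replicating DI genome claims a growing share each step
    wt = wt * (1.0 - decay) + budget * demand_wt / total_demand
    di = di * (1.0 - decay) + budget * demand_di / total_demand
```

Running the loop shows the DI share rising from about 1% to a majority while the wild-type copy number first grows and then falls, mirroring the qualitative behaviour described above.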

He added that further experiments are needed to verify the potential of SARS-CoV-2 DIs as an antiviral treatment, suggesting that the experiments could be repeated in human lung cell lines, and against some of the newer variants of SARS-CoV-2. Furthermore, he said, an efficient delivery method should be devised. In further work that is still unpublished, the team has now used nanoparticles as a delivery vector and observed that the virus declines by more than 95% in 12 hours.

"With some additional research and finetuning, a version of this synthetic DI could be used as a self-sustaining therapeutic for COVID-19."

Credit: 
Penn State

UK public view COVID-19 as a threat because of lockdowns, new study suggests

The UK public is likely to take the COVID-19 pandemic less seriously once restrictions are lifted, according to new research led by Cardiff University.

Psychologists found lockdown in itself was a primary reason why so many people were willing to abide by the rules from the start - believing the threat must be severe if the government imposes such drastic measures.

The team from Cardiff and the universities of Bath and Essex examined the reasons behind headline polling support for COVID-19 measures. They carried out two UK surveys, six months apart, during 2020. Their findings are published today in the journal Royal Society Open Science.

Lead author Dr Colin Foad said: "Surprisingly, we found that people judge the severity of the COVID-19 threat based on the fact that the government imposed a lockdown - in other words, they thought 'it must be bad if the government is taking such drastic measures'.

"We also found that the more they judged the risk in this way, the more they supported lockdown. This suggests that if and when 'Freedom Day' comes and restrictions are lifted, people may downplay the threat of COVID."

The research also found:
- Raising people's personal threat was unlikely to enhance their support for restrictive measures;
- People supported lockdown yet thought many of its side effects were "unacceptable" in a cost-benefit analysis.

Dr Foad said: "The pandemic has been characterised by strong public support for lockdowns, but our research suggests that people have actually been much more conflicted than the headline polls suggest.

"For example, we found that when people think about the costs of this policy, such as detriment to mental health and reduced access to treatment for non-COVID health problems, these can outweigh its benefits."

On the finding around personal threat, he said: "In order to try and keep public support for lockdowns high, various strategies have been tried by the government, including reminding people that they and their loved ones are at risk from COVID-19.

"However, we find that most people's personal sense of threat does not relate to their support for restrictions. Instead, people judged the threat at a much more general level, such as towards the country as a whole. So, any messaging that targets their personal sense of threat is unlikely to actually raise support for any further restrictions."

The researchers warned there was a risk of public opinion and government policy "forming a symbiotic relationship", which could affect how policies are implemented now and in future.

Professor Lorraine Whitmarsh, an environmental psychologist from the University of Bath, said: "This has important implications for how we deal with other risks, like climate change - the public will be more likely to believe it's a serious problem if governments implement bold policies to tackle it."

Professor Whitmarsh suggested bold actions might include stopping all road building (as has happened recently in Wales) or blocking airport expansions.

The researchers are calling for more nuanced use of polling data during the pandemic to accurately gauge the diversity and complexity of public opinion.

Dr Paul Hanel, a lecturer in psychology at the University of Essex, said: "Polling data from large samples are important in understanding what people think. Our study, however, shows that it is crucial to ask the right questions because otherwise we are only getting a limited and potentially even misleading picture of how diverse and even conflicting public opinions truly are."

Credit: 
University of Bath

Loss of biodiversity in streams threatens vital biological process

image: The photographs of aquatic detritivores in this graphic represent a subset of families (ordered left to right from the most to the least abundant in the study). All over the world, detritivore populations are dwindling and disappearing at an alarming rate.

Image: 
Bradley Cardinale Lab, Penn State

UNIVERSITY PARK, Pa. -- The fast-moving decline and extinction of many species of detritivores -- organisms that break down and remove dead plant and animal matter -- may have dire consequences, an international team of scientists suggests in a new study.

The researchers observed a close relationship between detritivore diversity and plant litter decomposition in streams, noting that decomposition was highest in waters with the most species of detritivores -- including aquatic insects such as stoneflies, caddisflies, mayflies and craneflies, and crustaceans such as scuds and freshwater shrimp and crabs.

Decomposition is a biological process that's vital to life, explained study co-author Bradley Cardinale, professor and head of the Department of Ecosystem Science and Management, College of Agricultural Sciences, Penn State.

"The plant matter that doesn't get eaten by animals ultimately dies and must be recycled so that biologically essential nutrients are rereleased into the environment where they can be used again by plants," he said. "If that process of decomposition doesn't occur, or slows significantly, then life comes to a screeching halt. Phosphorus, nitrogen and other nutrients that we need as humans don't even exist in a biologically available form unless they get decomposed and recycled."

But all over the world, detritivore populations are dwindling and disappearing at an alarming rate -- a grim reality that spurred the study. There is good evidence that the rate of extinction for these organisms is 1,000 to 10,000 times faster than has occurred through the historic record, Cardinale pointed out.

"There has been a huge question about whether or not the diversity of these aquatic insects and crustaceans is crucial," Cardinale said. "If they go extinct, is this biological process of decomposition going to slow or not? We didn't have a good answer prior to this study. We didn't know if the extinction of these animals will greatly affect the ability of ecosystems to sustain life, or if other organisms such as bacteria and fungi will fill the ecological niche and accomplish a similar level of decomposition."

There are a number of causes of extinction, Cardinale said. In order of importance, they are habitat loss, overharvesting (which doesn't apply to this study), competition from invasive species, disease, pollution and climate change -- which he called "the big unknown at this point."

The study was global and uncommonly robust, involving 75 scientists analyzing decomposition in 38 headwater streams that were similar in size, depth and physical habitat across 23 countries on six continents. Most of the streams selected by the researchers had rocky substrate and were shaded by dense riparian vegetation.

At each site, the researchers incubated identical plant litter mixtures composed of nine species collected at different locations around the world and distributed among study partners. Litter mixtures were enclosed within paired coarse-mesh and fine-mesh litterbags containing the same amount and type of litter.

"The two types of litterbags allowed us to quantify the amount of decomposition done by detritivores and by microbial organisms in the streams," Cardinale said. "We saw that in the litter enclosed by fine mesh bags that couldn't be accessed by aquatic insects or crustaceans, much less decomposition took place."

That indicates that bacteria and fungi alone likely can't accomplish the amount of decomposition needed in stream ecosystems, he added.

"When we excluded these animals, we saw a huge drop in decomposition rates, which means other organisms didn't compensate for them," Cardinale said. "When the detritivores were excluded, simulating extinction, we lost way more than 50% of the decomposition in the streams."

In findings recently published in Nature Communications, the researchers noted that streams with an abundance of detritivores had the highest rates of decomposition. They reported that the relationship between detritivore diversity and decomposition was stronger in tropical areas than in temperate areas and absent in boreal areas, and that abundance and biomass were important in temperate and boreal areas, but not in tropical areas.

According to the research team, the study results suggest that litter decomposition likely is being altered by detritivore extinctions and that these effects are particularly strong in tropical areas, where detritivore diversity already is relatively low and some environmental stressors are particularly prevalent.

Some of the study results are not surprising, Cardinale suggested. The abundance and size of detritivores are known to have a very strong impact on decomposition. So, streams that have more of them -- or that have larger invertebrates, such as big crabs and shrimp -- have more decomposition.

"But what was a surprise is that we also found that the diversity, the number of different species that were in streams, was one of the most dominant predictors of decomposition," he said. "And between abundance, body size and diversity, we could explain 82% of all of the global variation in decomposition. That means as these animals go extinct, we're going to lose the ability to decompose and recycle biologically essential materials that other organisms require for survival and growth."

Credit: 
Penn State

Perceptions of counterfeits among luxury goods differ across cultures

ABINGTON, Pa. -- Counterfeit dominance decreases Anglo-American, but not Asian, consumers' quality perception and purchase intention of authentic brands, according to a team of researchers.

"Counterfeit dominance is the perception that counterfeit products possess more than 50% of market share," Lei Song, assistant professor of marketing at Penn State Abington, said. "Counterfeit dominance is a phenomenon especially concerning for the luxury fashion industry as counterfeit luxury fashion brands account for 60% to 70% of the $4.5 trillion in total counterfeit trade and one-quarter of total sales in luxury fashion goods."

Song and his team conducted four behavioral experiments with 149 participants recruited through Amazon Mechanical Turk (MTurk) to test their hypotheses.

The results show that counterfeit dominance negatively affects the quality perception of authentic luxury fashion brands for Anglo-American, but not for Asian, consumers.

The study finds that Anglo-Americans are weaker in social-adjustive attitude, meaning that they are more likely to rely on outgroups -- such as people on the street -- to form their opinions. The researchers identify this as the reason for the observed cultural difference in perceived quality and purchase intention.

"Being aware of counterfeit dominance raises brand owners' concern that outgroups may consider their authentic brands as low-quality counterfeits, thus lowering their quality perception of authentic brands," Song said.

This research demonstrates that counterfeit dominance negatively affects the perceived quality and purchase intention of luxury fashion brands across product categories for Anglo-American, but not for Asian, consumers, with social-adjustive attitude underlying the difference. In other words, counterfeit dominance has a stronger negative impact on brand owners' perceptions of their authentic brands among those with a weak social-adjustive attitude (Anglo-Americans) than among those with a strong one (Asians).

The team found that Asian consumers are stronger in social-adjustive attitude, suggesting that they are more likely to form opinions based on ingroups, such as friends, rather than outgroups. As a result, Asian brand owners' quality perception of authentic brands was less affected by counterfeit dominance.

Because quality perception strongly affects purchase intention, Song said the researchers also found that counterfeit dominance negatively affects the purchase intention of authentic luxury fashion brands for Anglo-American, but not for Asian, consumers.

To examine whether social-adjustive attitude is indeed the reason behind this cultural difference, the authors included a study on the moderating role of social-adjustive attitude. They found that the impact of counterfeit dominance on purchase intention was marginally significant among participants with a low social-adjustive attitude toward luxury fashion brands, but not among those with a high social-adjustive attitude. This suggests that social-adjustive attitude underlies the effect of counterfeit dominance on brand owners' purchase intention across the two cultural groups.

Counterfeit dominance effects also spill over to other product categories of the same brand. The studies found that counterfeit dominance affects quality perception and purchase intention not only for the same product category -- for example, counterfeit Burberry sunglasses affecting authentic Burberry sunglasses -- but also for a different product category of the same brand -- for example, counterfeit Burberry sunglasses affecting authentic Burberry scarves. This indicates that the detrimental effect of counterfeit dominance in Anglo-American culture compounds across a brand's product lines.

The researchers made several recommendations to support luxury goods producers, including reducing news coverage of counterfeit dominance in Anglo-American culture and harnessing word of mouth in Asian culture. Previous research indicates that acknowledgment of counterfeit dominance is more adverse for Anglo-American than for Asian fashion brand owners.

"Luxury fashion brand manufacturers should collaborate with news and social media websites to reduce the amount of information related to counterfeiting of their luxury fashion brands and cooperate with government agencies to prevent counterfeit dominance in the Anglo-American culture. However, because Asian brand owners' perceptions of luxury fashion brands are strongly affected by their peers, luxury fashion brand manufacturers should focus increasingly on strategies such as word of mouth to influence these consumers' peers to augment the purchase of those brands," Song said.

"Thus, luxury fashion brand managers should segment their consumers by culture and develop different marketing strategies to remedy the loss of sales from counterfeit dominance," he continued.

Another recommendation is to focus on enhancing the quality of luxury products in Anglo-American culture and providing group discounts in Asian culture. Group discounting, or group buying, refers to offering products and services at significantly reduced prices on the condition that a minimum number of buyers make the purchase.

According to the researchers, luxury fashion brand manufacturers should deploy strategies such as creating advertisements that specifically focus on quality to maintain customers with an Anglo-American cultural identity. However, for customers with an Asian cultural background, providing a group discount may increase influence from these consumers' peers to purchase luxury fashion brands.

Credit: 
Penn State

How racial wage discrimination of football players ended in England

image: Pierre Deschamps, photo: Tara Nabavi/Stockholm University

Image: 
Tara Nabavi/Stockholm University

Increased labour mobility seems to have stopped the racial wage discrimination of black English football players. A new study in economics from Stockholm University and Université Paris-Saclay used data from the English Premier League to investigate the impact of the so-called "Bosman ruling", and found that racial wage discrimination against black English players disappeared - but not against non-EU players. The study was recently published in the journal European Economic Review.

In 1995, the so-called Bosman ruling turned the labour market for European footballers upside down, introducing a free transfer market and greatly reducing the power of the football clubs. This ruling, named after the Belgian footballer Jean-Marc Bosman, lifted restrictions on the players' mobility based on the principle of free mobility of labour, as set out in the European Community Treaty. This decision was perceived as a fundamental shock to the system. Players in the EU were suddenly allowed to move to another club at the end of their contract without a transfer fee being paid, among other changes in the regulations for foreign football players in European leagues.

Could this abrupt change in conditions tell us something about racial discrimination in the labour market? Economists Pierre Deschamps and José de Sousa thought so, and used the Bosman ruling to study how a change in the labour market power of firms could affect wage discrimination against black football players in the English Premier League.

"We find that wage discrimination against black English players was substantial before the Bosman ruling and then almost disappeared afterwards. Increasing labour mobility seemed to stop the clubs from being able to wage discriminate", said Pierre Deschamps, PhD in Economics at the Swedish Institute for Social Research (SOFI) at Stockholm University.

To link a football player's wage to his performance, the researchers applied a method for identifying discrimination, a so-called "market-test", to data from the English Premier League. This test detects wage discrimination by comparing teams with similar wage bills but different shares of black players. By comparing the difference in performance between these clubs before and after the Bosman ruling, the researchers could investigate how racial wage discrimination was affected.

Before the Bosman ruling, clubs had to pay transfer fees to recruit players who were out of contract. The situation was similar to the non-compete clauses often found in executive contracts, and increasingly in other types of jobs. As an example, Jean-Marc Bosman refused his club's (Liege) offer of a contract extension at only 25% of his old wage and accepted a contract from the French club Dunkirk instead. Liege then set a prohibitively high transfer fee to block the move and force him to sign the contract extension.

"This was a clear case of monopsony power - a situation where firms can limit labour market competition. After the ruling, players were free to leave for other clubs once their contracts had expired." said Pierre Deschamps.

Groups with less labour mobility still face wage discrimination

This change in labour market power affected clubs' ability to discriminate against black players, according to the researchers. After the ruling, wages were more likely to reflect the talent of black English players. Some groups still face mobility constraints, however: non-EU players must obtain a work permit before a tribunal before they are allowed to play in England.

"When we look at the post-Bosman period, we find that the only players who face wage discrimination are black non-EU players. These players are the only ones who have to face both prejudice from clubs and restrictive contracting rules. This strengthens the case that contracting rules and labour mobility are key to limiting wage discrimination" says Pierre Deschamps.

The conclusion from the study, according to the researchers, is that with the right labour market conditions wage discrimination can be counteracted - even if employers are prejudiced.

"The important take-away from the article is that prejudice does not necessarily lead to wage discrimination. Acting against prejudice, although certainly desirable, is a longer-run endeavour with uncertain results. Limiting monopsony power and increasing labour market mobility can be done right away however, and in our dataset leads to an immediate decrease in wage discrimination." said Pierre Deschamps.

More about the study

The dataset was compiled by the authors, combining detailed team sheet data from the English Premier League from 1981 to 2008 with club wage bills from audited annual accounts. This dataset was then matched to data on the skin colour of football players, determined by visual inspection of players' photographs. Since the method is based on the players' appearance, it is suitable for determining the potential basis for discrimination, because discriminators prejudge an individual based on appearances.

The market-test method involves predicting the performance of a team from its wage bill and its share of black players. The main idea behind the test is that the racial composition of the team should have no effect on performance once the wage bills of clubs are taken into account, unless there is racial discrimination. The researchers apply the market-test to match-level data, in this case the goal difference in a match. Having more black players in a team has a significant effect on performance in the five years before the Bosman ruling, but none afterwards, indicating that wage discrimination disappeared.
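One minimal way to picture the market-test is as a regression of match goal difference on a club's wage bill and its share of black players, estimated before and after the Bosman ruling: a residual effect of racial composition, conditional on wages, signals wage discrimination. The sketch below uses synthetic data and invented column names, and omits the controls and fixed effects of the published specification.

```python
# Hedged sketch of the market-test idea using synthetic data. Column names and
# coefficients are invented; the published model is considerably richer.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "log_wage_bill": rng.normal(0, 1, n),      # club wage bill (standardised, logged)
    "share_black": rng.uniform(0, 0.5, n),     # share of black players fielded
    "post_bosman": rng.integers(0, 2, n),      # 0 = before 1995 ruling, 1 = after
})
# Synthetic outcome: before Bosman, share_black predicts performance beyond wages
# (underpaid talent); after Bosman, it does not.
df["goal_diff"] = (1.0 * df["log_wage_bill"]
                   + 2.0 * df["share_black"] * (1 - df["post_bosman"])
                   + rng.normal(0, 1.5, n))

model = smf.ols("goal_diff ~ log_wage_bill + share_black * post_bosman", data=df).fit()
print(model.summary().tables[1])
```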

Credit: 
Stockholm University

Patently harmful: Fewer female inventors a problem for women's health

Necessity is the father of invention, but where is its mother? According to a new study published in Science, fewer women hold biomedical patents, leading to a reduced number of patented technologies designed to address problems affecting women.

While there are well-known biases that limit the number of women in science and technology, the consequences extend beyond the gender gap in the labour market, say researchers from McGill University, Harvard Business School, and the Universidad de Navarra in Barcelona. Demographic inequities in who gets to invent lead to demographic inequities in who benefits from invention.

"Although the percentage of biomedical patents held by women has risen from 6.3% to 16.2% over the last three decades, men continue to significantly outnumber women as patent holders. As a result, health inventions have tended to focus more on the needs of men than women," says co-author John-Paul Ferguson, an Associate Professor in the Desautels Faculty of Management at McGill University.

The inventor gender gap

To determine which inventions are female-focused, male-focused, or neutral, researchers analyzed 441,504 medical patents filed from 1976 through 2010 using machine learning. They show that patented biomedical inventions created by women are up to 35% more likely to benefit women's health than biomedical inventions created by men. These patents are more likely to address conditions like breast cancer and postpartum preeclampsia, as well as diseases that disproportionately affect women, like fibromyalgia and lupus.
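The article does not describe the classification pipeline in detail, but the toy sketch below shows one generic way patent text could be labelled as female-focused, male-focused or neutral with supervised machine learning. The training examples, labels and model choice are invented for illustration and are not the study's method.

```python
# Toy text-classification sketch: labelling patent text as female-focused,
# male-focused or neutral. Training examples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "device for early detection of breast cancer",
    "treatment for postpartum preeclampsia",
    "therapy for benign prostate enlargement",
    "screening method for sleep apnea in adult males",
    "broad-spectrum antibiotic formulation",
    "adjustable orthopedic knee brace",
]
train_labels = ["female", "female", "male", "male", "neutral", "neutral"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

print(classifier.predict(["implant for treating fibromyalgia pain"]))
```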

While inventions by women are more likely to be female-focused, such patents have been less common because so few inventors were women. In total, women were listed as co-inventors on just a quarter of all patents filed during the period.

The researchers note that female scientists are 40% less likely to commercialize their research ideas than male scientists. The causes of this gender gap are myriad, from differences in mentoring to biases in the early-stage feedback that women receive when trying to commercialize female-focused ideas.

"Our findings suggest that the inventor gender gap is partially responsible for thousands of missing female-focused inventions since 1976. Our calculations suggest that had male and female inventors been equally represented over this period, there would have been an additional 6,500 more female-focused inventions," says say co-author Rembrand Koning, an Assistant Professor at Harvard Business School.

Gender bias in biomedical innovation

The results reveal that inventions by research teams primarily or completely composed of men are more likely to focus on the medical needs of men. In 34 of the 35 years from 1976 to 2010, male-majority teams produced hundreds more inventions focused on the needs of men than those focused on the needs of women.

Male inventors also tended to target diseases and conditions like Parkinson's and sleep apnea, which disproportionately affect men. Overall, the researchers found that across inventor teams of all gender mixes, biomedical invention from 1976 to 2010 focused more on the needs of men than women.

Benefits of more women inventing

The researchers also found more subtle benefits when more women invent. Female inventors are more likely to identify how existing treatments for non-sex-specific diseases like heart attacks, diabetes, and stroke can be improved and adapted for the needs of women. They are also more likely to test whether their ideas and inventions affect men and women differently: for example, if a drug has more adverse side effects in women than in men.

"Our results suggest that increasing representation should address these invisible biases," says Koning.

Credit: 
McGill University