
HDO-antimiR represents a new weapon in the fight against microRNA-related disease

image: This image illustrates the molecular mechanism by which double-stranded HDO-antimiR silences the targeted microRNA within cells, compared with the original single-stranded antimiR.

These findings provide new insights into the biology of miRNA silencing and support the potential of this new class of molecules for use in treating miRNA-related diseases, such as cancer and cardiovascular disease.

"HDO-antimiR appears to have different mechanisms of action compared with the parent antimiR," states first-author Yoshioka. "Thus, given its heightened intracellular miRNA-silencing potency, HDO-antimiR and the techniques that we used to construct it represent a new direction for miRNA regulation research."

Image: 
Department of Neurology and Neurological Science, TMDU

Tokyo, Japan - MicroRNA (miRNA) is a type of RNA that plays an important role in various cellular processes and is also involved in developing defenses against pathogens. Increasing numbers of studies have indicated that certain human diseases are caused by altered expression and organization of miRNA.

Kotaro Yoshioka and Takanori Yokota of Tokyo Medical and Dental University (TMDU) have developed new molecules that can silence or inhibit malfunctioning miRNA, with the hope that their work could lead to new ways to treat diseases caused by these miRNAs. Creating a new type of antimiR, a molecule that inhibits miRNA, could provide the foundation for new gene therapies.

Yoshioka and Yokota recently developed a new type of antimiR with heightened efficacy against miRNA and lowered toxicity, using their original heteroduplex oligonucleotide (HDO) technology. They combined an antimiR with its complementary RNA to produce a new molecule, termed "HDO-antimiR". They then characterized HDO-antimiR--specifically, they evaluated its strength in inhibiting miRNA, assessed its biological distribution, and examined which structural features underlie its action by modifying its structure in mice. They recently published their findings in Nucleic Acids Research.

"HDO-antimiR has a unique double-strand structure that is totally different from previously described antimiRs," corresponding author Yokota says. "This structure is responsible for a substantial increase in the potency of miRNA silenced by HDO-antimiR compared with the single-stranded parent antimiR."

The researchers found that, compared with conventional antimiR, HDO-antimiR bound its targeted miRNA 12 times more efficiently and had improved potency within cells. Further, HDO-antimiR produced enhanced phenotypic effects in mice, specifically increasing the expression of the mRNAs targeted by the miRNA, reflecting greater miRNA inhibition.

The heightened potency of HDO-antimiR was not related to higher bio-stability or an increased rate of delivery to the targeted cells. Although these are common explanations for increased potency, in this case the gain reflected improved activity once the molecule had entered the cells. This indicates that the unique structure of HDO-antimiR enables it to behave differently from other types of miRNA inhibitors.

Credit: 
Tokyo Medical and Dental University

Stature and education level of diabetic women identify those at risk of dementia in Nigeria

* A joint survey by researchers from the Universitat Autònoma de Barcelona (UAB) and Benue State University (BSU) finds that short height and low education levels are characteristic traits of Nigerian women with type 2 diabetes showing early symptoms of dementia.

* This work points to the importance of childhood nutrition and education programmes, particularly for girls, in public health improvement strategies in Nigeria. A sustained programme addressing these problems can mitigate a future burden of dementia in the country's adult diabetic women.

Globally, type 2 diabetes mellitus (DM2) in mid-life is a recognized risk factor for dementia, and this association is stronger in persons of African descent, especially women. Among African diabetics, dementia develops more frequently and more rapidly, and this is associated with factors such as lifestyle, diet, socio-economic status and genetics.

This appears to be the case in Nigeria, which currently has the largest number of persons affected by DM2 on the African continent (3 million and rising). This makes the country a good place to study how varied factors interact in the development of DM and associated co-morbidities in persons of African origin. In the 1960s, the prevalence of DM in Nigeria was less than 1%; in about five decades it has risen to 4.6%-10%, depending on the study population within the country. Factors implicated in this surge include ageing, obesity, hypertension, the adoption of sedentary lifestyles and the wide consumption of unhealthy diets imported from Western countries.

Taking into account the above information, Dr. Lydia Giménez Llort (Head of the Medical Psychology Departmental Unit, representative of the Observatory for Equality at the Faculty of Medicine, and researcher at the Institute of Neurosciences, UAB) and Dr. Efosa Kenneth Oghagbon (Department of Chemical Pathology, Faculty of Basic & Allied Medical Science, College of Health Sciences, Benue State University (BSU), and Consultant Chemical Pathologist and Metabolic Physician at the University Teaching Hospital, BSUTH, Makurdi, Nigeria) carried out a collaborative study that evaluated factors associating DM2 with dementia in a sub-Saharan population of type 2 diabetic subjects. This comprised 62 female and 47 male patients, who were compared with 53 healthy female and 46 healthy male subjects, with special attention paid to the women.

Using two dementia questionnaires/scales, the mini-mental state examination (MMSE) and the six-item cognitive impairment test (6CIT), the researchers assessed the impact of anthropometric measures (body mass index, waist-hip ratio, weight and height) and other biodata (sex, age, education level and profession) on the burden of dementia.

The study results showed that diabetic women with shorter stature and lower education levels had more severe dementia. Women with dementia had twice the level of illiteracy of their healthy counterparts, and the disparity was even starker when compared with men, whose literacy levels were some 11 times higher. Stature was the most discriminating of the physical measures, and the diabetic women in the study were generally shorter. Both dementia scales showed that diabetic Nigerian women with shorter stature and poorer education levels had scores consistent with cognitive deterioration.

Physical height is a simple parameter that most medical clinics in Nigeria can measure. It should be integrated into the routine assessment of diabetic patients, especially women, as it can reveal increased dementia risk. Adult final height is related to early-life nutrition, so there is a need for better childhood feeding, especially among girls and pregnant women.

Dr. Lydia Giménez-Llort emphasised that "...education augments the cognitive reserve of individuals, and this favors brain neuroplasticity and functional development". Furthermore, the UAB scientist said that "...in addition to age, complications of diabetes constitute a key predictive tool for dementia".

Dr. Efosa K. Oghagbon suggests that "optimal nutrition at the onset of life can improve cognition and cerebral development in childhood and adolescence, a key period during which to attain adequate physical and educational development of the Nigerian child".

Both authors agree that government and policy makers should not only emphasise proper childhood nutrition in Nigeria, but should focus particularly on girls.

"Our results can aid the development of evidence-based public health approaches to mitigate dementia in communities in Nigeria and the sub-region. This is important as countries in the sub-region including Nigeria cannot sustain healthcare cost occasioned by dementia and associated issues", opine the two researchers.

Credit: 
Universitat Autonoma de Barcelona

Market competition sets tone for lower cost of UK mobile phone contracts, research shows

Healthy and competitive markets - and not stringent regulations - help dial back the cost of mobile phone contracts, according to new research.

Consumers in the UK benefit from comparatively cheaper bills than many of their counterparts abroad because regulations over contract length and costs are kept to a minimum.

Instead, market competition, where companies vie to offer the best deals to customers, provides the most effective way of keeping prices down and promoting a good service.

A team of law experts from the universities of Warwick and Exeter conducted the new study by comparing the strength of regulations of several countries worldwide.

While factors such as geography can also play a role, the research team found that consumers face higher bills and costs - and have less choice over which provider to select - when contracts are tied up in greater amounts of regulation.

The experts suggest that keeping regulations to a minimum creates "friendlier markets" for both providers and consumers, which can lead to significant price decreases.

Dr Timothy Dodsworth, from the University of Exeter Law School said: "We have found competition should be a key part of the mobile phone market, as it means companies must provide the best service for consumers in order to stay in business. It is part of the reason the UK has some of the lowest bills in the world.

"Smaller businesses find it harder and more expensive to function in a highly regulated market, and strict regulations mean only a few top companies dominate it.

"This means there is very little competition, and therefore no real reason to lower prices and improve services."

Academics examined the way mobile phone contracts were regulated in Germany, Britain, USA and Canada for the study.

They found regulating mobile phone contracts can discourage new companies from entering the market, and so reduce competition and choice for consumers.

While in Europe the length of contract is now determined by the EU, Canada has introduced more regulations in an attempt to make the market more competitive.

However, this has left companies with no flexibility to reduce or adjust fees and costs, meaning contract prices have stagnated - in contrast to price falls in many other countries.

"Adverse effects can also be seen in Europe though.", Christopher Bisping from the University of Warwick explained, "Before the EU introduced a sector-wide maximum length of 24 months, there was a range of different contract lengths available, allowing consumers to to pick whatever length best suited their needs."

The USA is thought to have the least regulation of mobile phone contracts in the world, but official statistics are not kept.

In the UK the regulator Ofcom has introduced new rules around contract length but has taken the decision to not introduce other regulation because it considers there is enough competition in the market.

Dr Dodsworth said: "Ofcom are acting in a sensible way by letting the UK market balance itself, and this should remain their policy if consumers are to have the best service. Examining other countries show if you try to regulate how contracts are renewed or the initial commitment period this has a knock-on effect on the price people pay or has other unintended consequences."

In Canada prices for consumers with lower than average consumption rose by roughly 16 per cent after the introduction of the "Wireless Code" legislation in 2014. However, for average consumption users, the cost remained steady between 2013 and 2014, and higher than average consumption contracts (CAN$80 and above) dropped from an average price of CAN$93 in 2013 to CAN$80 in 2014.

Credit: 
University of Exeter

Aussie businesses not ready to tackle modern slavery

New research from the University of South Australia finds that Australian businesses are ill-prepared for mandatory modern slavery reporting, with more than two-thirds of ASX 100 companies unable to produce a disclosure statement about potentially exploitative labour practices.

It is a concerning finding given that initial reporting periods have already commenced, requiring Australian businesses to deliver their first modern slavery statements by 31 December 2020.

UniSA researchers Dr Katherine Christ and Dr Kathy Rao say that in order to meet the requirements of Australia’s Modern Slavery Act, businesses must significantly ramp up their efforts to ensure they have the systems and procedures ready to report on modern slavery.

“Under federal legislation, businesses with turnovers of more than $100 million must declare what they’re doing to eradicate slavery in their operations and supply chains, yet evidence shows the majority of Australian businesses are underprepared,” Dr Katherine Christ says.

“While a third of Australian businesses sampled in this research were able to produce a modern slavery statement, the volume and quality of their disclosures was low – typically narrative and descriptive of policy – plus there were many inconsistencies of where and how it was reported.

“The general nature of these disclosures just isn’t enough. For Australian businesses to appropriately report on modern slavery, they must be able to produce systematic quantitative data with associated targets and finances.”

Modern slavery encompasses illegal and exploitative labour practices including human trafficking, forced labour, child labour, organ trafficking, sexual exploitation, debt bondage and other slavery-like practices. Driven by consumer demand for cheap goods, it is embedded within many of the products used by Australians every day, including 73 per cent of imported computers, mobile phones and laptops (representing an estimated US$7.0 billion) and 70 per cent of imported clothing and accessories (representing an estimated US$4.5 billion).

Globally, more than 40 million people are trapped in modern slavery with nearly 25 million of these forced into slave labour. In Australia, 15,000 people are victims of modern slavery.

This research is the first to consider the state of modern slavery disclosures within an Australian context, providing a useful benchmark against which the impact of Australia’s Modern Slavery Act can be measured.

Dr Kathy Rao says that eradicating modern slavery requires commitment across all levels of Australian business.

“Modern slavery is a far-reaching and devastating issue, but the responsibility to eliminate it does not sit with business alone,” Dr Rao says.

“While all businesses have a responsibility to mitigate the risks of modern slavery, addressing modern slavery requires a joint and ongoing commitment from all parties including governments, business leaders and boards, suppliers, contractors, NGOs, researchers, professional bodies and the general public.

“Only through a committed process of ongoing and continuous improvement will we be able to ensure modern slavery is truly a thing of the past.”

Credit: 
University of South Australia

The democratic governance of agricultural multinationals is essential for environmental sustainability

The European project Diverfarming, funded by the European Commission within its Horizon 2020 programme, not only pursues environmental sustainability through crop diversification and low-input management practices, but also seeks to bring sustainable innovations into the agri-food supply chain, making it more sustainable too. In this way, value chain actors will also obtain benefits such as economic stability.

To achieve these agri-food supply chain improvements, Diverfarming researchers work along two lines: the design and analysis of value chains, and the framework for relevant policies.

The latest advance in this field has been the release of a study on how partnering can promote sustainability in agri-food supply chains focusing on the case of Barilla Sustainable Farming in the Po Valley (Italy).

Climate change, changing eating habits, pesticide pollution, the shrinking area of arable land and the intensive exploitation of natural resources such as soil (whose fertility is in continuous decline) all threaten the sustainability and stability of the agri-food market, and these issues need to be addressed from several angles.

Considering that multinationals are in charge of the food production system, they should stimulate democratic multi-level governance to make food production more sustainable. To that end, some researchers have proposed so-called multi-stakeholder partnerships (MSPs), in which different groups share a common problem or objective while having different interests.

Within this approach, Diverfarming researchers Barbara Pancino and Emmanuele Blasi from the University of Tuscia, Stefano Pascucci from the University of Exeter and Cesare Ronchi from Barilla G&R Fratelli SPA have studied the case of Barilla Sustainable Farming (BSF), an initiative that is currently designing a multi-stakeholder partnership and implementing several types of agreements between the different stakeholders.

This initiative is carried out by the Barilla group, which in 2013 introduced sustainable agriculture practices by establishing horizontal agreements between Co.Pro.B, Cereal Docks and the Casalasco Tomato Consortium, its main input suppliers. Through this agreement the three companies integrated their supply chains by means of a crop rotation system that combines wheat, sugar beet, rapeseed and sunflower. Although the agreements are currently bilateral, the partners are working on changing them into a multilateral one.

Following the study, the scientific article suggests a two-level approach to implementing the crop rotation system. The first level is a contract that engages farmers around the benefits of crop diversification; the second concerns collaboration between the stakeholders in the partnership, seeking to integrate the supply chains, foster collaboration between partners and develop a set of contracts that can be offered to farmers. With this type of multi-stakeholder partnership, the implementation of crop diversification and other sustainable management practices could be effective and useful for achieving a more sustainable agricultural system throughout Europe.

Diverfarming is a project financed by the Horizon 2020 Programme of the European Commission, within the challenge of "Food Security, Sustainable Agriculture and Forestry, Marine, Maritime and Inland Water Research and the Bioeconomy", which counts on the participation of the Universities of Cartagena and Córdoba (Spain), Tuscia (Italy), Exeter and Portsmouth (United Kingdom), Wageningen (Netherlands), Trier (Germany), Pecs (Hungary) and ETH Zurich (Switzerland), the research centres Consiglio per la ricerca in agricoltura e l'analisi dell'economia agraria (Italy), the Consejo Superior de Investigaciones Científicas (Spain) and the Natural Resources Institute LUKE (Finland), the agrarian organisation ASAJA, and the companies Casalasco and Barilla (Italy), Arento, Disfrimur Logística and Industrias David (Spain), Nieuw Bromo Van Tilburg and Ekoboerdeij de Lingehof (Netherlands), Weingut Dr. Frey (Germany), Nedel-Market KFT and Gere (Hungary) and Paavolan Kotijuustola and Polven Juustola (Finland).

Credit: 
University of Córdoba

Opioid use after vaginal or cesarean delivery among US women

What The Study Did: This study used national insurance claims data for about 988,000 women to look at the association between an opioid prescription after a vaginal or cesarean delivery and rates of new persistent opioid use among U.S. women.

Authors: Alex F. Peahl, M.D., of the University of Michigan in Ann Arbor, is the corresponding author.

(doi:10.1001/jamanetworkopen.2019.7863)

Editor's Note: The article includes conflict of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

The effects of skin aging vary depending on ethnicity, review finds

Boston - The population in the United States is expected to become increasingly older, with estimates indicating that by the year 2030, nearly 20 percent of Americans will be over the age of 65.

As people live longer, their skin is not only aging chronologically, or biologically, but is also being exposed to environmental factors, such as sunlight, that can cause age-related damage to the skin.

Neelam Vashi, MD, director of the Center for Ethnic Skin at Boston Medical Center, has published a review paper in Clinics in Dermatology that discusses how aging presents in patients, and the differences that are attributed to skin type, exposures and genetic factors.

For the review, the researchers examined 41 peer-reviewed articles, published between 1970 and 2018 and identified through PubMed, that focused on aging in ethnic skin. The data in these articles demonstrate that all skin types show signs of damage from exposure to ultraviolet (UV) rays from the sun, including skin discoloration, loss of collagen and/or skin cancer.

Here are some key findings from the review:

Melanin is a key difference in those of light and dark skin types

Patients of color are more likely to experience changes in pigmentation (dyschromia)

Key differences in fibroblasts (cells that promote wound healing and collagen production) account for the increased skin thickness of African-American patients, resulting in wrinkles that appear several years later than in white counterparts

Patients of East Asian descent have a higher likelihood of experiencing hyperpigmentation, but wrinkles don't form as early in the aging process

Patients of Hispanic descent also experience fewer wrinkles earlier in the aging process

Patients of Caucasian descent (European, North African, Southwest Asian ancestry) more commonly have thinner skin and experience wrinkles, loss of skin elasticity, and reduced lip volume

"Aging is inevitable, and each person will have a unique experience with how their skin changes as it ages," said Vashi, who is also an associate professor of dermatology at Boston University School of Medicine.

As a dermatologist, Vashi treats a large number of patients for a variety of skin conditions related to aging. The one treatment she always recommends is UV protection, which helps shield all skin types from the sun's harmful rays. "Skin cancer is the most common type of cancer in the US, and using sunscreen is an extremely important practice to protect your skin," added Vashi.

Some of the other available treatments for skin aging include:

Topical agents, antioxidants, chemical peels and lasers can be effective to treat dyschromia

Botulinum toxin and soft-tissue fillers can help treat wrinkles and sagging skin

Credit: 
Boston Medical Center

Antipsychotic use in youths with ADHD is low, but still cause for concern

NEW YORK, NY (JULY 26, 2019) -- Although fewer young people with ADHD are treated with antipsychotic drugs than suspected, many prescriptions for the drugs do not appear to be clinically warranted, according to a new study from psychiatry researchers at Columbia University Vagelos College of Physicians and Surgeons. They also found that antipsychotic use among youths with ADHD was highest among preschool-age children.

Background

In recent years, pediatricians and parents have expressed concern that some physicians are prescribing antipsychotic drugs to youths with ADHD who have significant aggressive or impulsive behavior. Youths with ADHD who are treated with antipsychotics are often also diagnosed with depression, oppositional defiant disorder (ODD), or conduct disorders (CD), even though there is limited evidence that the drugs are effective for ODD or CD and no evidence they are effective in treating depression.

"We didn't know how widespread this practice was among young people starting ADHD treatment," says senior author Mark Olfson, MD, MPH, Elizabeth K Dollard Professor of Psychiatry, Medicine, and Law at Columbia University Vagelos College of Physicians and Surgeons. "There are substantial risks associated with the use of antipsychotic drugs in young people, including weight gain, hyperlipidemia, diabetes, and even unexpected death."

To determine the prevalence of antipsychotic use in youths with ADHD, the researchers analyzed medical and prescription drug data on 187,563 commercially insured youths (ages 3 to 24) who were diagnosed with ADHD between 2010 and 2015. None of the youths had a recent coexisting psychiatric diagnosis (such as schizophrenia or bipolar disorder) that would warrant treatment with antipsychotic drugs.

What the Study Found

The researchers found that 2.6% of youths diagnosed with ADHD were prescribed an antipsychotic drug within a year of diagnosis--four times the rate among young people in general. Antipsychotic drug use was highest (4.3%) in the youngest children diagnosed with ADHD, those aged 3-5 years.

In about half of those taking antipsychotic drugs, the researchers identified a potential diagnostic rationale--such as bipolar disorder, psychosis, ODD, or CD--for prescribing them.

"While antipsychotics are not FDA-approved for these diagnoses, there is scientific evidence to support their use in treating severe symptoms of ADHD," says Ryan S. Sultan, MD, lead author of the paper and assistant professor of clinical psychiatry at Columbia University Vagelos College of Physicians and Surgeons.

The study also found that fewer than half of the children and adolescents taking antipsychotic drugs had been treated first with stimulants such as Adderall and Ritalin, the recommended medication treatment for ADHD.

"Many physicians bypassed stimulants and went right to antipsychotics--contrary to expert opinion about treatment for ADHD, and unnecessarily exposing patients to the risk of severe side effects such as substantial weight gain," adds Sultan.

What the Study Means

"It's reassuring that only a relatively small percentage of these children were prescribed antipsychotics," Olfson says. "But we should be working to reduce that number even further. For at least half of the young people in our sample who were prescribed antipsychotics, we couldn't find a rationale in their claims records to explain why they were taking these medications."

"While hospitalization and use of other medications may be markers for more severe symptoms, we don't have enough information from these records to determine symptom severity," adds Sultan. "Antipsychotic medications play a small role in the treatment of severe ADHD symptoms, but in the absence of severe symptoms, there are safer, more effective medications for youths with ADHD."

Both Sultan and Olfson suggested that many of the behavioral symptoms that prompted physicians to prescribe antipsychotic medications as an initial treatment might have been resolved by prescribing recommended ADHD medications first.

Credit: 
Columbia University Irving Medical Center

New paper points to soil pore structure as key to carbon storage

EAST LANSING, Mich. -- Alexandra Kravchenko, Michigan State University professor in the Department of Plant, Soil and Microbial Sciences, and several of her colleagues recently discovered a new mechanism determining how carbon is stored in soils that could improve the climate resilience of cropping systems and also reduce their carbon footprints.

The findings, published last week in the scientific journal Nature Communications, reveal the importance of soil pore structure for stimulating soil carbon accumulation and protection.

"Understanding how carbon is stored in soils is important for thinking about solutions for climate change," said Phil Robertson, University Distinguished Professor of Plant, Soil and Microbial Sciences, and a co-author of the study. "It's also pretty important for ways to think about soil fertility and therefore, crop production."

The study was conducted through the MSU Great Lakes Bioenergy Research Center, funded by the U.S. Department of Energy, and the Kellogg Biological Station Long-term Ecological Research program funded by the National Science Foundation, or NSF, and it was supported by NSF's Division of Earth Sciences.

Over a period of nine years, researchers studied five different cropping systems in a replicated field experiment in southwest Michigan. Of the five cropping systems, only the two with high plant diversity resulted in higher levels of soil carbon. Kravchenko and her colleagues used X-ray micro-tomography and micro-scale enzyme mapping to show how pore structures affect microbial activity and carbon protection in these systems, and how plant diversity then impacts the development of soil pores conducive to greater carbon storage.

John Schade, from the NSF Division of Environmental Biology, said the results may transform the understanding of how carbon and climate can interact in plant and soil microbial communities.

"This is a clear demonstration of a unique mechanism by which biological communities can alter the environment, with fundamental consequences for carbon cycling," Schade said.

"One thing that scientists always tend to assume is that the places where the new carbon enters the soil are also the places where it is processed by microbes and is subsequently stored and protected," Kravchenko said. "What we have found is that in order to be protected, the carbon has to move; it cannot be protected in the same place where it enters."

Scientists have traditionally believed soil aggregates, clusters of soil particles, were the principal locations for stable carbon storage.

Recent evidence, however, shows that most stable carbon appears to be the result of microbes producing organic compounds that are then adsorbed onto soil mineral particles. The research further reveals that soil pores created by root systems provide an ideal habitat where this can occur.

Of particular importance are soils from ecosystems with higher plant diversity. Soils from restored prairie ecosystems, with many different plant species, had many more pores of the right size for stable carbon storage than did a pure stand of switchgrass.

"What we found in native prairie, probably because of all the interactions between the roots of diverse species, is that the entire soil matrix is covered with a network of pores," Kravchenko said. "Thus, the distance between the locations where the carbon input occurs, and the mineral surfaces on which it can be protected is very short.

"So, a lot of carbon is being gained by the soil. In monoculture switchgrass the pore network was much weaker, so the microbial metabolites had a much longer way to travel to the protective mineral surfaces," explained Kravchenko.

Robertson said the research may prompt farmers to focus on plant diversity when attempting to increase soil carbon storage.

"We used to think the main way to put more carbon in soil is to have plants produce more biomass either as roots or as residue left on the soil surface to decompose," Robertson said.

"What this research points out is that there are smarter ways of storing carbon than such brute force approaches. If we can design or breed crops with rooting characteristics that favor this kind of soil porosity and therefore that favor soil carbon stabilization, that would be a pretty smart way to design systems that can build carbon faster."

Nick Haddad, director of the Kellogg Biological Station Long-term Ecological Research program, said research that builds from these findings will continue to discover ways to improve the sustainability of agricultural ecosystems and landscapes.

"Long-term research shows surprising ways that a diversity of plants can benefit the microbes needed for a resilient agricultural system," Haddad added.

(Note for media: Please include a link to the original paper in online coverage: https://www.nature.com/articles/s41467-019-11057-4)

Credit: 
Michigan State University

Work that kills

image: This graph shows nonstandard work schedules (% of workers, ESS, 2011).

Image: 
Andrei Shevchuk and Anna Krasilnikova

More than 64% of employed Russians work evenings, nights or weekends, and this is one of the highest figures among European countries. Andrei Shevchuk and Anna Krasilnikova from HSE University were the first to study the extent of nonstandard working hours in Russia and its impact on work-life balance.

Departure from Traditions

A five-day week of eight-hour workdays, with two days off, is the generally accepted standard of employment. But deviations from this schedule are not uncommon, and emergency services are not the only ones operating 24/7.

The negative effects of unconventional working schedules are well known and include disruption of biological and social rhythms and damage to health and subjective wellbeing.

Interference with circadian rhythms has been found to cause depression, headaches and burnout as well as limiting one's quality time with loved ones and eroding family relations and social life. Chronic fatigue and sleepiness can lead to workplace accidents and injuries.

These are the findings of international researchers. Similar studies in Russia have been limited to specific groups such as freelancers and employees of certain services and call centres, but no general conclusions or quantitative assessments have been published so far.

Night Shift Majority

The first attempt to assess the scale of the problem was based on data from the European Social Survey Round 5 (ESS, 2011), which included questions about working evenings, nights and weekends.

The findings were startling. According to the study authors,

only 41% of employed Russians do not work evenings and nights, and only 23% do not work at weekends;

one in two employees works evening/night shifts at least a few times each month, and one in four works evenings or nights several times a week or every day;

some 60% work on Saturdays and Sundays at least once every month, and about one-half work at weekends several times a month or every week;

36% of employees face both types of work schedules, i.e. evening/night shifts as well as working weekends.

Overall, slightly more than 64% of Russian employees are engaged in nonstandard work schedules on a consistent basis. These are primarily men, top and middle managers, highly-skilled industrial personnel, as well as those employed in agriculture, retail and service sectors.

Ahead of Other Countries

Conducted in 27 countries, the ESS found that countries with the highest rates of nonstandard working hours included Croatia (71%), Greece (70%) and Poland (65%), while those where nonstandard schedules were less common included Denmark, France and Portugal (51% each), Bulgaria (50%), the Netherlands (49%), and Israel (44%).

Russia ranked in the top four countries in terms of all types of nonstandard working schedules combined and also showed a high rate (36%) of dual nighttime and weekend work (30% average across countries).

But according to the researchers, the actual prevalence of nonstandard working hours may be even greater, since an estimated one-third of working Russians have additional employment, whereas ESS questions focused on respondents' main place of work only.

Disrupted Balance

In terms of subjective wellbeing and self-esteem, one's perception of work-life balance (family/partnership, household work, cultural development, hobbies, sports, recreation and entertainment) is particularly affected by nonstandard working hours.

The researchers applied regression analysis using a specially constructed index - the average value of three variables (based on responses to three ESS questions) - to measure the deviation from the equilibrium:

How often does a person feel too tired after work to enjoy the things they would like to do at home?

How satisfied are they with the balance between the time they spend on their paid work and the time they spend on other aspects of life?

How often do they find that their job prevents them from giving the time they want to their partner or family?

The lower the index, the harder it is for the subject to maintain work-life balance, and all types of nonstandard working schedules have been found to play a role.
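
The release does not give the authors' exact coding, but the construction it describes - an index equal to the average of three survey items, then related to work-schedule variables by regression - can be illustrated with a short, hypothetical sketch. The column names, scale directions and the 0-10 rescaling below are assumptions for illustration, not the ESS codebook or the authors' actual specification.

```python
# Hypothetical sketch of the index construction and regression described above.
# Column names, scale directions and the 0-10 rescaling are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ess_round5_sample.csv")  # assumed extract of ESS Round 5 data

# Three items, each recoded so that higher values = better work-life balance.
items = ["not_too_tired_after_work", "satisfied_time_balance", "job_not_blocking_family"]
for col in items:
    lo, hi = df[col].min(), df[col].max()
    df[col + "_scaled"] = (df[col] - lo) / (hi - lo) * 10

# The index is the average of the three rescaled items; lower values indicate
# greater difficulty maintaining work-life balance.
df["wlb_index"] = df[[c + "_scaled" for c in items]].mean(axis=1)

# Regress the index on the frequency of evening/night and weekend work,
# with a few illustrative demographic controls.
model = smf.ols(
    "wlb_index ~ evening_night_freq + weekend_freq + age + C(gender) + C(occupation)",
    data=df,
).fit()
print(model.summary())
```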

According to the study authors, 'Someone can feel a deterioration of their work-life balance just by working evenings/nights a few times a month or weekends once a month'.

The negative impact of night shifts grows with increasing frequency: the more often one is required to work nights, the lower the chances of a balanced life. In contrast, the frequency of weekend work does not seem to make a difference: no matter whether one has to work weekends once a month or once a week, the negative effect is the same.

Credit: 
National Research University Higher School of Economics

City of Hope study finds novel mechanism of action for NK cells in checkpoint inhibitor for cancer

DUARTE, Calif. -- PD-L1 checkpoint inhibitors are a powerful and growing form of immunotherapy used to treat melanoma, kidney cancer, head and neck cancers, Hodgkin's lymphoma and other cancers. The PD-L1 protein is expressed on tumor cells and aids the cancer by signaling to immune cells, such as T cells, to stop working against tumors.

Checkpoint inhibitors, also called anti-PD-L1 monoclonal antibodies, block the PD-L1 protein to help the immune system and, specifically, T cells, do what they're designed to do, eradicate cancer. However, in some instances, anti-PD-L1 antibodies show anti-tumor activity in patients whose tumors do not express PD-L1.

Now, for the first time, City of Hope scientists have discovered that natural killer (NK) cells provide one reason why anti-PD-L1 antibodies might work when tumor cells do not express PD-L1. The study, published today in Cancer Discovery, found that NK cells can also express PD-L1 in some cancer patients. PD-L1 expression on the NK cells identifies them as charged or highly activated and can demonstrate anti-tumor activity.

Further, when bound by the anti-PD-L1 antibody, the NK cells can kill the tumor cell better regardless of PD-L1 expression on the tumor cells.

If an NK cell expressing PD-L1 is treated with a PD-L1 antibody, the interaction activates PD-L1+ NK cells to control the growth of tumors by killing those tumors and by the secretion of cytokines. This demonstrates a novel mechanism of action that provides a significant role for the NK cell and the anti-PD-L1 antibody in anti-tumor activity especially in instances where the tumor cell does not express PD-L1.

"We have provided a scientific explanation as to how checkpoint inhibitor therapy can work when there's no checkpoint expressed on a patient's cancer cells," said Jianhua Yu, Ph.D., one of the study's senior authors, City of Hope professor in the Department of Hematology & Hematopoietic Cell Transplantation, and a Scholar of The Leukemia & Lymphoma Society. "Using checkpoint inhibitors for NK cells with PD-L1 expression can lead to stronger anti-cancer activity, providing us with another powerful therapy against even more cancers."

Michael Caligiuri, M.D., the study's other senior author, president of City of Hope National Medical Center and holder of the Deana and Steve Campbell Physician-in-Chief Distinguished Chair, noted that NK cells comprise a group of innate immune cells that can attack cancer and viral infections. But there has been no research on how PD-L1 and NK cells interact against cancer.

"Natural killer cells are the body's first line of defense against cancer and viral infections," Caligiuri said. "When NK cells detect tumor or viral cells in the body, they have the potential to kill them immediately. But in those with cancer, tumors have developed mechanisms to circumvent NK cells and T cells. We believe PD-L1 expression on NK cells identifies tumors that could be susceptible to destruction by NK cells, thereby providing a new immunotherapeutic avenue to explore."

The scientists studied PD-L1+ and PD-L1- NK cells in both humans and mice with PD-L1- tumors. PD-L1+ NK cells, upon encountering and being activated by NK-susceptible tumor cells, secreted more cytokines and cytolytic granules, both of which increased the immune cells' effectiveness. PD-L1+ NK cells, which can also be generated in the laboratory by culturing with certain tumor cells or with cytokines, killed more tumor cells in vitro than NK cells that were PD-L1- or than NK cells that had not encountered tumor cells or cytokines. These results were reproduced in an in vivo mouse model containing human NK cells.

Researchers also found that NK cells from a majority of the 79 acute myeloid leukemia (AML) patients examined expressed moderate to high levels of PD-L1, and those who entered a complete remission from their leukemia had a higher percentage of PD-L1+ NK cells at the time of remission than at diagnosis. In contrast, patients who failed to enter a complete response had no change in the percentage of PD-L1+ NK cells at remission compared with diagnosis.

Because the percentage of PD-L1+ NK cells following chemotherapy correlated with a positive clinical response - in contrast to those AML patients whose NK cells did not express PD-L1 - the study's authors believe a next step could be a clinical trial for particular AML patients displaying an increase in PD-L1+ NK cells at the time of remission. The trial would include anti-PD-L1 monoclonal antibodies with or without NK cell-activating cytokines, thereby exploiting a novel pathway that is independent of T cells and PD-1, the other target for checkpoint inhibitor therapy.

City of Hope is also planning similar clinical trials for patients with other cancers such as lung cancer.

Credit: 
City of Hope

Mouse model supports importance of fatty acid balance in chronic disease

BOSTON - Using novel transgenic mouse models they developed, Massachusetts General Hospital (MGH) investigators have provided new evidence that it is the ratio of omega-6 to omega-3 fatty acids, rather than their total amount, that influences the risk of developing chronic disease. This work has important implications for wellness and dietary guidelines. Their paper has just been published in Communications Biology, a Nature Research journal.

"Understanding of the differential effects of these two classes of polyunsaturated fatty acids on the development of chronic disease is important but challenging due to confounding dietary factors. We have developed a unique approach to address that." says the study's senior author Jing X. Kang, MD, PhD, director of the Laboratory for Lipid Medicine and Technology at MGH and associate professor of Medicine at Harvard Medical School. The team led by Kang has created several novel mouse models for studying health effects of omega-6 and omega-3 fatty acids.

The role of polyunsaturated fatty acids (PUFAs) in human health has long been debated but is of great interest. They are one of many factors thought to influence chronic diseases such as obesity, type 2 diabetes, cardiovascular disease, and cancer, but studies have shown inconsistent results regarding exactly how they affect risk. The MGH researchers' new paper lends important new evidence to this field by using mouse models that help eliminate some of the myriad confounding dietary factors that affect studies in this area. The transgenic mice used are identical -- except in the levels of n-6 and n-3 PUFAs they naturally produce, whatever their diet.

The researchers used four strains of mice for their study, a wild type or "normal" mouse, and then three related mouse strains engineered to produce varying levels of n-6 and n-3 PUFA, no matter what they were fed. These mice can synthesize sufficient levels of specific PUFAs to adjust for dietary factors that would normally disrupt PUFA levels.

The MGH team studied whether the four types of mice showed different rates of metabolic disorders, including metabolic endotoxemia, systemic inflammation, obesity, fatty liver, glucose intolerance, and cancer. The mice that over-produced n-6 PUFA had a higher risk of metabolic disease and cancer, while mice able to convert n-6 to n-3, thereby lowering the ratio, showed a healthier phenotype. The researchers were also able to uncover details about the molecular interactions between these fatty acids and biological networks. For example, the alteration of the PUFA n-6 to n-3 ratio led to changes in the gut microbiome and fecal and serum metabolites.

"The beauty of these mouse models is that they reduce confounding effects," says the study's lead author Kanakaraju Kaliannan, MD, MGH investigator of the study and instructor in Medicine at Harvard Medical School. "We will be able to use them to study many other things, including how PUFA levels specifically impact disease risk."

"Many lines of evidence now support the notion that the omega-6/omega-3 imbalance is a critical factor that contributes to the development of chronic disease," Kang added. "Balancing the PUFA ratio may be a safe and effective solution to some modern health problems." His team is currently working on translational research to explore the clinical utility of the balancing intervention and the feasibility of using the tissue omega-6/omega-3 ratio as a new health biomarker.

Credit: 
Massachusetts General Hospital

Paris Agreement hampered by inconsistent pledges, new research finds

Some countries' Paris Climate Agreement pledges may not be as ambitious as they appear, a new study has found.

The Paris Agreement takes a bottom-up approach to tackling climate change, with countries submitting pledges, in the form of nationally determined contributions (NDCs), to curb greenhouse gas emissions.

However, writing today in Environmental Research Letters, researchers from the Autonomous University of Barcelona (UAB), Spain, reveal a lack of consistency and transparency between the various commitments.

Lead author Lewis King, from UAB, said: "The Paris Climate Agreement was a step in the right direction for international climate policy. But in its current form, it is at best inadequate and at worst grossly ineffective.

"Our study highlights significant issues around transparency and consistency in the agreement's pledges, which may be a contributory factor towards the lack of ambition in the pledges from some parties."

Co-author Professor Jeroen van den Bergh explained: "The sum of the agreement's national pledges on greenhouse gas emission mitigation - in the form of NDCs - falls short of meeting the agreement's 2°C target."

To shed light on the reasons behind this, the researchers analysed the different country-level commitments by categorising and normalising them to make them comparable. Their four categories were:

Absolute emission reduction targets - absolute emission reductions for a target year in percentage terms relative to a historic base year. The base year is set by the country and ranges from 1990 to 2014, while the target year is typically 2030, and in a few cases 2025.

'Business as usual' (BAU) reduction - a percentage reduction in emissions relative to a 'business as usual' scenario, typically to 2030. It is defined by each country itself, causing a large variance in emissions growth among scenarios.

Emission intensity reductions - a reduction in emissions per unit of GDP relative to a historic base year.

Pledges without explicit GHG-emission targets - NDCs that describe projects or actions but do not include an explicit greenhouse gas emission target.

The researchers assessed these categories by adding the dimensions of geographic region and emission intensity per capita.

Mr King said: "Our normalisation of the pledges effectively converts them all to the absolute emission reduction target format, but indicating actual emission change - whether positive or negative - compared with a consistent base year.

"We found that authentic absolute reduction pledges had the highest ambition in terms of tangible emissions reduction. By contrast, pledges in the other three categories tend to produce low ambitions with significant emissions increases of 29-53 per cent at a global level.

"Significantly, we found that Northern America and the EU were the only regions aiming for absolute reductions in emissions. In the Middle East and North Africa, and South Asia, substantial increases are expected."

Professor van den Bergh said: "The current format of the pledges means it's difficult to accurately assess and compare what the pledges mean in actual emissions terms.

"For example, Russia, India and Pakistan all frame their NDCs in terms of percentage reductions; Russia relative to a base year, India relative to emissions per GDP and Pakistan relative to a BAU scenario. However, after the normalisation, the pledges result in substantial percentage increases in emissions by 2030. Not only does this make the associated pledges difficult to interpret and compare to other pledges without detailed analysis, but may produce a psychological effect of reducing ambition level due to framing the pledge as a percentage reduction even though emissions actually increase."

Mr King added: "Society has the right to be able to clearly understand and compare climate change commitments by countries, including whether they are fair, ambitious and add up to international climate goals. We also know that providing consistent and easily comparable information about national climate goals helps with public acceptance.

"The current lack of transparency and consistency will hinder the NDC process. To move forward, we suggest the principles of transparency and consistency from the TACCC framework are extended to the framing of the NDCs themselves. This would be easy to achieve, by having countries convert their pledges into clear emission targets relative to the most recent available base year, inclusive of all significant gases and sectors.

"Not only can this help produce targets of greater ambition that are more open to external scrutiny, but it also will assist in improving effectiveness through minimising counterproductive systemic effects."

Credit: 
IOP Publishing

Antibiotic-resistant genes found in London's canals and ponds

Central London's freshwater sources contain high levels of antibiotic resistant genes, with the River Thames having the highest amount, according to research by UCL.

The Regent's Canal, Regent's Park Pond and the Serpentine all contained the genes but at lower levels than the Thames, which contained genes providing resistance for bacteria to common antibiotics such as penicillin, erythromycin and tetracycline.

The genes come from bacteria in human and animal waste. When antibiotics are taken by humans much of the drug is excreted into the sewer system and then into freshwater sources. The presence of antibiotics in these water sources provides an environment where microbes carrying the resistance genes can multiply quicker and share their resistance with other microbes.

Project lead Dr Lena Ciric (UCL Civil, Environmental & Geomatic Engineering) said: "This shows that more research is needed into the efficiency of different water treatment methods for antibiotic removal, as none of the treatments currently used were designed to incorporate this.

"This is particularly important in the case of water bodies into which we discharge our treated wastewater, which currently still contains antibiotics. It is also important to look into the levels of antibiotics and resistant bacteria in our drinking water sources."

There is currently no legislation to remove antibiotics or the resistant genes from water sources, implying that antibiotics and the resistant genes could be present in small amounts in drinking water, although this would require testing.

The Thames is likely to have higher levels of antibiotics and resistant genes because a large number of wastewater treatment works discharge into it both upstream and in London.

Antibiotics entering the sewer system are diluted through flushing, but even low levels can encourage resistance genes to multiply and spread to more microbes.

The research team developed a DNA-based method which can provide information about the number of each of the resistant genes per litre of water. They then compared the numbers of the resistant genes in the different London water systems.
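
The release does not describe the assay itself, but a genes-per-litre figure from a DNA-based count is typically back-calculated from the volume of water filtered and the fraction of the DNA extract that goes into each measurement. The sketch below shows only that bookkeeping, with assumed volumes; it is not UCL's actual protocol.

```python
# Back-calculate resistance-gene copies per litre of sampled water from the
# copies detected in a single DNA-based measurement. All volumes are assumed.

def copies_per_litre(copies_in_reaction, template_volume_ul,
                     elution_volume_ul, water_filtered_l):
    """Scale copies found in one reaction up to the whole water sample."""
    copies_in_extract = copies_in_reaction * (elution_volume_ul / template_volume_ul)
    return copies_in_extract / water_filtered_l

# Example: 2,500 copies detected in 5 uL of template taken from a 100 uL DNA
# extract, which was obtained by filtering 0.5 L of canal water.
print(copies_per_litre(2500, template_volume_ul=5,
                       elution_volume_ul=100, water_filtered_l=0.5))
# -> 100000.0 copies per litre
```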

The team are now experimenting with removing antibiotics and the resistant bacteria and genes from water taken from London's natural water system using slow sand filtration, which is a form of drinking water treatment.

This technique is used around the world including at Thames Water's Coppermills Treatment Works which provides drinking water for most of north east London. They are investigating using different variations of the filtration, with changing proportions of sand and activated carbon and different flow rates.

Credit: 
University College London

When should banks chase debts? New method could help them decide

Like Kenny Rogers' gambler, who has to "know when to hold 'em, know when to fold 'em," banks face financial risks and uncertainty when deciding when to chase consumers who default on their credit card payments and when to let them go.

A new study from the McCombs School of Business at The University of Texas at Austin analyzes delinquent credit card user behaviors and develops a predictive model for sorting them into categories based on whether they are more or less likely to pay back their overdue debt.

The model was developed by Naveed Chehrazi, assistant professor of information, risk and operations management at the McCombs School, and co-authors Peter Glynn from Stanford University and Thomas Weber from École Polytechnique Fédérale de Lausanne. Their research was recently published in the journal Management Science.

"When you know how much capital is at risk when an account holder defaults, you are able to better assess the risk of an applicant and properly adjust the parameters of the account," Chehrazi said.

Banks typically rely on outside collection agencies to recoup significant credit card debts, but that can be expensive. Chehrazi and his fellow researchers worked with banks and collection agencies to develop a model so banks can determine when to chase a delinquent account and when to let it go.

Using information such as the likelihood of repayment and the amount still in debt, the method can help bankers decide the optimal collection strategy based on the state of the account and information about the user.

The optimal collection strategy maps any possible state of the account to an action. The model can help collection managers identify the right time and nature of an action, which can take the form of establishing contact, negotiating a repayment plan, or filing for a lawsuit. For example, the higher a person's unpaid debt, the more it makes sense for the bank to invest in strong actions to spur repayment. For smaller delinquencies, the cost of pursuit may not be worth it.

The research also develops an "economic balance threshold," a point at which the active pursuit of reclaiming debt no longer makes economic sense. For example, if a person initially owes $1,000, but after being contacted numerous times by credit collectors, pays his debt down to $100, the bank may be better off ceasing collection activities. This is because the person's inferred willingness and ability to pay suggest the portion of $100 that would be recovered by pursuing collection would not be enough to cover the cost of taking further action. From this perspective, the economic balance threshold of an account can be viewed as the minimum expected loss that a bank could incur when an account defaults.
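
The published model is a far richer stochastic formulation, but the logic of the paragraph above - keep pursuing an account only while the expected additional recovery exceeds the cost of the next action - can be sketched very simply. The repayment probability and action cost below are invented for illustration and are not values from the study.

```python
# Simplified illustration of an "economic balance threshold": collection is
# worthwhile only while the expected incremental recovery beats the action cost.
# The probability and cost figures are illustrative assumptions, not the model's.

def expected_recovery(balance, repay_prob):
    """Expected amount recovered if one more collection action is taken."""
    return balance * repay_prob

def should_pursue(balance, repay_prob, action_cost):
    return expected_recovery(balance, repay_prob) > action_cost

def economic_balance_threshold(repay_prob, action_cost):
    """Smallest outstanding balance for which another action still pays off."""
    return action_cost / repay_prob

action_cost = 40.0  # cost of one more contact or legal step (assumed)
repay_prob = 0.30   # inferred willingness/ability to repay (assumed)

for balance in (1000.0, 100.0):
    decision = "pursue" if should_pursue(balance, repay_prob, action_cost) else "stop"
    print(f"balance ${balance:,.0f}: expected recovery "
          f"${expected_recovery(balance, repay_prob):,.0f} "
          f"vs cost ${action_cost:,.0f} -> {decision}")

print(f"economic balance threshold ~ ${economic_balance_threshold(repay_prob, action_cost):,.0f}")
```

With these assumed numbers the $1,000 account is still worth pursuing, the $100 account is not, and the break-even balance sits at roughly $133, mirroring the example in the paragraph above.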

The model could make it much easier for banks to determine what accounts are worth spending time and money on, but Chehrazi said the method can also be used in many different stages of the collection process.

"One application of this is determining how much capital reserve banks have to hold in order to properly take into account the risk of delinquency and loss," Chehrazi said. "It is useful for an account, it is useful for banks and also it is useful for the general consumer population. The benefit of this for the whole economy is that the risk of default is better accounted for and that the general population would have easier access to credit they need."

Credit: 
University of Texas at Austin