Culture

The Lancet Infectious Diseases: New investigational antibiotic effective against drug-resistant bacteria in phase 2 trial

Results from a phase 2 randomised trial suggest that a new investigational antibiotic is as effective as the current standard-of-care antibiotic for the treatment of complicated urinary tract infections (UTIs) caused by several multidrug-resistant Gram-negative bacteria.

The findings, published in The Lancet Infectious Diseases, indicated that patients treated with the siderophore-based drug, cefiderocol, had a higher and more sustained level of pathogen eradication and similar clinical outcomes to those treated with the current standard of care, imipenem-cilastatin.

Cefiderocol is novel in its approach to overcoming the three main mechanisms of antibiotic resistance used by Gram-negative bacteria--an additional outer membrane that makes it hard for antibiotics to penetrate, porin channels that can adapt and change to block antibiotic entry, and efflux pumps that expel antibiotics back out of the cell and make the drugs ineffective.

"Cefiderocol acts as a trojan horse," explains Dr Simon Portsmouth, Shionogi Inc, USA, who led the research. "The drug uses a novel mechanism of cell entry that takes advantage of the bacteria's need for iron to survive. During an acute infection, one of our innate immune responses is to create an iron-poor environment. In response, bacteria increase their iron intake. Cefiderocol binds to irons and is transported through the extra outer membrane by the bacterium's own iron-transport system. These iron channels also enable the drug to bypass the bacteria's porin channels and gain repeat entry even if the bacterium has evolved efflux pumps." [1]

The findings highlight the potential of cefiderocol, once approved, as an important new option for treating highly resistant Gram-negative bacteria. Cefiderocol's effect on carbapenem-resistant strains--which cause some of the hardest-to-treat infections in health-care settings, and for which there is no antibiotic alternative that does not have serious side effects or other complications--could not be properly evaluated because the carbapenem drug imipenem-cilastatin was used as the active control treatment.

The US Centers for Disease Control and Prevention (CDC) estimate that antibiotic-resistant microorganisms cause more than two million infections in the USA each year, resulting in at least 23,000 deaths. A 2014 Review on Antimicrobial Resistance predicted that by 2050, the global cost of antibiotic resistance will rise to as much as £100 trillion and account for 10 million deaths every year.

Antibiotic resistance has been identified as one of the biggest threats to human health globally. As the effectiveness of the existing antibiotic arsenal dwindles, new antibiotics with novel modes of action are urgently needed. WHO considers carbapenem-resistant Gram-negative pathogens, including Pseudomonas aeruginosa, Acinetobacter baumannii, and Enterobacteriaceae, the most critical priority for the development of new antibiotics. Previous studies have demonstrated that cefiderocol is active against all three multidrug-resistant pathogens.

As part of the US Food and Drug Administration (FDA) approach to fast-tracking antibiotic development [2], this study randomised 448 adults (aged 18 or older) who had been hospitalised with a complicated UTI or uncomplicated pyelonephritis (inflammation of the kidney due to a bacterial infection) to receive three daily infusions of cefiderocol (300 patients) or imipenem-cilastatin (148 patients) for seven to 14 days. In total, 252 patients treated with cefiderocol and 119 with imipenem-cilastatin had a Gram-negative uropathogen and were included in the efficacy analysis. The majority of participants had Escherichia coli, Klebsiella pneumoniae, or P aeruginosa infections (figure 2).

Results suggested that cefiderocol was as effective as imipenem-cilastatin in a combined evaluation of the clinical and microbiological response, with efficacy rates of 73% (183/252 patients) and 55% (65/119 patients), respectively, seven days after treatment was stopped. This difference was mainly driven by the sustained antibacterial activity of cefiderocol, whilst the clinical responses were highly similar (90% vs 87%).
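
As a back-of-the-envelope check (not the paper's analysis, which reports adjusted estimates), the counts quoted above can be turned into a crude difference in composite response rates with a Wald 95% confidence interval:

# Rough check of the reported response rates (not from the paper):
# Wald 95% CI for the difference of two proportions, using the quoted counts.
from math import sqrt

def rate_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald confidence interval for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Cefiderocol: 183/252 responders; imipenem-cilastatin: 65/119
diff, (low, high) = rate_difference_ci(183, 252, 65, 119)
print(f"difference = {diff:.1%}, 95% CI ({low:.1%}, {high:.1%})")
# ~18% difference in favour of cefiderocol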

Overall, cefiderocol was well tolerated, with numbers of adverse events similar to those of imipenem-cilastatin (41% [122/300 patients] vs 51% [76/148 patients]). Gastrointestinal disorders (ie, diarrhoea, constipation, nausea, vomiting, and abdominal pain) were the most common adverse events in both groups (35 [12%] patients in the cefiderocol group and 27 [18%] patients in the imipenem-cilastatin group). Fourteen (5%) participants in the cefiderocol group and 12 (8%) in the imipenem-cilastatin group reported at least one serious adverse event, with C difficile colitis the most common.

Dr Portsmouth says: "Cefiderocol was found to be both safe and tolerable in a population of older patients who were very ill with complex comorbid conditions and a wide range of multidrug-resistant pathogens. Our results support cefiderocol as a novel approach that might be used to overcome Gram-negative resistance." [1]

"Ongoing clinical trials of pneumonia, including hospital-acquired pneumonia and ventilator-associated pneumonia, and a study in patients with carbapenem-resistant infections, will provide additional important information about cefiderocol." [1]

The authors note that an important limitation of the study was the exclusion of patients with carbapenem-resistant infections because the comparator was a carbapenem.

Writing in a linked Comment, Dr Angela Huttner, Geneva University Hospitals, Switzerland, discusses the importance of assessing the post-market clinical experience of the drug: "The FDA's new guidance on complicated urinary tract infection endpoints, issued in June 2018, calls for complete clinical resolution and changed the cutoff for microbiological response to bacterial counts of less than 1 × 10³ CFU/mL. An accelerated process to get new antibiotics to market was urgently needed, and regulatory bodies responded. But any trial launched more than four months ago (including an ongoing phase 3 cefiderocol trial, NCT02714595) will now be adhering to outdated standards and requirements. There is still no guidance on measuring baseline or emerging resistance; this too will fall to post-market development. Although these results are promising with regard to obtaining approval for cefiderocol and in the context of increasing antimicrobial resistance, scepticism will persist until more evidence is available. Cefiderocol remains on the fast track to approval. This is welcome news, as long as those in post-market clinical medicine understand the deal we have made: it will fall to us to continue the drug's clinical development, while managing its appropriate use and conservation, and thus take its true measure."

Credit: 
The Lancet

New epigenetic drug strategy to treat cancer

(Philadelphia, PA) - Researchers have discovered that inhibiting CDK9, a DNA transcription regulator, reactivates genes that have been epigenetically silenced by cancer. Reactivation leads to restored tumor suppressor gene expression and enhanced anti-cancer immunity. It is the first time this particular kinase has been linked to gene silencing in mammals.

Jean-Pierre Issa, MD, Director of the Fels Institute for Cancer Research & Molecular Biology at the Lewis Katz School of Medicine at Temple University (LKSOM), led the research. The paper appears in the journal Cell.

It has been established that epigenetic mediators of gene silencing present new targets for cancer drugs. Hanghang Zhang, PhD, of the Fels Institute for Cancer Research & Molecular Biology at LKSOM, the first author on the report, performed a live cell drug screen with genetic confirmation to identify CDK9 as a target and to develop and test an effective inhibitor - MC180295. The new drug is highly selective, potentially avoiding the side effects associated with inhibiting the cell cycle. In the study it showed broad effectiveness against cancer both in vitro and in vivo. The drug was discovered in collaboration with investigators at the Moulder Center for Drug Discovery at the Temple University School of Pharmacy.

"In addition to reactivating tumor suppressor genes, CDK9 inhibition induces sensitivity to the immune checkpoint inhibitor α-PD-1 in vivo," said Issa. "It is an excellent target for epigenetic cancer therapy."

Credit: 
Temple University Health System

Mind's quality control center found in long-ignored brain area

The cerebellum can't get no respect. Located inconveniently on the underside of the brain and initially thought to be limited to controlling movement, the cerebellum has long been treated like an afterthought by researchers studying higher brain functions.

But researchers at Washington University School of Medicine in St. Louis say overlooking the cerebellum is a mistake. Their findings, published Oct. 25 in Neuron, suggest that the cerebellum has a hand in every aspect of higher brain functions -- not just movement, but attention, thinking, planning and decision-making.

"The biggest surprise to me was the discovery that 80 percent of the cerebellum is devoted to the smart stuff," said senior author Nico Dosenbach, MD, PhD, an assistant professor of neurology, of occupational therapy and of pediatrics. "Everyone thought the cerebellum was about movement. If your cerebellum is damaged, you can't move smoothly ­-- your hand jerks around when you try to reach for something. Our research strongly suggests that just as the cerebellum serves as a quality check on movement, it also checks your thoughts as well -- smoothing them out, correcting them, perfecting things."

Dosenbach is a founding member of the Midnight Scan Club, a group of Washington University neuroscientists who have taken turns in an MRI scanner late at night, scanning their own brains for hours to generate a massive amount of high-quality data for their research. A previous analysis of Midnight Scan Club data showed that a kind of brain scan called functional connectivity MRI can reliably detect fundamental differences in how individual brains are wired.

Postdoctoral researcher and first author Scott Marek, PhD, decided to apply a similar analysis to the cerebellum. In the better-known cerebral cortex -- the crumpled outer layer of the brain -- wiring maps have been drawn that connect distant areas into networks that govern vision, attention, language and movement. But nobody knew how the cerebellum is organized in individuals, partly because a quirk of MRI technology means that data obtained from the underside of the brain tend to be low quality. In the Midnight Scan Club dataset, however, Marek had access to more than 10 hours of scans on each of 10 people, enough to take a serious look at the cerebellum.

Using the cortex's networks as a template, Marek could identify the networks in the cerebellum. Notably, the sensory networks are missing -- vision, hearing and touch -- and only 20 percent of the cerebellum is devoted to movement, roughly the same amount as in the cerebral cortex. The remaining 80 percent is occupied by networks involved in higher-order cognition: the attention network; the default network, which has to do with daydreaming, recalling memories and just idly thinking; and two networks that oversee executive functions such as decision-making and planning.
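
One standard way to make this kind of assignment is winner-take-all labelling: correlate each cerebellar voxel's activity with the average signal of each cortical network, then label the voxel with the network it correlates with most strongly. The sketch below illustrates the idea only; the array shapes, preprocessing and function are assumptions, not the authors' pipeline.

# Minimal sketch (not the study's code): winner-take-all assignment of
# cerebellar voxels to cortical networks via functional connectivity.
# Assumes preprocessed BOLD time series as NumPy arrays.
import numpy as np

def assign_networks(cerebellum_ts, network_ts):
    """
    cerebellum_ts: (n_voxels, n_timepoints) cerebellar time series
    network_ts:    (n_networks, n_timepoints) mean series per cortical network
    Returns the index of the most correlated network for each voxel.
    """
    # z-score along time so dot products become Pearson correlations
    cz = (cerebellum_ts - cerebellum_ts.mean(1, keepdims=True)) / cerebellum_ts.std(1, keepdims=True)
    nz = (network_ts - network_ts.mean(1, keepdims=True)) / network_ts.std(1, keepdims=True)
    r = cz @ nz.T / cerebellum_ts.shape[1]   # (n_voxels, n_networks) correlations
    return r.argmax(axis=1)                  # winner-take-all label per voxel

# Demo with random stand-in data (real use would pass preprocessed scans).
# The fraction of voxels per label gives the kind of breakdown quoted above.
labels = assign_networks(np.random.randn(5000, 600), np.random.randn(10, 600))
fractions = np.bincount(labels, minlength=10) / labels.size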

"The executive function networks are way overrepresented in the cerebellum," Marek said. "Our whole understanding of the cerebellum needs to shift away from it being involved in motor control to it being more involved in general control of higher-level cognition."

The researchers measured the timing of brain activity and found that the cerebellum was consistently the last step in neurologic circuits. Signals were received through sensory systems and processed in intermediate networks in the cerebral cortex before being sent to the cerebellum. There, the researchers surmise, the signals undergo final quality checks before the output is sent back to the cerebral cortex for implementation.

"If you think of an assembly line, the cerebellum is the person at the end who inspects the car and says, 'This one is good; we'll sell it,' or 'This one has a dent; we have to go back and repair it,'" Dosenbach said. "It's where all your thoughts and actions get refined and quality controlled."

People with damage to their cerebellum are known to become uncoordinated, with an unsteady gait, slurred speech and difficulty with fine motor tasks such as eating. The cerebellum also is quite sensitive to alcohol, which is one of the reasons why people who have had too many drinks stumble around. But the new data may help explain why someone who is inebriated also shows poor judgment. Just as a person staggers drunkenly because his or her compromised cerebellum is unable to perform the customary quality checks on motor function, alcohol-fueled bad decisions might also reflect a breakdown of quality control over executive functions.

Marek also performed individualized network analyses on the 10 people in the data set. He found that while brain functions are arranged in roughly the same pattern in everyone's cerebellum, there is enough individual variation to distinguish brain scans performed on any two participants. The researchers are now investigating whether such individual differences in cerebellar networks correlate with intelligence, behavior, personality traits such as adaptability, or psychiatric conditions.

"Many people who are looking at links between brain function and behavior just ignore the cerebellum," Dosenbach said. "They slice off that data and throw it away, because they don't know what to do with it. But there are four times as many neurons in the cerebellum as in the cerebral cortex, so if you're leaving out the cerebellum, you've already shot yourself in the foot before you started. The promise of imaging the whole human brain at once is to understand how it all works together. You can't see how the whole circuit works together when you're missing a major piece of it."

Credit: 
Washington University School of Medicine

Acute kidney injury linked to higher risk of dementia

Highlights

Patients with acute kidney injury had more than a 3-fold higher risk of developing dementia compared with those without acute kidney injury during a median follow-up time of 5.8 years.

Results from the study will be presented at ASN Kidney Week 2018 October 23-October 28 at the San Diego Convention Center.

San Diego, CA (October 25, 2018) -- Acute kidney injury (AKI) is linked with a higher risk of developing dementia, according to a study that will be presented at ASN Kidney Week 2018 October 23-October 28 at the San Diego Convention Center.

AKI, an abrupt decline in kidney function, often arises after major surgeries or severe infections, and it is associated with long-term health problems including the development of chronic kidney disease and cardiovascular disease. AKI is also associated with acute neurologic complications, but the long-term consequences of AKI on brain health are unclear.

To study the issue, Jessica Kendrick, MD (University of Colorado School of Medicine) and her colleagues analyzed information on 2082 patients without a prior history of dementia from an integrated health care delivery system in Utah. Patients had a hospital admission between 1999 and 2009.

During a median follow-up time of 5.8 years, 97 patients developed dementia. More patients with AKI developed dementia (7.0% vs. 2.3%), and patients with AKI had more than a 3-fold higher risk of developing dementia compared with those without AKI.
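
The "more than 3-fold" figure is consistent with a simple unadjusted risk ratio computed from the two percentages (the study's own estimate may be adjusted for other factors):

# Simple unadjusted risk ratio from the quoted percentages
risk_aki, risk_no_aki = 0.070, 0.023
print(f"relative risk ≈ {risk_aki / risk_no_aki:.1f}")  # ≈ 3.0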

"AKI, even with complete renal recovery, is associated with an increased risk of dementia," said Dr. Kendrick. "Further studies are needed to determine the long-term cognitive consequences of AKI."
 

Credit: 
American Society of Nephrology

New species of the 'first bird' Archaeopteryx uncovered

image: Reconstruction of Archaeopteryx albersdoerferi by Zhao Chuang under supervision of Martin Kundrát. Reprinted with permission from Zhao Chuang and PNSO, original copyright 2017.

Image: 
Zhao Chuang

A new species of the famous 'first bird', Archaeopteryx, supporting its status as the transitional fossil between birds and dinosaurs, has been published in the journal Historical Biology.

Contrary to some previous studies, Archaeopteryx can now be conclusively shown to be a primitive bird antecedent and an evolutionary intermediate between birds and dinosaurs, one that possessed teeth and clawed fingers.

The new study also used state-of-the-art 3D X-ray analyses (synchrotron microtomography) to virtually dissect the fossil and identify skeletal adaptations that would have helped Archaeopteryx albersdoerferi to fly.

The fossil was made available for study by scientists who, after more than seven years of research, recognised it as a new species -- Archaeopteryx albersdoerferi.

"Archaeopteryx albersdoerferi is one of the most important specimens of Archaeopteryx because it is around 400,000 years younger than any of the others found so far," noted lead author Martin Kundrát from the University of Pavol Jozef Šafárik, Slovakia.

"This is the first time that numerous bones and teeth of Archaeopteryx were viewed from all aspects including exposure of their inner structure. The use of synchrotron microtomography was the only way to study the specimen as it is heavily compressed with many fragmented bones partly or completely hidden in limestone", Kundrát continued.

"Geochemical analysis of the rock encasing the bones implies this specimen came, unlike others, from the younger Mörnsheim Formation", said Dr John Nudds from Manchester University, UK.

"Our analysis has shown that Archaeopteryx albersdoerferi shares more features in common with modern birds than their dinosaurian ancestors" said Professor Per Ahlberg of Uppsala University in Sweden.

These traits suggest that Archaeopteryx albersdoerferi may have possessed enhanced flying ability relative to geologically older species of Archaeopteryx.

The most remarkable included thin air-filled bones, increased area for attachment of flight muscles on the wishbone, and a reinforced configuration of bones in the wrist and hand. Archaeopteryx albersdoerferi also had fused bones in the skull and fewer teeth than other species of these 'first birds'.

"Significantly, however, when we examined the evolutionary relationships of various species of Archaeopteryx we found that its flight-related characteristics had appeared separately from those of more advanced bird-line dinosaurs, implying that flying lifestyles have developed more than once" said Dr Benjamin Kear of Uppsala University.

"The specimen exhibits a mosaic accumulation of flight-supporting morphologies before it reached adulthood. Whether this trend is applicable for other basal birds, it will need to be studied when more virtual data are available," added Kundrát.

Ultimately, these findings have uncovered a possible evolutionary mechanism through which flight-supporting characteristics developed during the evolution of dinosaurs, with Archaeopteryx in particular having independently acquired increasingly bird-like traits over time.

The 150-million-year-old fossils of Archaeopteryx have been known since 1861, with 12 specimens recovered to date. The fossil in this study, the 8th, known as the Daiting Specimen, is one of the most mysterious.

It was reportedly discovered in a quarry near Daiting in southern Germany in 1990 but had been held in private ownership until its purchase by palaeontologist Raimund Albersdoerfer in 2009.

Credit: 
Taylor & Francis Group

Wearable tech becomes top fitness trend for 2019, says survey of health and fitness professionals

October 25, 2018 - Fitness trackers, smart watches, and other wearable technology are the number one fitness trend for 2019, according to an annual survey of health and fitness professionals published in the November issue of ACSM's Health & Fitness Journal®, an official journal of the American College of Sports Medicine (ACSM). The journal is published in the Lippincott portfolio by Wolters Kluwer.

Other notable trends in the 13th annual survey include the continued rise of high-intensity interval training (HIIT), increased interest in hiring certified fitness professionals, and growing interest in workplace health and wellness programs, according to the report by Walter R. Thompson, PhD, FACSM, of Georgia State University, Atlanta, and Immediate Past-President of ACSM. He writes, "The survey was designed to confirm or to introduce new trends (not fads) that have a perceived positive impact on the industry according to the international respondents."

Tracking Trends in the Health Fitness Industry - What's Hot, What's Not

Conducted each year since 2007, the 13th annual survey included responses from more than 2,000 health fitness professionals from around the world. Respondents represented all sectors of the industry: commercial, clinical, community, and corporate. 

Highlights of the 2019 annual survey of health and fitness trends included: 

Wearable technology returned to the top-ranked position it had occupied for two consecutive years, before dropping to number three in last year's survey. The return of wearables to number one "may be the result of manufacturers correcting some monitoring inaccuracies of the past," according to Dr. Thompson. 
Group training, defined as classes of more than five participants, was rated number two for the second year in a row. This item was recently revised to distinguish it from small group personal training, which was ranked number 19. 
Rounding out the top five were high-intensity interval training (HIIT), referring to short bouts of high-intensity exercise followed by a short rest period; fitness programs for older adults, which has made a strong return to the top 10 in recent surveys; and body weight training, which uses the person's own body weight to provide resistance. 
There's a growing emphasis on employing certified fitness professionals - certified by ACSM or other nationally accredited organizations. A revised item specifying professional certification ranked sixth in the 2019 survey. 
An expanded item on worksite health promotion and workplace well-being made its first appearance in the top 20, scoring at number 15. 
Trends dropping out of the top 20 since the previous survey were circuit weight training, sport-specific training, and core training. 

The November issue of ACSM's Health & Fitness Journal features articles and columns dedicated to several of the top trends in the 2019 fitness survey, including wearable technology and fitness apps, HIIT, body weight training, and yoga. The special issue also presents four commentaries on the survey findings, authored by internationally recognized experts. 

Dr. Thompson emphasizes the value of the annual survey in appreciating the differences between trends and fads in the fitness industry - helping to make important investment decisions and programming decisions for future growth and development. The author concludes, "While no one can accurately predict the future of any industry, this survey helps to track trends that can assist owners, operators, program directors, and health fitness professionals with making important business decisions." 

Credit: 
Wolters Kluwer Health

Local hormone production is root of issue for plant development

image: A study in Developmental Cell shows the importance of the interplay between local production and transport of the key hormone auxin to plant development.

Image: 
Courtesy of Developmental Cell; photo by Javier Brumos

Plant roots rely on local production of a key hormone that controls many aspects of development and response to environmental changes, according to new research from North Carolina State University.

The study sheds light on the importance of when and where the plant hormone auxin is produced and explores the interplay between auxin synthesis and transport that moves auxin throughout the plant.

Specifically, the research shows that local auxin production is required to keep plants healthy. That is, roots will degenerate if auxin is not produced in the root, or flowers will be sterile if auxin is not made at the right place and the right time in the flower, despite the fact that the hormone can be transported throughout the plant.

"We knew how auxin was transported within the plant and how plants respond to auxin, but surprisingly we didn't know until recently how and where auxin is produced by the plant," said Javier Brumos, an NC State post-doctoral researcher and first author of a paper describing the research.

Early hypotheses about auxin transport posited that the hormone was produced in young leaves and shoot tips and worked its way "down" the plant into stems and roots to promote plant development and growth. Later research revealed the molecular machinery that makes auxin and demonstrated local auxin production and movement between cells in roots as well as shoots.

"That led to the question of how important local production of auxin in the root - that is, auxin that moves from root cell to root cell rather than being transported from the shoot - is to plant development," Brumos said.

Using a variety of elegant experiments on Arabidopsis, or mustard weed, the NC State researchers showed the importance of local auxin production in plant roots.

First, the researchers created hybrid plants by grafting healthy, auxin-producing shoots onto roots that didn't produce auxin to see whether auxin could be transported from shoot to root to keep roots healthy. Roots degenerated when no locally produced auxin was available. The researchers also performed the opposite experiment, grafting auxin-less shoots onto auxin-producing roots. Those roots were viable and healthy.

Next, the researchers placed a paste of auxin on shoots of auxin-deficient seedlings to see whether auxin could be transported to roots to maintain healthy roots by activating stem cells located in root tip tissue called the meristem. While this auxin paste helped create some new roots, those roots soon degenerated.

"It turns out that plants may choose to make auxin in the place where they want it to accumulate rather than always bring auxin from other parts of the plant," said Anna Stepanova, an assistant professor in the Department of Plant and Microbial Biology at NC State and corresponding author of the paper. "Locally made auxin is necessary for keeping the stem cell niche alive. If there is no local auxin in the root, the plant loses its root meristematic activity and the root degenerates."

The researchers also blocked a plant's natural ability to produce auxin with specific chemical inhibitors and instead inserted bacterial genes known to make auxin. The bacterial genes could then be activated to produce auxin in either roots or shoots at will by applying heat. Again, plants with root-made auxin had viable roots, while plants reliant on auxin transport from shoots had faulty roots.

The paper also more closely examined auxin production and transport outside and within roots, showing that externally applied auxin could save auxin-starved meristems if auxin transport systems in roots were functioning. But if auxin transport was blocked in root cells, the health of the roots became reliant on local auxin production: If the hormone was not made in the right place at the right time, the roots withered and died.

The researchers also examined the role of local auxin in flowers and leaves and again saw that the sites of the biosynthesis of this hormone are critical, in this case, for full fertility and normal leaf development.

"The take-home message of this study is that local auxin production and transport work together to keep plants healthy. Through this cooperative action, plants can maintain robust stem cell niches and thus can survive and grow even in harsh conditions," said Jose Alonso, an NC State professor of plant and microbial biology and an auxin expert, who co-conceived the study along with Brumos and Stepanova.

Credit: 
North Carolina State University

Certain physical disabilities may affect outcomes in kidney transplant recipients

Highlights

Compared with kidney transplant recipients who did not report a disability, recipients with a visual disability were at higher risk of organ failure and recipients with a walking disability were at higher risk of early death.

Results from the study will be presented at ASN Kidney Week 2018 October 23-October 28 at the San Diego Convention Center.

San Diego, CA (October 25, 2018) -- Visual and walking disabilities in kidney transplant recipients were linked with poor outcomes in a study that will be presented at ASN Kidney Week 2018 October 23-October 28 at the San Diego Convention Center.

It's unclear whether physical disabilities in transplant recipients affect the function of transplanted organs or patients' survival. To investigate, Alvin Thomas (Johns Hopkins) and his colleagues studied 500 kidney transplant recipients at two medical centers from 2013 to 2017, comparing recipients who reported no disability with recipients who self-reported hearing, visual, physical, or walking disabilities.

The prevalence of self-reported disability prior to kidney transplantation was 24%. Transplant recipients with a reported visual disability had a 3.4-times higher risk of failure of the transplanted organ compared with recipients without visual disability. Transplant recipients with reported walking disability had a 3.1-times higher risk of dying compared with recipients without walking disability. Investigators did not find evidence of better or worse outcomes associated with other disabilities. The findings suggest that kidney transplant recipients with visual or walking disabilities might benefit from additional supportive care and monitoring post-transplant.

"While data on physical functioning is routinely collected by the transplant registry in the United States, specifics on physical disabilities are not captured. Our work highlights the prevalence of specific physical disabilities in our two-center cohort and describes their post-transplant outcomes," said Thomas.

Credit: 
American Society of Nephrology

Neurology: Space travel alters the brain

Spending long periods in space not only leads to muscle atrophy and reductions in bone density, it also has lasting effects on the brain. However, little is known about how different tissues of the brain react to exposure to microgravity, and it remains unclear whether and to what extent the neuroanatomical changes so far observed persist following return to normal gravity. Initiated and guided by a team of neuroscientists (headed by Prof. Floris L. Wuyts) based at the University of Antwerp, and in close cooperation with Russian colleagues, LMU neurologist Professor Peter zu Eulenburg has completed the first long-term study in Russian cosmonauts. In this study, which appears in the New England Journal of Medicine, they show that differential changes in the three main tissue volumes of the brain remain detectable for at least half a year after the end of their last mission.

The study was carried out on ten cosmonauts, each of whom had spent an average of 189 days on board the International Space Station (ISS). The authors used magnetic resonance tomography (MRT) to image the brains of the subjects both before and shortly after the conclusion of their long-term missions. In addition, seven members of the cohort were re-examined seven months after their return from space. "This is actually the first study in which it has been possible to objectively quantify changes in brain structures following a space mission also including an extended follow-up period," zu Eulenburg points out.

The MRT scans performed in the days after the return to Earth revealed that the volume of the grey matter (the part of the cerebral cortex that mainly consists of the cell bodies of the neurons) was reduced compared to the baseline measurement before launch. In the follow-up scans done 7 months later, this effect was partly reversed, but nevertheless still detectable. In contrast, the volume of the cerebrospinal fluid, which fills the inner and outer cavities of the brain, increased within the cortex during long-term exposure to microgravity. Moreover, this process was also observable in the outside spaces that cover the brain after the return to Earth, while the cerebrospinal fluid spaces within returned to near normal size. The white matter tissue volume (those parts of the brain that are primarily made up of nerve fibers) appeared to be unchanged upon investigation immediately after landing. However, the subsequent examination 6 months later showed a widespread reduction in volume relative to both earlier measurements. In this case, the researchers postulate that over the course of a longer stint in space, the volume of the white matter may slowly be replaced by an influx of cerebrospinal fluid. Upon return to Earth, this process is then gradually reversed, which then results in a relative reduction of white matter volume.

"Taken together, our results point to prolonged changes in the pattern of cerebrospinal fluid circulation over a period of at least seven months following the return to Earth," says zu Eulenburg. "However, whether or not the extensive alterations shown in the grey and the white matter lead to any changes in cognition remains unclear at present," he adds. So far the only clinical indication for detrimental effects is a reduction in visual acuity that was demonstrated in several long-term space travelers. These changes may very well be attributable to the increased pressure exerted by the cerebrospinal fluid on the retina and the optic nerve. The governing cause for the widespread structural changes in the brain following long spaceflights might lie in minimal pressure changes within the body's various water columns under conditions of microgravity that have a cumulative effect over time. According to the authors, to minimize the risks associated with long-term missions and to characterize any clinical significance of their structural findings, further studies using a wider range of diagnostic methods are deemed essential.

Credit: 
Ludwig-Maximilians-Universität München

Experts call for health system change to tackle the challenge of multimorbidity in the NHS

The number of people with multiple long-term conditions, known as multimorbidity, is rising internationally, putting increased pressure on health care systems, including the NHS. Researchers from the 3D Study - the largest ever trial of a person-centred approach to caring for patients with multimorbidity in primary care - at the Universities of Bristol, Dundee, Manchester and Glasgow, are hosting a conference today [Thursday 25 October] with the Royal College of General Practitioners to discuss the challenges facing general practice and how the health care system needs to respond.

The researchers have also published a report, launched at the conference today, which makes detailed recommendations to policy makers about what that system change should look like.

People with multimorbidity - two or more long-term health conditions, such as diabetes, heart disease and dementia - are more likely to experience poor quality of life and poor physical and mental health. They use both general practice and hospital services far more often than the general population. However, healthcare systems around the world are largely designed to manage individual diseases or episodes of illness rather than patients with complex multiple health care needs.

Professor Chris Salisbury, a GP and multimorbidity research lead from the Centre for Academic Primary Care who will be speaking at the conference today, argues that a new approach is needed. "Health services, including the NHS, need to adapt to address this challenge", he said. "We need patient-centred care, with more emphasis on generalist rather than specialist care and better integration between general practice, hospitals and social care. There will need to be a new relationship between patients and health care professionals, which will engage patients more in managing their health conditions themselves."

In the report aimed at policy makers, published at the conference today, Professor Salisbury and co-authors, Professors Bruce Guthrie (University of Dundee), Peter Bower (University of Manchester) and Stewart Mercer (University of Glasgow), said: "People with multimorbidity account for a disproportionately high number of consultations in general practice and their treatment is expensive because they are likely to be prescribed numerous drugs. People with multimorbidity also have high rates of emergency hospital admissions and attendance at out-patient appointments. The economic impact of increasing multimorbidity in the population is therefore substantial. We need to consider new ways of providing health care which more effectively support self-care, reduce inefficiencies and reduce reliance on expensive hospital care."

The report makes a series of policy recommendations including:

Promoting patient-centred approaches to the management of multimorbidity in primary care, which requires training, support and changes in incentives.

Developing and evaluating new approaches to managing patients with multimorbidity within hospitals.

Exploring new models of integration of primary and community care, hospital care and social care which enable better co-ordination and support for people with multimorbidity, which is likely to require substantial changes in commissioning and funding mechanisms, and a rebalancing of resources.

Changes to professional education, training and regulation to prepare professionals to manage patients with multimorbidity in new and more integrated systems.

Engaging and enabling people to manage their own health and long-term conditions, requiring co-ordinated action across many aspects of government and public life.

More research to understand and improve care for multimorbidity.

Credit: 
University of Bristol

Mucus, cough and chronic lung disease: New discoveries

image: This is Gunnar C. Hansson, Professor of Medical Biochemistry at Sahlgrenska Academy, University of Gothenburg.

Image: 
Johan Wingborg

As a cold ends, a severe mucus cough starts. Sound familiar? Two studies now give explanations: First, crucial mechanisms of the mucus in both diseased and healthy airways; second, what happens in such chronic lung diseases as cystic fibrosis and chronic obstructive pulmonary disease (COPD).

"This is new knowledge, but there are no instant cures to be found yet. But to make progress, it's important to understand how the mucus clearance works," says Gunnar C. Hansson, Professor of Medical Biochemistry at Sahlgrenska Academy, University of Gothenburg.

The studies, published in European Respiratory Journal and Journal of Clinical Investigation Insight, describe how the normal lung is kept clean by long mucus bundles that are formed in glands and then moved along the airways, thus "sweeping" and cleaning the surface.

When particles or irritant substances are inhaled, they temporarily bring the mucus bundles to a halt. Meanwhile, the cilia collect the debris onto the bundles, and these are then efficiently cleared out of the lungs when the bundles start moving again.

In a lung infection or chronic lung disease, the lung mucus is transformed into a virtually immobile mucus layer against which the cilia are powerless. This layer keeps bacteria away from the epithelial cells, thereby protecting them.

Toward the end of a cold, the mucus layer needs to be coughed up, which explains why a cold ends with mucus coughing. But in certain types of chronic lung disease, such as cystic fibrosis or COPD, the mucus remains on the airway surface. There, although the mucus layer essentially has a protective role, it accumulates bacteria that will slowly damage the lungs.

The latest studies also show that, in composition and function, the immobile mucus layer strikingly resembles the protective mucus layer in the large intestine. This too has been discovered and described by the Mucin Biology Group under Gunnar C. Hansson's leadership.

In the large intestine and lung alike, these mucus layers keep bacteria at bay, minimizing contact with the body. The difference is that in the lung, the mucus tethering is transient and the mucus is cleared away once the infection is under control.

One way of impeding the formation of a tethered mucus layer in chronic lung disease is to use a common medication, an inhalation spray (Atrovent), to keep the mucus bundles moving.

"Our observations explain some of the beneficial effects of this inhaled drug in the treatment of COPD," says Anna Ermund, a researcher in the Group and first author of the article in European Respiratory Journal.

The scientists behind the studies believe that their findings on the important cleaning function of mucus, and on how tethered mucus damages the lung, will pave the way for new ways of treating both acute and chronic lung diseases.

Credit: 
University of Gothenburg

Chemists disproved the universal nature of the mercury test

image: Chemists confirmed that the test required additional control experiments to verify its results. The study might lead to the reconsideration of existing experimental data and to a better understanding of catalysis mechanisms in several chemical reactions.

Image: 
Allen Dressen

The mercury test for catalysts, which has been used and considered universal for 100 years, turned out to be ambiguous. This conclusion was reached by a group of scientists including a RUDN chemist. The group confirmed that the test requires additional control experiments to verify its results. The study might lead to the reconsideration of existing experimental data and to a better understanding of catalysis mechanisms in several chemical reactions. The scientists' article was published in the journal Organometallics.

Catalysts speed up chemical reactions by dozens if not hundreds of times. Their mechanisms of activity may be homogeneous or heterogeneous. In the first case the catalyst takes part in the reaction as separate molecules or ions; in the second, as a single solid body, a colloidal solution, or a nanocluster. To determine the mechanism of activity of a given catalyst, scientists perform dedicated control experiments, the most popular of which is the mercury test. It is based on the ability of mercury to combine with metals or to be adsorbed on their surface. Depending on the effect of mercury on the reaction, one may conclude whether the catalyst is homogeneous or heterogeneous. A RUDN chemist, together with his Russian colleagues, has disproved the universal nature of this test.

The authors studied the influence of mercury on the chemical structure of palladacycles, catalysts belonging to the homogeneous class, and found that mercury formed compounds with them. Previously, scientists believed that mercury reacted only with heterogeneous catalysts and had no influence on homogeneous ones. Palladacycles, however, do react with mercury, and therefore tests based on it can produce incorrect results.

For the main test condition to apply, mercury should not react with a metal catalyst. The phenomenon discovered by Russian chemists proves this condition false for palladacycles. The authors believe the mercury test may only be used together with additional experiments.

"The reaction between a catalyst and mercury was considered a proof of the catalysis process following the heterogeneous principle. We demonstrated that the reaction between mercury and palladacycles leads to the formation of organic chlorides. This disproves the existing belief. We've established that this process depends on the structure of palladacycles, conditions of the reaction, the palladacycles/mercury ratio, and the reaction environment. The results of our work show that for mercury tests to be used in chemistry in the future, the reaction between mercury and the catalyst should be prevented, and additional control experiments should be carried out," says Viktor Khrustalyov, a co-author of the work, PhD in chemistry, and head of the department of inorganic chemistry at RUDN.

The study was carried out by a team of scientists representing RUDN, MSU, and Nesmeyanov Institute of Organoelement Compounds of the Russian Academy of Sciences.

Credit: 
RUDN University

Choice architecture for architecture choices

New research, led by the University of Portsmouth, could help decision makers make more effective choices when designing social housing initiatives.

Using the concept of 'choice architecture' the researchers have developed a new tool for decision makers to help them make the best decisions when tackling problems presenting a high number of alternatives or criteria.

Choice architecture is the design of different ways in which choices can be presented to consumers, and the impact of that presentation on consumer decision-making.

In the field of social housing, for example, buildings must have low construction and operation costs; they have to be sustainable from an economic, energy and social point of view; the services that they offer have to take into account the needs of the inhabitants; and they should be easily usable.

However, the decision maker also has to consider how to ensure a sufficient level of profitability for a social housing operation to attract private investment, and how to evaluate the effectiveness of the building and the satisfaction of occupants.

In order to help the decision maker handle all this information, the researchers have developed a Multicriteria Decision Aiding methodology that structures, processes and aggregates the information collected in a simple and understandable way. The new method is based on two current methods used for organising and analysing complex decisions - the Analytical Hierarchy Process (AHP) and Non-Additive Robust Ordinal Regression (NAROR).

The new method assigns a value to performance on each criterion (using AHP) and determines the importance of, and interactions among, the criteria (using NAROR), taking into account all the possible values of the preference parameters compatible with the preference information supplied by the decision maker.
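
To illustrate the AHP component only (a minimal sketch; the pairwise judgments below are invented, and the NAROR step, which handles interacting criteria, is too involved for a short example), criteria weights can be derived from a pairwise-comparison matrix via its principal eigenvector:

# Illustrative AHP step: derive criteria weights from a pairwise-comparison
# matrix. The judgments here are made up for illustration.
import numpy as np

# A[i, j] = judged importance of criterion i relative to criterion j
# (Saaty's 1-9 scale; reciprocal by construction)
A = np.array([
    [1.0, 3.0, 5.0],    # e.g. construction cost vs energy use vs usability
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()   # normalized priority weights
print(weights)  # ≈ [0.65, 0.23, 0.12]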

The lead author of the study, Professor Alessio Ishizaka from the University of Portsmouth, said: "Choice architecture influences decision making by simplifying the presentation of options, by evoking particular associations, or by making one option more salient or easier to choose than the alternatives. This transparent, fair and documented outcome provides decision makers with an efficient and fully explainable decision."

The Multicriteria Decision Aiding methodology was tested on 21 Social Housing initiatives in the Piedmont region of Italy.

Professor Ishizaka added: "This was a replicate of a previous decision with a simple point system. By working closely with decision makers, they found our method much more precise, well-structured and easy to understand for the stakeholders, and will consider it for use in future decision processes."

Credit: 
University of Portsmouth

Mathematicians propose new hunting model to save rhinos and whales from extinction

Mathematicians have created a new model - of a variety commonly found in the world of finance - to show how to harvest a species at an optimal rate, while making sure that the animals do not get wiped out by chance.

According to the theoretical study, hunting thresholds can be calculated for individual populations and species. The key to this is how quickly a population grows naturally, and how much it competes for resources.

The researchers say previous models that explore the impact of extracting wild species - such as whales, bison, rhinos, birds and fish - do not take into account random environmental factors that could threaten animals at the same time.

Published in the Journal of Mathematical Biology, the research was conducted by academics at Tufts University, Wayne State University, City, University of London and University of Hong Kong.

Dr Sergiu Ungureanu, a Lecturer in Behavioural Economics at City, said: "The problem we found was that if numbers of a species become too small, not only does their usefulness to humans decrease, but there is also a chance that the entire species goes extinct.

"This is because random fluctuations in the environment will affect the numbers of the species, on top of what people do.

"In the past, the problem was tackled either without taking into account environmental chance, or by ignoring the possibility of extinction in the distant future.

"We removed these two important limitations and we proved a simple result - there is always a population threshold under which there should be no harvesting."

A 'bang-bang' approach

The researchers conclude that a "bang-bang" approach is the most effective strategy. This is where hunting is done at one of two fixed rates, like an on-off switch.

It is argued that if humans can control animal populations better, harvesting can be more efficient and the risk of the population going extinct is lower.

According to the study, populations that grow faster should be left to grow to a larger size before hunting starts again at the pre-designated hunting rate.

The researchers say that a faster growth rate means a higher, more efficient and sustainable rate of extraction can be maintained.

When a population becomes endangered and it has a lower growth rate, the study shows, it should be left to grow bigger to avoid the risk that it will go extinct through other environmental threats.
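
A toy simulation can make the policy concrete. This is a minimal sketch, not the paper's model; the dynamics and every parameter value are invented for illustration. A population following stochastic logistic growth is harvested at a fixed rate whenever it sits above a threshold, and not at all below it:

# Toy illustration (not the paper's model): stochastic logistic population
# with a bang-bang harvesting rule -- harvest at fixed rate h above a
# threshold T, not at all below it. All parameter values are made up.
import random

def simulate(x0=0.5, r=1.0, K=1.0, T=0.4, h=0.6, sigma=0.2, dt=0.01, steps=10_000):
    """Euler-Maruyama simulation of dX = (r X (1 - X/K) - u(X) X) dt + sigma X dW."""
    x, harvested = x0, 0.0
    for _ in range(steps):
        u = h if x > T else 0.0            # bang-bang control: on/off switch
        harvested += u * x * dt            # yield extracted this step
        dw = random.gauss(0.0, dt ** 0.5)  # Brownian increment
        x += (r * x * (1 - x / K) - u * x) * dt + sigma * x * dw
        x = max(x, 0.0)                    # population cannot go negative
    return x, harvested

final_pop, total_yield = simulate()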

Dr Ungureanu said: "The goal is to help ecologists understand the problem and its parameters, the variables of interest, and to give them tools to find the exact numbers they need for the optimal extraction strategy for any particular application."

The researchers say they conducted the study because many wild species are at risk of being harvested to both local and global extinction, including whales, elephant seals, bison, rhinos, endangered birds, mammals and many species of fish.

They explain this risk of extinction is increased because animal populations fluctuate randomly in time, due to environmental factors that cannot be predicted.

As argued in the paper, the issue of chance is not taken into account in previous studies and it can lead to an overestimation of the ability of species to recover, potentially leading to extinction.

However, the authors say a rate of harvesting that is too low also causes problems for a species as higher numbers will lead to higher competition for limited resources, which could also have a negative effect on their habitat.

Credit: 
City St George’s, University of London

At least 57 negative impacts from cyber-attacks

The researchers, from Kent's School of Computing and the Department of Computer Science at the University of Oxford, set out to define and codify the different ways in which the various cyber-incidents being witnessed today can have negative outcomes.

They also considered how these outcomes, or harms, can spread as time passes. The hope is that this will help to improve the understanding of the multiple harms which cyber-attacks can have, for the public, government, and other academic disciplines.

Overall the researchers identified five key themes under which the impact - referred to in the article as a cyber-harm - from a cyber-attack can be classified:

Physical/Digital

Economic

Psychological

Reputational

Social/societal

Each category contains specific outcomes that underline the serious impact cyber-attacks can have. For example, under the Physical/Digital category there is the loss of life or damage to infrastructure, while the Economic category lists impacts such as a fall in stock price, regulatory fines or reduced profits.

In the Psychological theme, impacts such as individuals being left depressed, embarrassed, shamed or confused are listed, while Reputational impacts can include a loss of key staff, damaged relationships with customers and intense media scrutiny.

Finally, on a Social/Societal level, there is a risk of disruption to daily life such as an impact on key services, a negative perception of technology or a drop in internal morale in organisations affected by a high-level incident.

The full list of cyber harms can be viewed online.
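
For an organisation wanting to apply the taxonomy to its own incident records, the five themes map naturally onto a simple lookup structure. This is a hypothetical encoding with example entries drawn from the categories above; the full list of 57 harms is in the article:

# Hypothetical encoding of the five-theme taxonomy (example entries only)
CYBER_HARM_TAXONOMY = {
    "Physical/Digital": ["loss of life", "damage to infrastructure"],
    "Economic": ["fall in stock price", "regulatory fines", "reduced profits"],
    "Psychological": ["depression", "embarrassment", "shame", "confusion"],
    "Reputational": ["loss of key staff", "damaged customer relationships",
                     "intense media scrutiny"],
    "Social/Societal": ["disruption to daily life",
                        "negative perception of technology",
                        "drop in internal morale"],
}

def classify(harm: str) -> str | None:
    """Return the theme under which a recorded harm falls, if listed."""
    for theme, harms in CYBER_HARM_TAXONOMY.items():
        if harm in harms:
            return theme
    return None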

The researchers point to high-profile attacks against Sony, JP Morgan and the online dating website Ashley Madison as examples where a wide variety of negative outcomes were experienced, from reputational loss and financial damage to shame and embarrassment for individuals.

They say these incidents underline why a taxonomy of impacts and harms is so important for businesses. Many successful cyber-attacks have been traced to exploits of well-known vulnerabilities that had not been dealt with appropriately because of a lack of action by firms who did not appreciate the ways in which they could be affected by a cyber-attack.

By providing a detailed breakdown of the many different ways a cyber-attack can impact a business and third parties, the taxonomy gives board members and other senior staff a better understanding of both direct and indirect harms from cyber-attacks when considering the threats their organisation faces. This applies equally to other organisations, and even to governments or those who manage critical national infrastructure.

Commenting on the article, Dr Jason R.C. Nurse from the School of Computing said: 'It's been well understood that cyber-attacks can have numerous negative impacts. However, this is the first time there has been a detailed investigation into what these impacts are, how varied they can be, and how they can propagate over time. This base figure of 57 underlines how damaging cyber-incidents can be and we hope it can help to better understand how a business, individual or even nation is affected by a cyber-attack. This is going to be even more relevant as everything and everyone becomes connected and the Internet of Things is fully realised.'

Credit: 
University of Kent