Culture

Stem cell researchers develop promising technique to generate new muscle cells in lab

image: UTHealth stem cell researchers, from the left, are Nadine Matthias, DVM; Jianbo Wu, Ph.D.; Radbod Darabi, M.D., Ph.D.; and Jose L. Ortiz-Vitali.

Image: 
Rob Cahill, UTHealth

To help patients with muscle disorders, scientists at The University of Texas Health Science Center at Houston (UTHealth) have engineered a new stem cell line to study the conversion of stem cells into muscle. Findings appeared in Cell Reports.

"We have also developed a more efficient strategy to make muscles from human stem cells. Scientists can use these cells for disease modeling, gene correction, and potential cell therapy," said Radbod Darabi, MD, PhD, the study's senior author and an assistant professor in the Center for Stem Cells & Regenerative Medicine at McGovern Medical School at UTHealth.

Muscle disorders such as muscular dystrophy cause muscles to weaken and deteriorate, and they affect more than 50,000 people in the United States. Symptoms include difficulty walking and standing. In severe cases, the disorders might involve cardiac and respiratory muscles and lead to death. There is no cure.

Darabi's team engineered a novel human stem cell line for skeletal muscle. To ensure the purity of the muscle stem cells, they tagged muscle genes (PAX7, MYF5) with two fluorescent proteins. "In order to improve the formation of the muscle from stem cells, we screened several bioactive compounds. We were also able to observe muscle stem cell activity in great detail using color tags," he said.

In the lab housed in the Brown Foundation Institute of Molecular Medicine for the Prevention of Human Diseases at UTHealth, the team used a gene-editing method called CRISPR/Cas9 to add the fluorescent tags to the genes.

The stem cells were generated from a patient's skin cells and used to generate muscle. "Our current research provides a step-by-step roadmap to make muscle stem cells from these cells," Darabi said.

The team's "approach also allowed induction and purification of skeletal myogenic progenitors in a much shorter time course (2 weeks) with considerable in vitro and in vivo myogenic potential (myofiber engraftment and satellite cell seeding)," the authors wrote.

The modified stem cells produced promising results in a culture of human tissue, as well as in a mouse model of Duchenne muscular dystrophy. "In a side-by-side comparison with previous strategies, our strategy allowed faster and more efficient generation of muscle stem cells with superior engraftment in mice," Darabi said.

Darabi believes these muscle stem cells will initially be used by researchers to study the pathophysiology of muscular dystrophies, create disease models that scientists can use to test promising drugs, or evaluate gene correction efficiency.

Human bodies are constantly replacing skeletal muscle cells, but muscle disorders make it difficult to replenish muscle due to the failure and exhaustion of muscle stem cells. It is Darabi's hope that the cells can one day be used as a form of stem cell therapy.

Darabi's UTHealth coauthors are Jianbo Wu, PhD (lead author); Nadine Matthias, DVM; Jonathan Lo; Jose L. Ortiz-Vitali; and Sidney Wang, PhD. Annie Shieh, PhD, of the State University of New York Medical School in Syracuse, also contributed to the research.

Credit: 
University of Texas Health Science Center at Houston

Blood test could lead to cystic fibrosis treatment tailored to each patient

Researchers at Stanley Manne Children's Research Institute at Ann & Robert H. Lurie Children's Hospital of Chicago, and colleagues, used a blood test and microarray technology to identify distinct molecular signatures in children with cystic fibrosis. These patterns of gene expression ultimately could help predict disease severity and treatment response, and lead to therapies tailored to each patient's precise biology. Findings were published in Physiological Genomics.

"Our findings pave the way to precision medicine for cystic fibrosis patients, eventually helping us match treatment to each patient's unique genomic pattern of disease," says lead author Hara Levy, MD, MMSc, from Manne Research Institute at Lurie Children's, who is an Associate Professor of Pediatrics at Northwestern University Feinberg School of Medicine. "Our study was the first to identify molecular signatures of cystic fibrosis from a blood test taken during a routine clinic visit, giving us a baseline. Greater understanding of these molecular signatures may lead to unique molecular markers that could help us intervene earlier to changes in a patient's inflammatory response to airway infection or pancreatic function, allowing us to provide more focused treatment. It would be a huge improvement over the one-size-fits-all treatment approach we currently have for patients with cystic fibrosis."

To identify baseline molecular signatures in cystic fibrosis, Dr. Levy's lab obtained genomic information from patients' blood samples using cutting-edge technologies such as Affymetrix arrays and Illumina MiSeq. The team then merged this genomic information with each individual's clinical history gathered from electronic medical records. They compared this snapshot of patient-specific data with healthy controls. Their study provides strong evidence for distinct molecular signatures in cystic fibrosis patients that correlate with clinical outcomes.

Cystic fibrosis is a progressive genetic disease that damages multiple organs, including the lungs and pancreas. Currently, the average predicted survival is 47 years. Although cystic fibrosis is caused by dysfunction of a single gene (CFTR) and treatment that targets CFTR mutations is available, the relationships between the abnormal gene product, development of inflammation and disease progression are not fully understood. This limits the ability to predict a patient's clinical course, provide individualized treatment and rapidly monitor treatment response.

For example, it is not clear why patients with cystic fibrosis are susceptible to chronic lung infections, since they are considered to have a functional immune system.

"We are now trying to discover why patients with cystic fibrosis become infected so easily," says Dr. Levy. "We are taking a closer look at the immune cells that make up many of the molecular signatures we found in cystic fibrosis."

More study is needed before precision medicine for cystic fibrosis reaches the clinic.

"With more research, a blood test to gather genomic specifics of each patient's disease might be available in the clinic within the next five years," says Dr. Levy. "Precision medicine will revolutionize care for cystic fibrosis patients."

Credit: 
Ann & Robert H. Lurie Children's Hospital of Chicago

More taxes on alcohol as a way to tackle obesity

We don’t often equate the kilojoules we drink in our glass of wine or pint of beer with the weight that accumulates around our middle. But our new study shows increasing the price of alcohol is the most value for money policy option to prevent obesity in Australia.

The study, released today, shows if we increase alcohol taxes by standardising them across different types of alcohol, overall alcohol consumption would go down. This would lead to substantial reductions in the kilojoules Australians consume each day.

Historic earthquakes test Indonesia's seismic hazard assessment

Using data gleaned from historical reports, researchers have now identified the sources of some of the most destructive Indonesian earthquakes in Java, Bali and Nusa Tenggara, and have used these data to independently test how well Indonesia's 2010 and 2017 seismic hazard assessments perform in predicting damaging ground motion.

The study published in the Bulletin of the Seismological Society of America concludes that the hazard assessments do well at predicting damaging ground motion in key Javanese cities, but that there is much more to learn about earthquake sources in the region.

Indonesia has made earthquake risk prediction a priority after the magnitude 9.1 Sumatra-Andaman megathrust earthquake and tsunami in 2004, but to date most of the research on regional earthquake hazard has concentrated on Sumatra, at the expense of studies further east in Java, said Jonathan Griffin of Geoscience Australia and colleagues.

More than 57 percent (about 140 million people) of Indonesia's population lives in Java, "on a relatively small island roughly the same area as New York State, or the North Island of New Zealand, that faces many natural hazards," explained Griffin. "Getting the hazard levels right to underpin building codes is therefore critically important for a huge number of people, particularly combined with rapid economic growth and urbanization in Indonesia."

Probabilistic seismic hazard assessment, or PSHA, is a method that calculates the likelihood that earthquake-related ground shaking will exceed a specified intensity at a given location within a given time span. PSHA calculations are based on data from earthquakes detected by seismographs, however, so some of the largest and most damaging earthquakes in a region may not be included in the assessments if they occurred before the region was instrumented.
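For readers who want a feel for the arithmetic behind PSHA, here is a minimal sketch assuming a simple Poisson occurrence model; the source names, annual rates and 50-year window are illustrative assumptions, not values from the study.

```python
import math

def exceedance_probability(annual_rate: float, years: float) -> float:
    """Poisson probability of at least one exceedance in `years`, given the
    annual rate at which shaking exceeds the target intensity at the site."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative annual rates (per year) at which each hypothetical source type
# produces shaking above the chosen intensity at a site of interest.
source_rates = {"megathrust": 0.002, "intraslab": 0.004, "shallow_crustal": 0.001}

total_rate = sum(source_rates.values())          # rates from independent sources add
p_50yr = exceedance_probability(total_rate, 50)  # a common 50-year design window
print(f"P(exceeding the target shaking level in 50 years) = {p_50yr:.1%}")
```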

Griffin and colleagues analyzed historical catalogs and accounts of earthquakes in Java, Bali and Nusa Tenggara from 1681 to 1877, to determine the source and shaking intensity for some of the region's historically destructive earthquakes.

The most significant tectonic feature of the Indonesian region is the collision and subduction of the Indian and Australian tectonic plates under the Sunda and Burma tectonic plates, generating megathrust earthquakes like the 2004 Sumatra quake. However, the researchers found little evidence for the occurrence of large earthquakes on the Java Megathrust fault during the historic time period they studied.

Instead, they concluded that large intraslab earthquakes (earthquakes that occur within a subducting tectonic plate) were responsible for some of Java's most damaging historic quakes, including a magnitude 7.4 earthquake near Jakarta in 1699 and a magnitude 7.8 quake in Central Java in 1867. The researchers also noted a cluster of large earthquakes occurring on the Flores Thrust to the east of Java in 1815, 1818 and 1820, as well as earthquakes on shallow crustal faults on Java that had not been mapped previously.

The Flores Thrust was responsible for two magnitude 6.9 earthquakes in Lombok in August 2018 that together killed more than 500 people.

Intraslab earthquakes are well-known in the region, including recent events such as the magnitude 7.6 quake in West Sumatra and the magnitude 7.0 quake in West Java that together killed more than 1000 people in 2009, said Griffin. "However, we were surprised that we didn't find conclusive evidence for a large megathrust event during the time period we examined."

Although it can be difficult to distinguish between megathrust and intraslab earthquakes using the data analyzed by the researchers, Griffin said that the data he and his team analyzed fit better with an intraslab model. "So while the intraslab models fit the data better for earthquakes in 1699 and 1867, we also rely on an absence of tsunami observations from coastal locations where ground shaking damage was reported to make the case that intraslab events were the more likely source," he added.

"The absence of strong historical evidence for a large megathrust earthquake south of Java over the past 350 years is a really interesting problem," said Griffin. Javanese and Dutch population centers "were historically on the north coast facing the calmer Java Sea, so we only have limited data from the less hospitable south coast. So it's quite likely that smaller megathrust earthquakes have occurred that aren't captured well in the historical records, but we'd be surprised if a really large earthquake went unnoticed."

Previous research suggests that the length of time between earthquakes on the Sumatran megathrust varies considerably, said Griffin. "So the lack of large megathrust events south of Java over the past few centuries could just imply that we have been in a period of relative inactivity, but not that large earthquakes occur less frequently here on average over the long-term."

Credit: 
Seismological Society of America

Immune cells sacrifice themselves to protect us from invading bacteria

The stomach flu can turn the strongest individual into a limp dishrag. Snot and slime are going wild in kindergartens. This year's flu season is arriving in full force.

You can have your fever-lowering drugs ready, but the flu is a strange thing. The same bacteria and viruses don't hit everyone with the same intensity.

Some people get really sick, others less so. Some folks don't get sick at all.

Why? What's really going on in the body when viruses and bacteria sneak in the back door and gear up for a full-on party?

Black Death as a lifelong partner

A lot of researchers are intrigued by that very question. One of them is Professor Egil Lien at the Norwegian University of Science and Technology's (NTNU) Centre of Molecular Inflammation Research (CEMIR). He splits his time between Norway and the University of Massachusetts in the United States and is one of Norway's foremost experts on how bacteria attack people.

Lien hasn't chosen the easiest bacteria to get to know. He has opted to focus his study on a really nasty one called Yersinia pestis, known as the culprit behind the Black Death outbreak. Yes, the bacterium that killed a third of Europe's population in the 1300s.

Lien has singled out precisely this bacterium as a lifelong research partner because it is a very deceptive one. Yersinia manipulates the immune system to hide from it, almost like a chameleon that changes color. It also kills cells that the body uses in the immune system.

For more effective medication

Now Lien, PhD candidate Pontus Ørning and other research colleagues have made a new discovery about what happens in the body when bacteria like Yersinia and Salmonella are at peak activity.

This information could come in handy, not only because Yersinia still exists and because antibiotic resistance is a growing problem, but because the new knowledge can be transferred to help understand other diseases.

This knowledge can also be used to make more effective medicines. Their finding has been published in the November 30 issue of Science magazine.

Sacrificing themselves as a warning

It turns out that immune cells are so dedicated to their jobs that they explode themselves to release proteins that fight invading bacteria and resulting damage. The explosion does not go unnoticed and warns the other immune cells. The immune cells sacrifice themselves to let the other cells know what is going on.

The process is so explosive that it is called pyroptosis.

What happens is that the immune cell forms small pores on its surface. This causes water to flow into the cell, which then swells until it bursts. When the cell explodes, it also releases substances that inhibit the invading bacteria from growing and that alert the other cells. Pretty effective, right?

Immune system backup kicks in

Sneaky Yersinia knows all this: it tries to camouflage itself and secretes an antidote. The NTNU researchers figured out that the body knows that Yersinia disguises itself.

At this point, the action starts to get really involved, but the article in Science explains that the immune cells initiate a backup mechanism that is triggered in a way not previously understood.

"These findings show us complicated mechanisms that occur in the immune system to counter infection, but they may also apply to other diseases. Some of the same phenomena can happen in diseases that cause inflammation in the body in general, such as food poisoning or Alzheimer's disease. So these findings can also increase our understanding of inflammation, which happens in most diseases as changes occur in the body," says Lien.

Credit: 
Norwegian University of Science and Technology

Study: Innovative model helps kids on autism spectrum avoid behavioral drugs in ER

ORLANDO, Fla. (December 11, 2018) - An innovative care model developed by Nemours Children's Hospital for children with autism spectrum disorders (ASD) in the emergency department (ED) reduces the use of medication administered to kids who are prone to stress and sensory overload in this care setting. Information about this care model was presented today at the Institute for Healthcare Improvement's National Forum.

"Our program was designed to help prevent escalation of anxiety and agitation in children with ASD, therefore leading to the reduced use of sedatives and restraints," said Cara Harwell, ARNP, CPNP, PMHS, lead researcher and a Nurse Practitioner at Nemours Children's Hospital. "Sedative medications do have side effects, and if we can manage kids' stress in other ways, we create a better experience for them and their families."

In their evaluation of the program, Harwell and her colleagues reviewed two years of electronic health records and identified 860 pediatric emergency department visits in which this model, known as the REACH (Respecting Each Awesome Child Here) Program, was used for patients with ASD or similar conditions. With this approach, fewer than six percent of these patients needed an anxiolytic (anxiety medication). None needed an antipsychotic (for aggressive behavior) or an alpha-agonist (for hyperactivity and anxiety). Fewer than one percent needed physical restraints.

There is limited comparative research, but one study, not employing the REACH model, found that sedation or restraints were used in nearly one-fourth of ED visits by children and adults with ASD.

Nemours' REACH Program, now in its third year, accommodates children with ASD, sensory disorders, mental health disorders and similar conditions. Staff receive ongoing training regarding ASD, REACH concepts, procedure planning, and recognizing and managing anxiety and agitation. Distraction objects and rewards are placed throughout the ED, which includes a sensory-friendly exam room. Harwell, along with Emily Bradley, MA, a Certified Child Life Specialist, developed and instituted the program at Nemours Children's Hospital, which was one of the first in the nation to adapt ED care to the needs of these children.

Beyond the reduced use of restraints and sedatives, patient satisfaction survey results show the program has led to improved patient experiences and a survey of providers found improved comfort and knowledge for treating children with ASD.

"The noise and pace of the ED environment can greatly increase stress for children with ASD, leading to the need for medications or restraints to help manage irritability, anxiety or harmful behavior. Avoiding these stimuli provides better, more positive care experiences for these kids," said Harwell.

Credit: 
Nemours

Brain activity shows development of visual sensitivity in autism

Research investigating how the brain responds to visual patterns in people with autism has shown that sensory responses change between childhood and adulthood.

The differences observed between adults and young people mimicked those seen in a strain of fruit flies that had a genetic change associated with autism and other developmental conditions.

This demonstrates that sensory issues in autism can be modelled in fruit flies, providing an opportunity to further understand the complexities of the condition.

Individuals with autism often report sensitivity to bright lights and loud sounds, as well as a variety of other sensory disturbances and differences. These can lead to problems in their everyday life, for example they might avoid bright or noisy environments.

Currently, however, there is limited research on the underlying mechanisms to explain why people with autism experience discomfort during some sensory experiences.

To investigate this, the researchers asked both children and adults, with and without autism, to look at patterns on a computer screen that flickered at specific rates.

They then measured the way that neurons in the participants' brains responded to the flickering patterns using an electroencephalogram (EEG), which detects electrical activity in the brain.

Dr Daniel Baker, from University of York's Department of Psychology, said: "Some neurons in the visual parts of the brain fired at the same frequency as the flickering patterns - at five times per second for example, while other types of neurons responded at twice this frequency.

"In adults with autism, and in our mature mutant flies, we found a reduction in brain activity at this higher frequency compared to control participants. In children, and in juvenile flies, responses were lower at both frequencies.

"This suggests that sensory differences may change during development, perhaps through some process of compensation or adjustment."

The new findings, part of a collaboration between the University of York and Stanford University, helped scientists understand the link between the differences in brain activity in adults and children, and a specific genetic change, associated with autism, as modelled in fruit flies.

The findings will allow future studies to understand the precise mechanisms involved in how sensory perception is affected in autism and whether the difference in brain responses between adults and children has any impact on how they perceive visual or other sensory stimuli.

Dr Chris Elliott, from the University of York's Department of Biology, said: "We now have a clearer picture of one sensory difference and have a genetic fly model that reflects this same difference.

"It is possible that in future the fruit fly model could be used to test potential treatments to alleviate some of the sensory difficulties experienced by people with autism."

The research, funded by the Wellcome Trust, the Simons Foundation, and the Experimental Psychology Society, is published in the journal Proceedings of the Royal Society B.

Credit: 
University of York

Researchers propose guidelines for the therapeutic use of melatonin

Sixty years after melatonin was isolated and with more than 23,000 published studies showing the many functions of this hormone secreted by the pineal gland, guidelines should be discussed and established for its therapeutic use.

This is the view expressed by José Cipolla Neto, Full Professor at the University of São Paulo's Biomedical Science Institute (ICB-USP), and Fernanda Gaspar do Amaral, a professor at the Federal University of São Paulo (UNIFESP), both in Brazil, in an article published in the journal Endocrine Reviews.

Cipolla Neto is the principal investigator for a project supported by the São Paulo Research Foundation (FAPESP) on the role of melatonin in energy metabolism regulation.

"Melatonin not only adapts the organism to nocturnal rest but also prepares it metabolically for the next day, when it will need to be sufficiently sensitive to absorb food, for example," he said. The body produces melatonin only at night.

"If the nocturnal production of melatonin is blocked by light during the night, especially by the blue light from smartphones, this can contribute to diseases, such as sleep disorders and hypertension, and metabolic disturbances, including obesity and diabetes. This potentially pathogenic situation is due not only to insufficient melatonin production but also to one of its more immediate consequences, which is a condition known as chronodisruption, a temporal disorganization of the circadian rhythm of biological functions," Cipolla Neto told.

Present in almost all living beings, from bacteria to humans, melatonin has been the focus of many clinical studies. In the last five years alone, more than 4,000 studies using melatonin have been published. Almost 200 of those were randomized clinical trials.

Between 1996 and July 2017, for example, 195 systematic reviews were published on the effects of the clinical use of melatonin, among which 96 addressed the use of melatonin to treat psychiatric diseases and neurological disturbances, including sleep disorders, while 43 focused on the association between melatonin and cancer.

Patent applications relating to therapeutic uses of melatonin and analogs filed worldwide between 2012 and September 2014 focused predominantly on the central nervous system - including sleep disorders, the disruption of the circadian cycle and neuroprotection - as well as cancer and immunological issues.

In spite of the impressive amount of data on melatonin and the pineal gland, researchers and clinicians lack a systematic standard theoretical framework of analysis that could assist in the appropriate interpretation of the data obtained and the development of an adequate understanding of the role played by melatonin in human physiology and pathophysiology, according to the authors of the article, who say their intention is "to propose a framework of analysis that would help researchers and health professionals to analyze, understand and interpret the effects of melatonin and its putative role in several pathologies".

Individual variation

Characterized chemically in 1959, melatonin - which derives from tryptophan, an essential amino acid found in proteins - is highly efficient at eliminating free radicals and has remarkable antioxidant properties. It interacts directly with free radicals and stimulates antioxidant enzymes in different tissues.

This role has long been proposed as melatonin's primary function; however, in recent years, researchers have discovered that owing to its special properties, it is an exceptionally important molecule that acts through several mechanisms at almost all physiological levels. These include all components of the cardiovascular, reproductive, immune, respiratory and endocrine systems as well as energy metabolism, according to the authors.

"Melatonin's modes of action and integrative role amplify and diversify its functional activities, particularly in the time domain, enabling the organism's physiology to deal with challenges present while it's being secreted by the pineal gland, and at the same time preparing the organism for future events. Similarly, melatonin synchronizes our organism's temporal order both daily and on the seasonal time scale," Cipolla Neto said.

"Consequently, all these particular modes of action should always be taken into consideration in both laboratory experiments [in cells] and animals, and especially in clinical studies and investigations into the use of melatonin as a treatment. In this case, above all, it should be kept in mind that melatonin's effects depend not just on the route of administration and concentration but also on the time of administration, among other factors."

In addition, it is important to consider that the profile and onset of melatonin production vary from person to person. Early birds (people who wake early) start their daily melatonin production before night owls (people who stay up late), and people who sleep for longer periods of time produce melatonin over a longer time than those who sleep for shorter periods.

Furthermore, according to the researchers, it should be kept in mind that a given dose of melatonin may result in different plasma levels in different patients owing to individual differences in absorbing, distributing, metabolizing and eliminating melatonin. These differences are associated with age, clinical condition, the existence of pathologies, and the functional integrity of physiological systems such as the gastrointestinal tract, liver and kidneys.

If these substantial differences are not adequately taken into account, they may impact clinical efficacy, the authors state, adding that "a proper chronic melatonin hormonal replacement therapy is only achieved when dosage and formulation are carefully chosen and individually tailored and controlled to accomplish the desired clinical effect".

The first and most important guideline for the clinical use of melatonin proposed by the authors is to determine the duration of the daily signal and the start of production in each patient and then to prescribe melatonin according to this reference point in time, called the dim light melatonin onset (DLMO).

This specific point on the daily melatonin production curve is an important temporal reference for the proper administration of the hormone to patients. Depending on the time at which it is administered - always using the DLMO as a guide - exogenous melatonin may advance, delay or have no effect on the timing of endogenous circadian rhythms.

Because the procedure to determine DLMO is typically not feasible in everyday clinical practice, a more practical approach is to take the time at which the patient usually goes to sleep at night as a reference for the timing of melatonin administration.

According to the authors, most oral formulations require approximately 45 minutes to an hour to become bioavailable, so a dose should be taken about an hour before the usual reported bedtime. Given that melatonin is a powerful timer of the organism's physiology, it should be taken strictly at the same time every day.

Dose is another key point to be discussed. There is no consensus in the literature on this matter. On average, plasma levels in young people who take 0.1-0.3 milligrams will reach 100-200 picograms per milliliter (pg/ml), equivalent to the expected normal physiological range, while 1 milligram will probably result in plasma levels of 500-600 pg/ml, which is much higher than the physiological range.

In their concluding summary, the authors note that the following precautions should be taken into consideration in melatonin therapy: chronic administration should be restricted to nighttime, the time should be carefully chosen according to the desired effect, and the dose and formulation should be individually adapted to build a blood melatonin profile that mimics the physiological ideal, ending by early morning.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

The costs and trade-offs of reforming long-term care for older people

A £36k lifetime cap on care costs for older people would cost £3.6 billion by 2035, according to research from the University of East Anglia, the London School of Economics and Political Science and the Pensions Policy Institute.

Rolling out a minimum level of social care to all older people with high needs and limited resources would cost a similar amount.

A new report published today reveals the costs and trade-offs of reforming long-term care funding for older people in England, and identifies those who stand to gain and lose from a range of proposed reforms.

It comes ahead of an eagerly-awaited Government Green Paper on Social Care.

In 2011, the Dilnot Commission found people with similar needs were getting "very different" levels of support and recommended a cap of £35k on the amount an individual must pay for their own care costs during their lifetime.

Today's findings show that such a cap would cost about £3.6 billion by 2035, in today's prices. And ensuring a minimum level of social care for all older people with high needs and limited resources would cost a similar amount.

Lead researcher Prof Ruth Hancock, from the Health Economics Group at the University of East Anglia, said: "There have been big questions about whether people should have to use all their savings to pay for care in old age, or whether local authorities should be given funds to ensure that a minimum level of care is available to all those in high need - even if individuals have to contribute something towards that care."

"Our research estimates the current and projected care costs across a range of potential reforms.

"We have also identified trade-offs - in terms of protecting the savings of those who currently pay for all their care themselves and easing the means tests, for example for people who get some help with their care home's fee but typically have to handover nearly all their income to their Local Authority as a contribution."

The report reveals that easing the means test would enable some people, who currently fund their own care because of their savings or incomes, to receive publicly funded care.

Meanwhile funding care for a greater number of older people would enable those with high needs and limited resources, who may currently rely on unpaid care, to receive publicly funded care.

The research also shows that extending social care to all older people with at least moderate needs and limited resources would cost £5.8 billion by 2035. This would enable some of those whose needs are not currently deemed high enough to receive publicly funded care to do so in future.

For a similar cost, free personal care could be provided to all older people in England. This would enable those with substantial needs who currently fund their own care because of their savings or incomes to receive publicly funded care.

The proposed reforms investigated include previous plans for a £72k lifetime cap on care costs which had been due to be implemented in 2020, suggestions for a cap on care costs which covers daily living costs in care homes as well as care costs, free personal care as implemented in Scotland and the Conservative Party manifesto suggestion of including housing wealth in the means test for home care.

The research has been carried out by the CASPeR team, which comprises members from the Pensions Policy Institute, the London School of Economics and Political Science and the University of East Anglia. It was funded by the Nuffield Foundation.

Associate Professorial Research Fellow Raphael Wittenberg, from the Personal Social Services Research Unit at the London School of Economics and Political Science, said: "How best to reform the system of financing social care has proved a challenge for successive governments.

"There are difficult trade-offs to address. How far should additional resources be focused on relaxing the means test to help people with substantial care needs who because of the means test currently fund their own care? And how far should they be focused on people with limited resources who currently do not receive publicly funded care because their needs are not assessed as sufficiently substantial to meet the eligibility criteria? In order to inform decisions we have examined in detail the likely impacts of a range of potential reforms."

Credit: 
University of East Anglia

Employee incentives can lead to unethical behavior in the workplace

image: This is Bill Becker, co-author of the study and associate professor of management in the Pamplin College of Business at Virginia Tech.

Image: 
Virginia Tech

Considering end-of-year bonuses for your employees? Supervisors, be forewarned: a new study finds that while incentive rewards can help motivate employees and increase their performance, they can also lead to unethical behavior in the workplace.

"Goal fixation can have a profound impact on employee behavior, and the damaging effects appear to be growing stronger in today's competitive business landscape," says Bill Becker, co-author of the study and associate professor of management in the Pamplin College of Business at Virginia Tech.

The study, "The effects of goals and pay structure on managerial reporting dishonesty," provides valuable insight into the relationship between pay structures and motivation.

Findings suggest that setting compensation goals can increase dishonesty when managers are also paid a bonus for hitting certain targets. "These unintended negative consequences can lead to dishonesty, unethical behavior, increased risk-taking, escalation of commitment, and depletion of self-control," says Becker.

The study points to observations of unethical behaviors in the workplace that include employees falsifying or manipulating financial reporting information as well as time and expense reports.

For example, service professionals such as auditors, contractors, lawyers, and consultants often report hours billed against a target budget that is based on a fixed contract price. "This causes potential for both under-reporting and over-reporting costs, which can undermine organizational objectives and negatively impact the interest of the firm," says Becker. "Using purely monetary incentives is almost always a double-edged sword."

Credit: 
Virginia Tech

Criminalisation & repressive policing of sex work linked to increased risk of violence

Sex workers who face repressive policing are more likely to experience violence and poorer health and well-being, according to new research published in PLOS Medicine.

Led by the London School of Hygiene & Tropical Medicine (LSHTM), the systematic review found that sex workers who had been exposed to repressive policing (such as recent arrest, prison, displacement from a work place, extortion or violence by officers) had a three times higher chance of experiencing sexual or physical violence by anyone, for example, a client, a partner, or someone posing as a client. They were also twice as likely to have HIV and/or other sexually transmitted infections (STIs), compared with sex workers who had avoided repressive policing practices.

The few studies that looked at emotional health showed that sex workers who had experienced recent incarceration, arrest, or increased police presence were also more likely to have poorer mental health outcomes.

In what is the first systematic review to examine the impacts of criminalisation on sex workers' safety, health, and access to services, the researchers conclude that reform of demonstrably harmful policies and laws is urgently needed to protect and improve sex workers' safety, health and broader rights. Data included in the review came from 33 countries, including the UK.

The study is particularly timely given the active political interest in models of decriminalisation of sex work (introduced in New Zealand) and the criminalisation of the purchase of sex (currently law in Canada, France, Iceland, Northern Ireland, Norway, Republic of Ireland and Sweden).

To gather literature, the team searched databases of peer-reviewed journals from 1990 to 2018, for research on sex work, legislation, policing and health. Only studies reporting data provided by sex workers themselves were included.

The researchers reviewed the effects of criminalisation and police repression, examples of which included recent arrest, prison, displacement from a work place, confiscation of needles/syringes or condoms, and extortion, sexual or physical violence by police officers.

Using techniques including meta-analysis (pooling results from included quantitative studies), the team were then able to estimate the average effect of being exposed to repressive policing compared to no such exposure. The team also identified the main pathways through which these effects occurred in different legislative contexts (synthesising results of included qualitative studies).
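As an illustration of what "pooling results" means in practice, here is a minimal sketch of inverse-variance, random-effects pooling of study-level odds ratios (the DerSimonian-Laird approach); the three studies and their numbers are invented for illustration and are not data from the review.

```python
import math

# Invented study-level odds ratios with 95% confidence intervals (illustrative only).
studies = [
    {"or": 2.5, "ci": (1.4, 4.5)},
    {"or": 3.2, "ci": (1.8, 5.7)},
    {"or": 2.0, "ci": (1.1, 3.6)},
]

# Work on the log scale; approximate each standard error from the CI width.
logs = [math.log(s["or"]) for s in studies]
ses = [(math.log(s["ci"][1]) - math.log(s["ci"][0])) / (2 * 1.96) for s in studies]
w = [1 / se ** 2 for se in ses]  # inverse-variance (fixed-effect) weights

fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))   # heterogeneity statistic
tau2 = max(0.0, (q - (len(studies) - 1))
           / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))   # between-study variance

w_re = [1 / (se ** 2 + tau2) for se in ses]                  # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
print(f"Pooled odds ratio (random effects): {math.exp(pooled):.2f}")
```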

Sex workers who had avoided repressive policing were 30% less likely to engage in sex with clients without a condom - a risk factor for HIV and STIs. Although prevalence is highly variable in different contexts, in low- and middle-income countries sex workers are on average 13 times more likely to have HIV than women of reproductive age (ages 15 to 49), so their ability to negotiate condom use is important.

Lucy Platt, lead author and Associate Professor in Public Health Epidemiology at LSHTM, said: "Our important review highlights the impact of sex work laws and policing practices on the safety and health of individuals who sell sex around the world.

"Where some or all aspects of sex work were criminalised, concerns about their own or their clients' arrest meant that sex workers often had to rush screening clients negotiating services, or work in isolated places, to avoid the police. This increased sex workers' vulnerability to theft and violence.

"At the same time, police frequently failed to act on sex workers' reports of such crimes, or blamed and arrested sex workers themselves, meaning that offenders could operate with impunity and sex workers were reluctant to report to the police in future. These experiences were reported time and again across a wide range of countries."

The research also showed that repressive policing not only further marginalised sex workers as a population, but it also reinforced inequalities within sex-working communities, as police often targeted specific groups or work settings.

Research in Sweden and Canada showed that criminalising sex workers' clients did not improve sex workers' safety or access to services. In New Zealand, following decriminalisation, sex workers reported being better able to refuse clients and insist on condom use, amid improved relationships with police and managers. However, migrants continue to be excluded from this system. Studies from Guatemala, Mexico, Turkey and Nevada (USA), showed how regulatory models exacerbate disparities within sex worker communities. They enabled access to safer conditions for some but excluded the majority.

Pippa Grenfell, co-author and Assistant Professor of Public Health Sociology at LSHTM, said: "It is clear from our review that criminalisation of sex work normalises violence and reinforces gender, racial, economic and other inequalities. It does so by restricting sex workers' access to justice, and by increasing the vulnerability, stigmatisation and marginalisation of already-marginalised women and minorities.

"Decriminalisation of sex work is urgently needed, but other areas must also be addressed. Wider political action is required to tackle the inequalities, stigma and exclusion that sex workers face, not only within criminal justice systems but also in health, domestic violence, housing, welfare, employment, education and immigration sectors."

The authors say that while legislative reforms and related institutional shifts are likely to require long-term efforts, immediate interventions are also needed to support sex workers. This includes the sustained and renewed funding and scale-up of specialist and sex-worker-led services that can help to address the multiple and diverse health and social care needs of people who sell sex around the world.

The research team acknowledge the limitations of their review, including only a small number of studies that examined contexts where sex work is decriminalised or the purchase of sex was criminalised. There were also few studies conducted with trans women or men, or that examined the interaction between criminalisation and other social factors that affect sex workers' health and safety.

Credit: 
London School of Hygiene & Tropical Medicine

Pregnant women, young children most likely to use bed nets to prevent malaria

When households in sub-Saharan Africa don't have an adequate number of insecticide-treated bed nets, pregnant women and children under five are the most likely family members to sleep under the ones they have, leaving men and school-aged children more exposed to malaria, new Johns Hopkins Center for Communication Programs (CCP) research suggests. CCP is based at the Johns Hopkins Bloomberg School of Public Health.

The findings, published last month in Malaria Journal, also show that when households have an adequate supply of treated bed nets - one for every two members living under the same roof - these gender and age disparities shrink.

The World Health Organization credits the widespread use of insecticide-treated bed nets with playing a huge role in the reduction of the number of malaria cases in sub-Saharan Africa since 2001.

The new research finds, however, that across 29 countries in sub-Saharan Africa, on average, only 30 percent of households have enough bed nets, ranging from 8.5 percent in Cameroon to 62 percent in Uganda.

"The good news is that we have succeeded in protecting some of the most vulnerable people - pregnant women and young children - from malaria," says Bolanle Olapeju, MBBS, PhD, a senior research data analyst with CCP who led the research as part of CCP's VectorWorks team. "Now, we need to go even further to provide enough nets for everyone else."

Mosquito nets are draped over a bed as a barrier against bites from mosquitos - and the diseases they carry. The nets do double duty in that they are treated with an insecticide that kills many of those mosquitos that land on them.

Under the global effort to prevent malaria in sub-Saharan Africa, nearly two billion nets have been distributed for free since 2004. Nets don't last forever and must be replaced roughly every three years. The most common way that nets are distributed is through what are known as mass campaigns. Nets are also provided through health clinics to pregnant women, during vaccination of small children and through school-based campaigns in some countries.

For the research, Olapeju and her team analyzed data from Malaria Indicator Surveys and Demographic and Health Surveys conducted in sub-Saharan Africa between 2011 and 2016. This analysis allowed them to determine patterns of net use both when there were an adequate number of bed nets in a household and when there were not.
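As a concrete sketch of the adequacy indicator used in such analyses (one treated net for every two household members), the snippet below shows how a household-level flag might be computed; the records and field names are invented, not the actual DHS or MIS survey variables.

```python
# Invented household records; real analyses use DHS/MIS survey variables.
households = [
    {"members": 6, "nets": 2},
    {"members": 4, "nets": 2},
    {"members": 5, "nets": 3},
]

def has_enough_nets(household):
    """Adequate supply: at least one insecticide-treated net per two members."""
    return household["nets"] * 2 >= household["members"]

adequate = sum(has_enough_nets(hh) for hh in households)
print(f"{adequate} of {len(households)} households have an adequate supply of nets")
```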

They found that, for the most part, people use the nets they have. One concern going into the research was that heads of households could be hoarding nets and not allowing others in the home to use them, but the researchers found this was not the case.

"Once you get more nets into households, age and gender disparities shrink, so we need to keep reminding policymakers of the importance of getting nets to as many people as possible," Olapeju says. "It's one of the best ways we have to prevent malaria."

Whether there were an adequate number of nets in the home or not, however, the researchers found that those who were least likely to sleep under nets were school-aged children, those from ages five to 14. The researchers say this is concerning because even though this group is less likely to suffer from the symptoms of malaria, these children are more likely to have asymptomatic malaria, which can still be spread to others by mosquito bites.

"The global stakeholders have done a lot to provide nets and prevent malaria," Olapeju says. "But this is the time to push harder toward eliminating malaria and one way to do that, it appears, is with more nets."

Credit: 
Johns Hopkins Bloomberg School of Public Health

New models sense human trust in smart machines

video: New models are informing how intelligent machines should be designed so as to "earn" the trust of humans.

Image: 
Purdue University video/Jared Pike

WEST LAFAYETTE, Ind. - New "classification models" sense how well humans trust intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork.

The long-term goal of the overall field of research is to design intelligent machines capable of changing their behavior to enhance human trust in them. The new models were developed in research led by assistant professor Neera Jain and associate professor Tahira Reid, in Purdue University's School of Mechanical Engineering.

"Intelligent machines, and more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans," Jain said. "As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions."

For example, aircraft pilots and industrial workers routinely interact with automated systems. Humans will sometimes override these intelligent machines unnecessarily if they think the system is faltering.

"It is well established that human trust is central to successful interactions between humans and machines," Reid said.

The researchers have developed two types of "classifier-based empirical trust sensor models," a step toward improving trust between humans and intelligent machines. A YouTube video is available at https://www.youtube.com/watch?v=Mucl6pAgEQg.

The work aligns with Purdue's Giant Leaps celebration, acknowledging the university's global advancements made in AI, algorithms and automation as part of Purdue's 150th anniversary. This is one of the four themes of the yearlong celebration's Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

The models use two techniques that provide data to gauge trust: electroencephalography and galvanic skin response. The first records brainwave patterns, and the second monitors changes in the electrical characteristics of the skin, providing psychophysiological "feature sets" correlated with trust.

Forty-five human subjects donned wireless EEG headsets and wore a device on one hand to measure galvanic skin response.

One of the new models, a "general trust sensor model," uses the same set of psychophysiological features for all 45 participants. The other model is customized for each human subject, resulting in improved mean accuracy but at the expense of an increase in training time. The two models had mean accuracies of 71.22 percent and 78.55 percent, respectively.
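To make the distinction between the two model types concrete, here is a minimal sketch of a pooled, subject-general classifier versus per-subject customized classifiers trained on psychophysiological features; the synthetic features, labels and choice of a random-forest classifier are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are time windows, columns are EEG/GSR-derived
# features, labels are trust (1) versus distrust (0). Real features would come
# from the EEG headset and the galvanic skin response device.
n_subjects, windows_per_subject, n_features = 45, 40, 12
X = rng.normal(size=(n_subjects * windows_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * windows_per_subject)
subject_id = np.repeat(np.arange(n_subjects), windows_per_subject)

# General trust sensor model: one classifier trained on data pooled across participants.
general_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

# Customized models: one classifier per participant (better fit, more training effort).
per_subject_acc = np.mean([
    cross_val_score(RandomForestClassifier(random_state=0),
                    X[subject_id == s], y[subject_id == s], cv=5).mean()
    for s in range(n_subjects)
])
print(f"general model: {general_acc:.2f}, customized models (mean): {per_subject_acc:.2f}")
```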

It is the first time EEG measurements have been used to gauge trust in real time, or without delay.

"We are using these data in a very new way," Jain said. "We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event."

Findings are detailed in a research paper appearing in a special issue of the Association for Computing Machinery's Transactions on Interactive Intelligent Systems. The journal's special issue is titled "Trust and Influence in Intelligent Human-Machine Interaction." The paper was authored by mechanical engineering graduate student Kumar Akash; former graduate student Wan-Lin Hu, who is now a postdoctoral research associate at Stanford University; Jain and Reid.

"We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship," Jain said. "In order to do this, we require a sensor for estimating human trust level, again in real-time. The results presented in this paper show that psychophysiological measurements could be used to do this."

The issue of human trust in machines is important for the efficient operation of "human-agent collectives."

"The future will be built around human-agent collectives that will require efficient and successful coordination and collaboration between humans and machines," Jain said. "Say there is a swarm of robots assisting a rescue team during a natural disaster. In our work we are dealing with just one human and one machine, but ultimately we hope to scale up to teams of humans and machines."

Algorithms have been introduced to automate various processes.

"But we still have humans there who monitor what's going on," Jain said. "There is usually an override feature, where if they think something isn't right they can take back control."

Sometimes this action isn't warranted.

"You have situations in which humans may not understand what is happening so they don't trust the system to do the right thing," Reid said. "So they take back control even when they really shouldn't."

In some cases, for example in the case of pilots overriding the autopilot, taking back control might actually hinder safe operation of the aircraft, causing accidents.

"A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time," Jain said.

To validate their method, 581 online participants were asked to operate a driving simulation in which a computer identified road obstacles. In some scenarios, the computer correctly identified obstacles 100 percent of the time, whereas in other scenarios the computer incorrectly identified the obstacles 50 percent of the time.

"So, in some cases it would tell you there is an obstacle, so you hit the brakes and avoid an accident, but in other cases it would incorrectly tell you an obstacle exists when there was none, so you hit the breaks for no reason," Reid said.

The testing allowed the researchers to identify psychophysiological features that are correlated to human trust in intelligent systems, and to build a trust sensor model accordingly. "We hypothesized that the trust level would be high in reliable trials and be low in faulty trials, and we validated this hypothesis using responses collected from 581 online participants," she said.

The results validated that the method effectively induced trust and distrust in the intelligent machine.

"In order to estimate trust in real time, we require the ability to continuously extract and evaluate key psychophysiological measurements," Jain said. "This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor."

The EEG headset records signals over nine channels, each channel picking up different parts of the brain.

"Everyone's brainwaves are different, so you need to make sure you are building a classifier that works for all humans."

For autonomous systems, human trust can be classified into three categories: dispositional, situational, and learned.

Dispositional trust refers to the component of trust that is dependent on demographics such as gender and culture, which carry potential biases.

"We know there are probably nuanced differences that should be taken into consideration," Reid said. "Women trust differently than men, for example, and trust also may be affected by differences in age and nationality."

Situational trust may be affected by a task's level of risk or difficulty, while learned is based on the human's past experience with autonomous systems.

The models they developed are called classification algorithms.

"The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting," she said.

Jain and Reid have also investigated dispositional trust to account for gender and cultural differences, as well as dynamic models able to predict how trust will change in the future based on the data.

Credit: 
Purdue University

Veterans Health Administration hospitals outperform non-VHA hospitals in most markets

image: Dr. Weeks and co-author Alan West, PhD, used the most current publicly available data to compare health outcomes for VA and non-VA hospitals within 121 local healthcare markets that included both a VA medical center and a non-VA hospital.

Image: 
The Dartmouth Institute

The Veterans Health Administration (VHA) is the largest integrated health care system in the United States, providing care at 1,243 health care facilities, including 172 VA Medical Centers and 1,062 outpatient sites. Many of the 9 million veterans enrolled in the VA healthcare program will, at some point, have to decide whether to seek care at a VA or non-VA facility. In a new study, researchers from The Dartmouth Institute for Health Policy and Clinical Practice and the White River Junction VA Medical Center in White River Junction, Vermont, used the most current publicly available data to compare health outcomes for VA and non-VA hospitals within 121 local healthcare markets that included both a VA medical center and a non-VA hospital.

In their findings, recently published in the Annals of Internal Medicine, Dartmouth Institute Professor William Weeks, MD, PhD, MBA, and Alan N. West, PhD, of the White River Junction VA Medical Center note that several recent studies comparing broad representative samples of VHA patients with representative samples of patients not in the VHA system have found that outcomes at VA hospitals are at least as good as those in the private sector. Several circumstances, they say, could account for these findings: The VHA may provide better care than the private sector in every local area. Alternatively, non-VHA care may be better than VHA care in more local areas but by a small amount, whereas VHA care may be better than non-VHA care in fewer local areas but by a large amount in each area. The average across all patients and hospitals would favor the VHA in the former circumstance and might favor the VHA in the latter.

"We wanted to take a closer look at local healthcare markets and specific health conditions because if you're a veteran deciding where to seek treatment what you're really concerned with are the outcomes at your local VA," Weeks says.

Weeks and West identified 15 outcome measures that were reported by VHA and non-VHA hospitals by using data from Hospital Compare, a Centers for Medicare & Medicaid Services (CMS) website which provides information on how well hospitals provide recommended care to their patients. These measures included 30-day risk-adjusted mortality rates for four common diseases--acute myocardial infarction, COPD, heart failure, and pneumonia--plus 11 additional patient safety indicators. They used each hospital's ZIP code to assign the hospital to one of 306 hospital referral regions--limiting their analyses to the 121 regions in which at least one VHA and one non-VHA hospital reported at least one of the measures. (The Dartmouth Atlas of Health Care defines these regions as distinct health care markets.) The researchers found that VA hospitals were likely to provide the best care in a local health care market and rarely provided the worst care in local markets.
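As a sketch of the kind of within-market comparison described above (grouping hospitals into referral regions by ZIP code, then flagging the best and worst performer in each market that contains both a VA and a non-VA hospital), the snippet below uses invented hospital records and an invented ZIP-to-region lookup; it is not the authors' code or the Hospital Compare data schema.

```python
# Invented records and lookup table; real analyses use Hospital Compare data
# and the Dartmouth Atlas hospital referral region definitions.
hospitals = [
    {"name": "VA Medical Center A", "zip": "05009", "is_va": True,  "mortality_30d": 11.2},
    {"name": "General Hospital B",  "zip": "05001", "is_va": False, "mortality_30d": 12.5},
    {"name": "General Hospital C",  "zip": "05301", "is_va": False, "mortality_30d": 13.0},
]
zip_to_hrr = {"05009": "HRR_110", "05001": "HRR_110", "05301": "HRR_110"}

markets = {}
for h in hospitals:
    markets.setdefault(zip_to_hrr[h["zip"]], []).append(h)

for hrr, members in markets.items():
    # Keep only markets with at least one VA and one non-VA hospital, as in the study design.
    if any(m["is_va"] for m in members) and any(not m["is_va"] for m in members):
        best = min(members, key=lambda m: m["mortality_30d"])   # lower 30-day mortality is better
        worst = max(members, key=lambda m: m["mortality_30d"])
        print(f"{hrr}: best = {best['name']}, worst = {worst['name']}")
```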

"Our findings suggest that, despite some recent negative reports, the VA generally provides truly excellent care," Weeks says. "If that is the case, outsourcing VA care to non-VA settings solely for patient convenience should be reconsidered."

However, Weeks and West also raise the possibility that VA and non-VA hospitals may report data differently to Hospital Compare. If so, the authors recommend the VA and Centers for Medicare and Medicaid Services (CMS) take steps to adapt reporting methods to ensure fair comparisons by end users who are trying to make healthcare decisions.

Credit: 
The Dartmouth Institute for Health Policy & Clinical Practice

Scientific assessment of endangered languages produces mixed results

(Washington, DC) - A new study of the progress made over the last 25 years in documenting and revitalizing endangered languages shows both significant advances and critical shortfalls. The article, "Language documentation twenty-five years on", by Frank Seifart (CNRS & Université de Lyon, University of Amsterdam, and University of Cologne), Nicholas Evans (ARC Centre of Excellence for the Dynamics of Language, The Australian National University), Harald Hammarström (Uppsala University and Max Planck Institute for the Science of Human History) and Stephen C. Levinson (Max Planck Institute for Psycholinguistics), will be published in the December, 2018 issue of the scholarly journal Language. A pre-print version of the article may be found at: https://www.linguisticsociety.org/sites/default/files/e05_94.4Seifart.pdf .

This article is being published as UNESCO's International Year of Indigenous Languages 2019 is fast approaching. It is a follow-up to the seminal article by Ken Hale et al. that appeared in Language in 1992. The study presents the most reliable figures on worldwide language endangerment so far: more than half of the nearly 7,000 living languages are currently endangered. Around 600 of these are already nearly extinct, and are now only spoken occasionally by members of the grandparent generation. About 950 endangered languages are also still spoken by children, but the proportion of children acquiring these languages is getting smaller and smaller. The authors warn that "if this trend is not reversed, these languages will also die out."

With the growing network of researchers carrying out language documentation around the world, and helped by technological progress for data collection, processing and archiving, our scientific knowledge of the world's languages has significantly increased over the past 25 years. So has the engagement of indigenous researchers on their own languages. Over this period, many hundreds of languages have been documented in sustainably archived audio and video collections, as well as more traditional products like grammars and dictionaries. But the study also shows that well over a third of the world's languages, including over 1,400 endangered languages, are still severely under-described, and lack even basic information on their grammar and lexicon, let alone proper documentation of culture-specific language use.

The authors sound an urgent alarm: "The potential loss if linguists do not up their game is enormous on all accounts." The documentation of linguistic diversity keeps turning up new phenomena and there are no signs that new discoveries are tailing off. These discoveries keep driving linguistics to broaden its canon of possible grammatical categories. Whole new meaning domains have been discovered, and entirely new speech sounds are also still being brought to light. Beyond such core categories of linguistic structure, work with little-studied languages is expanding our knowledge of how language is learned, processed, socially organized, aesthetically extended, and how it evolves, within as little as one generation.

The authors conclude that there are thus many reasons for intensifying research on small and often endangered languages. Such research can now take full advantage of technological developments through automating particularly time-consuming aspects of transcription work. But intensifying this work also depends on full recognition of the value of linguistic diversity, ranging from international observances by UNESCO, all the way through to the admissibility of descriptive and documentary research as degree work in academic programs.

Credit: 
Linguistic Society of America