Exhaled e-vapor particles evaporate in seconds -- new study

image: Exhaled e-vapor evaporates in seconds.

Image: 
Fontem Ventures

Amsterdam, July 23, 2018 - A new peer-reviewed study published in the prestigious journal Nicotine & Tobacco Research shows that exhaled e-vapour product particles are actually liquid droplets that evaporate within seconds.

"No accumulation of particles was registered in the room following subjects' vaping. This shows us how fundamentally different exhaled e-vapour particles are compared to those released when smoking conventional cigarettes, the latter of which linger in the air for longer periods of time," said Dr Grant O'Connell, Corporate Affairs Manager at Fontem Ventures, and senior author of the study.

The research is one of the first detailed studies conducted to investigate the dynamic properties of exhaled e-vapour aerosol particles. The study, entitled "Characterisation of the Spatial and Temporal Dispersion Differences between Exhaled e-cigarette mist and Cigarette Smoke," was a collaboration between Kaunas University of Technology in Lithuania, EMPA (Swiss Federal Laboratories for Materials Science and Technology), ETH Zurich (Swiss Federal Institute of Technology) and Fontem Ventures.

During the study, regular vapers used commercially available closed and open system vaping products while researchers measured particle concentrations in the surrounding air. Unlike with conventional cigarette smoke, scientists observed rapid decay and evaporation of the liquid aerosol droplets immediately after exhalation, with concentrations returning to background levels within seconds. This held even with no room ventilation, representing a worst-case scenario.

"Exhaled e-vapour aerosol particles have a different chemical composition to cigarette smoke and here we show the physical properties are also significantly different. This data adds to the growing body of evidence that vaping indoors is unlikely to pose an air quality issue," said Dr O'Connell.

For both e-vapour products and conventional cigarettes, the particle concentrations registered following each puff were of the same order of magnitude. However, for e-vapour products the particle concentration returned to background values within a few seconds; for conventional cigarettes it increased with successive puffs, only returning to background levels after 30-45 minutes.
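
That contrast can be pictured with a toy first-order decay model. The Python sketch below is purely illustrative, with made-up rate constants rather than values from the paper: droplets that evaporate on a seconds scale return to background between puffs, while slowly decaying smoke accumulates over successive puffs.

import numpy as np

dt = 0.1                         # time step, seconds
t = np.arange(0, 60, dt)         # one minute of room air

def particle_level(puff_times, decay_rate, puff_size=1.0):
    # Concentration from repeated puffs, each decaying exponentially.
    level = np.zeros_like(t)
    for i in range(1, len(t)):
        level[i] = level[i - 1] * np.exp(-decay_rate * dt)
        if any(abs(t[i] - p) < dt / 2 for p in puff_times):
            level[i] += puff_size
    return level

puffs = [5, 15, 25, 35]                          # four puffs in one minute
vapor = particle_level(puffs, decay_rate=1.0)    # evaporates on a seconds scale
smoke = particle_level(puffs, decay_rate=0.001)  # lingers for tens of minutes

print(f"vapor at t=60 s: {vapor[-1]:.6f} (back at background)")
print(f"smoke at t=60 s: {smoke[-1]:.3f} (still accumulating)")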

In 2016, the UK Government issued advice to employers encouraging workplaces to adopt pro-vaping policies that make it as easy and convenient as possible for smokers to switch, on the basis that "International peer-reviewed evidence indicates that the risk to the health of bystanders from exposure to e-cigarette vapour is extremely low".

Credit: 
Fontem Ventures

Democracies are more prone to start wars -- except when they're not

What kind of political leader is most likely to start a war--an invective-spewing dictator or the elected head of a democratic nation? Surprisingly, science says it's probably not the autocrat.

Leaders of democratic nations actually have stronger incentives to start and exacerbate conflicts with other countries than their autocratic counterparts, suggests a new study to be published in the American Journal of Political Science.

The difference boils down to public pressure, say the study's authors, Michael Gibilisco of Caltech and Casey Crisman-Cox of Texas A&M University. Because of pressure from voters to not back down and appear weak, democratic leaders tend to act more aggressively in international conflicts. An autocrat, on the other hand, is answerable to no one and can back down from a conflict without facing personal consequences.

"If an elected leader makes a threat during a conflict with another country and the threat isn't followed through, they may face a decrease in approval ratings, or they may lose an election," says Gibilisco, assistant professor of political science. In democracies, he notes, voters can punish their leaders for appearing weak--these punishments or consequences are known as "audience costs" in political science parlance. To avoid those costs, leaders in representative governments become more aggressive during disputes.

In their study, Gibilisco and Crisman-Cox, who is also an assistant professor of political science, first developed a mathematical model of dispute initiation between countries and then fit the model to data on actual conflicts that occurred among 125 countries between 1993 and 2007.

They also estimated audience costs for the countries in their sample using existing databases containing country-by-country information on levels of democracy and press freedom. In general, they found that audience costs are highest in democracies with strong protections for a free press.

However, they also found that audience costs are much lower in democracies that have a rival that threatens their existence. (For example, South Korea's existential rival is North Korea.) One reason, the researchers say, is that a nation's voters will give their leader more leeway in deciding how to resolve a conflict with an existential rival, because survival is more of a concern than saving face.

In contrast to democracies, dictatorships tend to have low audience costs, but here, too, Gibilisco and Crisman-Cox found an exception. Dictatorships that provide a legal mechanism for removing a leader--as was the case in China before it abolished term limits this past March--have higher audience costs.

Once the researchers produced an audience-cost estimate for each country, they considered how changing a country's audience costs affects its willingness to engage in conflict. Overall, they found, increasing a country's audience costs, perhaps by strengthening democratic institutions, makes it more likely to start a conflict.

However, Gibilisco and Crisman-Cox found that other forces are at play that create more nuanced international dynamics.

For example, while democratic leaders may be less likely to back down during a crisis, they can also be more aggressive and prone to initiate conflict, because they know their opponent won't want to get in a fight against a country that will hold its ground, even if it leads to war. Alternatively, a democratic leader may be less likely to initiate a conflict in the first place, as they know that they won't be able to easily stand down from it.

Lastly, the researchers found a sort of mutually assured destruction effect with audience costs. Two countries that each have high levels of audience cost know the other cannot back down and thus avoid conflicts with each other; if they do end up in a dispute, however, the countries will have a harder time resolving it peacefully.
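
The audience-cost logic described above can be caricatured in a few lines of code. The Python sketch below is a drastically simplified stand-in for the authors' model, with hypothetical payoff numbers: a leader with high audience costs credibly stands firm and is tempted to press a flexible rival, while two high-audience-cost states deter each other.

def backs_down(audience_cost, war_cost):
    # A challenged leader concedes only if conceding is cheaper than fighting.
    return audience_cost < war_cost

def initiates(my_audience_cost, rival_audience_cost, war_cost=5.0):
    # A leader starts a dispute when the rival is expected to fold while
    # the leader's own high audience cost makes standing firm credible.
    rival_folds = backs_down(rival_audience_cost, war_cost)
    i_stand_firm = not backs_down(my_audience_cost, war_cost)
    return rival_folds and i_stand_firm

# A high-audience-cost democracy facing a flexible autocrat initiates:
print(initiates(my_audience_cost=8, rival_audience_cost=1))  # True
# Two high-audience-cost democracies deter each other:
print(initiates(my_audience_cost=8, rival_audience_cost=9))  # False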

"The model kind of explains this behavior where peace and conflict are both self-enforcing," Gibilisco says. "So, if we're in peace today, none of us want to escalate a dispute into war tomorrow. But once we're at war, we want to avoid de-escalating."

Credit: 
California Institute of Technology

Enabling technology in cell-based therapies: Scale-up, scale-out or program in-place

image: Technologies that are reducing costs and changing the ways in which researchers and clinicians process and use therapeutic cells are showcased in the August 2018 special issue of SLAS Technology.

Image: 
Coleman Murray, University of California, Los Angeles

Technologies that are reducing costs and changing the ways in which researchers and clinicians process and use therapeutic cells are showcased in the August 2018 special issue of SLAS Technology. With leadership from guest editor Christopher Puleo, Ph.D., and colleagues of General Electric Global Research (Niskayuna, NY), the issue presents two review articles that detail the status of cell bioreactors in both stem cell and tissue/organ engineering applications and five original research reports by life sciences researchers from universities, pharma companies and hospitals in Australia and across the United States.

Advances reported in this issue include methods of cell separation that utilize unique microscale forces for use with higher cell concentrations or larger sample volumes; new inline and closed-loop systems built on "smart" dynamic magnetic traps, microfluidic separators and acoustic energy-based cell separation techniques; and methods to better automate or package complex cell manipulations into closed bioreactor systems.

The arrival of FDA-approved chimeric antigen receptor (CAR) T-cell therapies and the expansion of T-cell and other cell-based therapies beyond oncology applications have reinvigorated discussions around the ways in which researchers harvest, culture, process, or directly alter therapeutic cells. However, the manufacturing process (i.e., selection of peripheral blood mononuclear cells from whole blood, activation of T cells, transduction with CAR viral vectors or transposons, and expansion in an appropriate bioreactor) for combination gene/cell therapies such as CAR T is complex, and there remain many opportunities to decrease costs and improve safety of these important new clinical tools.

Credit: 
SLAS (Society for Laboratory Automation and Screening)

Archaeologists identify ancient North American mounds using new image analysis technique

image: Four shell ring features identified using the object-based image analysis (OBIA) algorithm. The ring on the top right has been confirmed as a shell ring site.

Image: 
Carl Lipo

BINGHAMTON, N.Y. - Researchers at Binghamton University, State University of New York have used a new image-based analysis technique to identify once-hidden North American mounds, which could reveal valuable information about pre-contact Native Americans.

"Across the East Coast of the United States, some of the most visible forms for pre-contact Native American material culture can be found in the form of large earthen and shell mounds," said Binghamton University anthropologist Carl Lipo. "Mounds and shell rings contain valuable information about the way in which past people lived in North America. As habitation sites, they can show us the kinds of foods that were eaten, the way in which the community lived, and how the community interacted with neighbors and their local environments.

In areas that are deeply wooded or consist of bayous and swamps, there are mounds that have eluded more than 150 years of archaeological survey and research. Vegetation in these environments makes it difficult to see more than a couple of dozen feet, so even large mounds can remain hidden from view from someone who systematically walks the terrain.

The use of satellites and new kinds of aerial sensors such as LiDAR (light detection and ranging) have transformed the way archaeologists can gather data about the archaeological record, said Lipo. Now scientists can study landscapes from images and peer through the forest canopy to look at the ground. LiDAR has been particularly effective at showing the characteristic rises in topography that mark the presence of mounds. The challenge to archaeologists, however, is to manage the vast array of new data now available for study. Object-based image analysis (OBIA) allows archaeologists to configure a program to automatically detect features of interest. OBIA is a computer-based approach that uses data from satellite images and aerial sensors to look for shapes and combinations of features that match objects of interest. Unlike traditional satellite image analyses that look at combinations of light wavelengths, OBIA adds characteristics of shape to the search algorithm, allowing archaeologists to more easily distinguish cultural from natural phenomena.
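
As a concrete picture of the idea, here is a minimal OBIA-style pass over a LiDAR-derived elevation raster in Python. This is a hypothetical sketch (using scikit-image), not the team's actual pipeline: it segments pixels that rise above the background, groups them into objects, and keeps only objects whose area and circularity plausibly resemble a mound or shell ring.

import numpy as np
from skimage import measure

def find_mound_candidates(dem, rise=0.5, min_area=200, min_circularity=0.6):
    # 1. Threshold: keep pixels standing above the background elevation.
    raised = dem > (np.median(dem) + rise)
    # 2. Group connected raised pixels into candidate objects.
    labels = measure.label(raised)
    candidates = []
    for region in measure.regionprops(labels):
        if region.area < min_area:
            continue                      # too small to be a mound
        # 3. Shape rule: mounds and shell rings are roughly circular.
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if circularity >= min_circularity:
            candidates.append(region)
    return candidates

# dem = np.load("lidar_tile.npy")         # hypothetical elevation tile
# for r in find_mound_candidates(dem):
#     print(r.centroid, r.area)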

Lipo's team systematically identified over 160 previously undetected mound features using LiDAR data from Beaufort County, S.C., and an OBIA approach. The result improves the overall understanding of settlement patterns by providing systematic knowledge about past landscapes, said Lipo.

"Through the use of OBIA, archaeologists can now repeatedly generate data about the archaeological record and find historic and pre-contact sites over massive areas that would be cost-prohibitive using pedestrian survey. We can now also peer beneath the dense canopy of trees to see things that are otherwise obscured. In areas like coast South Carolina, with large swaths of shallow bays, inlets and bayous that are covered in forest, OBIA offers us our first look at this hidden landscape."

Having demonstrated the effectiveness of OBIA in conditions of dense vegetation and optimized their processing, Lipo and his team are expanding their efforts to include much larger areas.

"Fortunately, satellite and LiDAR data are now available for much of the eastern seaboard, so undertaking a large-scale project is now a task that is achievable," said Lipo. "Due to climate change and sea-level rise, many major mounds and middens on the East Coast are threatened by erosion and inundation. It is urgent we document this pre-contact landscape as soon as possible, in order to learn as much as we can about the past before it is gone forever."

Credit: 
Binghamton University

First practice guidelines for clinical evaluation of Alzheimer's disease

CHICAGO, July 22, 2018 - Despite more than two decades of advances in diagnostic criteria and technology, symptoms of Alzheimer's disease and related dementias (ADRD) too often go unrecognized or are misattributed, causing delays in appropriate diagnoses and care that are both harmful and costly. Contributing to the variability and inefficiency is the lack of multidisciplinary ADRD evaluation guidelines to inform U.S. clinicians in primary and specialty care settings.

Responding to the urgent need for more timely and accurate Alzheimer's disease diagnosis and improvement in patient care, a workgroup convened by the Alzheimer's Association has developed 20 recommendations for physicians and nurse practitioners. There currently are no U.S. national consensus best clinical practice guidelines that provide integrated multispecialty recommendations for the clinical evaluation of cognitive impairment suspected to be due to ADRD for use by primary and specialty care medical and nursing practitioners.

The recommendations range from enhancing efforts to recognize and more effectively evaluate symptoms to compassionately communicating with and supporting affected individuals and their caregivers. The recommendations were reported at the Alzheimer's Association International Conference (AAIC) 2018 by Alireza Atri, MD, PhD, Co-chair of the AADx-CPG workgroup, and Director of the Banner Sun Health Research Institute, Sun City, AZ, and Lecturer in Neurology at the Center for Brain/Mind Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston. Details of the AADx-CPG workgroup document are being honed with input from leaders in the field, with the goal of publication in late 2018.

At their core, the recommendations include guidance that:

All middle-aged or older individuals who self-report cognitive, behavioral or functional changes, or whose care partner or clinician reports such changes, should undergo a timely evaluation.

Concerns should not be dismissed as "normal aging" without a proper assessment.

Evaluation should involve not only the patient and clinician but, almost always, also involve a care partner (e.g., family member or confidant).

"Too often cognitive and behavioral symptoms due to Alzheimer's disease and other dementias are unrecognized, or they are attributed to something else," said James Hendrix, PhD, Alzheimer's Association Director of Global Science Initiatives and staff representative to the workgroup. "This causes harmful and costly delays in getting the correct diagnosis and providing appropriate care for persons with the disease. These new guidelines will provide an important new tool for medical professionals to more accurately diagnose Alzheimer's and other dementias. As a result, people will get the right care and appropriate treatments; families will get the right support and be able to plan for the future."

In 2017, the Alzheimer's Association convened a Diagnostic Evaluation Clinical Practice Guideline workgroup (AADx-CPG workgroup) of experts from multiple disciplines in dementia care and research, representing medical, neuropsychology, and nursing specialties. The AADx-CPG workgroup used a rigorous process for evidence-based consensus guideline development.

"Our goal is to provide evidence-based and practical recommendations for the clinical evaluation process of cognitive behavioral syndromes, Alzheimer's disease and related dementias that are relevant to a broad spectrum of U.S. health care providers," Atri said. "Until now, we have not had highly specific and multispecialty U.S. national guidelines that can inform the diagnostic process across all care settings, and that provide standards meant to improve patient autonomy, care, and outcomes."

"Whether in primary or specialty care, the recommendations guide best practices for partnering with the patient and their loved ones to set goals, and to appropriately educate and evaluate memory, thinking and personality changes," Atri added.

The Clinical Practice Guidelines (CPG) recognize the broader category of "Cognitive Behavioral Syndromes" -- indicating that neurodegenerative conditions such as ADRD lead to both behavioral and cognitive symptoms of dementia. As a result, these conditions can produce changes in mood, anxiety, sleep, and personality -- plus interpersonal, work and social relationships -- that are often noticeable before more familiar memory and thinking symptoms of ADRD appear.

"In all cases, there is something we can do to help and support those who entrust us with the privilege of advising and caring for them," said Atri. "The guidelines can empower patients, families, and clinicians to expect that symptoms will be evaluated in a patient-centered, structured, and collaborative manner. In addition, they help to ensure that, regardless of the specific diagnosis, the results are communicated in a timely and compassionate way to help patients and families live the best lives possible."

The 20 consensus recommendations describe a multi-tiered approach to the selection of assessments and tests that are tailored to the individual patient. The recommendations emphasize obtaining a history from not only the patient but also from someone who knows the patient well to:

First, establish the presence and characteristics of any substantial changes, to categorize the cognitive behavioral syndrome.

Second, investigate possible causes and contributing factors to arrive at a diagnosis/diagnoses.

Third, appropriately educate, communicate findings and diagnosis, and ensure ongoing management, care and support.

"Evaluation of cognitive or behavioral decline is often especially challenging in primary care settings," said Bradford Dickerson, MD, MMSc, Co-chair of the workgroup, and Director of the Frontotemporal Disorders Unit at Massachusetts General Hospital, and Associate Professor of Neurology at Harvard Medical School, Boston. "Also, with recent advances in available diagnostic technology, there is a need for guidance on use of such tests in specialty and subspecialty care settings."

According to the workgroup, a timely and accurate diagnosis of ADRD increases patient autonomy at earlier stages, when patients are most able to participate in treatment, life and care decisions; allows for early intervention to maximize care and support opportunities, and available treatment outcomes; and may also reduce health care costs. The Alzheimer's Association encourages early diagnosis because it provides the opportunity for people with Alzheimer's to participate in decisions about their care, current and future treatment plans, and legal and financial planning, and may also increase their chances of participating in Alzheimer's research studies.

"Next steps include reaching out to physician groups and medical societies to encourage primary care doctors, dementia experts, and nurse practitioners to adopt these new best clinical practice guidelines," Hendrix said.

Credit: 
Alzheimer's Association

Women under-treated for heart attacks die at twice the rate of men

Published in today's Medical Journal of Australia, the study of 2898 patients (2183 men, 715 women) reveals that, six months after hospital discharge, rates of death and serious adverse cardiovascular events among women who presented with ST-Elevation Myocardial Infarction (STEMI) over the past decade were more than double the rates seen in men.

Sex differences in the management and outcomes of patients with acute coronary syndromes such as STEMI have been reported in the medical literature, but most studies fail to adjust for 'confounding' factors that can affect the accuracy of findings.

This new study, authored by leading cardiac specialists and researchers from across Australia, offers robust insights into this life-threatening condition by adjusting for factors that could affect treatment and health outcomes.

"We focused on patients with ST-Elevation Myocardial Infarction because the clinical presentation and diagnosis of this condition is fairly consistent, and patients should receive a standardised management plan," said the University of Sydney's Professor Clara Chow who is a cardiologist at Westmead hospital, the study's senior author.

"The reasons for the under-treatment and management of women compared to men in Australian hospitals aren't clear.

"It might be due to poor awareness that women with STEMI are generally at higher risk, or by a preference for subjectively assessing risk rather than applying more reliable, objective risk prediction tools.

"Whatever the cause, these differences aren't justified. We need to do more research to discover why women suffering serious heart attacks are being under-investigated by health services and urgently identify ways to redress the disparity in treatment and health outcomes."

Professor David Brieger, co-author of the study and leader of the CONCORDANCE registry from which the findings were extracted, agrees: "While we have long recognised that older patients and those with other complicating illnesses are less likely to receive evidence based treatment, this study will prompt us to refocus our attention on women with STEMI."

What is STEMI or ST-elevation myocardial infarction?

A STEMI or ST-elevation myocardial infarction (heart attack) happens when a fatty deposit on an arterial wall causes a sudden and complete blockage of blood to the heart, starving it of oxygen and causing damage to the heart muscle.

A STEMI diagnosis is typically made initially by administering an electrocardiogram (ECG) that reveals a tell-tale ECG signature. These heart attacks can cause sudden death due to ventricular fibrillation (a serious heart rhythm disturbance) or acute heart failure (when the heart can't pump enough blood to properly supply the body).

STEMI represents about 20 percent of all heart attack presentations. In 2016, an average of 22 Australians died from a heart attack each day.

About the study

Researchers collected data from 41 hospitals across all Australian states and territories between February 2009 and May 2016. Twenty-eight hospitals (68 percent) are in metropolitan regions and 13 are in rural locations.

Data was sourced from the CONCORDANCE (Cooperative National Registry of Acute Coronary care, Guideline Adherence and Clinical Events) registry, intended for use by clinicians to help improve the quality of patient care in line with treatment guidelines.

Main outcome measures: the primary outcome was total revascularisation, a composite endpoint encompassing patients receiving PCI (percutaneous coronary intervention), thrombolysis, or coronary artery bypass grafting (CABG) during the index admission.

Secondary outcomes: timely revascularisation rates; major adverse cardiac event rates; clinical outcomes and preventive treatments at discharge; mortality in hospital and 6 months after admission.

The average age of women presenting with STEMI was 66.6 years; the average age of men was 60.5 years.

More women than men had hypertension, diabetes, a history of prior stroke, chronic kidney disease, chronic heart failure, or dementia. Fewer had a history of previous coronary artery disease or myocardial infarction, or of prior PCI or CABG.

Dr Clara Chow is Professor of Medicine at Sydney Medical School, a Consultant Cardiologist at Westmead Hospital and Academic Director of the Westmead Applied Research Centre (WARC). Her principal research interests are in cardiovascular disease prevention in Australia and internationally.

Credit: 
University of Sydney

The need for speed: Why malaria parasites are faster than human immune cells

image: Mosquitoes (left) inject malaria parasites (top middle) into skin. The parasites move very rapidly (bottom middle left) using a protein that is very similar to the one our cells (lower middle right) use to construct their form and contract: actin (right). Douglas and co-workers found that certain parts of the parasite protein are responsible for the different behavior of the actin in parasites.

Image: 
Heidelberg University Hospital/ HITS/ ZMBH

Malaria parasites of the genus Plasmodium move ten times faster through the skin than immune cells, whose job it is to capture such pathogens. Heidelberg scientists have now found a reason why the parasite is faster than its counterpart. They did this by studying actin, a protein that is important to the structure and movement of cells and that is built differently in parasites and mammals. The findings of Ross Douglas and his colleagues at the Centre for Infectious Diseases (Department of Parasitology) at Heidelberg University Hospital, the Centre for Molecular Biology at the University of Heidelberg (ZMBH), and the Heidelberg Institute for Theoretical Studies (HITS) are not only changing our understanding of a key component of all living cells, but they also provide information that could help in the discovery of new drugs.

How does the malaria parasite move so fast?

Like Lego blocks, which can be put together into long chains, actin is assembled into long rope-like structures called filaments. These filaments are important for the proper functioning of cells - such as muscle cells - and enable each of our movements. However, they also serve to enable immune system cells to move and capture invading pathogens. Likewise, they are of great importance for the movement of the malaria parasite. "Strangely enough, malaria parasites are ten times nimbler than the fastest of our immune cells and literally outrun our immune defences. If we understand this important difference in movement, we can target and stop the parasite," says Dr. Ross Douglas from the Heidelberg Centre for Infectious Diseases. A key issue in the paper published in the journal PLOS Biology is how the rate at which actin filaments are formed and broken down differs between parasites and mammals.
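
The comparison the paper draws can be caricatured as a balance between assembly and disassembly rates. The Python sketch below is a toy deterministic model with invented rate constants, not measurements from the study; it only illustrates how the same building block can yield longer, stabler filaments or shorter, more dynamic ones depending on kinetics.

def filament_length(k_on, k_off, monomer=10.0, steps=1000, dt=0.01):
    # Deterministic caricature of polymerization: dL/dt = k_on*[monomer] - k_off.
    length = 0.0
    for _ in range(steps):
        length = max(0.0, length + (k_on * monomer - k_off) * dt)
    return length

# Same building block, different kinetics, very different filaments:
print(filament_length(k_on=1.0, k_off=2.0))  # slow turnover: long, stable
print(filament_length(k_on=1.0, k_off=9.5))  # fast turnover: short, dynamic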

Mammal-parasite protein hybrids lead to new insights

It was known that certain sections of the actin protein differ between the parasite and mammals. To investigate the reasons behind the difference in speed, scientists replaced parts of the parasite protein with corresponding sections of protein from mammalian actin in the laboratory. "When we made these changes in the parasite, we noticed that some parasites could not survive at all and others suddenly hesitated when they moved," says Dr. Ross Douglas. To investigate the underlying mechanism, the participating scientists performed experiments and computer simulations ranging from modeling at the molecular level to observing the parasites in live animals. "High-performance computers were required for simulations to observe how the structure and dynamics of actin filaments change when individual sections are swapped," says Prof. Rebecca Wade, who heads research groups at the Heidelberg Institute for Theoretical Studies (HITS) and at the Centre for Molecular Biology (ZMBH) at Heidelberg University that investigate protein interactions via computer simulations and mathematical modelling.

These findings could now be used to discover chemical compounds that selectively target parasite actin and affect either the building or breakdown of the filament. "In this way, it could be possible to effectively stop the entire parasite," Dr. Ross Douglas summarizes. An example of this approach is tubulin, another type of protein which is involved in the building of the cytoskeleton via so-called microtubules. Medicines that target parasite microtubules - such as mebendazole - have been successfully used for decades to treat humans and animals for parasitic worms. This joint research project was partially funded by the innovation fund FRONTIER at Heidelberg University.

Credit: 
Heidelberg Institute for Theoretical Studies (HITS)

Academic environmentalists identify the most pressing issues posed by chemicals in the environment

Scientists have identified 22 key research questions surrounding the risks associated with chemicals in the environment in Europe.

Chemicals released into the environment by human activity are resulting in biodiversity loss; increased natural hazards; threats to food, water and energy security; negative impacts on human health; and degradation of environmental quality.

Now, an international study published in Environmental Toxicology and Chemistry involving scientists from the University of York has identified the 22 most important research questions that need to be answered to fill the most pressing knowledge gaps over the next decade.

The list includes questions about which chemicals pose the greatest threat to European populations and ecosystems, where the hotspots of key contaminants are around the globe, and how we can develop methods to protect the environment.

The research, which resulted from a recent 'big questions' exercise involving researchers from across Europe, aims to serve as a roadmap for policy makers, regulators, industry and funders and result in a more coordinated approach to studying and regulating chemicals in the environment.

One of the lead authors of the study, Dr Alistair Boxall from the University of York's Environment Department, said: "Our research has highlighted international scientists' research priorities and our key knowledge gaps when it comes to the risks and impacts of chemicals."

The study aims to help focus scientific effort on the questions that really matter and inform decisions about the type of research needed to update policies and regulations.

"This research is part of a much larger global horizon scanning exercise co-ordinated by the Society for Environmental Toxicology and Chemistry. Similar studies to ours are being performed in North America, Latin America, Africa, Asia and Australasia. Taken together these exercises should help to focus global research into the impacts of chemicals in the environment."

A key suggestion in the report is that the harmful effects of chemicals on human health and the environment should be considered in combination with other stressors.

Boxall added, "Considering chemicals in isolation can result in a simplistic assessment that doesn't account for the complexity of the real world. For example, a fish won't be exposed to a single chemical but to hundreds if not thousands of chemicals. Other pressures, such as temperature stress, will also be at play and it is likely that these components work together to adversely affect ecosystem health."

Credit: 
University of York

Eagle-eyed machine learning algorithm outdoes human experts

image: Radiation-damaged materials resemble a cratered lunar surface, and machine learning can now help with nuclear reactor design by finding a specific variety of defect faster and more accurately than expert humans.

Image: 
Kevin Fields

MADISON, Wis. -- Artificial intelligence is now so smart that silicon brains frequently outthink people.

Computers operate self-driving cars, pick friends' faces out of photos on Facebook, and are learning to take on jobs typically entrusted only to human experts.

Researchers from the University of Wisconsin-Madison and Oak Ridge National Laboratory have trained computers to quickly and consistently detect and analyze microscopic radiation damage to materials under consideration for nuclear reactors. And the computers bested humans in this arduous task.

"Machine learning has great potential to transform the current, human-involved approach of image analysis in microscopy," says Wei Li, who earned his master's degree in materials science and engineering this year from UW-Madison.

Many problems in materials science are image-based, yet few researchers have expertise in machine vision -- making image recognition and analysis a major research bottleneck. As a student, Li realized that he could leverage training in the latest computational techniques to help bridge the gap between artificial intelligence and materials science research.

Li, with Oak Ridge staff scientist Kevin Field and UW-Madison materials science and engineering professor Dane Morgan, used machine learning to make artificial intelligence better than experienced humans at analyzing damage to potential nuclear reactor materials. The collaborators described their approach in a paper published July 18 in the journal npj Computational Materials.

Machine learning uses statistical methods to guide computers toward improving their performance on a task without receiving any explicit guidance from a human. Essentially, machine learning teaches computers to teach themselves.

"In the future, I believe images from many instruments will pass through a machine learning algorithm for initial analysis before being considered by humans," says Morgan, who was Li's graduate school advisor.

The researchers targeted machine learning as a means to rapidly sift through electron microscopy images of materials that had been exposed to radiation, and identify a specific type of damage -- a challenging task because the photographs can resemble a cratered lunar surface or a splatter-painted canvas.

Automating that job, which is absolutely critical to developing safe nuclear materials, could make a time-consuming process much more efficient and effective.

"Human detection and identification is error-prone, inconsistent and inefficient. Perhaps most importantly, it's not scalable," says Morgan. "Newer imaging technologies are outstripping human capabilities to analyze the data we can produce."

Previously, image-processing algorithms depended on human programmers to provide explicit descriptions of an object's identifying features. Teaching a computer to recognize something simple like a stop sign might involve lines of code describing a red octagonal object.

More complex, however, is articulating all of the visual cues that signal something is, for example, a cat. Fuzzy ears? Sharp teeth? Whiskers? A variety of critters have those same characteristics.

Machine learning now takes a completely different approach.

"It's a real change of thinking. You don't make rules. You let the computer figure out what the rules should be," says Morgan.

Today's machine learning approaches to image analysis often use programs called neural networks that seem to mimic the remarkable layered pattern-recognition powers of the human brain. To teach a neural network to recognize a cat, scientists simply "train" the program by providing a collection of accurately labeled pictures of various cat breeds. The neural network takes over from there, building and refining its own set of guidelines for the most important features.
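
In code, that train-on-labeled-examples idea looks roughly like the following generic PyTorch sketch. It is not the authors' network or data, just a minimal stand-in: a tiny convolutional classifier fit to 270 labeled images (random tensors here), repeatedly comparing its predictions to the labels and adjusting its own internal rules.

import torch
import torch.nn as nn

# A tiny convolutional network: image in, "defect / no defect" score out.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for 270 labeled micrographs (random tensors for illustration).
images = torch.randn(270, 1, 64, 64)
labels = torch.randint(0, 2, (270,))

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to the labels
    loss.backward()                        # the network refines its own rules
    optimizer.step()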

Similarly, Morgan and colleagues taught a neural network to recognize a very specific type of radiation damage, called dislocation loops, which are some of the most common, yet challenging, defects to identify and quantify even for a human with decades of experience.

After training with 270 images, the neural network, combined with another machine learning algorithm called a cascade object detector, correctly identified and classified roughly 86 percent of the dislocation loops in a set of test pictures. For comparison, human experts found 80 percent of the defects.

"When we got the final result, everyone was surprised," says Field, "not only by the accuracy of the approach, but the speed. We can now detect these loops like humans while doing it in a fraction of the time on a standard home computer."

After he graduated, Li took a job with Google, but the research is ongoing. Morgan and Field are working to expand their training data set and teach a new neural network to recognize different kinds of radiation defects. Eventually, they envision creating a massive cloud-based resource for materials scientists around the world to upload images for near-instantaneous analysis.

"This is just the beginning," says Morgan. "Machine learning tools will help create a cyber infrastructure that scientists can utilize in ways we are just beginning to understand."

Credit: 
University of Wisconsin-Madison

NASA prepares to launch Parker Solar Probe, a mission to touch the Sun

video: Parker Solar Probe will swoop to within 4 million miles of the sun's surface, facing heat and radiation like no spacecraft before it. Launching in 2018, Parker Solar Probe will provide new data on solar activity and make critical contributions to our ability to forecast major space-weather events that impact life on Earth. Download this video in HD formats: https://svs.gsfc.nasa.gov/12978

Image: 
NASA's Goddard Space Flight Center

Early on an August morning, the sky near Cape Canaveral, Florida, will light up with the launch of Parker Solar Probe. No earlier than Aug. 6, 2018, a United Launch Alliance Delta IV Heavy will thunder to space carrying the car-sized spacecraft, which will study the Sun closer than any human-made object ever has.

On July 20, 2018, Nicky Fox, Parker Solar Probe's project scientist at the Johns Hopkins University Applied Physics Lab in Laurel, Maryland, and Alex Young, associate director for science in the Heliophysics Science Division at NASA's Goddard Space Flight Center in Greenbelt, Maryland, introduced Parker Solar Probe's science goals and the technology behind them at a televised press conference from NASA's Kennedy Space Center in Cape Canaveral, Florida.

"We've been studying the Sun for decades, and now we're finally going to go where the action is," said Young.

Our Sun is far more complex than meets the eye. Rather than the steady, unchanging disk it seems to human eyes, the Sun is a dynamic and magnetically active star. The Sun's atmosphere constantly sends magnetized material outward, enveloping our solar system far beyond the orbit of Pluto and influencing every world along the way. Coils of magnetic energy can burst out with light and particle radiation that travel through space and create temporary disruptions in our atmosphere, sometimes garbling radio and communications signals near Earth. The influence of solar activity on Earth and other worlds are collectively known as space weather, and the key to understanding its origins lies in understanding the Sun itself.

"The Sun's energy is always flowing past our world," said Fox. "And even though the solar wind is invisible, we can see it encircling the poles as the aurora, which are beautiful - but reveal the enormous amount of energy and particles that cascade into our atmosphere. We don't have a strong understanding of the mechanisms that drive that wind toward us, and that's what we're heading out to discover."

That's where Parker Solar Probe comes in. The spacecraft carries a lineup of instruments to study the Sun both remotely and in situ, or directly. Together, the data from these state-of-the-art instruments should help scientists answer three foundational questions about our star.

One of those questions is the mystery of the acceleration of the solar wind, the Sun's constant outflow of material. Though we largely grasp the solar wind's origins on the Sun, we know there is a point - as-yet unobserved - where the solar wind is accelerated to supersonic speeds. Data shows these changes happen in the corona, a region of the Sun's atmosphere that Parker Solar Probe will fly directly through, and scientists plan to use Parker Solar Probe's remote and in situ measurements to shed light on how this happens.

Second, scientists hope to learn the secret of the corona's enormously high temperatures. The visible surface of the Sun is about 10,000 F - but, for reasons we don't fully understand, the corona is hundreds of times hotter, spiking up to several million degrees F. This is counterintuitive, as the Sun's energy is produced at its core.

"It's a bit like if you walked away from a campfire and suddenly got much hotter," said Fox.

Finally, Parker Solar Probe's instruments should reveal the mechanisms at work behind the acceleration of solar energetic particles, which can reach speeds more than half the speed of light as they rocket away from the Sun. Such particles can interfere with satellite electronics, especially for satellites outside of Earth's magnetic field.

To answer these questions, Parker Solar Probe uses four suites of instruments.

The FIELDS suite, led by the University of California, Berkeley, measures the electric and magnetic fields around the spacecraft. FIELDS captures waves and turbulence in the inner heliosphere with high time resolution to understand the fields associated with waves, shocks and magnetic reconnection, a process by which magnetic field lines explosively realign.

The WISPR instrument, short for Wide-Field Imager for Parker Solar Probe, is the only imaging instrument aboard the spacecraft. WISPR takes images of structures like coronal mass ejections, or CMEs, jets and other ejecta from the Sun to help link what's happening in the large-scale coronal structure to the detailed physical measurements being captured directly in the near-Sun environment. WISPR is led by the Naval Research Laboratory in Washington, D.C.

Another suite, called SWEAP (short for Solar Wind Electrons Alphas and Protons Investigation), uses two complementary instruments to gather data. The SWEAP suite of instruments counts the most abundant particles in the solar wind -- electrons, protons and helium ions -- and measures such properties as velocity, density, and temperature to improve our understanding of the solar wind and coronal plasma. SWEAP is led by the University of Michigan, the University of California, Berkeley, and the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts.

Finally, the IS☉IS suite - short for Integrated Science Investigation of the Sun, and including ☉, the symbol for the Sun, in its acronym - measures particles across a wide range of energies. By measuring electrons, protons and ions, IS☉IS will understand the particles' lifecycles -- where they came from, how they became accelerated and how they move out from the Sun through interplanetary space. IS☉IS is led by Princeton University in New Jersey.

Parker Solar Probe is a mission some sixty years in the making. With the dawn of the Space Age, humanity was introduced to the full dimension of the Sun's powerful influence over the solar system. In 1958, physicist Eugene Parker published a groundbreaking scientific paper theorizing the existence of the solar wind. The mission is now named after him, and it's the first NASA mission to be named after a living person.

Only in the past few decades has technology come far enough to make Parker Solar Probe a reality. Key to the spacecraft's daring journey are three main breakthroughs: the cutting-edge heat shield, the solar array cooling system, and the advanced fault management system.

"The Thermal Protection System (the heat shield) is one of the spacecraft's mission-enabling technologies," said Andy Driesman, Parker Solar Probe project manager at the Johns Hopkins Applied Physics Lab. "It allows the spacecraft to operate at about room temperature."

Other critical innovations are the solar array cooling system and the on-board fault management system. The solar array cooling system allows the solar arrays to produce power under the intense thermal load from the Sun.

Using data from seven Sun sensors placed around the edges of the shadow cast by the heat shield, the fault management system protects the spacecraft during the long periods when it can't communicate with Earth. If it detects a problem, Parker Solar Probe will self-correct its course and pointing to ensure that its scientific instruments remain cool and functioning.

Parker Solar Probe's heat shield - called the thermal protection system, or TPS - is a sandwich of carbon-carbon composite surrounding nearly four and a half inches of carbon foam, which is about 97% air. Though it's nearly eight feet in diameter, the TPS adds only about 160 pounds to Parker Solar Probe's mass because of its lightweight materials.

Though the Delta IV Heavy is one of the world's most powerful rockets, Parker Solar Probe is relatively small, about the size of a small car. What Parker Solar Probe needs is energy: getting to the Sun takes a lot of energy at launch to achieve its orbit. That's because any object launched from Earth starts out traveling around the Sun at the same speed as Earth - about 18.5 miles per second - so a spacecraft must undergo an enormous change in velocity to shed that sideways momentum, change direction, and fall in toward the Sun.
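
The scale of that velocity change can be estimated with the vis-viva equation. The Python sketch below is back-of-the-envelope orbital mechanics, not mission math: it compares Earth's heliocentric speed with the speed a spacecraft could have at Earth's distance while on an ellipse dipping to Parker Solar Probe's eventual 3.8-million-mile perihelion. The gap, roughly 21 km/s, is why the mission leans on seven Venus gravity assists rather than launch energy alone.

import math

MU_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11       # astronomical unit, m

def orbit_speed(r, a):
    # Vis-viva: heliocentric speed (m/s) at radius r on an orbit with
    # semi-major axis a.
    return math.sqrt(MU_SUN * (2.0 / r - 1.0 / a))

v_earth = orbit_speed(AU, AU)   # circular speed at 1 AU: ~29.8 km/s (18.5 mi/s)

# Ellipse from 1 AU down to a ~3.8 million-mile (~0.041 AU) perihelion:
r_peri = 0.041 * AU
a_transfer = (AU + r_peri) / 2.0
v_needed = orbit_speed(AU, a_transfer)   # speed at 1 AU on that ellipse

print(f"Earth's heliocentric speed: {v_earth / 1e3:.1f} km/s")
print(f"Speed needed at 1 AU:       {v_needed / 1e3:.1f} km/s")
print(f"Velocity to shed:           {(v_earth - v_needed) / 1e3:.1f} km/s")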

The timing of Parker Solar Probe's launch - between about 4 and 6 a.m. EDT, and within a period lasting about two weeks - was very precisely chosen to send Parker Solar Probe toward its first, vital target for achieving such an orbit: Venus.

"The launch energy to reach the Sun is 55 times that required to get to Mars, and two times that needed to get to Pluto," said Yanping Guo from the Johns Hopkins Applied Physics Laboratory, who designed the mission trajectory. "During summer, Earth and the other planets in our solar system are in the most favorable alignment to allow us to get close to the Sun."

The spacecraft will perform a gravity assist at Venus, shedding some of its speed into Venus' well of orbital energy and drawing Parker Solar Probe into an orbit that - already, on its first pass - carries it closer to the solar surface than any spacecraft has ever gone, well within the corona. Parker Solar Probe will perform similar maneuvers six more times throughout its seven-year mission, guiding the spacecraft into a final sequence of orbits that pass just over 3.8 million miles from the photosphere.

"By studying our star, we can learn not only more about the Sun," said Thomas Zurbuchen, the associate administrator for the Science Mission Directorate at NASA HQ. "We can also learn more about all the other stars throughout the galaxy, the universe and even life's beginnings."

Credit: 
NASA/Goddard Space Flight Center

Fewer injuries in girls' sports when high schools have athletic trainers

image: Study shows that recurrent injury rates are three times higher in girls' basketball in schools without athletic trainers.

Image: 
Lurie Children's

Availability of a full-time certified athletic trainer in high school reduces overall and recurrent injury rates in girls who play on the soccer or basketball team, according to a study published in Injury Epidemiology. Schools with athletic trainers were also better at identifying athletes with concussion. This is the first study to compare injury rates in schools that have an athletic trainer with those that do not.

"Our results are significant because currently only about a third of high schools have access to a full-time athletic trainer," says study co-author Cynthia LaBella, MD, Medical Director of the Institute for Sports Medicine at Ann & Robert H. Lurie Children's Hospital of Chicago, and Professor of Pediatrics at Northwestern University Feinberg School of Medicine. "The positive impact we observed is likely because athletic trainers are licensed healthcare professionals who work with coaches and athletes to apply evidence-based injury prevention strategies, and they are able to recognize and manage injuries when they happen, which may reduce severity or complications."

LaBella and colleagues analyzed data from two injury reporting systems, for high schools with athletic trainers and for those without, over a two-year period. They found that overall injury rates in both girls' soccer and basketball were significantly higher in schools without athletic trainers. Recurrent injury rates were six times higher in girls' soccer and nearly three times higher in girls' basketball in schools without athletic trainers.

However, the study also found that concussion rates in both sports were significantly higher in schools with athletic trainers.

"Although rates of concussion were lower in schools without athletic trainers, it is unlikely that fewer concussions are occurring in these schools," says Dr. LaBella. "More likely, concussions are reported more often in schools with athletic trainers because these professionals are better skilled than coaches and athletes in identifying signs and symptoms of concussions and remove athletes with suspected concussion from play until they can be evaluated and cleared for return by an appropriate healthcare provider."

The study provides evidence-based support for position statements from medical professional organizations, such as the American Medical Association, American Academy of Pediatrics, American Academy of Family Physicians and American Academy of Neurology, that call for greater athletic trainer coverage for high school athletes.

Credit: 
Ann & Robert H. Lurie Children's Hospital of Chicago

Diabetes during pregnancy may increase baby's heart disease risk

Rockville, Md. (July 19, 2018)--Gestational diabetes may increase the risk of blood vessel dysfunction and heart disease in offspring by altering a smooth muscle protein responsible for blood vessel network formation. Understanding the protein's function in fetal cells may improve early detection of disease in children. The study is published ahead of print in the American Journal of Physiology--Cell Physiology.

Gestational diabetes, a state of prolonged high blood sugar during pregnancy, affects approximately 7 percent of pregnant women. Uncontrolled gestational diabetes may result in high blood pressure during pregnancy or in premature birth or stillbirth. Previous research has found that levels of a protein called transgelin are higher in offspring of women with gestational diabetes. Transgelin is found in the endothelial colony forming cells (ECFCs) that line the walls of blood vessels. Transgelin regulates cell migration, a process involved in wound healing and building blood vessel networks. A baby's umbilical cord blood is rich in ECFCs; dysfunction of these cells that occurs in the womb may play a role in long-term blood vessel health and increase the risk of children developing heart disease later in life.

Researchers from Indiana University School of Medicine studied the effects of elevated transgelin levels on cord blood ECFCs. Cord blood samples taken at the time of birth from women with gestational diabetes were compared to a control group without pregnancy complications. Cord blood ECFCs do not typically contain high levels of transgelin. However, the samples taken from the umbilical cord blood of the gestational diabetes group showed higher protein levels and increased dysfunction of the blood vessels during formation. Decreasing transgelin in the diabetes-exposed cells "significantly improved initial [blood vessel] network formation, ongoing network stabilization and cell migration," the research team wrote.

Improving the tools that measure an infant's diabetes exposure--and relevant protein fluctuations--at the time of birth "would increase the accuracy of health assessments to enable more informed predictions of long-term health outcomes," the researchers wrote. "Unfortunately, these [conditions] often go undiagnosed until children present with disease later in life, at which time the opportunity for prevention has ended."

Credit: 
American Physiological Society

Scientists uncover the role of a protein in production & survival of myelin-forming cells

image: The study by Scaglione et al. identifies PRMT5 as a molecule that promotes new myelin formation by acting on histones (proteins bound to DNA) and placing marks (CH3) that preclude the formation of obstacles to the differentiation of progenitor cells (by preventing KATs from depositing Ac marks).

Image: 
Carter Van Eitreim

NEW YORK, July 19, 2018 -- The nervous system is a complex organ that relies on a variety of biological players to ensure daily function of the human body. Myelin--a membrane produced by specialized glial cells--plays a critical role in protecting the fibers that help carry messages throughout the body. In the central nervous system (CNS), glial cells known as oligodendrocytes are responsible for producing myelin. Now, a paper published today in Nature Communications explains how researchers at the Advanced Science Research Center (ASRC) at The Graduate Center of The City University of New York have uncovered the role of a protein known as "PRMT5" in the production of myelin and, ultimately, proper development and function of the CNS.

From infancy through adolescence, myelinating oligodendrocytes are generated in abundance in the human brain by progenitor cells, in a process that is highly sensitive to hormones, nutrients and environmental conditions. In the adult brain, these progenitor cells--which, like stem cells, have the ability to differentiate into adult cells that perform specific tasks--serve as a reservoir for the generation of new myelin in response to learning and social experiences, or to repair myelin loss after injury (e.g., after stroke or immune attack on myelin, as in Multiple Sclerosis).

The molecular mechanisms that generate myelin-forming oligodendrocytes are only partially understood, but through their research, ASRC scientists are one step closer to identifying them. Their work has pinpointed PRMT5 as a protein that regulates the molecules responsible for stopping or promoting the expression of certain genes that are needed for survival of oligodendrocytes and production of myelin. In other words, PRMT5 essentially acts as a traffic cop, allowing progenitor cells to become oligodendrocytes and stopping the biological signals that would interfere with myelin production.

"We were able to show that when PRMT5 is present, the progenitor cells are able to differentiate and become myelin-producing cells," said Patrizia Casaccia, director of the ASRC's Neuroscience Initiative and the Einstein Professor of Biology at Hunter College and at The Graduate Center, CUNY. "We discovered that progenitor cells lacking PRMT5 function essentially commit suicide while they are in the process of transitioning into myelin-forming cells. This discovery is important from a developmental and a translational standpoint. On one end, our findings allow a better understanding of how myelin is formed and possibly repaired when damaged. On the other end, they warn about potentially the possibility that pharmacological inhibitors of PRMT5, currently evaluated for their toxic function on glial tumor cells, might also kill healthy cells and prevent new myelin formation.

ASRC researchers used three methods to eliminate PRMT5 and determine its role in myelin production in laboratory mice. First, they used CRISPR genetic ablation to target and eliminate the gene that produces PRMT5. In the second cohort, they used a pharmacological inhibitor to block activity of the protein. In the final cohort, they studied knockout mice that were born without the PRMT5-producing gene.

In each case, removing or blocking PRMT5 resulted in reduced progenitor cell differentiation and death of the cells that were attempting to become myelin producers.

"A logical next step was to try and determine how, in the absence or malfunction of PRMT5, we could help the progenitor cells differentiate and create myelin," said Antonella Scaglione, lead author of the paper and a postdoctoral research associate with the ASRC. "We were able to identify ways to rescue the differentiation process of oligodendrocyte progenitors lacking PRMT5."

The discovery of this correction was based on previous findings from the Casaccia laboratory about signals that interfere with myelin generation. These signals are carried out by enzymes called KATs (lysine acetyltransferases). The lab had previously shown that when KATs attach to nuclear proteins called histones, myelin formation is blocked. The researchers' new work shows that blocking KATs can favor myelin formation and also overcome the effect of PRMT5 inhibitors. These findings could be critical to improving the survival of patients with malignancies that need to be treated with PRMT5 inhibitors.

The ASRC's neuroscience research team is now focused on determining how to create favorable conditions for myelin-forming oligodendrocytes, with the aim of promoting healthy cognitive and behavioral development and identifying novel regenerative strategies for the injured brain.

Credit: 
Advanced Science Research Center, GC/CUNY

Finding a planet with a 10-year orbit in a few months

image: Data from the light curve of the star EPIC248847494. The transit is clearly visible in the upper right part of the image.

Image: 
© UNIGE

To discover and confirm the presence of a planet around a star other than the Sun, astronomers normally wait until it has completed three orbits. However, this very effective technique has its drawbacks: it cannot confirm the presence of planets with relatively long periods (it is ideally suited to periods of a few days to a few months). To overcome this obstacle, a team of astronomers under the direction of the University of Geneva (UNIGE) has developed a method that makes it possible to confirm the presence of a planet within a few months, even if it takes 10 years to circle its star. This new method is described for the first time in the journal Astronomy & Astrophysics.

The transit method, which consists of detecting a dip in the luminosity of the host star as the planet passes in front of it, is a very effective technique for finding exoplanets. It makes it possible to estimate the radius of the planet and the inclination of its orbit, and it can be applied to a large number of stars at the same time. However, it has a significant limitation: since it is necessary to wait for at least three passes in front of the star to confirm the existence of a planet, it is currently suitable only for detecting planets with rather short orbital periods (typically from a few days to a few months). We would have to wait more than 30 years to detect a planet similar to Jupiter, which needs 11 years to complete a single orbit.
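To make the radius estimate concrete, here is a minimal sketch in Python of how a transit depth translates into a planet radius; the depth and stellar radius used below are hypothetical illustrative values, not figures from the study:

    import math

    # The transit depth is the fractional dip in the star's brightness:
    # depth = (R_planet / R_star)^2, so R_planet = R_star * sqrt(depth).
    R_SUN_KM = 695_700.0       # solar radius in km
    R_JUPITER_KM = 69_911.0    # Jupiter radius in km

    depth = 0.01               # assumed 1% dip (hypothetical value)
    r_star_km = 1.0 * R_SUN_KM # assume a Sun-like host star

    r_planet_km = r_star_km * math.sqrt(depth)
    print(f"Planet radius ~ {r_planet_km / R_JUPITER_KM:.2f} Jupiter radii")
    # A ~1% dip around a Sun-like star implies a roughly Jupiter-sized planet.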

To overcome this obstacle, a team of astronomers led by researcher Helen Giles, from the Astronomy Department at the Faculty of Science of UNIGE and a member of the NCCR PlanetS, developed an original method. While analysing data from the K2 space telescope, the team found one star that showed a significant, long-duration temporary decrease in luminosity -- the signature of a possible transit, in other words, the passage of a planet in front of its star. "We had to analyse hundreds of light curves," explains the astronomer, "to find one where such a transit was unequivocal."

Helen Giles consulted recent data from the Gaia mission to determine the diameter of the star, referenced as EPIC248847494, and its distance from Earth: 1500 light-years. With that knowledge and the fact that the transit lasted 53 hours, she found that the planet orbits at 4.5 times the Earth-Sun distance and that, consequently, it takes about 10 years to complete one orbit. The key question left to answer was whether the object was a planet and not a star. The Euler telescope of UNIGE in Chile would provide the answer. By measuring the radial velocity of the star, which makes it possible to deduce the mass of the companion, she was able to show that its mass is less than 13 times that of Jupiter -- well below the minimum mass of a star (at least 80 times the mass of Jupiter).
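The period estimate follows from Kepler's third law: for a star of roughly one solar mass, P^2 ≈ a^3 when the period P is in years and the orbital distance a is in astronomical units. A quick sanity check of the figures quoted above, treating the stellar mass as approximately solar (an assumption made here purely for illustration):

    import math

    a_au = 4.5    # orbital distance quoted in the article, in astronomical units
    m_star = 1.0  # assumed stellar mass, in solar masses

    # Kepler's third law: P^2 = a^3 / M  (P in years, a in AU, M in solar masses)
    period_years = math.sqrt(a_au**3 / m_star)
    print(f"Orbital period ~ {period_years:.1f} years")  # ~9.5 years, i.e. "about 10"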

"This technique could be used to hunt habitable, Earth-like planets around stars like the Sun" enthuses Helen Giles, "we have already found Earths around red dwarf stars whose stellar radiation may have consequences on life which are not exactly known". With her method it will no longer be necessary to wait many years to know whether the detected single transit is due to the presence of a planet. "In the future, we could even see if the planet has one or more moons, like our Jupiter," she concludes.

Credit: 
Université de Genève

Social Impact Bonds have a role but are no panacea for public service reform

The study, led by the Policy Innovation Research Unit at the London School of Hygiene & Tropical Medicine with RAND Europe and funded by the NIHR Policy Research Programme of the Department of Health and Social Care, found that policymakers should focus on the components within social impact bonds (SIBs) that show promise in developing outcome-based contracting, such as personalised support for clients and greater flexibility and innovation in service delivery, while avoiding the notion that SIBs offer the only way forward for such contracting.

Despite claims that SIBs should generate rich quantitative information on the costs and outcomes of SIB-funded and non-SIB services, in practice the researchers were unable to access suitable quantitative data to make this comparison.

SIBs are a relatively new type of payment-for-performance contract in which public sector commissioners partner with private or third-sector social investors to fund interventions that seek to tackle complex social issues.

This report is the first to examine the impact of the SIB financing mechanism on each of the main groups of participants, including service providers.

Professor Nicholas Mays, Professor of Health Policy at the London School of Hygiene & Tropical Medicine, said: "The main demonstrable success of SIB projects in health and social care has been in helping marginalised groups who had, previously, been neglected by public services. It is much less clear that SIB-related services for other groups, such as people with chronic health conditions, have led to marked improvements in health."

The study evaluated nine projects across England, collectively known as the SIB 'Trailblazers', that received seed funding from the Government's Social Enterprise Investment Fund to develop and potentially implement a SIB. The team analysed Trailblazer plans and contracts, conducted interviews with national policy makers and local participants in Trailblazer SIBs (commissioners, investors, SIB specialist organisations and providers), as well as local participants in comparable non-SIB services.

The SIBs funded a wide range of different interventions for different clients: older people who are socially isolated; people with multiple chronic health conditions; entrenched rough sleepers; adolescents in care; and people with disabilities requiring long-term supported living.

Three models of SIB were identified: the Direct Provider SIB; SIB with Special Purpose Vehicle; and the Social Investment Partnership. Each allocated financial risks differently, with providers bearing more of the financial risk in the Direct Provider model than in the others.

Frontline staff were more aware of the financial incentives associated with meeting client outcomes in the Direct Provider model than in the Special Purpose Vehicle model.

Providers in the Trailblazers were more outcome-focused than providers of comparable non-SIB services.

Despite this, the up-front financing from investors tended to be released to providers in instalments tied to hitting volume and/or throughput targets rather than to improvements in client outcomes. The bulk of the subsequent payments to investors for achieving targets came from central government and sources such as the Big Lottery, rather than from local commissioners.

Only one Trailblazer reported having made any cashable savings during the evaluation period as a result of its SIB-financed interventions. Typically, the planning of the SIB services and their subsequent oversight were better resourced, and the services more flexibly provided, than in similar non-SIB services.

Professor Mays said: "Our research provides important information for governments looking for new financing mechanisms for health and care. So far at least, cashable savings from SIBs, despite early hopes and rhetoric, remain unproven. Policymakers should learn from different models but SIBs are no panacea for better commissioning of health and care services."

The researchers conclude that SIBs, as currently conceived, may have a role in specific circumstances, especially where outcomes are uncontroversial, easily attributable to the actions of the provider, and easily measured, but are unlikely to be widely applicable in public services.

The authors acknowledge limitations of the study, including that, with the exception of one Trailblazer that ended during the evaluation, the other four operational SIBs were evaluated during their early-to-mid period of implementation, making it possible that the performance of these projects will change before they conclude in two to five years' time.

Credit: 
London School of Hygiene & Tropical Medicine