Culture

In soil carbon measurements, tools tell the tale

Image: Gross classifies a soil profile during soil sampling. (Image credit: Jason James)

A (wo)man is only as good as his or her tools. In the case of soil scientists, they are only as good as the tools and methods they use. And when it comes to estimating soil organic carbon stocks, new research shows not all tools give the same results.

Soil organic carbon stocks are the amount of organic carbon found in soil. There are several common ways of measuring these stocks. Until now they were all believed to give pretty much the same results. Cole Gross, a graduate student in the Department of Renewable Resources at the University of Alberta, questioned this commonly-held assumption.

Gross explains that all organic material found in soil comes in some way from living things, such as decomposing plants and animals. This material is known as soil organic matter, and about half of its mass is carbon. The amount of soil organic carbon differs from soil to soil and from location to location.

"The ability to accurately measure soil organic carbon stocks and compare changes over time will help us make the best decisions about land use and management practices, which could ultimately improve soil health and productivity," Gross says. "If we can increase our understanding of soil organic carbon, we will also increase our understanding of climate-carbon feedbacks and better our climate models. Unreliable data regarding soil organic carbon stocks could lead to misconceptions about how land use, management, or climate change affects soil organic carbon."

Three commonly used sampling methods are the clod, core, and excavation methods. For the clod method, a scientist takes a clod of soil from the surface or another specific depth and brings it to the lab for chemical analysis. The core method uses a hollow tube to pull a core of soil from a specific depth for analysis. The excavation method is the least common of the three, as it requires the most time and labor; however, it is considered the most accurate. It involves digging a large pit to get at a large amount of soil.
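
To see why soil mass matters so much, it helps to recall how these samples become a carbon-stock number: the standard fixed-depth calculation multiplies bulk density, sampling depth and carbon concentration. The sketch below illustrates that arithmetic with hypothetical values (not figures from Gross's study), showing how a core sampler that loses or compacts soil pulls the stock estimate down.

```python
# Minimal sketch of the standard fixed-depth soil organic carbon (SOC) stock
# calculation; all numbers are hypothetical, not taken from the study.

def soc_stock_mg_per_ha(bulk_density_g_cm3, depth_cm, carbon_fraction):
    """SOC stock (Mg C/ha) = bulk density x depth x carbon fraction x 100."""
    return bulk_density_g_cm3 * depth_cm * carbon_fraction * 100

# Suppose excavation recovers the true bulk density of a 30 cm layer...
true_stock = soc_stock_mg_per_ha(1.30, 30, 0.02)   # 78 Mg C/ha
# ...while a core sampler that loses or compacts soil sees less mass.
core_stock = soc_stock_mg_per_ha(1.10, 30, 0.02)   # 66 Mg C/ha

shortfall = true_stock - core_stock
print(f"Core method underestimates the stock by {shortfall:.0f} Mg C/ha "
      f"({100 * shortfall / true_stock:.0f}%)")
```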

Although many believe the results of these three methods are similar, Gross found many key differences. He and his team found that the most commonly used method, the core method, greatly underestimated the soil organic carbon stock. Most of this difference occurred in soil deeper than 20 centimeters (just under 8 inches), which Gross says holds most of the soil organic carbon stock.

"Our results suggest that regional and global soil organic carbon stocks may be largely underestimated due to shallow sampling and the frequent use of core methods," he explains. "We found that these common soil sampling methods gave significantly different results and should not be assumed to be interchangeable."

Gross explains that the tools and methods soil scientists use are as important as, if not more important than, the data they provide.

"For much of the work that we do, small errors in the first steps of a long process can amplify later in the process," he says. "It is always important to look back and check assumptions and the accuracy of methods, even if these methods have been accepted for a long time."

Based on the research team's findings, Gross recommends that the potential for the core method to underestimate soil mass be determined in a given soil and then adjusted to account for this. Additionally, they found that the clod method can be used as a standard reference for soil mass measurements in non-rocky soils.

"The inspiration behind this study was a bit serendipitous," he says. "As a fairly new soil scientist, when the soil sampling core I was using broke in the field, I was instructed to use the clod method and told that the methods were interchangeable. This seemed curious to me and inspired my research into different soil sampling methods, which ultimately led to this study."

Credit: 
American Society of Agronomy

Red or yellow? A simple paper test detects false or substandard antibiotics

Image: A simple, paper-based test can quickly identify a falsified or substandard antibiotic. (Image credit: John Eisele/Colorado State University)

Antibiotics - medicines that treat bacterial infections - have saved millions of lives worldwide since their discovery in the early 20th century. When we fill a prescription at the doctor's office or pharmacy today, most of us take for granted that these commonly prescribed medicines are real, and of good quality.

But in the developing world, the manufacture and distribution of substandard, nonlegitimate medicines are widespread. The World Health Organization estimates that up to 10 percent of all drugs worldwide could be falsified, with antibiotics accounting for up to 50 percent of those. A counterfeit or diluted antibiotic can not only endanger an unwitting patient but also contribute to the wider problem of antimicrobial resistance.

A Colorado State University laboratory is putting chemistry to work on a simple, inexpensive way to identify such falsified and substandard antibiotics, offering a practical solution to a very real problem. The researchers have created a paper-based test that can quickly determine whether an antibiotic sample is of appropriate strength or has been diluted with filler substances like baking soda. Similar to the mechanism of a home pregnancy test, a strip of paper turns a distinctive color if a falsified antibiotic is present.

It's the latest paper-based chemical assay developed in the lab of Chuck Henry, professor in the Department of Chemistry. Researchers including first author Kat Boehle, a recently graduated Ph.D. student, describe the invention in ACS Sensors.

"In this country, we take for granted that our antibiotics are good - we don't even think twice," Boehle said. "But counterfeit and substandard antibiotics are an extremely common thing in other parts of the world. The goal of this project has been to make a cheap detection device that is easy to use; our device costs literally a quarter to make."

Here's how it works: Bacteria naturally produce an enzyme that can give them resistance to antibiotics by chemically binding to portions of the antibiotic molecule. The researchers used this very enzyme, called beta-lactamase, to empower their device to detect the presence of antibiotics in a given sample.

For the test, the user dissolves the antibiotic in water, and adds the solution to a small paper device. The paper contains a molecule called nitrocefin that changes color when it reacts with the enzyme. In this setup, the antibiotic and the nitrocefin on the paper are in competition to bind with the enzyme in a detection zone.

With a good antibiotic dose, there is little color change in the paper strip, because the antibiotic outcompetes the nitrocefin and successfully binds with the beta-lactamase enzyme. But in a falsified or weakened antibiotic, the paper goes red, because the enzyme instead reacts with the nitrocefin. In short, yellow means good (appropriate strength antibiotic); red means bad (diluted antibiotic).

The device also includes a pH indicator, to determine if a sample is acidic or alkaline. This extra information could further alert the user to whether a sample has been falsified with filler ingredients, which might otherwise confound the main test.

It's simple, it's fast (about 15 minutes), and it can be used by an untrained professional - all key goals of the project, Henry said. Traditional approaches for testing drug purity rely on large, expensive analytical lab equipment, including mass spectrometry, making them difficult or impossible for developing countries to access.

To ensure the usability of the device, the researchers included in their experiment a blind test with five users who were unfamiliar with the device or the science behind it. The users correctly identified 29 out of 32 antibiotic samples as either legitimate or false.

The test is effective for a broad spectrum of beta-lactam antibiotics, but there's room for refinement. The sample most often misidentified by untrained users was acetylsalicylic acid - commonly known as aspirin - which did not turn as red as the other false samples because its acidic pH destabilized the reaction. Being able to more accurately distinguish such specific chemicals will be the subject of future optimization of the new test, the researchers say.

Credit: 
Colorado State University

CEOs paid less than peers more likely to engage in layoffs, research finds

BINGHAMTON, N.Y. - CEOs who are paid less than their peers are four times more likely to engage in layoffs, according to research led by faculty at Binghamton University, State University of New York.

Scott Bentley, an assistant professor of strategy at Binghamton University's School of Management, worked on the research as a PhD student at Rutgers University. He and fellow researchers Rebecca Kehoe and Ingrid Fulmer, both associate professors at the Rutgers School of Management and Labor Relations, sought to find out if CEO pay was related to layoff announcements made by CEOs.

"In terms of strategic decisions that a CEO can make that could lead to higher pay, layoffs are one of the easiest to do," Bentley said. "Relative to other decisions such as mergers or acquisitions, layoffs typically don't need the approval of shareholders, the board or regulators, and they don't take years to do. Layoffs can be determined overnight."

Researchers analyzed data that included CEO pay and layoff announcements made by S&P 500 firms from 1992-2014 in the financial services, consumer staples and IT industries.

After adjusting the analysis for a number of factors that could influence a layoff (industry conditions, company size, firm performance, etc.), the researchers found that "underpaid" CEOs were still four times more likely to announce a layoff.

"In a way, CEOs are just like any other type of employee. They are going to compare their pay to those around them," Bentley said. "The difference is that the average employee can't make strategic decisions for the company that influences their own pay. Executives can."

Bentley says what surprised him most was that the relationship between lower pay and the likelihood of layoffs all but disappears when a CEO is paid more than his or her peers.

"Right around the point where CEOs are paid equal to their peers, the effect kind of goes away. We found that there's this huge dropoff in the likelihood of announcing layoffs once your pay is relatively the same as, or more than, your peers," he said.

So, do these actions actually pay off for the CEO? Well, it depends.

On average, researchers found CEO pay generally increased in the year following a layoff when firm performance also improved.

"While there are some instances where pay increased when performance decreased, we found that if the company and the shareholders don't benefit from the layoffs, neither does the CEO in most cases," Bentley said.

He said the findings highlight the importance of corporate governance and aligning the interests of the CEO with shareholders and employees.

"While we can't necessarily restrict a CEO's behavior or motivations, there may be ways to restrict the extent to which they are rewarded or impacted by decisions such as layoffs."

Credit: 
Binghamton University

How the United States landed in a debt 'danger zone'

COLUMBUS, Ohio - The interaction of public and private debt in the United States reduced economic growth about 0.43 percentage points per year between 2009 and 2014, a new study suggests.

In addition, growth declined a further 0.40 percentage points due solely to high levels of private debt, after accounting for public debt.

Overall, the results suggest debt dragged U.S. growth down by at least 0.83 percentage points in this time period.
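
The combined figure is just the sum of the two reported effects; a one-line sketch of the arithmetic:

```python
# Adding the study's two reported growth penalties (percentage points per year,
# 2009-2014) to get the combined drag quoted above.
interaction_drag = 0.43   # public-private debt interaction
private_drag = 0.40       # additional drag from high private debt alone
print(f"Total estimated drag: {interaction_drag + private_drag:.2f} percentage points")
```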

"The effect that debt is having on our economic growth is much larger than we expected. We were surprised," said Mehmet Caner, co-author of the study and professor of economics at The Ohio State University.

The nation's GDP (Gross Domestic Product) grew 4.1 percent in the most recent quarter, but Caner said growth would have been even larger without the current level of debt.

"We should be worried," he said.

Most economists have examined how one type of debt - either public or private - affects economic growth. But this study is one of the first to show that the interaction of the two should be a real concern, Caner said.

"We were able to quantify the effects of this debt interaction and it is kind of scary."

Caner and his colleagues used data from 29 advanced countries (members of the Organisation for Economic Co-operation and Development, or OECD) from 1995 to 2014 to see how debt was related to economic growth. They found that when both public and private debt were relatively low, their interaction could stimulate economic growth.

Even when debt rises, increases in public debt can be offset by decreases in private debt, or vice versa. But if they are both at relatively high levels and increasing at the same time, their interaction can be particularly harmful for growth, results showed.

The study found that the interaction of public and private debt reaches a "danger zone" when it goes above 100 and 137 percent of the nation's GDP for public and private debt respectively.

The researchers calculated that 12 of the 29 countries studied - including the United States - were in this danger zone during the time of the study. The U.S. public-private debt interaction was at 203 percent in the 2009-2014 period.

If debt was reducing U.S. growth by around 1 percentage point a year - as this study suggests - that may explain a large portion of why growth went from 3.3 percent per year before the Great Recession (2007-2009) to about 2.2 percent since the recession.

"Since 2014, both public and private debt ratios in the U.S. have increased, indicating that debt has become an even greater obstacle to growth," Caner said.

Not all kinds of private debt had the same effects, the study found. Results showed that household debt had a much more negative effect than corporate debt.

The reason may be that corporate debt generally goes to more productive uses, such as investments in plants and machinery, than household spending does.

Why is the interaction between private and public debt important for economic growth?

Caner said one reason might be that the government guarantees much private debt, including mortgages and school loans.

"Greater private default often means greater public debt," he said.

These results suggest that Congress should work to control public debt, as many commentators and political groups have suggested, Caner said.

"The other implication is that agencies responsible for regulating private debt should not ignore the interaction between public and private debt, especially mortgage debt," he said.

Credit: 
Ohio State University

Picture this: Camera with no lens

Image: University of Utah electrical and computer engineering associate professor Rajesh Menon has discovered a way to create an optics-less camera in which a regular pane of glass or any see-through window can become the lens. (Image credit: Dan Hixson/University of Utah College of Engineering)

Aug. 21, 2018 -- In the future, your car windshield could become a giant camera sensing objects on the road. Or each window in a home could be turned into a security camera.

University of Utah electrical and computer engineers have discovered a way to create an optics-less camera in which a regular pane of glass or any see-through window can become the lens.

Their innovation was detailed in a research paper, "Computational Imaging Enables a 'See-Through' Lensless Camera," published in the newest issue of Optics Express and co-authored by University of Utah electrical and computer engineering graduate Ganghun Kim.

University of Utah electrical and computer engineering associate professor Rajesh Menon argues that all cameras were developed with the idea that humans look at and decipher the pictures. But what if, he asked, you could develop a camera that can be interpreted by a computer running an algorithm?

"Why don't we think from the ground up to design cameras that are optimized for machines and not humans. That's my philosophical point," he says.

If a normal digital camera sensor such as one for a mobile phone or an SLR camera is pointed at an object without a lens, it results in an image that looks like a pixelated blob. But within that blob is still enough digital information to detect the object if a computer program is properly trained to identify it. You simply create an algorithm to decode the image.

Through a series of experiments, Menon and his team of researchers took a picture of the University of Utah's "U" logo as well as video of an animated stick figure, both displayed on an LED light board. An inexpensive, off-the-shelf camera sensor was connected to the side of a plexiglass window, but pointed into the window while the light board was positioned in front of the pane at a 90-degree angle from the front of the sensor. The resulting image from the camera sensor, with help from a computer processor running the algorithm, is a low-resolution picture but definitely recognizable. The method also can produce full-motion video as well as color images, Menon says.

The process involves wrapping reflective tape around the edge of the window. Most of the light coming from the object in the picture passes through the glass, but just enough - about 1 percent - scatters through the window and into the camera sensor for the computer algorithm to decode the image.
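
The release does not spell out the decoding algorithm, but a common approach in lensless computational imaging is to calibrate a linear transfer matrix between the scene and the sensor and then invert it with regularization. The toy sketch below illustrates that general idea only; the dimensions, random transfer matrix and Tikhonov regularization are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_scene, n_sensor = 16 * 16, 32 * 32      # tiny toy dimensions
A = rng.normal(size=(n_sensor, n_scene))  # stand-in calibrated transfer matrix:
                                          # how each scene pixel spreads onto the sensor

x_true = np.zeros(n_scene)
x_true[40:60] = 1.0                       # a small bright patch standing in for the "U"

y = A @ x_true + 0.01 * rng.normal(size=n_sensor)   # pixelated blob plus noise

# Tikhonov-regularized least squares: x_hat = argmin ||Ax - y||^2 + lam*||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print("Correlation between scene and reconstruction:",
      round(float(np.corrcoef(x_true, x_hat)[0, 1]), 3))
```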

While the resulting photo is not enough to win a Pulitzer Prize, it would be good enough for applications such as obstacle-avoidance sensors for autonomous cars. But Menon says more powerful camera sensors can produce higher-resolution images.

Applications for a lensless camera can be almost unlimited. Security cameras could be built into a home during construction by using the windows as lenses. It could be used in augmented-reality goggles to reduce their bulk. With current AR glasses, cameras have to be pointed at the user's eyes in order to track their positions, but with this technology they could be positioned on the sides of the lens to reduce size. A car windshield could have multiple cameras along the edges to capture more information. And the technology also could be used in retina or other biometric scanners, which typically have cameras pointed at the eye.

"It's not a one-size-fits-all solution, but it opens up an interesting way to think about imaging systems," Menon says.

From here, Menon and his team will further develop the system, including 3-D images, higher color resolution and photographing objects in regular household light. His current experiments involved taking pictures of self-illuminated images from the light board.

Credit: 
University of Utah

Certain antibiotic-resistant infections on the rise, new research shows

WASHINGTON -- Nearly six percent of urinary tract infections analyzed by a California emergency department were caused by drug-resistant bacteria in a one-year study period, according to new research in Annals of Emergency Medicine. The bacteria were resistant to most of the commonly used antibiotics. And, in many cases, patients had no identifiable risk for this kind of infection, the study found.

"The rise of drug-resistant infections is worrisome," said Bradley W. Frazee, MD, attending physician, Alameda Health System Highland Hospital and lead study author. "What's new is that in many of these resistant urinary tract infections, it may simply be impossible to identify which patients are at risk. Addressing the causes of antibiotic resistance, and developing novel drugs, is imperative. A society without working antibiotics would be like returning to preindustrial times, when a small injury or infection could easily become life-threatening."

The authors urge some immediate changes to clinical practice, such as wider use of urine culture tests and a more reliable follow-up system for patients who turn out to have a resistant bug; improving emergency physicians' awareness of their hospital's antibiogram (a chart showing whether certain antibiotics work against certain bacteria); adhering to treatment guidelines; and knowing which antibiotics to avoid in certain circumstances.

The Centers for Disease Control and Prevention (CDC) estimates that currently 23,000 Americans die each year from antibiotic-resistant infections.

The bacteria analyzed in this study were mostly E. coli that were resistant to cephalosporin antibiotics. Historically, such resistant bacteria were found in hospital-based infections. But the authors note that they have been infecting more people outside of the hospital, particularly those with urinary tract infections. More than two in five (44%) of the infections analyzed were community-based (contracted outside of the hospital), the highest proportion reported in the United States to date.

Credit: 
American College of Emergency Physicians

This tiny particle might change millions of lives

Image: Nanoparticles move past the glomerular filtration barrier of the kidney to target diseased cells. (Illustration: Yekaterina (Katya) Kadyshevskaya, USC Bridge Institute at the Michelson Center for Convergent Bioscience)

Remember the scene in the movie Mission: Impossible when Tom Cruise has to sneak into the vault? He had to do all sorts of moves to avoid detection. That's what it's like to sneak a targeted drug into a kidney and keep it from getting eliminated from the body.

Since kidneys are the filtering agents in our body, they are keen to get rid of small particles that they sense do not belong. And if the kidney does not filter out a particle, excreting it through urine, it may be eliminated by the liver, which uses macrophages to search for and get rid of foreign bodies.

Researchers at the USC Viterbi School of Engineering, along with colleagues from the Keck School of Medicine at USC, have engineered peptide nanoparticles to outsmart the biological system and target the kidney cells. The innovation may prove critical to addressing chronic kidney disease.

One out of three Americans will have chronic kidney disease in their lifetime. To date, there have been few solutions for advanced kidney disease beyond dialysis and kidney transplant--both of which are incredibly expensive and taxing. Previously, doctors would also have to prescribe heavy doses of medication as they hoped some of the medication would be able to reach and target the kidney. However, this heavy dosing had adverse effects on other organs in the body.

While targeted drug delivery has long been an area of concentration for cancer research, the use of nanoparticles for targeted drug delivery to the kidneys has largely gone unexplored, says the study's lead author, Eun Ji Chung, a WiSE Gabilan Assistant Professor and Assistant Professor of Biomedical Engineering, Chemical Engineering and Materials Science, and Nephrology and Hypertension at USC and a professor in the new USC Michelson Center for Convergent Bioscience.

Essentially, the researchers took several months to create their kidney-targeting particle. This nanoparticle is a micelle, which is 10-20 times smaller than a traditional nanoparticle. This particular micelle is synthesized from a peptide chain formulated from lysine and glutamic acid. The extra-small size of the nanoparticle allows passage into the kidneys through the initial barrier of kidney filtration, while the peptide allows the nanoparticle to stay in the kidneys and potentially unload a drug at the site of the disease without being removed in the urine. In this way, the researchers take advantage of a natural mechanism of the body to target the kidneys and can minimize the systemic, off-target side effects that are characteristic of most kidney drugs.

Results of In Vivo Testing:

The researchers injected mice with fluorescent-labeled nanoparticles. They found that the nanoparticles they had engineered were more present in the kidneys than in other parts of the body, and thus could carry drugs more selectively than particles tested previously by other researchers. Furthermore, these biocompatible, biodegradable particles were able to clear out of the body in less than one week and would not damage other organs.

The study "Design and in vivo characterization of kidney-targeting multimodal micelles for renal drug delivery," was conducted by Eun Ji Chung, Jonathan Wang, Christopher Poon, Deborah Chin, Sarah Milkowski, Vivian Lu at the Viterbi School of Engineering; and Kenneth R. Hallows of the Keck School of Medicine at USC. It was featured in the journal Nano Research and Professor Chung was selected as a Young Innovator in Nanobiotechnology from the journal.

Funding for this research came from the University of Southern California (Provost Fellowship), the National Heart, Lung, and Blood Institute at the NIH, and the U.S. Department of Defense.

Credit: 
University of Southern California

New ESMO tumor DNA scale helps match patients with cancer to optimal targeted medicines

Image: The ESCAT grading system for cancer treatment decision-making. (Image credit: European Society for Medical Oncology)

Lugano, Switzerland - 21 August 2018 - A new scale for tumour DNA mutations, which will simplify and standardise choices for targeted cancer treatment, has been agreed by leading cancer specialists in Europe and North America. The scale, called ESCAT (ESMO Scale for Clinical Actionability of molecular Targets), is published this week in the Annals of Oncology (1). It aims to optimise patient care by making it easier to identify patients with cancer who are likely to respond to precision medicines, and to help make treatment more cost effective.

"Doctors receive a growing amount of information about the genetic make-up of each patient's cancer, but this can be difficult to interpret for making optimal treatment choices," explains Professor Fabrice André, Chair of the ESMO Translational Research and Precision Medicine Working Group (2) who initiated this project. "The new scale will help us distinguish between alterations in tumour DNA that are important for decisions about targeted medicines or access to clinical trials, and those which aren't relevant."

The new grading system classes alterations in tumour DNA according to their relevance as markers for selecting patients for targeted treatment, based on the strength of clinical evidence supporting them (Tier I-V, Table 1). It is the first time that a classification has been developed that is relevant to all potential targeted cancer medicines, not just those that have been approved for use by national regulatory bodies. The classification also enables mutations to be upgraded or downgraded in response to newly available data.

"For the first time, ESMO has created the tools to make it clear what data are needed for a mutation to be considered actionable and how this may change in response to new clinical data," says Dr Joaquin Mateo, lead author of the paper, Principal Investigator of the Prostate Cancer Translational Research Group from the Vall d'Hebron Institute of Oncology, Barcelona, Spain.

"The scale focuses on the clinical evidence for matching tumour mutations with the drugs we have in our clinics and gives us a common vocabulary for communication between clinicians, and for explaining potential treatment benefits to patients," he continues.

As ESMO disseminates ESCAT into clinical practice, it is hoped that cancer centres and laboratories will start to routinely include Tier I-V grading of genomic mutations in patients' clinical and laboratory reports and discuss results at tumour boards and clinics.

"If one mutation is Tier I and another is Tier III, it is important that everyone understands the need to prioritise the Tier I mutation in determining the patient's treatment and implementing precision medicine," Mateo points out.

ESCAT will also make it easier for clinicians and patients to discuss the results of multigene sequencing, which more and more patients are offered nowadays. Current testing techniques frequently show that many of the genes in a patient's tumour are mutated, but it is unclear which are relevant to treatment decisions. By using the scale to show which alterations are relevant, it becomes easier to identify and agree the right treatment for the right patient.

"ESCAT will bring order to the current jungle of mutation analysis so that we all speak the same language for classifying mutations and prioritising how we use them to enhance patient care," concludes André.

Credit: 
European Society for Medical Oncology

Areas with more alcohol vendors have higher hospital admission rates


Areas with a high density of alcohol outlets have higher drink-related hospital admission rates, a new study from the University of Sheffield has found.

The study, conducted by researchers from the University's School of Health and Related Research (ScHARR), revealed that the places in England with the most pubs, bars and nightclubs had a 13 per cent higher admission rate for acute conditions caused by alcohol such as drunkenness and vomiting.

These areas also had a 22 per cent higher hospital admission rate for chronic conditions caused by alcohol, such as liver disease, compared with places with the lowest density of alcohol vendors.

The research, funded by Alcohol Research UK, analysed both on-trade outlets - where alcohol can be bought and consumed on the premises such as pubs, clubs and restaurants - as well as off-trade outlets - where alcohol is purchased to drink elsewhere, like supermarkets and convenience stores.

The study, which is the largest of its kind worldwide, examined data on more than one million admissions wholly attributable to alcohol over 12 years. It included all 32,482 census areas in England.

The results also showed:

Places with the highest density of restaurants licenced to sell alcohol had nine per cent higher admission rates for acute conditions and nine per cent higher admission rates for chronic conditions caused by alcohol.

Areas with the highest density of other on-trade outlets (such as hotels, casinos and sports clubs) had 12 per cent higher admission rates for acute conditions and 19 per cent higher admission rates for chronic conditions caused by alcohol, compared with areas with the lowest density of other on-trade outlets.

Places with the highest density of convenience stores had 10 per cent higher admission rates for acute conditions and seven per cent higher admission rates for chronic conditions compared with areas with the lowest density of convenience stores.

Ravi Maheswaran, Professor of Epidemiology and Public Health at the University of Sheffield, said: "The strongest link was between pubs, bars and nightclubs and admissions for alcoholic liver disease.

"We also observed an association between restaurants licenced to sell alcohol and hospital admissions, which we had not expected. This needs further investigation to establish if there is a causal link.

"While convenience stores were clearly associated with hospital admissions, the association for supermarkets was modest, as we had expected. Supermarkets account for a significant proportion of alcohol sales, however they tend to serve large catchment areas whilst our study was set up to examine the effects of outlet density in small local areas."

Outlet density was measured as the number of alcohol retail outlets within a 1km radius of the centre of every residential postcode in England. This was classified into four categories, ranging from lowest to highest, and the analysis adjusted for other factors which could have influenced associations, including differences in age, socio-economic deprivation and hospital admission policies in different areas.
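
As a rough illustration of that density measure, the sketch below counts outlets within 1 km of each postcode centroid and bins the counts into four exposure categories. The coordinates and cut-points are synthetic assumptions for illustration, not the study's data or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
postcodes = rng.uniform(0, 10_000, size=(500, 2))   # postcode centroids (metres)
outlets = rng.uniform(0, 10_000, size=(300, 2))     # alcohol outlet locations

# Count outlets within a 1 km radius of each centroid (brute force is fine
# at this toy scale; real work would use a spatial index).
distances = np.linalg.norm(postcodes[:, None, :] - outlets[None, :, :], axis=2)
density = (distances <= 1_000).sum(axis=1)

# Classify into four exposure categories, lowest to highest density.
cuts = np.quantile(density, [0.25, 0.5, 0.75])
category = np.digitize(density, cuts)                # 0 = lowest, 3 = highest
print(np.bincount(category, minlength=4))            # postcodes per category
```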

Professor Maheswaran added: "Although we have observed clear associations between alcohol outlet densities and hospital admissions, our study cannot confirm if these associations are causally linked.

"However, there is emerging evidence from other studies suggesting that local licencing enforcement could reduce alcohol related harms."

The research was funded by Alcohol Research UK, an independent charity working to reduce alcohol-related harm through ensuring policy and practice can be developed on the basis of reliable, research-based evidence.

Dr James Nicholls, Director of Research and Policy Development at Alcohol Research UK said: "Understanding the relationship between outlet density and alcohol hospital admissions is essential to reducing harm. Local licensing authorities, in particular, need to factor this information into their decisions.

"We often hear that no individual outlet can be held responsible for increased hospital admissions, and because of this licensing teams can't plan on that basis. However, this study adds weight to the argument that licensing needs to also think about the overall level of availability in a given area.

"As the evidence on the relationship between availability and harm becomes stronger, those tasked with regulating the market need to respond."

Previous work by the University of Sheffield researchers showed a vast increase in the number of off-trade outlets, such as convenience stores and supermarkets, that sell alcohol. The number of convenience stores selling alcohol more than doubled from 2003 to 2013, an increase of 104 per cent. The number of supermarkets selling alcohol also increased by 33 per cent.

Credit: 
University of Sheffield

Sprouty 1 and 2 molecules offer great potential to combat cancer and chronic infections

Image: Gladstone scientists Shomyseh Sanjabi (right) and Hesham Shehata (left) discover how to enhance the longevity of cells that kill cancer tumors and infected cells. (Image credit: Gladstone Institutes)

SAN FRANCISCO--August 20, 2018--To fight viral infections, your immune system calls on CD8 T cells to kill the infected cells. The CD8 T cells can also be used in immunotherapy approaches to kill cancer cells, including the CAR T cell therapy currently attracting broad public attention.

"The problem is that CD8 T cells are often exhausted in cancer and chronic infections like HIV, so they die off or stop functioning properly," said Shomyseh Sanjabi, PhD, an assistant investigator at the Gladstone Institutes who has been studying this cell type for nearly 15 years. "I've been trying to better understand how these cells develop in order to find ways to help them regain their function and live longer."

When you initially get exposed to an invading pathogen, such as a virus, CD8 T cells begin to rapidly multiply. At this stage, they are called effector cells and act like foot soldiers, killing infected cells. Once the pathogen is gone, most effector cells die to ensure they don't begin to attack your own body.

The ones that survive become memory cells, which are more like specialized guards, patrolling your body for the same invaders for the rest of your life. The next time you get exposed to the same pathogen, these memory cells allow your body to respond much more quickly and protect you.

In a new study published in the scientific journal PNAS, Sanjabi and her team identified two molecules, Sprouty 1 and Sprouty 2, that modify the survival of effector T cells and the development of memory CD8 T cells. Their findings offer promising potential for immunotherapeutic strategies to combat cancer and chronic infections.

Better Without Sprouty

Using animal models that Sanjabi has been developing for the past 10 years, the researchers in her laboratory deleted both Sprouty 1 and Sprouty 2 from CD8 T cells to see what would happen.

They found that a larger than usual number of effector cells survive and become memory cells. The team also showed that the resulting memory cells, which lack the Sprouty molecules, actually have better protective capacity against a bacterial pathogen than regular memory cells.

They also showed these same memory cells consume less glucose (sugar) as an energy source than normal CD8 T cells. While effector cells depend on glucose to live, memory cells generally use more fatty acids.

"Tumor cells use a lot of glucose, so effector cells have difficulty surviving in the tumor environment because it doesn't have a sufficient source of energy," explained Hesham Shehata, PhD, former postdoctoral scholar in Sanjabi's laboratory and first author of the study. "While memory cells generally don't depend on glucose, our study suggests that effector cells without Sprouty 1 and 2 consume less glucose, so they could survive and function in a tumor environment much better."

Memory Is Good for Immunotherapy

Sanjabi's study offers a new way to increase the number, survival, and function of memory CD8 T cells, which could provide better protection against tumors and pathogenic infections.

"By shedding light on the role of Sprouty 1 and 2, our work revealed another layer of the underlying biology of T cells," said Shehata. "Cells that lack Sprouty 1 and 2 have immense potential not only to fight tumors, but chronic viral infections as well. It's exciting that our study can be applied to multiple contexts."

In the case of HIV, for instance, deleting the two Sprouty molecules could lead to memory cells that better survive and could effectively kill the activated cells harboring latent virus, one of the main barriers to a cure.

As for cancer immunotherapy, recent studies have shown that approaches using memory cells can help reduce tumor sizes or even completely eliminate tumors, compared with treatments using effector cells, which have led to more patients relapsing.

"There's been great interest within the scientific community to enhance the development and function of memory CD8 T cells, which work better for immunotherapies than effector T cells," said Sanjabi, who is also an associate professor of microbiology and immunology at UC San Francisco. "Our findings could provide an opportunity to improve future engineering of CAR T cells against tumors. This could potentially be used in combination with a genome editing technique like CRISPR that would remove the Sprouty 1 and 2 molecules from the cells to make them more effective."

Credit: 
Gladstone Institutes

Healthy diet linked to healthy cellular aging in women

ANN ARBOR--Eating a diet that is rich in fruits, vegetables and whole grains and low in added sugar, sodium and processed meats could help promote healthy cellular aging in women, according to a new study published in the American Journal of Epidemiology.

"The key takeaway is that following a healthy diet can help us maintain healthy cells and avoid certain chronic diseases," said lead author Cindy Leung, assistant professor of nutritional sciences at the University of Michigan School of Public Health. "Emphasis should be placed on improving the overall quality of your diet rather than emphasizing individual foods or nutrients."

In the study, researchers used telomere length to measure cellular aging.

Telomeres are DNA-protein structures located on the ends of chromosomes that promote stability and protect DNA. Age is the strongest predictor of telomere length: telomeres shorten with each cell cycle.

However, recent studies have shown that telomeres can also be shortened due to behavioral, environmental and psychological factors. Shorter telomeres have been associated with an increased risk for heart disease, type 2 diabetes and some cancers.

Leung and colleagues examined the diets of a nationally representative sample of nearly 5,000 healthy adults and how well they scored on four evidence-based diet quality indices, including the Mediterranean diet, the DASH diet and two commonly used measures of diet quality developed by the U.S. Department of Agriculture and the Harvard T.H. Chan School of Public Health.

For women, higher scores on each of the indices were significantly associated with longer telomere length.

"We were surprised that the findings were consistent regardless of the diet quality index we used," Leung said. "All four diets emphasize eating plenty of fruits, vegetables, whole grains and plant-based protein and limiting consumption of sugar, sodium and red and processed meat.

"Overall, the findings suggest that following these guidelines is associated with longer telomere length and reduces the risk of major chronic disease."

Co-author Elissa Epel, professor of psychiatry at the University of California, San Francisco, said "the commonality to all of the healthy diet patterns is that they are antioxidant and anti-inflammatory diets. They create a biochemical environment favorable to telomeres."

In men, the findings were in the same direction, but not statistically significant.

"We have seen some gender differences in previous nutrition and telomere studies," Leung said. "In our study, as well as in previous studies, men tended to have lower diet quality scores than women. Men also had higher intakes of sugary beverages and processed meats, both of which have been associated with shorter telomeres in prior studies.

"It's possible that not all foods affect telomere length equally and you need higher amounts of protective foods in order to negate the harmful effects of others. However, more research is needed to explore this further."

Credit: 
University of Michigan

Next-generation insect repellents to combat mosquito-borne diseases

BOSTON, Aug. 20, 2018 -- Nearly 700 million people suffer from mosquito-borne diseases -- such as malaria, West Nile, Zika and dengue fever -- each year, resulting in more than 1 million deaths. Increasingly, many species of mosquitoes have become resistant to the popular pyrethroid-based insecticides. Today, researchers report a new class of mosquito repellents based on naturally occurring compounds that are effective in repelling mosquitoes with potentially fewer environmental side effects than existing repellents.

The scientists will present their research today at the 256th National Meeting & Exposition of the American Chemical Society (ACS). ACS, the world's largest scientific society, is holding the meeting here through Thursday. It features more than 10,000 presentations on a wide range of science topics.

A brand-new video on the research is available at
http://bit.ly/acsmosquitoes.

"Our new repellents are based on how nature already works," Joel R. Coats, Ph.D., says. "For example, citronella, a spatial repellent that comes from lemongrass, contains naturally occurring essential oils that have been used for centuries to repel mosquitoes. But citronella doesn't last long and blows away easily. Our new, next-generation spatial repellents are variations of natural products that are longer-lasting and have greater repellency."

Coats and graduate students James S. Klimavicz and Caleb L. Corona at Iowa State University in Ames have been synthesizing and testing hundreds of compounds against mosquitoes. They knew that sesquiterpenoids, which are found in many plants, are effective insect repellents, but these large molecules are difficult to isolate from plants and hard to make and purify in the laboratory.

Because of the challenges of synthesizing sesquiterpenoids, Coats' team designed their repellents using smaller, less complex, easily obtainable molecules -- monoterpenoids and phenylpropanoid alcohols with known, short-term repellent activities against insects. By modifying these compounds chemically, they produced new potential repellents with higher molecular weights, making them less volatile and longer-lasting. Klimavicz has synthesized more than 300 compounds, the most effective of which are α-terpinyl isovalerate (a natural compound), citronellyl cyclobutanecarboxylate and citronellyl 3,3-difluorocyclobutanecarboxylate.

To determine the compounds' effectiveness as repellents against mosquitoes, Corona tests them in a tubular chamber developed in the Coats laboratory. The chamber has filter papers at either end. One filter paper has nothing on it; the other has the synthesized repellent applied. Then mosquitoes -- raised in the Iowa State University medical entomology lab -- are introduced into the chamber. Corona uses time-lapse photography and in-person monitoring over 2.5 hours to document whether the mosquitoes migrate away from the candidate repellents. The researchers are currently exploring computer tracking of mosquitoes using video footage to gain a better understanding of mosquito repellency and behavior when exposed to these compounds.
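
The release does not give the scoring metric, but chamber assays of this kind are often summarized with a simple spatial index that compares mosquito counts at the treated and untreated ends; the sketch below shows one such index with hypothetical counts.

```python
# Hypothetical counts; the Coats lab's exact metric is not specified in the release.

def spatial_activity_index(n_control_end, n_treated_end):
    """Ranges from -1 (all mosquitoes at the treated end, attraction)
    to +1 (all at the untreated end, strong repellency)."""
    total = n_control_end + n_treated_end
    return (n_control_end - n_treated_end) / total if total else 0.0

# Counts averaged over the time-lapse frames for one candidate compound
print(spatial_activity_index(n_control_end=42, n_treated_end=8))   # 0.68
```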

With this method, the researchers tested the repellents with Culex pipiens, the northern house mosquito, which is most closely linked to West Nile transmission in the Midwestern U.S.; Aedes aegypti, the yellow fever mosquito, which is also known to transmit the Zika and dengue viruses; and Anopheles gambiae, which transmits malaria.

"We think the mechanism of our terpene-based repellents, which try to mimic what nature does, is different from that of the pyrethroids," which many mosquito species have become resistant to, Coats says. "We believe these 'next-gen' spatial repellents are new tools that could provide additional protection against mosquitoes in treated yards, parks, campgrounds, horse stables and livestock facilities. Our next step is to understand more precisely how the repellents biologically affect the mosquitoes."

Credit: 
American Chemical Society

Natural disasters widen racial wealth gap

Damage caused by natural disasters and recovery efforts launched in their aftermaths have increased wealth inequality between races in the United States, according to new research from Rice University and the University of Pittsburgh.

"Damages Done: The Longitudinal Impacts of Natural Hazards on Wealth Inequality in the United States" will appear in an upcoming edition of Social Problems. A supplement to the paper highlights the wealth gap between whites and blacks attributable to natural disaster damage from 1999 through 2013 in 20 U.S. counties.

Researchers Junia Howell, a scholar at Rice's Kinder Institute for Urban Research and an assistant professor of sociology at the University of Pittsburgh, and Jim Elliott, a professor of sociology at Rice and a fellow at Rice's Kinder Institute, combined longitudinal data from nearly 3,500 families across the U.S. with governmental data on local natural disaster damages, FEMA aid and demographics. They followed these families from 1999 through 2013 as disaster damage of varying scale struck the counties where they lived, and examined how their personal wealth was affected.

"Last year the United States suffered more than $260 billion in direct damages from natural disasters --mainly from hurricanes Harvey, Irma and Maria," said Howell, who was the study's lead author. "And there were also numerous wildfires, floods and tornadoes. Data show that since 2000, approximately 99 percent of counties in the U.S. have experienced significant damage from some type of natural disaster, with costs expected to increase significantly over coming years. We wanted to investigate how these damages impact wealth inequality and accumulation."

Whites who lived in counties with only $100,000 in damage from 1999 to 2013 gained an average of approximately $26,000 in wealth. However, those who lived in counties with at least $10 billion in damage during the same time period gained nearly $126,000, the paper said.

"In other words, whites living in counties with considerable damage from natural disasters accumulate more wealth than their white counterparts living in counties without major natural disaster damage," Howell said.

However, among blacks, Latinos and Asians, the results went the other direction. Blacks who lived in counties with just $100,000 in damage gained an estimated $19,000 in wealth on average, while those living in counties with at least $10 billion in damage lost an estimated $27,000. Latinos in counties with $100,000 in damage gained $72,000 on average, and those in areas with at least $10 billion in damage lost an estimated $29,000. Asians in counties with $100,000 in damage gained an estimated $21,000 on average, while those in counties with at least $10 billion in damage lost an estimated $10,000. These differences occurred even after the researchers controlled for a wide range of factors including age, education, homeownership, family status, residential mobility, neighborhood status and county population.

"Put another way, whites accumulate more wealth after natural disasters while residents of color accumulate less," Elliott said. "What this means is wealth inequality is increasing in counties that are hit by more disasters."

The researchers were able to estimate by county how much of the inequality is attributed to natural disasters. In Harris County, Texas, the disaster-related increase in the black-white wealth gap, on average, was $87,000.

The story does not stop there, Howell and Elliott said. Counties that received more aid from the Federal Emergency Management Agency (FEMA) saw additional increases in wealth inequality beyond that attributed to the natural disasters themselves. For example, whites living in counties that received at least $900 million in FEMA aid from 1999 to 2013 accumulated $55,000 more wealth on average than otherwise similar whites living in counties that received only $1,000 in aid. Conversely, blacks living in counties that received at least $900 million in FEMA aid accumulated $82,000 less wealth on average than otherwise similar blacks living in counties that received only $1,000 in FEMA aid. Similarly, Latinos accumulated $65,000 less on average, and other races (majority Asians) accumulated $51,000 less.

"It's unclear why more FEMA aid is exacerbating inequality," Howell said. "More research is clearly needed. However, based on previous work on disasters such as hurricanes Katrina and Harvey, we know FEMA aid is not equitably distributed across communities. This is particularly true when it comes to infrastructural redevelopment, which often has profound effects on residents' property appreciation and business vitality. When certain areas receive more redevelopment aid and those neighborhoods also are primarily white, racial inequality is going to be amplified."

In addition to exacerbating racial wealth gaps, the researchers found that after natural disasters wealth inequality also increases based on home ownership. Individuals who owned homes in counties that experienced high levels of natural disaster damage accumulated $72,000 more wealth on average than their counterparts in counties with few disasters. Renters, on the other hand, lost $61,000 in wealth on average relative to renters in counties with few natural disasters.

"Put another way, natural disasters were responsible for a $133,000 increase in inequality between homeowners and renters in the hardest hit counties," Elliott said.

Similarly, college-educated residents accumulated $111,000 more on average if they lived in a county that experienced extreme disasters compared to their counterparts who did not live through disasters. Conversely, those with only a 10th-grade education who lived in counties that experienced extreme disasters lost $48,000 from natural disaster damages on average when compared to counterparts who did not live through disasters.

"In other words, in the counties with the most damage, natural disasters are responsible for a $159,000 increase in the educational wealth gap," Howell said.

Howell and Elliott said the results indicate that two major social challenges of our age - wealth inequality and rising costs of natural disasters - are increasingly and dynamically connected. They hope the research will encourage further examination of wealth inequality in the U.S. and development of solutions to address the problem.

"The good news is that if we develop more equitable approaches to disaster recovery, we can not only better tackle that problem but also help build a more just and resilient society," Howell and Elliott concluded.

The researchers are now building on this work by examining how local for-profit and nonprofit organizations influence social inequality after natural disasters.

Credit: 
Rice University

New research reveals how the body clock controls inflammation

Image: New research reveals how the body clock controls inflammation. Pictured left to right: Dr Richard Carroll, Dr Annie Curtis, Mariana Cervantes and George Timmons at RCSI (Royal College of Surgeons in Ireland). (Image credit: Patrick Bolger)

Researchers at RCSI and Trinity College Dublin have revealed insights into how the body clock controls the inflammatory response, which may open up new therapeutic options to treat excess inflammation in conditions such as asthma, arthritis and cardiovascular disease. By understanding how the body clock controls the inflammatory response, we may be able to target these conditions at certain times of the day to have the most benefit. These findings may also shed light on why individuals who experience body clock disruption such as shift workers are more susceptible to these inflammatory conditions.

The body clock, the timing mechanism in each cell in the body, allows the body to anticipate and respond to the 24-hour external environment. Inflammation is normally a protective process that enables the body to clear infection or damage, however if left unchecked can lead to disease. The new study, led by researchers at Dr. Annie Curtis's Lab at RCSI (Royal College of Surgeons in Ireland) in partnership with Prof. Luke O'Neill's Lab at Trinity College Dublin, is published in the Proceedings of the National Academy of Sciences (PNAS), a leading international multidisciplinary scientific journal.

Dr Annie Curtis, Research Lecturer in the Department of Molecular and Cellular Therapeutics at RCSI and senior author, explained: "Macrophages are key immune cells in our bodies which produce this inflammatory response when we are injured or ill. What has become clear in recent years is that these cells react differently depending on the time of day that they face an infection or damage, or when we disrupt the body clock within these cells."

Dr. Jamie Early, first author on the study, said: "We have made a number of discoveries into the impact of the body clock in macrophages on inflammatory diseases such as asthma and multiple sclerosis. However, the underlying molecular mechanisms by which the body clock precisely controls the inflammatory response were still unclear. Our study shows that the central clock protein, BMAL1 regulates levels of the antioxidant response protein NRF2 to control a key inflammatory molecule called IL-1β from macrophages."

"The findings although at a preliminary stage, offers new insights into the behaviour of inflammatory conditions such as arthritis and cardiovascular disease which are known to be altered by the body clock", added Dr Early.

Funded by Science Foundation Ireland, the research was undertaken in collaboration between RCSI, Trinity College Dublin and the Broad Institute in Boston, USA.

Credit: 
RCSI

Getting policy right: why fisheries management is plagued by the panacea mindset

Fisheries management has often been characterized by regulatory policies that amount to panaceas - broad-based policy solutions that are expected to address several problems at once but that result in unintended consequences. An international research team shows how one-size-fits-all policies like individual transferable quotas (ITQs) may be doomed from the onset, as these policies perpetuate "the panacea mindset." The team calls for a more customized policy approach in a new piece that will be published this week in the Proceedings of the National Academy of Sciences.

Individual transferable quotas were first adopted in the 1970s by the Netherlands, Iceland and Canada and rose to popularity in the 1980s. Prior research reported in 2009 that 18 countries used ITQs, including Australia, Canada, Denmark, Iceland, the Netherlands and New Zealand, to manage their marine fish stocks of nearly 250 species. Even though ITQs are intended to function as a fisheries management strategy, the researchers cite examples of how ITQs have backfired. In some countries, the quota system has proved unsuccessful in preventing fish stock declines, inadvertently led to fishing oligopolies, and resulted in community upheaval, as the fishing rights of indigenous and subsistence users have often been overlooked. For instance, in Kodiak, Alaska, ITQs undermined core cultural values of hard work, opportunity and fairness by increasing the power of a few boat owners over their crew and other community members. In Iceland, transferable quotas were used as collateral for loans and were a major contributor to the country's economic collapse in the 2008 recession.

According to the research team, reliance on the simple formulaic policies or panaceas, such as the continued use of ITQs, may be explained by a collection of factors, which they label the panacea mindset. This mindset is based on conceptual narratives, power disconnects, and heuristics and biases, which may make one predisposed to embrace panaceas as a solution and may perpetuate the problem more broadly:

To understand problems, people rely on conceptual narratives; however, such narratives are often based on an oversimplified notion of the problem, which makes panaceas appealing and plausible. For example, one of the conceptual narratives driving ITQs, especially among commercial fisheries, is that fisheries can be managed with a single-stock approach even though multiple species may be present. Despite literature underscoring the benefits of a multi-species approach, modifications to ITQs are rarely made.

Given that there are typically winners and losers with policies, power disconnects occur, which create vested interests in panaceas by reinforcing inequities. With ITQs, fishers with more political and economic power than their counterparts are more likely to benefit from such a quota system and may even monopolize it; as a result, they also may become insulated from the costs associated with ITQs.

Heuristics (the use of mental shortcuts, often when dealing with complex information) and biases prevent people from accurately assessing panaceas. As a result, a sweeping solution that may lack context may be more likely to be adopted than rejected. Other cognitive factors and behaviors may also play a role here, interfering with one's ability to evaluate the pros and cons of a possible solution. For example, prior research has found that people are often unable to adequately assess risk. With ITQs, the researchers point out how the future of fisheries in Alaska is often based on the inaccurate premise that there are only two scenarios, collapse versus rebuilding, when in fact there may be other options in between.

To combat the panacea mindset, the team proposes compiling resources about the given issue into a searchable online, institutional diagnostics toolkit. The toolkit could include best practices, links to related journal articles, checklists and other resources, which challenge people's biases and help them become more informed about policy options, as they develop specific policies for a given context.

"Oversimplified, broad based policies or panaceas are an institutional problem throughout our society. In spite of negative side effects or outright failure, panaceas can be found in policies designed to address issues affecting the environment, healthcare, the economy and many other areas," said co-author DG Webster, an associate professor of environmental studies at Dartmouth College. "Exploring the panacea mindset is a first step toward explaining why panaceas are so entrenched in the human condition and what methods will be most effective at combating them. We cannot simply say, 'avoid panaceas,' as many have said before; we need to develop systems like the institutional diagnostic toolkit that make it easier for people to find solutions that accurately reflect the political, economic, social, cultural and environmental context," added Webster.

Credit: 
Dartmouth College