
New study sheds light on origins of neurodegenerative disease

Image: Al La Spada, MD, PhD (Duke Department of Neurology)

New research has shed light on the origins of spinocerebellar ataxia type 7 (SCA7) and demonstrated promising new therapeutic pathways for SCA7 and the more than 40 other types of spinocerebellar ataxia. The study, which appears online Monday on the website of the journal Neuron, implicates metabolic dysregulation leading to altered calcium homeostasis in neurons as the underlying cause of cerebellar ataxias.

"This study not only tells us about how SCA7 begins at a basic mechanistic level, but it also provides a variety of therapeutic opportunities to treat SCA7 and other ataxias," said Al La Spada, MD, PhD, professor of Neurology, Neurobiology, and Cell Biology at the Duke School of Medicine, and the study's senior author.

SCA7 is an inherited neurodegenerative disorder that causes progressive problems with vision, movement, and balance. Individuals with SCA7 have CAG-polyglutamine repeat expansions in one of their genes; these expansions lead to progressive neuronal death in the cerebellum. SCA7 has no cure or disease-modifying therapies.

La Spada and colleagues performed transcriptome analysis on a mouse model of SCA7. These mice showed down-regulation of genes that control calcium flux, as well as abnormal calcium-dependent membrane excitability, in cerebellar neurons.

La Spada's team also linked dysfunction of the protein Sirtuin 1 (Sirt1) to the development of cerebellar ataxia. Sirt1 is a "master regulator" protein associated both with improved neuronal health and with reduced overall neurodegenerative effects associated with aging. La Spada's team observed reduced activity of Sirt1 in SCA7 mice; this reduced activity was associated with depletion of NAD+, a molecule important for metabolic functions and for catalyzing the activity of numerous enzymes, including Sirt1.

When the team crossed mouse models of SCA7 with Sirt1 transgenic mice, they found improvements in cerebellar degeneration, calcium flux defects, and membrane excitability. They also found that NAD+ repletion rescued SCA7 disease phenotypes in both mouse models and human stem cell-derived neurons from patients.

These findings elucidate Sirt1's role in neuroprotection by promoting calcium regulation and describe changes in NAD+ metabolism that reduce the activity of Sirt1 in neurodegenerative disease.

"Sirt1 has been known to be neuroprotective, but it's a little unclear as to why," said Colleen Stoyas, PhD, first author of the study, and a postdoctoral fellow at the Genomics Institute of the Novartis Research Foundation in San Diego. "Tying NAD+ metabolism and Sirt1 activity to a crucial neuronal functional pathway offers a handful of ways to intervene that could be potentially useful and practical to patients."

Credit: Duke Department of Neurology

Physics of Living Systems: How cells muster and march out

Many of the cell types in our bodies are constantly on the move. Physicists at Ludwig-Maximilians-Universitaet (LMU) in Munich have developed a mathematical model that describes, for the first time, how single-cell migration can coalesce into coordinated movements of cohorts of cells.

Many vital biological processes, such as growth, wound healing and immune responses to pathogens, require the active movement of cells. Inflammation and metastasis also involve the migration of specific kinds of cells through tissues to distant sites. A detailed understanding of the mechanisms that underlie cell migration - of single cells and small cohorts of cells, and the coordinated locomotion of tissue-level cell collectives - promises to elucidate the basis for one of the fundamental properties of cells. A team of researchers led by LMU theoretical physicist Erwin Frey (Professor of Statistical Physics and Biophysics at LMU) has now developed a model capable of describing the motions of cells on planar surfaces on both microscopic and macroscopic scales, yielding new insights into the collective dynamics of cells. The authors report their findings in the online journal eLife.

Many models have been constructed that seek to account for either the dynamics of single cells or the motions of cell sheets. However, the integration of both approaches into a single model presents a considerable challenge. This is largely because the levels of abstraction needed to capture the requisite phenomenology vary widely, owing to the differences in scale involved. The theoretical model constructed by Frey and his students is specifically designed to close the gap between the paradigms that have been applied to the analysis of cell locomotion at both the single-cell and multicellular scales. It does so by representing the interaction of cells with the underlying substrate in terms of a honeycomb lattice of contact sites, while also taking adhesive contacts between cells into account. "In contrast to the typically macroscopic approaches to the modelling of locomotion at the tissue level, our model explicitly incorporates the relevant properties of the individual cells, such as cell polarization, the structure of the cytoskeleton and the ability to actively reconfigure cytoskeletal organization in response to mechanical cues," explains Andriy Goychuk, joint first author of the paper. "Nonetheless, unlike strategies that depend on the microscopic analysis of shape changes in single cells, which are computationally costly, our framework is entirely rule-based and efficient enough to make simulations at the tissue level possible."

As the new study shows, the model can be used to investigate the migratory behavior of single cells, the transition to collective cell motion, and the coordinated movement of advancing epithelial sheets consisting of several thousand cells that is involved in wound repair. The analyses and simulations based on the model uncovered links between specific cellular parameters and characteristic patterns of movement, which accurately reflect the experimental findings. Among other things, the authors found that the forces exerted by the cytoskeleton at cell-substrate contact sites and the contractility of the cytoskeletal network on the inner face of the cell membrane both play vital roles in locomotory behavior. In addition, there is a defined relationship between the expansion of cells owing to mechanical pressure within a monolayer and density-dependent cell growth, which leads to specific patterns of multicellular migration. "Our results constitute a considerable advance in our understanding of collective migration on flat substrates," says Frey. "Furthermore, our new model provides us with a highly flexible instrument for studying the migratory behavior of cells in a wide range of contexts, and a very versatile research tool for further studies in this field."
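The flavor of such a rule-based lattice description can be conveyed with a deliberately minimal sketch. This is not the authors' eLife model; it is an illustration of the general idea, under assumed parameters: a cell occupies sites of a hexagonal lattice, carries a polarity vector, hops preferentially along that polarity, and the polarity in turn relaxes toward the direction just taken, producing persistent migration.

```python
import math
import random

# Toy sketch of rule-based, persistent cell motion on a hexagonal
# lattice (illustrative only -- NOT the published LMU model).
# Unit vectors pointing to the six neighbours of a lattice site.
NEIGHBOURS = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
              for k in range(6)]

def step(position, polarity, persistence=0.8, bias=3.0, rng=random):
    """Hop to one neighbour, biased by the cell's polarity vector."""
    # Weight each neighbour direction by its alignment with polarity.
    weights = [math.exp(bias * (dx * polarity[0] + dy * polarity[1]))
               for dx, dy in NEIGHBOURS]
    total = sum(weights)
    r, acc, choice = rng.random() * total, 0.0, NEIGHBOURS[-1]
    for (dx, dy), w in zip(NEIGHBOURS, weights):
        acc += w
        if r <= acc:
            choice = (dx, dy)
            break
    new_pos = (position[0] + choice[0], position[1] + choice[1])
    # Polarity relaxes toward the direction just taken, then is
    # renormalized -- this is what makes the walk persistent.
    px = persistence * polarity[0] + (1 - persistence) * choice[0]
    py = persistence * polarity[1] + (1 - persistence) * choice[1]
    norm = math.hypot(px, py) or 1.0
    return new_pos, (px / norm, py / norm)

if __name__ == "__main__":
    random.seed(1)
    pos, pol = (0.0, 0.0), (1.0, 0.0)
    for _ in range(200):
        pos, pol = step(pos, pol)
    print("net displacement after 200 steps:", math.hypot(*pos))
```

With strong alignment bias and high persistence the trajectory is nearly ballistic over short times, while over long times the polarity direction wanders; the full model couples many such cells through adhesion and substrate contacts.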

Credit: Ludwig-Maximilians-Universität München

Study exposes surprise billing by hospital physicians

Patients with private health insurance face a serious risk of being treated and billed by an out-of-network doctor when they receive care at in-network hospitals, according to a new study by Yale researchers. Addressing the issue could reduce health spending by 3.4% -- $40 billion annually, the researchers conclude.
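The two headline figures can be cross-checked with simple arithmetic: if $40 billion in annual savings corresponds to 3.4% of spending, the implied spending base is roughly $1.18 trillion. This is illustrative arithmetic only; the exact spending base is defined in the Health Affairs paper.

```python
# Sanity check on the headline figures: $40 billion described as
# 3.4% of spending implies a spending base near $1.18 trillion.
savings = 40e9   # projected annual savings, in dollars
share = 0.034    # savings expressed as a fraction of spending

implied_base = savings / share
print(f"Implied spending base: ${implied_base / 1e12:.2f} trillion")
```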

The study, published Dec. 16 in the journal Health Affairs, analyzes 2015 data from a large commercial insurer covering tens of millions of individuals throughout the United States to show that anesthesiologists, pathologists, radiologists, and assistant surgeons at in-network hospitals billed out of network in about 10% of cases.

"When physicians whom patients do not choose and cannot avoid bill out of network, it exposes people to unexpected and expensive medical bills and undercuts the functioning of U.S. health care markets," said Zack Cooper, associate professor of public health at the Yale School of Public Health and in the Department of Economics, and one of the study's authors. "Moreover, the ability to bill out of network allows specialists to negotiate inflated in-network rates, which are passed on to consumers in the form of higher insurance premiums."

The study, which was supported by the James Tobin Center for Economic Policy at Yale, adds to a body of work by Cooper and his colleagues analyzing the causes of surprise medical billing in the United States. A 2018 study in the New England Journal of Medicine found that over 1 in 5 patients who went to in-network emergency departments were treated by out-of-network emergency physicians. A 2019 study analyzed the drivers of surprise medical billing and New York State's approach of protecting consumers by introducing binding arbitration between insurers and out-of-network physicians.

Their research triggered the recent push in Congress to pass federal protections against surprise medical billing. Several relevant bills are currently under consideration in Congress. Cooper's research has been cited by the White House, highlighted by congressional leaders, and featured extensively in the media.

The latest paper focused on anesthesiologists, pathologists, radiologists, and assistant surgeons -- hospital-based physicians who are not chosen by patients. After analyzing more than 3.9 million cases involving at least one of the four specialties, the researchers found out-of-network billing at in-network hospitals occurred in 12.3% of pathology cases, 11.8% of anesthesiology care, 11.3% of cases involving an assistant surgeon, and 5.6% of claims for radiologists.

Out-of-network billing was more prevalent at for-profit hospitals and at hospitals located in concentrated hospital and insurance markets where there is little competition, according to the study.

When a private insurance company declines to cover care delivered by an out-of-network provider, patients can get stuck with exorbitant bills. Mean out-of-network charges were $7,889 for assistant surgeons, $2,130 for anesthesiologists, $311 for pathologists, and $194 for radiologists.

The study analyzes several potential policy measures to address the problem. The researchers' preferred approach would be to regulate the contracts of physicians who work in hospitals and are not chosen by patients. The policy would require hospitals to sell a bundled package of services that included fees for anesthesiologists, pathologists, radiologists, assistant surgeons, and emergency department physicians.

"This approach eliminates the possibility of out-of-network specialists treating patients at in-network hospitals," said Cooper, who is associate director of the James Tobin Center for Economic Policy at Yale. "It wouldn't require patients to take any action and it would restore competitively set rates for specialists who patients cannot choose."

The authors emphasize the need for a federal policy to protect patients. Cooper has hosted webinars with policymakers from New York and California to better understand what they're doing and describe how their efforts could form the basis of a national policy.

"Ultimately, a well-designed arbitration system that allows arbitrators to consider in-network rates could work," said Cooper. "So could a benchmark-style approach, where out-of-network providers are paid mean in-network payments. Then there are hybrid models, like California, which seems to be working well, where there's a benchmark rate and providers can go to arbitration if they choose. At the end of the day, patients are getting crushed, and we need a change to protect them."

Credit: Yale University

Children allergic to cow's milk smaller and lighter

Image: Karen A. Robbins, M.D., lead study author (Children's National Hospital)

Children who are allergic to cow's milk are smaller and weigh less than peers who have allergies to peanuts or tree nuts, and these findings persist into early adolescence. The results from the longitudinal study - believed to be the first to characterize growth patterns from early childhood to adolescence in children with persistent food allergies - were published online in The Journal of Allergy and Clinical Immunology.

"Published data about growth trajectories for kids with ongoing food allergies is scarce," says Karen A. Robbins, M.D., lead study author and an allergist in the Division of Allergy and Immunology at Children's National Hospital when the study was conducted. "It remains unclear how these growth trends ultimately influence how tall these children will become and how much they'll weigh as adults. However, our findings align with recent research that suggests young adults with persistent cow's milk allergy may not reach their full growth potential," Dr. Robbins says.

According to the Centers for Disease Control and Prevention, 1 in 13 U.S. children has a food allergy, with milk, eggs, fish, shellfish, wheat, soy, peanuts and tree nuts accounting for the most serious allergic reactions. Because there is no cure and such allergies can be life-threatening, most people eliminate one or more major allergens from their diets.

The multi-institutional research team reviewed the charts of pediatric patients diagnosed with persistent immunoglobulin E-mediated allergy to cow's milk, peanuts or tree nuts based on their clinical symptoms, food-specific immunoglobulin levels, skin prick tests and food challenges. To be included in the study, the children had to have at least one clinical visit during three defined time frames from the time they were age 2 to age 12. During those visits, their height and weight had to be measured with complete data from their visit available to the research team. The children allergic to cow's milk had to eliminate it completely from their diets, even extensively heated milk.

From November 1994 to March 2015, 191 children were enrolled in the study, 111 with cow's milk allergies and 80 with nut allergies. All told, they had 1,186 clinical visits between the ages of 2 and 12. Sixty-one percent of children with cow's milk allergies were boys, while 51.3% of children with peanut/tree nut allergies were boys.

Not only were children allergic to cow's milk shorter; the height discrepancy grew more pronounced by ages 5 to 8 and ages 9 to 12. And for the 53 teens who had clinical data gathered after age 13, differences in weight and height were even more notable.

"As these children often have multiple food allergies and other conditions, such as asthma, there are likely factors besides simply avoiding cow's milk that may contribute to these findings. These children also tend to restrict foods beyond cow's milk," she adds.

The way such food allergies are handled continues to evolve with more previously allergic children now introducing cow's milk via baked goods, a wider selection of allergen-free foods being available, and an improving understanding of the nutritional concerns related to food allergy.

Dr. Robbins cautions that while most children outgrow cow's milk allergies in early childhood, children who do not may be at risk for growth discrepancies. Future research should focus on improving understanding of this phenomenon.

Credit: Children's National Hospital

The sympathetic nervous system can inhibit the defense cells in autoimmune disease

The results of a study conducted in Brazil suggest that the sympathetic nervous system - the part of the autonomic nervous system that controls responses to danger or stress - can modulate the action of defense cells in patients with autoimmune diseases.

Using an experimental model of multiple sclerosis, the scientists found that the sympathetic nervous system can limit the generation of effector responses by inhibiting the action of the cells that attack an antigen taken as a threat by the immune system.

The study, which was supported by São Paulo Research Foundation - FAPESP, was conducted at the Federal University of São Paulo (UNIFESP), with Alexandre Basso as principal investigator. Basso is a professor in the Department of Microbiology, Immunology and Parasitology at UNIFESP's Medical School (Escola Paulista de Medicina). The findings are published in the journal Cell Reports.

"Our study opens up an opportunity for the development of novel therapies. The model we describe could theoretically be applied to other autoimmune diseases besides multiple sclerosis," Basso told Agência FAPESP.

According to the Brazilian Multiple Sclerosis Association (ABEM), more than 35,000 Brazilians suffer from the disease, which affects more women than men. Patients are usually between 20 and 40 years old when symptoms begin.

The first author of the article is Leandro Pires Araújo, a researcher in the same department of UNIFESP. The study was funded by FAPESP via a Regular Research Grant, a Young Investigator Grant and a doctoral scholarship.

Contradictory research findings

The most widely used model in research on multiple sclerosis and comparable autoimmune diseases is an animal model known as experimental autoimmune encephalomyelitis, which consists of inducing an inflammatory response in the animal's central nervous system by means of immunization with antigens from myelin, the lipid-rich insulating substance that surrounds nerve fibers and helps transmit electrical impulses. The model can involve different animals depending on the requirements of the experiment.

In the case of multiple sclerosis, defense cells attack the antigens, causing nerve fiber demyelination (loss of myelin) and impairing communication between neurons. Alterations in the transmission of electrical impulses result in problems such as muscle weakness, loss of balance and motor coordination, and joint pain.

In previous studies using these models, the animals were treated with a substance called 6-hydroxydopamine (6-OHDA) in an attempt to find out how the sympathetic nervous system influences the development of autoimmune disease. The synthetic neurotoxin eliminates fibers in the sympathetic nervous system that release noradrenaline, one of the neurotransmitters that control involuntary movement. The absence of these fibers prevents the release of noradrenaline in the organs innervated by the sympathetic nervous system.

"6-Hydroxydopamine enters the noradrenaline synthesis pathway where it's taken up by sympathetic nerve fibers that express tyrosine hydroxylase, an enzyme present in neurons and in immune system cells. It's a key enzyme in the noradrenaline synthesis pathway," Basso explained.

"Neurons and cells that express tyrosine hydroxylase are also capable of taking up 6-hydroxydopamine through specific transporters. Because of its toxicity, 6-OHDA eventually eliminates the cells and fibers of the sympathetic nervous system."

The results of studies using 6-OHDA are contradictory. Some suggest that the process limits the development of autoimmune disease, while others show exactly the opposite - the disorder becomes even more severe in the absence of these nerve fibers.

Some studies point to the possibility that treatment with 6-OHDA could eliminate immune system cells that are important to the development of the disease. "Based on this finding, we formulated the hypothesis that the contradictions in the studies using 6-OHDA could reflect the fact that some immune system cells with which the nervous system interacts also express tyrosine hydroxylase and are capable of synthesizing and secreting noradrenaline, so they're targets of 6-OHDA," Basso said.

Alternative model

Basso's research group then proposed an alternative experimental strategy to study the influence of the sympathetic nervous system on the development of autoimmune disease, using mice genetically modified to lack certain adrenergic receptors with a key role in the process of controlling release of the neurotransmitter by sympathetic nervous system fibers.

Animals that lack these receptors release much more noradrenaline. "We opted for the opposite strategy: instead of using a model that eliminated the fibers [reducing production of noradrenaline], we used a model in which the sympathetic nervous system was hyperactive [and released more noradrenaline]," Basso said.

"After finding that animals with sympathetic nervous system hyperactivity did indeed develop a milder form of the disease with an impaired effector immune response [which should destroy myelin antigens], we wondered how the higher level of noradrenaline released by the sympathetic nervous system might influence development of the disease in these animals."

To answer this question, the scientists pharmacologically blocked the β2-adrenergic receptor, one of the cell receptors activated by noradrenaline. After this procedure, the animals developed a more severe form of the disease than that in the control group (with a hyperactive sympathetic nervous system), confirming that the sympathetic nervous system influences the development of autoimmune disease.

"In sum, we concluded that the higher level of noradrenaline released by the sympathetic nervous system regulated development of the disease by augmenting activation of the β2-adrenergic receptor in immune system cells, especially CD4+ T lymphocytes," Basso said. This type of T cell plays a key role in the activation and stimulation of other leukocytes, and it orchestrated the inflammatory response in the central nervous system of the animals with encephalomyelitis.

The new model is being used at UNIFESP to study the mechanism whereby the sympathetic nervous system influences allergic responses in the lungs. There are molecules that activate or block the β2-adrenergic receptor and are used in various situations. "One of them is fenoterol, used to relax the airways in patients with asthma and bronchoconstriction, so they can breathe more easily. How does its use affect the immune response? Our research is now pursuing answers to such questions," Basso said.

Credit: Fundação de Amparo à Pesquisa do Estado de São Paulo

Paper: Cultural variables influence consumer demand for private-label brands

Image: Consumer attitudes toward private-label store brands might be driven more by social variables than price, says new research co-written by Carlos Torelli, a professor of business administration and James F. Towey Faculty Fellow at Illinois. (Photo by Gies College of Business)

CHAMPAIGN, Ill. -- New research co-written by a University of Illinois expert in consumer behavior and global marketing explores why certain segments of consumers prefer national or global brands over their less-pricey private-label equivalents, and the managerial and marketing implications of those choices.

Private-label brands - think not-so-generic store brands such as Costco's Kirkland Signature line or Target's "up & up" labeled products - contribute significantly to retailer profits by catering to bargain-driven consumers who also value quality. But consumer attitudes toward store brands might be driven by the consumer's own social status and beliefs about societal hierarchy more generally, with results varying between products of high symbolism (sunglasses or jeans, for example) versus products of low symbolism such as bleach, according to a paper co-written by Carlos Torelli, a professor of business administration and the James F. Towey Faculty Fellow at Illinois.

"Private-label brands have been around for many years, but they've been undergoing an evolution lately," Torelli said. "In the past, they were considered and branded as generic products - laundry detergent or dish soap that didn't have a name on the label other than what it was. Just a container with the product inside. Now we have store brands that mimic the elements, attributes and packaging of their big-name competitors but cost less."

Although store brands are popular with consumers, their market share hasn't increased proportionally and has remained steady at 10-15% in most countries.

"Given the widespread belief that private-label brands offer good value, it's surprising that the market share of such brands has remained stubbornly low," said Torelli, also the executive director of Professional and Executive Education at the Gies College of Business. "The preference for national brands has puzzled marketers, who are continuously striving to understand the factors that drive consumer choice."

Torelli and his co-authors examined the interactive effect of "power distance belief" - the acceptance and expectation of hierarchies and inequalities in society - and consumers' social status on preference for private-label versus national brands. They used a data set spanning 32 countries from 2006-10 on the aggregate market share of private-label brands in 21 common product categories.

The researchers found that in societies high in power distance belief (countries such as China, Indonesia and Mexico), low-status consumers preferred national brands when purchasing low-status-symbol products such as laundry detergent - even though the national brands were more expensive than their private-label equivalent - in order to fulfill their need for "heightened status." High-status consumers, on the other hand, preferred private-label brands for everyday products.

"You would assume that it would be the other way around - that low-status consumers would buy the cheaper private-label brand because they have less disposable income, but that's not what we found," Torelli said.

The research has implications for how private-label marketers can penetrate the developing markets of countries where people accept and endorse hierarchy, including the potentially lucrative markets of Brazil, China, India and Russia, Torelli said.

"There's an opening for the national brand to target low-status consumers who are not traditionally thought of as part of their consumer demographic," he said. "If national brands manage the size and certain other parameters to make the product slightly more affordable, then there is a market for premium brands in that demographic - as long as they don't cheapen or water down the quality of the product itself to make it more price competitive with the store brand."

The results also suggest that enhancing the prestige of private-label brands may more successfully attract low-status consumers than offering lower quality products at lower prices, Torelli said.

"If you're a private-label brand, the one thing you could possibly do is burnish your image by 'branding up,' much like what Target did, and create a higher-end private label to sell exclusively in your stores," Torelli said. "That's a trend we're seeing - a movement among retailers to do their own branding. Our research would suggest that just because it's a private-label brand doesn't mean it's destined to be low status.

"If you do a good enough job branding it, you spin it off into its own brand, much like how The Limited spun off Victoria's Secret, which was originally a private-label brand. We don't think of Victoria's Secret as a private label now, but that's how it started. In order to do that, the parent company really has to be invested in the brand - invested in the packaging, the advertising, the signage, everything."

Credit: University of Illinois at Urbana-Champaign, News Bureau

Hospital patient portals lack specific and informative instructions for patients

Image: Regenstrief Institute research scientists Joy Lee, PhD, and Michael Weiner, M.D., MPH, conducted a study of hospital patient portals, the secure online websites that give patients access to their personal health information. Among their findings: over half of the portals they studied lacked specific instructions on how they should be used. (Regenstrief Institute)

INDIANAPOLIS -- Most hospitals in the United States, but not all, have secure online websites called patient portals that give patients access to their personal health information. However, many hospitals fail to inform patients fully about using the portals, according to new research from Regenstrief Institute and Indiana University School of Medicine.

Patient portals offer the opportunity to expand people's access to both their own health information and communication with their clinicians. A federal law, known as the Health Information Technology for Economic and Clinical Health (HITECH) Act, has provided financial incentives for healthcare providers to adopt these portals, and patients' access to them has increased significantly over the last ten years. Clinicians, however, say they are concerned about patients misusing the portals, especially when it comes to electronic messages. Likewise, patients have expressed a desire for more guidance on using portals and secure messaging.

The goal of the new study, the most recent in Regenstrief Institute's extensive work in the field of doctor-patient communications, was to determine the availability of hospital portals in the U.S. and what instructions were given to patients about using them. Researchers found:

Portal instructions were more focused on operational and legal information, such as how to sign on and liability limits, than on which medical circumstances are best suited for portal use.

More than half of portals with secure messaging did not have available guidance describing the appropriate uses of messages and practices relating to them. Many had generic statements describing secure messaging, such as "send and receive messages from staff," but included no information on what message content would be considered appropriate.

Some guidance used complicated language and vocabulary, which may hinder understanding by a general audience.

"We found that many instructional materials had more of a medicolegal focus, rather than a focus on the patient as a user," said Joy L. Lee, PhD, M.S., Regenstrief research scientist and lead author of the paper. "This research indicates there is room for improvement when it comes to educating patients on the portals, especially related to secure messaging. The guidance that exists includes a lot of 'don'ts', but not very many 'dos'. This makes it difficult for patients to properly utilize and benefit from the service."

Content of patient portal guidance

Dr. Lee and the research team collected information from a random sample of 200 acute-care hospitals from across the U.S. The study team accessed publicly available portal information from hospital websites and called the hospitals to request any additional information that was distributed to patients about portals or messaging. Then they read and analyzed the content.

Some key results of the analysis were:

Only 89 percent of hospitals had patient portals.

66 percent of patient portals included secure messaging.

58 percent of secure messaging portals did not detail how the patient was supposed to use the messaging.

Many hospitals included disclaimers that the messaging was not for emergencies; however, 23 placed that disclaimer inside the "Terms and Conditions" section, which few patients may actually read.
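Taken at face value, these percentages translate into approximate hospital counts. This is illustrative arithmetic under an assumption about the denominators (the 200 sampled hospitals, then the hospitals with portals, then the portals with secure messaging); the paper itself defines the exact bases.

```python
# Rough counts implied by the reported percentages, assuming the
# denominators cascade: 200 sampled hospitals -> hospitals with
# portals -> portals with secure messaging. Illustrative only.
sampled = 200
with_portal = round(0.89 * sampled)          # ~178 hospitals
with_messaging = round(0.66 * with_portal)   # ~117 portals
no_guidance = round(0.58 * with_messaging)   # ~68 messaging portals

print(with_portal, with_messaging, no_guidance)
```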

"Hospitals and healthcare systems have invested a lot of money in patient portals, but the investment won't pay off for them or the people they provide care for if patients are confused about how to use the portals or don't understand how to get the most out of the tool," said Dr. Lee.

"Hospitals and health systems are expanding their uses and provision of online resources, including patient portals," said Michael Weiner, M.D., MPH, senior author of the article and associate director of Regenstrief Institute's William M. Tierney Center for Health Services Research. "Health systems need to be active participants in engaging patients, providing them with more and better information, and clarifying expectations. As guidance is developed at a system level, clinicians can also guide conversations with their patients about how to use messaging tools."

Study authors add that while many instructions could be improved, several good examples of complete and informative patient guidance do exist.

Credit: 
Regenstrief Institute

Hydrogels control inflammation to help healing

image: An illustration shows how effective a selection of custom-designed peptide hydrogels are in controlling inflammation. The gels developed at Rice University serve as scaffolds for new tissue and show promise for treating wounds and cancer and for delivering drugs. The hydrogels are designed to dissolve in the body as they are replaced by natural, functional tissue.

Image: 
Illustration by Tania Lopez-Silva/Rice University

HOUSTON - (Dec. 16, 2019) - Hydrogels for healing, synthesized from the molecules up by Rice University bioengineers, are a few steps closer to the clinic.

Rice researchers and collaborators at Texas Heart Institute (THI) have established a baseline set of injectable hydrogels that promise to help heal wounds, deliver drugs and treat cancer. Critically, they've analyzed how the chemically distinct hydrogels provoke the body's inflammatory response -- or not.

Hydrogels developed at Rice are designed to be injectable and create a mimic of cellular scaffolds in a desired location. They serve as placeholders while the body naturally feeds new blood vessels and cells into the scaffold, which degrades over time to leave natural tissue in its place. Hydrogels can also carry chemical or biological prompts that determine the scaffold's structure or affinity to the surrounding tissue.

The study, led by chemist and bioengineer Jeffrey Hartgerink and graduate student Tania Lopez-Silva at Rice and Darren Woodside, vice president for research and director of the flow cytometry and imaging core at THI, demonstrates that it should be possible to tune multidomain peptide hydrogels to produce an appropriate inflammatory response for the condition being treated.

The research appears in Biomaterials.

"We've been working on peptide-based hydrogels for a number of years and have produced about 100 different types," Hartgerink said. "In this paper, we wanted to back up a bit and understand some of the fundamental ways in which they modify biological environments."

The researchers wanted to know specifically how synthetic hydrogels influence the environment's inflammatory response. The two-year study offered the first opportunity to test a variety of biocompatible hydrogels for the levels of inflammatory response they trigger.

"Usually, we think of inflammation as bad," Hartgerink said. "That's because inflammation is sometimes associated with pain, and nobody likes pain. But the inflammatory response is also extremely important for wound healing and in clearing infection.

"We don't want zero inflammation; we want appropriate inflammation," he said. "If we want to heal wounds, inflammation is good because it starts the process of rebuilding vasculature. It recruits all kinds of cells that are regenerative to that site."

The labs tested four basic hydrogel types -- two with positive charge and two negative -- to see what kind of inflammation they would trigger. They discovered that positively charged hydrogels triggered a much stronger inflammatory response than negatively charged ones.

"Among the positive materials, depending on the chemistry generating that charge, we can either generate a strong or a moderate inflammatory response," Hartgerink said. "If you're going for wound-healing, you really want a moderate response, and we saw that in one of the four materials.

"But if you want to go for a cancer treatment, the higher inflammatory response might be more effective," he said. "For something like drug delivery, where inflammation is not helpful, one of the negatively charged materials might be better.

"Basically, we're laying the groundwork to understand how to develop materials around the inflammatory responses these materials provoke. That will give us our best chance of success."

The THI team helped analyze the cellular response to the hydrogels through multidimensional flow cytometry.

"The results of this work lay the groundwork for specifically tailoring delivery of a therapeutic by a delivery vehicle that is functionally relevant and predictable," Woodside said. "Aside from delivering drugs, these hydrogels are also compatible with a variety of cell types.

"One of the problems with stem cell therapies at present is that adoptively transferred cells don't necessarily stay in high numbers at the site of injection," he said. "Mixing these relatively inert, negatively charged hydrogels with stem cells before injection may overcome this limitation."

Hartgerink said the work is foundational, rather than geared toward a specific application, but is important to the long-term goal of bringing synthetic hydrogels to the clinic. "We have been speculating about a lot of the things we think are good and true about this material, and we now have more of a sound mechanistic understanding of why they are, in fact, true," Hartgerink said.

Credit: 
Rice University

Tiny insects become 'visible' to bats when they swarm

Bats use echolocation to hunt insects, many of which fly in swarms. In this process, bats emit a sound signal that bounces off the target object, revealing its location. Smaller insects like mosquitos are individually hard to detect through echolocation, but a new Tel Aviv University study reveals that they become perceptible when they gather in large swarms.

The findings could provide new insights into the evolution of bat echolocation and explain why tiny insects are found in the diets of bats that seem to use sound frequencies that are too high to effectively detect them.

The new research was conducted by Dr. Arjan Boonman and Prof. Yossi Yovel at TAU's Department of Zoology and colleagues at Canada's Western University. It was published in PLOS Computational Biology on December 12.

Few studies have addressed what swarms of insects -- as opposed to single insects -- "look" like to bats. To find out, Dr. Boonman and colleagues combined three-dimensional computer simulations of insect swarms with real-world measurements of bat echolocation signals to examine how bats sense swarms that vary in size and density.

They found that small insects that are undetectable on their own, such as mosquitos, suddenly become "visible" to bats when they gather in large swarms. They also discovered that the fact that bats use signals with multiple frequencies is well suited to the task of detecting insect swarms. These signals appear to be ideal for detecting an object if more than one target falls inside the echolocation signal beam at once.
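A back-of-the-envelope sketch can show why swarming changes detectability. This toy model is not the authors' simulation: the echo powers, the detection threshold, and the assumption that echo powers add incoherently are all illustrative choices.

```python
# Illustrative sketch: if several weak scatterers sit inside the
# echolocation beam at once, their echo powers add, and the combined
# echo can cross the bat's detection threshold even though each
# individual echo is far below it. All numbers are hypothetical.

def swarm_echo_power(single_insect_power, n_in_beam):
    """Incoherent sum: total echo power scales with the number of
    insects inside the beam (phases assumed random and uncorrelated)."""
    return single_insect_power * n_in_beam

DETECTION_THRESHOLD = 1.0  # arbitrary units
MOSQUITO_ECHO = 0.05       # one mosquito: well below threshold

single = swarm_echo_power(MOSQUITO_ECHO, 1)
swarm = swarm_echo_power(MOSQUITO_ECHO, 100)

print(single < DETECTION_THRESHOLD)  # True: lone mosquito is "invisible"
print(swarm > DETECTION_THRESHOLD)   # True: the swarm becomes "visible"
```

The design point is simply that detectability is a property of the ensemble, not of any one insect, once multiple targets fall inside the signal beam.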

"Using simulations, we investigated something that could never have been measured in reality," Dr. Boonman says. "Modeling enabled us to have full control over any aspect of an insect swarm, even the full elimination of the shape of each insect within the swarm."

The insect model the researchers used has a tiny mesh (skeleton) and minuscule legs and wings. "We are still adding new features, such as the bat's acoustic beam or ears, which were not in the original model," says Prof. Yovel. "We also developed a faster version of the algorithm. All of this will open a new world for us in which we can get echoes even from entire landscapes, so we can learn what a bat or sonar-robot would 'see' much more quickly."

The study could also affect technology being developed to improve defense systems. "The algorithms developed for this study could potentially be applied to radar echoes of drone swarms in order to lower the probability of detection by enemy radar," Dr. Boonman explains. "Since drones are playing an ever more prominent role in warfare, our biological study could spawn new ideas for the defense industry."

Credit: 
American Friends of Tel Aviv University

Neutrons optimize high efficiency catalyst for greener approach to biofuel synthesis

image: Illustration of the optimized zeolite catalyst (NbAlS-1), which enables a highly efficient chemical reaction to create butene, a renewable source of energy, without expending high amounts of energy for the conversion.

Image: 
ORNL/Jill Hemman

OAK RIDGE, Tenn., December 16, 2019--Researchers led by the University of Manchester have designed a catalyst that converts biomass into fuel sources with remarkably high efficiency and offers new possibilities for manufacturing advanced renewable materials.

Neutron scattering experiments at the Department of Energy's Oak Ridge National Laboratory played a key role in determining the chemical and behavioral dynamics of a zeolite catalyst--zeolite is a common porous material used in commercial catalysis--to provide information for maximizing its performance.

The optimized catalyst, called NbAlS-1, converts biomass-derived raw materials into light olefins--a class of petrochemicals such as ethene, propene, and butene, used to make plastics and liquid fuels. The new catalyst has an impressive yield of more than 99% but requires significantly less energy compared to its predecessors. The team's research is published in the journal Nature Materials.

"Industry relies heavily on the use of light olefins from crude oil, but their production can have negative impacts on the environment," said lead author Longfei Lin at the University of Manchester. "Previous catalysts that produced butene from purified oxygenated compounds required lots of energy, or extremely high temperatures. This new catalyst directly converts raw oxygenated compounds using much milder conditions and with significantly less energy and is more environmentally friendly."

Biomass is organic matter that can be converted and used for fuel and feedstock. It is commonly derived from leftover agricultural waste such as wood, grass, and straw that gets broken down and fed into a catalyst that converts it to butene--an energy-rich gas used by the chemical and petroleum industries to make plastics, polymers and liquid fuels that are otherwise produced from oil.

Typically, a chemical reaction requires a tremendous amount of energy to break the strong bonds formed between elements such as carbon, oxygen, and hydrogen. Breaking some bonds requires heating to 1,000°C (more than 1,800°F) or hotter.

For a greener design, the team doped the catalyst by replacing some of the zeolite's silicon atoms with niobium and aluminum. The substitution creates a chemically unbalanced state that promotes bond separation and radically reduces the need for high-temperature treatments.

"The chemistry that takes place on the surface of a catalyst can be extremely complicated. If you're not careful in controlling things like pressure, temperature, and concentration, you'll end up making very little butene," said ORNL researcher Yongqiang Cheng. "To obtain a high yield, you have to optimize the process, and to optimize the process you have to understand how the process works."

Neutrons are well suited to study chemical reactions of this type due to their deeply penetrating properties and their acute sensitivity to light elements such as hydrogen. The VISION spectrometer at ORNL's Spallation Neutron Source enabled the researchers to determine precisely which chemical bonds were present and how they were behaving based on the bonds' vibrational signatures. That information allowed them to reconstruct the chemical sequence needed to optimize the catalyst's performance.

"There's a lot of trial and error associated with designing such a high-performance catalyst such as the one we've developed," said corresponding author Sihai Yang at University of Manchester. "The more we understand how catalysts work, the more we can guide the design process of next-generation materials."

Synchrotron X-ray diffraction measurements at the UK's Diamond Light Source were used to determine the catalyst's atomic structure, and complementary neutron scattering measurements were made at the Rutherford Appleton Laboratory's ISIS Neutron and Muon Source.

Credit: 
DOE/Oak Ridge National Laboratory

Research brief: New methods promise to speed up development of new plant varieties

image: Researchers triggered seedlings to develop new shoots that contain edited genes.

Image: 
Kit Leffler, University of Minnesota.

A University of Minnesota research team recently developed new methods that will make it significantly faster to produce gene-edited plants. They hope to alleviate a long-standing bottleneck in gene editing and, in the process, make it easier and faster to develop and test new crop varieties with two new approaches described in a paper recently published in Nature Biotechnology.

Despite dramatic advances in scientists' ability to edit plant genomes using gene-editing tools such as CRISPR and TALENs, researchers were stuck using an antiquated approach -- tissue culture. It has been in use for decades, is costly and labor intensive, and requires precise work in a sterile environment. Researchers use tissue culture to deliver genes and gene-editing reagents -- the chemicals that drive the reaction -- to plants.

"A handful of years ago the National Academy of Sciences convened a meeting of plant scientists, calling on the community to solve the tissue culture bottleneck and help realize the potential of gene editing in plants," said Dan Voytas, professor in Genetics, Cell Biology and Development in the College of Biological Sciences and senior author on the paper. "We have advanced genome editing technology but we needed a novel way to efficiently deliver gene editing reagents to plants. The methods in this paper present a whole new way of doing business."

The new methods will:

drastically reduce the time needed to edit plant genes from as long as nine months to as short as a few weeks;

work in more plant species than was possible using tissue culture, which is limited to specific species and varieties;

allow researchers to produce genetically edited plants without the need of a sterile lab, making it a viable approach for small labs and companies to utilize.

To eliminate the arduous work that goes into gene-editing through tissue culture, co-first authors Ryan Nasti and Michael Maher developed new methods that leverage important plant growth regulators responsible for plant development.

Using growth regulators and gene editing reagents, researchers trigger seedlings to develop new shoots that contain edited genes. Researchers collect seeds from these gene-edited shoots and continue experiments. No cell cultures needed.

The approaches differ in how the growth regulators are applied and at what scale. The approach developed by Nasti allows small-scale rapid testing -- with results in weeks instead of months or years -- of different combinations of growth regulators. "This approach allows for rapid testing so that researchers can optimize combinations of growth regulators and increase their efficacy," he said.

Maher used the same basic principles to make the process more accessible by eliminating the need for a sterile lab environment. "With this method, you don't need sterile technique. You could do this in your garage," he said. He added that this technique opens up the possibility that smaller research groups with fewer resources can gene-edit plants and test how well they perform.

"Nasti and Maher have democratized plant gene editing. It will no longer take months in a sterile lab with dozens of people in tissue culture hoods," Voytas said.

The researchers used a tobacco species as their model, but have already demonstrated the method works in grape, tomato and potato plants. They believe the findings will likely transfer across many species. Plant geneticists and agricultural biotechnologists aim to ensure stable food sources for a growing global population in a warming climate, where pest outbreaks and extreme weather events are commonplace. These new methods will allow them to work more efficiently.

Credit: 
University of Minnesota

Simple test could prevent fluoride-related disease

image: The test tube on the left shows a real positive result from water sampled in Costa Rica. The middle tube is a negative control. The tube on the right is a positive control.

Image: 
Julius B. Lucks/Northwestern University

With one drop of water, test detects fluoride levels that exceed EPA standards

Test costs pennies to make, is easy to read and requires no expertise to use

Method works by using an RNA riboswitch, which flips when fluoride is not present

Researchers tested device in Costa Rica, where fluorosis has been reported

EVANSTON, Ill. -- Northwestern University synthetic biologists developed a simple, inexpensive new test that can detect dangerous levels of fluoride in drinking water.

Costing just pennies to make, the system only needs a drip and a flick: Drip a tiny water droplet into a prepared test tube, flick the tube once to mix it and wait. If the water turns yellow, then an excessive amount of fluoride -- exceeding the EPA's most stringent regulatory standards -- is present.

This method is starkly different from current tests, which cost hundreds of dollars and often require scientific expertise to use.

The researchers tested the system both in the laboratory at Northwestern and in the field in Costa Rica, where fluoride is naturally abundant near the Irazu volcano. When consumed in high amounts over long periods of time, fluoride can cause skeletal fluorosis, a painful condition that hardens bones and joints.

Americans tend to think of the health benefits of small doses of fluoride that strengthen teeth. But elsewhere in the world, specifically across parts of Africa, Asia and Central America, fluoride naturally occurs at levels that are dangerous to consume.

"In the United States, we hear about fluoride all the time because it's in toothpaste and the municipal water supply," said Northwestern's Julius Lucks, who led the project. "It makes calcium fluoride, which is very hard, so it strengthens our tooth enamel. But above a certain level, fluoride also hardens joints. This mostly isn't an issue in the U.S. But it can be a debilitating problem in other countries if not identified and addressed."

The research was published online last week (Dec. 13) in the journal ACS Synthetic Biology.

Lucks is an associate professor of chemical and biological engineering in the McCormick School of Engineering and a member of Northwestern's Center for Synthetic Biology. The work was performed in collaboration with Michael Jewett, professor of chemical and biological engineering in McCormick and director of the Center for Synthetic Biology. Graduate students Walter Thavarajah, Adam Silverman and Matthew Verosloff spearheaded the research.

Field test success

Fluoride is a naturally occurring element, which can seep out of bedrock into groundwater. Also found in volcanic ash, fluoride is particularly abundant in regions surrounding volcanoes.

Home to three volcanic range systems, Costa Rica seemed like a natural place to test the device in the field. Matthew Verosloff, a Ph.D. candidate in Lucks' laboratory, traveled to Costa Rica and collected water samples from a variety of sources -- mud puddles, ponds and ditches.

"Every test on these field samples worked," Lucks said. "It's exciting that it works in the lab, but it's much more important to know that it works in the field. We want it to be an easy, practical solution for people who have the greatest need. Our goal is to empower individuals to monitor the presence of fluoride in their own water."

How it works

Although the device is simple to use, the prepared test tube houses a sophisticated synthetic biology reaction. Lucks has spent years working to understand RNA folding mechanisms. In his new test, he puts this folding mechanism to work.

"RNA folds into a little pocket and waits for a fluoride ion," he explained. "The ion can fit perfectly into that pocket. If the ion shows up, then RNA expresses a gene that turns the water yellow. If the ion doesn't show up, then RNA changes shape and stops the process. It's literally a switch."

According to Lucks, organisms already perform this function in nature. "Fluoride is toxic to bacteria," he said. "They use RNA to sense fluoride in the cell, then they make a protein to pump it out and detoxify."

Lucks' system works in the same way. But instead of producing a protein pump, his test produces a protein enzyme that makes a yellow pigment, so people can see the results with a simple glance.

Lucks' team freeze-dried the RNA reaction, which looks like a tiny cotton ball, and put it into a test tube. In this form, the reaction is safe and shelf-stable. A small pipette accompanies the test tube. When placed in water, the pipette absorbs exactly 20 microliters -- just the small drop that's needed to rehydrate the reaction. From there, it takes two hours to get a result, which Lucks intends to accelerate in future iterations.
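The readout described above is, logically, a threshold switch: enough fluoride keeps the RNA in the shape that expresses the pigment enzyme, and the tube turns yellow. A minimal sketch of that logic, assuming a trigger level of 2.0 mg/L (EPA's secondary standard for fluoride; the study's exact trigger concentration is not stated here):

```python
# Sketch of the test's readout logic. The riboswitch acts as a
# conditional: with enough fluoride bound, the RNA keeps the fold that
# expresses the yellow-pigment enzyme; otherwise it refolds and
# expression stops. The 2.0 mg/L trigger is an assumption (EPA's
# secondary drinking-water standard), not a figure from the study.

EPA_SECONDARY_STANDARD_MG_PER_L = 2.0  # assumed trigger level

def tube_turns_yellow(fluoride_mg_per_l):
    """Return True if this sketch's riboswitch would express the
    yellow pigment at the given fluoride concentration."""
    return fluoride_mg_per_l >= EPA_SECONDARY_STANDARD_MG_PER_L

print(tube_turns_yellow(0.7))  # False: typical fluoridated tap water
print(tube_turns_yellow(4.0))  # True: hazardous level, tube turns yellow
```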

"We're currently limited to testing for fluoride," said Thavarajah, the paper's first author. "But we're trying to engineer other RNAs to detect all sorts of targets."

Credit: 
Northwestern University

New review study shows that egg-industry-funded research downplays danger of cholesterol

image: The graph tracks the rise of egg-industry-funded cholesterol studies over time.

Image: 
Physicians Committee for Responsible Medicine

WASHINGTON--Controversial headlines claiming that eggs don't raise cholesterol levels could be the product of faulty industry-funded research, according to a new review published in the American Journal of Lifestyle Medicine.

Researchers with the Physicians Committee for Responsible Medicine examined all research studies published from 1950 to March of 2019 that evaluated the effect of eggs on blood cholesterol levels. The researchers examined funding sources and whether those sources influenced study findings.

The results show that prior to 1970, industry played no role in cholesterol research. The percentage of industry-funded studies increased over time, from 0 percent in the 1950s to 60 percent in 2010-2019.

"In decades past, the egg industry played little or no role in cholesterol research, and the studies' conclusions clearly showed that eggs raise cholesterol," says study author Neal Barnard, MD, president of the Physicians Committee for Responsible Medicine. "In recent years, the egg industry has sought to neutralize eggs' unhealthy image as a cholesterol-raising product by funding more studies and skewing the interpretation of the results."

Overall, more than 85 percent of the studies--whether funded by industry or not--showed that eggs have unfavorable effects on blood cholesterol. Industry-funded studies, however, were more likely to downplay these findings. That is, although the study data showed cholesterol increases, study conclusions often reported that eggs had no effect at all. Approximately half (49 percent) of industry-funded intervention studies reported conclusions that were discordant with actual study results, compared with 13 percent of non-industry-funded trials.

For example, in one 2014 study in college freshmen, the addition of two eggs at breakfast, five days a week over 14 weeks, was associated with a mean LDL cholesterol increase of 15 mg/dL. Despite this rise, investigators concluded that the "additional 400 mg/day of dietary cholesterol did not negatively impact blood lipids." The increase did not reach statistical significance, meaning the observed rise could not be distinguished from chance at the conventional 5 percent level.

"It would have been appropriate for the investigators to report that the cholesterol increases associated with eggs could have been due to chance. Instead, they wrote that the increases did not happen at all. Similar conclusions were reported in more than half of industry-funded studies," adds Dr. Barnard.

These studies have even influenced policymakers. In 2015, the U.S. Dietary Guidelines Advisory Committee reported that "available evidence shows no appreciable relationship between consumption of dietary cholesterol and serum cholesterol...." After reviewing the evidence, however, the government did not carry that statement forward in the final Guidelines, which called for eating "as little dietary cholesterol as possible."

"The egg industry has mounted an intense effort to try to show that eggs do not adversely affect blood cholesterol levels," adds Dr. Barnard. "For years, faulty studies on the effects of eggs on cholesterol have duped the press, public, and policymakers to serve industry interests."

Several meta-analyses have concluded that egg consumption does raise cholesterol levels. According to a 2019 meta-analysis, eating an egg each day raises low density lipoprotein (LDL, or "bad") cholesterol by about nine points. The study, published in the American Journal of Clinical Nutrition, combined the findings of 55 prior studies, finding that every 100 milligrams of added dietary cholesterol (approximately half an egg) raised LDL ("bad") cholesterol levels by about 4.5 mg/dL. A 2019 JAMA study of nearly 30,000 participants found that eating even small amounts of eggs daily significantly raised the risk for both cardiovascular disease and premature death from all causes.
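The meta-analysis figures above imply a simple dose-response calculation. This sketch assumes one egg contributes roughly 200 mg of dietary cholesterol (consistent with the 2014 study's two eggs being counted as 400 mg/day):

```python
# Checking the arithmetic from the 2019 American Journal of Clinical
# Nutrition meta-analysis: ~4.5 mg/dL of LDL per 100 mg of added dietary
# cholesterol. The 200 mg cholesterol content per egg is an assumption.

LDL_RISE_PER_100MG = 4.5        # mg/dL LDL per 100 mg dietary cholesterol
CHOLESTEROL_PER_EGG_MG = 200.0  # assumed cholesterol in one egg

def ldl_rise(dietary_cholesterol_mg):
    """Predicted LDL increase (mg/dL) for added dietary cholesterol."""
    return dietary_cholesterol_mg / 100.0 * LDL_RISE_PER_100MG

print(ldl_rise(CHOLESTEROL_PER_EGG_MG))  # 9.0 -> the "about nine points"
```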

Of 153 studies analyzed in the American Journal of Lifestyle Medicine report, 139 showed that eggs raise blood cholesterol (68 of these reached statistical significance, meaning the results were very unlikely to be due to chance). No studies reported significant net decreases in cholesterol concentrations. Non-significant net cholesterol decreases were reported by six non-industry-funded and eight industry-funded studies.

Credit: 
Physicians Committee for Responsible Medicine

Having a psychotic disorder may increase decline of some areas of cognition over adulthood

A new study has shown that relative to participants without a psychotic disorder, those diagnosed with a disorder were consistently impaired across all areas of cognitive (memory and thinking) ability measured. The comparison also suggested that declines in some cognitive areas might worsen with age.

The comparison was cross-sectional, conducted 20 years after participants were diagnosed with their first psychotic episode.

Crucially, the study found that cognitive impairment of participants with a psychotic disorder was linked to their symptoms, particularly loss of interest in everyday activities, and also negative changes in their employment.

Academics from City, University of London, Icahn School of Medicine at Mount Sinai, New York, Stony Brook University, New York and others, conducted the study as part of the Suffolk County Mental Health Project in the United States. The project began in 1989 in order to find out what challenges people diagnosed with psychotic disorders may face throughout their lives.

Previous research has shown that cognitive impairment is a core feature of schizophrenia and is associated with poor social and vocational outcomes for those affected. However, little was previously known about how cognitive impairment may progress in the longer term in schizophrenia and other psychotic disorders, as studies extending beyond 10 years after first diagnosis are rare.

The study involved 445 participants who had been admitted to psychiatric inpatient units within Suffolk County. Participants returned to complete cognitive testing at the two- and 20-year follow-ups after their first episode of psychosis. They undertook a range of tests measuring different aspects of cognitive functioning, including vocabulary knowledge, the ability to recall words from memory, memory for factual information and past experiences, and the ability to conceptualise across ideas and make decisions. They also took part in clinical interviews that assessed their symptom levels and how well they were doing socially and functionally, in terms of vocation and employment.

Twenty years after their diagnosis, cognitive functioning of those with a psychotic disorder was compared with a group of non-psychotic participants from Suffolk County who were matched to them by gender and age.

Co-first author of the study, Dr Anne-Kathrin Fett, Senior Lecturer in Psychology at City, University of London, said:

"Our study provides the first comprehensive picture of long-term cognitive changes and associated clinical and functional outcomes in psychotic disorders, and is an important step toward providing clarity on what challenges people with these disorders face in the community.

"However, it is important to note that while there was a general downward trend, participants varied in terms of cognitive changes and some also achieved improvement over the follow-up period. We need to find out what can influence cognitive functioning positively. We do not yet have medication, but lifestyle changes may be able to improve cognition long-term to some extent.

"Importantly replication and further studies will be necessary to offer directions for the development of strategies to help prevent the progressive deterioration of cognitive functioning in later stages of psychotic illness."

The study also found that schizophrenia spectrum disorders and other psychotic conditions -- including psychotic bipolar disorder, major depression with psychosis and substance-induced psychosis -- showed similar trajectories of cognitive decline over the 18 years between the two- and 20-year follow-ups.

Credit: 
City St George’s, University of London

The uncertain role of natural gas in the transition to clean energy

A new MIT study examines the opposing roles of natural gas in the battle against climate change -- as a bridge toward a lower-emissions future, but also a contributor to greenhouse gas emissions.

Natural gas, which is mostly methane, is viewed as a significant "bridge fuel" to help the world move away from the greenhouse gas emissions of fossil fuels, since burning natural gas for electricity produces about half as much carbon dioxide as burning coal. But methane is itself a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas. Increasing its usage, as a strategy for decarbonizing the electricity supply, will also increase the potential for such "fugitive" methane emissions, although there is great uncertainty about how much to expect. Recent studies have documented the difficulty in even measuring today's emissions levels.

This uncertainty adds to the difficulty of assessing natural gas' role as a bridge to a net-zero-carbon energy system, and in knowing when to transition away from it. But strategic choices must be made now about whether to invest in natural gas infrastructure. This inspired MIT researchers to quantify timelines for cleaning up natural gas infrastructure in the United States or accelerating a shift away from it, while recognizing the uncertainty about fugitive methane emissions.

The study shows that in order for natural gas to be a major component of the nation's effort to meet greenhouse gas reduction targets over the coming decade, present methods of controlling methane leakage would have to improve by anywhere from 30 to 90 percent. Given current difficulties in monitoring methane, achieving those levels of reduction may be a challenge. Methane is a valuable commodity, so companies that produce, store, and distribute it already have some incentive to minimize losses. Even so, intentional natural gas venting and flaring (which emits carbon dioxide) continues.

The study also finds that policies favoring a direct move to carbon-free power sources, such as wind, solar, and nuclear, could meet the emissions targets without requiring such improvements in leakage mitigation, even though natural gas use would still be a significant part of the energy mix.

The researchers compared several different scenarios for curbing methane from the electric generation system in order to meet a target for 2030 of a 32 percent cut in carbon dioxide-equivalent emissions relative to 2005 levels, which is consistent with past U.S. commitments to mitigate climate change. The findings appear today in the journal Environmental Research Letters, in a paper by MIT postdoc Magdalena Klemun and Associate Professor Jessika Trancik.

Methane is a much stronger greenhouse gas than carbon dioxide, although how much stronger depends on the timeframe considered. Methane traps heat far more effectively, but it persists in the atmosphere for decades rather than centuries. Averaged over a 100-year timeframe, the comparison most widely used, methane is approximately 25 times more potent than carbon dioxide; averaged over a 20-year timeframe, it is 86 times stronger.
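The effect of the timeframe choice on the accounting can be sketched in a few lines. This is a minimal illustration, not part of the study's model; the two global warming potential (GWP) values are the ones cited above, and the one-tonne leak is an arbitrary example quantity.

```python
# Illustrative only: how the GWP timeframe changes the CO2-equivalent
# accounting of a given methane emission. Values cited in the article.
GWP_100 = 25  # 100-year timeframe: methane ~25x CO2
GWP_20 = 86   # 20-year timeframe: methane ~86x CO2

def co2_equivalent(methane_tonnes, gwp):
    """CO2-equivalent mass (tonnes) of a methane emission under a given GWP."""
    return methane_tonnes * gwp

# The same 1-tonne methane leak counts as 25 tonnes CO2e over 100 years
# but 86 tonnes CO2e over 20 years -- more than a threefold difference.
print(co2_equivalent(1.0, GWP_100))  # 25.0
print(co2_equivalent(1.0, GWP_20))   # 86.0
```

That threefold gap is why scenarios emphasizing near-term warming demand much deeper methane cuts than those using the 100-year convention.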

The actual leakage rates associated with the use of methane are widely distributed, highly variable, and very hard to pin down. Using figures from a variety of sources, the researchers found the overall range to be somewhere between 1.5 percent and 4.9 percent of the gas produced and distributed. Some of this leakage happens right at the wells, some during processing and from storage tanks, and some in the distribution system, so different monitoring systems and mitigation measures may be needed to address the different conditions.
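To get a feel for what that 1.5 to 4.9 percent range implies, the sketch below converts a leakage rate into CO2-equivalent emissions per tonne of methane actually delivered. This is a back-of-the-envelope illustration under the GWP values cited earlier, not a calculation from the paper.

```python
# Illustrative back-of-the-envelope: CO2-equivalent of fugitive methane
# per tonne of gas delivered, for a given upstream leakage rate and GWP.
def leakage_co2e_per_tonne_delivered(leak_rate, gwp):
    """CO2e (tonnes) from leakage per tonne of methane delivered.

    If a fraction `leak_rate` of produced gas escapes, delivering 1 tonne
    requires producing 1 / (1 - leak_rate) tonnes, of which
    leak_rate / (1 - leak_rate) tonnes leak to the atmosphere.
    """
    leaked = leak_rate / (1.0 - leak_rate)
    return leaked * gwp

# Low and high ends of the range from the study, under both GWP timeframes
# (25 over 100 years, 86 over 20 years, as cited in the article).
for rate in (0.015, 0.049):
    for gwp in (25, 86):
        co2e = leakage_co2e_per_tonne_delivered(rate, gwp)
        print(f"leak rate {rate:.1%}, GWP {gwp}: {co2e:.2f} t CO2e per t delivered")
```

Even before the gas is burned, the high end of the leakage range under the 20-year GWP adds several tonnes of CO2-equivalent per tonne delivered, which is why the leakage estimate dominates the overall uncertainty.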

"Fugitive emissions can be escaping all the way from where natural gas is being extracted and produced, all the way along to the end user," Trancik says. "It's difficult and expensive to monitor it along the way."

That in itself poses a challenge. "An important thing to keep in mind when thinking about greenhouse gases," she says, "is that the difficulty in tracking and measuring methane is itself a risk." If researchers are unsure how much methane there is and where it is, it's hard for policymakers to formulate effective strategies to mitigate it. This study's approach is to embrace the uncertainty instead of being hamstrung by it, Trancik says: the uncertainty itself should inform current strategies, by motivating investments in leak detection to reduce uncertainty, or a faster transition away from natural gas.

"Emissions rates for the same type of equipment, in the same year, can vary significantly," adds Klemun. "It can vary depending on which time of day you measure it, or which time of year. There are a lot of factors."

Much attention has focused on so-called "super-emitters," but even these can be difficult to track down. "In many data sets, a small fraction of point sources contributes disproportionately to overall emissions," Klemun says. "If it were easy to predict where these occur, and if we better understood why, detection and repair programs could become more targeted." But achieving this will require additional data with high spatial resolution, covering wide areas and many segments of the supply chain, she says.

The researchers looked at the whole range of uncertainties, from how much methane is escaping to how to characterize its climate impacts, under a variety of different scenarios. One approach places strong emphasis on replacing coal-fired plants with natural gas, for example; others increase investment in zero-carbon sources while still maintaining a role for natural gas.

In the first approach, methane emissions from the U.S. power sector would need to be reduced by 30 to 90 percent from today's levels by 2030, along with a 20 percent reduction in carbon dioxide. Alternatively, that target could be met through even greater carbon dioxide reductions, such as a faster expansion of low-carbon electricity, without requiring any reduction in natural gas leakage rates. The higher end of the 30 to 90 percent range reflects greater emphasis on methane's short-term warming contribution.

One question raised by the study is how much to invest in developing technologies and infrastructure for safely expanding natural gas use, given the difficulties in measuring and mitigating methane emissions, and given that virtually all scenarios for meeting greenhouse gas reduction targets call for phasing out, by mid-century, natural gas that is not paired with carbon capture and storage. "A certain amount of investment probably makes sense to improve and make use of current infrastructure, but if you're interested in really deep reduction targets, our results make it harder to make a case for that expansion right now," Trancik says.

The detailed analysis in this study should provide guidance for regulators and policymakers at every level, from local and regional bodies to federal agencies, the researchers say. The insights also apply to other economies relying on natural gas. The best choices and exact timelines are likely to vary depending on local circumstances, but the study frames the issue by examining a variety of possibilities that include the extremes in both directions: investing mostly in improving natural gas infrastructure while expanding its use, or accelerating a move away from it.

Credit: 
Massachusetts Institute of Technology