Culture

Mimicking the ultrastructure of wood with 3D-printing for green products

image: 3D printing with sustainable Swedish forest materials. The microscopy images of real wood tissue and the 3D printed version show how the researchers mimicked the real wood's cellular architecture. The printed version is at a larger scale for ease of handling and display, but the researchers are able to print at any scale.

Image: 
Yen Strandqvist/Chalmers University of Technology

Researchers at Chalmers University of Technology, Sweden, have succeeded in 3D printing with a wood-based ink in a way that mimics the unique 'ultrastructure' of wood. Their research could revolutionise the manufacturing of green products. By emulating the natural cellular architecture of wood, they can now create green products derived from trees with unique properties - everything from clothes, packaging and furniture to healthcare and personal care products.

The way in which wood grows is controlled by its genetic code, which gives it unique properties in terms of porosity, toughness and torsional strength. But wood has limitations when it comes to processing. Unlike metals and plastics, it cannot be melted and easily reshaped; instead it must be sawn, planed or bent. Processes which do involve conversion, to make products such as paper, card and textiles, destroy the underlying ultrastructure, or architecture, of the wood cells. But the new technology now presented allows wood to be, in effect, grown into exactly the shape desired for the final product, through the medium of 3D printing.

By converting wood pulp into a nanocellulose gel, researchers at Chalmers had already succeeded in creating a type of ink that could be 3D printed. Now, they present a major advance - successfully interpreting wood's genetic code and digitising it so that it can instruct a 3D printer.

This means that the arrangement of the cellulose nanofibrils can now be precisely controlled during the printing process to replicate the desirable ultrastructure of wood. Being able to manage the orientation and shape of the fibrils means the researchers can capture the useful properties of natural wood.

"This is a breakthrough in manufacturing technology. It allows us to move beyond the limits of nature, to create new sustainable, green products. It means that those products which today are already forest-based can now be 3D printed, in a much shorter time. And the metals and plastics currently used in 3D printing can be replaced with a renewable, sustainable alternative," says Professor Paul Gatenholm, who has led this research through the Wallenberg Wood Science Centre at Chalmers.

A further advance is the addition of hemicellulose, a natural component of plant cells, to the nanocellulose gel. The hemicellulose acts as a glue, giving the cellulose sufficient strength to be useful, in a similar manner to the natural process of lignification, through which cell walls are built.

The new technology opens up a whole new area of possibilities. Wood-based products could now be designed and 'grown' to order - at a vastly reduced timescale compared with natural wood.

Paul Gatenholm's group has already developed a prototype for an innovative packaging concept. They printed out honeycomb structures, with chambers in between the printed walls, and then managed to encapsulate solid particles inside those chambers. Cellulose has excellent oxygen barrier properties, meaning this could be a promising method for creating airtight packaging for foodstuffs or pharmaceuticals for example.

"Manufacturing products in this way could lead to huge savings in terms of resources and harmful emissions," he says. "Imagine, for example, if we could start printing packaging locally. It would mean an alternative to today's industries, with heavy reliance on plastics and C02-generating transport. Packaging could be designed and manufactured to order without any waste".

They have also developed prototypes for healthcare products and clothing. Another area where Paul Gatenholm sees huge potential for the technology is in space, believing that it offers the perfect first test bed to develop the technology further.

"The source material of plants is fantastically renewable, so the raw materials can be produced on site during longer space travel, or on the moon or on Mars. If you are growing food, there will probably be access to both cellulose and hemicellulose," says Paul Gatenholm.

The researchers have already successfully demonstrated their technology at a workshop at the European Space Agency, ESA, and are also working with Florida Tech and NASA on another project, including tests of materials in microgravity.

"Traveling in space has always been the catalyst for material development on earth," he says.

Credit: 
Chalmers University of Technology

Some crocs of the past were plant eaters

image: This image shows crocodyliform life reconstructions.

Image: 
Jorge Gonzalez

Based on careful study of tooth remains, researchers have found that ancient groups of crocodyliforms--the group including living and extinct relatives of crocodiles and alligators--were not the carnivores we know today, as reported in the journal Current Biology on June 27. In fact, the evidence suggests that a veggie diet arose in the distant cousins of modern crocodylians at least three times.

"The most interesting thing we discovered was how frequently it seems extinct crocodyliforms ate plants," says Keegan Melstrom (@gulosuchus) of the University of Utah. "Complex teeth, which we infer to indicate herbivory, appear in the extinct relatives of crocodiles at least three times and maybe as many as six in our dataset alone."

All living crocodylians possess a similar general morphology and ecology to match their lifestyle as semiaquatic generalist carnivores, which includes relatively simple, conical teeth. It was clear from the start that extinct species showed a different pattern, including species with many specializations not seen today. One such specialization is a feature known as heterodonty, regionalized differences in tooth size or shape.

"Carnivores possess simple teeth whereas herbivores have much more complex teeth," Melstrom says. "Omnivores, organisms that eat both plant and animal material, fall somewhere in between. Part of my earlier research showed that this pattern holds in living reptiles that have teeth, such as crocodylians and lizards. So these results told us that the basic pattern between diet and teeth is found in both mammals and reptiles, despite very different tooth shapes, and is applicable to extinct reptiles."

To infer what those extinct crocodyliforms most likely ate, Melstrom and his advisor Randall Irmis compared the tooth complexity of extinct crocodyliforms with that of living animals, using a method originally developed for living mammals. Overall, they measured 146 erupted teeth from 16 different taxa of extinct crocodyliforms at a resolution of 25 data rows per tooth.

Using a combination of quantitative dental measurements and other morphological features, the researchers reconstructed the diets of those extinct crocodyliforms. The results show that those animals had a wider range of dental complexities and presumed dietary ecologies than had been appreciated previously.

Plant-eating crocodyliforms appeared early in the evolutionary history of the lineage, the researchers conclude, shortly after the end-Triassic mass extinction, and persisted until the end-Cretaceous mass extinction. Their analysis suggests that herbivory arose independently a minimum of three times, and possibly six times, in Mesozoic crocodyliforms.

"Our work demonstrates that extinct crocodyliforms had an incredibly varied diet," Melstrom says. "Some were similar to living crocodylians and were primarily carnivorous, others were omnivores, and still others likely specialized in plants. The herbivores lived on different continents at different times, some alongside mammals and mammal relatives, and others did not. This suggests that an herbivorous crocodyliform was successful in a variety of environments!"

Melstrom says they are continuing to reconstruct the diets of extinct crocodyliforms, including in fossilized species that are missing teeth. He also wants to understand why the extinct relatives of crocodiles diversified so radically after one mass extinction but not another and whether dietary ecology could have played a role.

Credit: 
Cell Press

Growing embryonic tissues on a chip

image: Colony of human embryonic stem cells that was exposed to a microfluidic gradient of a morphogen (BMP4), resulting in the establishment of different cell types arranged in layers.

Image: 
Andrea Manfrin, EPFL

It's no surprise that using human embryos for biological and medical research comes with many ethical concerns. Correct though it is to proceed with caution in these matters, the fact is that much science would benefit from being able to study human biology more accurately.

One solution lies with alternative tools - what scientists call in vitro models. But despite some advancements with adult tissues, when it comes to modelling the early developmental processes of the human embryo, things become complicated.

Now, scientists at EPFL's Institute of Bioengineering have simulated aspects of embryo formation in vitro starting from embryonic stem cells.

"A tricky problem in reliably constructing tissues outside of an organism in general is how to present key signaling molecules, also termed morphogens, to the cells in culture at the right time and dose," says EPFL Professor Matthias Lütolf, whose research group led the research. "Simply exposing a collection of stem cells to a single concentration of a morphogen ends in uncontrolled morphogenesis because the cells lack important instructions."

But in a developing embryo, stem cells receive a highly dynamic range of morphogen concentrations from so-called "signaling centers". It is this gradient of morphogens that tells stem cells what type of specialized cell and tissue to become.

To implement this principle, Dr Andrea Manfrin in Lütolf's lab developed a method for exposing human embryonic stem cells in culture to gradients of morphogens, mimicking the real-life conditions of gastrulation - an early stage of the developing embryo where its cells begin to transform into different cell types and tissues.

The method involves growing the stem cells in a microfluidic device, which is a chip with small channels that allow the precise control of tiny amounts of fluid. The researchers grew stem cells in a culture chamber on the microfluidic chip, and were able to expose them to carefully controlled concentration gradients of various morphogens.

The results were impressive: the cells developed and organized into domains of different cell types, depending on the concentration they were exposed to, like they do in the body. In fact, the scientists report that they were able to successfully mimic aspects of gastrulation, paving the way for growing specific human tissues in the lab in a more controlled manner.

"We hypothesized that engineering an artificial signaling center 'ex vivo' could allow us to steer the self-organization of a stem cell population towards a desired outcome," explains Manfrin. "This has obvious advantages for tissue and organ engineering."

These advantages include new tools for drug testing and regenerative medicine. The new technique can also help scientists study processes related to developmental biology - like gastrulation - and could provide alternatives to animal experimentation in some areas of research.

"One of our long-term goals is to engineer organs for transplantation," says Lütolf, who is already working with groups at the Lausanne University Hospital (CHUV) and elsewhere to generate miniaturized organs ('organoids') from patient-derived cells. "We are still far from growing functional organs in a dish; but recent progress in stem cell biology and bioengineering make me optimistic that this can become a reality. The key is to better understand how cells themselves build tissues and organs in the embryo".

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Natural biodiversity protects rural farmers' incomes from tropical weather shocks

image: Frederik Noack, assistant professor of food and resource economics in the Faculty of Land and Food Systems at the University of British Columbia.

Image: 
UBC Media Relations

A big data study covering more than 7,500 households across 23 tropical countries shows that natural biodiversity could be effective insurance for rural farmers against drought and other weather-related shocks.

Frederik Noack, assistant professor of food and resource economics in UBC's Faculty of Land and Food Systems, worked with colleagues from ETH Zurich and the University of Geneva to study whether natural biodiversity helps buffer farmers' incomes against weather shocks.

They found that farmers in areas with greater biodiversity took less of an income hit from droughts than their peers who farmed amid less biodiversity.

Their calculations also indicated that a loss of half the species within a region would double the impact of weather extremes on income.

"We should conserve biodiversity, not just because we like to see tigers and lions, but because it's also an important input for production," said Noack. "It's especially important for people who live in areas where it's hard to get insurance or loans to compensate for environmental shocks. A certain level of biodiversity conservation could be beneficial for people in agriculture, forestry and these sorts of industries."

Access to huge datasets allowed the researchers to compare farmers' actual incomes--using data gathered every three months--with geocoded data indicating the number of plant species in the local environment. They cross-referenced this with weather data through the growing, planting and harvesting seasons.
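
In regression terms, the analysis described above amounts to asking whether a weather shock's effect on income shrinks as local species richness rises. The sketch below illustrates that kind of interaction model on synthetic data; all variable names and numbers are hypothetical placeholders, our own illustration of the approach rather than the study's actual code or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a hypothetical quarterly household panel.
rng = np.random.default_rng(0)
n_households, n_quarters = 200, 8
df = pd.DataFrame({
    "household": np.repeat(np.arange(n_households), n_quarters),
    "quarter": np.tile(np.arange(n_quarters), n_households),
    "drought": rng.integers(0, 2, n_households * n_quarters),               # weather-shock indicator
    "log_species": np.repeat(rng.uniform(1, 5, n_households), n_quarters),  # geocoded species richness
})
# Synthetic incomes: droughts hurt, but less so where species richness is high.
df["log_income"] = (
    8 - 1.0 * df["drought"] + 0.2 * df["drought"] * df["log_species"]
    + rng.normal(0, 0.3, len(df))
)

# The interaction term is the quantity of interest; errors are clustered by household.
model = smf.ols(
    "log_income ~ drought * log_species + C(quarter)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["household"]})
print(model.params[["drought", "drought:log_species"]])
# A positive interaction coefficient means the income loss from a drought shrinks
# as biodiversity rises - the buffering effect the study reports.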

While the results clearly link natural biodiversity to income stabilization during adverse weather, the ways in which biodiversity accomplishes this were beyond the scope of the study. Noack pointed to a variety of processes occurring naturally within local ecosystems that could contribute. For example, an environment that supports several bee species should allow pollination to happen at a broader range of temperatures. The same environment might also support the natural enemies of pests, which would reduce farmers' dependence on pesticides to stabilize their yield.

The research is the first to relate biodiversity directly to incomes at such a scale. Earlier studies have shown that biodiversity can stabilize the production of biomass such as leaves in a field or trees in a forest--but not how that translates into real income for farmers.

"The difference between studying biomass and studying income is that income assigns value to different types of biomass," said Noack. "Price signals our value for specific things, so looking at income converts something that happens in the ecosystem to something that we actually value."

The data came from tropical countries in Latin America, Asia and Africa, where weather extremes are expected to increase as the earth's atmosphere warms. The analysis shows that conservation of natural biodiversity could play an important role in alleviating poverty for rural households with little access to insurance or loans.

The findings also inform the ongoing debate about where conservation efforts should be directed. Conserving large swaths of parkland far from agricultural land may be less effective than conserving smaller pockets in close proximity to farms.

Credit: 
University of British Columbia

Higher salt intake can cause gastrointestinal bloating

A study led by researchers at the Johns Hopkins Bloomberg School of Public Health found that individuals reported more gastrointestinal bloating when they ate a diet high in salt.

The scientists re-analyzed data from a large clinical trial--the Dietary Approaches to Stop Hypertension-Sodium trial (DASH-Sodium)--conducted two decades ago, and found that high sodium intake increased bloating among trial participants. The researchers also found that the high-fiber DASH diet increased bloating among trial participants compared to a low-fiber control diet.

The study was published June 17 in the American Journal of Gastroenterology.

"Bloating is one of the leading gastrointestinal complaints in the U.S. and can be exacerbated in some people by a high-fiber diet; our results suggest that they might be able to reduce that bloating, without compromising on healthy fiber, by lowering their sodium intake," says study senior author Noel Mueller, PhD, MPH, an assistant professor in the Department of Epidemiology at the Bloomberg School.

Bloating is estimated to affect up to a third of U.S. adults overall, and more than 90 percent of those with irritable bowel syndrome. Bloating involves a buildup of excess gas in the gut, which can be attributed to gas-producing gut bacteria breaking down fiber. There is also some evidence that sodium can stimulate bloating. The study by Mueller and colleagues is the first to examine sodium as a cause of bloating in the context of low- and high-fiber diets.

The study analyzed data from the DASH-Sodium trial, which was co-led by Bloomberg School researcher Lawrence Appel, MD, MPH, and sponsored by the National Heart, Lung and Blood Institute. Conducted at four clinical centers during 1998-99, it tested the DASH diet, a high-fiber diet which is relatively low in fat and high in fruits, nuts, and vegetables, against a low-fiber control diet. Each of the two diets was tested at three levels of sodium, and the 412 participants all had high blood pressure at the trial start. The trial was set up chiefly to determine the effect of dietary sodium and other factors on blood pressure, but included data on participants' reports of bloating--data that Mueller and colleagues analyzed for the new study.

The team found that prior to the trial, 36.7 percent of the participants reported bloating, which is more or less in line with national surveys of bloating prevalence. They found too that the high-fiber DASH diet increased the risk of bloating by about 41 percent, compared to the low-fiber control diet--and men were more susceptible to this effect, compared to women. But the scientists also determined that sodium was a factor in bloating. When they combined data from the DASH and control diets, and compared the highest level of sodium intake to the lowest, they found that the high-sodium versions of those diets collectively increased the risk of bloating by about 27 percent compared to the low-sodium versions.

The key implication is that reducing sodium can be an effective way to reduce bloating--and in particular may be able to help people maintain a healthy, high-fiber diet.

How sodium causes bloating is still being studied. Salt causes water retention, and that may be one factor. "We hypothesize that sodium intake also alters the gut microbiome in a manner that modifies bacterial sulfide production," Mueller says.

He and his team are now researching how bloating is affected by the major dietary macronutrients: protein, carbs and fat.

Credit: 
Johns Hopkins Bloomberg School of Public Health

Men ask most of the questions at scientific conferences; we can choose to change that

Even in a majority-women audience at an academic conference, men ask questions most of the time, researchers report on June 27th in The American Journal of Human Genetics. After analyzing participation in Q&As at the American Society of Human Genetics and Biology of Genomes conferences over four years, the study authors found that public discussion and policy change focused on gender equity can make a significant difference.

"When women are 70% of a room, they still asked only about 40% of the questions," says Natalie Telis (@NatalieTelis), then a graduate student at Stanford University. "At that rate an audience would need to be 80% or 90% women before question asking would be split evenly between men and women."

Telis was inspired to study Q&A participation after a conference she attended as an undergraduate. She asked one question, but every other question she heard that day was asked by a man. To determine whether the ratio of questions asked was representative of the gender makeup of the audience, Telis and co-author Emily Glassberg (@ecgberg) recorded data about who asked questions after presentations they attended at seven conferences.

When Telis shared some of their findings online during a conference in 2015, it immediately sparked conversation and a policy change: the first question during every Q&A would come from a trainee. The following year, the conversation had quieted, but the policy remained, providing a classic experimental setup.

At the 2017 meeting of the American Society of Human Genetics, data were collected through a crowdsourced approach. Any conference attendee could record data about who was asking questions after the presentations he or she attended. Recorders' data overlapped but didn't exactly match--some recorders may have left mid-Q&A or missed a question while recording data for the previous one, for example. Telis and Glassberg had to reconstruct the series of questions and found a solution in the tools of computational biology.

"We thought, we're trying to figure out a sequence, and we have some broken up chunks of that sequence. That sounds exactly like sequence alignment," says Telis. "We ended up creating this modified approach where we would align evidence for a question by time."

Telis and Glassberg used both the program genderizeR and U.S. census data to infer the gender makeup of the audience and presenters. Although this work focuses on representation across a simple binary, the authors acknowledge that future work should analyze effects from self-reported gender identity, race, ethnicity, and other demographics.

Telis has already seen the positive impact of her work. "I had tons of people tell me or tweet at me that they asked their first questions after hearing about this work," she says. Their study may provide a framework for future research about representation in the scientific community. "Making a choice and then evaluating its contribution to change is a critical part of experiment design. I hope that this is the start of a longer trend of us asking questions about the genetics community we want to create and how we create it."

Credit: 
Cell Press

Order from chaos: Australian vortex studies are first proof of decades-old theory

image: This is professor Matt Davis (University of Queensland)

Image: 
FLEET

Two Australian studies published this week offer the first proof of a 70-year-old theory of turbulence.

"The studies confirm a seminal theory of the formation of large-scale vortices from turbulence in 2D fluid flow, where the large vortices emerge from an apparent chaos of smaller vortices," says author Prof Matt Davis, FLEET's lead on the University of Queensland paper.

Fluids restricted to flow in two-dimensions can be observed in systems ranging from electrons in semiconductors, to the surface of soap bubbles, to atmospheric phenomena such as cyclones.

"One of the commonly observed features in such 2D flow is the formation of large-scale swirling motion of the fluid from the initially chaotic swirling motion typical of turbulent flow, such as Jupiter's famous Great Red Spot," says the Monash study's lead author, Shaun Johnstone.

Turbulence, with its seemingly random and chaotic motion of the fluid, is a notoriously difficult problem, for which there is no general theoretical description. (In fact, the Clay Mathematics Institute offers a million-dollar prize to anyone who can come up with a theory of turbulence.)

There is, however, a simple theory, proposed in 1949 by the Nobel laureate Lars Onsager, to explain the formation of large-scale vortex motion from initially turbulent 2D flow.

Despite the appeal of Onsager's physical picture of 2D turbulence, it can only make quantitative predictions for one special type of fluid: a 'superfluid', which flows without any viscosity or drag, and which can only be realised at extremely low temperatures. This had made testing Onsager's theory difficult, until now.

"The study is broadly relevant to the emerging research field of non-equilibrium physics, and more specifically relevant to study of superfluids and superconductors," says author Prof Kris Helmerson, who works with Johnstone in Monash's School of Physics and Astronomy.

The new research is described in two papers out in Science today, with one experimental study led from FLEET's Monash University node, and the other led from an EQUS/FLEET collaboration at the University of Queensland.

WHY VORTICES & QUANTUM TURBULENCE

Most people are familiar with the concept of a vortex: the twisting shape of a tornado, or the whirlpool that forms as a bathtub drains away through the plughole.

Vortices also occur in 2D systems where there is no vertical movement, such as at the surface of liquids, or in atmospheric systems such as cyclones. In fact, 2D vortices cover a vast range of systems, from the superfluid movement of neutrons on the surface of neutron stars to the Atlantic Ocean Gulf Stream to the zero-resistance movement of electrons in high-temperature superconductors.

For 70 years, our understanding of such 2D vortex systems has been governed by Lars Onsager's theory that as more energy is put into the chaotic mix of small vortices in a turbulent 2D system, over time the vortices rotating in the same direction would cluster to form larger, stable vortices - the system becomes ordered, rather than more chaotic.

In order to make his 1949 theory mathematically tractable, Onsager considered a superfluid, which he stated would have 'quantised' vortices (vortices with quantised angular momentum), a concept further developed by Richard Feynman.

Onsager's theory described a 2D turbulent system's energy congregating in high-energy, long-lived, large-scale vortices. This is an unusual equilibrium state where entropy decreases as a function of energy - the opposite of what we would consider 'normal' thermodynamic regimes.
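
In standard statistical-mechanics notation (a textbook relation, not a result specific to these papers), absolute temperature is the inverse slope of entropy with respect to energy, so a regime where entropy falls as energy rises is, by definition, a negative-temperature regime:

\[
\frac{1}{T} = \frac{\partial S}{\partial E}, \qquad \frac{\partial S}{\partial E} < 0 \;\Longleftrightarrow\; T < 0 .
\]

For Onsager's point vortices confined to a finite 2D region, the number of available vortex configurations eventually shrinks as the energy increases, because like-signed vortices are forced to cluster; this is what pushes the system into the ordered, negative-temperature state described above.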

The Monash-led team generated vortex distributions at a range of temperatures and observed their subsequent evolution. States that began with relatively random distributions of vortices were seen to begin to order themselves, as Onsager had described. The University of Queensland study, on the other hand, directly generated two large clusters of vortices, flowing in opposite directions, testing the stability of this highly-ordered configuration.

Both studies experimented using Bose-Einstein condensates (BECs), a quantum state that exists at ultra-low temperatures, and in which quantum effects become visible at a macroscopic scale.

The researchers created turbulence in condensates of rubidium atoms using lasers, and observed the behaviour of the resulting vortices over time.

Both studies offer great promise for future studies of emergent structures in interacting quantum systems driven far from equilibrium.

THE STUDIES

The two studies - the University of Queensland-led paper, 'Giant vortex clusters in a two-dimensional quantum fluid', and the companion Monash-led paper - were both published in Science today.

The University of Queensland study was led by Dr Tyler Neely and Prof Halina Rubinsztein-Dunlop from the ARC Centre of Excellence for Engineered Quantum Systems (EQUS), with theoretical support from FLEET's Prof Matt Davis and Dr Matt Reeves along with researchers from the Centre for Quantum Science at the University of Otago, NZ. In addition to the support of the Australian Research Council, funding was received from the Dodd-Walls Centre for Photonic and Quantum Technologies (NZ) and the Royal Society of New Zealand Marsden Fund.

In addition to researchers from the School of Physics and Astronomy, the Monash University study included authors from the Joint Quantum Institute, University of Maryland. This study was also supported by the Australian Research Council.

STUDYING NON-EQUILIBRIUM SYSTEMS

While the results specifically describe turbulence, a notoriously difficult problem to describe theoretically, they are more broadly relevant to non-equilibrium physics - the evolution of systems far from equilibrium and, in particular, the development of coherent, large-scale flow by putting energy into a turbulent system.

For centuries scientists have developed an excellent understanding of systems in equilibrium. But what happens to a system driven far from equilibrium remains one of the great unknown challenges of modern material physics.

At FLEET, non-equilibrium systems are used to pursue zero-resistance paths for electrical current.

FLEET is an Australian Research Council-funded research centre bringing together over a hundred Australian and international experts to develop a new generation of ultra-low energy electronics, motivated by the need to reduce the energy consumed by computing.

Video and images are available at http://www.fleet.org.au/blog/vortex-story-resources/

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

Researchers unlock mysteries of complex microRNA oncogenes

image: Ariel Donayo, Ph.D. candidate in biochemistry at McGill University, and Dr. Thomas Duchaine, Professor in the Department of Biochemistry at McGill University.

Image: 
McGill University

MicroRNAs are tiny molecules of nucleic acid that control gene expression, acting like a dimmer switch to tone down gene output at key positions in the network of information that governs a cell's function. MicroRNAs are important for the day-to-day inner working of cells and especially important during development. They also become profoundly defective in diseases such as cancer. Unlike most other human or animal genes, microRNAs are often encoded in genomes and expressed as beads-on-a-string groupings, known as polycistrons. The purpose for this organisation has, until now, been a mystery.

A new collaborative study, led by researchers at McGill University's Goodman Cancer Research Centre (GCRC), and published in the journal Molecular Cell, set out to solve this mystery, uncovering novel functions for polycistronic microRNAs and showing how cancers such as lymphoma twist these functions to reorganize the information networks that control gene expression.

A discovery thanks to a single oncogene

The researchers made their discovery by examining how strongly the oncogenic microRNA polycistron miR-17-92 is over-expressed in several types of cancer. Surprisingly, this strong over-expression of the polycistron led to only small increases in mature microRNA expression in the same cells. This meant a lot was happening during their biogenesis, especially in cancer, and that there may be more to the purpose of microRNA polycistrons than previously thought.

"Why some microRNAs are expressed as polycistrons, and how cancers such as lymphoma change microRNA biogenesis were not known," explains Dr. Thomas Duchaine, Professor in the Department of Biochemistry at McGill, member of the GCRC and the study's senior author. "We were able to identify some mysterious steps in microRNA biogenesis that occur in cell nuclei, which had been completely missed for the nearly 20 years since the discovery of the conservation of microRNA's."

Understanding microRNA's role in cancer

While researchers knew that microRNAs are important in a broad variety of cancers, how and why was not fully understood. "We discovered an entirely new function for microRNA polycistrons and showed how deep an impact it has in certain types of cancer," notes Dr. Duchaine. The findings will help make sense of many of the genomic reorganizations that occur in microRNA loci in those cancers. "We also think this may be happening in physiological conditions, early in development, in embryonic stem cells for example, in placenta, and in other types of tumours."

Knowing what drives specific types of cancer is critical in stratifying cancer sub-types, in developing new therapeutic strategies, or anticipating treatment outcomes in precision medicine.

"The breadth of the impact of the amplification of a single microRNA locus on the gene networks is pretty amazing, in my opinion," says Dr. Duchaine. "Especially considering that this occurs through a mechanism entirely outside of the traditional targeting function of microRNAs. We are not done understanding microRNA mechanistics. I am always amazed at how complex their functional relationships are within our genomes."

While it is not always easy to anticipate the practical implications of basic research findings, Dr. Duchaine believes that they will be diverse. "Besides forcing a reinterpretation of the function of the miR-17-92 proto-oncogene, it will prompt new potential therapeutic strategies. For example, the depth of the impact on the gene network in cells wherein miR-17-92 is amplified indicates a completely different gene network state. To me, this is a screaming opportunity for the testing of genotype-specific treatments in a precision medicine perspective."

Credit: 
McGill University

Researchers discriminate between mutations that promote cancer growth and those that don't

image: (from left) Rémi Buisson, PhD, an assistant professor in the Department of Biological Chemistry at the UCI School of Medicine, led a new study titled, 'Passenger hotspot mutations in cancer driven by APOBEC3A and mesoscale genomic features.' Pictured here with Danae Bowen, a research technician in the Buisson lab.

Image: 
UCI School of Medicine

Until now, researchers believed recurrent mutations (hotspot mutations) in cancer tumors were the important mutations (driver mutations) that promoted cancer progression. A new University of California, Irvine-led study indicates this is not always true.

Methods for identifying driver genes important to cancer progression have relied on the gold standard of recurrence across patients. Seeing exactly the same DNA base pair mutated repeatedly across many patients has been taken as incontrovertible proof that the mutation must be contributing to tumor development. However, the study, "Passenger hotspot mutations in cancer driven by APOBEC3A and mesoscale genomic features," published today in Science, reveals many recurrent cancer mutations are likely passenger hotspot mutations and not important for cancer development.

"In our study, we identified that certain recurrent mutations found in DNA stem loops, a common DNA structure, were in fact passenger hotspots and not the drivers we had originally believed them to be." said Rémi Buisson, PhD, an assistant professor in the Department of Biological Chemistry at the UCI School of Medicine and lead researcher on the study. More importantly, we discovered other recurrent mutations, found in DNA locations outside of stem loops, may be new drivers that are not yet characterized. The importance of our finding is that it gives us the ability to discriminate among mutations, which is essential in order to develop novel cancer therapies."

Research teams from UCI School of Medicine and Massachusetts General Hospital examined the mutation landscape and the distribution of mutations in the cancer genomes of more than 9,000 patient tumors. They identified that certain hotspot mutations arise from the activity of a family of proteins called APOBEC (apolipoprotein B mRNA editing catalytic polypeptide-like).

"Our analyses show that APOBEC3A, in particular, has a strong preference for DNA structures called stem-loops or DNA hairpins. This preference results in the creation of hotspot mutations in patient tumors," said Buisson.

Tens of thousands of DNA damage events occur daily in human cells. The APOBEC family of proteins are one of the most common sources of endogenous DNA damage events in cancer cells. APOBEC3A directly attacks genomic DNA, inducing DNA double-strand breaks and mutations. Until now, little was known about how APOBEC3A targets genomic DNA and if some structures of the genome are more prone to APOBEC3A attacks than others. DNA stem-loop structures can be transiently generated during DNA replication, transcription or through diverse DNA repair processes. These multiple pathways provide many opportunities for APOBEC3A to promote structure-specific cytosine deamination favoring the emergence of passenger hotspot mutations in patient tumors.

Credit: 
University of California - Irvine

Health disparity for blacks exists within lung screening guidelines

Guidelines that determine which smokers qualify for CT scans are excluding a significant number of African Americans who develop lung cancer, according to a study released today in JAMA Oncology.

The health disparity merits modifications to lung cancer screening criteria, said lead author Melinda Aldrich, PhD, MPH, assistant professor of Thoracic Surgery at Vanderbilt University Medical Center.

"Among smokers diagnosed with lung cancer, 32% of African Americans versus 56% of whites were eligible for screening, so it's a striking disparity in eligibility," Aldrich said.

The study reviewed cancer incidence data on 48,364 smokers from the Southern Community Cohort Study in one of the largest comprehensive evaluations to date of lung cancer screening guidelines established by the U.S. Preventive Services Task Force (USPSTF).

The USPSTF issued the guidelines in 2013 after the National Lung Screening Trial demonstrated that CT scans provided early detection of lung cancer and reduced deaths from the disease by 20% compared to participants who received standard chest X-rays.

Aldrich and fellow researchers concluded that those guidelines may be too conservative for African Americans, setting the stage for later diagnoses and reduced odds of survival.

The guidelines, which insurance companies follow in determining coverage for CT scans, are based on smoking history and age. However, studies have shown that African Americans have a higher risk of lung cancer than whites even if they smoke less over time. The USPSTF guidelines currently recommend screenings for smokers age 55 to 80 who have a 30 pack-year history and who still smoke or have quit within 15 years.

The pack-year measurement is based on smoking a pack a day for one year and can be adjusted accordingly if someone smokes less or more than that amount. For example, if someone smoked half a pack for 30 years, the smoking history would equal 15 pack-years.
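
For illustration, the pack-year arithmetic and the current eligibility rule can be written out in a few lines (a sketch for clarity only, not a clinical tool; thresholds are the 2013 USPSTF criteria described above):

def pack_years(packs_per_day, years_smoked):
    """Pack-years = packs smoked per day x years of smoking."""
    return packs_per_day * years_smoked

def uspstf_eligible(age, packs_per_day, years_smoked, years_since_quit=0):
    """True if a smoking history meets the 2013 USPSTF screening criteria:
    age 55-80, at least 30 pack-years, still smoking or quit within 15 years."""
    return (
        55 <= age <= 80
        and pack_years(packs_per_day, years_smoked) >= 30
        and years_since_quit <= 15
    )

print(pack_years(0.5, 30))           # half a pack a day for 30 years -> 15.0 pack-years
print(uspstf_eligible(60, 0.5, 30))  # False: 15 pack-years falls short of the 30 required
print(uspstf_eligible(60, 1.5, 25))  # True: 37.5 pack-years, age and quit window satisfied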

The researchers calculated that lowering the threshold for African Americans to a minimum 20 pack-year history would increase their eligibility and make the screening criteria more equitable.

"This is a proposal for the first step, acknowledging that the guidelines are inadequate -- woefully inadequate, actually, as they exist right now -- with a suggested change that would largely correct the disparity," said William Blot, PhD, associate director for Population Science Research at Vanderbilt-Ingram Cancer Center, research professor of Medicine and Ingram Professor of Cancer Research.

The study also noted that the mean age for lung cancer diagnosis occurs significantly earlier in African Americans compared to whites. Modifying the minimum age for African Americans from 55 to 50 would also increase the eligibility percentage.

"The age shift may be equally important because it will shift the age at which we can diagnose African Americans to an earlier cancer stage and have better potential for curative treatment," Aldrich said. "If we don't shift that, then we are still going to potentially diagnose African Americans at a later stage."

Among patients diagnosed with stage IV lung cancer, the median age for diagnosis for whites was 63 compared to 59 for African Americans.

"This was one of those rare instances when the data are so clear that only a relatively simple adjustment is needed to level the playing field," said Vanderbilt's Jeffrey Blume, PhD, associate professor of Biostatistics and Biomedical Informatics, vice-chair for Education in Biostatistics and director of Graduate Education in the Data Science Institute. "Moreover, the simple changes to the guidelines that we are proposing can be easily incorporated into clinical practice without the need for fancy modeling or IT support."

Kim Sandler, MD, assistant professor of Radiology and Radiological Sciences, is co-director of the Vanderbilt Lung Screening Program and another author of the study. She said the new data offer an opportunity to move toward a better risk prediction model and improve screening guidelines.

"I found this data to be so compelling because you could greatly improve the sensitivity and find so many more patients with lung cancer in this African American population without sacrificing specificity, so you are not going to have more false positives than what we see in the white population," Sandler said.

Credit: 
Vanderbilt University Medical Center

Cosmic cat and mouse: Astronomers capture and tag a fleeting radio burst

image: Artist's impression of CSIRO's Australian SKA Pathfinder (ASKAP) radio telescope finding a fast radio burst and determining its precise location. The KECK, VLT and Gemini South optical telescopes joined ASKAP with follow-up observations to image the host galaxy.

Image: 
CSIRO/Dr Andrew Howells

An Australian-led team of astronomers using the Gemini South telescope in Chile have successfully confirmed the distance to a galaxy hosting an intense radio burst that flashed only once and lasted but a thousandth of a second. The team made the initial discovery of the fast radio burst (FRB) using the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope.

The critical Gemini observations were key to verifying that the burst left its host galaxy some 4 billion years ago.

Since the first FRB discovery in 2007, these mysterious objects have played a game of cosmic cat-and-mouse with astronomers -- with astronomers as the sharp-eyed cats! Fleeting radio outbursts, lasting about a millisecond (one-thousandth of one second), are difficult to detect, and even more difficult to locate precisely. In this case, the FRB, known as FRB 180924, was a single burst, unlike others that can flash multiple times over an extended period.

"It is especially challenging to pinpoint FRBs that only flash once and are gone," said Keith Bannister of Australia's Commonwealth Science and Industrial Research Organisation (CSIRO), who led the Australian team in the search effort. However, Bannister and his team did just that, which is a first.

The result is published in the June 27th issue of the journal Science.

The momentary pulse was first spotted in September 2018 during a dedicated search for FRBs using ASKAP -- a 36-antenna array of radio telescopes working together as a single instrument in Western Australia -- which also pinpointed the signal's location in the sky.

The researchers used the minuscule differences in the amount of time it takes for the light to reach different antennas in the array to zoom in on the host galaxy's location. "From these tiny time differences -- just a fraction of a billionth of a second -- we identified the burst's home galaxy," said team member Adam Deller, of Swinburne University of Technology.
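
A rough order-of-magnitude calculation shows why such tiny time differences are enough (our own sketch with assumed round numbers, not ASKAP's actual baselines or timing precision): an arrival-time difference dt across a baseline B constrains the source direction to roughly c*dt/B radians.

import math

c = 3.0e8          # speed of light, m/s
baseline = 4.0e3   # assumed antenna separation of a few kilometres, m
dt = 0.2e-9        # assumed timing precision, a fraction of a nanosecond, s

theta_rad = c * dt / baseline
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"{theta_arcsec:.2f} arcseconds")
# ~3 arcseconds for a single antenna pair; combining the many pairs in the array
# tightens this toward the sub-arcsecond precision the team achieved.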

Once pinpointed, the team enlisted the Gemini South telescope, along with the W.M. Keck Observatory and European Southern Observatory's Very Large Telescope (VLT) to determine the FRB's distance and other characteristics by carefully observing the galaxy that hosted the outburst. "The Gemini South data absolutely confirmed that the light left the galaxy about 4 billion years ago," said Nicolas Tejos of Pontificia Universidad Católica de Valparaíso, who led the Gemini observations.

"ASKAP gave us the two-dimensional position in the sky, but the Gemini, Keck, and VLT observations locked down the distance, which completes the three-dimensional picture," said Tejos.

"When we managed to get a position for FRB 180924 that was good to 0.1 arcsecond, we knew that it would tell us not just which object was the host galaxy, but also where within the host galaxy it occurred," said Deller. "We found that the FRB was located away from the galaxy's core, out in the 'galactic suburbs.'"

"The Gemini telescopes were designed with observations like this in mind," said Ralph Gaume, Deputy Division Director of the US National Science Foundation (NSF) Division of Astronomical Sciences, which provides funding for the US portion of the Gemini Observatory international partnership. Knowing where an FRB occurs in a galaxy of this type is important because it enables astronomers to get some hint of what the FRB progenitor might have been. "And for that," Gaume continues, "we need images and spectroscopy with superior image quality and depth, which is why Gemini and the optical and infrared observatory observations in this study were so important."

Localizing FRBs is critical to understanding what causes the flashes, which is still uncertain: to explain the high energies and short timescales, most theories invoke the presence of a massive yet very compact object such as a black hole or a highly magnetic neutron star. Finding where the bursts occur would tell us whether it is the formation, evolution, or collision and destruction of these objects that is generating the radio bursts.

"Much like gamma-ray bursts two decades ago, or the more recent detection of gravitational wave events, we stand on the cusp of an exciting new era where we are about to learn where fast radio bursts take place," said team member Stuart Ryder of Macquarie University, Australia. Ryder also noted that by knowing where within a galaxy FRBs occur, astronomers hope to learn more about what causes them, or at least rule out some of the many models. "Ultimately though," Ryder continued, "our goal is to use FRBs as cosmological probes, in much the same way that we use gamma ray bursts, quasars, and supernovae." According to Ryder, such a map could pinpoint the location of the 'missing baryons,' (baryons are the subatomic building blocks of matter) which standard models predict must be out there, but which don't show up using other probes.

By pinpointing the bursts and how far their light has traveled, astronomers can also obtain "core samples" of the intervening material between us and the flashes. With a large sample of FRB host galaxies, astronomers could conduct "cosmic tomography,"' to build the first 3D map of where baryons are located between galaxies. On that note Tejos added, "once we have a large sample of FRBs with known distances, we will also have a revolutionary new method for measuring the amount of matter in the cosmic web!"

To date, only one other fast radio burst (FRB 121102) has been localized, and it had a repeating signal that flashed more than 150 times. While both single and multiple-flash FRBs are relatively rare, single FRBs are more common than repeating ones. The discovery of FRB 180924, then, could lead the way for future methods of localization.

"Fast turnaround follow-up contributions from Gemini Observatory will be especially significant in the future of time-domain astronomy," Tejos said, "as it promises not only to help astronomers perfect the study of transient phenomena, but perhaps alter our perceptions of the Universe."

Credit: 
Association of Universities for Research in Astronomy (AURA)

New Zealand and Australian researchers observe 70-year-old prediction, with wide-reaching effects

image: The Great Red Spot on Jupiter, compared with large vortex clusters observed in a tiny superfluid pool after precision laser cooling, confinement, and stirring.

Image: 
Ashton Bradley

As you stir milk into a cup of coffee, you will see fluid turbulence in action - rapid mixing that has defied deep scientific understanding.

A collaboration between researchers at the University of Otago, New Zealand, and University of Queensland, Australia, set out to learn more about the everyday enigma of turbulence by using the remarkable properties of superfluids, strange quantum fluids able to flow endlessly without any friction.

The team's breakthrough findings, just published in Science, may have implications for our understanding of quark-gluon plasmas, electrons in solids, and the persistence of Jupiter's Great Red Spot, or could help create more efficient transportation.

Co-author Dr Ashton Bradley, Principal Investigator at the Dodd-Walls Centre for Photonic and Quantum Technologies, says the group observed never-before-seen negative temperature states of quantum vortices in an experiment.

"Despite being important for modern understanding of turbulent fluids, these states have never been observed in nature. They contain significant energy, yet appear to be highly ordered, defying our usual notions of disorder in statistical mechanics," Dr Bradley says.

He describes understanding fluid turbulence as a challenging problem.

"Despite a long history of study, the chaotic nature of turbulence has defied a deep understanding. So much so, that the need for a complete description has been recognised as one of the Clay Mathematics Institute's unsolved 'Millennium Problems'.

"Fluid turbulence plays a major role in our everyday lives - about 30 per cent of carbon emissions come from transportation, with fluid turbulence playing a significant role. A deeper understanding of turbulence may eventually help create a more sustainable world by improving transport efficiencies."

An interesting aspect of turbulence is that it has universal properties, meaning turbulent systems on a scale from microscopic to planetary lengths appear to share similar descriptions and characteristics.

Nobel Laureate Lars Onsager came up with a toy theory for two-dimensional turbulence in 1949. Simply put, it states that if you add enough energy to a 2D system, turbulence will result in the appearance of giant vortices - just like in the atmosphere of Jupiter.

However, his theory only directly applies to superfluids, where vortices rotate in discrete (quantum) steps, and are almost particle-like.

Seventy years on, the Queensland-Otago collaboration has observed Onsager's predictions.

Dr Bradley says they utilised the high degree of control available in the Bose-Einstein condensation laboratory in Queensland's Centre of Excellence for Engineered Quantum Systems, using optical manipulation technology pioneered there.

They created a superfluid by cooling a gas of rubidium atoms down to nearly absolute zero temperature, and holding it in the focus of laser beams. The optical techniques developed allow them to precisely stir vortices into the fluid - much like stirring that milk into your coffee.

Lead author Dr Tyler Neely, of Queensland, says the "amazing" thing is that the group achieved this with light and at such a small scale.

"The cores of the vortices created in our system are only about 1/10 of the diameter a human blood cell," he says.

One of the more bizarre aspects of Onsager's theory is that the more energy you add to the system of vortices, the more concentrated the giant vortices become. It turns out that if you consider the vortices as a gas of particles moving around inside the superfluid, vortex clusters exist in absolute negative temperature states, below absolute zero.

"This aspect is really weird. Absolute negative temperature systems are sometimes described as 'hotter than hot' because they really want to give up their energy to any normal system at positive temperature. This also means that they are extremely fragile.

"Our study counters this intuition by showing that since the vortices are sufficiently isolated inside the superfluid, the negative-temperature vortex clusters can persist for nearly ten seconds," Dr Neely says.

Credit: 
University of Otago

Reducing delays in identifying visceral leishmaniasis

Women in Indian states with endemic visceral leishmaniasis - also known as Kala Azar - should be encouraged to seek care for persistent fever without delay. Raised awareness about the disease and its symptoms, and the prioritization of women's care-seeking over household work could help reduce fatalities and potentially reduce overall transmission, according to research by independent consultant Beulah Jayakumar and colleagues, published in PLOS Neglected Tropical Diseases.

Visceral leishmaniasis is transmitted by sand flies. It is endemic in the Indian subcontinent, and since 2005 there have been efforts to eliminate it as a public health problem. Though largely effective, these efforts have not yet achieved the threshold target in India, with four endemic states continuing to harbor disease. Detecting and treating the disease quickly is key, as this improves patient outcomes and interrupts and shortens transmission from human hosts - the only known hosts in the Indian subcontinent.

Beulah Jayakumar and colleagues interviewed 33 female patients from two states, Bihar and Jharkhand, along with 11 unqualified health providers and 12 groups of community elders. Women seem to access care later than men, partly because household work is routinely prioritized over symptoms seen as too mild and vague to justify seeking appropriate care and spending money on the associated expenses. Additional causes for delayed treatment include the need to secure a male chaperone for hospital visits and the perception that private facilities provide higher quality care.

Study limitations include the small sample size and lack of comparative analysis with male patients; additionally, patients were found from a national visceral leishmaniasis surveillance register, excluding patients who were not reported or recorded due to private care.

The authors conclude that there is a need for clearer messaging to increase awareness in the general population and among informal and formal care providers, and an emphasis on the need for early care seeking for women with persistent fever. They also identify missed opportunities in government care facilities, where diagnoses were missed either due to lack of testing or diagnostic tests administered too early.

Credit: 
PLOS

The world needs a global system to detect and halt the spread of emerging crop diseases

image: Cassava roots affected by disease.

Image: 
Georgina Smith / International Center for Tropical Agriculture

More than 20 percent of the yield of the five staple crops that provide half the globe's caloric intake is lost to pests each year. Climate change and global trade drive the spread, emergence, and re-emergence of crop disease, and containment action is often inefficient, especially in low-income countries. A Global Surveillance System (GSS) to strengthen and interconnect crop biosecurity systems could go a long way to improving global food security, argues a team of experts in the June 28 issue of Science.

"As part of efforts to satisfy global demand for food - which could mean increasing agricultural production by as much as 70 percent by 2050 - we need a GSS to reduce food lost to pests," said Mónica Carvajal, a researcher at the International Center for Tropical Agriculture (CIAT) and lead author. "A lot of collaboration and discussion is needed to rapidly take action and avoid outbreaks that could negatively impact food security and trade."

Carvajal and colleagues hope the GSS framework they propose gains traction in 2020, which was designated International Year of Plant Health by the United Nations. The system would prioritize six major food crops - maize, potato, cassava, rice, beans, and wheat - as well as other important food and cash crops that are traded across borders. The GSS proposal is the result of a scientific meeting convened by CIAT and held in 2018 at the Rockefeller Foundation's Bellagio Center in Italy.

Inspired by recent outbreaks

In 2015, Cassava Mosaic Disease (CMD) was discovered in Cambodia but the findings were not reported until 2016. By 2018, the disease had spread to Thailand and Vietnam, and it is now estimated to be present in 10 percent of the cultivated area in the region, threatening millions of smallholders who cultivate cassava and generate US$4 billion in export revenue.

This year, agricultural authorities from four countries - Cambodia, Thailand, Vietnam, and Lao PDR - supported by research organizations including CIAT, published an emergency control plan for CMD in Southeast Asia.

Carvajal, who studied the CMD outbreak after its initial report, says that a GSS would help expedite action for future outbreaks.

"The question I asked was why does it take so long to respond to crop diseases in some cases?" said Carvajal. "What is the limitation to responding faster from the outset?"

The GSS proposal draws on lessons learned from the wheat blast outbreak that hit Bangladesh in 2016 and the bacterial outbreak of Xylella fastidiosa that started affecting olive trees in Europe in 2013. The proposal is from a multidisciplinary group of experts from academia, research centers, and funding organizations that work on issues related to plant health and human health.

What would GSS do?

The GSS would focus on tightening the networks of "active surveillance" and "passive surveillance" personnel who are on the front lines of disease outbreaks. Active surveillance consists of laboratories at agriculture inspection stations, and customs and phytosanitary inspectors at borders and ports of entry. Despite this formal infrastructure, only an estimated 2-6 percent of cargo can be effectively screened.

The second group, passive surveillance, consists of loose networks of farmers, extension workers with national agricultural organizations, scientists and agronomists at research centers and universities, and specialists in agriculture industries.

"For this infrastructure to be effective, connections between first detectors and downstream responders must be enhanced and actions coordinated," said the authors. "But diagnostic capacity, information sharing, and communications protocols are lacking or weakly established in some regions, especially in low-income countries. Our reflection on many disease outbreaks is that whether in high-income countries or low-income countries, the passive surveillance infrastructure has the most in-field monitoring eyes but the least coordination from local to global."

The GSS would tap into cutting-edge technology for rapid disease diagnostics and take advantage of communications networks, including social media, to rapidly share information. The system would have regional hubs and consist of five formal global networks. These would include a diagnostic laboratory network, a risk assessment network, a data management network, an operational management network, and a communications network.

"Our team realized that there is a big issue with communication, even when we speak the same language and use the same technologies," said Carvajal. "One of the most relevant components is the communications network."

The GSS team hopes to contribute to future efforts on strengthening pest outbreak response systems within the International Plant Protection Convention's (IPPC) 2020-2030 Strategic Framework.

"We encourage the annual G20 Agriculture Ministers Meeting, the World Bank Group, and FAO, among others, to join efforts toward enhancing cooperation for a multi-year action plan for the proposed GSS to more effectively reduce the impact of crop diseases and increase global food security," the authors conclude.

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

Mutational 'hotspots' in cancer genomes may not necessarily drive cancer growth

A study by investigators from the Massachusetts General Hospital (MGH) Cancer Center has found that, contrary to common assumptions, the fact that a specific genetic mutation frequently arises in particular tumors does not necessarily mean that the mutation drives cancer development and progression. Their article, published in Science, describes how single strands of DNA that fold back on themselves into what is called a "hairpin" structure appear highly sensitive to mutation by a DNA-editing enzyme expressed in many cancers. Yet many of these mutation "hotspots" occur in genes that are entirely unrelated to cancer, and many lie in noncoding regions of the genome.

"A typical cancer genome will have five to ten driver mutations and thousands or even millions of these 'passenger' mutations that are just along for the ride," says Michael Lawrence, PhD, of the MGH Cancer Center, co-senior author of the Science article. "The thinking has been that, if the exact same mutation occurs in many different patients' cancers, it must confer a fitness advantage to the cancer cells. While the recurrence-based approach to identifying cancer driver genes has been successful, it's also possible that certain positions in the genome are just very easy to mutate."

Although much is known about how patterns of gene mutation are affected by small-scale structures - such as the groups of three bases called trinucleotides - or by the large-scale "compartments" into which DNA is organized in the nucleus, relatively little is known about the effects of "mesoscale" DNA structures that may extend about 30 base pairs around the site of a mutation. Previous investigations by other groups into breast cancer mutation hotspots associated with APOBEC enzymes identified DNA "palindromes" - in which the sequence on one side of a mutation is repeated in reverse complement on the other side - suggesting that such sites can fold back into a stem-loop hairpin structure.
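
To make the palindrome-and-hairpin idea concrete, the following is a minimal illustrative sketch in Python; it is not the authors' code, and the toy sequence, window sizes, and helper names are assumptions made purely for illustration. It checks whether the bases flanking a mutation site are reverse complements of one another, meaning the strand could fold back and pair into a hairpin stem with the mutated base exposed in the loop.

    # Illustrative only: a simplistic test for a palindromic (reverse-complement)
    # flank around a mutation site, the mesoscale signature that suggests a hairpin.
    # Real analyses also model stem length, loop size, and folding stability.

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def reverse_complement(seq: str) -> str:
        return seq.translate(COMPLEMENT)[::-1]

    def could_form_hairpin(seq: str, site: int, stem: int = 4, loop: int = 3) -> bool:
        """True if the bases upstream of the loop are the reverse complement of the
        bases downstream, so the strand could fold with position `site` in the loop."""
        half_loop = loop // 2
        left = seq[site - half_loop - stem : site - half_loop]
        right = seq[site + half_loop + 1 : site + half_loop + 1 + stem]
        return len(left) == stem and len(right) == stem and reverse_complement(right) == left

    toy = "GGATTCAATCC"                     # hypothetical: 4-bp stem (GGAT/ATCC), 3-base loop "TCA"
    print(could_form_hairpin(toy, site=5))  # True: the cytosine at index 5 sits in the loop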

The MGH team's analysis of data from The Cancer Genome Atlas and other sources focused on the potential influence of such mesoscale structures on the frequency and recurrence of mutations. They specifically investigated mutations associated with the APOBEC family of proteins, which - among many other functions - help protect cells against viruses by mutating the viral genome. Many types of cancer cells are known to activate APOBEC enzymes, and, in contrast to other cancer-associated mutations that preferentially accumulate in specific regions of the genome, APOBEC-associated mutations are distributed evenly throughout the genome and frequently occur in DNA hairpins.

Their experiments revealed that the APOBEC3A enzyme commonly mutates cytosine bases located at the end of a hairpin loop, converting them into uracils, even in genes that have little or no association with cancer. In contrast, recurrent APOBEC-associated mutations in known driver genes occurred at ordinary sites in the genome, not at the special hairpin sites that are especially easy for APOBEC3A to mutate. This suggests that the driver mutations, although harder to generate, confer a survival advantage on cancer cells, which is why they are frequently observed in patients with cancer.
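
The interpretive logic can be sketched as follows; this is only an illustrative approximation, not the study's statistical model, and the threshold and labels are assumptions. A mutation that recurs at an easy-to-mutate hairpin-loop cytosine may be a "passenger hotspot", whereas recurrence at an ordinary site is harder to explain without a selective advantage.

    # Illustrative decision logic only; not the MGH team's actual method.
    def interpret_recurrence(times_observed: int, at_hairpin_loop_c: bool,
                             recurrence_threshold: int = 5) -> str:
        if times_observed < recurrence_threshold:
            return "rarely seen: likely ordinary passenger"
        if at_hairpin_loop_c:
            return "recurrent at an easy-to-mutate site: possible passenger hotspot"
        return "recurrent at an ordinary site: candidate driver (needs functional evidence)"

    print(interpret_recurrence(times_observed=12, at_hairpin_loop_c=True))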

"Several APOBEC3A hairpin hotspots have already been claimed to be drivers, based solely on their frequency and with no functional evidence," says co-senior author Lee Zou, PhD, of the MGH Cancer Center. "Our findings suggest that these are simply 'passenger hotspots' - a term that would have been considered an oxymoron until now - and that researchers' time would be better spent on mutations that have been proven to alter the properties of cells in ways that drive their malignant proliferation."

Lawrence adds, "There are so many important questions in cancer research, and anything we can do to avoid investigators' pursuing false leads would save time and money. But the challenge of distinguishing drivers from passengers is also central to other important questions: How many drivers does it take to make a cancer? What is the process by which a normal cell becomes cancer? Why do some cancers appear to have no driver mutations? Our colleague professor Gad Getz has shown that there must be considerable 'dark matter' in the genome to explain these driver-negative cases. Having an accurate inventory of drivers requires being able to see through passenger mutations that masquerade as drivers, and our results suggest there might be even more genomic dark matter!"

Credit: 
Massachusetts General Hospital