
Is parents' use of marijuana associated with greater likelihood of kids' substance use?

Bottom Line: Recent and past use of marijuana by parents was associated with increased risk of marijuana, tobacco and alcohol use by adolescent or young adult children living in the same household in this survey study. Researchers examined data for 24,900 parent-child pairs from National Surveys on Drug Use and Health from 2015-2018. Parental marijuana use was a risk factor for marijuana and tobacco use by adolescent and young adult children and for alcohol use by adolescent children when researchers accounted for a variety of potential family and environmental factors. When those factors were considered, parental marijuana use wasn't associated with opioid misuse by their children. The study has limitations, including that the surveys cannot provide a complete picture of family substance use.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Bertha K. Madras, Ph.D., Harvard Medical School, Boston, and McLean Hospital, Belmont, Massachusetts, and coauthors

(doi:10.1001/jamanetworkopen.2019.16015)

Editor's Note: The article contains conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Firearm violence impacts young people disproportionately

(Boston)--Although the rate of firearm deaths has remained constant since 2001, a new study has found that the years of potential life lost to firearms have increased sharply since 2014.

An examination of years of life lost due to guns showed a slow increase since 1999, with a sudden and large increase beginning in 2014. "A constant mortality rate with an increasing percentage of years of potential life lost indicates that although the magnitude of firearm deaths remained the same, the deaths were increasingly premature or among younger people across time," explained corresponding author Bindu Kalesan, PhD, MPH, assistant professor of medicine at Boston University School of Medicine (BUSM).

Researchers from BUSM, Boston University School of Public Health (BUSPH), University of Florida and Columbia University used national and state data from the Injury Statistics Query and Reporting System database to explore the patterns of gun deaths and the life years lost.

Previous studies showed that certain populations, particularly men and non-Hispanic black populations, bear a disproportionate burden of gun violence. This new study additionally showed that the jump in national rates was driven principally by increases among men, non-Hispanic black populations and Hispanic white populations, and by homicides. The study also found that 21 U.S. states mirrored the national increase. In contrast, gun deaths declined over the 18 years in only two states, New York and Arizona. Additionally, the shift of the burden of gun deaths toward younger Americans was noted in 14 states.

According to the researcher, these alarming shifts indicate wide variation in emerging patterns of gun deaths in America. "Not only did the number of gun deaths vary by state and within different population groups, there were also unique changes in the burden of gun deaths over time. The public health crisis of gun deaths in this country is therefore not unidimensional, but complex and multidimensional, with distinctive risk profiles," added Kalesan, who is also assistant professor of community health sciences at BUSPH.

The researchers believe that in an era of increasingly complex patterns of gun deaths, it is imperative to understand state-specific and subgroup-specific changes in addition to national changes, so that they can be combatted in the future based on their unique features. "Future interventions, programs and policies should be created to address this shifting burden locally and should bear in mind the populations that are most affected by shifts in firearm death," said Kalesan.

Credit: 
Boston University School of Medicine

New Cochrane Review assesses different HPV vaccines & vaccine schedules in adolescent girls and boys

New evidence published in the Cochrane Library today provides further information on the benefits and harms of different human papillomavirus (HPV) vaccines and vaccine schedules in young women and men.

HPV is the most common viral infection of the reproductive tract in both women and men globally (WHO 2017). Most people who have sexual contact will be exposed to HPV at some point in their life. In most people, their own immune system will clear the HPV infection.

HPV infection can sometimes persist if the immune system does not clear the virus. Persistent infection with some 'high-risk' strains of HPV can lead to the development of cancer. High-risk HPV strains cause almost all cancers of the cervix and anus, and some cancers of the vagina, vulva, penis, and head and neck. Other 'low-risk' HPV strains cause genital warts but do not cause cancer. Development of cancer due to HPV happens gradually, over many years, through a number of pre-cancer stages called intra-epithelial neoplasia. In the cervix (neck of the womb) these changes are called cervical intraepithelial neoplasia (CIN). High-grade CIN changes have a 1 in 3 chance of developing into cervical cancer, but many CIN lesions regress and do not develop into cancer. HPV-related cancers accounted for an estimated 4.5% of cancers worldwide in 2012 (de Martel 2017).

Vaccination aims to prevent future HPV infection and the cancers caused by high-risk HPV infection. HPV vaccines are mainly targeted towards adolescent girls because cancer of the cervix is the most common HPV-associated cancer. For the prevention of cervical cancer, the World Health Organization recommends vaccinating girls aged 9-14 years with HPV vaccine using a two-dose schedule (0, 6 months) as the most effective strategy. A three-dose schedule is recommended for older girls (≥15 years of age) and for people with human immunodeficiency virus (HIV) infection or other causes of immunodeficiency (WHO 2017).

Three HPV vaccines are currently in use: a bivalent vaccine targeted at the two most common high-risk HPV types; a quadrivalent vaccine targeted at four HPV types; and a nonavalent vaccine targeted at nine HPV types. In women, the bivalent and quadrivalent vaccines have been shown to protect against pre-cancer of the cervix caused by the HPV types contained in the vaccine if given before natural infection with HPV (Arbyn 2018).

This Cochrane Review summarizes the results from 20 randomized controlled trials involving 31,940 people conducted across all continents. In most studies, the outcome reported was the production of HPV antibodies by the vaccine recipient's immune system. HPV antibody responses predict protection against the HPV-related diseases and cancers the vaccines are intended to prevent. Antibody response is often used as a surrogate in HPV vaccine studies because it takes many years for pre-cancer to develop after HPV infection, so it is difficult for studies to follow participants over such long periods of time. Moreover, because trial participants were tested for HPV infection and offered treatment, if HPV-related precancer was found, progression to cervical cancer in this group would be expected to be very low, even without vaccination.

Four studies compared a two-dose vaccine schedule with a three-dose schedule in 2,317 adolescent girls and three studies compared different time intervals between the first two vaccine doses in 2,349 girls and boys. Antibody responses were similar after two-dose and three-dose HPV vaccine schedules in girls. Antibody responses in girls and boys were stronger when the interval between the first two doses of HPV vaccine was longer.

There was evidence from one study of 16- to 26-year-old men that the quadrivalent HPV vaccine reduces the incidence of external genital lesions and genital warts compared with a group who did not receive the HPV vaccine.

There was also evidence from a study of 16- to 26-year-old women comparing the nonavalent and quadrivalent vaccines that the two provide a similar level of protection against cervical, vaginal, and vulval pre-cancerous lesions.

There was evidence from seven studies about HPV vaccines in people living with HIV. HPV antibody responses in children living with HIV were higher after vaccination with either bivalent or quadrivalent vaccine than with a non-HPV control vaccine. These antibody responses against HPV could be maintained up to two years. The evidence about clinical outcomes and harms for HPV vaccines in people with HIV was very limited.

Evidence suggested that up to 90% of males and females who received an HPV vaccine experienced minor local adverse events such as redness, swelling and pain at the injection site. Due to the low rates of serious adverse events in the quadrivalent and nonavalent vaccine groups, and the broad definitions of these events used in the trials, the relative safety of different vaccine schedules could not be determined.

The lead editor of this review and Consultant in Gynaecological Oncology, Musgrove Park Hospital, Somerset, UK, Dr. Jo Morrison said: "We need long-term population-level studies to provide data on the effects of dosing intervals, schedules and vaccines on HPV-related cancers, as well as giving us a more complete picture of rare harms. However, with fewer doses having a similar antibody response, and more extensive evidence from vaccine studies in boys, policy makers are now in a better position to determine how local vaccination programmes can be designed. It would be interesting to see how different schedules and vaccines influence immunisation coverage, but this review, and the studies within it, were not designed to be able to answer that question."

Credit: 
Wiley

Industrial scale production of layer materials via intermediate-assisted grinding

image: (a) Schematic of the decomposition of the macroscopic compressive forces Fc and Fc′ into much smaller microscopic forces fi and fi′ that are loaded onto the layer materials by force intermediates. (b) Exfoliation mechanism of layer materials: fi and fi′ transfer to sliding frictional forces ffi and ffi′ through the relative slipping of the intermediates and layer materials due to the rotation of the bottom container. (c) Atomic force microscopy image of 2D flakes. (d) Photos of several bottles of 2D MoS2 flakes in aqueous solution.

Image: 
©Science China Press

The large family of 2D materials, including graphene, hexagonal boron nitride (h-BN), transition metal dichalcogenides (TMDCs) such as MoS2 and WSe2, metal oxides (MxOy), black phosphorene (b-P), etc., provides a wide range of properties and numerous potential applications.

In order to realize their commercial use, the prerequisite is large-scale production. Bottom-up strategies such as chemical vapor deposition (CVD) and chemical synthesis have been extensively explored, but so far they have produced only small quantities of 2D materials. Another important strategy is the top-down path of exfoliating bulk layer materials into monolayer or few-layer 2D materials, for example by ball milling or liquid-phase exfoliation. Top-down strategies seem the most likely to be scaled up; however, they are only suitable for specific materials. So far, only graphene and graphene oxide can be prepared at the ton scale, while other 2D materials remain at the laboratory stage because of low yields. It is therefore necessary to develop a high-efficiency, low-cost preparation method for 2D materials to move them from the laboratory into daily life.

The failure of solid lubricants is caused by slip between the layers of bulk materials, and as a result of this slip, the bulk materials are peeled off into fewer layers. Based on this understanding, in a new research article published in the Beijing-based National Science Review, the Low-Dimensional Materials and Devices lab led by Professor Hui-Ming Cheng and Professor Bilu Liu from Tsinghua University proposed an exfoliation technology named interMediate-Assisted Grinding Exfoliation (iMAGE). The key to this technology is the use of intermediate materials that increase the coefficient of friction of the mixture and effectively apply sliding frictional forces to the layer material, resulting in dramatically increased exfoliation efficiency.

Considering the case of 2D h-BN, the production rate and energy consumption can reach 0.3 g h⁻¹ and 3.01×10⁶ J g⁻¹, respectively, both of which are one to two orders of magnitude better than previous results. The resulting exfoliated 2D h-BN flakes have an average thickness of 4 nm and an average lateral size of 1.2 μm. In addition, the iMAGE method has been extended to exfoliate a series of layer materials with different properties, including graphite, Bi2Te3, b-P, MoS2, TiOx, h-BN, and mica, covering 2D metals, semiconductors with different bandgaps, and insulators.

It is worth mentioning that, in cooperation with Luoyang Shenyu Molybdenum Co. Ltd., molybdenite concentrate, a naturally occurring, cheap and earth-abundant mineral, was used as a demonstration of industrial-scale exfoliation production of 2D MoS2 flakes.

"This is the very first time that 2D materials other than graphene have been produced with a yield of more than 50% and a production rate of over 0.1g h-1. And an annual production capability of 2D h-BN is expected to be exceeding 10 tons by our iMAGE technology." Prof. Bilu Liu, one of the leading authors of this study, said, "Our iMAGE technology overcomes a main challenge in 2D materials, i.e., their mass production, and is expected to accelerate their commercialization in a wide range of applications in electronics, energy, and others."

Credit: 
Science China Press

United in musical diversity

image: In ethnomusicology, universality became something of a dirty word. But new research promises to once again revive the search for deep universal aspects of human musicality.

Image: 
© Tecumseh Fitch

Is music really a "universal language"? Two articles in the most recent issue of Science support the idea that music all around the globe shares important commonalities, despite many differences. Researchers led by Samuel Mehr at Harvard University have undertaken a large-scale analysis of music from cultures around the world. Cognitive biologists Tecumseh Fitch and Tudor Popescu of the University of Vienna suggest that human musicality unites all cultures across the planet.

The many musical styles of the world are so different, at least superficially, that music scholars are often sceptical that they have any important shared features. "Universality is a big word - and a dangerous one", the great Leonard Bernstein once said. Indeed, in ethnomusicology, universality became something of a dirty word. But new research promises to once again revive the search for deep universal aspects of human musicality.

Samuel Mehr at Harvard University found that all cultures studied make music, and use similar kinds of music in similar contexts, with consistent features in each case. For example, dance music is fast and rhythmic, and lullabies soft and slow - all around the world. Furthermore, all cultures showed tonality: building up a small subset of notes from some base note, just as in the Western diatonic scale. Healing songs tend to use fewer notes, and more closely spaced, than love songs. These and other findings indicate that there are indeed universal properties of music that likely reflect deeper commonalities of human cognition - a fundamental "human musicality".

In a Science perspective piece in the same issue, University of Vienna researchers Tecumseh Fitch and Tudor Popescu comment on the implications. "Human musicality fundamentally rests on a small number of fixed pillars: hard-coded predispositions, afforded to us by the ancient physiological infrastructure of our shared biology. These 'musical pillars' are then 'seasoned' with the specifics of every individual culture, giving rise to the beautiful kaleidoscopic assortment that we find in world music", Tudor Popescu explains.

"This new research revives a fascinating field of study, pioneered by Carl Stumpf in Berlin at the beginning of the 20th century, but that was tragically terminated by the Nazis in the 1930s", Fitch adds.

As humanity comes closer together, so does our wish to understand what it is that we all have in common - in all aspects of behaviour and culture. The new research suggests that human musicality is one of these shared aspects of human cognition. "Just as European countries are said to be 'United In Diversity', so too the medley of human musicality unites all cultures across the planet", concludes Tudor Popescu.

Credit: 
University of Vienna

Increased use of antibiotics may predispose to Parkinson's disease

Higher exposure to commonly used oral antibiotics is linked to an increased risk of Parkinson's disease, according to a recently published study by researchers from Helsinki University Hospital, Finland.

The strongest associations were found for broad-spectrum antibiotics and those that act against anaerobic bacteria and fungi. The timing of antibiotic exposure also seemed to matter.

The study suggests that excessive use of certain antibiotics can predispose to Parkinson's disease with a delay of up to 10 to 15 years. This connection may be explained by their disruptive effects on the gut microbial ecosystem.

"The link between antibiotic exposure and Parkinson's disease fits the current view that in a significant proportion of patients the pathology of Parkinson's may originate in the gut, possibly related to microbial changes, years before the onset of typical Parkinson motor symptoms such as slowness, muscle stiffness and shaking of the extremities. It was known that the bacterial composition of the intestine in Parkinson's patients is abnormal, but the cause is unclear. Our results suggest that some commonly used antibiotics, which are known to strongly influence the gut microbiota, could be a predisposing factor," says research team leader, neurologist Filip Scheperjans MD, PhD from the Department of Neurology of Helsinki University Hospital.

In the gut, pathological changes typical of Parkinson's disease have been observed up to 20 years before diagnosis. Constipation, irritable bowel syndrome and inflammatory bowel disease have been associated with a higher risk of developing Parkinson's disease. Exposure to antibiotics has been shown to cause changes in the gut microbiome and their use is associated with an increased risk of several diseases, such as psychiatric disorders and Crohn's disease. However, these diseases or increased susceptibility to infection do not explain the now observed relationship between antibiotics and Parkinson's.

"The discovery may also have implications for antibiotic prescribing practices in the future. In addition to the problem of antibiotic resistance, antimicrobial prescribing should also take into account their potentially long-lasting effects on the gut microbiome and the development of certain diseases," says Scheperjans.

The possible association of antibiotic exposure with Parkinson's disease was investigated in a case-control study using data extracted from national registries. The study compared antibiotic exposure during the years 1998-2014 in 13,976 Parkinson's disease patients with that of 40,697 unaffected persons matched for age, sex and place of residence.

Antibiotic exposure was examined over three different time periods: 1-5, 5-10, and 10-15 years prior to the index date, based on oral antibiotic purchase data. Exposure was classified based on the number of purchased courses. Exposure was also examined by classifying antibiotics according to their chemical structure, antimicrobial spectrum, and mechanism of action.
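The windowed exposure classification described above can be sketched as follows. This is a minimal illustration only: the function name, example dates and window boundaries are hypothetical and are not taken from the study's registry data.

```python
from datetime import date

# Exposure windows: years before the index date, as described in the study
WINDOWS = {"1-5y": (1, 5), "5-10y": (5, 10), "10-15y": (10, 15)}

def classify_purchases(purchases, index_date):
    """Count purchased antibiotic courses falling in each exposure window."""
    counts = {name: 0 for name in WINDOWS}
    for purchase_date in purchases:
        years_before = (index_date - purchase_date).days / 365.25
        for name, (lo, hi) in WINDOWS.items():
            if lo <= years_before < hi:
                counts[name] += 1
    return counts

# Illustrative purchase history for one person (made-up dates)
courses = [date(2005, 3, 1), date(2007, 6, 15), date(2012, 1, 10)]
print(classify_purchases(courses, date(2014, 1, 1)))
# → {'1-5y': 1, '5-10y': 2, '10-15y': 0}
```

In the actual study, counts like these would then be stratified further by antibiotic class, spectrum, and mechanism of action.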

Credit: 
University of Helsinki

Filaments that structure DNA

image: After the cell has been treated with a messenger substance (right), the green actin molecules in the red cell nucleus (left) form actin filaments that structure the genome.

Image: 
Robert Grosse

They play a leading role not only in muscle cells. Actin is one of the most abundant proteins in all mammalian cells, and its filigree filament structures form an important part of the cytoskeleton and locomotor system. Cell biologists at the University of Freiburg are now using cell cultures to show how receptor proteins in the plasma membrane of these cells transmit signals from the outside to actin molecules inside the nucleus, which then form threads. In a new study, the team led by pharmacologist Professor Robert Grosse uses physiological messengers to control the assembly and disassembly of actin filaments in the cell nucleus and shows which signaling molecules control the process. The results of their study have been published in Nature Communications.

"It was previously unknown just how a hormone or agent induces the cell to begin filament formation in the intact cell nucleus," Grosse says. Back in 2013, he discovered that actin threads were formed in the nucleus when he exposed cells to serum components. In the nucleus, actin usually occurs as a single protein. It only forms filaments when a signal is given. Actin filaments resemble a double chain of beads and create possible anchor points or pathways for the structures in the cell nucleus. They give the DNA structure - for example, determining how densely packed the chromosomes in the form of chromatin are. This influences the readability of the genetic material. "What we have here is a generally valid mechanism that shows how external, physiological signals can control the cytoskeleton in the nucleus and reorganize the genome in a very short time," Grosse explains.

Grosse is familiar with the signaling pathway that reaches into the nucleus, known as the G-protein-coupled receptor pathway. Agents, hormones, or signal transmitters bind to this receptor type at the cell membrane, which is a target for a large number of clinical drugs. The receptor initiates a calcium release in the cell via a signaling cascade. The team showed that this intracellular calcium causes filaments to form on the inner membrane of the cell nucleus. In the study, they used fluorescence microscopy and genetic engineering methods to show how actin filaments appeared in the nucleus after physiological messengers such as thrombin and LPA bound to G-protein-coupled receptors.

In the research project within the University of Freiburg excellence cluster CIBSS - Centre for Integrative Biological Signalling Studies, Grosse is now investigating the more exact processes in the cell nucleus. "My team in Freiburg wants to find out in detail how filament formation influences the readability of the genetic material and what role the inner cell nucleus membrane plays in this process."

Credit: 
University of Freiburg

DNA repeats -- the genome's dark matter

image: The sequencing device has tiny nanopores that can determine both the DNA sequence and the epigenetic signature.

Image: 
MPI f. Molecular Genetics/ Pay Gießelmann

Expansions of DNA repeats are very hard to analyze. A method developed by researchers at the Max Planck Institute for Molecular Genetics in Berlin allows for a detailed look at these previously inaccessible regions of the genome. It combines nanopore sequencing, stem cell, and CRISPR-Cas technologies. The method could improve the diagnosis of various congenital diseases and cancers in the future.

Large parts of the genome consist of monotonous regions where short sections of the genome repeat hundreds or thousands of times. But expansions of these "DNA repeats" in the wrong places can have dramatic consequences, like in patients with Fragile X syndrome, one of the most commonly identifiable hereditary causes of cognitive disability in humans. However, these repetitive regions are still regarded as an unknown territory that cannot be examined appropriately, even with modern methods.

A research team led by Franz-Josef Müller at the Max Planck Institute for Molecular Genetics in Berlin and the University Hospital of Schleswig-Holstein in Kiel recently shed light on this inaccessible region of the genome. Müller's team was the first to successfully determine the length of genomic tandem repeats in patient-derived stem cell cultures. The researchers additionally obtained data on the epigenetic state of the repeats by scanning individual DNA molecules. The method, which is based on nanopore sequencing and CRISPR-Cas technologies, opens the door for research into repetitive genomic regions, and the rapid and accurate diagnosis of a range of diseases.

A gene defect on the X chromosome

In Fragile X syndrome, a repeat sequence has expanded in a gene called FMR1 on the X chromosome. "The cell recognizes the repetitive region and switches it off by attaching methyl groups to the DNA," says Müller. These small chemical changes have an epigenetic effect because they leave the underlying genetic information intact. "Unfortunately, the epigenetic marks spread over to the entire gene, which is then completely shut down," explains Müller. The gene is known to be essential for normal brain development. He states: "Without the FMR1 gene, we see severe delays in development leading to varying degrees of intellectual disability or autism."

Female individuals are, in most cases, less affected by the disease, since the repeat region is usually located on only one of the two X chromosomes. Since the unchanged second copy of the gene is not epigenetically altered, it is able to compensate for the genetic defect. In contrast, males have only one X chromosome and one copy of the affected gene and display the full range of clinical symptoms. The syndrome is one of about 30 diseases that are caused by expanding short tandem repeats.

First precise mapping of short tandem repeats

In this study, Müller and his team investigated the genome of stem cells that were derived from patient tissue. They were able to determine the length of the repeat regions and their epigenetic signature, a feat that had not been possible with conventional sequencing methods. The researchers also discovered that the length of the repetitive region could vary to a large degree, even among the cells of a single patient.

The researchers also tested their process with cells derived from patients that contained an expanded repeat in one of the two copies of the C9orf72 gene. This mutation leads to one of the most common monogenic causes of frontotemporal dementia and amyotrophic lateral sclerosis. "We were the first to map the entire epigenetics of extended and unchanged repeat regions in a single experiment," says Müller. Furthermore, the region of interest on the DNA molecule remained physically wholly unaltered. "We developed a unique method for the analysis of single molecules and for the darkest regions of our genome - that's what makes this so exciting for me."

Tiny pores scan single molecules

"Conventional methods are limited when it comes to highly repetitive DNA sequences. Not to mention the inability to simultaneously detect the epigenetic properties of repeats," says Björn Brändl, one of the first authors of the publication. That's why the scientists used Nanopore sequencing technology, which is capable of analyzing these regions. The DNA is fragmented, and each strand is threaded through one of a hundred tiny holes ("nanopores") on a silicon chip. At the same time, electrically charged particles flow through the pores and generate a current. When a DNA molecule moves through one of these pores, the current varies depending on the chemical properties of the DNA. These fluctuations of the electrical signal are enough for the computer to reconstruct the genetic sequence and the epigenetic chemical labels. This process takes place at each pore and, thus, each strand of DNA.

Genome editing tools and bioinformatics illuminate "dark matter"

Conventional sequencing methods analyze the entire genome of a patient. Now, the scientists designed a process to look at specific regions selectively. Brändl used the CRISPR-Cas system to cut DNA segments from the genome that contained the repeat region. These segments went through a few intermediate processing steps and were then funneled into the pores on the sequencing chip.

"If we had not pre-sorted the molecules in this way, their signal would have been drowned in the noise of the rest of the genome," says bioinformatician Pay Giesselmann. He had to develop an algorithm specifically for the interpretation of the electrical signals generated by the repeats: "Most algorithms fail because they do not expect the regular patterns of repetitive sequences." While Giesselmann's program "STRique" does not determine the genetic sequence itself, it counts the number of sequence repetitions with high precision. The program is freely available on the internet.

Numerous potential applications in research and the clinic

"With the CRISPR-Cas system and our algorithms, we can scrutinize any section of the genome - especially those regions that are particularly difficult to examine using conventional methods," says Müller, who is heading the project. "We created the tools that enable every researcher to explore the dark matter of the genome," says Müller. He sees great potential for basic research. "There is evidence that the repeats grow during the development of the nervous system, and we would like to take a closer look at this."

The physician also envisions numerous applications in clinical diagnostics. After all, repetitive regions are involved in the development of cancer, and the new method is relatively inexpensive and fast. Müller is determined to take the procedure to the next level: "We are very close to clinical application."

Credit: 
Max-Planck-Gesellschaft

Increase in cannabis cultivation or residential development could impact water resources

Cannabis cultivation could have a significant effect on groundwater and surface water resources when combined with residential use, evidence from a new study suggests.

Researchers in Canada and the US investigated potential reductions in streamflow, caused by groundwater pumping for cannabis irrigation, in the Navarro River in Mendocino County, California, and contextualized it by comparing it with residential groundwater use.

Reporting their findings in the journal Environmental Research Communications, they note that the combination of cannabis cultivation and residential use may cause significant streamflow depletion, with the largest impacts in late summer when streams and local fish species depend most on groundwater inflows.

Dr Sam Zipper, from the University of Kansas, USA, is the study's lead author. He said: "Cannabis is an emerging agricultural frontier, but thanks to its long illegal and quasi-legal history, we know very little about the impacts of cannabis cultivation on water resources.

"What we do know is that there has been a big increase in cannabis cultivation in in recent years. Researchers have found that the area under cultivation in Mendocino and Humboldt counties nearly doubled between 2012 and 2016.

"It has often been assumed most cannabis cultivators irrigate using surface water. But recent evidence from Northern California shows that groundwater is the primary irrigation water supply in this region. That means it is essential to understand how groundwater pumping for cannabis cultivation affects water resources, particularly given that regulations governing cannabis legalisation and management are currently being debated and designed in many US states."

The study team examined the impacts of ongoing groundwater pumping on streamflow and aquatic ecosystems, using an analytical depletion function - a newly developed tool for estimating streamflow depletion with low data and computational requirements.
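The analytical depletion function the team developed is more sophisticated than this, but the classic Glover and Balmer (1954) solution gives a feel for how such low-data tools work. The sketch below is an illustration, not the paper's actual model: it estimates the fraction of a well's pumping rate that is ultimately captured from a nearby stream, given the well's distance from the stream and basic aquifer properties.

```python
import math

def glover_depletion_fraction(d, S, T, t):
    """Glover & Balmer (1954) analytical solution: the fraction of a well's
    pumping rate captured from a nearby stream.
    d: well-to-stream distance (m)
    S: aquifer storativity (dimensionless)
    T: aquifer transmissivity (m^2/day)
    t: time since pumping began (days)
    """
    return math.erfc(math.sqrt(S * d**2 / (4.0 * T * t)))

# Depletion grows with pumping duration and shrinks with distance from the stream:
f_early = glover_depletion_fraction(d=500, S=0.1, T=100, t=30)
f_late = glover_depletion_fraction(d=500, S=0.1, T=100, t=300)
```

This behavior is why the study highlights late-summer impacts: the longer pumping continues through the dry season, the larger the share of each well's withdrawal that comes out of streamflow.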

They found that both cannabis and residential groundwater use can cause significant streamflow depletion in streams with high salmon habitat potential in the Navarro River Watershed, with shifting drivers of impacts and implications through time.

Dr Zipper said: "Cannabis groundwater pumping has an important impact on streamflow during the dry season. But it is dwarfed by streamflow depletion caused by residential groundwater use, which is five times greater.

"However, cannabis pumping is a new and expanding source of groundwater depletion, which may further deplete a summer baseflow already stressed by residential water use and traditional agriculture."

The groundwater withdrawals at each residence or cannabis cultivation site were relatively small compared to irrigation for row crops like corn. However, with over 300 groundwater-irrigated cannabis cultivation sites and over 1300 residences in the Navarro River Watershed, these relatively small withdrawals added up to a significant impact on local streams.

"This study shows that it is not just big agriculture in the Central Valley that have potential to deplete streamflow by pumping groundwater," states co-author Dr. Jeanette Howard, from The Nature Conservancy California. "We showed that even in watersheds where there aren't big groundwater pumpers, that the cumulative impacts of many small groundwater pumpers has the potential to negatively impact stream flow."

Dr Zipper concluded: "Our results indicate that the emerging cannabis agricultural frontier is likely to increase stress on both surface water and groundwater resources, and groundwater-dependent ecosystems, particularly in areas already stressed by other groundwater users. Further residential development may have a similar effect. This study illustrates a valuable approach to assess potential for surface water depletion associated with dispersed groundwater pumping in other watersheds where this may be a concern.

"The ongoing legalisation of cannabis will require management approaches which consider the connection between groundwater and surface water to protect other water users and ecosystem needs."

Credit: 
IOP Publishing

NASA space data can cut disaster response times, costs

image: In 2011, heavy monsoon rains and La Niña conditions across Southeast Asia's Mekong River basin inundated and destroyed millions of acres of crops, displacing millions of people and killing hundreds. The floodwaters are visible as a solid blue triangle on the left side of this MODIS image from November 1, 2011.

Image: 
LANCE/EOSDIS MODIS Rapid Response Team, NASA's Goddard Space Flight Center

According to a new study, emergency responders could cut costs and save time by using near-real-time satellite data along with other decision-making tools after a flooding disaster.

In the first NASA study to calculate the value of using satellite data in disaster scenarios, researchers at NASA's Goddard Space Flight Center in Greenbelt, Maryland, calculated the time that could have been saved if ambulance drivers and other emergency responders had near-real-time information about flooded roads, using the 2011 Southeast Asian floods as a case study. Ready access to this information could have saved an average of nine minutes per emergency response and potentially millions of dollars, they said.

The study is a first step in developing a model to deploy in future disasters, according to the researchers.

With lives on the line, time is money

In 2011, heavy monsoon rains and La Niña conditions across Southeast Asia's Mekong River basin inundated and destroyed millions of acres of crops, displacing millions of people and killing hundreds. NASA Goddard's Perry Oddo and John Bolten investigated how access to near-real-time satellite data could have helped in the aftermath of the floods, focusing on the area surrounding Bangkok, Thailand.

The Mekong River crosses more than 2,000 miles in Southeast Asia, passing through parts of Vietnam, Laos, Thailand, Cambodia, China and other countries. The river is a vital source of food and income for the roughly 60 million people who live near it, but it is also one of the most flood-prone regions in the world.

In previous work, they helped develop an algorithm that estimated floodwater depth from space-based observations, then combined this data with information on local infrastructure, population and land cover. They used this algorithm to calculate the disaster risk for the region, considering the vulnerability and exposure for various land cover types, and mapped where the costliest damage occurred. Assessing the cost of damage can help emergency managers see what areas may be most in need of resources, aid flood-mitigation planning, and support the development of disaster resilience. The team used this tool to support disaster recovery after the 2018 failure of the Xepian-Xe Nam Noy hydropower dam in Laos.
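The combination described above follows a common risk-assessment pattern: score each map cell as hazard times exposure times vulnerability. The sketch below is an illustration of that pattern, not the team's actual algorithm; the depth saturation threshold and the inputs are assumptions for the example.

```python
# Illustrative flood-risk score per map cell: hazard x exposure x vulnerability.
# This is a generic sketch, not the NASA study's actual model.
def flood_risk(depth_m, asset_value, vulnerability):
    """depth_m: estimated floodwater depth in the cell (m)
    asset_value: exposed value in the cell (e.g. USD of infrastructure/crops)
    vulnerability: 0-1 damage fraction, typically set per land-cover type
    """
    # Hazard saturates: depths of 3 m or more are treated as total inundation
    # (the 3 m cutoff is an assumption chosen for this example).
    hazard = min(depth_m / 3.0, 1.0)
    return hazard * asset_value * vulnerability

# A cell with 1.5 m of water, $100,000 of exposed assets, and cropland
# vulnerability of 0.4 scores an expected loss of $20,000.
loss = flood_risk(1.5, 100000, 0.4)
```

Summing such scores across cells yields the kind of damage map the researchers used to direct resources after the floods.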

In the current study, the researchers investigated the value of near-real-time information on flooded roadways -- specifically, how much time could have been saved by providing satellite-based flood inundation maps to emergency responders in their drive from station to destination.

Flood depth information was calculated from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), and land cover from the NASA-USGS Landsat satellites. Infrastructure, road and population data came from NASA's Socioeconomic Data and Applications Center (SEDAC) and OpenStreetMap, an open-access geographic data source.

"We chose data that represented what we would know within a couple hours of the event," said Perry Oddo, an associate scientist at Goddard and the study's lead author. "We took estimates of flood depth and damage and asked how we could apply that to route emergency response and supplies. And ultimately, we asked, what is the value of having that information?"

First, the researchers used OpenRouteService's navigation service to chart the most direct routes between emergency dispatch sites and areas in need, without flooding information. Then they added near-real-time flooding information to the map, generating new routes that avoided the most highly flooded areas.

The direct routes contained about 10 miles' worth of flooded roadways in their recommendations. In contrast, the routes with flood information were longer, but avoided all highly flooded areas and contained just 5 miles of affected roadways. This made the flood-aware routes about 9 minutes faster than their baseline counterparts on average.
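The study itself used OpenRouteService with MODIS-derived inundation maps, but the core rerouting idea can be sketched with an ordinary shortest-path search that skips flooded road segments. The road network and travel times below are hypothetical toy values for illustration only.

```python
import heapq

def shortest_time(graph, src, dst, blocked=frozenset()):
    """Dijkstra's algorithm over a road graph {node: [(neighbor, minutes), ...]},
    skipping any road segment whose unordered endpoint pair is in `blocked`."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if frozenset((u, v)) in blocked:
                continue  # road segment is flooded; do not traverse it
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return None  # destination unreachable

# Hypothetical toy network: dispatch station A, destination D.
roads = {
    "A": [("B", 3), ("C", 5)],
    "B": [("D", 4)],
    "C": [("D", 6)],
}
flooded = {frozenset(("B", "D"))}  # satellite data marks segment B-D as inundated

direct = shortest_time(roads, "A", "D")             # 7 min, but crosses flooding
rerouted = shortest_time(roads, "A", "D", flooded)  # 11 min, avoids flooding
```

As in the study, the flood-aware route is nominally longer but avoids inundated roadway; in practice that avoidance is what makes the trip faster, since flooded segments are slow or impassable.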

"The response time for emergency responders is heavily dependent on the availability and fidelity of the mapped regions," said John Bolten, associate program manager of the NASA Earth Science Water Resources Program, and the study's second author. "Here we demonstrate the value of this map, especially for emergency responders, and assign a numeric value to it. It has a lot of value for planning future response scenarios, allowing us to move from data to decision-making."

A 9-minute reduction in response time may seem insignificant, but previous research has pegged that value in the millions of dollars, the team said. While Oddo and Bolten did not include explicit financial calculations in their model, one previous study in Southeast Asia showed that reducing emergency vehicles' response time by just one minute per trip over the course of a year could save up to $50 million.

Working together to save lives

The study represents a first step toward a model that can be used in future disasters, the team said.

NASA has participated in research and applications in Southeast Asia for over 15 years via several Earth Science efforts, including NASA's Disasters, Water Resources and Capacity Building Programs. Through these efforts, NASA works with regional partners -- including the Mekong River Commission (MRC), the Asian Disaster Preparedness Center (ADPC) and other agencies -- to provide Earth observation data and value-added tools for local decision makers in the Mekong River basin.

Oddo and Bolten have not only developed tools for partners, but also shared their results with Southeast Asian decision makers.

"The NASA Earth Sciences Applied Sciences Program works by collaborating with partners around the world," Bolten said. "This isn't just research; our partner groups desperately need this information. The work we've laid out here demonstrates the utility of satellite observations in providing information that informs decision making, and mitigates the impact of flooding disasters, both their monetary impact and perhaps loss of life."

Credit: 
NASA/Goddard Space Flight Center

Breast cancer recurrence after lumpectomy & RT is treatable with localized RT without mastectomy

Approximately 10% of breast cancer patients treated with lumpectomy (breast-conserving surgery [BCS]) and whole-breast radiation (WBI) will have a subsequent in-breast local recurrence of cancer (IBTR) when followed long term. The surgical standard of care has been to perform mastectomy if breast cancer recurs following such breast-preserving treatment. However, a new multi-center study led by Douglas W. Arthur, MD, Chair of the Department of Radiation Oncology at the Virginia Commonwealth University/Massey Cancer Center, provides the first evidence that partial breast re-irradiation is a reasonable alternative to mastectomy following tumor recurrence in the same breast. Unlike WBI, which exposes the entire breast to high-powered X-ray beams, partial-breast irradiation targets a high dose of radiation directly on the area where the breast tumor is located and thus avoids exposing the surrounding tissue to radiation.

"Effectiveness of Breast Conserving Surgery and 3-Dimensional Conformal Partial Breast Reirradiation for Recurrence of Breast Cancer in the Ipsilateral Breast The NRG-RTOG 1014 Phase 2 Clinical Trial" published in JAMA Oncology demonstrates that for patients experiencing an IBTR after a lumpectomy, breast conservation is achievable in 90% of patients using adjuvant partial breast re-irradiation, which results in an acceptable risk reduction of IBTR. The study investigators, from 15 institutions, analyzed late adverse events (those occurring one or more years after treatment), mastectomy incidence, distant metastasis-free survival, overall survival, and circulating tumor cell incidence. Median follow-up was 5.5 years.

Eligible patients had experienced an IBTR of 3 centimeters or less occurring one year or more after lumpectomy with WBI and had undergone reexcision of the tumor with negative margins. Of 58 patients (median age 67) whose tumors were evaluable for analysis, 23 IBTRs were non-invasive and 35 were invasive; 91% were ≤2 cm in size and all were clinically node negative. Estrogen receptor was positive in 76%, progesterone receptor in 57%, and Her2Neu was over-expressed in 17%. IBTRs occurred in 4 patients, for a 5-year cumulative incidence of 5% (95% CI: 1%, 13%). Seven patients had ipsilateral mastectomies, for a 5-year cumulative incidence of 10% (95% CI: 4%, 20%). Distant metastasis-free survival and overall survival were both 95% (95% CI: 85%, 98%). Four patients (7%) had a grade 3 late treatment adverse event; none had a grade ≥4 event.

"This is exciting data for women experiencing an IBTR after an initial lumpectomy and WBI who want to preserve their breast. Our study suggests that breast-conserving treatment may be a viable alternative to mastectomy," stated Douglas Arthur, MD, the Principle Investigator and Lead Author of the NRG-RTOG 1014 manuscript.

Credit: 
NRG Oncology

New electrodes could increase efficiency of electric vehicles and aircraft

image: Texas A&M doctoral student Paraskevi Flouda holds sample of new electrode.

Image: 
Texas A&M Engineering

The rise in popularity of electric vehicles and aircraft presents the possibility of moving away from fossil fuels toward a more sustainable future. While significant technological advancements have dramatically increased the efficiency of these vehicles, there are still several issues standing in the way of widespread adoption.

One of the most significant of these challenges has to do with mass, as even the most advanced electric vehicle batteries and supercapacitors are incredibly heavy. A research team from the Texas A&M University College of Engineering is approaching the mass problem from a unique angle.

Most of the research aimed at lowering the mass of electric vehicles has focused on increasing energy density, thus reducing the weight of the battery or supercapacitor itself. However, a team led by Dr. Jodie Lutkenhaus, professor in the Artie McFerrin Department of Chemical Engineering, believes that lighter electric vehicles and aircraft can be achieved by storing energy within the structural body panels. This approach presents its own set of technical challenges, as it requires batteries and supercapacitors with the same sort of mechanical properties as the structural body panels. Battery and supercapacitor electrodes are often formed from brittle materials and are not mechanically strong.

In an article published in Matter, the research team described the process of creating new supercapacitor electrodes that have drastically improved mechanical properties. In this work, the research team was able to create very strong and stiff electrodes based on dopamine functionalized graphene and Kevlar nanofibers. Dopamine, which is also a neurotransmitter, is a highly adhesive molecule that mimics the proteins that allow mussels to stick to virtually any surface. The use of dopamine and calcium ions leads to a significant improvement in mechanical performance.

In fact, in the article, the researchers report supercapacitor electrodes with the highest multifunctional efficiency to date (a metric that evaluates a multifunctional material based on both its mechanical and electrochemical performance) for graphene-based electrodes.

This research leads to an entirely new family of structural electrodes, which opens the door to the development of lighter electric vehicles and aircraft.

While this work mostly focused on supercapacitors, Lutkenhaus hopes to translate the research into creating sturdy, stiff batteries.

Credit: 
Texas A&M University

Do obesity and smoking impact healing after wrist fracture surgery?

Boston, Mass. - Both obesity and smoking can have negative effects on bone health. A recent study led by a team at Beth Israel Deaconess Medical Center (BIDMC) examined whether they also impact healing in patients who have undergone surgery for fractures of the wrist, or the distal radius, which are among the most common bone fractures. Such fractures account for 5 percent to 20 percent of all emergency room fracture visits, and affected patients can experience challenges with daily living as well as potentially serious and costly complications.

For the study, published in the Journal of Hand Surgery, the investigators analyzed data on patients surgically treated for a distal radius fracture between 2006 and 2017 at two trauma centers. The 200 patients were divided into obese and non-obese groups (39 and 161 patients, respectively) and were also characterized as current, former, and never smokers (20, 32, and 148 patients, respectively) based on self-reported cigarette use.

At three-month and one-year follow-ups after surgery, both the obese and non-obese groups achieved acceptable scores for patient-reported function in the upper extremity - close to those of the general population. The two groups were also similar with regard to range of motion and bone alignment. At three months, smokers demonstrated worse scores for arm, shoulder, and hand function and a lower percentage of healed fractures, but these effects improved over the course of a year. Complications were similar between groups.

"Overall we found that we can achieve excellent clinical and radiographic outcomes with surgery for displaced wrist fractures in patients who are obese and in those who smoke," said senior author Tamara D. Rozental, MD, Chief of Hand and Upper Extremity Surgery at BIDMC and Professor of Orthopedic Surgery at Harvard Medical School. "Our results show that treatment for distal radius fractures in obese and smoking patients is safe, and these patients may be treated like the general population with similar long-term results. Their short-term outcomes, however, demonstrate higher disability and, in the case of smokers, slower fracture healing."

Rozental stressed that obesity and smoking are considered two of the most important preventable causes of poor health in developed nations, and both are modifiable risk factors. "As such, we believe that lifestyle interventions focusing on weight loss and smoking cessation should be emphasized whenever possible," she said.

Credit: 
Beth Israel Deaconess Medical Center

Nov. journal highlights: First MCI prevalence estimates in US Latino populations

image: Alzheimer's & Dementia: The Journal of the Alzheimer's Association November 2019 issue cover

Image: 
Alzheimer's Association

CHICAGO, November 22, 2019 - In the largest dementia study of a diverse group of U.S. Latinos to date, researchers found that nearly 10% of middle-age and older Latinos have a decline in memory and thinking skills known as mild cognitive impairment (MCI), according to a new article published online by Alzheimer's & Dementia: The Journal of the Alzheimer's Association. MCI marks early memory changes that can progress to dementia.

Hector M. González, Ph.D., University of California, San Diego, and colleagues analyzed data from more than 6,000 individuals in the ongoing Study of Latinos-Investigation of Neurocognitive Aging (SOL-INCA) funded by the National Institutes of Health. The researchers found that older age, high cardiovascular disease risk and depression symptoms were significantly associated with MCI diagnosis. MCI prevalence rates ranged from 12.9% for individuals with Puerto Rican backgrounds to 8.0% among individuals with Cuban backgrounds.

Link: "Prevalence and correlates of mild cognitive impairments among diverse Hispanics/Latinos: Study of Latinos-Investigation of Neurocognitive Aging results"

In a related Perspectives article, also newly published online, Dr. González and colleagues offer SOL-INCA to illustrate a new framework for advancing research into Alzheimer's and other dementias in Latino populations. Latinos represent nearly one-fifth of the U.S. population, and are a growing segment that is culturally and genetically diverse, while also facing major risks and disparities for Alzheimer's disease and related dementias.

Link: "A research framework for cognitive aging and Alzheimer's disease among diverse US Latinos: Design and implementation of the Hispanic Community Health Study/Study of Latinos--Investigation of Neurocognitive Aging (SOL-INCA)"

In the first study of tribal health care service use by Alaska Natives and American Indians, a sharp upward change in the use of health care services, including twice as many primary care visits, was seen in the year that individuals received a diagnosis of Alzheimer's or another dementia. The paper by Krista R. Schaefer, M.P.H., from Southcentral Foundation, Anchorage, AK, and colleagues is published in the November print issue of Alzheimer's & Dementia: The Journal of the Alzheimer's Association.

"Alaska has the fastest growing population of people 65 years and older compared with any other state; moreover, Alaska Native and American Indian people make up to 20% of Alaska's population. Therefore, healthcare systems will need to tailor their services in anticipation of an increase in the numbers of patients suffering from Alzheimer's disease and related dementias," the authors write.

Link: "Differences in service utilization at an urban tribal health organization before and after Alzheimer's disease or related dementia diagnosis: A cohort study" (not embargoed)

Also in the November print issue, a research article by Laerke Taudorf, M.D., from the University of Copenhagen, Denmark, and colleagues analyzed data from three Danish national health registries for people 65 years and older. After adjusting for age and sex, the dementia incidence rate (new cases) increased by an average of 9% annually from 1996 to 2003, followed by a 2% annual decline, while total prevalence (the number of people living with dementia) increased during the entire period and is still increasing. The authors conclude, "the decline in total incidence and incidence rates of dementia leads to a cautious optimism that with better health and management of risk factors, it may be possible to lower the risk of dementia."

Link: "Declining incidence of dementia: A national registry-based study over 20 years" (not embargoed)

Credit: 
Alzheimer's Association

Small, fast, and highly energy-efficient memory device inspired by lithium-ion batteries

image: The stacked layers in the proposed memory device form a mini-battery that can be quickly and efficiently switched between three different voltage states (0.95 V, 1.35 V, and 1.80 V).

Image: 
ACS Applied Materials and Interfaces

Virtually all digital devices that perform any sort of information processing require not only a processing unit, but also a quick memory that can temporarily hold the inputs, partial results, and outputs of the operations performed. In computers, this memory is referred to as dynamic random-access memory, or DRAM. The speed of DRAM is very important and can have a significant impact on the overall speed of the system. In addition, lowering the energy consumption of memory devices has recently become a hot topic in the pursuit of highly energy-efficient computing. Therefore, many studies have focused on testing new memory technologies to surpass the performance of conventional DRAM.

The most basic units in a memory chip are its memory cells. Each cell typically stores a single bit by adopting and holding one of two possible voltage values, which correspond to a stored value of either "0" or "1". The characteristics of the individual cells largely determine the performance of the overall memory chip. Simpler and smaller cells with high speed and low energy consumption would be ideal for taking highly efficient computing to the next level.

A research team from Tokyo Tech led by Prof. Taro Hitosugi and student Yuki Watanabe recently reached a new milestone in this area. These researchers had previously developed a novel memory device inspired by the design of solid lithium-ion batteries. It consisted of a stack of three solid layers made of lithium, lithium phosphate, and gold. This stack is essentially a miniature low-capacity battery that functions as a memory cell; it can be quickly switched between charged and discharged states that represent the two possible values of a bit. However, gold combines with lithium to form a thick alloy layer, which increases the amount of energy required to switch from one state to the other.

In their latest study, the researchers created a similar three-layer memory cell using nickel instead of gold. They expected better results using nickel because it does not easily form alloys with lithium, which would lead to lower energy consumption when switching. The memory device they produced was much better than the previous one; it could actually hold three different voltage states instead of two, meaning that it is a three-valued memory device. "This system can be viewed as an extremely low-capacity thin-film lithium battery with three charged states," explains Prof. Hitosugi. This is a very interesting feature that has potential advantages for three-valued memory implementations, which may be more area efficient.
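The three voltage levels reported for the device (0.95 V, 1.35 V, and 1.80 V) hint at why three-valued cells are attractive. The sketch below is purely illustrative, not the team's read-out circuitry: it decodes a measured voltage to the nearest of the three states, and shows the capacity math behind the area-efficiency claim (a k-level cell stores log2(k) bits).

```python
import math

# Voltage levels reported for the device's three stable states (from the figure caption).
LEVELS = (0.95, 1.35, 1.80)

def decode_state(voltage):
    """Illustrative decoder: map a measured cell voltage to the nearest
    of the three states (returned as 0, 1, or 2)."""
    return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - voltage))

def cells_needed(n_bits, levels=3):
    """A k-level cell stores log2(k) bits, so ternary cells need roughly
    37% fewer cells than binary cells for the same amount of data."""
    return math.ceil(n_bits / math.log2(levels))

# Storing 64 bits takes 41 ternary cells versus 64 binary cells.
ternary = cells_needed(64)
binary = cells_needed(64, levels=2)
```

This per-cell density gain is one reason a stable three-state device is more than a curiosity: the same data fits in fewer, smaller cells.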

The researchers also found that nickel forms a very thin nickel oxide layer between the Ni and the lithium phosphate layers (see Fig. 1), and this oxide layer is essential for the low-energy switching of the device. The oxide layer is much thinner than that of the gold-lithium alloys that formed in their previous device, which means that this new "mini-battery" cell has a very low capacity and is therefore quickly and easily switched between states by applying minuscule currents. "The potential for extremely low energy consumption is the most noteworthy advantage of this device," remarks Prof. Hitosugi.

Increased speed, lower energy consumption, and smaller size are all highly demanded features in future memory devices. The memory cell developed by this research team is a very promising stepping stone toward much more energy-efficient and faster computing.

Credit: 
Tokyo Institute of Technology