
NYS winters could pose solar farm 'ramping' snag for power grid

ITHACA, N.Y. - Adding utility-scale solar farms throughout New York state could reduce summer electricity demand from conventional sources by up to 9.6 percent in some places.

But Cornell University engineers caution that upstate winters tell a different tale. With low energy demand around midday in the winter, combined with solar-electricity production, New York's power system could face volatile swings in "ramping" - the term power system operators use for quick increases or decreases in demand.

"It's a very surprising finding," said senior author Max Zhang, associate professor at Cornell's Sibley School of Mechanical and Aerospace Engineering. "When are you going to have maximum ramping take place in New York? It's not going to be in the summer when the solar power is the highest and the needs are more balanced. It turns out to be in the winter. When you have several days of sunshine in a row during winter, that causes the largest ramping on the power system in New York state."

The paper, "Strategic Planning for Utility-Scale Solar Photovoltaic Development - Historical Peak Events Revisited," was published in Applied Energy. In addition to Zhang, co-authors are Cornell doctoral candidates Jeff Sward and Jiajun Gu, and Jackson Siff.

Ramping makes the grid less efficient, because system operators must then call on natural gas or other carbon-emitting generation to keep up with demand, Sward said. "This paper can inform regional development trends and could lead to the improvement of electricity transmission from upstate to downstate."

"The increasing ramping requirement will be a challenge in pursuing our renewable energy target," said Zhang, "but it can be met with flexible resources, both in the supply and demand sides, as well as energy storage."

Credit: 
Cornell University

Self-esteem may be key to success for Portland's homeless youth, PSU study finds

Service providers for youth experiencing homelessness typically focus on the big three: food, shelter and health care. But a new study from Portland State University Community Psychology graduate student Katricia Stewart shows overall well-being is just as important.

She published her study, "Intrapersonal and Social-contextual Factors Related to Psychological Well-being Among Youth Experiencing Homelessness," with her advisor Greg Townley in the Journal of Community Psychology earlier this year. This work was funded by a student research grant awarded to Stewart by APA Division 27, the Society for Community Research and Action.

"In the end, they're still just kids and young adults who need to enjoy themselves and have creative outlets and make friends," Stewart said. "There needs to be a balance between serving those basic needs and having opportunities to just be a young adult."

Stewart argues that focusing only on food, shelter and health care above all else isn't the best way to serve youth experiencing homelessness.

"While those fundamentals are important, so are opportunities for youth to cultivate community, develop supportive relationships, and engage in meaningful hobbies," she said.

Stewart studied components of well-being -- including self-esteem, mental health, sense of community and empowerment -- and which factors made a difference in the day-to-day lives of homeless youth.

While all the factors are important, Stewart said self-esteem and mental health stood out as the primary predictors of psychological well-being.

"Greater self-esteem predicted greater psychological well-being, which makes sense when you consider the age group -- 18 to 24 years old -- who are in a time in life when identity and self-esteem are important parts of development," she said.

Many of the 100 Portland youth surveyed for Stewart's study stated they were homeless because they were either kicked out of their home or chose to leave; were managing personal issues like drug use or pregnancy; or struggled financially.

Further, many were living in unhealthy or abusive environments and battling with a difference in family beliefs or values.

These circumstances might contribute to youth feeling disempowered, Stewart said.

"However, if supported in the right way, they can develop a stronger sense of empowerment and self-worth," she said. "This, in turn, might be one of the many important factors in changing the trajectory of their lives."

Building their self-esteem and helping them recognize they can take action to improve their situations is one factor at play; service providers can also help youth identify and pursue opportunities or skills that support their future -- such as education, housing, or employment, she added.

p:ear, a Portland nonprofit providing educational, artistic and recreational opportunities to youth experiencing homelessness, worked with Stewart during the study. She identified p:ear as an example of a service provider examining the bigger picture and helping youth make strides in overall wellness.

"They provide a space for youth to build confidence and develop themselves, pursue activities and do things that can make them feel good about themselves," she said. That includes access to art, job skills training and recreational activities.

Stewart hopes her findings will help inform future research and program development at homeless service centers. She looks forward to opportunities to continue working with p:ear and with youth experiencing homelessness in her role as a graduate student research assistant with PSU's Homelessness Research & Action Collaborative beginning this summer.

Credit: 
Portland State University

New compounds could be used to treat autoimmune disorders

image: Newly-developed molecules bind to a key enzyme pocket to inhibit its activity, and possibly prevent autoimmune responses.

Image: 
Laboratory of RNA Molecular Biology at The Rockefeller University

The immune system is programmed to rid the body of biological bad guys--like viruses and dangerous bacteria--but its precision isn't guaranteed. In the tens of millions of Americans suffering from autoimmune diseases, the system mistakes normal cells for malicious invaders, prompting the body to engage in self-destructive behavior. This diverse class of conditions, which includes Type I diabetes, lupus, and multiple sclerosis, can be very difficult to treat.

In a new report in Nature Communications, researchers in the laboratory of Thomas Tuschl describe their development of small molecules that inhibit one of the main enzymes implicated in misguided immune responses. This research could lead to new treatments for people with certain autoimmune disorders and, more broadly, sheds light on the causes of autoimmunity.

Cellular security

In eukaryotes, including humans, DNA typically resides in a cell's nucleus, or in other sequestered organelles such as mitochondria. So if DNA is found outside of these compartments--in the cell's cytosol--the immune system goes into high alert, assuming the genetic material was leaked by an invading bacterium or virus.

In 2013, researchers discovered an enzyme called cyclic GMP-AMP synthase, or cGAS, that detects and binds to cytosolic DNA to initiate a chain reaction--a cascade of cellular signaling events that leads to immune activation and usually ends with the destruction of the DNA-shedding pathogen.

Yet, cytosolic DNA isn't always a sign of infection. Sometimes it's produced by the body's own cells--and cGAS does not discriminate between infectious and innocuous DNA. The enzyme will bind to perfectly harmless genetic material, prompting an immune response even in the absence of an intruder.

"There is no specificity. So in addition to sensing foreign microbial DNA, cGAS will also sense aberrant cytosolic DNA made by the host," says postdoctoral associate Lodoe Lama. "And this lack of self versus non-self specificity could be driving autoimmune reactions."

Since the discovery of cGAS, researchers in the Tuschl laboratory have sought to understand its potential clinical relevance. If autoimmune disorders are the result of an erroneously activated immune system, then perhaps, they believe, a cGAS inhibitor could be used to treat these conditions.

Until now, no potent and specific small-molecule compound existed to block cGAS in human cells, though the researchers previously identified one that can do the job in mouse cells. Hoping to fill this gap, Tuschl's team collaborated with Rockefeller's High-Throughput and Spectroscopy Resource Center to scan through a library of almost 300,000 small molecules, searching for one that might target human cGAS.

Building a blocker

Through their screen, the researchers identified two molecules that showed some activity against cGAS--but this result was just the beginning of a long process towards developing an inhibitor that might be used in a clinical setting.

"The hits from library compounds were a great starting point, but they were not potent enough," says Lama. "So we used them as molecular scaffolds on which to make improvements, altering their structures in ways that would increase potency and also reduce toxicity."

Working with the Tri-Institutional Therapeutics Discovery Institute, the researchers modified one of their original scaffolds to create three compounds that blocked cGAS activity in human cells--making them the first molecules with this capability. Further analysis by researchers at Memorial Sloan Kettering Cancer Center revealed that the compounds inhibit cGAS by wedging into a pocket of the enzyme that is key to its activation.

The compounds are now being further optimized for potential use in patients, with an initial focus on treatment of the rare genetic disease Aicardi-Goutières syndrome. People with this condition accumulate abnormal cytosolic DNA that activates cGAS, leading to serious neurological problems. A drug that blocks the enzyme would therefore be of tremendous therapeutic value to those with the disease, who currently have few treatment options.

"This class of drug could potentially also be used to treat more common diseases, such as systemic lupus erythematosus, and possibly neurodegenerative diseases that include inflammatory contributions, such as Parkinson's disease," says Tuschl.

Further, the researchers believe that these compounds could serve as practical laboratory tools.

"Scientists will now have simple means by which to inhibit cGAS in human cells," says Lama. "And that could be immensely useful for studying and understanding the mechanisms that lead to autoimmune responses."

Credit: 
Rockefeller University

All ears: Genetic bases of mammalian inner ear evolution

Mammals have adapted to live in the darkest caves and the deepest oceans, from the highest mountains to the open plains. Along the way, mammals have also evolved remarkable hearing abilities, from the high-frequency echolocation calls of bats to low-frequency whale songs. Even dogs, our best-known companion animals, have developed a hearing range twice as wide as that of people.

Working from the premise that these adaptations have a genetic basis, a team of scientists led by Lucia Franchini of the National Council of Scientific and Technological Research (CONICET) in Buenos Aires, Argentina, has made it its goal to identify the genetic bases underlying the evolution of the inner ear in mammals. Their latest findings, which identified two new genes involved in hearing, underscore the promise of this approach. The study was published in the advance online edition of Molecular Biology and Evolution.

"This paper builds on the premise that the evolution of mammalian inner ear hearing related novelties should leave a discoverable trace of adaptive molecular signature," said Franchini. "This work highlights the usefulness of evolutionary studies to pinpoint novel key functional genes."

The basic processes of hearing are the same across mammalian species. The auditory system of mammals is characterized by a middle ear composed of three ossicles (the malleus (hammer), incus (anvil) and stapes (stirrup)), which funnels sound to the inner ear.

Franchini's group focused on the inner ear, which turns sound vibrations into electrical signals that the brain can process. Within the inner ear is the snail-shaped cochlea, which transforms sound waves into nerve impulses; it houses the auditory organ of Corti, which possesses two types of specialized sensory hair cells (HCs): inner (IHCs) and outer hair cells (OHCs).

"In the mammalian cochlea, IHCs and OHCs display a clear division of labor," explains Franchini. "The IHCs receive and relay sound information behaving as the true sensory cells, while OHCs amplify sound information. Thus, IHCs which are the primary transducers, release glutamate to excite the sensory fibers of the cochlear nerve and OHCs act as biological motors to amplify the motion of the sensory epithelium."

In their study, they used a two-pronged approach, complementing in silico gene comparisons with follow-up experimental studies, to gain a more complete understanding of the genetic circuitry behind mammalian inner ear adaptations.

"These functional and morphological innovations in the mammalian inner ear contribute to its unique hearing capacities," said lead author Lucia Franchini. However, the genetic bases underlying the evolution of this mammalian landmark are poorly understood. We propose that the emergence of morphological and functional innovations in the mammalian inner ear could have been driven by adaptive molecular evolution."

First, they took advantage of extensive gene expression databases to perform software-based, or in silico, comparative studies of 1,300 genes to identify those that may have been positively selected as mammals adapted over evolutionary time. In total, they found that 165 inner ear genes, or 13 percent, showed signs of adaptive selection.

"This analysis indicated that both IHCs and OHCs went through similar levels of gene adaptive evolution probably underlying the morphological and functional remodelling that both cellular types underwent in the mammalian lineage," said Franchini.

"Notably we found that analysing functional categories of positively selected genes the most enriched functional term were 'cytoskeletal protein binding' and 'structural constituent of the cytoskeleton'. These findings indicate that the OHC genes that underwent positive selection could have contributed to the acquisition of the highly specialized cytoskeleton present in these cells that underlies its distinctive functional properties, including somatic electromotility."

Next, they experimentally tested hearing gene functions in a series of mouse studies. Among these, they focused on two previously unknown inner ear genes: STRIP2 (from Striatin Interacting Protein 2) and ABLIM2 (Actin Binding LIM domain 2), which were functionally characterized by generating novel strains of mutant mice by using CRISPR/Cas9 technology. In each case, they used CRISPR to turn off part of the normal gene function to see how it affected the hearing genetic circuitry.

"We performed auditory functional studies of Strip2 and Ablim2 newly generated mutant mice by means of two complementary techniques that allow differential diagnosis of OHC versus IHC/neuronal dysfunction throughout the cochlea," said Franchini. "To evaluate the integrity of the hearing system we recorded ABRs (Auditory Brainstem Responses) that are sound-evoked potentials generated by neuronal circuits in the ascending auditory pathways. We also evaluated the OHCs function through distortion product otoacoustic emissions (DPOAE) testing."

They discovered that Strip2 likely plays a functional role in the first synapse between IHCs and nerve fibers. Moreover, when they examined the cochlear sensory epithelium, they found a significant reduction in auditory-nerve synapses. In contrast, the Ablim2 mutant studies suggest that the absence of Ablim2 affects neither cochlear amplification nor auditory nerve function.

"In summary, through this evolutionary approach we discovered that STRIP2 underwent strong positive selection in the mammalian lineage and plays an important role in the physiology of the inner ear," said Franchini. "Moreover, our combined evolutionary and functional studies allow us to speculate that the extensive evolutionary remodeling that this gene underwent in the mammalian lineage provided an adaptive value. Thus, our study is a proof of concept that evolutionary approaches paired with functional studies could be a useful tool to uncover new key players in the function of organs and tissues."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Factors associated with elephant poaching

image: The African elephant poaching rates have fallen since 2011.

Image: 
Photo: Colin Beale/University of York

Elephants are essential to savannah and forest ecosystems and play an important role in ecotourism in Africa - yet poaching has contributed to a rapid decline in elephant populations in recent decades. An international research team has now released a study presenting a more positive perspective: Severin Hauenstein and Prof. Dr. Carsten Dormann from the Department of Biometry and Environmental Systems Analysis at the University of Freiburg, together with Dr. Colin Beale from the University of York, England, as well as Dr. Mrigesh Kshatriya and Dr. Julian Blanc from the elephant monitoring programme MIKE, based in Kenya, used a statistical approach to show that African elephant poaching rates have fallen since 2011. In a study published in the current issue of the journal Nature Communications, the researchers linked illegal elephant hunting rates to local poverty, national corruption and global ivory demand.

While almost all elephant populations have experienced drastic declines since 2000, some populations have been stable or even increasing in recent years, such as that in the Kruger National Park in South Africa. The analysis shows that the number of elephants killed by poachers has fallen from an estimated peak of more than ten percent of the African elephant population in 2011 to less than four percent in 2017. "This is a positive trend, but we should not see this as an end to the poaching crisis," cautions Hauenstein. "After some changes in the political environment, the total number of illegally killed elephants in Africa seems to be falling, but to assess possible protection measures, we need to understand the local and global processes driving illegal elephant hunting."

The results indicate that in a regional comparison, corruption and poverty among the local population are the main factors that drive poaching rates. The researchers show that efforts to curb the demand for ivory in Asian markets and reduce local corruption and poverty could be more successful in the fight against poaching than solely focusing on law enforcement: the recorded annual poaching rates correlate strongly with proxies of ivory demand in China, the traditional market for ivory. In addition, the variation of illegal killing rates among the 29 African countries was primarily explained by the degree of corruption and poverty in the respective country.

In the CITES programme "Monitoring the Illegal Killing of Elephants" (MIKE), which is co-financed by the European Union, wildlife law enforcement patrols annually record the elephant carcasses in 53 monitoring sites in 29 African countries and identify the cause of death. Between 2002 and 2017, the programme documented 18,007 carcasses, of which 8,860 were identified as illegal killings. MIKE was established by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) to inform decision-making by the Parties regarding trade in specimens of elephants, and build capacity in elephant range States for the overall goal of better management of elephants and enhanced enforcement efforts.

Credit: 
University of Freiburg

Chloropicrin application increases production and profit potential for potato growers

image: Chloropicrin was first used on potato in 1940 as a wireworm suppressant and then in 1965 as a verticillium suppressant. Farmers stopped using it on potato for many years, but over the last decade, it has seen a resurgence in popularity -- and for good reason, according to Chad Hutchinson, director of research at TriEst Ag Group, Inc., in his webcast 'Chloropicrin Soil Fumigation in Potato Production Systems.'

Image: 
Hutchinson C.M.

St. Paul, MN (May, 2019)--The chemical compound chloropicrin was first synthesized in 1848 by Scottish chemist John Stenhouse and first applied to agriculture in 1920, when it was used to cure tomato "soil sickness." Over the next decade, it was used to restore pineapple productivity in Hawaii and to address soil fungal problems in California. Over time, it began to be widely used as a fungicide, herbicide, insecticide, and nematicide.

Chloropicrin was first used on potato in 1940 as a wireworm suppressant and then in 1965 as a verticillium suppressant. Farmers stopped using it on potato for many years, but over the last decade, it has seen a resurgence in popularity--and for good reason, according to Chad Hutchinson, director of research at TriEst Ag Group, Inc., in his webcast "Chloropicrin Soil Fumigation in Potato Production Systems."

Used as a preplant soil treatment measure, chloropicrin suppresses soilborne pathogenic fungi and some nematodes and insects. With a half-life of hours to days, it is completely digested by soil organisms before the crop is planted, making it safe and efficient. Contrary to popular belief, chloropicrin does not sterilize soil and does not deplete the ozone layer, as the compound is destroyed by sunlight. Additionally, chloropicrin has never been found in groundwater, due to its low solubility.

According to Hutchinson, chloropicrin-treated soil has a healthier root system, improved water use, and more efficient fertilizer use. Applying chloropicrin to soil also results in greater crop yield and health. Hutchinson also comments on the compound's ability to suppress many common pathogens, including the pathogen that causes common scab and species of Verticillium, Fusarium, and Phytophthora.

Hutchinson concludes that the use of chloropicrin not only increases production efficiency and profit potential for potato farmers, but it can also improve soil health, "the foundation of a positive crop production system." His presentation "Chloropicrin Soil Fumigation in Potato Production Systems" is fully open access and available online.

This webcast, sponsored by TriEst, is part of the "Focus on Potato" series on the Plant Management Network (PMN). PMN is a cooperative, not-for-profit resource for the applied agricultural and horticultural sciences. Together with more than 80 partners, which include land-grant universities, scientific societies, and agribusinesses, PMN publishes quality, applied, and science-based information for practitioners.

Credit: 
American Phytopathological Society

Reading clinician visit notes can improve patients' adherence to medications

BOSTON--A new study of patients reading the visit notes their clinicians write reports positive effects on their use of prescription medications. The study, "Patients Managing Medications and Reading Their Visit Notes: A Survey of OpenNotes Participants," published today in the Annals of Internal Medicine, shows that when patients read their notes, they report significant benefits, including feeling more comfortable with and in control of their medications, a greater understanding of medications' side effects, and being more likely to take medications as prescribed.

The study of approximately 20,000 adult patients at Beth Israel Deaconess Medical Center (BIDMC) in Boston, at University of Washington Medicine (UW) in Seattle, and at Geisinger, a health system in rural Pennsylvania, was conducted online between June and October of 2017. The three health systems have been sharing visit notes written by primary care doctors, medical and surgical specialists, and other clinicians for several years.

"Sharing clinical notes with patients is a relatively low-cost, low-touch intervention," said study lead Catherine DesRoches, DrPH, Executive Director of OpenNotes, and also of the Division of General Medicine at BIDMC. "While note sharing requires a culture shift in medicine, it is not technically difficult with most Electronic Health Record Systems (EHRs), and could have an enormous payoff, given that we know poor adherence to medications costs the health care system about $300 billion per year. Anything that we can do to improve adherence to medications has significant value."

Patients reported that they gained important benefits from reading their notes: 64 percent reported increased understanding of why a medication was prescribed; 62 percent felt more in control of their medications; 57 percent found answers to questions about medications; and 61 percent felt more comfortable with medications. Fourteen percent of patients at BIDMC and Geisinger said that they were more likely to take their medications as prescribed after reading their notes, while 33 percent of patients at UW rated notes as very important in helping them with their medications. The study also showed that patients speaking primary languages other than English and those with lower levels of formal education were more likely to report benefits.

"This kind of transparent communication presents a big change in long-standing practice, and it's not easy," said study co-author and OpenNotes co-founder Tom Delbanco, MD, MACP, John F. Keane & Family Professor of Medicine at Harvard Medical School and BIDMC. "Doctors contemplating it for the first time are nervous. They worry about many things, including potential effects on their workflow, and scaring their patients. But once they start, we know of few doctors who decide to stop, and patients overwhelmingly love it. The promise it holds for medication adherence is enormous, and we are really excited by these findings."

Study participants were aged 18 years or older, had logged into the secure patient portal at least once in the previous 12 months, had at least one ambulatory visit note available and had been prescribed or were taking medications in the previous 12 months. The survey respondents represented urban and rural settings, varied levels of education, and broad age and racial distributions. The main outcome measures included patient-reported behaviors and their perceptions concerning benefits versus risks.

In an accompanying editorial, David Blumenthal, MD and Melinda K. Abrams, MS of the Commonwealth Fund write: "Transparency is no longer the distant, radical vision it was when the pioneering OpenNotes team began their work. Rather, it is a fact of clinical life, mandated by federal law and policy...Our challenge now is to make the best and most of shared health care information as a tool for clinical management and health improvement."

Credit: 
Beth Israel Deaconess Medical Center

As plaque deposits increase in the aging brain, money management falters

image: Scans of two study participants show the brain of a cognitively healthy 74-year-old (top row) who demonstrated average financial skills compared to an 86-year-old with mild Alzheimer's disease (bottom row) who demonstrated impaired financial skills. The bottom scan is positive for amyloid plaques, highlighted in yellow and orange throughout the brain and extending to its edges.

Image: 
Duke Health

DURHAM, N.C. - Aging adults often show signs of slowing when it comes to managing their finances, such as calculating their change when paying cash or balancing an account ledger.

These changes happen even in adults who are cognitively healthy. But trouble managing money can also be a harbinger of dementia and, according to new Duke research in The Journal of Prevention of Alzheimer's Disease, could be correlated with the amount of protein deposits built up in the brain.

"There has been a misperception that financial difficulty may occur only in the late stages of dementia, but this can happen early and the changes can be subtle," said P. Murali Doraiswamy, MBBS, a professor of psychiatry and geriatrics at Duke and senior author of the paper. "The more we can understand adults' financial decision-making capacity and how that may change with aging, the better we can inform society about those issues."

The findings are based on 243 adults ages 55 to 90 participating in a longitudinal study called the Alzheimer's Disease Neuroimaging Initiative, which included tests of financial skills and brain scans to reveal protein buildup of beta-amyloid plaques.

The study included cognitively healthy adults, adults with mild memory impairment (sometimes an Alzheimer's precursor) and adults with an Alzheimer's diagnosis.

Testing revealed that specific financial skills declined with age and at the earliest stages of mild memory impairment. The decline was similar in men and women. After controlling for a person's education and other demographics, the scientists found that the more extensive the amyloid plaques were, the worse that person's ability to understand and apply basic financial concepts or complete tasks such as calculating an account balance.

"Older adults hold a disproportionate share of wealth in most countries and an estimated $18 trillion in the U.S. alone," Doraiswamy said. "Little is known about which brain circuits underlie the loss of financial skills in dementia. Given the rise in dementia cases over the coming decades and their vulnerability to financial scams, this is an area of high priority for research."

Even cognitively healthy people can develop protein plaques as they age, but the plaques may appear years earlier and be more widespread in those at risk for Alzheimer's disease due to a family history or mild memory impairment, Doraiswamy said.

Most testing for early dementia and Alzheimer's disease focuses on memory, said Duke researcher Sierra Tolbert, the study's lead author. A financial capacity assessment, such as the 20-minute Financial Capacity Instrument-Short Form used in the Duke study, could also be a tool for doctors to track a person's cognitive function over time and is sensitive enough to detect even subtle changes, she said.

"Doctors could consider proactively counseling their patients using this scale, but it's not widely in use," Tolbert said. "If someone's scores are declining, that could be a warning sign. We're hoping with this research more doctors will become aware there are tools that can measure subtle changes over time and possibly help patients and families protect their loved ones and their finances."

In addition to Doraiswamy and Tolbert, study authors include Yuhan Liu, Caroline Hellegers, Jeffrey R. Petrella, Michael W. Weiner and Terence Z. Wong.

This research used data from the Alzheimer's Disease Neuroimaging Initiative, which is funded by the National Institutes of Health (U01 AG024904) and the U.S. Department of Defense (W81XWH-12-2-0012), as well as the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through contributions from numerous other organizations. A full list of contributors and financial disclosures is available with the manuscript.

Credit: 
Duke University Medical Center

UTSA study shows vaping is linked to adolescents' propensity for crime

(San Antonio, May 28, 2019) -- UTSA criminal justice professor Dylan Jackson recently published one of the first studies to explore emerging drug use in the form of adolescent vaping and its association with delinquency among 8th and 10th grade students across the nation.

The Centers for Disease Control and Prevention estimate that 4.9 million middle and high school students used some type of tobacco product in 2018, up from 3.6 million in 2017. Moreover, the percentage of high school-aged children who report using e-cigarettes increased by more than 75 percent between 2017 and 2018.

New legislation is targeting this dangerous trend. Earlier this year, the FDA introduced new policies to prevent adolescents from accessing flavored tobacco products, including e-cigarettes. U.S. Senators Mitch McConnell and Tim Kaine have also introduced a bipartisan bill to raise the federal smoking age to 21. The proposed bill includes the use of e-cigarettes, citing it as an "epidemic" among adolescents that has been largely overlooked.

Using a nationally representative sample of 8th and 10th graders in 2017, Jackson found that adolescents who vape are at an elevated risk of engaging in criminal activities such as violence and property theft. He also found that teens who vape marijuana are at a significantly higher risk of violent and property offenses than youth who ingest marijuana through traditional means.

He believes that these findings might be explained by the ability to conceal an illegal substance through the mechanism of vaping, which can reduce the likelihood of detection and apprehension among youth who vape illicit substances and thereby embolden them to engage in other delinquent behaviors.

Ultimately, he argues that youth who vape illicit substances such as marijuana may easily go unnoticed and/or unchallenged due to the ambiguity surrounding the substance they are vaping and the ease of concealability of vaping devices, which can look like a flash drive.

These behaviors include four categories of delinquency:

Violent delinquency, including fighting at school, engaging in a gang fight, causing injury to another or carrying a weapon to school

Property delinquency, such as stealing an item or damaging school property

"Other" types of delinquency, such as trespassing or running away from home

Some combination of the behaviors mentioned above

Jackson also discussed other factors related to vaping, such as youth perceptions of media messaging by product manufacturers that vaping is acceptable because it is a "healthier" option than traditional forms of smoking nicotine or marijuana. "Our hope is that this research will lead to the recognition among policymakers, practitioners, and parents that the growing trend of adolescent vaping is not simply 'unhealthy' - or worse, an innocuous pastime - but that it may in fact be a red flag or an early marker of risk pertaining to violence, property offending, and other acts of misconduct," he said.

Credit: 
University of Texas at San Antonio

AccessLab: New workshops to broaden access to scientific research

image: The trust scale at an AccessLab workshop -- how much do you trust the sources of information that you use?

Image: 
Amber G.F. Griffiths, amber@fo.am

A team from the transdisciplinary laboratory FoAM Kernow and the British Science Association detail how to run an innovative approach to understanding evidence called AccessLab in a paper published on May 28 in the open-access journal PLOS Biology. The AccessLab project enables a broader range of people to access and use scientific research in their work and everyday lives.

Five trial AccessLabs have taken place for policy makers, media and journalists, marine sector participants, community groups, and artists. Through direct citizen-scientist pairings, AccessLab encourages people to come with their own science-related questions and work one-to-one with a science researcher to find and access trustworthy information together. Those who have benefited from the AccessLab approach include a town councillor researching the impacts of building developments on the environment, a GP researching nutrition to advise patients with specific diseases, and a dancer and choreographer researching physiology and injuries.

The act of pairing science academics with local community members from other backgrounds helps build understanding and trust between groups, at a time when this relationship is under increasing threat from political and economic currents in society. The process also exposes science researchers to the difficulties others face in accessing their work and to the importance of publishing research findings in a way that is more inclusive.

"AccessLab is a powerful example of researchers using their expertise to unlock skills in their local communities," the authors say in the paper. "The workshops focus on transferring research skills rather than subject-specific knowledge, highlighting that not having a science background doesn't need to be a barrier to understanding and using scientific knowledge."

Credit: 
PLOS

Computer-assisted diagnostic procedure enables earlier detection of brain tumor growth

image: A computer-assisted diagnostic procedure helps physicians detect the growth of low-grade brain tumors earlier and at smaller volumes than visual comparison alone, according to a study published May 28 in the open-access journal PLOS Medicine.

Image: 
geralt, Pixabay

A computer-assisted diagnostic procedure helps physicians detect the growth of low-grade brain tumors earlier and at smaller volumes than visual comparison alone, according to a study published May 28 in the open-access journal PLOS Medicine by Hassan Fathallah-Shaykh of the University of Alabama at Birmingham, and colleagues. However, additional clinical studies are needed to determine whether early therapeutic interventions enabled by early tumor growth detection prolong survival times and improve quality of life.

Low-grade gliomas constitute 15% of all adult brain tumors and cause significant neurological problems. There is no universally accepted objective technique available for detecting the enlargement of low-grade gliomas in the clinical setting. The current gold standard is subjective evaluation through visual comparison of 2D images from longitudinal radiological studies. A computer-assisted diagnostic procedure that digitizes the tumor and uses imaging scans to segment the tumor and generate volumetric measures could aid in the objective detection of tumor growth by directing the attention of the physician to changes in volume. This is important because smaller tumor sizes are associated with longer survival times and less neurological morbidity. In the new study, the authors evaluated 63 patients--56 diagnosed with grade 2 gliomas and 7 followed for an imaging abnormality without pathological diagnosis--for a median follow-up period of 150 months, and compared tumor growth detection by seven physicians aided by a computer-assisted diagnostic procedure versus retrospective clinical reports.

The computer-assisted diagnostic procedure involved digitizing magnetic resonance imaging scans of the tumors, including 34 grade 2 gliomas with radiological progression and 22 radiologically stable grade 2 gliomas. Physicians aided by the computer-assisted method diagnosed tumor growth in 13 of 22 glioma patients labeled as clinically stable by the radiological reports, but did not detect growth in the imaging-abnormality group. In 29 of the 34 patients with progression, the median time-to-growth detection was 14 months for the computer-assisted method compared to 44 months for current standard-of-care radiological evaluation. Using the computer-assisted method, accurate detection of tumor enlargement was possible with a median of only 57% change in tumor volume compared to a median of 174% change in volume required using standard-of-care clinical methods. According to the authors, the findings suggest that current clinical practice is associated with significant delays in detecting the growth of low-grade gliomas, and computer-assisted methods could reduce these delays.
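
As a point of reference for the volume figures above, here is a minimal sketch (hypothetical numbers, not the study's code) of how the percent change in segmented tumor volume between two scans is computed.

```python
# Hypothetical sketch: percent change in segmented tumor volume between two MRI scans.
# The volumes below are invented; the study reports median changes of 57%
# (computer-assisted) versus 174% (standard of care) at the time growth was detected.

def percent_volume_change(baseline_ml, followup_ml):
    """Relative change in tumor volume, as a percentage of the baseline volume."""
    return (followup_ml - baseline_ml) / baseline_ml * 100.0

baseline, followup = 12.0, 18.8  # milliliters from volumetric segmentation (made up)
print(f"Volume change: {percent_volume_change(baseline, followup):.0f}%")  # ~57%
```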

Credit: 
PLOS

Study finds how prostate cancer cells mimic bone when they metastasize

DURHAM, N.C. -- Prostate cancer often becomes lethal as it spreads to the bones, and the process behind this deadly feature could potentially be turned against it as a target for bone-targeting radiation and potential new therapies.

In a study published online Tuesday in the journal PLOS ONE, Duke Cancer Institute researchers describe how prostate cancer cells develop the ability to mimic bone-forming cells called osteoblasts, enabling them to proliferate in the bone microenvironment.

Attacking these cells with radium-223, a radioactive isotope that selectively targets cells in these bone metastases, has been shown to prolong patients' lives. But a better understanding of how radium works in the bone was needed.

The mapping of this mimicking process could lead to more effective use of radium-223 and to the development of new therapies to treat or prevent the spread of prostate cancer to bone.

"Given that most men who die of prostate cancer have bone metastases, this work is critical to helping understand this process," said lead author Andrew Armstrong, M.D., director of research at the Duke Cancer Institute Center for Prostate and Urologic Cancers.

Armstrong and colleagues enrolled a small study group of 20 men with symptomatic bone-metastatic prostate cancer. When analyzing the circulating tumor cells from study participants, they found that bone-forming enzymes appeared to be expressed commonly, and that genetic alterations in bone forming pathways were also common in these prostate cancer cells.

They validated these new genetic findings in a separate multicenter trial involving a larger group of more than 40 men with prostate cancer and bone metastases.

Following treatment with radium-223, the researchers found that the radioactive isotope was concentrated in bone metastases, but tumor cells still circulated and cancer progressed within six months of therapy. The researchers found a range of complex genetic alterations in these tumor cells that likely enabled them to persist and develop resistance to the radiation over time.

"Osteomimicry may contribute in part to how prostate cancer spreads to bone, but also to the uptake of radium-223 within bone metastases and may thereby enhance the therapeutic benefit of this bone targeting radiotherapy," Armstrong said. He said by mapping this lethal pathway of prostate cancer bone metastasis, the study points to new targets and thus critical areas of research into designing better tumor-targeting therapies.

Credit: 
Duke University Medical Center

New genetic engineering strategy makes human-made DNA invisible

image: This new genetic engineering tool opens up the possibilities for research on bacteria that haven't been well studied before.

Image: 
Image courtesy of Peter Hoey.

Bacteria are everywhere. They live in the soil and water, on our skin and in our bodies. Some are pathogenic, meaning they cause disease or infection. To design effective treatments against pathogens, researchers need to know which specific genes are to blame for pathogenicity.

Scientists can identify pathogenic genes through genetic engineering. This involves adding human-made DNA into a bacterial cell. However, the problem is that bacteria have evolved complex defense systems to protect against foreign intruders--especially foreign DNA. Current genetic engineering approaches often disguise the human-made DNA as bacterial DNA to thwart these defenses, but the process requires highly specific modifications and is expensive and time-consuming.

In a paper published recently in the Proceedings of the National Academy of Sciences journal, Dr. Christopher Johnston and his colleagues at the Forsyth Institute describe a new technique to genetically engineer bacteria by making human-made DNA invisible to a bacterium's defenses. In theory, the method can be applied to almost any type of bacteria.

Johnston is a researcher in the Vaccine and Infectious Disease Division at the Fred Hutchinson Cancer Research Center and lead author of the paper. He said that when a bacterial cell detects it has been penetrated by foreign DNA, it quickly destroys the trespasser. Bacteria live under constant threat of attack by a virus, so they have developed incredibly effective defenses against those threats.

The problem, Johnston explained, is that when scientists want to place human-made DNA into bacteria, they confront the exact same defense systems that protect bacteria against a virus.

To get past this barrier, scientists add specific modifications to disguise the human-made DNA and trick the bacterium into thinking the intruder is a part of its own DNA. This approach sometimes works but can take considerable time and resources.

Johnston's strategy is different. Instead of adding a disguise to the human-made DNA, he removes a specific component of its genetic sequence called a motif. The bacterial defense system needs this motif to be present to recognize foreign DNA and mount an effective counter-attack. By removing the motif, the human-made DNA becomes essentially invisible to the bacterium's defense system.
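
As a purely illustrative sketch of the idea (the motif and sequence below are hypothetical, and the real method targets the recognition sites of a bacterium's actual defense systems), one could scan a human-made construct for a recognition motif and flag each occurrence for removal or recoding:

```python
# Illustrative only: locate a hypothetical defense-system recognition motif in a
# human-made DNA fragment so those sites can be removed or recoded before assembly.

def find_motif_sites(sequence, motif):
    """Return the start index of every occurrence of motif in sequence."""
    sites, start = [], 0
    while True:
        pos = sequence.find(motif, start)
        if pos == -1:
            return sites
        sites.append(pos)
        start = pos + 1

plasmid_fragment = "ATGGAATTCGGCTTAAGCCGAATTCTTGACGGAATTCAA"  # made-up sequence
recognition_motif = "GAATTC"                                    # made-up motif

for site in find_motif_sites(plasmid_fragment, recognition_motif):
    print(f"Motif at position {site}: candidate for removal or recoding")
```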

"Imagine a bacterium like an enemy submarine in a dry-dock, and a human-made genetic tool as your soldier that needs to get inside the submarine to carry out a specific task. The current approaches would be like disguising the spy as an enemy soldier, having them walk up to each gate, allowing the guards to check their credentials, and if all goes well, they're in," Johnston said. "Our approach is to make that soldier invisible and have them sneak straight through the gates, evading the guards entirely."

This new method requires less time and fewer resources than current techniques. In the study, Johnston used Staphylococcus aureus bacteria as a model, but the underlying strategy he developed can be used to sneak past these major defense systems that exist in 80 to 90 percent of bacteria that are known today.

This new genetic engineering tool opens up possibilities for research on bacteria that haven't been well studied before. Since scientists have a limited amount of time and resources, they tend to work with bacteria whose defenses have already been breached, Johnston explained. With this new tool, a major barrier to introducing engineered DNA into bacteria has been removed, and researchers can use the method to engineer more clinically relevant bacteria.

"Bacteria are the drivers of our planet," said Dr. Gary Borisy, a Senior Investigator at the Forsyth Institute and co-author of the paper. "The capacity to engineer bacteria has profound implications for medicine, for agriculture, for the chemical industry, and for the environment."

Credit: 
Forsyth Institute

Synthetic version of CBD treats seizures in rats

image: CBD from extracts of cannabis or hemp plants could be used to treat epilepsy and other conditions. UC Davis chemists have come up with a way to make a synthetic version of CBD and showed that it is as effective as herbal CBD in treating seizures in rats. Left to right: chemical structures of THC and CBD from plants, and of synthetic H2CBD.

Image: 
Mascal laboratory, UC Davis

A synthetic, non-intoxicating analogue of cannabidiol (CBD) is effective in treating seizures in rats, according to research by chemists at the University of California, Davis.

The synthetic CBD alternative is easier to purify than a plant extract, eliminates the need to use agricultural land for hemp cultivation, and could avoid legal complications with cannabis-related products. The work was recently published in the journal Scientific Reports.

"It's a much safer drug than CBD, with no abuse potential and doesn't require the cultivation of hemp," said Mark Mascal, professor in the UC Davis Department of Chemistry. Mascal's laboratory at UC Davis carried out the work in collaboration with researchers at the University of Reading, U.K.

Products containing CBD have recently become popular for their supposed health effects and because the compound does not cause a high. CBD is also being investigated as a pharmaceutical compound for conditions including anxiety, epilepsy, glaucoma and arthritis. But because it comes from extracts of cannabis or hemp plants, CBD poses legal problems in some states and under federal law. It is also possible to chemically convert CBD to tetrahydrocannabinol (THC), the intoxicating compound in marijuana.

8,9-Dihydrocannabidiol (H2CBD) is a synthetic molecule with a similar structure to CBD. Mascal's laboratory developed a simple method to inexpensively synthesize H2CBD from commercially available chemicals. "Unlike CBD, there is no way to convert H2CBD to intoxicating THC," he said.

One important medical use of cannabis and CBD is in treatment of epilepsy. The U.S. Food and Drug Administration has approved an extract of herbal CBD for treating some seizure conditions and there is also strong evidence from animal studies.

The researchers tested synthetic H2CBD against herbal CBD in rats with induced seizures. H2CBD and CBD were found to be equally effective for the reduction of both the frequency and severity of seizures.

Mascal is working with colleagues at the UC Davis School of Medicine to carry out more studies in animals with a goal of moving into clinical trials soon. UC Davis has applied for a provisional patent on antiseizure use of H2CBD and its analogues, and Mascal has founded a company, Syncanica, to continue development.

Credit: 
University of California - Davis

Replacing diesel with liquefied natural gas could cut fuel costs by up to 60% in São Paulo

Substituting liquefied natural gas (LNG) for diesel oil in cargo transportation could lead to a significant reduction in fuel costs and in emissions of greenhouse gases (GHGs) and other pollutants in São Paulo State, Brazil. These are the findings of a study by the Research Centre for Gas Innovation (RCGI), supported by the São Paulo Research Foundation (FAPESP) and Shell.

Hosted at the Engineering School of the University of São Paulo (Poli-USP), the RCGI is one of the Engineering Research Centers (ERCs) financed by FAPESP in partnership with large companies.

"The biggest benefits, both in terms of pollution reductions and in prices of the fuels being discussed herein, are perceived in São Paulo and Campinas, which are regions with greater potential for substituting diesel oil with LNG and where diesel oil is more expensive than it is in the rest of the State. Our results show that in São Paulo, LNG can be up to 60% cheaper than diesel oil," said Dominique Mouette, Professor in the School of Arts, Sciences, and Humanities at the University of São Paulo (EACH-USP), in an RCGI press communiqué. Mouette is principal author of the article and leader of the RCGI project focusing on the viability of a Blue Corridor in São Paulo State.

The objective of the study, which resulted in an article published in Science of The Total Environment, was to evaluate the economic and environmental benefits of substituting diesel oil with LNG for the purpose of establishing a Blue Corridor in the state. The concept originated in Russia and designates routes on which trucks run on LNG instead of diesel oil.

LNG is obtained by cooling natural gas to minus 163 °C. The gas condenses, reducing its volume by up to 600 times and making it possible to transport it in cryogenic tanker trucks to places located far from gas pipelines.

To analyze the substitution of diesel with LNG, the investigation considered four scenarios. "Within the best scenario, the use of LNG would reduce fuel costs by up to 40%; equivalent CO2 emissions [a measure used to compare the potential heating effect among several greenhouse gases (GHGs), also known as CO2-eq] by 5.2%; particulate materials by 88%; nitrogen oxides (NOx) by 75%; and would eliminate hydrocarbon emissions," states Pedro Gerber Machado, a researcher at the University of São Paulo's Institute of Energy and Environment and coauthor of the article.

"The methodology initially considered two contexts: one for the geographical regions served by gas pipelines, called the Restricted Scenario (RS), and another covering the 16 administrative regions of the state, called the State Scenario (SS). Both scenarios had different versions of the Blue Corridor, with 3,100 and 8,900 kilometers of roads, respectively," Machado explained.

According to Machado, in the case of each scenario, two forms of LNG distribution were considered: the first one considered a centralized liquefaction with road distribution and generated two subscenarios, a State Scenario with Centralized Liquefaction (SSCL) and a Restricted Scenario with Centralized Liquefaction (RSCL). The second would perform the liquefaction locally in the region where it would be used, which would eliminate the need for distributing LNG on highways. From this scenario, two more subscenarios were derived: the State Scenario with Hybrid Local and Central Liquefaction (SSHL) and the Restricted Scenario with Local Liquefaction (RSLL).

Cost comparison

"The RSLL scenario presents the lowest average price difference for the consumer between LNG and diesel, which means that, in this case, the delivery process of LNG is more expensive, as influenced by the scale factor and greater operating costs," Machado explains.

He continues, "The RSCL scenario offers the lowest gas price for the consumer, that is, 12 dollars per MMBTU (million British thermal units), whereas diesel, in this same scenario, would cost 22.01 dollars per MMBTU. The difference in price between LNG and diesel, in this scenario, is also the largest: 10.01 dollars per MMBTU."

However, the RSLL scenario was designed within the context of a shorter corridor, where the investment would be US$ 243.40 per meter. This contrasts with the SSHL scenario, which has the lowest investment per meter of the four scenarios (US$ 122.10 per meter).
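
A quick back-of-the-envelope check of the RSCL figures quoted above (a sketch, not part of the study's methodology):

```python
# Back-of-the-envelope check of the RSCL price figures quoted above.

lng_price = 12.00     # US$ per MMBTU, LNG consumer price in the RSCL scenario
diesel_price = 22.01  # US$ per MMBTU, diesel in the same scenario

gap = diesel_price - lng_price
savings = gap / diesel_price * 100

print(f"Price gap: US$ {gap:.2f} per MMBTU")          # 10.01, as reported
print(f"LNG is roughly {savings:.0f}% cheaper here")  # about 45% in this scenario
```

The larger savings of up to 60% cited earlier refer to the most favorable regions, São Paulo and Campinas, where diesel is more expensive.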

Emissions avoided

Machado explains that to calculate the GHG and pollutant emissions, only the two macroscenarios were considered: SS and RS. "When using LNG, the GHG emissions are different from diesel oil emissions due to CH4 and N2O, which are greenhouse gases with potential for global warming. If the fuel used is diesel, CO2 is responsible for 99% of the emissions of CO2-eq, and if the fuel used is LNG, it represents 82% of the CO2-eq emissions, while CH4 is responsible for 10% and N2O for 8%," he states.

Regarding the GHG emissions generated by the logistics of transporting LNG, the worst case is the SSCL scenario, in which logistics correspond to 1% of the total CO2-eq emitted by the trucks. In the SSHL scenario, logistics represent 0.34% of the emissions, and in the RSCL scenario, 0.28%.

As for pollutants, in the RS scenario, emissions of 119,129 tons of particulate matter (PM), 7.3 million tons of NOx, and 209,230 tons of hydrocarbons (HC) would be avoided. In the SS scenario, the benefits are even greater, with reductions of 163,000 tons of PM, 10 million tons of NOx, and 286,000 tons of HC.

When the burning of natural gas is compared with that of diesel oil, the 5.2% reduction in GHG emissions observed in the State Scenario may seem modest, but the reductions in local pollutants are considerable: NOx, PM, and HC fall by 75%, 88%, and 100%, respectively.

However, despite the economic and environmental advantages presented, LNG still faces regulatory barriers to its general use in the transportation sector. "It is not regulated for use as a vehicle fuel in Brazil. Most of the natural gas used in vehicles here is compressed natural gas (CNG)," states Professor Mouette.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo