
USPSTF still recommends against screening for pancreatic cancer in asymptomatic adults

Bottom Line: The U.S. Preventive Services Task Force (USPSTF) still recommends against screening for pancreatic cancer in adults without symptoms. The USPSTF routinely makes recommendations about the effectiveness of preventive care services. In this statement, the USPSTF reaffirmed its 2004 recommendation against screening for asymptomatic adults. Pancreatic cancer is an uncommon cancer with a poor prognosis.

(doi:10.1001/jama.2019.10232)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Note: More information about the U.S. Preventive Services Task Force, its process, and its recommendations can be found on the newsroom page of its website.

#  #  #

Embed this link to provide your readers free access to the full-text article. This link will be live at the embargo time, and all USPSTF articles remain free indefinitely: https://jamanetwork.com/journals/jama/fullarticle/2740727?guestAccessKey=4b279f9c-411f-4de8-b6a7-449e0b98979d&utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=080619

Credit: 
JAMA Network

NZ big bird a whopping 'squawkzilla'

image: Reconstruction of the giant parrot Heracles, dwarfing a bevy of 8 cm-high Kuiornis -- small New Zealand wrens scuttling about on the forest floor.

Image: 
Dr Brian Choo, Flinders University

Australasian palaeontologists have discovered the world's largest parrot, standing up to 1m tall with a massive beak able to crack most food sources.

The new bird has been named Heracles inexpectatus to reflect its Herculean myth-like size and strength - and the unexpected nature of the discovery.

"New Zealand is well known for its giant birds," says Flinders University Associate Professor Trevor Worthy. "Not only moa dominated avifaunas, but giant geese and adzebills shared the forest floor, while a giant eagle ruled the skies.

"But until now, no-one has ever found an extinct giant parrot - anywhere."

The NZ fossil is approximately the size of the giant 'dodo' pigeon of the Mascarenes and twice the size of the critically endangered flightless New Zealand kakapo, previously the largest known parrot.

Like the kakapo, it was a member of an ancient New Zealand group of parrots that appear to be more primitive than the parrots that thrive today in Australia and on other continents.

Experts from Flinders University, UNSW Sydney and Canterbury Museum in New Zealand estimate Heracles to be 1 m tall, weighing about 7 kg.

The new parrot was found in fossils up to 19 million years old from near St Bathans in Central Otago, New Zealand, in an area well known for its rich assemblage of fossil birds from the Miocene epoch.

"We have been excavating these fossil deposits for 20 years, and each year reveals new birds and other animals," says Associate Professor Worthy, from the Flinders University Palaeontology Lab.

"While Heracles is one of the most spectacular birds we have found, no doubt there are many more unexpected species yet to be discovered in this most interesting deposit."

"Heracles, as the largest parrot ever, no doubt with a massive parrot beak that could crack wide open anything it fancied, may well have dined on more than conventional parrot foods, perhaps even other parrots," says Professor Mike Archer, from the UNSW Sydney Palaeontology, Geobiology and Earth Archives (PANGEA) Research Centre.

"Its rarity in the deposit is something we might expect if it was feeding higher up in the food chain," he says, adding parrots "in general are very resourceful birds in terms of culinary interests".

"New Zealand keas, for example, have even developed a taste for sheep since these were introduced by European settlers in 1773."

Birds have repeatedly evolved giant species on islands. Besides the dodo, a giant pigeon has been found on Fiji, a giant stork on Flores, giant ducks in Hawaii, giant megapodes in New Caledonia and Fiji, and giant owls and other raptors in the Caribbean.

Heracles lived in a diverse subtropical forest where many species of laurels and palms grew with podocarp trees.

"Undoubtedly, these provided a rich harvest of fruit important in the diet of Heracles and the parrots and pigeons it lived with. But on the forest floor Heracles competed with adzebills and the forerunners of moa," says Professor Suzanne Hand, also from UNSW Sydney.

"The St Bathans fauna provides the only insight into the terrestrial birds and other animals that lived in New Zealand since dinosaurs roamed the land more than 66 million years ago," says Paul Scofield, Senior Curator at Canterbury Museum, Christchurch.

Canterbury Museum research curator Vanesa De Pietri says the fossil deposit reveals a highly diverse fauna typical of subtropical climates with crocodilians, turtles, many bats and other mammals, and over 40 bird species.

"This was a very different place with a fauna very unlike that which survived into recent times," she says.

Credit: 
Flinders University

Deregulated mTOR is responsible for autophagy defects exacerbating kidney stone formation

image: The numbers of GFP-MAP1LC3 puncta (white arrows) and autophagosomes (black arrows) in renal tubular cells increased when GOX-treated GFP-MAP1LC3 mice were given an mTOR inhibitor, indicating activation of autophagy. The amount of crystals formed in kidneys extracted from mice after GOX and mTOR inhibitor injection was reduced.

Image: 
Takahiro Yasui, M.D., Ph.D.

Dr. Takahiro Yasui (Professor, Nagoya City University) and Dr. Rei Unno (Research Fellow, Nagoya City University), in collaboration with Dr. Tsuyoshi Kawabata (Associate Professor, Nagasaki University), have revealed a novel mechanism of kidney stone formation using cultured mouse cells, a mouse model of kidney stones, and human kidney tissue.

They found that autophagic activity was significantly decreased in mouse renal tubular cells (RTCs) exposed to calcium oxalate (CaOx) monohydrate crystals, and in the kidneys of GFP-conjugated MAP1LC3B (microtubule-associated protein 1 light chain 3 beta) transgenic mice with CaOx nephrocalcinosis induced by glyoxylate (GOX). This caused an accumulation of damaged intracellular organelles, such as mitochondria and lysosomes, whose normal functioning depends on functional autophagy. An impairment of autophagy was also observed in the mucosa with plaques of CaOx kidney stone formers.

Moreover, they determined that the decrease in autophagy was caused by an upregulation of mTOR, which in turn suppressed the upstream autophagy regulator TFEB (transcription factor EB). Furthermore, they showed that an mTOR inhibitor could restore the decreased autophagy and alleviate crystal-cell interactions and the formation of crystals associated with increased inflammatory responses (Figure 1). Because chemical inhibition of mTOR ameliorates kidney stone development, these results suggest that deregulated mTOR, and the resulting impairment of autophagy, is a key target for prevention or treatment of the disease (Figure 2).

Credit: 
Nagoya City University

OU microbiologists provide framework for assessing ecological diversity

image: The study provides a tool that ecologists can use to quantitatively assess ecological stochasticity.

Image: 
University of Oklahoma

A University of Oklahoma team of microbiologists has developed a mathematical framework for quantitatively assessing the processes -- deterministic or stochastic -- that shape diversity in an ecological community. A recent study by the team, published in the Proceedings of the National Academy of Sciences, examines the mechanisms controlling biological diversity and provides guidance for using null-model-based approaches to examine processes within the community.

"An ecological community is a dynamic complex system with a myriad of interacting species. Both deterministic and stochastic forces can shape the community, but how to quantify their relative contributions remains a great challenge. This study provides ecologists with an effective and robust tool for quantitatively assessing ecological stochasticity," said Jizhong Zhou, director of the Institute for Environmental Genomics, professor in the OU College of Arts and Sciences and the Gallogly College of Engineering, and affiliate of the U.S. Department of Energy's Lawrence Berkeley National Laboratory.

Zhou led the study with OU team members Daliang Ning and Ye Deng, together with James M. Tiedje of Michigan State University. In this study, the team extended the framework to more general situations for quantifying the stochastic mechanisms underlying ecological communities, and demonstrated that it performs substantially better quantitatively than previous methods.

The team used the framework to reassess the importance of determinism and stochasticity in mediating the succession of groundwater microbial communities in response to organic carbon injection, in this case emulsified vegetable oil, to stimulate bioremediation. Also, the team evaluated the effects of different null-model algorithms and similarity metrics on the quantitative assessment of stochasticity in groundwater microbial communities in response to the carbon injection.

The study results show that the microbial community shifted from deterministic to more stochastic assembly right after the organic carbon input. As the vegetable oil was consumed, the community returned to a more deterministic state. In addition, the results demonstrated that null-model algorithms and community similarity metrics have strong effects on quantifying ecological stochasticity.
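The null-model logic described above can be illustrated with a short sketch. The Python code below is a generic toy version of the approach, not the authors' published framework: the Jaccard metric, the random-draw null model, and the simple E/C ratio are all simplifying assumptions.

```python
import random

def jaccard(a, b):
    """Jaccard similarity between two species sets."""
    return len(a & b) / len(a | b)

def stochasticity_ratio(communities, n_null=200, seed=0):
    """Toy stochasticity index: mean null-expected pairwise similarity (E)
    divided by mean observed pairwise similarity (C).

    A ratio near 1 suggests assembly indistinguishable from chance
    (stochastic); a ratio well below 1 suggests deterministic forces are
    making communities more similar than the null model predicts.
    """
    rng = random.Random(seed)
    pool = sorted(set().union(*communities))  # regional species pool
    sizes = [len(c) for c in communities]
    pairs = [(i, j) for i in range(len(communities))
             for j in range(i + 1, len(communities))]

    # Observed mean pairwise similarity (C).
    c_obs = sum(jaccard(communities[i], communities[j])
                for i, j in pairs) / len(pairs)
    if c_obs == 0:
        return float("inf")

    # Null expectation (E): redraw each community at random from the
    # pool, preserving its richness, and average over many draws.
    e_null = 0.0
    for _ in range(n_null):
        nulls = [set(rng.sample(pool, k)) for k in sizes]
        e_null += sum(jaccard(nulls[i], nulls[j])
                      for i, j in pairs) / len(pairs)
    e_null /= n_null
    return e_null / c_obs
```

The published framework is considerably more sophisticated -- it handles abundance data, deterministic divergence as well as convergence, and multiple null-model algorithms -- which is precisely why the study found that the choice of algorithm and similarity metric matters so much.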

Credit: 
University of Oklahoma

Novel school improvement program can raise teaching quality while reducing inequality

A multi-national European study, looking at over 5,500 students, has found that a novel school intervention program can not only improve the mathematics scores of primary school children from disadvantaged areas, but can also lessen the achievement gap caused by socioeconomic status.

Known as the Dynamic Approach to School Improvement (DASI), the program is based on the latest findings in educational research.

Rather than a one-size-fits-all, top-down approach, DASI works by first assessing a school to identify the specific teaching areas that could be improved and then implementing targeted measures to improve them. This process involves all members of the school community, including teachers, pupils and parents, with support from a specialized Advisory and Research Team.

Several studies have already shown that DASI can improve student learning progress and academic outcomes, but this latest study, published in Educational Research, is the first to have been conducted on schools in disadvantaged areas.

Furthermore, DASI was specifically designed to enhance both academic quality and equity, by countering non-school factors that can influence pupils' academic outcomes, including socioeconomic status, gender and ethnicity.

The effectiveness of this aspect of DASI had not been tested until this new study, conducted by a team of researchers from Cyprus and the Netherlands led by Professor Leonidas Kyriakides at the University of Cyprus.

After enrolling 72 primary schools, including some 5,560 pupils from disadvantaged areas in four countries - Cyprus, Greece, England and Ireland - the researchers randomly assigned the schools to experimental and control groups. Those in the experimental group made use of DASI for an entire school year, while those in the control groups were offered support to develop their own improvement programs.

Pupils aged between nine and 12 at the schools were given mathematics tests at both the start of the school year and the end to assess the effect of DASI over that time. The researchers chose to assess mathematical ability because previous studies have shown that mathematics tends to respond better than any other subject to school improvement programs. The researchers also recorded the socioeconomic status, gender and ethnicity of the pupils.

At the beginning of the year, all the pupils in the 72 schools achieved a similar range of scores on the mathematics tests, and showed similar achievement gaps based on socioeconomic status, gender and ethnicity. In contrast, at the end of the year, pupils in the schools that received DASI achieved better results on the mathematics test than those in the control group, and this was seen in all four countries.

Furthermore, the achievement gap based on socioeconomic status narrowed in the schools that received DASI but stayed the same in the control group, although DASI had no effect on the achievement gaps based on gender or ethnicity.

"One could argue that this paper has not only significant implications for research on improvement but also for developing policies on equal educational opportunities," Professor Kyriakides concludes.

However, despite the fall in the achievement gap for socioeconomic status, the authors admit that DASI appears to be more effective at enhancing quality than equity. They suggest that more research needs to be done to identify policies and actions that address equity in a more comprehensive way, so that DASI can lessen the achievement gap based on gender and ethnicity as well.

Credit: 
Taylor & Francis Group

Seeing how computers 'think' helps humans stump machines and reveals AI weaknesses

One of the ultimate goals of artificial intelligence is a machine that truly understands human language and interprets meaning from complex, nuanced passages. When IBM's Watson computer beat famed "Jeopardy!" champion Ken Jennings in 2011, it seemed as if that milestone had been met. However, anyone who has tried to have a conversation with virtual assistant Siri knows that computers have a long way to go to truly understand human language. To get better at understanding language, computer systems must train using questions that challenge them and reflect the full complexity of human language.

Researchers from the University of Maryland have figured out how to reliably create such questions through a human-computer collaboration, developing a dataset of more than 1,200 questions that, while easy for people to answer, stump the best computer answering systems today. A system that learns to master these questions will have a better understanding of language than any system currently in existence. The work is described in an article published in 2019 in the journal Transactions of the Association for Computational Linguistics.

"Most question-answering computer systems don't explain why they answer the way they do, but our work helps us see what computers actually understand," said Jordan Boyd-Graber, associate professor of computer science at UMD and senior author of the paper. "In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do."

Most current work to improve question-answering programs uses either human authors or computers to generate questions. The inherent challenge in these approaches is that when humans write questions, they don't know what specific elements of their question are confusing to the computer. When computers write the questions, they either write formulaic, fill-in-the-blank questions or make mistakes, sometimes generating nonsense.

To develop their novel approach of humans and computers working together to generate questions, Boyd-Graber and his team created a computer interface that reveals what a computer is "thinking" as a human writer types a question. The writer can then edit his or her question to exploit the computer's weaknesses.

In the new interface, a human author types a question while the computer's guesses appear in ranked order on the screen, and the words that led the computer to make its guesses are highlighted.

For example, if the author writes "What composer's Variations on a Theme by Haydn was inspired by Karl Ferdinand Pohl?" and the system correctly answers "Johannes Brahms," the interface highlights the words "Ferdinand Pohl" to show that this phrase led it to the answer. Using that information, the author can edit the question to make it more difficult for the computer without altering the question's meaning. In this example, the author replaced the name of the man who inspired Brahms, "Karl Ferdinand Pohl," with a description of his job, "the archivist of the Vienna Musikverein," and the computer was unable to answer correctly. However, expert human quiz game players could still easily answer the edited question correctly.
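One simple way to approximate the kind of highlighting the interface provides is leave-one-out word importance: remove each word in turn and measure how much the model's confidence drops. The sketch below is a generic illustration, not necessarily the attribution method the UMD interface uses, and `score_fn` stands in for a real model's confidence score.

```python
def word_importance(question, score_fn):
    """Leave-one-out importance: drop in the model's confidence in its
    current answer when each word is removed from the question."""
    words = question.split()
    base = score_fn(" ".join(words))
    return [(w, base - score_fn(" ".join(words[:i] + words[i + 1:])))
            for i, w in enumerate(words)]

# Toy "model" whose confidence hinges entirely on one key phrase.
score = lambda q: 0.9 if "Pohl" in q else 0.2
ranked = sorted(
    word_importance("Whose Variations were inspired by Karl Ferdinand Pohl?",
                    score),
    key=lambda t: -t[1])
# Here the token "Pohl?" carries all the weight, mirroring how the
# interface would highlight the phrase that drove the guess.
```

An author who can see which tokens carry the weight can then paraphrase exactly those tokens -- as with swapping "Karl Ferdinand Pohl" for "the archivist of the Vienna Musikverein" -- while leaving the question's meaning intact.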

By working together, humans and computers reliably developed 1,213 computer-stumping questions that the researchers tested during a competition pitting experienced human players--from junior varsity high school trivia teams to "Jeopardy!" champions--against computers. Even the weakest human team defeated the strongest computer system.

"For three or four years, people have been aware that computer question-answering systems are very brittle and can be fooled very easily," said Shi Feng, a UMD computer science graduate student and a co-author of the paper. "But this is the first paper we are aware of that actually uses a machine to help humans break the model itself."

The researchers say these questions will serve not only as a new dataset for computer scientists to better understand where natural language processing fails, but also as a training dataset for developing improved machine learning algorithms. The questions revealed six different language phenomena that consistently stump computers.

These six phenomena fall into two categories. In the first category are linguistic phenomena: paraphrasing (such as saying "leap from a precipice" instead of "jump from a cliff"), distracting language or unexpected contexts (such as a reference to a political figure appearing in a clue about something unrelated to politics). The second category includes reasoning skills: clues that require logic and calculation, mental triangulation of elements in a question, or putting together multiple steps to form a conclusion.

"Humans are able to generalize more and to see deeper connections," Boyd-Graber said. "They don't have the limitless memory of computers, but they still have an advantage in being able to see the forest for the trees. Cataloguing the problems computers have helps us understand the issues we need to address, so that we can actually get computers to begin to see the forest through the trees and answer questions in the way humans do."

There is a long way to go before that happens, added Boyd-Graber, who also has co-appointments at the University of Maryland Institute for Advanced Computer Studies (UMIACS) as well as UMD's College of Information Studies and Language Science Center. But this work provides an exciting new tool to help computer scientists achieve that goal.

"This paper is laying out a research agenda for the next several years so that we can actually get computers to answer questions well," he said.

Credit: 
University of Maryland

Streamlining fee waiver requests helped low-income immigrants become citizens

Many Americans only experience government bureaucracy when dealing with the IRS or DMV, and in the popular imagination these interactions are known for being almost comically time-consuming and complicated. But for people who receive public benefits, bureaucracy is a more routine obstacle.

All too often, benefits programs are designed with an eye to the convenience of the administrators, not the clients. Lengthy forms full of jargon and fine print can be a big obstacle for someone who speaks English as a second language; commuting to distant agency offices is challenging for someone who lacks transportation and can't take time off work. Accessing benefits sometimes involves a logistical high-wire act, and ad hoc, inconsistent processes effectively deter people who need these benefits the most.

Policymakers and researchers are beginning to recognize this problem. In some cases, however, attempts to ease access bring in people who are relatively better off--people with higher incomes, education levels, and language skills. Meanwhile, the people with the greatest hardships are less likely to take advantage of support systems primarily designed for them.

Now, a new study from the Immigration Policy Lab at Stanford University offers insights into a time when lightening the paperwork and confusion surrounding a public benefit did get a bigger response from the neediest beneficiaries.

The researchers studied a reform that made it easier for low-income immigrants to apply for citizenship free of charge. When the convoluted fee waiver request process was replaced with a single form, they found, about 73,000 people per year became citizens who otherwise wouldn't have applied.

It may sound like dull, procedural detail, but reforms like this go a long way toward making sure that programs for the disadvantaged actually work as intended. And in this case, all the red tape interfered with the principle that U.S. citizenship should be open to all who are eligible, not just those with means.

Equal Access to Citizenship

Keeping citizenship affordable is a rising priority for policymakers and immigrant advocates in the United States. The application fee, now $725, has risen sharply over the past few decades, creating a new barrier to citizenship.

At the same time, the federal fee waiver for low-income immigrants is curiously underused. Of the roughly 9 million immigrants eligible for citizenship, almost half are eligible for the fee waiver. Why don't more people use it?

Before 2010, U.S. Citizenship and Immigration Services (USCIS) used a pretty opaque process for fee waiver requests. Immigrants demonstrated their inability to pay the application fee with either an affidavit or an unsworn declaration. USCIS officers used their best judgment in evaluating the claims, as there were only vague guidelines governing whether a claim should be approved.

Then, USCIS introduced a streamlined, transparent process: a simple form (the I-912) and clear rules for eligibility (receipt of any means-tested benefits, like SNAP or Medicaid, or household income below 150 percent of the federal poverty guidelines).

"We were excited about this study because the standardization was a major policy change with the potential to affect millions of immigrants. It was also relevant for current debates about the future of the fee waiver program," said Vasil Yasenov, a postdoctoral fellow at the Immigration Policy Lab and lead author of the study.

Using data from the American Community Survey, the researchers looked at 739,301 low-income immigrants who met the eligibility requirements for citizenship between 2007 and 2016. They divided them into two groups to compare: those who would likely qualify for the fee waiver and those who wouldn't. Other than income and employment, the two groups looked very similar, with roughly the same distributions of age, ethnicity, country of origin, and time spent living in the United States.

After the fee waiver reform, the group eligible for the fee waiver naturalized at higher rates than the comparison group. The researchers found that there was no difference between the two groups beforehand, but afterward a gap of 1.5 percentage points opened between them. That amounted to about 73,000 new citizens per year.
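The comparison described here is a standard difference-in-differences design: the reform's effect is the change in the treated (waiver-eligible) group minus the change in the comparison group. A minimal sketch follows, using hypothetical pre/post rates -- the release reports only the resulting 1.5-percentage-point gap, so the levels below are illustrative assumptions.

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Effect estimate: change in the treated group minus the
    change in the control group over the same period."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical naturalization rates: both groups start at 8%;
# afterward the waiver-eligible group reaches 10.5% and the
# comparison group 9.0%.
gap = diff_in_diff(0.08, 0.105, 0.08, 0.090)
print(round(gap, 3))  # 0.015, i.e. a 1.5-percentage-point gap
```

Scaled to a waiver-eligible pool on the order of 4.5 to 5 million people (the release's "roughly 9 million eligible, almost half" figures), a 1.5-point gap is consistent with the reported ~73,000 additional citizens per year.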

The Role of Non-Profits

As they delved further into the data, the researchers were intrigued to find an outsized response from those who could be considered least likely to naturalize--in other words, people whose circumstances make it particularly difficult to successfully apply. If you think of this group as most likely to be deterred by the cumbersome process surrounding the fee waiver before 2010, it makes sense that they would also be more likely to change their behavior once that obstacle was lifted. But this finding contrasts with research showing that it's people with greater advantages in life who respond most to these kinds of "user-friendly" reforms.

What explains the anomaly? There was something else at work, something that helped translate the procedural changes into a positive force in people's lives. That something, the researchers thought, was the hundreds of nonprofits and community-based organizations across the country devoted to serving low-income immigrants, known as immigration service providers. While they are in the public eye when engaging in political advocacy and community organizing, they are less well known for the everyday work they do: providing legal advice, referrals to health care or adult education programs, and help filling out naturalization forms.

As it turned out, their work was reflected in the data. The fee waiver reform had a greater effect among immigrants living in states with a higher density of these supportive organizations. And an additional survey of low-income immigrants found that those who received help from an immigration service provider were 21.5 percentage points more likely to use the fee waiver.

"With the I-912 form and the standardization of evidence needed for a fee waiver, organizations have been able to set up efficient processes to assist low-income immigrants who may have thought that their dreams of citizenship were out of reach because of the fees normally required to apply," noted study co-author Michael Hotard.

Immigration service providers may soon play an even more important role in making citizenship possible for low-income immigrants. USCIS has proposed making fee waivers more difficult to get, narrowing eligibility criteria and requiring a tax transcript as proof of inability to pay. While immigration service providers will rally to help immigrants navigate the potential changes, cities and states can promote the fee waiver program and experiment with public-private partnerships to offer financial support for immigration fees. As with the 2010 fee waiver reform, a little creativity goes a long way.

Credit: 
Stanford University - Immigration Policy Lab

What do you mean the hamburger isn't all that American?

image: Hamburgers are an American favorite, but the origins of this meal's common ingredients are as diverse as the US population. The meat patties were first served in Hamburg, Germany, and appeared on menus in New York City as early as the 1870s thanks to German immigrants.

Image: 
Graphic: Álvaro Valiño, Kelsey Nowakowski and Colin Khoury

Say you're a scientist who studies the origins and history of food, and you want to communicate to the world your findings that the all-American hamburger - including the side of fries - doesn't contain a single ingredient that originally came from the United States. You could publish an article in a top-notch journal, ask a communications officer to write a press release about the paper, or take to Twitter and tell your hundreds of devoted followers all about your discovery. All of these create some impact.

But you could also join forces with a professional graphic designer and map out the ingredients' origins in an attractive infographic display, and by publishing it, potentially reach a much wider audience.

This is exactly what Colin Khoury of the International Center for Tropical Agriculture (CIAT) did. And the end result communicates his findings perhaps just as well as - or even better than - the common communications channels that scientists use. Now Khoury - who also took pizza to task for not being all that Italian, and pad thai for being less than wholly Thai - and his collaborators at leading universities are encouraging their scientific colleagues to embrace graphic design as a serious asset in science communication efforts, as well as a useful process for the advancement of science itself.

"Visual depictions of scientific findings aren't a new thing. In fact, people have been making them for hundreds of years, and especially since scientific journals started to publish," said Khoury, the lead author of a new paper about scientist-artist collaborations in Communications Biology, a journal published by Nature. "But new technologies, new audiences, and new ways to communicate are making high-quality, sophisticated graphics ever more important, and collaborations with skilled professionals are the most productive way to create them."

To test the efficacy of collaborations between scientists and graphic artists, Khoury and colleagues paired six research laboratories that work on societally relevant food and agricultural challenges with graphic designers and media content creators. In addition to the food origins research, they tackled complex subjects related to pollinators and biodiversity threats, modern plant breeding, agricultural development and land-use change, and new technologies in agriculture.

The scientists first presented the results at this year's annual meeting of the American Association for the Advancement of Science (AAAS).

Challenging scientists to identify audiences and clarify their messages

The collaborations began by asking the scientists to define their target audience; "the general public" was not an acceptable answer. To explain the importance of pollinators and biodiversity, for example, the team eventually identified its target audience as "English and Spanish speakers already interested in biodiversity conservation," which led to a relatively detailed infographic with versions in English and Spanish.

The scientists agonized over the challenge of distilling complex concepts into clear, focused, and accessible messages, but the process helped them push their science forward. In some cases, they better identified the central components of their work. In others, they discovered areas they hadn't studied sufficiently.

"Seeing the science through the eyes of a graphic artist challenged my thought process on how to reduce complex mechanisms to a more accessible form," said Michael Gore, a researcher at Cornell University who collaborated on an infographic about harnessing new technology to improve crop resilience.

The graphic artists, not all of whom had backgrounds in science, enjoyed the challenge.

"Science communication in general is broadening and breaking down barriers between scientists and the public, and infographics have become mainstream," said Yael Kisel, an independent artist based in San Jose, California, who worked on the pollinator infographic. "Science-art partnerships are popping up all over the place like mushrooms after rain. I feel like we're riding on a growing wave, and I can't wait to see how this field continues to develop."

The researchers and designers collectively identified a number of steps to make scientist-artist partnerships a more common component of a research communication agenda. They encourage research institutions to make graphic design a substantive component of communications teams. They suggest graphic art professionals can create and expand networks and businesses. They ask funders of research projects to allocate resources to graphic communication of socially relevant research results.

"If a picture is worth a thousand words, a science-based infographic is worth at least a million," said Ari Novy, president of the San Diego Botanic Garden, who managed the project.

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

'Bone in a dish' opens new window on cancer initiation, metastasis, bone healing

image: Luiz Bertassoni, D.D.S., Ph.D., holds a 3D 'bone in a dish' as a model to study bone function, diseases, and bone regeneration.

Image: 
OHSU/Joe Rojas-Burke

Researchers in Oregon have engineered a material that replicates human bone tissue with an unprecedented level of precision, from its microscopic crystal structure to its biological activity. They are using it to explore fundamental disease processes, such as the origin of metastatic tumors in bone, and as a treatment for large bone injuries.

"Essentially it is a miniaturized bone in a dish that we can produce in a matter of 72 hours or less," says biomedical engineer Luiz Bertassoni, D.D.S., Ph.D., an assistant professor in the OHSU School of Dentistry and a member of CEDAR, the Cancer Early Detection Advanced Research Center in the OHSU Knight Cancer Institute.

Like real bone, the material has a 3D mineral structure populated with bone cells, nerve cells and endothelial cells that self-organize into functioning blood vessels.

"What is remarkable is that researchers in our field have become used to cultivating cells within a protein mixture to approximate how cells live in the body. But this is the first time anyone has been able to embed cells in minerals, which is what characterizes the bone tissue," Bertassoni says.

And that's what makes the new material promising as a model to study bone function, diseases, and bone regeneration. "With this model system, you can start asking questions about how bone cells attract different types of cancers, how cancer cells move into bone, how bone takes part in the regulation of marrow function," says Bertassoni.

"It can even be relevant to dissect the mechanisms that are leading to diseases such as leukemia." Bertassoni published the results in the journal Nature Communications with postdoctoral researcher Greeshma Nair, Ph.D., doctoral student Avathamsa Athirasala, M.Sc.Eng., and other co-authors.

"Being able to engineer truly bone-like tissues in the lab can also be transformative for regenerative medicine," Bertassoni says, "since the current treatment for large bone fractures requires the removal of the patient's own healthy bone so that it can be implanted at the site of injury."

The recipe starts with the mixing of human stem cells into a solution loaded with collagen, a protein abundant in the matrix of bone tissue. The collagen proteins link together, forming a gel embedded with the stem cells. The researchers then flood the gel with a mixture of dissolved calcium and phosphate, the minerals of bone. They add another key ingredient, the protein osteopontin derived from cow milk, to keep the minerals from forming crystals too quickly. This additive, which clings to calcium and phosphate, also minimizes the minerals' toxicity to cells.

The mixture diffuses through a network of channels about the width of a strand of DNA in the spongy collagen, and the dissolved minerals precipitate into orderly layers of crystals.

"We can reproduce the architecture of bone down to a nanometer scale," says Bertassoni. "Our model goes through the same biophysical process of formation that bone does."

In this calcified environment, stem cells develop into functioning bone cells, osteoblasts and osteocytes without the addition of any other molecules, as if they knew they were being embedded in an actual bone matrix. Within days, the growing cells squeeze slender protrusions through spaces in their mineralized surroundings to connect and communicate with neighboring cells. The bone-like engineered structure creates a microenvironment that is sufficient to cue stem cells that it's time to mature into bone cells.

"It's nature doing its job, and it's beautiful," Bertassoni says. "So, we went farther and decided to try it with the bone vasculature and innervation as well."

Nerve cells added to the mixture formed interconnected networks that persisted after mineralization. Endothelial cells, likewise, formed networks of tubes that remained open after mineralization.

To test the usefulness of the material as a disease model, the researchers implanted their engineered bone beneath the skin of laboratory mice. After a few days, the lab-made blood vessels had connected with the vasculature in the mouse bodies. When the researchers injected prostate cancer cells nearby, they found that tumor growth was three times higher in the mice given mineralized bone constructs than in those with the non-mineralized controls.

The team now is engineering versions with marrow cells growing within a surrounding of artificial bone for use as a model to study the initiation and development of blood cancers including the various forms of leukemia. Also, they have tested the engineered bone-like material as a replacement for injured bone in animal models - with positive results that they expect to report in the near future.

Credit: 
Oregon Health & Science University

Study explores blood-brain barrier leakage in CNS infections

Washington, DC - August 6, 2019 - A new study published in the journal mBio shines light on the breakdown of the blood-brain barrier (BBB) that occurs during many infections of the central nervous system. The findings implicate interferon gamma, a major cytokine upregulated in most central nervous system (CNS) viral infections, as a major contributor to blood-brain barrier breakdown. Using an experimental viral encephalitis mouse model in which mice are infected with reovirus, the research provides new insight into how the breakdown occurs, which may lead to new therapeutic avenues.

"Gene expression studies on brain material from infected mice suggested that one of the pathways that was really upregulated during infection was interferon signaling in general, and in particular, a subset of interferon, the type 2 interferon or interferon gamma," said study investigator Kenneth Tyler, MD, a neurovirologist and Chairman of the Department of Neurology at the University of Colorado School of Medicine. "Interferon was one of the things that was causing not only a loss of pericytes, support cells, but a disruption of the connections that are usually pretty tight between endothelial cells in the blood brain barrier called tight junctions and adherens junctions."

Many previous studies have demonstrated that the blood brain barrier breaks down during the process of encephalitis. Mechanistically, there have been some connections between type 1 and type 2 interferon and that process, but what occurs at the cellular level and molecular level, at the level of the vasculature, has been very unclear.

"Is the blood brain barrier breakdown an early feature of the pathology? Is it late? What sort of relationship does the breakdown have with other aspects of disease progression?" said principal study investigator Julie Siegenthaler, PhD, a neuroscientist in the Department of Pediatrics, University of Colorado School of Medicine.

To find out, scientists at University of Colorado School of Medicine inoculated reovirus into the brains of neonatal mice, producing a devastating encephalitis, which they then closely monitored. The researchers replicated much of what they were seeing in the mouse brains in experiments using cultured brain endothelial cells.

The researchers found that BBB breakdown happens late in the course of disease. "Disease progression happens over about 10 days. We see the blood brain barrier breakdown happening in the last 6 to 7 days after infection, after seeing evidence that the virus is being replicated and after seeing evidence that there is an inflammatory response," said Dr. Siegenthaler.

Infection upregulated interferon gamma, and endothelial cells in the BBB responded to interferon gamma by initiating a signaling cascade that changed their behavior so that they could no longer maintain the BBB integrity. IFN-gamma reduced barrier properties in cultured brain endothelial cells through Rho kinase (ROCK) mediated cytoskeletal contractions, resulting in junctional disorganization and cell-cell separation.

"Infection with reovirus causes a change in the proteins that regulate the cytoskeleton," said Dr. Siegenthaler. "It causes an overactivation of the cytoskeleton, causing the cells to pull apart from each other, whichhas not been shown before."

The researchers showed that blocking interferon gamma signaling with a neutralizing antibody prevented many of the disruptions, such as the loss of pericytes and the changes in tight junctions.

The work may be a model for what happens in other forms of viral and even bacterial infections where there is breakdown of the BBB. "We have learned that in a lot of models of both meningitis and viral encephalitis, you need to stop the replication of the pathogen - whether that is an antibiotic for bacterial meningitis or an antiviral when it is available for viral meningitis - and employ strategies that inhibit a whole series of things that add to the seriousness of the disease that occur downstream of that infection."

"Interferon gamma seems to be mediating the damage in the endothelial cells that make up the blood vessels," said Stephanie Bonney, PhD, who is now a researcher at Seattle Children's Research Institute. "We see changes within the endothelial cells themselves that effect the way they connect to each other. These changes allow for the acceleration of fluids and immune cells into the CNS, contributing to edema and other problems in patients."

"Manipulating the blood brain barrier in combination with antiviral therapies may lead to better outcomes," said Dr. Tyler "You could specifically target the endothelial cells to maintain blood brain barrier integrity," said Dr. Siegenthaler.

The scientists say more research is needed to show whether other viruses, such as West Nile and other flaviviruses, behave similarly to reovirus, and whether the findings can be extrapolated to infections in humans.

Credit: 
American Society for Microbiology

Antineutrino detection could help remotely monitor nuclear reactors

image: These images compare the evolution of antineutrino spectrum and antineutrino detector response as a function of reactor operational time in a pressurized water reactor and an ultra-long cycle fast reactor.

Image: 
Georgia Tech

Technology to measure the flow of subatomic particles known as antineutrinos from nuclear reactors could allow continuous remote monitoring designed to detect fueling changes that might indicate the diversion of nuclear materials. The monitoring could be done from outside the reactor vessel, and the technology may be sensitive enough to detect substitution of a single fuel assembly.

The technique, which could be used with existing pressurized water reactors as well as future designs expected to require less frequent refueling, could supplement other monitoring techniques, including the presence of human inspectors. The potential utility of the above-ground antineutrino monitoring technique for current and future reactors was confirmed through extensive simulations done by researchers at the Georgia Institute of Technology.

"Antineutrino detectors offer a solution for continuous, real-time verification of what is going on within a nuclear reactor without actually having to be in the reactor core," said Anna Erickson, associate professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering. "You cannot shield antineutrinos, so if the state running a reactor decides to use it for nefarious purposes, they can't prevent us from seeing that there was a change in reactor operations."

The research, to be reported August 6 in the journal Nature Communications, was partially supported by a grant from the Nuclear Regulatory Commission (NRC). The research evaluated two types of reactors, and antineutrino detection technology based on a PROSPECT detector currently deployed at Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR).

Antineutrinos are elementary subatomic particles with an infinitesimally small mass and no electrical charge. They are capable of passing through shielding around a nuclear reactor core, where they are produced as part of the nuclear fission process. The flux of antineutrinos produced in a nuclear reactor depends on the type of fission materials and the power level at which the reactor is operated.

"Traditional nuclear reactors slowly build up plutonium 239 in their cores as a consequence of uranium 238 absorption of neutrons, shifting the fission reaction from uranium 235 to plutonium 239 during the fuel cycle. We can see that in the signature of antineutrino emission changes over time," Erickson said. "If the fuel is changed by a rogue nation attempting to divert plutonium for weapons by replacing fuel assemblies, we should be able to see that with a detector capable of measuring even small changes in the signatures."

The antineutrino signature of the fuel can be as unique as a retinal scan, and how the signature changes over time can be predicted using simulations, she said. "We could then verify that what we see with the antineutrino detector matches what we would expect to see."

In the research, Erickson and recent Ph.D. graduates Christopher Stewart and Abdalla Abou-Jaoude used high-fidelity computer simulations to assess the capabilities of near-field antineutrino detectors that would be located near - but not inside - reactor containment vessels. Among the challenges is distinguishing between particles generated by fission and those from natural background.

"We would measure the energy, position and timing to determine whether a detection was an antineutrino from the reactor or something else," she said. "Antineutrinos are difficult to detect and we cannot do that directly. These particles have a very small chance of interacting with a hydrogen nucleus, so we rely on those protons to convert the antineutrinos into positrons and neutrons."

Nuclear reactors now used for power generation must be refueled on a regular basis, and that operation provides an opportunity for human inspection, but future generations of nuclear reactors may operate for as long as 30 years without refueling. The simulation showed that sodium-cooled reactors could also be monitored using antineutrino detectors, though their signatures will be different from those of the current generation of pressurized water reactors.

Among the challenges ahead is reducing the size of the antineutrino detectors to make them portable enough to fit into a vehicle that could be driven past a nuclear reactor. Researchers also want to improve the directionality of the detectors to keep them focused on emissions from the reactor core to boost their ability to detect even small changes.

The detection principle is similar in concept to that of retinal scans used for identity verification. In retinal scans, an infrared beam traverses a person's retina and the blood vessels, which are distinguishable by their higher light absorption relative to other tissue. This mapping information is then extracted and compared to a retinal scan taken earlier and stored in a database. If the two match, the person's identity can be verified.

Similarly, a nuclear reactor continuously emits antineutrinos that vary in flux and spectrum with the particular fuel isotopes undergoing fission. Some antineutrinos interact in a nearby detector via inverse beta decay. The signal measured by that detector is compared to a reference copy stored in a database for the relevant reactor, initial fuel and burnup; a signal that sufficiently matches the reference copy would indicate that the core inventory has not been covertly altered. However, if the antineutrino flux of a perturbed reactor is sufficiently different from what would be expected, that could indicate that a diversion has taken place.
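The compare-against-a-reference step can be illustrated with a toy comparison of binned spectra. Everything below (function names, bin counts, the per-bin threshold) is an invented sketch, not the authors' actual statistical test:

```python
# Illustrative sketch: compare a measured antineutrino energy spectrum
# against the reference prediction for the declared fuel and burnup,
# and flag the core if the disagreement is too large.

def chi_square(measured, reference):
    """Poisson-style chi-square between binned spectra (counts per energy bin)."""
    return sum((m - r) ** 2 / r for m, r in zip(measured, reference) if r > 0)

def core_matches_declaration(measured, reference, threshold_per_bin=2.0):
    """True if the measured spectrum is statistically consistent with the reference."""
    return chi_square(measured, reference) / len(reference) <= threshold_per_bin

reference = [1000, 800, 600, 400, 200]   # expected counts per energy bin
normal    = [ 990, 812, 595, 407, 196]   # statistical fluctuations only
diverted  = [1080, 860, 560, 340, 150]   # shifted U-235/Pu-239 mix

print(core_matches_declaration(normal, reference))    # True
print(core_matches_declaration(diverted, reference))  # False
```

The "diverted" spectrum has more counts at low energy and fewer at high energy, the kind of shape change a shifted fuel inventory would produce.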

The emission rates of antineutrino particles at different energies vary with operating lifetime as reactors shift from burning uranium to plutonium. The signal from a pressurized water reactor consists of a repeated 18-month operating cycle with a three-month refueling interval, while signal from an ultra-long cycle fast reactor (UCFR) would represent continuous operation, excluding maintenance interruptions.

Preventing the proliferation of special nuclear materials suitable for weapons is a long-term concern of researchers from many different agencies and organizations, Erickson said.

"It goes all the way from mining of nuclear material to disposition of nuclear material, and at every step of that process, we have to be concerned about who's handling it and whether it might get into the wrong hands," she explained. "The picture is more complicated because we don't want to prevent the use of nuclear materials for power generation because nuclear is a big contributor to non-carbon energy."

The paper shows the feasibility of the technique and should encourage the continued development of detector technologies, Erickson said.

"One of the highlights of the research is a detailed analysis of assembly-level diversion that is critical to our understanding of the limitations on antineutrino detectors and the potential implications for policy that could be implemented," she said. "I think the paper will encourage people to look into future systems in more detail."

Credit: 
Georgia Institute of Technology

1 in 300 thrives on very-early-to-bed, very-early-to-rise routine

A quirk of the body clock that lures some people to sleep at 8 p.m., enabling them to greet the new day as early as 4 a.m., may be significantly more common than previously believed.

So-called advanced sleep phase -- previously believed to be very rare -- may affect at least one in 300 adults, according to a study led by UC San Francisco and publishing in the journal SLEEP on Aug. 6, 2019.

Advanced sleep phase means that the body's clock, or circadian rhythm, operates on a schedule hours earlier than most people's, with a premature release of the sleep hormone melatonin and shift in body temperature. The condition is distinct from the early rising that develops with normal aging, as well as the waking in the wee hours experienced by people with depression.

"While most people struggle with getting out of bed at 4 or 5 a.m., people with advanced sleep phase wake up naturally at this time, rested and ready to take on the day," said the study's senior author, Louis Ptacek, MD, professor of neurology at the UCSF School of Medicine. "These extreme early birds tend to function well in the daytime but may have trouble staying awake for social commitments in the evening."

Advanced Sleepers 'Up and at 'Em' on Weekends too

Additionally, "advanced sleepers" rouse more easily than others, he said, and are satisfied with an average of an extra five-to-10 minutes of sleep on non-work days, versus the 30-to-38 minutes' more sleep of their non-advanced sleeper family members.

Ptacek and his colleagues at the University of Utah and the University of Wisconsin calculated the estimated prevalence of advanced sleepers by evaluating data from patients at a sleep disorder clinic over a nine-year period. In total, 2,422 patients were followed, of whom 1,748 presented with symptoms of obstructive sleep apnea, a condition that the authors found was not related to sleep-cycle hours.

Among this group, 12 people met initial screening criteria for advanced sleep phase. Four of the 12 declined enrollment in the study; the remaining eight represented 0.33 percent of the total number of patients -- roughly one out of 300 -- a proportion the researchers extrapolated to the general population.

This is a conservative figure, the researchers noted, since it excluded the four patients who did not want to participate in the study and may have met the criteria for advanced sleep phase, as well as those advanced sleepers who had no need to visit a sleep clinic.
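The headline ratio follows from simple division on the cohort figures above; the short check below reproduces the arithmetic (rounding gives roughly one in 300):

```python
# Reproduce the prevalence arithmetic from the clinic cohort.
total_patients  = 2422   # patients followed at the sleep clinic over nine years
confirmed_cases = 8      # met full criteria for advanced sleep phase

prevalence = confirmed_cases / total_patients
print(f"{prevalence:.2%}")    # 0.33%
print(round(1 / prevalence))  # 303, i.e. roughly 1 in 300
```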

Night Owls More Likely to Struggle with Sleep Deficits

"Generally, we find that it's the people with delayed sleep phase -- those night owls that can't sleep until as late as 7 a.m. -- who are more likely to visit a sleep clinic. They have trouble getting up for work and frequently deal with chronic sleep deprivation," said Ptacek.

Criteria for advanced sleep phase include the ability to fall asleep before 8:30 p.m. and wake before 5:30 a.m. regardless of any occupational or social obligations, and having only one sleep period per day. Other criteria include the establishment of this sleep-wake pattern by the age of 30, no use of stimulants or sedatives, no bright lights to aid early rising and no medical conditions that may impact sleep.

All study participants were personally seen by Christopher R. Jones, MD, a former neurologist at the University of Utah and co-author of the paper. Patients were asked about their medical histories and both past and present sleep habits on work days and work-free days. Researchers also looked at sleep logs and level of melatonin in the participants' saliva, as well as sleep studies, or polysomnography, that record brainwaves, oxygen levels in the blood, heart rate and breathing.

Of note, all eight of the advanced sleepers claimed that they had at least one first-degree relative with the same sleep-wake schedule, indicating so-called familial advanced sleep phase. Of the eight relatives tested, three did not meet the full criteria for advanced sleep phase and the authors calculated that the remaining five represented 0.21 percent of the general population.

The authors believe that the percentage of advanced sleepers who have the familial variant may approach 100 percent, although some participants may have de novo mutations that may be found in their children but not in their parents or siblings, and some may have family members with "nonpenetrant" carrier mutations. Two of the remaining five were found to have genetic mutations that have been identified with familial advanced sleep phase; conditions associated with these genes include migraine and seasonal affective disorder.

"We hope the results of this study will not only raise awareness of advanced sleep phase and familial advanced sleep phase," said Ptacek, "but also help identify the circadian clock genes and any medical conditions that they may influence."

Credit: 
University of California - San Francisco

Natural gas storage research could combat global warming

image: Synthesis at small scale (82.11 grams of product). Note that the reaction is carried out in a beaker open to air.

Image: 
Vepa Rozyyev/Texas A&M University

To help combat global warming, a team led by Dr. Mert Atilhan from Texas A&M University and Dr. Cafer Yavuz at the Korea Advanced Institute of Science and Technology (KAIST), is working on a new porous polymer that can store natural gas more effectively than anything currently being used. Their research focuses on adsorbed natural gas (ANG), a process to store natural gas that is a safer and cheaper alternative to compressed natural gas and liquefied natural gas.

Natural gas burns more cleanly as a fuel, making it a useful alternative in vehicles, and applications such as cooking, heating or running generators. It contains mostly methane and ethane, and has almost zero sulfur dioxide emissions and far fewer nitrogen oxide and particulate emissions. Natural gas also releases almost 30 percent less carbon dioxide (the leading greenhouse gas) than oil and 43 percent less than coal.

"Currently we are facing serious issues that are related to global warming due to the excessive use of coal and petroleum," said Atilhan. "Natural gas is a much cleaner source and there is an abundant amount of gas being explored in the United States, the Mediterranean Sea and elsewhere all around the world. If natural gas can be stored effectively, it can be utilized easily, even in remote areas. We have high aspirations to utilize these materials in vehicular applications as well, which is one of the main causes of global warming."

In adsorption, gas molecules collect in a condensed layer on the surface of a solid. These light gases have very high vapor pressure at ambient temperatures, and their storage requires either high-pressure compression, adsorbent (solid substance that adsorbs another substance) systems or an extreme reduction of temperature. In the ANG process, natural gas adsorbs to a porous adsorbent at relatively low pressure (100 to 900 psi) and ambient temperature, solving both the high-pressure and low-temperature problems.

Atilhan and Yavuz have been collaborating since 2008 on the development of new materials for gas capture and separation. In the last few years they have been looking specifically into storing natural gas in novel porous materials. The team focused on swelling mechanisms of network polymers. The idea would be to pressurize natural gas onto the sorbent so that it would expand and take up a large amount of gas. During consumption (desorption), the swollen polymer would release the gas until it completely deflates.

"With this work, we are introducing a new plastic-based material that can store natural gas very effectively," said Atilhan. "We broke the world record for natural gas storage and passed well above the target for materials in order to be considered feasible, which is determined by U.S. Department of Energy (DOE). Yet it has a very cheap production cost, which makes it even more attractive to use it in widespread applications."

"We looked into designing an ANG adsorbent from a different perspective, most research is focused on raising the upper limit, the total capacity by introducing more pore volume," said Yavuz, adding that the more pore volume also meant more leftover gas since it remains comfortably stored even if the pressure went below the minimum tank pressure needed by a vehicle. "We said, 'Let's make sure the porous material squeezes all out when desorbed to the minimum pressure.'"

This expansion/contraction mechanism also solves certain ANG issues. As it turns out, all the adsorbents warm up when in contact with gas and that causes all kinds of problems, not to mention new safety risks.

Atilhan said by having the adsorbent release energy by expanding itself, they are solving many issues at once. By keeping the adsorbent unheated, they get the maximum performance. And since thermal management is an absolutely critical design feature in engineering fuel systems, they eliminate any unsafe pressure spikes that might come up because the temperature swings and contamination is minimized since the adsorbent remains contracted when no gas is stored.

To fast-track the feasibility checks on their technology, the team began working with real gas cylinders.

"Lab results were great but you always have this what-if question when it comes to pushing your technology out in real life," said Vepa Rozyyev, the first author of an article published in Nature Energy on the research, who has since moved from KAIST to the University of Chicago for a Ph.D. He said to test it they went to a gas station and stuck the pressurized nozzle onto a cylinder full of their adsorbent. Their material beat the top industrial and literature examples by at least 20 percent. It also marked the first time any study ever did this type of field testing.

The team is excited about the prospects and possibilities that this work will introduce. "This is just the beginning," said Yavuz. "We envision a whole host of new designs and mechanisms based on our concept. Since natural gas is much cleaner fuel than coal, new developments in this realm will help in switching to less polluting fuels."

Atilhan agrees the most important impact of their research is on the environment. He said lowering toxic gaseous emissions by using natural gas more than coal or oil will significantly reduce the greenhouse gas emissions that are emitted from various sources.

"It will also help to reduce the operating cost that is spent on acid/sour gas capture operations since we propose to store much cleaner fuel source and replace current state-of-the-art with these materials for fuel storage," he said. "We believe one day we might see vehicles equipped with our materials that are run by a cleaner fuel source -- natural gas."

Credit: 
Texas A&M University

Calcium levels in freshwater lakes declining in Europe and North America

A new global study of how calcium concentrations are changing in freshwater lakes around the world has revealed that in widespread areas in Europe and eastern North America, calcium levels are declining towards levels that can be critically low for the reproduction and survival of many aquatic organisms.

The decline of calcium may have significant impacts on freshwater organisms that depend on calcium deposition, including integral parts of the food web, such as freshwater mussels and zooplankton.

In "Widespread diminishing anthropogenic effects on calcium in freshwaters," published recently in Nature, researchers discovered that the global median calcium concentration was 4.0 mg L-1, with 20.7% of the water samples showing calcium concentrations ≤ 1.5 mg L-1.

A concentration of 1.5 mg L-1 is considered a critical threshold for many organisms that require calcium. Some lakes, therefore, are approaching calcium levels that endanger the organisms relying on calcium for structure and growth.
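The two summary statistics quoted (a median concentration and the share of samples at or below the critical threshold) can be computed directly from a list of measurements. The sample values below are invented for illustration, not the study's data:

```python
import statistics

CRITICAL_CA = 1.5  # mg/L, threshold cited as critical for calcium-dependent organisms

def summarize(calcium_mg_per_l):
    """Median concentration and fraction of samples at or below the critical threshold."""
    median = statistics.median(calcium_mg_per_l)
    frac_low = sum(c <= CRITICAL_CA for c in calcium_mg_per_l) / len(calcium_mg_per_l)
    return median, frac_low

# Toy concentrations (mg/L), for illustration only:
samples = [0.9, 1.2, 1.5, 2.8, 4.0, 4.1, 6.5, 9.0, 12.0, 20.0]
median, frac_low = summarize(samples)
print(median)    # 4.05
print(frac_low)  # 0.3
```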

The study also attributes some of its results to freshwater lakes' ongoing recovery from the impacts of acid rain.

"Given governmental and industry action in the last few decades to reduce sulphate deposition associated with acid rain, lakes are now subject to less calcium leaching from surrounding terrestrial areas," said Gesa Weyhenmeyer, Professor at the Department of Ecology and Genetics/Limnology, at Uppsala University in Sweden and lead researcher on the study.

"Paradoxically, therefore, successful actions taken to address the harmful impacts of acid rain may have led a decline towards critically low levels of calcium for many aquatic organisms."

The study drew on 440,599 water samples from 43,184 inland water sites in 57 countries and analyzed decadal trends in over 200 water bodies since the 1980s. It was a global study conducted by multiple researchers across Europe and North America. IISD Experimental Lakes Area -- the world's freshwater laboratory -- in northwestern Ontario, Canada, contributed expertise and data from its unparalleled long-term monitoring dataset of over 50 years.

Credit: 
IISD Experimental Lakes Area

A hog in wolf's clothing

image: Feral hog eating a lamb in Australia.

Image: 
Dr. Peter Heise-Pavlov

Human and wildlife conflict has increased along with expanding human populations, particularly when wildlife endanger humans or their livelihoods. Most research on human-wildlife conflict has focused on the ways tigers, wolves, and other predators impact livestock even though noncarnivores also threaten livestock. New research by Dr. Shari Rodriguez and Dr. Christie Sampson, both from Clemson University, publishing on August 6 in the open-access journal PLOS Biology, examines the effects of these less-studied relationships, particularly for feral hogs and elephants, and the potential consequences of excluding these animals from research focused on mitigating wildlife impacts on livestock.

"Our study highlights the importance of including species not traditionally considered in the livestock protection conversation, and finding similarities in how the effects of non-Carnivora species can be addressed through the same methodologies as species such as wolves, tigers, or lions," says Dr. Rodriguez.

Results show that these species can have significant effects on livelihood by killing young and small livestock and damaging livestock farming infrastructure. They may also affect local communities' perception of the species, which in the case of species of conservation concern such as elephants could potentially reduce people's willingness to support conservation initiatives.

"Sharing experiences across taxa and adopting methodology found to be successful for other [predatory] species may help us to improve the tools we use to promote co-existence and conservation efforts for elephants," reported Dr. Sampson.

Credit: 
PLOS