Culture

How brain cells pick which connections to keep

image: Top row: A dendritic spine comes and goes around Day 14. Bottom row: a different spine becomes permanent after about four weeks.

Image: 
Nedivi Lab/ MIT Picower Institute

Brain cells, or neurons, constantly tinker with their circuit connections, a crucial feature that allows the brain to store and process information. While neurons frequently test out new potential partners through transient contacts, only a fraction of these fledgling junctions, called synapses, are selected to become permanent.

The major criterion for selecting excitatory synapses is how well they engage in response to experience-driven neural activity, but how that selection is implemented at the molecular level has been unclear. In a new study, MIT neuroscientists have identified a gene and the protein it encodes, CPG15, that allows experience to tap a synapse as a keeper.

In a series of novel experiments described in Cell Reports, the team at MIT's Picower Institute for Learning and Memory used multi-spectral, high-resolution two-photon microscopy to literally watch potential synapses come and go in the visual cortex of mice - both in the light, or normal visual experience, and in the darkness, where there is no visual input. By comparing observations made in normal mice and ones engineered to lack CPG15, they were able to show that the protein is required in order for visual experience to facilitate the transition of nascent excitatory synapses to permanence.

Mice engineered to lack CPG15 exhibit only one behavioral deficiency: they learn much more slowly than normal mice, said senior author Elly Nedivi, William R. (1964) & Linda R. Young Professor of Neuroscience in the Picower Institute. They need more trials and repetitions to learn associations that other mice learn quickly. The new study suggests that's because, without CPG15, they must rely on circuits where synapses simply happened to take hold, rather than on a circuit architecture refined by experience for optimal efficiency.

"Learning and memory are really specific manifestations of our brain's ability in general to constantly adapt and change in response to our environment," Nedivi said. "It's not that the circuits aren't there in mice lacking CPG15, they just don't have that feature, which is really important, of being optimized through use."

Watching in light and darkness

The first experiment reported in the paper, led by former MIT postdoc Jaichandar Subramanian, who is now an assistant professor at the University of Kansas, is a contribution to neuroscience in and of itself, Nedivi said. The novel labeling and imaging technologies implemented in the study, she said, allowed tracking key events in synapse formation with unprecedented spatial and temporal resolution. The study resolved the emergence of "dendritic spines," which are the structural protrusions on which excitatory synapses are formed, and the recruitment of the synaptic scaffold, PSD95, that signals that a synapse is there to stay.

The team tracked specially labeled neurons in the visual cortex of mice after normal visual experience and after two weeks in darkness. To their surprise, they saw that spines routinely arose and then typically disappeared again at the same rate regardless of whether the mice were in light or darkness. This careful scrutiny of spines confirmed that experience doesn't matter for spine formation, Nedivi said. That upends a common assumption in the field, which held that experience was necessary for spines to even emerge.

By keeping track of the presence of PSD95 they could confirm that the synapses that became stabilized during normal visual experience were the ones that had accumulated that protein. But the question remained: How does experience drive PSD95 to the synapse? The team hypothesized that CPG15, which is activity dependent and associated with synapse stabilization, does that job.

CPG15 represents experience

To investigate that, they repeated the same light-versus-dark experiments, but this time in mice engineered to lack CPG15. In the normal mice, there was much more PSD95 recruitment during the light phase than during the dark, but in the mice without CPG15, the experience of seeing in the light never made a difference. It was as if CPG15-less mice in the light were like normal mice in the dark.

Later they tried another experiment testing whether the low PSD95 recruitment seen when normal mice were in the dark could be rescued by exogenous expression of CPG15. Indeed, PSD95 recruitment shot up, as if the animals were exposed to visual experience. This showed that CPG15 not only carries the message of experience in the light, it can actually substitute for it in the dark, essentially "tricking" PSD95 into acting as if experience had called upon it.

"This is a very exciting result, because it shows that CPG15 is not just required for experience-dependent synapse selection, but it's also sufficient," says Nedivi. "That's unique in relation to all other molecules that are involved in synaptic plasticity."

A new model and method

In all, the paper's data allowed Nedivi to propose a new model of experience-dependent synapse stabilization: Regardless of neural activity or experience, spines emerge with fledgling excitatory synapses and the receptors needed for further development. If activity and experience send CPG15 their way, that draws in PSD95 and the synapse stabilizes. If experience doesn't involve the synapse, it gets no CPG15, very likely no PSD95 and the spine withers away.

The paper potentially has significance beyond the findings about experience-dependent synapse stabilization, Nedivi said. The method it describes of closely monitoring the growth or withering of spines and synapses amid a manipulation (like knocking out or modifying a gene) enables a whole raft of studies examining how a gene, a drug, or other factors affect synapses.

"You can apply this to any disease model and use this very sensitive tool for seeing what might be wrong at the synapse," she said.

Credit: 
Picower Institute at MIT

Bullet shape, velocity determine blood spatter patterns

image: Blood spatters are hydrodynamic signatures of violent crimes, often revealing when an event occurred and where the perpetrator and victim were located, and researchers have worked toward better understanding the fluid dynamics at play during gunshot spatters. In the Physics of Fluids, they propose a model for the disintegration of a liquid due to an arbitrarily shaped projectile, focusing on predictive models of gunshot blood atomization and droplet flight and spattering. These are images of the bullets used in the experiments: the .45 Auto bullet (left) and the 7.62 x 39 mm bullet (right).

Image: 
Alexander Yarin

WASHINGTON, D.C., August 6, 2019 -- Blood spatters are hydrodynamic signatures of violent crimes, often revealing when an event occurred and where the perpetrator and victim were located at the time of the crime.

A group of researchers from the University of Illinois at Chicago and Iowa State University realized that gaining a better physical understanding of the fluid dynamical phenomena at play during gunshot spatters could enhance crime scene investigations.

In the journal Physics of Fluids, from AIP Publishing, they propose a generalized model for the chaotic disintegration of a liquid due to an arbitrarily shaped projectile. Their model focuses on providing theoretical, predictive models of gunshot blood atomization (in particular, blood droplets) and droplet flight and spattering to provide physical insights that could enhance current bloodstain pattern analysis.

"Bloodstain pattern analysis in forensics implies a physical relation between blood droplet formation, trajectory, and impact into a solid surface," said Alexander Yarin, a distinguished professor at the University of Illinois at Chicago. "The formation of blood spatters involves a rheologically complex fluid -- blood -- and a nontrivial atomization process affected by the bullet shape and velocity."

One of the basic physical concepts involved in the study is the Rayleigh-Taylor instability, which arises when denser blood accelerates toward lighter air; it triggers a cascade of further instability phenomena -- think of a water layer dripping from the ceiling. The researchers examine the viscoelasticity of blood, which affects ligament formation; the propagation of the blood droplet spray through air; and blood deposition on the floor, accounting for gravity and air drag, with drag diminished by the collective effect of droplet-droplet interactions. With that information, deposition on the floor or other surfaces can be predicted.
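The droplet-flight step described above can be illustrated with a toy ballistic calculation. This is a minimal sketch, not the authors' model: it assumes a single spherical droplet with a constant drag coefficient and uses simple explicit-Euler integration, and it deliberately omits the collective droplet-droplet drag reduction the paper accounts for.

```python
import math

# Illustrative constants; a real analysis would fit these to the scenario.
RHO_AIR = 1.2        # air density, kg/m^3
RHO_BLOOD = 1060.0   # blood density, kg/m^3
CD = 0.47            # drag coefficient of a sphere (assumed constant)
G = 9.81             # gravitational acceleration, m/s^2

def droplet_range(d, v0, angle_deg, dt=1e-4):
    """Horizontal distance flown by a droplet of diameter d (m) launched
    at speed v0 (m/s) and the given elevation angle, until it returns to
    launch height. Explicit-Euler integration of gravity + quadratic drag."""
    area = math.pi * (d / 2) ** 2
    mass = RHO_BLOOD * (math.pi / 6) * d ** 3
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        speed = math.hypot(vx, vy)
        # F_drag = 0.5 * rho * Cd * A * v^2, applied opposite the velocity.
        drag_per_mass = 0.5 * RHO_AIR * CD * area * speed / mass
        vx -= drag_per_mass * vx * dt
        vy -= (G + drag_per_mass * vy) * dt
        x += vx * dt
        y += vy * dt
        if y < 0:
            return x
```

Even this crude sketch reproduces a qualitative point in the study: smaller droplets, having more surface area relative to their mass, are decelerated more by air drag and land closer to the source.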

The group's previous work on blood spatter included predictions of backward spattering caused by slender and blunt bullets, and of forward spattering caused by a bullet shaped like a blunt ovoid (egg). In the present work, the latter was generalized to a bullet of arbitrary shape, because many different bullet shapes exist.

By doing this, the researchers were able to predict the fragmentation of the blood-filled target within the framework of percolation theory, which describes how connected pathways form in random media and was originally developed to model the filtration of liquid through a porous medium.

"Our results revealed that bullet shape and velocity determine the blood spatter patterns, because they dictate the velocity field in the blood body," said Yarin. "When this approach is incorporated into bloodstain pattern analysis in forensics in the future, hopefully it can help crack tough questions."

The group's technique for forensics "will allow a more accurate determination of the origins of blood spatter, as well as potentially facilitate investigators' conclusions about the weapon used for cases in which a bullet is not found," Yarin said.

Credit: 
American Institute of Physics

Sleep interrupted: What's keeping us up at night?

image: Christine E. Spadola, Ph.D., lead author and an assistant professor in FAU's Phyllis and Harvey Sandler School of Social Work within the College for Design and Social Inquiry.

Image: 
Florida Atlantic University

Between 50 and 70 million Americans have a sleep disorder. Sleepless nights are associated with a number of adverse health outcomes including heart disease, high blood pressure, diabetes, and certain cancers. Evening use of alcohol, caffeine and nicotine is believed to sabotage sleep. Yet studies examining their effects on sleep have been limited by small sample sizes, by a lack of racial and ethnic diversity, and by the absence of objective sleep measures. Furthermore, these investigations have largely been conducted in laboratory settings rather than in people's natural environments.

Considering the public health importance of getting a good night's sleep and the widespread use of these substances, relatively few studies have thoroughly investigated the association between evening use of alcohol, caffeine and nicotine and sleep parameters.

A study led by a researcher at Florida Atlantic University, with collaborators from Brigham and Women's Hospital, Harvard T.H. Chan School of Public Health, Harvard Medical School, Emory University, the National Institutes of Health, and the University of Mississippi Medical Center, is one of the largest longitudinal investigations to date to examine evening consumption of alcohol, caffeine and nicotine in an African-American cohort with objectively measured sleep outcomes recorded in participants' natural environments.

Using actigraphy (a wristwatch-like sensor) and concurrent daily sleep diaries, the researchers examined the night-to-night associations of evening use of alcohol, caffeine and nicotine with sleep duration, sleep efficiency and wake after sleep onset. The study involved 785 participants and totaled 5,164 days of concurrent actigraphy and daily sleep diaries, which recorded how much alcohol, caffeine or nicotine participants consumed within four hours of bedtime.
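The three outcomes named here have standard operational definitions. A minimal sketch, computing them from a toy per-minute sleep/wake series; this is illustrative only and is not the scoring algorithm used in the study:

```python
def sleep_metrics(epochs, epoch_minutes=1):
    """Compute common actigraphy outcomes from a per-epoch sleep/wake
    series scored between bedtime and final awakening (1 = asleep, 0 = awake)."""
    time_in_bed = len(epochs) * epoch_minutes
    sleep_duration = sum(epochs) * epoch_minutes
    # Wake after sleep onset (WASO): wake epochs after the first sleep epoch.
    onset = epochs.index(1) if 1 in epochs else len(epochs)
    waso = epochs[onset:].count(0) * epoch_minutes
    # Sleep efficiency: percentage of time in bed actually spent asleep.
    efficiency = 100.0 * sleep_duration / time_in_bed if time_in_bed else 0.0
    return {"duration_min": sleep_duration,
            "efficiency_pct": efficiency,
            "waso_min": waso}
```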

Results of the study, published in the journal Sleep, may be good news for coffee lovers. The researchers did not find an association between consumption of caffeine within four hours of bedtime with any of the sleep parameters. However, the researchers warn that caffeine dosing, and individual variations in caffeine sensitivity and tolerance, were not able to be measured and can play an important role in the association between caffeine use and sleep.

For smokers and those who enjoy "Happy Hour" or an alcoholic beverage with dinner, the study shows that a night with use of nicotine and/or alcohol within four hours of bedtime demonstrated worse sleep continuity than a night without these substances, even after controlling for age, gender, obesity, level of education, having work/school the next day, and depressive symptoms, anxiety, and stress.

Nicotine was the substance most strongly associated with sleep disruption -- yet another reason to quit smoking. There was a statistically significant interaction between evening nicotine use and insomnia in relation to sleep duration. Among participants with insomnia, nightly nicotine use was associated with an average 42.47-minute reduction in sleep duration. The effects of nicotine may be particularly significant among individuals with insomnia.

The results from this study are especially meaningful as they were observed in individuals unselected for sleep problems and who generally had high sleep efficiency. Moreover, they were based on longitudinal data so that the associations can take account of not only between-person differences but also within-person variations in exposures and covariates such as age, obesity, educational attainment, having work/school the next day, and mental health symptomatology.

"African Americans have been underrepresented in studies examining the associations of nicotine, alcohol, and caffeine use on sleep," said Christine E. Spadola, Ph.D., lead author and an assistant professor in FAU's Phyllis and Harvey Sandler School of Social Work within the College for Design and Social Inquiry. "This is especially significant because African Americans are more likely to experience short sleep duration and fragmented sleep compared to non-Hispanic Whites, as well as more deleterious health consequences associated with inadequate sleep than other racial or ethnic groups."

These findings support the importance of sleep health recommendations that promote the restriction of evening alcohol and nicotine use to improve sleep continuity.

Credit: 
Florida Atlantic University

USPSTF still recommends against screening for pancreatic cancer in asymptomatic adults

Bottom Line: The U.S. Preventive Services Task Force (USPSTF) still recommends against screening for pancreatic cancer in adults without symptoms. The USPSTF routinely makes recommendations about the effectiveness of preventive care services. In this statement, the USPSTF reaffirmed its 2004 recommendation against screening for asymptomatic adults. Pancreatic cancer is an uncommon cancer with a poor prognosis.

(doi:10.1001/jama.2019.10232)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Note: More information about the U.S. Preventive Services Task Force, its process, and its recommendations can be found on the newsroom page of its website.


Embed this link to provide your readers free access to the full-text article. This link will be live at the embargo time, and all USPSTF articles remain free indefinitely: https://jamanetwork.com/journals/jama/fullarticle/2740727?guestAccessKey=4b279f9c-411f-4de8-b6a7-449e0b98979d&utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=080619

Credit: 
JAMA Network

NZ big bird a whopping 'squawkzilla'

image: Reconstruction of the giant parrot Heracles, dwarfing a bevy of 8 cm high Kuiornis -- small New Zealand wrens scuttling about on the forest floor.

Image: 
Dr Brian Choo, Flinders University

Australasian palaeontologists have discovered the world's largest parrot, standing up to 1 m tall with a massive beak able to crack most food sources.

The new bird has been named Heracles inexpectatus to reflect its Herculean myth-like size and strength - and the unexpected nature of the discovery.

"New Zealand is well known for its giant birds," says Flinders University Associate Professor Trevor Worthy. "Not only moa dominated avifaunas, but giant geese and adzebills shared the forest floor, while a giant eagle ruled the skies.

"But until now, no-one has ever found an extinct giant parrot - anywhere."

The NZ fossil is approximately the size of the giant 'dodo' pigeon of the Mascarenes and twice the size of the critically endangered flightless New Zealand kakapo, previously the largest known parrot.

Like the kakapo, it was a member of an ancient New Zealand group of parrots that appears to be more primitive than the parrots thriving today in Australia and on other continents.

Experts from Flinders University, UNSW Sydney and Canterbury Museum in New Zealand estimate Heracles to be 1 m tall, weighing about 7 kg.

The new parrot was found in fossils up to 19 million years old from near St Bathans in Central Otago, New Zealand, an area well known for its rich assemblage of Miocene fossil birds.

"We have been excavating these fossil deposits for 20 years, and each year reveals new birds and other animals," says Associate Professor Worthy, from the Flinders University Palaeontology Lab.

"While Heracles is one of the most spectacular birds we have found, no doubt there are many more unexpected species yet to be discovered in this most interesting deposit."

"Heracles, as the largest parrot ever, no doubt with a massive parrot beak that could crack wide open anything it fancied, may well have dined on more than conventional parrot foods, perhaps even other parrots," says Professor Mike Archer, from the UNSW Sydney Palaeontology, Geobiology and Earth Archives (PANGEA) Research Centre.

"Its rarity in the deposit is something we might expect if it was feeding higher up in the food chain," he says, adding parrots "in general are very resourceful birds in terms of culinary interests".

"New Zealand keas, for example, have even developed a taste for sheep since these were introduced by European settlers in 1773."

Birds have repeatedly evolved giant species on islands. As well as the dodo, examples include another giant pigeon found on Fiji, a giant stork on Flores, giant ducks in Hawaii, giant megapodes in New Caledonia and Fiji, and giant owls and other raptors in the Caribbean.

Heracles lived in a diverse subtropical forest where many species of laurels and palms grew with podocarp trees.

"Undoubtedly, these provided a rich harvest of fruit important in the diet of Heracles and the parrots and pigeons it lived with. But on the forest floor Heracles competed with adzebills and the forerunners of moa," says Professor Suzanne Hand, also from UNSW Sydney.

"The St Bathans fauna provides the only insight into the terrestrial birds and other animals that lived in New Zealand since dinosaurs roamed the land more than 66 million years ago," says Paul Scofield, Senior Curator at Canterbury Museum, Christchurch.

Canterbury Museum research curator Vanesa De Pietri says the fossil deposit reveals a highly diverse fauna typical of subtropical climates with crocodilians, turtles, many bats and other mammals, and over 40 bird species.

"This was a very different place with a fauna very unlike that which survived into recent times," she says.

Credit: 
Flinders University

Deregulated mTOR is responsible for autophagy defects exacerbating kidney stone formation

image: Numbers of GFP-MAP1LC3 puncta (white arrows) and autophagosomes (black arrows) in renal tubular cells increased after mTOR inhibitor treatment in GFP-MAP1LC3 mice given GOX, indicating activation of autophagy. The amount of crystals formed in kidneys extracted from mice after GOX and mTOR inhibitor injection was suppressed.

Image: 
Takahito Yasui, M.D., Ph.D.

Dr. Takahito Yasui (Professor, Nagoya City University) and Dr. Rei Unno (Research Fellow, Nagoya City University), in collaboration with Dr. Tsuyoshi Kawabata (Associate Professor, Nagasaki University), have revealed a novel mechanism of kidney stone formation using cultured mouse cells, a mouse model of kidney stones, and human kidney tissue.

They found that autophagic activity was significantly decreased in mouse renal tubular cells (RTCs) exposed to calcium oxalate (CaOx) monohydrate crystals and in the kidneys of GFP-conjugated MAP1LC3B (microtubule-associated protein 1 light chain 3 beta) transgenic mice with CaOx nephrocalcinosis induced by glyoxylate (GOX). This caused an accumulation of damaged intracellular organelles, such as mitochondria and lysosomes, whose normal functioning depends on functional autophagy. An impairment of autophagy was also observed in mucosa with plaques from CaOx kidney stone formers.

Moreover, they determined that the decrease in autophagy was caused by an upregulation of mTOR, which in turn suppressed the upstream autophagy regulator TFEB (transcription factor EB). Furthermore, they showed that an mTOR inhibitor could restore autophagy and alleviate crystal-cell interactions and the formation of crystals associated with increased inflammatory responses (Figure 1). Because chemical inhibition of mTOR ameliorates kidney stone development, the results propose deregulated mTOR, and the resulting impairment of autophagy, as a key target for prevention or treatment of the disease (Figure 2).

Credit: 
Nagoya City University

OU microbiologists provide framework for assessing ecological diversity

image: The study provides a tool that ecologists can use to quantitatively assess ecological stochasticity.

Image: 
University of Oklahoma

A University of Oklahoma team of microbiologists has developed a mathematical framework for quantitatively assessing whether the processes shaping an ecological community are deterministic or stochastic. A recent study by the team, published in the Proceedings of the National Academy of Sciences, examines the mechanisms controlling biological diversity and provides guidance for using null-model-based approaches to examine processes within a community.

"An ecological community is a dynamic complex system with a myriad of interacting species. Both deterministic and stochastic forces can shape the community, but how to quantify their relative contributions remains a great challenge. This study provides an effective and robust tool for ecologists to quantitatively assess ecological stochasticity," said Jizhong Zhou, director of the Institute for Environmental Genomics, professor in the OU College of Arts and Sciences and Gallogly College of Engineering, and affiliate of the U.S. Department of Energy's Lawrence Berkeley National Laboratory.

Zhou led the study with OU team members Daliang Ning and Ye Deng, and with James M. Tiedje of Michigan State University. In this study, the team modified the framework for more general situations when quantifying the stochastic mechanisms underlying ecological communities, and demonstrated that it has substantially better quantitative performance than previous methods.

The team used the framework to reassess the importance of determinism and stochasticity in mediating the succession of groundwater microbial communities in response to organic carbon injection, in this case emulsified vegetable oil, to stimulate bioremediation. Also, the team evaluated the effects of different null-model algorithms and similarity metrics on the quantitative assessment of stochasticity in groundwater microbial communities in response to the carbon injection.
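The logic of a null-model comparison can be sketched in a few lines. The toy version below, with a deliberately crude shuffle null model and Bray-Curtis similarity, is an illustration of the general idea only, not the paper's actual algorithms: it asks how close the observed similarity of two communities is to what chance alone would predict.

```python
import random

def bray_curtis_similarity(a, b):
    """1 minus the Bray-Curtis dissimilarity of two abundance vectors."""
    total = sum(a) + sum(b)
    if total == 0:
        return 0.0
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 2 * shared / total

def null_communities(community, n_draws, rng):
    """Deliberately crude null model: shuffle abundances across taxa.
    (Real null models constrain richness and abundance far more carefully.)"""
    draws = []
    for _ in range(n_draws):
        shuffled = community[:]
        rng.shuffle(shuffled)
        draws.append(shuffled)
    return draws

def stochasticity_ratio(comm_a, comm_b, n_draws=200):
    """Compare observed pairwise similarity to its null expectation.
    Values near 1 mean similarity is about what chance alone predicts."""
    obs = bray_curtis_similarity(comm_a, comm_b)
    nulls_a = null_communities(comm_a, n_draws, random.Random(1))
    nulls_b = null_communities(comm_b, n_draws, random.Random(2))
    null = sum(bray_curtis_similarity(na, nb)
               for na, nb in zip(nulls_a, nulls_b)) / n_draws
    if obs >= null:    # more similar than chance: deterministic homogenization
        return null / obs if obs else 1.0
    return obs / null  # less similar than chance: deterministic divergence
```

As the article notes, the choice of null-model algorithm and similarity metric strongly affects the result; swapping either ingredient in this sketch would change the ratio.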

The study results show that the microbial community shifted from deterministic to more stochastic assembly right after the organic carbon input. As the vegetable oil was consumed, the community returned to more deterministic assembly. In addition, the results demonstrated that the choice of null-model algorithm and community similarity metric had strong effects on the quantification of ecological stochasticity.

Credit: 
University of Oklahoma

Novel school improvement program can raise teaching quality while reducing inequality

A multi-national European study, looking at over 5,500 students, has found that a novel school intervention program can not only improve the mathematics scores of primary school children from disadvantaged areas, but can also lessen the achievement gap caused by socioeconomic status.

Known as the Dynamic Approach to School Improvement (DASI), the program is based on the latest findings in educational research.

Rather than a one-size-fits-all, top-down approach, DASI works by first assessing a school to identify the specific teaching areas that could be improved and then implementing targeted measures to improve them. This process involves all members of the school community, including teachers, pupils and parents, with support from a specialized Advisory and Research Team.

Several studies have already shown that DASI can improve student learning progress and academic outcomes, but this latest study, published in Educational Research, is the first to have been conducted on schools in disadvantaged areas.

Furthermore, DASI was specifically designed to enhance both academic quality and equity, by countering non-school factors that can influence pupils' academic outcomes, including socioeconomic status, gender and ethnicity.

The effectiveness of this aspect of DASI had not been tested until this new study, conducted by a team of researchers from Cyprus and the Netherlands led by Professor Leonidas Kyriakides at the University of Cyprus.

After enrolling 72 primary schools, including some 5,560 pupils from disadvantaged areas in four countries - Cyprus, Greece, England and Ireland - the researchers randomly assigned the schools to experimental and control groups. Those in the experimental group made use of DASI for an entire school year, while those in the control groups were offered support to develop their own improvement programs.

Pupils aged between nine and 12 at the schools were given mathematics tests at both the start of the school year and the end to assess the effect of DASI over that time. The researchers chose to assess mathematical ability because previous studies have shown that mathematics tends to respond better than any other subject to school improvement programs. The researchers also recorded the socioeconomic status, gender and ethnicity of the pupils.

At the beginning of the year, all the pupils in the 72 schools achieved a similar range of scores on the mathematics tests, and showed similar achievement gaps based on socioeconomic status, gender and ethnicity. In contrast, at the end of the year, pupils in the schools that received DASI achieved better results on the mathematics test than those in the control group, and this was seen in all four countries.

Furthermore, the achievement gap based on socioeconomic status narrowed in the schools that received DASI but stayed the same in the control group, although DASI didn't have any effect on the achievement gaps based on gender or ethnicity.

"One could argue that this paper has not only significant implications for research on improvement but also for developing policies on equal educational opportunities," Professor Kyriakides concludes.

However, despite the fall in the achievement gap for socioeconomic status, the authors admit that DASI appears to be more effective at enhancing quality than equity. They suggest that more research needs to be done to identify policies and actions that address equity in a more comprehensive way, so that DASI can lessen the achievement gap based on gender and ethnicity as well.

Credit: 
Taylor & Francis Group

Seeing how computers 'think' helps humans stump machines and reveals AI weaknesses

One of the ultimate goals of artificial intelligence is a machine that truly understands human language and interprets meaning from complex, nuanced passages. When IBM's Watson computer beat famed "Jeopardy!" champion Ken Jennings in 2011, it seemed as if that milestone had been met. However, anyone who has tried to have a conversation with virtual assistant Siri knows that computers have a long way to go to truly understand human language. To get better at understanding language, computer systems must train using questions that challenge them and reflect the full complexity of human language.

Researchers from the University of Maryland have figured out how to reliably create such questions through a human-computer collaboration, developing a dataset of more than 1,200 questions that, while easy for people to answer, stump the best computer answering systems available today. A system that learns to master these questions will have a better understanding of language than any system currently in existence. The work is described in an article published in 2019 in the journal Transactions of the Association for Computational Linguistics.

"Most question-answering computer systems don't explain why they answer the way they do, but our work helps us see what computers actually understand," said Jordan Boyd-Graber, associate professor of computer science at UMD and senior author of the paper. "In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do."

Most current work to improve question-answering programs uses either human authors or computers to generate questions. The inherent challenge in these approaches is that when humans write questions, they don't know which specific elements of their question are confusing to the computer. When computers write the questions, they either produce formulaic, fill-in-the-blank questions or make mistakes, sometimes generating nonsense.

To develop their novel approach of humans and computers working together to generate questions, Boyd-Graber and his team created a computer interface that reveals what a computer is "thinking" as a human writer types a question. The writer can then edit his or her question to exploit the computer's weaknesses.

In the new interface, a human author types a question while the computer's guesses appear in ranked order on the screen, and the words that led the computer to make its guesses are highlighted.

For example, if the author writes "What composer's Variations on a Theme by Haydn was inspired by Karl Ferdinand Pohl?" and the system correctly answers "Johannes Brahms," the interface highlights the words "Ferdinand Pohl" to show that this phrase led it to the answer. Using that information, the author can edit the question to make it more difficult for the computer without altering the question's meaning. In this example, the author replaced the name of the man who inspired Brahms, "Karl Ferdinand Pohl," with a description of his job, "the archivist of the Vienna Musikverein," and the computer was unable to answer correctly. However, expert human quiz game players could still easily answer the edited question correctly.
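The highlighting behavior described above can be approximated with a simple leave-one-out importance computation. A minimal sketch, with a toy scoring function standing in for a real QA model's confidence; the keyword set and scores here are invented for illustration and are not the UMD system:

```python
def word_importances(question, score):
    """Rank each word by how much the model's confidence drops when
    that word is removed (leave-one-out importance)."""
    words = question.split()
    base = score(" ".join(words))
    importances = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        importances.append((words[i], base - score(ablated)))
    return sorted(importances, key=lambda pair: pair[1], reverse=True)

# Toy stand-in for a QA model's confidence: the fraction of "giveaway"
# keywords present in the question (keywords invented for illustration).
KEYWORDS = {"ferdinand", "pohl", "haydn"}

def toy_score(text):
    tokens = {w.strip("?.,'").lower() for w in text.split()}
    return len(tokens & KEYWORDS) / len(KEYWORDS)
```

Words whose removal causes the largest confidence drop are the ones an interface like this would highlight, and they are exactly the words an author would target when editing the question to stump the model.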

By working together, humans and computers reliably developed 1,213 computer-stumping questions that the researchers tested during a competition pitting experienced human players--from junior varsity high school trivia teams to "Jeopardy!" champions--against computers. Even the weakest human team defeated the strongest computer system.

"For three or four years, people have been aware that computer question-answering systems are very brittle and can be fooled very easily," said Shi Feng, a UMD computer science graduate student and a co-author of the paper. "But this is the first paper we are aware of that actually uses a machine to help humans break the model itself."

The researchers say these questions will serve not only as a new dataset for computer scientists to better understand where natural language processing fails, but also as a training dataset for developing improved machine learning algorithms. The questions revealed six different language phenomena that consistently stump computers.

These six phenomena fall into two categories. The first category covers linguistic phenomena: paraphrasing (such as saying "leap from a precipice" instead of "jump from a cliff") and distracting language or unexpected contexts (such as a reference to a political figure appearing in a clue about something unrelated to politics). The second category covers reasoning skills: clues that require logic and calculation, mental triangulation of elements in a question, or putting together multiple steps to form a conclusion.

"Humans are able to generalize more and to see deeper connections," Boyd-Graber said. "They don't have the limitless memory of computers, but they still have an advantage in being able to see the forest for the trees. Cataloguing the problems computers have helps us understand the issues we need to address, so that we can actually get computers to begin to see the forest through the trees and answer questions in the way humans do."

There is a long way to go before that happens, added Boyd-Graber, who also has co-appointments at the University of Maryland Institute for Advanced Computer Studies (UMIACS) as well as UMD's College of Information Studies and Language Science Center. But this work provides an exciting new tool to help computer scientists achieve that goal.

"This paper is laying out a research agenda for the next several years so that we can actually get computers to answer questions well," he said.

Credit: 
University of Maryland

Streamlining fee waiver requests helped low-income immigrants become citizens

Many Americans only experience government bureaucracy when dealing with the IRS or DMV, and in the popular imagination these interactions are known for being almost comically time-consuming and complicated. But for people who receive public benefits, bureaucracy is a more routine obstacle.

All too often, benefits programs are designed with an eye to the convenience of the administrators, not the clients. Lengthy forms full of jargon and fine print can be a big obstacle for someone who speaks English as a second language; commuting to distant agency offices is challenging for someone who lacks transportation and can't take time off work. Accessing benefits sometimes involves a logistical high-wire act, and ad hoc, inconsistent processes effectively deter people who need these benefits the most.

Policymakers and researchers are beginning to recognize this problem. In some cases, however, attempts to ease access bring in people who are relatively better off--people with higher incomes, education levels, and language skills. Meanwhile, the people with the greatest hardships are less likely to take advantage of support systems primarily designed for them.

Now, a new study from the Immigration Policy Lab at Stanford University offers insights into a time when lightening the paperwork and confusion surrounding a public benefit did get a bigger response from the neediest beneficiaries.

The researchers studied a reform that made it easier for low-income immigrants to apply for citizenship free of charge. When the convoluted fee waiver request process was replaced with a single form, they found, about 73,000 people per year became citizens who otherwise wouldn't have applied.

It may sound like dull, procedural detail, but reforms like this go a long way toward making sure that programs for the disadvantaged actually work as intended. And in this case, all the red tape interfered with the principle that U.S. citizenship should be open to all who are eligible, not just those with means.

Equal Access to Citizenship

Keeping citizenship affordable is a rising priority for policymakers and immigrant advocates in the United States. The application fee, now $725, has risen sharply over the past few decades, creating a new barrier to citizenship.

At the same time, the federal fee waiver for low-income immigrants is curiously underused. Of the roughly 9 million immigrants eligible for citizenship, almost half are eligible for the fee waiver. Why don't more people use it?

Before 2010, U.S. Citizenship and Immigration Services (USCIS) used a pretty opaque process for fee waiver requests. Immigrants demonstrated their inability to pay the application fee with either an affidavit or an unsworn declaration. USCIS officers used their best judgment in evaluating the claims, as there were only vague guidelines governing whether a claim should be approved.

Then, USCIS introduced a streamlined, transparent process: a simple form (the I-912) and clear rules for eligibility (receipt of any means-tested benefits, like SNAP or Medicaid, or household income below 150 percent of the federal poverty guidelines).

"We were excited about this study because the standardization was a major policy change with the potential to affect millions of immigrants. It was also relevant for current debates about the future of the fee waiver program," said Vasil Yasenov, a postdoctoral fellow at the Immigration Policy Lab and lead author of the study.

Using data from the American Community Survey, the researchers looked at 739,301 low-income immigrants who met the eligibility requirements for citizenship between 2007 and 2016. They divided them into two groups to compare: those who would likely qualify for the fee waiver and those who wouldn't. Other than income and employment, the two groups looked very similar, with roughly the same distributions of age, ethnicity, country of origin, and time spent living in the United States.

After the fee waiver reform, the group eligible for the fee waiver naturalized at higher rates than the comparison group. The researchers found that there was no difference between the two groups beforehand, but afterward a gap of 1.5 percentage points opened between them. That amounted to about 73,000 new citizens per year.
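The comparison logic here is a standard difference-in-differences: the reform's estimated effect is the change in the fee-waiver-eligible group's naturalization rate minus the change in the comparison group's rate over the same period. A minimal sketch of that calculation (the rates below are hypothetical illustrations; only the 1.5-percentage-point gap comes from the study):

```python
# Difference-in-differences sketch. The naturalization rates here are
# hypothetical; only the resulting 1.5-point gap matches the study.

def did_effect(pre_treat, post_treat, pre_control, post_control):
    """Effect = change in the treated group minus change in the control group."""
    return (post_treat - pre_treat) - (post_control - pre_control)

# Hypothetical annual naturalization rates, in percent:
effect = did_effect(pre_treat=4.0, post_treat=6.0,
                    pre_control=4.0, post_control=4.5)
print(f"Estimated effect: {effect:.1f} percentage points")
```

The design assumes the two groups would have trended in parallel absent the reform, which is why the study first checks that they looked similar beforehand.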

The Role of Non-Profits

As they delved further into the data, the researchers were intrigued to find an outsized response from those who could be considered least likely to naturalize--in other words, people whose circumstances make it particularly difficult to successfully apply. If you think of this group as most likely to be deterred by the cumbersome process surrounding the fee waiver before 2010, it makes sense that they would also be more likely to change their behavior once that obstacle was lifted. But this finding contrasts with research showing that it's people with greater advantages in life who respond most to these kinds of "user-friendly" reforms.

What explains the anomaly? There was something else at work, something that helped translate the procedural changes into a positive force in people's lives. That something, the researchers thought, was the hundreds of nonprofits and community-based organizations across the country devoted to serving low-income immigrants, known as immigration service providers. While they are in the public eye when engaging in political advocacy and community organizing, they are less well known for the everyday work they do: providing legal advice, referrals to health care or adult education programs, and help filling out naturalization forms.

As it turned out, their work was reflected in the data. The fee waiver reform had a greater effect among immigrants living in states with a higher density of these supportive organizations. And an additional survey of low-income immigrants found that those who received help from an immigration service provider were 21.5 percentage points more likely to use the fee waiver.

"With the I-912 form and the standardization of evidence needed for a fee waiver, organizations have been able to set up efficient processes to assist low-income immigrants who may have thought that their dreams of citizenship were out of reach because of the fees normally required to apply," noted study co-author Michael Hotard.

Immigration service providers may soon play an even more important role in making citizenship possible for low-income immigrants. USCIS has proposed making fee waivers more difficult to get, narrowing eligibility criteria and requiring a tax transcript as proof of inability to pay. While immigration service providers will rally to help immigrants navigate the potential changes, cities and states can promote the fee waiver program and experiment with public-private partnerships to offer financial support for immigration fees. As with the 2010 fee waiver reform, a little creativity goes a long way.

Credit: 
Stanford University - Immigration Policy Lab

What do you mean the hamburger isn't all that American?

image: Hamburgers are an American favorite, but the origins of this meal's common ingredients are as diverse as the US population. The meat patties were first served in Hamburg, Germany, and appeared on menus in New York City as early as the 1870s thanks to German immigrants.

Image: 
Graphic: Álvaro Valiño, Kelsey Nowakowski and Colin Khoury

Say you're a scientist who studies the origins and history of food, and you want to communicate to the world your findings that the all-American hamburger - including the side of fries - doesn't contain a single ingredient that originally came from the United States. You could publish an article in a top-notch journal, ask a communications officer to write a press release about the paper, or take to Twitter and tell your hundreds of devoted followers all about your discovery. All of these create some impact.

But you could also join forces with a professional graphic designer and map out the ingredients' origins in an attractive infographic display, and by publishing it, potentially reach a much wider audience.

This is exactly what Colin Khoury of the International Center for Tropical Agriculture (CIAT) did. And the end result communicates his findings perhaps just as well as - or even better than - the common communications channels that scientists use. Now Khoury - who also took the pizza to task for not being all that Italian and pad thai for being less than wholly Thai - and his collaborators at leading universities are encouraging their scientific colleagues to embrace graphic design as a serious asset in science communication efforts, as well as a useful process for the advance of science itself.

"Visual depictions of scientific findings aren't a new thing. In fact, people have been making them for hundreds of years, and especially since scientific journals started to publish," said Khoury, the lead author of a new paper about scientist-artist collaborations in Communications Biology, a journal published by Nature. "But new technologies, new audiences, and new ways to communicate are making high-quality, sophisticated graphics ever more important, and collaborations with skilled professionals are the most productive way to create them."

To test the efficacy of collaborations between scientists and graphic artists, Khoury and colleagues paired six research laboratories that work on societally relevant food and agricultural challenges with graphic designers and media content creators. In addition to the food origins research, they tackled complex subjects related to pollinators and biodiversity threats, modern plant breeding, agricultural development and land-use change, and new technologies in agriculture.

The scientists first presented the results at this year's annual meeting of the American Association for the Advancement of Science (AAAS).

Challenging scientists to identify audiences and clarify their messages

The collaborations began by asking the scientists to define their target audience, and "the general public" was not an acceptable answer. For the infographic explaining the importance of pollinators and biodiversity, the team eventually identified the target audience as "English and Spanish speakers already interested in biodiversity conservation," which led to a relatively detailed infographic with versions in English and Spanish.

The scientists agonized over the challenge of distilling complex concepts into clear, focused, and accessible messages, but the process helped them push their science forward. In some cases, they better identified the central components of their work. In others, they discovered areas they hadn't studied sufficiently.

"Seeing the science through the eyes of a graphic artist challenged my thought process on how to reduce complex mechanisms to a more accessible form," said Michael Gore, a researcher at Cornell University who collaborated on an infographic about harnessing new technology to improve crop resilience.

The graphic artists, not all of whom had backgrounds in science, enjoyed the challenge.

"Science communication in general is broadening and breaking down barriers between scientists and the public, and infographics have become mainstream," said Yael Kisel, an independent artist based in San Jose, California, who worked on the pollinator infographic. "Science-art partnerships are popping up all over the place like mushrooms after rain. I feel like we're riding on a growing wave, and I can't wait to see how this field continues to develop."

The researchers and designers collectively identified a number of steps to make scientist-artist partnerships a more common component of a research communication agenda. They encourage research institutions to make graphic design a substantive component of communications teams. They suggest graphic art professionals can create and expand networks and businesses. They ask funders of research projects to allocate resources to graphic communication of socially relevant research results.

"If a picture is worth a thousand words, a science-based infographic is worth at least a million," said Ari Novy, president of the San Diego Botanic Garden, who managed the project.

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

'Bone in a dish' opens new window on cancer initiation, metastasis, bone healing

image: Luiz Bertassoni, D.D.S., Ph.D., holds a 3D 'bone in a dish' as a model to study bone function, diseases, and bone regeneration.

Image: 
OHSU/Joe Rojas-Burke

Researchers in Oregon have engineered a material that replicates human bone tissue with an unprecedented level of precision, from its microscopic crystal structure to its biological activity. They are using it to explore fundamental disease processes, such as the origin of metastatic tumors in bone, and as a treatment for large bone injuries.

"Essentially it is a miniaturized bone in a dish that we can produce in a matter of 72 hours or less," says biomedical engineer Luiz Bertassoni, D.D.S., Ph.D., an assistant professor in the OHSU School of Dentistry and a member of CEDAR, the Cancer Early Detection Advanced Research Center in the OHSU Knight Cancer Institute.

Like real bone, the material has a 3D mineral structure populated with bone cells, nerve cells and endothelial cells that self-organize into functioning blood vessels.

"What is remarkable is that researchers in our field have become used to cultivating cells within a protein mixture to approximate how cells live in the body. But this is the first time anyone has been able to embed cells in minerals, which is what characterizes the bone tissue," Bertassoni says.

And that's what makes the new material promising as a model to study bone function, diseases, and bone regeneration. "With this model system, you can start asking questions about how bone cells attract different types of cancers, how cancer cells move into bone, how bone takes part in the regulation of marrow function," says Bertassoni.

"It can even be relevant to dissect the mechanisms that are leading to diseases such as leukemia." Bertassoni published the results in the journal Nature Communications with postdoctoral researcher Greeshma Nair, Ph.D., doctoral student Avathamsa Athirasala, M.Sc.Eng., and other co-authors.

"Being able to engineer truly bone-like tissues in the lab can also be transformative for regenerative medicine," Bertassoni says, "since the current treatment for large bone fractures requires the removal of the patient's own healthy bone so that it can be implanted at the site of injury."

The recipe starts with the mixing of human stem cells into a solution loaded with collagen, a protein abundant in the matrix of bone tissue. The collagen proteins link together, forming a gel embedded with the stem cells. The researchers then flood the gel with a mixture of dissolved calcium and phosphate, the minerals of bone. They add another key ingredient, the protein osteopontin derived from cow milk, to keep the minerals from forming crystals too quickly. This additive, which clings to calcium and phosphate, also minimizes the minerals' toxicity to cells.

The mixture diffuses through a network of channels about the width of a strand of DNA in the spongy collagen, and the dissolved minerals precipitate into orderly layers of crystals.

"We can reproduce the architecture of bone down to a nanometer scale," says Bertassoni. "Our model goes through the same biophysical process of formation that bone does."

In this calcified environment, stem cells develop into functioning bone cells, osteoblasts and osteocytes, without the addition of any other molecules, as if they knew they were being embedded in an actual bone matrix. Within days, the growing cells squeeze slender protrusions through spaces in their mineralized surroundings to connect and communicate with neighboring cells. The bone-like engineered structure creates a microenvironment that is sufficient to cue stem cells that it's time to mature into bone cells.

"It's nature doing its job, and it's beautiful," Bertassoni says. "So, we went farther and decided to try it with the bone vasculature and innervation as well."

Nerve cells added to the mixture formed interconnected networks that persisted after mineralization. Endothelial cells, likewise, formed networks of tubes that remained open after mineralization.

To test the usefulness of the material as a disease model, the researchers implanted their engineered bone beneath the skin of laboratory mice. After a few days, the lab-made blood vessels had connected with the vasculature in the mouse bodies. When the researchers injected prostate cancer cells nearby, they found that tumor growth was three times higher in the mice given mineralized bone constructs than in those with the non-mineralized controls.

The team now is engineering versions with marrow cells growing within a surrounding of artificial bone for use as a model to study the initiation and development of blood cancers including the various forms of leukemia. Also, they have tested the engineered bone-like material as a replacement for injured bone in animal models - with positive results that they expect to report in the near future.

Credit: 
Oregon Health & Science University

Study explores blood-brain barrier leakage in CNS infections

Washington, DC - August 6, 2019 - A new study published in the journal mBio shines light on the breakdown of the blood-brain barrier (BBB) that occurs during many infections of the central nervous system. The findings implicate interferon gamma, a major cytokine upregulated in most central nervous system (CNS) viral infections, as a major contributor to blood-brain barrier breakdown. Using an experimental viral encephalitis mouse model in which mice are infected with reovirus, the research provides new insight into how the breakdown occurs, which may lead to new therapeutic avenues.

"Gene expression studies on brain material from infected mice suggested that one of the pathways that was really upregulated during infection was interferon signaling in general, and in particular, a subset of interferon, the type 2 interferon or interferon gamma," said study investigator Kenneth Tyler, MD, a neurovirologist and Chairman of the Department of Neurology at the University of Colorado School of Medicine. "Interferon was one of the things that was causing not only a loss of pericytes, support cells, but a disruption of the connections that are usually pretty tight between endothelial cells in the blood brain barrier called tight junctions and adherens junctions."

Many previous studies have demonstrated that the blood brain barrier breaks down during the process of encephalitis. Mechanistically, there have been some connections between type 1 and type 2 interferon and that process, but what occurs at the cellular level and molecular level, at the level of the vasculature, has been very unclear.

"Is the blood brain barrier breakdown an early feature of the pathology? Is it late? What sort of relationship does the breakdown have with other aspects of disease progression?" said principal study investigator Julie Siegenthaler, PhD, a neuroscientist in the Department of Pediatrics, University of Colorado School of Medicine.

To find out, scientists at the University of Colorado School of Medicine inoculated reovirus into the brains of neonatal mice, producing a devastating encephalitis, which they then closely monitored. The researchers replicated much of what they were seeing in the mouse brains in experiments using cultured brain endothelial cells.

The researchers found that BBB breakdown happens late in the course of disease. "Disease progression happens over about 10 days. We see the blood brain barrier breakdown happening in the last 6 to 7 days after infection, after seeing evidence that the virus is being replicated and after seeing evidence that there is an inflammatory response," said Dr. Siegenthaler.

Infection upregulated interferon gamma, and endothelial cells in the BBB responded to interferon gamma by initiating a signaling cascade that changed their behavior so that they could no longer maintain the BBB integrity. IFN-gamma reduced barrier properties in cultured brain endothelial cells through Rho kinase (ROCK) mediated cytoskeletal contractions, resulting in junctional disorganization and cell-cell separation.

"Infection with reovirus causes a change in the proteins that regulate the cytoskeleton," said Dr. Siegenthaler. "It causes an overactivation of the cytoskeleton, causing the cells to pull apart from each other, which has not been shown before."

The researchers showed that if they blocked interferon gamma signaling using a neutralizing antibody, many of the disruptions, such as loss of pericytes and changes in tight junctions, could be prevented.

The work may be a model for what happens in other forms of viral and even bacterial infections where there is breakdown of the BBB. "We have learned that in a lot of models of both meningitis and viral encephalitis, you need to stop the replication of the pathogen - whether that is an antibiotic for bacterial meningitis or an antiviral when it is available for viral meningitis - and employ strategies that inhibit a whole series of things that add to the seriousness of the disease that occur downstream of that infection."

"Interferon gamma seems to be mediating the damage in the endothelial cells that make up the blood vessels," said Stephanie Bonney, PhD, who is now a researcher at Seattle Children's Research Institute. "We see changes within the endothelial cells themselves that affect the way they connect to each other. These changes allow for the acceleration of fluids and immune cells into the CNS, contributing to edema and other problems in patients."

"Manipulating the blood brain barrier in combination with antiviral therapies may lead to better outcomes," said Dr. Tyler. "You could specifically target the endothelial cells to maintain blood brain barrier integrity," said Dr. Siegenthaler.

The scientists say more research is needed to show whether other viruses such as West Nile or another flavivirus behave similarly to reovirus, and whether the findings can be extrapolated to infections in humans.

Credit: 
American Society for Microbiology

Antineutrino detection could help remotely monitor nuclear reactors

image: These images compare the evolution of antineutrino spectrum and antineutrino detector response as a function of reactor operational time in a pressurized water reactor and an ultra-long cycle fast reactor.

Image: 
Georgia Tech

Technology to measure the flow of subatomic particles known as antineutrinos from nuclear reactors could allow continuous remote monitoring designed to detect fueling changes that might indicate the diversion of nuclear materials. The monitoring could be done from outside the reactor vessel, and the technology may be sensitive enough to detect substitution of a single fuel assembly.

The technique, which could be used with existing pressurized water reactors as well as future designs expected to require less frequent refueling, could supplement other monitoring techniques, including the presence of human inspectors. The potential utility of the above-ground antineutrino monitoring technique for current and future reactors was confirmed through extensive simulations done by researchers at the Georgia Institute of Technology.

"Antineutrino detectors offer a solution for continuous, real-time verification of what is going on within a nuclear reactor without actually having to be in the reactor core," said Anna Erickson, associate professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering. "You cannot shield antineutrinos, so if the state running a reactor decides to use it for nefarious purposes, they can't prevent us from seeing that there was a change in reactor operations."

The research, to be reported August 6 in the journal Nature Communications, was partially supported by a grant from the Nuclear Regulatory Commission (NRC). The research evaluated two types of reactors, and antineutrino detection technology based on a PROSPECT detector currently deployed at Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR).

Antineutrinos are elementary subatomic particles with an infinitesimally small mass and no electrical charge. They are capable of passing through shielding around a nuclear reactor core, where they are produced as part of the nuclear fission process. The flux of antineutrinos produced in a nuclear reactor depends on the type of fission materials and the power level at which the reactor is operated.

"Traditional nuclear reactors slowly build up plutonium 239 in their cores as a consequence of uranium 238 absorption of neutrons, shifting the fission reaction from uranium 235 to plutonium 239 during the fuel cycle. We can see that in the signature of antineutrino emission changes over time," Erickson said. "If the fuel is changed by a rogue nation attempting to divert plutonium for weapons by replacing fuel assemblies, we should be able to see that with a detector capable of measuring even small changes in the signatures."

The antineutrino signature of the fuel can be as unique as a retinal scan, and how the signature changes over time can be predicted using simulations, she said. "We could then verify that what we see with the antineutrino detector matches what we would expect to see."

In the research, Erickson and recent Ph.D. graduates Christopher Stewart and Abdalla Abou-Jaoude used high-fidelity computer simulations to assess the capabilities of near-field antineutrino detectors that would be located near - but not inside - reactor containment vessels. Among the challenges is distinguishing between particles generated by fission and those from natural background.

"We would measure the energy, position and timing to determine whether a detection was an antineutrino from the reactor or something else," she said. "Antineutrinos are difficult to detect and we cannot do that directly. These particles have a very small chance of interacting with a hydrogen nucleus, so we rely on those protons to convert the antineutrinos into positrons and neutrons."
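The reaction Erickson describes is inverse beta decay, the standard antineutrino detection channel: an electron antineutrino strikes a proton (a hydrogen nucleus in the detector material) and produces a positron and a neutron:

```latex
\bar{\nu}_e + p \rightarrow e^{+} + n
```

The positron's prompt signal followed by the neutron's delayed capture gives the coincident timing signature that helps the detector distinguish reactor antineutrinos from natural background.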

Nuclear reactors now used for power generation must be refueled on a regular basis, and that operation provides an opportunity for human inspection, but future generations of nuclear reactors may operate for as long as 30 years without refueling. The simulation showed that sodium-cooled reactors could also be monitored using antineutrino detectors, though their signatures will be different from those of the current generation of pressurized water reactors.

Among the challenges ahead is reducing the size of the antineutrino detectors to make them portable enough to fit into a vehicle that could be driven past a nuclear reactor. Researchers also want to improve the directionality of the detectors to keep them focused on emissions from the reactor core to boost their ability to detect even small changes.

The detection principle is similar in concept to that of retinal scans used for identity verification. In retinal scans, an infrared beam traverses a person's retina and the blood vessels, which are distinguishable by their higher light absorption relative to other tissue. This mapping information is then extracted and compared to a retinal scan taken earlier and stored in a database. If the two match, the person's identity can be verified.

Similarly, a nuclear reactor continuously emits antineutrinos that vary in flux and spectrum with the particular fuel isotopes undergoing fission. Some antineutrinos interact in a nearby detector via inverse beta decay. The signal measured by that detector is compared to a reference copy stored in a database for the relevant reactor, initial fuel and burnup; a signal that sufficiently matches the reference copy would indicate that the core inventory has not been covertly altered. However, if the antineutrino flux of a perturbed reactor is sufficiently different from what would be expected, that could indicate that a diversion has taken place.
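The matching step described above can be thought of as a goodness-of-fit test between the measured spectrum and the stored reference copy. A minimal sketch of that idea (the energy bins, counts, and threshold here are all hypothetical; the paper's actual statistical treatment is more involved):

```python
# Hypothetical check of a measured antineutrino spectrum against a stored
# reference, using a simple chi-squared statistic over energy bins.

def chi_squared(measured, reference):
    """Sum of (observed - expected)^2 / expected over energy bins."""
    return sum((m - r) ** 2 / r for m, r in zip(measured, reference))

reference = [120.0, 340.0, 510.0, 280.0, 90.0]   # expected counts per bin
measured  = [118.0, 352.0, 495.0, 301.0, 88.0]   # observed counts per bin

threshold = 11.07  # e.g. 95th percentile of chi-squared with 5 deg. of freedom
stat = chi_squared(measured, reference)
print("consistent with declared operation" if stat < threshold
      else "possible diversion - inspect")
```

A statistic below the threshold suggests the core inventory matches the declaration; a sufficiently large deviation would flag a possible diversion for follow-up inspection.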

The emission rates of antineutrino particles at different energies vary with operating lifetime as reactors shift from burning uranium to plutonium. The signal from a pressurized water reactor consists of a repeated 18-month operating cycle with a three-month refueling interval, while the signal from an ultra-long cycle fast reactor (UCFR) would represent continuous operation, excluding maintenance interruptions.

Preventing the proliferation of special nuclear materials suitable for weapons is a long-term concern of researchers from many different agencies and organizations, Erickson said.

"It goes all the way from mining of nuclear material to disposition of nuclear material, and at every step of that process, we have to be concerned about who's handling it and whether it might get into the wrong hands," she explained. "The picture is more complicated because we don't want to prevent the use of nuclear materials for power generation because nuclear is a big contributor to non-carbon energy."

The paper shows the feasibility of the technique and should encourage the continued development of detector technologies, Erickson said.

"One of the highlights of the research is a detailed analysis of assembly-level diversion that is critical to our understanding of the limitations on antineutrino detectors and the potential implications for policy that could be implemented," she said. "I think the paper will encourage people to look into future systems in more detail."

Credit: 
Georgia Institute of Technology

1 in 300 thrives on very-early-to-bed, very-early-to-rise routine

A quirk of the body clock that lures some people to sleep at 8 p.m., enabling them to greet the new day as early as 4 a.m., may be significantly more common than previously believed.

So-called advanced sleep phase -- previously believed to be very rare -- may affect at least one in 300 adults, according to a study led by UC San Francisco and publishing in the journal SLEEP on Aug. 6, 2019.

Advanced sleep phase means that the body's clock, or circadian rhythm, operates on a schedule hours earlier than most people's, with a premature release of the sleep hormone melatonin and shift in body temperature. The condition is distinct from the early rising that develops with normal aging, as well as the waking in the wee hours experienced by people with depression.

"While most people struggle with getting out of bed at 4 or 5 a.m., people with advanced sleep phase wake up naturally at this time, rested and ready to take on the day," said the study's senior author, Louis Ptacek, MD, professor of neurology at the UCSF School of Medicine. "These extreme early birds tend to function well in the daytime but may have trouble staying awake for social commitments in the evening."

Advanced Sleepers 'Up and at 'Em' on Weekends too

Additionally, "advanced sleepers" rouse more easily than others, he said, and are satisfied with an average of an extra five to 10 minutes of sleep on non-work days, versus the 30 to 38 minutes of extra sleep of their non-advanced-sleeper family members.

Ptacek and his colleagues at the University of Utah and the University of Wisconsin calculated the estimated prevalence of advanced sleepers by evaluating data from patients at a sleep disorder clinic over a nine-year period. In total, 2,422 patients were followed, of whom 1,748 presented with symptoms of obstructive sleep apnea, a condition that the authors found was not related to sleep-cycle hours.

Among this group, 12 people met initial screening criteria for advanced sleep phase. Four of the 12 declined enrollment in the study, and the remaining eight comprised 0.33 percent of the total number of patients -- or about one out of 300 -- the figure that was extrapolated to the general population.

This is a conservative figure, the researchers noted, since it excluded the four patients who did not want to participate in the study and may have met the criteria for advanced sleep phase, as well as those advanced sleepers who had no need to visit a sleep clinic.
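The extrapolation behind the headline number is simple arithmetic, reproduced here as a check (only the two counts come from the study; the variable names are ours):

```python
# Prevalence arithmetic: eight of 2,422 sleep-clinic patients met the
# criteria for advanced sleep phase.

patients = 2422   # total patients followed over nine years
advanced = 8      # patients who met criteria and enrolled

prevalence = advanced / patients   # fraction of the clinic population
percent = 100 * prevalence         # roughly 0.33 percent
one_in_n = patients / advanced     # roughly 1 in 303, i.e. about 1 in 300
```

Since four additional screen-positive patients declined to enroll, and advanced sleepers rarely seek out a sleep clinic at all, this ratio understates the true prevalence, as the researchers note.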

Night Owls More Likely to Struggle with Sleep Deficits

"Generally, we find that it's the people with delayed sleep phase -- those night owls that can't sleep until as late as 7 a.m. -- who are more likely to visit a sleep clinic. They have trouble getting up for work and frequently deal with chronic sleep deprivation," said Ptacek.

Criteria for advanced sleep phase include the ability to fall asleep before 8:30 p.m. and wake before 5:30 a.m. regardless of any occupational or social obligations, and having only one sleep period per day. Other criteria include the establishment of this sleep-wake pattern by the age of 30, no use of stimulants or sedatives, no bright lights to aid early rising and no medical conditions that may impact sleep.

All study participants were personally seen by Christopher R. Jones, MD, a former neurologist at the University of Utah and co-author of the paper. Patients were asked about their medical histories and both past and present sleep habits on work days and work-free days. Researchers also looked at sleep logs and level of melatonin in the participants' saliva, as well as sleep studies, or polysomnography, that record brainwaves, oxygen levels in the blood, heart rate and breathing.

Of note, all eight of the advanced sleepers claimed that they had at least one first-degree relative with the same sleep-wake schedule, indicating so-called familial advanced sleep phase. Of the eight relatives tested, three did not meet the full criteria for advanced sleep phase and the authors calculated that the remaining five represented 0.21 percent of the general population.

Two of the remaining five were found to have genetic mutations previously identified with familial advanced sleep phase; conditions associated with these genes include migraine and seasonal affective disorder. The authors believe that the percentage of advanced sleepers who have the familial variant may approach 100 percent, since some participants may have de novo mutations that would be found in their children but not in their parents or siblings, and some may have family members with "nonpenetrant" carrier mutations.

"We hope the results of this study will not only raise awareness of advanced sleep phase and familial advanced sleep phase," said Ptacek, "but also help identify the circadian clock genes and any medical conditions that they may influence."

Credit: 
University of California - San Francisco