Parent mentors improve Latino children's health insurance coverage rates

image: The study's lead author Dr. Glenn Flores of UConn School of Medicine and Connecticut Children's Medical Center (Photo by Connecticut Children's).

Image: 
Connecticut Children's

Latino children have the highest uninsured rate in the United States. However, new study findings in the March issue of Health Affairs show parent mentors are highly effective at providing uninsured Latino children with health insurance coverage.

"Our randomized trial testing the power of a simple intervention program that trained Latino parents to mentor other Latino parents rigorously documents that we can truly help eliminate disparities in insurance coverage, while improving access to healthcare, care quality, and parental satisfaction, and reducing families' out-of-pocket healthcare costs and financial burden," says the study's lead author Dr. Glenn Flores, professor and associate chair of research in the Department of Pediatrics at UConn School of Medicine and chief research officer at Connecticut Children's Medical Center.

Even though one in four children in the U.S. is Latino, more than 18.2 million in all, Latino children continue to be the most likely to be uninsured, accounting for one-third of all uninsured children nationally. Currently, Latino children have the highest uninsurance rate of any U.S. racial/ethnic group of children, at 7.9 percent, compared with 4.1 percent of white, 5 percent of Asian, and 5.5 percent of African-American children.

"Lack of health insurance in children is a tremendous health disparity that urgently needs to be addressed, especially in the highly impacted Latino community," says Flores. "When children are uninsured, they are at much higher risk of poorer health, lack of healthcare access, more trips to the emergency room and hospitalizations, and sadly, premature death."

Flores and his research team evaluated this innovative intervention program after their team's prior study showed community health workers were more effective than traditional outreach and enrollment methods used by the state Medicaid and Children's Health Insurance Programs (CHIP).

This randomized, controlled, community-based trial testing the new Kids' HELP (Health Insurance by Educating Lots of Parents) intervention program included 155 uninsured Latino children under 18 years old eligible for Medicaid or CHIP in Texas, along with their parents. The intervention employed trained and paid bilingual Latino parent mentors with personal experience and success navigating Medicaid or CHIP coverage for their own children.

At one year follow-up, 95 percent of Latino children assigned a parent mentor obtained health insurance, compared with only 69 percent of children in the control group, who received the state's traditional health insurance outreach and enrollment methods. Those in the parent mentor program also got their coverage much faster, at an average of two months, versus five months for controls.

Parent mentors taught parents about available insurance programs, helped them complete and submit applications, contacted and acted as a family liaison with Medicaid and CHIP, assisted families with finding medical and dental homes for their children, and helped them to address any social determinants of health in the household, including food insufficiency, housing problems, and poverty.

In addition, the study found children in the parent mentor group gained greater healthcare access to preventive care visits, primary care providers, and specialists. Plus, parents had significantly lower out-of-pocket spending, and were three times less likely to report financial problems due to their child's medical bills. The quality of well-child and specialty care also was much better, and the program saved $698 annually per child insured.

The intervention additionally was highly effective in teaching parents how to maintain their children's coverage. Two years after families stopped having parent mentors, 100 percent of children in the parent mentor group were insured, thanks to knowledge and skills gained from the parent mentoring program.

"Our highly successful Kids' HELP program holds great promise for insuring more Latino children, and even other racial or ethnic groups in other states and nationally, while eliminating health disparities," says Flores. "Parent mentors are such an exciting intervention because they insure more children, improve healthcare access and quality, reduce families' financial stress, create jobs, reinvest dollars in underserved communities, and save money."

Because of these study findings and other work by Flores and his team on parent mentors, CHIP reauthorization legislation approved by the federal government in January 2018 makes organizations that employ parent mentors eligible to receive part of $120 million in grants for CHIP outreach and enrollment, so all 50 states and D.C. have the opportunity to fund and implement parent mentor programs.

Credit: 
University of Connecticut

Tropical forest response to drought depends on age

image: University of Wyoming postdoctoral researcher Mario Bretfeld uses a sensor to assess water flow in a tree in the Panama Canal watershed as part of a project conducted in 2015-16. Results of the research, published this week, show that tropical trees respond to drought differently depending on their ages.

Image: 
Mario Bretfeld

Tropical trees respond to drought differently depending on their ages, according to new research led by a postdoctoral scientist at the University of Wyoming.

Mario Bretfeld, who works in the lab of UW Department of Botany Professor Brent Ewers, is the lead author of an article that appears today (Monday) in the journal New Phytologist, one of the top journals in the field of plant controls over the water cycle. The research was conducted in collaboration with the Smithsonian Tropical Research Institute (STRI).

"The paper provides some very interesting insights into how forest age interacts with drought to determine how much water is produced from tropical forests," Ewers says. "This work has implications for the operation of the Panama Canal, as well as providing fundamental insights into how forests control the water cycle."

The research team compared responses to drought in 8-, 25- and 80-year-old forest patches in the Agua Salud project, a 700-hectare land-use experiment collaboration with the Panama Canal Authority, Panama's Ministry of the Environment and other partners. The team measured water use in 76 trees representing more than 40 different species in forests of different ages in the Panama Canal watershed during an especially extended drought resulting from El Niño conditions in 2015 and 2016.

The information gained from the study is critical to understanding how tropical forests respond to the severe and frequent droughts predicted by climate change scenarios, says Jefferson Hall, staff scientist at STRI. He notes that, globally, 2016 registered as the warmest year since climate records began to be compiled.

"Droughts can be really hard on tropical forests," Hall says. "Too much heat, low humidity and not enough water can drastically alter which trees survive. We found that forest age matters."

Water moves from soil into roots, through stems and branches into tree leaves, where some of it is used for photosynthesis. Most of this water is released into the atmosphere -- a process called transpiration. Transpiration, or plant water use, can be measured using sap flow sensors in the stem.

"Transpiration is regulated by external factors -- for example, how dry the atmosphere is and how much water is available in the soil -- as well as internal factors, such as differences in the structure and function of wood and leaves," Bretfeld says. "Our results indicate that the factors most important for regulation of transpiration in young forests had to do with their ability to access water in the soil, whereas older forests were more affected by atmospheric conditions."

During the record drought, water use increased significantly in the oldest forests, whose expansive root systems supplied trees with water from deep soil layers and allowed for maintenance of transpiration on typically sunny and hot days. Trees in younger forests suffered from a lack of water, probably because their shallower root systems could not access water stored deeper in the ground. In response, trees in younger forests regulated the amount of water they were using during the dry period.

"All trees are not created equal. Their species and age matter. We are working on designing techniques we're calling 'smart reforestation,' making decisions about which tree species to plant to achieve different land-use objectives," Hall says. "This study is the perfect example of the link between basic and applied science, because it highlights the need to consider drought tolerance as we reforest wet, yet drought-prone areas."

Credit: 
University of Wyoming

Preschoolers exposed to nighttime light lack melatonin

image: CU Boulder researcher Lameese Akacem and undergraduate research assistant Allie Coy play with Lauren Meier over a light table in Meier's Colorado home.

Image: 
University of Colorado Boulder

Exposing preschoolers to an hour of bright light before bedtime almost completely shuts down their production of the sleep-promoting hormone melatonin and keeps it suppressed for at least 50 minutes after lights out, according to new University of Colorado Boulder research.

The study, published today in the journal Physiological Reports, is the first to assess the hormonal impact nighttime light exposure can have on young children.

The study comes at a time when use of electronics is rapidly expanding among this age group and adds to a growing body of evidence suggesting that, because of structural differences in their eyes, children may be more vulnerable to the impact light has on sleep and the body clock.

"Although the effects of light are well studied in adults, virtually nothing is known about how evening light exposure affects the physiology, health and development of preschool-aged children," said lead author Lameese Akacem, a CU Boulder instructor and researcher in the Sleep and Development Lab. "In this study we found that these kids were extremely sensitive to light."

For the study, the researchers enrolled 10 healthy children ages 3 to 5 years in a seven-day protocol. On days one through five, the children followed a strict bedtime schedule to normalize their body clocks and settle into a pattern in which their melatonin levels began to go up at about the same time each evening.

On day six, Akacem's team came into the children's homes and created a dim-light environment, covering windows with black plastic and swapping out existing lights with low-wattage bulbs. This ensured that all the children were exposed to the same amount of light, which can influence melatonin timing and levels, before samples were taken.

That afternoon, the researchers took periodic saliva samples to assess melatonin levels at various times. The following evening, after spending the day in what they playfully referred to as "the cave," the children were invited to color or play with magnetic tiles on top of a light table emitting 1,000 lux of light (about the brightness of a bright room) for one hour.

Then the researchers took samples again, comparing them to those taken the night before.

Melatonin levels were 88 percent lower after bright light exposure. Levels remained suppressed at least 50 minutes after the light was shut off.

Direct comparisons between this study and studies in adults must be made with caution because of differing research protocols, the researchers stress. However, they note that in one study, a one-hour light stimulus of 10,000 lux (10 times that of the current study) suppressed melatonin by only 39 percent in adults.

"Light is our brain clock's primary timekeeper," explains senior author Monique LeBourgeois, an associate professor in the Department of Integrative Physiology. "We know younger individuals have larger pupils, and their lenses are more transparent. This heightened sensitivity to light may make them even more susceptible to dysregulation of sleep and the circadian clock."

She explains that when light hits the retina in the eye in the evening, it produces a cascade of signals to the circadian system to suppress melatonin and push back the body's entrance into its "biological night." For preschoolers, this may not only lead to trouble falling asleep one night, but to chronic problems feeling sleepy at bedtime.

Melatonin also plays a role in other bodily processes, regulating temperature, blood pressure and glucose metabolism.

"The effects of light at night exposure can definitely go beyond sleep," Akacem said.

The study sample size was small and it used only one intensity of light, 1,000 lux, which is far greater than the intensity of a typical handheld electronic device, she notes.

With a new $2.4 million grant from the National Institutes of Health, LeBourgeois recently launched a study in which she will expose 90 children to light of different intensities to determine how much it takes to impact the circadian clock.

"The preschool years are a very sensitive time of development during which use of digital media is growing more and more pervasive," Le Bourgeois said. Use of electronic media among young children has tripled since 2011. "We hope this research can help parents and clinicians make informed decisions on children's light exposure."

The takeaway for parents today: Dim the lights in the hours before bedtime.

Credit: 
University of Colorado at Boulder

UTSA researchers want to teach computers to learn like humans

A new study by Paul Rad, assistant director of the UTSA Open Cloud Institute, and Nicole Beebe, Melvin Lachman Distinguished Professor in Entrepreneurship and director of the UTSA Cyber Center for Security and Analytics, describes a new cloud-based learning platform for artificial intelligence (A.I.) that teaches machines to learn like humans.

"Cognitive learning is all about teaching computers to learn without having to explicitly program them," Rad said. "In this study, we're presenting an entirely new platform for machine learning to teach computers to learn the way we do."

To build the cloud-based platform, Rad and Beebe studied how education and understanding have evolved over the past five centuries. They wanted to gain a better picture of how computers could be taught to approach deductive reasoning.

"Our goal here is to teach the machine to become smarter, so that it can help us. That's what they're here to do," Rad said. "So how do we become better? We learn from experience."

The UTSA researchers also studied how humans learn across their lifetimes. Children, for example, begin by identifying objects such as faces and toys, then move on from there to understand communication. This process helps their thought processes mature as they get older.

Ultimately, Rad and Beebe want AI agents to learn automatic threat detection. This means the AI agent can dynamically learn network traffic patterns and normal behavior and thus become more effective at discovering and thwarting new attacks before significant damage is done.

"Or It would be nice if an intelligent computer assistant could aggregate thousands of news items or memos for someone, so that the process of reading that material was quicker and that person could decide almost instantly how to use it," Rad said.

Additionally, intelligent machines could be used in medical diagnoses, which Rad says could lead to more affordable health care, and other fields that require precise, deductive reasoning.

"During the history, humans have invented and used tools such as swords, calculators and cars, and tools have changed human society and enable us to evolve," Rad said. "That's what we're doing here, but on a much more impactful scale."

Credit: 
University of Texas at San Antonio

Chaperones can hold protein in non-equilibrium states

After translation, proteins must fold to their functional 3D shape and keep it while under attack by various perturbations: external stress such as temperature changes, wrong interactions with other proteins in the cell, and even deleterious mutations. To ensure that proteins stay functional, the cell uses a particular class of proteins, the chaperones. These are present in all organisms and are among the most abundant proteins in cells, emphasizing how crucial they are to sustain life.

The current view is that the functional 3D shape of a protein is also its most thermodynamically stable state, and that chaperones help proteins reach this state by keeping them from aggregating and by allowing them to escape so-called "kinetic traps" - points where the protein may get "stuck" in a non-functional state. And to do all this, chaperones need energy, which in the cell comes in the form of adenosine triphosphate, or ATP.

The labs of Paolo De Los Rios at EPFL and Pierre Goloubinoff at UNIL, in collaboration with Alessandro Barducci (INSERM - Montpellier), have now shown that chaperones use the energy from ATP to actively maintain the proteins they work on in a non-equilibrium but transiently stable version of the functional form, even under conditions in which the functional form should not be thermodynamically stable.

"What we found is that chaperones can actively repair and revert the proteins they act upon in a non-equilibrium steady-state," says De Los Rios. "In this state, the proteins are in their native state even if, from an equilibrium thermodynamics perspective, they should not."

The researchers combined theoretical and experimental approaches to prove that chaperones are molecular motors, capable of performing work and extending the stability range of proteins. The results may challenge parts of the prevalent view that evolution has designed amino acid sequences so that the functional state of the protein they belong to is thermodynamically optimal.

"In the presence of chaperones, even thermodynamically sub-optimal proteins might be able to reach their functional form, facilitating evolution in its endless exploration of chemical possibilities," says De Los Rios.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Culturing cheaper stem cells

image: A human embryonic stem cell colony cultured in the newly developed medium.

Image: 
Kyoto Univ iCeMS

Human pluripotent stem cells (hPSCs) can infinitely self-renew and develop into all major cell types in the body, making them important for organ repair and replacement. But culturing them in large quantities can be expensive. Now, scientists at Japan's Kyoto University, with colleagues in India and Iran, have developed a more cost-effective culture by using a new combination of chemical compounds.

Current culture systems need to contain components that can sustain hPSC self-renewal while preventing them from differentiating into other cell types. Of these components, genetically engineered growth factors, which are produced in bacteria or animal cells, are particularly expensive.

The new culture was able to support and maintain the long-term renewal of hPSCs without the need for expensive growth factors.

Kouichi Hasegawa of Kyoto University's Institute for Integrated Cell-Material Sciences (iCeMS) and his team developed their 'AKIT' culture using three chemical compounds: 1-azakenpaullone (AK), ID-8 (I), and tacrolimus (T).

1-azakenpaullone supported hPSC self-renewal, but also induced their differentiation into other cells. To turn off the differentiation, the team added ID-8. This compound, however, also leads to partial cell growth arrest, so a third compound, tacrolimus, was finally added to counter this effect.

The survival and growth rates of some hPSC cell lines were slightly lower in the AKIT medium than in other culture media. But its key advantage lies in the simplicity and low cost of its preparation, which is five to ten times cheaper than any currently available hPSC culture medium.

"This improved method of culturing may thus facilitate the large-scale, quality-controlled and cost-effective translation of hPSC culture practices to clinical and drug-screening applications," say the researchers in their study published in the journal Nature Biomedical Engineering.

Credit: 
Kyoto University

Massive astrophysical objects governed by subatomic equation

image: An artist's impression of research presented in Batygin (2018), MNRAS 475, 4. Propagation of waves through an astrophysical disk can be understood using Schrödinger's equation -- a cornerstone of quantum mechanics.

Image: 
James Tuttle Keane, California Institute of Technology

Quantum mechanics is the branch of physics governing the sometimes-strange behavior of the tiny particles that make up our universe. Equations describing the quantum world are generally confined to the subatomic realm--the mathematics relevant at very small scales is not relevant at larger scales, and vice versa. However, a surprising new discovery from a Caltech researcher suggests that the Schrödinger Equation--the fundamental equation of quantum mechanics--is remarkably useful in describing the long-term evolution of certain astronomical structures.

The work, done by Konstantin Batygin, a Caltech assistant professor of planetary science and Van Nuys Page Scholar, is described in a paper appearing in the March 5 issue of Monthly Notices of the Royal Astronomical Society.

Massive astronomical objects are frequently encircled by groups of smaller objects that revolve around them, like the planets around the sun. For example, supermassive black holes are orbited by swarms of stars, which are themselves orbited by enormous amounts of rock, ice, and other space debris. Due to gravitational forces, these huge volumes of material form into flat, round disks. These disks, made up of countless individual particles orbiting en masse, can range from the size of the solar system to many light-years across.

Astrophysical disks of material generally do not retain simple circular shapes throughout their lifetimes. Instead, over millions of years, these disks slowly evolve to exhibit large-scale distortions, bending and warping like ripples on a pond. Exactly how these warps emerge and propagate has long puzzled astronomers, and even computer simulations have not offered a definitive answer, as the process is both complex and prohibitively expensive to model directly.

While teaching a Caltech course on planetary physics, Batygin (the theorist behind the proposed existence of Planet Nine) turned to an approximation scheme called perturbation theory to formulate a simple mathematical representation of disk evolution. This approximation, often used by astronomers, is based upon equations developed by the 18th-century mathematicians Joseph-Louis Lagrange and Pierre-Simon Laplace. Within the framework of these equations, the individual particles and pebbles on each particular orbital trajectory are mathematically smeared together. In this way, a disk can be modeled as a series of concentric wires that slowly exchange orbital angular momentum among one another.

As an analogy, in our own solar system one can imagine breaking each planet into pieces and spreading those pieces around the orbit the planet takes around the sun, such that the sun is encircled by a collection of massive rings that interact gravitationally. The vibrations of these rings mirror the actual planetary orbital evolution that unfolds over millions of years, making the approximation quite accurate.

Using this approximation to model disk evolution, however, produced unexpected results.

"When we do this with all the material in a disk, we can get more and more meticulous, representing the disk as an ever-larger number of ever-thinner wires," Batygin says. "Eventually, you can approximate the number of wires in the disk to be infinite, which allows you to mathematically blur them together into a continuum. When I did this, astonishingly, the Schrödinger Equation emerged in my calculations."

The Schrödinger Equation is the foundation of quantum mechanics: It describes the non-intuitive behavior of systems at atomic and subatomic scales. One of these non-intuitive behaviors is that subatomic particles actually behave more like waves than like discrete particles--a phenomenon called wave-particle duality. Batygin's work suggests that large-scale warps in astrophysical disks behave similarly to particles, and the propagation of warps within the disk material can be described by the same mathematics used to describe the behavior of a single quantum particle if it were bouncing back and forth between the inner and outer edges of the disk.
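
For reference, the one-dimensional time-dependent Schrödinger equation has the familiar form (in LaTeX notation)

i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}} + V(x)\,\psi ,

and the "particle in a box" case corresponds to V = 0 between two perfectly reflecting walls. In the disk problem described above, the analogue of the confined quantum particle is the large-scale warp bouncing between the disk's inner and outer edges; the precise mapping of disk quantities onto \psi, m and V is given in Batygin's paper and is not reproduced in this summary.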

The Schrödinger Equation is well studied, and finding that such a quintessential equation is able to describe the long-term evolution of astrophysical disks should be useful for scientists who model such large-scale phenomena. Additionally, adds Batygin, it is intriguing that two seemingly unrelated branches of physics--those that represent the largest and the smallest of scales in nature--can be governed by similar mathematics.

"This discovery is surprising because the Schrödinger Equation is an unlikely formula to arise when looking at distances on the order of light-years," says Batygin. "The equations that are relevant to subatomic physics are generally not relevant to massive, astronomical phenomena. Thus, I was fascinated to find a situation in which an equation that is typically used only for very small systems also works in describing very large systems."

"Fundamentally, the Schrödinger Equation governs the evolution of wave-like disturbances." says Batygin. "In a sense, the waves that represent the warps and lopsidedness of astrophysical disks are not too different from the waves on a vibrating string, which are themselves not too different from the motion of a quantum particle in a box. In retrospect, it seems like an obvious connection, but it's exciting to begin to uncover the mathematical backbone behind this reciprocity."

Credit: 
California Institute of Technology

Provide stroke patients with palliative care support minus the label

When caring for stroke patients, health care providers should focus on the social and emotional issues facing patients, rather than only physical rehabilitation, according to a new study published in CMAJ (Canadian Medical Association Journal).

"Rather than focusing only on physical rehabilitation, a realistic approach to managing care should consider the emotional needs of patients and their caregivers," says Dr. Scott Murray, Primary Palliative Care Research Group, University of Edinburgh, Edinburgh, United Kingdom. "Balancing the need for hope of recovery with the potential of severe disability or death is important in this approach."

Stroke is the second leading cause of death, accounting for 11% of deaths worldwide. Survival is especially poor for people who have had a severe total anterior circulation stroke, which involves loss of motor control, language and other functions.

The study of 219 patients in central Scotland with severe stroke (total anterior circulation stroke) looked at the experiences, concerns and priorities of patients, families and health care professionals in the 12 months after stroke. In the first 6 months, 57% (125 people) died, with a 1-year fatality rate of 60% (132 deaths). About two-thirds (67%) of deaths occurred within the first month after stroke.

Researchers found that patients and their families reported grief over the loss of their previous life, anxiety among caregivers over whether they were "doing the right thing," uncertainty about the future and confusion about prognosis. As well, the term "palliative care" was interpreted negatively by many health care providers, families and informal caregivers, as it is associated with care for people, for example patients with advanced cancer, who are dying.

"Many patients and informal caregivers would have welcomed more support in making decisions and in planning for the future from day one," writes Dr. Murray with coauthors. "The focus was on active rehabilitation, recovery, motivation and hope, with much less discussion and preparation for limited recovery."

The authors suggest that the principles of palliative care rather than the term itself should be applied to stroke patients, which means supporting people to live well with deteriorating health and making them comfortable until their eventual death.

In a related commentary (http://www.cmaj.ca/lookup/doi/10.1503/cmaj.170956), Dr. Jessica Simon, Department of Oncology, University of Calgary, writes, "the challenging questions for physicians and other health care providers should not be, 'What shall we call it?' or 'Who should receive palliative care?'; the questions for each patient who is facing the challenges associated with life-threatening illness should be, 'Am I providing the palliative care support my patient needs?' and 'Is there access to sufficient specialist palliative care resources in my community if needed?'."

"Outcomes, experiences and palliative care in major stroke: a multicentre, mixed-method, longitudinal study" is published March 5, 2018.

Credit: 
Canadian Medical Association Journal

How are we related? A Compara-bly easy workflow to find gene families

Published in GigaScience, the open source Galaxy workflow allows researchers to make easier work of finding gene families, an important step in analysing the evolution, structure and function of genes across species.

Co-author Wilfried Haerty explained why this tool is so useful to biologists: "The software developed at the Earlham Institute enables scientists to investigate species of interest using a flexible and reproducible pipeline. The performance of our workflow was assessed on vertebrate genome assemblies of various qualities (platypus, pig, horse, dog, mouse and human). The species were selected to assess the impact of genome quality on gene family identification. The mouse, dog and human genomes are of high quality whereas the three others are at different stages of analysis completion."

Based on and expanding Ensembl's existing EnsemblCompara Gene Trees pipeline, the GeneSeqToFamily workflow removes many complex prerequisites of the process, such as having to use the command line to install a large number of separate tools, by converting the whole process into Galaxy, a much simpler platform to use.

Importantly, the workflow is highly customisable, allowing users to choose parameters, change tools and run the software on their own genes, without having to use the Ensembl database.

Not just a workflow, GeneSeqToFamily contains a number of new, standalone Galaxy tools, including TreeBeST, hcluster_sg, T-Coffee and ETE. Developed at EI by Anil Thanki and Nicola Soranzo of the Data Infrastructure Group, the software makes the process of finding and generating phylogenetic trees easier, using a range of open platforms and databases. Anil Thanki, Scientific Programmer, said: "We are excited to put our work in the open domain, where it allows biologists and bioinformaticians to use the Ensembl Compara GeneTrees Pipeline in a simple, graphical user interface and modify it if needed."
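
For readers unfamiliar with these components, the short sketch below shows the kind of gene-tree handling the ETE toolkit (the ete3 Python library) provides on its own; it is a generic illustration using an invented Newick tree, not a step taken from the GeneSeqToFamily workflow itself.

```python
from ete3 import Tree  # ETE toolkit for tree handling (pip install ete3)

# A made-up gene tree in Newick format for three hypothetical orthologues.
newick = "((human_geneA:0.10,mouse_geneA:0.12):0.05,platypus_geneA:0.30);"
tree = Tree(newick)

# Inspect the tree: a text rendering plus the leaf (gene) names, the sort of
# quick check one might run on trees produced by a gene-family pipeline.
print(tree.get_ascii(show_internal=False))
print(tree.get_leaf_names())
```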

The team hopes that the new workflow will help users unfamiliar with the complexities associated with using Compara to be able to more easily analyse phylogenetic datasets, while collating a number of useful gene family tools in one Galaxy workflow. Users can either select existing Ensembl databases to use as the reference sets for their analysis, or provide their own data in the same format, and tools are provided that can help.

Earlham Institute is committed to providing tools and algorithms to support, enable and develop computational biology and life sciences research, with projects such as Galaxy helping to open access to a range of scientific tools and databases.

The Data Infrastructure group, led by Dr. Rob Davey, also supports resources such as CyVerse UK and COPO which, alongside Galaxy, expand the availability and usability of computational resources to the wider scientific community in the UK and internationally through EI's National Capability in e-Infrastructure.

Credit: 
Earlham Institute

Changing size of neurons could shed light on new treatments for motor neurone disease

New research published in The Journal of Physiology improves our understanding of how motor nerve cells (neurons) respond to motor neurone disease, which could help us identify new treatment options.

Motor neurone disease, also referred to as amyotrophic lateral sclerosis (ALS), is associated with the death of motor nerve cells (neurons). It starts with the progressive loss of muscle function, followed by paralysis and ultimately death due to the inability to breathe. Currently, there is no cure for ALS and no effective treatment to halt, or reverse, the progression of the disease. Most people with ALS die within 3 to 5 years of when symptoms first appear.

Previous studies in animal models of ALS have reported inconsistencies in the changes in the size of motor neurons. This new study is the first to show robust evidence that motor neurons change size over the course of disease progression and that, crucially, different types of neurons experience different changes. Specifically, the study shows that motor neuron types that are more vulnerable to the disease - that is, they die first - increase in size very early in the disease, before there are symptoms. Other motor neuron types that are more resistant to the disease (they die last) do not increase their size. These changes in the size of the motor neurons have a significant effect on their function and their fate as the disease progresses.

The hope is that by understanding more about the mechanisms by which the neurons are changing size, it will be possible to identify and pursue new strategies for slowing or halting motor nerve cell death.

This research suggests motor neurons might alter their characteristics in response to the disease in an attempt to compensate for loss of function. However, these changes can lead to the neuron's early death. Furthermore, the research supports the idea that the most vulnerable motor neurons undergo unique changes that might impact their ability to survive.

The research, conducted at Wright State University, involved identifying and measuring size changes of motor neuron types in a mouse model of familial ALS. The motor neurons were examined at every key stage of the disease to observe when and where these changes begin, and how they progress through the entirety of the disease. Specific antibodies were used as markers to bind to the structure of motor neurons so that they could be easily viewed under high-power microscopes, and a computer program performed the three-dimensional measurement of the size and shape of a motor neuron's cell body.

It is important to note that the research was carried out in only one mouse model, the most aggressive mouse model of ALS. Future work should focus on other mouse models of ALS in order to determine how well these results are likely to translate to human patients.

Sherif M. Elbasiouny, the lead investigator on the research, commented on potential areas for further study:

"This research approach could be applicable not only to ALS, but also to other neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases. This new understanding could help us to identify new therapeutic targets for improving motor neuron survival."

Credit: 
The Physiological Society

Short-term increases in inhaled steroid doses do not prevent asthma flare-ups in children

Researchers have found that temporarily increasing the dosage of inhaled steroids when asthma symptoms begin to worsen does not effectively prevent severe flare-ups, and may be associated with slowing a child's growth, challenging a common medical practice involving children with mild-to-moderate asthma.

The study, funded by the National Heart, Lung, and Blood Institute (NHLBI), part of the National Institutes of Health, will appear online on March 3 in the New England Journal of Medicine (NEJM) to coincide with its presentation at a meeting of the 2018 Joint Congress of the American Academy of Allergy, Asthma & Immunology (AAAAI) and the World Allergy Organization (WAO) in Orlando, Florida. It will appear in print on March 8th.

Asthma flare-ups in children are common and costly, and to prevent them, many health professionals recommend increasing the doses of inhaled steroids from low to high at early signs of symptoms, such as coughing, wheezing, and shortness of breath. Until now, researchers had not rigorously tested the safety and efficacy of this strategy in children with mild-to-moderate asthma.

"These findings suggest that a short-term increase to high-dose inhaled steroids should not be routinely included in asthma treatment plans for children with mild-moderate asthma who are regularly using low-dose inhaled corticosteroids," said study leader Daniel Jackson, M.D., associate professor of pediatrics at the University of Wisconsin School of Medicine and Public Health, Madison, and an expert on childhood asthma. "Low-dose inhaled steroids remain the cornerstone of daily treatment in affected children."

The research team studied 254 children 5 to 11 years of age with mild-to-moderate asthma for nearly a year. All the children were treated with low-dose inhaled corticosteroids (two puffs from an inhaler twice daily). At the earliest signs of asthma flare-up, which some children experienced multiple times throughout the year, the researchers continued giving low-dose inhaled steroids to half of the children and increased to high-dose inhaled steroids (five times the standard dose) in the other half, twice daily for seven days during each episode.

Though the children in the high-dose group had 14 percent more exposure to inhaled steroids than the low-dose group, they did not experience fewer severe flare-ups. The number of asthma symptoms, the length of time until the first severe flare-up, and the use of albuterol (a drug used as a rescue medication for asthma symptoms) were similar between the two groups.

Unexpectedly, the investigators found that the rate of growth of children in the short-term high-dose strategy group was about 0.23 centimeters per year less than the rate for children in the low-dose strategy group, even though the high-dose treatments were given only about two weeks per year on average. While the growth difference was small, the finding echoes previous studies showing that children who take inhaled corticosteroids for asthma may experience a small negative impact on their growth rate. More frequent or prolonged high-dose steroid use in children might increase this adverse effect, the researchers caution.

The study did not include children with asthma who do not take inhaled steroids regularly, nor did it include adults. "This study allows caregivers to make informed decisions about how to treat their young patients with asthma," said James Kiley, Ph.D., director of the NHLBI's Division of Lung Diseases. "Trials like this can be used in the development of treatment guidelines for children with asthma."

Credit: 
NIH/National Heart, Lung and Blood Institute

Kids persistently allergic to cow's milk are smaller than peers with nut allergies

image: Karen A. Robbins, M.D., a pediatric allergist/immunologist at Children's National Health System and lead study author.

Image: 
Children's National Health System

ORLANDO, Fla.--March 4, 2018--Children who experience persistent allergies to cow's milk may remain shorter and lighter throughout pre-adolescence when compared with children who are allergic to peanuts or tree nuts, according to a retrospective chart review to be presented March 4, 2018, during the American Academy of Allergy, Asthma & Immunology/World Allergy Organization (AAAAI/WAO) Joint Conference.

"The relationship between food allergies and childhood growth patterns is complex, and we have an incomplete understanding about the influence food allergies have on children's growth," says Karen A. Robbins, M.D., a pediatric allergist/immunologist at Children's National Health System and lead study author. "Our study begins to fill this research gap but further study is needed, especially as children enter their teens, to gauge whether these growth deficits are transitory or lasting."

Approximately 6 percent to 8 percent of U.S. children suffer from a food allergy, according to the AAAAI. Eight food groups account for 90 percent of serious allergic reactions, including milk, egg, fish, crustacean shellfish, wheat, soy, peanuts and tree nuts, the Centers for Disease Control and Prevention adds. Allergy to cow's milk in particular can foreclose a wide array of food choices during early childhood, a time when children's bodies undergo a series of growth spurts.

"We learned from our previous research that there is a continuum of risk for deficits in height and weight among children with food allergies, and kids who are allergic to cow's milk are at heightened risk," Dr. Robbins adds. "They never have had cow's milk in their diet. Looking at food labeling, many items 'may contain milk,' which severely narrows what could be a wide variety of food items for growing children. They also frequently have allergies to additional foods."

To gauge how specific food allergies impact children's height and weight, the study team conducted a longitudinal chart review for 191 children. To be included in the study, the children had to have at least one clinic visit from the time they were aged 2 to 4, 5 to 8 and 9 to 12 years old, ages that span from early childhood to preadolescence. From each clinical visit, the research team recorded weight; height; co-morbid conditions, such as asthma, eczema and seasonal allergies; and use of inhaled corticosteroids.

They calculated mean differences in height, weight and body mass index (BMI) z-scores, which act like the percentile measures kids and parents hear about during well-child visits, comparing values with what is normal among other kids of the same age and gender in the general population.
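
As a quick illustration of what a z-score expresses, the sketch below uses invented reference values rather than actual CDC growth-chart data.

```python
def z_score(value, ref_mean, ref_sd):
    """Number of standard deviations a measurement lies above (+) or
    below (-) the reference-population mean for that age and sex."""
    return (value - ref_mean) / ref_sd

# Hypothetical example: a height of 104 cm against an assumed reference
# mean of 109 cm with a standard deviation of 5 cm.
print(z_score(104.0, 109.0, 5.0))  # -1.0, i.e. one SD below the reference mean
```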

"Children who are allergic to cow's milk had lower mean weight and height when compared with kids who are allergic to peanuts and tree nuts," she says. "These growth deficits remained prominent in the 5- to 8-year-old and the 9- to 12-year-old age ranges."

Dr. Robbins says future research will explore whether older children with cow's milk allergies begin to bridge that height gap during their teen years or if growth differences persist.

Credit: 
Children's National Hospital

Dual frequency comb generated on a single chip using a single laser

image: A compact, integrated, silicon-based chip used to generate dual combs for extremely fast molecular spectroscopy.

Image: 
A. Dutt, A. Mohanty, E. Shim, G. Patwardhan/Columbia Engineering

New York, NY--March 2, 2018--In a new paper published today in Science Advances, researchers under the direction of Columbia Engineering Professors Michal Lipson and Alexander Gaeta (Applied Physics and Applied Mathematics) have miniaturized dual-frequency combs by putting two frequency comb generators on a single millimeter-sized chip.

"This is the first time a dual comb has been generated on a single chip using a single laser," says Lipson, Higgins Professor of Electrical Engineering.

A frequency comb is a special kind of light beam with many different frequencies, or "colors," all spaced from each other in an extremely precise way. When this multicolored light is sent through a chemical specimen, some colors are absorbed by the specimen's molecules. By looking at which colors have been absorbed, one can uniquely identify the molecules in the specimen with high precision. This technique, known as frequency-comb spectroscopy, enables molecular fingerprinting and can be used to detect toxic chemicals in industrial areas, to implement occupational safety controls, or to monitor the environment.

"Dual-comb spectroscopy is this technique put on steroids," says Avik Dutt, former student in Lipson's group (now a postdoctoral scholar at Stanford) and lead author of the paper. "By mixing two frequency combs instead of a single comb, we can increase the speed at which measurement are made by thousandfolds or more."

The work also demonstrated the broadest frequency span of any on-chip dual comb -- i.e., the difference between the colors on the low-frequency end and the high-frequency end is the largest. This span enables a larger variety of chemicals to be detected with the same device, and also makes it easier to uniquely identify the molecules: the broader the range of colors in the comb, the broader the diversity of molecules that can see the colors.

Conventional dual-comb spectrometers, which have been introduced over the last decade, are bulky tabletop instruments, and not portable due to their size, cost, and complexity. In contrast, the Columbia Engineering chip-scale dual comb can easily be carried around and used for sensing and spectroscopy in field environments in real time.

"There is now a path for trying to integrate the entire device into a phone or a wearable device," says Gaeta, Rickey Professor of Applied Physics and of Materials Science.

The researchers miniaturized the dual comb by putting both frequency comb generators on a single millimeter-sized chip. They also used a single laser to generate both combs, rather than the two lasers used in conventional dual combs, which reduced the experimental complexity and removed the need for complicated electronics. To produce minuscule rings -- tens of micrometers in diameter -- that guide and enhance light with ultralow loss, the team used silicon nitride, a glass-like material they have perfected specifically for this purpose. By combining the silicon nitride with platinum heaters, they were able to very finely tune the rings and make them work in tandem with the single input laser.

"Silicon nitride is a widely used material in the silicon-based semiconductor industry that builds computer/smartphone chips," Lipson notes. "So, by leveraging the capabilities of this mature industry, we can foresee reliable fabrication of these dual comb chips on a massive scale at a low cost."

Using this dual comb, Lipson's and Gaeta's groups demonstrated real-time spectroscopy of the chemical dichloromethane at very high speeds, over a broad frequency range. A widely used organic solvent, dichloromethane is abundant in industrial areas as well as in wetland emissions. The chemical is carcinogenic, and its high volatility poses acute inhalation hazards. Columbia Engineering's compact, chip-scale dual comb spectrometer was able to measure a broad spectrum of dichloromethane in just 20 microseconds (there are 1,000,000 microseconds in one second), a task that would have taken at least several seconds with conventional spectrometers.

As opposed to most spectrometers, which focus on gas detection, this new, miniaturized spectrometer is especially suited for liquids and solids, which have broader absorption features than gases -- the range of frequencies they absorb is more spread out. "That's what our device is so good at generating," Gaeta explains. "Our very broad dual combs have a moderate spacing between the successive lines of the frequency comb, as compared to gas spectrometers which can get away with a less broad dual comb but need a fine spacing between the lines of the comb."

The team is working on broadening the frequency span of the dual combs even further, and on increasing the resolution of the spectrometer by tuning the lines of the comb. In a paper published last November in Optics Letters, Gaeta's and Lipson's groups demonstrated some steps towards showing an increased resolution.

"One could also envision integrating the input laser into the chip for further miniaturizing the system, paving the way for commercializing this technology in the future," says Dutt.

Credit: 
Columbia University School of Engineering and Applied Science

Do racial and gender disparities exist in newer glaucoma treatments?

SAN FRANCISCO - The American Glaucoma Society today announced that it has awarded a grant to Mildred MG Olivier, MD, to study how often minimally invasive glaucoma surgery (MIGS) devices and procedures are used in black and Latino glaucoma patients and whether these devices perform similarly across races, ethnicities, genders, ages, and regions. The goal of Dr. Olivier's research is to increase quality care for glaucoma patients in all demographic groups.

Dr. Olivier's team includes Eydie Miller-Ellis, MD, Clarisse Croteau-Chonka, PhD, Oluwatosin U. Smith, MD, Maureen G. Maguire, PhD, and Brian VanderBeek, MD, MPH, MSCE. They will work in conjunction with a bioinformatics team from the American Academy of Ophthalmology to mine the IRIS® Registry (Intelligent Research in Sight) database to explore this important topic.

The Academy's IRIS Registry is the nation's first and largest comprehensive eye disease clinical registry. It allows ophthalmologists to pioneer research based on real-world clinical practice. The Academy developed it as part of the profession's shared goal of continual improvement in the delivery of eye care.

The American Glaucoma Society last year issued a call for proposals for an IRIS Registry grant. The Society today announced at its 2018 Annual Meeting that it chose Dr. Olivier's application among seven strong proposals, primarily because it will help clinicians better understand where MIGS fits in the management of glaucoma.

Glaucoma affects 2.7 million people in the United States, a number that is expected to grow to 6.3 million by 2050. Glaucoma affects minority groups at a significantly higher rate than whites. A 2010 National Eye Institute report puts minority groups' share of glaucoma cases at 34 percent, even though minority groups represent just 30 percent of the U.S. population.

Glaucoma is typically treated first with eyedrops or lasers. If the disease progresses to moderate or advanced stages, ophthalmologists perform an invasive surgical procedure called trabeculectomy. MIGS offers a new option for patients with mild to moderate glaucoma, filling a gap between medication and more invasive filtration procedures. MIGS is also safer than trabeculectomy, causes fewer complications, and allows patients to recover faster.

Dr. Olivier's research seeks to assess the real-world demographic differences in the use, safety, and effectiveness of MIGS compared with other glaucoma treatments. Findings may also help inform the structure of future studies involving MIGS procedures.

Credit: 
American Academy of Ophthalmology

Unprecedentedly wide and sharp dark matter map

image: Hyper Suprime-Cam image of a location with a highly significant dark matter halo detected through the weak gravitational lensing technique. This halo is so massive that some of the background (blue) galaxies are stretched tangentially around the center of the halo. This is called strong lensing.

Image: 
NAOJ

A research team from multiple institutes, including the National Astronomical Observatory of Japan and the University of Tokyo, released an unprecedentedly wide and sharp dark matter map based on newly obtained imaging data from Hyper Suprime-Cam on the Subaru Telescope. The dark matter distribution is estimated by the weak gravitational lensing technique. The team located the positions and lensing signals of the dark matter halos and found indications that the number of halos could be inconsistent with what the simplest cosmological model suggests. This could be a new clue to understanding why the expansion of the Universe is accelerating.

Mystery of the accelerated Universe

In the late 1920s, Edwin Hubble and his colleagues discovered the expansion of the Universe. This was a big surprise to most people, who believed that the Universe had stayed the same throughout eternity. A formula relating matter and the geometry of space-time was required in order to express the expansion of the Universe mathematically. Coincidentally, Einstein had already developed just such a formula. Modern cosmology is based on Einstein's theory of gravity.

It had been thought that the expansion was decelerating over time because the contents of the Universe (matter) attract each other. But in the late 1990s, it was found that the expansion has been accelerating for roughly the past 8 billion years. This was another big surprise, which earned the astronomers who discovered the acceleration a Nobel Prize in 2011. To explain the acceleration, we have to consider something new in the Universe that effectively pushes space apart.

The simplest resolution is to put the cosmological constant back into Einstein's equation. The cosmological constant was originally introduced by Einstein to realize a static universe, but was abandoned after the discovery of the expansion of the Universe. The standard cosmological model (called LCDM) incorporates the cosmological constant. LCDM is supported by many observations, but the question of what causes the acceleration still remains. This is one of the biggest problems in modern cosmology.

Wide and deep imaging survey using Hyper Suprime-Cam

The team is leading a large scale imaging survey using Hyper Suprime-Cam (HSC) to probe the mystery of the accelerating Universe. The key here is to examine the expansion history of the Universe very carefully.

In the early Universe, matter was distributed almost but not quite uniformly. There were slight fluctuations in density which can now be observed through the temperature fluctuations of the cosmic microwave background. These slight matter fluctuations evolved over cosmic time because of the mutual gravitational attraction of matter, and eventually the large-scale structure of the present-day Universe became visible. It is known that the growth rate of the structure strongly depends on how the Universe expands. For example, if the expansion rate is high, it is hard for matter to contract and the growth rate is suppressed. This means that the expansion history can be probed inversely through observation of the growth rate.

It is important to note that the growth rate cannot be probed well if we observe only visible matter (stars and galaxies). This is because we now know that nearly 80 percent of the matter is an invisible substance called dark matter. The team adopted the weak gravitational lensing technique. The images of distant galaxies are slightly distorted by the gravitational field generated by the foreground dark matter distribution. Analysis of the systematic distortion enables us to reconstruct the foreground dark matter distribution.
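
A minimal numerical sketch of this reconstruction idea is shown below, assuming a flat-sky Kaiser-Squires inversion on a regular grid with synthetic shear values; the actual HSC analysis uses far more careful shape measurement, masking and smoothing.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Toy flat-sky Kaiser-Squires inversion: estimate the convergence
    (projected mass) map from a gridded weak-lensing shear field."""
    ny, nx = gamma1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                      # avoid dividing by zero at the mean mode
    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((l1**2 - l2**2) * g1_hat + 2.0 * l1 * l2 * g2_hat) / l_sq
    kappa_hat[0, 0] = 0.0                 # mean convergence is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))

# Toy usage: noise-only shear on a 64x64 grid (real maps contain signal + noise).
rng = np.random.default_rng(0)
kappa_map = kaiser_squires(rng.normal(0.0, 0.03, (64, 64)),
                           rng.normal(0.0, 0.03, (64, 64)))
```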

This technique is observationally very demanding because the distortion of each galaxy is generally very subtle. Precise shape measurements of faint and apparently small galaxies are required. This motivated the team to develop Hyper Suprime-Cam. They have been carrying out a wide-field imaging survey using Hyper Suprime-Cam since March 2014. As of this writing in February 2018, 60 percent of the survey has been completed.

Unprecedentedly wide and sharp dark matter map

In this release, the team presents a dark matter map based on the imaging data taken through April 2016. This is only 11 percent of the planned final map, but it is already unprecedentedly wide. There has never been such a sharp dark matter map covering such a wide area.

Imaging observations are made through five different color filters. By combining these color data, it is possible to make a crude estimate of the distances to the faint background galaxies (called photometric redshifts). At the same time, the lensing efficiency is most prominent when the lens is located directly between the distant galaxy and the observer. Using the photometric redshift information, galaxies are grouped into redshift bins. Using this grouped galaxy sample, the dark matter distribution is reconstructed using tomographic methods, and thus the 3D distribution can be obtained. Data covering 30 square degrees are used to reconstruct the redshift range between 0.1 (about 1.3 billion light-years) and 1.0 (about 8 billion light-years). At a redshift of 1.0, the angular span corresponds to 1.0 billion x 0.25 billion light-years. This 3D dark matter mass map is also quite new. This is the first time the increase in the number of dark matter halos over time can be seen observationally.
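
The tomographic grouping step can be pictured with a short sketch like the one below, which uses invented photometric redshifts and arbitrary bin edges rather than the actual HSC selection.

```python
import numpy as np

rng = np.random.default_rng(1)
photo_z = rng.uniform(0.1, 1.0, size=100_000)   # made-up photometric redshifts

# Slice the background galaxies into redshift bins; a lensing map built from
# each slice probes the dark matter in front of it, giving a 3D view in layers.
bin_edges = np.array([0.1, 0.4, 0.7, 1.0])
bin_index = np.digitize(photo_z, bin_edges) - 1

for i in range(len(bin_edges) - 1):
    n_gal = np.count_nonzero(bin_index == i)
    print(f"z bin {bin_edges[i]:.1f}-{bin_edges[i+1]:.1f}: {n_gal} galaxies")
```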

What the dark matter halo count suggests and future prospects

The team counted the number of dark matter halos whose lensing signal is above a certain threshold. This is one of the simplest measurements of the growth rate. The count suggests that the number of dark matter halos is less than what is expected from LCDM. This could indicate a flaw in LCDM, meaning we might have to consider an alternative to the simple cosmological constant.

The statistical significance is, however, still limited, as the large error bars suggest. There has been no conclusive evidence to reject LCDM, but many astronomers are interested in testing LCDM because discrepancies can be a useful probe to unlock the mystery of the accelerating Universe. Further observation and analysis are needed to confirm the discrepancy with higher significance. Other probes of the growth rate, such as the angular correlation of galaxy shapes, are also being analyzed by the team to check the validity of standard LCDM.

Credit: 
National Institutes of Natural Sciences