
US public companies have increasingly shorter lifespans, IU research says

BLOOMINGTON, Ind. -- At a time when more Americans are living longer, the companies where many people spend their working lives have increasingly shorter lifespans, according to research from Indiana University's Kelley School of Business.

A paper by two Kelley professors, forthcoming in the Academy of Management Annals, found that the odds of a company surviving more than five years have declined dramatically since the 1960s and that this trend also holds for firms lasting 10, 15 and 20 years.

Companies emerging as publicly listed firms in the 1960s had a 50-50 shot of making it to their 20th anniversary. By the 1990s, that figure had fallen to 20 percent. Similarly, such companies used to have an 80 percent chance of being around after 10 years, but those odds were down to 50 percent for firms established after 2000.

These findings are based on an empirical analysis of nearly 32,000 U.S. publicly listed companies between 1960 and 2015 by Rene Bakker, assistant professor of management and entrepreneurship, and Matthew Josefy, assistant professor of strategy and entrepreneurship.
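As a simple illustration of how such cohort survival odds can be computed from listing records, consider the sketch below. The firm records, cohort cut-offs and resulting percentages are invented for demonstration; the study itself analyzed nearly 32,000 actual U.S. publicly listed companies.

```python
# Toy illustration of cohort survival odds computed from listing and
# delisting years. The records below are invented, not the study's data.
from dataclasses import dataclass

@dataclass
class Firm:
    listed: int      # year the firm first appeared as a public company
    delisted: int    # year it disappeared (merger, failure, going private)

firms = [
    Firm(1962, 1995), Firm(1965, 1971), Firm(1968, 2001), Firm(1969, 1984),
    Firm(1993, 2015), Firm(1995, 2002), Firm(1996, 2011), Firm(1998, 2003),
]

def survival_rate(cohort, horizon_years):
    """Share of a cohort still listed `horizon_years` after listing."""
    survivors = sum(1 for f in cohort if f.delisted - f.listed >= horizon_years)
    return survivors / len(cohort)

for label, lo, hi in [("1960s cohort", 1960, 1969), ("1990s cohort", 1990, 1999)]:
    cohort = [f for f in firms if lo <= f.listed <= hi]
    print(f"{label}: {survival_rate(cohort, 20):.0%} reach their 20th year")
```

A full analysis would also handle firms still listed at the end of the observation window (censoring), which this toy example ignores.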

The researchers believe that long-held views about the age of companies have changed as organizations have become increasingly temporary in nature. Research on firm age dates back to the mid-1960s, but little has been done since the early 1990s. Bakker and Josefy question whether firm age today is anything more than a number and suggest that it may no longer predict organizational success.

"We believe this trend reflects an important shift in our understanding of the very DNA of what constitutes an organization," the researchers wrote. "Short-lived, temporary organizations that rapidly accomplish a complex task and disband on its accomplishment are increasingly common across a broad range of industries."

In the previous century, many businesses were run with the expectation that they'd be taken over by the next generation of a family.

"The history of American businesses was centered around these family enterprises," Josefy said. "The whole reason of incorporating was so the organization would live longer than the founder, giving them a sense of immortality. Here's the irony: These large corporations aren't even going to live as long as the founder, much less get passed down to the next generation of owners."

Classic management theory suggests that firm age is positively associated with learning, reliability and legitimacy. But today it may have more negative connotations in certain contexts, such as the tech sector, which rewards novelty and youthfulness.

In their paper, Bakker and Josefy raise the question of whether the American corporation is becoming more disposable, reflecting the temporary nature of other aspects of our society. They suspect that the diminishing lifespan of U.S. companies is partly due to more active merger and acquisition activity in recent years. But they also attribute it to a culture that encourages start-ups.

"We live in an environment where starting and liquidating a business is quite easy," Bakker said. "Many young firms today seem destined to become 'organizational supernovas,' which burn brighter but die quicker."

Once a start-up reaches its desired objectives, it may be acquired or morph into another entity with new goals.

"While society is often concerned to see businesses fail, sometimes it may be better for a company to die to free up resources and allow people to go on and do something else, rather than persist with uncertainty and stagnation," Josefy said. "Apparent failure of one entity may actually be the precursor to success of another."

Credit: 
Indiana University

From property damage to lost production: How natural disasters impact economics

URBANA, Ill. - When a natural disaster strikes, major disaster databases tend to compile information about losses such as property damage or the cost of repairs. Other economic impacts of the disaster are often overlooked, such as how a company's lost ability to produce products may affect the entire supply chain, both within the affected region and in other regions.

Without the right model to study these losses, the data give an incomplete picture of the disaster's full financial impact because they do not fully capture the business interruptions incurred locally, or by trade partners, after the event. As a result, a locality may receive less state or federal government recovery support than it should.

In a recent study, published in Earth System Dynamics, an interdisciplinary journal devoted to the study of Earth and global change, economists at the University of Illinois partnered with atmospheric scientists and hydrologists from U of I and UCLA, and with the Army Corps of Engineers to capture the characteristics of an atmospheric river--a transporter of water vapor in the sky--that hit the western part of Washington State in 2007. This event resulted in record flooding (a 500-year stream peak event in some parts of the river) and record damages.

The team hopes to show that carefully selecting the characteristics of the extreme weather event under study and the correct model to estimate losses--based on characteristics of the disaster and the affected region, and on the interdependence between one area and another--can help in determining vulnerability and preparing for future disasters, from an economic standpoint.

"This is quite different from what insurance companies do," explains Sandy Dall'Erba, an associate professor in the Department of Agricultural and Consumer Economics at U of I, and a co-author of the study. "After a disaster, when an insurance company comes, they basically say that your building has been destroyed by this particular amount. And because you were out of the building for, say, a week, you couldn't produce anything for a week. What insurance companies forget, and what our paper is trying to demonstrate is that the total amount of economic losses is much greater."

The ripple effects from a disaster can be significant if major industrial chains are disrupted when infrastructure is compromised. For instance, while the study finds that intersectoral and interregional linkages add around 10 percent to standard economic damage estimates in the small rural area chosen for this study, that share could rise to 50-70 percent for a major metropolitan area like Houston, Texas, because of its enormous transportation system and interregional trade.

"Let's say for instance that a company producing tires is flooded. Obviously, no tires are coming from that company," Dall'Erba explains. "The car company, which could be located in a place that wasn't flooded, suddenly is expecting a delay in the tires that are not coming on time anymore. There are all of these connections from one city to the next that are traditionally not accounted for after a disaster."

In his U of I lab, the Regional Economic Applications Laboratory (REAL), Dall'Erba studies very specific disaster events using a technique called input-output analysis, which maps the interdependencies between the sectors within a region and across different regions. He also teaches this technique in his home department (ACE) on a regular basis.

"We look at each industry. What kind of products and services do they buy? To whom do they sell their goods and services? In the case of Chehalis, Washington, that was flooded in 2007, it was mainly companies selling to other companies, not to individual households. Those are the kinds of links that we try to include in the work we do in REAL, how to understand that level of dependence between one location and other places. We also provide measurements that are very specific to each locality," Dall'Erba says.

Using the input-output technique helped the researchers calculate the actual losses after the event in Chehalis. "We rely on a huge amount of data to do it. We have information from each locality about how each industry is connected to every industry within that locality and outside of that locality so we can understand how dollars flow," he adds.
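The logic of input-output analysis can be illustrated with a small numerical sketch. In the standard Leontief framework, the total output x needed to satisfy final demand f is x = (I - A)^-1 f, where A holds the technical coefficients describing how much each sector buys from every other sector. The three-sector matrix and the $1 million direct loss below are purely hypothetical, not figures from the Chehalis analysis; the sketch only shows how a direct production loss in one sector propagates to others through these linkages.

```python
import numpy as np

# Hypothetical technical-coefficients matrix A for a toy three-sector economy
# (agriculture, manufacturing, services): A[i, j] is the dollar value of
# sector i's output needed to produce one dollar of sector j's output.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.15, 0.10],
    [0.15, 0.25, 0.05],
])

# Leontief inverse: total (direct + indirect) output per dollar of final demand.
L = np.linalg.inv(np.eye(3) - A)

# Suppose flooding removes $1.0M of the manufacturing sector's production
# (e.g., the flooded tire plant can no longer fill its orders).
direct_loss = np.array([0.0, 1.0, 0.0])  # in $ millions

# Total output loss across all sectors once supply-chain linkages are counted.
total_loss = L @ direct_loss

print("Direct loss:    $%.2fM" % direct_loss.sum())
print("Total loss:     $%.2fM" % total_loss.sum())
print("Indirect share: %.0f%%" % (100 * (total_loss.sum() - direct_loss.sum())
                                  / direct_loss.sum()))
```

Multi-regional versions of the same calculation add a region index to every sector, which is how losses in a flooded locality show up in the accounts of its trading partners.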

In addition, they must account for the timing of the event. "While the affected area is very agricultural, the luck it had is that the flood took place in December, way ahead of the growing season of the crop the locality depends the most on, corn. If it had taken place a few months later, the local economy would have experienced much larger losses," Dall'Erba says.

Finally, Dall'Erba's team is particularly careful to account for the location of the companies in charge of reconstruction. "Most of the literature assumes without evidence that reconstruction will dampen the local losses and boost employment. It turns out that for small rural communities--like the one in our study--no local construction company is present or is large enough to be in charge of reconstruction. As such, reconstruction efforts are delivered by companies outside of the affected economy and it is these other localities that see an increase in output and construction jobs," Dall'Erba adds.

The latter part of their study focuses on the future. Assuming that climate change will result in 15 percent more streamflow (water from rivers or streams) for all return periods, the authors find that the total losses would increase from $6.2 million, the figure seen during the actual event, to $8.6 million (a 39 percent increase).

"Adaptation strategies could include, among others, larger floodplains in upstream locations, levees close to critical buildings, and/or developing a more resilient supply-chain," Dall'Erba says. "However it is particularly hard for small rural communities as they do not always have the budget necessary to implement large adaptation projects; yet they have already been more frequently affected by floods than urban areas over the last few decades."

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Hormone imbalance causes treatment-resistant hypertension

British researchers have discovered a hormone imbalance that explains why it is very difficult to control blood pressure in around 10 per cent of hypertension patients.

The team, led by Queen Mary University of London, found that the steroid hormone 'aldosterone' causes salt to accumulate in the bloodstream. The salt accumulation occurs even in patients on reasonable diets, and pushes up blood pressure despite use of diuretics and other standard treatments.

Two patients in the study with previously resistant hypertension were able to come off all drugs after a benign aldosterone-producing nodule was discovered in one adrenal gland and removed surgically.

High blood pressure is one of the most common and important preventable causes of heart attack, heart failure, stroke and premature death. It affects over 1 billion people across the world and accounts for approximately 10 million potentially avoidable deaths per year.

Most patients can be treated effectively with adjustments to their lifestyle and the use of regular medication. However, in as many as 1 in 10 patients, blood pressure can be difficult to control and is termed 'resistant hypertension'. These patients are at the highest risk of stroke and heart disease because their blood pressure remains uncontrolled.

The new results come from a six-year trial in 314 patients with resistant hypertension, undertaken by investigators in the British Hypertension Society, funded by the British Heart Foundation and the National Institute for Health Research, and published in the journal The Lancet Diabetes & Endocrinology.

Chief Investigator Professor Morris Brown from Queen Mary University of London said: "This has been a wonderful story of using sophisticated modern methods to solve an old problem - why some patients have apparently intractable hypertension. The discovery of salt overload as the underlying cause has enabled us to identify the hormone which drives this, and to treat or cure most of the patients."

Professor Bryan Williams from University College London said: "These results are important because they will change clinical practice across the world and will help improve the blood pressure and outcomes of our patients with resistant hypertension.

"It is remarkable when so many advances in medicine depend on expensive innovation, that we have been able to revisit the use of drugs developed over half a century ago and show that for this difficult-to-treat population of patients, they work really well."

In previous work, the team showed that resistant hypertension is controlled much better by the drug spironolactone (a steroid blocker of aldosterone) than by drugs licensed for use in hypertension. Now they have shown that the superiority of spironolactone is due to its ability to overcome the salt excess in resistant hypertension.

They also found that spironolactone can be substituted, to good effect, by a drug, amiloride, which could be an option for patients unable to tolerate spironolactone.

The research comes from the PATHWAY-2 study, part of a series of studies designed to develop more effective ways of treating high blood pressure. It investigated the hypothesis that resistant hypertension was mainly caused by a defect in eliminating salt and water and that the high blood pressure in these patients would be best treated by additional diuretic therapy to promote salt and water excretion by the kidneys.

Credit: 
Queen Mary University of London

New glaucoma treatment could ease symptoms while you sleep

image: This is lead researcher Vikramaditya Yadav, a professor of chemical and biological engineering, and biomedical engineering at UBC.

Image: 
Clare Kiernan, University of British Columbia

Eye drops developed by UBC researchers could one day treat glaucoma while you sleep - helping to heal a condition that is one of the leading causes of blindness around the world.

"Medicated eye drops are commonly used to treat glaucoma but they're often poorly absorbed. Less than five per cent of the drug stays in the eye because most of the drops just roll off the eye," said lead researcher Vikramaditya Yadav, a professor of chemical and biological engineering, and biomedical engineering at UBC.

"Even when the drug is absorbed, it may fail to reach the back of the eye, where it can start repairing damaged neurons and relieving the pressure that characterizes glaucoma."

To solve these problems, the UBC team developed a hydrogel that was then filled with thousands of nanoparticles containing cannabigerolic acid (CBGA), a cannabis compound that has shown promise in relieving glaucoma symptoms.

They applied the drops on donated pig corneas, which are similar to human corneas, and found that the drug was absorbed quickly and reached the back of the eye.

"You would apply the eye drops just before bedtime, and they would form a lens upon contact with the eye. The nanoparticles slowly dissolve during the night and penetrate the cornea. By morning, the lens will have completely dissolved," said Yadav.

Previous research shows that cannabinoids like CBGA are effective in relieving glaucoma symptoms, but no cannabis-based eye drops have so far been developed because cannabinoids don't easily dissolve in water, according to the researchers.

"By suspending CBGA in a nanoparticle-hydrogel composite, we have developed what we believe is the first cannabinoid-based eye drops that effectively penetrate through the eye to treat glaucoma. This composite could also potentially be used for other drugs designed to treat eye disorders like infections or macular degeneration," said study co-author Syed Haider Kamal, a research associate in Yadav's lab.

Credit: 
University of British Columbia

The changing chemistry of the Amazonian atmosphere

How do you measure a chemical compound that lasts for less than a second in the atmosphere?

That's the question atmospheric chemists have been trying to answer for decades. The compound: hydroxyl radicals -- also known as OH radicals. These oxidizing chemicals are vital to the atmosphere's delicate chemical balance, acting as natural air scrubbers to remove organic compounds and greenhouse gases such as formaldehyde and methane from the atmosphere. But OH radicals also initiate reactions leading to secondary pollutants that affect human health and climate, such as organic particulate matter and ozone.

Researchers have been debating whether pollutants emitted from human activities, in particular nitrogen oxides (NOx), can affect levels of OH radicals in a pristine atmosphere, but quantifying that relationship has been difficult. Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have quantified the effect of NOx pollution on OH radicals in the Amazon rainforest.

The research is published in Science Advances.

While a remote and seemingly sparsely populated rainforest might seem like a strange place to study the effects of human pollution, the Amazon is actually home to one of the fastest growing cities in the world: Manaus, Brazil. Manaus, with more than 2 million people, has seen a boom in a range of industries from petroleum refining and chemical manufacturing to mobile phone manufacturing, ship building and tourism.

That growth has released substantial amounts of pollution into the atmosphere. When that pollution moves downwind and meets the pristine air from the rainforest, it creates a real-world laboratory for atmospheric chemists. That laboratory is the perfect spot to study the impact of pollution on OH concentrations, because it is easy to distinguish unpolluted regional air from air that has passed over Manaus.

"Because we were able to contrast unpolluted air with polluted air, this research has given us a great opportunity to understand how pollution from human activity will affect the atmospheric chemistry, especially with continued urbanization of forested areas," said Yingjun Liu, a former graduate student at SEAS and first author of the paper.

The researchers measured levels of isoprene, a chemical compound naturally emitted by trees, as well as levels of the major OH oxidation products of isoprene. As OH concentration increases, the ratio of oxidation products to isoprene in the atmosphere increases. The researchers were able to infer OH concentrations from the measured product-to-parent ratio, using a model that took many confounding factors into account.
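The underlying idea can be sketched with a deliberately simplified kinetic calculation. This is not the authors' model, which accounts for transport, losses of the products and other confounding factors; it merely illustrates how, at low conversion, the product-to-parent ratio grows roughly as k·[OH]·t, so an effective OH concentration can be backed out once a rate constant and an air-mass transport time are assumed. The rate constant, transport time and ratios below are all assumptions chosen for illustration.

```python
# Minimal sketch (not the study's model): back out an effective OH
# concentration from the ratio of isoprene oxidation products to isoprene,
# assuming a single loss pathway and low conversion during transport.

K_ISOPRENE_OH = 1.0e-10          # cm^3 molecule^-1 s^-1, approximate rate constant
TRANSPORT_TIME_S = 2.0 * 3600.0  # assumed 2-hour transport from source to site

def infer_oh(product_to_parent_ratio, k=K_ISOPRENE_OH, t=TRANSPORT_TIME_S):
    """At low conversion, [products]/[isoprene] ~= k * [OH] * t,
    so [OH] ~= ratio / (k * t), in molecules cm^-3."""
    return product_to_parent_ratio / (k * t)

# Illustrative (made-up) ratios for clean vs. polluted air masses.
for label, ratio in [("background air", 0.3), ("Manaus-influenced air", 1.1)]:
    print(f"{label}: inferred OH ~ {infer_oh(ratio):.1e} molecules cm^-3")
```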

The researchers collected data in the rainforest region, about 70 km downwind of Manaus, during the wet and dry seasons of 2014. The researchers found that accompanying the increase of NOx concentration from urban pollution, daytime peak OH concentrations in the rainforest skyrocketed, increasing by at least 250 percent.

"Our research shows that the oxidation capacity over tropical forests is susceptible to anthropogenic NOx emissions," said Scot Martin, the Gordon McKay Professor of Environmental Science and Engineering and Professor of Earth and Planetary Sciences at SEAS and senior author of the paper. "In the future, if trends of deforestation and urbanization continue, these increased levels of OH concentrations in the Amazon atmosphere could lead to changes in atmospheric chemistry, cloud formation, and rainfall."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Do Democrat and Republican doctors treat patients differently at the end of life?

The divide that separates conservative and liberal values may be as wide now as it has ever been in our country. This divide shows itself in areas of everyday life, and health care is no exception.

But do doctors' political beliefs influence the way they practice medicine, choose therapies and treat patients?

Which treatments and how much care to provide at the end of a patient's life has long been a notoriously contentious question in medicine. However, a physician's party affiliation appears to have no bearing on clinical decisions in the care of terminally ill patients, according to a new study led by researchers at Harvard Medical School.

Previous research has looked at the role of physicians' personal beliefs in hypothetical scenarios. Results of the new study, believed to be the first to look at how doctors behave in real-life clinical practice, are published April 11 in The BMJ.

For this new analysis, Anupam Jena, the Ruth L. Newhouse Associate Professor of Health Care Policy at Harvard Medical School and an internist at Massachusetts General Hospital, teamed up with colleagues from Columbia University, Cornell University, Stanford University and New York University to compare the end-of-life care and treatment patterns of doctors with different political affiliations.

Despite the dramatic rhetorical differences among members of the two parties on end-of-life care, the researchers found no evidence that a physician's political affiliation influenced decisions on how much and what kind of care to offer their terminally ill patients.

"Our findings are reassuring evidence that the political and ideological opinions of physicians don't seem to have any discernible impact on how they care for patients at the end of life," Jena said. "Physicians seem to be able to look past their politics to determine the best care for each patient's individual situation."

A 2016 study in the Proceedings of the National Academy of Sciences suggested that physicians' personal beliefs on politically controversial health care issues like abortion, gun safety and birth control were linked to differences in how they delivered care. That earlier study was based on surveys that combined questions about political opinions with questions about how a physician would react in hypothetical clinical scenarios.

In the new study, researchers compared clinical treatment data from nearly 1.5 million Medicare beneficiaries who died in the hospital or shortly after discharge against Federal Election Commission data on the political contributions of the patients' attending physicians. The investigators looked for differences in treatment patterns among doctors who donated to either party or to no party. Comparing physicians working within the same hospital to account for regional variations in end-of-life care, the researchers found no differences between the groups in end-of-life spending, in the use of intensive resuscitation or life-sustaining interventions such as intubation or feeding tubes, or in how likely they were to refer their patients to hospice.

Jena and colleagues say that further study is needed to determine whether decisions about other politically controversial health care and medical issues might break down along party lines.

Credit: 
Harvard Medical School

Some can combat dementia by enlisting still-healthy parts of the brain

image: A person with primary progressive aphasia activates part of the right-hand side of their brain (shown in blue) to decipher a sentence, whereas the normal person (left-hand image) does not.

Image: 
Aneta Kielar, University of Arizona

People with a rare dementia that initially attacks the language center of the brain recruit other areas of the brain to decipher sentences, according to new research led by a University of Arizona cognitive scientist.

The study is one of the first to show that people with a neurodegenerative disease can call upon intact areas of the brain for help. People who have had strokes or traumatic brain injuries sometimes use additional regions of the brain to accomplish tasks that were handled by the now-injured part of the brain.

"We were able to identify regions of the brain that allowed the patients to compensate for the dying of neurons in the brain," said first author Aneta Kielar, a UA assistant professor of speech, language and hearing sciences and of cognitive science.

The type of the dementia the researchers tested, primary progressive aphasia, or PPA, is unusual because it starts by making it hard for people to process language, rather than initially harming people's memory.

Kielar and her colleagues used magnetoencephalography, or MEG, to track how the 28 study participants' brains responded when confronted with several different language tasks. MEG revealed which part of the participant's brain responded and how fast the person's brain responded to the task.

People typically rely on the left side of the brain to comprehend language. Some of the people with PPA who were tested showed additional brain activity on the right, and those people did better on the language tests.

Senior author Jed Meltzer said, "These findings offer hope because they demonstrate that despite the brain's degeneration during PPA, the brain naturally adapts to try to preserve function."

Meltzer, a scientist at the Rotman Research Institute at Baycrest Health Sciences in Toronto, Ontario, Canada, and Canada Research Chair in Interventional Cognitive Neuroscience, added, "This brain compensation suggests there are opportunities to intervene and offer targeted treatment to those areas."

Kielar conducted the research as a part of a postdoctoral fellowship at the Rotman Research Institute.

Kielar's and Meltzer's co-authors on the paper, "Abnormal language-related oscillatory responses in primary progressive aphasia," are Regina Jokel and Tiffany Deschamps of the University of Toronto. The journal NeuroImage: Clinical published the paper online in March.

The Ontario Brain Institute, the Alzheimer's Association, the Ontario Research Coalition and the Sandra A. Rotman program in Cognitive Neuroscience funded the research.

Kielar became intrigued by PPA because its effects on language are so different from other dementias. PPA's unusual characteristics also make it hard to diagnose, she said.

At the early stages of the disease, people with PPA can drive, go to the grocery store by themselves and do other things that require working memory, but they have trouble reading, writing and speaking grammatical sentences, she said.

"PPA specifically attacks language initially," she said. "I wanted to know what is special about the language regions of the brain."

Previous research using electroencephalograms, or EEGs, of PPA patients showed that as the disease progressed, something at the neuronal level became slower. However, EEGs do not provide information about which brain region is slowing.

Therefore, Kielar and her colleagues used MEG to take images of the brains of 28 people--13 with PPA and 15 age-matched healthy controls--as they read sentences on an LCD screen. Some of the sentences had grammatical errors or mismatched words.

The researchers also conducted MRI scans of each participant to map each person's brain.

Working brains generate tiny changes in magnetic fields. MEG records those tiny, millisecond changes in magnetic fields that occur as the brain processes information. The MEG machinery is so sensitive that it requires a special shielded room that prevents any outside magnetic fields--such as those from electric motors, elevators, and even the Earth's magnetic field--from entering.

MEG can tell both when the brain was working on a task and what region of the brain was doing the task, Kielar said.

She and her colleagues found the brains of people with PPA responded more slowly to the language tests, which was not known before.

"You can tell that they are struggling, but we did not know that the neural processing in the brain was slowed down," she said. "It seems that this delay in processing may account for some of the deficits they have in processing language."

She and her colleagues hope knowing which parts of the brain are damaged by PPA will help develop a treatment. There is no cure for PPA, she said.

Transcranial magnetic stimulation, or TMS, a non-invasive treatment that sends a magnetic pulse to specific brain regions, has helped people who have had strokes. Kielar and her colleagues are planning to see if TMS can slow the progression of PPA.

Credit: 
University of Arizona

Melting of Arctic mountain glaciers unprecedented in the past 400 years

image: Scientists spent a month in Denali National Park in 2013 drilling ice cores from the summit plateau of Mt. Hunter. The ice cores showed the glaciers on Mt. Hunter are melting more now than at any time in the past 400 years.

Image: 
Dominic Winski.

WASHINGTON D.C. -- Glaciers in Alaska's Denali National Park are melting faster than at any time in the past four centuries because of rising summer temperatures, a new study finds.

New ice cores taken from the summit of Mt. Hunter in Denali National Park show summers there are at least 1.2-2 degrees Celsius (2.2-3.6 degrees Fahrenheit) warmer than summers were during the 18th, 19th and early 20th centuries. The warming at Mt. Hunter is about double the amount of warming that has occurred during the summer at sea-level sites in Alaska over the same time period, according to the new research.

The warmer temperatures are melting 60 times more snow from Mt. Hunter today than the amount of snow that melted during the summer before the start of the industrial period 150 years ago, according to the study. More snow now melts on Mt. Hunter than at any time in the past 400 years, said Dominic Winski, a glaciologist at Dartmouth College in Hanover, New Hampshire and lead author of the new study published in the Journal of Geophysical Research: Atmospheres, a journal of the American Geophysical Union.

The new study's results show the Alaska Range has been warming rapidly for at least a century. The Alaska Range is an arc of mountains in southern Alaska that is home to Denali, North America's highest peak.

The warming correlates with hotter temperatures in the tropical Pacific Ocean, according to the study's authors. Previous research has shown the tropical Pacific has warmed over the past century due to increased greenhouse gas emissions.

The study's authors conclude warming of the tropical Pacific Ocean has contributed to the unprecedented melting of Mt. Hunter's glaciers by altering how air moves from the tropics to the poles. They suspect melting of mountain glaciers may accelerate faster than melting of sea level glaciers as the Arctic continues to warm.

Understanding how mountain glaciers are responding to climate change is important because they provide fresh water to many heavily-populated areas of the globe and can contribute to sea level rise, Winski said.

"The natural climate system has changed since the onset of the anthropogenic era," he said. "In the North Pacific, this means temperature and precipitation patterns are different today than they were during the preindustrial period."

Assembling a long-term temperature record

Winski and 11 other researchers from Dartmouth College, the University of Maine and the University of New Hampshire drilled ice cores from Mt. Hunter in June 2013. They wanted to better understand how the climate of the Alaska Range has changed over the past several hundred years, because few weather station records of past climate in mountainous areas go back further than 1950.

The research team drilled two ice cores from a glacier on Mt. Hunter's summit plateau, 13,000 feet above sea level. The ice cores captured climate conditions on the mountain going back to the mid-17th century.

The physical properties of the ice showed the researchers what the mountain's past climate was like. Bands of darker ice with no bubbles indicated times when snow on the glacier had melted in past summers before re-freezing.

Winski and his team counted all the dark bands - the melt layers - from each ice core and used each melt layer's position in the core to determine when each melt event occurred. The more melt events they observed in a given year, the warmer the summer.

They found melt events occur 57 times more frequently today than they did 150 years ago. In fact, they counted only four years with melt events prior to 1850. They also found the total amount of annual meltwater in the cores has increased 60-fold over the past 150 years.
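As a rough illustration of how such a frequency comparison works (this is not the study's code, and the melt-layer dates below are invented), layers dated from their depth in the core can be binned into periods and the per-year rates compared directly:

```python
# Hypothetical melt-layer dates assigned from depth in an ice core;
# the real record and counts in the study differ.
melt_years = [1748, 1783, 1810, 1834,             # sparse pre-industrial melt
              1992, 1993, 1997, 2002, 2004, 2005,
              2009, 2010, 2012, 2013]              # clustered modern melt

def melt_rate(years, start, end):
    """Melt events per year within the inclusive period [start, end]."""
    n = sum(1 for y in years if start <= y <= end)
    return n / (end - start + 1)

pre = melt_rate(melt_years, 1650, 1850)
mod = melt_rate(melt_years, 1990, 2013)
print(f"Pre-industrial rate: {pre:.3f} events/yr")
print(f"Modern rate:         {mod:.3f} events/yr")
print(f"Ratio: {mod / pre:.0f}x more frequent")
```

With the counts actually reported in the study, the equivalent ratio comes out to roughly 57.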

The surge in melt events corresponds to a summer temperature increase of at least 1.2-2 degrees Celsius (2.2-3.6 degrees Fahrenheit) relative to the warmest periods of the 18th and 19th centuries, with nearly all of the increase occurring in the last 100 years. Because there were so few melt events before the start of the 20th century, the temperature change over the past few centuries could be even higher, Winski said.

Connecting the Arctic to the tropics

The research team compared the temperature changes at Mt. Hunter with those from lower elevations in Alaska and in the Pacific Ocean. Glaciers on Mt. Hunter are easily influenced by temperature variations in the tropical Pacific Ocean because there are no large mountains to the south to block incoming winds from the coast, according to the researchers.

They found during years with more melt events on Mt. Hunter, tropical Pacific temperatures were higher. The researchers suspect warmer temperatures in the tropical Pacific Ocean amplify warming at high elevations in the Arctic by changing air circulation patterns. Warmer tropics lead to higher atmospheric pressures and more sunny days over the Alaska Range, which contribute to more glacial melting in the summer, Winski said.

"This adds to the growing body of research showing that changes in the tropical Pacific can manifest in changes across the globe," said Luke Trusel, a glaciologist at Rowan University in Glassboro, New Jersey who was not connected to the study. "It's adding to the growing picture that what we're seeing today is unusual."

Credit: 
American Geophysical Union

Tiny distortions in universe's oldest light reveal strands in cosmic web

image: In this illustration, the trajectory of cosmic microwave background (CMB) light is bent by structures known as filaments that are invisible to our eyes, creating an effect known as weak lensing, captured by the Planck satellite (left), a space observatory. Researchers used computers to study this weak lensing of the CMB and produce a map of filaments, which typically span hundreds of millions of light years in length.

Image: 
Siyu He, Shadab Alam, Wei Chen, and Planck/ESA

Scientists have decoded faint distortions in the patterns of the universe's earliest light to map huge tubelike structures invisible to our eyes - known as filaments - that serve as superhighways for delivering matter to dense hubs such as galaxy clusters.

The international science team, which included researchers from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, analyzed data from past sky surveys using sophisticated image-recognition technology to home in on the gravity-based effects that identify the shapes of these filaments. They also used models and theories about the filaments to help guide and interpret their analysis.

Published April 9 in the journal Nature Astronomy, the detailed exploration of filaments will help researchers to better understand the formation and evolution of the cosmic web - the large-scale structure of matter in the universe - including the mysterious, unseen stuff known as dark matter that makes up about 85 percent of the total mass of the universe.

Dark matter constitutes the filaments - which researchers learned typically stretch across hundreds of millions of light years - and this universal network of filaments feeds the so-called halos that host clusters of galaxies. More studies of these filaments could provide new insights about dark energy, another mystery of the universe that drives its accelerating expansion.

Filament properties could also put gravity theories to the test, including Einstein's theory of general relativity, and lend important clues to help solve an apparent mismatch between the amount of ordinary matter observed in the universe and the amount predicted to exist - the "missing baryon problem."

"Usually researchers don't study these filaments directly - they look at galaxies in observations," said Shirley Ho, a senior scientist at Berkeley Lab and Cooper-Siegel associate professor of physics at Carnegie Mellon University who led the study. "We used the same methods to find the filaments that Yahoo and Google use for image recognition, like recognizing the names of street signs or finding cats in photographs."

The study used data from the Baryon Oscillation Spectroscopic Survey, or BOSS, an Earth-based sky survey that captured light from about 1.5 million galaxies to study the universe's expansion and the patterned distribution of matter in the universe set in motion by the propagation of sound waves, or "baryonic acoustic oscillations," rippling in the early universe.

The BOSS survey team, which featured Berkeley Lab scientists in key roles, produced a catalog of likely filament structures that connected clusters of matter that researchers drew from in the latest study.

Researchers also relied on precise, space-based measurements of the cosmic microwave background, or CMB, which is the nearly uniform remnant signal from the first light of the universe. While this light signature is very similar across the universe, there are regular fluctuations that have been mapped in previous surveys.

In the latest study, researchers focused on patterned fluctuations in the CMB. They used sophisticated computer algorithms to seek out the imprint of filaments from gravity-based distortions in the CMB, known as weak lensing effects, that are caused by the CMB light passing through matter.

Since galaxies live in the densest regions of the universe, the weak lensing signal from the deflection of CMB light is strongest from those parts. Dark matter resides in the halos around those galaxies, and was also known to spread from those denser areas in filaments.

"We knew that these filaments should also cause a deflection of CMB and would also produce a measurable weak gravitational lensing signal," said Siyu He, the study's lead author who is a Ph.D. researcher from Carnegie Mellon University - she is now at Berkeley Lab and is also affiliated with UC Berkeley. The research team used statistical techniques to identify and compare the "ridges," or points of higher density that theories informed them would point to the presence of filaments.

"We were not just trying to 'connect the dots' - we were trying to find these ridges in the density, the local maximum points in density," she said. They checked their findings with other filament and galaxy cluster data, and with "mocks," or simulated filaments based on observations and theories. The team used large cosmological simulations generated at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC), for example, to check for errors in their measurements.

Filaments and their connections can change shape over time scales of hundreds of millions of years. The competing forces of the pull of gravity and the expansion of the universe can shorten or lengthen the filaments.

"Filaments are this integral part of the cosmic web, though it's unclear what is the relationship between the underlying dark matter and the filaments," and that was a primary motivation for the study, said Simone Ferraro, one of the study's authors who is a Miller postdoctoral fellow at UC Berkeley's Center for Cosmological Physics.

New data from existing experiments and from next-generation sky surveys such as the Berkeley Lab-led Dark Energy Spectroscopic Instrument (DESI), now under construction at Kitt Peak National Observatory in Arizona, should provide even more detailed data about these filaments, he added.

Researchers noted that this important step in sleuthing the shapes and locations of filaments should also be useful for focused studies that seek to identify what types of gases inhabit the filaments, the temperatures of these gases, and the mechanisms for how particles enter and move around in the filaments. The study also allowed them to determine the length of filaments.

Siyu He said that resolving the filament structure can also provide clues to the properties and contents of the voids in space around the filaments, and "help with other theories that are modifications of general relativity."

Ho added, "We can also maybe use these filaments to constrain dark energy - their length and width may tell us something about dark energy's parameters."

Credit: 
DOE/Lawrence Berkeley National Laboratory

Study highlights the health and economic benefits of a US salt reduction strategy

New research, published in PLOS Medicine, conducted by researchers at the University of Liverpool, Imperial College London, Friedman School of Nutrition Science and Policy at Tufts and collaborators as part of the Food-PRICE project, highlights the potential health and economic impact of the United States (US) Food and Drug Administration's proposed voluntary salt policy.

Excess salt consumption is associated with higher risk of cardiovascular disease (CVD) and gastric cancer. Globally, more than 1.5 million CVD-related deaths every year can be attributed to excess dietary salt intake.

Further salt-related deaths come from gastric cancer. Health policies worldwide, therefore, are being proposed to reduce dietary salt intake.

Health and economic impact

The US Food & Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. The researchers aimed to quantify the potential health and economic impact if this FDA policy was successfully implemented. The results will be of great interest to policy makers.

The researchers modelled and compared the potential health and economic effects of three differing levels of implementing the FDA's proposed voluntary sodium reformulation policy over a 20-year period.

They found that the optimal scenario, 100% compliance with the 10-year FDA targets, could prevent approximately 450,000 CVD cases, gain 2 million Quality Adjusted Life Years (QALYs) and produce discounted cost savings of approximately $40 billion over a 20-year period (2017-2036).

In contrast, the modest scenario, 50% compliance with the 10-year FDA targets, and the pessimistic scenario, 100% compliance with the two-year targets but no further progress, could yield health and economic gains approximately half as great and a quarter as great, respectively.

All three scenarios were likely to be cost-effective by 2021 and cost-saving by 2031.
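For readers unfamiliar with discounted savings, the sketch below shows the standard net-present-value calculation used to express future health-care savings in today's dollars. The 3 percent discount rate, the ramp-up profile and the annual figures are illustrative assumptions, not inputs or outputs of the PLOS Medicine model.

```python
# Illustrative net-present-value calculation for a stream of annual savings;
# the 3% discount rate and the savings profile are assumptions chosen for
# demonstration, not figures from the study.

DISCOUNT_RATE = 0.03
YEARS = 20

# Assume savings ramp up over the first 10 years as reformulation phases in,
# then hold steady (in $ billions per year).
annual_savings = [min(year / 10, 1.0) * 3.0 for year in range(1, YEARS + 1)]

npv = sum(saving / (1 + DISCOUNT_RATE) ** year
          for year, saving in enumerate(annual_savings, start=1))

print(f"Undiscounted total: ${sum(annual_savings):.1f}B")
print(f"Discounted (NPV):   ${npv:.1f}B over {YEARS} years")
```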

Substantial decreases

Dr Jonathan Pearson-Stuttard, University of Liverpool and Imperial College London, said: "Our study suggests that full industry compliance with the FDA voluntary sodium reformulation targets would result in very substantial decreases in CVD incidence and mortality, whilst also offering impressive cost savings to health payers and the wider economy."

Senior author Professor Martin O'Flaherty, University of Liverpool, said: "There is no doubt that these findings have important implications for the processed and commercially prepared food industry in the US."

Senior author Renata Micha, Research Associate Professor at Friedman School of Nutrition Science and Policy at Tufts University, said: "Population-wide salt reduction strategies with high industry compliance should be prioritized to save lives and reduce healthcare costs. Industry engagement is crucial in implementing dietary policy solutions to improve population health, particularly for developing and marketing healthier foods."

Other research collaborators in the project were Department of Preventive Medicine and Education, Medical University of Gdansk (Poland) and the American Heart Association (Washington).

This work was supported by awards from the US National Heart, Lung, and Blood Institute of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH. For conflict of interest disclosure, please see the article.

Credit: 
University of Liverpool

Newly discovered biomarkers could be key to predicting severity of brain tumor recurrence

Researchers have identified specific predictive biomarkers that could help assess the level of risk for recurrence in patients with malignant glioma. The study, led by Henry Ford Health System's Department of Neurosurgery and Department of Public Health Sciences, was published today in Cell Reports.

The team performed an analysis of 200 brain tumor samples from 77 patients with diffuse glioma harboring IDH mutation, the largest collection of primary and recurrent gliomas from the same patients to date. Comparing samples from the patients' initial diagnosis with those from their disease recurrence, researchers focused, in particular, on a distinct epigenetic modification occurring along the DNA segment, a process called DNA methylation.

Previously, their research showed that when there was no change in the DNA methylation, patients had a good clinical outcome. When the DNA methylation was lost, patients had a poor outcome. In this latest study, the authors were able to identify a set of epigenetic biomarkers that can predict, at a patient's initial diagnosis, which tumors are likely to recur with a more aggressive tumor type.

Houtan Noushmehr, Ph.D., Henry Ford Department of Neurosurgery, Hermelin Brain Tumor Center, and senior author of the study, says this discovery could make a huge difference when a patient is first diagnosed. "To date, we really don't have any predictive clinical outcomes once a patient is diagnosed with glioma. By pinpointing these molecular abnormalities, we can begin to predict how aggressive a patient's recurrence will be and that can better inform the treatment path we recommend from the very beginning."

Of the 200 tissue samples, 10% were found to have a distinct epigenetic alteration at genomic sites known to be functionally active in regulating genes associated with aggressive tumors such as glioblastoma.

"This research presents a set of testable DNA-methylation biomarkers that may help clinicians predict if someone's brain tumor is heading in a more or less aggressive direction, essentially illustrating the behavior of a patient's disease," says James Snyder, D.O., study co-author and neuro-oncologist, Henry Ford Department of Neurosurgery and Hermelin Brain Tumor Center. "If we can identify which brain tumors will have a more aggressive course at the point of initial diagnosis then hopefully we can change the disease trajectory and improve care for our patients."

For example, patients predicted to have a more aggressive tumor at recurrence could be monitored more intensively after their initial treatment, or, undergo a more dynamic therapeutic regimen. Conversely, patients predicted to have a less aggressive recurrence might benefit from a reduction or delay of potentially harsh therapies such as standard chemotherapy and radiation.

"Right now, this level of molecular analysis is not routinely available in precision medicine testing and that needs to change," says Steven N. Kalkanis, M.D., Medical Director, Henry Ford Cancer Institute, and Chair, Department of Neurosurgery. "We need to be examining this level of information for every patient. The hope is that discoveries like this one will lead to clinical trials and increased access and education that make it available for every person who receives a cancer diagnosis."

Credit: 
Henry Ford Health

Two Colorado studies find resistance mechanisms in ALK+ and ROS1+ cancers

image: Robert C. Doebele, MD, PhD, and colleagues find resistance mechanisms in ALK+ and ROS1+ lung cancers, and demonstrate the use of circulating tumor DNA to search for these mechanisms in patient samples.

Image: 
University of Colorado Cancer Center

Targeted treatments have revolutionized care for lung cancer patients whose tumors harbor ALK or ROS1 alterations. Basically, cancers may use these genetic changes to drive their growth, but also become dependent on the action of these altered genes for their survival. Targeted treatments like crizotinib block the actions of ALK and ROS1, thus killing cancers that depend on them. However, when doctors target ALK or ROS1, cancers often evolve new ways to survive. After a period of success, targeted treatments against ALK+ and ROS1+ lung cancers often fail.

A University of Colorado Cancer Center study published today in the journal Clinical Cancer Research provides an in-depth look at how these ALK+ and ROS1+ cancers evolve to resist treatment. A second study demonstrates the ability to identify these changes in patient blood samples, perhaps easing the ability to monitor patients for these changes that provide early evidence that treatment is failing.

Unfortunately, the first study shows there is no single or even a dominant way that ALK+ and ROS1+ lung cancers change in response to targeted treatments.

"If there were only one change that follows these treatments, we would know that when treatment fails, we should switch to another, defined treatment," says Robert C. Doebele, MD, PhD, director of the CU Cancer Center Thoracic Oncology Research Initiative. "However, rather than providing a path of action, this study throws down a challenge: There's a lot of stuff we're not looking for or don't even know how to look for, but might be treatable if we knew how to look for it."

Doebele worked with CU postdoctoral fellow Caroline McCoach, MD (now an assistant professor of medical oncology at the University of California, San Francisco), to examine tumor samples from 12 ROS1+ patients and 43 ALK+ patients whose cancers had evolved to resist targeted treatment. As expected, a percentage of these samples showed genetic changes somewhat similar to the original causes - ALK and ROS1 are both "kinases" that can control the expression of other genes. In one of the 12 ROS1+ samples and 15 of the 43 ALK+ samples, new kinase mutations had emerged that allowed treatment resistance.

In the researchers' opinion, these are encouraging cases because, "these kinase mutations are the easiest to detect and, conceptually, the easiest to treat," Doebele says. This ease of detection and possibility to treat kinase mutations with drugs similar to those that already treat ALK+ and ROS1+ lung cancers have led researchers to focus on these changes.

"But we found a lot of stuff besides kinase mutations," he says. "What we're trying to say is that resistance happens in a lot of different ways and we need to be thinking about all the genetic and non-genetic changes that can occur."

For example, one ROS1+ cell line had no identifiable genetic changes. Genetically, the cancer should have remained sensitive to treatments targeting ROS1. But functional analysis showed that the known breast cancer driver, HER2, was creating drug resistance in this cell line.

"On one hand, the panoply of resistance mechanisms that can occur is incredibly frustrating. You're taking a small population of patients and further subdividing them into many other resistance mechanisms. How do we attack that, respond to that resistance when every patient is a little different?" Doebele says. "But on the other hand, though we are learning that resistance is really complex, the more we look and the better our tests are at capturing different types of alterations, the more we are able to target these resistance mechanisms. That's incredibly exciting."

A second paper, published as a companion to the first, shows that once resistance mechanisms are defined, doctors may be able to test lung cancer patients for these changes by sifting blood samples for DNA signatures released by cancers.

"Basically, we show that circulating tumor DNA or ctDNA can show us what's driving the cancer at any given point," Doebele says. "In theory, this strategy gives us an alternate method to spot these changes without having to do a biopsy."

In addition to being less invasive, the use of ctDNA to monitor a cancer's genetics saves time. "Due to the time it takes to schedule a biopsy and then the two weeks it takes to run a tumor test, using ctDNA instead can save patients a week or more." Knowing when a mechanism of drug resistance has evolved can ensure that patients have the opportunity to explore new treatment options as soon as possible.

Testing ctDNA in blood also allows researchers to take an overall sample of cancer genetics, rather than being limited to a snapshot of genetics from a single site of biopsy, "possibly giving us a broader picture of what's going on," Doebele says. However, this and other studies show that ctDNA has somewhat reduced sensitivity compared with biopsy and, "we may miss things," Doebele says, implying that analysis of ctDNA may be an appropriate strategy to monitor tumor evolution in addition to but not instead of biopsy.

The current paper used the ctDNA test Guardant360 to explore blood samples of 88 ALK+ lung cancer patients, showing the partner genes that "fused" with ALK to cause cancer (including EML4, STRN and others). Thirty-one of these patients were tested again at the time their cancer progressed after ALK-targeted treatment. In 16 of these blood samples, researchers found that the ctDNA test was able to identify ALK resistance mechanisms.

"There's been a huge focus on kinase mutations," Doebele says. "But not everything is driven by a simple mutation. A focus on broader testing and on new methods of broad testing will help us widen our net to catch these other changes that are driving resistance to ALK and ROS1 targeted treatments."

Credit: 
University of Colorado Anschutz Medical Campus

Large-scale study links PCOS to mental health disorders

WASHINGTON -- Women with polycystic ovary syndrome (PCOS), the most common hormone condition among young women, are prone to mental health disorders, and their children face an increased risk of developing attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD), according to a new study published in the Endocrine Society's Journal of Clinical Endocrinology & Metabolism.

PCOS affects 7 percent to 10 percent of women of childbearing age. It costs an estimated $5.46 billion annually to provide care to reproductive-aged women with PCOS in the United States, according to the Society's Endocrine Facts and Figures report. PCOS is the most common cause of infertility in young women, and the elevated male hormone levels associated with the condition lead to many other emotionally distressing symptoms like irregular periods, excessive facial and body hair, weight gain and acne.

"PCOS is one of the most common conditions affecting young women today, and the effect on mental health is still under appreciated," said one of the study's authors, Aled Rees, M.B.B.Ch., Ph.D., F.R.C.P., of Cardiff University in Cardiff, United Kingdom. "This is one of the largest studies to have examined the adverse mental health and neurodevelopmental outcomes associated with PCOS, and we hope the results will lead to increased awareness, earlier detection and new treatments."

In the retrospective cohort design study, researchers from the Neuroscience and Mental Health Research Institute at Cardiff University assessed the mental health history of nearly 17,000 women diagnosed with PCOS. The study leveraged data from the Clinical Practice Research Datalink (CPRD), a database containing records for 11 million patients collected from 674 primary care practices in the United Kingdom.

When PCOS patients were compared with unaffected women matched for age and body mass index, the study found they were more likely to be diagnosed with mental health disorders, including depression, anxiety, bipolar disorder and eating disorders.

Children born to mothers with PCOS were also found to be at greater risk of developing ADHD and autism spectrum disorders. These findings suggest that women with PCOS should be screened for mental health disorders, to ensure early diagnosis and treatment, and ultimately improve their quality of life.

"Further research is needed to confirm the neurodevelopmental effects of PCOS, and to address whether all or some types of patients with PCOS are exposed to mental health risks," said Rees.

Credit: 
The Endocrine Society

New biological research framework for Alzheimer's seeks to spur discovery

image: This table shows the eight biomarker profiles (left column) and corresponding categories (right column) outlined in the framework that could be used to group research participants. The biomarker profiles can be sorted into three broader categories: normal Alzheimer's biomarkers, Alzheimer's continuum and non-Alzheimer's pathologic change.

Image: 
NIA-AA Research Framework

The research community now has a new framework toward developing a biologically based definition of Alzheimer's disease. This proposed "biological construct" is based on measurable changes in the brain and is expected to facilitate a better understanding of the disease process and the sequence of events that lead to cognitive impairment and dementia. With this construct, researchers can study Alzheimer's from its earliest biological underpinnings to outward signs of memory loss and other clinical symptoms, which could result in a more precise and faster approach to testing drugs and other interventions.

The National Institute on Aging (NIA), part of the National Institutes of Health, and the Alzheimer's Association (AA) convened the effort, which appears as the "NIA-AA Research Framework: Towards a Biological Definition of Alzheimer's Disease" in the April 10, 2018, edition of Alzheimer's & Dementia: The Journal of the Alzheimer's Association. Drafts were presented at several scientific meetings and offered online, where the committee developing the framework gathered comments and ideas that informed the final published document. The framework will be updated in the future as it undergoes testing and as new knowledge becomes available.

The framework will apply to clinical trials and can be used for observational and natural history studies as well, its authors noted. They envision that this common language approach will unify how different stages of the disease are measured so that studies can be easily compared and presented more clearly to the medical field and public.

"In the context of continuing evolution of Alzheimer's research and technologies, the proposed research framework is a logical next step to help the scientific community advance in the fight against Alzheimer's," said NIA Director Richard J. Hodes, M.D. "The more accurately we can characterize the specific disease process pathologically defined as Alzheimer's disease, the better our chances of intervening at any point in this continuum, from preventing Alzheimer's to delaying progression,"

Evolution in thinking

This framework reflects the latest thinking about how Alzheimer's disease begins, perhaps decades before outward signs of memory loss and decline appear in an individual. In 2011, NIA-AA began to recognize this with the creation of separate sets of diagnostic guidelines that incorporated recognition of a preclinical stage of Alzheimer's and the need to develop interventions as early in the process as possible. The research framework offered today builds from the 2011 idea of three stages--preclinical, mild cognitive impairment and dementia--to a biomarker-based disease continuum.

The NIA-AA research framework's authors, 20 experts from academic, advocacy, government and industry settings, noted that the distinction between clinical symptoms and measurable changes in the brain has blurred. The new research framework focuses on biomarkers, grouped by the different pathologic processes of Alzheimer's, which can be measured in living people with imaging technology and analysis of cerebrospinal fluid samples. It also incorporates measures of severity using biomarkers and a grading system for cognitive impairment.

"We have to focus on biological or physical targets to zero in on potential treatments for Alzheimer's," explained Eliezer Masliah, M.D., director of the Division of Neuroscience at the NIA. "By shifting the discussion to neuropathologic changes detected in biomarkers to define Alzheimer's, as we look at symptoms and the range of influences on development of Alzheimer's, I think we have a better shot at finding therapies, and sooner."

In an accompanying editorial, Masliah and NIA colleagues, including Dr. Hodes, highlighted both the promise and limitations of the biological approach. They noted that better operational definitions of Alzheimer's are needed to help better understand its natural history and heterogeneity, including prevalence of mimicking conditions. They also emphasized that the research framework needs to be extensively tested in diverse populations and with more sensitive biomarkers.

Batching and matching biomarkers

The NIA-AA research framework proposes three general groups of biomarkers--beta-amyloid, tau and neurodegeneration or neuronal injury--and leaves room for other and future biomarkers. Beta-amyloid is a naturally occurring protein that clumps to form plaques in the brain. Tau, another protein, accumulates abnormally, forming neurofibrillary tangles that block communication between neurons. Neurodegeneration or neuronal injury may result from many causes, such as aging or trauma, and not necessarily Alzheimer's disease.

Researchers can use measures from a study participant to determine beta-amyloid (A), tau (T) and neurodegeneration or neuronal injury (N) status and thereby characterize that person's combination of biomarkers as one of eight profiles. For example, if a person has a positive beta-amyloid biomarker (A+) but no tau (T-), he or she would be categorized as having "Alzheimer's pathologic change." Only those with both A and T biomarkers would be considered to have Alzheimer's disease, along a continuum. The N biomarker group provides important pathologic staging information about factors often associated with the development of Alzheimer's or the worsening of symptoms.
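As a concrete illustration of this grouping logic, the sketch below assigns an A/T/(N) profile to a biomarker category and a broader category. It is a simplified, unofficial rendering: the rules for amyloid-positive profiles follow the examples given above, while the handling of (N) and of amyloid-negative profiles is an assumption based on the framework's three broader categories and omits some of its finer distinctions.

```python
# Simplified, unofficial sketch of the A/T/(N) grouping described above.
# The (N) group only stages severity here; the published framework makes
# finer distinctions (e.g., for A+/T-/N+ profiles) that are omitted.

def classify_atn(a_positive: bool, t_positive: bool, n_positive: bool):
    """Return (biomarker category, broader category) for one A/T/(N) profile."""
    if a_positive:
        # Any amyloid-positive profile falls on the Alzheimer's continuum.
        if t_positive:
            return "Alzheimer's disease", "Alzheimer's continuum"
        # A+ without tau: the "Alzheimer's pathologic change" example from the text.
        return "Alzheimer's pathologic change", "Alzheimer's continuum"
    if t_positive or n_positive:
        # Amyloid-negative but tau- or neurodegeneration-positive (assumed mapping).
        return "Non-Alzheimer's pathologic change", "Non-Alzheimer's pathologic change"
    return "Normal Alzheimer's biomarkers", "Normal Alzheimer's biomarkers"

# Example from the text: a participant who is A+ but T- (and N-)
print(classify_atn(True, False, False))
# ("Alzheimer's pathologic change", "Alzheimer's continuum")
```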

Framework for certain research only

The authors emphasized that the NIA-AA research framework is neither a set of diagnostic criteria nor a guideline for clinicians. It is intended for research purposes and requires further testing before it could be considered for general clinical practice, they noted.

They also stressed that the biological approach to Alzheimer's is not meant to supplant other measures, such as neuropsychological tests, to study important aspects of the disease such as its cognitive outcomes. In some cases, the article pointed out, biomarkers may not be available or requiring them would be counterproductive for particular types of research.

The authors acknowledged that the research framework may seem complex but stressed that it is flexible and can be employed to answer many research questions, such as how cognitive outcomes differ among the various biomarker profiles and how age influences those relationships.

In its commentary the NIA leadership developed a table to help explain how the proposed framework might be used and where it might not apply:

The research framework is...

A testable hypothesis

An approach that facilitates standardized research reporting

A common language and a reference point for researchers for longitudinal studies and clinical trials

A welcome for other approaches

A welcome for other indicators of Alzheimer's and comorbidities

The research framework is NOT...

A requirement for NIH grant submission

A statement about Alzheimer's pathogenesis or etiology

An NIA policy, guideline or criterion for papers or grants

A disease definition for standard medical use

A fixed notion of Alzheimer's

Credit: 
NIH/National Institute on Aging

Alzheimer's disease redefined: New research framework defines Alzheimer's by brain changes, not symptoms

Chicago, April 10, 2018 - "NIA-AA Research Framework: Towards a Biological Definition of Alzheimer's Disease" was published today in the April 2018 issue of Alzheimer's & Dementia: The Journal of the Alzheimer's Association. First author Clifford R. Jack, Jr., M.D., of Mayo Clinic, Rochester, MN, and colleagues propose shifting the definition of Alzheimer's disease in living people - for use in research - from the current one, based on cognitive changes and behavioral symptoms with biomarker confirmation, to a strictly biological construct. This represents a major evolution in how we think about Alzheimer's.

Understanding and effectively treating Alzheimer's disease and other dementias may be the most difficult challenge for the medical and scientific community this century. The field has experienced monumental challenges in developing new and effective drug therapies, not the least of which was the discovery that, until recently, clinical trials were conducted in which up to 30% of participants did not have the Alzheimer's disease-related brain change targeted by the experimental drug.

"With the aging of the global population, and the ever-escalating cost of care for people with dementia, new methods are desperately needed to improve the process of therapy development and increase the likelihood of success," said Maria Carrillo, Ph.D., Alzheimer's Association chief science officer and a co-author of the new article. "This new Research Framework is an enormous step in the right direction for Alzheimer's research."

According to the authors, "This evolution of the previous diagnostic criteria is in line with most chronic diseases that are defined biologically, with clinical symptoms being a ... consequence." They say, "the goal of much of medicine is to identify and treat diseases prior to overt symptoms. The [NIA-AA Research] Framework is intended to provide a path forward to ... prevention trials of Alzheimer's disease among persons who are clinically asymptomatic."

Other areas of medicine have used this approach to define disease processes using biomarkers: osteoporosis, for example, is defined by bone mineral density, and hypertension, hyperlipidemia and diabetes are likewise defined by biomarkers. Therapies that address these biomarkers have been shown to reduce the likelihood of developing fractures, heart attacks and strokes.

The authors, "take the position that biomarker evidence of Alzheimer's disease indicates the presence of the disease whether or not symptoms are present, just as an abnormal HbA1C indicates the presence of diabetes whether or not symptoms are present."

In 2011, the Alzheimer's Association (AA) and the National Institute on Aging (NIA) at the U.S. National Institutes of Health convened experts to update the diagnostic guidelines for Alzheimer's disease. The landmark publications designated three stages of Alzheimer's - preclinical (before symptoms affecting memory, thinking or behavior can be detected), mild cognitive impairment and dementia. When the two organizations brought global leaders together again in 2017 to review advances in the field and update the guidelines, a profound shift in thinking occurred: defining Alzheimer's disease biologically, by pathologic brain changes or their biomarkers, and treating cognitive impairment as a symptom or sign of the disease rather than its definition.

According to Dr. Jack, once validated in diverse global populations, this new definition will create a powerful tool to speed and improve the development of disease-modifying treatments for Alzheimer's disease.

The authors envision that defining Alzheimer's disease as a biological construct will enable a more accurate understanding of the sequence of events that lead to the cognitive impairment associated with Alzheimer's disease, as well as of the multiple causes of the disease. This will enable a more precise approach to therapy trials, including focusing on more specific targets and enrolling the appropriate people.

In an accompanying editorial, Ara S. Khachaturian, Ph.D., Executive Editor, and the editorial staff of Alzheimer's & Dementia "commend the effort within the Research Framework to create a common language that may lead to new thinking for the generation of new testable hypotheses about the conceptual basis for Alzheimer's disease. Such a language is a critical and essential element in addressing the ongoing challenge of developing more intricate and comprehensive models of Alzheimer's disease ... for the identification of new interventions and diagnostics."

In their "Editorial comment to the 'NIA-AA Research Framework: Towards a Biological Definition of Alzheimer's Disease,'" Nina Silverberg, Ph.D., Cerise Elliott, Ph.D., Laurie Ryan, Ph.D., Eliezer Masliah, M.D., and Richard Hodes, M.D., of NIA point out that the Framework - in addition to improving early detection and the development of new therapies - could potentially "allow more precise estimates of how many people are at risk [for or living with] Alzheimer's disease, how best to monitor response to therapies, and how to distinguish the effects of Alzheimer's disease from other similar pathologies."

Anticipating questions on the impact of the NIA-AA Research Framework on research funding, they add that, "The NIH will consider research applications using the Framework as well as proposals using alternative schemes when designing experimental approaches. The NIH continues to welcome applications where biomarkers may not be appropriate."

In the article, the authors say they "appreciate the concern that this biomarker-based Research Framework has the potential to be misunderstood and misused. Therefore, we emphasize: First, it is premature and inappropriate to use this Research Framework in general medical practice. Second, this Research Framework should not be used to restrict alternative approaches to hypothesis testing that do not employ biomarkers ... biomarker-based research should not be considered a template for all research into age-related cognitive impairment and dementia."

That said, the authors believe the Framework applies to the entire Alzheimer's disease research community. In drafting the document, "we were careful to include ... representatives of the Industry and the Food and Drug Administration in addition to government and non-governmental organizations. Finally, the Framework was vetted with numerous stakeholders at several meetings as well as posted for months for public comment."

"It is called a 'Research Framework' because it needs to be thoroughly examined - and modified, if needed - before being adopted into general clinical practice," Dr. Jack said. "Importantly, this Framework should be examined in diverse populations."

The authors recognize that the current form of the NIA-AA Research Framework is designed around only the biomarker technology that is presently available. They point out that the proposed biomarker scheme (see the attached fact sheet) is expandable to incorporate new biomarkers, as they are developed and verified.

Credit: 
Alzheimer's Association