Culture

A single concussion may increase risk of Parkinson's disease

MINNEAPOLIS - People who have been diagnosed with a mild concussion, or mild traumatic brain injury, may have a 56 percent increased risk of developing Parkinson's disease, according to a study published in the April 18, 2018, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"Previous research has shown a strong link between moderate to severe traumatic brain injury and an increased risk of developing Parkinson's disease but the research on mild traumatic brain injury has not been conclusive," said senior study author Kristine Yaffe, MD, of the University of California, San Francisco, the San Francisco Veterans Affairs Medical Center, and a member of the American Academy of Neurology. "Our research looked a very large population of U.S. veterans who had experienced either mild, moderate or severe traumatic brain injury in an effort to find an answer to whether a mild traumatic brain injury can put someone at risk."

Moderate to severe traumatic brain injury was defined as a loss of consciousness for more than 30 minutes, alteration of consciousness of more than 24 hours or amnesia for more than 24 hours. Mild traumatic brain injury was defined as loss of consciousness for zero to 30 minutes, alteration of consciousness of a moment to 24 hours or amnesia for zero to 24 hours.

For the study, researchers identified 325,870 veterans from three U.S. Veterans Health Administration medical databases. Half of the study participants had been diagnosed with either a mild, moderate or severe traumatic brain injury and half had not. The study participants, who ranged in age from 31 to 65, were followed for an average of 4.6 years. At the start of the study, none had Parkinson's disease or dementia. All traumatic brain injuries were diagnosed by a physician.

A total of 1,462 of the participants were diagnosed with Parkinson's disease at least one year and up to 12 years after the start of the study. The average time to diagnosis was 4.6 years.

A total of 949 of the participants with traumatic brain injury, or 0.58 percent, developed Parkinson's disease, compared to 513 of the participants with no traumatic brain injury, or 0.31 percent. A total of 360 out of 76,297 with mild traumatic brain injury, or 0.47 percent, developed the disease and 543 out of 72,592 with moderate to severe traumatic brain injury, or 0.75 percent, developed the disease.
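
The raw incidence figures above can be checked with a few lines of arithmetic. The Python sketch below is written for this summary rather than taken from the study; the equal split of the 325,870-person cohort into TBI and non-TBI halves follows the study description and is not an exact group count.

```python
# Rough consistency check of the incidence percentages quoted above.
# Assumption: the 325,870-person cohort is split evenly into TBI and non-TBI groups.

cohort = 325_870
groups = {
    "any TBI":             (949, cohort // 2),
    "no TBI":              (513, cohort // 2),
    "mild TBI":            (360, 76_297),
    "moderate-severe TBI": (543, 72_592),
}

for name, (cases, denominator) in groups.items():
    rate = 100 * cases / denominator
    print(f"{name:>20}: {cases:4d} / {denominator:6d} = {rate:.2f}%")
```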

After researchers adjusted for age, sex, race, education and other health conditions like diabetes and high blood pressure, they found that those with any kind of traumatic brain injury had a 71 percent increased risk of Parkinson's disease, those with moderate to severe traumatic brain injury had an 83 percent increased risk, and those with mild traumatic brain injury had a 56 percent increased risk of Parkinson's disease.

Researchers also found that those with any form of traumatic brain injury were diagnosed with Parkinson's disease an average of two years earlier than those without traumatic brain injury.

"This study highlights the importance of concussion prevention, long-term follow-up of those with concussion, and the need for future studies to investigate if there are other risk factors for Parkinson's disease that can be modified after someone has a concussion," said lead study author Raquel C. Gardner, MD, of the University of California, San Francisco, the San Francisco Veterans Affairs Medical Center, and a member of the American Academy of Neurology. "While our study looked at veterans, we believe the results may have important implications for athletes and the general public as well."

One limitation of the study was that medical codes were used to identify people with traumatic brain injury and some cases may have been missed. In addition, mild traumatic brain injury may be underreported in those serving in combat.

Credit: 
American Academy of Neurology

The 'radical' ways sunlight builds bigger molecules in the atmosphere

With summer approaching, "sea and sun" might conjure up images of a beach trip. But for scientists, the interactions of the two have big implications for the climate and for the formation of tiny droplets, or aerosols, that lead to clouds. In ACS Central Science, researchers demonstrate that sunlight can cause certain molecules at the ocean's surface to activate others, resulting in larger molecules that could affect the atmosphere.

Certain organic molecules become activated and react when they absorb sunlight. They often go through a reactive intermediate called a "radical," which initiates a chain reaction leading to the formation of more complex chemicals. This "radical initiator" pathway is important for understanding which molecules at the sea surface end up in the atmosphere, where they seed clouds. The molecules found on atmospheric aerosols determine whether those aerosols absorb or reflect sunlight, affecting the temperature of the planet. Until now, scientists had focused much of their attention on the hydroxyl radical, which reacts very efficiently in the atmosphere. Rebecca Rapf, Veronica Vaida and colleagues at the University of Colorado propose that a class of compounds called α-keto acids can be photo-activated by sunlight and drive reactions with molecules that do not themselves absorb sunlight.

The researchers studied two different α-keto acids and showed that light caused the acids to react with several fatty acids and alcohols. These classes of molecules are commonly found near the ocean's surface, and they are ubiquitous in biology. The authors explain that this sunlight-initiated chemistry could change the composition of the sea surface. The new, larger molecules formed may add to aerosols, changing their properties and leading to interesting and previously unforeseen consequences for human health, visibility and climate.

Credit: 
American Chemical Society

Artificial pancreas is a safe and effective treatment for type 1 diabetes

Use of an artificial pancreas is associated with better control of blood sugar levels for people with type 1 diabetes compared with standard treatment, finds a review of the available evidence published by The BMJ today.

The findings show that artificial pancreas treatment provides almost two and a half extra hours of normal blood glucose levels (normoglycaemia) a day, while reducing time in both high (hyperglycaemia) and low (hypoglycaemia) blood glucose levels.

While further research is needed to verify the findings, the researchers say these results support the view that "artificial pancreas systems are a safe and effective treatment approach for people with type 1 diabetes."

The artificial pancreas is a system that measures blood sugar levels using a continuous glucose monitor (CGM) and transmits this information to an insulin pump that calculates and releases the required amount of insulin into the body, just as the pancreas does in people without diabetes.

Lead researcher Eleni Bekiari, at Aristotle University of Thessaloniki, Greece, and the team set out to investigate the effectiveness and safety of artificial pancreas systems in people with type 1 diabetes.

They reviewed the results of 41 randomised controlled trials involving over 1,000 people with type 1 diabetes that compared artificial pancreas systems with other types of insulin-based treatment, including insulin pump therapy.

They found that the artificial pancreas was associated with almost two and a half additional hours in normoglycaemia compared with other types of treatment when used overnight and over a 24 hour period.

Use of the artificial pancreas also reduced time spent in hyperglycaemia by approximately two hours, and time in hypoglycaemia by about 20 minutes, compared with other types of therapy.

Further analyses to test the strength of the associations for different devices and in different settings were consistent, suggesting that the results are robust.

As such, the authors say that their review provides a valid and up to date overview on the use of artificial pancreas systems for type 1 diabetes. However, they point out that most trials were at high or unclear risk of bias and had a small sample size and short duration, and that the findings should therefore be interpreted with caution.

Furthermore, they suggest more should be done to assess cost-effectiveness "to support adoption of artificial pancreas systems in clinical practice."

The authors also recommend that future research should "explore artificial pancreas use in relevant groups of people with type 2 diabetes" and say "the effect of artificial pancreas use on quality of life and on reducing patient burden should be further explored."

In a linked editorial, Professor Norman Waugh at the University of Warwick and colleagues, argue that closed loop systems have much to offer, "but we need better evidence to convince policymakers faced with increasing demands and scarce resources."

Credit: 
BMJ Group

ACP calls for a 'time out' to assess and revise approach to performance measurement

Philadelphia, April 18, 2018 - In "Time Out -- Charting a Path for Improving Performance Measurement," published today in the New England Journal of Medicine, the American College of Physicians (ACP) reports that the majority of quality measures for ambulatory internal medicine in Medicare's Merit-based Incentive Payment System (MIPS) program are not valid based on criteria developed by ACP.

ACP performed this analysis in response to physician concerns that the current measures are not meaningful in improving patient outcomes. ACP analyzed 86 performance measures included in Medicare's MIPS and Quality Payment Program (QPP) and found that 32 (37 percent) were valid, 30 (35 percent) were not valid, and 24 (28 percent) were of uncertain validity.

Of the 30 measures rated as not valid, 19 were judged to have insufficient evidence to support them. A characteristic of measures rated as not valid was inadequately specified exclusions, "resulting in a requirement that a process or outcome occur across broad groups of patients, including patients who might not benefit," the authors wrote.

"ACP has long supported and advocated improving performance measures so they help physicians provide the best possible care to their patients without creating unintended adverse consequences," said ACP President Dr. Jack Ende, MD, MACP.

ACP identified performance measures that had poor specifications that might misclassify high-quality care as low-quality care. The paper notes that using flawed measures is not only frustrating to physicians but potentially harmful to patients. Physician practices spend $15.4 billion per year, or about $40,000 per physician, to report on performance. In a recent survey, nearly two-thirds of physicians said that current measures do not capture the quality of the care they provide.

ACP also identified troubling inconsistencies among leading U.S. organizations in judgments of the validity of measures of physician quality. ACP suggests that a single set of standards, such as those developed by the National Academy of Medicine for clinical practice guidelines, would allow others to evaluate the trustworthiness of performance measures before they are launched.

ACP believes the next generation of performance measurement should not be limited by the use of easy-to-obtain (e.g., administrative) data and should not function as a stand-alone, retrospective exercise.

"A possible solution is to have physicians with expertise in clinical medicine and research develop measures using clinically relevant methodology," Dr. Ende said. "Performance measures should be fully integrated into care delivery so they can help to address the most pressing performance gaps and direct quality improvement."

With more than 2,500 performance measures used inconsistently in various programs, ACP called for a "time out" to assess and revise the approach to assessment of physician performance.

Credit: 
American College of Physicians

Dual-class firms have higher market valuations near time of IPO that drop over next six years, study finds

Facebook, Google, Comcast and Berkshire Hathaway are among a number of large companies that have dual-class stock structures, providing controlling shareholders with majority voting power despite owning a minority of total equity.

For these dual-class firms, market valuations are higher early in their life cycles, while the valuation premium tends to disappear about six years after their IPOs, according to new research from the University of Notre Dame.

"The Life Cycle of Dual-Class Firms," released this month by the European Corporate Governance Institute, was co-authored by Martijn Cremers, Bernard J. Hank Professor of Finance in Notre Dame's Mendoza College of Business, along with Beni Lauterbach from Bar-Ilan University and Anete Pajuste from the Stockholm School of Economics in Latvia.

The team examined an extensive matched sample of U.S. dual- and single-class firms from 1980 to 2015 from the time of their IPO. The study found that around the time of the IPO, dual-class firms tend to have higher valuations than otherwise-similar single-class firms. However, their valuation premium dissipates over time and becomes insignificant about six years after the IPO. On the one hand, dual-class firms that start with a valuation premium when they are young tend not to have any valuation discount when they are mature. On the other hand, for dual-class firms with a valuation discount at the time of their IPO, this valuation discount tends to remain fairly similar over time, on average.

"Our evidence may have some regulatory implications, and can inform the debate regarding dual-class stock financing," Cremers says. "For policymakers, our finding that many dual-class firms have a valuation premium over single-class firms during the first few years after the IPO should provide some legitimacy to dual-class financing. This initial valuation premium suggests that dual-class stocks should not indiscriminately be excluded from stock exchanges or financial indices.

"On the other hand, we also provide evidence that for dual-class firms with an initial valuation discount, this discount seems to persist in the long term, suggesting their public shareholders and the firm itself may benefit from some form of a sunset clause of dual-class structures."

Credit: 
University of Notre Dame

New research seeks to optimize space travel efficiency

image: Koki Ho, assistant professor of aerospace engineering at the University of Illinois.

Image: 
University of Illinois Department of Aerospace Engineering

Sending a human into space and doing it efficiently presents a galaxy of challenges. Koki Ho, University of Illinois assistant professor in the Department of Aerospace Engineering, and his graduate students, Hao Chen and Bindu Jagannatha, explored ways to integrate the logistics of space travel by looking at a campaign of lunar missions, spacecraft design, and creating a framework to optimize fuel and other resources.

Ho said it's about finding a balance between time and the amount of fuel - getting there fast requires more fuel. If time isn't an issue, slow but efficient low-thrust propulsion might be a better choice. Taking advantage of this classical tradeoff, Ho noted that there are opportunities to minimize the launch mass and cost when looking at the problems from a campaign perspective--multiple launches/flights.
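
The time-versus-fuel tradeoff Ho describes follows from the classical Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0 / mf). The short Python sketch below only illustrates that tradeoff and is not the team's campaign-optimization model; the dry mass, delta-v requirement and specific-impulse values are round-number assumptions for a chemical (high-thrust) versus electric (low-thrust) stage.

```python
import math

def propellant_mass(dry_mass_kg, delta_v_ms, isp_s, g0=9.81):
    """Propellant needed for a given delta-v from the Tsiolkovsky rocket equation:
    delta_v = Isp * g0 * ln(m0 / mf), where m0 = dry mass + propellant and mf = dry mass."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * g0))
    return dry_mass_kg * (mass_ratio - 1.0)

dry_mass = 10_000.0   # kg, assumed payload + structure
delta_v = 4_000.0     # m/s, assumed transfer requirement

# Fast, high-thrust chemical stage (Isp ~ 450 s) vs. slow, low-thrust electric
# propulsion (Isp ~ 2000 s): same delta-v, very different propellant bills.
for label, isp in [("high-thrust chemical", 450.0), ("low-thrust electric", 2000.0)]:
    print(f"{label:>22}: {propellant_mass(dry_mass, delta_v, isp):,.0f} kg of propellant")
```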

"Our goal is to make space travel efficient," Ho said. "One way to do that is to consider campaign designs, that is, multiple missions together--not just launching everything from the ground for every mission like Apollo did. In a multi-mission campaign, previous missions are leveraged for subsequent missions. So if a previous mission deployed some infrastructure, such as a propellant depot, or if work had begun to mine oxygen from soil on the moon, those are used in the design of the next mission."

Ho used data from previously flown or planned missions to create simulated models of a combined campaign. The model can be modified to include heavier or lighter spacecraft, a specified set of destinations, the precise number of humans on board, etc., to validate his predictions about the efficiency.

"There are issues with the vehicle sizing," Ho said. "In our previous studies, in order to make the problem efficiently solvable, we had to use a simplified model for the vehicle and infrastructure sizing. So creating the model was fast, but the validity of the model wasn't as good as we desired."

In one of the current studies, Ho and his colleagues addressed the fidelity issue in these previous simplified models by creating a new method to consider more realistic mission and vehicle design models while maintaining the mission planning computational load at a reasonable level.

"In this research we are designing the vehicles from scratch so that the vehicle design can become part of the campaign design," Ho said. "For example, if we know we want to send a human into space Mars by the 2030s, we can design the vehicle and plan the multi-mission campaign to achieve the maximum efficiency and the minimum launch cost over the given time horizon."

Ho's research also incorporates the concept of propellant depots in space, like strategically located truck stops on a turnpike. He said it is an idea that has been tossed around for a while among scientists. "There are questions about how efficient the depots actually are," Ho said. "For example, if it takes the same amount of propellant or more just to deliver the depot, then what's the point of sending it ahead?"

Ho's studies provide one solution to this question by leveraging a combination of high-thrust and low-thrust propulsion systems.

"A preparatory mission might be conducted beforehand to deliver into orbit mini-space stations that store fuel, cargo, or other supplies," Ho said. "These craft can be pre-deployed so they are orbiting and available to a manned spacecraft that is deployed later. The cargo/fuel space craft can make use of low-thrust technologies because the time it takes to get to its destination isn't critical. Then for the manned spacecraft, we'd use high-thrust rockets because time is of the essence when putting humans in space. This also means that because the fuel is already at these space stations, the actual manned ship doesn't have to carry as much fuel."

Credit: 
University of Illinois Grainger College of Engineering

Study suggests social workers could help families navigate foreclosure

Community-based service professionals think that helping clients navigate a financial crisis--such as foreclosure--is a good idea.

We know that because researchers from the Jack, Joseph and Morton Mandel School of Applied Social Sciences at Case Western Reserve University asked them.

In a qualitative study, researchers focused on Cleveland service providers who shared how foreclosure affects their clients. The research was recently published in The Journal of Contemporary Social Services.

Service providers' observations of their clients' experiences yielded similar themes of foreclosure threatening children's education, family memories, sense of self and the desire to attain the American Dream--commonly represented by home ownership.

"Take that away and the results of losing that dream are devastating," said Elizabeth Anthony, a senior research associate for the Mandel School's Center on Urban Poverty and Community Development.

The research concludes that social workers have on-the-ground capability--and willingness--to mitigate impending financial calamity before it happens.

In addition, the study highlights that further training could only strengthen social workers' abilities to stave off foreclosure.

"Social coworkers could bolster services and support and connect people to the appropriate resources," said Anthony, noting that the 18 professionals working in community-based and governmental organizations charged with helping those experiencing foreclosure.

The housing crisis--which began in 2008 as a result of too much borrowing and flawed financial modeling--had a detrimental effect on the national economy and in cities across the country. That first year alone, an estimated 861,000 homeowners lost their properties to foreclosure.

"It goes without saying that home foreclosure had negative effects on children and families," Anthony said. "The loss of one's home fundamentally destabilizes a family unit and that has implications for school, work, community connections."

The impacts of the 2008 economic recession in general--and the real estate crisis in particular--worsened the situation for already struggling cities and families. Job losses and depressed home prices left many homeowners owing more than their homes were worth.

Additionally, the housing crisis highlighted seismic and widening gaps in racial equality, Anthony noted.

"The impact of foreclosure has been disproportionate in minority communities that were targeted for subprime loans," she said. "Certainly, we hope that there's never another foreclosure crisis, but if there is, we could have (social workers) trained in the complexities and bureaucracies of financial institutions."

The research suggests while foreclosure is devastating, families continue to strive toward future homeownership.

"In general, the American Dream represents an ethos that hard work and determination bring prosperity, upward social mobility, material comfort, and ultimately a sense of self, of who homeowners are in the world," said Anthony.

Credit: 
Case Western Reserve University

First long-term study finds half a trillion dollars spent on HIV/AIDS

SEATTLE - Spending on HIV/AIDS globally between 2000 and 2015 totaled more than half a trillion dollars, according to a new scientific study, the first comprehensive analysis of funding for the disease.

The total was $562.6 billion over the 16-year period. Annual spending peaked in 2013 with $49.7 billion. Two years later, $48.9 billion was provided for the care, treatment, and prevention of the disease.

"This research is an important initial step toward global disease-specific resource tracking, which makes new, policy-relevant analyses possible, including understanding the drivers of health spending growth," said Dr. Christopher Murray, director of the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. "We are quantifying spending gaps and evaluating the impact of expenditures."

Globally, governments were the largest source of spending on HIV/AIDS in 2015, contributing $29.8 billion or 61 percent of total spending on HIV/AIDS. Prepaid private spending was the smallest, making up only $1.4 billion of the 2015 total.

Development assistance for health (DAH), funding from high-income nations to support health efforts in lower-income ones, made up 0.5 percent of total health spending globally in 2015; DAH totaled 30 percent of all HIV/AIDS spending in 2015. Consider:

Not only does sub-Saharan Africa have the largest HIV-positive population (24.4 million in 2015), it also depends most substantially on DAH: 64 percent of HIV/AIDS spending in the region is DAH.

South Asia also has a high level of dependence on donor financing, with DAH making up 45 percent of spending on HIV/AIDS.

"Reliance on development assistance to fight HIV/AIDS in high-prevalence countries leaves them susceptible to fluctuations in the external resources available for HIV/AIDS," said IHME's Dr. Joseph Dieleman, lead author of the study. "Nations' HIV/AIDS programs are at risk for gaps in support and unrealized investment opportunities."

Overall health spending worldwide, which totaled $9.7 trillion in 2015, Dieleman said, continues to rise and outpaces economic growth in many countries, although 66 percent of this spending was in high-income countries. Low-income countries, which together make up 8.8 percent of the global population, represent less than 1 percent of health spending globally. Spending per capita in 2015 varied widely across countries, spanning from $28 per capita per year on health (Central African Republic) to nearly $10,000 (United States).

"With growth steady or accelerating, it is more important than ever to understand where resources for health go and how they align with health needs," he said.

The study, "Spending on health and HIV/AIDS: domestic health spending and development assistance in 188 countries, 1995-2015," was published today in the international medical journal The Lancet. Dieleman, Murray, and IHME researchers worked with the organization's health financing collaborative network, a group of 256 researchers in 63 countries.

Credit: 
Institute for Health Metrics and Evaluation

Opioid-related hospitalizations rising in Medicare patients without opioid prescriptions

GALVESTON, Texas - A 2014 federal change that limited the dispensing of hydrocodone products may be indirectly contributing to the illegal use of some of those drugs, a study by University of Texas Medical Branch researchers has found.

UTMB found that, while prescriptions for opioids went down among older Medicare recipients, opioid-related hospitalization of people who did not have a prescription for opioids went up.

In light of the growing opioid crisis, the U.S. Food and Drug Administration in October 2014 reclassified all hydrocodone products in a way that makes them more carefully controlled, including limiting prescriptions to a 30-day supply with no refills.

"Our team from The University of Texas Medical Branch at Galveston conducted the first study showing that the 2014 federal hydrocodone rescheduling policy was associated with decreased opiate use among the elderly," said Yong-Fang Kuo, professor in the department of preventive medicine and community health. "However, we also observed a 24 percent increase in opioid-related hospitalizations in Medicare patients without documented opioid prescriptions, which may represent an increase in illegal use."

Older adults are among the largest consumers of prescription opioids in the U.S. Compared with people holding commercial health insurance, Medicare enrollees are at least five times more likely to be diagnosed with opiate abuse and are also particularly vulnerable to toxic and other negative effects of opiate use.

The team also noted that they did not see decreased rates of opioid use among high-risk groups such as the disabled. Ten percent of the enrollees included in the analyses had initially become eligible for Medicare before age 65 because of disability. These people accounted for 25 percent of chronic opioid users and 40 percent of high-dose users in 2015.

"One explanation for the high rate of risky opioid use among disability-entitled enrollees is the group of risk factors linked with opioid misuse including mood disorders, cognitive disability and back pain-related disorders," Kuo said. "The continuing opioid epidemic despite state and federal actions highlights the need for people to continue supporting community-wide education on the risks and limitations of opioids, starting in medical and nursing schools, on safe opioid prescribing and how to recognize signs of opioid use disorder."

The UTMB research team conducted their analyses on a 20 percent national sample of Medicare enrollment and claims data from 2012 through 2015. The study is currently available in the new edition of the Journal of the American Geriatrics Society.

"As policy experts and medical professionals move forward in their search for the proper balance between pain control and opioid over-prescribing, it will be important to keep high-risk groups in mind when refining public policy and medical practice," said Kuo.

Credit: 
University of Texas Medical Branch at Galveston

New study finds number of people covered by universal health coverage will fall far short of SDG target

SEATTLE - An estimated 5.4 billion people globally are expected to be covered under some form of universal health coverage (UHC) by 2030, up from 4.3 billion in 2015, but far below the related target in United Nations Sustainable Development Goal 3, according to a new scientific study.

The study finds that, while health spending is expected to rise over the coming decades, it is likely to continue constraining efforts to achieve universal health coverage. The analysis was conducted by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington and published today in the international medical journal The Lancet.

"Our analysis emphasizes the need to ensure sufficient health financing for UHC in the era of the UN Sustainable Development Goals," said Dr. Christopher Murray, IHME's director. "We identified the correlation that a 10 percent increase of pooled resources, such as government health spending, prepaid private spending, and development assistance for health, equates to a 1.4 percent increase in universal health coverage."

Moreover, the study finds global health spending is expected to double over the next 20 years, from US$10 trillion in 2015 to $20 trillion in 2040; spending per person is expected to increase the most in middle-income countries.

It is estimated that per-person health spending in 2040 would range from a low of $40 in Central African Republic to $16,362 in the United States. Among four income groups, the breakdown is $8,666 per capita for high-income, $2,670 for upper-middle-income, $714 for lower-middle-income, and $190 for low-income countries in 2040.

Country-specific pooled spending levels are projected to range from $30 to $14,876 per person. The study notes this "magnitude of disparity could hinder progress on UHC for nations most in need."

Other findings include:

High-income countries are projected to spend more than 45 times more on health per person than low-income countries in 2040.

Per-person spending is projected to increase in 177 of 188 countries by 2040. Across income groups, the highest annual growth rates for total spending per person were estimated to occur in upper-middle- and lower-middle-income countries, with averages of 4.2% and 4.0%, respectively, over time.

Globally, out-of-pocket spending was estimated to increase the fastest, although governments are expected to remain the largest source of funding in 2040, with 61.3% of total health spending.

On average, high-income countries will have the largest projected pooled health spending - $7,508 per capita - in 2040. Conversely, sub-Saharan Africa and South Asia are expected to have the lowest projections for pooled health spending per person, $175 and $273, respectively, for 2040.

"Tracking countries' pooled resources for health, and understanding how resource trends can affect health service coverage, are important contributions to policy development and budgeting processes related to UHC," said IHME's Dr. Joseph Dieleman, lead author of the study, entitled, "Trends in future health financing and coverage: future health spending and universal health coverage in 188 countries, 2016-2040."

Dieleman, Murray, and IHME researchers worked with the organization's health financing collaborative network, a group of 256 researchers in 63 countries.

Credit: 
Institute for Health Metrics and Evaluation

350,000 stars' DNA interrogated in search for sun's lost siblings

video: Every astronomical object has a unique spectrum, or "rainbow fingerprint," that allows astronomers to determine its contents, age, formation history, movements through space, temperature and more!

The AAO continually builds innovative spectrographs for the 4-meter AAT and UK Schmidt Telescopes to collect spectral data of hundreds of thousands of stars and galaxies.

This video, first published in 2015, shows the path the light from distant astronomical objects follows through the telescope's optics in order to split the light up into its rainbow spectrum.

Image: 
AAO and Dr. Amanda Bauer

An Australian-led group of astronomers working with European collaborators has revealed the "DNA" of more than 340,000 stars in the Milky Way, which should help them find the siblings of the Sun, now scattered across the sky.

This is a major announcement from an ambitious Galactic Archaeology survey, called GALAH, launched in late 2013 as part of a quest to uncover the formation and evolution of galaxies. When complete, GALAH will investigate more than a million stars.

The GALAH survey used the HERMES spectrograph at the Australian Astronomical Observatory's (AAO) 3.9-metre Anglo-Australian Telescope near Coonabarabran, NSW, to collect spectra for the 340,000 stars.

The GALAH Survey today makes its first major public data release.

The 'DNA' collected traces the ancestry of stars, showing astronomers how the Universe went from having only hydrogen and helium - just after the Big Bang - to being filled today with all the elements we have here on Earth that are necessary for life.

"No other survey has been able to measure as many elements for as many stars as GALAH," said Dr Gayandhi De Silva, of the University of Sydney and AAO, the HERMES instrument scientist who oversaw the groups working on today's major data release.

"This data will enable such discoveries as the original star clusters of the Galaxy, including the Sun's birth cluster and solar siblings - there is no other dataset like this ever collected anywhere else in the world," Dr De Silva said.

Dr Sarah Martell from UNSW Sydney, who leads GALAH survey observations, explained that the Sun, like all stars, was born in a group or cluster of thousands of stars.

"Every star in that cluster will have the same chemical composition, or DNA - these clusters are quickly pulled apart by our Milky Way Galaxy and are now scattered across the sky," Dr Martell said.

"The GALAH team's aim is to make DNA matches between stars to find their long-lost sisters and brothers."

For each star, this DNA is the amount it contains of each of nearly two dozen chemical elements such as oxygen, aluminium, and iron.

Unfortunately, astronomers cannot collect the DNA of a star with a mouth swab but instead use the starlight, with a technique called spectroscopy.

The light from the star is collected by the telescope and then passed through an instrument called a spectrograph, which splits the light into detailed rainbows, or spectra.

Associate Professor Daniel Zucker, from Macquarie University and the AAO, said astronomers measured the locations and sizes of dark lines in the spectra to work out the amount of each element in a star.

"Each chemical element leaves a unique pattern of dark bands at specific wavelengths in these spectra, like fingerprints," he said.

Dr Jeffrey Simpson of the AAO said it takes about an hour to collect enough photons of light for each star, but "Thankfully, we can observe 360 stars at the same time using fibre optics," he added.

The GALAH team has spent more than 280 nights at the telescope since 2014 to collect all the data.

The GALAH survey is the brainchild of Professor Joss Bland-Hawthorn from the University of Sydney and the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) and Professor Ken Freeman of the Australian National University (ANU). It was conceived more than a decade ago as a way to unravel the history of our Milky Way galaxy; the HERMES instrument was designed and built by the AAO specifically for the GALAH survey.

Measuring the abundance of each chemical in so many stars is an enormous challenge. To do this, GALAH has developed sophisticated analysis techniques.

PhD student Sven Buder of the Max Planck Institute for Astronomy, Germany, who is lead author of the scientific article describing the GALAH data release, is part of the analysis effort of the project, working with PhD student Ly Duong and Professor Martin Asplund of ANU and ASTRO 3D.

Mr. Buder said: "We train [our computer code] The Cannon to recognize patterns in the spectra of a subset of stars that we have analysed very carefully, and then use The Cannon's machine learning algorithms to determine the amount of each element for all of the 340,000 stars." Ms. Duong noted that "The Cannon is named for Annie Jump Cannon, a pioneering American astronomer who classified the spectra of around 340,000 stars by eye over several decades a century ago - our code analyses that many stars in far greater detail in less than a day."

The GALAH survey's data release is timed to coincide with the huge release of data on 25 April from the European Gaia satellite, which has mapped more than 1.6 billion stars in the Milky Way - making it by far the biggest and most accurate atlas of the night sky to date.

In combination with velocities from GALAH, Gaia data will give not just the positions and distances of the stars, but also their motions within the Galaxy.

Professor Tomaz Zwitter (University of Ljubljana, Slovenia) said today's results from the GALAH survey would be crucial to interpreting the results from Gaia: "The accuracy of the velocities that we are achieving with GALAH is unprecedented for such a large survey."

Dr Sanjib Sharma from the University of Sydney concluded: "For the first time we'll be able to get a detailed understanding of the history of the Galaxy."

The ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) is a $40m Research Centre of Excellence funded by the Australian Research Council (ARC) and six collaborating Australian universities - The Australian National University, The University of Sydney, The University of Melbourne, Swinburne University of Technology, The University of Western Australia and Curtin University.

Credit: 
University of Sydney

Competition between males improves resilience against climate change

Animal species with males who compete intensively for mates might be more resilient to the effects of climate change, according to research by Queen Mary University of London.

Moths exposed to increasing temperatures were found to produce more eggs and have better offspring survival when the population had more males competing for mating opportunities (three males for every female).

The study, published in the journal Proceedings of the Royal Society B, suggests that sexual selection can provide a buffer against climate change and increase adaptation rates within a changing environment. This could improve understanding of how changing environments might affect animal species in both natural and agricultural systems.

PhD student and lead author Jon Parrett from Queen Mary's School of Biological and Chemical Sciences said: "Climate change is altering environments all over the world in a variety of ways, with increases in temperature of several degrees being likely in many places. It is vitally important that we understand how animal populations will respond to these changing environments. Our study is the first to look at how sexual selection affects an animal population's ability to respond to gradual increases in temperature."

"We found that moths were more likely to succeed in stressful environments of increasing temperature when there were more males competing for mating opportunities. This is because males who were best adapted to the new environment were more likely to be mated with, and these successful fathers passed on their 'good genes' to their offspring, aiding survival in the new environment."

Several populations of the Indian meal moth Plodia interpunctella were established with either a male-biased sex ratio of three males for every female (strong competition) or a female-biased sex ratio of one male for every three females (weak competition). The team then gradually increased the temperature at which the moths were reared, raising it by 2°C every other generation.

As temperature increased beyond the normal range for these animals, populations showed declines in the number of eggs produced per female and also in the survival of offspring to adulthood.

The populations kept with a male-biased sex ratio, however, were more resilient to increasing temperatures. Production of offspring and survival rates were still affected, but significantly less than in the female-biased populations.

The team extended the study by comparing females who were allowed to choose their mates with females who were only given a single option of a male to mate with. They found that when females were allowed to be choosy they also laid more eggs and had better offspring survival in the face of increasing temperatures.

These positive effects of sexual selection may, however, be too small to protect populations and delay extinction when environmental changes are relatively rapid.

Co-author Dr Rob Knell from Queen Mary's School of Biological and Chemical Sciences said: "We used a laboratory system for this research, but our conclusions are likely to be applicable to many animal species. Intense competition for mates is a feature of many well-known animals: rutting stags, displaying peacocks, male birds of paradise and singing male crickets are all trying to win the mating game.

"Our results indicate that these competitive mating systems can play an important role in determining the response to new environments, whereas species where there is less competition for mates are likely to be less able to adapt to new conditions."

The authors caution that the study is only a laboratory demonstration of the effect and more research is needed to fully understand how these effects might operate in natural systems.

Credit: 
Queen Mary University of London

A potential setback in the personalized medicine of cancer

image: RAS-less mouse stem cells do not grow, do not differentiate into other cell types and are not capable of forming tumors.

Image: 
Spanish National Cancer Research Centre (CNIO)

One of the most constant and exhaustive searches in cancer research is for a treatment aimed specifically at the Ras family of genes, the most common oncogenes and those that initiate many of the most lethal tumours. However, the results of this hypothetical treatment may be far less positive than speculated, according to a study published in the journal Genes & Development by the Genomic Instability Group at the Spanish National Cancer Research Centre (CNIO). The study shows how cells are capable of surviving even in the total absence of Ras genes if another gene, Erf, is also lost.

Discovered in 1982 by Mariano Barbacid's group, among others (Barbacid is also participating in this study), alterations in Ras genes were the first mutations described in cancer. This was a paradigm-shifting discovery, since it revealed for the first time that tumours are initiated by mutations in our own genes, thereby raising hope that if inhibitors for these mutated genes were created, cancer could be cured. "It is the basis of personalised medicine," explains Óscar Fernández-Capetillo, leader of this work.

Ras, the Holy Grail of targets to fight cancer

In addition to being the first oncogenes ever described, Ras genes carry the most common cancer mutations and initiate the most lethal tumours, such as those of the lung, pancreas or colon. Developing a RAS pharmacological inhibitor has therefore frequently been described as the search for the "Holy Grail" in the battle against cancer; billions have been invested in this effort, the most prominent example of which is 'The Ras Initiative', launched in 2013 by the US National Institutes of Health (NIH).

However, achieving an inhibitor of RAS proteins has proved complicated, due mainly to their three-dimensional structure, similar to a sphere, which makes it difficult to generate pharmaceuticals that inhibit their activity. As an alternative for treating these tumours, pharmaceuticals have been developed that attack other members of the Ras pathway, such as inhibitors of MEK, RAF, EGFR...

"Personalised medicine, in spite of being a good idea and having success stories, has its Achilles heel in the fact that tumours do not only have one mutation, but dozens or even hundreds of them, so while treatments generally work for a limited time, tumours invariably end up developing resistance due to another mutation", highlights Fernández-Capetillo.

Although therapies against the RAS pathway constitute an important part of current antitumoral strategies, the search for RAS inhibitors continues "despite the fact that it is not clear whether tumours will be able to develop resistance to these treatments", highlights Sergio Ruiz, co-leader of the study. "In our work, we show that it is even possible to develop teratomas (a type of germinal tumour) lacking all RAS genes, if the tumour also lacks ERF expression", he adds.

ERF loss rescues the effects of RAS deficiency

The main role of RAS proteins consists of translating external growth signals (nutrients, growth factors, etc.) into proliferating responses within the cell. When RAS proteins are eliminated in mouse stem cells, these remain in a sort-of suspended state: they do not grow, they do not differentiate into other cell types and are not capable of forming tumours.

Cristina Mayor-Ruiz, first author of the study, initially observed that certain tumour cells the authors were working with were capable of growing even in the absence of serum, if the gene Erf was also eliminated.

Fernández-Capetillo explains "For me, this discovery was the origin of the project as it made us speculate that if cells can grow with hardly any nutrients upon eliminating ERF, this could even allow the growth of RAS-free cells". This hypothesis turned out to be true: eliminating ERF allows mouse embryonic stem cells to grow, differentiate and even generate tumours in total absence of RAS genes.

The study also explains the mechanism by which ERF restricts the action of RAS proteins. In the absence of RAS, ERF is recruited to the regulating areas ("enhancers") of multiple genes, modulating their function, which ultimately limits cell growth. "ERF is a kind of brake that limits the consequences of RAS activation", indicates Cristina Mayor-Ruiz.

"The message is not good, but its knowledge is important for cancer research and the so frequently mentioned personalised medicine", sums up Fernández-Capetillo. "Although a perfect inhibitor of RAS is finally achieved, tumours may be capable of becoming resistant to the treatment accumulating mutations in genes like ERF". In fact, recent studies have found ERF mutations in cancer patients, indicating that such situation may indeed exist in the clinic. Therefore, Fernández-Capetillo's group is now exploring whether mutations in ERF can account for the resistance to personalised therapies against inhibitors of the RAS route.

Credit: 
Centro Nacional de Investigaciones Oncológicas (CNIO)

Army research rejuvenates older zinc batteries

image: An artist's conception of a zinc ion, fully surrounded by anions in the super-concentrated electrolyte, trying to break out of the binding and deposit on the zinc metal surface in a controlled and smooth manner.

Image: 
Photo Artwork by Eric Proctor

ADELPHI, MD. -- Army scientists, with a team of researchers from the University of Maryland and the National Institute of Standards and Technology, have created a water-based zinc battery that is simultaneously powerful, rechargeable and intrinsically safe.

The high-impact journal Nature Materials published a peer-reviewed paper based on this ground-breaking research April 16.

In prior achievements, these scientists invented a new class of water-based electrolytes that can work under extreme electrochemical conditions that ordinary water cannot, and have successfully applied it to different lithium-ion chemistries. In this work, they adapted the electrolyte to a battery chemistry much cheaper than lithium: zinc. They demonstrated that an aqueous battery can simultaneously satisfy the multi-faceted goals of high energy, high safety and low cost.

The world's very first battery, built in 1799, used zinc as the anode. In the following two centuries, many zinc-based batteries were commercialized, and some are still on the market. These batteries provided safe and reliable energy, although at moderate energy density, to satisfy our daily needs, but their presence in our lives has shrunk significantly since the emergence of lithium-ion batteries 28 years ago. Besides energy density, a major reason for the diminishing role of zinc batteries is the poor reversibility of zinc chemistry in aqueous electrolytes. Non-rechargeable batteries have already created a significant amount of landfill waste, imposing a serious environmental burden on industrialized societies.

"On the other hand, with increasing presence of lithium-ion batteries in our lives, from portable electronics to electric vehicles, their safety raises more public concerns, from Tesla car fires to the global grounding of the entire Boeing 787 fleet," explained Dr. Kang Xu, who is an ARL fellow and team leader, and co-corresponding authors of this paper. "The safety hazard of lithium-ion batteries are rooted in the highly flammable and toxic non-aqueous electrolytes used therein. The batteries of aqueous nature thus become attractive, if they can be made rechargeable with high energy densities. Zinc is a natural candidate."

The researchers said the new aqueous zinc battery could eventually be used not just in consumer electronics, but also in extreme conditions to improve the performance of safety-critical vehicles such as those used in aerospace, military and deep-ocean environments.

As an example of the aqueous zinc battery's power and safety, Fei Wang, a jointly appointed postdoctoral associate at UMD's Clark School and ARL, and first author of the paper, cites the numerous battery fire incidents in cell phones, laptops and electric cars highlighted in recent media coverage. The new aqueous zinc battery presented in this work could be the answer to the call for safe battery chemistry while still maintaining the comparable or even higher energy densities of conventional lithium-ion batteries.

"Water-based batteries could be crucial to preventing fires in electronics, but their energy storage and capacity have been limited -- until now. For the first time, we have a battery that could compete with the lithium-ion batteries in energy density, but without the risk of explosion or fire," Wang said.

This highly concentrated aqueous zinc battery also overcomes other disadvantages of conventional zinc batteries, such as the capacity to endure only limited recharging cycles, dendrite (tree-like structures of crystals) growth during usage and recharging, and sustained water consumption, resulting in the need to regularly replenish the batteries' electrolyte with water.

"Existing zinc batteries are safe and relatively inexpensive to produce, but they aren't perfect due to poor cycle life and low energy density. We overcome these challenges by using a water-in-salt electrolyte," says Chunsheng Wang, UMD professor of chemical and biomolecular engineering and corresponding author of the paper.

The research team says this battery technology advance lays the groundwork for further research, and they are hopeful for possible future commercialization.

"The significant discovery made in this work has touched the core problem of aqueous zinc batteries, and could impact other aqueous or non-aqueous multivalence cation chemistries that face similar challenges, such as magnesium and aluminum batteries", Xu said. "A much more difficult challenge is, of course, the reversibility of lithium metal, which faces similar but much more difficult challenges."

Resolving lithium-metal deposition could unlock the "Holy Grail" of all batteries, an area in which these scientists are working closely with scientists at the Department of Energy, he said.

Credit: 
U.S. Army Research Laboratory

Divorce and low socioeconomic status carry higher risk of second heart attack or stroke

Sophia Antipolis, 17 April 2018: Heart attack survivors who are divorced or have low socioeconomic status have a higher risk of a second attack, according to research from Karolinska Institutet, Stockholm, Sweden, published today in the European Journal of Preventive Cardiology, a European Society of Cardiology journal.

Previous studies have shown that low socioeconomic status is associated with a first heart attack, but it was not known whether those findings extend to heart attack survivors and their risk of a second event.

This study enrolled 29,226 one-year survivors of a first heart attack from the SWEDEHEART-registry and cross-referenced data from other national registries. Socioeconomic status was assessed by disposable household income (categorised by quintiles) and education level (nine years or less, 10-12 years, more than 12 years). Marital status (married, unmarried, divorced, widowed) was also recorded in the study.

Patients were followed up for an average of four years for the first recurrent event, which was defined as non-fatal heart attack, death from coronary heart disease, fatal stroke, or non-fatal stroke.

The study found that divorce and low socioeconomic status were significantly associated with a higher risk of a recurrent event. Each indicator was linked with recurrent events.

After adjusting for age, sex, and year of first heart attack, patients with more than 12 years of education had a 14% lower risk of a recurrent event than those with nine or fewer years of education. Patients in the highest household income quintile had a 35% lower risk than those in the lowest quintile.

Divorced patients had an 18% greater risk of a recurrent event than married patients.

Unmarried and widowed patients had higher rates of recurrent events than married patients, but the associations were not significant. Study author Dr Joel Ohm, a PhD student at Karolinska Institutet, said the proportions of unmarried and widowed patients in the study may have been too small for the link to be statistically significant. However, he said: "Marriage appears to be protective against recurrent events and aligns with traditional indicators of higher socioeconomic status, but conclusions on the underlying mechanisms cannot be drawn from this study."

In a subgroup analysis by sex, unmarried men were at higher risk of recurrence and unmarried women were at lower risk. "These findings should be interpreted cautiously," Dr Ohm warned, "This was a subgroup analysis and we cannot conclude that women are better off being single and that men should marry and not divorce. Unmarried women had a higher level of education compared to unmarried men, and this difference in socioeconomic status may be the underlying cause."

The subgroup analysis by sex also found that higher household income was associated with a lower risk of recurrent events in men, but there was no association in women. Dr Ohm said this could be due to the lower proportion of women in the study (27%), since the age cutoff for inclusion was 76 years and women are generally older than men when they have a first heart attack. In addition, the difference between the lowest and highest quintiles of household income is likely to be greater when men have a first heart attack because they and their spouse are still of working age.

The study did not investigate reasons for the association between socioeconomic status and recurrent events. Numerous factors that are difficult to measure may be involved, such as diet and exercise habits throughout life and even genetic factors. In theory, unequal access to healthcare and compliance with treatment regimes could play a role. Of these two, compliance appears to be a bigger issue, since most treatments were prescribed equally to all income groups and adjusting for treatment did not change the association between socioeconomic status and recurrent events.

"The take-home message from this study is that socioeconomic status is associated with recurrent events," said Dr Ohm, "No matter the reasons why, doctors should include marital and socioeconomic status when assessing a heart attack survivor's risk of a recurrent event. More intense treatment could then be targeted to high risk groups."

Credit: 
European Society of Cardiology