Culture

Exposure to second-hand e-cigarette aerosols increasing among young people

image: Andy Tan, PhD, MPH, MBA, MBBS, Assistant Professor, Division of Population Sciences, Center for Community-Based Research at Dana-Farber Cancer Institute

Image: 
Sam Ogden

BOSTON -- A growing number of middle- and high-school students are being exposed to second-hand aerosols from e-cigarettes by living with or being around individuals who are vaping, according to data from a national survey.

Such exposure increased rapidly in 2018 compared with the years 2015-2017, report scientists from Dana-Farber Cancer Institute in a research letter published August 28 in JAMA Network Open. The analysis used data from the National Youth Tobacco Survey, which is carried out under the auspices of the U.S. Centers for Disease Control and Prevention.

The survey revealed that about one-third of middle- and high-school students said they were exposed to vaping aerosols in 2018 - an increase of about 30% compared with the previous three years, said Andy S.L. Tan, PhD, MPH, MBBS, of Dana-Farber, the corresponding author of the report. He called the increase in exposure to vaping aerosols "concerning," given the array of potentially hazardous chemicals released by e-cigarettes. E-cigarette aerosols contain a variety of chemicals, including nicotine, heavy metals, aldehydes, glycerin, and flavoring substances. "The majority of studies have concluded that passive exposure may pose a health risk to bystanders, particularly vulnerable populations such as children and teens," Tan said.

While exposure to vaping aerosols is increasing, the survey also revealed that second-hand tobacco smoke from conventional cigarettes remains a serious public health concern, said Tan. About half the students in the survey reported exposure to second-hand tobacco smoke, which he said is much more harmful than emissions from e-cigarettes. "So we need to make sure that reducing exposure to second-hand smoke is still high on the agenda, along with policies to protect young people from all forms of second-hand exposures," he said. Tan is an investigator in the Center for Community-Based Research at Dana-Farber's Division of Population Sciences.

Participants in the survey were asked how often they breathed smoke from someone who was smoking tobacco products and/or breathed vapor from someone using an e-cigarette in indoor or outdoor public places in the last 30 days.

The prevalence of exposure to second-hand aerosols from e-cigarettes increased from about one in four students between 2015 and 2017 to one in three students in 2018. The increase is occurring, the report noted, even though 16 states and more than 800 municipalities have in the past few years introduced laws restricting e-cigarette use in 100% smoke-free venues and other locations, including schools.

The researchers added that "education about potential second-hand aerosol harms for parents and youth, and interventions to reduce youth vaping are needed to protect young people from being exposed to all forms of tobacco product emissions, including from e-cigarettes."

Credit: 
Dana-Farber Cancer Institute

Teen birth control use linked to depression risk in adulthood

Women who used oral contraceptives during adolescence are more likely to develop depression as adults, suggests new research from the University of British Columbia.

In a study published today in the Journal of Child Psychology and Psychiatry, researchers found that women who used birth control pills as teenagers were 1.7 to three times more likely to be clinically depressed in adulthood than women who started taking birth control pills as adults and women who had never taken them.

The study is the first to look at oral contraceptive use during adolescence and its link with women's long-term vulnerability to depression. Depression is the leading cause of disability and suicide deaths worldwide, and women are twice as likely as men to develop depression at some point in their lives.

"Our findings suggest that the use of oral contraceptives during adolescence may have an enduring effect on a woman's risk for depression--even years after she stops using them," said Christine Anderl, the study's first author and a UBC psychology postdoctoral fellow. "Adolescence is an important period for brain development. Previous animal studies have found that manipulating sex hormones, especially during important phases of brain development, can influence later behaviour in a way that is irreversible."

The researchers analyzed data from a population-representative survey of 1,236 women in the U.S. and controlled for a number of factors that have previously been proposed to explain the relationship between oral contraceptive use and depression risk. These include age at onset of menstruation, age of first sexual intercourse and current oral contraceptive use.

While the data clearly shows a relationship between birth control use during adolescence and increased depression risk in adulthood, the researchers note that it does not prove one causes the other.

"Millions of women worldwide use oral contraceptives, and they are particularly popular among teenagers," said Frances Chen, the study's senior author and UBC psychology associate professor. "While we strongly believe that providing women of all ages with access to effective methods of birth control is and should continue to be a major global health priority, we hope that our findings will promote more research on this topic, as well as more informed dialogue and decision-making about the prescription of hormonal birth control to adolescents."

The researchers are currently working on a prospective study to investigate how hormonal changes during adolescence can affect teenagers' emotions, social interactions and mental health. They are recruiting girls from age 13 to 15 to participate in the study, which will involve a series of lab tasks and the collection of saliva samples to measure hormone levels over three years.

Credit: 
University of British Columbia

Animal ethics and animal behavioral science -- bridging the gap

The moral status of animals is an important emerging topic for society, one that is leading to significant changes at academic, political, and legal levels in both wealthy and developing nations. However, some fields, such as animal behavioral science, have remained relatively aloof, despite producing evidence that is deeply enmeshed in animal ethics arguments.

Writing in the journal BioScience, an interdisciplinary group of scholars urges animal behavior scientists to position themselves more actively in the growing ethical conversation. The authors maintain that a greater integration between animal ethics and behavior communities will be valuable for both ethical and pragmatic reasons.

Coauthor Christine Webb, an animal behavior scientist at Harvard University, explains that "while it is commonplace for animal behavior scientists to emphasize the conservation implications of their work, other broader impacts related to the moral standing of animals are emphasized relatively less in their public outreach. However, scientists have a social responsibility to proactively engage with the ethical debates that are informed by their evidence." Highlighting how such engagement may benefit ethical theory and practice, coauthor Peter Woodford, an assistant professor of religion, science, and philosophy at Union College, said, "Greater scientific understanding of animals has raised, and will undeniably continue to raise, some of the most important questions for us today regarding the scope and nature of morality." Coauthor Elise Huchard, a behavioral ecologist at the University of Montpellier/CNRS, adds that more integration between science and philosophy in turn "may encourage scientists to embark on new research about the nature of animal minds, question the anthropocentric legacy of behavioral studies, and enrich other aspects of their scientific practices through a more careful consideration of animal interests and subjectivity."

Credit: 
American Institute of Biological Sciences

After 10-year search, scientists find second 'short sleep' gene

After a decade of searching, the UC San Francisco scientists who identified the only human gene known to promote "natural short sleep" -- lifelong, nightly sleep that lasts just four to six hours yet leaves individuals feeling fully rested -- have discovered a second.

"Before we identified the first short-sleep gene, people really weren't thinking about sleep duration in genetic terms," said Ying-Hui Fu, PhD, professor of neurology and a member of the UCSF Weill Institute for Neurosciences. Fu led the research teams that discovered both short sleep genes, the newest of which is described in a paper published August 28, 2019 in the journal Neuron.

According to Fu, many scientists once thought that certain sleep behaviors couldn't be studied genetically. "Sleep can be difficult to study using the tools of human genetics because people use alarms, coffee and pills to alter their natural sleep cycles," she said. These sleep disruptors, the thinking went, made it difficult for researchers to distinguish between people who naturally sleep for less than six hours and those who do so only with the aid of an artificial stimulant.

Natural short sleepers remained a mystery until 2009, when a study conducted by Fu's team discovered that people who had inherited a particular mutation in a gene called DEC2 averaged only 6.25 hours of sleep per night; study participants lacking the mutation averaged 8.06 hours. This finding provided the first conclusive evidence that natural short sleep is, at least in some cases, genetic. But this mutation is rare, so while it helped explain some natural short sleepers, it couldn't account for all of them.

"Sleep is complicated," said UCSF's Louis Ptáček, MD, the John C. Coleman Distinguished Professor in Neurodegenerative Diseases and co-senior author of the new study. "We didn't think there was just one gene or one region of the brain telling our bodies to sleep or wake." Ptáček and Fu reasoned that there had to be other, as yet undiscovered, causes of short sleep.

As the new study describes, a breakthrough came when the researchers identified a family that included three successive generations of natural short sleepers, none of whom harbored the DEC2 mutation. The researchers used gene sequencing and a technique known as linkage analysis, which helps scientists pinpoint the exact chromosomal location of mutations associated with a particular trait, to comb through the family's genome. Their efforts uncovered a single-letter mutation in a gene known as ADRB1 that, like the mutation in DEC2, was associated with natural short sleep.

Eager to understand how the newly discovered mutation might lead to short sleep, the researchers performed a series of experiments in lab-grown cells and in mice that had been genetically engineered to harbor an identical mutation in the mouse version of ADRB1.

The cell-based experiments revealed that the mutant form of the beta-1 adrenergic receptor -- the protein encoded by the ADRB1 gene, which plays a role in a variety of essential biological processes -- degrades more rapidly than the non-mutant version, suggesting that it might also function differently.

This hunch was confirmed in mouse experiments. The researchers discovered that the ADRB1 gene was highly expressed in the dorsal pons, a region of the brainstem involved in regulating sleep. Using a technique known as optogenetics, in which cells are modified so they can be activated by light, the researchers focused light on neurons in the pons to stimulate those in which ADRB1 was expressed. Triggering these neurons immediately roused sleeping mice -- specifically, those that were experiencing non-REM sleep, the sleep phase during which these neurons are not normally active -- demonstrating that these neurons promote wakefulness.

Additional experiments showed that wakefulness-promoting neurons in the pons with the mutated version of ADRB1 were more easily activated. Furthermore, the ratio of wakefulness-promoting to sleep-promoting neurons skewed heavily towards the former in mice with the ADRB1 mutation. These experiments suggest that the mutant form of ADRB1 promotes natural short sleep because it helps build brains that are easier to rouse and that stay awake longer.

Though they sleep less, natural short sleepers don't suffer any of the adverse health effects associated with sleep deprivation. "Today, most people are chronically sleep deprived. If you need eight to nine hours, but only sleep seven, you're sleep deprived," Fu said. "This has well-known, long-term health consequences. You're more likely to suffer from cardiovascular disease, cancer, dementia, metabolic problems and a weakened immune system."

But natural short sleepers actually seem to benefit from this quirk of their biology. Fu says researchers have found that short sleepers tend to be more optimistic, more energetic and better multitaskers. They also have a higher pain threshold, don't suffer from jet lag and some researchers believe they may even live longer. Though the exact reasons for these benefits remain unknown, Fu and Ptáček think their work represents an important step towards understanding the connection between good sleep and overall health.

"Natural short sleepers experience better sleep quality and sleep efficiency," Fu said. "By studying them, we hope to learn what makes for a good night's sleep, so that all of us can be better sleepers leading happier, healthier lives."

Credit: 
University of California - San Francisco

Hydrophobic silica colloid electrolyte holds promise for safer Li-O2 batteries

image: Schematic graphs and experimental data showing the lithium dendrite prevention effect and the anticorrosion effect of 10 wt% HSCE.

Image: 
ZHANG Xinbo

Traditional lithium-ion (Li-ion) batteries cannot satisfy the increasing demands of large-scale electricity consumption. Rechargeable aprotic lithium-oxygen (Li-O2) batteries have become promising candidates due to their ultrahigh theoretical energy density, which is about 10 times that of Li-ion batteries. Using lithium metal as the anode is one of the key factors in achieving such high specific capacity.

However, the use of a lithium metal anode inevitably raises serious safety issues because lithium dendrite growth can pierce the separator, causing a short circuit and fire. Furthermore, the semi-open structure and oxidizing environment of Li-O2 batteries cause more severe parasitic side reactions, hindering their development. It is therefore vital to find ways to effectively protect the lithium metal anode in Li-O2 batteries.

Recently, a research team led by ZHANG Xinbo from the Changchun Institute of Applied Chemistry (CIAC) of the Chinese Academy of Sciences developed an electrolyte regulation strategy by in situ coupling of CF3SO3- on hydrophobic silica colloidal particles via electrostatic interactions in order to prevent lithium dendrite growth and corrosion. These findings were published in Matter on August 28.

The researchers found that this strategy couples the anion to nanosilica via electrostatic interaction, thus avoiding the formation of a strong electric field during the lithium deposition process.

The low diffusion coefficient of the 10 wt% hydrophobic silica colloid electrolyte (HSCE), together with the hydrophobic nature of the silica, produced a 980-fold improvement in anticorrosion performance, greatly reducing lithium corrosion in Li-O2 batteries. Moreover, Li-O2 batteries using the 10 wt% HSCE showed stable, long-life electrochemical performance.

"We believe this comprehensive and effective protection strategy can spark more inspiration in electrolyte regulation methods, thus achieving better electrochemical performance," said ZHANG.

This study also provides an effective electrolyte regulation strategy to solve the dendrite and corrosion issues in alkali-O2 batteries and alkali-air batteries. These batteries have the potential for good electrochemical performance in practical applications and will help unlock the great potential of alkali metal anodes.

Credit: 
Chinese Academy of Sciences Headquarters

More rain yet less water expected for up to 250 million people along the Nile

image: An increase in precipitation in coming decades is counteracted by more hot and dry weather brought on by changes in the climate. On average, the result is a dramatic rise in unmet water demands for people in the region who rely on the Nile River.

Image: 
Richard Clark/Dartmouth College. Nile River map by Jacques Descloitres, NASA/GSFC

HANOVER, N.H. - August 28, 2019 - Hot and dry conditions coupled with increasing population will reduce the amount of water available for human, agricultural and ecological uses along the Nile River, according to a study from Dartmouth College.

The study, published in the AGU journal Earth's Future, shows that water scarcity is expected to worsen in coming decades even as climate models suggest more precipitation around the river's source in the Upper Nile Basin.

An increase in the frequency of hot and dry years could impact the water and food supplies for as many as 250 million people in the Upper Nile region alone toward the end of the century.

"Climate extremes impact people," said Ethan Coffel, a fellow at Dartmouth's Neukom Institute for Computational Science and lead author of the study. "This study doesn't only look at high-level changes in temperature or rainfall, it explains how those conditions will change life for real people."

The Upper Nile Basin is a chronically water-stressed region that includes western Ethiopia, South Sudan, and Uganda. Nearly all of the rain that feeds the Nile's northward flow to Egypt and the Mediterranean Sea falls in this area that is already home to 200 million people.

"It's hard to overstate the importance of the Nile, and the risk of increasing water insecurity in an already water-scarce place," said Justin Mankin, an assistant professor of geography at Dartmouth and senior researcher on the study. "The Nile has served as an oasis for water, food, commerce, transportation and energy for thousands of years. But we show that the river won't be able to consistently provide all of those competing services in coming decades."

Using a mix of available climate models, the study demonstrates that it is likely that the Upper Nile Basin will experience an increase in regional precipitation for the remainder of this century. The projected upward trend in precipitation comes as a result of increased atmospheric moisture normally associated with warming.

At the same time, the study finds that hot and dry years in the region have become more frequent over the past four decades. Despite some uncertainties in the models, this trend is projected to continue throughout the century with the frequency of hot and dry years as much as tripling even if warming is limited to only 2 degrees Celsius.

Further complicating conditions, population in the region is projected to nearly double by 2080 and will impose large additional demands on water resources.

As a result, the report finds that increased evaporation from higher temperatures, coupled with the doubling of runoff demand from a larger population, counteracts any projected increase in rainfall. The trend of increased precipitation will simply be too slow to result in significant changes in runoff over the time periods studied.
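
The arithmetic behind this is simple: a modest percentage gain in runoff is swamped when the population drawing on that runoff nearly doubles. The back-of-the-envelope sketch below, written in Python with hypothetical numbers rather than the study's projections, illustrates the effect:

    # Back-of-the-envelope illustration only: hypothetical numbers, not the study's data.
    runoff_today, runoff_future = 100.0, 110.0            # arbitrary units; supply rises 10%
    population_today, population_future = 200e6, 380e6    # people; population nearly doubles

    per_capita_today = runoff_today / population_today
    per_capita_future = runoff_future / population_future

    change = (per_capita_future - per_capita_today) / per_capita_today
    print(f"Per-capita water availability changes by {change:.0%}")  # roughly -42%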

"At first glance you would expect more rain to reduce scarcity, but not on the Nile. The dice are loaded for additional hot and dry years in the future, meaning increasing shocks to households because of crop yield declines and less water available for households to be resilient against warming temperatures," said Mankin.

According to the study, annual demand for water runoff from the Nile will regularly exceed supply by 2030, causing the percentage of the Upper Nile population expected to suffer from water scarcity to rise sharply. By 2080, the study estimates that as much as 65 percent of the regional population - 250 million people - could face chronic water scarcity during excessively hot and dry years.

Even during normal years, the researchers found, as many as 170 million people on average could encounter unmet demand annually by the latter part of the century. By comparison, fewer than 25 million people in the region are projected to suffer from water scarcity in 2020.

Most of the increase in water demand is expected to occur during a period of rapid population rise between 2020 and 2040.

"The Nile Basin is one of several fast growing, predominantly agricultural regions that is really on the brink of severe water scarcity. Climate change coupled with population growth will make it much harder to provide food and water for everyone in these areas. Those environmental stresses could easily contribute to migration and even conflict," said Coffel.

To confirm the connection between climate conditions in the region and impacts on the food supply, the researchers assessed agricultural yield data for six major crops in Ethiopia's food supply: maize, millet, barley, pulses, sorghum, and wheat.

While food shortages in the region are complex, and can result from a variety of factors such as governance and conflict, the study demonstrates that nearly all recent regional crop failures have occurred amid hot and dry conditions when water runoff is scarcer.

According to the paper, the frequency of hot and dry years that can cause poor crop yields is projected to increase from 10 percent to 15 percent, depending on modelling assumptions about climate and greenhouse gas emissions. The result is less water and less food for a growing population.

"We already have a global-scale picture of water scarcity, but that does not tell the story for people in any particular place. With this study, we are able to explain these changes in water scarcity and what that actually means for the millions of people who are going to be in water poverty. It is no longer just colors of basins on a map."

Researchers from Columbia University and the United States Military Academy contributed to this study.

Credit: 
Dartmouth College

Healthy foods more important than type of diet to reduce heart disease risk

BOSTON - Everyone knows that achieving or maintaining a healthy body weight is one key to preventing cardiovascular disease. But even experts don't agree on the best way to achieve that goal, with some recommending eliminating carbohydrates and others emphasizing reducing fats to lose weight. Few studies have investigated the effects of these specific macronutrients on cardiovascular health.

In a study published online in the International Journal of Cardiology, researchers at Beth Israel Deaconess Medical Center (BIDMC) examined the effects of three healthy diets emphasizing different macronutrients - carbohydrates, proteins, or unsaturated fats - on a biomarker that directly reflects heart injury. Using highly specific tests, the team found that all three diets reduced heart cell damage and inflammation, consistent with improved heart health.

"It's possible that macronutrients matter less than simply eating healthy foods," said corresponding author Stephen Juraschek, MD, PhD, Assistant Professor of Medicine at BIDMC and Harvard Medical School. "Our findings support flexibility in food selection for people attempting to eat a healthier diet and should make it easier. With the average American eating fewer than two servings of fruit and vegetables a day, the typical American diet is quite different from any of these diets, which all included at least four to six servings of fruits and vegetables a day."

Juraschek and colleagues analyzed stored blood samples from 150 participants in the Optimal Macronutrient Intake Trial to Prevent Heart Disease (OmniHeart), a two-center, inpatient feeding study conducted in Boston and Baltimore between April 2003 and June 2005. The average age of the study participants was 53.6 years; 55 percent were African American and 45 percent were women. The participants - all of whom had elevated blood pressure but were not yet taking medications to control hypertension or cholesterol - were fed each of three diets, emphasizing carbohydrates, protein, or unsaturated fat, for six weeks each, with feeding periods separated by a washout period.

The diets were: a carbohydrate-rich diet similar to the well-known DASH diet, with sugars, grains and starches accounting for more than half of its calories; a protein-rich diet with 10 percent of calories from carbohydrates replaced by protein; and an unsaturated fat-rich diet with 10 percent of calories from carbohydrates replaced by the healthy fats found in avocados, fish and nuts. All three diets were low in unhealthy saturated fat, cholesterol, and sodium, while providing other nutrients at recommended dietary levels. The research team looked at the effects of each diet on biomarkers measured at the end of each dietary period compared to baseline and compared between diets.

All three healthy diets reduced heart injury and inflammation, and did so quickly, within the six-week feeding period. However, changing the macronutrients of the diet did not provide extra benefits. This is important for two reasons: first, the effects of diet on heart injury are rapid, and cardiac injury can be reduced soon after adopting a healthy diet; second, it is not the type of diet that matters for cardiac injury (high or low fat, high or low carb), but rather the overall healthfulness of the diet.

"There are multiple debates about dietary carbs and fat, but the message from our data is clear: eating a balanced diet rich in fruits and vegetables, lean meats, and high in fiber that is restricted in red meats, sugary beverages, and sweets, will not only improve cardiovascular risk factors, but also reduce direct injury to the heart," said Juraschek. "Hopefully, these findings will resonate with adults as they shop in grocery stores and with health practitioners providing counsel in clinics throughout the country."

Credit: 
Beth Israel Deaconess Medical Center

New optical array, multisite stimulator advances optogenetics

image: A) Schematic of the integrated device with the glass Utah Optrode Array (UOA) bonded to a microLED array. Illumination is from the microLED, through the sapphire (which is bonded to the glass UOA) and delivered to tissue either by the glass needles or through interstitial sites. A pinhole layer (not shown) is patterned onto the sapphire substrate of the microLED array before bonding to reduce optical crosstalk. B) Completed device with polymer coated wire bonds permitting independent control of the microLEDs. C) Illumination of a single microLED delivering high-intensity light to the needle tip. D) All 100 sites simultaneously illuminated.

Image: 
Niall McAlinden et al.

BELLINGHAM, Washington, USA and CARDIFF, UK - In an article published in the peer-reviewed, open-access SPIE publication Neurophotonics, "A multi-site microLED optrode array for neural interfacing," researchers present an implantable optrode array capable of exciting neurons below the brain surface in large mammals in two ways: by structured-light delivery and by large-volume illumination. This development is promising for studies aiming to link neural activity to specific cognitive functions in large mammals.

While other optogenetic devices have been developed previously, this latest innovation addresses a number of ongoing challenges in optical stimulation and neuroscience in large animal models. Improvements include depth of access, heat control, and an electrical delivery system compatible with future wireless applications. This is achieved by connecting a glass needle array to a custom-built microLED array, creating a compact, lightweight device optimally suited for behavioral studies.

This new technology heralds exciting potential for new mammalian behavioral discoveries, according to Neurophotonics Associate Editor Anna Wang Roe, a professor in the Division of Neuroscience at the Oregon National Primate Research Center and director of the Interdisciplinary Institute of Neuroscience and Technology at Zhejiang University in Hangzhou, China. "The development of a large array multisite optical stimulator is a significant advance in the field of optogenetics and optical stimulation," she said. "Having this array will lead to the ability to stimulate multiple cortical sites simultaneously or sequentially, in a cell-type specific manner. It may also open up the possibility of stimulating with different wavelengths, and therefore different cell types, at selected locations. This capability broadens the menu of possible stimulation paradigms for brain-machine interface."

Credit: 
SPIE--International Society for Optics and Photonics

Gout 'more than doubles' risk of kidney failure, according to UL-led research

image: Professor Austin Stack, Foundation Chair of Medicine at UL GEMS, lead author of the study, Principal Investigator for the UL Kidney Health Consortium at the Health Research Institute, and Consultant Nephrologist at UL Hospitals.

Image: 
Press 22

Patients with gout are at increased risk of chronic kidney disease and kidney failure, according to new research led by the University of Limerick (UL), Ireland.

In one of the largest and most detailed studies ever conducted on the subject, led by researchers at University of Limerick's Graduate Entry Medical School (GEMS), patients recruited in general practice with a diagnosis of gout were more than twice as likely to develop kidney failure as those without.

The largest and most detailed study ever published on this subject used data from more than 620,000 patients in the UK health system.

It found that gout patients were also more likely to suffer a short-term deterioration in kidney function, as well as a sustained deterioration of function to less than 10% of normal, compared to patients without gout.

The researchers based their findings on results of a large UK-wide study that analysed data from the Clinical Practice Research Datalink (CPRD), a research database that collects clinical information on patients attending primary care centres across the UK.

The researchers analysed the risk of advanced chronic kidney disease (CKD) in 68,897 gout patients followed for an average of 3.7 years and compared them with 554,964 patients without gout.

"The results were quite astonishing," said Professor Austin Stack, Foundation Chair of Medicine at UL GEMS who is lead author of the study and Principal Investigator for the UL Kidney Health Consortium at the Health Research Institute and Consultant Nephrologist at UL Hospitals.

"While we always believed that high levels of uric acid might be bad for kidneys and that patients with gout may have a higher risk of kidney failure, we were quite surprised by the magnitude of the risk imposed by gout in these patients. We were particularly interested in the risk of advanced kidney disease, as these patients in general have a higher risk of kidney failure and death.

"In our analysis, we defined advanced kidney disease based on four specific criteria; need for dialysis or kidney transplant; failing kidney function to less than 10% of normal; doubling of serum creatinine from baseline; and death associated with CKD.

"Overall, we discovered that patients who suffered from gout had a 29% higher risk of advanced CKD compared to those without gout. Indeed when we analysed each of the components of advanced kidney disease, we found that in general gout patients were at higher risk of a deterioration in kidney function compared to those without.

"Astonishingly, when we looked at the risk of kidney failure and those who needed dialysis or a kidney transplant, we found that gouts patients had more than a 200% higher risk of kidney failure than those without gout," Professor Stack added of the study, which has just been published by medical journal BMJ Open.

The study sheds new light on the importance and potential impact of gout on kidney function. Although previous studies have shown that gout patients have a higher burden of kidney disease, none has convincingly shown that gout can contribute to the development of kidney failure.

"Our study had several important strengths that overcame the limitations of previous studies. It is one of the largest studies ever conducted with over 620,000 patients included," said Professor Stack.

"Second, the study was representative of patients that are typically seen in general practice within the UK health system. Third, the analysis accounted for known confounders - factors that may have contributed to the development or kidney disease like hypertension and diabetes - and our findings were further confirmed in several additional analysis. Taken together, the findings from this study suggest that gout is an independent risk factor for progression of CKD and kidney failure."

Gout, the most common inflammatory arthritis, causes severe pain and suffering due to a build-up of uric acid in the joints. It affects almost 2.5% of the adult population and causes significant pain and disability through its effects on joints, tendons and bone. Treatments that lower uric acid levels in the bloodstream are effective in preventing both acute flares of gout and the long-term damage it causes in joints; however, current evidence shows that gout remains poorly managed in the population.

CKD is a common chronic condition that affects around 15% of adults in the Irish health system and has a major impact on a person's health.

"Each year over 450 patients develop kidney failure in Ireland and require some form of dialysis treatment or a kidney transplant," explained Professor Stack.

"This continues to be the case despite our best efforts at controlling blood pressure m diabetes and other well established risk factors. In fact in over a decade, the numbers of patients who develop kidney failure in Ireland has increased from 2,848 in 2005 to 4,440 in 2017 (a growth of 56%).

"The result of this new research suggests that gout may also play an important role in the progression of kidney disease. The identification of gout as a potential risk factor opens up new opportunities for the prevention of kidney disease and its consequences," added Professor Stack.

Credit: 
University of Limerick

Autism rates increasing fastest among black, Hispanic youth

Autism rates among racial minorities in the United States have increased by double digits in recent years, with black rates now exceeding those of whites in most states and Hispanic rates growing faster than any other group, according to new University of Colorado Boulder research.

The study, published this month in the Journal of Autism and Developmental Disorders, also found that prevalence of autism among white youth is ticking up again, after flattening in the mid-2000s.

While some of the increase is due to more awareness and greater detection of the disorder among minority populations, other environmental factors are likely at play, the authors conclude.

"We found that rates among blacks and Hispanics are not only catching up to those of whites -- which have historically been higher -- but surpassing them," said lead author Cynthia Nevison, an atmospheric research scientist with the Institute of Arctic and Alpine Research. "These results suggest that additional factors beyond just catch-up may be involved."

For the study, Nevison teamed up with co-author Walter Zahorodny, an autism researcher and associate professor of pediatrics at Rutgers New Jersey Medical School, to analyze the most recent data available from the Individuals with Disabilities Education Act (IDEA) and the Autism and Developmental Disabilities Monitoring (ADDM) Network.

IDEA tracks prevalence, including information on race, among 3-to-5-year-olds across all 50 states annually. ADDM tracks prevalence among 8-year-olds in 11 states every two years.

The new study found that between birth year 2007 and 2013, autism rates among Hispanics age 3-5 rose 73%, while rates among blacks that age rose 44% and rates among whites rose 25%.

In 30 states, prevalence among blacks was higher than among whites by 2012.

In states with "high prevalence," 1 in 79 whites, 1 in 68 blacks and 1 in 83 Hispanics born in 2013 have been diagnosed with autism by age 3-5.

Other states like Colorado fell in a "low-prevalence" category, but the authors cautioned that differences between states likely reflect differences in how well cases are reported by age 3-5. They also said the real prevalence is substantially higher, as many children are not diagnosed until later in life.

"There is no doubt that autism prevalence has increased significantly over the past 10 to 20 years, and based on what we have seen from this larger, more recent dataset it will continue to increase among all race and ethnicity groups in the coming years," said Zahorodny.

In 2018, the Centers for Disease Control and Prevention reported that about 1 in 59 children of all races had been diagnosed with autism and that rates had risen 15 percent overall from the previous two-year reporting period, largely due to better outreach and diagnosis among historically underdiagnosed minority populations.

"Our data contradict the assertion that these increases are mainly due to better awareness among minority children," said Zahorodny. "If the minority rates are exceeding the white rates that implies some difference in risk factor, either greater exposure to something in the environment or another trigger."

Established risk factors associated with autism include advanced parental age, challenges to the immune system during pregnancy, genetic mutations, premature birth and being a twin or multiple.

The authors said that, based on current research, they cannot pinpoint what other environmental exposures might be factoring into the increases in autism. But they would like to see more research done in the field.

Credit: 
University of Colorado at Boulder

The role of a single molecule in obesity

image: University of Houston assistant professor of biology Michihisa Umetani, left, and biology doctoral student Arvand Asghari in the lab. Long-term applications of their findings could lead to a treatment that results in reduced capacity for making fat.

Image: 
University of Houston

A single cholesterol-derived molecule, called 27-hydroxycholesterol (27HC), lurks inside your bloodstream and will increase your body fat, even if you don't eat a diet filled with red meat and fried food. That kind of diet, however, will increase the levels of 27HC and body weight.

"We found 27HC directly affects white adipose (fat) tissue and increases body fat, even without eating the diet that increases body fat," reports University of Houston assistant professor of biology Michihisa Umetani in the journal Endocrinology. First author of the paper, doctoral student Arvand Asghari, adds, "But it does need some help from the diet to increase body weight because it expands the capacity of the fat already in the body."

Long-term applications of the findings could lead researchers to a treatment that reduces the levels of 27HC, which could result in reduced capacity for making fat. "We hope to develop a new therapeutic approach toward modulating 27HC levels to treat cholesterol and/or estrogen receptor-mediated diseases such as cardiovascular diseases, osteoporosis, cancer and metabolic diseases," said Umetani, whose lab is part of the UH Center for Nuclear Receptors and Cell Signaling.

Prior to this research, 27HC was known as an abundant cholesterol metabolite, and Umetani's group had previously reported its detrimental effects on the cardiovascular system, but its impact on obesity was not well understood.

Role of estrogen receptors

Obesity is one of the main risk factors influencing cardiovascular disease worldwide in both men and women and estrogen plays a role in both sexes. Menopause in females, with its accompanying decrease in estrogen, seems to hasten the increase in fat tissue because estrogen protects against adiposity and body weight gain. In men, estrogens are also synthesized locally by conversion of testosterone, so they may also play important roles in the development of fat tissues in males.

"Estrogen receptors (ERα and ERβ) are members of the nuclear receptor superfamily and are present in adipocytes," said Umetani. "Patients with a non-functional ERα are obese, and those that do not have ERα have increased fat tissue even when they eat the same amount of food, indicating that ERα is the important isoform in the regulation of adipose tissue by estrogen."

The main function of 27HC in the liver is to reduce excess cholesterol. Previously, Umetani discovered that 27HC binds to estrogen receptors and acts as an inhibitor of ER action in the vasculature. It turned out that the effects of 27HC are tissue-specific; thus 27HC is the first naturally produced selective estrogen receptor modulator, or SERM, to be identified.

Credit: 
University of Houston

Busy older stars outpace stellar youngsters, new study shows

The oldest stars in our Galaxy are also the busiest, moving more rapidly than their younger counterparts in and out of the disk of the Milky Way, according to new analysis carried out at the University of Birmingham.

The findings provide fresh insights into the history of our Galaxy and increase our understanding of how stars form and evolve.

Researchers calculate that the old stars are moving more quickly in and out of the disc - the pancake-shaped mass at the heart of the Galaxy where most stars are located.

A number of theories could explain this movement - it all depends where the star is in the disc. Stars towards the outskirts could be knocked by gravitational interactions with smaller galaxies passing by. Towards the inner parts of the disc, the stars could be disturbed by massive gas clouds which move along with the stars inside the disc. They could also be thrown out of the disc by the movement of its spiral structure.

Dr Ted Mackereth, a galactic archaeologist at the University of Birmingham, is lead author on the paper. He explains: "The specific way that the stars move tells us which of these processes has been dominant in forming the disc we see today. We think older stars are more active because they have been around the longest, and because they were formed during a period when the Galaxy was a bit more violent, with lots of star formation happening and lots of disturbance from gas and smaller satellite galaxies. There are lots of different processes at work, and untangling all these helps us to build up a picture of the history of our Galaxy."

The study uses data from the Gaia satellite, currently working to chart the movements of around 1 billion stars in the Milky Way. It also takes information from APOGEE, an experiment run by the Sloan Digital Sky Survey that uses spectroscopy to measure the distribution of elements in stars, as well as images from the recently-retired Kepler space telescope.

Measurements provided by Kepler show how the brightness of stars varies over time, which gives insights into how they vibrate. In turn, that yields information about their interior structure, which enables scientists to calculate their age.

The Birmingham team, working with colleagues at the University of Toronto and teams involved with the Sloan Digital Sky Survey, were able to take these different data strands and calculate the differences in velocity between different sets of stars grouped by age.

They found that the older stars were moving in many different directions with some moving very quickly out from the galactic disk. Younger stars move closely together at much slower speeds out from the disc, although they are faster than the older stars as they rotate around the Galaxy within the disc.

The eventual goal of the research is to link what is known about the Milky Way with information about how other galaxies in the universe formed, ultimately being able to place our Galaxy within the very earliest signatures of the universe.

Credit: 
University of Birmingham

Microbiota in home indoor air may protect children from asthma

Large amounts of a certain type of bacteria, most likely originating outdoors, may reduce a child's risk of developing asthma. This was shown in a new study from the Finnish Institute for Health and Welfare (THL) that analysed the microbiota in over 400 Finnish homes.

However, the study was unable to identify individual bacterial taxa that provide protection against asthma.

It remains unclear why exposure to microbes protects against asthma. Earlier studies have found that high diversity of microbes is of particular importance in protecting against asthma.

A THL study published in June also showed that a farm-like microbiota in the child's home protected against asthma even in urban homes.

Finns spend 90% of their time indoors - contact with natural microbiota has decreased

On average, Finns spend 90% of their time indoors, increasingly in urban environments. This means less contact with natural microbiota. The diversity of bacteria protects against asthma, but certain soil microbes protect even more effectively.

"In this study, we identified certain groups of bacteria found in soil that protect against asthma. These groups of bacteria provided more effective protection against asthma than the previously observed diversity of microbiota," says Anne Karvonen, Senior Researcher at THL.

"If we want to develop products that protect against asthma, such as microbes that you can bring home or place on the skin, it would have been helpful to identify individual asthma-protective bacteria. However, our results help to restrict the bacteria that should be studied more."

Increased contact with nature is beneficial.

"We could explore nature with children more often and play in the nature instead of urban playgrounds covered with rubber. With regard to microbial exposure, it is important to have contact with nature in our everyday lives," says Karvonen.

Credit: 
Finnish Institute for Health and Welfare

Global study reveals most popular marketing metrics

Satisfaction is the most popular metric for marketing decisions around the world, according to a new study from the University of Technology Sydney (UTS) Business School.

Satisfaction measures how satisfied a company's customers are with a company and its product or service. It was the most used metric in eight of the 16 countries studied, and was employed in 53% of all marketing-mix decisions analysed.

To reach these findings, researchers analysed more than 4,000 marketing plans from over 1,600 companies in 16 countries, including Australia, the US, Russia, India, the UK and China.

"Despite trillions spent on marketing globally, managers have said consistently over the last couple of decades that one of the most difficult activities is demonstrating the impact of their marketing actions," says UTS Business School lead researcher Dr Ofer Mintz.

"We wanted to know what metrics managers are using globally, what drives metric use, including cultural influences, and how many metrics managers are using. In today's digital technology-intensive and data-rich environment, it is important for managers to know which metrics count."

The other two most popular metrics to help determine marketing strategy were Awareness and Return on Investment (ROI).

Awareness measures how many people recognise a company, brand, product or service, and was used in 45% of plans.

This was followed by ROI, which measures revenue generated per dollar spent on marketing and was used in 43% of plans.

Other popular metrics included Target Volume, Likeability, and Net Profit.

The study, just published in the Journal of International Business Studies, also found a significant and positive relationship between a company's total metric use and its marketing performance, in each of the 16 countries.

"We found the greater a manager's overall use of quantitative information or metrics when making decisions, the better the performance, accuracy, and overall quality of decisions. It also leads to greater CEO satisfaction, and increased profits and shareholder value," says Dr Mintz.

"Metrics provide information to help managers diagnose, coordinate, and monitor their actions. They also quantify trends or outcomes, reveal current relationships, and help predict the results of future actions," he says.

The study identified 84 different marketing and financial metrics in use, with managers employing on average around nine metrics per marketing-mix decision.

Country and organisational culture also had an impact on the types of analytics managers used.

South Korea, China and India had the highest use of metrics, and Japan, France and the US had the lowest use. Managers residing in countries with a lower tolerance for uncertainty and ambiguity employed significantly more metrics for their marketing decisions.

Other areas of national cultural difference that had an impact on metric use included collectivism, assertiveness, power distance, and future orientation. The study also identified organisational characteristics that drove metric use; however, the result was counterintuitive.

"Rigid organisational cultures were less effective than more organic, free-flowing cultures where there was flexibility for managers to exchange ideas and choose their own metrics, rather than focusing on a strict set of instructions," says Dr Mintz.

"It's important for managers to understand the different drivers for metric use, both cultural and organisational. It is also useful to know what metrics other managers are using, and how many they are using, as it provides a benchmark for their own marketing-mix."

Credit: 
University of Technology Sydney

Using artificial intelligence to track birds' dark-of-night migrations

image: Map colors indicate estimates of migration traffic from measurements at 143 radar stations. Locations are indicated by white circles, with size proportional to migration traffic at the station. The central corridor of the U.S. receives particularly intense migration. Estimated numbers of birds crossing different latitudes are shown to the right.

Image: 
Courtesy of the authors and Kyle G. Horton

On many evenings during spring and fall migration, tens of millions of birds take flight at sunset and pass over our heads, unseen in the night sky. Though these flights have been recorded for decades by the National Weather Service's network of constantly scanning weather radars, until recently these data have been mostly out of reach for bird researchers.

That's because the sheer magnitude of information and lack of tools to analyze it made only limited studies possible, says artificial intelligence (AI) researcher Dan Sheldon at the University of Massachusetts Amherst.

Ornithologists and ecologists with the time and expertise to analyze individual radar images could clearly see patterns that allowed them to discriminate precipitation from birds and study migration, he adds. But the massive amount of information - over 200 million images and hundreds of terabytes of data - significantly limited their ability to sample enough nights, over enough years and in enough locations to be useful in characterizing, let alone tracking, seasonal, continent-wide migrations, he explains.

Clearly, a machine learning system was needed, Sheldon notes, "to remove the rain and keep the birds."

Now, with colleagues from the Cornell Lab of Ornithology and others, senior authors Sheldon and Subhransu Maji and lead author Tsung-Yu Lin at UMass's College of Information and Computer Sciences unveil their new tool "MistNet." In Sheldon's words, it's the "latest and greatest in machine learning" to extract bird data from the radar record and to take advantage of the treasure trove of bird migration information in the decades-long radar data archives. The tool's name refers to the fine, almost invisible, "mist nets" that ornithologists use to capture migratory songbirds.

MistNet can "automate the processing of a massive data set that has measured bird migration over the continental U.S. for over two decades," Sheldon says. "This is a really important advance. Our results are excellent compared with humans working by hand. It allows us to go from limited 20th-century insights to 21st-century knowledge and conservation action." He and co-authors point out, "Deep learning has revolutionized the ability of computers to mimic humans in solving similar recognition tasks for images, video and audio."

For this work, supported in part by a National Science Foundation grant to Sheldon to design and test new mathematical approaches and algorithms for such applications, the team conducted a large-scale validation of MistNet and competing approaches using two evaluation data sets. Their new paper also presents several case studies to illustrate MistNet's strengths and flexibility. Details appear in the current issue of Methods in Ecology and Evolution.

MistNet is based on neural networks for images and includes several architecture components tailored to the unique characteristics of radar data, the authors point out. Radar ornithology is advancing rapidly and leading to significant discoveries about continent-scale patterns of bird movements, they add.
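
To make the idea concrete, the sketch below shows a generic per-pixel segmentation network in PyTorch of the kind used to separate precipitation from biological scatterers in radar scans. It is an illustration only, not the authors' published MistNet architecture; the layer design, input channels, class labels and image size here are all hypothetical.

    # Illustrative sketch only: a generic per-pixel segmentation model, NOT the
    # published MistNet architecture. Channel counts, classes and input size are
    # hypothetical.
    import torch
    import torch.nn as nn

    class RadarSegmenter(nn.Module):
        def __init__(self, in_channels=3, n_classes=3):
            # in_channels: a hypothetical stack of radar products (e.g., reflectivity
            # at several elevation angles); n_classes: background, precipitation, biology
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_classes, 1),  # per-pixel class scores
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Classify each pixel of a (fake) radar scan, then keep only the "biology" pixels
    # for downstream migration-traffic estimates ("remove the rain and keep the birds").
    model = RadarSegmenter()
    scan = torch.randn(1, 3, 600, 600)      # hypothetical input: one 600x600 scan
    labels = model(scan).argmax(dim=1)      # 0 = background, 1 = precipitation, 2 = biology
    bird_mask = (labels == 2)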

The team made maps of where and when migration occurred over the past 24 years and animated these to illustrate, for example, "the most intensive migration areas in the continental United States," Sheldon explains - a corridor roughly along and just west of the Mississippi River. MistNet also allows researchers to estimate flying velocity and traffic rates of migrating birds.

MistNet, designed to address one of the "long-standing challenges in radar aero-ecology," comes just in time to help scientists better use not only existing weather radar data but also the "explosion" of large new data sets generated by citizen science projects such as eBird, animal tracking devices and earth observation instruments, say Sheldon and colleagues.

"We hope MistNet will enable a range of science and conservation applications. For example, we see in many places that a large amount of migration is concentrated on a few nights of the season," Sheldon says. "Knowing this, maybe we could aid birds by turning off skyscraper lights on those nights." Another question the ornithologists are interested in is the historical timing, or phenology, of bird migration and whether it, and timely access to food, have shifted with climate change.

Credit: 
University of Massachusetts Amherst