
Alarming number of heart infections tied to opioid epidemic

DALLAS, Sept. 18, 2019 -- An alarming number of people nationwide are developing infections of either the heart's inner lining or valves, known as infective endocarditis, in large part due to the current opioid epidemic. This new trend predominantly affects young, white, poor men who also have higher rates of HIV, hepatitis C and alcohol abuse, according to new research published in the Journal of the American Heart Association, the open access journal of the American Heart Association.

Infective endocarditis occurs when bacteria or fungi in the bloodstream enter the heart's inner lining or valves. Nearly 34,000 people receive treatment for this condition each year, and approximately 20% of them die. One of the major risk factors for infective endocarditis is drug abuse.

"Infective endocarditis related to drug abuse is a nationwide epidemic," said the study's senior author Serge C. Harb, M.D., assistant professor of medicine at Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio. "These patients are among the most vulnerable--young and poor, and also frequently have HIV, hepatitis C and alcohol abuse."

Researchers analyzed data in the National Inpatient Sample registry from 2002-2016 on nearly one million hospitalized patients diagnosed with infective endocarditis to compare patients with heart infections related to drug abuse to those with heart infections from other causes. The registry is the largest publicly available database of U.S. hospitalizations.

During the 14 years studied, researchers found that the proportion of heart infections related to drug abuse nearly doubled in the United States, from 8% to 16%. All geographic regions saw increases, with the steepest rise in the Midwest at nearly 5% per year.
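As a quick, illustrative check (not the study's own trend methodology, so it need not match the region-level estimates), a doubling of the drug-abuse share over the 14-year window implies a compound annual growth rate of roughly 5% in that share:

```python
# Back-of-the-envelope: if the share of endocarditis cases tied to drug abuse
# doubled from 8% to 16% over the 14-year study window, the implied compound
# annual growth rate (CAGR) of that share is:
start_share, end_share, years = 0.08, 0.16, 14
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"implied growth of the drug-abuse share: {cagr:.1%} per year")  # ~5.1%
```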

They also found those with infective endocarditis related to drug abuse:

Were predominantly young, white men (median age 38 years old);

Were poorer, with nearly 42% having a median household income in the lowest national quartile, and about 45% covered by Medicaid;

Had higher rates of HIV, hepatitis C and alcohol abuse compared to patients with infective endocarditis who were not drug abusers;

Had longer hospital stays and higher health care costs; and

Were more likely to undergo heart surgery, yet less likely to die while hospitalized. Lower death rates are likely due to their significantly younger age.

"Nationwide public health measures need to be implemented to address this epidemic, with targeted regional programs to specifically support patients at increased risk," Dr. Harb said. "Specialized teams, including but not limited to cardiologists, infectious disease specialists, cardiac surgeons, nurses, addiction specialists, case managers and social workers, are needed to care for these patients. Appropriately treating the cardiovascular infection is only one part of the management plan. Helping these patients address their addictive behaviors with social supports and effective rehabilitation programs is central to improving their health and preventing drug abuse relapses."

Diseases that occurred more frequently among patients with heart infections from causes other than drug abuse included high blood pressure, diabetes, heart failure, kidney disease and lung disease. A study limitation is that the registry relies solely on diagnostic codes (ICD codes) and does not include hospital transfers. Another limitation is that the registry provided only general information by region, without details specific to states, cities and rural towns.

Credit: 
American Heart Association

Microbe chews through PFAS and other tough contaminants

image: In a series of lab tests, a relatively common soil bacterium has demonstrated its ability to break down the difficult-to-remove class of pollutants called PFAS.

Image: 
David Kelly Crow

In a series of lab tests, a relatively common soil bacterium has demonstrated its ability to break down the difficult-to-remove class of pollutants called PFAS, researchers at Princeton University said.

The bacterium, Acidimicrobium bacterium A6, removed 60% of PFAS, specifically perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS), in lab vials over 100 days of observation, the researchers reported in a Sept. 18 article in the journal Environmental Science and Technology. Because of health concerns and the chemicals' ubiquity, the EPA has recently opened a research effort into their impact on drinking water. Peter Jaffe, the lead researcher and a professor of civil and environmental engineering at Princeton, said the researchers were very encouraged to see these bacteria substantially degrade the famously recalcitrant class of chemicals, but cautioned that more work is needed before a workable treatment is reached.

"This is a proof of concept," said Jaffe, the William L. Knapp '47 Professor of Civil Engineering. "We would like to get the removal higher, and then go and test it in the field."

PFAS (per- and polyfluoroalkyl substances) have been widely used in products from non-stick pans to firefighting foam, and the Environmental Protection Agency has said there is evidence that exposure to PFAS is harmful to human health. Because of this, U.S. manufacturers have phased out several versions of PFAS in their products. But these substances are long-lived and extremely difficult to remove from soil and groundwater. In recent years, local governments have been seeking ways to reduce the amount of PFAS in water supplies.

Because of the strength of the carbon-fluorine bond, these chemicals are extremely difficult to remove through conventional means. But Jaffe and co-researcher Shan Huang, an associate research scholar at Princeton, suspected that Acidimicrobium A6 might be an effective remedy.

The researchers began working with the bacteria several years ago when they investigated a phenomenon in which ammonium broke down in acidic, iron-rich soils in New Jersey wetlands and similar locations. Because removing ammonium is a critical part of sewage treatment, the researchers wanted to understand what was behind the process, called Feammox. In their initial research in 2013, Jaffe and fellow researchers removed soil samples from the Assunpink wetland outside Trenton. They cultivated the samples in the lab with an eye to identifying the microorganisms responsible for the Feammox process. The researchers learned that the Feammox reaction occurred in the presence of Acidimicrobium A6, but it took several years of painstaking work to isolate the organism and grow it as a pure culture.

One challenge in working with Acidimicrobium A6 is the bacterium's demand for iron both to grow and eliminate compounds like ammonium. Jaffe, along with graduate students Weitao Shuai and Melany Ruiz, now a post-doctoral researcher at Rutgers, determined that they could substitute an electrical anode for the iron in lab reactors. This allowed the researchers to more easily grow these bacteria and work with them; it also presented a possible way to develop reactors for remediation in the absence of iron.

When they sequenced the Acidimicrobium A6 genome, the researchers noticed certain characteristics that opened the possibility that the bacterium could be effective in removing PFAS.

"We knew this was a big environmental challenge, to find an organism that could degrade these perfluorinated organics," Jaffe said.

To test their hypothesis, the researchers sealed samples of Acidimicrobium A6 in lab containers and then tested the bacteria's ability to break down the compounds in lab reactors.

After 100 days, the researchers stopped the test and determined that the bacteria had removed 60 percent of the contaminants and released an equivalent amount of fluoride in the process. Jaffe said the 100 day period was an arbitrary length selected for the experiment, and that longer incubations might result in more PFAS removal. The researchers also plan to vary conditions in the reactor to find the optimum conditions for PFAS removal.
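Jaffe's point that longer incubations might remove more PFAS can be made concrete with a simple kinetic sketch. Assuming (purely for illustration; the paper does not claim this rate law) that removal follows first-order kinetics, the 60% removal in 100 days fixes a rate constant from which longer runs can be extrapolated:

```python
import math

# Hypothetical first-order model of PFAS removal, fitted to the single
# reported data point: 60% removed after 100 days.
removed, days = 0.60, 100
k = -math.log(1 - removed) / days          # per-day rate constant
half_life = math.log(2) / k                # days to remove 50% of remaining PFAS
removed_200d = 1 - math.exp(-k * 200)      # projected removal after 200 days
print(f"k = {k:.4f}/day, half-life = {half_life:.0f} days, "
      f"200-day projection = {removed_200d:.0%}")
```

Under this (assumed) model, a 200-day incubation would remove 84% of the contaminants, which is consistent with the researchers' expectation that longer runs improve removal; real microbial kinetics may of course deviate from first order.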

Acidimicrobium A6 thrives in low-oxygen conditions, which makes it particularly effective for soil and groundwater remediation and allows it to function without expensive aeration. However, these bacteria also require iron and acidic soil conditions. Jaffe said this could limit their deployment, but adjusting soil conditions could also allow the bacteria to function in areas that do not naturally meet these requirements. Noting previous work on ammonium reduction by Acidimicrobium A6 in soil columns, constructed wetlands, and electrochemical reactors, Jaffe said the researchers believe the same could be done for PFAS remediation.

Jaffe said the researchers are also working with Mohammad R. Seyedsayamdost, an associate professor of chemistry, and colleagues in the chemistry department to better understand the enzymes involved in the defluorination process. Characterizing those enzymes could provide insights that increase effectiveness in remediation.

Credit: 
Princeton University, Engineering School

Scientists construct energy production unit for a synthetic cell

image: This is an artist's impression of a synthetic cell, with the ATP production system in green.

Image: 
Bert Poolman / BaSyC consortium

Scientists at the University of Groningen have constructed synthetic vesicles in which ATP, the main energy carrier in living cells, is produced. The vesicles use the ATP to maintain their volume and their ionic strength homeostasis. This metabolic network will eventually be used in the creation of synthetic cells - but it can already be used to study ATP-dependent processes. The researchers described the synthetic system in an article that was published in Nature Communications on 18 September.

'Our aim is the bottom-up construction of a synthetic cell that can sustain itself and that can grow and divide,' explains University of Groningen Professor of Biochemistry Bert Poolman. He is part of a Dutch consortium that obtained a Gravitation grant in 2017 from the Netherlands Organisation for Scientific Research to realize this ambition. Different groups of scientists are producing different modules for the cell and Poolman's group was tasked with energy production.

Equilibrium

All living cells produce ATP as an energy carrier, but achieving sustainable production of ATP in a test tube is no small task. 'In known synthetic systems, all components for the reaction were included inside a vesicle. However, after about half an hour, the reaction reached equilibrium and ATP production declined,' Poolman explains. 'We wanted our system to stay away from equilibrium, just like in living systems.'

It took three Ph.D. students in his group nearly four years to construct such a system. A lipid vesicle was fitted out with a transport protein that could import the substrate arginine and export the product ornithine. Inside the vesicle, enzymes were present that broke down the arginine into ornithine. The free energy that this reaction provided was used to link phosphate to ADP, forming ATP. Ammonium and carbon dioxide were produced as waste products that diffused through the membrane. 'The export of ornithine produced inside the vesicle drives the import of arginine, which keeps the system running for as long as the vesicles are provided with arginine,' explains Poolman.
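The chemistry described here, arginine broken down to ornithine with ammonium and carbon dioxide as waste and the released free energy captured by phosphorylating ADP, matches the textbook arginine deiminase pathway. As a sketch (assumed from that canonical pathway; the paper's exact enzyme set may differ), the three steps and their net reaction are:

```latex
% Arginine deiminase pathway (textbook form; assumed here, not taken from the paper)
\begin{align*}
\text{arginine} + \mathrm{H_2O} &\longrightarrow \text{citrulline} + \mathrm{NH_3}\\
\text{citrulline} + \mathrm{P_i} &\longrightarrow \text{ornithine} + \text{carbamoyl phosphate}\\
\text{carbamoyl phosphate} + \mathrm{ADP} &\longrightarrow \mathrm{ATP} + \mathrm{NH_3} + \mathrm{CO_2}\\[2pt]
\text{net:}\quad \text{arginine} + \mathrm{H_2O} + \mathrm{P_i} + \mathrm{ADP} &\longrightarrow \text{ornithine} + 2\,\mathrm{NH_3} + \mathrm{CO_2} + \mathrm{ATP}
\end{align*}
```

The net reaction yields one ATP per arginine imported, which is why continuous arginine supply (coupled to ornithine export) keeps the vesicle away from equilibrium.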

Transport protein

To create an out-of-equilibrium system, the ATP is used to maintain ionic strength inside the vesicle. A biological sensor measures ionic strength and if this becomes too high, it activates a transport protein that imports a substance called glycine betaine. This increases the cell volume and consequently reduces the ionic strength. 'The transport protein is powered by ATP, so we have both production and use of ATP inside the vesicle.'

The system was left to run for 16 hours in the longest experiment that the scientists have performed. 'This is quite long - some bacteria can divide after just 20 minutes,' says Poolman. 'The current system should suffice for a synthetic cell that divides once every few hours.' Eventually, different modules like this one will be combined to create a synthetic cell that will function autonomously by synthesizing its own proteins from a synthetic genome.

Artificial chromosome

The current system is based on biochemical components. However, Poolman's colleagues at Wageningen University & Research are busy collecting the genes needed for the production of enzymes used by the system and incorporating them into an artificial chromosome. Others are working on lipid and protein synthesis, for example, or cell division. The final synthetic cell should contain DNA for all these modules and operate them autonomously like a living cell, but in this case, engineered from the bottom-up and including new properties. However, this is many years away. 'In the meantime, we are already using our ATP-producing system to study ATP-dependent processes and advance the field of membrane transport,' says Poolman.

Credit: 
University of Groningen

New study is first to show long-term durability of early combination treatment for type 2 diabetes

A new study, VERIFY (Vildagliptin Efficacy in combination with metfoRmIn For earlY treatment of type 2 diabetes), presented at this year's Annual Meeting of the European Association for the Study of Diabetes (EASD) in Barcelona, Spain (16-20 Sept, 2019), and published simultaneously in The Lancet, is the first to show that early combination therapy using vildagliptin and metformin in patients newly diagnosed with type 2 diabetes (T2D) leads to better long-term blood sugar control and a lower rate of treatment failure than metformin alone (the current standard-of-care treatment for patients newly diagnosed with T2D).

Vildagliptin (also known by its trade names of Galvus and Zomelis) is an oral drug used to treat type 2 diabetes, and belongs to the class of drugs known as dipeptidyl peptidase-4 (DPP-4) inhibitors. By inhibiting this key enzyme, DPP-4 inhibitors promote secretion of insulin by the pancreas, and inhibit production of glucagon, and thus help control blood sugar and avoid hyperglycaemia.

Metformin has been the first-line treatment for T2D for several decades (the exact time varying by country), and belongs to the biguanide class of diabetes drugs. Currently, the first-line treatment recommended for type 2 diabetes is metformin monotherapy, with combination therapy only introduced later following treatment failure.

This study, led by EASD President Professor David Matthews (Oxford Centre for Diabetes, Endocrinology and Metabolism, and Harris Manchester College, University of Oxford, UK) and colleagues, included 2001 patients from 254 centres in 34 countries, with 998 randomised to receive early combination therapy using vildagliptin and metformin, and 1003 randomised to receive initial metformin alone, across a 5-year treatment period (enrolment occurred between 2012 and 2014 and follow-up of the final patients was completed in 2019).

The study was divided into 3 periods. In study period 1, patients received either the early combination treatment with metformin (individual, stable daily dose of 1000 to 2000 mg, depending on the patient's tolerability) and vildagliptin 50 mg twice daily, or standard-of-care initial metformin monotherapy (individual, stable daily dose of 1000 to 2000 mg) and placebo twice daily.

Treatment response was monitored at centre visits every 13 weeks, when patients' levels of glycated haemoglobin (HbA1c, a measure of blood sugar control) were assessed. If the initial treatment did not maintain HbA1c below 53 mmol/mol (7.0%) during period 1, confirmed at two consecutive scheduled visits 13 weeks apart, this was defined as treatment failure; patients in the metformin monotherapy group then received vildagliptin 50 mg twice daily in place of the placebo, while patients in the early combination therapy group continued on combination therapy.

Period 2 was thus a two-arm phase in which the allocated early combination therapy was tested against a later combination strategy, metformin with vildagliptin added only after monotherapy failure. A second treatment failure, confirmed by loss of glycaemic control at two further visits, was assessed as an endpoint, after which physicians would move patients onto insulin therapy. Patients who did not fail in period 1 but maintained good glycaemic control (HbA1c below 53 mmol/mol, 7.0%) continued their randomised study medication (early combination or initial metformin monotherapy) for up to five years.

The primary efficacy endpoint was the time from randomisation to initial treatment failure, defined as an HbA1c measurement of at least 53 mmol/mol (7.0%) at two consecutive scheduled visits, 13 weeks apart, during period 1.
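The two HbA1c units quoted throughout (mmol/mol and %) are related by the standard NGSP/IFCC master equation, which shows why 53 mmol/mol and 7.0% denote the same threshold:

```python
# Standard NGSP/IFCC master equation relating the two HbA1c units:
#   NGSP (%) = 0.0915 * IFCC (mmol/mol) + 2.15
def ifcc_to_ngsp(mmol_per_mol):
    """Convert an IFCC HbA1c value (mmol/mol) to an NGSP percentage."""
    return 0.0915 * mmol_per_mol + 2.15

print(f"53 mmol/mol = {ifcc_to_ngsp(53):.1f}%")  # the study's failure threshold
```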

A total of 1598 (79.9%) patients completed the 5-year study: 811 (81.3%) in the early combination therapy group and 787 (78.5%) in the monotherapy group. Initial treatment failure during period 1 occurred in 429 (43.6%) patients in the combination treatment group and 614 (62.1%) patients in the monotherapy group. The median observed time to treatment failure in the monotherapy group was 36.1 months, while the median time to treatment failure for those receiving early combination therapy was estimated at 61.9 months, beyond the 5-year study duration. Both treatment approaches were safe and well tolerated.

The risk of losing blood sugar control (an HbA1c of 53 mmol/mol (7.0%) or more at two consecutive visits) was approximately halved in the early combination treatment group compared with the monotherapy group over the 5-year study duration (a statistically significant 49% relative risk reduction). During period 2, when patients in both groups were (or could be) receiving combination treatment, the relative risk of losing blood sugar control was still 26% lower among those randomised to early combination treatment than among those who transferred to combination therapy after their first treatment failure.
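Note that the headline 49% figure is a time-to-event (hazard) estimate, which weights when failures occur; a crude relative risk computed directly from the cumulative incidences reported above gives a smaller reduction, illustrating why the two numbers differ:

```python
# Crude relative risk from the reported period-1 failure incidences
# (illustrative only; the study's 49% reduction comes from a time-to-event
# hazard analysis, not from this simple ratio of cumulative proportions):
p_combo, p_mono = 0.436, 0.621   # initial treatment failure rates
rr = p_combo / p_mono
print(f"crude relative risk = {rr:.2f} "
      f"(a {1 - rr:.0%} reduction in cumulative failure)")
```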

This showed that the early combination strategy was superior to a sequential approach in which the failing monotherapy is intensified later with combination therapy, as demonstrated by a durable effect on blood glucose levels. The authors believe the better long-term 'durability' of blood sugar control seen in the combination group could be due to the complementary mechanisms of action of the two drugs.

The authors say: "The findings of VERIFY support and emphasise the importance of achieving and maintaining early glycaemic control" and refer to previous studies (such as the UK Prospective Diabetes Study), in which early treatment intensification was associated with a legacy effect, where the reduction in vascular complications in the intensive group was maintained or strengthened over 10 years after study completion. In the Diabetes and Aging epidemiology study, an HbA1c value above 6.5% (the threshold for T2D diagnosis in the USA) for the first year following diagnosis was associated with worse outcomes (increasing microvascular events and mortality risk) over the subsequent 10 years of follow-up.

The authors point out that durable HbA1c values below 6.5% are unlikely to be achieved with monotherapy alone. They say: "Real-world evidence has shown how delayed treatment intensification after monotherapy failure results in increasing time spent with avoidable periods of hyperglycaemia, raising a crucial barrier to optimised care. The durable effect we observed with an early combination strategy in the VERIFY study provides initial support for such an approach as an effective way to intensify blood sugar control early after diagnosis and potentially avoid future complications."

They conclude: "Early intervention with a combination therapy strategy provides greater and durable long-term benefits compared with the current standard-of-care monotherapy with metformin for patients with newly diagnosed type 2 diabetes."

Credit: 
Diabetologia

Study questions routine sleep studies to evaluate snoring in children

Pediatricians routinely advise parents of children who snore regularly and have sleepiness, fatigue or other symptoms consistent with sleep-disordered breathing to get a sleep study; this can help determine whether their child has obstructive sleep apnea, which is often treated with surgery to remove the tonsils and adenoids (adenotonsillectomy). Pediatricians often base surgery recommendations on the results of this sleep study.

But a new finding from the University of Maryland School of Medicine (UMSOM) suggests that the pediatric sleep study -- used both to diagnose pediatric sleep apnea and to measure improvement after surgery -- may be an unreliable predictor of who will benefit from having an adenotonsillectomy.

About 500,000 children under age 15 have adenotonsillectomies every year in the U.S. to treat obstructive sleep apnea. The American Academy of Pediatrics (AAP) recommends the surgery as a first-line therapy to treat the condition, which can cause behavioral issues, cardiovascular problems, poor growth, and developmental delays. The premise is that surgically removing or reducing the severity of the obstruction to the upper airway will improve sleep and reduce other problems caused by the disorder.

In 2012, the AAP recommended that pediatricians should screen children who snore regularly for sleep apnea, and refer children suspected of having the condition for an overnight in-laboratory sleep study. The group also recommended an adenotonsillectomy based on the results of the test. But results from the new UMSOM study, published in the September issue of the journal Pediatrics, call into question those recommendations because the data they analyzed found no relationship between improvements in sleep studies following surgery and resolution of most sleep apnea symptoms.

"Resolution of an airway obstruction measured by a sleep study performed after an adenotonsillectomy has long been thought to correlate with improvement in sleep apnea symptoms, but we found this may not be the case," said study lead author Amal Isaiah, MD, PhD, an Assistant Professor of Otorhinolaryngology--Head and Neck Surgery and Pediatrics at UMSOM. "Our finding suggests that using sleep studies alone to manage sleep apnea in children may be a less than satisfactory way of determining whether surgery is warranted."

To conduct the study, Dr. Isaiah and his colleagues, Kevin Pereira, MD, from UMSOM and Gautam Das, PhD, at the University of Texas at Arlington conducted a new analysis of findings from 398 children, ages 5 to 9 years, who participated in the Childhood Adenotonsillectomy Trial (CHAT), a randomized trial published in 2013 that compared adenotonsillectomy with watchful waiting to treat sleep apnea. They found that resolution of sleep apnea, as determined by sleep study results, did not correlate with improvements in the majority of outcome measures including behavior, cognitive performance, sleepiness and symptoms of attention deficit hyperactivity disorder.

"This is an important finding that should be carefully considered by the pediatric medical community to determine whether recommendations concerning the management of sleep apnea need to be updated," said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. "Practice guidelines, in every field of medicine, should reflect the current state of science."

In the CHAT trial, the researchers found that 79 percent of children who had the surgery had a normal sleep study 7 months later, compared to 46 percent of those assigned to watchful waiting. Sleep apnea resolved spontaneously in about half of the children who underwent watchful waiting. The trial also demonstrated no significant improvement in how children performed on cognitive tests assessing how well they could focus, analyze and solve problems, and recall what they had just learned.

The CHAT researchers did find, however, that those who had early adenotonsillectomy had improved symptoms, quality of life, and behavior.

Credit: 
University of Maryland School of Medicine

Sesame yields stable in drought conditions

image: This is a sesame flower growing atop a sesame plant in western Texas research fields.

Image: 
Irish Lorraine B. Pabuayon

Texas has a long history of growing cotton. It's a resilient crop, able to withstand big swings in temperature fairly well. However, growing cotton in the same fields year after year can be a bad idea. Nutrients can get depleted. Disease can lurk in the ground during the winter season, only to attack the following year. Thus, rotating cotton with other crops could be a better system.

Agronomists have been researching various alternative crops that will grow well in western Texas. This area sits atop the Ogallala Aquifer and has been hit extremely hard by drought over the past few decades. Another crop, sorghum, grows well with low water availability, but its yield can still be greatly reduced by drought conditions.

Irish Lorraine B. Pabuayon, a researcher at Texas Tech University (TTU), is on the team looking at an alternative crop for west Texas: sesame.

Like cotton and sorghum, sesame is also a "low-input" crop. This means it does not need a great deal of water, something that vegetable crops, corn and wheat need regularly and in large quantities.

"When introducing new crops to a water-limited system, it is important for growers to justify the water requirements of the new crops," says Pabuayon. "Properly determining the water requirements of the crops is important. Management decisions for wise use of limited water resources requires understanding a crop's moisture requirements."

Pabuayon and the TTU team found that even under conditions that lowered sorghum and cotton yields, sesame performed well. This could be good news for west Texas farmers.

"Our results showed that sesame yields were not significantly altered under water-deficit conditions," says Pabuayon. "Sesame continued to have consistent yields, even when water-deficit conditions decreased sorghum's yield by 25% and cotton's yield by 40%."

Having another crop that has good market value and can grow well during drought could benefit west Texas farmers. According to Pabuayon, sesame seeds are commonly used for food consumption and other culinary uses. The seeds are high in fat and are a good source of protein. Sesame is a major source of cooking oil. The remaining parts of sesame, after oil extraction, are good sources of livestock feed. Sesame has uses in the biodiesel industry, and even in cosmetics. This means there are multiple markets for the tiny seeds.

"Provided that the market price of sesame can support current yields, the results are favorable for low-input sesame production in west Texas," says Pabuayon. "However, the relatively low yields of sesame (per acre, compared to cotton and sorghum) suggest opportunities for additional genetic advancement. Currently, sesame varieties available for Texas are well-suited as an alternative crop for water-limited crop production systems."

Credit: 
American Society of Agronomy

Stabilizing neuronal branching for healthy brain circuitry

video: This is an illustrated video of the study covered in this press release, designed to be understood by a broad audience.

Image: 
Thomas Jefferson University

PHILADELPHIA - Neurons form circuits in our brain by creating tree-like branches to connect with each other. Newly forming branches rely on the stability of microtubules, a railway-like system important for the transport of materials in cells. The mechanisms that regulate the stability of microtubules in branches are largely unknown. New research from the Vickie & Jack Farber Institute for Neuroscience - Jefferson Health has identified a key molecule that stabilizes microtubules and reinforces new neuronal branches.

"Like the railways to a new city, stable microtubules transport valuable material to newly formed branches so that they can grow and mature," explains Dr. Le Ma, associate professor in the department of Neuroscience and senior author of the study. Microtubule stability is regulated by proteins called microtubule-associated proteins (MAPs), which include many subtypes. Previous work from Dr. Ma and Stephen Tymanskyj, a postdoctoral fellow in the lab, had identified a subtype called MAP7, and found that it was localized at sites where new branches are formed. This made it a good candidate for regulating microtubule stability.

In the new study, published August 7 in the Journal of Neuroscience, Dr. Tymanskyj and Dr. Ma used genetic tools to remove MAP7 from developing rodent sensory neurons and found that without MAP7, branches can still grow but retract more frequently. This means that branches cannot make complete and lasting connections without MAP7. The researchers also introduced more MAP7 protein to branches that had been cut by a laser and found that it could slow down or even prevent the retraction that usually happens in response to injury. This suggests that manipulating MAP7 could potentially rescue injured neuronal branches.

A key finding of the study demonstrated a unique property of MAP7 when it interacts with microtubules. The researchers found that in cells, MAP7 binds to specific regions of microtubules and makes them very stable but avoids the microtubule ends, where individual building blocks are rapidly added or removed. This valuable binding property prevents microtubules, or the cellular railway, from completely disassembling when branches retract. It also promotes steady re-assembly of microtubules to extend the cellular railway for subsequent branch growth. Moreover, the study is the first to demonstrate this new feature, which has not been observed for other MAPs.

Neuronal branches can be damaged by physical injury or toxicity. Understanding the role of MAP7 suggests new ways to reduce or avert that damage. "Our research has identified a new molecular mechanism of microtubule regulation in branch formation and has suggested a new target to potentially treat nerve injury," concludes Dr. Ma, who has already initiated new studies exploring this.

Credit: 
Thomas Jefferson University

Researchers develop thermo-responsive protein hydrogel

image: An illustration of how an engineered Q protein self-assembles to form fiber-based hydrogels at low temperature. These hydrogels have a porous microstructure that allows them to be used for drug delivery applications.

Image: 
NYU Tandon

BROOKLYN, New York, Tuesday, September 17, 2019 - Imagine a perfectly biocompatible, protein-based drug delivery system durable enough to survive in the body for more than two weeks and capable of providing sustained medication release. An interdisciplinary research team led by Jin Kim Montclare, a professor of biomolecular and chemical engineering at the NYU Tandon School of Engineering, has created the first protein-engineered hydrogel that meets those criteria, advancing an area of biochemistry critical not only to the future of drug delivery, but also to tissue engineering and regenerative medicine.

Hydrogels are three-dimensional polymer networks that reversibly transition from solution to gel in response to physical or chemical stimuli, such as temperature or acidity. These polymer matrices can encapsulate cargo, such as small molecules, or provide structural scaffolding for tissue engineering applications. Montclare is lead author of a new paper in the journal Biomacromolecules, which details the creation of a hydrogel composed of a single protein domain that exhibits many of the same properties as synthetic hydrogels. Protein hydrogels are more biocompatible than synthetic ones and do not require potentially toxic chemical crosslinkers.

"This is the first thermo-responsive protein hydrogel based on a single coiled-coil protein that transitions from solution to gel at low temperatures through a process of self-assembly, without the need for external agents," said Montclare. "It's an exciting development because protein-based hydrogels are much more desirable for use in biomedicine."

The research team conducted experiments encapsulating a model small molecule within their protein hydrogel, discovering that small molecule binding increased thermostability and mechanical integrity and allowed for release over a timeframe comparable to other sustained-release drug delivery vehicles. Future work will focus on designing protein hydrogels tuned to respond to specific temperatures for various drug delivery applications.

Credit: 
NYU Tandon School of Engineering

Guppies teach us why evolution happens

image: Natural settings allow comparative studies of guppies to see how the life histories of those who lived with and without predators differed.

Image: 
David Reznick / UCR

Guppies, a perennial pet store favorite, have helped a UC Riverside scientist answer a key question about evolution:

Do animals evolve in response to the risk of being eaten, or to the environment that they create in the absence of predators? Turns out, it's the latter.

David Reznick, a professor of biology at UC Riverside, explained that in the wild, guppies can migrate over waterfalls and rapids to places where most predators can't follow them. Once they arrive in safer terrain, Reznick's previous research shows they evolve rapidly, becoming genetically distinct from their ancestors.

"We already knew that they evolved quickly, but what we didn't yet understand was why," Reznick said. In a new paper published in American Naturalist, Reznick and his co-authors explain the reason the tiny fish evolve so quickly in safer waters.

To answer their questions, the scientists traveled to Trinidad, the guppies' native habitat, and conducted an experiment. They moved guppies from areas in streams where predators were plentiful to areas where predators were mostly absent. Over the course of four years, they studied how the introduced guppies changed compared with guppies remaining in the streams where they originated.

"If guppies evolve because they aren't at risk of becoming food for other fish, then evolution should be visible right away," Reznick said. "However, if in the absence of predators they become abundant and deplete the environment of food, then there will be a lag in detectable changes."

Guppies from all four streams were marked so they could be tracked over the course of four years. The scientists tracked the males, which tend to live about five months. They looked at the fishes' age and size at maturity, which are key traits affecting population growth.

They also tracked how the environment changed as the guppy populations expanded, focusing on the abundance of food such as algae and insects, as well as the presence of other nonpredator fish.

They found a two-to-three-year lag between when guppies were introduced and when males evolved, suggesting the second hypothesis was correct: the guppies first changed their new environments and then, as a result, changed themselves.

"The speed of evolution makes it possible to study how it happens," Reznick said. "The new news is that organisms can shape their own evolution by changing their environment."

One of Reznick's current projects includes applying these concepts to questions about human evolution.

"Unlike guppies and other organisms, human population density seems to increase without apparent limit, which increases our impact on our environment and on ourselves," he said.

Credit: 
University of California - Riverside

Poor diabetes control costs the NHS in England £3 billion a year in potentially avoidable hospital treatment

Poor diabetes control was responsible for £3 billion in potentially avoidable hospital treatment in England in the operational year 2017-2018, according to new research comparing the costs of hospital care for 58 million people with and without diabetes.

The findings, being presented at this year's European Association for the Study of Diabetes (EASD) Annual Meeting in Barcelona, Spain (16-20 September), reveal that on average, people with type 1 diabetes require 6 times more hospital treatment (£3,035 per person per year), and those with type 2 diabetes twice as much care (£1,291; after adjusting for their older age), than people without diabetes (£510).

Other than age, diabetes is the largest contributor to healthcare cost and reduced life expectancy in Europe. In England, two-thirds of people with type 1 diabetes and a third of those with type 2 diabetes have poor control over their blood sugar levels, increasing the risk of multiple long-term health problems ranging from kidney disease to blindness, and the need for additional hospital care.

In this study, researchers used data from the NHS Digital Hospital Episode Statistics in England and the National Diabetes Audit (2017-2018) to compare the cost of hospital treatment provided to people with type 1 and type 2 diabetes to people without diabetes, after adjusting for the effect of age.

Data on elective (planned) and emergency admissions, outpatient visits, and accident and emergency department (A&E) attendances for 58 million people, including 2.9 million with type 2 diabetes and 243,000 with type 1 diabetes, between 2017 and 2018 were analysed. This included 90% of all hospital care provided across England.

Of total hospital costs of £36 billion in 2017-2018, the NHS in England spent around £5.5 billion on hospital care for people with diabetes. Of that sum, an estimated £3 billion (8%) was excess expenditure on diabetes (after accounting for age)--almost 10% of the NHS hospital budget.

Compared to people without diabetes, the average annual cost of elective care was more than two times higher for people with type 2 diabetes (£759 vs £331), and the average cost of emergency care was three times higher (£532 vs £179), having allowed for their age difference. Similarly, average costs for people with type 1 diabetes were five-fold greater for elective care (£1,657 vs £331) and eight-fold higher for emergency care (£1,378 vs £179).
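The fold differences quoted above follow directly from the per-person figures in the text. A minimal arithmetic check (note that the "eight-fold" figure is approximately 7.7x when computed exactly):

```python
# Average annual per-person hospital costs (GBP), age-adjusted,
# as quoted in the text above.
costs = {
    "none":   {"elective": 331,  "emergency": 179},
    "type 2": {"elective": 759,  "emergency": 532},
    "type 1": {"elective": 1657, "emergency": 1378},
}

for group in ("type 2", "type 1"):
    for kind in ("elective", "emergency"):
        ratio = costs[group][kind] / costs["none"][kind]
        print(f"{group} {kind}: {ratio:.1f}x")
# type 2 elective:  2.3x   type 2 emergency: 3.0x
# type 1 elective:  5.0x   type 1 emergency: 7.7x
```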

"People with diabetes are admitted to hospital more often, especially as emergencies, and stay on average longer as inpatients. These increased hospital costs, 40% of which come from non-elective and emergency care, are three times higher than the current costs of diabetes medication. Improved management of diabetes by GPs and diabetes specialist care teams could improve the health of people with diabetes and substantially reduce the level of hospital care and costs", says author Dr Adrian Heald from Salford Royal Hospital in the UK.

The authors note that the study did not include the indirect costs associated with diabetes, such as those related to increased death and illness, work loss, and the need for informal care.

Credit: 
Diabetologia

Patients with high blood sugar variability much more likely to die than those with stable visit-to-visit readings

New research presented at this year's Annual Meeting of the European Association for the Study of Diabetes (EASD) in Barcelona, Spain (16-20 Sept) shows that patients with the highest variability in their blood sugar control are more than twice as likely to die as those with the most stable blood sugar measurements. The study is by Professor Ewan Pearson, University of Dundee, UK and Dr Sheyu Li, West China Hospital, Sichuan University, Chengdu, China, and University of Dundee, UK, and colleagues.

Measuring glycated haemoglobin (HbA1c) in a patient's blood has for many years been a standard method for measuring blood sugar control over previous weeks and months. Usually, the focus is on whether a patient's HbA1c level is at or below a treatment target. However, some patients have highly variable HbA1c, and others have stable HbA1c from visit to visit. It is unclear whether this variability in HbA1c is associated with altered prognosis, independent of a patient's average HbA1c from diagnosis. In this study, the authors aimed to investigate the association between visit-to-visit HbA1c variability and cardiovascular events and microvascular complications in patients with newly diagnosed type 2 diabetes.

The study retrospectively recruited patients from Tayside and Fife enrolled in the Scottish Care Information-Diabetes Collaboration (SCI-DC) who could be followed from diagnosis and had at least five HbA1c measurements before any outcome occurred. The authors used a measure called the HbA1c variability score (HVS), calculated as the percentage of an individual's HbA1c changes greater than 0.5% (5.5 mmol/mol) among all of that individual's HbA1c measurements.
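As an illustration of how such a score could be computed, here is a minimal sketch. The function name and the choice of denominator (the number of successive visit-to-visit changes) are assumptions for illustration, not the paper's exact formula:

```python
def hba1c_variability_score(hba1c, threshold=0.5):
    """Percentage of visit-to-visit HbA1c changes exceeding the
    threshold of 0.5 percentage points (~5.5 mmol/mol).

    `hba1c` is a list of HbA1c values (%) in visit order.
    The denominator here is the number of successive changes (n - 1);
    this is an illustrative assumption, not the published definition.
    """
    if len(hba1c) < 2:
        raise ValueError("need at least two HbA1c measurements")
    # Absolute change between each pair of consecutive visits
    changes = [abs(b - a) for a, b in zip(hba1c, hba1c[1:])]
    large = sum(1 for c in changes if c > threshold)
    return 100.0 * large / len(changes)

# Five measurements: two of the four changes exceed 0.5 -> score 50.0
print(hba1c_variability_score([7.0, 7.6, 7.5, 8.2, 8.1]))  # 50.0
```

A patient scoring above 60% on such a measure would fall into the high-variability groups examined in the study.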

Ten outcomes were studied including the combined outcome of major adverse cardiovascular events (known as MACE), all-cause mortality, cardiovascular death, coronary artery disease (CAD), ischemic stroke, heart failure, diabetic retinopathy (DR), diabetic peripheral neuropathy (DPN), diabetic foot ulcer (DFU) and the new onset of chronic kidney disease (CKD). Statistical models adjusting for baseline characteristics were used to assess the association of HVS with outcomes.

For each outcome, patients were divided into five groups, with those showing the lowest variability (HVS 0-20%) as the reference. Compared with this group, patients with an HVS above 60% (the 60-80% and 80-100% groups) had increased risks of all the outcomes studied. In other words, outcomes were worse when more than 60% of a patient's HbA1c measurements differed from the previous measurement by more than 0.5%.

When looking specifically at the highest variation group (80-100%) versus the lowest (0-20%), the highest group had a 2.4-fold increased risk of all three outcomes of MACE, all-cause mortality and cardiovascular mortality. There was also a 2.6-fold increased risk of coronary artery disease; a doubled risk of stroke; a tripled risk of heart failure, DPN, and CKD; a five-fold increased risk of diabetic foot ulcer; and a seven-fold increased risk of DR. Adjustment for baseline characteristics confirmed the results.

The authors say: "Higher HbA1c variability is associated with increased risks of all-cause mortality, cardiovascular events and microvascular complication of diabetes independently of accumulated exposure of high HbA1c."

The authors say HbA1c variability varies across individuals, explaining that: "A previous descriptive study* we completed suggests higher HbA1c variability was associated with age, sex, body mass, social deprivation and treatment patterns and this difference may explain some of the increased risk in those with high variability in HbA1c. Frequent fluctuation of HbA1c can be driven by multiple clinical factors, including variation in diet and lifestyle, changing to different anti-diabetic drugs and/or withdrawing of anti-diabetic treatment, and general healthcare quality**"

They explain further: "High variation of HbA1c is more common in patients with a higher average level of HbA1c. However, the association with adverse outcomes seen with high HbA1c variability remains even after adjusting for this baseline difference. Thus, a highly variable HbA1c should be considered as a major risk factor for adverse outcomes, even if the average HbA1c is not too high. At this stage, it is important to emphasize that we can't say that the adverse outcomes are definitively caused by the increased variability in HbA1c, and therefore we cannot yet be sure that reducing HbA1c variability will reduce that risk."

Credit: 
Diabetologia

Teen e-cigarette use doubles since 2017

Data from the 2019 Monitoring the Future Survey of eighth, 10th and 12th graders show alarmingly high rates of e-cigarette use compared to just a year ago, with rates doubling in the past two years. University of Michigan, Ann Arbor, scientists who coordinate and evaluate the survey released the data early to The New England Journal of Medicine (NEJM) to notify public health officials working to reduce vaping by teens. The survey is funded by the National Institute on Drug Abuse (NIDA), part of the National Institutes of Health.

The new data show a significant increase in past-month vaping of nicotine in each of the three grade levels since 2018. In 2019, the prevalence of past-month nicotine vaping was more than 1 in 4 students in 12th grade, 1 in 5 in 10th grade, and 1 in 11 in eighth grade.

"With 25% of 12th graders, 20% of 10th graders and 9% of eighth graders now vaping nicotine within the past month, the use of these devices has become a public health crisis," said NIDA Director Dr. Nora D. Volkow. "These products introduce the highly addictive chemical nicotine to these young people and their developing brains, and I fear we are only beginning to learn the possible health risks and outcomes for youth."

"Parents with school-aged children should begin paying close attention to these devices, which can look like simple flash drives, and frequently come in flavors that are appealing to youth," said University of Michigan lead researcher Dr. Richard Miech. "National leaders can assist parents by stepping up and implementing policies and programs to prevent use of these products by teens."

Additional findings from the 2019 Monitoring the Future Survey, documenting the use of and attitudes about marijuana, alcohol and other drugs, will be released in December.

Credit: 
NIH/National Institute on Drug Abuse

Tel Aviv University researchers discover evidence of biblical kingdom in Arava Desert

image: More than 6 m of copper production waste were excavated at Khirbat en-Nahas, Jordan. The excavated materials from here and other sites were used to track more than four centuries of technological and social evolution in biblical Edom.

Image: 
T. Levy/American Friends of Tel Aviv University (AFTAU)

Genesis 36:31 describes an early, pre-10th century BCE Edomite kingdom: "... the kings who reigned in Edom before any Israelite king reigned." But the archaeological record has led to conflicting interpretations of this text.

Now a Tel Aviv University study published in PLOS ONE on September 18 finds that the kingdom of Edom flourished in the Arava Desert in today's Israel and Jordan during the 12th-11th centuries BCE.

Expert analysis of specimens found in copper production sites in the Arava, led by Prof. Erez Ben-Yosef of TAU's Department of Archaeology and Ancient Near Eastern Cultures and Prof. Tom Levy of the University of California, San Diego, reveals the untold story of an affluent, flourishing society led by a copper "high-tech network."

Copper, used in ancient times to produce tools and weapons, was the most valuable resource in the Ancient Near East. Copper production is a complex process, requiring different stages and levels of expertise.

Prof. Ben-Yosef's team analyzed hundreds of findings from ancient copper mines in Jordan (Faynan) and Israel (Timna) to reconstruct the evolution and refinement of the copper manufacturing industry over 500 years (1300-800 BCE), from the end of the second millennium into the beginning of the first millennium BCE. They identified dramatic changes in the copper slag discovered at the Arava sites.

"Using technological evolution as a proxy for social processes, we were able to identify and characterize the emergence of the biblical kingdom of Edom," explains Prof. Ben-Yosef. "Our results prove it happened earlier than previously thought and in accordance with the biblical description."

Prof. Ben-Yosef's analyses of copper slag, the waste of copper extraction by smelting, show a clear statistical fall in the amount of copper in the slag over time, indicating that production had become expertly streamlined for efficiency. The researchers attribute this sudden improvement to one of the most famous Egyptian invasions of the Holy Land: the military campaign of Pharaoh Shoshenq I (the biblical "Shishak"), who sacked Jerusalem in the 10th century BCE.

The new research indicates that Egypt's intervention in the land of Edom was not accompanied by destruction. Instead, it triggered a "technological leap" that included more efficient copper production and trade.

"We demonstrated a sudden standardization of the slag in the second half of the 10th century BC, from the Faynan sites in Jordan to the Timna sites in Israel, an extensive area of some 2,000 square kilometers, which occurred just as the Egyptians entered the region," says Prof. Ben-Yosef. "The efficiency of the copper industry in the region was increasing. The Edomites developed precise working protocols that allowed them to produce a very large amount of copper with minimum energy."

But Egypt at this time was a weak power, according to Prof. Ben-Yosef. While its influence in the region is clear, it probably did not command the copper industry, which remained a local Edomite enterprise.

"As a consumer of imported copper, Egypt had a vested interest in streamlining the industry. It seems that, through their long-distance ties, they were a catalyst for technological innovations across the region. For example, the camel first appeared in the region immediately after the arrival of Shoshenq I," Prof. Ben-Yosef says.

"Our new findings contradict the view of many archaeologists that the Arava was populated by a loose alliance of tribes, and they're consistent with the biblical story that there was an Edomite kingdom here," Prof. Ben-Yosef concludes. "A flourishing copper industry in the Arava can only be attributed to a centralized and hierarchical polity, and this might fit the biblical description of the Edomite kingdom."

Credit: 
American Friends of Tel Aviv University

Coral reefs and squat lobsters flourished 150 million years ago

image: The shells, or carapaces, of a modern porcelain crab (left) and the oldest known fossil porcelain crab (right). The carapaces are 8 and 3 millimeters wide, respectively. Porcelain crabs are really members of the squat lobster family that became adapted to the same intertidal environment as true crabs.

Image: 
Cristina Robins & Adiel Klompmaker, UC Berkeley

Coral reefs and the abundant life they support are increasingly threatened today, but a snapshot of a coral reef that flourished 150 million years ago shows that many animals were then at their peak of diversity, just offshore of the land ruled by dinosaurs.

In a paper published this month in the Zoological Journal of the Linnean Society, University of California, Berkeley, paleontologists describe a rich reef life during the Jurassic Period in the shallow sea covering what today is central Europe. The reef teemed with animals that snorkelers would recognize today -- fish, crabs, sea urchins, snails, clams and oysters -- but also now-extinct ammonites, which are essentially squid with external shells.

Joining them were 53 distinct species of squat lobsters, the rather unappealing name for creatures that South Americans relish as langostinos.

At a time of peak biodiversity for reef life in what was then the Tethys Sea, parasites also flourished. Some 10 percent of all the squat lobster fossils had bumps on their gill region that betrayed the existence of a blood-sucking parasite that was probably kin to the isopods that parasitize squat lobsters today.

"The reef would have looked similar to coral reefs today, just in terms of diversity and the fact that the corals back then belonged to the same group as the ones we see today," said lead author Cristina Robins, a senior museum scientist at the University of California Museum of Paleontology at UC Berkeley. She personally has described more than 50 new fossil species of squat lobsters, or galatheoids, over the past decade: about 30 percent of known fossil species.

"Squat lobsters became very diverse for the first time in Earth's history at the end of the Jurassic, alongside true crabs, which means that the Late Jurassic coral-associated habitats were key ecosystems in the evolution of galatheoids and their parasites," she said.

The diversity of life among these reefs 150 million years ago contrasts with the situation some 50 million years later, revealed by 100 million-year-old Cretaceous fossils from Spain. The diversity of squat lobsters in reefs was lower in the Cretaceous and also, subsequently, in the Cenozoic Era.

"We know that reefs aren't doing well today because of coral bleaching and other factors," said co-author Adiel Klompmaker, a project scientist in the Department of Integrative Biology. "If these ecosystems continue to deteriorate, it is very likely that associated organisms, including the squat lobsters, are going to take a big hit, as well, in terms of their abundance and diversity."

Squat lobsters: not lobsters, not crabs

As one of just a few experts worldwide on squat lobster fossils, Robins admits that these animals -- living and extinct -- are underappreciated and understudied, even though they are one of the most diverse groups of macrocrustaceans living today, with about 1,250 known species.

On reefs, where most people might encounter them, squat lobsters tend to be about the size of the fingernail on your little finger and easily missed. But some species living in Chile, Peru and Norway grow up to a half-foot long and are harvested and sold as langostinos, a marketing term for the meaty tails of these and other lobster-like marine creatures.

They are found worldwide in many ocean environments, including the continental shelf and the deep sea. Large ones often congregate by the hundreds at hydrothermal vents in the abyss miles under the water's surface.

If you did see a squat lobster, you could easily mistake it for either a crab or a true lobster, though it is neither. The often-colorful porcelain crabs, or Porcellanidae -- popular for salt-water aquariums -- are squat lobsters but look nearly identical to true crabs, distinguished primarily by an easily overlooked tail tucked underneath. On the other hand, many squat lobsters look like small-clawed versions of the Maine lobster and have a similarly meaty and tasty, though smaller, tail.

"Most of the squat lobsters look like a crayfish: a lobster, but somewhat squished," Robins said. "But the squat lobsters have secondarily evolved a crab-like body in the porcelain crabs. That body shape helps in rocky intertidal areas."

In her quest to understand the diversity of these animals since they first appeared about 180 million years ago, Robins has studied fossils in collections around the world, but was blown away by a collection of 150 million-year-old fossils she stumbled across at the Vienna Museum of Natural History in Austria.

"I opened up several of their storage cabinets, and they contained nearly 7,000 decapods, a diverse group within crustaceans," Robins said. "The squat lobsters were a pretty high percentage of the number of decapods there: about 2,350 fossils, or a third. This collection is really special, just because they had a very dedicated collector who collected everything, so we have a really great snapshot of what the squat lobster fauna looked like, as well as some of the associated reef fauna."

The collection, one of the most diverse coral reef decapod fossil collections in the world, contains the oldest specimens of four of the six known families of squat lobsters. In their paper, the team described two of these: a new species of porcelain crab that is 50 million years older than the previously oldest known porcelain crab, and the oldest member of the Galatheidae, extending that family's record by 25 million years.

Klompmaker, who is interested in the interactions between parasites and their hosts, was amazed by the isopod parasites visible as swellings on the carapaces of 10 percent of the squat lobster fossils. Today, parasites on squat lobsters are usually much less prevalent.

"Parasites such as isopods capitalized on the high abundance of their host and infected squat lobsters at rates not seen previously," he said. "This swelling in the gill region of the lobster's shell is caused a blood-sucking parasitic isopod crustacean, distant relatives and look-alikes of modern pill bugs. These are some of the oldest records of parasitism in galatheoids."

Robins and Klompmaker plan to continue their exploration of fossil squat lobsters in museum collections around the world, knowing that there are many undescribed species that can tell us about reef life in the past and the ever-present warfare between parasites and their hosts.

Credit: 
University of California - Berkeley

Study gives the green light to the fruit fly's color preference

video: This time-lapse video shows the fruit flies' color choices during an eight-hour period, beginning four hours before lights turn on.

Image: 
Stanislav Lazopulo/University of Miami Department of Physics

For more than a century, the humble and ubiquitous fruit fly has helped scientists shed light on human genetics, disease, and behavior. Now a new study by University of Miami researchers reveals that the tiny, winged insects have an innate time- and color-dependent preference for light, raising the intriguing possibility that our own color choices depend on the time of day.

In a study published in the journal Nature on Wednesday, the researchers made two unexpected discoveries. First, they found that, given a choice, fruit flies are drawn to green light early in the morning and late in the afternoon, when they are most active, and to red, or dim light, in midday, when like many humans, they slow down to eat and perhaps take a siesta.

Much to the researchers' surprise, they also found that fruit flies, Drosophila melanogaster, demonstrate a "robust avoidance" for blue light throughout the day, a finding that turns a decades-long assumption on its head. Previous experiments dating back to the 1970s determined that fruit flies are attracted to blue light, the main driver for the circadian clock, or the genetic 24-hour timekeeper that controls the lives of humans and most other animals.

"If given a choice, the fact that flies would not choose blue is surprising, but the most surprising thing, which is relevant not just to flies, but to color preference in general, is the fact that color preference changes with time of day," said senior author Sheyum Syed, assistant professor of physics, who conceived and designed the study with post-doctoral student Stanislav Lazopulo. "This finding opens the possibility that human color preference also changes with the time of day, which may explain why it's been so difficult to nail down how color guides our choices."

Added study coauthor James D. Baker, research assistant professor of biology who helped supervise the study, "Stan has shown that these animals have a very clear preference for different colors of light at different times of day that's repeatable day to day, individual to individual, genotype to genotype. Our research community didn't have any idea that was happening."

Four years ago, while a graduate student in Syed's lab, Lazopulo set out to determine how Drosophila would respond to the colored light they would experience at their leisure in nature. If given a choice, what light would they choose? Would there be a pattern? Would their choices be guided by the circadian clock that guides all organisms?

With assistance from his brother, Andrey, he created an elaborate set of behavioral experiments that involved placing hundreds of single flies into tiny multicolored tubes that had a stopper on one end, food at the other end, and three distinct "rooms"--one green, one red, and one blue--that the insects could freely navigate.

Then he recorded their movements around the clock, through 12 hours of constant light and 12 hours of complete darkness, for as many as two weeks at a time. When Lazopulo reviewed the initial computer analysis of the recordings, he thought he had miscoded the computer program.

"They actually don't like blue light. They run away from blue light," he said. "It was absolutely an unexpected result. Based on all the previous knowledge we were not expecting to have such a preference for green, an avoidance for blue, and such robust patterns in this behavior."

But neither the computer program, nor the video, nor his eyes were flawed. During the day, the flies consistently avoided the blue zones, even when their food was placed in one. Under those circumstances, they would make brief incursions into a blue zone, but only to feed.

In contrast, the flies began to occupy the green zones about two hours after the lights came on in the morning. By midday, their preference for green and their activity diminished, with about half the population split between the green and red, or dim, zones. Then, about an hour before the lights turned off, the flies returned to the green zones, and their more active state. Later, during the lights-off phase, the flies randomly distributed themselves across the three zones, indicating, the researchers said, "that light is essential for generating the observed pattern."

Lazopulo and Syed, whose lab studies fruit fly behavior to better understand animal sleep, grooming, and color preferences, attributed the disparate results from earlier studies to improvements in long-term tracking methods and to the differences in the design of the experiments, particularly the difference in the time and conditions that the flies had to choose their color preference.

Past researchers, Syed said, tested Drosophila's color preference by releasing the flies into the bottom of a T-shaped vial and giving them 30 seconds to decide which arm of the T to exit--one with a green light and the other with a blue. The UM researchers suspect the flies chose blue under duress, as "an avoidance response to a noxious stimulus."

But now they know that, at their leisure and under more natural conditions, Drosophila prefer green, like the leaves of the fruit trees where, to the frustration of many a farmer, they like to lay their eggs.

Through what Baker called a "tour de force set of experiments" that included a series of genetic manipulations, the researchers also discovered that the fruit fly's color-driven behavior doesn't depend just on its visual system, as previously documented, but also on light-sensitive cells in the insect's abdomen that sense blue. Their internal clock guides the decision to stay in green or choose dim light in the middle of the day. Delete the clock genes, and the fruit flies always stay in green, never switching to dim light in midday.

But even without the clock, they still avoid blue, thanks to those abdominal cells that signal independently of the clock genes.

What this means for humans remains to be seen. But 110 years after embryologist Thomas Hunt Morgan began breeding fruit flies to confirm how genetic traits are inherited, UM researchers have shown there's still much to learn from the common pest that has evolved into the most studied and written-about animal on the planet. In all, 10 scientists have earned six Nobel Prizes for their groundbreaking biological discoveries using fruit flies, whose genetic and physiological makeup is far simpler than that of humans, yet very similar.

In 1933, Morgan earned the first Nobel for discovering the role chromosomes play in heredity; in 2017, a trio of scientists earned the latest one, for isolating the circadian clock genes that control the rhythm of nearly every organism's daily life--not just in the brain, but in almost every cell in the body.

Credit: 
University of Miami