
New tool improves beekeepers' overwintering odds and bottom line

image: Honey bees play a critical role in US agriculture.

Image: 
ARS-USDA

TUCSON, ARIZONA, September 18, 2019--A new tool from the Agricultural Research Service (ARS) can predict the odds that honey bee colonies overwintered in cold storage will be large enough to rent for almond pollination in February. Identifying which colonies will not be worth spending dollars to overwinter can improve beekeepers' bottom line.

Beekeepers have been losing an average of 30 percent of overwintered colonies for nearly 15 years. It is expensive to overwinter colonies in areas where winter temperatures stay above freezing. So a less expensive practice of overwintering bee colonies in cold storage is becoming popular.

This new tool calculates the probability of a managed honey bee colony surviving the winter based on two measurements: the size of the colony and the percentage of varroa mite infestation in September, according to ARS entomologist Gloria DeGrandi-Hoffman, who headed the team. DeGrandi-Hoffman is research leader of the ARS Carl Hayden Bee Research Center in Tucson, Arizona.

By consulting the probability table for the likelihood of a colony having a minimum of six frames of bees--the number required for a colony to be able to fulfill a pollination contract for almond growers come February--beekeepers can decide in September if it is economically worthwhile to overwinter the colony in cold storage.

"The size of a colony in late summer or early fall can be deceiving with respect to its chances of making it through the winter. Even large colonies with more than 12 frames of bees (about 30,000 bees) have less than a 0.5 probability (50 percent chance) of being suitable for almond pollination if they have 5 or more mites per 100 bees in September," DeGrandi-Hoffman said.
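As a rough illustration of how such a probability table can drive the September go/no-go decision, here is a hypothetical sketch. The thresholds (six frames needed for a February almond contract, five mites per 100 bees) come from the article; the probability function itself is an invented placeholder with roughly the right shape, not ARS's published model.

```python
# Hypothetical sketch of the September go/no-go decision described above.
# The thresholds (six frames needed in February, five mites per 100 bees)
# come from the article; survival_probability() is an invented placeholder
# with roughly the right shape, NOT the ARS probability table.

def survival_probability(frames: float, mites: float) -> float:
    """Illustrative stand-in: bigger colonies help, mites hurt.
    Tuned so a 12-frame colony with 5 mites/100 bees falls below 0.5,
    matching the article's example."""
    size_term = min(frames / 12.0, 1.0) * 0.75   # assumed size effect
    mite_term = min(mites / 5.0, 1.0) * 0.40     # assumed mite effect
    return max(0.0, min(1.0, size_term - mite_term))

def worth_overwintering(frames_sept: float, mites_per_100_bees: float,
                        threshold: float = 0.5) -> bool:
    """Overwinter in cold storage only if the estimated probability of
    reaching the six frames needed for an almond contract is >= threshold."""
    return survival_probability(frames_sept, mites_per_100_bees) >= threshold

print(worth_overwintering(12, 5))   # False: large but mite-loaded colony
print(worth_overwintering(12, 1))   # True: large colony, low mite load
```

The point of the real tool is the same as this toy: a colony that looks large in September can still be a poor bet once the mite load is factored in.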

Even with this cost-cutting help, the research team found that revenue from pollination contracts alone is no longer likely to provide beekeepers with a sustainable income. They followed 190 honey bee colonies and recorded all costs.

Considerable resources were expended on feeding colonies and on varroa mite and pathogen control. Costs were about $200 per colony.

Almond pollination contracts paid an average of $190 per colony in 2019.
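The arithmetic behind that conclusion is simple; using the article's round figures:

```python
# Per-colony margin using the article's figures: ~$200 in feed and
# mite/pathogen-control costs vs. an average $190 almond-pollination
# contract in 2019.
cost_per_colony = 200       # dollars (approximate, from the article)
pollination_revenue = 190   # dollars (2019 average contract)

margin = pollination_revenue - cost_per_colony
print(margin)  # -10: pollination revenue alone leaves each colony in the red
```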

One way for beekeepers to remain economically viable as a business is to produce a honey crop from their bees. This is most often accomplished by moving colonies to the Northern Great Plains, where bees can forage for nectar and pollen from a wide variety of flowering plants.

"The situation has changed a lot. It is more expensive to manage honey bees with costs to feed colonies when flowers are not available and to control varroa mites. And it is more difficult to find places for honey bee colonies that provide the diverse nutrition they need," said DeGrandi-Hoffman. "Pollination revenue alone is just not adequate for beekeepers to stay in business. But we need beekeepers because managed bees are a lynchpin in agricultural production today."

"Successfully using cold storage will help beekeepers' bottom line, but we are really just learning what the best management practices should be with cold storage," she added.

Credit: 
US Department of Agriculture - Agricultural Research Service

New factor in the development of childhood lymphoma

A study recently published in the renowned journal Blood investigated four patients from independent families with malignancy, autoimmunity and immunodeficiency. The study was led by Kaan Boztug, Scientific Director of the St. Anna Children's Cancer Research Institute and the Ludwig Boltzmann Institute for Rare and Undiagnosed Diseases (LBI-RUD), Adjunct Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences and Associate Professor at the Medical University of Vienna, together with scientists from Israel, Germany, Turkey, Colombia, Argentina and the USA. All four patients had a germline mutation in the gene encoding CD137, which led to a dysfunction of the co-receptor protein CD137. This dysfunction impaired crucial factors for immune surveillance, in particular the prevention of viral infections and of the lymphomas associated with Epstein-Barr virus (EBV) infection. "Not only did we discover a new tumor predisposition syndrome, particularly for childhood lymphomas, in this study; we also learned more about the basic function of CD137 in the immune system," says Kaan Boztug, joint corresponding and last author together with colleagues Raz Somech from the Chaim Sheba Medical Center in Tel Aviv and Christoph Klein from the Dr. von Hauner Children's Hospital of the LMU Munich.

The disease mechanism in detail:

Co-receptors play a fundamental role in regulating and fine-tuning the signal strength of so-called antigen receptors, which help immune cells to recognize foreign bodies. An impaired function of these immune receptors can lead to an increased susceptibility to infections, autoimmune disorders and cancer. CD137 or 4-1BB is a co-stimulatory molecule which is frequently expressed on activated T cells to ensure a proper T cell function. Recent studies have also investigated CD137 as an attractive target for cancer immunotherapy.

EBV is a herpes virus that infects more than 90% of all people and remains latent in the body for life. In individuals with impaired T-cell function, EBV infection can lead to lymphoproliferative disorders, up to and including malignant lymphomas. Co-first author Marini Thian states: "For me as a biologist and PhD student in the lab, it's exciting to see how we bridge the gap from deep genetic analysis to understanding the disrupted immune response, in particular to EBV infection."

Diseases caused by a defect in a single gene, e.g. for CD137, provide unique opportunities to investigate the consequences of such errors for the whole organism. Thus, we can gain mechanistic insights into the signal pathways necessary for a robust immune surveillance of the host against EBV.

In summary, this study demonstrates the key role of CD137 in the immune system's control of EBV. If the body fails to keep the virus under control, lymphomas can develop. In the future, the scientists want to use their findings to develop and apply targeted therapeutics that can stop this dangerous disease process.

Credit: 
CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences

Want to optimize sales performance?

CATONSVILLE, MD, September 16, 2019--According to new research published in the INFORMS journal Marketing Science (Editor's note: The source of this research is INFORMS), companies can improve sales performance when they adjust sales commissions for the sale of more popular items. The researchers also found that providing incentives to the sales force is more cost-effective than offering consumers discount pricing. The research centered on automotive sales at the dealership level.

The study in this month's edition of the INFORMS journal Marketing Science is titled "A Salesforce-Driven Model of Consumer Choice," by Bicheng Yang of the University of British Columbia and Tat Chan and Raphael Thomadsen, both of Washington University in St. Louis.

The researchers examined how sales commissions as compensation influence total sales and which products consumers choose. To do this, they developed a model that accounts for the decisions of both salespeople and consumers.

"The selling process is structurally modeled as a joint decision that involves two parties," said Thomadsen. "Although the consumer makes the final decision, the sales representative's decision of how much service effort to invest in each product also influences the consumer's choice."

The researchers conducted their analysis using data from a car dealership in Japan, combined with a comprehensive review of the global literature.

"Our research showed us that not only do consumers have certain product preferences, but sales representatives, and their incentivization through commissions, have a powerful impact on sales performance," continued Thomadsen. "Our findings shed some light on how companies can strike the right balance to optimize sales."

Credit: 
Institute for Operations Research and the Management Sciences

MSU research team discovers new microbe in wheat stem sawfly

BOZEMAN - A team of researchers in Montana State University's College of Agriculture has discovered a previously unidentified microbe that lives symbiotically with the wheat stem sawfly, a pest that causes hundreds of millions of dollars in damage to wheat crops each year. The discovery, the result of a years-long project, provides the basis for future research that could be vital to combating losses due to wheat stem sawflies in Montana and beyond.

Carl Yeoman in the Department of Animal and Range Sciences and David Weaver in the Department of Land Resources and Environmental Sciences published a paper in the journal PeerJ in August along with a team of colleagues. The paper outlines the discovery of the microbe Spiroplasma sp. WSS - its name a nod to the wheat stem sawflies in which it was discovered. The project was inspired by knowledge of similar symbiotic relationships between other insects and microbes inside them.

Yeoman said wheat stem sawflies cause as much as $350 million in damage to wheat crops each year in the Northern Great Plains. The motivation for looking into those symbiotic relationships stemmed from a hypothesis that if the microbes in wheat stem sawflies could be identified and their functions determined, maybe they could be manipulated to work as a management tool for sawflies.

"Many insect species have microbial symbionts, and these relationships are often essential to the survival of both organisms," said Yeoman. "Microbial symbionts have been shown to affect everything from the reproductive success of their insect hosts to their nutrition - allowing them to survive on poor quality diets - and even their ability to defend against pathogens."

So, the team set out to determine what microbes are associated with wheat stem sawflies, and if they could be manipulated to affect the sawfly's ability to damage wheat crops.

Wheat stem sawflies are one of the more widespread wheat pests in western North America, said Weaver, damaging wheat by penetrating the stem to insert their eggs. The larvae then eat tissues lining the stem, inhibiting photosynthesis and causing lodging - weakening the stem to the point where the plant simply falls over in large swaths. The project was supported by the Montana Wheat and Barley Committee, which has long been in search of new ways to manage the pest.

"We've reported 20% to 30% reductions in seed weight as a result of sawfly feeding," said Weaver. "But if the stem falls and a combine doesn't pick it up, it goes from a 30% loss to a 100% loss of that stem. It's a pretty big problem, and it really frustrates growers."

Other members of the research team included Curtis Fowler of animal and range sciences; postdoctoral researcher Laura Brutscher and undergraduate Furkan Ibaoglu, who graduated in 2018, of the Department of Microbiology and Immunology; and Kevin Wanner of the Department of Plant Sciences and Plant Pathology, along with researchers from the University of Chicago and the Marine Biological Laboratory in Woods Hole, Mass.

The team began their study by collecting wheat stem sawflies at the larval and adult stages from two locations. Back in the lab, they observed three types of genomic material in their samples: wheat plant DNA, sawfly DNA and a previously undescribed species of microbe belonging to the genus Spiroplasma.

Based on elements of the microbe's genome in comparison to the sawfly genome - which was fully sequenced through another project Wanner and Weaver worked on - the team inferred that Spiroplasma sp. WSS might help sawflies break down the sugars they eat and manufacture other nutrients they don't get from their carbohydrate-heavy diet, including key B vitamins.

"[Spiroplasma] plays a role in certain functions the sawfly may not be able to do as well by itself," said Weaver. "These functions are what we're trying to understand, and potentially how these things could be adapted, and the role of the symbiont truncated to be used as a potential management tool."

That method has shown potential in other research: A group of scholars working in China examined a similar system in pea aphids and found that when symbiotic microbes were inhibited with the use of antibiotics -- or in another case, using scorpion venom -- the fertility of the aphids fell significantly, reducing their population and the risk they posed to pea plants. It is possible that similar approaches could be used with wheat stem sawflies and the newly identified Spiroplasma symbiont, Weaver said.

Yeoman and Weaver plan to make future proposals to further their understanding of the role Spiroplasma sp. WSS plays in sawflies and other insects.

"We set out to identify the symbiotic microbes of wheat stem sawflies...so that we could begin to determine if these insect-microbial relationships could be exploited as alternate measures to control WSS damage in crops," Yeoman and Weaver concluded in their paper. "The identification of Spiroplasma sp. WSS and greater genetic insight into its metabolism provide a critical first step toward our pursuit of a novel biocontrol approach."

Credit: 
Montana State University

Heart cells respond to heart attack and increase the chance of survival

image: This is a slice of human heart tissue after a recent heart attack. The white area is the lumen of the heart. Orange: healthy cells that form the border zone in the human heart; red: cells damaged by the heart attack; blue: connective tissue.

Image: 
Dennis de Bakker, © Hubrecht Institute

The heart of humans and mice does not completely recover after a heart attack. It now turns out that cells close to the area of the heart attack respond to the resulting damage, and that this response is important for survival. This was discovered by researchers from the groups of Jeroen Bakkers (Hubrecht Institute) and Vincent Christoffels (Amsterdam UMC). Additional research on these cells, and on similar cells in animals in which the heart does completely recover after a heart attack, may lead to new treatments for patients with heart damage in the future. The results of this research were published in the scientific journal Circulation.

Heart attack

Cardiovascular diseases remain the main cause of death in the Western world. After a heart attack, scar tissue replaces the lost heart muscle. Because of this, the damage to the heart is permanent and patients are considered to have chronic heart disease. In some animals, however, such as the zebrafish, the heart does fully recover after a heart attack. Researchers previously discovered that this recovery happens in the healthy heart tissue directly adjacent to the damaged area of the heart, called the border zone. Studying this border zone in different species can teach us more about the heart's response to a heart attack, so that we may eventually induce this response in humans to regrow the heart muscle lost after a heart attack.

Important gene

Researchers have now discovered that the mouse heart, which is permanently damaged after a heart attack just like the human heart, also develops a border zone. "In the border zone cells, the gene program that is normally active in the heart muscle cells was replaced by a gene program that helps the cells deal with the heart damage," says Karel van Duivenboden, researcher in the group of Vincent Christoffels (Amsterdam UMC). The researchers discovered that this program is incredibly important for surviving a heart attack. When they turned off one of the genes in this program, called NPPB, in mice with a heart attack, the chance of survival became much smaller.

Humans

By studying heart tissue of patients that had a heart attack, the researchers also identified a border zone in humans. This border zone turns out to respond in a similar way to a heart attack as the border zone in mice. However, the human border zone turned out to be located in a different area than doctors had thought until now.

Dream for the future

"Now that we know that humans also develop a border zone after a heart attack, we can start to investigate why the heart of humans and mice does not recover after a heart attack, while the hearts of some other animals do," explains Dennis de Bakker, researcher in the group of Jeroen Bakkers (Hubrecht Institute). We may eventually be able to activate cells in the border zone to start growing back the lost heart tissue, in order to restore the heart after a heart attack. "For now, however, this remains a dream for the future," according to De Bakker.

Credit: 
Hubrecht Institute

Autoantibodies in pregnancy: A cause of behavioral disorders in the child?

Dysfunctions in the maternal immune system that occur during pregnancy could possibly lead to impaired brain development in the unborn child. This is suggested by studies by the German Center for Neurodegenerative Diseases (DZNE) and Charite - Universitaetsmedizin Berlin, which are based on laboratory experiments and additional findings in humans. According to these studies, embryonic damage due to so-called autoantibodies could be a previously unnoticed cause of behavioral disorders that occur in diseases such as autism, schizophrenia and ADHD. The research results are published in the journal Annals of Neurology.

During pregnancy, antibodies from the mother's blood constantly enter the embryonic circulation via the umbilical cord to protect the developing child from infection. However, not all maternal antibodies are directed against foreign substances and serve to defend against pathogens. Some antibodies - known as autoantibodies - attack the body's own tissues. They may thus cause damage that can manifest, for example, as autoimmune diseases. Just like the beneficial antibodies, a pregnant woman passes potentially harmful autoantibodies on to her unborn child. This could promote the development of behavioral disorders in the child, as recent studies in animal models suggest. Initial data from studies in humans support these findings.

Dangerous Antibodies

The current study, led by Dr. Harald Pruess from the DZNE's Berlin site and the Department of Neurology with Experimental Neurology at the Charite, focused on an autoantibody that targets a specific protein on the surface of brain cells. This molecule, known as the 'NMDA receptor', is essential for the interconnection of neurons and normal brain development. "The NMDA receptor antibody is a relatively common autoantibody. Data from blood donations suggest that up to one percent of the general population may carry this particular autoantibody in their blood. The reasons for this are largely unclear," said Pruess. If this autoantibody reaches the brain, serious inflammations can arise. However, most carriers are free of such symptoms because the blood-brain barrier - a filtering tissue that surrounds the brain's blood vessels - is normally barely penetrable for antibodies, unless the barrier is damaged or, as with an embryo in early pregnancy, not yet fully developed.

"We investigated the hypothesis that NMDA receptor antibodies reach the brain of the embryo and cause subtle but lasting impairments during this important phase of brain development," explained Pruess. Indeed, in mice, large quantities of maternal autoantibodies were found to reach the brain of the embryo. This resulted in a reduction of NMDA receptors, altered physiological functions and impaired neuronal development. The offspring showed abnormalities in behavior and some areas of their brains were smaller compared to healthy animals. "This hitherto unknown form of pregnancy-associated brain diseases is reminiscent of psychiatric disorders caused by rubella or chickenpox pathogens. These types of infections also have a temporary effect on the brain that can have lifelong consequences," said Pruess.

Findings in humans

In humans, initial analyses of data from a group of 225 mothers suggest that these autoantibodies occur more frequently in women who have a child with a neurodevelopmental disorder or psychiatric disease. The mothers themselves seem to be protected by the blood-brain barrier. "Further studies will be needed in order to confirm the link between maternal NMDA receptor antibodies and psychiatric disorders in humans," Pruess emphasized. "However, should future research results confirm our hypothesis, tests for such antibodies in pregnant women would have to be included in prenatal screenings. Where necessary, this would make it possible to initiate treatments to remove the autoantibodies, in order to prevent potentially lifelong adverse effects on the child's health."

The current results may explain why previous studies have failed to demonstrate a clear link between NMDA receptor antibodies and psychiatric diseases such as schizophrenia. In newborns, the antibodies transferred by the mother are broken down within a matter of weeks. Most patients in existing studies were young adults. Therefore, when the testing for these autoantibodies took place, they had long since disappeared.

Credit: 
DZNE - German Center for Neurodegenerative Diseases

Tensile strength of carbon nanotubes depends on their chiral structures

image: Empirical contour map of nanotube tensile strengths.
Each pair of integers (n,m) on a hexagon identifies the nanotube structure. Nanotube structures are roughly classified into three groups (right schematics). The left image shows the moment of nanotube fracture during the tensile test.

Image: 
Nagoya University

Nagoya, Japan - Single-walled carbon nanotubes should theoretically be extremely strong, but it remains unclear why their experimental tensile strengths are lower and vary among nanotubes. A team at Nagoya University, Kyoto University, and Aichi Institute of Technology directly measured the tensile strengths of individual structure-defined single-walled carbon nanotubes, revealing key insights into the relationship between their structure and strength.

Carbon nanotubes have been predicted to be game-changing structural materials due to their outstanding theoretical strength-to-weight ratio (Fig. 1a). They have even inspired proposals for a space elevator, which would be impossible with other existing materials.

Carbon nanotubes come in a variety of structures with different carbon atom alignments. Depending on the number of concentric layers, carbon nanotubes are classified as single-walled or multi-walled (Fig. 1b). Additionally, the structure of each concentric layer is specified by its diameter and chiral angle (Fig. 1c), or by a pair of integers (n,m) called chiral indices.

Because of the difficulty of selectively synthesizing nanotubes of a single structure, systematic studies of their mechanical properties require determining the structure of each sample nanotube. However, due to their nanoscale size and the difficulty of handling them, tensile tests of structure-defined single-walled carbon nanotubes had not previously been achieved. Earlier studies showed that the tensile strength of real carbon nanotubes, including multi-walled and structure-undefined single-walled carbon nanotubes, is typically lower than the ideal value, and that strengths varied considerably among the measured samples. This scatter poses a critical problem for their practical use in macroscopic structural materials, such as yarns composed of many carbon nanotubes, because fracture begins at the weakest nanotubes. The lack of a systematic experimental study of the structure dependence has long obscured the fracture mechanism of real carbon nanotubes and has therefore hindered the development of a macroscopic structural material with an ideal strength-to-weight ratio.

A team of physicists, chemists, and mechanical engineers designed an experimental scheme for the tensile testing of structure-defined single-walled carbon nanotubes (hereafter, nanotubes). Individual nanotubes were synthesized over a micrometer-scale open slit via ambient alcohol chemical vapor deposition (Fig. 2a). Broadband Rayleigh scattering spectroscopy was employed to determine the nanotube structures (Fig. 2b). The individual structure-defined nanotubes were then picked up with a micro fork (Fig. 2c) and transferred onto a homemade microelectromechanical system (MEMS) device (Fig. 2d). Each nanotube was suspended and clamped between a pair of sample stages connected to a micro load-cell and an actuator, for direct force measurement and uniaxial tensile loading, respectively (Fig. 2d). Figure 2e shows an image at the moment a nanotube fractured during tensile loading. The force was evaluated directly from the measured displacement of the load-cell stage, which is equipped with micro springs, according to Hooke's law.
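The force and strength evaluation described above can be sketched numerically. This is a hedged illustration: the spring constant, displacement, and tube diameter below are placeholder values, and the 0.34 nm wall thickness is the conventional graphene-interlayer assumption used to define a single-walled tube's cross-section, not a value taken from the paper.

```python
import math

# Sketch of the Hooke's-law force measurement and engineering-stress
# calculation described above. All numerical values are illustrative
# placeholders, not the paper's measured quantities.

def tensile_force(spring_k_n_per_m: float, displacement_m: float) -> float:
    """F = k * x, from the displacement of the spring-mounted load-cell stage."""
    return spring_k_n_per_m * displacement_m

def tensile_strength(force_n: float, diameter_m: float,
                     wall_thickness_m: float = 0.34e-9) -> float:
    """Engineering stress using the conventional cross-section of a
    single-walled tube: an annulus of circumference pi*d and wall
    thickness ~0.34 nm (the graphite interlayer spacing)."""
    area = math.pi * diameter_m * wall_thickness_m
    return force_n / area  # in Pa

f = tensile_force(1.0, 50e-9)        # 1 N/m spring, 50 nm displacement
sigma = tensile_strength(f, 2.0e-9)  # 2 nm diameter tube
print(f"{sigma / 1e9:.0f} GPa")      # tens of GPa, a plausible scale
```

The design choice worth noting is that the stress is defined per unit of an assumed wall cross-section; strengths quoted for single-walled tubes always depend on this convention.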

The team succeeded in measuring the tensile strengths of 16 structure-defined nanotube species. Figure 3a summarizes the structure dependence of the measured ultimate tensile nanotube strengths. The strengths are seemingly dependent on both the chiral angle (Fig. 3b) and diameter (Fig. 3c) of the nanotubes.

The team found a clear relation between strength and structure by considering the directions of the carbon-carbon bonds relative to the direction of the tensile load, together with the stress concentration at structural defects (Fig. 4). Furthermore, the team developed an empirical formula to predict the strengths of real nanotubes. This empirical formula identifies the most favorable nanotube structures that should be selectively synthesized to obtain the strongest material (top of contents). Fortunately, the suggested nanotube structures are not narrowly constrained. Although a number of difficult problems remain, including the structure-selective synthesis of defect-free nanotubes, the growth of long nanotubes, and the assembly of ropes that retain their strength, this finding provides a fundamental insight for developing super-strong, ultra-lightweight materials for use in the safest and most fuel-efficient transport equipment or in massive architectural structures.

Credit: 
Japan Science and Technology Agency

Quantum physics -- Simulating fundamental interactions with ultracold atoms

Fundamental interactions between particles are mediated by gauge bosons. Generally, these are described by gauge theories, which are extremely challenging to treat theoretically across a wide range of parameters. Tackling some of these open questions in table-top experiments with specifically designed quantum simulators is an outstanding goal. Now, scientists at Ludwig-Maximilians-Universitaet (LMU) in Munich and the Max Planck Institute of Quantum Optics, together with collaborators from the Technical University of Munich, Harvard University and the Université Libre de Bruxelles, have succeeded in demonstrating the main ingredients of a specific lattice gauge theory with two-component ultracold bosons in optical superlattices. The study appears in the scientific journal Nature Physics and was highlighted with a News and Views article.

How do elementary constituents of matter interact with each other?

One of the great challenges in modern physical sciences concerns the identification of elementary constituents of matter, and the manner by which these particles interact with each other. This fundamental problem occurs in many areas of physics including high-energy, condensed matter and quantum computation. While there have been remarkable achievements, confirming the existence of a plethora of elementary particles and novel exotic phases of matter, many fundamental questions remain unanswered due to the great complexity of the problem. One of the most prominent examples in this regard is the still incomplete knowledge of the phase diagram of Quantum Chromodynamics, which describes the strong interaction between quarks and gluons.

New insights by quantum simulation

Due to the vast progress in controlling individual particles including ions, photons and atoms, it has been suggested that quantum simulations could offer new insights on open questions related to the fundamental interactions between (quasi-)particles, which are mediated by gauge fields. Originally the concept of quantum simulation was proposed by Nobel-prize winner Richard Feynman. The key idea is to engineer a quantum many-body system that is tailored to emulate the properties of a given theoretical model, hence, offering a clear view on fundamental physical phenomena of interest in a controlled laboratory environment. Engineered quantum systems made of ultracold atoms in optical lattices emerged as versatile platforms to study the properties of exotic quantum phases of matter.

Simulating gauge fields, however, is extremely demanding, since it requires the precise implementation of matter particles and gauge fields, which interact in a way that has to respect the local symmetry of the gauge theory of interest.

Simulating gauge-mediated interactions with charge-neutral atoms

A team of physicists at LMU Munich and the Max Planck Institute of Quantum Optics (MPQ), led by Professor Monika Aidelsburger and Professor Immanuel Bloch, carefully designed and successfully realized the fundamental ingredients of a specific minimal lattice gauge theory: a Z2 lattice gauge theory, which plays an important role in condensed matter physics and quantum computation. The team realized a controllable quantum simulator of ultracold bosonic particles trapped in a bichromatic optical lattice. Isolating the dynamics of two particles in a double well allowed the controlled investigation of the basic building block of the theory, which future experiments could use to build extended models. The complex interactions between the particles were manipulated using laser beams whose intensity was modulated periodically in time. The challenge was to implement well-defined local interactions between "matter" particles and "gauge bosons", the mediators of fundamental interactions. The experimentalists used two different electronic states of the atoms to emulate the two types of particles, and the local interactions were realized by addressing the atoms in a state-dependent manner. The team validated a novel approach based on periodic driving by observing the dynamics of the atoms in a state- and site-resolved manner. The excellent knowledge of the microscopic parameters of the model further allowed them to outline the path for future experiments in extended geometries and in higher dimensions.
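To make "local Z2 symmetry" concrete, here is a minimal numerical toy of such a building block: one particle hopping between two wells, with a spin-1/2 "gauge field" on the connecting link. The couplings and the specific matrix form are illustrative assumptions, not the paper's Floquet implementation; the check at the end verifies that a Gauss-law generator commutes with the Hamiltonian, which is the defining property of the gauge theory.

```python
import numpy as np

# Toy Z2 lattice-gauge building block: one particle on a double well,
# plus a spin-1/2 gauge field on the link between the two sites.
# J and f are arbitrary illustrative couplings.
J, f = 1.0, 0.54

# Pauli matrices for the link spin
tx = np.array([[0.0, 1.0], [1.0, 0.0]])
tz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

# Matter basis: particle on site 1 -> (1,0), on site 2 -> (0,1)
hop = np.array([[0.0, 1.0], [0.0, 0.0]])  # |site1><site2|

# Gauge-invariant Hamiltonian: hopping dressed by tau^z, plus a tau^x field
H = J * (np.kron(hop, tz) + np.kron(hop.T, tz)) + f * np.kron(I2, tx)

# Gauss-law generator at site 1: (-1)^{n_1} tau^x
parity1 = np.diag([-1.0, 1.0])            # n_1 = 1 when particle on site 1
G1 = np.kron(parity1, tx)

# Local Z2 symmetry: the generator commutes with H
print(np.allclose(H @ G1 - G1 @ H, 0))  # True
```

The commuting generator is the toy analogue of what the experiment had to protect: the hopping of the matter particle flips the link spin in exactly the way the local symmetry demands.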

Dr. Christian Schweizer, the lead author of this study concludes: "While it is still a long path to advance existing experimental platforms in a way that will enable us to shed new light onto fundamental open questions regarding the phase diagram of Quantum Chromodynamics, these are exciting times for quantum simulators, which develop at a remarkable rate." The authors have taken the first steps in the long journey towards studying high-energy physics problems with table-top experiments. This study provides a novel route for solving the outstanding challenges that experimentalists face with currently available protocols to simulate the fundamental interactions between elementary constituents of nature.

Credit: 
Ludwig-Maximilians-Universität München

More operations are scheduled if doctor is well rested

image: Gustav Tinghög, researcher at Linköping University has investigated how orthopaedic surgeons make decisions regarding surgery.

Image: 
Thor Balkhed, Linköping University

Researchers at Linköping University have investigated how orthopaedic surgeons make decisions regarding surgery, and how the decisions are related to how much of their work shift they have completed. The results show that a patient who meets the surgeon at the end of his or her shift is less likely to be scheduled for surgery.

Previous studies have shown that when we get tired, we make decisions without engaging in cognitively demanding reasoning, and we postpone risky or uncertain choices. The researchers at Linköping University wanted to investigate how decision fatigue affects decision-making in healthcare. The results have been published in the journal Health Economics.

"Our study shows that medical decision-making is also affected when there are repeated decisions. If it's the case that important decisions on medical prioritisation are affected by what time of the day you meet the doctor, perhaps this should be looked at. We want society's resources to be used as efficiently and fairly as possible", says Gustav Tinghög, associate professor at Linköping University. He also works at the JEDI LAB, a behavioural and neuroeconomics research lab at Linköping University.

The study was conducted at a Swedish orthopaedic clinic, where eight surgeons work. The surgeons work either the morning shift (up to lunchtime), the afternoon shift, or double shifts, i.e. both before and after lunch.

The researchers studied hospital data for 133 shifts, which included 848 patient appointments for knee, hip and foot problems. At the appointments, the surgeon decides whether the patient requires an operation. If so, the surgeon must report this in a separate journal, and sometimes perform a preoperative examination, such as an ECG or a blood test.

When the researchers looked across the surgeons' shifts, they saw that four of ten patients (40.2%) who met the surgeons early in the shift were scheduled for an operation, whereas when the surgeons were near the end of the shift, the figure was just two of ten (21.7%).

Thus the results concerning decision-making in medicine are in line with previous research on decision fatigue. Late in the surgeon's workday, he or she is more inclined to rely on simplistic decision-making processes, and to avoid big decisions.

The Linköping researchers' results indicate that when orthopaedic surgeons are rested, they make more decisions to schedule patients for surgery. But Gustav Tinghög says that more research is needed, for example on how decision-making works for doctors other than orthopaedic surgeons.

Credit: 
Linköping University

Inequality: What we've learned from the 'Robots of the late Neolithic'

image: Humble ox or Neolithic robot?

Image: 
Amy Bogaard

Seven thousand years ago, societies across Eurasia began to show signs of lasting divisions between haves and have-nots. In new research published in the journal Antiquity, scientists chart the precipitous surge of prehistoric inequality and trace its economic origins back to the adoption of ox-drawn plows.

Their findings challenge a long-held view that inequality arose when human societies first transitioned from hunting and gathering to agriculture. According to the researchers, it was not agriculture per se that ushered in substantial wealth inequalities, but instead a transformation of farming that made land more valuable and labor less so.

"Ox-drawn plows were the robots of the late Neolithic," explains co-author Samuel Bowles, an economist at the Santa Fe Institute. The oxen were a form of labor-saving technology that led to a decoupling of wealth from labor, a decoupling fundamental to modern wealth inequality. "The effect was the same as today: growing economic disparities between those who owned the robots and those whose work the robots displaced."

In the first of two companion papers, the researchers present new statistical methods for comparing wealth inequality across different kinds of wealth, different societies, different regions, and different times in history. Their analysis of data from 150 archaeological sites reveals a steep increase in inequality in Eurasia from around 4,000 BC -- several millennia after the advent of agriculture.
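The release does not reproduce those statistics, but a measure commonly used for archaeological wealth data is the Gini coefficient (0 for perfect equality, values near 1 for extreme inequality). A minimal sketch, using hypothetical house-size data rather than the study's 150-site dataset:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a sample (0 = perfect equality, ~1 = one owner has all)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # standard formula for sorted data: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return (2 * np.sum(np.arange(1, n + 1) * x)) / (n * x.sum()) - (n + 1) / n

# hypothetical house sizes (m^2) for ten households
print(round(gini([100] * 10), 3))        # → 0.0  (perfectly egalitarian village)
print(round(gini([10] * 9 + [910]), 3))  # → 0.81 (one household owns most of it)
```

Comparing such coefficients across sites, regions, and wealth types (land, livestock, house size) is the kind of cross-society comparison the companion paper's methods make possible.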

"The surprise here isn't so much that inequality takes off later on, it's that it stayed low for such a long time," says lead author Amy Bogaard, an archaeologist based at the University of Oxford who is also an external professor at the Santa Fe Institute.

"The usual story -- that the societies that adopted agriculture became more unequal -- is no longer valid because we observed that some societies who adopted agriculture were remarkably egalitarian for thousands of years," says co-author Mattia Fochesato, an economist at Bocconi University.

Before around 4,000 BC, societies across the Middle East and Europe cultivated a patchwork of small garden plots, which Bogaard likens to present-day "allotments" in the UK. Families would have grown a variety of cereal grains, as well as lentils, peas, and other pulse crops that needed to be harvested by hand. Notably, they would have tilled the soil by hand using hoes, in some cases also with the help of unspecialized cattle (such as aging milk cows) to pull plows, and carefully monitored their gardens during the growing season to protect them from wild animals. "It was quite a busy landscape, with lots of people working in and around these garden plots."

Then something changed. Farmers who were well resourced enough to raise and maintain specialized plow oxen saw new opportunities in farming additional land. A single farmer with an ox team could cultivate ten times as much land as a hoe farmer, or more, and would begin to acquire more and more land to cultivate. Those who owned land and ox teams also began to opt for more stress-tolerant crops, like barley or certain kinds of wheat, that didn't require much labor.

By the second millennium BC, fields in many farming landscapes stretched to the horizon, and societies were deeply divided between wealthy landowners, who passed their holdings on to their children, and land-poor or landless families.

The mechanism that drove this change is detailed in an economic model in the researchers' second paper. It reveals a key distinction between farming systems where human labor was the limiting factor for production, versus systems where human labor was more expendable, and where land was the limiting factor.

"So long as labor was the key input for production, inequality was limited because families did not differ much in how much labor they could deploy to produce crops," Fochesato explains. "But when the most important input became land, differences between families widened because land and other material forms of wealth could be accumulated and transmitted over generations. By chance, or force, or hard work, some families came to have a lot more than others. Then radical inequality arose."

The two new papers are part of a growing body of scientific research that applies comparative economic measures to the archaeological record. Much of the work is part of Bowles' long-running series of interdisciplinary workshops on the origins of wealth inequality, which convene annually at the Santa Fe Institute. The new research supports previous findings by archaeologist Tim Kohler et al. (Nature, 2017), which drew attention to markedly greater wealth inequality in post-Neolithic Eurasia than in the Americas, where domesticated draft animals would not have been available.

One consequence of inequality, Bogaard notes, is that the most unequal societies tended to be more fragile and susceptible to political upheaval or climate change.

The takeaway for people today is that "if there are opportunities to monopolize land or other key assets in a production system, people will. And if there aren't institutional or other redistributive mechanisms, inequality is always where we're going to end up." Land is still a relevant asset, Bogaard says, "but there are many other kinds of assets now that we should think about people's capacity to own and benefit from."

Credit: 
Santa Fe Institute

Artificial intelligence probes dark matter in the universe

image: This is a typical computer-generated dark matter map used by the researchers to train the neural network.

Image: 
ETH Zurich

Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of each is out there in order to refine their models.

At ETH Zurich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.

Facial recognition for cosmology

While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: "Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy." As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter - including the dark variety - slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as "weak gravitational lensing", distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.

Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.
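As a concrete illustration of such a human-designed statistic, a ring-averaged (isotropic) two-point autocorrelation of a map can be computed via FFT. This is a generic sketch of the idea applied to a toy random field, not the analysis pipeline used in the paper:

```python
import numpy as np

def radial_autocorrelation(mass_map):
    """Isotropic two-point correlation of a 2D map, averaged over rings of equal lag."""
    delta = mass_map - mass_map.mean()
    power = np.abs(np.fft.fft2(delta)) ** 2          # power spectrum
    corr = np.fft.ifft2(power).real / delta.size     # circular autocovariance
    corr = np.fft.fftshift(corr)                     # zero lag at the centre
    ny, nx = corr.shape
    y, x = np.indices(corr.shape)
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), weights=corr.ravel()) / counts

rng = np.random.default_rng(0)
toy_map = rng.normal(size=(64, 64))   # structureless stand-in for a mass map
xi = radial_autocorrelation(toy_map)
# xi[0] is the map variance; for uncorrelated noise xi at larger lags is ~0
```

A real dark matter map would show positive correlation extending to large separations, reflecting clustering; the point of the newer machine-learning approach is precisely that such hand-picked summaries discard pattern information a network can still exploit.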

Neural networks teach themselves

"In our recent work, we have used a completely new methodology", says Alexandre Refregier. "Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job." This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier's group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.

In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter - for instance, the ratio between the total amount of dark matter and dark energy - should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
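The study itself used deep convolutional networks, but the training logic described above (regress from simulated maps, where the true parameter is known, onto that parameter) can be illustrated with a deliberately simple linear stand-in; every number here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for "train on simulations where the truth is known":
# each simulated "map" is a noisy field whose pixels depend on one parameter.
n_sims, n_pix = 500, 64
truth = rng.uniform(0.1, 0.5, n_sims)                  # invented parameter values
maps = truth[:, None] + 0.05 * rng.normal(size=(n_sims, n_pix))

# A linear "network": least-squares weights mapping a map to the parameter
X = np.hstack([maps, np.ones((n_sims, 1))])            # pixel features + bias column
w, *_ = np.linalg.lstsq(X, truth, rcond=None)

# Apply the trained estimator to an unseen, noiseless map at parameter 0.3
pred = float(np.hstack([np.full(n_pix, 0.3), 1.0]) @ w)
print(round(pred, 2))                                   # ≈ 0.3
```

A deep network follows the same supervised loop, but learns nonlinear spatial features from the maps instead of fixed per-pixel weights, which is what lets it outperform hand-designed statistics.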

More accurate than human-made analysis

The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time - which is expensive.
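The "twice as much observation time" figure follows from the usual assumption that statistical errors on such parameters shrink as 1/√t with observing time t; a quick check of the arithmetic:

```python
# Statistical errors on a measured parameter typically shrink as 1/sqrt(observing time t).
improvement = 0.30                   # the 30% smaller error from the new analysis
# To shrink the error by the same factor by observing longer, t must grow by:
extra_time = 1 / (1 - improvement) ** 2
print(round(extra_time, 2))          # → 2.04, i.e. roughly twice the observation time
```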

Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset. "This is the first time such machine learning tools have been used in this context," says Fluri, "and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications."

As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey. Also, more cosmological parameters and refinements such as details about the nature of dark energy will be fed to the neural networks.

Credit: 
ETH Zurich

Scientists develop DNA microcapsules with built-in ion channels

image: This is a conceptual illustration of DNA nanoplate-based microcapsules.

Image: 
Masahiro Takinoue of Tokyo Institute of Technology

A research group led by Tokyo Tech reports a way of constructing DNA-based microcapsules that hold great promise for the development of new functional materials and devices (Figure 1). They showed that tiny pores on the surface of these capsules can act as ion channels. Their study will accelerate advances in artificial cell engineering and molecular robotics, as well as nanotechnology itself.

DNA-based, self-assembled nanostructures are promising building blocks for new kinds of micro- and nanodevices for biomedical and environmental applications. Much research is currently focused on adding functionality to such structures in order to expand their versatility.

For example, engineered capsules called liposomes, which have a lipid-bilayer membrane, are already being used successfully as sensors, diagnostic tools and drug delivery systems. Another group of capsules, known as Pickering emulsions[1] or colloidosomes, do not have a lipid bilayer but are instead bounded by a membrane of colloidal particles; these also have potential for many biotechnologically useful applications.

Now, a research group led by biophysicist Masahiro Takinoue of Tokyo Institute of Technology reports a new type of Pickering emulsion with the added functionality of ion channels -- an achievement that opens up new routes to designing artificial cells and molecular robots.

"For the first time, we have demonstrated ion channel function using pored DNA nanostructures without the presence of a lipid-bilayer membrane," says Takinoue.

The team's design exploits the self-assembling properties of DNA origami nanoplates[2]. The resulting Pickering emulsions are stabilized by the amphiphilic[3] nature of the nanoplates. (See Figure 2.)

One of the most exciting implications of the study, Takinoue explains, is that it will be possible to develop stimuli-responsive systems -- ones that are based on the concept of open-close switching. Such systems could eventually be used to develop artificial neural networks mimicking the way the human brain works.

"In addition, a stimuli-responsive shape change of the DNA nanoplates could serve as a driving force for autonomous locomotion, which would be useful for the development of molecular robots," Takinoue says.

The present study highlights the team's strengths in combining DNA nanotechnology with a perspective grounded in biophysics and soft-matter physics.

Credit: 
Tokyo Institute of Technology

Scientists identify previously unknown 'hybrid zone' between hummingbird species

image: Allen's and Rufous hummingbirds in the northwest are hybridizing, and scientists hope that studying them may provide new insights into how biodiversity evolves.

Image: 
Brian Myers

We usually think of a species as being reproductively isolated - that is, not mating with other species in the wild. Occasionally, however, closely related species do interbreed. New research just published in The Auk: Ornithological Advances documents a previously undiscovered hybrid zone along the coast of northern California and southern Oregon, where two closely related hummingbird species, Allen's Hummingbird and Rufous Hummingbird, are blurring species boundaries. Researchers hope that studying cases such as this one could improve their understanding of how biodiversity is created and maintained.

A hybrid zone is an area where the ranges of two closely related species overlap and the species interbreed with one another. To map the extent of the hummingbird hybrid zone in northern California and southern Oregon, San Diego State University's Brian Myers and his colleagues collected data on the physical traits and courtship behavior of more than 300 hummingbirds in the region. Most of the breeding males across the hybrid zone had a mix of characteristics of the two species, shifting gradually from more Rufous-like birds in the north to more Allen's-like birds in the south.

The males of different hummingbird species have distinct displays, performing aerial acrobatics during which their tail feathers produce various sounds. The researchers captured hummingbirds using traps at feeders, temporarily keeping females in mesh cages, where they caught the attention of territorial males. "Sometimes the birds outsmart me," says Myers. "They'll only visit a feeder when the trap isn't on it, or they won't perform their courtship displays to the female hummingbird I'm carrying around, and this can make things very slow sometimes."

The area where Allen's and Rufous hummingbirds interbreed stretches more than 300 kilometers along the Pacific coast and 90 kilometers inland, and it could have implications for the species' futures. "When a hybrid zone is so large, and when one of the hybridizing species has as small a range as Allen's Hummingbird, it raises the possibility of their range shrinking even further as they're swamped by hybrids that carry Rufous Hummingbird traits and pass these genes into Allen's Hummingbird populations," says Myers. "As biodiversity continues to drop, it's more important than ever to understand how new species form and what maintains species barriers once they're made - is there a certain habitat or other resources that require protection? Is it more related to sexual selection? Hybrid zones are an ideal tool with which to study this."

Credit: 
American Ornithological Society Publications Office

Racism a factor in asthma control for young African-American children

ARLINGTON HEIGHTS, IL - (SEPTEMBER 17, 2019) - A new article in Annals of Allergy, Asthma and Immunology, the scientific journal of the American College of Allergy, Asthma and Immunology (ACAAI), shows an association between African American parents/guardians who have experienced the chronic stress associated with exposure to racism and poor asthma control in their young children.

"The relationship between adverse childhood experiences (ACEs)/chronic stressors and asthma risk has been described in adult and some pediatric populations," says allergist Bridgette L. Jones, MD, MS, ACAAI member and lead author of the study. "A recent policy statement from the American Academy of Pediatrics examined how racism can negatively impact the development of infants, children and teens. We wanted to focus on asthma because we know exposure to chronic/toxic stress affects the pathways that are relevant to asthma control. What hasn't been examined is the impact of these experiences in early childhood where interventions to address the exposures may be more effective."

Thirty-one parents/guardians completed stress questionnaires that asked about their experiences with racism. The questionnaires also asked about their child's asthma control. The children of parents/guardians who reported high negative scores in association with experiences of racism had decreased asthma control. In other words, increased experiences of racism identified as stressful by parents were associated with lower asthma control in the child. Forty-seven percent of the children had previously required hospitalization for asthma and 27 percent had required intensive care support during an asthma hospitalization.

"ACEs and toxic/chronic stressors such as emotional/physical/sexual abuse, housing instability, financial stress and experiencing racial discrimination are psychosocial factors that are associated with poor asthma control in children and adults," says Dr. Jones. "Knowing that's true for older children, it's important to identify stressors in young children that are potentially able to be modified. That could possibly allow for early intervention to improve health-related outcomes in the long term."

If asthma symptoms are negatively affecting your child, find an allergist near you who can help create a personal plan to help them lead the life they want to live. The ACAAI allergist locator can help you find an allergist in your area.

Credit: 
American College of Allergy, Asthma, and Immunology

Nature documentaries increasingly talk about threats to nature, but still don't show them

image: High quality figure 1.

Image: 
Laura Thomas-Walter

Researchers from Bangor University, University of Kent, Newcastle University and University of Oxford coded the scripts from the four most recent David Attenborough narrated series. They found the Netflix series Our Planet dedicated 15% of the script to environmental threats and conservation, far exceeding the BBC series Planet Earth II and Dynasties, with only Blue Planet II coming close to this figure.

The researchers also highlighted the uniqueness of Our Planet in weaving the topic of human impacts on nature throughout each episode rather than being the subject of a dedicated final episode, which was done in Blue Planet II.

Despite the more frequent mentions of threats to nature, the researchers noted how visually similar Our Planet was to the other series they analysed. It had few visual depictions of the threats and largely showed the natural world as pristine and separate from humans, something nature documentaries have often been criticised for in the past.

Professor Julia Jones, lead author, said: "One could argue that by using camera angles to avoid showing any sign of people, nature film makers are being disingenuous, and even actively misleading audiences. The viewer may be led to believe that things cannot be that bad for biodiversity as what they are seeing on the screen shows nature, for the most part, doing fine.

"The inextricable link between threats to the natural world and the high consumption of western lifestyles would be more difficult to ignore if the presence, or even dominance, of commercial agriculture, mining and transport infrastructure were more visible in the landscapes, reducing the space for the awe-inspiring wild spectacles shown."

Nature documentaries have the potential to elicit behavioural change and increase support for conservation, but to what extent is not well understood. "Previous studies have shown that documentaries can increase willingness amongst viewers to make personal lifestyle changes, increase support for conservation organisations, and generate positivity towards an issue, making policy change more likely. However, we still don't understand the mechanisms by which these changes take place. Considerable research is needed to investigate how viewing nature, portrayed as threatened or pristine, in a documentary affects people in ways which might, ultimately, contribute to saving it," said Laura Thomas-Walters, co-author.

The researchers suggest collaboration between filmmakers and researchers could help us understand the impacts of these documentaries. Dr Diogo Veríssimo, co-author, said: "There is limited evidence on the causal relationships between viewing a documentary and subsequent behaviour change. Nature documentary producers should work with researchers to better understand these positive and negative impacts."

"Empirical data needs to be collected to examine whether showing anthropogenic impacts is actually more effective at spurring behaviour change amongst audiences," added Laura Thomas-Walters.

Credit: 
British Ecological Society