Culture

Social media data used to predict retail failure

Researchers have used a combination of social media and transport data to predict the likelihood that a given retail business will succeed or fail.

Using information from ten different cities around the world, the researchers, led by the University of Cambridge, have developed a model that can predict with 80% accuracy whether a new business will fail within six months. The results will be presented at the ACM Conference on Pervasive and Ubiquitous Computing (Ubicomp), taking place this week in Singapore.

While the retail sector has always been risky, the past several years have seen a transformation of high streets as more and more retailers fail. The model built by the researchers could be useful for both entrepreneurs and urban planners when determining where to locate their business or which areas to invest in.

"One of the most important questions for any new business is the amount of demand it will receive. This directly relates to how likely that business is to succeed," said lead author Krittika D'Silva, a Gates Scholar and PhD student at Cambridge's Department of Computer Science and Technology. "What sort of metrics can we use to make those predictions?"

D'Silva and her colleagues used more than 74 million check-ins from the location-based social network Foursquare from Chicago, Helsinki, Jakarta, London, Los Angeles, New York, Paris, San Francisco, Singapore and Tokyo; and data from 181 million taxi trips from New York and Singapore.

Using this data, the researchers classified venues according to the properties of the neighbourhoods in which they were located, the visit patterns at different times of day, and whether a neighbourhood attracted visitors from other neighbourhoods.

"We wanted to better understand the predictive power that metrics about a place at a certain point in time have," said D'Silva.

Whether a business succeeds or fails is normally based on a number of controllable and uncontrollable factors. Controllable factors might include the quality or price of the store's product, its opening hours and its customer satisfaction. Uncontrollable factors might include unemployment rates of a city, overall economic conditions and urban policies.

"We found that even without information about any of these uncontrollable factors, we could still use venue-specific, location-related and mobility-based features in predicting the likely demise of a business," said D'Silva.

The data showed that across all ten cities, venues that are popular around the clock, rather than just at certain times of day, are more likely to succeed. Additionally, venues that are in demand outside the typical popular hours of other venues in the neighbourhood tend to survive longer.

The data also suggested that venues in diverse neighbourhoods, with multiple types of businesses, tend to survive longer.

While the ten cities had certain similarities, the researchers also had to account for their differences.

"The metrics that were useful predictors vary from city to city, which suggests that factors affect cities in different ways," said D'Silva. "As one example, that the speed of travel to a venue is a significant metric only in New York and Tokyo. This could relate to the speed of transit in those cities or perhaps to the rates of traffic."

To test the predictive power of their model, the researchers first had to determine whether a particular venue had closed within the time window of their data set. They then 'trained' the model on a subset of venues, telling the model what the features of those venues were in the first time window and whether the venue was open or closed in a second time window. They then tested the trained model on another subset of the data to see how accurate it was.
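
To make the train-and-test procedure concrete, here is a minimal sketch in Python of the same idea: fit a classifier on one subset of venues using features from an earlier time window, then check its accuracy on held-out venues. The feature names, synthetic data and choice of classifier are illustrative assumptions, not the researchers' actual model.

```python
# Illustrative sketch (not the authors' code): training a classifier to predict
# whether a venue observed in one time window has closed by a later window.
# Feature names are hypothetical stand-ins for the paper's venue, neighbourhood
# and mobility metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_venues = 5000

# Hypothetical features derived from check-in and taxi data in window 1.
X = np.column_stack([
    rng.poisson(50, n_venues),   # total check-ins
    rng.random(n_venues),        # fraction of check-ins outside peak hours
    rng.random(n_venues),        # neighbourhood venue-type diversity
    rng.random(n_venues),        # share of visitors from other neighbourhoods
])
# Label: 1 if the venue had closed by window 2 (synthetic here).
y = (rng.random(n_venues) < 0.3).astype(int)

# Train on one subset of venues, test on a held-out subset.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```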

According to the researchers, their model shows that when deciding when and where to open a business, it is important to look beyond the static features of a given neighbourhood and to consider the ways that people move to and through that neighbourhood at different times of day. They now want to consider how these features vary across different neighbourhoods in order to improve the accuracy of their model.

Credit: 
University of Cambridge

When is a nova not a nova? When a white dwarf and a brown dwarf collide

image: This object is possibly the oldest of its kind ever catalogued: the hourglass-shaped remnant named CK Vulpeculae.

Image: 
ALMA (ESO/NAOJ/NRAO)/S. P. S. Eyres

Researchers from Keele University have worked with an international team of astronomers to find for the first time that a white dwarf and a brown dwarf collided in a 'blaze of glory' that was witnessed on Earth in 1670.

- Atacama Large Millimeter/submillimeter Array (ALMA) in Chile observed the debris from the explosion
- This is the first time such an event has been conclusively identified
- The dual rings of dust and gas - the debris from the explosion - resemble an hourglass
- Studying the remains of the merger, the researchers were able to detect the tell-tale signature of lithium
- The remains are also rich in organic molecules such as formaldehyde (H2CO) and methanamide (NH2CHO)
- The brown dwarf star was 'shredded' and dumped on the surface of a white dwarf star, leading to the 1670 eruption and the hourglass we see today

Using the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, the international team of astronomers, including workers from the Universities of Keele, Manchester, South Wales, Arizona State, Minnesota, Ohio State, Warmia & Mazury, and the South African Astronomical Observatory, found evidence that a white dwarf (the remains of a star like the Sun at the end of its life) and a brown dwarf (a 'failed' star without sufficient mass to sustain thermonuclear fusion) collided in a short-lived blaze of glory that was witnessed on Earth in 1670 as Nova Cygni - 'a new star below the head of the Swan.'

In July of 1670, observers on Earth witnessed a 'new star', or nova, in the constellation Cygnus - the Swan. Where previously there was no obvious star, there abruptly appeared a star as bright as those in the Plough, which gradually faded, reappeared, and finally disappeared from view.

Modern astronomers studying the remains of this cosmic event initially thought it was triggered by the merging of two main-sequence stars - stars on the same evolutionary path as our Sun. This so-called 'new star' was long referred to as 'Nova Vulpeculae 1670', and later became known as CK Vulpeculae.

However, we now know that CK Vulpeculae was not what we would today describe as a 'nova', but is in fact the merger of two stars - a white dwarf and a brown dwarf.

By studying the debris from this explosion - which takes the form of dual rings of dust and gas, resembling an hourglass with a compact central object - the research team concluded that a brown dwarf, a so-called failed star without the mass to sustain nuclear fusion, had merged with a white dwarf.

Professor Nye Evans, Professor of Astrophysics at Keele University and co-author on the paper appearing in the Monthly Notices of the Royal Astronomical Society explains:

"CK Vulpeculae has in the past been regarded as the oldest 'old nova'. However, the observations of CK Vulpeculae I have made over the years, using telescopes on the ground and in space, convinced me more and more that this was no nova. Everyone knew what it wasn't - but nobody knew what it was! But a stellar merger of some sort seemed the best bet. With our ALMA observations of the exquisite dusty hourglass and the warped disc, plus the presence of lithium and peculiar isotope abundances, the jig-saw all fitted together: in 1670 a brown dwarf star was 'shredded' and dumped on the surface of a white dwarf star, leading to the 1670 eruption and the hourglass we see today."

The team of European, American and South African astronomers used the Atacama Large Millimeter/submillimeter Array to examine the remains of the merger, with some interesting findings. By studying the light from two, more distant, stars as they shine through the dusty remains of the merger, the researchers were able to detect the tell-tale signature of the element lithium, which is easily destroyed in stellar interiors.

Dr Stewart Eyres, Deputy Dean of the Faculty of Computing, Engineering and Science at the University of South Wales and lead author on the paper explains:

"The material in the hourglass contains the element lithium, normally easily destroyed in stellar interiors. The presence of lithium, together with unusual isotopic ratios of the elements C, N, O, indicate that an (astronomically!) small amount of material, in the form of a brown dwarf star, crashed onto the surface of a white dwarf in 1670, leading to thermonuclear 'burning', an eruption that led to the brightening seen by the Carthusian monk Anthelme and the astronomer Hevelius, and in the hourglass we see today."

Professor Albert Zijlstra, from The University of Manchester's School of Physics & Astronomy, co-author of the study, says:

"Stellar collisions are the most violent events in the Universe. Most attention is given to collisions between neutrons stars, between two white dwarfs - which can give a supernova - and star-planet collisions.

"But it is very rare to actually see a collision, and where we believe one occurred, it is difficult to know what kind of stars collided. The type we believe that happened here is a new one, not previously considered or ever seen before. This is an extremely exciting discovery."

Professor Sumner Starrfield, Regents' Professor of Astrophysics at Arizona State University comments:

"The white dwarf would have been about 10 times more massive than the brown dwarf, so as the brown dwarf spiralled into the white dwarf it would have been ripped apart by the intense tidal forces exerted by the white dwarf. When these two objects collided, they spilled out a cocktail of molecules and unusual element isotopes.

"These organic molecules, which we could not only detect with ALMA, but also measure how they were expanding into the surrounding environment, provide compelling evidence of the true origin of this blast. This is the first time such an event has been conclusively identified.

"Intriguingly, the hourglass is also rich in organic molecules such as formaldehyde (H2CO), methanol (CH3OH) and methanamide (NH2CHO). These molecules would not survive in an environment undergoing nuclear fusion and must have been produced in the debris from the explosion. This lends further support to the conclusion that a brown dwarf met its demise in a star-on-star collision with a white dwarf."

Since most star systems in the Milky Way are binary, stellar collisions are not that rare, the astronomers note.

Professor Starrfield adds:

"Such collisions are probably not rare and this material will eventually become part of a new planetary system, implying that they may already contain the building-blocks of organic molecules as they are forming."

Credit: 
Keele University

Half the brain encodes both arm movements

image: Patients implanted with electrocorticography arrays completed a 3D center-out reaching task. Electrode locations were based upon the clinical requirements of each patient and were localized to an atlas brain for display (A). (B) Patients were seated in the semi-recumbent position and completed reaching movements from the center to the corners of a 50 cm physical cube based upon cues from LED lights located at each target while hand positions and ECoG signals were simultaneously recorded. Each patient was implanted with electrodes in a single cortical hemisphere and performed the task with the arm contralateral (C) and ipsilateral (D) to the electrode array in separate recording sessions.

Image: 
Bundy et al., JNeurosci (2018)

Individual arm movements are represented by neural activity in both the left and right hemispheres of the brain, according to a study of epilepsy patients published in JNeurosci. This finding suggests the unaffected hemisphere in stroke could be harnessed to restore limb function on the same side of the body by controlling a brain-computer interface.

The right side of the brain is understood to control the left side of the body, and vice versa. Recent evidence, however, supports a connection between the same side of the brain and body during limb movement.

Eric Leuthardt, David Bundy, and colleagues explored brain activity during such ipsilateral movements during a reaching task in four epilepsy patients whose condition enabled invasive monitoring of their brains through implanted electrodes. Using a machine learning algorithm, the researchers demonstrate successful decoding of speed, velocity, and position information of both left and right arm movements regardless of the location of the electrodes. In addition to advancing our understanding of how the brain controls the body, these results could inform the development of more effective rehabilitation strategies following brain injury.
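
As an illustration of what such decoding involves, the sketch below fits a simple linear decoder that maps neural features (for example, band power on each ECoG channel) to a continuous kinematic variable and scores it with cross-validation. The synthetic data, the ridge-regression decoder and the feature choice are assumptions for illustration; this is not the authors' pipeline.

```python
# Minimal sketch (not the study's actual analysis): decoding a continuous arm
# kinematic variable (e.g., hand speed) from band-power features of ECoG
# channels with ridge regression, evaluated by cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_channels = 2000, 64                     # time points x ECoG electrodes
neural_features = rng.standard_normal((n_samples, n_channels))  # e.g., high-gamma power
true_weights = rng.standard_normal(n_channels)
hand_speed = neural_features @ true_weights + rng.standard_normal(n_samples)  # synthetic target

decoder = Ridge(alpha=1.0)
r2_scores = cross_val_score(decoder, neural_features, hand_speed, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2_scores.mean():.2f}")
```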

Credit: 
Society for Neuroscience

Why single embryo transfer during IVF sometimes results in twins or triplets

image: An ultrasound image showing zygotic splitting in process in the womb.

Image: 
Credit/copyright: Keiji Kuroda

It has been known for some time that it is better to transfer a single embryo to a woman's womb during assisted reproduction treatment (ART) rather than several embryos in order to avoid a multiple pregnancy and the risks associated with it such as foetal deaths, miscarriage, premature delivery and low birthweight. However, even when single embryo transfer (SET) is performed, some women still become pregnant with twins or even triplets.

In a study published today (Tuesday) in Human Reproduction [1], one of the world's leading reproductive medicine journals, researchers have investigated one of the reasons why this happens and have, for the first time, been able to calculate that the proportion of multiple pregnancies after SET is 1.6% and that 1.36% of pregnancies after SET occur as a result of a process called zygotic splitting.

These results come from the largest study to investigate zygotic splitting after SET - it analysed 937,848 SET cycles - and it highlights factors that could increase the risk. These include using frozen-thawed embryos for SET, maturing the fertilised egg in the laboratory for five or six days until it becomes a blastocyst before SET, and assisted hatching, in which a small hole is created in the layer of proteins surrounding the embryo (the zona pellucida) to help the embryo hatch out and attach itself to the wall of the woman's womb.

One of the authors of the study, Dr Keiji Kuroda, of the Sugiyama Clinic Shinjuku and Juntendo University Faculty of Medicine in Japan, said: "As a result of our findings, clinicians may want to consider whether they should counsel couples about the small increase in the risk of multiple pregnancies as a result of zygotic splitting associated with some embryo manipulations."

A zygote is the fertilised egg cell that results from a man's sperm fertilising a woman's egg, and it contains all the genetic information from both parents to form a new individual. It soon starts to divide and subdivide into many more cells called blastomeres, which eventually form the embryo. Zygotic splitting occurs between days two and six when the zygote divides, usually into two, and each zygote then goes on to develop into an embryo, leading to identical twins (or triplets if it divides into three). These are known as "monozygotic" twins (or triplets).

It can be difficult to identify whether a multiple pregnancy has occurred after true zygotic splitting or as a result of SET combined with sexual intercourse that results in another egg being fertilised at the same time. The only way to be sure is to use ultrasound to see whether there are one or more gestational sacs and to detect the foetus or foetuses via their heartbeats. For this study, the researchers identified pregnancies arising from true zygotic splitting as those in which the number of foetuses exceeded the number of gestational sacs.
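
The classification rule is simple enough to state in a few lines of code. The sketch below is purely illustrative of that rule; the field names and example counts are hypothetical and not drawn from the registry.

```python
# Sketch of the classification rule described above (field names are hypothetical):
# a pregnancy is counted as true zygotic splitting when the number of foetuses
# seen on ultrasound exceeds the number of gestational sacs.
def is_true_zygotic_splitting(n_foetuses: int, n_gestational_sacs: int) -> bool:
    return n_foetuses > n_gestational_sacs

pregnancies = [
    {"foetuses": 2, "sacs": 1},   # monozygotic twins after SET -> zygotic splitting
    {"foetuses": 2, "sacs": 2},   # could instead be a second, naturally conceived embryo
    {"foetuses": 1, "sacs": 1},   # singleton
]
n_split = sum(is_true_zygotic_splitting(p["foetuses"], p["sacs"]) for p in pregnancies)
print(f"true zygotic splitting pregnancies: {n_split} of {len(pregnancies)}")
```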

Dr Kuroda and his colleagues looked at nearly a million cycles of SET carried out in Japan between 2007 and 2014 that were reported to the Japanese ART national registry (more than 99% of all ART treatment cycles have been entered in this registry since 2007). After SET using fresh or frozen and then thawed embryos, there were nearly 277,000 clinical pregnancies (29.5%), including 4,310 twins (1.56% of pregnancies) and 109 triplets (0.04% of pregnancies). The prevalence of true zygotic splitting was 1.36%, and the researchers found that, compared to singleton pregnancies, using frozen-thawed embryos increased the risk of zygotic splitting by 34%, maturing the blastocysts in the lab for a few days before embryo transfer increased it by 79%, and assisted hatching increased it by 21%.

Dr Kuroda said: "Blastocyst culture was associated with the highest risk of zygotic splitting out of the three risk factors we identified. Embryo selection using a computer-automated time-lapse image analysis test and transferring zygotes when they are just starting to divide may be solutions to decreasing the risk.

"However, it is important to point out that although the use of single embryo transfer has increased worldwide, the prevalence of zygotic splitting pregnancies has not. This may be because ART techniques, and also the cultures in which blastocysts are matured in the lab, have improved in recent years, reducing the stress on embryos and leading to a decrease in the risk of zygotic splitting. In fact, the risk of zygotic splitting from blastocyst culture was lower between 2010 and 2014 than between 2007 and 2014 - 79% and 120% respectively, although the reason for this is unknown. So, there may be no need to avoid embryo manipulations, such as blastocyst culture, in order to select the single most viable embryo."

The Japanese Society of Obstetrics and Gynaecology was the first society worldwide to recommend SET in 2008 in order to improve the safety of assisted reproduction. As a result of this policy, the proportion of SET cycles rose to 80% in 2015 and the proportion of multiple pregnancies fell to 3.2% [2].

Limitations of the study include the fact that the Japanese ART registry data regarding frozen-thawed embryo transfer did not include information about ovarian stimulation and fertilisation methods; information on assisted hatching was not included in the registry until 2010; the researchers had no way of validating if information submitted to the registry was correct; and the study is observational and so cannot prove that ART procedures cause zygotic splitting.

Dr Kuroda said that the findings should be applicable to other countries and races. "I have not seen any data on racial differences in zygotic splitting," he concluded.

Credit: 
European Society of Human Reproduction and Embryology

Life is like a box of hippocampal scenes

image: The activity in a brain region called the hippocampus, which is involved in forming new memories, spikes at the boundaries between distinct events within a film. The movie frames shown here are for illustration purposes only and are taken from "The Sorcerer's Apprentice" (1961) by Alfred Hitchcock.

Image: 
Aya Ben-Yakov and Richard Henson

A neuroimaging study of human participants watching the 1994 film Forrest Gump and Alfred Hitchcock's 1961 television drama Bang! You're Dead suggests an important role for the hippocampus in segmenting our continuous everyday experience into discrete events for storage in long-term memory. The research, published in JNeurosci, is among the first to investigate hippocampal function during a natural experience.

Aya Ben-Yakov and Richard Henson found that the hippocampus responded most strongly to the films at the points that independent observers identified as the end of one event and the beginning of a new one. The researchers found a strong match between these event boundaries and participants' hippocampal activity, varying according to the degree to which the independent observers agreed on the transition points between events. While watching the two-hour long Forrest Gump, hippocampal response was more strongly influenced by the subjective event boundaries than by what the filmmaker may consider a transition between scenes, such as a change in location. This suggests that the hippocampus is sensitive to meaningful units of experience rather than perceptual cues.
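
One generic way to test whether a region's activity spikes at event boundaries is to average its time course in a window around each annotated boundary. The sketch below illustrates that idea with synthetic data; it is not the authors' analysis, and the signal, boundary times and window length are assumptions.

```python
# Generic illustration (not the study's pipeline): averaging a region's fMRI
# time course in a window around annotated event boundaries to see whether
# activity peaks at the boundaries. All values are synthetic.
import numpy as np

n_vols = 3000                                   # number of fMRI volumes
rng = np.random.default_rng(2)
hippocampus_ts = rng.standard_normal(n_vols)    # synthetic hippocampal time course
boundary_vols = rng.choice(np.arange(10, n_vols - 10), size=40, replace=False)

window = np.arange(-3, 6)                       # volumes around each boundary
responses = np.stack([hippocampus_ts[b + window] for b in boundary_vols])
print("mean boundary-locked response per volume:")
print(np.round(responses.mean(axis=0), 3))
```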

Credit: 
Society for Neuroscience

European researchers set out priorities for dealing with problem internet use

European Union-funded researchers have launched the first international network to identify and understand problems associated with Internet use, such as gambling, pornography, bullying and excessive social media use. The Manifesto for a European Research Network into Problematic Usage of the Internet is published today in the peer-reviewed journal European Neuropsychopharmacology [1].

The European Problematic Use of the Internet (EU-PUI) Research Network, which has to date been awarded €520,000 funding from the EU's COST programme [2] (European Cooperation in Science and Technology), has agreed priorities for the study of problems associated with Internet use, what causes these problems, and how society can best deal with them. Identification of these priorities allows robust evidence-based proposals to be developed to feed into the next major round of EU funding, the €100bn Horizon Europe [3] project.

Most Internet use is harmless, but recently significant concerns have grown over how Internet use might affect public health, especially mental health, and wellbeing [4]. The World Health Organisation has recognised Problematic Use of the Internet (PUI) since 2014, and it is about to include the new diagnosis of Gaming Disorder in the forthcoming revised International Classification of Diseases (ICD-11), to be released shortly. Nevertheless, research on PUI has been fragmented and mainly at a national level, meaning that it is difficult to understand the international picture, or to work with a big enough group of patients to develop meaningful comparisons. To address this, the COST programme has funded an expanding EU-PUI network, currently including 123 researchers from 38 countries. Plans for the network originated in the European College of Neuropsychopharmacology's Obsessive-Compulsive and Related Disorders Network and the International College of Obsessive Compulsive Spectrum Disorders, and the network has since expanded to include non-EU experts from a variety of backgrounds and disciplines.

The Network's Chair, Consultant Psychiatrist, Professor Naomi Fineberg (University of Hertfordshire), said:

"This network includes the best researchers in the field, and the network will drive the PUI research agenda for the foreseeable future. Problematic Use of the Internet is a serious issue. Just about everyone uses the Internet, but much information on problem use is still lacking. Research has often been confined to individual countries, or problematic behaviours such as Internet gaming. So we don't know the real scale of the problem, what causes problematic use, or whether different cultures are more prone to problematic use than others.

These proposals are aimed at allowing researchers to identify what we know and what we don't know. For example, it may be that cultural or family factors affect the extent to which people develop problems, but that needs research to determine.

Understanding the biological, psychological and social processes underlying problematic usage of the Internet stands to improve prevention and treatment strategies. Ultimately, we hope to be able to identify those most at risk from the Internet before the problem takes hold, and to develop effective interventions that reduce its harms both at an individual and public health level.

These are questions which need to be answered internationally. The internet is international, and many of the problems associated with it are international, meaning that any solutions need to be viewed in a global perspective. We need standard methods so we can make meaningful comparisons.

There's no doubt that some of the mental health problems we are looking at appear rather like addiction, such as on-line gambling or gaming. Some tend towards the OCD end of the spectrum, like compulsive social-media checking. But we will need more than just psychiatrists and psychologists to help solve these problems, so we need to bring together a range of experts, such as neuroscientists, geneticists, child and adult psychiatrists, those with the lived experience of these problems and policy-makers, in the decisions we make about the Internet.

We need to remember that the Internet is not a passive medium; we know that many programmes or platforms earn their money by keeping people involved and by encouraging continued participation; and they may need to be regulated - not just from a commercial viewpoint, but also from a public health perspective [5]".

The team has identified nine main areas of research, set out below, including what PUI really is, how we measure it, how it affects health, and whether genetic or social factors are involved.

1. What is problematic use of the internet?

2. How do we measure problem use, especially in different cultures and age groups?

3. How does problem use affect health and quality of life?

4. What long-term studies do we need to show if the problems change over time?

5. How can we make it easier to recognise problem use?

6. What do genetics and personality tell us?

7. Do different cultures, family influences or design features of websites and applications impact on problem use?

8. How can we develop and test preventative interventions and treatments?

9. Can we develop biomarkers?

Naomi Fineberg continued, "We now need to begin to discuss the priorities set out in this paper, both with scientists and the public. We begin with a meeting in Barcelona on 10th October, which is also World Mental Health Day, just after the ECNP Congress, where we will begin to take evidence from the public".

Note: this public meeting will be streamed from 5 pm local time on 10th October from this site: http://www.internetandme.eu/

Commenting, Professor David Nutt (Imperial College London) said:

"As the internet takes up larger and larger parts of our life it is important to prepare for possible negative consequences. This manifesto is a significant step in this direction as it sets out a research programme run by top experts from many European and other countries that will monitor and provide potential solutions to such emergent adverse effects".

Professor Nutt is not involved in this work; this is an independent comment.

Credit: 
European College of Neuropsychopharmacology

Austerity cuts 'twice as deep' in England as in rest of Britain, study finds

The first "fine-grained" analysis of local authority budgets across Britain since 2010 has found that the average reduction in service spending by councils was almost 24% in England compared to just 12% in Wales and 11.5% in Scotland.

While some areas - Glasgow, for example - experienced significant service loss, the new study suggests that devolved powers have allowed Scottish and Welsh governments to mitigate the harshest local cuts experienced in parts of England.

University of Cambridge researchers found that, across Britain, the most severe cuts to local service spending between 2010 and 2017 were generally associated with areas of "multiple deprivation".

This pattern is clearest in England, where all 46 councils that cut spending by 30% or more are located. These local authorities tend to be more reliant on central government, with lower property values and fewer additional funding sources, as well as less ability to generate revenue through taxes.

The north was hit with the deepest cuts to local spending, closely followed by parts of London. The ten worst affected councils include Salford, South Tyneside, Wigan, Oldham and Gateshead, as well as the London boroughs of Camden, Hammersmith and Fulham, and Kensington and Chelsea. Westminster council had a drop in service spending of 46% - the most significant in the UK.

The research also shows a large swathe of southern England, primarily around the 'home counties', with low levels of reliance on central government and only relatively minor local service cuts. Northern Ireland was excluded from the study due to limited data.

The authors of the new paper, published in the Cambridge Journal of Regions, Economy and Society, say the findings demonstrate how austerity has been pushed down to a local level, "intensifying territorial injustice" between areas.

They argue that initiatives claimed by government to ameliorate austerity, such as local retention of business taxes, will only fuel unfair competition and inequality between regions - as local authorities turn to "beggar-thy-neighbour" policies in efforts to boost tax bases and buffer against austerity.

"The idea that austerity has hit all areas equally is nonsense," said geographer Dr Mia Gray, who conducted the research with her Cambridge colleague Dr Anna Barford.

"Local councils rely to varying degrees on the central government, and we have found a clear relationship between grant dependence and cuts in service spending.

"The average cuts to local services have been twice as deep in England compared to Scotland and Wales. Cities have suffered the most, particularly in the old industrial centres of the north but also much of London," said Gray.

"Wealthier areas can generate revenues from business tax, while others sell off buildings such as former back offices to plug gaping holes in council budgets.

"The councils in greatest need have the weakest local economies. Many areas with populations that are ageing or struggling to find employment have very little in the way of a public safety net.

"The government needs to decide whether it is content for more local authorities to essentially go bust, in the way we have already seen in Northamptonshire this year," she said.

The latest study, which comes as England's county councils predict at least £1 billion in further cutbacks by 2020, used data from the Institute for Fiscal Studies to conduct a spatial analysis of Britain's local authority funding system.

Gray and Barford mapped the levels of central grant dependence across England's councils, and the percentage fall of service spend by local authorities across Scotland, Wales and England between financial years 2009/2010 and 2016/2017.

Some of the local services hit hardest across the country include highways and transport, culture, adult social care, children and young people's services, and environmental services.

The part of central government formerly known as the Department for Communities and Local Government experienced a dramatic overall budget cut of 53% between 2010 and 2016.

As budget cuts hit at a local level, "mandatory" council services - those considered vital - were funded at the expense of "discretionary" services. However, the researchers found these boundaries to be blurry.

"Taking care of 'at risk' children is a mandatory concern. However, youth centres and outreach services are considered unessential and have been cut to the bone. Yet these are services that help prevent children becoming 'at risk' in the first place," said Gray.

"There is a narrative at national and local levels that the hands of politicians are tied, but many of these funding decisions are highly political. Public finance is politics hidden in accounting columns."

Gray points out that once local councils "go bust" and Section 114 notices are issued, as with Northamptonshire Council, administrators are sent in who then take financial decisions that supersede any democratic process.

In an unusual collaboration, the research has also contributed to the development of a new play by the Menagerie Theatre Company that explores the effects of austerity.

In a forum-theatre performance, audience members help guide characters through situations taken from the lives of those in austerity-hit Britain. The play will be performed in community venues across the country during October and November.

Gray added: "Ever since vast sums of public money were used to bail out the banks a decade ago, the British people have been told that there is no other choice but austerity imposed at a fierce and relentless rate."

"We are now seeing austerity policies turn into a downward spiral of disinvestment in certain people and places. Local councils in some communities are shrunk to the most basic of services. This could affect the life chances of entire generations born in the wrong part of the country."

Credit: 
University of Cambridge

Too much vitamin A may increase risk of bone fractures

Consuming too much vitamin A may decrease bone thickness, leading to weak and fracture-prone bones, according to a study published in the Journal of Endocrinology. The study, undertaken in mice, found that sustained intake of vitamin A, at levels equivalent to 4.5-13 times the human recommended daily allowance (RDA), caused significant weakening of the bones, and suggests that people should be cautious about over-supplementing vitamin A in their diets.

Vitamin A is an essential vitamin that is important for numerous biological processes including growth, vision, immunity and organ function. Our bodies are unable to make vitamin A but a healthy diet including meat, dairy products and vegetables should be sufficient to maintain the body's nutritional needs. Some evidence has suggested that people who take vitamin A supplements may be increasing their risk of bone damage. Previous studies in mice have shown that short-term overdosing of vitamin A, at the equivalent of 13-142 times the recommended daily allowance in people, results in decreased bone thickness and an increased fracture risk after just 1-2 weeks. This study is the first to examine the effects of lower vitamin A doses that are more equivalent to those consumed by people taking supplements, over longer time-periods.

In this study, Dr Ulf Lerner and colleagues from the Sahlgrenska Academy at the University of Gothenburg report that mice given lower doses of vitamin A, equivalent to 4.5-13 times the RDA in humans, over a longer time period also showed thinning of their bones after just 8 days, which progressed over the ten-week study period.

Dr Ulf Lerner commented, "Previous studies in rodents have shown that vitamin A decreases bone thickness but these studies were performed with very high doses of vitamin A, over a short period of time. In our study we have shown that much lower concentrations of vitamin A, a range more relevant for humans, still decreases rodent bone thickness and strength."

Next, Dr Ulf Lerner intends to investigate if human-relevant doses of vitamin A affect bone growth induced by exercise, which was not addressed in this study. Additionally, his team will study the effects of vitamin A supplementation in older mice, where growth of the skeleton has ceased, as is seen in the elderly.

Dr Ulf Lerner cautions: "Overconsumption of vitamin A may be an increasing problem as many more people now take vitamin supplements. Overdose of vitamin A could be increasing the risk of bone weakening disorders in humans but more studies are needed to investigate this. In the majority of cases, a balanced diet is perfectly sufficient to maintain the body's nutritional needs for vitamin A."

Credit: 
Society for Endocrinology

Study finds standard treatment for common STD doesn't eliminate parasite in some women

A new study led by an infectious disease epidemiologist at Tulane University School of Public Health and Tropical Medicine could change the way doctors treat a common sexually transmitted disease.

Professor Patricia Kissinger and a team of researchers found the recommended single dose of medication isn't enough to eliminate trichomoniasis, the most common curable STD, which can cause serious birth complications and make people more susceptible to HIV. Results of the research are published in Lancet Infectious Diseases.

Globally, an estimated 143 million new cases of trichomoniasis among women occur each year and most do not have symptoms, yet the infection is causing unseen problems. The recommended treatment for more than three decades has been a single dose of the antibiotics metronidazole or tinidazole.

The researchers recruited more than 600 women for the randomized trial in New Orleans; Jackson, Mississippi; and Birmingham, Alabama. Half the women took a single dose of metronidazole and the other half received treatment over seven days.

Kissinger and her team found the women who received multiple doses of the treatment were half as likely to still have the infection after taking all the medication compared to women who only took a single dose.

"There about 3.7 million new cases of trichomoniasis each year in the United States," Kissinger said. "That means a lot of women have not been getting inadequate treatment for many decades."

Trichomoniasis can cause preterm delivery in pregnant women and babies born to infected mothers are more likely to have low birth weight. The parasite can also increase the risk of getting or spreading HIV.

Kissinger believes the CDC will change its treatment recommendations because of the results of this study.

"We need evidence-based interventions to improve health," Kissinger says. "We can no longer do something because it's what we've always done. I hope that this study will help to change the recommendations so that women can get the proper treatment for this common curable STD."

Credit: 
Tulane University

Tumor necrosis factor associated with atherosclerotic lipid accumulation

Inflammation is currently a well-documented component of atherosclerosis pathogenesis, and it plays a role at each stage of disease development. Local activation of endothelial cells, causing increased endothelial permeability, infiltration of the intima with atherogenic low-density lipoprotein (LDL), and recruitment of circulating immune cells, is regarded as the first step of atherosclerotic plaque development. Circulating modified LDL is immunogenic and forms highly atherogenic aggregates with antibodies that are later accumulated in the arterial wall. In growing plaques, circulating monocytes are attracted to the lesion site by cytokine signalling. In the arterial wall, monocyte-derived macrophages play an active role in lipid accumulation, internalizing large associates of lipoprotein particles by means of phagocytosis. Phagocytic cells with cytoplasm filled with stored lipid droplets, called foam cells, can be found in developing plaques in large quantities. There is evidence that lipid accumulation in the arterial wall cells in its turn activates cytokine signalling, leading to a vicious cycle and further aggravating the disease. However, the immune response in atherosclerosis is not limited to enhanced inflammation, since anti-inflammatory cytokines and alternatively activated (M2) macrophages are also present in atherosclerotic plaques. Anti-inflammatory M2 macrophages are likely to be responsible for the resolution of the inflammatory response and the tissue remodelling observed in advancing plaques. At later stages of lesion development, lipofibrous plaques with high lipid contents and cell counts give rise to fibrous plaques that contain fewer cells and lipids, but more extracellular matrix material.

Although the involvement of cytokines in atherosclerotic lesion development is currently beyond doubt, quantitative evaluation of the expression of pro- and anti-inflammatory cytokines in the plaque remains to be studied in detail. In this study, we analyzed the distribution of two cytokines, pro-inflammatory TNFα and anti-inflammatory CCL18, in sections of human carotid atherosclerotic plaques at different stages of development. Our results demonstrated that both pro- and anti-inflammatory cytokines were present in the plaques, although differently distributed and likely expressed by different cells, and appeared to be enriched as compared to grossly normal intima taken as a control. To test whether the expression of TNFα and CCL18 is increased in atherosclerotic lesions, we performed gene expression analysis by means of quantitative PCR. We found that the expression of both cytokines was indeed increased in different types of atherosclerotic lesions. Moreover, it followed a bell-shaped distribution across the four studied plaque stages, gradually increasing from the early initial lesions to fatty streaks, reaching a maximum in lipofibrous plaques, and decreasing again in fibrous plaques. This distribution was consistent with our previously published observations of bell-shaped changes in atherosclerotic lesion cellularity, proliferative activity, collagen synthesis and lipid content at different stages of development. For TNFα, the maximal increase in atherosclerotic lesions was 2-fold compared to normal tissue, while for CCL18 it was 1.5-fold.
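
For readers unfamiliar with how qPCR measurements become fold changes such as the 2-fold and 1.5-fold values above, the sketch below shows one widely used calculation, the 2^-ΔΔCt method. The choice of method and the Ct values are illustrative assumptions, not data or procedures from this study.

```python
# Illustrative calculation only: relative gene expression from qPCR Ct values
# using the common 2^-ΔΔCt method. The Ct values below are hypothetical.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_sample = ct_target_sample - ct_ref_sample      # normalise to reference gene
    delta_control = ct_target_control - ct_ref_control
    return 2 ** -(delta_sample - delta_control)          # 2^-ΔΔCt

# Hypothetical Ct values for TNFα in a lipofibrous plaque vs. normal intima,
# each normalised to a housekeeping gene.
print(f"TNFα fold change: {fold_change(24.0, 18.0, 25.0, 18.0):.1f}")  # -> 2.0
```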

We next investigated the relationship between pro- and anti-inflammatory cytokine production and lipid accumulation in cells. To that end, we used cultured human monocyte-derived macrophages with lipid accumulation induced by incubation with atherogenic LDL obtained from the blood serum of atherosclerosis patients. Non-atherogenic LDL obtained from healthy donors, which did not cause cholesterol accumulation in cultured cells, was used as a control. We found that cholesterol accumulation in macrophages caused by atherogenic LDL treatment was associated with up-regulation of both TNFα and CCL18. The increase in relative gene expression was statistically significant (p=0.05 for TNFα and p=0.023 for CCL18) as compared to non-atherogenic LDL treatment.

In this work, we report the increased expression of the pro-inflammatory cytokine TNFα and the anti-inflammatory cytokine CCL18 in human atherosclerotic lesions, which could be observed microscopically and in gene expression analysis by means of quantitative PCR. Furthermore, we demonstrate that the increase in pro- and anti-inflammatory cytokine expression is associated with cholesterol accumulation caused by atherogenic LDL in cultured cells. It is likely that lipid accumulation is the trigger of cytokine expression in atherosclerotic lesions, since the maximum of expression is observed in the atherosclerotic lesions most enriched in lipids. We discuss the implications of these findings for atherosclerosis pathogenesis, postulating that a splash of cytokine signalling occurs in lesions with the highest lipid contents. We hypothesize that both pro- and anti-inflammatory responses take place in human atherosclerotic lesions, but are probably characterized by different dynamics. While pro-inflammatory signalling occurs rapidly in response to triggering stimuli and is transient, the anti-inflammatory response is relatively slow and long-lasting. Under favorable conditions, resolution of inflammation should lead to a healing process and plaque stabilization, while chronic inflammation may aggravate the disease development.

Credit: 
Bentham Science Publishers

New spheres trick, trap and terminate water contaminant

image: Rice University graduate student Danning Zhang, who led the development of a particle that attracts and degrades contaminants in water, checks a sample in a Rice environmental lab.

Image: 
Jeff Fitlow/Rice University

HOUSTON - (Oct. 5, 2018) - Rice University scientists have developed something akin to the Venus' flytrap of particles for water remediation.

Micron-sized spheres created in the lab of Rice environmental engineer Pedro Alvarez are built to catch and destroy bisphenol A (BPA), a synthetic chemical used to make plastics.

The research is detailed in the American Chemical Society journal Environmental Science & Technology.

BPA is commonly used to coat the insides of food cans, bottle tops and water supply lines, and was once a component of baby bottles. While BPA that seeps into food and drink is considered safe in low doses, prolonged exposure is suspected of affecting the health of children and contributing to high blood pressure.

The good news is that reactive oxygen species (ROS) - in this case, hydroxyl radicals - are bad news for BPA. Inexpensive titanium dioxide releases ROS when triggered by ultraviolet light. But because these oxidizing molecules fade quickly, BPA has to be close by for them to attack.

That's where the trap comes in.

Close up, the spheres reveal themselves as flower-like collections of titanium dioxide petals. The supple petals provide plenty of surface area for the Rice researchers to anchor cyclodextrin molecules.

Cyclodextrin is a benign sugar-based molecule often used in food and drugs. It has a two-faced structure, with a hydrophobic (water-avoiding) cavity and a hydrophilic (water-attracting) outer surface. BPA is also hydrophobic and naturally attracted to the cavity. Once trapped, ROS produced by the spheres degrades BPA into harmless chemicals.

In the lab, the researchers determined that 200 milligrams of the spheres per liter of contaminated water degraded 90 percent of BPA in an hour, a process that would take more than twice as long with unenhanced titanium dioxide.
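
As a rough sense of what that removal rate implies, the sketch below back-calculates an apparent rate constant and half-life, under the simplifying assumption of pseudo-first-order kinetics; this assumption is made here purely for illustration and the paper's own kinetic treatment may differ.

```python
# Back-of-the-envelope check, assuming (for illustration only) pseudo-first-order
# kinetics: 90% of BPA degraded in 1 hour implies k = -ln(0.10) / 1 h.
import math

remaining_fraction = 0.10          # 90% degraded after 1 hour
t_hours = 1.0
k = -math.log(remaining_fraction) / t_hours
half_life = math.log(2) / k
print(f"apparent rate constant: {k:.2f} per hour, half-life: {half_life * 60:.0f} minutes")
```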

The work fits into technologies developed by the Rice-based and National Science Foundation-supported Center for Nanotechnology-Enabled Water Treatment because the spheres self-assemble from titanium dioxide nanosheets.

"Most of the processes reported in the literature involve nanoparticles," said Rice graduate student and lead author Danning Zhang. "The size of the particles is less than 100 nanometers. Because of their very small size, they're very difficult to recover from suspension in water."

The Rice particles are much larger. Where a 100-nanometer particle is 1,000 times smaller than a human hair, the enhanced titanium dioxide is between 3 and 5 microns, only about 20 times smaller than the same hair. "That means we can use low-pressure microfiltration with a membrane to get these particles back for reuse," Zhang said. "It saves a lot of energy."

Because ROS also wears down cyclodextrin, the spheres begin to lose their trapping ability after about 400 hours of continued ultraviolet exposure, Zhang said. But once recovered, they can be easily recharged.

"This new material helps overcome two significant technological barriers for photocatalytic water treatment," Alvarez said. "First, it enhances treatment efficiency by minimizing scavenging of ROS by non-target constituents in water. Here, the ROS are mainly used to destroy BPA.

"Second, it enables low-cost separation and reuse of the catalyst, contributing to lower treatment cost," he said. "This is an example of how advanced materials can help convert academic hypes into feasible processes that enhance water security."

Credit: 
Rice University

Participants in dementia prevention research motivated by altruism

Researchers at University of California San Diego School of Medicine, with collaborators across the country, report that people who participate in dementia prevention trials are primarily motivated by altruism and pleased to help.

The findings are published in the October 5 issue of Alzheimer's & Dementia.

"For the most part, people appeared satisfied with their experience in a clinical trial," said first author Mary Sano, PhD, professor of psychiatry and director of the Alzheimer's Disease Research Center at Mount Sinai School of Medicine in New York City. "A big takeaway is how altruism and giving back are important to participants. We were also intrigued by the desire for increased social interactions."

The study surveyed 422 non-demented participants, age 75 and older, in the Home-Based Assessment (HBA) study at 27 sites across the country. The HBA study -- a four-year, longitudinal study using novel technologies to determine the feasibility of assessing cognitively normal older adults in their own homes -- was coordinated by the Alzheimer's Disease Cooperative Study (ADCS), an initiative of the National Institute on Aging based at UC San Diego School of Medicine.

Almost 6 million Americans are currently living with Alzheimer's disease (AD), according to the Alzheimer's Association, with an American developing the disease every 65 seconds. By 2050, the number of persons with AD is projected to rise to nearly 14 million, making the need for research critical.

Yet little is known about the factors affecting the motivation and satisfaction of participants in dementia prevention trials, say experts. Beyond a motivation to help, the new study was an attempt to determine how future clinical trials might be made more attractive and effective. The HBA study involved various levels of technology, such as mail-in questionnaires, live telephone interviews, automated telephone calls with interactive voice response and an internet-connected, home-sited computer kiosk with responses captured via automated speech recognition.

Researchers found that trial participants preferred staff-administered assessments over automated technologies, and wanted greater opportunity to challenge and improve their own mental function (such as through a wider variety of activities during testing) and increased interaction with both study staff and other older adults. They also sought more personal feedback from researchers as the trial progressed.

Sano said it wasn't surprising that participants became bored with repetitive tasks and frustrated by inevitable equipment glitches.

"It's important to understand because it's common for new trials to have more technology and less human interaction," she said. "While advanced technology is clearly essential, we also must remember that people want to feel valued for their own ideas and personalities."

Co-author Jeffrey Kaye, MD, professor of neurology at Oregon Health and Science University and director of both the Layton Aging and Alzheimer's Disease Center and the Oregon Center for Aging and Technology, suggested technology should be used with participant comfort in mind.

"To maximize the advantages that technologies can bring to clinical trials, it is important to ensure that devices or interactions with technology are integrated into participants' everyday lives. Ideally, the technology works in the background and is as unobtrusive as possible. If there are needed interactions, these must be engaging and minimally burdensome, especially when studies may be conducted over many years."

Senior author Howard Feldman, MD, professor of neurosciences at UC San Diego School of Medicine, and clinical neurologist and director of the ADCS said the findings should inform and improve future study design.

"By listening to the concerns and suggestions of our participants, we build better, more effective studies in the future," he said. "It's good to know that participants are feeling the spirit of altruism in this work, as we are essentially relying on successful expansion of this community effort to address the ever increasing size and challenges of Alzheimer's disease.

"It is incumbent on us to listen and plan accordingly. It is also important to note, and not to underestimate, the human element described in this research. Direct human interaction seems to be an important contributor to participant engagement and retention. It is a reminder that human contact provides a benefit to these studies, supporting participants in a way that technology cannot."

Credit: 
University of California - San Diego

'Turbidity currents' are not just currents, but involve movement of the seafloor itself

image: Instruments such as this benthic event detector helped scientists discover how the seafloor moves during turbidity events in submarine canyons.

Image: 
© 2016 MBARI

Turbidity currents have historically been described as fast-moving currents that sweep down submarine canyons, carrying sand and mud into the deep sea. But a new paper in Nature Communications shows that, rather than just consisting of sediment-laden seawater flowing over the seafloor, turbidity currents also involve large-scale movements of the seafloor itself. This dramatic discovery, the result of an 18-month-long, multi-institutional study of Monterey Canyon, could help ocean engineers avoid damage to pipelines, communications cables, and other seafloor structures.

Geologists have known about turbidity currents since at least 1929, when a large earthquake triggered a violent current that traveled several hundred kilometers and damaged 12 trans-Atlantic communications cables. Turbidity currents are still a threat today, as people place more and more cables, pipelines, and other structures on the seafloor. Turbidity currents are also important to petroleum geologists because they leave behind layers of sediment that comprise some of the world's largest oil reserves.

Despite almost a century of research, geologists have struggled to come up with a conceptual model that describes in detail how turbidity currents form and evolve. The Coordinated Canyon Experiment was designed, in part, to resolve this debate. During this 18-month-long study, researchers from the Monterey Bay Aquarium Research Institute (MBARI), the U.S. Geological Survey, the University of Hull, the National Oceanography Centre, the University of Southampton, the University of Durham, and the Ocean University of China combined their expertise and equipment to monitor a 50-kilometer-long (31-mile) stretch of Monterey Canyon in unprecedented detail.

During the experiment, researchers placed over 50 different instruments at seven different locations in the canyon and made detailed measurements during 15 different turbidity flows. Almost all of the flows began near the head of the canyon in water less than about 300 meters (1,000 feet) deep. Once initiated, the flows traveled at least several kilometers down the canyon. The three largest flows traveled over 50 kilometers, sweeping past the deepest monitoring station in the canyon at a depth of 1,850 meters (6,000 feet).

This extensive research program showed that turbidity currents in Monterey Canyon involve both movements of water-saturated sediment and of sediment-laden water. As described in the recent Nature Communications paper, the most important part of the process is a dense layer of water-saturated sediment that moves rapidly over the bottom and remobilizes the upper few meters of the preexisting seafloor.

This is very different from previous conceptual models of turbidity currents, which focused on flows of turbid, sediment-laden water traveling above the seafloor. The authors of the recent paper did observe plumes of sediment-laden water during turbidity events, but they suggest that these are secondary features that form when the pulse of saturated sediment mixes into the overlying seawater.

"This whole experiment was an attempt to learn what was going on at the bottom of the canyon," said Charlie Paull, MBARI marine geologist and first author of the recent paper. "For years we have seen instruments on the bottom move in unexpected ways, and we suspected that the seafloor might be moving. Now we have real data that show when, where, and how this happens."

Among the instruments used in the experiment were current meters mounted on seven moorings distributed along the canyon floor. Analyzing the data from these instruments and measuring the time it took for the flows to travel between the moorings, the researchers were surprised to find that the flows appeared to travel down the canyon at speeds greater than the actual measured water currents.
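
The travel-time calculation itself is straightforward: an apparent front speed is the along-canyon distance between two moorings divided by the difference in the flow front's arrival times. The sketch below illustrates this with made-up numbers; the mooring spacing and arrival times are hypothetical, not measurements from the experiment.

```python
# Simple sketch of the travel-time calculation described above: apparent speed
# of a flow front = along-canyon distance between moorings / arrival-time gap.
# All numbers below are invented for illustration.
from datetime import datetime

arrival_upstream = datetime(2016, 1, 15, 12, 0, 0)     # front passes mooring A
arrival_downstream = datetime(2016, 1, 15, 12, 40, 0)  # front passes mooring B
distance_m = 10_000                                     # hypothetical along-canyon spacing

travel_s = (arrival_downstream - arrival_upstream).total_seconds()
print(f"apparent front speed: {distance_m / travel_s:.1f} m/s")
```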

Although tilting and other movements of the current meters could explain some of these observations, the scientists eventually concluded that their instruments were not simply being moved by currents of turbid water flowing above the seafloor.

The researchers also placed beach-ball-sized sensors called benthic event detectors (BEDs) in the seafloor. The BEDs were designed to be transported by turbidity flows while carrying instruments that recorded their depth, horizontal and vertical movement, and rotation. Other motion sensors were mounted on large, steel frames weighing up to 800 kilograms (1,760 pounds). These were designed to remain stationary while the flows passed around them.

However, both the BEDs and the heavy frames were carried far down the canyon during strong turbidity events. In fact, the heavy, awkwardly-shaped instrument frames often traveled just as fast as the relatively light, streamlined BEDs.

The researchers also noticed large sand waves, up to two meters (6.5 feet) tall, on the floor of the canyon. Repeated bottom surveys showed that these sand waves shifted dramatically during turbidity events, remolding the upper two to three meters of the seafloor. But the researchers still weren't sure exactly how this remolding occurred.

Data from the BEDs provided an important clue. During many events, the BEDs did not just move down the canyon into deeper water, but traveled as fast as or faster than the overlying water. They also moved up and down within the flow by as much as three meters at regular intervals.

The researchers concluded that, rather than being "dragged" along the bottom by a strong current, their instruments were being "rafted" by a dense, bottom-hugging layer of water-saturated sediment. They hypothesized that the up-and-down motions of the BEDs occurred as the instruments traveled over individual sand waves. As Paull noted, "The BEDs provided an essential kernel of new data that allowed us to understand the movement of the seafloor for the first time."

"Textbooks and modelling efforts have traditionally focused on dilute flows of sediment-laden water over the bottom," Paull added. "But we now know that dilute flows are just part of the equation. It turns out that they are the tail end of the process, which really begins at the seafloor. "

Credit: 
Monterey Bay Aquarium Research Institute

Novel use of NMR sheds light on easy-to-make electropolymerized catalysts

WASHINGTON, D.C., October 5, 2018 -- In the world of catalytic reactions, polymers created through electropolymerization are attracting renewed attention. A group of Chinese researchers recently provided the first detailed characterization of the electrochemical properties of polyaniline and polyaspartic acid (PASP) thin films. In AIP Advances, from AIP Publishing, the team used a wide range of tests to characterize the polymers, especially their capacity for catalyzing the oxidation of two widely used compounds, hydroquinone and catechol.

This new paper marks one of the first pairings of standard electrochemical tests with nuclear magnetic resonance (NMR) analysis in such an application. "Because these materials can be easily prepared in an electric field and are cost-effective and environmentally friendly, we think they have the potential to be widely used," said Shuo-Hui Cao, an author on the paper.

Although PASP has shown excellent electrocatalytic responses to biological molecules, newer areas of inquiry have explored the material's ability to lower the oxidation potential in oxidation-reduction reactions. Lowering the oxidation potential is key to finding further uses for hydroquinone and catechol, two compounds used extensively as raw materials and synthetic intermediates in pharmaceuticals.

Conductive polymers, like polyaniline, have attracted attention for their high conductivity and low cost. To better understand these materials, Cao and his colleagues tested how well PASP and polyaniline catalyzed the oxidation of hydroquinone and catechol using several standard characterization techniques, including attenuated total reflection Fourier transform infrared spectroscopy, cyclic voltammetry and electrochemical impedance spectroscopy.

Using proton NMR, they monitored the progress of each reaction by directly measuring how quickly reactants were consumed and products were created. Cao said that their work using NMR analysis on catechol aims to fill a gap they found in the literature.

"The NMR technique allows us to find out more about their molecular structure and better compare the catalysts' characteristics quantitatively," Cao said.

The group discovered that both polymer-modified electrodes improved conductivity. PASP's catalytic activity toward both hydroquinone and catechol was found to outpace that of polyaniline by a factor of two. Later NMR studies confirmed that electrically induced molecular transformations allowed PASP to serve as the better catalyst.

The findings led the researchers to postulate that electropolymerized polyaspartic acid thin films might be more suitable than polyaniline for use as catalysts in many situations.

Cao said he hopes to further develop NMR techniques that pair with electrochemical testing. So far, the group has used a type of NMR that incorporates one dimension of frequency analysis. Two-dimensional techniques will allow the group to examine new material features and extend their work to more complicated molecules.

Credit: 
American Institute of Physics

Model helps robots navigate more like humans do

When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.

MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they've learned before in similar situations. A paper describing the model was presented at this week's IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Popular motion-planning algorithms create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: Robots can't leverage information about how they or other agents acted previously in similar environments.
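
For readers unfamiliar with this family of planners, here is a minimal, illustrative sketch of the tree-building idea for a 2-D point robot. The obstacles, bounds, and step size are made up for the example, and this is not the researchers' code.

```python
# Minimal, illustrative RRT-style planner for a 2-D point robot -- a sketch of the
# "tree of possible decisions" idea, not the MIT group's implementation.
import math
import random

OBSTACLES = [((5.0, 5.0), 1.5)]          # (center, radius) circular obstacles
BOUNDS = (0.0, 10.0)
STEP = 0.5

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def rrt(start, goal, iters=5000, goal_tol=0.5):
    tree = {start: None}                  # node -> parent
    for _ in range(iters):
        sample = (random.uniform(*BOUNDS), random.uniform(*BOUNDS))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        # Step from the nearest tree node toward the random sample
        new = (nearest[0] + STEP * (sample[0] - nearest[0]) / d,
               nearest[1] + STEP * (sample[1] - nearest[1]) / d)
        if not collision_free(new):
            continue
        tree[new] = nearest
        if math.dist(new, goal) < goal_tol:
            path = [new]
            while tree[path[-1]] is not None:   # walk parents back to the start
                path.append(tree[path[-1]])
            return list(reversed(path))
    return None                            # no path found within the budget

path = rrt(start=(1.0, 1.0), goal=(9.0, 9.0))
print(f"path with {len(path)} waypoints" if path else "no path found")
```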

"Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents," says co-author Andrei Barbu, a researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT's McGovern Institute. "The thousandth time they go through the same crowd is as complicated as the first time. They're always exploring, rarely observing, and never using what's happened in the past."

The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot's movement in an environment.

In their paper, "Deep sequential models for sampling-based planning," the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.

"When humans interact with the world, we see an object we've interacted with before, or are in some location we've been to before, so we know how we're going to act," says Yen-Ling Kuo, a PhD in CSAIL and first author on the paper. "The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient."

Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.

Trading off exploration and exploitation

Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers' model, however, offers "a tradeoff between exploring the world and exploiting past knowledge," Kuo says.

The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model "learns that when you're stuck in an environment, and you see a doorway, it's probably a good idea to go through the door to get out," Barbu says.

The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It's a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn't have high confidence, it lets the robot explore the environment instead, like a traditional planner.
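
A hedged sketch of that exploit-or-explore sampling rule follows; the confidence threshold, the stub "network", and the function names are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the sampling logic described above, not the authors' code.
# `learned_model` stands in for a trained neural network that, given the current tree
# and environment, proposes where to sample next along with a confidence score.
import random

CONFIDENCE_THRESHOLD = 0.8   # assumed cutoff; the real system's criterion may differ

def propose_sample(learned_model, tree, environment, bounds):
    """Pick the next sample point: exploit the learned prediction when it is
    confident, otherwise explore uniformly at random like a plain RRT/RRT*."""
    prediction, confidence = learned_model(tree, environment)
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                                         # exploit past experience
    lo, hi = bounds
    return (random.uniform(lo, hi), random.uniform(lo, hi))       # explore

# Example with a stub "network" that is confident about a point near a known doorway
def stub_model(tree, environment):
    return (4.0, 9.5), 0.9                                        # (proposed point, confidence)

print(propose_sample(stub_model, tree=None, environment=None, bounds=(0.0, 10.0)))
```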

For example, the researchers demonstrated the model in a simulation known as a "bug trap," where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and get a sense of its surroundings so it can quickly find the goal.

Results in the paper are based on the chances that a path is found within some time budget, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers' model plotted far shorter and more consistent paths than a traditional planner, and did so more quickly.
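
As a small illustration of how such evaluation quantities might be tallied (the run records below are invented, not the paper's data):

```python
# Illustrative computation of the three evaluation quantities mentioned above --
# success rate within a time budget, mean path length, and path-length consistency --
# over a batch of planning runs.
from statistics import mean, stdev

runs = [  # (found_path_within_budget, path_length)
    (True, 14.2), (True, 13.8), (False, None), (True, 14.5), (True, 13.9),
]

success_rate = sum(found for found, _ in runs) / len(runs)
lengths = [length for found, length in runs if found]
print(f"success rate: {success_rate:.0%}")
print(f"mean length: {mean(lengths):.1f}, spread (std dev): {stdev(lengths):.2f}")
```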

Working with multiple agents

In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially when navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.

"Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on," Barbu says. "You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with."

Results indicate that the researchers' model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they needed to train the model on only a few examples of roundabouts with a few cars. "The plans the robots make take into account what the other cars are going to do, as any human would," Barbu says.

Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.

"Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that's why it can plan efficiently," Barbu says.

More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.

Credit: 
Massachusetts Institute of Technology