Friday 30 June 2023

Finding rewrites understanding into Parkinson's disease pathway

Mitochondria play a crucial role in producing the energy our cells need to carry out their various functions, but when damaged they can have profound effects on cellular function and contribute to the development of various diseases.

Broken-down mitochondria are usually removed and recycled through a garbage disposal process known as 'mitophagy'.

PINK1 and Parkin are two proteins vital to this process, responsible for 'tagging' malfunctioning mitochondria for destruction. In Parkinson's disease, mutations in these proteins can result in the accumulation of damaged mitochondria in the brain, which can lead to motor symptoms such as tremors, stiffness and difficulty with movement.

The new research, published in Molecular Cell, solves a mystery about how the protein Optineurin recognises unhealthy mitochondria 'tagged' by PINK1 and Parkin, enabling their delivery to our body's garbage disposal system.

Associate Professor Michael Lazarou, a Laboratory Head in WEHI's Ubiquitin Signalling Division, said the discovery filled a vital knowledge gap that would transform our understanding of this cellular pathway.

"Until this study, Optineurin's precise role in initiating our body's garbage disposal process was unknown," Assoc Prof Lazarou, who also holds a co-appointment with Monash University, said.

"While there are many proteins that link damaged cellular materials to the garbage disposal machinery, we found that Optineurin does this in a highly unconventional way that is unlike anything else we've seen from similar proteins.

"This finding is significant because the human brain relies on Optineurin to degrade its mitochondria through the garbage disposal system driven by PINK1 and Parkin. "Knowing how Optineurin does this provides us with a framework on how we might be able to target PINK1 and Parkin mitophagy in disease and prevent the build-up of damaged mitochondria in neurons as we age.

"Achieving this would be instrumental to people with Parkinson's disease -- a condition that continues to impact more than 10 million people worldwide, including 80,000 Australians."

Babies talk more around human-made objects than natural ones

 Researchers have found infants are significantly more likely to use "baby talk" during interactions that involve artificial objects compared to natural ones.

Infants often communicate with protophones, which are sounds resembling squeals, growls or short word-like noises such as "da," "aga" and "ba." These are considered the foundations of speech, as they eventually evolve into full language.

Objects play an important role in this process, as the more vocalisation an object encourages, the closer a young child is to talking.

A new study, led by the University of Portsmouth, has looked at the relationship between protophones and things typically found at home to assess their importance for developing language skills.

To do this, the team observed how often children aged 4 to 18 months who live in Zambia vocalised when using toys and household items, and then compared it to how they interacted with natural objects.

They discovered the amount of protophones produced by the younger infants was significantly higher when engaging with human-made objects, compared to sticks, leaves, rocks and bird feathers.

They also found the children were more interested in household items -- such as mugs, shoes, and pens -- when given the choice between them and natural objects.

Lead author, Dr Violet Gibson from the University of Portsmouth's Department of Psychology, said: "Our findings suggest that object features have an impact on the way in which young children communicate.

"Here, we observed that natural objects were less likely to encourage infants to produce protophones, and as a consequence they may not promote language skill development as much as artificial objects.

"Preverbal infants seem to favour household items, possibly because their features are designed for specific functional purposes, or in the case of toys, they're designed to get a child's attention and spark their interest.

"This supports existing evidence that the use of complex tools in social interactions may have contributed to establishing the groundwork required for the emergence of human language."

Combining maths with music leads to higher scores, suggests review of 50 years of research

 Children do better at maths when music is a key part of their lessons, an analysis of almost 50 years of research on the topic has revealed.

It is thought that music can make maths more enjoyable, keep students engaged and help ease any fear or anxiety they have about maths. Motivation may be increased and pupils may appreciate maths more, the peer-reviewed article in Educational Studies details.

Techniques for integrating music into maths lessons range from clapping along to pieces with different rhythms when learning numbers and fractions, to using maths to design musical instruments.

Previous research has shown that children who are better at music also do better at maths. But whether teaching music to youngsters actually improves their maths has been less clear.

To find out more, Turkish researcher Dr. Ayça Akin, from the Department of Software Engineering, Antalya Belek University, searched academic databases for research on the topic published between 1975 and 2022.

She then combined the results of 55 studies from around the world, involving almost 78,000 young people from kindergarten pupils to university students, to come up with an answer.

Three types of musical intervention were included in the meta-analysis: standardised music interventions (typical music lessons, in which children sing, listen to and compose music), instrumental musical interventions (lessons in which children learn how to play instruments, either individually or as part of a band) and music-maths integrated interventions, in which music is integrated into maths lessons.

Students took maths tests before and after taking part in the intervention and the change in their scores was compared with that of youngsters who didn't take part in an intervention.

The use of music, whether in separate lessons or as part of maths classes, was associated with greater improvement in maths over time.

The integrated lessons had the biggest effect, with around 73% of students who had integrated lessons doing significantly better than youngsters who didn't have any type of musical intervention.

Some 69% of students who learned how to play instruments and 58% of students who had normal music lessons improved more than pupils with no musical intervention.

The results also indicate that music helps more with learning arithmetic than other types of maths and has a bigger impact on younger pupils and those learning more basic mathematical concepts.

Dr Akin, who carried out the research while at Turkey's National Ministry of Education and Antalya Belek University, points out that maths and music have much in common, such as the use of symbols and symmetry. Both subjects also require abstract thought and quantitative reasoning.

Arithmetic may lend itself particularly well to being taught through music because core concepts, such as fractions and ratios, are also fundamental to music. For example, musical notes of different lengths can be represented as fractions and added together to create several bars of music.
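
That fraction arithmetic is easy to make concrete. Below is a minimal Python sketch of the idea; the 4/4 bar and the note names are illustrative assumptions, not details from the review:

```python
from fractions import Fraction

# Note lengths as fractions of a whole note; in 4/4 time one bar sums to 1.
NOTE_LENGTHS = {
    "whole": Fraction(1, 1),
    "half": Fraction(1, 2),
    "quarter": Fraction(1, 4),
    "eighth": Fraction(1, 8),
}

def total_length(notes):
    """Add note durations exactly, with no floating-point rounding."""
    return sum(NOTE_LENGTHS[name] for name in notes)

# A quarter, two eighths and a half note fill exactly one 4/4 bar:
print(total_length(["quarter", "eighth", "eighth", "half"]))  # 1
```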

Integrated lessons may be especially effective because they allow pupils to build connections between maths and music and provide extra opportunities to explore, interpret and understand maths.

Plus, if they are more enjoyable than traditional maths lessons, any anxiety students feel about maths may be eased.

Limitations of the analysis include the relatively small number of studies available for inclusion. This meant it wasn't possible to look at the effect of factors such as gender, socio-economic status and length of musical instruction on the results.

Consumers more likely to use virtual apparel try-on software if interactive

 While more and more people are shopping online, purchasing clothes on the internet poses a unique challenge: What if it doesn't fit? The apparel industry's latest solution is virtual try-on sessions that allow consumers to share photos or measurements of themselves to create a similar-sized avatar.

While some consumers, especially young people, have significant concerns about the new technology, new research from the University of Missouri found that qualities such as the perceived ease of using the technology significantly diminish privacy concerns.

"This is something that virtual try-on companies should take note of," said Song-yi Youn, an assistant professor of textile and apparel management at the MU College of Arts and Science and lead author on the study. "The way our society is moving, personal information is becoming a valuable and important commodity, and people, especially young people, are very careful with their personal information because this phenomenon is not going away any time soon."

To reach her finding, Youn asked participants to create an avatar by submitting body information such as height, weight, bra size and body shape. Once the avatar was created, participants were asked to virtually try on a jacket and take a screenshot of their avatar. Finally, they were questioned about their experiences and the likelihood that they would shop virtually again using an avatar.

"When the participants in the study found that they had control over their own experience, they were able to personalize that experience and the technology was easily responsive, they were much more likely to use the technology," Youn said. "In fact, it had a direct impact on the privacy concerns the users were voicing."

Youn said companies can use these findings to inform their business models, offering better trade-offs -- such as interactivity, ease of use and versatility -- in exchange for people's personal information. Youn was surprised that these features had such an impact on people's privacy concerns.

"I knew that interactivity and positive aspects of the applications would make people want to use it more," Youn said. "However, I was shocked to discover that the level of interactivity was connected to people's privacy concerns. That has huge implications, not only for businesses using virtual try-on software, but also for businesses utilizing consumer information as part of their business model."

Tuesday 27 June 2023

A subtype of depression identified

 Scientists at Stanford Medicine conducted a study describing a new category of depression -- labeled the cognitive biotype -- which accounts for 27% of depressed patients and is not effectively treated by commonly prescribed antidepressants.

Cognitive tasks showed that these patients have difficulty planning ahead, displaying self-control, sustaining focus despite distractions and suppressing inappropriate behavior; imaging showed decreased activity in two brain regions responsible for those tasks.

Because depression has traditionally been defined as a mood disorder, doctors commonly prescribe antidepressants that target serotonin (known as selective serotonin reuptake inhibitors or SSRIs), but these are less effective for patients with cognitive dysfunction. Researchers said that targeting these cognitive dysfunctions with less commonly used antidepressants or other treatments may alleviate symptoms and help restore social and occupational abilities.

The study, published June 15 in JAMA Network Open, is part of a broader effort by neuroscientists to find treatments that target depression biotypes, according to the study's senior author, Leanne Williams, PhD, the Vincent V.C. Woo Professor and professor of psychiatry and behavioral sciences.

"One of the big challenges is to find a new way to address what is currently a trial-and-error process so that more people can get better sooner," Williams said. "Bringing in these objective cognitive measures like imaging will make sure we're not using the same treatment on every patient."

Finding the biotype

In the study, 1,008 adults with previously unmedicated major depressive disorder were randomly given one of three widely prescribed typical antidepressants: escitalopram (brand name Lexapro) or sertraline (Zoloft), which act on serotonin, or venlafaxine-XR (Effexor), which acts on both serotonin and norepinephrine. Seven hundred and twelve of the participants completed the eight-week regimen.

Before and after treatment with the antidepressants, the participants' depressive symptoms were measured using two surveys -- one, clinician-administered, and the other, a self-assessment, which included questions related to changes in sleep and eating. Measures on social and occupational functioning, as well as quality of life, were tracked as well.

The participants also completed a series of cognitive tests, before and after treatment, measuring verbal memory, working memory, decision speed and sustained attention, among other tasks.

Before treatment, scientists scanned 96 of the participants using functional magnetic resonance imaging as they engaged in a task called the "GoNoGo" that requires participants to press a button as quickly as possible when they see "Go" in green and to not press when they see "NoGo" in red. The fMRI tracked neuronal activity by measuring changes in blood oxygen levels, which showed levels of activity in different brain regions corresponding to Go or NoGo responses. Researchers then compared the participants' images with those of individuals without depression.

The researchers found that 27% of the participants had more prominent symptoms of cognitive slowing and insomnia, impaired cognitive function on behavioral tests, as well as reduced activity in certain frontal brain regions -- a profile they labeled the cognitive biotype.

"This study is crucial because psychiatrists have few measurement tools for depression to help make treatment decisions," said Laura Hack, MD, PhD, the lead author of the study and an assistant professor of psychiatry and behavioral sciences. "It's mostly making observations and self-report measures. Imaging while performing cognitive tasks is rather novel in depression treatment studies."

Pre-treatment fMRI showed those with the cognitive biotype had significantly reduced activity in the dorsolateral prefrontal cortex and dorsal anterior cingulate regions during the GoNoGo task compared with the activity levels in participants who did not have the cognitive biotype. Together, the two regions form the cognitive control circuit, which is responsible for limiting unwanted or irrelevant thoughts and responses and improving goal selection, among other tasks.

After treatment, the researchers found that for the three antidepressants administered, the overall remission rates -- the absence of overall depression symptoms -- were 38.8% for participants with the newly discovered biotype and 47.7% for those without it. This difference was most prominent for sertraline, for which the remission rates were 35.9% and 50% for those with the biotype and those without, respectively.

"Depression presents in different ways in different people, but finding commonalities -- like similar profiles of brain function -- helps medical professionals effectively treat participants by individualizing care," Williams said.

Depression isn't one size fits all

Williams and Hack propose that behavior measurement and imaging could help diagnose depression biotypes and lead to better treatment. A patient could complete a survey on their own computer or in the doctor's office, and if they are found to display a certain biotype, they might be referred to imaging for confirmation before undergoing treatment.

Researchers at the Stanford Center for Precision Mental Health and Wellness, which Williams directs, in partnership with the Stanford Translational Precision Mental Health Clinic, which Hack directs, are studying another medication -- guanfacine -- that specifically targets the dorsolateral prefrontal cortex region with support from Stanford University Innovative Medicines Accelerator. They believe this treatment could be more effective for patients with the cognitive subtype.

Williams and Hack hope to conduct studies with participants who have the cognitive biotype, comparing different types of medication with treatments such as transcranial magnetic stimulation and cognitive behavioral therapy. In transcranial magnetic stimulation, commonly referred to as TMS, magnetic fields stimulate nerve cells; in cognitive behavioral therapy, patients are taught to use problem-solving strategies to counter negative thoughts that contribute to both emotional dysregulation and loss of social and occupational abilities.

"I regularly witness the suffering, the loss of hope and the increase in suicidality that occurs when people are going through our trial-and-error process," Hack said. "And it's because we start with medications that have the same mechanism of action for everyone with depression, even though depression is quite heterogeneous. I think this study could help change that."

New nationwide modeling points to widespread racial disparities in urban heat stress

 From densely built urban cores to sprawling suburbia, cities are complex. This complexity can lead to temperature hot spots within cities, with some neighborhoods (and their residents) facing more heat than others.

Understanding this environmental disparity forms the spirit of new research led by scientists at the Department of Energy's Pacific Northwest National Laboratory. In a new paper examining all major cities in the U.S., the authors find that the average Black resident is exposed to air that is warmer by 0.28 degrees Celsius relative to the city average. In contrast, the average white urban resident lives where air temperature is cooler by 0.22 degrees Celsius relative to the same average.

The new work, published last week in the journal One Earth, involved a two-part effort. The study's authors aimed to produce a more useful nationwide estimate of urban heat stress -- a more accurate account of how our body responds to outdoor heat. By creating and comparing these estimates against demographic data, they also tried to better understand which populations are most exposed to urban heat stress.

The findings reveal pervasive income- and race-based disparities within U.S. cities. Nearly all the U.S. urban population -- 94 percent, or roughly 228 million people -- live in cities where summertime peak heat stress exposure disproportionately burdens the poor.

The study's authors also find that people who now live within historically redlined neighborhoods, where loan applicants were once denied on racially discriminatory grounds, would be exposed to higher outdoor heat stress than their neighbors living in originally non-redlined parts of the city.

The work also highlights shortcomings in the typical approach scientists take in estimating urban heat stress at these scales, which frequently relies on satellite data. This conventional satellite-based method can overestimate such disparities, according to the new work. As the world warms, the findings stand to inform urban heat response plans put forward by local governments who seek to help vulnerable groups.

What is heat stress?

The human body has evolved to operate within a relatively narrow temperature range. Raise your core body temperature beyond just six or seven degrees and drastic physiological consequences soon follow. Cellular processes break down, the heart is taxed, and organs begin to fail.

Sweating helps. But the cooling power of sweating depends partly on how humid the environment is. When both heat and humidity are omnipresent and difficult to escape, the body struggles to adapt.

How is heat stress measured?

To measure heat stress, scientists use a handful of indicators, many of which depend on air temperature and humidity. Weather stations provide such data. Because most weather stations are outside of cities, though, scientists often rely on other means to get some idea about urban heat stress, including using sensors on satellites.

Those sensors infer the temperature of the land surface from measurements of thermal radiation. But such measurements fall short of delivering a full picture of heat stress, said lead author and Earth scientist TC Chakraborty. Measuring just the skin of the Earth, like the surface of a sidewalk or a patch of grass, said Chakraborty, offers only an idea of what it's like to lie flat on that surface.

"Unless you're walking around barefoot or lying naked on the ground, you're not really feeling that," said Chakraborty. "Land surface temperature is, at best, a crude proxy of urban heat stress."

Indeed, most of us are upright, moving through a world where air temperature and moisture dictate how heat actually feels. And these satellite data are only available for clear-sky days -- another limiting factor. More complete and physiologically relevant estimates of heat stress incorporate a blend of factors, which models can provide, said Chakraborty.

To better understand differences between satellite-derived land surface temperature and ambient heat exposure within cities, Chakraborty's team examined 481 urbanized areas across the continental United States using both satellites and model simulations.

NASA's Aqua satellite provided the land surface temperature; and through model simulations that account for urban areas, the authors generated nationwide estimates of all variables required to calculate moist heat stress. Two such metrics of heat stress -- the National Weather Service's heat index and the Humidex, often used by Canadian meteorologists -- allowed the scientists to capture the combined impacts of air temperature and humidity on the human body.
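
For readers curious how such moist-heat metrics fold humidity into a single number, here is a hedged Python sketch of two published formulas: the Rothfusz regression behind the National Weather Service heat index (temperature in degrees Fahrenheit, relative humidity in percent) and the standard Humidex formula (air temperature and dewpoint in degrees Celsius). This illustrates the metrics named in the story, not the study's own code, and omits the refinements and validity ranges of both:

```python
import math

def nws_heat_index_f(temp_f: float, rh: float) -> float:
    """NWS heat index via the Rothfusz regression (meant for roughly T >= 80 F)."""
    return (-42.379 + 2.04901523 * temp_f + 10.14333127 * rh
            - 0.22475541 * temp_f * rh - 6.83783e-3 * temp_f**2
            - 5.481717e-2 * rh**2 + 1.22874e-3 * temp_f**2 * rh
            + 8.5282e-4 * temp_f * rh**2 - 1.99e-6 * temp_f**2 * rh**2)

def humidex_c(temp_c: float, dewpoint_c: float) -> float:
    """Humidex: air temperature plus a term based on vapour pressure (hPa)."""
    # Vapour pressure from the dewpoint, per Environment Canada's formulation.
    e = 6.11 * math.exp(5417.7530 * (1 / 273.16 - 1 / (273.15 + dewpoint_c)))
    return temp_c + (5.0 / 9.0) * (e - 10.0)

# The same warm, humid afternoon expressed in both systems:
print(round(nws_heat_index_f(90.0, 70.0), 1))  # ~105.9 F "feels like"
print(round(humidex_c(32.2, 25.0), 1))         # Humidex ~44.5
```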

They then identified heat stress hotspots across the country for summer days between 2014 and 2018. Overlaying maps of both historically redlined neighborhoods and census tracts, the team identified relationships between heat exposure and communities.

How is heat distributed within cities?

Residents in poorer neighborhoods often face greater heat stress. And a greater degree of income inequality in any given city often means greater heat stress exposure for its poorer residents.

Most U.S. cities, including heavily populated cities like New York, Los Angeles, Chicago, and Philadelphia, show this disparity. But the relationship between heat stress and race-based residential segregation is even more stark.

Roughly 87.5 percent of the cities studied show that Black populations live in parts of the city with higher land surface temperatures, warmer air, and greater moist heat stress. Moreover, the association between the degree of heat stress disparity and the degree of segregation between white and non-white populations across cities is particularly striking, said Chakraborty.

"The majority -- 83 percent -- of non-white U.S. urban residents live in cities where outdoor moist heat stress disproportionately burdens them," said Chakraborty, "Further, higher percentages of all races other than white are positively correlated with greater heat exposure no matter which variable you use to assess it."

In the 1930s, the U.S. federal government's Home Owners' Loan Corporation graded neighborhoods in an effort to rank the suitability of real estate investments. This practice is known as "redlining," where lower grades (and consequently fewer loans) were issued to neighborhoods composed of poorer and minority groups. The authors find that these redlined neighborhoods still show worse environmental conditions.

Neighborhoods with lower ratings face higher heat exposure than their non-redlined neighbors. Neighborhoods with higher ratings, in contrast, generally get less heat exposure.

This is consistent with previous research on originally redlined urban neighborhoods showing lower tree cover and higher land surface temperature. Chakraborty, however, notes that using land surface temperature would generally overestimate these disparities across neighborhood grades compared to using air temperature or heat index.

"Satellites give us estimates of land surface temperature, which is a different variable from the temperature we feel while outdoors, especially within cities," said Chakraborty. "Moreover, the physiological response to heat also depends on humidity, which satellites cannot directly provide, and urbanization also modifies."

The findings are not without uncertainty, the authors added. "Ground-based weather stations helped to dwindle down, but not eliminate, model bias," said co-author Andrew Newman of the National Center for Atmospheric Research, who generated the model simulations. However, the results are still consistent with both theory and previous large-scale observational evidence.

What can be done?

Planting more trees often comes up as a potential solution to heat stress, said Chakraborty. But densely built urban cores, where poorer and minority populations in the U.S. often live, have limited space for trees. And many previous estimates of vegetation's potential to cool city surroundings are also based solely on land surface temperature -- they are perhaps prone to similar overestimation, the authors suggest.

More robust measurements of urban heat stress would help, they added. Factors like wind speed and solar insolation contribute to how heat actually affects the human body. But those factors are left out of most scientific assessments of urban heat stress because they are difficult to measure or model at neighborhood scales.

Novel study deepens knowledge of treatment-resistant hypertension

 For many patients with hypertension -- an elevated blood pressure that can lead to stroke or heart attack -- medication keeps the condition at bay. But what happens when medication that physicians usually prescribe doesn't work? Known as apparent resistant hypertension (aRH), this form of high blood pressure requires more medication and medical management.

Novel research from investigators in the Smidt Heart Institute at Cedars-Sinai, published today in the peer-reviewed journal Hypertension, found that aRH prevalence was lower in a real-world sample than previously reported, but still relatively frequent -- affecting nearly 1 in 10 hypertensive patients.

Through their analysis, investigators also learned that patients with well-managed aRH were more likely to be treated with a commonplace medication called a mineralocorticoid receptor antagonist, or MRA. These MRA treatments were used in 34% of patients with controlled aRH, but only 11% of patients with uncontrolled aRH.

"Apparent resistant hypertension is more common than many would anticipate," said Joseph Ebinger, MD, assistant professor of Cardiology in the Smidt Heart Institute and corresponding author of the study. "We also learned that within this high-risk population, there are large differences in how providers treat high blood pressure, exemplifying a need to standardize care."

Study findings were based on a unique design, which used clinically generated data from the electronic health records of three large, geographically diverse healthcare organizations. Of the 2,420,468 patients analyzed in the study, 55% were hypertensive. Of these hypertension patients, 8.5%, or 113,992 individuals, met criteria for aRH.

According to Ebinger, treating aRH can be just as tricky as diagnosing it.

In fact, the "apparent" in apparent resistant hypertension stems from the fact that before diagnosis, medical professionals must first rule out other potential reasons for a patient's blood pressure to be high.

These reasons might include medication non-adherence, inappropriate medication selection, or artificially elevated blood pressure in the doctor's office -- known as "white coat hypertension."

"Large amounts of data tell us that patients with aRH, compared to those with non-resistant forms of hypertension, are at greatest risk for adverse cardiovascular events," said Ebinger, director of Clinical Analytics in the Smidt Heart Institute. "Identifying these patients and possible causes for their elevated blood pressure is increasingly important."

The takeaway, Ebinger says, is awareness -- for both medical professionals and patients. He says providers should be mindful that if it's taking four or more antihypertensive medications to control a patient's blood pressure, they should consider evaluation for alternative causes of hypertension, or refer patients to a specialist.

Similarly, patients should lean on their medical providers to help them navigate the complex disease, including having a conversation around strategies for remembering to take their medication and addressing possible treatment side effects.

Treating patients with complex cardiac issues like aRH is at the heart of Cedars-Sinai's expertise.

The Smidt Heart Institute was recently awarded the American Heart Association's Comprehensive Hypertension Center Certification, recognizing the institute's commitment to following proven, research-based treatment guidelines to care for people with complex or difficult-to-treat hypertension.

"This accreditation, coupled with our clinical and research expertise in hypertensive diseases, serves as a mark of excellence," said Christine M. Albert, MD, MPH, chair of the Department of Cardiology and the Lee and Harold Kapelovitz Distinguished Chair in Cardiology. "These efforts signal to patients, healthcare providers, and the community that the Smidt Heart Institute is committed to delivering evidence-based, comprehensive care for hypertension."

Lean body mass, age linked with alcohol elimination rates in women

 The rate at which women eliminate alcohol from their bloodstream is largely predicted by their lean body mass, although age plays a role, too, scientists found in a new study. Women with obesity -- and those who are older -- clear alcohol from their systems 52% faster than women of healthy weights and those who are younger, the study found.

Lean body mass is defined in the study -- published in the journal Alcohol Clinical and Experimental Research -- as one's total body weight minus fat.

"We believe the strong relationship we found between participants' lean body mass and their alcohol elimination rate is due to the association that exists between lean body mass and lean liver tissue -- the part of the liver responsible for metabolizing alcohol," said research group leader M. Yanina Pepino, a professor of food science and human nutrition at the University of Illinois Urbana-Champaign.

To explore links between body composition and alcohol elimination rates, the team conducted a secondary analysis of data from a study performed at the U. of I. and another at Indiana University, Indianapolis. Both projects used similar methods to estimate the rate at which alcohol is broken down in the body.

The combined sample from the studies used in the analysis included 143 women who ranged in age from 21 to 64 and represented a wide range of body mass indices -- from healthy weights to severe obesity. Among these were 19 women who had undergone different types of bariatric surgery.

In a subsample of 102 of these women, the researchers had measured the proportions of lean and fat tissue in their bodies and calculated their body mass indices. Based on their BMI, those in the subsample were divided into three groups: normal weight, which included women with BMI ranging from 18.5-24.9; overweight, those with BMI ranging from 25-29.9; and obese, participants with BMI above 30.
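
Those cut-offs follow the standard definition of BMI as weight in kilograms divided by height in metres squared; a small illustrative Python helper (not the study's analysis code) makes the grouping explicit:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_group(value: float) -> str:
    """Assign the standard groups used in the study's subsample analysis."""
    if value < 18.5:
        return "underweight"      # below the study's 'normal weight' range
    if value < 25:
        return "normal weight"    # 18.5-24.9
    if value < 30:
        return "overweight"       # 25-29.9
    return "obese"                # 30 and above

print(bmi_group(bmi(70, 1.7)))  # BMI ~24.2 -> 'normal weight'
```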

As the researchers expected, women with higher BMI had not only more fat mass than women of healthy weights, they also had more lean mass. On average, the group with obesity had 52.3 kg of lean mass, compared with 47.5 kg for the normal weight group.

The two studies both used an alcohol clamp technique, in which participants received an intravenous infusion of alcohol at a rate controlled by a computer-assisted system. The system calculated personalized infusion rates based upon each participant's age, height, weight and gender, and was programmed so that participants would reach a target blood alcohol concentration of .06 percent within 15 minutes and maintain that level for about two hours.

Breath samples were collected with a breathalyzer at regular intervals throughout the experiments to estimate participants' blood alcohol concentration and provide feedback to the system.

"We found that having a higher fat-free body mass was associated with a faster alcohol elimination rate, particularly in women in the oldest subgroups," said Neda Seyedsadjadi, a postdoctoral fellow at the university and the first author of the study.

"The average alcohol elimination rates were 6 grams per hour for the healthy weight group, 7 grams for the overweight group, and 9 grams for the group with obesity," she said. "To put this in perspective, one standard drink is 14 grams of pure alcohol, which is found in 12 ounces of beer, 5 ounces of table wine or 1.5 ounces shot of distilled spirits."

The interaction between participants' age and lean body mass accounted for 72% of the variance in the time required to eliminate the alcohol from their system, the team found.

Pepino, who also holds an appointment as a health innovation professor at Carle Illinois College of Medicine, has conducted several studies on alcohol response in bariatric surgery patients.

The findings also shed light on alcohol metabolism and body composition in women who have undergone weight loss surgery. Researchers have long known that bariatric surgery alters women's response to alcohol but were uncertain if it affected how quickly they cleared alcohol from their systems.

Some prior studies found that these patients metabolized alcohol more slowly after they had weight loss surgery. The new study's findings indicate that these participants' slower alcohol elimination rates can be explained by surgery-induced reductions in their lean body mass. Weight loss surgery itself had no independent effects on patients' alcohol elimination rates, the team found.

Sunday 18 June 2023

Ants have a specialized communication processing center that has not been found in other social insects

 Have you ever noticed an ant in your home, only to find that a week later the whole colony has moved in? The traps you set up catch only a few of these ants, but soon, the rest of the colony has mysteriously disappeared. Now, a study published in the journal Cell on June 14 explores how certain danger-signaling pheromones -- the scent markers ants emit to communicate with each other -- activate a specific part of the ants' brains and can change the behavior of an entire nest.

"Humans aren't the only animals with complex societies and communication systems," says lead author Taylor Hart of The Rockefeller University. "Over the course of evolution, ants have evolved extremely complex olfactory systems compared to other insects, which allows them to communicate using many different types of pheromones that can mean different things."

This research suggests that ants have their own kind of communication center in their brains, similar to humans. This center can interpret alarm pheromones, or "danger signals," from other ants. This section of their brain may be more advanced than that of some other insects such as honeybees, which prior work has suggested instead rely on many different parts of their brain to coordinate in response to a single pheromone.

"There seems to be a sensory hub in the ant brain that all the panic-inducing alarm pheromones feed into," says corresponding author Daniel Kronauer of The Rockefeller University.

The researchers used an engineered protein called GCaMP to scan the brain activity of clonal raider ants that were exposed to danger signals. GCaMP works by binding calcium ions, whose concentrations surge with brain activity, and the resulting fluorescent signal can be seen with high-resolution microscopes adapted to view it.

When performing the scans, the researchers noticed that only a small section of the ants' brains lit up in response to danger signals, but the ants still showed immediate and complex behaviors in response. These behaviors were named the "panic response" because they involved actions such as fleeing, evacuating the nest, and transporting their offspring from the nest toward a safer location.

Species of ants with different colony sizes also use different pheromones to communicate a variety of messages. "We think that in the wild, clonal raider ants usually have a colony size of just tens to hundreds of individuals, which is pretty small as far as ant colonies go," says Hart. "Frequently, these small colonies tend to have panic responses as their alarm behavior because their main goal is to get away and survive. They can't risk a lot of individuals. Army ants, the cousins of the clonal raider ants, have massive colonies -- hundreds of thousands or millions of individuals -- and they can be much more aggressive."

Regardless of the species, ants within a colony divide themselves by caste and role, and ants within different castes and roles have slightly different anatomy. For the purpose of this study, researchers chose clonal raider ants as a species because they are easy to control. They used ants of one sex within one caste and role (female worker ants) to ensure consistency and therefore make it easier to observe widespread patterns. Once researchers have a clearer understanding of the neural differences between castes, sexes, and roles, they may be better able to comprehend exactly how different ant brains process the same signals.

"We can start to look at how these sensory representations are similar or different between ants," says Hart. Kronauer says, "We're looking at division of labor. Why do individuals that are genetically the same assume different tasks in the colony? How does this division of labor work?"

This work was supported by the National Institute of General Medical Sciences of the National Institutes of Health, the National Institute of Neurological Disorders and Stroke, the Howard Hughes Medical Institute, the National Science Foundation, and the Kavli Neural Systems Institute.

10-year countdown to sea-ice-free Arctic

 If the world keeps increasing greenhouse gas emissions at its current speed, all sea ice in the Arctic will disappear in the 2030s, an event that could at best be postponed until the 2050s should emissions be somehow reduced. The prediction is a decade earlier than what the Intergovernmental Panel on Climate Change (IPCC) has projected: an ice-free Arctic by the 2040s.

Professor Seung-Ki Min and Research Professor Yeon-Hee Kim from the Division of Environmental Science and Engineering at Pohang University of Science and Technology (POSTECH), together with researchers from Environment and Climate Change Canada and Universität Hamburg, Germany, projected a possible ice-free Arctic in the 2030s-2050s regardless of humanity's efforts to reduce greenhouse gas emissions. The research was published in the international journal Nature Communications.

The term global warming has become a household name since it was first used by a climate scientist at NASA in 1988. The Earth has seen a rapid decline in the Arctic sea ice area as its temperature has increased over the past several decades. This reduction in Arctic sea ice has induced the acceleration of Arctic warming, which is suggested to contribute to the increased frequency of extreme weather events in mid-latitude regions.

To predict the timing of Arctic sea ice depletion, the research team analyzed 41 years of data from 1979 to 2019. By comparing the results of multiple model simulations with three satellite observational datasets, they confirmed that the decline is primarily attributable to human-made greenhouse gas emissions. Greenhouse gas emissions resulting from human fossil fuel combustion and deforestation have been the primary drivers of Arctic sea ice decline over the past 41 years, while the influence of aerosols, solar and volcanic activities has been found to be minimal. Monthly analysis found that increased greenhouse gas emissions were reducing Arctic sea ice all year round, regardless of season or timing, although September exhibited the smallest extent of sea ice reduction.

Furthermore, it was revealed that climate models used in previous IPCC predictions generally underestimated the declining trend of sea ice area, which was taken into account to adjust the simulation values for future predictions. The results showed accelerated decline rates across all scenarios, most importantly confirming that Arctic sea ice could completely disappear by the 2050s even with reductions in greenhouse gas emissions. This finding highlights for the first time that the extinction of Arctic sea ice is possible irrespective of achieving 'carbon neutrality.'

The accelerated decline of Arctic sea ice, faster than previously anticipated, is expected to have significant impacts not only on the Arctic region but also on human societies and ecosystems worldwide. The reduction of sea ice can result in more frequent occurrences of extreme weather events such as severe cold waves, heat waves, and heavy rainfalls all across the globe, with the thawing of the Siberian permafrost in the Arctic region possibly intensifying global warming further. We may witness terrifying scenarios, which we have seen only in disaster movies, unfold right before our eyes.

Professor Seung-Ki Min, who led the study, explained, "We have confirmed an even faster timing of Arctic sea ice depletion than previous IPCC predictions after scaling model simulations based on observational data." He added, "We need to be vigilant about the potential disappearance of Arctic sea ice, regardless of carbon neutrality policies." He also expressed the importance of "evaluating the various climate change impacts resulting from the disappearance of Arctic sea ice and developing adaptation measures alongside carbon emission reduction policies."

We've pumped so much groundwater that we've nudged Earth's spin

 By pumping water out of the ground and moving it elsewhere, humans have shifted such a large mass of water that the Earth tilted nearly 80 centimeters (31.5 inches) east between 1993 and 2010 alone, according to a new study published in Geophysical Research Letters, AGU's journal for short-format, high-impact research with implications spanning the Earth and space sciences.

Based on climate models, scientists previously estimated humans pumped 2,150 gigatons of groundwater, equivalent to more than 6 millimeters (0.24 inches) of sea level rise, from 1993 to 2010. But validating that estimate is difficult.
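
That equivalence is straightforward to sanity-check by spreading the pumped mass over the global ocean surface; the ocean area below is a standard textbook figure (about 361 million square kilometers), not a number from the paper:

```python
# Rough sanity check of the 2,150 Gt ~ 6 mm sea-level equivalence.
mass_kg = 2150e12          # 2,150 gigatons of groundwater (1 Gt = 1e12 kg)
density = 1000.0           # kg per cubic metre of fresh water
ocean_area_m2 = 3.61e14    # ~3.61e8 km^2 of ocean surface (assumed textbook value)

volume_m3 = mass_kg / density
rise_mm = volume_m3 / ocean_area_m2 * 1000
print(f"{rise_mm:.2f} mm")  # ~5.96 mm with these round figures, in line
                            # with the paper's 'more than 6 mm' estimate
```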

One approach lies with the Earth's rotational pole, the point around which the planet rotates. The pole's position varies relative to the crust in a process called polar motion. The distribution of water on the planet affects how mass is distributed, and, like adding a tiny bit of weight to a spinning top, the Earth spins a little differently as water is moved around.

"Earth's rotational pole actually changes a lot," said Ki-Weon Seo, a geophysicist at Seoul National University who led the study. "Our study shows that among climate-related causes, the redistribution of groundwater actually has the largest impact on the drift of the rotational pole."

Water's ability to change the Earth's rotation was discovered in 2016, and until now, the specific contribution of groundwater to these rotational changes was unexplored. In the new study, researchers modeled the observed changes in the drift of Earth's rotational pole and the movement of water -- first, with only ice sheets and glaciers considered, and then adding in different scenarios of groundwater redistribution.

The model only matched the observed polar drift once the researchers included 2,150 gigatons of groundwater redistribution. Without it, the model was off by 78.5 centimeters (31 inches), or 4.3 centimeters (1.7 inches) of drift per year.

"I'm very glad to find the unexplained cause of the rotation pole drift," Seo said. "On the other hand, as a resident of Earth and a father, I'm concerned and surprised to see that pumping groundwater is another source of sea-level rise."

"This is a nice contribution and an important documentation for sure," said Surendra Adhikari, a research scientist at the Jet Propulsion Laboratory who was not involved in this study. Adhikari published the 2016 paper on water redistribution impacting rotational drift. "They've quantified the role of groundwater pumping on polar motion, and it's pretty significant."

The location of the groundwater matters for how much it could change polar drift; redistributing water from the midlatitudes has a larger impact on the rotational pole. During the study period, the most water was redistributed in western North America and northwestern India, both at midlatitudes.

Countries' attempts to slow groundwater depletion rates, especially in those sensitive regions, could theoretically alter the change in drift, but only if such conservation approaches are sustained for decades, Seo said.

The rotational pole normally changes by several meters within about a year, so changes due to groundwater pumping don't run the risk of shifting seasons. But on geologic time scales, polar drift can have an impact on climate, Adhikari said.

The next step for this research could be looking to the past.

"Observing changes in Earth's rotational pole is useful for understanding continent-scale water storage variations," Seo said. "Polar motion data are available from as early as the late 19th century. So, we can potentially use those data to understand continental water storage variations during the last 100 years. Were there any hydrological regime changes resulting from the warming climate? Polar motion could hold the answer."

Genome editing used to create disease resistant rice

 Researchers from the University of California, Davis, and an international team of scientists used the genome-editing tool CRISPR-Cas to create disease resistant rice plants, according to a new study published in the journal Nature June 14.

Small-scale field trials in China showed that the newly created rice variety, developed through genome editing of a newly discovered gene, exhibited both high yields and resistance to the fungus that causes a serious disease called rice blast. Rice is an essential crop that feeds half of the world's population.

Guotian Li, a co-lead author of the study, initially discovered a mutant known as a lesion mimic mutant while working as a postdoctoral scholar in Pamela Ronald's lab at UC Davis. Ronald is co-lead author and Distinguished Professor in the Department of Plant Pathology and the Genome Center.

"It's quite a step forward that his team was able to improve this gene, making it potentially useful for farmers. That makes it important," Ronald said.

The roots of the discovery began in Ronald's lab, where they created and sequenced 3,200 distinct rice strains, each possessing diverse mutations. Among these strains, Guotian identified one with dark patches on its leaves.

"He found that the strain was also resistant to bacterial infection, but it was extremely small and low yielding," Ronald said. "These types of 'lesion mimic' mutants have been found before but only in a few cases have they been useful to farmers because of the low yield."

Working with CRISPR

Guotian continued the research when he joined Huazhong Agricultural University in Wuhan, China.

He used CRISPR-Cas9 to isolate the gene related to the mutation and used genome editing to recreate that resistance trait, eventually identifying a line that had good yield and was resistant to three different pathogens, including the fungus that causes rice blast.

In small-scale field trials planted in disease-heavy plots, the new rice plants produced five times more yield than the control rice, which was damaged by the fungus, Ronald said.

"Blast is the most serious disease of plants in the world because it affects virtually all growing regions of rice and also because rice is a huge crop," Ronald said.

Future applications

The researchers hope to recreate this mutation in commonly grown rice varieties. Currently they have only optimized this gene in a model variety called "Kitaake" that is not grown widely. They also hope to target the same gene in wheat to create disease-resistant wheat.

"A lot of these lesion mimic mutants have been discovered and sort of put aside because they have low yield. We're hoping that people can go look at some of these and see if they can edit them to get a nice balance between resistance and high yield," Ronald said.

Rashmi Jain with the UC Davis Department of Plant Pathology and Genome Center also contributed to the research, as did scientists from BGI-Shenzhen, Huazhong Agricultural University, Jiangxi Academy of Agricultural Sciences, Northwest A&F University and Shandong Academy of Agricultural Sciences, China; the Lawrence Berkeley National Laboratory and UC Berkeley; the University of Adelaide, Australia; and the University of Bordeaux, France.

Research in the Ronald lab was supported by the National Science Foundation, the National Institutes of Health and the Joint Bioenergy Institute funded by the US Department of Energy.

Thursday 15 June 2023

Four-legged robot traverses tricky terrains thanks to improved 3D vision

 Researchers led by the University of California San Diego have developed a new model that trains four-legged robots to see more clearly in 3D. The advance enabled a robot to autonomously cross challenging terrain with ease -- including stairs, rocky ground and gap-filled paths -- while clearing obstacles in its way.

The researchers will present their work at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR), which will take place from June 18 to 22 in Vancouver, Canada.

"By providing the robot with a better understanding of its surroundings in 3D, it can be deployed in more complex environments in the real world," said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.

The robot is equipped with a forward-facing depth camera on its head. The camera is tilted downwards at an angle that gives it a good view of both the scene in front of it and the terrain beneath it.

To improve the robot's 3D perception, the researchers developed a model that first takes 2D images from the camera and translates them into 3D space. It does this by looking at a short video sequence that consists of the current frame and a few previous frames, then extracting pieces of 3D information from each 2D frame. That includes information about the robot's leg movements such as joint angle, joint velocity and distance from the ground. The model compares the information from the previous frames with information from the current frame to estimate the 3D transformation between the past and the present.

The model fuses all that information together so that it can use the current frame to synthesize the previous frames. As the robot moves, the model checks the synthesized frames against the frames that the camera has already captured. If they are a good match, then the model knows that it has learned the correct representation of the 3D scene. Otherwise, it makes corrections until it gets it right.
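
The check-and-correct loop described here resembles self-supervised view synthesis. The Python sketch below illustrates the underlying geometry only -- it is not the authors' model: it warps a frame into a new camera pose using a per-pixel depth map and a rigid motion, then scores how well the synthesized frame matches a captured one. The intrinsics matrix K and all inputs are toy values:

```python
import numpy as np

def warp_frame(image, depth, K, R, t):
    """Reproject every pixel of `image` into a new camera pose, given a
    per-pixel depth map and a rigid motion (R, t). Toy pinhole model with
    nearest-neighbour placement; real systems use differentiable sampling."""
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    out = np.zeros_like(image)
    for v in range(h):
        for u in range(w):
            point = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))  # back-project
            q = K @ (R @ point + t)                                # move, reproject
            u2, v2 = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= u2 < w and 0 <= v2 < h:
                out[v2, u2] = image[v, u]
    return out

def photometric_error(synthesized, captured):
    """Mean absolute intensity difference; low error means the estimated
    3D representation explains what the camera actually saw."""
    return float(np.mean(np.abs(synthesized - captured)))

# Toy sanity check: with zero camera motion, synthesis must reproduce the
# frame exactly, so the error is 0. In the real pipeline, synthesized past
# frames are compared against the frames the camera actually captured, and
# mismatches drive corrections to the 3D estimate.
K = np.array([[50.0, 0.0, 32.0], [0.0, 50.0, 32.0], [0.0, 0.0, 1.0]])
image = np.random.rand(64, 64)
depth = np.full((64, 64), 2.0)
synthesized = warp_frame(image, depth, K, np.eye(3), np.zeros(3))
print(photometric_error(synthesized, image))  # 0.0
```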

The 3D representation is used to control the robot's movement. By synthesizing visual information from the past, the robot is able to remember what it has seen, as well as the actions its legs have taken before, and use that memory to inform its next moves.

"Our approach allows the robot to build a short-term memory of its 3D surroundings so that it can act better," said Wang.

The new study builds on the team's previous work, where researchers developed algorithms that combine computer vision with proprioception -- which involves the sense of movement, direction, speed, location and touch -- to enable a four-legged robot to walk and run on uneven ground while avoiding obstacles. The advance here is that by improving the robot's 3D perception (and combining it with proprioception), the researchers show that the robot can traverse more challenging terrain than before.

"What's exciting is that we have developed a single model that can handle different kinds of challenging environments," said Wang. "That's because we have created a better understanding of the 3D surroundings that makes the robot more versatile across different scenarios."

The approach has its limitations, however. Wang notes that their current model does not guide the robot to a specific goal or destination. When deployed, the robot simply takes a straight path and if it sees an obstacle, it avoids it by walking away via another straight path. "The robot does not control exactly where it goes," he said. "In future work, we would like to include more planning techniques and complete the navigation pipeline."

DESI early data release holds nearly two million objects

 The universe is big, and it's getting bigger. To study dark energy, the mysterious force behind the accelerating expansion of our universe, scientists are using the Dark Energy Spectroscopic Instrument (DESI) to map more than 40 million galaxies, quasars, and stars. Today, the collaboration publicly released its first batch of data, with nearly 2 million objects for researchers to explore.

The 80-terabyte data set comes from 2,480 exposures taken over six months during the experiment's "survey validation" phase in 2020 and 2021. In this period between turning the instrument on and beginning the official science run, researchers made sure their plan for using the telescope would meet their science goals -- for example, by checking how long it took to observe galaxies of different brightness, and by validating the selection of stars and galaxies to observe.

"The fact that DESI works so well, and that the amount of science-grade data it took during survey validation is comparable to previous completed sky surveys, is a monumental achievement," said Nathalie Palanque-Delabrouille, co-spokesperson for DESI and a scientist at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), which manages the experiment. "This milestone shows that DESI is a unique spectroscopic factory whose data will not only allow the study of dark energy but will also be coveted by the whole scientific community to address other topics, such as dark matter, gravitational lensing, and galactic morphology."

Today the collaboration also published a set of papers related to the early data release, which include early measurements of galaxy clustering, studies of rare objects, and descriptions of the instrument and survey operations. The new papers build on DESI's first measurement of the cosmological distance scale that was published in April, which used the first two months of routine survey data (not included in the early data release) and also showed DESI's ability to accomplish its design goals.

DESI uses 5,000 robotic positioners to move optical fibers that capture light from objects millions or billions of light-years away. It is the most powerful multi-object survey spectrograph in the world, able to measure light from more than 100,000 galaxies in one night. That light tells researchers how far away an object is, building a 3D cosmic map.

"Survey validation was very important for DESI because it allowed us -- before starting the main survey -- to adjust our selection of all the objects, including stars, bright galaxies, luminous red galaxies, emission line galaxies, and quasars," said Christophe Yeche, a scientist with the French Alternative Energies and Atomic Energy Commission (CEA) who co-leads the target selection group. "We've been able to optimize our selection and confirm our observation strategy."

As the universe expands, it stretches light's wavelength, making it redder -- a characteristic known as redshift. The further away the galaxy, the bigger the redshift. DESI specializes in collecting redshifts that can then be used to solve some of astrophysics' biggest puzzles: what dark energy is and how it has changed throughout the universe's history.
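
In terms of wavelengths, redshift has a one-line definition: z equals the observed wavelength minus the rest wavelength, divided by the rest wavelength. A minimal Python illustration, with a made-up observation for demonstration:

```python
def redshift(observed_nm: float, rest_nm: float) -> float:
    """z = (observed - rest) / rest; bigger z means a more distant source."""
    return (observed_nm - rest_nm) / rest_nm

# Hydrogen's Lyman-alpha line is emitted at 121.6 nm; if a survey measured
# it at 486.4 nm, the source would sit at redshift z = 3.
print(redshift(486.4, 121.6))  # 3.0
```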

While DESI's primary goal is understanding dark energy, much of the data can also be used in other astronomical studies. For example, the early data release contains detailed images from some well-known areas of the sky, such as the Hubble Deep Field.

"There are some well-trodden spots where we've drilled down into the sky," said Stephen Bailey, a scientist at Berkeley Lab who leads data management for DESI. "We've taken valuable spectroscopic images in areas that are of interest to the rest of the community, and we're hoping that other people will take this data and do additional science with it."

Two interesting finds have already surfaced: Evidence of a mass migration of stars into the Andromeda galaxy, and incredibly distant quasars, the extremely bright and active supermassive black holes sometimes found at the center of galaxies.

"We observed some areas at very high depth. People have looked at that data and discovered very high redshift quasars, which are still so rare that basically any discovery of them is useful," said Anthony Kremin, a postdoctoral researcher at Berkeley Lab who led the data processing for the early data release. "Those high-redshift quasars are usually found with very large telescopes, so the fact that DESI -- a smaller, 4-meter survey instrument -- could compete with those larger, dedicated observatories was an achievement we are pretty proud of and demonstrates the exceptional throughput of the instrument."

Survey validation was also a chance to test the process of transforming raw data from DESI's ten spectrographs (which split a galaxy's light into different colors) into useful information.

"If you looked at them, the images coming directly from the camera would look like nonsense -- like lines on a weird, fuzzy image," said Laurie Stephey, a data architect at the National Energy Research Scientific Computing Center (NERSC), the supercomputer that processes DESI's data. "The magic happens in the processing and the software being able to decode the data. It's exciting that we have the technology to make that data accessible to the research community and that we can support this big question of 'what is dark energy?'"

DESI's early data was a unique project for NERSC. All of the experiment's code, including the computational heavy lifting, is written in the programming language Python rather than the traditional C++ or Fortran.

"That was the first time that using pure Python was shown to be a feasible approach for a major experiment at NERSC, and since then, Python has become increasingly common in our user workload," Stephey said.

The DESI early data release is now freely available through NERSC.

There is plenty of data yet to come from the experiment. DESI is currently two years into its five-year run and ahead of schedule on its quest to collect more than 40 million redshifts. The survey has already catalogued more than 26 million astronomical objects in its science run, and is adding more than a million per month.

Pass the salt: This space rock holds clues as to how Earth got its water

 Sodium chloride, better known as table salt, isn't exactly the type of mineral that captures the imagination of scientists. However, a smattering of tiny salt crystals discovered in a sample from an asteroid has researchers at the University of Arizona Lunar and Planetary Laboratory excited, because these crystals can only have formed in the presence of liquid water.

Even more intriguing, according to the research team, is the fact that the sample comes from an S-type asteroid, a category known to mostly lack hydrated, or water-bearing, minerals. The discovery strongly suggests that a large population of asteroids hurtling through the solar system may not be as dry as previously thought. The finding, published in Nature Astronomy, lends renewed support to the hypothesis that most, if not all, water on Earth may have arrived by way of asteroids during the planet's tumultuous infancy.

Tom Zega, the study's senior author and a professor of planetary sciences at the UArizona Lunar and Planetary Laboratory, and Shaofan Che, lead study author and a postdoctoral fellow at the Lunar and Planetary Laboratory, performed a detailed analysis of samples collected from asteroid Itokawa in 2005 by the Japanese Hayabusa mission and brought to Earth in 2010.

The study is the first to demonstrate that the salt crystals originated on the asteroid's parent body, ruling out any possibility they might have formed as a consequence of contamination after the sample reached Earth, a question that had plagued previous studies that found sodium chloride in meteorites of a similar origin.

"The grains look exactly like what you would see if you took table salt at home and placed it under an electron microscope," Zega said. "They're these nice, square crystals. It was funny, too, because we had many spirited group meeting conversations about them, because it was just so unreal."

Zega said the samples represent a type of extraterrestrial rock known as an ordinary chondrite. Derived from so-called S-type asteroids such as Itokawa, this type makes up about 87% of meteorites collected on Earth. Very few of them have been found to contain water-bearing minerals.

"It has long been thought that ordinary chondrites are an unlikely source of water on Earth," said Zega who is the director of the Lunar and Planetary Laboratory's Kuiper Materials Imaging & Characterization Facility. "Our discovery of sodium chloride tells us this asteroid population could harbor much more water than we thought."

Today, scientists largely agree that Earth, along with other rocky planets such as Venus and Mars, formed in the inner region of the roiling, swirling cloud of gas and dust around the young sun, known as the solar nebula, where temperatures were very high -- too high for water vapor to condense from the gas, according to Che.

"In other words, the water here on Earth had to be delivered from the outer reaches of the solar nebula, where temperatures were much colder and allowed water to exist, most likely in the form of ice," Che said. "The most likely scenario is that comets or another type of asteroid known as C-type asteroids, which resided farther out in the solar nebula, migrated inward and delivered their watery cargo by impacting the young Earth."

The discovery that water could have been present in ordinary chondrites, and therefore been sourced from much closer to the sun than their "wetter" kin, has implications for any scenario attempting to explain the delivery of water to the early Earth.

The sample used in the study is a tiny dust particle spanning about 150 micrometers, or roughly twice the diameter of a human hair, from which the team cut a small section about 5 microns wide -- just large enough to cover a single yeast cell -- for the analysis.

Using a variety of techniques, Che was able to rule out that the sodium chloride was the result of contamination from sources such as human sweat, the sample preparation process or exposure to laboratory moisture.

Because the sample had been stored for five years, the team took before and after photos and compared them. The photos showed that the distribution of sodium chloride grains inside the sample had not changed, ruling out the possibility that any of the grains were deposited into the sample during that time. In addition, Che performed a control experiment by treating a set of terrestrial rock samples the same as the Itokawa sample and examining them with an electron microscope.

"The terrestrial samples did not contain any sodium chloride, so that convinced us the salt in our sample is native to the asteroid Itokawa," he said. "We ruled out every possible source of contamination."

Zega said tons of extraterrestrial matter rain down on Earth every day, but most of it burns up in the atmosphere and never makes it to the surface.

"You need a large enough rock to survive entry and deliver that water," he said.

Previous work in the 1990s, led by the late Michael Drake, a former director of the Lunar and Planetary Lab, proposed a mechanism by which water molecules in the early solar system could become trapped in asteroid minerals and even survive an impact on Earth.

"Those studies suggest several oceans worth of water could be delivered just by this mechanism," Zega said. "If it now turns out that the most common asteroids may be much 'wetter' than we thought, that will make the water delivery hypothesis by asteroids even more plausible."

Itokawa is a peanut-shaped near-Earth asteroid about 2,000 feet long and 750 feet in diameter and is believed to have broken off from a much larger parent body. According to Che and Zega, it is conceivable that frozen water and frozen hydrogen chloride could have accumulated there, and that naturally occurring decay of radioactive elements and frequent bombardment by meteorites during the solar system's early days could have provided enough heat to sustain hydrothermal processes involving liquid water. Ultimately, the parent body would have succumbed to the pummeling and broken up into smaller fragments, leading to the formation of Itokawa.

"Once these ingredients come together to form asteroids, there is a potential for liquid water to form," Zega said. "And once you have liquids form, you can think of them as occupying cavities in the asteroid, and potentially do water chemistry."

The evidence that the salt crystals in the Itokawa sample have been there since the solar system's beginning does not end there, however. The researchers also found a vein of plagioclase, a sodium-rich silicate mineral, running through the sample, enriched with sodium chloride.

Shining potential of missing atoms

 Single photons have applications in quantum computation, information networks, and sensors, and these can be emitted by defects in the atomically thin insulator hexagonal boron nitride (hBN). Missing nitrogen atoms have been suggested to be the atomic structure responsible for this activity, but it is difficult to controllably remove them. A team at the Faculty of Physics of the University of Vienna has now shown that single atoms can be kicked out using a scanning transmission electron microscope under ultra-high vacuum. The results are published in the journal Small.

Transmission electron microscopy allows us to see the atomic structure of materials, and it is particularly well suited to directly reveal any defects in the lattice of the specimen, which may be detrimental or useful depending on the application. However, the energetic electron beam may also damage the structure, due to elastic collisions, electronic excitations, or a combination of both. Further, any gases left in the vacuum of the instrument can contribute to damage, whereby dissociated gas molecules can etch away atoms of the lattice. Until now, transmission electron microscopy measurements of hBN have been conducted at relatively poor vacuum conditions, leading to rapid damage. Due to this limitation, it has not been clear whether vacancies -- single missing atoms -- can be controllably created.

At the University of Vienna, the creation of single atomic vacancies has now been achieved using aberration-corrected scanning transmission electron microscopy in near ultra-high vacuum. The material was irradiated at a range of electron-beam energies, which influences the measured damage rate. At low energies, damage is dramatically slower than previously measured under poorer residual vacuum conditions. Single boron and nitrogen vacancies can be created at intermediate electron energies, and boron is twice as likely to be ejected due to its lower mass. Although atomically precise measurements are not feasible at the higher energies previously used to make hBN emit single photons, the results predict that nitrogen in turn becomes easier to eject -- allowing these shining vacancies to be preferentially created.
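The mass dependence behind that asymmetry can be made concrete with the standard relativistic knock-on formula: the maximum energy a beam electron of kinetic energy E can transfer to a nucleus of rest mass M in an elastic collision is E_max = 2E(E + 2m_ec^2)/(Mc^2), which scales inversely with M. The snippet below evaluates only this textbook expression, not the authors' combined ionization-plus-knock-on model mentioned below, and the 60 keV beam energy is an illustrative choice.

```python
ME_C2 = 0.511e6     # electron rest energy, eV
AMU_C2 = 931.494e6  # atomic mass unit rest energy, eV

def max_transfer_ev(beam_ev, mass_amu):
    """Maximum energy (eV) transferred to a nucleus in an elastic collision."""
    return 2.0 * beam_ev * (beam_ev + 2.0 * ME_C2) / (mass_amu * AMU_C2)

for element, amu in [("B-11", 11.009), ("N-14", 14.003)]:
    print(f"{element}: {max_transfer_ev(60e3, amu):.1f} eV at 60 keV")
# The lighter boron nucleus receives ~27% more recoil energy at the same
# beam energy, consistent with boron being easier to eject.
```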

Robust statistics collected by painstaking experimental work combined with new theoretical models were vital for reaching these conclusions. Lead author Thuy An Bui has worked on the project since her Master's thesis: "At each electron energy, I needed to spend many days at the microscope carefully collecting one series of data after another," she says. "Once the data was collected, we used machine learning to help analyse it accurately, though even this took a great deal of work." Senior author Toma Susi adds: "To understand the damage mechanism, we created an approximate model that combines ionization with knock-on damage. This allowed us to extrapolate to higher energies and shed new light on defect creation."

Saturday 10 June 2023

Diet tracking: How much is enough to lose weight?

 Keeping track of everything you eat and drink in a day is a tedious task that is tough to keep up with over time. Unfortunately, dutiful tracking is a vital component of successful weight loss. However, a new study in Obesity finds that perfect tracking is not needed to achieve significant weight loss.

Researchers from UConn, the University of Florida, and the University of Pennsylvania followed 153 weight loss program participants for six months as they self-reported their food intake through a commercial digital weight loss program. The researchers wanted to identify the optimal diet-tracking thresholds for predicting 3%, 5%, and 10% weight loss after six months.

"We partnered with WeightWatchers, who was planning on releasing a new Personal Points program, and they wanted to get empirical data via our clinical trial," says co-author and Department of Allied Health Sciences Professor Sherry Pagoto.

Pagoto explains that the new program takes a personalized approach to assigning points, including a list of zero-point foods that eliminates the need to calculate calories for everything:

"Dietary tracking is a cornerstone of all weight loss interventions, and it tends to be the biggest predictor of outcomes. This program lowers the burden of that task by allowing zero-point foods, which do not need to be tracked."

Researchers and developers are seeking ways to make the tracking process less burdensome, because as Pagoto says, for a lot of programs, users may feel like they need to count calories for the rest of their lives: "That's just not sustainable. Do users need to track everything every single day or not necessarily?"

With six months of data, Ran Xu, an assistant professor in the Department of Allied Health Sciences, wanted to see whether outcomes could be predicted from how much diet tracking participants did. Xu and Allied Health Sciences Ph.D. student Richard Bannor analyzed the data from a data science perspective, looking for patterns associated with weight loss success. Using a method called receiver operating characteristic (ROC) curve analysis, they determined how many days people need to track their food to reach clinically significant weight loss.
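As a rough sketch of the method (with synthetic data, not the study's), ROC analysis treats each participant's tracking level as a score for predicting whether they reached a weight loss goal; the threshold that best separates successes from non-successes can then be read off the curve, for example by maximising Youden's J statistic:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
n = 153
# Fraction of days tracked, and whether each participant hit a 5% loss goal
# (synthetic: more tracking -> higher chance of success):
tracked_fraction = rng.uniform(0.0, 1.0, size=n)
hit_goal = (rng.uniform(size=n) < tracked_fraction).astype(int)

fpr, tpr, thresholds = roc_curve(hit_goal, tracked_fraction)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"optimal tracking threshold ~ {thresholds[best]:.0%} of days")
```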

"It turns out, you don't need to track 100% each day to be successful," says Xu. "Specifically in this trial, we find that people only need to track around 30% of the days to lose more than 3% weight and 40% of the days to lose more than 5% weight, or almost 70% of days to lose more than 10% weight. The key point here is that you don't need to track every day to lose a clinically significant amount of weight."

This is promising since Pagoto points out that the goal for a six-month weight loss program is typically 5% to 10%, a range where health benefits have been seen in clinical trials.

"A lot of times people feel like they need to lose 50 pounds to get healthier, but actually we start to see changes in things like blood pressure, lipids, cardiovascular disease risk, and diabetes risk when people lose about 5-to-10% of their weight," says Pagoto. "That can be accomplished if participants lose about one to two pounds a week, which is considered a healthy pace of weight loss."

Xu then looked at trajectories of diet tracking over the six months of the program.

The researchers found three distinct trajectories. The first group, which they call high trackers or super users, tracked food on most days of the week throughout the six months and lost around 10% of their weight on average.

However, many participants belonged to a second group that started out tracking regularly but gradually declined over time, to only about one day per week by the four-month mark. They still lost about 5% of their weight.

A third group, called the low trackers, started tracking only three days a week, and dropped to zero by three months, where they stayed for the rest of the intervention. On average this group lost only 2% of their weight.
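The study's exact statistical method for recovering these groups isn't described here; a simple stand-in is to cluster each participant's weekly tracking counts, as in the purely illustrative sketch below with synthetic curves for 153 participants.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
weeks = 26  # roughly six months
# Synthetic days-tracked-per-week curves for three behaviour patterns:
high = np.clip(rng.normal(6.0, 1.0, (50, weeks)), 0, 7)
declining = np.clip(np.linspace(5, 1, weeks) + rng.normal(0, 1.0, (60, weeks)), 0, 7)
low = np.clip(np.linspace(3, 0, weeks) + rng.normal(0, 0.5, (43, weeks)), 0, 7)
curves = np.vstack([high, declining, low])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(curves)
for k in range(3):
    print(f"group {k}: mean days tracked/week = {curves[labels == k].mean():.1f}")
```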

"One thing that is interesting about this data is, oftentimes in the literature, researchers just look at whether there is a correlation between tracking and overall weight loss outcomes. Ran took a data science approach to the data and found there is more to the story," Pagoto says. "Now we're seeing different patterns of tracking. This will help us identify when to provide extra assistance and who will need it the most."

The patterns could inform future programs tailored to improve user tracking based on which group a participant falls into. Future studies will dig deeper into these patterns to understand why they arise and, hopefully, to develop interventions that improve outcomes.

"For me, what's exciting about these digital programs is that we have a digital footprint of participant behavior," says Xu. "We can drill down to the nitty-gritty of what people do during these programs. The data can inform precision medicine approaches, where we can take this data science perspective, identify patterns of behavior, and design a targeted approach."

Digitally delivered health programs give researchers a wealth of data they never had before, which can yield new insights -- but this science requires a multidisciplinary approach.

"Before, it felt like we were flying in the dark or just going by anecdotes or self-reported measures, but it's different now that we have so much user data. We need data science to make sense of all these data. This is where team science is so important because clinical and data scientists think about the problem from very different perspectives, but together, we can produce insights that neither of us could do on our own. This must be the future of this work," says Pagoto.

Xu agrees: "From a data science perspective, machine learning is exciting but if we just have machine learning, we only know what people do, but we don't know why or what to do with this information. That's where we need clinical scientists like Sherry to make sense of these results. That's why team science is so important."

No longer flying in the dark, these multidisciplinary teams of researchers now have the tools needed to start tailoring programs even further to help people achieve their desired outcomes. For now, users of these apps can be assured that they can still get significant results, even if they miss some entries.

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...