Sunday, 28 April 2019

Elemental old-timer makes the universe look like a toddler

Rice University physicist Christopher Tunnell at the XENON1T experiment in Italy. The collaboration discovered that xenon 124 has the longest half-life ever measured in a material. The isotope's half-life is many orders of magnitude greater than the current age of the universe.
In terms of longevity, the universe has nothing on xenon 124.
Theory predicts the isotope's radioactive decay has a half-life that surpasses the age of the universe "by many orders of magnitude," but no evidence of the process had been observed until now.
An international team of physicists that includes three Rice University researchers -- assistant professor Christopher Tunnell, visiting scientist Junji Naganoma and assistant research professor Petr Chaguine -- has reported the first direct observation of two-neutrino double electron capture for xenon 124, the physical process by which it decays. Their paper appears this week in the journal Nature.
While most xenon isotopes have half-lives of less than 12 days, a few are thought to be exceptionally long-lived, and essentially stable. Xenon 124 is one of those, though researchers have estimated its half-life at 160 trillion years as it decays into tellurium 124. The universe is presumed to be merely 13 to 14 billion years old.
The new finding puts the half-life of xenon 124 closer to 18 sextillion years. (For the record, that's 18,000,000,000,000,000,000,000.)
Half-life doesn't mean it takes that long for each atom to decay. The number simply indicates how long, on average, it will take for the bulk of a radioactive material to reduce itself by half. Still, the chance of seeing such an incident for xenon 124 is vanishingly small -- unless one gathers enough xenon atoms and puts them in the "most radio-pure place on Earth," Tunnell said.
"A key point here is that we have so many atoms, so if any decays, we'll see it," he said. "We have a (literal) ton of material."
That place, set deep inside a mountain in Italy, is a chamber that contains a ton of highly purified liquid xenon shielded in every possible way from radioactive interference.
Called the XENON1T experiment, it's the latest in a series of chambers designed to find the first direct evidence of dark matter, the mysterious substance thought to account for most of the matter in the universe.
It can also capture other rare natural phenomena. One goal of the latest year-long run was to monitor for the predicted decay of xenon 124. Sorting through the pile of data produced by the chamber revealed "tens" of these decays, said Tunnell, who joined Rice this year as part of the university's Data Science Initiative.
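A rough back-of-the-envelope estimate shows why a ton of xenon turns such an improbable decay into a countable signal. The sketch below uses standard reference values for xenon's molar mass and the natural abundance of xenon 124, and it ignores detector efficiency and fiducial cuts, so it overstates what is actually recorded:

```python
import math

# Back-of-the-envelope estimate of how many xenon-124 decays a ton of
# natural xenon should produce per year, given the measured half-life.
# Isotopic abundance and molar mass are standard reference values;
# detector efficiency and fiducial-volume cuts are ignored.

HALF_LIFE_YEARS = 1.8e22      # measured half-life of xenon 124
XE124_ABUNDANCE = 0.00095     # ~0.095% of natural xenon is xenon 124
MOLAR_MASS_XE = 131.3         # g/mol, natural xenon
AVOGADRO = 6.022e23           # atoms per mole
MASS_GRAMS = 1.0e6            # one metric ton of xenon

n_xe124 = MASS_GRAMS / MOLAR_MASS_XE * AVOGADRO * XE124_ABUNDANCE
decay_constant = math.log(2) / HALF_LIFE_YEARS   # decays per atom per year
expected_decays_per_year = n_xe124 * decay_constant

print(f"xenon-124 atoms in one ton of xenon: {n_xe124:.2e}")
print(f"expected decays per year: {expected_decays_per_year:.0f}")
```

Even with a half-life roughly a trillion times the age of the universe, on the order of a hundred xenon 124 atoms per ton decay each year, which is why the analysis could plausibly turn up "tens" of events.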
"We can see single neutrons, single photons, single electrons," he said. "Everything that enters into this detector will deposit energy in some way, and it's measurable." XENON1T can detect photons that spring to life in the liquid medium as well as electrons drawn to a top layer of charged xenon gas. Both are produced when xenon 124 decays.
"There are different ways in which a radioactive isotope can decay," he said. "One is beta decay. That means an electron comes out. You can have alpha decay, where it spits off part of the nucleus to release energy. And there's electron capture, when an electron goes into the nucleus and turns a proton into a neutron. This changes the composition of the nucleus and results in its decay.
"Normally, you have one electron come in and one neutrino come out," Tunnell said. "That neutrino has a fixed energy, which is how the nucleus expels its mass. This is a process we see often in nuclear particle physics, and it's quite well understood. But we had never seen two electrons come into the nucleus at the same time and give off two neutrinos."
The photons are released as electrons cascade to fill lower vacancies around the nucleus. They show up as a bump on a graph that can only be interpreted as multiple two-neutrino double electron captures. "It can't be explained with any other background sources that we know of," said Tunnell, who served as analysis coordinator for two years.
XENON1T remains the world's largest, most sensitive detector for weakly interacting massive particles, aka WIMPs, the hypothetical particles believed to constitute dark matter. Tunnell worked at XENON1T with Rice colleague Naganoma, who served as operations manager.
The researchers who make up the XENON Collaboration, all of whom are co-authors on the paper, have yet to detect dark matter, but a larger instrument, XENONnT, is being built to further the search. Chaguine is the new instrument's commissioning manager, responsible for its construction.
The collaboration's example could lead researchers to find other exotic processes unrelated to dark matter, Tunnell said, including the ongoing hunt for another unseen process, neutrinoless double electron capture, in which no neutrinos are released. That process, according to the paper, "would have implications for the nature of the neutrino and give access to the absolute neutrino mass."
"It gets tricky, because while we have the science we're trying to do, we also have to think about what else we can do with the experiment," he said. "We have a lot of students looking for thesis projects, so we make a list of 10 or 20 other measurements -- but they're a shot in the dark, and we almost always come up with nothing, as is typical of curiosity-driven science.
"In this case, we took a shot in the dark where two or three students were very lucky," he said.

The first laser radio transmitter

This device uses a frequency comb laser to emit and modulate microwaves wirelessly. The laser uses different frequencies of light beating together to generate microwave radiation. The researchers used this phenomenon to send a song wirelessly to a receiver.
You've never heard Dean Martin like this.
Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences transmitted a recording of Martin's classic "Volare" wirelessly via a semiconductor laser -- the first time a laser has been used as a radio frequency transmitter.
In a paper published in the Proceedings of the National Academy of Sciences, the researchers demonstrated a laser that can emit microwaves wirelessly, modulate them, and receive external radio frequency signals.
"The research opens the door to new types of hybrid electronic-photonic devices and is the first step toward ultra-high-speed Wi-Fi," said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, at SEAS and senior author of the study.
This research builds on previous work from the Capasso Lab. In 2017, the researchers discovered that an infrared frequency comb in a quantum cascade laser could be used to generate terahertz frequencies, the submillimeter wavelengths of the electromagnetic spectrum that could move data hundreds of times faster than today's wireless platforms. In 2018, the team found that quantum cascade laser frequency combs could also act as integrated transmitters or receivers to efficiently encode information.
Now, the researchers have figured out a way to extract and transmit wireless signals from laser frequency combs.
Unlike conventional lasers, which emit a single frequency of light, laser frequency combs emit multiple frequencies simultaneously, evenly spaced to resemble the teeth of a comb. In 2018, the researchers discovered that inside the laser, the different frequencies of light beat together to generate microwave radiation. The light inside the cavity of the laser caused electrons to oscillate at microwave frequencies -- which are within the communications spectrum.
"If you want to use this device for Wi-Fi, you need to be able to put useful information in the microwave signals and extract that information from the device," said Marco Piccardo, a postdoctoral fellow at SEAS and first author of the paper.
The first thing the new device needed to transmit microwave signals was an antenna. So, the researchers etched a gap into the top electrode of the device, creating a dipole antenna (like the rabbit ears on the top of an old TV). Next, they modulated the frequency comb to encode information on the microwave radiation created by the beating light of the comb. The antenna then radiated the microwaves, carrying the encoded information, out from the device. The radio signal was received by a horn antenna, filtered and sent to a computer.
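The arithmetic behind that beat note is easy to sketch. The short Python example below uses an illustrative comb-line spacing of 5.5 GHz and works directly with the optical intensity envelope; the spacing and sampling values are assumptions, not the Harvard device's actual parameters:

```python
import numpy as np

# Minimal sketch (illustrative numbers): two neighboring comb lines separated
# by a microwave-scale offset beat together, so the total optical intensity
# oscillates at the difference frequency. That beat note is the microwave
# "carrier" the antenna radiates.

delta_f = 5.5e9          # spacing between two comb lines, Hz (assumed)

fs = 50e9                # sampling rate for the intensity envelope, Hz
t = np.arange(0, 2e-7, 1 / fs)

# Intensity of two equal-amplitude lines is proportional to
# 1 + cos(2*pi*delta_f*t); the optical-frequency terms average out on any
# realistic detector, so only the difference frequency survives.
intensity = 1 + np.cos(2 * np.pi * delta_f * t)

# The dominant tone in the intensity spectrum sits at delta_f, i.e. in the
# microwave band, even though both source lines are optical.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(f"Beat note found at {freqs[spectrum.argmax()] / 1e9:.2f} GHz")
```

Modulating the comb then imprints data on this microwave carrier, which is what the etched dipole antenna radiates.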
The researchers also demonstrated that the laser radio could receive signals. The team was able to remotely control the behavior of the laser using microwave signals from another device.
"This all-in-one, integrated device holds great promise for wireless communication," said Piccardo. "While the dream of terahertz wireless communication is still a ways away, this research provides a clear roadmap showing how to get there."
The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.
This research was co-authored by Michele Tamagnone, Benedikt Schwarz, Paul Chevalier, Noah A. Rubin, Yongrui Wang, Christine A. Wang, Michael K. Connors, Daniel McNulty and Alexey Belyanin. It was supported in part by the National Science Foundation.

Scientists discover what powers celestial phenomenon STEVE

Amateur astronomer's photograph used in the new research. The photograph was taken on May 8, 2016, in Keller, Wash. The major structures are two bands of upper atmospheric emissions 160 kilometers (100 miles) above the ground, a mauve arc and green picket fence. The black objects at the bottom are trees. The background star constellations include Gemini and Ursa Major.
The celestial phenomenon known as STEVE is likely caused by a combination of heating of charged particles in the atmosphere and energetic electrons like those that power the aurora, according to new research. In a new study, scientists found STEVE's source region in space and identified two mechanisms that cause it.
Last year, the obscure atmospheric lights became an internet sensation. Typical auroras, the northern and southern lights, are usually seen as swirling green ribbons spreading across the sky. But STEVE is a thin ribbon of pinkish-red or mauve-colored light stretching from east to west, farther south than where auroras usually appear. Even more strange, STEVE is sometimes joined by green vertical columns of light nicknamed the "picket fence."
Auroras are produced by glowing oxygen and nitrogen atoms in Earth's upper atmosphere, excited by charged particles streaming in from the near-Earth magnetic environment called the magnetosphere. Scientists didn't know if STEVE was a kind of aurora, but a 2018 study found its glow is not due to charged particles raining down into Earth's upper atmosphere.
The authors of the 2018 study dubbed STEVE a kind of "sky-glow" that is distinct from the aurora, but were unsure exactly what was causing it. Complicating the matter was the fact that STEVE can appear during solar-induced magnetic storms around Earth that power the brightest auroral lights.
Authors of a new study published in AGU's journal Geophysical Research Letters analyzed satellite data and ground images of STEVE events and conclude that the reddish arc and green picket fence are two distinct phenomena arising from different processes. The picket fence is caused by a mechanism similar to typical auroras, but STEVE's mauve streaks are caused by heating of charged particles higher up in the atmosphere, similar to what causes light bulbs to glow.
"Aurora is defined by particle precipitation, electrons and protons actually falling into our atmosphere, whereas the STEVE atmospheric glow comes from heating without particle precipitation," said Bea Gallardo-Lacourt, a space physicist at the University of Calgary and co-author of the new study. "The precipitating electrons that cause the green picket fence are thus aurora, though this occurs outside the auroral zone, so it's indeed unique."
Images of STEVE are beautiful in themselves, but they also provide a visible way to study the invisible, complex charged particle flows in Earth's magnetosphere, according to the study's authors. The new results help scientists better understand how particle flows develop in the ionosphere, which is an important goal because such disturbances can interfere with radio communications and affect GPS signals.
Where does STEVE come from?
In the new study, researchers wanted to find out what powers STEVE and if it occurs in both the Northern and Southern Hemispheres at the same time. They analyzed data from several satellites passing overhead during STEVE events in April 2008 and May 2016 to measure the electric and magnetic fields in Earth's magnetosphere at the time.
The researchers then coupled the satellite data with photos of STEVE taken by amateur auroral photographers to figure out what causes the unusual glow. They found that during STEVE, charged particles flowing like a river through Earth's ionosphere collide, creating friction that heats the particles and causes them to emit mauve light. Incandescent light bulbs work in much the same way: electricity heats a filament of tungsten until it's hot enough to glow.
Interestingly, the study found the picket fence is powered by energetic electrons streaming from space thousands of kilometers above Earth. While similar to the process that creates typical auroras, these electrons impact the atmosphere far south of usual auroral latitudes. The satellite data showed high-frequency waves moving from Earth's magnetosphere to its ionosphere can energize electrons and knock them out of the magnetosphere to create the striped picket fence display.
The researchers also found the picket fence occurs in both hemispheres at the same time, supporting the conclusion that its source is high enough above Earth to feed energy to both hemispheres simultaneously.
Public involvement has been crucial for STEVE research by providing ground-based images and precise time and location data, according to Toshi Nishimura, a space physicist at Boston University and lead author of the new study.
"As commercial cameras become more sensitive and increased excitement about the aurora spreads via social media, citizen scientists can act as a 'mobile sensor network,' and we are grateful to them for giving us data to analyze," Nishimura said.

Bridge over coupled waters: Scientists 3D-print all-liquid 'lab on a chip'

To make the 3D-printable fluidic device, Berkeley Lab researchers designed a specially patterned glass substrate. When two liquids -- one containing nanoscale clay particles, another containing polymer particles -- are printed onto the substrate, they come together at the interface of the two liquids and within milliseconds form a very thin channel or tube about 1 millimeter in diameter.
Researchers at DOE's Lawrence Berkeley National Laboratory (Berkeley Lab) have 3D-printed an all-liquid device that, with the click of a button, can be repeatedly reconfigured on demand to serve a wide range of applications -- from making battery materials to screening drug candidates.
"What we demonstrated is remarkable. Our 3D-printed device can be programmed to carry out multistep, complex chemical reactions on demand," said Brett Helms, a staff scientist in Berkeley Lab's Materials Sciences Division and Molecular Foundry, who led the study. "What's even more amazing is that this versatile platform can be reconfigured to efficiently and precisely combine molecules to form very specific products, such as organic battery materials."
The study, reported in the journal Nature Communications, is the latest in a series of experiments at Berkeley Lab on fabricating all-liquid materials with a 3D printer.
Last year, a study co-authored by Helms and Thomas Russell, a visiting researcher from the University of Massachusetts at Amherst who leads the Adaptive Interfacial Assemblies Toward Structured Liquids Program in Berkeley Lab's Materials Sciences Division, pioneered a new technique for printing various liquid structures -- from droplets to swirling threads of liquid -- within another liquid.
"After that successful demonstration, a bunch of us got together to brainstorm on how we could use liquid printing to fabricate a functioning device," said Helms. "Then it occurred to us: If we can print liquids in defined channels and flow contents through them without destroying them, then we could make useful fluidic devices for a wide range of applications, from new types of miniaturized chemical laboratories to even batteries and electronic devices."
To make the 3D-printable fluidic device, lead author Wenqian Feng, a postdoctoral researcher in Berkeley Lab's Materials Sciences Division, designed a specially patterned glass substrate. When two liquids -- one containing nanoscale clay particles, another containing polymer particles -- are printed onto the substrate, they come together at the interface of the two liquids and within milliseconds form a very thin channel or tube about 1 millimeter in diameter.
Once the channels are formed, catalysts can be placed in different channels of the device. The user can then 3D-print bridges between channels, connecting them so that a chemical flowing through them encounters catalysts in a specific order, setting off a cascade of chemical reactions to make specific chemical compounds. And when controlled by a computer, this complex process can be automated "to execute tasks associated with catalyst placement, build liquid bridges within the device, and run reaction sequences needed to make molecules," said Russell.
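Because the order in which bridges are printed determines which catalysts a reagent meets and in what sequence, the control logic can be pictured as a simple script. The sketch below is purely conceptual and hypothetical; the class and function names are invented for illustration and are not an interface from the Berkeley Lab study:

```python
from dataclasses import dataclass

# Conceptual sketch only: a hypothetical controller that strings together
# catalyst-lined channels by printing liquid bridges in a chosen order, so a
# reagent stream meets each catalyst in sequence (a reaction cascade).

@dataclass
class Channel:
    name: str
    catalyst: str

def run_sequence(reagent: str, route: list) -> str:
    """Print bridges along the route and track the (stand-in) product."""
    product = reagent
    for channel in route:
        print(f"printing bridge into '{channel.name}' (catalyst: {channel.catalyst})")
        product = f"{product}+{channel.catalyst}"   # stand-in for one reaction step
    return product

route = [Channel("ch1", "catalyst_A"), Channel("ch2", "catalyst_B")]
print("final product:", run_sequence("reagent", route))
```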
The multitasking device can also be programmed to function like an artificial circulatory system: it separates molecules flowing through the channels, automatically removes unwanted byproducts, and continues to print a sequence of bridges to specific catalysts while carrying out the steps of chemical synthesis.
"The form and functions of these devices are only limited by the imagination of the researcher," explained Helms. "Autonomous synthesis is an emerging area of interest in the chemistry and materials communities, and our technique for 3D-printing devices for all-liquid flow chemistry could help to play an important role in establishing the field."
Added Russell: "The combination of materials science and chemistry expertise at Berkeley Lab, along with world-class user facilities available to researchers from all over the world, and the young talent that is drawn to the Lab is unique. We couldn't have developed this program anywhere else."
The researchers next plan to electrify the walls of the device using conductive nanoparticles to expand the types of reactions that can be explored. "With our technique, we think it should also be possible to create all-liquid circuitry, fuel cells, and even batteries," said Helms. "It's been really exciting for our team to combine fluidics and flow chemistry in a way that is both user-friendly and user-programmable."

Despite health warnings, Americans still sit too much

Sedentary lifestyle word cloud
Most Americans continue to sit for prolonged periods despite public health messages that such inactivity increases the risk of obesity, diabetes, heart disease and certain cancers, according to a major new study led by researchers at Washington University School of Medicine in St. Louis.
The research team analyzed surveys of 51,000 people from 2001 to 2016 to track sitting trends in front of TVs and computers and the total amount of time spent sitting on a daily basis. Unlike other studies that have looked at sedentary behaviors, the research is the first to document sitting in a nationally representative sample of the U.S. population across multiple age groups -- from children to the elderly -- and different racial and ethnic groups.
The research, led by Yin Cao, ScD, an epidemiologist and assistant professor of surgery in the Division of Public Health Sciences, is published April 23 in the Journal of the American Medical Association.
"In almost none of the groups we analyzed are the numbers going in the right direction," said Cao, the study's senior author. "We want to raise awareness about this issue on multiple levels -- from individuals and families to schools, employers and elected officials."
Epidemiologist and co-senior author Graham A. Colditz, MD, DrPH, the Niess-Gain Professor of Surgery and director of the Division of Public Health Sciences, said: "We think a lot of these sedentary habits are formed early, so if we can make changes that help children be more active, it could pay off in the future, both for children as they grow to adulthood and for future health-care spending. Sedentary behavior is linked to poor health in many areas, and if we can reduce that across the board it could have a big impact."
The new study fills a gap in knowledge on sedentary behavior, according to the researchers, putting specific numbers on the amount of time Americans actually spend sitting. For example, the most recent edition of the Physical Activity Guidelines for Americans, published in 2018 by the Department of Health and Human Services, recommends less sitting time but offers no guidance on how much.
The researchers analyzed data from more than 51,000 people who participated in the National Health and Nutrition Examination Survey between 2001 and 2016, looking at four age groups: children ages 5 to 11 (as reported by a parent or guardian), adolescents ages 12 to 19, adults ages 20 to 64, and adults ages 65 and older. Race and ethnicity were defined as non-Hispanic white, non-Hispanic black, Hispanic and other races, including multiracial.
Total daily sitting time increased among adolescents and adults from 2007 to 2016, from seven hours per day to just over eight for teenagers, and from 5.5 hours per day to almost 6.5 for adults, the researchers found.
"Until now, we haven't had data demonstrating the amount of time most Americans spend sitting watching TV or doing other sedentary activities," Cao said. "Now that we have a baseline -- on population level and for different age groups -- we can look at trends over time and see whether different interventions or public health initiatives are effective in reducing the time spent sitting and nudging people toward more active behaviors."
The researchers found that most Americans spend at least two hours per day sitting and watching television or videos. Among children ages 5-11, 62 percent spent at least that long in front of screens daily. For adolescents ages 12-19, that number was 59 percent. About 65 percent of adults ages 20 to 64 spent at least two hours watching television per day, and in the most recent period, 2015 to 2016, 84 percent of adults over age 65 did so. These percentages remained fairly steady over the course of the study.
Across all age groups, 28 percent to 38 percent of those surveyed spent at least three hours per day watching television or videos, and 13 percent to 23 percent spent four hours or more engaged in watching television or videos.
Importantly, across all age groups, males, non-Hispanic black individuals and participants who reported being obese or physically inactive spent more time sitting to watch television or videos than their counterparts.
In addition, computer screen time outside of work and school increased over this period. At least half of individuals across all age groups used a computer during leisure time for more than one hour per day in the two most recent years of the study. And up to a quarter of the U.S. population used computers outside of work and school for three hours or more.
"How we create public policies or promote social change that supports less sitting is unclear and likely to be complicated," Colditz said. "If a neighborhood in a disadvantaged community is unsafe, for example, parents can't just send their kids outside to play. Our environments -- the way our cities, our school days and working days are designed -- play roles in this behavior that are difficult to change. But at least now, we have a baseline from which to measure whether specific changes are having an impact."
Chao Cao, a recent graduate of the Brown School and a data analyst in Yin Cao's lab, co-led the analyses. Washington University also collaborated with researchers at a number of other institutions, including Charles Matthews, PhD, at the National Cancer Institute (NCI); Lin Yang, PhD, at the Alberta Health Services, Calgary, Canada; the Harvard T.H. Chan School of Public Health; Memorial Sloan Kettering Cancer Center; and Massachusetts General Hospital and Harvard Medical School.

An army of micro-robots can wipe out dental plaque

With a precise, controlled movement, microrobots cleared a glass plate of a biofilm, as shown in this time-lapse image.
A visit to the dentist typically involves time-consuming and sometimes unpleasant scraping with mechanical tools to remove plaque from teeth. What if, instead, a dentist could deploy a small army of tiny robots to precisely and non-invasively remove that buildup?
A team of engineers, dentists, and biologists from the University of Pennsylvania developed a microscopic robotic cleaning crew. With two types of robotic systems -- one designed to work on surfaces and the other to operate inside confined spaces -- the scientists showed that robots with catalytic activity could ably destroy biofilms, sticky amalgamations of bacteria enmeshed in a protective scaffolding. Such robotic biofilm-removal systems could be valuable in a wide range of potential applications, from keeping water pipes and catheters clean to reducing the risk of tooth decay, endodontic infections, and implant contamination.
The work, published in Science Robotics, was led by Hyun (Michel) Koo of the School of Dental Medicine and Edward Steager of the School of Engineering and Applied Science.
"This was a truly synergistic and multidisciplinary interaction," says Koo. "We're leveraging the expertise of microbiologists and clinician-scientists as well as engineers to design the best microbial eradication system possible. This is important to other biomedical fields facing drug-resistant biofilms as we approach a post-antibiotic era."
"Treating biofilms that occur on teeth requires a great deal of manual labor, both on the part of the consumer and the professional," adds Steager. "We hope to improve treatment options as well as reduce the difficulty of care."
Biofilms can arise on biological surfaces, such as on a tooth or in a joint or on objects, like water pipes, implants, or catheters. Wherever biofilms form, they are notoriously difficult to remove, as the sticky matrix that holds the bacteria provides protection from antimicrobial agents.
In previous work, Koo and colleagues have made headway at breaking down the biofilm matrix with a variety of outside-the-box methods. One strategy has been to employ iron-oxide-containing nanoparticles that work catalytically, activating hydrogen peroxide to release free radicals that can kill bacteria and destroy biofilms in a targeted fashion.
Serendipitously, the Penn Dental Medicine team found that groups at Penn Engineering led by Steager, Vijay Kumar, and Kathleen Stebe were working with a robotic platform that used very similar iron-oxide nanoparticles as building blocks for microrobots. The engineers control the movement of these robots using a magnetic field, allowing a tether-free way to steer them.
Together, the cross-school team designed, optimized, and tested two types of robotic systems, which the group calls catalytic antimicrobial robots, or CARs, capable of degrading and removing biofilms. The first involves suspending iron-oxide nanoparticles in a solution, which can then be directed by magnets to remove biofilms on a surface in a plow-like manner. The second platform entails embedding the nanoparticles into gel molds in three-dimensional shapes. These were used to target and destroy biofilms clogging enclosed tubes.
Both types of CARs effectively killed bacteria, broke down the matrix that surrounds them, and removed the debris with high precision. After testing the robots on biofilms growing on either a flat glass surface or enclosed glass tubes, the researchers tried out a more clinically relevant application: Removing biofilm from hard-to-reach parts of a human tooth.
The CARs were able to degrade and remove bacterial biofilms not just from a tooth surface but from one of the most difficult-to-access parts of a tooth, the isthmus, a narrow corridor between root canals where biofilms commonly grow.
"Existing treatments for biofilms are ineffective because they are incapabale of simultaneously degrading the protective matrix, killing the embedded bacteria, and physically removing the biodegraded products," says Koo. "These robots can do all three at once very effectively, leaving no trace of biofilm whatsoever."
By plowing away the degraded remains of the biofilm, Koo says, the chance of it taking hold and re-growing decreases substantially. The researchers envision precisely directing these robots to wherever they need to go to remove biofilms, be it the inside of a catheter or a water line or difficult-to-reach tooth surfaces.
"We think about robots as automated systems that take actions based on actively gathered information," says Steager. In this case, he says, "the motion of the robot can be informed by images of the biofilm gathered from microcameras or other modes of medical imaging."
To move the innovation down the road to clinical application, the researchers are receiving support from the Penn Center for Health, Devices, and Technology, an initiative supported by Penn's Perelman School of Medicine, Penn Engineering, and the Office of the Vice Provost for Research. Penn Health-Tech, as it's known, awards select interdisciplinary groups with support to create new health technologies, and the robotic platforms project was one of those awarded support in 2018.
"The team has a great clinical background on the dental side and a great technical background on the engineering side," says Victoria Berenholz, executive director of Penn Health-Tech. "We're here to round them out on the business side. They have really done a fantastic job on the project."

Engineers make injectable tissues a reality

Doctoral student Mohamed Gamal uses a newly developed cell encapsulation device.
A simple injection that can help regrow damaged tissue has long been the dream of physicians and patients alike. A new study from researchers at UBC Okanagan moves that dream closer to reality with a device that makes encapsulating cells much faster, cheaper and more effective.
"The idea of injecting different kinds of tissue cells is not a new one," says Keekyoung Kim, assistant professor of engineering at UBC Okanagan and study co-author. "It's an enticing concept because by introducing cells into damaged tissue, we can supercharge the body's own processes to regrow and repair an injury."
Kim says everything from broken bones to torn ligaments could benefit from this kind of approach and suggests even whole organs could be repaired as the technology improves.
The problem, he says, is that cells on their own are delicate and tend not to survive when injected directly into the body.
"It turns out that to ensure cell survival, they need to be encased in a coating that protects them from physical damage and from the body's own immune system," says Mohamed Gamal, doctoral student in biomedical engineering and study lead author. "But it has been extremely difficult to do that kind of cell encapsulation, which has until now been done in a very costly, time consuming and wasteful process."
Kim and Gamal have solved that problem by developing an automated encapsulation device that encases many cells in a microgel using a specialized blue laser and purifies them to produce a clean, usable sample in just a few minutes. The advantage of their system is that over 85 per cent of the cells survive and the process can be easily scaled up.
"Research in this area has been hampered by the cost and lack of availability of mass-produced cell encapsulated microgels," says Kim. "We've solved that problem and our system could provide thousands or even tens of thousands of cell-encapsulated microgels rapidly, supercharging this field of bioengineering."
In addition to developing a system that's quick and efficient, Gamal says the equipment is made up of readily available and inexpensive components.
"Any lab doing this kind of work could set up a similar system anywhere from a few hundred to a couple of thousand dollars, which is pretty affordable for lab equipment," says Gamal.
The team is already looking at the next step, which will be to embed different kinds of stem cells -- cells that haven't yet differentiated into specific tissue types -- into the microgels alongside specialized proteins or hormones called growth factors. The idea would be to help the stem cells transform into the appropriate tissue type once they're injected.
"I'm really excited to see where this technology goes next and what our encapsulated stem cells are capable of."

Magnets can help AI get closer to the efficiency of the human brain

Purdue University researchers have developed a process to use magnetics with brain-like networks to program and teach devices to better generalize about different objects.
Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot's brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?
Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects.
"Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses," said Kaushik Roy, Purdue's Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. "This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects."
Roy presented the technology during the annual German Physical Sciences Conference earlier this month in Germany. The work also appeared in Frontiers in Neuroscience.
The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons. Magnetic tunnel junction devices show switching behavior, which is stochastic in nature.
The stochastic switching behavior is representative of a sigmoid switching behavior of a neuron. Such magnetic tunnel junctions can be also used to store synaptic weights.
The Purdue group proposed a new stochastic training algorithm for synapses based on spike-timing-dependent plasticity (STDP) -- a learning rule that has been experimentally observed in the rat hippocampus -- which they term Stochastic-STDP. The magnet's inherent stochastic behavior was used to switch the magnetization states according to the proposed algorithm for learning different object representations.
The trained synaptic weights, encoded deterministically in the magnetization state of the nano-magnets, are then used during inference. Advantageously, the use of high-energy-barrier magnets (30-40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also enables the same device to be used as a stable memory element that meets data-retention requirements. The barrier height of the nano-magnets used to perform sigmoid-like neuronal computations, however, can be lowered to about 20 kT for higher energy efficiency.
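To make the idea concrete, here is a minimal sketch of a stochastic, STDP-driven update for a binary nano-magnet synapse, assuming a sigmoid switching probability; the time constant, steepness and simplified update rule are illustrative assumptions, not parameters from the Purdue paper:

```python
import numpy as np

# Minimal sketch of stochastic STDP with a binary nano-magnet synapse.
# The magnetic tunnel junction switches probabilistically; the switching
# probability follows a sigmoid of an STDP "drive" set by the spike-timing
# difference. All constants are illustrative, not values from the study.

rng = np.random.default_rng(0)

TAU = 20.0    # STDP time constant, ms (assumed)
BETA = 4.0    # steepness of the sigmoid switching probability (assumed)

def switch_probability(delta_t_ms: float) -> float:
    """Probability that the magnet switches to the 'potentiated' state."""
    # Pre-before-post (delta_t > 0) favors potentiation; the exponential
    # STDP window is squashed through a sigmoid, mimicking the stochastic
    # switching behavior of a magnetic tunnel junction.
    drive = np.sign(delta_t_ms) * np.exp(-abs(delta_t_ms) / TAU)
    return 1.0 / (1.0 + np.exp(-BETA * drive))

def update_synapse(delta_t_ms: float) -> int:
    """Stochastically write the binary synaptic state (1 or 0)."""
    return 1 if rng.random() < switch_probability(delta_t_ms) else 0

# A pre-spike 5 ms before the post-spike usually potentiates, the reverse
# timing usually depresses -- but in both cases only on average.
print("P(potentiate | dt=+5ms):", round(switch_probability(+5.0), 2))
print("P(potentiate | dt=-5ms):", round(switch_probability(-5.0), 2))
print("sampled state:", update_synapse(+5.0))
```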
"The big advantage with the magnet technology we have developed is that it is very energy-efficient," said Roy, who leads Purdue's Center for Brain-inspired Computing Enabling Autonomous Intelligence. "We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations."
Roy said the brain-like networks have other uses in solving difficult problems as well, including combinatorial optimization problems such as the traveling salesman problem and graph coloring. The proposed stochastic devices can act as a "natural annealer," helping the algorithms move out of local minima.
Their work aligns with Purdue's Giant Leaps celebration, acknowledging the university's global advancements in artificial intelligence as part of Purdue's 150th anniversary. It is one of the four themes of the yearlong celebration's Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.
Roy has worked with the Purdue Research Foundation Office of Technology Commercialization on patented technologies that are providing the basis for some of the research at C-BRIC. They are looking for partners to license the technology.

Saturday, 27 April 2019

Lack of awareness of inequality means we penalize those who have least money

People tend to automatically assume that someone who gives less money to charity is less generous, according to new research. In the study, participants made this assumption when they had no knowledge of how much the donation represented as a percentage of the giver's overall income.
The online study was carried out by researchers at the University of Exeter Business School, Yale University, MIT and Harvard Business School. Participants were able to choose to 'penalise' different groups of people based on their contributions to society. The researchers found that participants tended to 'penalise' those who had given smaller cash amounts to charity in real terms, without realising that those people had actually given more as a proportion of their income than their wealthier counterparts.
However, the participants' behaviour changed completely when they were made aware of others' incomes. Participants then 'penalised' the rich for giving a lower percentage, even when the cash amount was actually more in real terms.
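The distinction driving the study turns on simple arithmetic, as the illustrative calculation below shows; the incomes and gift amounts are invented for the example and are not figures from the study:

```python
# Illustrative arithmetic only (made-up incomes and gifts): a smaller
# absolute donation can nonetheless be the larger gift relative to income.

donors = {
    "higher-income donor": {"income": 200_000, "gift": 200},
    "lower-income donor":  {"income": 20_000,  "gift": 100},
}

for name, d in donors.items():
    share = d["gift"] / d["income"]
    print(f"{name}: ${d['gift']} given = {share:.2%} of income")

# Judged on cash alone the lower-income donor looks less generous;
# judged as a share of income, they gave five times more.
```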
"This lack of awareness of inequality can have substantial consequences for society -- how we treat each other and what we expect others to contribute to society," said Dr Oliver Hauser, Senior Lecturer in Economics at the University of Exeter and lead author of the report.
In one experiment, participants were given actual figures from a list of five U.S. school districts and told the annual donation given to each district's Parent-Teacher Association (PTA). They then had to choose which school district should pay an additional tax bill, which would benefit all five districts collectively. Participants who didn't know the average income of parents in each area chose to levy the tax on the poorest school district, which had given the least amount of money to its PTA in real terms. Those who were made aware of the average incomes chose to give the tax bill to a school district with richer parents, who had given less as a proportion of their income.
"If people don't realise how little the poor actually have, they may be less sympathetic to the lower contributions made by that group," said Professor Michael Norton from Harvard Business School, who is a co-author of the report.
"Conversely, they are less likely to view the rich negatively for not contributing their 'fair share.' This can have real implications for the on-going gap between the rich and poor in society, particularly when it comes to making decisions on how to distribute public resources fairly."
The income distributions used in the article's four studies were based on recent figures for the actual U.S. income distribution, which ranks among the most unequal in the Western world.
The research was carried out by Dr Hauser, Professor Norton and co-authors Dr Gordon Kraft-Todd from Yale University, Associate Professor David G. Rand from Massachusetts Institute of Technology and Professor Martin Nowak from Harvard University.
The research is published in the peer-reviewed academic journal Behavioural Public Policy. Funding was provided by the Harvard Foundations of Human Behavior Initiative and Harvard Business School.

New theory derived from classical physics predicts how economies respond to major disturbances

A shock pushes economies out of equilibrium. Modern macroeconomic models, however, are still based on the assumption of equilibrium, which makes them fail when dealing with economies in times of crisis. Researchers from the Complexity Science Hub Vienna (CSH) are now proposing a novel method borrowed from physics that makes the effects of major events on out-of-equilibrium economies computable for the first time. Their article was published in the current issue of Nature Communications.
The new method adds to current economic models in several ways.
Calculating resilience
"First, we can determine the resilience of an economy," says Peter Klimek, first author of the paper. Each country has different industries, and depends on various imports and exports. "We see all these interdependencies in newly available data sets. From these data we can calculate how susceptible a country and its different production sectors are to disturbances."
The scientists see, for instance, which parts of an economy are particularly vulnerable to a shock, such as a trade war.
Modeling outputs
"We can further quantify how much a shock in one corner of the world affects the production of a given sector far across the globe," says co-author Stefan Thurner. Modeling responses to shocks helps answer questions like why it took economies so long to recover from the 2008 recession. "A shock does not evaporate," explains Peter Klimek. Just like a rock that is thrown into a still pond, a shock produces waves. "The shock waves will run through the whole system, following each of its interdependent connections." The researchers found that it typically takes six to ten years before all sectors of an economy have fully digested a shock.
Testable predictions
The new method also makes testable predictions possible. The authors took input-output data for 56 industrial sectors in 43 OECD countries from the years 2000 to 2014. With this large dataset they tested the accuracy of different economic projections dealing with the aftermath of the 2008 crisis. "Our method clearly outperformed all standard econometric forecasting methods -- most of them substantially," the authors state.
They also estimated the effects of Donald Trump's new tariffs on EU steel and aluminum, imposed in June 2018. The model finds winners and losers. In Germany, for instance, the output of the automotive industry increases, as do legal activities or wholesale trade. On the other hand, electricity production, warehousing or land transport show a decline.
"Interestingly, the production output in most of the EU countries rises so much that it can partly compensate the losses due to the tariffs," says Thurner. As soon as statistics for 2018 and 2019 are available, the team will test its prediction with real-world data. "Our vision is to eventually be able to calculate global effects of all kinds of shock scenarios that can happen anywhere," the complexity scientist adds.
A concept from physics
The concept for the new model is inspired by classical physics: Linear response theory (LRT) explains, for example, how electric or magnetic substances react to external electric or magnetic fields. This response is known as susceptibility. It can be measured with special devices, but it can also be mathematically derived from properties of the material. "We show that LRT applies just as well to input-output economics," says Peter Klimek. "Instead of material properties, we use economic networks; instead of electrical resistance, we determine the susceptibility of economies, their response to shocks."
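As a rough illustration of the input-output idea (not the CSH model itself), the toy example below propagates a demand shock through an invented three-sector Leontief network; the inverse matrix plays the role of the susceptibility described above:

```python
import numpy as np

# Toy illustration of shock propagation in an input-output network.
# This is NOT the CSH model; the 3x3 technical-coefficient matrix A is
# invented. In a Leontief system, gross output x satisfies x = A x + d,
# so x = (I - A)^(-1) d, and the response to a demand shock delta_d is
# delta_x = (I - A)^(-1) delta_d -- a linear-response-style susceptibility.

A = np.array([           # A[i, j]: input from sector i per unit output of sector j
    [0.10, 0.30, 0.05],
    [0.20, 0.05, 0.25],
    [0.05, 0.15, 0.10],
])
sectors = ["steel", "autos", "logistics"]

leontief_inverse = np.linalg.inv(np.eye(3) - A)

# A tariff-like shock: final demand for "autos" drops by one unit.
delta_d = np.array([0.0, -1.0, 0.0])
delta_x = leontief_inverse @ delta_d

for name, dx in zip(sectors, delta_x):
    print(f"{name}: output change {dx:+.2f}")
```

A shock to one sector shows up, attenuated, in every sector connected to it, which is the network effect the researchers quantify with real input-output data.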
Visualizing economies
To make it intuitively understandable how economies work, scientists at the CSH employ an interactive visualization tool. It will be continually fed with new data, and the final version should represent the whole world economy.

Firms are better off revealing their environmental practices

Is honesty the best policy when it comes to being green?
It just might be, according to a new paper by Michel Magnan, a professor of accountancy at the John Molson School of Business.
In their article for Sustainability Accounting, Management and Policy Journal, Magnan and co-author Hani Tadros of Elon University in North Carolina looked at 78 US firms in environmentally sensitive industries from 1997 to 2010. They wanted to deepen their understanding of the driving forces behind the firms' disclosure of their environmental practices and management.
"There is tension out there," says Magnan, the Stephen A. Jarislowsky Chair in Corporate Governance. "Many people are skeptical and will adopt a cynical perspective regarding what corporations choose to disclose."
With public trust in business broadly in decline, it may be natural to assume that most firms pad their numbers or obscure their environmental behaviour. But Magnan says he has found that this is not the case.
Many are keenly aware of growing environmental concerns among members of the public, including investors and consumers of their products. In response, some are quite literally cleaning up their act.
What is said vs. what is done
The researchers separated the firms they studied into two groups based on the data they collected, including public information and the firms' annual disclosure reports and regulatory filings.
The companies whose environmental performance scored positively when compared to existing government regulations (meaning they respected guidelines on pollution, emissions and so on) were designated "high performers." Those that did poorly were designated "low performers."
"High- and low-performing firms will adopt different patterns when it comes to disclosure," explains Magnan. "High performers will provide more information because they are doing well and want to convey that message to their various stakeholders. Poor performers, meanwhile, will try to manage impressions in some way."
The researchers paid close attention to the usefulness of the information firms disclosed. They preferred data that was objectively verifiable and quantitative -- they called that "hard" information. "Soft" information generally consisted of vague statements, unattached to specifics.
They found that high-performing corporations were more likely to disclose hard information because they could afford to be forthcoming. They were using their disclosure as a way of building trust and earning public goodwill, which pays dividends down the line.
"If more disclosure raises your market value, it makes sense," Magnan says.
Look for good, clean facts
With stakeholders paying more attention to environmental issues, Magnan says there is added pressure on firms to come clean on their environmental performance. He sees corporate culture heading in that direction already.
"Some firms will be more forthcoming because that is their governance model, and they feel that it is better to be forthcoming early on," he says. "The costs will be less, and it shows they are operating in good faith."
Companies that engage in practices designed to obfuscate, deny or lie about poor environmental performances are likely to suffer serious consequences, he adds.
"In the short run, that kind of behaviour may help you, but in the long run it may come back to hurt you. Everything becomes public at some point."

Bosses who put their followers first can boost their business

Companies would do well to tailor training and recruitment measures to encourage managers who have empathy and integrity and are trustworthy -- because such managers can improve productivity, according to new research from the University of Exeter Business School.
Bosses who are so-called 'servant leaders' create a positive culture of trust and fairness in the workplace. In turn, they benefit through creating loyal and positive teams. This type of manager has personal integrity and is also keen to encourage staff development. The new research shows clear evidence of a link between this style of leadership and an increase in productivity.
Researchers examined 130 independent studies which had previously been published and used them to test a number of theories.
"Our work shows that, as we expected, a 'servant leader' style of management which is ethical, trustworthy and has a real interest in the wellbeing and development of staff brings about real positives within the workplace," said Dr Allan Lee, the lead author of the report and Senior Lecturer in Management.
"Employees are more positive about their work and therefore also often feel empowered to become more creative. The result is a rise in productivity."
The analysis also found that this style of leadership often creates a positive and valued working relationship between the manager and employee.
"Given the results, we recommend organisations look to put 'servant leaders' into influential positions and that training programmes and selection processes are aligned to make this happen," added Dr Lee.
The results also suggest that it would benefit organisations to create, or reinforce a culture that positively promotes trust, fairness, and high-quality working relationships between managers and staff.

New studies highlight challenge of meeting Paris Agreement climate goals

New research highlights the "incredible challenge" of reaching the Paris Agreement without intense action and details the extreme temperatures parts of the planet will suffer if countries fail to reduce emissions.
The world reached an agreement in December 2015 on curtailing greenhouse gas emissions with the goal of avoiding a 2-degree Celsius increase in average global temperature above pre-industrial levels. Ideally, the treaty's goal is to limit this increase to 1.5 degrees Celsius. The United States delivered notice to the United Nations in August 2018 of the country's intention to withdraw from the Paris Agreement, joining Syria as one of only two countries in the world not party to the treaty.
Two new studies published in the AGU journals Geophysical Research Letters and Earth's Future now show some of the goals set forth in the agreement might be difficult to reach without much sacrifice.
The new research shows future climate extremes depend on the policy decisions made by major emitters, and that even if major emitters were to strengthen their commitments to reducing emissions, the rest of the world would have to immediately reduce their greenhouse gases to zero to achieve the Paris 2015 goal.
"Simply put, these papers highlight the incredible challenge the 2015 Paris Agreement presented to the world," said Dáithí Stone, a climate scientist with the National Institute of Water and Atmospheric Research, a crown-owned research company in New Zealand, who was not involved in either of the studies.
Importance of major emitters
The first study, published in AGU's Geophysical Research Letters, found none of the world's major carbon emitters, including the U.S., China and the European Union, have made commitments calculated to align with limiting climate warming to a 2-degree Celsius increase above pre-industrial levels.
If these major emitters fail to enact stronger policy changes curtailing their emissions more significantly, specific parts of the world like eastern North America and Central Europe will experience periods of extreme temperatures, according to the new study.
"What's going on now matters, and it matters at the emitter level," said Sophie Lewis, a senior lecturer at the University of New South Wales and the lead author of the new study.
She and her co-authors used models projecting future climate patterns in certain parts of the world to show how the failure of these high-emitting countries would directly lead to problems there.
In many regions of the world, future extreme temperature events depend on the current and future carbon dioxide emissions reductions adopted by major emitters, according to the new research. For example, if the U.S. fails to limit the country's emissions, it will lead directly to extreme temperatures in places like Central Europe and eastern North America.
Lewis said not all future impacts are clear, but the data is good enough for Central Europe and eastern North America to show how an average world temperature increase would directly impact those regions.
"In Central Europe it was really clear that there was so much to gain by limiting temperature increase to 1.5 or 2 degrees," she said.
While Lewis said the onus will be on all countries in the future to limit the impact of climate change, the high-emitting regions of the world have an important role in leading reductions. By implementing stronger climate pledges, major emitters can reduce the frequency of future extremes and their own calculated contributions to these temperature extremes, the study's authors noted.
Studies like this are important since they can be used in the future to hold large emitters accountable for failure to limit the effects of climate change, according to the study's authors.
"Extending standard methods of evaluating the increased risk of extreme events due to climate change, they quantify the contribution of individual large carbon dioxide emitter nations to future risk increases," said Michael Wehner, a senior staff scientist at the Lawrence Berkeley National Laboratory managed by the University of California, who was not involved in either of the recent studies.
"As the authors point out, this provides one method to assign liability for loss and damages during extreme weather events," Wehner said.
The tough future for developing countries
In a second study, published in Earth's Future, researchers found the come-one-come-all approach to global climate mitigation set forth in the Paris Agreement masks a huge challenge faced by developing countries.
Even if the U.S., China, the European Union and India increased their contributions to limit emissions, the rest of the world would need to drop to virtually zero emissions by 2030 in order for the planet to reach its goal of limiting the increase in temperature from pre-industrial times to 1.5 degrees Celsius, according to the new study.
The authors of the recent study said it would not be technically, politically or socially feasible for many of the world's countries to reach this goal.
"It's very easy to talk about the global average. But as soon as you peel back one layer of the onion, on the country level, these rules don't apply anymore," said Glen Peters, research director for the Center for International Climate Research (CICERO) in Norway and a co-author of the second study.
He said high-emitting countries have already done much of the damage when it comes to emissions, while the rest of the world is now expected to limit their industrial growth and development to reach global emissions goals.
"The pie is so small you're basically going to starve developing countries unless there is a huge increase (in emission reductions) from countries like the U.S.," he said.
According to Wehner the "inequity of global warming between the developed and developing nations" is revealed by both new studies.
"Undoubtedly, in the absence of new energy technologies, there would be significant negative implications for the modernization of developing nations and the alleviation of poverty if they were required to reduce emissions as outlined by [Peters' paper]," he said.
A way forward
Peters, the co-author of the paper in Earth's Future, said while his findings are grim, the world should not give up on reaching emissions targets. He said historically high-emitting countries like the U.S. and parts of Europe should commit to more reductions than the developing world to make up for past emissions.
Peters and his co-authors argue that in order to meet the goals of the Paris Agreement, leading countries need to develop low-, zero- or even negative-carbon-emissions energy technologies that can be deployed at scale in the developing world.
Stone, with the National Institute of Water and Atmospheric Research, said Peters' study shows no one country can slip up in the goal to meet climate goals.
"It is hard to argue against their conclusion that we need to start seriously considering options such as the deployment of solar geoengineering, with all of the risks that entails, if the world is serious about achieving the Paris Agreement goals," he said.

Number of women who aren't physically active enough is high and growing

Using data from a national survey representing more than 19 million U.S. women with established cardiovascular disease, Johns Hopkins Medicine researchers report that more than half of women with the condition do not get enough physical activity, and that the proportion has grown over the last decade. The results imply that targeted counseling to exercise more could reduce these women's cardiovascular risk as well as the associated health care costs over their lifetimes.
The researchers say their results suggest that women diagnosed with such disorders as coronary artery disease, stroke, heart failure, heart rhythm disturbances and peripheral artery disease should talk to their physicians about how to increase their physical activity levels to maintain optimal cardiac health and decrease health care costs associated with cardiac disability.
According to the American Heart Association (AHA), heart disease remains the #1 killer of American women, 43 million of whom are affected by the condition. The study, described in the April 12, 2019, issue of JAMA Network Open, notes that total health care costs among women with cardiovascular disease who met AHA-recommended physical activity guidelines were about 30 percent less than costs among those who did not meet the guidelines.
"Physical activity is a known, cost-effective prevention strategy for women with and without cardiovascular disease, and our study shows worsening health and financial trends over time among women with cardiovascular disease who don't get enough physical activity," says Victor Okunrintemi, M.D., M.P.H., a former Johns Hopkins Medicine research fellow who is now an internal medicine resident at East Carolina University. "We have more reason than ever to encourage women with cardiovascular disease to move more."
The AHA strongly recommends physical activity to reduce a woman's chances of developing cardiovascular disease (so-called primary prevention) and to advance and maintain recovery after heart attack or stroke (so-called secondary prevention). The standard recommendation is 150 minutes of moderate to vigorous physical activity per week, which works out to at least 30 minutes of brisk movement per day, five days a week. Previous studies have shown that over the span of a lifetime, men are on average more physically active than women.
In the current study, researchers used data from the 2006-2015 Medical Expenditure Panel Survey, a self-reported questionnaire of individual households across the nation administered by the U.S. Agency for Healthcare Research and Quality. The results are based on data from 18,027 women with cardiovascular disease between the ages of 18 and 75, including non-Hispanic whites (77.5 percent), Asians (2.3 percent), African Americans (12.2 percent) and Hispanics (8 percent), a sample that is nationally representative of all U.S. women with cardiovascular disease. The researchers compared answers collected in 2006-2007 against those collected in 2014-2015 to assess trends.
In 2006, 58 percent of women with cardiovascular disease said they were not meeting the AHA-recommended physical activity guidelines. By 2015, that number rose to 61 percent.
Researchers also found that women ages 40-64 were the fastest growing age group not getting enough physical activity, with 53 percent reporting insufficient exercise in 2006-2007 and 60 percent in 2014-2015. African American and Hispanic women were more likely to not exercise enough, as were women from low-income households, those enrolled in public insurance and those with less than a high school education. Average health care costs among women with cardiovascular disease who did not exercise enough were $12,724 in 2006-2007, compared with $14,820 in 2014-2015. Women with cardiovascular disease who did get enough exercise spent, on average, $8,811 in 2006-2007 and $10,504 in 2014-2015.
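For readers who want to check the figures, the reported trends reduce to a few lines of arithmetic. The minimal Python sketch below uses only the dollar amounts and percentages quoted above; the helper code itself is illustrative and not part of the study.

```python
# Reproduce the arithmetic behind the reported trends using the article's figures.

inactive_costs = {"2006-2007": 12_724, "2014-2015": 14_820}   # women not meeting guidelines
active_costs   = {"2006-2007": 8_811,  "2014-2015": 10_504}   # women meeting guidelines
inactivity_rate_40_64 = {"2006-2007": 0.53, "2014-2015": 0.60}

def pct_change(old: float, new: float) -> float:
    """Percent change from an earlier value to a later one."""
    return (new - old) / old * 100

print(f"Cost growth, inactive group: {pct_change(*inactive_costs.values()):.1f}%")  # ~16.5%
print(f"Cost growth, active group:   {pct_change(*active_costs.values()):.1f}%")    # ~19.2%

# Gap between the two groups in the later period -- roughly the ~30 percent
# difference cited earlier in the article.
gap = (inactive_costs["2014-2015"] - active_costs["2014-2015"]) / inactive_costs["2014-2015"]
print(f"Active women's costs were about {gap:.0%} lower in 2014-2015")

pp = (inactivity_rate_40_64["2014-2015"] - inactivity_rate_40_64["2006-2007"]) * 100
print(f"Inactivity among ages 40-64 rose by {pp:.0f} percentage points")             # 7 points
```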
The researchers caution that the study was not designed to show cause and effect, but to identify 10-year trends in the levels of physical activity among U.S. women across various demographic groups defined by age, race/ethnicity and socio-economic factors, and to describe associations of physical inactivity with health care costs. Lack of regular physical activity has been independently linked in scores of previous studies to a higher risk of cardiovascular disease, obesity and diabetes.
"The expense of poor health is tremendous," says Erin Michos M.D., M.H.S., associate professor of medicine at the Johns Hopkins University School of Medicine. "Many high-risk women need encouragement to get more physically active in hopes of living healthier lives while reducing their health care costs."
Researchers say there is a need to tailor specific interventions to the most affected groups -- including older women, women of lower socioeconomic status and minorities -- and to encourage the physicians who care for them to more consistently promote cardiac rehabilitation referrals and safe exercise tips.

Particulate matter takes away 125,000 years of healthy life from Europe's child population

A study led by the Barcelona Institute for Global Health (ISGlobal), a centre supported by "la Caixa," has estimated the disease burden associated with various environmental exposures among the child population of Europe, and once again highlights the risk posed by air pollution. The study calculates that every year exposure to particulate matter smaller than 10 micrometres (PM10) and smaller than 2.5 micrometres (PM2.5) in diameter takes away 125,000 years of healthy life from children in Europe.
This analysis, published in the International Journal of Environmental Research and Public Health, assessed the burden of disease for the child population of the 28 countries in the European Union for seven environmental risk factors: air pollution -- PM10, PM2.5 and ozone -- passive tobacco smoke, humidity, lead and formaldehyde.
Population and health data were compiled from several European databases and the analysis of the environmental burden of disease was conducted in line with the comparative risk assessment approach proposed by the World Health Organisation (WHO) and the Global Burden of Disease (GBD) project. The researchers calculated disability-adjusted life years (DALYs), a measure of overall burden of disease expressed as the number of years of healthy life lost to illness, disability or premature death.
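As a rough illustration of how DALYs are assembled in WHO/GBD-style analyses, the Python sketch below implements the standard bookkeeping of years of life lost plus years lived with disability. The input values are hypothetical placeholders, not figures from this study, which aggregates far more detailed, exposure-specific data.

```python
# Minimal sketch of WHO/GBD-style DALY bookkeeping; all inputs are hypothetical.

def years_of_life_lost(deaths: float, life_expectancy_at_death: float) -> float:
    """YLL: premature deaths weighted by the remaining life expectancy lost."""
    return deaths * life_expectancy_at_death

def years_lived_with_disability(cases: float, disability_weight: float,
                                duration_years: float) -> float:
    """YLD: cases weighted by severity (0-1) and how long the condition lasts."""
    return cases * disability_weight * duration_years

def dalys(yll: float, yld: float) -> float:
    """DALYs combine fatal and non-fatal burden into one 'healthy years lost' figure."""
    return yll + yld

# Hypothetical example: 100 premature deaths losing 70 years each, plus
# 50,000 childhood asthma cases with disability weight 0.04 lasting one year.
example = dalys(years_of_life_lost(100, 70),
                years_lived_with_disability(50_000, 0.04, 1.0))
print(f"{example:,.0f} DALYs")   # 7,000 + 2,000 = 9,000 healthy life-years lost
```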
The conclusions show that the environmental exposure factors included in the study take away 211,000 years of healthy life from the European population under 18 years old, accounting for 2.6% of the total burden of disease in this age group. Air pollution (PM10, PM2.5 and ozone) was the most harmful exposure, causing up to 70% of the years of healthy life lost, followed by passive smoking at 20%.
"The environmental factors included in the study were chosen according to various criteria: they are the exposures for which the most data exist at national level and also those for which there is evidence of a causal relationship with effects on health, among others," states David Rojas, the lead author of the study.
The researcher emphasises that "out of all the risks studied, particulate matter causes the greatest burden of disease, as it is associated with respiratory, cardiovascular and neurological illnesses, among others, as well as with higher infant mortality." "In fact, its real impact may be higher than our estimates indicate, as we have only taken into account effects on infant mortality and asthma in the case of PM10, and lower respiratory tract infections in the case of PM2.5."
Out of the 28 countries included in the study, 22 -- the exceptions were Luxembourg, Ireland, Sweden, Estonia, Finland and Denmark -- reported PM10 levels above those recommended by the WHO (an annual average below 20 µg/m3), and all of them showed ozone levels above those considered safe (an average of 100 µg/m3 over eight hours).
Mark Nieuwenhuijsen, coordinator of the study and of the Urban Planning, Environment and Health Initiative at ISGlobal, points out that "this study shows the pressing need to implement effective policies to reduce children's exposure to environmental risk factors throughout Europe, paying special attention to air pollution and passive smoking." The researcher also points out that "common European databases need to be created to compile and harmonise exposure data for environmental risk factors, especially in childhood, as well as conducting epidemiological studies of multiple environmental risk factors."

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...