
Thursday, 16 November 2023

This 3D printer can watch itself fabricate objects

 With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans.

These multimaterial 3D printing systems utilize thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used.

Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer utilizes computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.

Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.

In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.

The researchers used this printer to create complex, robotic devices that combine soft and rigid materials. For example, they made a completely 3D-printed robotic gripper shaped like a human hand and controlled by a set of reinforced, yet flexible, tendons.

"Our key insight here was to develop a machine vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next," says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich; co-corresponding author Robert Katzschmann PhD '18, an assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; and others at ETH Zurich and Inkbit. The research will appear in Nature.

Contact free

This paper builds off a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.

With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices.

They developed a technique, known as vision-controlled jetting, which utilizes four high-frame-rate cameras and two lasers that rapidly and continuously scan the print surface. The cameras capture images as thousands of nozzles deposit tiny droplets of resin.

The computer vision system converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
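
In control terms, the loop is a simple proportional correction applied per nozzle: measure the surface, compare it with the sliced CAD model, and deposit more or less resin accordingly. Below is a minimal sketch of that idea; all names and parameters (droplet height, per-pass limits, the gain) are illustrative assumptions, not details from the paper.

```python
import numpy as np

DROPLET_HEIGHT_UM = 20.0   # assumed height one droplet adds, in microns
MAX_DROPLETS = 4           # assumed per-nozzle limit per pass

def adjust_deposition(depth_map, target_map, base_droplets, gain=0.5):
    """One feedback pass: deposit more resin where the print runs low
    and less where it runs high, independently for each nozzle region."""
    error = target_map - depth_map                 # positive -> under-filled
    correction = gain * error / DROPLET_HEIGHT_UM  # error in droplet units
    droplets = np.clip(base_droplets + correction, 0, MAX_DROPLETS)
    return np.rint(droplets).astype(int)

# toy example: a 4x4 patch of the build surface, heights in microns
target = np.full((4, 4), 100.0)
measured = target + np.random.uniform(-30, 30, size=(4, 4))
print(adjust_deposition(measured, target, base_droplets=2))
```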

"Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting," says Katzschmann.

The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.

Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn't need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually, and would be smeared by a scraper.

Superior materials

The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don't break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don't degrade as quickly when exposed to sunlight.

"These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment," says Katzschmann.

The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.

"We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system's ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure," says Buchner.

The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties.

"This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn't be used in 3D printing before," Matusik says.

New deep learning AI tool helps ecologists monitor rare birds through their songs

 Researchers have developed a new deep learning AI tool that generates life-like birdsongs to train bird identification tools, helping ecologists to monitor rare species in the wild. The findings are presented in the British Ecological Society journal, Methods in Ecology and Evolution.

Identifying common bird species through their song has never been easier, with numerous phone apps and software available to both ecologists and the public. But what if the identification software has never heard a particular bird before, or only has a small sample of recordings to reference? This is a problem facing ecologists and conservationists monitoring some of the world's rarest birds.

To overcome this problem, researchers at the University of Moncton, Canada, have developed ECOGEN, a first-of-its-kind deep learning tool that can generate lifelike bird sounds to enhance the samples of underrepresented species. These can then be used to train audio identification tools used in ecological monitoring, which often have disproportionately more information on common species.

The researchers found that adding artificial birdsong samples generated by ECOGEN to a birdsong identifier improved the bird song classification accuracy by 12% on average.

Dr Nicolas Lecomte, one of the lead researchers, said: "Due to significant global changes in animal populations, there is an urgent need for automated tools, such as acoustic monitoring, to track shifts in biodiversity. However, the AI models used to identify species in acoustic monitoring lack comprehensive reference libraries.

"With ECOGEN, you can address this gap by creating new instances of bird sounds to support AI models. Essentially, for species with limited wild recordings, such as those that are rare, elusive, or sensitive, you can expand your sound library without further disrupting the animals or conducting additional fieldwork."

The researchers say that creating synthetic bird songs in this way can contribute to the conservation of endangered bird species and also provide valuable insight into their vocalisations, behaviours and habitat preferences.

The ECOGEN tool has other potential applications. For instance, it could be used to help conserve extremely rare species, like the critically endangered regent honeyeater, whose young are unable to learn their species' songs because there aren't enough adult birds to learn from.

The tool could benefit other types of animal as well. Dr Lecomte added: "While ECOGEN was developed for birds, we're confident that it could be applied to mammals, fish (yes they can produce sounds!), insects and amphibians."

As well as its versatility, a key advantage of the ECOGEN tool is its accessibility: it is open source and can be used on even basic computers.

ECOGEN works by converting real recordings of bird songs into spectrograms (visual representations of sounds) and then generating new AI images from these to increase the dataset for rare species with few recordings. These spectrograms are then converted back into audio to train bird sound identifiers. In this study the researchers used a dataset of 23,784 wild bird recordings from around the world, covering 264 species.
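
As a rough illustration of that spectrogram round trip, here is a minimal Python sketch. The "generator" step is a stand-in (simple jitter on the magnitude spectrogram); ECOGEN's actual generator is a trained deep generative model, and the function and parameter names here are hypothetical.

```python
import numpy as np
from scipy.signal import stft, istft

def augment_song(audio, fs, n_aug=5, noise_scale=0.05):
    """Conceptual stand-in for ECOGEN's pipeline: recording ->
    spectrogram -> new spectrogram-like images -> audio."""
    f, t, Z = stft(audio, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    synthetic = []
    for _ in range(n_aug):
        # a real generator would synthesize a new spectrogram here
        jittered = mag * (1 + noise_scale * np.random.randn(*mag.shape))
        _, new_audio = istft(jittered * np.exp(1j * phase), fs=fs)
        synthetic.append(new_audio)
    return synthetic  # extra training samples for a bird-song classifier

fs = 22050
song = np.sin(2 * np.pi * 3000 * np.arange(fs) / fs)  # 1-second toy "song"
extras = augment_song(song, fs)
print(len(extras), extras[0].shape)
```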

Thursday, 9 November 2023

New twist on optical tweezers

 Optical tweezers manipulate tiny things like cells and nanoparticles using lasers. While they might sound like tractor beams from science fiction, the fact is their development garnered scientists a Nobel Prize in 2018.

Scientists have now used supercomputers to make optical tweezers safer to use on living cells with applications to cancer therapy, environmental monitoring, and more.

"We believe our research is one significant step closer towards the industrialization of optical tweezers in biological applications, specifically in both selective cellular surgery and targeted drug delivery," said Pavana Kollipara, a recent graduate of The University of Texas at Austin. Kollipara co-authored a study on optical tweezers published August 2023 in Nature Communications, written just before he completed his PhD in mechanical engineering under fellow study co-author Yuebing Zheng of UT Austin, the corresponding author of the paper.

Optical tweezers trap and move small particles because light has momentum, which can transfer to an impacted particle. Intensified light in lasers amps it up.
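
A quick back-of-the-envelope calculation shows the scale of the forces involved. For a beam that is fully absorbed, the momentum flux of light gives a force F = P/c; the power below is an assumed, typical tweezer-scale value, and real trapping forces are only a small fraction of this upper bound.

```python
# Momentum transfer from light: fully absorbed beam exerts F = P / c
# (a perfect mirror doubles it).
P = 100e-3   # laser power in watts -- assumed, typical tweezer scale
c = 3.0e8    # speed of light, m/s
print(f"F = {P / c * 1e12:.0f} pN")  # ~333 pN; large on a micron-sized particle
```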

Kollipara and colleagues took optical tweezers one step further by developing a method to keep the targeted particle cool, using a heat sink and thermoelectric cooler. Their method, called hypothermal optothermophoretic tweezers (HOTTs), can achieve low-power trapping of diverse colloids and biological cells in their native fluids.

This latest advancement could help overcome a key problem with current optical tweezers: the laser light heats the sample too much for biological applications.

"The main idea of this work is simple," Kollipara said. "If the sample is getting damaged because of the heat, just cool the entire thing down, and then heat it with the laser beam. Eventually, when the target such as a biological cell gets trapped, the temperature is still close to the ambient temperature of 27-34 °C. You can trap it at lower laser power and control the temperature, thereby removing photon or thermal damage to the cells."

The science team tested their HOTT on human red blood cells, which are sensitive to temperature changes.

"Using conventional optical tweezers, the cell structure is damaged and they die immediately. We have demonstrated that, no matter what kind of solution the cells are dispersed in, our technique can safely trap and manipulate them. That was one of the major findings in the study," Kollipara said.

Another finding applies to drug delivery applications. Plasmonic vesicles, tiny gold nanoparticle-coated bio-containers, were trapped and moved to different locations inside a solution without being ruptured, analogous to guiding drugs to a targeted cancer tumor. Once they reach the cancer target, they are hit with a secondary laser beam to burst open the drug cargo.

"Laser induced drug delivery is important because we can focus and deliver drugs on a particular target. This way, the amount of drugs a patient consumes goes down significantly, and you can specify at what locations you can administer the drug," Kollipara added.

Supercomputer simulations were needed to compute full-scale 3D force magnitudes on the particles from the optical, thermophoretic, and thermoelectric fields achieved at a particular laser power. While a PhD student at UT Austin, Kollipara was awarded allocations on TACC's Stampede2, a national strategic resource shared by thousands of scientists and funded by the National Science Foundation.

"The system is so complex in terms of computational cost requirements that our local workstations cannot support it. We would need to run a simulation for days to achieve just one data point, and we need thousands. TACC has helped us in our analysis and generates results orders of magnitude faster than anything else that we have," Kollipara said.

More broadly, and not directly for this study, Kollipara's plasmonic biosensor research has also used TACC's Lonestar5 system to run more extensive simulations. Lonestar5, and now Lonestar6, specifically serve scientists in the UT System through the University of Texas Research Cyberinfrastructure (UTRC).

"Building a complicated model alone is not enough," Kollipara said": 'You need to ensure that it is running properly through experimentation. Laptop computers are not sufficient for the needs of hardcore research and development. That's where supercomputing resources like those at TACC help researchers push research and development as fast as possible and keep up with human needs."

Researchers discover new ultra strong material for microchip sensors

 Researchers at Delft University of Technology, led by assistant professor Richard Norte, have unveiled a remarkable new material with potential to impact the world of material science: amorphous silicon carbide (a-SiC). Beyond its exceptional strength, this material demonstrates mechanical properties crucial for vibration isolation on a microchip. Amorphous silicon carbide is therefore particularly suitable for making ultra-sensitive microchip sensors.

The range of potential applications is vast. From ultra-sensitive microchip sensors and advanced solar cells, to pioneering space exploration and DNA sequencing technologies. The advantages of this material's strength combined with its scalability make it exceptionally promising.

Ten medium-sized cars

"To better understand the crucial characteristic of "amorphous," think of most materials as being made up of atoms arranged in a regular pattern, like an intricately built Lego tower," explains Norte. "These are termed as "crystalline" materials, like for example, a diamond. It has carbon atoms perfectly aligned, contributing to its famed hardness." However, amorphous materials are akin to a randomly piled set of Legos, where atoms lack consistent arrangement. But contrary to expectations, this randomisation doesn't result in fragility. In fact, amorphous silicon carbide is a testament to strength emerging from such randomness.

The tensile strength of this new material is 10 gigapascals (GPa). "To grasp what this means, imagine trying to stretch a piece of duct tape until it breaks. Now if you'd want to simulate the tensile stress equivalent to 10 GPa, you'd need to hang about ten medium-sized cars end-to-end off that strip before it breaks," says Norte.
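
That analogy can be sanity-checked with simple arithmetic: stress is force divided by cross-sectional area. The width and thickness below are assumed duct-tape dimensions, not figures from the study.

```python
# stress = force / cross-sectional area, so force = stress * area
width = 48e-3        # m, a standard duct tape roll (assumed)
thickness = 0.3e-3   # m (assumed)
stress = 10e9        # Pa (10 GPa)
force = stress * width * thickness   # force needed to reach 10 GPa
mass = force / 9.81                  # equivalent hanging mass
print(f"{force/1e3:.0f} kN, about {mass/1e3:.1f} tonnes")
# -> 144 kN, about 14.7 tonnes: roughly ten 1.5-tonne cars
```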

Nanostrings

The researchers adopted an innovative method to test this material's tensile strength. Instead of traditional methods that might introduce inaccuracies from the way the material is anchored, they turned to microchip technology. By growing the films of amorphous silicon carbide on a silicon substrate and suspending them, they leveraged the geometry of the nanostrings to induce high tensile forces. By fabricating many such structures with increasing tensile forces, they meticulously observed the point of breakage. This microchip-based approach not only ensures unprecedented precision but also paves the way for future material testing.

Why the focus on nanostrings? "Nanostrings are fundamental building blocks, the very foundation that can be used to construct more intricate suspended structures. Demonstrating high yield strength in a nanostring translates to showcasing strength in its most elemental form."

From micro to macro

And what finally sets this material apart is its scalability. Graphene, a single layer of carbon atoms, is known for its impressive strength but is challenging to produce in large quantities. Diamonds, though immensely strong, are either rare in nature or costly to synthesize. Amorphous silicon carbide, on the other hand, can be produced at wafer scales, offering large sheets of this incredibly robust material.

"With amorphous silicon carbide's emergence, we're poised at the threshold of microchip research brimming with technological possibilities," concludes Norte.

Optical-fiber based single-photon light source at room temperature for next-generation quantum processing

 Quantum-based systems promise faster computing and stronger encryption for computation and communication systems. These systems can be built on fiber networks involving interconnected nodes which consist of qubits and single-photon generators that create entangled photon pairs.

In this regard, rare-earth (RE) atoms and ions in solid-state materials are highly promising as single-photon generators. These materials are compatible with fiber networks and emit photons across a broad range of wavelengths. Due to their wide spectral range, optical fibers doped with these RE elements could find use in various applications, such as free-space telecommunication, fiber-based telecommunications, quantum random number generation, and high-resolution image analysis. However, so far, single-photon light sources have been developed using RE-doped crystalline materials at cryogenic temperatures, which limits the practical applications of quantum networks based on them.

In a study published in Volume 20, Issue 4 of the journal Physical Review Applied on 16 October 2023, a team of researchers from Japan, led by Associate Professor Kaoru Sanaka from Tokyo University of Science (TUS), has successfully developed a single-photon light source consisting of doped ytterbium ions (Yb3+) in an amorphous silica optical fiber at room temperature. Associate Professor Mark Sadgrove and Mr. Kaito Shimizu from TUS and Professor Kae Nemoto from the Okinawa Institute of Science and Technology Graduate University were also a part of this study. This newly developed single-photon light source eliminates the need for expensive cooling systems and has the potential to make quantum networks more cost-effective and accessible.

"Single-photon light sources are devices that control the statistical properties of photons, which represent the smallest energy units of light," explains Dr. Sanaka. "In this study, we have developed a single-photon light source using an optical fiber material doped with optically active RE elements. Our experiments also reveal that such a source can be generated directly from an optical fiber at room temperature."

Ytterbium is an RE element with favorable optical and electronic properties, making it a suitable candidate for doping the fiber. It has a simple energy-level structure, and the ytterbium ion in its excited state has a long fluorescence lifetime of around one millisecond.

To fabricate the ytterbium-doped optical fiber, the researchers tapered a commercially available ytterbium-doped fiber using a heat-and-pull technique, where a section of the fiber is heated and then pulled with tension to gradually reduce its diameter.

Within the tapered fiber, individual RE atoms emit photons when excited with a laser. The separation between these RE atoms plays a crucial role in defining the fiber's optical properties. For instance, if the average separation between the individual RE atoms exceeds the optical diffraction limit, which is determined by the wavelength of the emitted photons, the emitted light from these atoms appears as though it is coming from clusters rather than distinct individual sources.

To confirm the nature of these emitted photons, the researchers employed an analytical method known as autocorrelation, which assesses the similarity between a signal and a delayed version of itself. By analyzing the emitted photon pattern using autocorrelation, the researchers observed non-resonant emissions and obtained evidence of photon emission from a single ytterbium ion in the doped fiber.
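
For readers curious what such a measurement looks like in practice, here is a toy version of the standard analysis: building the second-order correlation g2(tau) from photon arrival times at the two detectors of a Hanbury Brown-Twiss setup. A dip to g2(0) < 0.5 is the conventional signature of a single emitter. This is a conceptual sketch, not the team's analysis code.

```python
import numpy as np

def g2(t1, t2, bin_width=1e-9, max_tau=50e-9):
    """Second-order correlation g2(tau) from photon arrival times
    recorded at two detectors (toy implementation)."""
    taus = []
    for t in t1:                      # coincidence delays within +/- max_tau
        d = t2 - t
        taus.extend(d[np.abs(d) <= max_tau])
    bins = np.arange(-max_tau, max_tau + bin_width, bin_width)
    hist, edges = np.histogram(taus, bins)
    # normalize by coincidences expected for uncorrelated (Poissonian) light
    duration = max(t1.max(), t2.max())
    expected = len(t1) * len(t2) * bin_width / duration
    return edges[:-1] + bin_width / 2, hist / expected

# toy data: two detectors seeing independent Poissonian photon streams
rng = np.random.default_rng(0)
t1 = np.sort(rng.uniform(0, 1.0, 20000))   # arrival times, seconds
t2 = np.sort(rng.uniform(0, 1.0, 20000))
tau, g = g2(t1, t2)
print(f"mean g2 ~ {g.mean():.2f}")  # ~1.0 here; a single emitter dips below 0.5 at tau = 0
```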

While the quality and quantity of the emitted photons can be enhanced further, the developed ytterbium-doped optical fiber can be manufactured without the need for expensive cooling systems. This overcomes a significant hurdle and opens doors to various next-generation quantum information technologies. "We have demonstrated a low-cost single-photon light source with selectable wavelength and without the need for a cooling system. Going ahead, it can enable various next-generation quantum information technologies such as true random number generators, quantum communication, quantum logic operations, and high-resolution image analysis beyond the diffraction limit," concludes Dr. Sanaka.

What a '2D' quantum superfluid feels like to the touch

Researchers from Lancaster University in the UK have discovered how superfluid helium-3 (3He) would feel if you could put your hand into it.

The interface between the exotic world of quantum physics and the classical physics of human experience is one of the major open problems in modern physics.

Dr Samuli Autti is the lead author of the research published in Nature Communications.

Dr Autti said: "In practical terms, we don't know the answer to the question 'how does it feel to touch quantum physics?'

"These experimental conditions are extreme and the techniques complicated, but I can now tell you how it would feel if you could put your hand into this quantum system.

"Nobody has been able to answer this question during the 100-year history of quantum physics. We now show that, at least in superfluid 3He, this question can be answered."

The experiments were carried out at about a 10,000th of a degree above absolute zero in a special refrigerator and made use of a mechanical resonator the size of a finger to probe the very cold superfluid.

When stirred with a rod, superfluid 3He carries the generated heat away along the surfaces of the container. The bulk of the superfluid behaves like a vacuum and remains entirely passive.

Dr Autti said: "This liquid would feel two-dimensional if you could stick your finger into it. The bulk of the superfluid feels empty, while heat flows in a two-dimensional subsystem along the edges of the bulk -- in other words, along your finger."

The researchers conclude that the bulk of superfluid 3He is wrapped by an independent two-dimensional superfluid that interacts with mechanical probes instead of the bulk superfluid, only providing access to the bulk superfluid if given a sudden burst of energy.

That is, superfluid 3He at the lowest temperatures and applied energies is thermo-mechanically two-dimensional.

"This also redefines our understanding of superfluid 3He. For the scientist, that may be even more influential than hands-in quantum physics."

Superfluid 3He is one of the most versatile macroscopic quantum systems in the laboratory. It often influences seemingly distant fields such as particle physics (for example the Higgs mechanism), cosmology (Kibble mechanism), and quantum information processing (time crystals).

A redefinition of its basic structure may therefore have far-reaching consequences.

Monday, 23 October 2023

To excel at engineering design, generative AI must learn to innovate, study finds

 ChatGPT and other deep generative models are proving to be uncanny mimics. These AI supermodels can churn out poems, finish symphonies, and create new videos and images by automatically learning from millions of examples of previous works. These enormously powerful and versatile tools excel at generating new content that resembles everything they've seen before.

But as MIT engineers say in a new study, similarity isn't enough if you want to truly innovate in engineering tasks.

"Deep generative models (DGMs) are very promising, but also inherently flawed," says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. "The objective of these models is to mimic a dataset. But as engineers and designers, we often don't want to create a design that's already out there."

He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond "statistical similarity."

"The performance of a lot of these models is explicitly tied to how statistically similar a generated sample is to what the model has already seen," says co-author Faez Ahmed, assistant professor of mechanical engineering at MIT. "But in design, being different could be important if you want to innovate."

In their study, Ahmed and Regenwetter reveal the pitfalls of deep generative models when they are tasked with solving engineering design problems. In a case study of bicycle frame design, the team shows that these models end up generating new frames that mimic previous designs but falter on engineering performance and requirements.

When the researchers presented the same bicycle frame problem to DGMs that they specifically designed with engineering-focused objectives, rather than only statistical similarity, these models produced more innovative, higher-performing frames.

The team's results show that similarity-focused AI models don't quite translate when applied to engineering problems. But, as the researchers also highlight in their study, with some careful planning of task-appropriate metrics, AI models could be an effective design "co-pilot."

"This is about how AI can help engineers be better and faster at creating innovative products," Ahmed says. "To do that, we have to first understand the requirements. This is one step in that direction."

The team's new study appeared recently online, and will be in the December print edition of the journal Computer-Aided Design. The research is a collaboration between computer scientists at the MIT-IBM Watson AI Lab and mechanical engineers in MIT's DeCoDe Lab. The study's co-authors include Akash Srivastava and Dan Gutfreund at the MIT-IBM Watson AI Lab.

Framing a problem

As Ahmed and Regenwetter write, DGMs are "powerful learners, boasting unparalleled ability" to process huge amounts of data. DGM is a broad term for any machine-learning model that is trained to learn the distribution of a dataset and then use it to generate new, statistically similar content. The enormously popular ChatGPT is one type of deep generative model known as a large language model, or LLM, which incorporates natural language processing capabilities to generate realistic text in response to conversational queries. Other popular models for image generation include DALL-E and Stable Diffusion.

Because of their ability to learn from data and generate realistic samples, DGMs have been increasingly applied in multiple engineering domains. Designers have used deep generative models to draft new aircraft frames, metamaterial designs, and optimal geometries for bridges and cars. But for the most part, the models have mimicked existing designs without improving on their performance.

"Designers who are working with DGMs are sort of missing this cherry on top, which is adjusting the model's training objective to focus on the design requirements," Regenwetter says. "So, people end up generating designs that are very similar to the dataset."

In the new study, he outlines the main pitfalls in applying DGMs to engineering tasks, and shows that the fundamental objective of standard DGMs does not take into account specific design requirements. To illustrate this, the team invokes a simple case of bicycle frame design and demonstrates that problems can crop up as early as the initial learning phase. As a model learns from thousands of existing bike frames of various sizes and shapes, it might consider two frames of similar dimensions to have similar performance, when in fact a small disconnect in one frame -- too small to register as a significant difference in statistical similarity metrics -- makes the frame much weaker than the other, visually similar frame.

Beyond "vanilla"

The researchers carried the bicycle example forward to see what designs a DGM would actually generate after having learned from existing designs. They first tested a conventional "vanilla" generative adversarial network, or GAN -- a model that has widely been used in image and text synthesis, and is tuned simply to generate statistically similar content. They trained the model on a dataset of thousands of bicycle frames, including commercially manufactured designs and less conventional, one-off frames designed by hobbyists.

Once the model learned from the data, the researchers asked it to generate hundreds of new bike frames. The model produced realistic designs that resembled existing frames. But none of the designs showed significant improvement in performance, and some were even a bit inferior, with heavier, less structurally sound frames.

The team then carried out the same test with two other DGMs that were specifically designed for engineering tasks. The first model is one that Ahmed previously developed to generate high-performing airfoil designs. He built this model to prioritize statistical similarity as well as functional performance. When applied to the bike frame task, this model generated realistic designs that also were lighter and stronger than existing designs. But it also produced physically "invalid" frames, with components that didn't quite fit or overlapped in physically impossible ways.

"We saw designs that were significantly better than the dataset, but also designs that were geometrically incompatible because the model wasn't focused on meeting design constraints," Regenwetter says.

The last model the team tested was one that Regenwetter built to generate new geometric structures. This model was designed with the same priorities as the previous models, with the added ingredient of design constraints: it prioritized physically viable frames, for instance with no disconnections or overlapping bars. This last model produced the highest-performing designs that were also physically feasible.

"We found that when a model goes beyond statistical similarity, it can come up with designs that are better than the ones that are already out there," Ahmed says. "It's a proof of what AI can do, if it is explicitly trained on a design task."

For instance, if DGMs can be built with other priorities, such as performance, design constraints, and novelty, Ahmed foresees that "numerous engineering fields, such as molecular design and civil infrastructure, would greatly benefit. By shedding light on the potential pitfalls of relying solely on statistical similarity, we hope to inspire new pathways and strategies in generative AI applications."

Eyes may be the window to your soul, but the tongue mirrors your health

 A 2000-year-old practice by Chinese herbalists -- examining the human tongue for signs of disease -- is now being embraced by computer scientists using machine learning and artificial intelligence.

Tongue diagnostic systems are fast gaining traction due to an increase in remote health monitoring worldwide, and a study by Iraqi and Australian researchers provides more evidence of the increasing accuracy of this technology to detect disease.

Engineers from Middle Technical University (MTU) in Baghdad and the University of South Australia (UniSA) used a USB web camera and computer to capture tongue images from 50 patients with diabetes, renal failure and anaemia, comparing colours with a database of 9,000 tongue images.

Using image processing techniques, they correctly diagnosed the diseases in 94 per cent of cases, compared to laboratory results. A message specifying the tongue colour and disease was also sent via text to the patient or their nominated health provider.

MTU and UniSA Adjunct Associate Professor Ali Al-Naji and his colleagues have reviewed the worldwide advances in computer-aided disease diagnosis, based on tongue colour, in a new paper in AIP Conference Proceedings.

"Thousands of years ago, Chinese medicine pioneered the practice of examining the tongue to detect illness," Assoc Prof Al-Naji says.

"Conventional medicine has long endorsed this method, demonstrating that the colour, shape, and thickness of the tongue can reveal signs of diabetes, liver issues, circulatory and digestive problems, as well as blood and heart diseases.

"Taking this a step further, new methods for diagnosing disease from the tongue's appearance are now being done remotely using artificial intelligence and a camera -- even a smartphone.

"Computerised tongue analysis is highly accurate and could help diagnose diseases remotely in a safe, effective, easy, painless, and cost-effective way. This is especially relevant in the wake of a global pandemic like COVID, where access to health centres can be compromised."

Diabetes patients typically have a yellow tongue, cancer patients a purple tongue with a thick greasy coating, and acute stroke patients present with a red tongue that is often crooked.

A 2022 study in Ukraine analysing tongue images of 135 COVID patients via a smartphone showed that 64% of patients with a mild infection had a pale pink tongue, 62% of patients with a moderate infection had a red tongue, and 99% of patients with a severe COVID infection had a dark red tongue.

Previous studies using tongue diagnostic systems have accurately diagnosed appendicitis, diabetes, and thyroid disease.

"It is possible to diagnose with 80% accuracy more than 10 diseases that cause a visible change in tongue colour. In our study we achieved a 94% accuracy with three diseases, so the potential is there to fine tune this research even further," Assoc Prof Al-Naji says.

Ensuring fairness of AI in healthcare requires cross-disciplinary collaboration

 Pursuing fair artificial intelligence (AI) for healthcare requires collaboration between experts across disciplines, says a global team of scientists led by Duke-NUS Medical School in a new perspective published in npj Digital Medicine.

While AI has demonstrated potential for healthcare insights, concerns around bias remain. "A fair model is expected to perform equally well across subgroups like age, gender and race. However, differences in performance may have underlying clinical reasons and may not necessarily indicate unfairness," explained first author Ms Liu Mingxuan, a PhD candidate in the Quantitative Biology and Medicine (Biostatistics & Health Data Science) Programme and Centre for Quantitative Medicine (CQM) at Duke-NUS.

"Focusing on equity -- that is, recognising factors like race, gender, etc., and adjusting the AI algorithm or its application to make sure more vulnerable groups get the care they need -- rather than complete equality, is likely a more reasonable approach for clinical AI," said Dr Ning Yilin, Research Fellow with CQM and a co-first-author of the paper. "Patient preferences and prognosis are also crucial considerations, as equal treatment does not always mean fair treatment. An example of this is age, which frequently factors into treatment decisions and outcomes."

The paper highlights key misalignments between AI fairness research and clinical needs. "Various metrics exist to measure model fairness, but choosing suitable ones for healthcare is difficult as they can conflict. Trade-offs are often inevitable," commented Associate Professor Liu Nan also from Duke-NUS' CQM, senior and corresponding author of the paper.

He added, "Differences detected between groups are frequently treated as biases to be mitigated in AI research. However, in the medical context, we must discern between meaningful differences and true biases requiring correction."

The authors emphasise the need to evaluate which attributes are considered 'sensitive' for each application. They say that actively engaging clinicians is vital for developing useful and fair AI models.

"Variables like race and ethnicity need careful handling as they may represent systemic biases or biological differences," said Assoc Prof Liu. "Clinicians can provide context, determine if differences are justified, and guide models towards equitable decisions."

Overall, the authors argue that pursuing fair AI for healthcare requires collaboration between experts in AI, medicine, ethics and beyond.

"Achieving fairness in the use of AI in healthcare is an important but highly complex issue. Despite extensive developments in fair AI methodologies, it remains challenging to translate them into actual clinical practice due to the nature of healthcare -- which involves biological, ethical and social considerations. In order to advance AI practices to benefit patient care, clinicians, AI and industry experts need to work together and take active steps towards addressing fairness in AI," said co-author Associate Professor Daniel Ting, Director of SingHealth's AI Office and Associate Professor from the SingHealth Duke-NUS Ophthalmology & Visual Sciences Academic Clinical Programme. He is also Senior Consultant at the Singapore National Eye Centre and Head of AI & Digital Innovation at the Singapore Eye Research Institute (SERI).

"This paper highlights the complexities of translating AI fairness techniques into ethical clinical applications. It represents our collective commitment to developing AI that augments clinicians with trustworthy insights to provide quality and equitable care enhanced by technology," remarked co-author Clinical Associate Professor Lionel Cheng Tim-Ee, Chief Data & Digital Officer, Clinical Director (AI) Future Health System Department, and Senior Consultant, Department of Diagnostic Radiology at Singapore General Hospital (SGH).

"Clinicians must be actively engaged in iterative communication with AI developers to ensure models align with medical ethics and context," stressed senior co-author Professor Marcus Ong, Director of the Health Services & Systems Research (HSSR) Programme at Duke-NUS, who is also Senior Consultant at SGH's Department of Emergency Medicine. "Good intentions alone cannot guarantee fair AI unless we have collective oversight from diverse experts, considering all social and ethical nuances. Pursuing equitable and unbiased AI to improve healthcare will require open, cross-disciplinary dialogues."

The perspective published in npj Digital Medicine represents an international collaboration between researchers from institutions across Singapore, Belgium, and the United States. Authors from across the SingHealth Duke-NUS Academic Medical Centre (including Duke-NUS, SingHealth, SGH, Singapore Eye Research Institute and Singapore National Eye Centre) worked together with experts from the University of Antwerp in Belgium as well as Weill Cornell Medicine, Massachusetts Institute of Technology, Beth Israel Deaconess Medical Center and Harvard T.H. Chan School of Public Health in the United States.

Professor Patrick Tan, Senior Vice-Dean for Research at Duke-NUS, commented, "This global cooperation exemplifies the cross-disciplinary dialogues required to advance fair AI techniques for enhancing healthcare. We hope this collaborative effort spanning Singapore, Europe, and the US provides valuable perspectives to inspire further multinational partnerships towards equitable and unbiased AI."

Monday, 9 October 2023

Machine learning used to probe the building blocks of shapes

 Applying machine learning to find the properties of atomic pieces of geometry shows how AI has the power to accelerate discoveries in maths.

Mathematicians from Imperial College London and the University of Nottingham have, for the first time, used machine learning to expand and accelerate work identifying 'atomic shapes' that form the basic pieces of geometry in higher dimensions. Their findings have been published in Nature Communications.

The way they used artificial intelligence, in the form of machine learning, could transform how maths is done, say the authors. Dr Alexander Kasprzyk from the University of Nottingham said: "For mathematicians, the key step is working out what the pattern is in a given problem. This can be very difficult, and some mathematical theories can take years to discover."

Professor Tom Coates, from the Department of Mathematics at Imperial, added: "We have shown that machine learning can help uncover patterns within mathematical data, giving us both new insights and hints of how they can be proved."

PhD student Sara Veneziale, from the Department of Mathematics at Imperial, said: "This could be very broadly applicable, such that it could rapidly accelerate the pace at which maths discoveries are made. It's like when computers were first used in maths research, or even calculators: it's a step-change in the way we do maths."

Defining shapes

Mathematicians describe shapes using equations, and by analysing these equations can break the shape down into fundamental pieces. These are the building blocks of shapes, the equivalent of atoms, and are called Fano varieties.

The Imperial and Nottingham team began building a 'periodic table' of these Fano varieties several years ago, but finding ways of classifying them into groups with common properties has been challenging. Now, they have used machine learning to reveal unexpected patterns in the Fano varieties.

One aspect of a Fano variety is its quantum period -- a sequence of numbers that acts like a barcode or fingerprint. It has been suggested that the quantum period defines the dimension of the Fano variety, but there has been no theoretical proposal for how this works, so no way to test it on the huge set of known Fano varieties.

Machine learning, however, is built to find patterns in large datasets. By training a machine learning model on some example data, the team was able to show that the resulting model could predict the dimensions of Fano varieties from their quantum periods with 99% accuracy.
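
For intuition, the experiment has the flavor of the sketch below: featurize each quantum period and train a small neural network to predict the dimension. Everything in the snippet is illustrative; the synthetic data simply encodes a fake "dimension" in the growth rate of the coefficients, whereas the actual datasets and model details are in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, length = 2000, 30
dims = rng.integers(2, 11, size=n)   # stand-in "dimension" labels
# fake quantum-period coefficients whose growth rate depends on dimension
coeffs = np.exp(dims[:, None] * np.log(np.arange(1, length + 1))[None, :] / 10)
features = np.log(coeffs + 1) + 0.1 * rng.standard_normal((n, length))

X_tr, X_te, y_tr, y_te = train_test_split(features, dims, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```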

Coding the real world

The AI model doesn't conclusively prove the new statement, so the team then used more traditional mathematical methods to show that the quantum period defines the dimension -- using the AI model to guide them.

As well as using machine learning to discover new maths, the team say that the datasets used in maths could help refine machine learning models. Most models are trained on data taken from real life, such as health or transport data, which are inherently 'noisy' -- they contain a lot of randomness that to some degree masks the real information.

Mathematical data is 'pure' -- noise free -- and contains patterns and structures that underlie the data, waiting to be uncovered. This data can therefore be used as a testing ground for machine learning models, improving their ability to find new patterns.

Electronic sensor the size of a single molecule a potential game-changer

 Australian researchers have developed a molecular-sized, more efficient version of a widely used electronic sensor, in a breakthrough that could bring widespread benefits.

Piezoresistors are commonly used to detect vibrations in electronics and automobiles, such as in smart phones for counting steps, and for airbag deployment in cars. They are also used in medical devices such as implantable pressure sensors, as well as in aviation and space travel.

In a nationwide initiative, researchers led by Dr Nadim Darwish from Curtin University, Professor Jeffrey Reimers from the University of Technology Sydney, Associate Professor Daniel Kosov from James Cook University, and Dr Thomas Fallon from the University of Newcastle, have developed a piezoresistor that is about 500,000 times smaller than the width of a human hair.

Dr Darwish said they had developed a more sensitive, miniaturised type of this key electronic component, which transforms force or pressure to an electrical signal and is used in many everyday applications.

"Because of its size and chemical nature, this new type of piezoresistor will open up a whole new realm of opportunities for chemical and biosensors, human-machine interfaces, and health monitoring devices," Dr Darwish said.

"As they are molecular-based, our new sensors can be used to detect other chemicals or biomolecules like proteins and enzymes, which could be game-changing for detecting diseases."

Dr Fallon said the new piezoresistor was made from a single bullvalene molecule that when mechanically strained reacts to form a new molecule of different shape, altering electricity flow by changing resistance.

"The different chemical forms are known as isomers, and this is the first time that reactions between them have been used to develop piezoresistors," Dr Fallon said.

"We have been able to model the complex series of reactions that take place, understanding how single molecules can react and transform in real time."

Professor Reimers said the significance of this was the ability to electrically detect the change in the shape of a reacting molecule, back and forth, about once every millisecond.

"Detecting molecular shapes from their electrical conductance is a whole new concept of chemical sensing," Professor Reimers said.

Associate Professor Kosov said understanding the relationship between molecular shape and conductivity will allow basic properties of junctions between molecules and attached metallic conductors to be determined.

"This new capability is critical to the future development of all molecular electronics devices," Associate Professor Kosov said

New open-source method to improve decoding of single-cell data

Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a new open-source computational method, dubbed Spectra, which improves the analysis of single-cell transcriptomic data.

By guiding data analysis in a unique way, Spectra can offer new insights into the complex interplay between cells -- like the interactions between cancer cells and immune cells, which are critical to improving immunotherapy treatments.

The team's approach and findings were recently published in Nature Biotechnology.

Spectra, the researchers note, can cut through technical "noise" to identify functionally relevant gene expression programs, including those that are novel or highly specific to a particular biological context.

The algorithm is well suited to study data from large patient cohorts and to suss out clinically meaningful patient characteristics, the MSK team writes in a research briefing that accompanies the study, adding that Spectra is ideal for identifying biomarkers and drug targets in the burgeoning field of immuno-oncology.

Additionally, the MSK team has made Spectra freely available to researchers around the world.

"I'm trained as a computer scientist," says study senior author Dana Pe'er, PhD, who chairs the Computational and Systems Biology Program at MSK's Sloan Kettering Institute. "Every single tool I build, I strive to make robust so it can be used in many contexts, not just one. I also try and make them as accessible as possible."

"I'm happy to discover new biology," she continues. "And I'm just as happy -- perhaps happier -- to build a foundational tool that can be used by the wider community to make many biological discoveries."

Along with researchers at MSK, teams from several institutions are already using Spectra to study a variety of diseases, Dr. Pe'er adds.

The Single-Cell Revolution

Over the past decade, the "single-cell revolution" has transformed human understanding of health and disease. Single-cell technologies allow scientists to study the individual cells in a tissue sample or set of samples -- a tumor, for example -- and see not only the variety of cell types that are present (such as cancer cells versus various types of immune cells) but also which genes are active in each cell, shedding new light on cell states and cell interactions. The technology has fostered new understandings about how cells adapt and respond to changing conditions in health and disease -- including the development of resistance to cancer treatments.

The problem is that the mind-boggling amount of data generated by single-cell methods is challenging to sift through and correctly interpret. This is particularly true when trying to look at gene programs -- genes that work together to accomplish a particular task -- that are active across multiple cell types in a tissue, Dr. Pe'er explains.

"This is especially important for studying the interactions between cancer cells and immune cells, which involve highly overlapping gene programs," she says. "This causes some serious statistical problems that can lead to incredibly misleading results."

The team Dr. Pe'er assembled -- led by co-first authors Russell Kunes, a doctoral student trained in statistics, and Thomas Walle, MD, a physician-scientist with expertise in immuno-oncology -- not only developed the improved method for guiding the data analysis, but they also created a user-friendly interface to facilitate its adoption by other scientists.

"Our desire to develop an improved method brought together researchers with vastly different expertise in statistics, computational biology, and immunology," Dr. Walle writes in the research briefing. "Spectra was a journey of mutual learning, with the shared goal of making complex biology explainable."

Guiding the Data Analysis

For the paper, the researchers applied Spectra across two breast cancer immunotherapy data sets and a lung cancer atlas, together totaling more than 1.5 million cells from 375 individuals in 21 studies, demonstrating Spectra's ability to overcome the limitations of traditional analyses at scale.

Spectra gets its power by guiding the data analysis with a body of existing scientific knowledge -- libraries of gene programs generated from previous data by experts in their fields. And while this starting knowledge can directly guide single-cell data analysis, the program also adapts to the data at hand, helping to identify new and modified gene programs. (This adaptive property allowed the scientists to uncover a novel cancer invasion program in tumor-associated macrophages related to anti-PD-1 immunotherapy resistance, they note in the paper.)
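
Conceptually, this "guided" fitting resembles a factor model initialized with expert gene programs and then allowed to adapt to the data. The sketch below illustrates that idea with seeded non-negative matrix factorization; it is not the Spectra API, and the gene panel and program lists are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

genes = ["CD3D", "CD8A", "PDCD1", "MKI67", "CD68", "C1QA"]  # toy panel
prior_programs = {                    # stand-in "expert knowledge"
    "T_cell":     ["CD3D", "CD8A", "PDCD1"],
    "macrophage": ["CD68", "C1QA"],
}

# toy cells-by-genes expression matrix (non-negative)
X = np.abs(np.random.default_rng(1).standard_normal((500, len(genes))))

k = len(prior_programs)
H0 = np.full((k, len(genes)), 0.01)   # program-by-gene seed matrix
for i, members in enumerate(prior_programs.values()):
    for g in members:
        H0[i, genes.index(g)] = 1.0   # seed each program with its genes
W0 = np.full((X.shape[0], k), 0.1)

model = NMF(n_components=k, init="custom", max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)  # programs start at the prior...
print(model.components_.round(2))       # ...and adapt to the data
```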

Spectra's unique design also takes into account information about the genes that define different cell types, making it more adept at finding gene programs that underlie cellular functions, as opposed to cellular identity.

"Spectra, for example, allows us to separate T cells that are exhausted from the tumor-reactive T cells, which are actively fighting a person's cancer," Dr. Pe'er says. "And it helps us to see the differences in gene activation between the two -- which is quite challenging to unravel in complex contexts like the tumor microenvironment."

Additionally, the authors note, the ability to transfer gleanings from one data set directly to another will speed up and streamline discovery, allowing researchers to refine knowledge across single-cell sequencing studies without requiring complex data integration.

AI-driven earthquake forecasting shows promise in trials

 A new attempt to predict earthquakes with the aid of artificial intelligence has raised hopes that the technology could one day be used to limit earthquakes' impact on lives and economies. Developed by researchers at The University of Texas at Austin, the AI algorithm correctly predicted 70% of earthquakes a week before they happened during a seven-month trial in China.

The AI was trained to detect statistical bumps in real-time seismic data that researchers had paired with previous earthquakes. The outcome was a weekly forecast in which the AI successfully predicted 14 earthquakes within about 200 miles of where it estimated they would happen and at almost exactly the calculated strength. It missed one earthquake and gave eight false warnings.

It's not yet known if the same approach will work at other locations, but the effort is a milestone in research for AI-driven earthquake forecasting.

"Predicting earthquakes is the holy grail," said Sergey Fomel, a professor in UT's Bureau of Economic Geology and a member of the research team. "We're not yet close to making predictions for anywhere in the world, but what we achieved tells us that what we thought was an impossible problem is solvable in principle."

The trial was part of an international competition held in China in which the UT-developed AI placed first among 600 other designs. UT's entry was led by bureau seismologist and the AI's lead developer, Yangkang Chen. Findings from the trial are published in the journal Bulletin of the Seismological Society of America.

"You don't see earthquakes coming," said Alexandros Savvaidis, a senior research scientist who leads the bureau's Texas Seismological Network Program (TexNet) -- the state's seismic network. "It's a matter of milliseconds, and the only thing you can control is how prepared you are. Even with 70%, that's a huge result and could help minimize economic and human losses and has the potential to dramatically improve earthquake preparedness worldwide."

The researchers said that their method had succeeded by following a relatively simple machine learning approach. The AI was given a set of statistical features based on the team's knowledge of earthquake physics, then told to train itself on a five-year database of seismic recordings.

Once trained, the AI gave its forecast by listening for signs of incoming earthquakes among the background rumblings in the Earth.
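
The description suggests a pipeline like the following sketch: summarize each window of continuous seismic data with a handful of statistical features, then train a model on windows labeled by whether an earthquake followed. The features, model, and data here are illustrative assumptions, not the UT team's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(trace):
    """Summary statistics of one window of ground-motion samples."""
    return [
        trace.std(),                           # background energy
        np.abs(trace).max(),                   # largest excursion
        np.mean(np.abs(np.diff(trace))),       # high-frequency content
        ((trace[:-1] * trace[1:]) < 0).mean(), # zero-crossing rate
    ]

rng = np.random.default_rng(42)
windows = rng.standard_normal((1000, 8640))  # toy windows of seismic data
labels = rng.integers(0, 2, 1000)            # 1 = an earthquake followed

X = np.array([window_features(w) for w in windows])
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, labels)
print("P(event) for a new window:", model.predict_proba(X[:1])[0, 1])
```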

"We are very proud of this team and its first-place finish in this prestigious competition," said Scott Tinker, the bureau's director. "Of course, it's not just location and magnitude, but timing that matters as well. Earthquake prediction is an intractable problem, and we can't overstate the difficulty."

The researchers are confident that in places with robust seismic tracking networks such as California, Italy, Japan, Greece, Turkey and Texas, the AI could improve its success rate and narrow its predictions to within a few tens of miles.

One of the next steps is to test the AI in Texas since the state experiences a high rate of minor- and some moderate-magnitude earthquakes. The bureau's TexNet hosts 300 seismic stations and more than six years of continuous records, which makes it an ideal location to verify the method.

Eventually, the researchers want to integrate the system with physics-based models, which could be important where data is poor, or places such as Cascadia, where the last major earthquake happened hundreds of years before seismographs.

"Our future goal is to combine both physics and data-driven methods to give us something generalized, like chatGPT, that we can apply to anywhere in the world," Chen said.

The new research is an important step to achieving that goal.

"That may be a long way off, but many advances such as this one, taken together, are what moves science forward," Tinker said.

The research was supported by TexNet, the Texas Consortium for Computational Seismology and Zhejiang University. The bureau is a research unit of the Jackson School of Geosciences.

Wednesday, 4 October 2023

Examining the superconducting diode effect

A collaboration of FLEET researchers from the University of Wollongong and Monash University has reviewed the superconducting diode effect, one of the most fascinating phenomena recently discovered in quantum condensed-matter physics.

A superconducting diode enables dissipationless supercurrent to flow in only one direction, and provides new functionalities for superconducting circuits.

This non-dissipative circuit element is key to future ultra-low-energy superconducting and semiconducting-superconducting hybrid quantum devices, with potential applications in both classical and quantum computing.

SUPERCONDUCTORS AND DIODE EFFECTS

A superconductor is characterized by zero resistivity and perfect diamagnetic behavior, which leads to dissipationless transport and magnetic levitation.

'Conventional' superconductors and the underlying phenomenon of low-temperature superconductivity are well explained by the microscopic Bardeen-Cooper-Schrieffer (BCS) theory proposed in 1957.

The prediction of the Fulde-Ferrell-Larkin-Ovchinnikov ferromagnetic superconducting phase in 1964-65 and the discovery of 'high-temperature' superconductivity in antiferromagnetic structures in 1986-87 set the stage for the field of unconventional superconductivity, wherein superconducting order can be stabilized in functional materials such as magnetic superconductors, ferroelectric superconductors, and helical or chiral topological superconductors.

Unlike in conventional semiconductors and normal conductors, electrons in superconductors form pairs, known as Cooper pairs, and the flow of Cooper pairs is called a supercurrent.

Recently, researchers have observed nonreciprocal supercurrent transport leading to diode effects in various superconducting materials with different geometric structures and designs, including single crystals, thin films, heterostructures, nanowires, and Josephson junctions.
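
In the literature, the strength of this nonreciprocity is commonly quantified by a diode efficiency that compares the critical (maximum dissipationless) currents in the two directions:

\[
\eta = \frac{I_c^{+} - |I_c^{-}|}{I_c^{+} + |I_c^{-}|},
\]

where \(I_c^{+}\) and \(I_c^{-}\) are the forward and reverse critical currents; \(\eta = 0\) corresponds to a reciprocal superconductor and \(|\eta| = 1\) to an ideal diode that carries supercurrent in one direction only.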

THE STUDY

The FLEET research team reviewed theoretical and experimental progress on the superconducting diode effect (SDE) and offered an outlook on future directions. The study sheds light on the various materials that host the SDE, device structures, theoretical models, and the symmetry requirements of the different physical mechanisms that give rise to it.

"Unlike the conventional semiconducting diode, the efficiency of SDE is widely tunable via extrinsic stimuli such as temperature, magnetic field, gating, device design and intrinsic quantum mechanical functionalities such as Berry phase, band topology and spin-orbit interaction," explains Dr. Muhammad Nadeem (University of Wollongong), who is a Research Fellow at FLEET.

The direction of supercurrent can be controlled either with a magnetic field or a gate electric field. "The gate-tunable diode functionalities in the field-effect superconducting structures could allow novel device applications for superconducting and semiconducting-superconducting hybrid technologies," says co-author Prof Michael Fuhrer (Monash University), who is Director of FLEET.

The SDE has been observed in a wide range of superconducting structures made from conventional superconductors, ferroelectric superconductors, twisted few-layer graphene, van der Waals heterostructures, and helical or chiral topological superconductors. This reflects the enormous potential and wide usability of superconducting diodes, which markedly diversify the landscape of quantum technologies, says Prof Xiaolin Wang (University of Wollongong), a Chief Investigator of FLEET.

Making a femtosecond laser out of glass

 Is it possible to make a femtosecond laser entirely out of glass? That's the rabbit hole that Yves Bellouard, head of EPFL's Galatea Laboratory, went down after years of spending hours -- and hours -- aligning femtosecond lasers for lab experiments.

The Galatea Laboratory sits at the crossroads of optics, mechanics and materials science, and femtosecond lasers are a crucial element of Bellouard's work. Femtosecond lasers produce extremely short, regular bursts of laser light and have many applications, such as laser eye surgery, non-linear microscopy, spectroscopy, laser material processing and, recently, sustainable data storage. Commercial femtosecond lasers are made by placing optical components and their mounts on a substrate, typically an optical breadboard, which requires painstaking alignment of the optics.

"We use femtosecond lasers for our research on the non-linear properties of materials and how materials can be modified in their volume," explains Bellouard. "Going through the exercise of painful complex optical alignments makes you dream of simpler and more reliable ways to align complex optics."

Bellouard and his team's solution? Use a commercial femtosecond laser to make a femtosecond laser out of glass, no bigger than a credit card and with fewer alignment hassles. The results are published in the journal Optica.

How to make a femtosecond laser out of glass

To make a femtosecond laser on a glass substrate, the scientists start with a sheet of glass. "We want to make stable lasers, so we use glass: it has a lower thermal expansion than conventional substrates, it is a stable material, and it is transparent to the laser light we use," Bellouard explains.

Using a commercial femtosecond laser, the scientists etch special grooves into the glass that allow for precise placement of the laser's essential components. But even with micron-level fabrication precision, the grooves and the components by themselves are not accurate enough to reach laser-quality alignment. In other words, the mirrors are not yet perfectly aligned, so at this stage the glass device does not yet function as a laser.

The scientists also know from previous research that they can make glass expand or shrink locally. Why not use this technique to adjust the alignment of the mirrors?

The initial etching is therefore designed so that one mirror sits in a groove on micromechanical flexures engineered to locally steer the mirror when exposed to femtosecond laser light. In this way, the commercial femtosecond laser is used a second time, now to align the mirrors and ultimately create a stable, small-scale femtosecond laser.
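
In control terms, this second pass amounts to a simple feedback loop: measure the residual misalignment, expose the flexure to nudge the mirror by a tiny, permanent amount, and repeat until the error falls within tolerance. The toy simulation below illustrates only the idea; the step size, tolerance, and control law are invented for illustration and are not parameters from the paper.

```python
# Toy closed-loop alignment: each burst of femtosecond pulses is assumed to
# nudge the mirror by a fixed, permanent step (hypothetical numbers).
STEP_PER_PULSE = 0.8e-9   # assumed displacement per pulse (m)
TOLERANCE = 1e-9          # target residual misalignment (m)

error = 2.5e-6            # initial misalignment after passive placement (m)
pulses_fired = 0
while abs(error) > TOLERANCE:
    # Fire enough pulses to cancel roughly half the remaining error,
    # a conservative step that avoids overshooting the target.
    n = max(1, int(abs(error) / STEP_PER_PULSE * 0.5))
    error -= n * STEP_PER_PULSE * (1 if error > 0 else -1)
    pulses_fired += n

print(f"aligned to {error:.2e} m after {pulses_fired} pulses")
```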

"This approach to permanently align free-space optical components thanks to laser-matter interaction can be expanded to a broad variety of optical circuits, with extreme alignment resolutions, down to sub-nanometers," says Bellouard.

Applications and beyond

Ongoing research programs at the Galatea Lab will explore the use of this technology in the context of quantum optical system assembly, pushing the limit of currently achievable miniaturization and alignment accuracy.

The alignment process is still supervised by a human operator and, with practice, can take a few hours. Despite its small size, the laser is capable of reaching approximately a kilowatt of peak power and of emitting pulses shorter than 200 femtoseconds, barely enough time for light to travel across a human hair.
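
That hair comparison holds up to a quick back-of-the-envelope check: in 200 femtoseconds, light covers

\[
d = c\,\Delta t \approx \left(3\times10^{8}\ \mathrm{m/s}\right)\times\left(200\times10^{-15}\ \mathrm{s}\right) = 60\ \mu\mathrm{m},
\]

which is on the order of the diameter of a human hair.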

This novel femtosecond laser technology is to be spun off through Cassio-P, a company to be headed by Antoine Delgoffe of the Galatea Lab, who joined the project at a decisive stage with the mission of turning the proof of concept into a future commercial device.

"A femtosecond laser replicating itself, are we perhaps reaching the point of self-cloning manufactured devices?" concludes Bellouard.

Funding

The research was funded by the European Research Council under an ERC Proof-of-Concept grant, project GigamFemto, which aims to demonstrate a gigahertz femtosecond laser on a single glass chip. The spin-off activities have been supported through the Bridge Proof-of-Concept and EPFL's Ignition programs.

Powering the quantum revolution: Quantum engines on the horizon

Quantum mechanics is the branch of physics that explores the properties and interactions of particles at very small scales, such as atoms and molecules. It has led to the development of new technologies that are more powerful and efficient than their conventional counterparts, driving breakthroughs in areas such as computing, communication, and energy.

At the Okinawa Institute of Science and Technology (OIST), researchers at the Quantum Systems Unit have collaborated with scientists from the University of Kaiserslautern-Landau and the University of Stuttgart to design and build an engine that is based on the special rules that particles obey at very small scales.

They have developed an engine that uses the principles of quantum mechanics to produce power, rather than burning fuel. The paper describing these results is co-authored by OIST researchers Keerthy Menon, Dr. Eloisa Cuestas, Dr. Thomas Fogarty and Prof. Thomas Busch and has been published in the journal Nature.

In a classical car engine, a mixture of fuel and air is usually ignited inside a chamber. The resulting explosion heats the gas in the chamber, which in turn pushes a piston in and out, producing the work that turns the wheels of the car.

In their quantum engine, the scientists have replaced the use of heat with a change in the quantum nature of the particles in the gas. To understand how this change can power the engine, it helps to know that all particles in nature can be classified as either bosons or fermions, based on their special quantum characteristics.

At very low temperatures, where quantum effects become important, a gas of bosons has a lower energy than a gas of fermions: bosons can all occupy the lowest quantum state, while the Pauli exclusion principle forces fermions into successively higher energy levels. This energy difference can be used to power an engine. Instead of cyclically heating and cooling a gas as a classical engine does, the quantum engine works by changing bosons into fermions and back again.
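
As a toy illustration of where that energy difference comes from -- a one-dimensional harmonic trap with level spacing \(\hbar\omega\), chosen for simplicity rather than taken from the experiment -- \(N\) bosons can all occupy the ground state, while \(N\) spinless fermions must stack up one per level:

\[
E_{\mathrm{bosons}} = N\,\frac{\hbar\omega}{2}, \qquad
E_{\mathrm{fermions}} = \sum_{n=0}^{N-1}\hbar\omega\left(n+\tfrac{1}{2}\right) = \frac{\hbar\omega N^{2}}{2},
\]

so each conversion cycle can, in principle, exchange up to \(\Delta E = \hbar\omega\,N(N-1)/2\) of energy.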

"To turn fermions into bosons, you can take two fermions and combine them into a molecule. This new molecule is a boson. Breaking it up allows us to retrieve the fermions again. By doing this cyclically, we can power the engine without using heat," Prof. Thomas Busch, leader of the Quantum Systems Unit explained.

While this type of engine only works in the quantum regime, the team found that its efficiency is quite high, reaching up to 25% with the present experimental setup built by the collaborators in Germany.

This new engine is an exciting development in the field of quantum mechanics and has the potential to lead to further advances in the burgeoning area of quantum technologies. But does this mean we will soon see quantum mechanics powering the engines of our cars? "While these systems can be highly efficient, we have only done a proof-of-concept together with our experimental collaborators," explained Keerthy Menon. "There are still many challenges in building a useful quantum engine."

Heat can destroy the quantum effects if the temperature gets too high, so the researchers must keep their system as cold as possible. Maintaining these low temperatures, however, itself requires a substantial amount of energy to protect the sensitive quantum state.

The next steps in the research will involve addressing fundamental theoretical questions about the system's operation, optimizing its performance, and investigating its potential applicability to other commonly used devices, such as batteries and sensors.

Revolutionary breakthrough: Human stomach micro-physiological system unveiled

 A groundbreaking development in biomedical engineering has led to the creation of a human stomach micro-physiological system (hsMPS), representing a significant leap forward in understanding and treating various gastrointestinal diseases, including stomach cancer. The research team, led by Professor Tae-Eun Park from the Department of Biomedical Engineering at UNIST and Professor Seong-Ho Kong from Seoul National University Hospital, has successfully developed a biomimetic chip that combines organoid and organ-on-a-chip technologies to simulate the complex defense mechanisms of the human gastric mucosa.

Organoids, which mimic human organs using stem cells, have shown great potential as in vitro models for studying specific functions. However, they lack the ability to replicate mechanical stimulation or cell-to-cell interactions found within the human body. This limitation prompted researchers to develop an innovative biochip capable of recreating real-life gastric mucosal protection systems.

The newly developed biochip incorporates fluid flow within its microfluidic channels to simulate mechanical stimuli and facilitate cell-to-cell interactions. Mesenchymal substrate cells exposed to fluid flow activate gastric stem cell proliferation while promoting cellular differentiation balance. This process ultimately mimics key features necessary for developing functional gastric mucosal barriers at a biologically relevant level.

One remarkable achievement demonstrated with this hsMPS is its ability to uncover previously unseen defense mechanisms against Helicobacter pylori -- a pathogen associated with various stomach diseases -- in ways not possible with existing models. A gastric mucosal peptide known as TFF1 was observed forming mosaic-like structures in H. pylori-infected samples, creating a protective barrier essential to an efficient defense against external infectious agents. Suppressing the peptide's expression resulted in more severe inflammatory reactions.

"This study presents our model's potential for observing dynamic interactions between epithelial cells and immune cells in chips infected with Helicobacter pylori, contributing to a comprehensive understanding of gastric mucosal barrier stability," explained Professor Park.

The research findings, supported by a Basic Research Laboratory (BRL) grant from the National Research Foundation (NRF) funded by the Ministry of Science and ICT (MSIT), were published online on July 31 in Advanced Science.

These groundbreaking advancements in hsMPS open up new avenues for studying host-microbe interactions, developing therapeutic strategies for gastric infections, and gaining a deeper understanding of gastrointestinal diseases. This innovative biochip technology has the potential to reduce reliance on animal experimentation while providing valuable insights into complex physiological processes within the human stomach.

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...