We often believe computers are more efficient than humans. After all, a computer can solve a complex math problem in an instant and recall the name of that one actor we keep forgetting. Yet human brains can process complicated layers of information quickly, accurately, and with almost no energy input: recognizing a face after seeing it only once, or instantly knowing the difference between a mountain and the ocean. These simple human tasks demand enormous processing power and energy from computers, which even then achieve only varying degrees of accuracy.
Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) -- a nationwide consortium led by the University of California San Diego -- has been at the forefront of this research.
UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center's work in phases. In the first phase, he worked closely with President Emeritus of the University of California and Professor of Physics Robert Dynes, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams succeeded in finding ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.
Now, in phase two, new research from Q-MEEN-C, published in Nano Letters, shows that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. Known as non-locality, this discovery is a crucial milestone on the journey toward new types of devices that mimic brain functions, a field known as neuromorphic computing.
"In the brain it's understood that these non-local interactions are nominal -- they happen frequently and with minimal exertion," stated Frañó, one of the paper's co-authors. "It's a crucial part of how the brain operates, but similar behaviors replicated in synthetic materials are scarce."
Like many research projects now bearing fruit, the idea to test whether non-locality was possible in quantum materials came about during the pandemic. Physical lab spaces were shuttered, so the team ran calculations on arrays containing multiple devices to mimic the many neurons and synapses in the brain. In running these tests, they found that non-locality was theoretically possible.
When labs reopened, they refined this idea further and enlisted UC San Diego Jacobs School of Engineering Associate Professor Duygu Kuzum, whose work in electrical and computer engineering helped them turn a simulation into an actual device.
This involved taking a thin film of nickelate -- a "quantum material" ceramic that displays rich electronic properties -- inserting hydrogen ions, and then placing a metal conductor on top. A wire is attached to the metal so that an electrical signal can be sent to the nickelate. The signal causes the gel-like hydrogen atoms to move into a certain configuration and when the signal is removed, the new configuration remains.
"This is essentially what a memory looks like," stated Frañó. "The device remembers that you perturbed the material. Now you can fine tune where those ions go to create pathways that are more conductive and easier for electricity to flow through."
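The behavior Frañó describes can be pictured as a toy model. The sketch below is purely illustrative and is not the Q-MEEN-C team's model: it assumes a made-up state variable standing in for the hydrogen-ion configuration, with invented parameter values, to show the key idea that a stimulus shifts the state and the new state persists after the stimulus is removed.

```python
# Illustrative toy model of a non-volatile memory element, loosely
# inspired by the hydrogen-doped nickelate device described above.
# All names and numbers here are invented for illustration.

class ToyMemoryElement:
    """Conductance that is nudged by voltage pulses and retained afterward."""

    def __init__(self, conductance=0.1, rate=0.05):
        self.conductance = conductance  # dimensionless stand-in for ion configuration
        self.rate = rate                # how strongly one pulse shifts the state

    def pulse(self, voltage):
        # A pulse shifts the state; the change persists after the pulse ends,
        # mimicking hydrogen ions settling into a new configuration.
        self.conductance += self.rate * voltage
        self.conductance = min(max(self.conductance, 0.0), 1.0)  # keep in bounds

    def read(self):
        # Reading does not disturb the stored state in this toy model.
        return self.conductance

cell = ToyMemoryElement()
for _ in range(5):
    cell.pulse(1.0)  # repeated pulses carve a more conductive pathway
print(round(cell.read(), 2))
```

Repeated stimulation leaves the element more conductive than it started, and the state remains when the pulses stop, which is the "memory" in this picture.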
Traditionally, creating networks that transport sufficient electricity to power something like a laptop requires complicated circuits with continuous connection points, which is both inefficient and expensive. The design concept from Q-MEEN-C is much simpler because the non-local behavior in the experiment means all the wires in a circuit do not have to be connected to each other. Think of a spider web, where movement in one part can be felt across the entire web.
This is analogous to how the brain learns: not in a linear fashion, but in complex layers. Each piece of learning creates connections in multiple areas of the brain, allowing us to differentiate not just trees from dogs, but an oak tree from a palm tree or a golden retriever from a poodle.
To date, the pattern recognition tasks that the brain executes so beautifully can only be simulated through computer software. AI programs like ChatGPT and Bard use complex algorithms to mimic brain-based activities like thinking and writing, and they do it remarkably well. But without correspondingly advanced hardware to support them, software will at some point reach its limit.
Frañó is eager for a hardware revolution to parallel the one currently happening in software, and showing that it's possible to reproduce non-local behavior in a synthetic material brings scientists one step closer. The next step will involve creating more complex arrays with more electrodes in more elaborate configurations.
"This is a very important step forward in our attempts to understand and simulate brain functions," said Dynes, who is also a co-author. "Showing a system that has non-local interactions leads us further in the direction toward how our brains think. Our brains are, of course, much more complicated than this, but a physical system that is capable of learning must be highly interactive, and this is a necessary first step. We can now think of longer range coherence in space and time."
"It's widely understood that in order for this technology to really explode, we need to find ways to improve the hardware -- a physical machine that can perform the task in conjunction with the software," Frañó stated. "The next phase will be one in which we create efficient machines whose physical properties are the ones that are doing the learning. That will give us a new paradigm in the world of artificial intelligence."