This Open Access Article is licensed under a Creative Commons Attribution 3.0 Unported Licence.

Integrated artificial neurons from metal halide perovskites

Jeroen J. de Boer and Bruno Ehrler *
Center for Nanophotonics, AMOLF, 1098 XG, Amsterdam, The Netherlands. E-mail: b.ehrler@amolf.nl

Received 29th November 2024, Accepted 16th December 2024

First published on 20th January 2025


Abstract

Hardware neural networks could perform certain computational tasks orders of magnitude more energy-efficiently than conventional computers. Artificial neurons are a key component of these networks and are currently implemented with electronic circuits based on capacitors and transistors. Artificial neurons based on memristive devices are a promising alternative, owing to their potentially smaller size and inherent stochasticity, but demonstrations of memristive artificial neurons have so far been limited. Here we demonstrate a fully on-chip artificial neuron based on microscale electrodes and a halide perovskite semiconductor as the active layer. Connecting a halide perovskite memristive device in series with a capacitor yields stochastic leaky integrate-and-fire behavior, with an energy consumption of 20 to 60 pJ per spike, lower than that of a biological neuron. We simulate populations of our neuron and show that the stochastic firing allows the detection of sub-threshold inputs. The neuron can easily be integrated with previously demonstrated halide perovskite artificial synapses into energy-efficient neural networks.



New concepts

We demonstrate the first fully on-chip integrated halide perovskite artificial neuron. Such neurons can be integrated in neuromorphic hardware that draws inspiration from the brain to compute with orders of magnitude higher energy efficiency than conventional hardware. Our artificial neuron consists of a single halide perovskite memristive device and a capacitor. This simple device layout makes our neuron significantly easier to scale than neurons based on the complex electronic circuits of silicon capacitors and transistors that are commonly used to implement artificial neurons. The highly efficient ion conduction in halide perovskites, which underlies the resistance changes in these materials, allows a lower operating voltage than in artificial neurons based on other memristive materials. In our current implementation, the neuron consumes 20–60 pJ per spike, lower than the energy consumption of a biological neuron. Our neuron is fabricated on the microscale with a lithography procedure that is compatible with halide perovskites and that enables further downscaling. The similarity of our neuron design to that of the microscale halide perovskite artificial synapses we have demonstrated before allows dense integration in all-halide perovskite energy-efficient neuromorphic chips.

Introduction

Artificial intelligence-based systems have seen a rapid increase in their capabilities in a wide range of tasks, such as natural language processing,1 image recognition,2,3 and strategizing.4,5 The increase in the performance of these systems is accompanied by an exponential increase in the computational power, and thus the energy consumption.6 Neuromorphic computing addresses this issue by implementing neural networks in hardware, lowering the required energy by orders of magnitude compared to conventional computers.7 Neuromorphic chips rely on two main components for computation: artificial neurons, which integrate incoming signals and fire a voltage pulse upon reaching a threshold, and artificial synapses, which determine the connection strength between neurons. Ideally, both components can be integrated into a single chip in a dense arrangement to enable large-scale artificial neural networks. Both the neurons and synapses are typically implemented with electronic circuits composed of transistors and capacitors.8 On the other hand, implementations that use memristive elements, which change their resistance based on an applied voltage, can be more compact and highly energy efficient, making them an attractive alternative.9 Much research has gone into developing artificial synapses that directly use the resistance change of a memristive element as a proxy for connection strength.9–12 Memristive elements also show promise for use in artificial neurons, because of the inherent stochasticity in their resistance changes.13 This inherent stochasticity of memristive neurons can be leveraged for better signal representation,14,15 or more efficient probabilistic computing than would be possible with deterministic neurons.16 Nonetheless, applying memristive elements in artificial neurons is more complex and has been much less explored compared to their application in synapses.

Here, we demonstrate a simple memristive neuron based on a halide perovskite memristive element. Metal halide perovskites are semiconducting compounds that efficiently conduct both electronic and ionic charge carriers.17 The efficient ion conduction in halide perovskites readily induces hysteresis, which was previously exploited to make energy-efficient artificial synapses.18–20 While various halide perovskite artificial synapses have been reported, only one halide perovskite neuron has been experimentally demonstrated before.21 However, this previous implementation used off-chip circuitry to implement signal integration and neuron-like spiking, making scaling difficult. We connect a microscale volatile halide perovskite memristive device in series with a capacitor. The series capacitor applies a reverse bias on the memristive element after spiking of the neuron, which aids in resetting the memristive element after each spike. This makes our neuron design more robust against non-reversible resistance changes of the memristive component than designs with a series resistor22,23 or a parallel capacitor.16,24,25 Because our design consists of only two components, the neuron is also more easily scalable than implementations that require more complex circuitry besides the memristive element.14,26,27 Moreover, the efficient ion conduction of halide perovskites allows an operating voltage of hundreds of millivolts, lower than in previous memristive neurons, which is favorable for low energy consumption. We fabricate our crosspoint neurons with a previously developed procedure that prevents degradation of the halide perovskite layer during lithography.19 Our neuron is integrated fully on-chip, without the need for external circuitry to emulate neuron functionality. The device architecture of our halide perovskite memristive device therefore lends itself to further downscaling, and the neuron could easily be integrated with the halide perovskite artificial synapses that we have demonstrated before to form artificial neural networks with ultralow energy consumption.19

Experimental

Fabrication of the on-chip artificial neuron

Heavily p-doped Si wafers (1–5 Ω cm resistivity) were purchased from Siegert Wafer. PbI2 (99.99%) was purchased from TCI. Methylammonium iodide (MAI) was purchased from Solaronix. Anhydrous DMF and chlorobenzene were purchased from Sigma-Aldrich. 950 PMMA A8 was purchased from Kayaku Advanced Materials. All materials were used without further purification.

Devices were fabricated using a procedure similar to that described before.19 The artificial neurons were fabricated on heavily p-doped Si wafers with a 100 nm thermal oxide layer. Gold bottom electrodes were patterned on the wafer with a lift-off procedure using MA-N1410 photoresist. UV exposure with a Süss MA6/BA6 mask aligner was followed by development in MA-D533/s. A 5 nm Cr adhesion layer and an 80 nm Au electrode layer were deposited on the patterned resist by e-beam physical vapor deposition. Lift-off was then performed by soaking in acetone for one hour. A 60 nm SiO2 layer was deposited from an O2 and SiH4 gas mixture using inductively coupled plasma-enhanced chemical vapor deposition (ICPCVD) in an Oxford PlasmaPro100 ICPCVD system. Silver top contacts were patterned using the same procedure as for the bottom electrodes. After patterning of the top electrodes, the SiO2 layer was etched in an Oxford Plasmalab 80 Plus system with an Ar and CHF3 gas mixture, using the top electrodes as a hardmask.

Inside a nitrogen-filled glovebox (<0.5 ppm O2 and water), a stoichiometric mixture of PbI2 and MAI was dissolved in DMF to obtain a 40 wt% MAPbI3 precursor solution. The precursor was spin coated over the electrodes at 4000 rpm for 30 seconds in the same glovebox. Chlorobenzene was added as an antisolvent after 3 seconds of spinning. Directly after spin coating, the samples were annealed at 100 °C for 10 minutes. The 950 PMMA A8 solution was spin coated on top of the halide perovskite at 3000 rpm for 45 seconds, followed by a 5 minute bake at 100 °C.

Electrical characterization

IV curves between −0.5 and 0.5 V and the retention time of the low resistance state were measured with a Keysight B2902A Precision Source/Measure Unit.

Artificial neuron measurements were performed by applying voltage pulses between the heavily p-doped Si substrate and the silver top electrode with a Rigol DG1062Z arbitrary waveform generator, while measuring the voltage between the gold bottom electrode and the Si substrate with a PicoScope 6402C oscilloscope. The data were smoothed using a moving average over a 5-point subset, corresponding to a 20 μs time window. Afterward, 50 Hz noise from the AC power supply was removed using a fit to a 50 Hz sine wave. Raw versions of the figures in the main text are given in Fig. S11 (ESI) and show that the measured signal is not significantly affected by the noise removal.
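A minimal Python sketch of this post-processing is shown below, assuming the oscilloscope trace is available as NumPy arrays of time (s) and voltage (V) sampled at 4 μs intervals (so that 5 points span roughly 20 μs). The function names and starting values are illustrative, not the scripts used for the published figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def smooth_and_denoise(t, v):
    """Moving-average smoothing followed by removal of a fitted 50 Hz sine.

    t : time axis in seconds, v : measured voltage in volts.
    A 5-point boxcar corresponds to a ~20 us window at a 4 us sample spacing.
    """
    kernel = np.ones(5) / 5
    v_smooth = np.convolve(v, kernel, mode="same")

    # Fit a 50 Hz sine with free amplitude, phase and offset, then subtract
    # the sine while keeping the DC offset of the trace.
    def mains(t, a, phi, c):
        return a * np.sin(2 * np.pi * 50 * t + phi) + c

    popt, _ = curve_fit(mains, t, v_smooth, p0=[1e-3, 0.0, 0.0])
    return v_smooth - mains(t, *popt) + popt[2]
```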

Results and discussion

Artificial neurons can be fabricated from a resistive switch that shows rapid, highly volatile switching, connected in series with a capacitor.28 In this configuration, successive voltage pulses eventually switch the memristive element to the low-resistance state, charging the capacitor (firing). The charged capacitor then reverse-biases the memristive element, switching it off again. We use a resistive switch that comprises methylammonium lead triiodide (MAPbI3) as the active layer, with gold and silver contacts as the bottom and top contact, respectively (Fig. 1a and Methods section). The 2.5 μm wide contacts are arranged in an overlapping back-contact geometry, where the two contacts are placed orthogonally on top of each other with an insulating SiO2 spacer layer in between. All lithographic processing steps are therefore performed before the perovskite deposition. The compact, dense structure lends itself to downscaling.19 This resistive switch shows unipolar behavior with a clear threshold voltage of about 0.3 V, at which the resistance rapidly changes by four orders of magnitude, from approximately 1 GΩ to 100 kΩ (Fig. 1b). The resistance change is maintained only for a short period after the voltage pulse is switched off, about 125 ms in the case of Fig. 1c, a volatility that is required for the fabrication of an artificial neuron. A histogram of retention times based on 40 measurements is given in Fig. S2 (ESI); in no case is the retention time more than 500 ms. The resistance changes of the resistive switch are stochastic in nature, as is apparent from the histograms of the time to switch after applying the voltage pulse in Fig. S3a–c (ESI) and their corresponding fits with a Poisson distribution. Such a Poisson distribution of switching times is expected for resistive switches that change their resistance through the stochastic formation and destruction of conductive filaments.13 We note that resistance changes can also occur in the same device without the MAPbI3 layer, as illustrated by Fig. S4 (ESI), but the switching then happens at about 10× higher voltages. It has previously been shown that silver filaments can form in SiO2 layers,29 and the resistance changes therefore likely occur through filament formation in the SiO2 spacer between the Ag and Au electrodes. The role of the halide perovskite layer in the final device is thus to strongly facilitate the formation of these Ag filaments, enabling lower-voltage operation and thereby reducing the energy consumption of the device.
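For a Poisson switching process, the waiting time before filament formation is exponentially distributed, so switching-time histograms can equivalently be checked against an exponential survival curve. The sketch below illustrates this analysis on synthetic data with a hypothetical mean waiting time; it is not the fit used for Fig. S3.

```python
import numpy as np

rng = np.random.default_rng(0)
MEAN_MS = 5.0  # hypothetical mean waiting time, for illustration only

# Synthetic times-to-switch (ms); in practice these would be extracted from
# repeated voltage-pulse measurements as in Fig. S3a-c.
t_switch = rng.exponential(scale=MEAN_MS, size=200)

# Maximum-likelihood estimate of the mean of an exponential distribution
tau_hat = t_switch.mean()

# Compare the empirical survival curve with the model exp(-t / tau)
t_sorted = np.sort(t_switch)
empirical = 1.0 - np.arange(1, t_sorted.size + 1) / t_sorted.size
model = np.exp(-t_sorted / tau_hat)
print(f"estimated mean switching time: {tau_hat:.2f} ms")
print(f"max deviation from exponential model: {np.abs(empirical - model).max():.3f}")
```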
Fig. 1 A volatile halide perovskite resistive switch. (a) Optical microscopy image of the cross-point formed by the gold and silver electrodes before deposition of the halide perovskite layer, with a schematic image of the full resistive switching device. A gold bottom electrode and silver top electrode sandwich an SiO2 insulating layer. Halide perovskite is spin-coated over the electrodes and forms the active layer of the device. (b) IV curve of the device, measured between −0.5 and 0.5 V. The measured current increases by approximately 4 orders of magnitude at 0.3 V. The device returns to the initial high-resistive state as soon as the voltage is reduced to 0 V again and shows symmetric resistive switching properties in the negative poling direction. (c) Retention time measurement of the resistive switch. The resistance increases to that of the device in the high resistance state after approximately 125 ms. The full measurement is given in Fig. S1 (ESI).

To turn this resistive switch into an artificial neuron, it needs to be connected to a capacitor. We implement this on-chip by connecting the resistive switch in series with a 300 pF capacitor formed by the Au bottom contact, the thermal SiO2 layer and the highly-doped Si substrate, as shown in Fig. 2a. With this connection, the operation of the neuron follows three key steps, depicted in Fig. 2b. In the first step, stimulation, the input voltage pulses see the resistive switch in its high-resistance state. Each voltage pulse therefore deposits only a small amount of charge on the capacitor, insufficient to build up a significant voltage. After several pulses, the resistive switch abruptly changes to the low-resistance state, initiating the second step, firing: the capacitor is quickly charged and the accumulated charge sets up a voltage that opposes the input voltage. The third step, resetting, is initiated when the applied voltage is removed. The capacitor discharges through the resistive switch, causing the switch to return to the high-resistance state, and the cycle can restart.
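To make these three steps concrete, the following is a minimal sketch of a toy circuit model of the neuron: a stochastic threshold switch in series with a 300 pF capacitor, integrated with forward Euler. The on/off resistances, mean switching time, and pulse parameters are taken from values quoted in this work, but the model itself is a simplified assumption and not the authors' simulation (described in Supplementary note S1, ESI).

```python
import numpy as np

R_OFF, R_ON = 1e9, 2e6       # off/on resistance of the switch (ohm)
C = 300e-12                  # on-chip series capacitance (F)
TAU_SWITCH = 6.9e-3          # mean cumulative time under bias before switching (s)
V_IN, T_PULSE, F_PULSE = 0.75, 5e-3, 33.0  # pulse amplitude (V), width (s), rate (Hz)

def simulate(n_pulses=10, dt=1e-5, seed=1):
    """Forward-Euler simulation of stimulation, firing, and resetting."""
    rng = np.random.default_rng(seed)
    v_cap, on = 0.0, False
    trace = []
    for i in range(int(n_pulses / F_PULSE / dt)):
        t = i * dt
        pulsing = (t % (1.0 / F_PULSE)) < T_PULSE
        if pulsing and not on and rng.random() < dt / TAU_SWITCH:
            on = True                      # stochastic filament formation under bias
        r = R_ON if on else R_OFF
        v_drive = V_IN if pulsing else 0.0
        v_cap += (v_drive - v_cap) / (r * C) * dt   # RC charging / discharging
        if not pulsing and v_cap < 0.05:
            on = False                     # reverse bias disrupts the filament (reset)
        trace.append((t, v_cap))
    return np.array(trace)

trace = simulate()
print(f"peak capacitor voltage: {trace[:, 1].max():.2f} V")
```

Running this sketch produces irregular firing events after roughly one to three pulses, qualitatively similar to the measurement shown in Fig. 2c.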


Fig. 2 Operation of the artificial spiking neuron. (a) The neuron is constructed by connecting the memristive part of the device, consisting of the gold bottom electrode, the silver top electrode and the MAPbI3 layer, in series with the capacitor formed by the gold electrode and contact pad, the 100 nm thermal SiO2 layer and the highly doped Si substrate. (b) Schematic representation of the three stages of operation of the neuron. Upon application of a voltage, the device first undergoes a “stimulation” phase, in which there is no significant voltage build-up on the capacitor because of the high resistance of the memristive part of the device. After enough voltage has been applied to the device, the memristive device switches to the low-resistance state and the capacitor is rapidly charged, causing a voltage build-up on the capacitor, i.e. “firing” of the neuron. When the applied voltage is removed, the capacitor discharges. This reverse-biases the resistive switch, aiding the disruption of the conductive filament, the “resetting” process. (c) A pulsed measurement of the artificial neuron. A pulse train of 5 ms, 0.75 V pulses is applied at a frequency of 33 Hz, resulting in firing spikes on the capacitor.

Fig. 2c shows the experimental realization of the spiking of the artificial neuron. A 33 Hz, 750 mV pulse train is applied to the device and the voltage across the capacitor is measured. We observe firing pulses on the capacitor after one to three applied pulses. Fitting of the charging and discharging of the capacitor in Fig. S5a and b (ESI) reveals that the resistance of the resistive switch is reduced to 1 to 4 MΩ during most firing steps. The resistance obtained from the fit is higher than the 100 kΩ obtained in the voltage sweep in Fig. 1b, indicating that the device has not fully switched to the low resistance state. The voltage drop over the resistive switch is gradually reduced as the filament is forming and the capacitor is charged, leading to only partial formation of the filament. This partial formation of the filament further aids the volatility and energy efficiency of the device.

During discharging of the capacitor in the resetting step, a resistance of approximately 10 MΩ is extracted, which corresponds to the input impedance of the oscilloscope. Assuming that the resistive switch is brought back to its 1 GΩ high-resistance state during the resetting step, the 10 MΩ oscilloscope input provides the lower-resistance discharge path for the capacitor, which is a limitation of our current measurement setup (see Fig. S5c, ESI).

The capacitive discharge fit corresponds to the oscilloscope impedance immediately after the bias is removed (Fig. S5a, ESI), from which we conclude that the resistive switch resets as soon as the bias is removed, at least on the timescale of the measurement. No firing pulses were measured when the halide perovskite layer was omitted, as shown in Fig. S6 (ESI). The resistance changes that underlie the spiking behavior of the neuron therefore occur through the halide perovskite layer at these low applied voltages. Fig. S7 (ESI) shows that the stochastic spiking of the neuron is reproducible over multiple measurements.
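The on-state resistance quoted above can be obtained by fitting an isolated firing transient to exponential RC charging, V(t) = V0(1 − exp(−t/RC)), with the 300 pF capacitance known. The Python sketch below illustrates this principle on synthetic data; the function names and the synthetic transient are assumptions for illustration, not the analysis behind Fig. S5.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 300e-12  # on-chip capacitance (F)

def rc_charging(t, v0, tau):
    """Capacitor voltage while charging through a resistance R = tau / C."""
    return v0 * (1.0 - np.exp(-t / tau))

def fit_on_resistance(t, v_cap):
    """Fit an isolated firing transient and return (R, V0)."""
    (v0, tau), _ = curve_fit(rc_charging, t, v_cap, p0=[0.5, 1e-3])
    return tau / C, v0

# Synthetic transient: R = 2 MOhm gives tau = R*C = 0.6 ms, plus measurement noise
t = np.linspace(0.0, 5e-3, 500)
v = rc_charging(t, 0.55, 0.6e-3) + np.random.default_rng(2).normal(0.0, 5e-3, t.size)
r_fit, v0_fit = fit_on_resistance(t, v)
print(f"extracted on-state resistance: {r_fit / 1e6:.1f} MOhm")
```

Applied to the discharge instead of the charging transient, the same procedure yields the time constant set by the ~10 MΩ oscilloscope input, as discussed above.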

The firing pattern of the neuron is stochastic in nature, which is expected from the underlying stochastic switching mechanism of the resistive switch. As for the resistive switch itself, the time under bias before spiking of the neuron follows a Poisson distribution (Fig. S8a, ESI), with a mean of 6.9 ms for the 0.75 V pulses. This stochastic switching is also observed in biological neurons and can have advantages compared to purely deterministic neurons.

To demonstrate this advantage, we use the experimentally obtained mean switching time and resistances to model the behavior of the stochastic neuron. We compare the simulated stochastic neuron to a hypothetical deterministic neuron that fires after the same cumulative time under bias (6.9 ms), to determine how well stochastic and deterministic neurons can represent the input voltage pulse train. Modeling of the neuron is discussed in more detail in Supplementary note S1 (ESI).

Fig. 3a shows the simulated spiking behavior of a stochastic and a deterministic neuron. The spiking of the simulated stochastic neuron is similar to that in the measurement shown in Fig. 2c. The simulated deterministic neuron, on the other hand, spikes at regular intervals.


Fig. 3 Simulations comparing the stochastic spiking of the neuron with a hypothetical deterministic version of the neuron. (a) Comparison of a simulated stochastic and deterministic spiking neuron, with the same input as in Fig. 2c. Similar spiking behavior is obtained for the simulated and experimentally measured stochastic neurons. The deterministic neuron always spikes after a cumulative 6.9 ms of bias has been applied. (b) Simulated spiking behavior of populations of 100 stochastic and deterministic neurons. Ten voltage pulses are applied in the simulation with the same pulse duration, length, and magnitude as in (a). Blue-shaded regions indicate the application of the 750 mV pulses, while the red marks indicate spiking by a neuron. While the deterministic neurons all spike at the same time, the spikes of the stochastic neurons are distributed more evenly throughout the applied pulses. (c) The population codes obtained for each applied pulse in (b). We define the population code as the cumulative number of spikes output by the population. The population code of the deterministic population increases only at every second applied pulse, while the stochastic population shows a more gradual increase with each applied pulse. (d) The representation error of deterministic and stochastic populations as a function of the population size, averaged over 1000 simulations. Deterministic populations have the same representation error regardless of their size. The representation error of the stochastic neurons decreases as the population size increases and is lower for population sizes of 11 or more neurons. The blue shaded region indicates one standard deviation.

To achieve more biologically plausible, robust, and accurate spiking neural networks, neurons are typically implemented in populations.14,15 In these networks, input signals are fed into the neurons of a population and their collective output is read out as a population code. Fig. 3b shows a simulation of populations of 100 stochastic or deterministic neurons. While the spikes of the stochastic neurons are distributed over all input voltage pulses, the deterministic neurons spike in unison, roughly at every second input pulse.

From the simulations of the stochastic and deterministic neuron populations, we calculate the population code as the cumulative number of spikes output by the total population after each successive input pulse (Fig. 3c). The population code of the deterministic population increases stepwise, whereas that of the stochastic population grows more gradually with each applied pulse, showing that the stochastic neurons can better distinguish different numbers of applied pulses, i.e., they can better encode or represent the input. This process, by which stochastic neurons can pick up on sub-threshold signals, is called “stochastic resonance”. Biological neurons, which are also stochastic, rely on stochastic resonance to detect otherwise sub-threshold signals.30

To study the effect of population size on the reliability of signal detection, we simulated population codes for populations of 1 up to 100 neurons and computed a signal representation error for each population size (Fig. 3d). Supplementary note S1 (ESI) explains how the representation error was determined. This representation error measures how well the population can encode and distinguish between different inputs. The representation error is initially larger for small populations of stochastic neurons compared to deterministic ones. However, the error rapidly decreases as the population size increases and drops below that of the deterministic neurons for relatively small populations of 11 or more stochastic neurons. These results are in line with previous work, where the same benefit was found for stochasticity in artificial neuron populations.14,15
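The following sketch reproduces the spirit of this population analysis under simplifying assumptions: each stochastic neuron fires during a pulse with the memoryless probability implied by a 6.9 ms mean switching time and 5 ms pulses, deterministic neurons fire on every second pulse, and the representation error is taken as the RMS deviation of the normalized population code from a linear ramp. The authors' own error metric is defined in Supplementary note S1 (ESI) and may differ.

```python
import numpy as np

TAU = 6.9e-3     # mean switching time under bias (s)
T_PULSE = 5e-3   # pulse width (s)
N_PULSES = 10

def population_code(n_neurons, stochastic, rng):
    """Cumulative spike count of a population after each input pulse."""
    if stochastic:
        # Memoryless firing: per-pulse spike probability 1 - exp(-T_PULSE / TAU)
        p = 1.0 - np.exp(-T_PULSE / TAU)
        spikes = rng.random((n_neurons, N_PULSES)) < p
    else:
        # Deterministic neurons fire once every ceil(TAU / T_PULSE) pulses
        period = int(np.ceil(TAU / T_PULSE))
        spikes = np.zeros((n_neurons, N_PULSES), dtype=bool)
        spikes[:, period - 1::period] = True
    return np.cumsum(spikes.sum(axis=0))

def representation_error(n_neurons, stochastic, n_trials=1000):
    """RMS deviation of the normalized population code from a linear ramp."""
    rng = np.random.default_rng(0)
    ideal = np.arange(1, N_PULSES + 1) / N_PULSES
    errs = []
    for _ in range(n_trials):
        code = population_code(n_neurons, stochastic, rng)
        norm = code / code[-1] if code[-1] > 0 else np.zeros_like(ideal)
        errs.append(np.sqrt(np.mean((norm - ideal) ** 2)))
    return np.mean(errs)

for n in (1, 10, 100):
    print(n, representation_error(n, True), representation_error(n, False))
```

With these assumptions, the error of the stochastic population decreases with population size and falls below the constant deterministic error for populations of order ten neurons, qualitatively matching the trend in Fig. 3d.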

Experimentally, the neurons are stochastic, but this stochasticity is tunable: the spiking behavior of the neuron can be adjusted by changing the parameters of the input voltage pulses. As shown in Fig. 4a, the neuron outputs spikes with a higher probability for each input pulse if the frequency of the incoming pulses is increased. Conversely, at the lower input pulse frequency in Fig. 4b the neuron does not spike, a clear demonstration of its leaky behavior. Another demonstration of the leaky-integrate-and-fire behavior of the neuron is given in Fig. S10 (ESI): increasing the pulse duration to 7.5 ms leads to firing with each applied voltage pulse, whereas 2 ms pulses applied at the same frequency do not lead to spiking of the neuron.


Fig. 4 Tunability of the firing of the neuron. (a) Increasing the frequency of the incoming voltage pulses to 50 Hz leads to a higher firing probability with each input pulse. (b) At a lower frequency of incoming voltage pulses of 20 Hz the neuron does not fire. (c) A lower input voltage of 400 mV, corresponding to connection of the neuron through high resistance synapses, leads to no firing of the neuron.

Changing the voltage also provides a way to change the firing pattern of the neuron. When the neuron is integrated into full networks, this would be equivalent to connecting the neuron through synapses with a low connection strength, i.e. a high resistance. The measurement in Fig. 4c illustrates that a lower voltage drop over the neuron, as caused by a resistive artificial synapse, leads to no spiking of the neuron. Our spiking neuron therefore shows the leaky-integrate-and-fire behavior and synaptic-strength-dependent spiking required for constructing neuromorphic hardware together with artificial synapses.

The energy consumption of the firing pulses can be calculated as E = ½CV², with C the capacitance of the on-chip capacitor and V the voltage of the firing pulse, which yields an energy consumption per firing pulse between 20 and 60 pJ. Even in this early implementation, this is already lower than the energy consumed by a biological neuron (on the order of 100 pJ)31 and by artificial neurons that have previously been implemented in hardware spiking neural networks.32 More energy-efficient silicon artificial neurons that were demonstrated before have not yet been implemented in full networks.33 In addition, neurons based on electronic circuits of traditional transistors and capacitors require a large number of these components,8,33 making the circuits bulky and therefore limiting the maximum density that can be reached on the final chip. In contrast, our design consists of only two components and could therefore be incorporated at higher densities more easily. Moreover, there is no detectable voltage build-up on the capacitor during the stimulation step before firing, meaning that the energy consumption per spike can be reduced by reducing the capacitance of the capacitor without negatively influencing the functioning of the neuron. We discuss further scaling effects in Supplementary note S2 in the ESI.
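For orientation, inverting this formula with the 300 pF on-chip capacitance shows that the quoted 20–60 pJ range corresponds to firing-pulse amplitudes of roughly 0.37 V to 0.63 V (values inferred here from the stated energies, not independently reported):

$$E = \tfrac{1}{2}CV^{2}:\qquad \tfrac{1}{2}\,(300\ \mathrm{pF})\,(0.37\ \mathrm{V})^{2} \approx 20\ \mathrm{pJ},\qquad \tfrac{1}{2}\,(300\ \mathrm{pF})\,(0.63\ \mathrm{V})^{2} \approx 60\ \mathrm{pJ}.$$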

Biological neurons are sensitive to input signals at frequencies similar to those we use in this work.34 Although these frequencies are significantly lower than those of conventional computers, the different way in which information is processed in neuromorphic networks still allows for efficient computation. In fact, neuromorphic networks require synapses and neurons with time constants that are well matched to their input for efficient computation. Thus, interfacing with the natural world, e.g. for learning from visual input, requires operating frequencies similar to those we use here.7,35 These time constants can be difficult to achieve with CMOS-based neuromorphic hardware.36 Our neuron therefore provides a convenient alternative that is natively capable of operating at these frequencies. The ability to incorporate these neurons and the corresponding artificial synapses on flexible substrates could open up novel application areas, including soft robots or even combinations with biological tissue. In addition, the ion conductivity, and the corresponding resistance changes, of halide perovskites can be tuned by light stimulation.37 Perovskite neurons could therefore also open up new possibilities for hybrid electronic-photonic neuromorphic hardware, such as low-power smart sensors.

Conclusion

In conclusion, we have demonstrated the first fully on-chip halide perovskite artificial neuron. The neuron consists of only two components, which lends itself well to high-density integration, and shows clear leaky-integrate-and-fire behavior, important for integration in neuromorphic hardware. The spiking of the neuron is stochastic, similar to biological neurons, yet with a lower energy consumption of 20 to 60 pJ per spike. This stochastic spiking is beneficial for detecting sub-threshold input, as it is in biological neurons. The energy consumption of the neuron could be further reduced by lowering the capacitance of the capacitor. The similarity in device architecture between this artificial neuron and the downscaled MAPbI3 artificial synapses that we have shown before19 allows straightforward implementation of energy-efficient all-halide perovskite neuromorphic hardware.

Data availability

Data for this article, including IV sweeps, pulsed measurements, SEM images and Python scripts for simulations, are available at the AMOLF Institutional Repository (URL supplied at publication).

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

The work of J. J. B. and B. E. received funding from the European Research Council (ERC) under grant agreement No. 947221. The work is part of the Dutch Research Council NWO and was performed at the research institute AMOLF. The authors thank Marc Duursma, Bob Drent, Igor Hoogsteder and Laura Juškėnaitė for continuous technical support.

References

  1. T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, et al., Adv. Neural Inf. Process. Syst., 2020, 33, 1877–1901.
  2. S. M. McKinney, M. Sieniek, V. Godbole, J. Godwin, N. Antropova, H. Ashrafian, T. Back, M. Chesus, G. S. Corrado, A. Darzi, M. Etemadi, F. Garcia-Vicente, F. J. Gilbert, M. Halling-Brown, D. Hassabis, S. Jansen, A. Karthikesalingam, C. J. Kelly, D. King, J. R. Ledsam, D. Melnick, H. Mostofi, L. Peng, J. J. Reicher, B. Romera-Paredes, R. Sidebottom, M. Suleyman, D. Tse, K. C. Young, J. De Fauw and S. Shetty, Nature, 2020, 577, 89–94.
  3. K. He, X. Zhang, S. Ren and J. Sun, in 2015 IEEE International Conference on Computer Vision (ICCV), IEEE, Santiago, Chile, 2015, pp. 1026–1034.
  4. O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps and D. Silver, Nature, 2019, 575, 350–354.
  5. D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel and D. Hassabis, Nature, 2017, 550, 354–359.
  6. A. Mehonic and A. J. Kenyon, Nature, 2022, 604, 255–260.
  7. P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar and D. S. Modha, Science, 2014, 345, 668–673.
  8. E. Chicca, F. Stefanini, C. Bartolozzi and G. Indiveri, Proc. IEEE, 2014, 102, 1367–1388.
  9. B. Govoreanu, G. S. Kar, Y.-Y. Chen, V. Paraschiv, S. Kubicek, A. Fantini, I. P. Radu, L. Goux, S. Clima, R. Degraeve, N. Jossart, O. Richard, T. Vandeweyer, K. Seo, P. Hendrickx, G. Pourtois, H. Bender, L. Altimime, D. J. Wouters, J. A. Kittl and M. Jurczak, in 2011 International Electron Devices Meeting, 2011, pp. 31.6.1–31.6.4.
  10. W. Xu, S. Y. Min, H. Hwang and T. W. Lee, Sci. Adv., 2016, 2, 1–8.
  11. A. Chanthbouala, V. Garcia, R. O. Cherifi, K. Bouzehouane, S. Fusil, X. Moya, S. Xavier, H. Yamada, C. Deranlot, N. D. Mathur, M. Bibes, A. Barthélémy and J. Grollier, Nat. Mater., 2012, 11, 860–864.
  12. D. Kuzum, R. G. D. Jeyasingh, B. Lee and H.-S. P. Wong, Nano Lett., 2012, 12, 2179–2186.
  13. S. H. Jo, K.-H. Kim and W. Lu, Nano Lett., 2009, 9, 496–500.
  14. T. Tuma, A. Pantazi, M. Le Gallo, A. Sebastian and E. Eleftheriou, Nat. Nanotechnol., 2016, 11, 693–699.
  15. D. Zendrikov, S. Solinas and G. Indiveri, Neuromorph. Comput. Eng., 2023, 3, 034002.
  16. M. Al-Shedivat, R. Naous, G. Cauwenberghs and K. N. Salama, IEEE J. Emerg. Sel. Topics Circuits Syst., 2015, 5, 242–253.
  17. C. Eames, J. M. Frost, P. R. F. Barnes, B. C. O’Regan, A. Walsh and M. S. Islam, Nat. Commun., 2015, 6, 2–9.
  18. Z. Xiao and J. Huang, Adv. Electron. Mater., 2016, 2, 1–8.
  19. J. J. de Boer and B. Ehrler, ACS Energy Lett., 2024, 9, 5787–5794.
  20. S. I. Kim, Y. Lee, M. H. Park, G. T. Go, Y. H. Kim, W. Xu, H. D. Lee, H. Kim, D. G. Seo, W. Lee and T. W. Lee, Adv. Electron. Mater., 2019, 5, 1–8.
  21. J.-Q. Yang, R. Wang, Z.-P. Wang, Q.-Y. Ma, J.-Y. Mao, Y. Ren, X. Yang, Y. Zhou and S.-T. Han, Nano Energy, 2020, 74, 104828.
  22. P. Stoliar, J. Tranchant, B. Corraze, E. Janod, M.-P. Besland, F. Tesler, M. Rozenberg and L. Cario, Adv. Funct. Mater., 2017, 27, 1604740.
  23. Y. Zhang, W. He, Y. Wu, K. Huang, Y. Shen, J. Su, Y. Wang, Z. Zhang, X. Ji, G. Li, H. Zhang, S. Song, H. Li, L. Sun, R. Zhao and L. Shi, Small, 2018, 14, 1802188.
  24. H. Kalita, A. Krishnaprasad, N. Choudhary, S. Das, D. Dev, Y. Ding, L. Tetard, H.-S. Chung, Y. Jung and T. Roy, Sci. Rep., 2019, 9, 53.
  25. X. Zhang, W. Wang, Q. Liu, X. Zhao, J. Wei, R. Cao, Z. Yao, X. Zhu, F. Zhang, H. Lv, S. Long and M. Liu, IEEE Electron Device Lett., 2018, 39, 308–311.
  26. M. D. Pickett, G. Medeiros-Ribeiro and R. S. Williams, Nat. Mater., 2013, 12, 114–117.
  27. W. Yi, K. K. Tsang, S. K. Lam, X. Bai, J. A. Crowell and E. A. Flores, Nat. Commun., 2018, 9, 4661.
  28. Z. Wang, M. Rao, J. W. Han, J. Zhang, P. Lin, Y. Li, C. Li, W. Song, S. Asapu, R. Midya, Y. Zhuo, H. Jiang, J. H. Yoon, N. K. Upadhyay, S. Joshi, M. Hu, J. P. Strachan, M. Barnell, Q. Wu, H. Wu, Q. Qiu, R. S. Williams, Q. Xia and J. J. Yang, Nat. Commun., 2018, 9, 1–10.
  29. Z. Wang, S. Joshi, S. E. Savel’ev, H. Jiang, R. Midya, P. Lin, M. Hu, N. Ge, J. P. Strachan, Z. Li, Q. Wu, M. Barnell, G.-L. Li, H. L. Xin, R. S. Williams, Q. Xia and J. J. Yang, Nat. Mater., 2017, 16, 101–108.
  30. J. K. Douglass, L. Wilkens, E. Pantazelou and F. Moss, Nature, 1993, 365, 337–340.
  31. P. Lennie, Curr. Biol., 2003, 13, 493–497.
  32. S. Moradi, N. Qiao, F. Stefanini and G. Indiveri, IEEE Trans. Biomed. Circuits Syst., 2018, 12, 106–122.
  33. A. Rubino, M. Payvand and G. Indiveri, in 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS), IEEE, Genoa, Italy, 2019, pp. 458–461.
  34. N. Fourcaud-Trocmé, D. Hansel, C. van Vreeswijk and N. Brunel, J. Neurosci., 2003, 23, 11628–11640.
  35. N. Qiao, H. Mostafa, F. Corradi, M. Osswald, F. Stefanini, D. Sumislawska and G. Indiveri, Front. Neurosci., 2015, 9, 141.
  36. N. Qiao, C. Bartolozzi and G. Indiveri, IEEE Trans. Biomed. Circuits Syst., 2017, 11, 1271–1277.
  37. S. Ham, S. Choi, H. Cho, S. I. Na and G. Wang, Adv. Funct. Mater., 2019, 29, 1–8.

Footnote

Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d4mh01729c

This journal is © The Royal Society of Chemistry 2025