

Neuromorphic computation with spiking memristors: habituation, experimental instantiation of logic gates and a novel sequence-sensitive perceptron model

Abstract

Memristors have been compared to neurons and synapses, suggesting they would be well suited to neuromorphic computing. A change in voltage across a memristor causes a current spike, which imparts a short-term memory to the device and allows for through-time computation: performing arithmetical operations and sequential logic, or modelling short-term habituation to a stimulus. Using simple physical rules, simple logic gates such as XOR, and novel, more complex gates such as the arithmetic full adder (AFA), can be instantiated in sol-gel TiO$_2$ plastic memristors. The adder makes use of the memristor's short-term memory to add together three binary values and outputs the sum, the carry digit and even the order in which they were input, allowing for logically (but not physically) reversible computation. Only a single memristor is required to instantiate each gate, as additional input/output ports can be replaced with extra time-steps, allowing a single memristor to perform a hitherto unexpectedly large amount of computation; this may mitigate the memristor's slow operation speed and may relate to how neurons perform similarly large computations despite their own slow operation speeds. These logic gates can be understood by modelling the memristors as a novel type of perceptron: one that is sensitive to input order. Because the memristor's short-term memory can change the input weights applied to later inputs, the memristor gates cannot be accurately described by a single perceptron; they require either a network of time-invariant perceptrons or a single sequence-sensitive, self-reprogrammable perceptron. Thus, the AFA is best described as a sequence-sensitive perceptron that sorts binary inputs into classes corresponding to the arithmetical sum of the inputs. Co-development of memristor hardware alongside software models (sequence-sensitive perceptrons) in trained neural networks would allow the porting of modern deep neural network architectures to low-power hardware neural-net chips.
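The two ideas in the abstract can be illustrated with a toy sketch (not the paper's model): an `afa` function that classifies a three-bit input sequence by its arithmetic sum, and a hypothetical `SequenceSensitivePerceptron` whose effective input weight is modulated by a decaying short-term memory of prior inputs, loosely mimicking a spiking memristor's state. The class name, the decay parameter and the weight-update rule are illustrative assumptions, not the device physics reported in the paper.

```python
def afa(bits):
    """Arithmetic full adder: classify a 3-bit input sequence by its
    arithmetic sum (0-3), returning the (carry, sum) output bits."""
    total = sum(bits)               # class label = arithmetic sum
    return total >> 1, total & 1    # carry digit, sum digit


class SequenceSensitivePerceptron:
    """Toy perceptron whose weight on each new input depends on the
    inputs seen before it (hypothetical update rule, for illustration)."""

    def __init__(self, decay=0.5):
        self.decay = decay
        self.state = 0.0            # short-term memory of past inputs

    def step(self, x):
        w = 1.0 + self.state        # weight grows with remembered activity
        out = w * x
        self.state = self.decay * self.state + x  # decaying memory update
        return out
```

A quick check shows the order sensitivity: feeding the bits `[1, 0, 1]` and `[1, 1, 0]` (same arithmetic sum) to fresh instances yields different output traces, because the weight applied to each later input depends on what came before, which is why a single time-invariant perceptron cannot reproduce the behaviour.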


Publication details

The article was received on 01 Jun 2018, accepted on 08 Aug 2018 and first published on 08 Aug 2018


Article type: Paper
DOI: 10.1039/C8FD00111A
Citation: Faraday Discuss., 2018, Accepted Manuscript


E. M. Gale, Faraday Discuss., 2018, Accepted Manuscript, DOI: 10.1039/C8FD00111A
