The human brain consists of more than 100 billion interconnected neurons. During the performance of cognitive functions, these neurons transmit information to one another in the form of spikes. Artificial neural networks, by contrast, are built from neurons that operate on real-valued inputs and outputs, whereas the neural networks of the biological brain use spikes (action potentials) to transmit information between neurons. This theory of spike-based transmission of information between neurons has been known since the 1920s. A neuron emits a short pulse of electrical energy (a spike) as a signal once it has received a sufficient amount of energy from other neurons at its inputs. This mechanism forms the basis of spiking neural networks (SNNs).
By definition, an SNN is a network whose neurons send short pulses of electrical energy (spikes) as signals whenever they have received a sufficient amount of energy at their inputs. This mechanism has been captured in several mathematical models suitable for use in computers.
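One of the simplest such mathematical models is the leaky integrate-and-fire neuron. The sketch below is a minimal illustration of the mechanism just described (accumulate input energy, fire a spike once a threshold is crossed, then reset); the parameter values and function name are illustrative assumptions, not taken from the text.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: integrates its input,
    emits a spike when the membrane potential crosses the
    threshold, and then resets. Parameters are illustrative."""
    v = 0.0          # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i      # accumulate input, with leak
        if v >= threshold:    # enough energy received?
            spikes.append(1)  # emit a spike
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant weak input accumulates until the neuron fires periodically:
print(lif_neuron([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note that the neuron does not fire on every input: several weak inputs must accumulate before a spike is emitted, which is exactly the "sufficient amount of energy" condition above.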
Spikes are the main means of processing information in the nervous system, but "neural coding" - the mechanism by which information in the form of spikes is encoded and later interpreted - remains unclear. In the 1920s it was discovered that the rate at which a neuron fires spikes increases with the intensity of the stimulus. This observation led to the "rate coding" hypothesis, according to which neurons communicate purely through the number and frequency of their spikes. More recent research, however, has shown that the timing of individual spikes also plays an important role. This observation supports the "temporal coding" hypothesis, in which the precise timing of individual spikes is used to encode and transmit information.
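The difference between the two hypotheses can be sketched with a toy example. Assuming spike trains are represented as binary sequences over discrete time steps (an assumption for illustration only), two trains can be identical under rate coding yet distinct under temporal coding:

```python
def rate_code(spike_train):
    """Rate coding: only the number of spikes in the window matters."""
    return sum(spike_train)

def temporal_code(spike_train):
    """Temporal coding: the precise times (indices) of the spikes matter."""
    return [t for t, s in enumerate(spike_train) if s]

# Two spike trains over 8 time steps:
a = [1, 0, 0, 1, 0, 0, 1, 0]
b = [0, 0, 1, 1, 1, 0, 0, 0]

print(rate_code(a), rate_code(b))          # 3 3 -- indistinguishable by rate
print(temporal_code(a), temporal_code(b))  # [0, 3, 6] [2, 3, 4] -- distinct timings
```

Under rate coding the two trains carry the same message; under temporal coding they are different signals, which is why precise spike timing can carry additional information.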