Delayed Spiking Neural P Systems with Scheduled Rules

Due to the inevitable delays in signal conversion and transmission, time delay is bound to occur between neurons. Therefore, it is necessary to introduce the concept of time delay into membrane computing models. Spiking neural P systems (SN P systems), an attractive type of neural-like P system in membrane computing, have received wide attention. Inspired by the phenomenon of time delay, in this work a new variant of spiking neural P systems, called delayed spiking neural P systems (DSN P systems), is proposed. Compared with standard spiking neural P systems, the proposed systems achieve time control by attaching a schedule to spiking rules and forgetting rules, and the schedule is also used to realize the system delay. A schedule indicates the time difference between receiving and outputting spikes, and it also restricts the system to working within a certain time, meaning that a rule can only be used within a specified time range. We specify that each rule is performed only during its continuous schedule, during which the neuron is locked and cannot send or receive other spikes. If a neuron has no schedule covering a given time, it neither receives nor sends spikes at that time. Moreover, the universality of DSN P systems in both generating and accepting modes is proved, and a universal DSN P system with 81 neurons for computing functions is also given.


Introduction
As an important branch of natural computing, membrane computing enriches the theoretical framework of biomolecular computing. It is inspired by the biological structures and functions of cells and tissues and has good distributed and parallel computing capabilities. At present, there are four main types of membrane systems: cell-like P systems [1,2], tissue P systems [3,4], numerical P systems [5,6], and neural P systems [7]. Among them, spiking neural P systems, which belong to neural P systems and are known as the third-generation neural network model in membrane computing, are inspired by the biological phenomenon that neurons communicate with each other through spikes. In recent years, SN P systems have received great attention, and many variants of SN P systems have been discussed.
SN P systems were first proposed in 2006 [8]. An SN P system can be regarded as a directed graph in which nodes represent neurons and arcs represent synapses. Each neuron mainly contains two components, spikes and rules, where rules include spiking rules (E/a^α) → a; d and forgetting rules a^s → λ. The two kinds of rules operate differently. Assuming that a spiking rule is used at time t, it consumes α (α ≥ 1) spikes and produces one spike in the neuron at time t + d, and the produced spike is then transmitted to the adjacent neurons through the synapses. The function of a forgetting rule is to eliminate s spikes. Over the years, many variants of SN P systems have been proposed considering different biological characteristics and phenomena, such as SN P systems with astrocytes [9], SN P systems with neuron division and dissolution [10], cell-like SN P systems [11], numerical SN P systems [12], and SN P systems with polarizations [13].
Because SN P systems are highly extensible, research on them in recent years has mainly focused on the theoretical side, especially the establishment of various computing models. Many new variants of SN P systems have been proposed by changing rules, synapses, and structures. Some studies have changed the forms of rules, for instance, SN P systems with white hole neurons [14], SN P systems with request rules [15], SN P systems with inhibitory rules [16], SN P systems with communication on request [17,18], nonlinear SN P systems [19], and SN P systems with target indications [20]. Motivated by the inhibitory way in which neurons communicate, SN P systems with antispikes have been studied [21,22]. There are also changes in the synapses between neurons, for example, SN P systems with multiple channels [23], SN P systems with inhibitory synapses [24], SN P systems with rules on synapses [25,26], SN P systems with thresholds [27], and SN P systems with scheduled synapses [28,29]. Based on the self-organizing and adaptive characteristics of artificial neural networks, SN P systems with plasticity structures were established [30,31]. Abstracted from neural network models, some new neural P systems were proposed [32,33]. Moreover, motivated by the characteristics of the dendrites of nerve cells, dendrite P systems were investigated [34].
At present, application research on SN P systems is still limited. However, in order to solve practical problems, many scholars have studied applications of membrane computing and obtained results in several areas, for example, optimization problems [35,36], fault diagnosis [37,38], and image recognition [39,40]. Most application research combines an optimization algorithm with a P system model and designs an approximate optimization algorithm by using the P system, mainly to solve clustering problems [41,42]. There are also studies combining P systems with neural networks, and some achievements have been made in image processing. The development of applications of membrane computing has attracted much attention; it is a new expansion and breakthrough in the field and advances both theoretical and applied research.
In the theoretical study of P systems, their computing power and efficiency are analysed and discussed. The main purpose of studying computing efficiency is to analyse whether the proposed system can solve hard computational problems, such as the traveling salesman problem [43] and the 0-1 knapsack problem [44], within a feasible time. To prove computing power, a P system is compared with the Turing machine to analyse its universality [45,46]. The theoretical research on spiking neural P systems mainly concerns their computing power in generating and accepting modes, and the proposed SN P system can also simulate a register machine.
A register machine M is a five-tuple M = (m, H, l_0, l_h, I), where m is the number of registers, H is the set of instruction labels, l_0 is the initial label, l_h is the final label, and I is the set of instructions of the following three forms:
(i) l_i: (ADD(r), l_j, l_k) is the ADD instruction. The value stored in register r is increased by 1, and the machine then moves nondeterministically to one of the instructions with labels l_j and l_k.
(ii) l_i: (SUB(r), l_j, l_k) is the SUB instruction. If register r is not empty, its value is decreased by 1 and the machine moves to the instruction with label l_j; if register r is empty, the machine moves to the instruction with label l_k.
(iii) l_h: HALT is the halt instruction. When the computation reaches the halt instruction, the computation stops.
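The three instruction types above can be sketched as a small interpreter. This is a minimal illustration; the dictionary-based program encoding, the function names, and the example program are our own assumptions, not taken from the paper.

```python
# A minimal register machine interpreter following the definition above.
import random

def run(program, m, l0, lh, max_steps=10_000):
    regs = [0] * m
    label = l0
    for _ in range(max_steps):
        if label == lh:          # halt instruction reached
            return regs
        op = program[label]
        if op[0] == "ADD":       # l_i: (ADD(r), l_j, l_k)
            _, r, lj, lk = op
            regs[r] += 1
            label = random.choice((lj, lk))   # nondeterministic jump
        else:                    # l_i: (SUB(r), l_j, l_k)
            _, r, lj, lk = op
            if regs[r] > 0:
                regs[r] -= 1
                label = lj       # nonempty: decrement and go to l_j
            else:
                label = lk       # empty: go to l_k
    raise RuntimeError("no halt within step budget")

# Deterministic example: add 3 to register 0, then drain it into register 1.
prog = {
    0: ("ADD", 0, 1, 1),
    1: ("ADD", 0, 2, 2),
    2: ("ADD", 0, 3, 3),
    3: ("SUB", 0, 4, 5),
    4: ("ADD", 1, 3, 3),
}
print(run(prog, m=2, l0=0, lh=5))  # -> [0, 3]
```

Both jump targets of each ADD coincide here, so this particular program runs deterministically even though ADD is nondeterministic in general.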
Generally, delays in existing SN P systems mean that a neuron is closed for a certain time and then reopens to accept spikes. In this paper, by contrast, a schedule is attached to the spiking and forgetting rules to express the delay, which more naturally captures the fact that there is a delay between input and output. Therefore, delayed SN P systems (DSN P systems) are proposed. A rule is only allowed to run during a specific period of time, which represents the time delay between the input and output of spikes. The activation of a reference neuron, that is, the application of a rule, determines when a rule in a connected neuron takes effect or becomes available. For example, the schedule [d, d + 1) means that the rule in the neuron is available only from time d up to, but not including, time d + 1. In other words, the rule is activated at time d and produces and sends spikes before time d + 1.
This work mainly proposes DSN P systems, a new variant of SN P systems, and expresses the time difference between input and output in the form of a time interval, called a schedule. The schedule is set on spiking rules and forgetting rules to determine when spikes are processed in a neuron. The operation of the system is continuous over total time, and once a rule satisfies the activation condition of its neuron, the neuron is locked by that rule; that is to say, the rule must be used, and can only be used, in the specified period of time. It is notable that classic SN P systems apply rules under the principle of maximal parallelism, whereas DSN P systems instead execute rules in order according to their schedules. In addition, the computing power of DSN P systems in generating and accepting modes is proved: DSN P systems are Turing universal as number generating and accepting devices. The structure of the rest of this paper is as follows. Section 2 proposes the definition of DSN P systems and gives an example to illustrate how a DSN P system operates. Section 3 proves the Turing universality of DSN P systems in the generating mode and the accepting mode, respectively.
Then, a small universal DSN P system for computing functions is given in Section 4. Finally, Section 5 summarizes the work and outlines future research directions.

Delayed Spiking Neural P Systems
In this section, a new variant of spiking neural P systems, called delayed spiking neural P systems (DSN P systems), is proposed. First, the definition of the system is given, together with detailed explanations. Then, an example is given to show how the system works.
where (1) O = {a} is the alphabet and a denotes the spike.
(2) σ_1, σ_2, ..., σ_m are neurons of the form (n_i, R_i), with n_i ∈ N+, where R_i is a set of rules of the following two forms: (i) spiking rules of the form (E/a^α) → a; [t_1, τ, t_2), where E is a regular expression over a, α ≥ 1, and 0 ≤ t_1 ≤ τ < t_2; (ii) forgetting rules of the form a^β → λ; [t_1, τ, t_2), where β ≥ 1 and 0 ≤ t_1 ≤ τ < t_2.
How the rules work is explained here. A spiking rule (E/a^α) → a; [t_1, τ, t_2) can be executed on the condition that the neuron contains at least α spikes and the running time conforms to the schedule. E is a regular expression over a. Each time the rule is applied, α spikes are consumed to produce one spike, which is transmitted to the connected neurons. Assume that σ_i contains b spikes and a spiking rule. The rule can be applied if and only if a^b belongs to the language of E, b ≥ α, and the performing time conforms to the schedule of the rule. When the number of spikes in the neuron is less than α, the rule cannot be used. In the schedule [t_1, τ, t_2), t_1 is the time when the neuron receives spikes, τ indicates the performing time of the rule, and the neuron outputs its spike before time t_2. If t_1 ≠ τ, the neuron receives spikes at time t_1, and the rule in the neuron is performed at time τ. If t_1 = τ, the spiking rule can be written in the shorthand (E/a^α) → a; [t_1, t_2), meaning the neuron receives spikes while the rule is executed, without stagnation; in other words, there is no delay between receiving spikes and executing the rule. The schedule makes the rules run in a certain order. And if there is no delay between receiving and transmitting spikes, that is, spikes are transmitted out of the neuron in the same step as the rule execution, spiking rules are written as (E/a^α) → a and forgetting rules as a^β → λ.
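The applicability condition just described can be sketched in a few lines. This is a hedged illustration, not the formal semantics: the tuple encoding, the function name, and the event-time framing are our assumptions, with E checked against the neuron's entire spike content as in standard SN P systems.

```python
# Sketch of the applicability test for a spiking rule (E/a^alpha) -> a; [t1, tau, t2).
import re

def rule_applicable(rule, spikes, now):
    """A rule fires iff the spike content matches E, at least alpha spikes
    are present, and `now` is the rule's performing time tau."""
    E, alpha, t1, tau, t2 = rule
    assert 0 <= t1 < t2 and t1 <= tau < t2   # schedule is well-formed
    matches_E = re.fullmatch(E, "a" * spikes) is not None
    return matches_E and spikes >= alpha and now == tau

# Rule (a(aa)*/a) -> a; [1, 2, 3): odd spike count, consume 1, perform at tau = 2.
rule = ("a(aa)*", 1, 1, 2, 3)
print(rule_applicable(rule, spikes=3, now=2))  # odd count, at tau     -> True
print(rule_applicable(rule, spikes=2, now=2))  # even count fails E    -> False
print(rule_applicable(rule, spikes=3, now=1))  # receiving time, not tau -> False
```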
Note that the performing time of this system is continuous. Continuity refers to the schedules of successive rules joining without gaps: a rule can only be used if its schedule is continuous with the schedule of the superior connected neuron. For instance, suppose the system has three neurons with three rules whose schedules are [t_1, t_2), [t_2, t_3), and [t_3, t_4). Then we consider that the system is continuous in [t_1, t_4), and if a neuron fires in [t_1, t_2), its connected neurons fire only when their rules have the schedule [t_2, t_3). In our figures, we indicate a reference neuron with the symbol "•", denoted as σ•_i. The reference neuron σ•_i is activated at time t_1 and outputs its spike before time t_2, which corresponds to a rule with schedule [t_1, t_2); the connected neuron is then only available beginning at time t_2. If the neuron σ•_i applies a rule and produces a spike but no rule in an adjacent neuron is available at the time of outputting, then no neuron receives this spike.
A forgetting rule a^β → λ; [t_1, τ, t_2) can be used only when there are β spikes in neuron σ_i and the time conforms to its schedule, beginning at time t_1. The function of a forgetting rule is to consume β spikes in the neuron. The schedule of a forgetting rule has the same meaning as that of a spiking rule. Each neuron may contain one or more forgetting rules, or none. Every neuron contains at least one spiking rule or forgetting rule, and spiking rules and forgetting rules can coexist in a neuron, but only one rule can be used in each neuron at each time. All neurons in the system work in parallel, while the rules in each neuron are applied sequentially. In other words, if there are at least two rules in a neuron, then according to the time control over the rules, only the one rule that meets the schedule can be selected. The configuration of system Π at a given moment consists of the number of spikes contained in each neuron. At the initial moment, every neuron in system Π is open with no rule being applied, which is denoted as the initial configuration C_0. When no neuron in system Π has a usable rule and all neurons remain open, the system reaches the halting configuration. System Π evolves from one configuration C_1 to another configuration C_2 by executing the rules in the neurons; such a step is called a transition from C_1 to C_2, denoted C_1 ⇒ C_2. A computation of system Π consists of a series of transitions starting from the initial configuration, and the computation stops if and only if it reaches the halting configuration.
In the generating mode, the system contains at least one output neuron, whose function is to send the generated spikes to the environment. There are several definitions of the computation result of spiking neural P systems [47,48]. In this paper, the result is the difference between the first two times that neuron σ_out fires and sends spikes to the environment. For instance, if t_1 is the time of the first firing of neuron σ_out and t_2 is the time of the second firing, the result is t_2 − t_1. Since the system in generating mode is nondeterministic, it produces a set of numbers, denoted N_2(Π), where the subscript 2 indicates that the result is determined by the first two spikes. When the system is in the accepting mode, it has input neurons whose function is to receive spikes from the environment. A number n to be recognized by the system is encoded as the spike train 10^n 1 (a spike train is a binary string over {0, 1}, where 1 represents one spike and 0 represents no spike). The system then reads the spike train through the input neurons. The set of all numbers accepted by the system is denoted N_acc(Π).
The set of all numbers generated or accepted by a DSN P system in generating or accepting mode is denoted N_α(Π), where α ∈ {2, acc}. The family of all sets N_α(Π) is denoted N_α DSNP_m^n, meaning DSN P systems with no more than m rules and no more than n neurons in the generating or accepting mode. If the value of m or n is not bounded, it is replaced by *.

Illustrative Example
An example of a DSN P system with five neurons is given in Figure 1, and the number of spikes in each neuron at each time is given in Table 1. In this illustrative example, five neurons are used to explain how spiking rules and forgetting rules perform. Assume that neuron σ_g1 has one spike at time t; it produces one spike a and sends it to neurons σ_g2 and σ_g3, respectively. Neuron σ_g3 has the rule a → a; [1, 2, 3), which means that neuron σ_g3 receives the spike coming from neuron σ_g1 at time t + 1. In this way, at time t + 1, σ_g2 and σ_g3 both have one spike.
Before time t + 2, σ_g2 sends a spike to neurons σ_g3 and σ_g4, so neurons σ_g3 and σ_g4 each receive one spike at time t + 2. At the same time, neuron σ_g3 performs the rule a → a; [1, 2, 3); since σ_g3 has received one spike from σ_g2, the rule a → λ; [2, 3) will be used. Meanwhile, the rule a → a; [2, 3) in neuron σ_g4 can be applied. Therefore, two spikes reach neuron σ_g5, one from neuron σ_g3 and another from neuron σ_g4, activating the rule a² → a; [3, 4) at time t + 3. One spike is then sent to the other connected neurons before time t + 4.
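The narrated evolution can be replayed as a hand-coded trace. The counter-based bookkeeping and the relative timing comments are our own reading of the narration above, not part of the formal model.

```python
# Hand-coded trace of the five-neuron example, step by step.
from collections import Counter

spikes = Counter({"g1": 1})

# t: g1 fires a -> a; the spike reaches g2 and g3 at t+1
spikes["g1"] -= 1
spikes["g2"] += 1; spikes["g3"] += 1

# t+1..t+2: g2 fires; g3 performs its rule a -> a; [1,2,3) at tau = 2
spikes["g2"] -= 1                      # consumed by g2's spiking rule
spikes["g3"] -= 1                      # consumed by g3's spiking rule
spikes["g3"] += 1; spikes["g4"] += 1   # t+2: g2's spike arrives at g3 and g4

# t+2..t+3: g3 forgets g2's spike (a -> lambda; [2,3)); g4 fires a -> a; [2,3)
spikes["g3"] -= 1
spikes["g4"] -= 1
spikes["g5"] += 2                      # t+3: the spikes from g3 and g4 arrive
print(spikes["g5"])                    # -> 2

# t+3..t+4: g5 applies a^2 -> a; [3,4) and emits one spike before t+4
spikes["g5"] -= 2
```

At the end of the trace every counter is back to zero, matching a halted system with no pending spikes.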

Computing Power as the Number Generator and Acceptor
This section proves the computing power of DSN P systems as generating and accepting devices and shows that they are Turing universal. In order to prove the computing power of a DSN P system, a register machine M = (m, H, l_0, l_h, I) is considered. Register r in M corresponds to neuron σ_r in system Π. All instructions also correspond to neurons in Π; for example, an ADD instruction l_i corresponds to neuron σ_li. If register r stores the number n, then 2n spikes are stored in neuron σ_r.
There are two working modes of the register machine: the generating mode and the accepting mode. The system Π in generating mode has three kinds of modules: module ADD, module SUB, and module OUTPUT. In accepting mode, the system has module ADD, module SUB, and module INPUT. The set of numbers generated by the register machine M is denoted N(M). In both modes, the Turing computable sets of natural numbers can be obtained; that is, N(M) = NRE, where NRE denotes the family of sets of natural numbers computable by Turing machines.

The DSN P System as Generator
Proof. In generating mode, all registers are empty in the initial state, and the computation starts with instruction l_0. The number stored in register 1 at the end of the computation is the number generated by the register machine. Assume all neurons are empty, except neuron σ•_l0, which has one spike. The module ADD, shown in Figure 2, is used to simulate the ADD instruction l_i: (ADD(r), l_j, l_k). The reference neuron in this module is neuron σ_li, denoted σ•_li. Assume the rule in neuron σ•_li is activated at time t; it sends one spike to neurons σ_b1 and σ_b2, respectively. At time t + 1, neurons σ_b1 and σ_b2 receive this spike. Meanwhile, one of the rules in σ_b2 is nondeterministically chosen to perform, so there are two cases: (1) At time t + 1, if the rule a → a; [1, 2) is used, neuron σ_b1 receives one spike from neuron σ_b2 at time t + 2, so that the rule a → a; [2, 3) in neuron σ_b1 can be activated. At the same time, neuron σ_lk receives one spike a from neuron σ_b2 at time t + 2, and the rule a → a; [1, 2, 3) in neuron σ_b1 is applied. At time t + 3, both σ_b3 and σ_r receive two spikes from neuron σ_b1, so the rule a² → λ; [3, 4) can be used. Therefore, neuron σ_lj remains empty, and neuron σ_r has two spikes.
(2) If the other rule in neuron σ_b2 is applied, then neuron σ_lk remains empty at time t + 2, and the rule a → a; [1, 2, 3) in neuron σ_b1 is enabled. Each of neurons σ_b3 and σ_r receives one spike from neuron σ_b1 at time t + 3, so the rule a → a; [3, 4) in neuron σ_b3 can perform, and then, at time t + 4, neurons σ_lj and σ_r receive one spike, respectively. In this way, there are two spikes in neuron σ_r, corresponding to increasing register r by 1.
So far, the module ADD has been proved to simulate correctly. The following is the simulation proof of module SUB. The structure and rules of module SUB are shown in Figure 3; it is used to simulate the SUB instruction l_i: (SUB(r), l_j, l_k). The simulation of module SUB also starts at the reference neuron σ•_li. Neuron σ•_li activates at time t and sends one spike to neurons σ_r and σ_c1 before time t + 1; that is, neurons σ_r and σ_c1 receive this spike at time t + 1. Depending on whether neuron σ_r has spikes at the initial time, there are two cases: (1) If neuron σ_r is not empty, it has 2n spikes, so neuron σ_r has 2n + 1 spikes (an odd number) at time t + 1, and the rule (a(aa)+/a³) → a; [1, 2) can be used. At time t + 2, both neurons σ_c3 and σ_c2 receive one spike from neuron σ_r, and the rules a → λ; [2, 3) in neurons σ_c2 and σ_c3 can be used. At the same time, σ_c1 also receives one spike from σ_r; that is, the rule a → a; [2, 3, 4) in neuron σ_c1 can be used. At time t + 3, neurons σ_c2 and σ_c3 each get one spike from neuron σ_c1, so the rules a → λ; [3, 4) in σ_c2 and σ_c3 are performed. At the next time, each of neurons σ_c2 and σ_c3 receives one spike from σ_c1, and the rule a → a; [4, 5) in σ_c2 and the rule a → λ; [4, 5) in σ_c3 are applied. In this way, neuron σ_lk remains empty without receiving any spike, and σ_lj gains one spike. (2) If there is no spike in neuron σ_r, then at time t + 1 the rules a → a; [1, 2, 3) allow neurons σ_r and σ_c1 to obtain one spike from neuron σ•_li, and these rules are activated at time t + 2. Then each of neurons σ_c2 and σ_c3 obtains two spikes, one from σ_c1 and one from σ_r, so the rule a² → λ; [3, 4) in σ_c2 and the rule a² → a; [3, 4) in σ_c3 can fire. Therefore, σ_lk receives one spike from σ_c3, and neuron σ_lj remains empty after time t + 4.
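The parity trick used by module SUB can be isolated in a short sketch: register value n is stored as 2n spikes, so the extra spike from the activating neuron makes the count odd exactly when the register is nonempty, which is what the regular expression a(aa)+ detects. The function name and return strings are our own illustration.

```python
# Sketch of the parity encoding behind module SUB's case split.
import re

def sub_case(n):
    """Return which branch of the SUB module applies for register value n."""
    spikes = 2 * n + 1                          # 2n stored + 1 from sigma_li
    if re.fullmatch(r"a(aa)+", "a" * spikes):   # odd and >= 3: register nonempty
        return "decrement, jump to l_j"
    return "empty, jump to l_k"

print(sub_case(3))  # 7 spikes match a(aa)+  -> decrement, jump to l_j
print(sub_case(0))  # a single spike fails it -> empty, jump to l_k
```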
In particular, there is no interference between SUB modules, even if there are multiple SUB instructions sharing a common register. An illustrative example is given in Figure 4. Assume two SUB instructions l_i: (SUB(r), l_j, l_k) and l_v: (SUB(r), l_j′, l_k′) are simulated. When neuron σ_r fires, neurons σ_c3′ and σ_c2′ get one spike at time t + 2 or time t + 3, but only the rules a → λ; [2, 3) and a → λ; [3, 4) in neurons σ_c3′ and σ_c2′ are enabled, so neurons σ_lj′ and σ_lk′ remain empty. Therefore, the SUB instruction can be simulated correctly by module SUB. The structure of module OUTPUT, shown in Figure 5, is used to halt the computation. In the initial state, neuron σ_out has one spike. Assume that neuron σ_1 has 2n spikes, corresponding to the number n stored in register 1 of M, and neuron σ•_lh receives one spike at time t. Then neuron σ_1 gets one spike at time t + 1. Thus neuron σ_1 has 2n + 1 spikes, an odd number, and at time t + 2 the rule a(aa)+ → a is activated and sends a spike to neurons σ_d1 and σ_out, respectively. In this way, σ_out has two spikes, an even number. The rule in σ_out can be applied at the next time and sends one spike to the environment. At the same time, neuron σ_d1 sends one spike to neuron σ_out and neuron σ_1 fires again. Now there are three spikes in neuron σ_out, and no rule in σ_out can be enabled since the number of spikes is odd.

until the second spike is sent to the environment; that is, the result is (t + n + 2) − (t + 2) = n. The results at each time are shown in Table 2. These three types of modules simulate the corresponding instructions correctly; that is, system Π correctly simulates the register machine M. Therefore, the conclusion of Theorem 1 is proved.

The DSN P System as Acceptor
Proof. In accepting mode, all registers except register 1 are empty in the initial state. As in the proof of Theorem 1, the proof of Theorem 2 needs to simulate a register machine M′, which is a deterministic register machine. Similarly, register r corresponds to neuron σ_r, and if the number n is stored in register r, then there are 2n spikes in neuron σ_r. If the register machine reaches the halting instruction l_h and the computation stops, the number stored in register 1 is accepted by the register machine. The proof for module SUB is the same as in Theorem 1; therefore, in the proof of Theorem 2, we only need to show that the modules ADD′ and INPUT can be simulated correctly. In the initial configuration of Π, all neurons are empty, except for the reference neuron σ•_l0, which contains one spike. Module ADD′, shown in Figure 6, differs from module ADD shown in Figure 2. Module ADD′ simulates the deterministic ADD instruction l_i: (ADD(r), l_j). Assume that neuron σ•_li has one spike and activates at time t. Then each of neurons σ_r and σ_m1 receives one spike from σ_li at time t + 1, and the rule in neuron σ_m1 can be applied. At time t + 2, both neurons σ_r and σ_lj receive a spike, and at this time neuron σ_r has two spikes. The simulation of module ADD′ is complete, since there are two spikes in σ_r and the content of register r has increased by one.
From the above proof, module ADD′, module SUB, and module INPUT can be simulated correctly. In this way, the proof of Theorem 2 is complete.

A Small Universal DSN P System for Computing Functions
This section mainly proves the small universality of DSN P systems for computing functions. First, some brief theoretical explanations are needed. Turing computable functions can be computed by register machines. Here is how a register machine M computes a function f: N^k → N. Register machine M stores the k parameters n_1, n_2, ..., n_k in the first k registers, with the other registers empty. The computation starts with instruction l_0 and ends with instruction l_h. When the computation stops, the result is stored in a specified register, with the rest of the registers empty. It is notable that the register machine M is deterministic in computing mode. In this paper, we prove the universality of the system for computing functions by simulating a universal register machine M_u.
When proving the small universality of the DSN P system as a function computing device, it is usual to consider the small universal register machine M_u = (8, H, l_0, l_h, I) explained in detail in [49]. M_u contains 8 registers and 23 instructions, as shown in Figure 8. The key to proving the universality of the system is to find a specific number of neurons that can simulate the register machine M_u, and to minimize that number through the combination of some ADD and SUB instructions. However, since the SUB instruction l_19 involves register 0, which is used to store the computed result, M_u cannot be simulated directly. To solve this problem, a new register 8 is added as the output register, whose function is to save the computed result. Therefore, register machine M_u = (8, H, l_0, l_h, I) is extended to M_u′ = (9, H, l_0, l_h′, I), replacing the halting instruction l_h with three new instructions (one SUB instruction on register 0, one ADD instruction on register 8, and a new halting instruction l_h′). In this way, register machine M_u′ contains 9 registers (0, ..., 8) and 25 instructions (14 SUB instructions, 10 ADD instructions, and 1 halting instruction).

Theorem 3.
There is a universal DSN P system with 81 neurons for computing functions. The overall workflow of the DSN P system is shown in Figure 9, including five types of modules: the INPUT module, ADD modules, SUB modules, the OUTPUT module, and combination modules whose function is to simulate combinations of SUB and ADD instructions. First, module INPUT reads the encoded parameters of function f into the system, and then the process enters the register machine simulator. It is important to note that ADD instructions l_i: (ADD(r), l_j) are simulated by module ADD′ shown in Figure 6 when simulating register machine M_u′, and module SUB is shown in Figure 3. The final stage is module OUTPUT, structured as shown in Figure 5, except that the output register is register 8 instead of register 1.
In addition, the construction of module INPUT used in the proof of this theorem differs from the previous one used for accepting devices. Assume that (θ_0, θ_1, ...) is a fixed enumeration of the computable functions; for any θ_x(y), register machine M_u′ satisfies θ_x(y) = M_u′(g(x), y). In register machine M_u′, register 1 stores the parameter g(x) and register 2 stores the parameter y. The module INPUT constructed for this purpose is shown in Figure 10. In the beginning, the two parameters g(x) and y are encoded as the spike train 10^{g(x)−1} 10^{y−1} 1. Then, through the computation of module INPUT, the two parameters g(x) and y are stored in neurons σ_1 and σ_2, represented by 2g(x) spikes and 2y spikes, respectively. That is, module INPUT reads the spike train 10^{g(x)−1} 10^{y−1} 1 through neuron σ_in and loads σ_1 with 2g(x) spikes and σ_2 with 2y spikes. Assume that neuron σ_in activates at time t. Each of neurons σ_m1, σ_m2, σ_m3, σ_m4, and σ_m5 receives one spike a at time t + 1. Thus neurons σ_m3 and σ_m2 activate, and neuron σ_m2 sends one spike to σ_1 and σ_m3, respectively. Meanwhile, neuron σ_m3 sends one spike to σ_1 and σ_m2, so that neuron σ_1 receives two spikes at time t + 2 and each of neurons σ_m3 and σ_m2 again holds one spike. This repeats until neuron σ_in receives the second input spike, that is, for g(x) − 1 steps, after which the rules in σ_m3 and σ_m2 can no longer fire. In this way, neuron σ_1 gets 2g(x) spikes, corresponding to the number g(x) stored in register 1.
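The net effect of module INPUT on both parameters can be sketched numerically. The direct string decoding here is our shortcut for the neuron-level process; the function name is an assumption.

```python
# Sketch of what module INPUT computes: the spike train 1 0^{g(x)-1} 1 0^{y-1} 1
# is turned into 2*g(x) spikes in sigma_1 and 2*y spikes in sigma_2.
def input_module(gx, y):
    train = "1" + "0" * (gx - 1) + "1" + "0" * (y - 1) + "1"
    ones = [i for i, b in enumerate(train) if b == "1"]
    g_decoded = ones[1] - ones[0]         # gap between 1st and 2nd spike = g(x)
    y_decoded = ones[2] - ones[1]         # gap between 2nd and 3rd spike = y
    return 2 * g_decoded, 2 * y_decoded   # spike counts in sigma_1, sigma_2

print(input_module(3, 2))  # -> (6, 4)
```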
Simultaneously, each of neurons σ_m1, σ_m4, and σ_m5 has two spikes when neuron σ_1 gets its 2g(x) spikes. Thus the rules in σ_m4 and σ_m5 are enabled, each consuming one spike. At the next time, neuron σ_2 receives two spikes, one from σ_m4 and the other from σ_m5. At the same time, neurons σ_m4 and σ_m5 each receive a spike from the other, so the number of spikes they contain returns to an even number. In this way, neurons σ_m4 and σ_m5 remain active for the next y − 1 steps. Therefore, neuron σ_2 accepts 2y spikes altogether by the time neuron σ_in receives the third input spike. Then neuron σ_m1 has three spikes, and the rule a³ → a can perform and send one spike to σ•_l0. Therefore, the system needs 94 neurons to simulate the universal register machine M_u′, including:
(i) 6 neurons for module INPUT
(ii) 1 × 10 neurons for 10 ADD instructions
(iii) 3 × 14 neurons for 14 SUB instructions
(iv) 25 neurons for 25 labels of instructions
(v) 9 neurons for 9 registers
(vi) 2 neurons for module OUTPUT
Module SUB-ADD-1 simulates the sequence of a SUB instruction l_i: (SUB(r_1), l_j, l_k) and an ADD instruction l_j: (ADD(r_2), l_g), as shown in Figure 11, where r_1 ≠ r_2. The simulation of module SUB-ADD-1 also starts at the reference neuron σ•_li. When neuron σ_lj has one spike, the ADD instruction is triggered and performed immediately; thus the final result should be that neuron σ_lg gains one spike, register r_2 increases by one, and σ_lk is empty. If neuron σ_lk has one spike, the ADD instruction is not activated, and register r_2 and neuron σ_lg remain empty. The computation is divided into two cases depending on the number of spikes in neuron σ_r1; the process is similar to module SUB (shown in Figure 3). Assume that the reference neuron σ•_li fires at time t. (1) If register r_1 is nonempty, neuron σ_r1 has 2n spikes.

in σ_c3 can be used. At time t + 3, neurons σ_c3 and σ_c2 gain one spike from σ_c1, and the rule a → λ; [3, 4) can be applied. At the next time, neurons σ_c3 and σ_c2 again get one spike from σ_c1; thus the rule a → a; [4, 5) in σ_c2 and the rule a → λ; [4, 5) in σ_c3 fire. At time t + 5, σ_c4 applies its rule a → a; [5, 6) and sends one spike to σ_r2 before time t + 6. In this way, σ_r2 has two spikes, and neuron σ_lk is empty. (2) If register r_1 is empty, neuron σ_r1 is empty at the initial time. At time t + 1, neurons σ_r and σ_c1 have one spike, and the rule a → a; [1, 2, 3) in σ_r can be applied, which means neuron σ_r accepts the spike at this moment. At time t + 2, the rule a → a; [1, 2, 3) in σ_c1 fires and sends one spike to σ_c3 and σ_c2 before t + 3, respectively. At time t + 3, σ_c3 and σ_c2 each hold two spikes; neuron σ_c2 performs a² → λ; [3, 4) and neuron σ_c3 performs the rule a² → a; [3, 4). Therefore, σ_lk gains the spike at time t + 4, and σ_lg and σ_r2 are empty.
In this way, consecutive SUB-ADD instructions can be simulated correctly, and neuron σ l j can be saved. Then, there are 6 pairs of instructions corresponding to module SUB-ADD-1, which can save 6 neurons associated with labels l 1 , l 5 , l 7 , l 9 , l 16 , and l 22 .

Module SUB-ADD-2 is used to simulate the combination of the SUB instruction l 15 : (SUB(3), l 18 , l 20 ) and the ADD instruction l 20 : (ADD(0), l 0 ). As shown in Figure 12, the computational process of SUB-ADD-2 is similar to that of module SUB-ADD-1. Thus, one neuron, associated with l 20 , can be saved.
Module ADD-SUB simulates the combination of ADD instruction l i : (ADD(r 1 ), l g ) and SUB instruction l g : (SUB(r 2 ), l j , l k ), where r 1 ≠ r 2 . Assume that reference neuron σ l • i has one spike at time t, and the produced spike is received by σ r 1 and σ r 2 at the next time. Then, the working process of the next time is the same as in module SUB, divided into two cases. σ r 1 receives one spike from σ r 2 at time t + 2 or time t + 3, so that σ r 1 has two spikes. Thus, both the ADD and SUB instructions can be implemented correctly by this module. So, neuron σ l g can be saved, and neuron σ m 1 is also saved as compared to module ADD' shown in Figure 6. There are two pairs of instructions that adapt to module ADD-SUB. Thus, four neurons are saved. Module ADD-ADD simulates consecutive ADD instructions (as shown in Figure 14). Assume that, at time t, neuron σ l 17 activates. Each of neurons σ m 1 , σ 3 , and σ 2 receives one spike at time t + 1. Then, neurons σ 3 and σ 2 get their second spike from σ m 1 at time t + 2. Thus, the consecutive ADD-ADD instructions can be simulated correctly by module ADD-ADD. Two ADD modules can share the common neuron σ m 1 , and the neuron associated with label l 21 is also saved. Therefore, two neurons can be saved by module ADD-ADD.
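The modules above simulate register machine instructions, so it may help to make the semantics being simulated concrete. The following is a small illustrative interpreter; the program encoding, the `run` helper, and the label names are our own and are not the formal definition of M u ′.

```python
# Illustrative register machine interpreter (our own encoding, not from the paper).
# ADD(r), l_g      : increment register r, then jump to label l_g.
# SUB(r), l_j, l_k : if r > 0, decrement r and jump to l_j; otherwise jump to l_k.
def run(program, registers, label):
    while label in program:  # any label without an instruction halts the machine
        op = program[label]
        if op[0] == "ADD":
            _, r, l_g = op
            registers[r] += 1
            label = l_g
        else:  # "SUB"
            _, r, l_j, l_k = op
            if registers[r] > 0:
                registers[r] -= 1
                label = l_j
            else:
                label = l_k

# A consecutive SUB-ADD pair, as handled by module SUB-ADD-1:
# l_i: (SUB(r1), l_j, l_k) followed by l_j: (ADD(r2), l_g).
program = {
    "l_i": ("SUB", "r1", "l_j", "l_k"),
    "l_j": ("ADD", "r2", "l_g"),
}
regs = {"r1": 1, "r2": 0}
run(program, regs, "l_i")
print(regs)  # -> {'r1': 0, 'r2': 1}: r1 was nonempty, so the ADD on r2 executed
```

When r1 is empty, control instead passes to l_k and r2 stays empty, which is exactly the behavior the neuron modules reproduce with spikes.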
From the above proof, six neurons are saved by module SUB-ADD-1, one neuron by module SUB-ADD-2, four neurons by module ADD-SUB, and two neurons by module ADD-ADD. A total of 13 neurons are saved. Therefore, the system needs only 81 neurons to simulate the register machine M u ′ .
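The neuron counts stated above can be checked directly: the 94 neurons of the direct simulation, minus the 13 saved by the combined modules, give the 81 neurons of the final system.

```python
# Sanity check of the neuron tally from the proof.
direct = {
    "module INPUT": 6,
    "10 ADD instructions": 1 * 10,
    "14 SUB instructions": 3 * 14,
    "25 instruction labels": 25,
    "9 registers": 9,
    "module OUTPUT": 2,
}
saved = {"SUB-ADD-1": 6, "SUB-ADD-2": 1, "ADD-SUB": 4, "ADD-ADD": 2}

total = sum(direct.values())
print(total)                        # 94 neurons in the direct simulation
print(sum(saved.values()))          # 13 neurons saved by the combined modules
print(total - sum(saved.values()))  # 81 neurons in the final system
```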

Conclusion
In this paper, a new variant of SN P systems, called DSN P systems, is proposed, inspired by the biological reality that there is a certain time delay between the input and output of neurons. Compared with classic SN P systems, we made corresponding improvements to the spiking rules and forgetting rules by adding schedules that limit when rules can be performed. The system achieves a time delay between input and output through scheduled rules and makes its performance continuous over the total working time. The rules can only be used within a specified time range, during which the neuron is locked and cannot send or receive spikes. This also allows a neuron with multiple rules to selectively apply them according to the schedule when it is activated. In addition, DSN P systems change the principle of applying rules (maximal parallelism): rules are executed in order according to their schedules. In Theorems 1 and 2, the computational power of DSN P systems is proved by simulating register machines in the generating mode and the accepting mode, respectively. In Theorem 3, a universal DSN P system having 81 neurons for computing functions is also proved.
In this paper, we only address the theoretical foundations of DSN P systems. In future work, there are still many problems, in both theory and application, that can be pursued on the basis of DSN P systems. In theory, we can further improve DSN P systems: we can study the computational power of systems working in other modes, such as the sequential and parallel modes, and we can investigate whether the computational power is preserved when the types of rules are reduced. DSN P systems with other features, such as multiple channels and polarizations, also remain to be studied. In terms of application, on the one hand, we can consider whether DSN P systems can be combined with neural networks and applied to image or text processing, as in the work in [50]. On the other hand, DSN P systems can also be combined with intelligent algorithms, and the performance of such intelligent optimization systems may be greatly improved. In addition, there are many other directions of application, such as fault diagnosis and hardware implementation. Although the applications of SN P systems have not been fully studied, as a third-generation neural network model, the development of SN P systems has infinite possibilities, which need our continuous exploration.

Data Availability
This manuscript does not use any datasets.

Conflicts of Interest
The authors declare that they have no conflicts of interest.