Turing Universality of Weighted Spiking Neural P Systems with Anti-spikes

Weighted spiking neural P systems with anti-spikes (AWSN P systems) are proposed by adding anti-spikes to spiking neural P systems with weighted synapses. Anti-spikes act as inhibitory signals in the communication between neurons. Both spikes and anti-spikes are used in the rule expressions. An illustrative example is given to show the working process of the proposed AWSN P systems. The Turing universality of the proposed P systems as number generating and accepting devices is proved. Finally, a universal AWSN P system with 34 neurons is proved to work as a function computing device using standard rules, and one with 30 neurons is proved to work as a number generator.


Introduction
Membrane computing, introduced by Păun [1], is a branch of nature-inspired computing. It provides a rich computational framework for biomolecular computing. Models of membrane computing are inspired by the structures and functions of living cells. The obtained models are distributed and parallel computing devices, usually called P systems [2]. There are three main classes of P systems: cell-like P systems, tissue-like P systems [3], and neural-like P systems [4]. Neural-like P systems, inspired by the way information is stored and processed in the human nervous system, combine neurons and membrane computing; the most widely known among them are spiking neural P systems (SN P systems) [5]. An SN P system consists of a group of neurons located at the nodes of a directed graph, and neurons send spikes to adjacent neurons through synapses, i.e., links in the graph. There is only one type of object, the spike, in the neurons.
With different biological features and mathematical motivations, many variants of SN P systems have emerged. Some of them change the synapses between neurons, such as SN P systems with rules on synapses [6], SN P systems with multiple channels [7], and SN P systems with thresholds [8], while others change the communication rules, such as SN P systems with communication on request [9], SN P systems with polarizations [10], and SN P systems with inhibitory rules [11]. Various new variants of SN P systems are provided in [12,13]. Recently, some new variants of neural-like P systems inspired by SN P systems have been proposed, such as those reported in [14]. In addition, many publications in the literature address the computational power of SN P systems as function computing devices and as number generating/accepting devices. Pǎun [18] proved the small universality of SN P systems. Pan [19] proved the small universality of SN P systems with communication on request by using 14 neurons; more details are available in [20,21]. Since SN P systems were proposed, many scholars have explored their applications, such as skeletonizing image processing [22,23], optimization problems [24], fault diagnosis [25][26][27], and working models [28].
Inspired by the inhibitory spikes in the communication between neurons, a new type of SN P system was proposed by adding anti-spikes to SN P systems, called spiking neural P systems with anti-spikes (ASN P systems) [29]. In ASN P systems, each neuron contains multiple copies of the symbolic object a or ā and processes information by spiking rules and forgetting rules. The annihilating rule aā ⟶ λ exists in each neuron and is always applied first, meaning that a and ā cannot coexist in any neuron. Many researchers have proposed different ASN P systems, such as ASN P systems with multiple channels [30], ASN P systems with rules on synapses [31], and asynchronous ASN P systems [32]. The computational power of ASN P systems as number generating and accepting devices, as well as function computing devices, has also been proved [33].
In [34], SN P systems with weighted synapses were proposed; the weights represent the numbers of synapses between connected neurons. Based on the above, a new variant of SN P systems, called weighted spiking neural P systems with anti-spikes (AWSN P systems), is proposed in this work. In these systems, neurons receive spikes or anti-spikes from their connected neurons, and the numbers of spikes or anti-spikes they receive are determined by the weights of the synapses. Only one type of object, spikes or anti-spikes, exists in each neuron with standard rules. These systems use spiking rules of the form E/a^c ⟶ a^p; d (called standard rules if p = 1 and extended rules otherwise), where E is a regular expression over the spike a, c and p are positive integers, and d ≥ 0 is the delay. The meaning of a spiking rule is that c spikes are consumed and p spikes are generated after d time periods. The systems also have forgetting rules of the form a^s ⟶ λ, where s is a positive integer; their meaning is that s spikes are removed from a neuron. The rest of this article is organized as follows. In Section 2, the basic knowledge of register machines is given. The definition of AWSN P systems is given, and an example is presented to show their working process, in Section 3. By simulating register machines, the computational power of AWSN P systems as natural number generating and accepting devices is proved in Section 4. In Section 5, the universality of these systems as function computing devices and number generating devices is obtained by using 34 neurons and 30 neurons, respectively. Remarks and future research directions are given in Section 6.

Prerequisites
The universality of the systems is proved by simulating a register machine M. A register machine is a construct M = (m, H, l_0, l_h, I), where m is the number of registers, H is the set of instruction labels, l_0 and l_h are the starting and halting labels, and I is the set of instructions of the following forms:
(1) l_i: (ADD(r), l_j, l_k) (add 1 to register r and then go to instruction l_j or l_k, chosen nondeterministically)
(2) l_i: (SUB(r), l_j, l_k) (if register r is not empty, then subtract 1 from it and go to l_j; otherwise, go to l_k)
(3) l_h: HALT (the halting instruction)
A register machine has two modes: a generating mode and an accepting mode. In the generating mode, M generates a set of numbers, denoted by N_gen(M), in the following way. All registers start empty, and M starts the computation from the instruction labeled l_0. When M reaches l_h, the computation halts with the result stored in register 1. If the computation does not halt, no number is generated. A set of numbers can also be accepted by a register machine, denoted by N_acc(M), in the accepting mode: only the input register is nonempty at the beginning, and the machine works in a way similar to the generating mode. In the accepting mode the machine can work deterministically, so the ADD instructions can be written as l_i: (ADD(r), l_j). Register machines compute exactly the sets of Turing computable numbers, the family denoted by NRE (see, e.g., [6]).
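The behavior of such a register machine can be made concrete with a few lines of code. The following minimal Python interpreter is an illustrative sketch, not part of the formal model; the program encoding, the label names, and the `choose` hook that resolves the nondeterministic ADD branch are assumptions of this sketch.

```python
def run_register_machine(prog, m, choose=lambda: 0):
    """Run a register machine in generating mode.

    prog maps an instruction label to one of:
      ("ADD", r, lj, lk)  -- add 1 to register r, jump to lj or lk
      ("SUB", r, lj, lk)  -- if register r > 0, subtract 1 and go to lj,
                             otherwise go to lk
      ("HALT",)           -- stop; the result is the value of register 1
    Registers 0..m-1 start empty. `choose` resolves the nondeterministic
    ADD choice (0 -> lj, 1 -> lk).
    """
    regs = [0] * m
    label = "l0"
    while True:
        instr = prog[label]
        if instr[0] == "ADD":
            _, r, lj, lk = instr
            regs[r] += 1
            label = lj if choose() == 0 else lk
        elif instr[0] == "SUB":
            _, r, lj, lk = instr
            if regs[r] > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk
        else:  # HALT
            return regs[1]

# A toy program: put 3 into register 0, then move it into register 1.
prog = {
    "l0": ("ADD", 0, "l1", "l1"),
    "l1": ("ADD", 0, "l2", "l2"),
    "l2": ("ADD", 0, "l3", "l3"),
    "l3": ("SUB", 0, "l4", "lh"),
    "l4": ("ADD", 1, "l3", "l3"),
    "lh": ("HALT",),
}
print(run_register_machine(prog, m=2))  # prints 3
```

The generating mode's nondeterminism is captured by `choose`; replacing it with a random choice enumerates the whole generated set over repeated runs.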
Generally, a universal register machine is used to compute Turing computable functions for the purpose of analyzing the computing power of a system. A universal register machine M_u was proposed by Minsky [35]: M_u = (8, H, l_0, l_h, I), with 8 registers and 23 instructions, is universal in the sense that φ_x(y) = M_u(g(x), y) for all natural numbers x and y, where g is a fixed recursive function. Compared with the register machine M_u′ shown in Figure 1, M_u does not have the instructions l_22 and l_23, and its final result is placed in register 0. Since the register storing the result of the simulating system must not be involved in any SUB instruction, register 8 is added to store the result, together with the instructions l_22 and l_23 that move the result into it. In general, to analyze the universality of a system, i.e., to verify that the system is equivalent to a Turing machine, the universal register machine M_u′ = (9, H, l_0, l_h, I) shown in Figure 1, consisting of 9 registers and 25 instructions, is simulated by the system.

Weighted Spiking Neural P Systems with Anti-spikes
3.1. Definition. The proposed AWSN P system of degree m ≥ 1 is a construct of the form

Π = (O, σ_1, σ_2, . . . , σ_m, syn, in, out),

where:
(1) O = {a, ā} is the alphabet, where the symbol a denotes a spike and ā denotes an anti-spike.
(2) σ_1, σ_2, . . . , σ_m are neurons of the form σ_i = (n_i, R_i) for 1 ≤ i ≤ m, where n_i ≥ 0 is the initial number of spikes stored in σ_i, and R_i is the set of rules of σ_i, of the following forms:
(a) spiking rules E/b^c ⟶ b′^p; d, where b, b′ ∈ {a, ā}, E is a regular expression over b, c and p are positive integers, and d ≥ 0;
(b) forgetting rules b^s ⟶ λ, where b ∈ {a, ā} and s ≥ 1.
(3) syn ⊆ {1, 2, . . . , m} × {1, 2, . . . , m} × W is the set of weighted synapses, where W is the set of synapse weights (positive integers) and (i, i, n) ∉ syn for any i and n.
(4) in and out are the input neuron and the output neuron, respectively.
In an AWSN P system, each neuron has one or more spiking rules, some neurons also have forgetting rules, and either spikes or anti-spikes (but not both) exist in each neuron. A spiking rule is applicable in neuron σ_i if the neuron contains k spikes or anti-spikes with b^k ∈ L(E) and k ≥ c; if L(E) = {b^c}, the rule is called pure and can be written as b^c ⟶ b′; d. The spiking rule is interpreted as follows: c spikes or anti-spikes are consumed in neuron σ_i, the neuron fires, p spikes are generated after d time periods (as usual in membrane computing, all neurons of a system Π work in parallel under an assumed global clock), and p × n spikes are sent to each neuron σ_j (i ≠ j) connected to σ_i by a synapse of weight n ∈ W. If a spiking rule with delay d ≥ 1 is used in neuron σ_i at time t, the neuron is closed before time t + d and does not receive any spikes or anti-spikes; it opens again at time t + d. If d = 0, the spikes are emitted immediately, and the neuron receives spikes or anti-spikes from its presynaptic neurons without delay.
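The open/closed timing of a delayed rule can be sketched as a small state machine. The class below is only an illustration of the timing just described, not part of the formal model; the method names `receive`, `fire`, and `tick` are inventions of this sketch, and it adopts the usual SN P convention that spikes sent to a closed neuron are lost.

```python
class DelayedNeuron:
    """Sketch of the delay semantics: a neuron firing at time t with
    delay d is closed during steps t .. t+d-1 (incoming spikes are lost)
    and reopens, emitting its spikes, at time t + d."""

    def __init__(self):
        self.spikes = 0
        self.open_at = 0     # the neuron is open when now >= open_at
        self.pending = None  # (emit_time, p) scheduled by a delayed rule

    def receive(self, now, k):
        if now >= self.open_at:  # closed neurons receive nothing
            self.spikes += k

    def fire(self, now, c, p, d):
        self.spikes -= c           # c spikes are consumed immediately
        self.open_at = now + d     # closed before time now + d
        self.pending = (now + d, p)

    def tick(self, now):
        """Return the number of spikes emitted at time `now` (0 if none)."""
        if self.pending and self.pending[0] == now:
            p = self.pending[1]
            self.pending = None
            return p
        return 0

n = DelayedNeuron()
n.receive(0, 2)          # two spikes arrive at time 0
n.fire(0, c=1, p=1, d=2) # a rule with delay d = 2 is used
n.receive(1, 5)          # lost: the neuron is closed at time 1
print(n.tick(2))         # prints 1 (the spike emitted at time 0 + 2)
```

With d = 0, `open_at` stays at the current step and the emission is returned by `tick` immediately, matching the no-delay case above.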
If a forgetting rule b^s ⟶ λ is used in a neuron, then s spikes or anti-spikes are removed from that neuron. Spiking rules and forgetting rules must be applied when their conditions are met, but the choice of rule is nondeterministic if the conditions of several rules are met in a neuron. However, the annihilating rule aā ⟶ λ must be applied first in each neuron. Through these rules, transitions between configurations occur. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts when it reaches a configuration in which all neurons are open and no rule can be used. To compute a function f: N^k ⟶ N, k natural numbers n_1, n_2, . . . , n_k are introduced into the system by reading a binary sequence z = 10^{n_1}10^{n_2}1 · · · 0^{n_k}1 from the environment. That is to say, the input neuron of Π receives a spike in a step corresponding to a 1 in z and receives nothing in a step corresponding to a 0. The input neuron receives exactly k + 1 spikes and receives no further spikes after the last one. The result of the computation is encoded in the distance between two spikes: the computation halts with exactly two output spikes, immediately after the second spike is emitted. Hence, it generates a spike train of the form 0^b 1 0^{r−1} 1, for some b ≥ 0 and r = f(n_1, . . . , n_k). The computation outputs no spike for an unspecified number of steps from the beginning of the computation until the first spike is emitted.
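The input and output conventions above can be checked with a short sketch. The two helper names below are inventions of this illustration: one builds the input spike train 1 0^{n_1} 1 · · · 0^{n_k} 1, and the other reads the result r off an output train 0^b 1 0^{r−1} 1 as the distance between its two spikes.

```python
def encode_input(ns):
    """Encode n1, ..., nk as the spike train 1 0^{n1} 1 ... 0^{nk} 1,
    which contains exactly k + 1 spikes (the 1s)."""
    z = "1"
    for n in ns:
        z += "0" * n + "1"
    return z

def decode_output(train):
    """Decode an output spike train of the form 0^b 1 0^{r-1} 1:
    the result r is the distance between the two output spikes."""
    first = train.index("1")
    second = train.index("1", first + 1)
    return second - first

print(encode_input([3, 2]))      # prints 10001001
print(decode_output("0010001"))  # prints 4  (here b = 2, r = 4)
```

Note that the distance encoding makes the result independent of the arbitrary startup delay b.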
Let N_gen(Π) and N_acc(Π) be the sets of numbers generated and accepted by Π, respectively. Let N_α ASNP_m^n, with α ∈ {gen, acc}, denote the family of sets of numbers generated or accepted by AWSN P systems with at most m neurons and at most n rules per neuron.

An Illustrative Example
An example, graphically shown in Figure 2, is given to explain the working process of an AWSN P system. The results of each step are shown in Table 1. A positive number in the table represents the number of spikes in a neuron, and a negative number represents the number of anti-spikes; for example, 2 means there are two spikes, and −2 means there are two anti-spikes.
The system has four neurons, as shown in Figure 2. Assume that each of neurons σ_1 and σ_2 has two spikes, and neurons σ_3 and σ_4 are empty. Suppose that the rule a^2/a ⟶ ā in neuron σ_1 is used at time t, generating one anti-spike and sending three anti-spikes to each of neurons σ_2 and σ_3, because the weight of the synapses to these neurons is 3. Two anti-spikes and two spikes disappear immediately because the annihilating rule is applied first, and one anti-spike is left in neuron σ_2. The rule in σ_2 then fires, sending two spikes to neuron σ_4 and one spike to neuron σ_1, so the rule in σ_1 can be applied again. Neuron σ_3 receives six anti-spikes from σ_1 through the two applications of the rule of σ_1, so the rule in σ_3 fires. Neuron σ_4 gets three spikes (two of them from neuron σ_2) and sends one spike to the environment.
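The bookkeeping in Table 1, positive counts for spikes and negative counts for anti-spikes, suggests a compact way to sketch one step of such a system: represent each neuron's content as a signed integer, so that the annihilating rule aā ⟶ λ is simply signed addition of the weighted inputs. The fragment below illustrates only this accounting for the first step above; it is not a full simulator of the system in Figure 2, and the function name `deliver` is an invention of this sketch.

```python
def deliver(contents, emissions, synapses):
    """One delivery step of the signed-count accounting.

    contents  : dict neuron -> signed count (positive = spikes a,
                negative = anti-spikes)
    emissions : dict neuron -> +p for p emitted spikes, -p for p anti-spikes
    synapses  : dict (src, dst) -> weight n; dst receives p * n objects

    Spikes count as +1 and anti-spikes as -1, so the annihilating rule
    is realized automatically by signed addition.
    """
    new = dict(contents)
    for (src, dst), w in synapses.items():
        if src in emissions:
            new[dst] = new.get(dst, 0) + emissions[src] * w
    return new

# First step of the example: neuron 1 uses a^2/a -> anti-spike, consuming
# one spike and emitting one anti-spike over weight-3 synapses to 2 and 3.
state = {1: 2, 2: 2, 3: 0, 4: 0}
state[1] -= 1
state = deliver(state, {1: -1}, {(1, 2): 3, (1, 3): 3})
print(state)  # {1: 1, 2: -1, 3: -3, 4: 0}
```

The resulting signed counts (one anti-spike left in neuron 2, three delivered to neuron 3) match the first row of the step-by-step account above.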

Generating Mode
M is simulated by an AWSN P system consisting of three types of modules: ADD, SUB, and OUTPUT. In the simulation, a register r of M corresponds to a neuron σ_r, and the number n contained in register r corresponds to the number of spikes contained in neuron σ_r. An instruction label l in H corresponds to a neuron σ_l. Furthermore, the modules require some auxiliary neurons in addition to σ_r and σ_l. The simulation of an ADD or SUB instruction begins at neuron σ_{l_i}; the module then sends a spike to σ_{l_j} or σ_{l_k} as its rules fire, the choice being nondeterministic in the case of ADD. When a spike arrives at neuron σ_{l_h}, the computation in M stops, and the module OUTPUT begins to send the result stored in register 1 to the environment. At the beginning of the simulation, neuron σ_{l_0} has one spike and all other neurons are empty.
(a) Module ADD (shown in Figure 3). Assume that an ADD instruction l_i: (ADD(r), l_j, l_k) has to be simulated at time t, one spike is in neuron σ_{l_i}, and the rule a ⟶ a can be used. Neuron σ_{l_i} sends one spike a to each of neurons σ_r, σ_{b_1}, and σ_{b_2}. The rules a ⟶ a and a ⟶ ā in neuron σ_{b_1} are chosen nondeterministically at time t + 1, so there are two cases to consider depending on the choice of rule in σ_{b_1}. If a ⟶ a is chosen, neuron σ_{b_2} sends a spike to neuron σ_{l_k}; thus σ_{l_k} will generate one spike by using its rule. If a ⟶ ā is chosen, neuron σ_{b_1} sends an anti-spike to each of neurons σ_{l_i} and σ_{b_2}; thus σ_{l_j} will fire and generate one spike by using its rule, while the rule in neuron σ_{b_2} cannot be used because of the annihilating rule, so σ_{l_k} stays empty. After one spike is added to σ_r, register r is increased by 1 and instruction l_j or l_k is activated. Therefore, the ADD instruction is simulated correctly by the module ADD.
(b) Module SUB (shown in Figure 4). Suppose that neuron σ_{l_i} has one spike. After the rule a ⟶ ā is applied at time t, each of the neurons σ_{l_j} and σ_{l_k} receives two anti-spikes ā, and σ_r receives one anti-spike. The rest of the computation divides into two cases according to the number of spikes contained in σ_r.
(1) Neuron σ_r has at least one spike. Neuron σ_r receives one anti-spike from neuron σ_{l_i}, but the anti-spike disappears immediately by annihilating one spike in σ_r. Therefore, the rule ā ⟶ a in neuron σ_r is not used at time t + 1. At the same time, neuron σ_{c_1} gets one anti-spike from σ_{l_i}; the rule in σ_{c_1} then fires, generating one spike, which the weighted synapses deliver as three spikes to neuron σ_{l_j} and two spikes to σ_{l_k}. Two of the three spikes are annihilated by the two anti-spikes from σ_{l_i}, and one spike is left in neuron σ_{l_j}. Simultaneously, the two spikes arriving at neuron σ_{l_k} are annihilated immediately, and no spike is left in σ_{l_k}.
(2) Neuron σ_r has no spike. Neuron σ_r gets one anti-spike from σ_{l_i}, and its rule can be applied at time t + 1. Simultaneously, neuron σ_{c_1} gets one anti-spike from σ_{l_i}. Hence, the spike produced by σ_r is annihilated in σ_{c_1} at the next step, and the rule in σ_{c_1} cannot be used because σ_{c_1} no longer contains any anti-spike. At the same time, neuron σ_{l_j} receives five spikes, two of which annihilate the two anti-spikes received from neuron σ_{l_i}; thus the rule a^2 ⟶ λ in σ_{l_j} can be applied. Neuron σ_{l_k} receives one spike that annihilates one anti-spike ā received from neuron σ_{l_i}, and then the rule ā ⟶ a in σ_{l_k} is enabled to generate one spike a.
Therefore, the SUB instruction is simulated correctly by the module SUB.
(c) Module OUTPUT (shown in Figure 5). Assume that σ_{l_h} of the system has accumulated one spike at time t, and neuron σ_1 has n spikes, n being the number stored in register 1 of M. When the rule in σ_{l_h′} is fired at time t, neuron σ_{l_h′} sends one spike to σ_1. At this moment, σ_1 has an odd number of spikes and its rule fires. At time t + 1, σ_1 sends one spike to each of σ_out and σ_{b_1}. Thus, neuron σ_out has one spike, an odd number, and at time t + 2 neuron σ_out fires, sending one spike to the environment. At the same time, the rules in σ_1 and σ_{b_1} are used, and both send one a to σ_out. For the following n − 1 steps, until neuron σ_1 has no spike left, the number of spikes in σ_out remains even. Then the rule in σ_1 stops being used, and neuron σ_{b_1} has one spike. Neuron σ_out receives one more spike at time t + n + 2, so that its number of spikes becomes odd again, and neuron σ_out fires a second time.
Therefore, the number computed by the AWSN P system is the distance between the first two steps at which neuron σ_out fires, that is, (t + n + 2) − (t + 2) = n. The module OUTPUT thus works correctly.

Accepting Mode
Proof. The proof of this theorem is similar to that of Theorem 1. A register machine M = (m, H, l_0, l_h, I) is simulated by a system consisting of three types of modules: ADD, SUB, and INPUT. Module SUB is the same as that shown in Figure 4.
(1) Module ADD (shown in Figure 6). Assume that an ADD instruction l_i: (ADD(r), l_j) has to be simulated at time t and that one spike is in neuron σ_{l_i}; then the rule a ⟶ a can be used. Thus, neuron σ_{l_i} sends one spike to each of neurons σ_r and σ_{l_j}. In this way, the number of spikes in σ_r increases by 1 and instruction l_j is activated. Hence, the ADD instruction is simulated correctly by this module.
(2) Module INPUT (shown in Figure 7). Module INPUT works as follows. Its function is to read the spike train 10^{n−1}1 and compute the number n as the time elapsed between receiving the two spikes. When neuron σ_in receives the first spike at time t, neurons σ_{d_1}, σ_{d_2}, and σ_{d_3} receive one spike each, and the rules in σ_{d_2} and σ_{d_3} can be applied at time t + 1. At time t + 2, neuron σ_1 gets one spike; at the same time, neuron σ_{d_3} gets one spike from σ_{d_2} and neuron σ_{d_2} receives one a from σ_{d_3}. Therefore, in the next n − 1 time periods, the rules in neurons σ_{d_2} and σ_{d_3} can continue to be used.
During this period, σ_1 gets n − 1 spikes. When neuron σ_in receives the second spike at step t + n, each of neurons σ_{d_2} and σ_{d_3} receives one spike at step t + n + 1 and they both have two spikes. In this way, neurons σ_{d_2} and σ_{d_3} can no longer fire to send spikes to neuron σ_1. In the whole process, neuron σ_1 receives (n − 1) + 1 = n spikes, i.e., the number n is stored in register 1.
From the descriptions of the three modules above, it is clear that the system correctly simulates the register machine M. The proof is complete.

The Universality as Function Computing Devices
There is a universal AWSN P system having 34 neurons that can be used to perform function computing.
Proof. A general framework of a system Π_{u′} used to simulate the universal register machine M_{u′} is shown in Figure 8; it is a universal AWSN P system. Π_{u′} consists of 8 types of modules: ADD, SUB, ADD-ADD, SUB-ADD-1, SUB-ADD-2, SUB-SUB, INPUT, and OUTPUT. The modules SUB, OUTPUT, and ADD are the same as those in Figures 4, 5, and 6, respectively. The module INPUT is shown in Figure 9.
Module INPUT works as follows. When neuron σ_in gets a spike from the environment, the rule a ⟶ a fires; one spike is sent to each of neurons σ_{c_1}, σ_{c_3}, and σ_{c_4}, and two spikes are sent to neuron σ_{c_2}. Then, the rule in neuron σ_{c_1} sends one spike to each of σ_{c_2} and σ_1. At the same time, neuron σ_{c_2} fires, sending one spike to σ_{c_1} and two spikes to σ_{c_3}.

Computational Intelligence and Neuroscience
Up to this point, three spikes have been sent to neuron σ_{c_3}. Before neuron σ_in receives more spikes from the environment, neurons σ_{c_1} and σ_{c_2} receive one spike from each other in each time period, and neuron σ_1 receives g(x) spikes in total.
When σ_in receives the second spike, each of the neurons σ_{c_1}, σ_{c_3}, and σ_{c_4} gets one spike and σ_{c_2} gets two spikes. Neuron σ_{c_2} has four spikes at this moment, and its rule can be used to send two spikes to neuron σ_{c_3}. Neuron σ_{c_3} then has six spikes, so the rule in σ_{c_3} is used to produce one spike and send it to σ_{c_2}. In this way, neurons σ_{c_2} and σ_{c_3} receive one spike from each other in each step before σ_in receives the third spike from the environment, and neuron σ_2 has y spikes at the end. When neuron σ_in receives the third spike, each of the neurons σ_{c_1}, σ_{c_3}, and σ_{c_4} gets one spike, while σ_{c_2} receives two spikes. As a result, neuron σ_{c_3} has an odd number of spikes and its rule cannot be applied. Neuron σ_{c_4} now has three spikes, so the rule a^3 ⟶ a in neuron σ_{c_4} fires, generating one spike and sending it to σ_{l_0}. In this way, the instruction l_0 can be simulated in the next step.
As in the proofs of Theorems 1 and 2, the system uses the following numbers of neurons:
9 neurons for the 9 registers,
25 neurons for the 25 labels,
5 neurons for the module INPUT,
1 auxiliary neuron for each of the 14 SUB instructions, 14 in total,
2 neurons for the module OUTPUT.
Therefore, 55 neurons are used in total. The number of neurons can be decreased by exploiting relationships between some instructions of the register machine M_{u′}. The following modules are given to reduce the number of neurons used in the computation.
The SUB-ADD instruction pairs can be divided into two cases, depending on the number of spikes in register r_1 (the register involved in the SUB instruction). Modules SUB-ADD-1 and SUB-ADD-2, shown in Figures 10 and 11, simulate a SUB and an ADD instruction sequentially. The working process of module SUB-ADD-1 is similar to that of module SUB. When the rule in neuron σ_{l_i} is used and σ_{r_1} contains at least one spike, neuron σ_{r_1} cannot fire. Neuron σ_{c_1} fires upon receiving one ā and then sends one spike to σ_{r_2}. At the end of the computation, neuron σ_{l_g} has one spike, neuron σ_{r_2} has one spike, and neuron σ_{l_k} is empty. When σ_{r_1} is empty, neurons σ_{r_2} and σ_{l_g} remain empty and neuron σ_{l_k} contains one spike.
Thus, each pair of instructions l_i: (SUB(r_1), l_j, l_k) and l_j: (ADD(r_2), l_g) can share a common neuron when r_1 ≠ r_2, and there are 6 such pairs in M_{u′} in total. By using this module, 6 neurons can be saved. In the same way, the module shown in Figure 10 can simulate the two instructions l_15 and l_20, so neuron σ_{l_20} can be saved. The module ADD-ADD shown in Figure 12 can simulate instructions l_17 and l_21; in this way, one more neuron can be saved.
Two SUB instructions can also share a common neuron when their registers are different, as shown in Figure 13. Assume that the simulation of the SUB instruction l_i: (SUB(r_1), l_j, l_k) starts at time t. When neuron σ_{l_i} gets a spike, the rule a ⟶ ā fires, sending one anti-spike to σ_{r_1} and two anti-spikes to each of σ_{l_j} and σ_{l_k} at time t + 1. Neuron σ_{c_1} receives an anti-spike at time t + 2. Neurons σ_{r_1}, σ_{l_j}, σ_{l_k}, and σ_{c_1} work in the same way as those in module SUB shown in Figure 4. Neuron σ_{c_1} will send three spikes to σ_{l_k′} and two spikes to σ_{l_j′}, where forgetting rules will be applied. Thus, the instruction l_i: (SUB(r_1), l_j, l_k) is correctly simulated by this module; the process when starting with instruction l_i′ is similar. Two SUB modules dealing with the same register, as shown in Figure 14, can also be proved to work correctly in a similar way. Assume that the instruction l_i: (SUB(r_1), l_j, l_k) is simulated and one spike is contained in neuron σ_{l_i′}. The process divides into two cases according to the number of spikes in neuron σ_r. When σ_r has at least one spike, the system works similarly to module SUB. When σ_r is empty, the rule in neuron σ_{c_1} cannot be used; neurons σ_{l_j}, σ_{l_k′}, and σ_{l_j′} are all empty, but neuron σ_{l_k} contains one spike. All SUB instructions can be simulated correctly by the module; therefore, all SUB modules can share a common neuron.
From the above description of the neurons saved (6 by module SUB-ADD-1, 1 by saving σ_{l_20}, 1 by module ADD-ADD, and 13 by letting all SUB instructions share a common auxiliary neuron), the system uses 55 − 6 − 1 − 1 − 13 = 34 neurons in total.

The Small Universality as Number Generator
A small universal AWSN P system working as a number generator is now considered. The process of simulating a universal number generator is similar to that of simulating a function computing device; the difference lies in the module INPUT. The system starts with the spike train 10^{g(x)−1}1 from the environment and ends with neuron σ_1 receiving g(x) spikes. The system is then loaded with an arbitrary number k, with neuron σ_2 receiving k spikes; the number k is also output at the same time as the spike train 10^{g(x)−1}1 is read, so that g(x) is in register 1 and k in register 2. Since the output module is not required, that is to say, register 8 is not required, the register machine M_u is simulated. If the computation in M_u halts, the computation of the system also halts.
Furthermore, module INPUT and module OUTPUT can be combined. The module INPUT-OUTPUT is shown in Figure 15, and an example is used to demonstrate its feasibility; the label l_h′ can also be saved because of module INPUT-OUTPUT. The string 101 is used in module INPUT-OUTPUT, where g(x) = 2 and k = 4. The computation follows the working processes of the modules described above, and the results of each step are shown in Table 2.
Assume that σ_in has one spike at time t and neuron σ_{f_4} has two spikes. At time t + 1, σ_{f_1} and σ_{f_2} receive one spike each. From the structure shown in Figure 15, neurons σ_{f_1} and σ_{f_2} receive one spike from each other at each step until they stop firing. Then σ_in receives the second spike. Each of neurons σ_1 and σ_{f_3} receives one spike, σ_{f_5} receives six spikes, and σ_out receives two spikes, so that neurons σ_{f_5} and σ_out can fire. At time t + 3, both σ_{f_1} and σ_{f_2} have two spikes, but they cannot fire again. σ_{f_5} receives six spikes from σ_{f_2} but also two anti-spikes from σ_{f_3}; together with the four spikes already in it, neuron σ_{f_5} then has eight spikes. In addition, neuron σ_out receives two spikes again, so it contains three spikes. Neuron σ_{f_4} has only one spike because the received anti-spike annihilates one spike. At time t + 4, neuron σ_{f_4} becomes empty after receiving another anti-spike. σ_{f_5} receives two anti-spikes, so that four spikes remain in σ_{f_5}; the number of spikes is even, and its rule can fire. At the next step, σ_{f_4} receives one anti-spike and fires. Neuron σ_{f_5} consumes two spikes and can still fire. At time t + 6, neurons σ_{f_5} and σ_out each receive one spike from σ_{f_4}. So there are 4 spikes in σ_out, meeting the firing condition, and neuron σ_{l_0} also gets one spike. The string is read through neuron σ_in, and g(x) spikes are stored in register 1 when the reading stops. At the same time, the output number, (t + 6) − (t + 2) = 4, equals the number stored in register 2. Neuron σ_{l_0} is activated and starts simulating the register machine through the modules ADD and SUB. Therefore, the module INPUT-OUTPUT works correctly. Counting the neurons of this system in the same way as in the previous section, there is a universal AWSN P system having 30 neurons that can be used to perform number generation.

Conclusions
In this work, a variant of SN P systems, called AWSN P systems, is proposed. Because of the use of anti-spikes, the proposed systems are more biologically meaningful than SN P systems, modeling inhibitory spikes in the communication between neurons. An example is used to illustrate the working process of the systems, and their computational universality is proved in both the generating mode and the accepting mode. Finally, the Turing universality of AWSN P systems with standard rules is proved: a function computing device can be realized by using 34 neurons. Compared with the small universal SN P system using anti-spikes introduced by Song [17], the AWSN P system uses 13 fewer neurons; compared with the SN P systems with weighted synapses introduced by Pan [34], it uses 4 fewer neurons. The small universality of the system as a number generator is obtained with 30 neurons; compared with Pan's work [34], the proposed system uses 6 fewer neurons. Three types of spiking rules, a ⟶ a, a ⟶ ā, and ā ⟶ a, which are time-dependent, and one type of forgetting rule, a^c ⟶ λ, are used. There are several future research directions. One direction is to investigate whether the computational power remains the same if only one or two types of spiking rules are used, or if the forgetting rules are not used, and whether AWSN P systems can perform as well or better if the spiking rules are not time-dependent. These open problems need further study. Another future research direction is the application of the proposed systems. There have been studies, for example, using SN P systems with a learning function for letter recognition [36]; if a learning function were introduced into AWSN P systems, they might perform better in letter recognition. Because the use of anti-spikes improves the ability of AWSN P systems to represent and process information, they may solve more practical problems, which still requires further research.

Table 2: The computation process of the module INPUT-OUTPUT (columns: Step, σ_in, σ_{f_1}, σ_{f_2}, σ_{f_3}, σ_{f_4}, σ_{f_5}, σ_1, σ_2, σ_out, σ_{l_0}).

Data Availability
No datasets were used in this article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.