Article

Simulations between Three Types of Networks of Splicing Processors

by José Ramón Sánchez Couso 1, José Angel Sanchez Martín 1, Victor Mitrana 1,* and Mihaela Păun 2
1 Departamento de Sistemas Informáticos, Universidad Politécnica de Madrid, C/Alan Turing s/n, 28031 Madrid, Spain
2 National Institute for Research and Development of Biological Sciences, Independentei Bd. 296, 060031 Bucharest, Romania
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1511; https://doi.org/10.3390/math9131511
Submission received: 4 June 2021 / Revised: 20 June 2021 / Accepted: 23 June 2021 / Published: 28 June 2021
(This article belongs to the Special Issue Bioinspired Computation: Recent Advances in Theory and Applications)

Abstract: Networks of splicing processors (NSP for short) embody a subcategory of the new computational models inspired by natural phenomena, with the theoretical potential to handle intractable problems efficiently. The current literature considers three variants in the context of networks managed by random-context filters. Despite the differences in system complexity and in the degree of control over the filters, the three variants were proved to hold the same computational power through simulations of two computationally complete systems: Turing machines and 2-tag systems. However, converting between the three models by going through a Turing machine is unattainable in practice because of the huge computational costs incurred. This paper addresses this issue by proposing direct and efficient simulations between the aforementioned paradigms. The information about the nodes and edges (i.e., splicing rules, random-context filters, and connections between nodes) composing any network of splicing processors belonging to one of the three categories is used to design equivalent networks working under the other two models. We demonstrate that these new networks are able to replicate any computational step performed by the original network in a constant number of computational steps and, consequently, we prove that any outcome achieved by the original architecture can be accomplished by the constructed architectures without worsening the time complexity.

1. Introduction

In recent years, the limitations encountered by standard computational models have become more apparent, motivating the pursuit of innovative paradigms capable of overcoming these barriers. Along this line of research, different computational models finding their inspiration in natural phenomena have been proposed in the scientific community, and further research has shown their theoretical proficiency for solving intractable problems.
Networks of bio-inspired processors are one such nature-inspired system within the parallel and distributed computational framework, boasting a high (theoretically unlimited) degree of parallelism. This model shares numerous characteristics with other known computational paradigms of diverse origins: tissue-like P systems [1], a model inspired by the tissue structure and introduced in the area of membrane computing [2]; evolutionary systems, a model inspired by the evolution of various cell populations [3]; networks of parallel language processors, a parallel model for generating languages [4]; flow-based programming, a particular variant of data-flow programming [5]; the connection machine, a kind of parallel non-conventional processing computer [6]; distributed computing using mobile programs [7]; etc.
A network of bio-inspired processors may be informally described as a set of processors placed in the vertices of a graph. Each processor acts over data organized in distinct structures: strings, pictures, graphs, multisets, etc. An early survey may be found in [8] and a more recent one in [9]. Thus far, the literature differentiates between two subcategories of networks of bio-inspired processors handling string data, namely networks of evolutionary and of splicing processors. The first one was proposed in [10] as a logical abstraction of the point mutations taking place in DNA sequences: insertion, deletion or substitution of a single base pair. The second category, introduced in [11] and analyzed for its theoretical applicability to problem solving in [12], mimics, from a computational point of view, the splicing phenomena in DNA molecules overseen by the various restriction and ligase enzymes [13]. A more general view of restriction enzymes as sources of inspiration for computing can be found in [14], which opened a wide area of research.
Under this paradigm, the strings hosted in a splicing processor play the role of DNA molecules, while the behavioral constraints of the splicing enzymes are abstracted into a set of splicing rules, quadruples of strings specifying the cutting positions of the strings in the node. Following a splicing operation, the pieces are recombined as long as they are yielded by the same splicing rule. This model shares common aspects with the test tube distributed splicing systems introduced in [15] and later examined in [16]. The differences with respect to this last splicing paradigm and to time-varying distributed H systems [17] are discussed in [18].
The computation in networks of splicing processors proceeds as a sequence of alternating steps, namely splicing and communication steps, which continues until a predefined halting requirement is met. In the splicing step, the strings in each node are simultaneously cut and recombined into new ones according to the constraints set by all the applicable splicing rules of the processor hosted in that node; each string of the node is assumed to appear in sufficiently many copies to allow the synchronous application of all the relevant rules. Next in line, a communication step redistributes the string data in the network as follows:
(i) The string data in a splicing processor are sent to the connected nodes, provided that they meet the exiting conditions established for that node.
(ii) The receiving nodes concurrently accept the incoming data satisfying their input requirements. Any string refused by all these nodes is consequently lost.
This data flow in the network is managed by random-context filters that decide upon the entering and exiting requirements of the nodes through semantic or syntactic conditions. Thus far, there are three variants of such networks: (i) each node is assigned an input and an output filter to manage the incoming and exiting string data, respectively [19], (ii) these two filters are combined into one unique filter [20], and (iii) the filters of two connected nodes are replaced by a single filter assigned to the edge between them [21].
A simulation of one model by another is said to preserve the time complexity if the numbers of computational steps performed by the two models on every input are of the same order. It is obvious that (ii) and (iii) are particular variants of (i); consequently, they appear to offer less control over the computation. However, several works proved that they share the same computational power through efficient simulations of known computationally complete models [11,22,23].
Nevertheless, the indirect conversion between the previous variants through the intermediation of a Turing machine is unattainable in practice because of the enormous increase in complexity it incurs. This paper aims to advance this line of research by proposing efficient direct simulations between the three discussed variants.

2. Basic Definitions

We describe here the basic definitions and nomenclature used for this research. For the remaining notations related to this work, the reader is encouraged to consult [24].
An alphabet refers to a finite and non-empty set of symbols. We write the cardinality of a given finite set S as card ( S ) . A string over an alphabet V is a finite sequence of symbols belonging to that alphabet V. The set of all strings over the alphabet V is denoted by V * , the empty string is expressed by ε , and the length of a string z is written | z | . Moreover, alph ( z ) is the smallest set U ⊆ V such that z ∈ U * .
A splicing rule over V is a 4-tuple of strings following the structure [ ( u 1 , u 2 ) ; ( u 3 , u 4 ) ] , with u 1 , u 2 , u 3 , u 4 ∈ V * . Given such a splicing rule r = [ ( u 1 , u 2 ) ; ( u 3 , u 4 ) ] and the strings α , β ∈ V * , we define
$\sigma_r(\alpha, \beta) = \{\beta_1 u_3 u_2 \alpha_2 \mid \alpha = \alpha_1 u_1 u_2 \alpha_2,\ \beta = \beta_1 u_3 u_4 \beta_2,\ \alpha_1, \alpha_2, \beta_1, \beta_2 \in V^*\} \cup \{\alpha_1 u_1 u_4 \beta_2 \mid \alpha = \alpha_1 u_1 u_2 \alpha_2,\ \beta = \beta_1 u_3 u_4 \beta_2,\ \alpha_1, \alpha_2, \beta_1, \beta_2 \in V^*\}.$
This definition may be extended as follows. Given a language L over V and a finite set of splicing rules R, we define
$\sigma_R(L) = \bigcup_{r \in R} \ \bigcup_{w_1, w_2 \in L} \sigma_r(w_1, w_2).$
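The two displayed definitions translate almost literally into code. The following Python sketch is our own illustration (not part of the paper): it enumerates the decompositions of α and β around the sites u1u2 and u3u4 and collects the two families of recombined strings; the names splice and splice_language are hypothetical.

```python
# A minimal sketch of the splicing operation over plain Python strings.
# A rule is represented as ((u1, u2), (u3, u4)), standing for [(u1,u2);(u3,u4)].

def splice(rule, alpha, beta):
    """sigma_r(alpha, beta): cut alpha between u1 and u2, cut beta between
    u3 and u4, and return all recombinations allowed by the definition."""
    (u1, u2), (u3, u4) = rule
    site1, site2 = u1 + u2, u3 + u4
    out = set()
    for i in range(len(alpha) + 1):              # alpha = a1 u1 u2 a2
        if not alpha.startswith(site1, i):
            continue
        a1, a2 = alpha[:i], alpha[i + len(site1):]
        for j in range(len(beta) + 1):           # beta = b1 u3 u4 b2
            if not beta.startswith(site2, j):
                continue
            b1, b2 = beta[:j], beta[j + len(site2):]
            out.add(b1 + u3 + u2 + a2)           # first set of the definition
            out.add(a1 + u1 + u4 + b2)           # second set of the definition
    return out

def splice_language(rules, L):
    """sigma_R(L): apply every rule of R to every ordered pair of strings of L."""
    return {w for r in rules for w1 in L for w2 in L for w in splice(r, w1, w2)}
```

For instance, with r = (("a", "b"), ("c", "d")), splice(r, "xaby", "ucdv") yields {"ucby", "xadv"}.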
Given an alphabet V, two disjoint subsets P , F of V, and a string z over V, we define the following predicates:
$\varphi^{(s)}(z; P, F) \equiv P \subseteq alph(z) \ \wedge \ F \cap alph(z) = \emptyset$
$\varphi^{(w)}(z; P, F) \equiv alph(z) \cap P \neq \emptyset \ \wedge \ F \cap alph(z) = \emptyset.$
In these predicates, the set of permitting contexts P defines the symbols that are required to be present in the current string, while the set of forbidding contexts F refers to those symbols that are banned. Both predicates require the absence from z of every symbol a ∈ F and differ in the condition related to P: the first one demands that all the symbols of P occur in z, while the latter is satisfied as long as at least one symbol of P belongs to z.
Given β ∈ { ( s ) , ( w ) } and a language L ⊆ V * , we write
$\varphi^{\beta}(L, P, F) = \{\, w \in L \mid \varphi^{\beta}(w; P, F) \,\}.$
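For concreteness, the two predicates and their extension to languages can be transcribed directly; the sketch below assumes symbols are single characters, so alph(z) is simply set(z), and the helper names (phi_strong, phi_weak, filter_language) are ours. How an empty permitting set P should be treated is a convention settled by the constructions later in the paper and is not modeled here.

```python
# A sketch of the strong/weak random-context predicates and of phi_beta(L, P, F).

def phi_strong(z, P, F):
    """phi^(s)(z; P, F): every symbol of P occurs in z and no symbol of F does."""
    letters = set(z)
    return P <= letters and not (F & letters)

def phi_weak(z, P, F):
    """phi^(w)(z; P, F): some symbol of P occurs in z and no symbol of F does."""
    letters = set(z)
    return bool(letters & P) and not (F & letters)

def filter_language(L, P, F, beta):
    """phi_beta(L, P, F): the strings of L that satisfy the chosen predicate;
    beta is "s" (strong) or "w" (weak)."""
    pred = phi_strong if beta == "s" else phi_weak
    return {w for w in L if pred(w, P, F)}
```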
A splicing processor with random-context filters (SP) over an alphabet V is a 6-tuple ( S , A , P I , F I , P O , F O ) , where:
S is a finite set of splicing rules over V.
A is a finite set of strings over V. Each string in A is called an auxiliary string.
P I (permitting symbols) and F I (forbidding symbols) are two subsets of V, which both define the input filter of the processor.
P O , F O ⊆ V are similar subsets of V that define the output filter of the processor.
We say a splicing processor is uniform if the input and output filters are identical: P I = P O = P and F I = F O = F . The set of all splicing processors over the alphabet U is denoted by S P U , while U S P U denotes the set of all uniform splicing processors over U.
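To fix ideas, a splicing processor can be sketched as a small record; the snippet below is only an illustration (the class and field names are ours, not the paper's), and the uniform case is the one where the two filters coincide.

```python
# A sketch of the 6-tuple (S, A, PI, FI, PO, FO) as a Python record; splicing
# rules are pairs of pairs ((u1, u2), (u3, u4)) and the filters are sets of symbols.
from dataclasses import dataclass

@dataclass(frozen=True)
class SplicingProcessor:
    S: frozenset   # splicing rules
    A: frozenset   # auxiliary strings (axioms)
    PI: frozenset  # permitting input symbols
    FI: frozenset  # forbidding input symbols
    PO: frozenset  # permitting output symbols
    FO: frozenset  # forbidding output symbols

    def is_uniform(self) -> bool:
        # uniform processor: input and output filters coincide
        return self.PI == self.PO and self.FI == self.FO
```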
A network of splicing processors (NSP) is a 9-tuple Γ = ( V , U , < , > , G , N , β , I n ̲ , H a l t ̲ ) , where:
  • V ⊂ U are the input and the network alphabet, respectively. The working alphabet U contains two special symbols, namely < and >, that do not belong to V.
  • G = ( X G , E G ) is an undirected graph defined by the set X G of vertices and the set E G of edges. As the graph does not contain loops, we define an edge by a binary set of vertices. G is called the underlying graph of the network. It is worth mentioning that many papers were dealing with NSP with a complete underlying graph, see, e.g., the survey [9].
  • N : X G → S P U is a function that associates with each vertex x ∈ X G the splicing processor N ( x ) = ( S x , A x , P I x , F I x , P O x , F O x ) .
  • β : X G → { ( s ) , ( w ) } is a function that associates with each vertex the type of both its input and output filters. We now define two mappings on the set of all strings over U ∪ { < , > } :
    input filter : ρ x ( · ) = φ β ( x ) ( · ; P I x , F I x ) .
    output filter : τ x ( · ) = φ β ( x ) ( · ; P O x , F O x ) .
    Informally, ρ x ( z ) (resp. τ x ( z ) ) decides whether or not the string z can pass the input (resp. output) filter of x. If L is a language, we define ρ x ( L ) by ρ x ( L ) = L ∩ { w ∈ ( U ∪ { < , > } ) * ∣ ρ x ( w ) } , that is, the set of strings of L that can pass the input filter of x. In an analogous way, we define τ x ( L ) . Note that though we use the same notation for ρ x ( w ) ( τ x ( w ) ) and ρ x ( L ) ( τ x ( L ) ), there is no confusion because the arguments are different.
  • I n ̲ , H a l t ̲ X G are the input and the halting node of Γ , respectively.
Likewise, a network of splicing processors is considered to be uniform if it is composed of uniform splicing processors. A network of uniform splicing processors (NUSP) is a 9-tuple Γ = ( V , U , < , > , G , N , β , I n ̲ , H a l t ̲ ) , where:
  • V , U , < , > , G , I n ̲ , H a l t ̲ follow the same specifications as the corresponding parameters of a NSP.
  • N : X G → U S P U is a function that returns the uniform splicing processor N ( x ) = ( S x , A x , P x , F x ) associated with the node x ∈ X G .
  • β : X G → { ( s ) , ( w ) } specifies the strength of the sets P and F associated with the node filters.
A network of splicing processors with filtered connections (NSPFC) is a 9-tuple Γ = ( V , U , < , > , G , N , β , I n ̲ , H a l t ̲ ) , where:
  • V , U , < , > , I n ̲ , H a l t ̲ follow the same specifications as the corresponding parameters of a NSP.
  • G = ( X G , E G ) is also an undirected graph such that each node x ∈ X G is seen as a splicing processor without filters, with the set of splicing rules S x and the set of axioms A x .
  • N : E G → 2^U × 2^U associates with each edge e ∈ E G a pair of sets P e , F e that define the filter on the edge e.
  • β : E G → { ( s ) , ( w ) } defines the predicate variant assigned to the edge filters.
The size of a NSP Γ belonging to any of the variants above is defined as the number of nodes in the graph, i.e., card ( X G ) . A configuration of Γ is a mapping C : X G → 2^{U*} , which assigns a set of strings C ( x ) to each node x of Γ , that is, the set of strings that can be found in node x at a given moment. Although for each x ∈ X G , C ( x ) is actually a multiset of strings, each string appearing in an arbitrary number of copies, for the sake of simplicity we work with the support of this multiset. For a string z ∈ V * , the initial configuration of Γ on the input string z is C 0 ( z ) ( I n ̲ ) = { z } and C 0 ( z ) ( x ) = ∅ for all x ∈ X G ∖ { I n ̲ } .
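Keeping only the support of the multiset, a configuration can be modeled as a mapping from node names to sets of strings. The snippet below (continuing the hypothetical helpers above) builds the initial configuration C_0(z).

```python
# A sketch of the initial configuration: the input string sits in the input
# node and every other node starts empty.

def initial_configuration(nodes, in_node, z):
    return {x: ({z} if x == in_node else set()) for x in nodes}
```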
A configuration can be altered through a splicing step or a communication step. In a splicing step, all the splicing rules of the set S x that are applicable to the strings of C ( x ) ∪ A x are applied, changing C into a new configuration C ′ . Formally, a configuration C evolves into C ′ by a splicing step, denoted C ⇒ C ′ , if and only if the following holds for all x ∈ X G :
C ( x ) = σ S x ( C ( x ) A x ) .
The communication step is different in NSP (NUSP) and NSPFC. We first define how a communication step works in a NSP (the definition of a communication step in a NUSP is very similar, hence left to the reader). In every node x, all the following tasks are accomplished simultaneously:
(i)
The strings that satisfy the output filter condition of x are sent out;
(ii)
Copies of the expelled strings from any node y connected to x enter x, provided that they satisfy the input filter condition of x.
We stress that those strings sent out of x that do not satisfy the input filter condition of any node are definitively lost. Formally, a configuration C ′ follows a configuration C by a communication step in a NSP (we write C ⊢ C ′ ) if for all x ∈ X G
C ( x ) = ( C ( x ) φ β ( x ) ( C ( x ) , P O x , F O x ) )
{ x , y } E G ( φ β ( y ) ( C ( y ) , P O y , F O y ) φ β ( x ) ( C ( y ) , P I x , F I x ) )
holds.
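The formula can be read as two simultaneous movements: strings passing the output filter of x leave, and copies of the strings expelled by a neighbour y enter x whenever they also pass the input filter of x. A sketch under the same assumptions as before (PI, FI, PO, FO and beta are hypothetical per-node dicts, edges a set of two-element frozensets):

```python
# A sketch of a NSP communication step following the displayed formula.

def communication_step(C, edges, PI, FI, PO, FO, beta):
    newC = {}
    for x in C:
        leaving = filter_language(C[x], PO[x], FO[x], beta[x])
        incoming = set()
        for y in C:
            if frozenset({x, y}) in edges:
                expelled = filter_language(C[y], PO[y], FO[y], beta[y])
                incoming |= filter_language(expelled, PI[x], FI[x], beta[x])
        newC[x] = (C[x] - leaving) | incoming
    return newC
```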
We now describe how a communication step works in a NSPFC. For every pair of connected nodes x , y , all the strings that satisfy the filter condition associated with the edge { x , y } are moved from one node to the other. Formally, C ⊢ C ′ if
C ( x ) = ( C ( x ) ( { x , y } E G φ β ( { x , y } ) ( C ( x ) , N ( { x , y } ) ) ) )
( { x , y } E G φ β ( { x , y } ) ( C ( y ) , N ( { x , y } ) ) )
for all x X G .
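Here the filter sits on the edge, so the same condition governs both directions of the move. A sketch, with N_edge mapping each edge (a frozenset {x, y}) to its pair (P_e, F_e) and beta_edge to its predicate variant (both hypothetical names):

```python
# A sketch of a NSPFC communication step following the displayed formula.

def communication_step_fc(C, edges, N_edge, beta_edge):
    newC = {}
    for x in C:
        kept, incoming = set(C[x]), set()
        for y in C:
            e = frozenset({x, y})
            if e in edges:
                P_e, F_e = N_edge[e]
                kept -= filter_language(C[x], P_e, F_e, beta_edge[e])
                incoming |= filter_language(C[y], P_e, F_e, beta_edge[e])
        newC[x] = kept | incoming
    return newC
```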
The computation of a NSP Γ on the input string w ∈ V * is defined as a sequence of alternating steps (splicing, communication) that produces the configurations C 0 ( w ) , C 1 ( w ) , C 2 ( w ) , … , such that C 2 i ( w ) ⇒ C 2 i + 1 ( w ) and C 2 i + 1 ( w ) ⊢ C 2 i + 2 ( w ) , for all i ≥ 0 . The computations of a NUSP and of a NSPFC are defined in the same way.
A computation as above halts if there exists k ≥ 1 such that the set of strings C k ( H a l t ̲ ) found in the halting node is non-empty. We then say that the string w is accepted by the splicing network in an accepting computation. The language defined/accepted by Γ is the set of all strings w over V such that the computation of Γ on w is an accepting one.
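Putting the pieces together, a whole computation alternates the two kinds of steps and accepts as soon as the halting node becomes non-empty. Since a computation need not halt, the sketch below (again only an illustration built on the earlier hypothetical helpers) gives up after an artificial bound of max_steps steps, which is not part of the model.

```python
# A sketch of a NSP computation on input w with acceptance by non-empty Halt node.

def accepts(w, nodes, rules, axioms, edges, PI, FI, PO, FO, beta,
            in_node, halt_node, max_steps=1000):
    C = initial_configuration(nodes, in_node, w)
    for _ in range(max_steps):
        C = splicing_step(C, rules, axioms)                      # C_{2i} => C_{2i+1}
        if C[halt_node]:
            return True
        C = communication_step(C, edges, PI, FI, PO, FO, beta)   # C_{2i+1} |- C_{2i+2}
        if C[halt_node]:
            return True
    return False   # undecided within the artificial bound
```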
The time complexity of the halting computation C 0 ( x ) , C 1 ( x ) , C 2 ( x ) , … , C m ( x ) of Γ on x ∈ V * is denoted by t i m e Γ ( x ) and equals m. The time complexity of Γ is the partial function from ℕ to ℕ defined as follows:
$Time_{\Gamma}(n) = \max\{\, time_{\Gamma}(x) \mid x \in L(\Gamma),\ |x| = n \,\}.$
Furthermore, for a function f : ℕ → ℕ , we write
$\mathrm{Time}_{NSP}(f(n)) = \{\, L \mid L \text{ is accepted by a NSP } \Gamma \text{ such that } Time_{\Gamma}(n) \in O(f(n)) \,\}.$
Analogously, one defines Time N U S P ( f ( n ) ) as well as Time N S P F C ( f ( n ) ) .

3. Direct Simulations between NSPs and NUSPs

Obviously, since each NUSP can be immediately transformed into a NSP, we have:
Proposition 1. 
Time N U S P ( f ( n ) ) ⊆ Time N S P ( f ( n ) ) for any function f : ℕ → ℕ .
The converse is also true, namely:
Proposition 2. 
Time N S P ( f ( n ) ) ⊆ Time N U S P ( f ( n ) ) for any function f : ℕ → ℕ .
Proof. 
Let Γ = ( V , U , < , > , G , N , β , x 1 , x n ) be a NSP with the underlying graph G = ( X G , E G ) and X G = { x 1 , x 2 , , x n } for some n 1 ; x 1 I n ̲ and x n H a l t ̲ . Let further d o m ( S x i ) = { a , b , u , v U * [ ( a , b ) ; ( u , v ) ] S x i } . We construct the NUSP Γ = ( V , U , < , > , G , N , β , x 1 s , x n s ) ; x 1 s I n ̲ and x n s H a l t ̲ , where
U = U U T , U = { a a U } ,
T = { # , ψ } { $ i , # i , θ i , ¥ i 1 i n } , V = V ,
u = a 1 a 2 a | u | u = a 1 a 2 a | u | U * , z ( u , v ) = β 1 u v β 2 z = β 1 u v s . β 2 U *
We now define the nodes of G ′ in Table 1, Table 2 and Table 3, while the edges of G ′ are both listed and graphically represented.
and { x 1 s , x 1 0 } , { x 1 0 , x 1 1 } E G .
Let x i , 1 i n 1 be a splicing node. If β ( x i ) = ( w ) , then the nodes defined in Table 2 belong to X G .
All the edges
 – { x 1 s , x 1 0 } ,
 – { x 1 0 , x 1 1 } ,
 – { x i c h e c k i n , x i 1 } for 1 i n 1 ,
 – { x i 1 , x i 2 } , { x i 1 , x i 2 ( d o m ( S x i ) ) } for 1 i n 1 ,
 – { x i 1 , x i r e t u r n 1 } , { x i 1 , x i r e t u r n 2 } for 1 i n 1 ,
 – { x i 2 , x i 3 } for 1 i n 1 ,
 – { x i 3 , x i c h e c k o u t } , { x i 3 , x i r e t u r n 1 } , { x i 3 , x i r e t u r n 2 }
  for 1 i n 1 ,
 – { x i c h e c k o u t , x i c o n t i n u e } , { x i c h e c k o u t , x i 2 ( d o m ( S x i ) ) }
  for 1 i n 1 ,
 – { x i c o n t i n u e , x j c h e c k i n } for all { x i , x j } E G , 1 i j n 1 ,
 – { x i 2 ( d o m ( S x i ) ) , x i r e t u r n 1 } , { x i 2 ( d o m ( S x i ) ) , x i r e t u r n 2 } for 1 i n 1 ,
belong to E G . As we have mentioned above, we present also a graphical representation in Figure 1.
If β ( x i ) = ( s ) , then x i r e t u r n 2 is replaced by p ≥ 1 nodes of the form x i r e t u r n 2 k , 1 ≤ k ≤ p , where P O x i = { Z 1 , Z 2 , … , Z p } , p ≥ 1 . They are presented in Table 3. Furthermore, if P O x i = ∅ , then x i r e t u r n 2 is removed. Analogously, if F O x i = ∅ , then x i r e t u r n 1 is removed. Now, an edge between x i 1 , x i 3 and x i 2 ( d o m ( S x i ) ) , [ ( a , b ) ; ( u , v ) ] ∈ d o m ( S x i ) , on the one hand, and each node x i r e t u r n 2 k , on the other hand, is added to E G ′ .
The output node x n s is defined as follows: S ( x n s ) = S ( x n ) , P ( x n s ) = P I ( x n ) and F ( x n s ) = F I ( x n ) , with A ( x n s ) = A ( x n ) , β ( x n s ) = β ( x n ) . Finally, we add all the edges { x i c o n t i n u e , x n s } x i , x n E G , to E G .
We now analyze a computation of Γ on an input string < w > . In the input node x 1 s , > is replaced with the sequence ψ > . Then, < w ψ > enters x 1 0 , where ψ > is replaced with the symbols > # 1 and the resulting string is sent to the node x 1 1 . Thus, the node x 1 1 contains the string z = < w > # 1 associated to the input string z = < w > placed in the node x 1 Γ . More generally, we may assume that a string w = z 1 z 2 # i , for some 1 i n 1 , is in x i 1 if x i X G contains the corresponding string w = z 1 z 2 U * . Note that any string produced in x 1 1 can return to x 1 0 . However, these strings have the character # 1 switched with θ 1 , which is not accepted by the connected nodes x 1 s and x 1 1 . Consequently, the node x 1 0 can be disregarded for the rest of the computation.
We now start the simulation of the first splicing step executed by Γ .
Firstly, we analyze the procedure for the case of a splicing node x i Γ . In x i 1 , a rule [ ( a , b ) ; ( u , v ) ] is applied to z 1 z 2 # i producing α 1 a v β 2 and β 1 u b α 2 , if a rule [ ( a , b ) ; ( u , v ) ] can be applied to z 1 z 2 in the node x i X G . Let y 1 y 2 # i be one string obtained after a splicing step from z 1 z 2 # i in x i 1 . Note that if the splicing rule cannot be applied, then z 1 z 2 # i may go out from x i 1 and enter the following nodes:
  • x i c h e c k i n , provided that z 1 z 2 satisfies the condition of the input filter of x i X G ,
  • x i r e t u r n 1 and x i r e t u r n 2 if β ( x i ) = ( w ) or each of the nodes x i r e t u r n 2 k if β ( x i ) = ( s ) , provided that z 1 z 2 does not satisfy the condition of the output filter of x i X G ,
  • x i 2 ( d o m ( S x i ) ) .
All cases are to be analyzed. If z 1 z 2 # i leaves x i 1 and enters x i c h e c k i n , the symbol # i is replaced with θ i , which locks the string in that node. If z 1 z 2 # i leaves x i 1 and enters x i r e t u r n 1 , then ping-pong processes between these two nodes as well as between x i 1 and x i 2 ( d o m ( S x i ) ) start. We distinguish here the cases of weak and strong filters. If β ( x i ) = ( w ) , the string z 1 z 2 # i can also enter x i r e t u r n 2 , starting an identical relationship to the one between x i r e t u r n 1 and the nodes x i 1 , x i 2 ( d o m ( S x i ) ) . If β ( x i ) = ( s ) , the string enters the nodes x i r e t u r n 2 k , provided that z 1 z 2 # i does not contain the character Z k , followed by the same ping-pong process between these nodes and x i 1 , x i 2 ( d o m ( S x i ) ) . In this last case, the structure simulates the situation where a string remains in the node x i because it only contains a proper subset of the characters in P O ( x i ) .
If z 1 z 2 # i leaves x i 1 and enters x i 2 ( d o m ( S x i ) ) , then # i is replaced by θ i ; the new string z 1 z 2 θ i is simultaneously sent to all nodes x i c h e c k o u t , x i r e t u r n 1 and x i r e t u r n 2 . If it enters x i c h e c k o u t , then θ i is replaced firstly by ¥ i (in x i c h e c k o u t ) and secondly by some $ j (in x i c o n t i n u e ) provided that { x i , x j } E G . The newly obtained string z 1 z 2 $ j is sent to x j c h e c k i n . Note that this situation simulates exactly the situation when z 1 z 2 is sent to x j after staying unchanged for one splicing step in x i . The case when z 1 z 2 θ i enters any of the nodes x i r e t u r n 1 and x i r e t u r n 2 is considered above.
The only case remaining to be analyzed is when z 1 z 2 # i is transformed into y 1 y 2 # i (either y 1 = z 1 or y 2 = z 2 ) after applying a splicing rule in x i 1 . Then, y 1 y 2 # i leaves x i 1 . We follow the route of this string through the network: x i 2 , where # i is replaced by θ i , then x i 3 , where u and v are replaced by u and v, respectively. We analyze in detail the case for a string y 1 y 2 θ i with y 1 z 1 . The application of the first splicing rule on a string y = β 1 u y 2 θ i yields two strings, namely y = β 1 u # and y = π u y 2 θ i . These two strings cannot exit the node, as they contain # and π , respectively. In the next splicing step, these two strings can only combine between themselves through the second splicing rule { [ ( π , u b ) ; ( t , u # ) ] , yielding two new strings: y 2 = π u # and y 2 = β 1 u y 2 θ i . The first one cannot be used in the computation anymore, while the latter is the original string y 1 y 2 θ i with u replaced by u. The procedure for a string y 1 y 2 θ i with y 2 z 2 is analogous through the application of the other two splicing rules.
After leaving x i 3 , the new string, say z , can enter either x i c h e c k o u t or at least one of x i r e t u r n 1 and x i r e t u r n 2 . If it enters x i c h e c k o u t and consequently x i c o n t i n u e , then the following computational step in Γ was simulated in Γ as follows: z was obtained from z 1 z 2 by means of a splicing rule [ ( a , b ) ; ( u , v ) ] in x i X G , then z was sent to all neighbors of x i . The situation when z enters one of the nodes x i r e t u r n 1 and x i r e t u r n 2 corresponds to the situation when z remains in x i for a new splicing step, as it cannot pass the output filter of x i .
We now analyze the computational steps required for simulating a computation in Γ . The input node x 1 s and x 1 0 require 1 splicing step (or 2 computational steps) each. In the worst case, a splicing step in one of the nodes { x i 1 i n 1 } can be simulated in Γ in 7 splicing steps (or 14 computational steps) distributed in the following way:
  • 2 steps in x i 1 .
  • 2 steps in x i 2 .
  • 4 steps in x i 3 .
  • 2 steps in x i c h e c k o u t .
  • 2 steps in x i c o n t i n u e .
  • 2 steps in x j c h e c k i n 1 j n 1 .
Note that the simulation only requires 12 computational steps if x j x n s since x n s is not simulated by a subnetwork and the computation halts once a string enters that node. By all the above considerations, we conclude that L ( Γ ) = L ( Γ ) and T i m e Γ ( n ) O ( T i m e Γ ( n ) ) . □

4. Direct Simulations between NSPs and NSPFCs

Proposition 3. 
Time N S P ( f ( n ) ) ⊆ Time N S P F C ( f ( n ) ) for any function f : ℕ → ℕ .
Proof. 
We take a NSP Γ = ( V , U , < , > , G , N , β , x 1 , x n ) with the underlying graph G = ( X G , E G ) with the set of nodes X G = { x 1 , x 2 , , x n } for some n 1 ; x 1 I n ̲ and x n H a l t ̲ . We construct the NSPFC Γ = ( V , U , < , > , G , N , β , x I , x n s ) ; x I I n ̲ and x n s H a l t ̲ , where
V = V , U = U U T ,
U = { a a U } , u = a 1 a 2 a | u | u = a 1 a 2 a | u | U * ,
z ( u , v ) = β 1 u v β 2 z = β 1 u v s . β 2 U * , T = { $ i i { 1 , , n } } { # , $ , π , θ , σ }
We now define the parameters of Γ . First, for each pair of nodes x i , x j 1 i j n , i n of X G with { x i , x j } E G , we have the following nodes in Γ
x i , j 1 : S ( x i , j 1 ) = { [ ( ε , θ ) ; ( # , $ ) ] } , A ( x i , j 1 ) = { # $ } ,
x i , j 2 : S ( x i , j 2 ) = { [ ( ε , $ ) ; ( # , ψ ) ] } , A ( x i , j 2 ) = { # ψ } ,
and the following edges (the nodes x i f and x j s are to be defined below):
{ x i f , x i , j 1 } : P = P O ( x i ) , F = F O ( x i ) { $ } , β = β ( x i ) ,
{ x i , j 1 , x i , j 2 } : P = { $ } , F = { ψ } , β = ( w ) ,
{ x i , j 2 , x j s } : P = P I ( x j ) , F = F I ( x j ) { $ , $ j , θ } , β = β ( x j ) .
The specifications for the input node x I and the output node x n s are as follows:
x I : S ( x I ) = { [ ( ε , > ) ; ( # , > ψ ) ] } , A ( x I ) = { # > ψ } ,
x n s : S ( x n s ) = S ( x n Γ ) , A ( x n s ) = A ( x n Γ ) .
and the edge:
{ x I , x 1 s } : P = { ψ } , F = { # } , β = ( s ) .
For each node x i , 1 ≤ i ≤ n − 1 , in Γ , we add a subnetwork (Figure 2) to Γ ′ according to the cases considered in the sequel.
Case 1. If x i is a splicing node with β ( x i ) = ( w ) , the subnetwork is defined as follows:
  • x i s
    S = { [ ( ε , ψ ) ; ( # , $ i ) ] } { [ ( ε , $ i ) ; ( # , θ ) ] }
    A = { # $ i , # θ }
  • x i 1
    S = { [ ( a , b ) ; ( u , v ) ] ] [ ( a , b ) ; ( u , v ) ] S x i , { a , b , u , v } U * } }
    A = { z k ( u , v ) $ i 1 k c a r d ( A x i ) , z k A x i }
  • x i 2
    S = { [ ( ε , $ i ) ; ( # , θ ) ] } { [ ( ε , σ ) ; ( # , θ ) ] } { [ ( ε , θ ) ; ( # , # ) ] }
    A = { # θ , # # }
  • x i f
    S = { [ ( a , v ) ; ( # , v π ) ] a U * } { [ ( a v , π ) ; ( # v , t ) ] { a , v } U * , t { U * { θ } } }
    { [ ( u , b ) ; ( π u , # ) ] b U * { θ } } { [ ( π , u b ) ; ( t , u # ) ] { u , t } U * , b { U * { θ } } }
    A = { π u # , # v π θ }
  • x i r t 1
    S = { [ ( ε , θ ) ; ( # , σ ) ] }
    A = { # σ }
  • x i r t 2
    S = { [ ( ε , θ ) ; ( # , σ ) ] }
    A = { # σ }
The edges between them are:
{ x i s , x i 1 } : P = { $ i } , F = { # θ } , β = ( w ) ,
{ x i 1 , x i 2 } : P = { $ i } U , F = { θ , # } , β = ( w ) ,
{ x i 2 , x i f } : P = { θ } , F = { # , $ i , π } , β = ( w ) ,
{ x i f , x i r t 1 } : P = F O { x i } , F = { # , σ , π } , β = ( w ) ,
{ x i f , x i r t 2 } : P = , F = P O { x i } { # , σ , π } , β = ( w ) ,
{ x i r t 1 , x i 1 } : P = { σ } , F = { # , $ i , θ } U , β = ( w ) ,
{ x i r t 2 , x i 1 } : P = { σ } , F = { # , $ i , θ } U , β = ( w ) ,
Case 2. If x i is a splicing node with strong filters, we replace the node x i r t 2 with p ≥ 1 nodes of the form x i r t 2 k , 1 ≤ k ≤ p , where P O x i = { Z 1 , Z 2 , … , Z p } , p ≥ 1 . These new nodes are defined as follows:
x i r t 2 k : S = { [ ( ε , θ ) ; ( # , σ ) ] } , A = { # σ } ,
and the edges between x i r t 2 and the nodes x i f and x i s are replaced with 2 p edges of the form
{ x i f , x i r t 2 k } : P = , F = { Z k , # , σ , π } U , β = ( w ) ,
{ x i r t 2 k , x i 1 } : P = { σ } , F = { # , $ i , θ } U , β = ( w )
In both cases, if P O x i = ∅ , then the nodes x i r t 2 and x i r t 2 k are removed. Analogously, if F O x i = ∅ , then x i r t 1 is removed.
We now analyze a computation of Γ on the input string w = < z > . In the input node x I , the symbol ψ is attached at the end of the string. Next, it enters x 1 s and the simulation of a computation in Γ starts. We assume that w ψ lies in x 1 s while the string w is found in x 1 , the input node of Γ . Inductively, we may assume that a string w is found in some x i , a node of Γ , as well as w ψ in x i s from Γ .
Let x i be a splicing node, where a rule [ ( a , b ) ) ; ( u , v ) ] is applied to w producing α 1 a v β 2 and β 1 u b α 2 , if w = α 1 a b α 2 or w otherwise. In Γ , the string w ψ is processed as follows. First w ψ becomes w $ i in x i s , then it can enter x i 1 only. Here, it may become w = α 1 a v β 2 $ i or w = β 1 u b α 2 $ i , if w = α 1 a b α 2 , or it is left unchanged.
Further, a string of the form w $ i produced in this node can go back to x i s , where $ i is changed to θ , sealing that string in the node. On the other hand, the strings can also enter x i 2 . In x i 2 , the symbols $ i and σ are replaced by θ , closing the route back to x i 1 . Note that the other produced strings # $ i , # σ and # θ remain locked in the node.
Finally, the strings enter x i f where each symbol c u or c v , if present, is rewritten into the associated symbol c, replacing u with u and v with v. Thoroughly, the string w = α 1 a v β 2 θ is split into the strings w 1 = α 1 a v π θ and w 2 = # v β 2 θ . Both strings cannot leave the node because of the characters π and #. In the next splicing step, the only rule that can be applied to these strings is { [ ( a v , π ) ; ( # v , t ) ] . At the same time, this rule can only be applied to these two new strings. The rule yields w 1 = α 1 a v β 2 θ and w 2 = # v π θ . The former contains the string generated by the original node x i , w 1 = α 1 a v β 2 , completing the simulation of the splicing step. On the other hand, the latter cannot exit the node or be modified by any rule, so it remains locked for the rest of the computation. The logic is analogous for the other string generated by the splicing rule. Thus, in node x i f , we have obtained the strings α 1 a v β 2 θ and β 1 u b α 2 θ , if w = α a b α 2 .
In conclusion, if w U * is a string in the nodes x i of Γ and w ψ in x i s of Γ , then we can obtain w U * in one splicing step of Γ if and only if we can obtain the associated string w θ in the node x i f of Γ in 4 splicing steps (or 8 computational steps). At this point, we note that w can leave x i and enter x j in Γ if and only if the string can leave x i f and enter x j s via the nodes x i , j 1 and x i , j 2 . If it can leave x i , but cannot enter x j in Γ , then it is trapped in x i , j 2 in Γ . Furthermore, a string entering x j has the character ψ replaced with $ j , and consequently, it cannot return back to x i , j 2 . Finally, if the string cannot leave the node x i , then it is sent by x i f to x i r t 1 and x i r t 2 (in the case of weak filters) or to the nodes x i r t 1 and x i r t 2 k , for all Z k P O ( x i ) (in the case of strong filters). In these nodes, the character θ is replaced with σ , and the resulting string is sent to x i 1 . If the string is not split in the following splicing step, it returns to the nodes x i r t 1 and x i r t 2 in the case of weak filters or to the nodes x i r t 1 and x i r t 2 k in the case of strong filters, starting a ping-pong process between them, which continues until the string yields new ones by means of another splicing step in x i 1 . In this last case, the new strings can only enter x i 2 because it contains characters a u and the process described above is repeated.
We now analyze the computational steps required for simulating a computation in Γ . The input node x I and the node x 1 s require 1 splicing step (or 2 computational steps) each. In the worst case, a splicing step in one of the nodes { x i 1 i n 1 } can be simulated in Γ in 7 splicing steps (or 14 computational steps) distributed in the following way:
  • 2 steps in x i 1 .
  • 2 steps in x i 2 .
  • 4 steps in x i f .
  • 2 steps in x i , j 1 .
  • 2 steps in x i , j 2 .
  • 2 steps in x j s .
Note that the simulation only requires 12 computational steps if x j s x n s since the computation halts when a string enters x n s . We conclude that L ( Γ ) = L ( Γ ) and T i m e Γ ( n ) O ( T i m e Γ ( n ) ) . □
The converse of the previous proposition holds.
Proposition 4. 
Time N S P F C ( f ( n ) ) ⊆ Time N S P ( f ( n ) ) for any function f : ℕ → ℕ .
Proof. 
Let Γ = ( V , U , < , > , G , N , β , x 1 , x n ) be a NSPFC with G = ( X G , E G ) , X G having n nodes x 1 , x 2 , , x n ; x 1 I n ̲ and x n H a l t ̲ . We construct the NSP Γ = ( V , U , < , > , G , N , β , x I , x n s ) ; x I I n ̲ and x n s H a l t ̲ , where
V = V , U = U U X Y { # , ψ } ,
X = { X i , X i 1 i n } , Y = { Y i 1 i n 1 } ,
U = { a a U } u = a 1 a 2 a | u | u = a 1 a 2 a | u | U * ,
z ( u , v ) = β 1 u v β 2 z = β 1 u v s . β 2 U *
We now define the parameters of Γ . First, we add a main subnetwork composed by the input node x I , the nodes { x i s 1 i n } , the nodes { x i 1 , x i 2 1 i n 1 } and the nodes { x i , j 1 i j n , i n , and { x i , x j } E G } .
  • node x I :
    S = { [ ( ε , > ) ; ( # , > X 1 ) ] } ,
    A = { # > X 1 }
    P I = V , F I = X , P O = X , F O = ,
    β = ( w ) .
  • nodes x i s , 1 i n 1 :
    S = { [ ( a , b ) ; ( u , v ) ] ] [ ( a , b ) ; ( u , v ) ] S x i , { a , b , u , v } U * } ,
    A = { z k ( u , v ) X i 1 k c a r d ( A x i ) , z k A x i } ,
    P I = { X i , ψ } , F I = X { X i } { # } , P O = { X i , U } , F O = ,
    β = ( w ) .
  • node x n s :
    S = S ( x n ) ,
    A = A ( x n ) ,
    P I = { X n , ψ } , F I = X { X n } { # } , P O = { X n , U } , F O = ,
    β = ( w ) .
  • nodes x i 1 , 1 i n 1 :
    S = { [ ( ε , X i ) ; ( # , X i ) ] } [ ( ε , ψ ) ; ( # , X i ) ] ] ,
    A = { # X i } ,
    P I = { X i , U } , F I = { # } , P O = { X i } , F O = ,
    β = ( w ) .
  • nodes x i 2 , 1 i n 1 :
    S = { [ ( a , v ) ; ( # , v π ) ] a U * } { [ ( a v , π ) ; ( # v , t ) ] { a , v } U * , t { U * { X i , X i ψ } } }
    { [ ( u , b ) ; ( π u , # ) ] b U * { θ } } { [ ( π , u b ) ; ( t , u # ) ] { u , t } U * , b { U * { X i , X i , ψ } } } ,
    A = { π u # , # v π X i } ,
    P I = { X i } , F I = { # } , P O = { X i } , F O = { π , # } ,
    β = ( w ) .
  • nodes x i , j , 1 i j n , i n , { x i , x j } E G :
    S = { [ ( ε , X i ) ; ( # , X j ) ] ] } ,
    A = { # X j } ,
    P I = P { x i , x j } , F I = F { x i , x j } { X j } , P O = { X j } , F O = { # } ,
    β = β ( { x i , x j } ) .
The edges between them are:
{ { x I , x 1 s } }
{ { x i s , x i 1 } 1 i n 1 }
{ { x i 1 , x i 2 } 1 i n 1 }
{ { x i 2 , x i , j } 1 i j n , i n , and { x i , x j } E G } }
{ { x i , j , x j } 1 i j n , i n , and { x i , x j } E G } } ,
Next, for each node x i 1 i n 1 in Γ , we add a subnetwork to Γ according to the cases considered in the sequel.
Case 1. Let E i be the number of nodes connected to the node x i 2 of the form x i , j , we add the corresponding nodes { x i , j r t 1 ( k ) , x i , j r t 2 ( k ) 1 k E i } to the subnetwork. Note that E i is also equal to the number of edges between the node x i and the nodes x j 1 i j n , i n in the network Γ . If the edges { x i , x j } Γ have β = ( w ) , the subnetwork is defined as follows:
  • nodes x i , j r t 1 ( k ) , 1 k E i , 1 i j n , i n , { x i , x j } E G :
    S = { [ ( ε , X i ) ; ( # , Y 2 ) ] } , if k = 1 E i > 1 { [ ( ε , Y k ) ; ( # , Y k + 1 ) ] } , if 2 k E i 1 { [ ( ε , Y E i ) ; ( # , ψ ) ] } , if k = E i E i > 1 { [ ( ε , X i ) ; ( # , ψ ) ] } , if E i = 1 ,
    A = { # Y k + 1 } , if 1 k E i 1 { # ψ } , if k = E i ,
    P I = F { x i , x j } , F I = { # , ψ } { Y Y k } , P O = { ψ , Y k + 1 } , F O = { # } ,
    β = ( w ) .
  • nodes x i , j r t 2 ( k ) , 1 k E i , 1 i j n , i n , { x i , x j } E G :
    S = { [ ( ε , X i ) ; ( # , Y 2 ) ] } , if k = 1 E i > 1 { [ ( ε , Y k ) ; ( # , Y k + 1 ) ] } , if 2 k E i 1 { [ ( ε , Y E i ) ; ( # , ψ ) ] } , if k = E i E i > 1 { [ ( ε , X i ) ; ( # , ψ ) ] } , if E i = 1 ,
    A = { # Y k + 1 } , if 1 k E i 1 { # ψ } , if k = E i ,
    P I = , F I = P { x i , x j } { # , ψ } { Y Y k } , P O = { ψ , Y k + 1 } , F O = { # } ,
    β = ( w ) .
The edges between them are:
{ { x i 2 , x i , j r t 1 ( 1 ) } , { x i 2 , x i , j r t 2 ( 1 ) } }
{ { x i , j r t 1 ( k ) , x i , j r t 1 ( k + 1 ) } , { x i , j r t 1 ( k ) , x i , j r t 2 ( k + 1 ) } 1 k E i 1 }
{ { x i , j r t 2 ( k ) , x i , j r t 1 ( k + 1 ) } , { x i , j r t 2 ( k ) , x i , j r t 2 ( k + 1 ) } 1 k E i 1 }
{ { x i , j r t 1 ( E i ) , x i s } , { x i , j r t 2 ( E i ) , x i s } } ,
Case 2. If the k-th edge { x i , x j } has β = ( s ) , we replace each of the nodes x i , j r t 2 ( k ) with p ≥ 1 nodes of the form x i , j r t 2 t ( k ) , 1 ≤ t ≤ p , where P { x i , x j } = { Z 1 , Z 2 , … , Z p } , p ≥ 1 . The parameters S, A and β of these new nodes remain the same, while the input and output filters are defined as follows:
x i , j r t 2 t ( k ) : P I = , F I = { Z t , # , ψ } { Y Y k } , P O = { ψ , Y k + 1 } , F O = { # }
and the edges involving x i , j r t 2 ( k ) are replaced with p edges involving each of the nodes x i , j r t 2 t ( k ) .
In both cases, if the k-th edge has P { x i , x j } = ∅ , then the nodes x i , j r t 2 ( k ) and x i , j r t 2 t ( k ) are removed. Analogously, if F { x i , x j } = ∅ , then x i , j r t 1 ( k ) is removed. Furthermore, if P { x i , x j } = ∅ and F { x i , x j } = ∅ , the whole subnetwork given in Figure 3 is removed because a string is always able to leave the node x i ∈ Γ through the edge k.
Let us assume the string w = < z > to be the input string in Γ . In the input node x I of Γ , the symbol X 1 is attached to the string and then sent to the node x 1 s . Thus, the node x 1 Γ contains the string w = < z > while w = < z > X 1 lies in the analogous node x 1 s Γ . In a more general setting, we assume that a string y = y X i , y U * , enters x i s Γ at a given step of the computation of Γ on w if and only if the string y enters x i Γ at a given step of the computation of Γ on w.
Let y be transformed into z 1 = α 1 a v s . β 2 and z 2 = β 1 u b α 2 in node x i and z 1 and/or z 2 can pass the filter on the edge between x i and x j . In Γ , the string y = y X i is transformed into z 1 = α 1 a v β 2 X i and z 2 = β 1 u b α 2 X i in the node x i s . If the string is of the form y = y ψ , it is transformed into z 1 = α 1 a v β 2 X i and z 2 = β 1 u b α 2 ψ . The strings enter x i 1 where the characters X i and ψ are replaced with X i . Thus, we get the same strings with either X i or ψ . Then, it continues into x i 2 , where the symbols in u and v , if present, are replaced with the original counterpart u and v. In detail, the string z 1 is split into the strings z 1 ( 1 ) = α 1 a v π X i and z 1 ( 2 ) = # v β 2 X i . Both strings cannot leave the node because of the characters π and #. In the next splicing step, the rule { [ ( a v , π ) ; ( # v , t ) ] is applied to the two strings, yielding z 1 ( 1 ) = α 1 a v s . β 2 X i and z 2 ( 2 ) = # v π X i . The former constitutes the string z = z 1 X i where z 1 is the string yielded by x i Γ , completing the simulation of the application of a splicing rule of Γ . On the other hand, the latter cannot exit the node or be modified by any rule, so it remains locked for the rest of the computation. The logic is analogous for the other string generated by the splicing rule. The new string z = z 1 X i is sent to the nodes x i , j associated to the edges { x i , x j } in the original network. Since the input filters of these nodes are identical to the edge filters in Γ , the string z 1 X i can enter a node x i , j Γ if and only if the string z 1 can pass the filter between x i and x j in Γ . Note that the converse is also true. Subsequently, the symbol X i is replaced with X j and the new string is sent to x j s Γ where a similar procedure starts. Alternatively, if the string y is not split in x i Γ , the same event happens in x i s Γ . If the string is of the form y = y ψ , it remains locked in that node until a splicing rule can be applied on it while a string of the form y = y X i still enters the node x i 1 because of the symbol X i and the same computational process described above starts.
On the other hand, a copy of the string z X i is also sent to the subnetwork illustrated in Figure 3, where the case of a string not passing the filters of any edge between the node x i and the connected nodes in the original network Γ is handled. We distinguish here between the cases of strong and weak filters. For each edge k with β = ( w ) , 1 k E i , the node x i , j r t 1 ( k ) checks if the string cannot enter the corresponding node x i , j ( k ) because of containing forbidden symbols, while the node x i , j r t 2 ( k ) verifies if the string contains the characters required by the P I filter of the node x i , j ( k ) . Thoroughly, the string exits x i 2 and enters the first pair of nodes x i , j r t 1 ( 1 ) and x i , j r t 2 ( 1 ) granted that the associated string z could not pass the filters of the first edge in the network Γ . In that node, X i is replaced with Y 2 ensuring that the new string could only enter the next pair of nodes x i , j r t 1 ( 2 ) and x i , j r t 2 ( 2 ) associated to the second edge. Subsequently, Y k 2 k E i 1 is replaced by the next character Y k + 1 in the remaining pair of nodes x i , j r t 1 ( k ) and x i , j r t 2 ( k ) , forcing the string to continue through them in sequence. If the string can enter one of the connected nodes through an edge in Γ , it will be lost at some point of this subnetwork as it will be refused by the input filters of the nodes corresponding to that edge. Otherwise, it will reach the last pair of nodes where the symbol Y E i will be changed to ψ , yielding a string of the form z ψ which will be returned to x i s . Because of the character ψ , this string cannot leave the node x i s ensuring it remains there until it can be used in a new splicing step. Thus, the following computational step in Γ was simulated in Γ : z could not leave the node x i and it remained in that node for new splicing steps. In the case of an edge k with strong filters, the computation follows the same procedure described above with the difference that the node x i , j r t 2 ( k ) is replaced with p 1 nodes of the form x i , j r t 2 t ( k ) , 1 t p , where P { x i , x j } = { Z 1 , Z 2 , , Z p } , p 1 . A node x i , j r t 2 t ( k ) refuses any string z containing the character Z t . Thus, the string can only continue through this network by means of a node x i , j r t 2 t ( k ) for some 1 t p if it only contains a proper subset of P { x i , x j } .
We analyze now the case where a string z X i enters x i , n from a node x i 2 , for some 1 i n 1 . In that node, X i is replaced with X n and the resulting string enters x n s , ending the computation. Note that by the considerations above, a string enters x i , n if and only if a string from x i was able to pass the filter on the edge between x i and x n in Γ .
We now analyze the computational steps required for simulating a computation in Γ . The input node x I requires 1 splicing step (or 2 computational steps). In the worst case, a splicing step in one of the nodes { x i 1 i n 1 } can be simulated in the associated subnetwork in 4 + E i splicing steps (or 8 + 2 E i computational steps) distributed in the following way:
  • 2 steps in x i s .
  • 2 steps in x i 1 .
  • 4 steps in x i 2 .
  • 2 steps in x i , j if the string z yielded by the splicing step in the node x i Γ could pass any of the edge filters between x i and x j in Γ ; 1 i j n , i n .
  • 2 E i steps if z could not pass any of the edge filters in Γ .
Since E i is bounded by n 1 , where n is the number of nodes in Γ , a NSPFC can be simulated by NSP in O ( n ) time. As a result of this analysis, we conclude that L ( Γ ) = L ( Γ ) and T i m e Γ ( n ) O ( T i m e Γ ( n ) ) . □

5. Direct Simulations between NUSPs and NSPFCs

Proposition 5. 
Time N U S P ( f ( n ) ) ⊆ Time N S P F C ( f ( n ) ) for any function f : ℕ → ℕ .
Proof. 
This proposition can be proved through a NSPFC simulation of NUSP identical to the proposed simulation for NSP in the previous section, with the only difference being the replacement of the filters P I , P O with P and F I , F O with F in the specifications.
Therefore, the input node x I and the node x 1 s require 1 splicing step (or 2 computational steps) each. In the worst case, a splicing step in one of the nodes { x i 1 i n 1 } can be simulated in Γ in 7 splicing steps (or 14 computational steps) distributed in the following way:
  • 2 steps in x i 1 .
  • 2 steps in x i 2 .
  • 4 steps in x i f .
  • 2 steps in x i , j 1 .
  • 2 steps in x i , j 2 .
  • 2 steps in x j s .
Similarly, we conclude that L ( Γ ) = L ( Γ ) and T i m e Γ ( n ) O ( T i m e Γ ( n ) ) . □
The converse of the previous proposition holds.
Proposition 6. 
Time N S P F C ( f ( n ) ) ⊆ Time N U S P ( f ( n ) ) for any function f : ℕ → ℕ .
Proof. 
Let Γ = ( V , U , < , > , G , N , β , x 1 , x n ) be a NSPFC with G = ( X G , E G ) , X G having n nodes x 1 , x 2 , , x n ; x 1 I n ̲ and x n H a l t ̲ . We construct the NUSP Γ ′ = ( V , U ′ , < , > , G ′ , N ′ , β ′ , x I , x n s ) ; x I I n ̲ and x n s H a l t ̲ , where
V = V , U = U U X Y { # , ψ } ,
X = { X i , X i 1 i n } , Y = { Y i 1 i n 1 } ,
U = { a a U } u = a 1 a 2 a | u | u = a 1 a 2 a | u | U * ,
z ( u , v ) = β 1 u v β 2 z = β 1 u v s . β 2 U *
We now define the parameters of Γ . First, we add a main subnetwork composed by the input node x I , the nodes { x i 1 i n } , the nodes { x i 1 , x i 2 1 i n 1 } and the nodes { x i , j 1 i j n , i n , and { x i , x j } E G } .
  • node x I :
    S = { [ ( ε , > ) ; ( # , > X 1 ) ] } ,
    A = { # > X 1 }
    P = V , F = { # , X 1 } ,
    β = ( w ) .
  • node x I 1 :
    S = { [ ( ε , X 1 ) ; ( # , X 1 ) ] } { [ ( ε , X 1 ) ; ( # , # ) ] } ,
    A = { # X 1 , # # }
    P = { X 1 , X 1 } , F = { # } ,
    β = ( w ) .
  • nodes x i s , 1 i n 1 :
    S = { [ ( a , b ) ; ( u , v ) ] ] [ ( a , b ) ; ( u , v ) ] S x i , { a , b , u , v } U * } ,
    A = { z k ( u , v ) X i 1 k c a r d ( A x i ) , z k A x i } ,
    P = { X i , ψ , U } , F = X { X i } { # } ,
    β = ( w ) .
  • nodes x n s :
    S = S ( x n ) ,
    A = A ( x n ) ,
    P = { X n , ψ , U } , F = X { X n } { # } ,
    β = ( w ) .
  • nodes x i 1 , 1 i n 1 :
    S = { [ ( ε , X i ) ; ( # , X i ) ] } [ ( ε , ψ ) ; ( # , X i ) ] ] [ ( ε , X i ) ; ( # , # ) ] ] ,
    A = { # X i , # # } ,
    P = { X i , X i , U } , F = { # } ,
    β = ( w ) .
  • nodes x i 2 , 1 i n 1 :
    S = { [ ( a , v ) ; ( # , v π ) ] a U * } { [ ( a v , π ) ; ( # v , t ) ] { a , v } U * , t { U * { X i , X i ψ } } }
    { [ ( u , b ) ; ( π u , # ) ] b U * { θ } } { [ ( π , u b ) ; ( t , u # ) ] { u , t } U * , b { U * { X i , X i , ψ } } } ,
    A = { π u # , # v π X i } ,
    P = { X i } , F = { π , # } ,
    β = ( w ) .
  • nodes x i , j , 1 i j n , i n , { x i , x j } E G :
    S = { [ ( ε , X i ) ; ( # , X j ) ] ] } { [ ( ε , X j ) ; ( # , # ) ] ] } ,
    A = { # X j , # # } ,
    P = P { x i , x j } , F = F { x i , x j } ,
    β = β ( { x i , x j } ) .
The edges between them are:
{ { x I , x I 1 } }
{ { x I 1 , x 1 s } }
{ { x i s , x i 1 } 1 i n 1 }
{ { x i 1 , x i 2 } 1 i n 1 }
{ { x i 2 , x i , j } 1 i j n , i n , and { x i , x j } E G } }
{ { x i , j , x j s } 1 i j n , i n , and { x i , x j } E G } } .
Next, for each node x i 1 i n 1 in Γ , we add a subnetwork to Γ according to the cases considered in the sequel.
Case 1. Let E i be the number of nodes connected to the node x i 2 of the form x i , j , we add the corresponding nodes { x i , j r t 1 ( k ) , x i , j r t 2 ( k ) 1 k E i } to the subnetwork. Note that E i is also equal to the number of edges between the node x i and the nodes x j 1 i j n , i n in the network Γ . If the edges { x i , x j } Γ have β = ( w ) , the subnetwork is defined as follows:
  • nodes x i , j r t 1 ( k ) , 1 k E i , 1 i j n , i n , { x i , x j } E G :
    S = { [ ( ε , X i ) ; ( # , Y 2 ) ] } { [ ( ε , ψ ) ; ( # , # ) ] } , if k = 1 E i > 1 { [ ( ε , Y k ) ; ( # , Y k + 1 ) ] } { [ ( ε , ψ ) ; ( # , # ) ] } , if 2 k E i 1 { [ ( ε , Y E i ) ; ( # , ψ ) ] } , if k = E i E i > 1 { [ ( ε , X i ) ; ( # , ψ ) ] } , if E i = 1
    A = { # Y k + 1 , # # } , if 1 k E i 1 { # ψ } , if k = E i ,
    P = F { x i , x j } , F = { # , X i } { Y { Y k , Y k + 1 } } { U } ,
    β = ( w ) .
  • nodes x i , j r t 2 ( k ) , 1 k E i , 1 i j n , i n , { x i , x j } E G :
    S = { [ ( ε , X i ) ; ( # , Y 2 ) ] } { [ ( ε , ψ ) ; ( # , # ) ] } , if k = 1 E i > 1 { [ ( ε , Y k ) ; ( # , Y k + 1 ) ] } { [ ( ε , ψ ) ; ( # , # ) ] } , if 2 k E i 1 { [ ( ε , Y E i ) ; ( # , ψ ) ] } , if k = E i E i > 1 { [ ( ε , X i ) ; ( # , ψ ) ] } , if E i = 1
    A = { # Y k + 1 , # # } , if 1 k E i 1 { # ψ } , if k = E i ,
    P = , F = P { x i , x j } { # , X i } { Y { Y k , Y k + 1 } } { U } ,
    β = ( w ) .
The edges between them are:
{ { x i 2 , x i , j r t 1 ( 1 ) } , { x i 2 , x i , j r t 2 ( 1 ) } }
{ { x i , j r t 1 ( k ) , x i , j r t 1 ( k + 1 ) } , { x i , j r t 1 ( k ) , x i , j r t 2 ( k + 1 ) } 1 k E i 1 }
{ { x i , j r t 2 ( k ) , x i , j r t 1 ( k + 1 ) } , { x i , j r t 2 ( k ) , x i , j r t 2 ( k + 1 ) } 1 k E i 1 }
{ { x i , j r t 1 ( E i ) , x i s } , { x i , j r t 2 ( E i ) , x i s } } ,
Case 2. If the k-th edge { x i , x j } has β = ( s ) , we replace each of the nodes x i , j r t 2 ( k ) with p ≥ 1 nodes of the form x i , j r t 2 t ( k ) , 1 ≤ t ≤ p , where P { x i , x j } = { Z 1 , Z 2 , … , Z p } , p ≥ 1 . The parameters S, A and β of these new nodes remain the same, while the input and output filters are defined as follows:
x i , j r t 2 t ( k ) : P = , F = { Z t , # , X i } { Y { Y k , Y k + 1 } } { U }
and the edges involving x i , j r t 2 ( k ) are replaced with p edges involving each of the nodes x i , j r t 2 t ( k ) .
In both cases, if the k-th edge has P { x i , x j } = ∅ , then the nodes x i , j r t 2 ( k ) and x i , j r t 2 t ( k ) are removed. Analogously, if F { x i , x j } = ∅ , then x i , j r t 1 ( k ) is removed. Furthermore, if P { x i , x j } = ∅ and F { x i , x j } = ∅ , the whole subnetwork given in Figure 3 is removed because a string is always able to leave the node x i ∈ Γ through the edge k.
Let us assume the string w = < z > to be the input string in Γ . In the input node x I of Γ , the symbol X 1 is attached to the string and then sent to the node x I 1 , where it is replaced with X 1 . Then, the new string is sent to x 1 s and the simulation process starts. Thus, the node x 1 Γ contains the string w = < z > while w = < z > X i lies in the analogous node x 1 s Γ . In a more general setting, we assume that a string y = y X i , y U * , enters x i s Γ at a given step of the computation of Γ on w if and only if the string y enters x i Γ at a given step of the computation of Γ on w. Note that any string entering x I 1 from x 1 s will have the character X 1 replaced with #, which will result in that string being locked in that node. Therefore, the two nodes x I and x I 1 do not interfere with the computation anymore.
Let y be transformed into z 1 = α 1 a v s . β 2 and z 2 = β 1 u b α 2 in node x i and z 1 and/or z 2 can pass the filter on the edge between x i and x j . In Γ , the string y = y c , with c { X i , ψ } , is transformed into z 1 = α 1 a v β 2 X i and z 2 = β 1 u b α 2 c . The strings enter x i 1 , where the characters X i and ψ are replaced with X i . Thus, we get the same strings with any c { X i , ψ } . Then, it continues into x i 2 , where the symbols in u and v , if present, are replaced with the original counterpart u and v. In detail, the string z 1 is split into the strings z 1 ( 1 ) = α 1 a v π X i and z 1 ( 2 ) = # v β 2 X i . Both strings cannot leave the node because of the characters π and #. In the next splicing step, the rule { [ ( a v , π ) ; ( # v , t ) ] is applied to the two strings, yielding z 1 ( 1 ) = α 1 a v s . β 2 X i and z 2 ( 2 ) = # v π X i . The former constitutes the string z = z 1 X i where z 1 is the string yielded by x i Γ , completing the simulation of the splicing step. On the other hand, the latter cannot exit the node or be modified by any rule, so it remains locked for the rest of the computation. The logic is analogous for the other string generated by the splicing rule. The new string z = z 1 X i is sent to the nodes x i , j associated to the edges { x i , x j } in the original network. Since the filters of these nodes are identical to the edge filters in Γ , the string z 1 X i can enter a node x i , j Γ if and only if the string z 1 can pass the filter between x i and x j in Γ . Note that the converse is also true. Subsequently, the symbol X i is replaced with X j and the new string is sent to x j s Γ where a similar procedure starts. Note that if the string returns to the previous node in this sequence, it is ultimately blocked or lost. In x i 1 , the strings entering from x i 2 are split into two strings containing the forbidden symbol # and consequently they remain locked in that node. The same situation happens if a string enters x i 2 from a node x i , j . On the other hand, if a string enters a node x i , j from x j s for a given j, the string is split into two strings containing the symbol #. In the next communication step, these two strings are ejected from the network because they are not accepted by any node connected to x i , j . Alternatively, if the string y remains unchanged in x i Γ , the same event happens in x i s Γ . If the string is of the form y = y ψ , it starts a ping-pong process with the nodes x i , j r t 1 and x i , j r t 2 (or the nodes x i , j r t 2 t ( k ) if the edge { x i , x j } Γ has strong filters) until a splicing rule can be applied on it. On the other hand, if the string is of the form y = y X i , it still enters the node x i 1 because of the symbol X i , starting the same computational process described above.
On the other hand, a copy of the string z X i is also sent to the subnetwork illustrated in Figure 3, where the case of a string not passing the filters of any edge between the node x i and the connected nodes in the original network Γ is handled. We distinguish here between the cases of strong and weak filters. For each edge k with β = ( w ) , 1 k E i , the node x i , j r t 1 ( k ) checks if the string cannot enter the corresponding node x i , j ( k ) because of containing forbidden symbols, while the node x i , j r t 2 ( k ) verifies if the string contains the characters required by the P filter of the node x i , j ( k ) . Thoroughly, the string exits x i 2 and enters the first pair of nodes x i , j r t 1 ( 1 ) and x i , j r t 2 ( 1 ) granted that the associated string z could not pass the filters of the first edge in the network Γ . In that node, X i is replaced with Y 2 , ensuring that the new string could only enter the next pair of nodes x i , j r t 1 ( 2 ) and x i , j r t 2 ( 2 ) associated to the second edge. Subsequently, Y k 2 k E i 1 is replaced by the next character Y k + 1 in the remaining pair of nodes x i , j r t 1 ( k ) and x i , j r t 2 ( k ) , forcing the string to continue through them in sequence. If the string can enter one of the connected nodes through an edge in Γ , it will be lost at some point of this subnetwork as it will be refused by the input filters of the nodes corresponding to that edge. Otherwise, it will reach the last pair of nodes where the symbol Y E i will be changed to ψ yielding a string of the form z ψ which will be returned to x i s . Now, the string can either be used for another splicing step in which case the resulting strings will exit the node and enter x i 1 as it will contain a character a V or a ping-pong process will start between this node and the nodes x i , j r t 1 ( E i ) and x i , j r t 2 ( E i ) that may continue forever or until a splicing rule may be applied to the string. Thus, the following computational step in Γ was simulated in Γ : z could not leave the node x i , and it remained in that node for new splicing steps. In the case of an edge k with strong filters, the computation follows the same procedure described above with the difference that the node x i , j r t 2 ( k ) is replaced with p 1 nodes of the form x i , j r t 2 t ( k ) , 1 t p , where P { x i , x j } = { Z 1 , Z 2 , , Z p } , p 1 . A node x i , j r t 2 t ( k ) refuses any string z containing the character Z t . Thus, the string can only continue through this network by means of a node x i , j r t 2 t ( k ) for some 1 t p if it only contains a proper subset of P { x i , x j } . Note that a string z ψ can also enter the previous nodes x i , j r t 1 ( E i 1 ) and x i , j r t 2 ( E i 1 ) granted that E i > 1 , but that string is split into two new strings with the forbidden symbol # through the application of the rule { [ ( ε , ψ ) ; ( # , # ) ] } and, consequently, they will remain locked in that node.
We now analyze the case where a string $zX_i$ enters $x_{i,n}$ from a node $x_{i2}$, for some $1 \le i \le n-1$. In that node, $X_i$ is replaced with $X_n$ and the resulting string enters $x_n^s$, ending the computation. Note that, by the considerations above, a string enters $x_{i,n}$ if and only if a string from $x_i$ was able to pass the filter on the edge between $x_i$ and $x_n$ in $\Gamma$.
We now analyze the number of computational steps required to simulate a computation of $\Gamma$. The input node $x_I$ requires 1 splicing step (or 2 computational steps). In the worst case, a splicing step in one of the nodes $\{x_i \mid 1 \le i \le n-1\}$ can be simulated in the associated subnetwork in $4 + E_i$ splicing steps (or $8 + 2E_i$ computational steps), distributed in the following way (a small worked instance is given after the list):
  • 2 steps in $x_i^s$.
  • 2 steps in $x_{i1}$.
  • 4 steps in $x_{i2}$.
  • 2 steps in $x_{i,j}$ if the string $z$ yielded by the splicing step in the node $x_i$ of $\Gamma$ could pass any of the edge filters between $x_i$ and $x_j$ in $\Gamma$, $1 \le i, j \le n$, $i \ne j$, $i \ne n$.
  • $2E_i$ steps if $z$ could not pass any of the edge filters in $\Gamma$.
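As a purely illustrative instance of this accounting (the value $E_i = 3$ is hypothetical), consider a node $x_i$ with three incident edges whose spliced string fails every edge filter:
\[
\underbrace{2}_{x_i^s} + \underbrace{2}_{x_{i1}} + \underbrace{4}_{x_{i2}} + \underbrace{2 \cdot 3}_{\text{routing subnetwork}} = 14 = 8 + 2E_i \ \text{computational steps},
\]
that is, $4 + E_i = 7$ splicing steps, in agreement with the bound stated above.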
Since $E_i$ is bounded by $n - 1$, where $n$ is the number of nodes of $\Gamma$ (a constant that does not depend on the input), each computational step of an NSPFC can be simulated by an NUSP in $\mathcal{O}(n)$ computational steps. As a result of this analysis, we conclude that $L(\Gamma') = L(\Gamma)$ and $Time_{\Gamma'}(n) \in \mathcal{O}(Time_{\Gamma}(n))$. □
By virtue of the results obtained in the proposed simulations, we are now able to state the main conclusion of this research:
Theorem 1. 
$\mathbf{Time}_{NSP}(f(n)) = \mathbf{Time}_{NUSP}(f(n)) = \mathbf{Time}_{NSPFC}(f(n))$
for any function $f : \mathbb{N} \rightarrow \mathbb{N}$.

6. Conclusions and Further Work

Being a theoretical piece of work, this paper follows the standard methodology: identifying the problem, discussing the theoretical implications and possible practical applications, stating the results, proving the statements, and discussing different aspects of the results.
We have demonstrated here an efficient method to simulate the different variants of NSP with random-context filters by one another. We described a methodology to translate any network belonging to one of the three variants into an equivalent one of the other two models. The definition of the former (its splicing rules, auxiliary strings, random-context filters, and connections between nodes) is used to determine the corresponding components of the new network, which are then assigned to its nodes and edges according to an established procedure. Although the constructed networks may appear more complex and significantly larger than the original network, the time complexity remains unchanged: the new networks can simulate any computational step of the original architecture in a constant number of computational steps. This method is theoretically important and attractive, as it allows direct and time-efficient conversions of one variant of NSP into another, avoiding intermediate computational models. Efficient simulations between different bio-inspired computational models are essential in both the theoretical and practical studies in this field. It is common for these paradigms to be used as problem solvers, and simulations between two models allow the scientific community to translate a solution proposed for one of the models to the other, avoiding the time and effort required otherwise. Furthermore, research on this topic could also have important practical applications in architecture design, such as the hypothetical construction of systems composed of a mix of different bio-inspired architectures, in an effort to improve the efficiency of a given task.
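Purely as an illustration of the kind of data such a translation consumes and produces (the class and field names below are ours and hypothetical, not part of the formal model), the description of a network that the constructions start from can be captured by a small record type:

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    rules: set            # splicing rules [(a, b); (u, v)] as tuples of pairs
    auxiliary: set        # auxiliary strings available in the node
    permitting: set       # P component of the random-context filter
    forbidding: set       # F component of the random-context filter
    mode: str = "w"       # "w" (weak) or "s" (strong)

@dataclass
class Network:
    processors: dict                                   # node name -> Processor
    edges: set = field(default_factory=set)            # frozensets {x_i, x_j}
    edge_filters: dict = field(default_factory=dict)   # used only by the NSPFC variant

# A translation in the spirit of the constructions above maps one Network
# instance to another, creating several new processors per original node
# and deriving their filters from the original node and edge filters.
```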
As one can see, the underlying graph of the simulating network might differ from that of the original network. An attractive line of further research, which may be considered not only for the variants studied here, would be to impose conditions on the topology of the underlying graph (for instance, requiring it to coincide with the original one, or to have a predefined form).
Last but not least, we have started a study of possible numerical simulations and implementations of some of these models. A few steps have already been taken by considering possible ways of assigning probabilities in some similar models; see, e.g., [25,26,27].

Author Contributions

Conceptualization, V.M. and J.A.S.M.; formal analysis, V.M. and M.P.; investigation, J.R.S.C. and V.M.; writing—original draft preparation, J.A.S.M.; writing—review and editing, V.M.; supervision, V.M.; funding acquisition, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by a grant of the Romanian Ministry of Education and Research, CCCDI-UEFISCDI, Project No. PN-III-P2-2.1-PED-2019-2391, within PNCDI III.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was partially supported by the Comunidad de Madrid under Convenio Plurianual with the Universidad Politécnica de Madrid in the actuation line of Programa de Excelencia para el Profesorado Universitario.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Martín-Vide, C.; Pazos, J.; Păun, G.; Rodríguez-Patón, A. A new class of symbolic abstract neural nets: Tissue P systems. Lect. Notes Comput. Sci. 2002, 2387, 290–299.
  2. Păun, G. Membrane Computing. An Introduction; Springer: Berlin/Heidelberg, Germany, 2002.
  3. Csuhaj-Varjú, E.; Mitrana, V. Evolutionary systems: A language generating device inspired by evolving communities of cells. Acta Inform. 2000, 36, 913–926.
  4. Csuhaj-Varjú, E.; Salomaa, A. Networks of parallel language processors. Lect. Notes Comput. Sci. 1997, 1218, 299–318.
  5. Morrison, J. Flow-Based Programming: A New Approach to Application Development; CreateSpace: Scotts Valley, CA, USA, 2010.
  6. Hillis, D. The Connection Machine; MIT Press: Cambridge, MA, USA, 1986.
  7. Gray, R.; Kotz, D.; Nog, S.; Rus, D.; Cybenko, G. Mobile agents: The next generation in distributed computing. In Proceedings of the IEEE International Symposium on Parallel Algorithms Architecture Synthesis, Aizu-Wakamatsu, Japan, 17–21 March 1997; pp. 8–24.
  8. Manea, F.; Martín-Vide, C.; Mitrana, V. Accepting networks of evolutionary word and picture processors: A survey. Sci. Appl. Lang. Methods 2010, 2, 525–560.
  9. Arroyo, F.; Castellanos, J.; Mitrana, V.; Santos, E.; Sempere, J.M. Networks of bio-inspired processors. Triangle Lang. Lit. Comput. 2012, 7, 4–22.
  10. Castellanos, J.; Martín-Vide, C.; Mitrana, V.; Sempere, J.M. Networks of evolutionary processors. Acta Inform. 2003, 39, 517–529.
  11. Manea, F.; Martín-Vide, C.; Mitrana, V. Accepting networks of splicing processors. In New Computational Paradigms, First Conference on Computability in Europe, CiE 2005, Amsterdam, The Netherlands, 8–12 June 2005; Cooper, S.B., Löwe, B., Torenvliet, L., Eds.; Lecture Notes in Computer Science, Volume 3526; Springer: Berlin/Heidelberg, Germany, 2005; pp. 300–309.
  12. Manea, F.; Martín-Vide, C.; Mitrana, V. All NP-problems can be solved in polynomial time by accepting networks of splicing processors of constant size. In DNA Computing, 12th International Meeting on DNA Computing, DNA12, Seoul, Korea, 5–9 June 2006, Revised Selected Papers; Mao, C., Yokomori, T., Eds.; Lecture Notes in Computer Science, Volume 4287; Springer: Berlin/Heidelberg, Germany, 2006; pp. 47–57.
  13. Head, T.; Păun, G.; Pixton, D. Language theory and molecular genetics: Generative mechanisms suggested by DNA recombination. In Handbook of Formal Languages, Volume 2. Linear Modeling: Background and Application; Rozenberg, G., Salomaa, A., Eds.; Springer: Berlin/Heidelberg, Germany, 1997; pp. 295–360.
  14. Head, T. Restriction enzymes in language generation and plasmid computing. In Biomolecular Information Processing: From Logic Systems to Smart Sensors and Actuators; Wiley: Hoboken, NJ, USA, 2012; pp. 245–263.
  15. Csuhaj-Varjú, E.; Kari, L.; Păun, G. Test tube distributed systems based on splicing. Comput. Artif. Intell. 1996, 15, 211–232.
  16. Păun, G. Distributed architectures in DNA computing based on splicing: Limiting the size of components. In Proceedings of the 1st International Conference on Unconventional Models of Computation, Auckland, New Zealand, 5–9 January 1998; pp. 323–335.
  17. Păun, G. DNA computing: Distributed splicing systems. Lect. Notes Comput. Sci. 1997, 1261, 353–370.
  18. Manea, F.; Martín-Vide, C.; Mitrana, V. Accepting networks of splicing processors: Complexity results. Theor. Comput. Sci. 2007, 371, 72–82.
  19. Margenstern, M.; Mitrana, V.; Pérez-Jiménez, M.J. Accepting hybrid networks of evolutionary processors. In DNA Computing, 10th International Workshop on DNA Computing, DNA 10, Milan, Italy, 7–10 June 2004, Revised Selected Papers; Ferretti, C., Mauri, G., Zandron, C., Eds.; Lecture Notes in Computer Science, Volume 3384; Springer: Berlin/Heidelberg, Germany, 2004; pp. 235–246.
  20. Bottoni, P.; Labella, A.; Manea, F.; Mitrana, V.; Petre, I.; Sempere, J.M. Complexity-preserving simulations among three variants of accepting networks of evolutionary processors. Nat. Comput. 2011, 10, 429–445.
  21. Dragoi, C.; Manea, F.; Mitrana, V. Accepting networks of evolutionary processors with filtered connections. J. Univers. Comput. Sci. 2007, 13, 1598–1614.
  22. Castellanos, J.; Manea, F.; de Mingo López, L.F.; Mitrana, V. Accepting networks of splicing processors with filtered connections. In Machines, Computations, and Universality; Durand-Lose, J., Margenstern, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 218–229.
  23. Gómez-Canaval, S.; Mitrana, V.; Păun, M.; Sanchez Martín, J.A.; Sánchez Couso, J.R. Networks of uniform splicing processors: Computational power and simulation. Mathematics 2020, 8, 1217.
  24. Rozenberg, G.; Salomaa, A. Handbook of Formal Languages; Springer: Berlin, Germany, 1997.
  25. Arroyo, F.; Gómez-Canaval, S.; Mitrana, V.; Păun, M.; Sánchez Couso, J.R. Towards probabilistic networks of polarized evolutionary processors. In Proceedings of the 2018 International Conference on High Performance Computing & Simulation (HPCS), Orléans, France, 16–20 July 2018; pp. 764–771.
  26. Gómez-Canaval, S.; Mitrana, V.; Păun, M.; Vararuk, S. High performance and scalable simulations of a bio-inspired computational model. In Proceedings of the 2019 International Conference on High Performance Computing & Simulation (HPCS), Dublin, Ireland, 15–19 July 2019; pp. 543–550.
  27. Sánchez, J.A.; Arroyo, F. Simulating probabilistic networks of polarized evolutionary processors. Procedia Comput. Sci. 2019, 159, 1421–1430.
Figure 1. A graphical representation of the network.
Figure 2. Subnetwork for splicing step.
Figure 3. Subnetwork to handle strings that cannot exit the node $x_i$.
Table 1. Description of the initial nodes.
Node | S | P | F | A | β
x 1 s { [ ( ε , > ) ; ( # , ψ > ) ] } { } { # , # 1 , θ 1 } { # ψ > } ( s )
x 1 0 { [ ( ε , ψ > ) ; ( # , > # 1 ) ] } { } ( U T ) { > , # 1 } { # > # 1 , # θ 1 } ( s )
x 1 0 { [ ( ε , ψ > ) ; ( # , > # 1 ) ] } { } ( U T ) { > , # 1 } { # > # 1 , # θ 1 } ( s )
{ [ ( ε , # 1 ) ; ( # , θ 1 ) ] }
Table 2. Description of the intermediate nodes.
Node | S | P | F | A | β
x i c h e c k i n { [ ( ε , $ i ) ; ( # , # i ) ] } P I x i ( F I x i U T ) { # # i , # θ i } β ( x i )
{ [ ( ε , # i ) ; ( # , θ i ) ] } { $ i , # i }
x i 1 { { [ ( a , b ) ; ( u , v ) ] { # i } { θ i , # } { z k ( u , v ) # i ( s )
[ ( a , b ) ; ( u , v ) ] S x i } 1 k c a r d ( A x i ) ,
z k A x i }
x i 2 { [ ( ε , # i ) ; ( # , θ i ) ] } U T { θ i , # i } { # θ i } ( w )
x i 2 ( d o m ( S x i ) ) , { [ ( ε , # i ) ; ( # , θ i ) ] } { θ i , # i } ( d o m ( S x i ) T U ) { # θ i } ( w )
{ θ i , # i }
x i 3 { [ ( u , b ) ; ( π u , # ) ] { θ i } { # , π } { π u # , # v π θ i } ( s )
b { U * { θ i } } }
{ [ ( π , u b ) ; ( t , u # ) ] { u , t } U * ,
b { U * { θ i } } }
{ [ ( a , v ) ; ( # , v s . π ) ] a U * }
{ [ ( a v , π ) ; ( # v , t ) ] { a , v } U * ,
t U * { θ i } }
x i c h e c k o u t { [ ( ε , θ i ) ; ( # , ¥ i ) ] } P O x i F O x i U { # ¥ i } β ( x i )
( T { θ i , ¥ i } )
x i c o n t i n u e { [ ( ε , ¥ i ) ; ( # , $ j ) ] T ( { ¥ i } { $ j { # $ j } ( s )
{ x i , x j } E G } { x i , x j } E G } )
x i r e t u r n 1 { [ ( ε , θ i ) ; ( # , # i ) ] } F O x i U { # , $ j , ¥ j { # # i } ( w )
1 j n }
x i r e t u r n 2 { [ ( ε , θ i ) ; ( # , # i ) ] } P O x i { # , $ j , ¥ j { # # i } ( s )
1 j n } U
Table 3. Description of the returning nodes.
Node | S | P | F | A | β
x i r e t u r n 2 k { [ ( ε , θ i ) ; ( # , # i ) ] } { Z k } U { # # i } ( w )
{ # , $ j , ¥ j 1 j n }
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
