On the Possibility of Reproducing Utsu’s Law for Earthquakes with a Spring-Block SOC Model

The Olami, Feder and Christensen (OFC) spring-block model has proven to be a powerful tool for analyzing and comparing synthetic and real earthquakes. This work proposes a possible reproduction of Utsu's law for earthquakes in the OFC model. Based on our previous works, several simulations characterizing real seismic regions were performed. We located the maximum earthquake in these regions, applied Utsu's formulae to identify a possible aftershock area and made comparisons between synthetic and real earthquakes. The research compares several equations to calculate the aftershock area and proposes a new one with the available data. Subsequently, the team performed new simulations and chose a mainshock to analyze the behavior of the surrounding events, so as to identify whether they could be catalogued as aftershocks and to relate them to the aftershock area previously determined using the proposed formula. Additionally, the spatial location of those events was considered in order to classify them as aftershocks. Finally, we plotted the epicenters of the mainshock and the possible aftershocks comprised in the calculated area, resembling the original work of Utsu. Having analyzed the results, it is plausible to conclude that Utsu's law is reproducible in a spring-block model exhibiting self-organized criticality (SOC).


Introduction
In the present article we propose a method to estimate the aftershock area given by the Utsu law [1,2] by using the spring-block model suggested by Olami, Feder and Christensen (OFC) [3], based on a bidimensional spring-block array capable of reaching a self-organized criticality (SOC) state. As is well known, the SOC concept was developed by Bak et al. in 1987 [4-7] and since then this idea has been useful for many applications in several disciplines [8]. The OFC model works as a cellular automaton with a few rules for its evolution (see Section 2.1). The first property known to be reproduced by this model was the Gutenberg-Richter law [9], log N(M ≥ M_0) = a − bM_0, which relates the number of earthquakes with magnitude greater than or equal to a given magnitude M_0 to that magnitude, where a is a constant and b is the scaling parameter between large and small events, which varies depending on the subduction zone. For several years it was believed that this model did not reproduce aftershocks. However, in 2002 Hergarten and Neugebauer (HN) [10] proposed a manner to obtain aftershocks with the original OFC model.
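As an illustration of how the Gutenberg-Richter parameters can be extracted from an event catalog, the following sketch fits log10 N(M ≥ M_0) = a − bM_0 by least squares. The synthetic catalog, threshold range and bin width are illustrative assumptions, not values from this work.

```python
import numpy as np

def gutenberg_richter_fit(magnitudes, m_min=0.0, bin_width=0.5):
    """Least-squares fit of log10 N(M >= M0) = a - b*M0 over a catalog."""
    thresholds = np.arange(m_min, magnitudes.max(), bin_width)
    counts = np.array([(magnitudes >= m0).sum() for m0 in thresholds])
    keep = counts > 0                      # avoid log10(0) in empty bins
    slope, intercept = np.polyfit(thresholds[keep], np.log10(counts[keep]), 1)
    return intercept, -slope               # a, b

# Synthetic catalog with a true b-value of 1, i.e. P(M >= m) = 10**-m.
rng = np.random.default_rng(0)
mags = rng.exponential(scale=1.0 / np.log(10), size=100_000)
a, b = gutenberg_richter_fit(mags)
```

The recovered b should be close to 1 for this catalog; in real applications b varies with the subduction zone, as noted above.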
Aftershocks and their decay have been studied for a long time. Omori proposed in 1894 that they follow a hyperbolic decay with time, n = k/(c + t), which is known as the Omori law [11], where n is the number of aftershocks per unit time; k and c are constants that vary between earthquake sequences; and t is the time elapsed since the mainshock. Later, this equation was modified by Utsu in 1961 and took the general form n = k/(c + t)^p, which is known as the Omori-Utsu law [1], where p is a constant that determines the decay rate and can take values between 0.6 and 1.1 [12] if the aftershock sequence is modeled by means of a multifractal scaling theory. However, Davidsen et al. [13], by using a generalized Omori-Utsu law, found that the exponent p is a constant > 1, providing convincing evidence that the characteristic time c is not constant but a genuine function of m_c, with m_c being the lower magnitude cutoff for the aftershocks considered. Some authors have tried to give a generalized form of this law as a differential equation and introduced the idea of a "cool down" process after the mainshock [14,15]. In the end, the goal has been to characterize the behavior of aftershocks so the general activity can be predicted after a big event and, therefore, to assess the risk in a given zone [16]. In addition, there seems to be a relationship between the parameter b of Gutenberg-Richter and the parameter p of Omori, which governs the speed of the aftershocks' decay [17][18][19][20]. So far, we know that the Gutenberg-Richter law is reproducible in the OFC model, so we aimed to see if some sort of Omori-like behavior could also be obtained.
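The Omori-Utsu decay can be evaluated directly; the constants k, c and p below are illustrative placeholders, since they vary between aftershock sequences.

```python
def omori_utsu_rate(t, k=100.0, c=0.1, p=1.0):
    """Aftershock rate n(t) = k / (c + t)**p after a mainshock at t = 0."""
    return k / (c + t) ** p

# With p = 1 (the plain Omori law) the decay is hyperbolic: for t >> c,
# doubling the elapsed time roughly halves the aftershock rate.
r_early = omori_utsu_rate(10.0)
r_late = omori_utsu_rate(20.0)
```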
Our research group has studied the OFC model for a long time, trying to reproduce real-seismicity properties reported by seismologists. We have published several articles in which we found similarities between the OFC model and real seismicity [3,[21][22][23][24][25][26][27][28]. We can outline the ability to emulate real-life seismic subduction zones by including the age of the lithospheric plate and the rate of convergence, both transformed into their synthetic counterparts in the spring-block model [21]; we were also able to reproduce the so-called Ruff-Kanamori diagram [21,23,29], which relates the age of the subducting plate and the convergence rate of the plates to the maximum characteristic earthquake for a given zone. In a different paper, we suggested that the Gutenberg-Richter parameters a and b are positively correlated [24]. We have also published a variation of the OFC model, including non-homogeneous cases for different elastic ratios [25,27] and introducing the asperity concept into the spring-block model. In the same article, we present an extensive review of our previous results.
The current study aims to replicate Utsu's law for earthquakes within the OFC model. Additionally, in the spring-block model, there is potential to identify certain events as aftershocks by considering their spatial location.
In the work that our research group has performed, we conducted simulations to study real seismic regions, analyzed the maximum earthquake and identified possible aftershock areas using Utsu's formulae. We proposed a new equation to calculate the aftershock area and analyzed the behavior of the surrounding events to determine whether they could be classified as aftershocks. By considering the spatial location of these events, we plotted the epicenters of the mainshock and the possible aftershocks within the calculated area, concluding that Utsu's law is reproducible using a spring-block SOC model.
Regarding the remaining work, the suitability of the aftershock candidates still needs to be evaluated against Omori's law. Moreover, the inclusion of spatial location in identifying potential aftershocks also needs deeper examination, as we will discuss further.
This article is organized as follows: in Section 2, we present the OFC model and Utsu's equations. In Section 3, we present the results of the simulations performed to characterize the different seismic regions and the relationship between the different equations to calculate the aftershock area. In Section 4, we discuss the results and analyze them to identify possible aftershocks and the application of the equations for the aftershock area and, finally, in Section 5, we present our conclusions.

The Olami-Feder-Christensen Model
The OFC model is characterized as a non-conservative continuous model that displays self-organized criticality [3]. It consists of a system of blocks interconnected by Hooke springs, with each block connected to its four nearest neighbors. In addition, each block is connected by another set of Hooke springs to a moving plate and rests on a fixed rigid plate, as shown in Figure 1. The blocks' displacement is driven by the relative movement between the two plates; when the force on one of the blocks overcomes a certain threshold F_th (the maximum static friction), the block slips. The moving block then returns to the zero-force position after transferring its energy to its four nearest neighbors, which can result in further slips and provoke a chain reaction.
The spring-block model is a two-dimensional dynamic system composed of interconnected blocks and springs. The blocks are connected to a rigid plate that moves at a constant speed, and the relative motion of the blocks causes them to move as well.
Each block has a maximum of four neighbors and will slip if the force acting on it exceeds a certain threshold value. The model can be described using cellular automata, where each block has a force and displacement from its relaxed position on the lattice. The movement of a slipping block can trigger a chain reaction that results in more block movements.
In order to generate the synthetic earthquakes, the spring-block model is mapped to a continuous, non-conservative cellular automaton that follows the computational evolution rules described below.

1. The lattice sites are initialized with random values between zero and a force threshold F_th.
2. The block with the largest force, F_max, is located, and the difference between F_th and F_max is added to all sites. This causes a global perturbation.
3. For all sites where F_i,j ≥ F_th, the force is redistributed among the neighbors of F_i,j according to the following rule: the forces F_n,n of the blocks neighboring the relaxed cell are increased by γF_i,j, and F_i,j is reduced to zero.
4. Step 3 is repeated until the earthquake has fully evolved.
5. Once a state of rest is reached, the process returns to step 2, and synthetic earthquakes are observed until the desired number of events is reached.
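The evolution rules above can be sketched as a minimal Python cellular automaton. The lattice size, number of events and open-boundary choice below are illustrative assumptions; the paper's simulations use 100 × 100 lattices and one million iterations.

```python
import numpy as np

def ofc_simulation(L=20, gamma=0.2, f_th=1.0, n_events=200, seed=0):
    """Minimal OFC automaton: returns the size (relaxed blocks) of each event."""
    rng = np.random.default_rng(seed)
    F = rng.uniform(0.0, f_th, size=(L, L))      # rule 1: random initial forces
    sizes = []
    for _ in range(n_events):
        F += f_th - F.max()                      # rule 2: global perturbation
        relaxed = 0
        while True:                              # rules 3 and 4: redistribute
            # small tolerance guards against roundoff in the perturbation step
            rows, cols = np.where(F >= f_th - 1e-12)
            if rows.size == 0:
                break                            # rule 5: state of rest reached
            for i, j in zip(rows, cols):
                f = F[i, j]
                F[i, j] = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        F[ni, nj] += gamma * f   # neighbors gain gamma * F_ij
                relaxed += 1
        sizes.append(relaxed)
    return sizes

sizes = ofc_simulation()
```

Every driving step relaxes at least the block brought to threshold, so each recorded event has size of at least one block.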
The computational rules described above are deduced from the dynamics of the OFC model, which are as follows. Consider a network of size L × L, where each block has a force F_i,j applied to it, with i and j being integers between 1 and L. The displacement of each block from its relaxed position on the lattice is denoted as x_i,j [3].
The total force exerted on a specific block (i, j) by the springs can be expressed as

F_i,j = K_1(2x_i,j − x_i−1,j − x_i+1,j) + K_2(2x_i,j − x_i,j−1 − x_i,j+1) + K_L x_i,j,

where K_1, K_2 and K_L are the elastic constants. Once a local slip at the position (i, j) has happened, the force is redistributed in the following way: F_i,j is reset to zero and the forces F_n,n of the nearest-neighbor blocks are increased by γ_1 F_i,j (horizontal neighbors) or γ_2 F_i,j (vertical neighbors), where the elastic ratios are given by Equations (5) and (6):

γ_1 = K_1/(2K_1 + 2K_2 + K_L),  γ_2 = K_2/(2K_1 + 2K_2 + K_L).

If K_L is greater than 0, the force redistribution is not conservative, which is expected during actual earthquakes. This results in a redefinition of forces in the nearest-neighbor blocks, which may cause further slipping and trigger a chain reaction or avalanche. For the isotropic case where γ_1 = γ_2 = γ, the OFC model can reproduce the Gutenberg-Richter power law with an exponent similar to the actual one for γ equal to 0.2. The elastic ratios, defined in Equations (5) and (6), can take values between 0 and 0.25. However, values of the parameter b in the Gutenberg-Richter law that are similar to those observed in real seismicity are only obtained for γ around 0.2.
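Assuming the standard OFC definitions of the elastic ratios, γ_i = K_i/(2K_1 + 2K_2 + K_L), the non-conservative character of the redistribution for K_L > 0 can be checked numerically:

```python
def elastic_ratios(k1, k2, kl):
    """Elastic ratios gamma_i = K_i / (2*K_1 + 2*K_2 + K_L) of the OFC model."""
    denom = 2.0 * k1 + 2.0 * k2 + kl
    return k1 / denom, k2 / denom

# Conservative limit: K_L = 0 and K_1 = K_2 give the maximum gamma = 0.25.
g1, g2 = elastic_ratios(1.0, 1.0, 0.0)

# Non-conservative case: with K_L > 0 a toppling block hands its four
# neighbors only a fraction 2*gamma_1 + 2*gamma_2 < 1 of its force.
h1, h2 = elastic_ratios(1.0, 1.0, 1.0)
conserved_fraction = 2 * h1 + 2 * h2
```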
The magnitude M of a synthetic earthquake will be defined as a function of the number n of relaxed blocks, M = log_x(n), where x is the base of the logarithm; in this way, the size of the synthetic earthquake is determined by n.
For a more detailed description of the equations and to deepen the choice of elastic parameters, we recommend reading reference [9] where we present an extensive review on this topic.
Based on the work of Perez Oregon et al. [21][22][23], in which they simulated several seismic regions considering the age of the subducting plate and the convergence velocity to reproduce the maximum magnitude earthquake for each region, also using the OFC spring-block model, 29 simulations were performed, as in [21], using lattices of 100 × 100 blocks and one million iterations. The synthetic age of the subduction zone was obtained considering the interval from 0 to 160 Myr proposed by Ruff and Kanamori [29] and defining e_n as the normalized age of the tectonic plates. Setting this value to 1 for the maximum tectonic age (160 Myr), we propose a relation between γ and the tectonic age following [21]. It can be seen that for an old plate γ takes lower values, which translate into low-magnitude earthquakes, while for younger ones γ is close to 0.25, which generates high-magnitude earthquakes. For each of the subduction regions considered, the value of γ was calculated. In addition, we included the convergence rate for these plates, ranging from 1 to 11.1 cm/year as reported in [23], but converted to the synthetic counterpart ranging from 0 to 3. This synthetic velocity was implemented in the spring-block model by adding it to the difference F_th − F_max in the first iteration as a global perturbation to obtain the maximum event per simulation.
The magnitude of the synthetic earthquake was determined using the log_3 of the size of the earthquake, where the size refers to the number of relaxed blocks during the simulation. The base 3 was chosen because it gives the closest values to real magnitudes [23,25]. We want to emphasize that this base may change if the size of the lattice or the number of iterations changes; in that case, a new value must be chosen to match the values of real earthquakes. For example, in a 100 × 100 lattice, if all the blocks were relaxed in a single event, we would have M = log_3(10000) = 8.3, while for a 500 × 500 lattice the same scenario would give M = log_3(250000) = 11.3, which is by far greater than any earthquake reported. In this case, there is a great level of correspondence between the synthetic and the real values of the earthquakes' magnitudes for each region, as reported in [23,25].
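The magnitude definition M = log_3(n) and the lattice-size check quoted above can be reproduced directly:

```python
import math

def synthetic_magnitude(n_relaxed, base=3):
    """Synthetic magnitude M = log_base(number of relaxed blocks)."""
    return math.log(n_relaxed) / math.log(base)

# A fully relaxed 100 x 100 lattice gives M close to 8.3, while a fully
# relaxed 500 x 500 lattice gives an unrealistically large M of about 11.3.
m_100 = synthetic_magnitude(100 * 100)
m_500 = synthetic_magnitude(500 * 500)
```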

Utsu's Law
In its original publication [2], Utsu proposes that every earthquake taking place in a one-month period after the main event and where the magnitude is less than the main event's magnitude should be considered as an aftershock and not as an isolated event. He also proposes a region in which the epicenters of these aftershocks should be located. This region is called the aftershock area. Utsu [2] defines the aftershock area as the physical area in which the aftershocks are embedded.
The first relationship between the earthquake's magnitude and the aftershock area was proposed in 1955 by Utsu and Seki [30], as shown in Equation (8), where A is the aftershock area in km^2, M is the earthquake magnitude and log is a base-10 logarithm. However, this relation is not unique; for different magnitudes there are different equations which provide a better adjustment. In general, the equations take the form log A = αM − β, with the constants α and β varying according to the case.
For magnitudes M < 7, he proposes Equation (9), which was originally used as a correction of Equation (8) in a zone where the aftershocks exhibit a large scatter [2].
For inland earthquakes, he found that the aftershock area is smaller than that of offshore earthquakes and fits closer to Equation (10), which is valid for earthquakes with magnitudes 5.5 ≤ M ≤ 7.5. Goto [31] proposes Equation (11) for the minimum aftershock area for M > 6.5, valid for regions where the seismic activity is low. However, Utsu mentions that the aftershock area given by the latter equation is too small; therefore, he proposes a better adjustment, Equation (12), which is valid for 5.5 ≤ M ≤ 8.
On the other hand, Båth and Duda [32] propose Equation (13) after analyzing six earthquakes. In addition, Purcaru [33] proposes Equation (14), valid for 5 ≤ M ≤ 8.5, using the data of fifteen earthquakes.
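All the relations above share the form log A = αM − β, so a single helper covers them; the α and β used below are illustrative placeholders, not the constants of Equations (8)-(14):

```python
def aftershock_area(magnitude, alpha, beta):
    """Aftershock area A in km^2 from the generic form log10 A = alpha*M - beta."""
    return 10.0 ** (alpha * magnitude - beta)

# With alpha = 1, one extra magnitude unit means a tenfold larger area.
a7 = aftershock_area(7.0, alpha=1.0, beta=4.0)
a8 = aftershock_area(8.0, alpha=1.0, beta=4.0)
```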

Aftershocks in the OFC Model
Some researchers [10,34] have pointed out that one of the biggest limitations of the OFC model is the lack of foreshocks or aftershocks, which makes it difficult to prove that Utsu's law is present in the OFC model. Some authors have tried to solve this problem. We can mention the work of Hainzl et al. [35], who introduced an additional set of blocks representing the viscous asthenosphere; Pelletier [34], based on Hainzl's work [35], included heterogeneities in his model. In both cases, they modified the original OFC model and made it more complicated by introducing these new variables. Hergarten and Neugebauer [10] did a statistical analysis of the time series in the OFC model and proposed a method to classify events after the mainshock as aftershocks. They proposed to use the time (number of events) between what they consider successive mainshocks of the same or larger magnitude as the recurrence time. Then, they set a window to limit where to look for aftershocks. After this, as long as the following events are not bigger than the mainshock and are located within this window, they are considered aftershocks.

Utsu's Law Simulations
In Figure 2, we show one of the cases in the original work of Utsu, where the mainshock is illustrated as a double circle and the aftershock area is the dashed-line ellipse; all the events that fulfill the requirements of time and magnitude are considered aftershocks [2]. The relative positions of the mainshock with respect to the aftershocks in the twenty-five cases reported by Utsu in [1] vary: in some cases the mainshock lies at the edge of the respective area, but in many of them it is located practically in the middle of the area. For the spring-block simulation, we take into consideration that the relaxed elements in our simulations can be identified graphically, and these elements generate a "relaxed area" equivalent to the size of the synthetic earthquake (Figure 3); we propose the equivalence between the synthetic earthquake's size and Utsu's aftershock area. When a synthetic earthquake occurs, the relaxed area has many blocks that are near the threshold; therefore, the probability that the following earthquakes occur in that area is high, which is in consonance with real seismicity. Based on this, we take the relaxed area as equivalent to the aftershock area in the Utsu construction.
In order to compare the real magnitude (the maximum magnitude reported in a given seismic region) and the real aftershock area versus the synthetic magnitude and the synthetic aftershock area, we treated real earthquakes' magnitudes as synthetic ones in a spring-block simulation, so we can expect a certain size (synthetic) for those events (where this size is the number of relaxed elements in a simulation). Since we had already emulated 29 subduction seismic zones [21,25], it was easy to apply the transformation of Equation (15), Ŝ = 3^M, for going from real seismicity to synthetic seismicity.
Afterwards, we calculated the aftershock area using Equations (8)-(14) and (16) (see below), as shown in Table 1. Table 1 lists the maximum real earthquake's magnitude reported for each one of the twenty-nine subduction seismic regions that were used in [21]. We highlighted the equation that gives the closest value to the expected size calculated with Equation (15), so the reader can identify the best fit for each case. This comparison was made only with Equations (8)-(14). In addition, we obtained a new equation, Equation (16), using linear regression on this data set. It is worth mentioning that our proposed Equation (16) is the closest to the expected size in comparison with Equations (8)-(14), which is why the comparison mentioned earlier did not include Equation (16). As the magnitude increases, the difference between the expected size and the fit given by Equations (8)-(14) increases dramatically; on the other hand, that difference remains minimal with Equation (16).
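The linear-regression step that produces an equation of the form log A = αM − β can be sketched as follows; the data points below are made up to show the mechanics, not the regional data used for Equation (16):

```python
import numpy as np

def fit_utsu_relation(magnitudes, areas):
    """Least-squares fit of log10 A = alpha*M - beta; returns (alpha, beta)."""
    alpha, intercept = np.polyfit(magnitudes, np.log10(areas), 1)
    return alpha, -intercept

# Fabricated, noise-free data generated with alpha = 1.0 and beta = 4.0,
# so the fit should recover those values.
mags = np.array([6.0, 7.0, 8.0, 9.0])
areas = 10.0 ** (1.0 * mags - 4.0)
alpha, beta = fit_utsu_relation(mags, areas)
```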
In Table 1, we present the results for the maximum earthquake's magnitude reported (real seismicity), while in Table 2 are the results for the spring-block simulations of each seismic region (synthetic seismicity). For Table 2, we present the maximum synthetic size obtained with the spring-block simulation; the equivalent magnitude was then computed with Equation (15) solved for M. The reader might notice that the same magnitude has different associated values of the synthetic size; this is the result of rounding Equation (15) to one decimal position. In both cases, as previously stated, the earthquake's size is equivalent to the aftershock area, so this value should be approximated by any of the Equations (8)-(14). We highlighted the computed aftershock area that is closest to the earthquake's size by comparing Equations (8)-(14), so the reader can identify the best fit for each magnitude.

Table 1. Aftershock area computed with Equations (8)-(14) using the maximum real earthquake's magnitudes reported in each seismic region. The expected size was computed by using Equation (15). The closest value to the expected size is highlighted in green. The last column is the result of our proposed Equation (16), which gives the best approximation to the expected size.

Table 2. Aftershock area computed with Equations (8)-(14) using the spring-block model. For each simulation, we obtained the maximum synthetic size of a given region, then the magnitude M was computed applying Equation (15) and solving for M = log_3 Ŝ. The closest value to the expected size is highlighted in green. The last column is the result of our proposed Equation (16), which gives the best approximation to the synthetic size.

In Table 1, according to our data model, a particular equation fits more accurately to a certain magnitude range. For magnitudes M < 7.6, the closest result is given by Equation (8); in the range 7.6 ≤ M ≤ 8, the best adjustment is given by Equation (10); and for magnitudes over 8, the best adjustment is given by Equation (12).
However, it is important to mention that, in all ranges in our proposal, Equation (16) is almost equal to the expected size.

In Table 2, since there are slight variations in the synthetic earthquakes (for example, we have the lowest magnitude value of 6.8 in comparison with 7.5 in Table 1) the equations also have variations over the range they fit the best. We can see that for M = 6.8, Equation (13) gives the best adjustment; on the other hand, for M = 7.2 we have two adjustments (again, given by the rounding of Equation (15)), the first is given by Equation (9), and the second one by Equation (8). The behavior for M ≥ 7.6 is identical to the one in Table 1. For the synthetic catalog, we found that in our proposal, Equation (16) gives the most accurate value for the synthetic size.
In Figure 4, we show a projection of Equations (8)-(14) and (16) for magnitudes 5 ≤ M ≤ 9.5 vs. the log_3 of the synthetic size in the spring-block model. On the one hand, it can be noticed that in the range of 6.7 ≤ M ≤ 8.1 several equations are very close to the expected size (the × symbol in Figure 4); however, Equations (8)-(14) overestimate the aftershock area for magnitudes M > 8.1 while underestimating it for magnitudes below 6. On the other hand, Equation (16) (the □ symbol in Figure 4) follows the expected size to within a 1% difference. We want to emphasize that this equation was derived using the data obtained from the spring-block model, so it can be regarded as the equation governing the synthetic size, and that is why it follows almost perfectly the tendency of the expected size.

In Tables 1 and 2, we notice that as the magnitude increases, the difference between the computed and the expected value of the aftershock area also increases. In Table 1, the difference in percentage ranges from 15.48% (for M = 7.5) to 270.25% (for M = 9.5). Therefore, a better adjustment is needed. For Equation (16), this error ranges from 0.21% (for M = 7.5) to 0.26% (for M = 9.5). On the other hand, in Table 2, the error for the best fit ranges from 8.4% (for M = 6.8) to 297% (for M = 9.6), while for Equation (16) the error ranges from 0.16% (for M = 6.8) to 0.26% (for M = 9.6).

Aftershocks and Utsu's Law in a Spring-Block Model
In Section 2.3, we mentioned the difficulties in accurately reproducing aftershocks in the OFC model and some attempts to statistically identify events as aftershocks. We used a methodology similar to the one used by HN [10] to determine whether an event is an aftershock or an isolated event. They used a lattice of 512 × 512 and 10^9 events. Then, they defined the minimum value for an event to be considered a mainshock occurring at time t_M, which was 1000 blocks, and it had to be the greatest event in the interval [t_M − 10^−4, t_M + 10^−4]. The next step was to monitor the number of events in a fixed window and subtract the background seismic activity. We used a 100 × 100 model with one million iterations. In Figure 5, we show the time series for the model; the x axis represents the consecutive event number and, since the time between events is always the same (one iteration), the number of events can also be considered the timestamp. The y axis represents the synthetic size of that earthquake and expresses the total number of relaxed blocks for that iteration. It is worth mentioning that the first 400 thousand events seem to follow an exponential growth before reaching the SOC state. To make the analysis, we had to determine how big an event must be to be classified as a mainshock; we decided to choose only the events whose sizes were 3000 and above, which represents a magnitude of 7.2, just below the lowest value of the maximum earthquake reported for the subduction regions. Then we chose one of those events and tried to use a window of 10^4 elements. However, this window proved to be inefficient since in many cases we found another mainshock inside those 10^4 elements. So, instead of a fixed window, we decided to use the space between consecutive mainshocks (events with at least 3000 blocks) and tried to elucidate how many of those events could be catalogued as aftershocks.
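The mainshock selection described above (events of at least 3000 relaxed blocks, with the search window taken as the span between consecutive mainshocks) can be sketched as:

```python
def find_mainshocks(sizes, threshold=3000):
    """Indices of events big enough to count as mainshocks (>= 3000 blocks)."""
    return [i for i, s in enumerate(sizes) if s >= threshold]

def interevent_segments(sizes, threshold=3000):
    """Aftershock search windows: spans between consecutive mainshocks."""
    idx = find_mainshocks(sizes, threshold)
    return [(idx[k], idx[k + 1]) for k in range(len(idx) - 1)]

# Toy series of synthetic sizes with two mainshocks, at positions 1 and 4
# (illustrative values, not simulation output).
sizes = [10, 5000, 40, 27, 3200, 8, 100]
segments = interevent_segments(sizes)
```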
One crucial difference between our method and the HN one [10] is that they count all the events inside the window without taking into consideration how far they are from the mainshock's epicenter, while we wanted to relate the event number and its spatial position. For this purpose, we set an arbitrary radius of 40 blocks to delimit the area in which to find aftershocks. In Figure 6, we show the time series of the aftershock candidates inside this ring. Then we plot all of these events in an x, y plane, as shown in Figure 7. Notice that the events are spread throughout the entire lattice, so it is obvious that not all of them can be considered aftershocks. As mentioned before, we decided to set a radius of 40 (dashed line in Figure 7) choosing the mainshock's epicenter as the circle's center, as in many cases presented by Utsu [2]. Once we had this delimitation, we had to eliminate all the events that can be considered background noise.
We set this limit to 27, which is equivalent to events of magnitude M = 3. of them can be considered aftershocks. As mentioned before, we decided to set a radius of 40 (dashed line in Figure 7) choosing the mainshock's epicenter as the circle's center, as in many cases presented by Utsu [2]. Once we had this delimitation, we had to eliminate all the events that can be considered as background noise. We set this limit to 27, which is equivalent to events of magnitude = 3. Then we decided to apply Equation (16) to refine the aftershock area. This new area is shown in Figure 7 as a solid line circle. The mainshock is marked with a small red circle. Events in black hexagons are discarded as aftershocks since they are too far away from the mainshock's epicenter. The dashed line represents the original area arbitrarily chosen, while the solid line represents Utsu's aftershock area obtained using Equation (16). Green diamonds were discarded as they represent background noise. Blue stars are the aftershocks candidates.
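A minimal sketch of this two-stage filter, first by distance from the epicenter and then by the noise threshold of 27 blocks; the event coordinates and sizes below are hypothetical:

```python
import numpy as np

NOISE_MAX = 27    # events of at most magnitude M = 3 count as background noise
RADIUS = 40.0     # arbitrary search radius around the mainshock, in blocks

def classify(events, epicenter, radius=RADIUS, noise_max=NOISE_MAX):
    """Split inter-mainshock events into candidates / noise / too far.

    `events` is a list of (x, y, size) tuples on the 100 x 100 lattice;
    `epicenter` is the (x, y) position of the mainshock.
    """
    candidates, noise, outside = [], [], []
    ex, ey = epicenter
    for x, y, size in events:
        if np.hypot(x - ex, y - ey) > radius:
            outside.append((x, y, size))     # grey hexagons in Figure 7
        elif size <= noise_max:
            noise.append((x, y, size))       # green diamonds
        else:
            candidates.append((x, y, size))  # blue stars
    return candidates, noise, outside

# Hypothetical events around a mainshock at (50, 50):
evts = [(52, 48, 120), (55, 60, 20), (95, 10, 400), (49, 51, 30)]
cand, noise, far = classify(evts, (50, 50))
print(len(cand), len(noise), len(far))  # → 2 1 1
```

Note that the distance test is applied before the size test, so even a large event far from the epicenter is discarded, which is the behavior described below for the biggest events in the interval.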
Figure 7. Plot of all the events between two mainshocks. The dashed line represents the initially chosen area, while the solid line is the aftershock area calculated with Equation (16). Grey hexagons represent all the events that lie outside the rings; green diamonds represent the background noise; blue stars represent the aftershock candidates.
It is important to mention that many of the biggest events in the interval analyzed were located far away from the mainshock's epicenter and therefore were discarded as aftershocks.
In Figure 8, we present an image of the aftershock candidates after removing the background noise and the elements outside the rings. It can be seen that even though we removed several events, enough remain enclosed within the rings. What is more, the inner ring, which corresponds to the aftershock area computed with Equation (16), contains 216 possible aftershocks, which seems reasonable for an earthquake of magnitude M ∼ 7. Finally, in Figure 9, we show the time series of those events.

Discussion
We generated 29 simulations using the OFC model with a 100 × 100 lattice and one million iterations, as in references [5,9]. We applied Equations (8)–(14) for the aftershock area, as proposed by Utsu and Seki [30], Utsu [2], Goto [31], Båth and Duda [32], and Purcaru [33], to both catalogs: the synthetic one and the real subduction zones reported in [21]. Finally, we used the data generated by the spring-block model to derive our own Equation (16).
From Tables 1 and 2 and Figure 4, we observed that when using Equations (8)–(14) the error increases with the magnitude in both scenarios, real and synthetic. This makes it clear that a new fit is necessary, which led us to propose Equation (16). The errors associated with this equation are on the order of 0.2%, so in both cases the best equation to approximate the aftershock area is Equation (16). This may therefore stand as an indicator that validates our proposal. Although this result was obtained using the spring-block model, its application to real events (complete data sets) remains to be analyzed.
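Aftershock-area relations of this family all take the form log10 A = aM + b for an area A in km². A hedged sketch of how such fits can be compared against an observed area follows; the coefficients and the observed value below are illustrative placeholders, not the paper's Equation (16):

```python
def aftershock_area(M, a=1.02, b=-4.01):
    """Aftershock area A (km^2) from log10(A) = a*M + b.

    The default coefficients are illustrative stand-ins; each of the
    paper's Equations (8)-(14) and (16) supplies its own fitted pair.
    """
    return 10.0 ** (a * M + b)

def relative_error(area_pred, area_obs):
    """Relative error of a predicted area against an observed one."""
    return abs(area_pred - area_obs) / area_obs

# Compare two hypothetical coefficient pairs against a hypothetical
# observed aftershock area for a magnitude-7.2 mainshock:
M, observed = 7.2, 1.1e3
for a, b in [(1.02, -4.01), (1.00, -3.90)]:
    pred = aftershock_area(M, a, b)
    print(f"a={a}, b={b}: A = {pred:.0f} km^2, "
          f"error = {relative_error(pred, observed):.1%}")
```

Because A grows exponentially with M, a small mismatch in the coefficients produces errors that grow with magnitude, which is consistent with the trend observed for Equations (8)–(14).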
Regarding the identification of aftershock candidates in the spring-block model, we believe it is possible to classify some events as aftershocks, especially if the discriminant analysis uses the spatial location and not only the recurrence time. However, we admit that there is much work left to do on this matter, especially since we have not proven that these candidates follow Omori's law for aftershocks [11]. From Figure 9, one might hastily conclude that they follow a hyperbolic decay. The data of Figure 9 are reminiscent of Omori's law, with the cumulative size of the aftershocks decaying with time as y = 5270.2 t^−0.578, as illustrated in Figure 10. The behavior of these aftershocks is remarkably similar to Omori's law, with the difference that we show the decay not in the number of events but in their cumulative size (this size, as mentioned earlier, is the number of relaxed blocks per event). Eventually, they become small enough to be considered part of the background activity, much like real seismic activity.
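A decay of the form y = C t^−p can be recovered by an ordinary least-squares line in log-log space, since log y = log C − p log t. A sketch with synthetic data generated from the fitted values quoted above (the noise model is an assumption for illustration):

```python
import numpy as np

# Hypothetical cumulative aftershock sizes decaying as y = C * t**(-p),
# standing in for the data of Figures 9 and 10.
C_true, p_true = 5270.2, 0.578
t = np.arange(1, 501, dtype=float)
rng = np.random.default_rng(1)
y = C_true * t ** (-p_true) * rng.lognormal(0.0, 0.05, t.size)

# A power law is a straight line in log-log space, so fit a
# degree-1 polynomial: slope = -p, intercept = log C.
slope, intercept = np.polyfit(np.log(t), np.log(y), 1)
C_fit, p_est = np.exp(intercept), -slope
print(f"fitted y = {C_fit:.1f} * t^(-{p_est:.3f})")
```

The same log-log regression applied to the actual cumulative sizes of the candidates would yield the quoted exponent of 0.578 if the decay is indeed a power law.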

Conclusions
We conclude that it seems possible to qualitatively reproduce Utsu's law using the OFC spring-block model, given the similarities between the actual seismic regions and the simulations. It is clear that a single equation cannot fit all ranges of magnitudes in real scenarios. For the synthetic case, we propose Equation (16), which fits the expected values very well. This does not mean that the same behavior can be expected in real seismicity, since we used synthetic catalogs to derive that equation, but it does make clear the need for a better spring-block model that qualitatively approaches real-life seismicity. With this, we are not saying that the current model is invalid. It is important to remember that this model works with quite simple rules and does not include many of the parameters observed in real seismicity. However, it fulfills the function of qualitatively reproducing many of the empirical laws and behaviors found in the real world, and for this study it provides a good approximation for certain quantities where the other equations do not seem to work as well.
Finally, although our results seem to confirm that Utsu's law can be reproduced in the spring-block model, a deeper analysis of its possible reproduction in the OFC model is necessary. In addition, including the spatial location when identifying possible aftershocks brings us closer to a more complete model of earthquakes. It should be noted that these results are promising regarding the use of the spring-block model and its ability to reproduce complex properties of the seismic phenomenon.