Fast Software Implementation of Serial Test and Approximate Entropy Test of Binary Sequence

In many cryptographic applications, random numbers and pseudorandom numbers are required. Many cryptographic protocols require random or pseudorandom numbers at various points, e.g., for auxiliary data in digital signatures or challenges in authentication protocols. NIST SP 800-22 focuses on the need for randomness for encryption purposes and describes how to apply a set of statistical randomness tests. These tests can be used to evaluate the data generated by cryptographic algorithms. This paper studies the fast software implementation of the serial test and the approximate entropy test and proposes two types of fast implementations of these tests. The first method follows the basic steps of these tests but replaces bit operations with byte operations. With this method, compared with the implementation of Fast NIST STS, the efficiency of the serial test and the approximate entropy test is increased by 2.164 and 2.100 times, respectively. The second method builds on the first: it exploits the statistical characteristics of subsequences of different lengths and merges the two tests across their different detection parameters. In this way, compared to the individual implementation of these tests, the efficiency is significantly improved; compared with the implementation of Fast NIST STS, the efficiency is increased by 4.078 times.


Introduction
In cryptography, random numbers and pseudorandom numbers are widely used, for example, as randomly generated keys in a cryptographic system. Random or pseudorandom numbers are also required at various points in cryptographic protocols, for example, for auxiliary data in digital signatures or challenges in authentication protocols. A random bit sequence can be interpreted as the result of flips of an unbiased "fair" coin with the sides marked "0" and "1": each flip produces a "0" or "1" with probability 1/2, and the results of the flips are independent of each other. An unbiased, fair coin is a perfect random bitstream generator because the 0 and 1 values are randomly distributed and all elements of the sequence are generated independently of each other; the value of any element in the sequence is unpredictable and has nothing to do with all previously generated elements. SP 800-22 [1], issued by the National Institute of Standards and Technology (NIST), discusses randomness testing of random number and pseudorandom number generators. These tests can be applied to fields such as cryptography, modeling, and simulation. NIST SP 800-22 focuses on the need for randomness for encryption purposes and describes how to apply a set of statistical randomness tests. Germany released the BSI AIS 30 specification [2]. In 2009, the National Cryptography Administration (NCA) of China issued a randomness test specification [3]. In addition, research on random sequences is in full swing, and a large number of new statistical tests have been proposed [4]. There are two basic types of random sequence generators: random number generators (RNGs) and pseudorandom number generators (PRNGs) [5]. In cryptographic applications, both generators produce a stream of zeros and ones that can be divided into subsequences or blocks.
These tests can be used to evaluate the data stream generated by a cryptographic algorithm, thereby providing useful reference data for the theoretical analysis of the algorithm [6][7][8]. This approach can reduce the workload of theoretical analysis and detect security risks that cannot be found by other analytical methods. For example, in the AES competition [9,10], randomness testing was used to evaluate candidate cryptographic algorithms [11]. ZUC [12,13] has officially become a cryptographic algorithm of LTE and went through many randomness tests. The parameters of these tests can be taken from recommendations or adjusted [14][15][16].
Some researchers have studied the fast implementation of all NIST STS tests and achieved interesting speedups in most of them [17]. The Q-value has been introduced; its distribution is closer to uniform than that of the P value, which can reduce the false detection rate [18]. When the runs distribution test is applied to some well-known good deterministic random bit generators (DRBGs), the test results show apparent bias from randomness [19]. A new DFT test method for long sequences has been proposed; this DFT test reconstructs the statistic to follow the chi-square distribution [20].
In this paper, we study the fast implementation of the serial test and the approximate entropy test and propose two types of fast implementation of these tests. The first fast implementation method follows the basic steps of these tests. In this implementation, the efficiency of the serial test and the approximate entropy test is increased by 2.164 and 2.100 times, respectively, compared with the basic implementation. The second method merges these tests. Relative to the individual implementation of these tests, the efficiency is further improved in this implementation. Compared with the basic implementation, the best efficiency of this method is increased by 4.078 times, and the effect is significant. This paper is organized as follows. Section 2 presents an introduction to these statistical tests. Section 3 discusses the serial test, the approximate entropy test, and the basic implementation. Sections 4 and 5 present two types of fast implementation of these tests. Section 6 presents the software implementation results of these methods. Section 7 concludes the paper.

Introduction of Statistical Tests
By performing various statistical tests on sequences, it is possible to compare and evaluate them against truly random sequences. The properties of random sequences can be both characterized and described in terms of probability. There are countless possible statistical tests to assess the presence or absence of a pattern which would indicate that the sequence is nonrandom. These test methods are designed for different characteristics of the sequence, mainly differing in which characteristics they focus on; some test methods have no significant differences in principle. Therefore, when choosing a randomness test method, one must consider all aspects of the random characteristics of the test sequence, while also taking the efficiency of the test into account. In addition, one must be careful when interpreting the results of statistical tests to avoid erroneous conclusions about specific generators [21]. The essence of randomness testing is to determine whether the test sequence is truly random, or how far it is from true randomness. Randomness testing usually uses hypothesis testing. Hypothesis testing proposes certain assumptions about the population in order to infer certain properties of the population when the population distribution is unknown, or when only its form is known but its parameters are not, and then makes judgments on the proposed hypotheses based on the sample. Randomness hypothesis testing means that a certain aspect of a truly random sequence conforms to a specific distribution; if the sequence to be tested is random, then it should also conform to this specific distribution in that respect. Take, as an example, a statistic V of a random sequence that conforms to the chi-square distribution with n degrees of freedom.
Null hypothesis H0: the sequence is random, i.e., the statistic V of the sequence under test obeys the χ²(n) distribution. Alternative hypothesis Ha: the sequence is not random, i.e., the statistic V of the sequence under test does not obey the χ²(n) distribution. The opposite of the null hypothesis is the alternative hypothesis, namely that the sequence is not random. Each application of a test yields a decision or conclusion: based on the generated sequence, one decides whether to accept the null hypothesis H0. Under the null hypothesis, the theoretical reference distribution of the statistic is determined by mathematical methods, and a critical value is derived. During the test, the statistic of the tested sequence is calculated and compared with the critical value. If the test statistic does not exceed the critical value, the null hypothesis H0 of randomness is accepted; otherwise, the alternative hypothesis Ha is accepted.
There are two possible outcomes of statistical hypothesis testing, namely, accepting H0 or accepting Ha.
Two types of errors may occur with this method. First, if the data is random but we conclude that the data is nonrandom, this conclusion is called a type I error. Second, if the data is nonrandom but we conclude that the data is random, this conclusion is called a type II error. The probability of a type I error is called the significance level of the test, usually denoted α. Typically, α is selected in the range [0.001, 0.01]; in cryptography, α is about 0.01. In the test, α is the probability that the test indicates the sequence is not random when it is in fact truly random. The probability of a type II error is usually denoted β; it is the probability that the test indicates the sequence is random when it is not. Unlike α, β is not a fixed value: there are many ways in which data can be nonrandom, and each different way can produce a different β. One of the main goals of testing is to minimize the probability of making a type II error. Table 1 below correlates the true state of the data with the test conclusion.
To reflect the strength of the evidence against the null hypothesis, a P value can be calculated from the test statistic. In all tests, the P value is the probability that a sequence generated by a perfect random number generator is less random than the sequence under test. When the sequence appears to have perfect randomness, the P value equals 1, and when the sequence appears to be completely nonrandom, the P value equals 0. When the P value ≥ α, the null hypothesis H0 is accepted, which means that the sequence appears random. When the P value < α, the alternative hypothesis Ha is accepted, which means that the sequence appears nonrandom.
The P value is thus the strength of the evidence for accepting the null hypothesis. The NIST SP 800-22 suite consists of 16 statistical tests, and the test suite of the NCA consists of 15 tests; Table 2 lists all of them. The serial test and the approximate entropy test appear in both suites. Table 3 lists all of the symbols used in this paper and their meanings. The implementation of the incomplete gamma function is based on the approximate formula [22]: it can be approximated by a continued fraction expansion or a series expansion, according to the values of its parameters a and x.

Definitions and Symbols.
The gamma function and the (upper) incomplete gamma function are defined as

Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt,
Γ(a, x) = ∫_x^∞ t^(a−1) e^(−t) dt,

and the regularized complementary incomplete gamma function used to compute P values is igamc(a, x) = Γ(a, x)/Γ(a).

Serial Test.
This section describes the serial test: the test description, the technical details, the testing strategy, and the purpose of the test. The focus of the serial test is the frequency of all possible m-bit subsequences in the entire sequence. From the results of this test, it can be determined whether the frequencies of occurrence of the 2^m m-bit subsequences are roughly in line with expectation: in a random sequence, every m-bit subsequence is equally likely. When m is 1, the serial test is equivalent to the frequency test.
For any value of n, the sequence is extended by appending its first m − 1 bits to the end, so that every one of the n overlapping m-bit subsequences of the circularized string ε is counted. The serial test is based on testing the uniformity of the distributions of subsequences of a given length in this circularized string. With v_{i1 i2 ⋯ im} denoting the frequency of the pattern i1 i2 … im, the statistic is

ψ²_m = (2^m / n) Σ_{i1⋯im} (v_{i1⋯im} − n/2^m)² = (2^m / n) Σ_{i1⋯im} v²_{i1⋯im} − n.

Here, ψ²_0 = ψ²_{−1} = 0. Thus, ψ²_m is a χ²-type statistic, but it is a common mistake to assume that ψ²_m has the χ² distribution. The corresponding generalized serial statistics for the testing of randomness are ∇ψ²_m and ∇²ψ²_m:

∇ψ²_m = ψ²_m − ψ²_{m−1},
∇²ψ²_m = ψ²_m − 2ψ²_{m−1} + ψ²_{m−2}.

Thus, for small values of m, m ≤ log₂(n) − 2, one can find the corresponding two P values from the standard formulas:

P-value1 = igamc(2^(m−2), ∇ψ²_m / 2),
P-value2 = igamc(2^(m−3), ∇²ψ²_m / 2).

The steps of the serial test (Algorithm 1) are as follows. Note that m must be chosen such that m ≤ log₂(n) − 2. Specification [3] proposes the subsequence bit lengths m = 2 and 5 and the sample length n = 1000000.

Approximate Entropy Test.
The frequency of all possible overlapping m-bit patterns in the sequence is the focus of the approximate entropy test. Through this test, the frequencies of subsequences of two adjacent lengths (m and m + 1) can be compared with the expected result for a random sequence. Repetitive patterns in the string are what approximate entropy measures. Set

Φ^(m) = Σ_{i=1}^{2^m} π_i ln π_i, where π_i = v_i / n

is the relative frequency of the i-th m-bit pattern in the circularized string. The approximate entropy ApEn(m), m ≥ 1, is defined as

ApEn(m) = Φ^(m) − Φ^(m+1),

with ApEn(0) = −Φ^(1). ApEn(m) measures the logarithmic frequency with which blocks of length m that are close together remain close together for blocks of length m + 1. Thus, a smaller ApEn(m) value indicates strong regularity in the sequence, while larger values indicate irregularity.
For a fixed block length m, one should expect that, in long random strings, ApEn(m) ∼ ln 2.
The limiting distribution of the statistic χ² = 2n[ln 2 − ApEn(m)] is the χ² distribution with 2^m degrees of freedom. This fact provides the basis for statistical testing; the P value is computed as P-value = igamc(2^(m−1), χ²/2). The steps of the approximate entropy test (Algorithm 2) are as follows.
Note that m must be chosen such that m ≤ log₂(n) − 2. Specification [3] proposes the subsequence bit lengths m = 2 and 5 and the sample length n = 1000000. The second step of both the serial test and the approximate entropy test is to determine the frequency of all possible m-bit subsequences of ε′ = (ε1, ε2, …, εn, ε1, …, ε(m−1)). Algorithm 3 is a basic implementation of Step 2 based on bit operations.
We denote the data loading operation by LOAD, left and right shifts by SHIFT, and the compare operation by CMP. The computational complexity of Algorithm 3 is mn LOAD, 2mn ADD, mn SHIFT, and mn OR. The computational complexity of Algorithm 4 is n/8 LOAD, n(m + 7)/8 ADD, n SUB, n(m + 7)/8 SHIFT, n AND, and n(m + 7)/8 OR, plus the loop-control CMP operations.

Hotspots.
After determining the key parts of the program, we can start to optimize the code. During a program's run, time is typically allocated in one of two patterns. In some programs, more than 99% of the time is spent in inner-loop calculations. In other programs, 99% of the time is spent reading and writing data, and less than 1% on computing with it. It is important to optimize these parts of the code rather than the parts where little time is spent; optimizing the less critical parts not only wastes effort but also makes the code more difficult to maintain. We can use the profiler in the compiler to measure how much time each function costs. This can also be done with third-party analyzers, such as AQtime, Intel VTune, and AMD CodeAnalyst.
Due to the short time intervals involved, time measurement requires very high resolution. On Windows, users can use the QueryPerformanceCounter or GetTickCount functions for millisecond resolution. For higher resolution, the RDTSC instruction reads the time stamp counter in the CPU; this counter increments at the CPU clock frequency.
We need to identify the most time-consuming hotspots of the basic implementation of the serial test and the approximate entropy test.
Hotspot analysis helps in understanding these tests and identifies the steps that take a long time to execute (hotspots). The Intel VTune Amplifier identifies Step 2 of the serial test as the most time-consuming hotspot of that test; in the serial test, this step determines v_{i1 i2 ⋯ im}, the frequencies of all possible m-bit subsequences. All the operations of Algorithm 3 are bit operations, which seriously degrade the performance of these tests.
In practice, a binary sequence comes in the form of an octet string. This octet string needs to be converted into a bit string, which is then fed into Algorithm 3 to determine the frequency of all possible m-bit subsequences. If the bit operations of Algorithm 3 are replaced with byte operations, the performance improves significantly.

Fast Implementation Based on Octet Operation.
In practice, a binary sequence is almost always in the form of an octet string, so we assume that n is divisible by 8 (8|n). A random byte is a byte formed by splicing random bits together bit by bit.

Analysis of Computational Complexity.
This section analyses and compares the number of arithmetic operations of the above algorithms. Because Step 2 is the most time-consuming step, the main task is to analyze the computational complexity of Algorithm 3, based on the bit string and bit operations, and of Algorithm 4, based on the octet string and multibit operations.
Unrolling Step 2.2 of Algorithm 4 eliminates many operations: the loop-control CMP operations disappear, and the left and right shift amounts no longer need to be calculated. The computational complexity is reduced to n/8 LOAD, n ADD, n(m + 7)/8 SHIFT, n AND, and n(m + 7)/8 OR. The efficiency of a loop depends on the microprocessor's ability to predict its control branch. A loop with a small, fixed repeat count and no branches inside can be predicted perfectly. On processors with a micro-operation cache, it is generally best to avoid loop unrolling, because conserving the micro-operation cache is important and an unrolled loop takes up more space in the code cache or micro-operation cache. The programmer should manually unroll a loop only if specific advantages can be gained, such as eliminating an if branch.
Loop unrolling of Algorithm 4 has exactly such advantages: the if branch is eliminated, the CMP operations are removed, and the shift amounts become constants. The computational complexity of Algorithm 3 (based on the bit string) and Algorithm 4 (based on the octet string) is provided in Table 4.
For example, when m = 3, the operation counts of Algorithms 3 and 4 are 15n and 3.625n, respectively. The serial test determines the frequency of all possible j-bit patterns, with j = 1, 2, 3, 4, and 5. In Algorithm 3 (based on the bit string), the total operation count of the serial test (over j = 1, 2, 3, 4, and 5) is Σ_{j=1,…,5} 5jn = 75n. In Algorithm 4 (based on the octet string), the corresponding total over j = 1, 2, 3, 4, and 5 is far smaller. Comparing the numbers of arithmetic operations of the two algorithms shows that, for the serial test, Algorithm 4 based on the octet string is superior to Algorithm 3 based on the bit string; Table 5 gives the operation counts. The approximate entropy test determines the frequency of all possible j-bit patterns, with j = 2, 3, 5, and 6. In Algorithm 3 (based on the bit string), the total operation count of the approximate entropy test is Σ_{m=2,3,5,6} 5mn = 80n.
Comparing the numbers of arithmetic operations of the two algorithms shows that, for the approximate entropy test as well, Algorithm 4 based on the octet string is superior to Algorithm 3 based on the bit string. Table 6 shows the numbers of arithmetic operations of the two algorithms.

Security and Communication Networks
However, different instructions cost different numbers of cycles. Integer operations are usually very fast: on most microprocessors, simple integer operations (such as addition, subtraction, comparison, bit operations, and shifts) take only one clock cycle. Multiplication and division require longer: typically, integer multiplication requires 3-4 clock cycles and integer division 40-80 clock cycles. In addition, accessing data in RAM may take longer than performing calculations on that data: if a variable is cached, reading or writing it takes only 2-3 clock cycles, while if it is not cached, it takes hundreds of clock cycles [23]. So, determining the exact running time of these algorithms requires many different experiments.

Merging These Two Tests.
As mentioned earlier, accessing data in RAM may take longer than performing calculations on that data. This is why all modern computers have memory caches; generally, there are a level-1 data cache, a level-2 cache, and a level-3 cache. If the total size of all data in the program is greater than the level-2 cache and the data is scattered in memory or accessed nonsequentially, memory access may be the largest time-consuming operation in the program. If a variable is cached, reading or writing it takes only 2-3 clock cycles; if it is not, it takes hundreds of clock cycles. The sequence size is 125000 bytes, which is bigger than most level-1 caches, so these data cannot all be cached. Loading the sequence data from memory costs many clock cycles, and memory access is the time-consuming operation (hotspot) in these tests. The size of the sequence cannot be reduced, and the cache of the CPU is fixed, so the best we can do is reduce the number of data loads to improve the performance of these tests. Many specifications propose performing both the serial test and the approximate entropy test.
Step 2 of the serial test and Step 2 of the approximate entropy test have similar functions and call the same algorithm, so these tests may be merged. The values of m are proposed in specification [3]. In the serial test, the values are m = 2 and 5, which requires determining the frequencies of the 1-bit, 2-bit, 3-bit, 4-bit, and 5-bit patterns. In the approximate entropy test, the values are m = 2 and 5, which requires determining the frequencies of the 2-bit, 3-bit, 5-bit, and 6-bit patterns. One can therefore determine the frequencies of all i-bit patterns with i = 1, 2, …, 6 at once. There are many benefits to merging the six cases. Firstly, many identical operations of Algorithm 4 can be combined; for example, the large number of loading operations need only execute once for m = 1, 2, …, 6, which reduces the data-loading operations by 5n/8 and enhances data efficiency. Secondly, the number of algorithm calls is greatly reduced, which further speeds up the tests.

Algorithm 1 (serial test):
Input: a binary sequence ε = (ε1, ε2, …, εn) of length n; m, the bit length of the subsequence.
Output: pass or not.
Step 2: determine v_{i1 i2 ⋯ im}, the frequency of all possible m-bit subsequences.
…
Step 5: repeat Steps 1-4, replacing m with m + 1.
…
Step 8: if the P value ≥ α, the sequence passes the test; otherwise, return not pass.

Algorithm 3 (basic bit-based implementation of Step 2):
Step 1: set v_{i1 i2 ⋯ im} = 0 for each value of i.
Step 2: for i = 1, 2, …, n, do: 2.1 x = 0; 2.2 for j = 0, 1, …, m − 1, do: …
Step 3: return v_w for each value of w = i1 i2 … im.
Algorithm 5 shows the merged test based on multibit operations, which is the merger of the serial test and the approximate entropy test.
Algorithm 6 shows the merger of the six cases, which determines the frequency of the i-bit patterns, with i = 1, 2, …, 6.
In Step 2.2, the conditions on j are 1 ≤ j ≤ 6 and 8 < k + j ≤ 16, which ensure the validity of the results. For example, when k = 13, v2_y (j = 2) is valid and v5_y (j = 5) is invalid. It is recommended to unroll the loops over j and k in the implementation, which removes many operations. The code of Algorithm 6, with loop unrolling, is shown in the Appendix.

Computational Complexity.
This section analyses the number of arithmetic operations of the above algorithms.
The most time-consuming step of the merged test is still Step 2, which calls Algorithm 6 to determine the frequency of all possible j-bit patterns, with j = 1, 2, …, 6. In Algorithm 6, Steps 1 and 3 perform basic operations, while Step 2 is the crucial and most time-consuming step. The computational complexity of Algorithm 6 is n/8 LOAD, 6n ADD, 7n/4 SHIFT, 6n AND, and n/8 OR. The details are provided in Table 7.
In the implementation of these two tests, different algorithms have different performance. Table 8 lists the numbers of operations of Algorithms 3, 4, and 6 in the serial test and the approximate entropy test.

Experimental Results
This section shows the experimental results for the above algorithms. Comparing the performance of these algorithms shows that Algorithm 4 (based on the octet string) and Algorithm 6 (based on the merger of these tests) are superior to Algorithm 3 (based on the bit string).
First, we describe how the analysis of the above algorithms is performed. The first step is to set a time counter T_s before the algorithm to be analyzed and another time counter T_e after it. Then, run the application and obtain the elapsed time T_i = T_e − T_s. Repeat Steps 1 and 2 k times (k odd) to obtain multiple values of the elapsed time T_i (i = 1, 2, …, k). Sorting these values in descending order, T′_1 ≥ T′_2 ≥ … ≥ T′_k, the median value T′_((k+1)/2) is taken as the elapsed time of the algorithm.
We can use the RDTSC instruction to get the current time. RDTSC loads the current value of the processor's time stamp counter (a 64-bit counter). The processor monotonically increments the time stamp counter every clock cycle and resets it to 0 every time the processor is reset. The RDTSC instruction is not serializing: it does not wait until all previous instructions have been executed before reading the counter.
We measure our algorithms and these tests on a personal computer. This computer has one Intel Core i3-3240 @ 3400 MHz CPU with four cores; measurements use one core. The compiler used to compile our C code is Intel C++ Compiler XE 12.0. The input data of Algorithm 3 is a bit string of length 1000000, and the input data of Algorithms 4 and 6 is the octet string of byte length 125000 converted from that bit string.
Algorithm 3 (based on the bit string) and Algorithm 4 (based on the octet string) can be used to determine the frequency of the m-bit patterns for a single m. Algorithm 6 (based on the octet string) determines the frequencies of the i-bit patterns for i = 1, 2, …, 6 at once. In the serial test, one must determine the frequencies of the 1-bit, 2-bit, 3-bit, 4-bit, and 5-bit patterns; in the approximate entropy test, the 2-bit, 3-bit, 5-bit, and 6-bit patterns. Algorithm 6 can complete both tasks.

On the same test platform, we used the NIST STS source code provided by NIST, the source code of Fast NIST STS from [17], and the implementation of this paper to run the serial test and the approximate entropy test. Table 9 shows the performance of the original NIST STS implementation, the Fast NIST STS implementation from [17], and our new implementation. The original NIST STS implementation performs the serial test and the approximate entropy test bit by bit, which makes the detection time increase significantly as m increases: for example, when m = 2, the serial test takes 32.844 milliseconds, and when m = 9, the time increases to 169.163 milliseconds. In addition, the original NIST STS and Fast NIST STS implementations both perform detection for a single parameter m at a time.

Algorithm 5, Step 1: extend the sequence by appending the first octet to the end of the sequence, E′ = (E1, E2, …, E_(n/8), E1).
Algorithm 5, Step 7: compute C_i^j = v_{i1 i2 ⋯ ij}/n, for each value of i.
Algorithm 6, Step 3: return v1_w, v2_w, v3_w, v4_w, v5_w, and v6_w.

It can be seen from Table 9 that the efficiency of the serial test and the approximate entropy test is increased by 2.164 and 2.100 times, respectively, compared with the implementation of Fast NIST STS, and that the execution efficiency of the merged test is significantly improved: its execution time is very close to that of the serial test or the approximate entropy test alone. At the same time, the efficiency of the merged test reaches 4.078 times that of Fast NIST STS.
In most applications, all the detection items of the test suite need to be executed. Therefore, the algorithm proposed in this paper has very important practical value and can significantly reduce the execution time of these two detection algorithms.

Conclusions
In this paper, we studied the fast implementation of the serial test and the approximate entropy test and proposed two types of fast implementation of these tests. The first fast implementation method follows the basic steps of these tests and calls Algorithm 4 to determine the frequency of the m-bit patterns. In this implementation, the efficiency of the serial test and the approximate entropy test is increased by 2.164 and 2.100 times, respectively, compared with the implementation of Fast NIST STS. The second method merges these tests, combines the steps, and calls Algorithm 6 to determine the frequencies for all the required values of m. In this implementation, the efficiency is greatly improved relative to the individual implementation of these tests, and the improvement is much more significant compared with the basic bit-based implementation. The best efficiency of this method is 4.078 times that of the Fast NIST STS implementation.
In conclusion, we propose a fast implementation method based on merging the tests and combining the frequency counting of the subsequences. This method can be used not only for the merged test but also for running the serial test or the approximate entropy test alone.
Data Availability

The raw/processed data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study.

Table 9: The performance of the original NIST STS implementation, the Fast NIST STS implementation from [17], and our new implementation. The merged test is a combined test of the serial test (m = 2 and 5) and the approximate entropy test (m = 2 and 5).
