Benchmarking of lightweight cryptographic algorithms for wireless IoT networks

  • Original Paper
  • Published in Wireless Networks

Abstract

Cryptographic algorithms that provide both encryption and authentication are increasingly required in modern security architectures and protocols (e.g. TLS 1.3). Many authenticated encryption schemes have been proposed in the past few years, prompting a substantial body of cryptanalysis research. In the same direction, the National Institute of Standards and Technology (NIST) is coordinating a large effort to select a new standard authenticated encryption algorithm for resource-constrained devices. In this paper, 12 of the 32 Round 2 candidates of the NIST competition are benchmarked on a real IoT test-bed. These candidates implement authenticated encryption with associated data (AEAD), which aims at preserving integrity, privacy and authenticity at the same time. We ported the 12 algorithms to several hardware platforms (an x86_64 PC, an AVR ATmega128, an MSP430F1611 and the IoT-LAB platform) and made a fair comparison of their performance. We adapted the algorithms to the Contiki operating system to evaluate the latency and efficiency of each algorithm in IoT applications deployed on IoT-LAB, a French national experimental platform. In addition, we used the FELICS-AE benchmark to quantify locally the RAM usage, execution time and code size of each algorithm. This work provides practical results on their performance in an IoT scenario and paves the way for further research on other algorithms, platforms and operating systems.


Notes

  1. https://csrc.nist.gov/Projects/lightweight-cryptography/round-2-candidates

  2. https://www.iot-lab.info/

  3. https://www.cryptolux.org/index.php/FELICS

  4. https://www.cryptolux.org/index.php/FELICS

  5. The code is available at https://gitlab.inria.fr/minier/felics-ae.

  6. https://github.com/das-labor/xbx

  7. https://www.iot-lab.info/testbed/dashboard

  8. https://csrc.nist.gov/Projects/lightweight-cryptography/round-2-candidates

  9. https://github.com/iot-lab.

  10. As a routing protocol is required, we decided to use the standard RPL. See [43] for more details.

  11. We limit our study to 2 nodes because with more nodes, whatever the topology, the metrics become more difficult to measure.

References

  1. Adomnicai, A., Berger, T. P., Clavier, C., Francq, J., Huynh, P., Lallemand, V., Le Gouguec, K., Minier, M., Reynaud, L., & Thomas, G. (2019). Lilliput-AE: A new lightweight tweakable block cipher for authenticated encryption with associated data. Submission to the NIST Lightweight Cryptography project.

  2. Andreeva, E., Lallemand, V., Purnal, A., Reyhanitabar, R., Roy, A., & Vizár, D. (2019). ForkAE v1. Submission to the NIST Lightweight Cryptography project.

  3. Banik, S., Bogdanov, A., Peyrin, T., Sasaki, Y., Sim, S. M., Tischhauser, E., & Todo, Y. (2019). SUNDAE-GIFT. Submission to the NIST Lightweight Cryptography project, Round 1.

  4. Banik, S., Chakraborti, A., Iwata, T., Minematsu, K., Nandi, M., Peyrin, T., Sasaki, Y., Sim, S. M., & Todo, Y. (2019). GIFT-COFB. Submission to the NIST Lightweight Cryptography project, Round 1.

  5. Bao, Z., Chakraborti, A., Datta, N., Guo, J., Nandi, M., Peyrin, T., & Yasuda, K. (2019). PHOTON-Beetle. Submission to the NIST Lightweight Cryptography standardization effort.

  6. Beierle, C., Biryukov, A., dos Santos, L. C., Großschädl, J., Perrin, L., Udovenko, A., Velichkov, V., & Wang, Q. (2019). Schwaemm and Esch: Lightweight authenticated encryption and hashing using the Sparkle permutation family. Submission to the NIST Lightweight Cryptography project, Round 2.

  7. Beierle, C., Jean, J., Kölbl, S., Leander, G., Moradi, A., Peyrin, T., Sasaki, Y., Sasdrich, P., & Sim, S. M. (2020). SKINNY-AEAD and SKINNY-Hash. IACR Transactions on Symmetric Cryptology, 2020(S1), 88–131.

  8. Bellare, M., & Rogaway, P. (2000). Encode-then-encipher encryption: How to exploit nonces or redundancy in plaintexts for efficient cryptography. In International Conference on the Theory and Application of Cryptology and Information Security, pages 317–330. Springer.

  9. Bernstein, D. J., & Lange, T. (2019). eBACS: ECRYPT benchmarking of cryptographic systems. http://bench.cr.yp.to. Accessed online December 2019.

  10. Beyne, T., Chen, Y. L., Dobraunig, C., & Mennink, B. (2020). Status update on Elephant. NIST Lightweight Cryptography competition.

  11. Canteaut, A., Duval, S., Leurent, G., Naya-Plasencia, M., Perrin, L., Pornin, T., & Schrottenloher, A. (2020). Saturnin: A suite of lightweight symmetric algorithms for post-quantum security. IACR Transactions on Symmetric Cryptology, 2020(S1), 160–207.


  12. Cazorla, M., Gourgeon, S., Marquet, K., & Minier, M. (2015). Implementations of lightweight block ciphers on a WSN430 sensor.

  13. Cazorla, M., Marquet, K., & Minier, M. (2013). Survey and benchmark of lightweight block ciphers for wireless sensor networks. In 2013 International Conference on Security and Cryptography (SECRYPT), pages 1–6. IEEE.

  14. Chakraborti, A., Datta, N., Jha, A., & Nandi, M. (2019). HYENA. Submission to the NIST Lightweight Cryptography project, Round 1.

  15. Chakraborti, A., Datta, N., Jha, A., Mancillas-López, C., Nandi, M., & Sasaki, Y. (2019). LOTUS-AEAD and LOCUS-AEAD. Submission to the NIST Lightweight Cryptography project, Round 1.

  16. Daemen, J., Massolino, P. M. C., Mehrdad, A., & Rotella, Y. (2020). The Subterranean 2.0 cipher suite. IACR Transactions on Symmetric Cryptology, 2020(S1), 262–294.

  17. Daemen, J., Hoffert, S., Peeters, M., Van Assche, G., & Van Keer, R. (2020). Xoodyak, a lightweight cryptographic scheme. IACR Transactions on Symmetric Cryptology, 2020(S1), 60–87.


  18. Dinu, D., Le Corre, Y., Khovratovich, D., Perrin, L., Großschädl, J., & Biryukov, A. (2019). Triathlon of lightweight block ciphers for the Internet of Things. Journal of Cryptographic Engineering, 9(3), 283–302.


  19. Dobraunig, C., Eichlseder, M., Mangard, S., Mendel, F., Mennink, B., Primas, R., & Unterluggauer, T. (2020). ISAP v2.0.

  20. Dobraunig, C., Eichlseder, M., Mendel, F., & Schläffer, M. (2014). Ascon. Submission to the NIST LWC competition: http://ascon.iaik.tugraz.at.

  21. Dos Santos, L. C., Großschädl, J., & Biryukov, A. (2019). FELICS-AEAD: Benchmarking of lightweight authenticated encryption algorithms. In International Conference on Smart Card Research and Advanced Applications, pages 216–233. Springer.

  22. Eisenbarth, T., Kumar, S., Paar, C., Poschmann, A., & Uhsadel, L. (2007). A survey of lightweight-cryptography implementations. IEEE Design and Test of Computers, 24(6), 522–533.


  23. Goudarzi, D., Jean, J., Kölbl, S., Peyrin, T., Rivain, M., Sasaki, Y., & Sim, S. M. (2020). Pyjamask: Block cipher and authenticated encryption with highly efficient masked implementation. IACR Transactions on Symmetric Cryptology, 2020(S1), 31–59.

  24. Hell, M., Johansson, T., Meier, W., Sönnerup, J., & Yoshida, H. (2019). Grain-128AEAD: A lightweight AEAD stream cipher. Submission to the NIST Lightweight Cryptography project, Round 1.

  25. IoT-LAB API documentation. https://github.com/iot-lab/iot-lab-client/blob/master/iotlabclient/client_README.md.

  26. IoT-LAB CLI tools documentation. https://github.com/iot-lab/iot-lab/wiki/CLI-Tools.

  27. IoT-LAB CLI tools. https://pypi.org/project/iotlabcli/.

  28. IoT-LAB client GitHub repository. https://github.com/iot-lab/iot-lab-client.

  29. IoT-LAB GitHub organization. https://github.com/iot-lab.

  30. IoT-LAB website. https://www.iot-lab.info.

  31. Kerckhof, S., Durvaux, F., Hocquet, C., Bol, D., & Standaert, F.-X. (2012). Towards green cryptography: A comparison of lightweight ciphers from the energy viewpoint. In International Workshop on Cryptographic Hardware and Embedded Systems, pages 390–407. Springer.

  32. Knežević, M., Nikov, V., & Rombouts, P. (2012). Low-latency encryption: Is “lightweight = light + wait”? In International Workshop on Cryptographic Hardware and Embedded Systems, pages 426–446. Springer.

  33. Law, Y. W., Doumen, J., & Hartel, P. (2006). Survey and benchmark of block ciphers for wireless sensor networks. ACM Transactions on Sensor Networks (TOSN), 2(1), 65–93.


  34. Le Gouguec, K., & Huynh, P. (2019). FELICS-AE: A framework to benchmark lightweight authenticated block ciphers. In Proceedings of the 2019 NIST Lightweight Cryptography Workshop.

  35. Matsui, M., & Murakami, Y. (2013). Minimalism of software implementation. In International Workshop on Fast Software Encryption, pages 393–409. Springer.

  36. National Institute of Standards and Technology (NIST). SHA-3 project. [Online; 2007].

  37. NIST FIPS PUB 197 (2001). Advanced Encryption Standard (AES). Federal Information Processing Standards Publication 197.

  38. OneLab website. https://onelab.eu.

  39. RIOT: The friendly operating system for the Internet of Things.

  40. Texas Instruments. MSP430F1611 datasheet. https://www.ti.com/lit/ds/symlink/msp430f1611.pdf.

  41. Wenzel-Benner, C., & Gräf, J. (2010). XBX: External benchmarking extension for the SUPERCOP crypto benchmarking framework. In International Workshop on Cryptographic Hardware and Embedded Systems, pages 294–305. Springer.

  42. Wenzel-Benner, C., Gräf, J., Pham, J., & Kaps, J.-P. (2012). XBX benchmarking results, January 2012. In Third SHA-3 Candidate Conference (2012). http://xbx.das-labor.org/trac/wiki.

  43. Winter, T., Thubert, P., Brandt, A., Hui, J. W., Kelsey, R., Levis, P., et al. (2012). RPL: IPv6 routing protocol for low-power and lossy networks. RFC 6550, 1–157.


  44. Wu, H., & Huang, T. (2019). TinyJAMBU: A family of lightweight authenticated encryption algorithms. Submission to the NIST Lightweight Cryptography competition. https://csrc.nist.gov/CSRC/media/Projects/Lightweight-Cryptography/documents/round-1/spec-doc/TinyJAMBU-spec.pdf.

Download references

Acknowledgements

This work was partially supported by the project IMPACT DigiTrust of “Lorraine Université d’Excellence”.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Abdelkader Lahmadi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: performance results for the NIST competition finalists on an ARM Cortex-M3

Table 7 Performance results with -O3 on ARM Cortex-M3

Regarding the results of Table 7, Schwaemm256-128 is clearly the most efficient algorithm in terms of execution time, but it benefits from a dedicated implementation, as do Romulus-N and TinyJAMBU-128. A dedicated implementation for the ARM Cortex-M3 is thus of primary importance, since all the algorithms tested with their reference code perform poorly.

The behaviour is exactly the same for RAM consumption, where TinyJAMBU-128 and Schwaemm256-128 have the lowest usage, except that Romulus-N becomes the worst. The same remark holds for code size.

Appendix B: detailed view of the Python scripts

1.1 Libraries

The Python scripts are written in Python 3 and use the following libraries (a consolidated import sketch follows the list):

  • matplotlib and pylab to draw figures

  • datetime to handle time measures and compute time intervals

  • time to wait and get the current timestamp

  • fpdf to generate a PDF report from a set of experiments

  • sqlite3 to interact with the database

  • path, from os, to handle files

  • pyshark to parse PCAP files

  • sys to handle error messages

  • configargparse to parse the configuration file

  • queue to put the different threads in a queue

  • Thread, from threading, to implement a thread

  • subprocess to launch the threads and retrieve their output

  • SSHClient, from paramiko, to handle SSH communication with the IoT-LAB user space

  • SCPClient, from scp, to handle SCP operations with the IoT-LAB user space

  • json to handle JSON data

  • iotlabcli to interact with the IoT-LAB API

  • shutil to copy firmware files

  • enum to force the Algorithm, Architecture, OperatingSystem, Operation, ProfileType, Topology and DataProcessing values to be part of a predefined set

  • Path, from pathlib, to create directories.
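For reference, a consolidated import header consistent with the list above might look as follows; the grouping and aliases are ours and merely illustrate how these dependencies would be pulled in.

```python
# Hypothetical consolidated import header for the benchmarking scripts;
# module names follow the list above, grouping and aliases are illustrative.
import datetime
import enum
import json
import queue
import shutil
import sqlite3
import subprocess
import sys
import time
from os import path
from pathlib import Path
from threading import Thread

import configargparse            # configuration-file parsing
import iotlabcli                 # IoT-LAB API client
import matplotlib.pyplot as plt  # figures (pylab wraps this module)
import pyshark                   # PCAP parsing
from fpdf import FPDF            # PDF report generation
from paramiko import SSHClient   # SSH to the IoT-LAB user space
from scp import SCPClient        # SCP to the IoT-LAB user space
```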

1.2 Classes

The Python scripts rely on six main classes, whose relations are depicted in Fig. 11. The Thread class is imported from the Python library threading.

Fig. 11 Class diagram of our benchmarking tool

Enum classes Six classes do not appear in the class diagram: Algorithm, Architecture, OperatingSystem, Operation, ProfileType and Topology. They are defined in the file utils.py and all extend the Enum class from the enum library. These classes are used while checking the content of the configuration file, in order to force the algorithm, architecture, OS, operation, profile and topology values to be part of a predefined set, as illustrated below.
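As an illustration, a minimal Enum subclass and the corresponding configuration check could look like this; the member values are hypothetical, the real sets being defined in utils.py.

```python
from enum import Enum

class Topology(Enum):
    """Hypothetical members; the actual set is defined in utils.py."""
    STAR = 'star'
    LINE = 'line'

def check_topology(value: str) -> Topology:
    # The Enum lookup raises ValueError when the configuration value
    # is not part of the predefined set.
    return Topology(value)
```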

Singleton class While the data provided directly by the functions of the file utils.py (such as the Enum classes mentioned above) is not retrieved from a specific instance, the data provided by the database is retrieved by calling the instance of a class. This guarantees that only one instance accesses the database at a time, and makes it possible to close this instance at the end of the scripts. The Database class therefore extends a Singleton class, as sketched below.
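A minimal sketch of this pattern, assuming a classic __new__-based singleton and an SQLite backend (the database file name is illustrative):

```python
import sqlite3

class Singleton:
    """Minimal singleton base class: at most one instance per subclass."""
    _instances = {}

    def __new__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]

class Database(Singleton):
    """Single point of access to the SQLite database."""
    def __init__(self, db_file='benchmark.db'):
        # __init__ runs on every instantiation; connect only once.
        if not hasattr(self, 'connection'):
            self.connection = sqlite3.connect(db_file)

    def close(self):
        self.connection.close()
```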

1.3 Modules

1.3.1 Overview of the modules

The scripts comprise six main modules: global, launch_experiment, analytics, experiment, database and utils. The first three are process-focused: they contain functions to, respectively, orchestrate the other modules according to the configuration file, launch one or more experiments on the IoT-LAB platform and retrieve the generated files, and analyse those files in order to extract measures. The last three handle data: the experiment module gathers the data related to an experiment in order to manipulate this entity, the database module controls the interaction with the database (including its creation), and the utils module contains data related to our architecture (such as a common format for file names and paths).

1.3.2 Data handling modules

Experiment The experiment module represents an experiment and simplifies the handling of the information and operations related to it. An experiment can be instantiated using parameters defined by the user, or from the database, which provides the parameters associated with a given experiment ID.

Database The database module provides tools to retrieve data from, and store data in, the database. It contains the Database class, which extends a Singleton in order to ensure that only one instance accesses the database at a time. The database contains six tables, depicted in Fig. 12 (a schema sketch follows the figure). Three of them are statically pre-filled by the scripts (Topology, Node and NodeTopology), while the other three (Experiment, Measure, Traffic) store data gathered while the experiments are launched or analysed.

The Node table stores the nodes used in one or more topologies. Each node has an ID specific to our database, a node ID corresponding to its ID on the IoT-LAB platform, a site indicating where the node is located (the IoT-LAB platform is divided into multiple sites), and an architecture.

The Topology table lists the topologies: each has an ID specific to our database and a human-readable name.

The NodeTopology table links each node to its associated topology and defines the role it takes (‘server’ or ‘client’). It contains an ID specific to our database, the role the node takes in the topology, the ID of the associated topology and the ID of the associated node.

The Experiment table stores the parameters of each experiment launched on IoT-LAB through the Benchmarking tool. It is linked to a topology, identified by its ID. Two tables are associated with the Experiment table: Measure and Traffic, detailed below.

The Measure table stores the power measures associated with an experiment, identified by its database-specific ID. Each measure records the node on which the power was measured (also identified by its ID), a timestamp indicating when the power was measured, and the measured value in watts.

The Traffic table stores the data computed from the PCAP capture on the server side during an experiment, identified by its database-specific ID. Each entry has its own database-specific ID, the number of considered packets, the time interval during which the packets were captured, and the sum of the lengths of the considered packets.

Fig. 12 Database overview
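The following sqlite3 snippet sketches a schema consistent with the six tables described above; the column names are inferred from the text and may differ from those used by the actual scripts.

```python
import sqlite3

# Illustrative schema for the six tables; columns are inferred from the
# description above, not taken from the actual scripts.
SCHEMA = """
CREATE TABLE IF NOT EXISTS Node (
    id INTEGER PRIMARY KEY,
    node_id INTEGER,          -- ID on the IoT-LAB platform
    site TEXT,                -- IoT-LAB site hosting the node
    architecture TEXT
);
CREATE TABLE IF NOT EXISTS Topology (
    id INTEGER PRIMARY KEY,
    name TEXT
);
CREATE TABLE IF NOT EXISTS NodeTopology (
    id INTEGER PRIMARY KEY,
    role TEXT,                -- 'server' or 'client'
    topology_id INTEGER REFERENCES Topology(id),
    node_id INTEGER REFERENCES Node(id)
);
CREATE TABLE IF NOT EXISTS Experiment (
    id INTEGER PRIMARY KEY,
    algorithm TEXT, os TEXT, architecture TEXT, duration INTEGER,
    topology_id INTEGER REFERENCES Topology(id)
);
CREATE TABLE IF NOT EXISTS Measure (
    id INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES Experiment(id),
    node_id INTEGER REFERENCES Node(id),
    timestamp REAL,
    power_watts REAL
);
CREATE TABLE IF NOT EXISTS Traffic (
    id INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES Experiment(id),
    packet_count INTEGER,
    interval_seconds REAL,
    total_length_bytes INTEGER
);
"""

connection = sqlite3.connect('benchmark.db')
connection.executescript(SCHEMA)
```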

Utils The utils module is a helper: it centralizes the static data describing the Benchmarking tool's layout of local files. It also gathers functions used by the global, launch_experiment and analytics modules.

1.3.3 Process handling modules

Global process When using the Benchmarking tool, the user can configure four distinct operations, depicted in Fig. 13: ‘launch-experiments’, ‘analyse-experiments’, ‘analyse-last’ and ‘launch-and-analyse-last’. Two main phases can be identified: ‘Launching experiments’ and ‘Analysing experiments’. The first is used when ‘launch-experiments’ or ‘launch-and-analyse-last’ is selected; the second when ‘analyse-experiments’, ‘analyse-last’ or ‘launch-and-analyse-last’ is selected. These two phases are detailed in the subsections below; a dispatch sketch follows Fig. 13.

Fig. 13 Process phases overview
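A minimal sketch of how the global module could map the configured operation to these two phases, assuming configargparse and an illustrative configuration file name:

```python
import configargparse

OPERATIONS = ('launch-experiments', 'analyse-experiments',
              'analyse-last', 'launch-and-analyse-last')

# 'benchmark.conf' is an illustrative configuration file name.
parser = configargparse.ArgParser(default_config_files=['benchmark.conf'])
parser.add('--operation', choices=OPERATIONS, required=True)
args = parser.parse_args()

# Map the selected operation to the two phases of Fig. 13.
launching = args.operation in ('launch-experiments',
                               'launch-and-analyse-last')
analysing = args.operation in ('analyse-experiments', 'analyse-last',
                               'launch-and-analyse-last')
```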

Launching experiments Input/Output The ‘Launching’ phase takes the experiments’ parameters as input. To launch several experiments, a list of algorithms can be given in the configuration file; the launched experiments then share the same parameters except for the algorithm. Only experiments with the same parameters (duration, OS, architecture and topology) can later be compared with each other in the ‘Analysing’ phase. The ‘Launching’ phase outputs the list of IDs of the launched experiments.

Orchestrate the launching of the experiments Figure 14 presents the global process of the ‘Launching’ phase. As mentioned before, multiple experiments can be launched at once, so each experiment is launched in its own thread (see the sketch after Fig. 14). The clients of these experiments do not wait between two sent packets, so that the maximum number of received packets can be computed. This value is used to compile a new client that waits between packets in order to measure the latency; a second experiment using this client is then launched. The second experiment is referred to as the “associated experiment” of the first one, while the first is the “original experiment” of the second. The process of launching one experiment is described in the following paragraph.

Fig. 14 Steps for launching a process
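The per-experiment threading described above can be sketched as follows; launch_on_iotlab is a hypothetical helper standing in for the submission process detailed in the next paragraph.

```python
from queue import Queue
from threading import Thread

def run_experiment(params, results):
    # launch_on_iotlab is a hypothetical helper wrapping the
    # submission process described in the next paragraph.
    experiment_id = launch_on_iotlab(params)
    results.put(experiment_id)

def launch_all(experiments):
    """Launch one thread per experiment and collect the resulting IDs."""
    results = Queue()
    threads = [Thread(target=run_experiment, args=(params, results))
               for params in experiments]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    return [results.get() for _ in threads]
```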

Launch an experiment An experiment is submitted through the IoT-LAB API, providing the parameters given by the user. The nodes to be used are chosen according to the topology data contained in the database. On IoT-LAB, an experiment can be in one of several states: ‘Waiting’, ‘Launching’, ‘Running’, ‘Finishing’, ‘Terminated’ and ‘Error’. The state of each experiment is checked regularly in order to determine the next action of the Benchmarking tool, as sketched after Fig. 15. The first time the experiment is detected as ‘Running’, a traffic capture is launched on the server side. When the experiment is detected as ‘Terminated’, the generated files are copied locally through SCP. The experiment parameters (and ID) are then stored in the Experiment table of the database (Fig. 15).

Fig. 15 Steps for launching an experiment
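A hedged sketch of the state-polling loop, where get_experiment_state and start_server_capture are hypothetical wrappers around the IoT-LAB API and the capture logic:

```python
import time

TERMINAL_STATES = {'Terminated', 'Error'}

def wait_for_experiment(experiment_id, poll_seconds=30):
    """Poll the experiment state until it terminates.
    get_experiment_state and start_server_capture are hypothetical
    wrappers around the IoT-LAB API and the capture logic."""
    capture_started = False
    while True:
        state = get_experiment_state(experiment_id)
        if state == 'Running' and not capture_started:
            start_server_capture(experiment_id)
            capture_started = True
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
```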

Analysing the experiments

Input/Output The ‘Analysing’ phase takes as input either the experiments’ parameters or their IDs. If parameters are provided in the configuration file, the analysis is done on the last experiments launched with these parameters. As in the ‘Launching’ phase, multiple algorithms can be requested; they are analysed separately and all presented in the output report. The maximum number of experiments to consider for each algorithm is indicated in the configuration file. The experiments to analyse can also be identified directly by their IDs. The ‘Analysing’ phase outputs a PDF report and PNG graphs (included in the PDF).

Global process The ‘Analysing’ phase is depicted in Fig. 16. For each algorithm, the experiments are retrieved (by their parameters or their IDs) and, for each experiment, the power measures and the traffic data are computed.

Fig. 16 Global analysis process

Power measures The power measures are retrieved from the OML files generated by IoT-LAB while the experiment is running, as shown in Fig. 17. These files (one per node) are copied locally through SCP. For each experiment, several entries are stored in the Measure table of the database: one per power measure, each associated with a timestamp. The generation of the power graph from these measures is detailed in Fig. 18. When the experiments are analysed, the data of experiments using the same encryption algorithm are gathered. As the power is measured at specific times, the power measures are discrete values for all experiments and nodes. Thus, in order to gather data from several experiments, their duration is divided into several time windows and the mean power of each window is computed. A new power measure is thereby simulated: its timestamp is the middle of the time window, and its value is the mean power of the experiments over this window (see the sketch after Fig. 18). The power graph presents these means. Since the boxplot shows the minimum, the maximum, the median and the first and third quartiles, the power measures need no such preformatting, contrary to the graph data: for each algorithm, the power measures of all experiments are simply retrieved from the database in order to build the associated boxplot.

Fig. 17 Power analysis process

Fig. 18 Power graph generation
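The time-window averaging described above can be sketched as follows; the function and variable names are ours.

```python
def window_means(measures, n_windows):
    """Average discrete (timestamp, watts) samples, possibly gathered
    from several experiments, over n_windows equal time windows.
    Returns one simulated measure per non-empty window:
    (midpoint of the window, mean power over the window).
    Assumes a non-empty list of samples."""
    t0 = min(t for t, _ in measures)
    t1 = max(t for t, _ in measures)
    width = (t1 - t0) / n_windows or 1.0  # guard against a zero span
    buckets = [[] for _ in range(n_windows)]
    for t, watts in measures:
        index = min(int((t - t0) / width), n_windows - 1)
        buckets[index].append(watts)
    return [(t0 + (i + 0.5) * width, sum(b) / len(b))
            for i, b in enumerate(buckets) if b]
```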

Traffic measures The traffic data is retrieved from the PCAP file generated on the server side by IoT-LAB while the experiment is running, and from the logs of all the nodes (clients and server), as shown in Fig. 19. These files are copied locally through SCP. The Benchmarking tool then parses the PCAP file using the pyshark library. Only the packets sent from a client to the server and containing an encrypted payload are considered, and a margin of 5% of the experiment's time interval is ignored at the beginning and the end of the capture (see the sketch after Fig. 19). The traffic measures are computed from the number, timestamps and lengths of the packets, which are stored in the database as one entry of the Traffic table. The computed traffic measures are: packets per second, packet loss and latency.

Fig. 19 Traffic analysis process
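A sketch of the PCAP parsing with pyshark; the IPv6 destination filter below is an assumption standing in for the real ‘encrypted payload from a client’ test, which depends on the firmware's traffic format.

```python
import pyshark

def traffic_stats(pcap_path, t_start, t_end, server_addr):
    """Count server-bound packets, ignoring a 5% margin at both ends
    of the experiment's time interval. The destination test is a
    stand-in for the actual encrypted-payload filter."""
    margin = 0.05 * (t_end - t_start)
    lo, hi = t_start + margin, t_end - margin
    count, total_length = 0, 0
    first = last = None
    capture = pyshark.FileCapture(pcap_path)
    for packet in capture:
        timestamp = packet.sniff_time.timestamp()
        if not (lo <= timestamp <= hi):
            continue  # skip the 5% margins
        if 'IPV6' not in packet or packet.ipv6.dst != server_addr:
            continue  # keep only packets addressed to the server
        count += 1
        total_length += int(packet.length)
        first = timestamp if first is None else first
        last = timestamp
    capture.close()
    interval = (last - first) if count > 1 else 0.0
    return count, total_length, interval
```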

Rights and permissions

Reprints and permissions

About this article


Cite this article

Blanc, S., Lahmadi, A., Le Gouguec, K. et al. Benchmarking of lightweight cryptographic algorithms for wireless IoT networks. Wireless Netw 28, 3453–3476 (2022). https://doi.org/10.1007/s11276-022-03046-1

Download citation


  • DOI: https://doi.org/10.1007/s11276-022-03046-1
