
FedTSC: a secure federated learning system for interpretable time series classification

Published: 01 August 2022

Abstract

We demonstrate FedTSC, a novel federated learning (FL) system for interpretable time series classification (TSC). FedTSC is an FL-based TSC solution that strikes a balance among security, interpretability, accuracy, and efficiency. We achieve this by first extending the concept of FL to account for both stronger security and model interpretability. We then propose three novel TSC methods based on explainable features to address this challenging FL problem. To build the model in the FL setting, we design several security protocols that are carefully optimized to minimize communication complexity, the main bottleneck. We build the FedTSC system on this solution and provide Sklearn-like Python APIs for practical use. We show that the system is easy to use and that the novel TSC approach is superior.
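To illustrate the Sklearn-like Python interface mentioned above, the sketch below mimics the familiar fit/predict workflow with a hypothetical FedTSCClassifier stand-in. The class name, its parameters, and the local random data are illustrative assumptions for demonstration only, not the actual FedTSC API, which in practice coordinates secure protocols across the federated parties rather than training locally.

```python
# Minimal sketch of an sklearn-style workflow. FedTSCClassifier and its
# arguments are hypothetical placeholders, not the real FedTSC library.
import numpy as np


class FedTSCClassifier:
    """Hypothetical stand-in exposing the familiar fit/predict interface."""

    def __init__(self, n_features=100, max_depth=5):
        self.n_features = n_features
        self.max_depth = max_depth

    def fit(self, X, y):
        # In the real system, explainable-feature extraction and model
        # building would run as secure multi-party protocols across parties.
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        # Placeholder prediction: always returns the first observed class.
        return np.full(len(X), self.classes_[0])


# Usage mirrors scikit-learn conventions:
X_train = np.random.randn(20, 150)        # 20 univariate series of length 150
y_train = np.random.randint(0, 2, 20)     # binary labels
clf = FedTSCClassifier().fit(X_train, y_train)
print(clf.predict(X_train[:3]))
```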

References

  1. A. Bagnall, J. Lines, A. Bostrom, J. Large, and E. Keogh. 2017. The Great Time Series Classification Bake Off: a Review and Experimental Evaluation of Recent Algorithmic Advances. Data Mining and Knowledge Discovery 31 (2017), 606--660. Issue 3.Google ScholarGoogle ScholarDigital LibraryDigital Library
  2. Fangcheng Fu, Yingxia Shao, Lele Yu, Jiawei Jiang, Huanran Xue, Yangyu Tao, and Bin Cui. 2021. VF2Boost: Very Fast Vertical Federated Gradient Boosting for Cross-Enterprise Learning. In Proceedings of the 2021 International Conference on Management of Data. 563--576.Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. Eamonn Keogh and Shruti Kasetty. 2003. On the need for time series data mining benchmarks: a survey and empirical demonstration. Data Mining and knowledge discovery 7, 4 (2003), 349--371.Google ScholarGoogle Scholar
  4. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics. PMLR, 1273--1282.Google ScholarGoogle Scholar
  5. Matthew Middlehurst, James Large, Michael Flynn, Jason Lines, Aaron Bostrom, and Anthony Bagnall. 2021. HIVE-COTE 2.0: a new meta ensemble for time series classification. Machine Learning 110, 11 (2021), 3211--3243.Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. Christoph Molnar. 2022. Interpretable Machine Learning (2 ed.). christophm.github.io/interpretable-ml-book/Google ScholarGoogle Scholar
  7. Yuncheng Wu, Shaofeng Cai, Xiaokui Xiao, Gang Chen, and Beng Chin Ooi. [n.d.]. Privacy Preserving Vertical Federated Learning for Tree-based Models. Proceedings of the VLDB Endowment 13, 11 ([n. d.]).Google ScholarGoogle Scholar
  8. Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. 2019. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST) 10, 2 (2019), 1--19.Google ScholarGoogle ScholarDigital LibraryDigital Library


  • Published in

    Proceedings of the VLDB Endowment, Volume 15, Issue 12
    August 2022
    551 pages
    ISSN: 2150-8097

    Publisher

    VLDB Endowment

    Publication History

    • Published: 1 August 2022
    • Published in PVLDB Volume 15, Issue 12

    Qualifiers

    • research-article
