Boosting Federated Multitask Learning: Transfer Effects in Cross-Domain Drug-Target Interaction Prediction

Conference paper
Intelligent Systems and Applications (IntelliSys 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 822)

Abstract

Using federated learning to collaborate with other parties is becoming common when machine learning is applied to high-value data. In this work, we extend existing federated models so that they can be applied to multitask problems. Previously, we presented FedMTBoost, which used boosting to enhance predictive performance on a small drug-target interaction problem. This paper investigates the algorithm’s performance on a larger scale using a cross-domain benchmark data set. The original motivation for boosting was to weight the data adaptively so that multitask transfer can happen on different tasks in different iterations. However, our results suggest that the improvement is mostly present in federated scenarios, which leads us to believe that the data and model weights improve federated transfer by adapting the models to the clients’ data. Furthermore, the boosting algorithms generally outperform traditional baselines when less data is available, either in tasks or in samples. We examine these findings in multiple experiments and attempt to explain the improvements achieved.
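The abstract describes the general idea of combining adaptive boosting with federated training, but not the algorithm itself. The sketch below is a minimal, hypothetical illustration of that idea only: it assumes binary labels in {-1, +1}, scikit-learn decision stumps as weak learners, and a server that simply collects the per-client learners into one shared ensemble. The names (Client, federated_boosting) and the aggregation rule are assumptions for illustration, not the paper's actual FedMTBoost procedure or its multitask setup.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class Client:
    """One data-holding party; keeps its own adaptive sample weights."""
    def __init__(self, X, y):                     # y expected in {-1, +1}
        self.X, self.y = X, y
        self.w = np.full(len(y), 1.0 / len(y))    # start with uniform weights

    def local_boosting_step(self):
        # Fit a decision stump on the current weighting of the local data.
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(self.X, self.y, sample_weight=self.w)
        pred = stump.predict(self.X)
        err = np.clip(np.average(pred != self.y, weights=self.w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)   # AdaBoost learner weight
        # Re-weight samples so the next round focuses on current mistakes.
        self.w *= np.exp(-alpha * self.y * pred)
        self.w /= self.w.sum()
        return stump, alpha

def federated_boosting(clients, n_rounds=10):
    # Each round, every client fits one weak learner on its re-weighted data;
    # the server collects the learners and their alphas into a shared ensemble.
    ensemble = []
    for _ in range(n_rounds):
        for client in clients:
            ensemble.append(client.local_boosting_step())
    return ensemble

def predict(ensemble, X):
    # Weighted vote of all collected weak learners.
    score = sum(alpha * model.predict(X) for model, alpha in ensemble)
    return np.sign(score)

In this sketch the federated transfer happens only through the shared ensemble; the paper's multitask aspect (per-task models or task-specific weighting) and any parameter-averaging aggregation are omitted for brevity.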

Acknowledgments

This study was supported by the J. Heim Student Scholarship, OTKA grant 139330, the European Union project RRF-2.3.1-21-2022-00004 within the framework of the Artificial Intelligence National Laboratory, and the New National Excellence Programme of the Ministry of Innovation and Technology (code ÚNKP-22-2-I-BME-70), funded by the National Research, Development and Innovation Fund.

Author information

Corresponding author

Correspondence to Dániel Sándor.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sándor, D., Antal, P. (2024). Boosting Federated Multitask Learning: Transfer Effects in Cross-Domain Drug-Target Interaction Prediction. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2023. Lecture Notes in Networks and Systems, vol 822. Springer, Cham. https://doi.org/10.1007/978-3-031-47721-8_26
