Abstract
Communication plays a pivotal role in our daily lives, and advances in technology now allow us to communicate with others at a distance. However, while in direct human-human communication we can adjust how we convey a message based on our own context and the perceived context of the receiver, doing so at a distance, through a messaging tool, one of the most popular options today, is harder. Most messaging tools, such as WhatsApp or Messenger, offer some flexibility in how a message is sent (e.g., text, audio, image), but the receiver is limited to receiving it in the format chosen by the sender. In this regard, providing more flexibility in such technology-mediated communication scenarios might foster increased adaptability of these tools to multiple user abilities and contexts, and provide important alternatives for those with a disability (e.g., aphasia, blindness). The work presented here adopts a user-centered approach to design and develop a first proof of concept for a multimodal messaging system that enables message modality conversion regardless of the format used by the sender.
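The abstract does not detail how modality conversion is realized. As a purely hypothetical illustration of the core idea, delivering each message in the receiver's preferred modality regardless of how it was sent, a conversion layer could route messages through converter functions keyed by (source, target) modality pairs. All names below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a modality-conversion layer: messages are
# delivered in the receiver's preferred modality, regardless of the
# modality the sender used. Names are illustrative only.

def text_to_audio(content: str) -> str:
    # Placeholder for a text-to-speech call (e.g., a TTS engine).
    return f"<audio of: {content}>"

def audio_to_text(content: str) -> str:
    # Placeholder for an automatic speech recognition call.
    return f"<transcript of: {content}>"

# Converters keyed by (source modality, target modality).
CONVERTERS = {
    ("text", "audio"): text_to_audio,
    ("audio", "text"): audio_to_text,
}

def deliver(content: str, source: str, target: str) -> str:
    """Return the message content in the receiver's preferred modality."""
    if source == target:
        return content  # no conversion needed
    converter = CONVERTERS.get((source, target))
    if converter is None:
        return content  # fall back to the sender's modality
    return converter(content)
```

For example, `deliver("hello", "text", "audio")` would hand the text message to the text-to-speech converter, while a receiver whose preferred modality matches the sender's gets the message unchanged.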
Acknowledgement
This work was supported by EU and national funds through the Portuguese Foundation for Science and Technology (FCT), in the context of project AAL APH-ALARM (AAL/0006/2019) and funding to the research unit IEETA (UIDB/00127/2020).
Copyright information
© 2022 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Cardoso, H., Almeida, N., Silva, S. (2022). Tell It Your Way: Technology-Mediated Human-Human Multimodal Communication. In: Gao, X., Jamalipour, A., Guo, L. (eds) Wireless Mobile Communication and Healthcare. MobiHealth 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 440. Springer, Cham. https://doi.org/10.1007/978-3-031-06368-8_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06367-1
Online ISBN: 978-3-031-06368-8