Abstract
The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for on-body, sensor-based human activity recognition (HAR). Labeled data remains scarce because sensor data collection is expensive, and annotation is time-consuming and error-prone. To address this problem, we introduce IMUTube, an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert videos of human activity into virtual streams of IMU data. These virtual IMU streams represent accelerometry at a wide variety of locations on the human body. We show how the virtually generated IMU data improves the performance of a variety of models on known HAR datasets. Our initial results are very promising, but the greater promise of this work lies in a collective effort by the computer vision, signal processing, and activity recognition communities to extend it in the directions we outline. This should make on-body, sensor-based HAR yet another success story of large-dataset breakthroughs in recognition.
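To make the core idea concrete, the following is a minimal, illustrative sketch of how virtual accelerometry can be derived from a video-estimated 3D position track of a body location. It is not the paper's implementation: it assumes 3D joint trajectories (e.g., from a pose estimator) are already available in metres, approximates linear acceleration by double differentiation, and handles gravity in the world frame only, ignoring the sensor-frame orientation tracking a full pipeline would need.

```python
import numpy as np

def virtual_accel(positions, fps=30.0):
    """Estimate virtual accelerometer readings from a T x 3 track of 3D
    positions (metres) of one body location, sampled at `fps` Hz.

    Linear acceleration is approximated with finite differences
    (np.gradient uses central differences in the interior, one-sided at
    the boundaries). An accelerometer measures specific force a - g, so
    world-frame gravity is subtracted at the end.
    """
    g = np.array([0.0, 0.0, -9.81])       # world-frame gravity, m/s^2
    dt = 1.0 / fps
    vel = np.gradient(positions, dt, axis=0)   # T x 3 velocity, m/s
    acc = np.gradient(vel, dt, axis=0)         # T x 3 acceleration, m/s^2
    return acc - g                             # specific force, m/s^2

# Sanity check: a perfectly still sensor should read ~9.81 m/s^2 upward.
still = np.zeros((90, 3))                      # 3 s of a motionless track
readings = virtual_accel(still)
print(np.allclose(readings, [0.0, 0.0, 9.81]))  # True
```

In practice, double-differentiating noisy pose estimates amplifies high-frequency error, so a real pipeline would low-pass filter the tracks and resolve each reading into the (estimated) local sensor frame before use.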
IMUTube: Automatic Extraction of Virtual on-body Accelerometry from Video for Human Activity Recognition