DOI: 10.1145/3458864.3467680
research-article
Open Access

OESense: employing occlusion effect for in-ear human sensing

Published: 24 June 2021

ABSTRACT

Smart earbuds are recognized as a new wearable platform for personal-scale human motion sensing. However, due to interference from head movement or background noise, commonly used modalities (e.g., accelerometer and microphone) fail to reliably detect both intense and light motions. To overcome this, we propose OESense, an acoustic-based in-ear system for general human motion sensing. The core idea behind OESense is the joint use of the occlusion effect (i.e., the enhancement of low-frequency components of bone-conducted sounds in an occluded ear canal) and an inward-facing microphone, which naturally boosts the sensing signal and suppresses external interference. We prototype OESense as an earbud and evaluate its performance on three representative applications: step counting, activity recognition, and hand-to-face gesture interaction. With data collected from 31 subjects, OESense achieves 99.3% step counting recall, 98.3% recognition recall for five activities, and 97.0% recall for five tapping gestures on the human face. We also demonstrate that OESense is compatible with earbuds' fundamental functionalities (e.g., music playback and phone calls). In terms of energy, OESense consumes 746 mW during data recording and recognition, and it has a response latency of 40.85 ms for gesture recognition. Our analysis indicates that this overhead is acceptable and that OESense has the potential to be integrated into future earbuds.
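To make the sensing idea concrete, the sketch below shows how a step counter of the kind described above could operate on an occluded in-ear recording: band-limit the signal to the low frequencies that the occlusion effect amplifies, then detect footstep peaks in a smoothed energy envelope. This is an illustrative sketch only, not the authors' implementation; the 16 kHz sampling rate, 50 Hz cutoff, envelope window, refractory period, and threshold are assumed values, and the count_steps helper is hypothetical.

```python
# Illustrative sketch (not the paper's pipeline). Assumes a mono in-ear
# microphone recording in which body-conducted footstep energy is
# concentrated at low frequencies due to the occlusion effect.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(audio: np.ndarray, fs: int = 16000) -> int:
    """Rough step count from an occluded in-ear recording (illustrative only)."""
    # Keep only the low-frequency band boosted by the occlusion effect (assumed 50 Hz cutoff).
    b, a = butter(4, 50, btype="low", fs=fs)
    low = filtfilt(b, a, audio)

    # Smooth the rectified signal (~100 ms window) so each footstep yields one peak.
    win = fs // 10
    envelope = np.convolve(np.abs(low), np.ones(win) / win, mode="same")

    # Enforce a ~0.3 s refractory period so a single stride is not counted twice;
    # the height threshold is an arbitrary placeholder, not a tuned value.
    peaks, _ = find_peaks(envelope,
                          distance=int(0.3 * fs),
                          height=envelope.mean() + envelope.std())
    return len(peaks)
```

In practice such thresholds and cutoffs would need to be calibrated per device and ear fit; the sketch only conveys the overall low-pass-then-peak-detection structure.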


Published in

MobiSys '21: Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services
June 2021, 528 pages
ISBN: 9781450384438
DOI: 10.1145/3458864

Copyright © 2021 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History: Published 24 June 2021

Qualifiers: research-article

Acceptance Rates

MobiSys '21 paper acceptance rate: 36 of 166 submissions, 22%
Overall acceptance rate: 274 of 1,679 submissions, 16%
