

Published: 29 May 2023

Active mechanical haptics with high-fidelity perceptions for immersive virtual reality

  • Zhuang Zhang,
  • Zhenghao Xu,
  • Luoqian Emu,
  • Pingdong Wei,
  • Sentao Chen (ORCID: orcid.org/0000-0002-3675-2998),
  • Zirui Zhai (ORCID: orcid.org/0000-0002-6353-6537),
  • Lingyu Kong,
  • Yong Wang &
  • Hanqing Jiang (ORCID: orcid.org/0000-0002-1947-4420)

Nature Machine Intelligence volume 5, pages 643–655 (2023)

4886 Accesses · 23 Citations · 45 Altmetric

  • Electrical and electronic engineering
  • Electronic devices
  • Mechanical engineering

Human-centred mechanical sensory perceptions enable us to immerse ourselves in the physical environment by actively touching or holding objects so that we may feel their existence (that is, ownership) and their fundamental properties (for example, stiffness or hardness). In a virtual environment, replicating these active perceptions can create authentic haptic experiences, serving as an essential supplement to visual and auditory experiences. We present here a first-person, human-triggered haptic device enabled by curved origami that allows humans to actively experience touching objects with stiffness perceptions ranging from soft to hard and from positive to negative. This device represents a substantial shift away from the third-person, machine-triggered and passive haptics currently in use. The device is synchronized with the virtual environment by changing its configuration to adapt to various interactions, emulating body-centred physical perceptions including hardness, softness and sensations of crushing and weightlessness. Quantitative evaluations demonstrate that the active haptic device creates a highly immersive virtual environment, outperforming existing vibration-based passive devices. These concepts and resulting technologies create new opportunities and application potential for a more authentic virtual world.


Similar content being viewed by others

Wearable haptics for virtual reality and beyond

Bioinspired adaptable multiplanar mechano-vibrotactile haptic system

Effect of 2.5D haptic feedback on virtual object perception via a stylus

Data availability

All data needed to evaluate the conclusions in the paper are present in the Article and/or the Supplementary Information. The data collected during the experiments with the volunteers can be downloaded from https://github.com/EMLQ/AMH. The data are also available via Zenodo at https://doi.org/10.5281/zenodo.7789004 (ref. 51).

Code availability

The code that supports the active mechanical haptic system within this paper and other findings of this study is available from https://github.com/EMLQ/AMH. The code is also available via Zenodo at https://doi.org/10.5281/zenodo.7789004 (ref. 51).

References

Sparkes, M. What is a metaverse. New Sci. 251, 18 (2021).

Ackerman, J. M., Nocera, C. C. & Bargh, J. A. Incidental haptic sensations influence social judgments and decisions. Science 328, 1712–1715 (2010).

Yu, X. et al. Skin-integrated wireless haptic interfaces for virtual and augmented reality. Nature 575, 473–479 (2019).

Biswas, S. & Visell, Y. Haptic perception, mechanics, and material technologies for virtual reality. Adv. Funct. Mater. 31, 2008186 (2021).

Pyun, K. R., Rogers, J. A. & Ko, S. H. Materials and devices for immersive virtual reality. Nat. Rev. Mater. 7, 841–843 (2022).

Li, S., Bai, H., Shepherd, R. F. & Zhao, H. Bio-inspired design and additive manufacturing of soft materials, machines, robots, and haptic interfaces. Angew. Chem. Int. Ed. 58, 11182–11204 (2019).

Jung, Y. H., Kim, J.-H. & Rogers, J. A. Skin-integrated vibrohaptic interfaces for virtual and augmented reality. Adv. Funct. Mater. 31, 2008805 (2021).

Zhu, M. L. et al. Haptic-feedback smart glove as a creative human–machine interface (HMI) for virtual/augmented reality applications. Sci. Adv. 6, eaaz8693 (2020).

Liu, Y. et al. Electronic skin as wireless human–machine interfaces for robotic VR. Sci. Adv. 8, eabl6700 (2022).

Jung, Y. H. et al. A wireless haptic interface for programmable patterns of touch across large areas of the skin. Nat. Electron. 5, 374–385 (2022).

Sun, Z., Zhu, M., Shan, X. & Lee, C. Augmented tactile-perception and haptic-feedback rings as human–machine interfaces aiming for immersive interactions. Nat. Commun. 13, 5224 (2022).

Kim, D. et al. Actuating compact wearable augmented reality devices by multifunctional artificial muscle. Nat. Commun. 13, 4155 (2022).

Choi, I., Ofek, E., Benko, H., Sinclair, M. & Holz, C. CLAW: a multifunctional handheld haptic controller for grasping, touching, and triggering in virtual reality. In CHI '18: Proc. 2018 CHI Conference on Human Factors in Computing Systems 654 (Association for Computing Machinery, 2018).

Scheggi, S., Meli, L., Pacchierotti, C. & Prattichizzo, D. Touch the virtual reality: using the leap motion controller for hand tracking and wearable tactile devices for immersive haptic rendering. In ACM SIGGRAPH 2015 Posters 31 (Association for Computing Machinery, 2015).

Giraud, F. H., Joshi, S. & Paik, J. Haptigami: a fingertip haptic interface with vibrotactile and 3-DoF cutaneous force feedback. IEEE Trans. Haptics 15, 131–141 (2022).

Pezent, E., Agarwal, P., Hartcher-O’Brien, J., Colonnese, N. & O’Malley, M. K. Design, control, and psychophysics of Tasbi: a force-controlled multimodal haptic bracelet. IEEE Trans. Robot. 38, 2962–2978 (2022).

Yao, K. M. et al. Encoding of tactile information in hand via skin-integrated wireless haptic interface. Nat. Mach. Intell. 4, 893–903 (2022).

Leroy, E., Hinchet, R. & Shea, H. Multimode hydraulically amplified electrostatic actuators for wearable haptics. Adv. Mater. 32, 2002564 (2020).

Chinello, F., Pacchierotti, C., Malvezzi, M. & Prattichizzo, D. A three revolute–revolute–spherical wearable fingertip cutaneous device for stiffness rendering. IEEE Trans. Haptics 11, 39–50 (2018).

Xu, H., Peshkin, M. A. & Colgate, J. E. UltraShiver: lateral force feedback on a bare fingertip via ultrasonic oscillation and electroadhesion. IEEE Trans. Haptics 12, 497–507 (2019).

Steed, A., Ofek, E., Sinclair, M. & Gonzalez-Franco, M. A mechatronic shape display based on auxetic materials. Nat. Commun. 12, 4758 (2021).

Peng, Y. et al. Elastohydrodynamic friction of robotic and human fingers on soft micropatterned substrates. Nat. Mater. 20, 1707–1711 (2021).

Choi, C. et al. Surface haptic rendering of virtual shapes through change in surface temperature. Sci. Robot. 7, eabl4543 (2022).

Li, X. et al. Nanotexture shape and surface energy impact on electroadhesive human–machine interface performance. Adv. Mater. 33, 2008337 (2021).

Mintchev, S., Salerno, M., Cherpillod, A., Scaduto, S. & Paik, J. A portable three-degrees-of-freedom force feedback origami robot for human–robot interactions. Nat. Mach. Intell. 1, 584–593 (2019).

Pacchierotti, C. et al. Wearable haptic systems for the fingertip and the hand: taxonomy, review, and perspectives. IEEE Trans. Haptics 10, 580–600 (2017).

Xiong, Q. et al. So-EAGlove: VR haptic glove rendering softness sensation with force-tunable electrostatic adhesive brakes. IEEE Trans. Robot. 38, 3450–3462 (2022).

Giri, G. S., Maddahi, Y. & Zareinia, K. An application-based review of haptics technology. Robotics 10, 29 (2021).

Choi, I., Hawkes, E. W., Christensen, D. L., Ploch, C. J. & Follmer, S. Wolverine: a wearable haptic interface for grasping in virtual reality. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) 986–993 (IEEE, 2016).

Martinez, M. O. et al. Open source, modular, customizable, 3-D printed kinesthetic haptic devices. In 2017 IEEE World Haptics Conference (WHC) 142–147 (IEEE, 2017).

Jadhav, S., Majit, M. R. A., Shih, B., Schulze, J. P. & Tolley, M. T. Variable stiffness devices using fiber jamming for application in soft robotics and wearable haptics. Soft Robot. 9, 173–186 (2022).

Ernst, M. O. & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433 (2002).

Moscatelli, A. et al. Touch as an auxiliary proprioceptive cue for movement control. Sci. Adv. 5, eaaw3121 (2019).

Kieliba, P., Clode, D., Maimon-Mor, R. O. & Makin, T. R. Robotic hand augmentation drives changes in neural body representation. Sci. Robot. 6, eabd7935 (2021).

Peck, J. & Shu, S. B. The effect of mere touch on perceived ownership. J. Consum. Res. 36, 434–447 (2009).

Brasel, S. A. & Gips, J. Tablets, touchscreens, and touchpads: how varying touch interfaces trigger psychological ownership and endowment. J. Consum. Psychol. 24, 226–233 (2014).

Felton, S. Origami for the everyday. Nat. Mach. Intell. 1, 555–556 (2019).

Rus, D. & Tolley, M. T. Design, fabrication and control of origami robots. Nat. Rev. Mater. 3, 101–112 (2018).

Hawkes, E. et al. Programmable matter by folding. Proc. Natl Acad. Sci. USA 107, 12441–12445 (2010).

Zhai, Z., Wang, Y. & Jiang, H. Origami-inspired, on-demand deployable and collapsible mechanical metamaterials with tunable stiffness. Proc. Natl Acad. Sci. USA 115, 2032–2037 (2018).

Melancon, D., Gorissen, B., Garcia-Mora, C. J., Hoberman, C. & Bertoldi, K. Multistable inflatable origami structures at the metre scale. Nature 592, 545–550 (2021).

Felton, S., Tolley, M., Demaine, E., Rus, D. & Wood, R. A method for building self-folding machines. Science 345, 644–646 (2014).

Faber, J. A., Arrieta, A. F. & Studart, A. R. Bioinspired spring origami. Science 359, 1386–1391 (2018).

Wu, S. et al. Stretchable origami robotic arm with omnidirectional bending and twisting. Proc. Natl Acad. Sci. USA 118, e2110023118 (2021).

Zhai, Z., Wang, Y., Lin, K., Wu, L. & Jiang, H. In situ stiffness manipulation using elegant curved origami. Sci. Adv. 6, eabe2000 (2020).

Su, H. et al. Physical human–robot interaction for clinical care in infectious environments. Nat. Mach. Intell. 3, 184–186 (2021).

Li, G. et al. Self-powered soft robot in the Mariana Trench. Nature 591, 66–71 (2021).

Panzirsch, M. et al. Exploring planet geology through force-feedback telemanipulation from orbit. Sci. Robot. 7, eabl6307 (2022).

Liu, K., Pratapa, P. P., Misseroni, D., Tachi, T. & Paulino, G. H. Triclinic metamaterials by tristable origami with reprogrammable frustration. Adv. Mater. 34, 2107998 (2022).

Song, Z. et al. Origami lithium-ion batteries. Nat. Commun. 5, 3140 (2014).

Zhang, Z. et al. Active mechanical haptics with high-fidelity perceptions for immersive virtual reality. Zenodo https://doi.org/10.5281/zenodo.7789004 (2023).


Acknowledgements

We thank the Research Center for Industries of the Future (RCIF) at Westlake University and the Westlake Education Foundation for supporting this work. Z. Zhang acknowledges support from the National Natural Science Foundation of China (grant 52205031). Y.W. acknowledges support from the National Natural Science Foundation of China (grants 11872328 and 12132013). L.K. acknowledges support from the Key Project of Zhejiang Lab (G2021NB0AL03) and the National Natural Science Foundation of China (grant 52205034). We also thank Y. Jiang for helping with human experiments and data analysis, and Y. Huang for helping with the design of the electronic and control system.

Author information

These authors contributed equally: Zhuang Zhang, Zhenghao Xu.

Authors and Affiliations

School of Engineering, Westlake University, Hangzhou, China

Zhuang Zhang, Luoqian Emu, Pingdong Wei, Sentao Chen & Hanqing Jiang

Westlake Institute for Advanced Study, Hangzhou, China

Zhuang Zhang, Pingdong Wei & Hanqing Jiang

School of Aeronautics and Astronautics, Zhejiang University, Hangzhou, China

Zhenghao Xu & Yong Wang

School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ, USA

Intelligent Robot Research Center, Zhejiang Lab, Hangzhou, China

Lingyu Kong

Research Center for Industries of the Future, Westlake University, Hangzhou, China

Hanqing Jiang


Contributions

Z. Zhang, Z.X. and H.J. developed the concept. Z. Zhang, Z.X., L.E. and P.W. designed and prototyped the devices. Z. Zhang and L.E. developed the electronics, the control system and the software. Z. Zhang, Z.X., L.E., P.W., S.C., Z. Zhai, Y.W. and H.J. carried out experiments and analysis. Z. Zhang, Z.X., L.E. and L.K. collected the user data. Z. Zhang, Z.X. and H.J. wrote the manuscript.

Corresponding author

Correspondence to Hanqing Jiang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Xinge Yu and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Force–displacement relationships of curved origami modules with different angles α.

a–c, Normalized force–displacement relationships of curved origami modules with different initial folding angles β, for α = 50°, 60° and 70°. d–f, Normalized force–displacement relationships of curved origami modules with fixed initial folding angle β = 120° and different control angles Δβ, for α = 50°, 60° and 70°.

Extended Data Fig. 2 Fabrication of the curved origami.

a, Fabrication of curved origami from plastic or metal sheets: step a1, cut the sheets according to the 2D curved origami pattern using an engraving machine; step a2, peel the origami patterns off the substrate; step a3, manually fold the 2D pattern along the curved crease to form the 3D configuration with bending panels; step a4, thread a cable through two holes in the panels; step a5, knot the cable behind one side of the origami panel to form a motion constraint; step a6, pull and release the cable to control the folding of the curved origami. b, Fabrication of curved origami with a silver nanowire (AgNW)-coated layer for sensing: step b1, prepare the substrate and the AgNW suspension; step b2, pour the AgNW suspension onto the surface of the substrate; step b3, dry the suspension in an oven for 6 hours to obtain the AgNW-coated substrate; steps b4–b9, the same process as steps a1–a6.

Extended Data Fig. 3 SEM images of the AgNWs-coated origami.

SEM images of AgNWs coated on the surface of a PET film under initial (a), outward (b) and inward (c) bending, at low (top) and high (bottom) magnification. The scale bars are 2 μm (yellow) and 1 μm (red), respectively. Compared with the initial state, outward bending shows a loosely packed structure while inward bending shows a densely packed structure, indicating a resistance increase for outward bending and a decrease for inward bending.
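The resistance trend in this caption suggests a minimal way to infer bending direction from the AgNW layer's resistance. The sketch below is illustrative only; the function name, the baseline `r0` and the tolerance `tol` are assumptions, not the paper's actual sensing pipeline:

```python
def bend_direction(r, r0, tol=0.02):
    """Classify panel bending from the AgNW layer's measured resistance.

    r  : current resistance; r0 : resistance in the flat (initial) state.
    Outward bending loosens the nanowire network, so resistance rises;
    inward bending densifies it, so resistance falls. `tol` is an assumed
    dead band to reject small fluctuations around the baseline.
    """
    if r > r0 * (1 + tol):
        return "outward"
    if r < r0 * (1 - tol):
        return "inward"
    return "flat"
```

With a 2% dead band, `bend_direction(110.0, 100.0)` classifies a 10% resistance rise as outward bending.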

Extended Data Fig. 4 Mechanical structure of the in-hand haptic device.

a, Exploded-view schematic of the spherical in-hand device with five synchronously controlled origami buttons. The slide guide confines the compression range (10 mm) of each origami button. Universal joints transmit rotations about the vertical axis between the top and bottom plates of the curved origami modules. b, Schematic of the actuation system. The cable is actuated by an on-board micromotor through worm gears. A tension roller pre-tensions the cable to prevent slip between the cable and the rollers. c, Synchronous actuation of the origami buttons. A single-input, multiple-output (SIMO) actuation strategy is constructed via cable routing. The red arrow denotes the driving roller; the blue arrows denote follow-up rollers.

Extended Data Fig. 5 Stiffness manipulation of the in-hand device.

a, Synchronous control of the folding angles of the curved origami buttons. b–e, Optical images and the corresponding force–displacement relationships of a curved origami button with folding angle β = 120° under different control angles Δβ = 0°, 30°, 60° and 90°.
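The caption describes a one-to-one mapping from control angle Δβ to rendered stiffness, which lends itself to a simple calibration lookup. The sketch below is a hypothetical illustration: the stiffness values in the table are invented placeholders, not measurements from the paper, which instead reports full force–displacement curves.

```python
# Hypothetical calibration table: control angle Δβ (degrees) -> rendered
# stiffness (N/mm). These numbers are illustrative placeholders only.
STIFFNESS_BY_DELTA_BETA = {0: -0.5, 30: 0.2, 60: 1.0, 90: 2.5}

def control_angle_for(target_stiffness):
    """Return the control angle whose tabulated stiffness is closest
    to the stiffness requested by the virtual scene."""
    return min(STIFFNESS_BY_DELTA_BETA,
               key=lambda db: abs(STIFFNESS_BY_DELTA_BETA[db] - target_stiffness))
```

A virtual scene requesting a stiffness of 1.1 N/mm would resolve to Δβ = 60° under this placeholder table; negative targets select the negative-stiffness entry.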

Extended Data Fig. 6 Recorded EMG of the upper limb.

a–d, Raw EMG data with a sampling frequency of 2,000 Hz and the corresponding RMS values computed over 0.25 s intervals. The data were recorded while users grasped four objects of different stiffness under four scenarios: grasping real objects (a), grasping the present haptic device (b), grasping virtual objects through hand gestures (c) and grasping a conventional joystick (d).
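The windowed RMS described in this caption can be sketched as follows; the sampling rate (2,000 Hz) and window length (0.25 s) come from the caption, while the function itself is an illustrative implementation, not the authors' processing code:

```python
import numpy as np

def windowed_rms(emg, fs=2000, window_s=0.25):
    """RMS envelope of a raw EMG trace over non-overlapping windows.

    fs is the sampling frequency in Hz and window_s the window length
    in seconds, per the caption's settings (500 samples per window).
    """
    n = int(fs * window_s)            # samples per window
    emg = np.asarray(emg, dtype=float)
    usable = len(emg) - len(emg) % n  # drop any trailing partial window
    windows = emg[:usable].reshape(-1, n)
    return np.sqrt(np.mean(windows ** 2, axis=1))
```

For a 1,000-sample trace this yields two RMS values, one per 0.25 s window; a constant-amplitude input returns that amplitude as its RMS.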

Extended Data Fig. 7 Mechanical structure of the stepping haptic device.

a, Exploded-view schematic of the stepping device with a synchronously controlled curved origami tessellation. b, Schematic of the multihead worm-gear transmission. The cable is actuated by a motor through worm gears. Four gears are synchronously driven by one four-head worm, so that four cables are pulled/released synchronously by the motor. c, Schematic of the multiknotted cable-driven transmission. Five knots are evenly spaced on each cable, so that the folding of five origami modules is controlled synchronously by one cable. The red arrow denotes the input; the blue arrows denote follow-ups. d, Synchronous actuation of the origami tessellation.

Extended Data Fig. 8 Snapshots of the immersive whole-body haptic experiences on the stepping device with various virtual scenarios.

a, The user experienced low-value positive stiffness in a grassland scenario, with his feet readily sinking into the virtual ground. b, The user experienced negative stiffness in an icy-surface scenario: his whole body remained still under light active stepping but underwent a real fall under hard active stepping. c, The user experienced high-value positive stiffness in a rigid-avenue scenario, with his whole body remaining still upon active stepping.

Extended Data Fig. 9 Stiffness manipulation of the stepping device.

a, Synchronous control of the folding angles of the curved origami tessellation. b–d, Optical images of the stepping processes and the corresponding force–displacement relationships of a curved origami module inside the stepping device with folding angle β = 120° under different control angles Δβ = 0°, 35° and 60°. Solid lines are measured results; dashed lines denote theoretical ones.

Extended Data Fig. 10 Recorded EMG of the lower limb.

a–c, Raw EMG data with a sampling frequency of 2,000 Hz and the corresponding RMS values computed over 0.25 s intervals. The data were recorded while users stepped on the stepping device at three different stiffness settings: high-value positive stiffness simulating a rigid avenue (a), negative stiffness simulating an icy surface (b) and low-value positive stiffness simulating soft grassland (c).

Supplementary information

Supplementary Information

Supplementary Notes 1–3 and Figs. 1–10.

Reporting Summary

Supplementary Video 1

Active mechanical haptics with the in-hand device.

Supplementary Video 2

Stiffness manipulation of the in-hand device.

Supplementary Video 3

Active mechanical haptics with the stepping device.

Supplementary Video 4

Stiffness manipulation of the stepping device.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article

Zhang, Z., Xu, Z., Emu, L. et al. Active mechanical haptics with high-fidelity perceptions for immersive virtual reality. Nat. Mach. Intell. 5, 643–655 (2023). https://doi.org/10.1038/s42256-023-00671-z


Received: 20 November 2022

Accepted: 05 May 2023

Published: 29 May 2023

Issue Date: June 2023

DOI: https://doi.org/10.1038/s42256-023-00671-z


This article is cited by

Frequency-encoded eye tracking smart contact lens for human–machine interaction

  • Hengtian Zhu

Nature Communications (2024)

A virtual reality paradigm simulating blood donation serves as a platform to test interventions to promote donation

  • Lisa A. Williams
  • Kallie Tzelios
  • Tanya E. Davison

Scientific Reports (2024)

Ambient haptics: bilateral interaction among human, machines and virtual/real environments in pervasive computing era

  • Liangyue Han
  • Naqash Afzal
  • Dangxiao Wang

CCF Transactions on Pervasive Computing and Interaction (2024)

Artificial intelligence-powered electronic skin

  • Changhao Xu
  • Samuel A. Solomon

Nature Machine Intelligence (2023)


An Application-Based Review of Haptics Technology

  • February 2021
  • Robotics 10(1):29

  • Gowri Shankar Giri (Ryerson University)
  • Yaser Maddahi (Tactile Robotics)
  • Kourosh Zareinia (Ryerson University)
Abstract and Figures

(A) Phantom Desktop (TouchX). (B) Phantom Omni (Touch). (C) Modified Phantom Premium for neuroArm. (D) Omega 3.



Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic

Affiliations

  • 1 Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran.
  • 2 Department of Electrical Engineering, University of Isfahan, Isfahan, Iran.
  • 3 Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran.
  • 4 School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran.
  • 5 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada.
  • PMID: 34476241
  • PMCID: PMC8407078
  • DOI: 10.3389/frobt.2021.612949

This paper examines how haptic technology, virtual reality and artificial intelligence help to reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Training medical skills has therefore always been a challenging task for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, observe procedures and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method requires, the people involved, including both novice and expert surgeons, face a potential risk of viral infection. This survey reviews recent technological breakthroughs and new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutes during the COVID-19 pandemic and similar crises.

Keywords: COVID-19 pandemic; artificial intelligence; haptic; medical training; virtual reality.

Copyright © 2021 Motaharifar, Norouzzadeh, Abdi, Iranfar, Lotfi, Moshiri, Lashay, Mohammadi and Taghirad.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Process of procedural skill development in medical training and surgery.

VR/AR advantages in medical training.

VR/AR applications in surgical training.

Single user vs. dual user haptic systems. (A) Single user haptic system. (B)…

A general framework for surgical skill assessment.

Example of two-stage and single-stage detectors Kathuria (2021). (A) Two-stage detector (RCNN). (B)…

Similar articles

  • Technology-enhanced learning in orthopaedics: Virtual reality and multi-modality educational workshops may be effective in the training of surgeons and operating department staff. Hall AJ, Walmsley P. Hall AJ, et al. Surgeon. 2023 Aug;21(4):217-224. doi: 10.1016/j.surge.2022.04.009. Epub 2022 May 24. Surgeon. 2023. PMID: 35624020
  • Virtual reality and haptic interfaces for civilian and military open trauma surgery training: A systematic review. Mackenzie CF, Harris TE, Shipper AG, Elster E, Bowyer MW. Mackenzie CF, et al. Injury. 2022 Nov;53(11):3575-3585. doi: 10.1016/j.injury.2022.08.003. Epub 2022 Aug 7. Injury. 2022. PMID: 36123192 Review.
  • From static web to metaverse: reinventing medical education in the post-pandemic era. Lewis KO, Popov V, Fatima SS. Lewis KO, et al. Ann Med. 2024 Dec;56(1):2305694. doi: 10.1080/07853890.2024.2305694. Epub 2024 Jan 23. Ann Med. 2024. PMID: 38261592 Free PMC article. Review.
  • Pre-graduation medical training including virtual reality during COVID-19 pandemic: a report on students' perception. De Ponti R, Marazzato J, Maresca AM, Rovera F, Carcano G, Ferrario MM. De Ponti R, et al. BMC Med Educ. 2020 Sep 25;20(1):332. doi: 10.1186/s12909-020-02245-8. BMC Med Educ. 2020. PMID: 32977781 Free PMC article.
  • Deep Learning-Based Haptic Guidance for Surgical Skills Transfer. Fekri P, Dargahi J, Zadeh M. Fekri P, et al. Front Robot AI. 2021 Jan 20;7:586707. doi: 10.3389/frobt.2020.586707. eCollection 2020. Front Robot AI. 2021. PMID: 33553246 Free PMC article.
  • A comprehensive study on unraveling the advances of immersive technologies (VR/AR/MR/XR) in the healthcare sector during the COVID-19: Challenges and solutions. Khan HU, Ali Y, Khan F, Al-Antari MA. Khan HU, et al. Heliyon. 2024 Jul 23;10(15):e35037. doi: 10.1016/j.heliyon.2024.e35037. eCollection 2024 Aug 15. Heliyon. 2024. PMID: 39157361 Free PMC article. Review.
  • Visualization in Anatomy Education. Patra A, Pushpa NB, Ravi KS. Patra A, et al. Adv Exp Med Biol. 2023;1406:171-186. doi: 10.1007/978-3-031-26462-7_8. Adv Exp Med Biol. 2023. PMID: 37016115
  • Application of Virtual Reality Systems in Bone Trauma Procedures. Ugwoke CK, Albano D, Umek N, Dumić-Čule I, Snoj Ž. Ugwoke CK, et al. Medicina (Kaunas). 2023 Mar 14;59(3):562. doi: 10.3390/medicina59030562. Medicina (Kaunas). 2023. PMID: 36984563 Free PMC article. Review.
  • The Impact of the Addition of a Virtual Reality Trainer on Skill Retention of Tourniquet Application for Hemorrhage Control Among Emergency Medical Technician Students: A Pilot Study. Arif A, Santana Felipes RC, Hoxhaj M, Light MB, Dadario NB, Cook B, Cataldo MJ, Jafri FN. Arif A, et al. Cureus. 2023 Jan 28;15(1):e34320. doi: 10.7759/cureus.34320. eCollection 2023 Jan. Cureus. 2023. PMID: 36865981 Free PMC article.
  • Public interest in the digital transformation accelerated by the COVID-19 pandemic and perception of its future impact. Park JY, Lee K, Chung DR. Park JY, et al. Korean J Intern Med. 2022 Nov;37(6):1223-1233. doi: 10.3904/kjim.2022.129. Epub 2022 Sep 26. Korean J Intern Med. 2022. PMID: 36153857 Free PMC article.

Related information

Linkout - more resources, full text sources.

  • Europe PubMed Central
  • Frontiers Media SA
  • PubMed Central

full text provider logo

  • Citation Manager

NCBI Literature Resources

MeSH PMC Bookshelf Disclaimer

The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.

  • A-Z Publications

Annual Review of Control, Robotics, and Autonomous Systems

Volume 1, 2018 · Review Article: Haptics: The Present and Future of Artificial Touch Sensation


  • Affiliations: 1 Department of Mechanical Engineering, Stanford University, Stanford, California 94305, USA; email: [email protected], [email protected]. 2 Department of Computer Science, University of Southern California, Los Angeles, California 90089, USA; email: [email protected]
  • Vol. 1:385-409 (Volume publication date May 2018) https://doi.org/10.1146/annurev-control-060117-105043
  • Copyright © 2018 by Annual Reviews. All rights reserved

This article reviews the technology behind creating artificial touch sensations and the relevant aspects of human touch. We focus on the design and control of haptic devices and discuss the best practices for generating distinct and effective touch sensations. Artificial haptic sensations can present information to users, help them complete a task, augment or replace the other senses, and add immersiveness and realism to virtual interactions. We examine these applications in the context of different haptic feedback modalities and the forms that haptic devices can take. We discuss the prior work, limitations, and design considerations of each feedback modality and individual haptic technology. We also address the need to consider the neuroscience and perception behind the human sense of touch in the design and control of haptic devices.
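The abstract's themes of device design and control can be made concrete with the canonical example from the haptic rendering literature: the impedance-type "virtual wall," in which the device measures the user's position and renders a restoring spring force whenever the endpoint penetrates a virtual surface. The sketch below is illustrative only — the function name, stiffness value, and units are assumptions for this example, not details taken from the article.

```python
def virtual_wall_force(x: float, wall: float = 0.0, k: float = 500.0) -> float:
    """Impedance-type rendering of a stiff virtual wall (illustrative sketch).

    x    -- measured device endpoint position (m)
    wall -- position of the virtual wall plane (m)
    k    -- rendered wall stiffness (N/m); real devices bound this by
            their achievable impedance range ("Z-width")

    Returns the force (N) commanded to the device actuators: zero in
    free space, and a spring force pushing the user back out of the
    wall when the endpoint penetrates it.
    """
    penetration = x - wall
    if penetration <= 0.0:
        return 0.0            # free space: render zero impedance
    return -k * penetration   # inside the wall: restoring spring force
```

In a real device this function would run inside a high-rate (typically ~1 kHz) control loop, since low update rates and sensor quantization limit the stiffest wall that can be rendered stably.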


  • 145.  Kohli L 2010 . Redirected touching: warping space to remap passive haptics. 2010 IEEE Symposium on 3D User Interfaces (3DUI) 129– 30 New York: IEEE [Google Scholar]



Touching at a Distance: Digital Intimacies, Haptic Platforms, and the Ethics of Consent

Madelaine Ley

1 Ethics/Philosophy Section, Department of Values, Technology and Innovation, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, Netherlands

Nathan Rambukkana

2 Communication Studies, Faculty of Arts, Wilfrid Laurier University, Waterloo, Canada

The last decade has seen a rise in technologies that allow humans to send and receive intimate touch across long distances. Drawing together platform studies, digital intimacy studies, phenomenology of touch, and ethics of technology, we argue that these new haptic communication devices require specific ethical consideration of consent. The paper describes several technologies, including Kiiroo teledildonics, the Kissenger, the Apple Watch, and Hey Bracelet, highlighting how the sense of touch is used in marketing to evoke a feeling of connection within the digital sphere. We then discuss the ambiguity of skin-to-skin touch and how it is further complicated in digital touch by remediation through platforms, companies, developers, manufacturers, cloud storage sites, the collection and use of data, research, satellites, and the internet. Lastly, we raise concerns about how consent to data collection and physical consent between users will be determined, draw on examples in virtual reality and sex robotics, and ultimately argue for further interdisciplinary research into this area.

Introduction

Touch is an important mode of communication for humans; we have the ability to offer support, love, disdain, or discomfort through small but meaningful contact. Until recently, it would have seemed absurd to say you could physically touch someone in another room or country; emerging haptic technologies, however, make this possible—albeit in a new way. With the rise of one-person households (Semega et al., 2019; Snell, 2017; Yeung & Cheung, 2015), increased rates of loneliness that some label an “epidemic” (Gerst-Emerson & Jayawardhana, 2015; Leigh-Hunt et al., 2017; Luo et al., 2012), and most recently the isolation introduced by COVID-19, the market for digital touch may be growing. As interpersonal touch enters the digital realm to augment other modes of online communication, there are new ethical considerations, and in this paper we specifically consider the role of physical and digital consent in the use of new haptic technologies.

The following paper takes a multidisciplinary approach, drawing on literature from digital intimacy studies (Andreassen et al., 2017 ; Dobson et al., 2018 ; Miguel, 2018 ; Rambukkana, 2015a , 2015b ), phenomenology and post-phenomenology (Al-Saji, 2010 ; Liberati, 2017 ; MacLaren, 2014 ), and ethics of technology (van de Poel, 2013 ; van Wynsberghe & Robbins, 2014 ; Nissenbaum, 2001 ). In drawing these fields of thought together, we are able to recognize the myriad ways in which contemporary intimacy is and will be shaped by the development of haptic technologies, and discern some of the unique ethical consent concerns that arise with the emergence, development, and futures of digital touch.

Digital Intimacy and Platform Studies

Digital intimacy studies unpacks the varied ways humans are intimate in our digitalized world, often—and increasingly—via platforms. As a framing, “digital intimacies” connects two often disparate fields of research. While digital culture studies frequently explores the interconnections, interactivity, and proximities that such technologies afford (Rheingold, 1993; Odzer, 1997; O’Riordan & Phillips, 2007; Baym, 2010; Paasonen, 2011), only rarely has this work been considered in relation to the study of “intimacies” specifically (McGlotten, 2007; Rambukkana, 2015a, 2015b; Rambukkana & Gautier, 2017; Attwood, 2017). Similarly, while critical intimacy studies addresses the impact of media on the intimate public sphere broadly (Berlant, 1997), its explorations of digital platforms remain rare (Miguel, 2018). As Rambukkana (2015b) has argued, the critical conjunction of “digital intimacies” connects these complementary fields.

This paper mobilizes the emerging fields of both digital intimacies and platform studies. Critical intimacy studies provides an important framework for the growing sociocultural phenomenon of digital intimacy, which research has shown (Penley & Ross, 1991 ; O’Riordan & Phillips, 2007 ; Paasonen, 2011 ; Hasinoff, 2015 ; Philips, 2016 ; Baym, 2018 ; Attwood, 2017 ) drives transformative change in how people develop and express intimate relationships using technology.

Scholars have variously defined intimacy. Berlant ( 1998 , p. 282) considers intimate relationships as “the close connections that matter, and on which we depend for living.” Bersani and Phillips ( 2008 , p. vii) define it as “the source and medium of personal development.” While “intimacy” has been discussed by religious and philosophical thinkers for millennia, the formulation favoured in critical intimacy studies emerged from queer theory. Queer theory (Jagose, 1996 ) has made significant contributions to our theoretical understanding of intimacy in modern contexts. In particular, it extended its study beyond kinship and sexuality studies to incorporate problematics on all scales, from internal dynamics of personalities and interests, to interpersonal and group dynamics, to macro-social organizations. 1 Viewed as a whole, intimacy studies have explored the problematics of identities (Bersani & Phillips, 2008 ; Butler, 1990 , 1993 , 2004 ), publics and counterpublics (Berlant, 1997 ; Calhoun, 1992 ; Fraser, 1992 ; Habermas, 1989 ; Warner, 2002 ), and societal privilege (Combahee River Collective, 1977 ; Clark, 2000 ; Heldke & O’Connor, 2004 ; Rambukkana, 2015a ). Most pertinent to this study, intimacies are relevant to friendships (Bickmore, 1998 ), networks (Zappavigna, 2011 ), kinship (Haraway, 1992 ; Harrison & Marsden, 2004 ), and sexualities (McGlotten, 2007 ; Rambukkana, 2015a ).

The majority of digital media research addresses forms of intimacies, if not always in those words. This includes problematics such as cyberlibertarianism (Barlow, 1996 ; Bey, 1991 ; Dyson et al., 1994 ), virtual communities (Rheingold, 1993 ; Barney, 2003 ; Feenberg & Barney, 2004 ), cybersex (Odzer, 1997 ), online publics (Kolko, 2003 ), women’s online space (Shade, 2003 ), queer online identities (O’Riordan & Phillips, 2007 ), digital pornography (Paasonen, 2011 ), and sexting panics (Hasinoff, 2015 ). However, this research has only recently been framed as platform studies.

Montfort and Bogost ( 2009 , p. 2) define a platform as “a computing system of any sort upon which further computing development can be done. It can be implemented entirely in hardware, entirely in software (which runs on any of several hardware platforms), or in some combination of the two.” Moreover, platform studies may include other peripheral texts, from the examination of underlying code, to terms of service, to packaging and advertising materials, even to the ownership structures of companies. A growing field, platform studies has recently expanded from video game systems (Bogost & Montfort, 2009 ; Jones & Thiruvathukal, 2012 ), to explore broader issues, such as algorithmic culture (Gillespie, 2014 ; Crawford, 2016 ) and social media platforms (Burgess & Matamoros-Fernández, 2016 ; Langlois & Elmer, 2013 ). However, this field is still in its infancy, with particularly acute research gaps in haptic interfaces; by focusing on digital intimacy and haptic platforms, this paper tries to address one of these gaps—especially in relation to intimate communication and questions of ethics and consent. This paper shifts the focus from connections made through visual, audio, and linguistic means to the haptic intimacies that can occur with the use of technology.

Haptic Technologies as Platforms for Intimacy

Many contemporary technologies involve haptic interaction between the user and the physical device. A text message accompanied with a vibration, for example, is a communication method that incorporates haptics and can cause an affective, even intimate, response in the recipient. When feeling a phone buzz, a person may respond with a jolting sensation of surprise, excitement, or dread depending on their context and/or expectations. However, the sense of touch in this example is a tool to indexically link someone to the primary mode of communication, which in this case is either visual (text or photo messages) or audio (voice message or phone call). This paper focuses instead on technologies that use the sense of touch as the primary mode of communication and strive towards mutual interaction between people across a distance and (usually) in real-time. While other senses, like vision or hearing, help facilitate the feeling of connection through touch, they play a more supportive role here. Below we provide a description of several such haptic platforms that facilitate a range of intimate relationships, drawing attention to the language used in their marketing to highlight how the sense of touch supposedly presents digitally. Later, we discuss two further technologies implicated with touch: VR and sex robotics, to flesh out a full discussion of the implications of digital touch consent.

Teledildonics

Howard Rheingold coined the term teledildonics in the 1990s, presciently predicting a future where one could use sex toys online to engage with others despite physical distance:

You probably will not use erotic telepresence technology in order to have sexual experiences with machines. Thirty years from now, when portable telediddlers become ubiquitous, people will use them to have sexual experiences with other people , at a distance, in combinations and configurations undreamed of by precybernetic voluptuaries ( 1991 , p. 345, emphasis in original)

Current teledildonic companies (such as Kiiroo) and companies that offer teledildonics as part of their product ranges (such as We-Vibe) are the manifestation of Rheingold’s prophecy. For example, Kiiroo produces fleshlights and vibrators that can be connected and synchronized online, so that couples can experience a sensation similar to sexual intercourse without skin-to-skin contact. Kiiroo has several devices available for purchase and while some pairings include only unidirectional control, where just one person can affect the force, speed, or pattern of their partner’s device, other combinations allow for the possibility of mutual control. These sets can be found on their webpage under the “Couples” category, where an advertisement reads: “The Kiiroo Couple Set 2 was designed to ease the distance and close the gap, because who wouldn’t like to feel their lover’s intimate touch when they are away?” (Kiiroo, 2021 ). An advertisement for another set reads, “The two-way connection enables you and your partner to share your pleasure from any distance. The built-in touch-sensitive technology allows for bi-directional control of connected devices so either of you can drive the action” (Kiiroo, 2021 ).

Kiiroo’s marketing focuses on closing distances, as well as mutual pleasure and agency. The language carefully makes clear that the couple using their technology is not having “sex” as it is traditionally understood, but rather are “mimicking intercourse in real-time” (Kiiroo, 2021 ). Indeed, the possibilities for a digital sexual encounter via Kiiroo are limited to the capabilities of the technology. For example, when using Kiiroo, intimate touch is centred around the genital area. The person using the vibrator has the ability to be more creative because they can move the device anywhere on the body, shifting pressure, and changing speed. The person using the fleshlight, however, is limited to a repetitive up-and-down motion that can shift in speed, stroke length, or pattern only. While the options, compared to skin-to-skin sexual interaction, are reduced, unique possibilities also emerge with the use of this technology—the most obvious being new ways to be intimate and feeling sexually connected across a distance. But there are also emergent perils, such as deception about who might be on the other end of the device, hacking the control feed, or even the illegal distribution of recorded intimate sessions—some of these potentially rising to “rape by deception” (Rambukkana & Gauthier, 2017 ; Sparrow & Karas, 2020 ).

Kissenger

The Lovotics website explains that the Kissenger is a device that focuses on intimate touch around the lips and makes it possible to digitally kiss someone through an attachment on a smartphone or a stand-alone device. Kissenger has three applications: human-to-human, human-to-robot, or human-to-virtual character. For our purposes we focus on the first, which is described as aiming to:

Bridge the physical gap between two intimately connected individuals. Kissenger plays the mediating role in the kiss interaction by imitating and recreating the lip movement of both users in real time using two digitally connected artificial lips. (Lovotics, n.d.)

Kissenger recreates the pressure of a person’s mouth on their partner’s corresponding device in real time, meaning that the constant attunement that occurs when kissing lip-to-lip might be (to some degree) experienced. As with Kiiroo’s and We-Vibe’s teledildonic sets, the couple’s experience is largely formed by the shape, texture, and affordances of the technology. Unlike a lip-to-lip kiss, for example, there is no moisture, warmth, or possibility of tongue involvement. Yet, people using Kissenger may become used to the digitally mediated interaction and experience it like a kiss or something that signifies a kiss. Again, with the development of this technology, a new kind of digital intimacy emerges and the experience of distance between loved ones shifts.

Apple Watch

Apple has long traded in haptic metaphor, apt for a company that pays such attention to tactile detail; from physical user interface (UI) design to the niceties of packaging, Apple products are always crafted with pleasing touch experiences in mind. The iPhone, which was a paradigm shift in both touch screen telephony and mobile internet, was also framed as a way to let “music lovers ‘touch’ their music” (Apple, 2007 ).

The iPad was initially marketed as a “magical and revolutionary” way to “physically interact with applications and content” (Apple, 2010 ), one often short-handed to literally “touching the internet” (e.g., Gonyea, 2010 ). In 2014–15 they started flirting even more deeply with touch, introducing their Taptic Engine with the MacBook Pro Force Touch trackpad, allowing haptic feedback (Apple, 2015a ); and 3D Touch with the 6S iPhone line, which added “new ways to navigate and experience iPhone by sensing pressure to enable new gestures” (Apple, 2015b ). But with the 2014 Apple Watch, their entrée into the wearables market, Apple showed a commitment to making haptics not only utilitarian, but intimate. The Apple Watch “blurs the boundary between physical object and user interface” (Apple, 2014 ), with an always-touching UI that mobilizes multiple haptic technologies: “Force Touch, a technology that senses the difference between a tap and a press”; “the Taptic Engine […] that […] discreetly enable[s] an entirely new vocabulary of alerts and notifications you can […] feel”; and “Digital Touch, [that allows you to send] something as personal as your own heartbeat” (Apple, 2014 ). Its blend of passive touch with the ability to “send” touch messages to other Apple Watch users marks this as arguably the next-gen haptic technology with the widest user base.

Hey Bracelet and Hey Touch

The focus of Hey is to create a haptic communication device that enables non-sexual intimate touch. The company produces two devices, Hey Bracelet and Hey Touch, which are described as both filling the haptic gap in digital communication and “adding a completely new dimension to relationships” (Hey, 2021 ). Hey’s marketing focuses on the capability touch has to communicate support or affection to a loved one, and positions its haptic technology as an opportunity to continue this over long distances. When using Hey Bracelets, two people have the sleek device around their wrists and when one person lightly squeezes theirs, the other person’s “produces a gentle squeeze” letting you “send a ‘real’ human touch across distance” (Hey, 2021 ). There is no vibration or buzz for bystanders to see or hear, so the interaction may remain private between the two people involved.

The Hey Touch is the latest development by Hey and offers more types of touch beyond the bracelet’s simple squeeze. Hey Touch is a small stylish square that one can wear as a necklace or tuck in their pocket. Connected through an application, one can send up to 200 types of touch to another user (Hey, 2021). Like Apple, Hey is trying to integrate its technology into everyday communication. The device can be linked to messages or pictures, and can be a part of a group chat: “Thanks to the integrated multicoloured LED, it’s clear who’s sending the touch. Say ‘I miss you’ or ‘good luck’. Let loved ones know you’re thinking of them, have fun with a group of friends or use it for all kinds of other occasions” (Hey, 2021). While the Hey Touch can be used to send a haptic message, it can also be used to enhance other digital communication already occurring.

The Ambiguity of Touch

Marketing of digital touch platforms, as shown above, mobilizes the affective quality of touch. Discussions on digital touch have unique difficulties in finding clear language, as the sense of touch is frequently used metaphorically. When affected by a loving gesture or piece of art, we say: “I was so touched”; after a mental breakthrough, we exclaim: “Then it hit me!”; in conveying something beyond rational thought, we describe: “I feel that…”; and in moments of disconnect, we admit: “I’m just not grasping it.” While the context in which such utterances arise may vary, the use of touch is similarly called upon to help articulate an experience or lack of affect or connection.

This mixing of metaphorical and literal is intentionally used for marketing—as with the play of the word “platform” (Gillespie, 2010), a doubling that makes “touching platforms” a particularly layered site of analysis. The ambiguity appeals to potential users’ emotions, while also suggesting that the device will allow them to literally touch their loved ones. While digital touch is made possible through developers’ research on aspects such as force and pressure, a user would not experience a squeeze on the Hey Bracelet in this highly quantified way. The squeeze would likely give the user a sensation that is physical and emotional, largely shaped by the context, relationship, and intimate history between the two people, intermixed with and framed by the meanings and discourses mobilized by the platform makers in their packaging, marketing, instruction materials, etc.

Beyond this affective tangling, consider also the fundamental paradox in the very idea of touching long-distance. Historically it would be nonsensical to suggest that you could squeeze someone’s hand from a distance. But by drawing on the metaphorical meaning of touch, i.e., being emotionally affected, and articulating this together with new haptic technologies, touching long-distance becomes both conceivable and possible—while remaining phenomenologically distinct from immediate skin-to-skin contact.

Digital touch technology differs from immediate physical contact in that it incorporates both distance as well as hardware and software intermediations. Yet, the ambiguous nature of old fashioned skin-to-skin touch remains. Feminist phenomenologist Alia Al-Saji draws on Husserl’s conception of touch and bodily awareness, developing the notion of sensings , a level of the body as a surface that is totally in touch with the world, making for an intimate relationship of “proximity and reciprocity” (2010, p. 19). According to this conception, the body is both being-touched and participating in the affective sense of being-touched (p. 23). Thus, the act of being-touched is not simply passive because the affect it inspires also requires activity. Al-Saji uses the passive–active dynamic to suggest that there is a blurring of self and other in inter-personal touch (p. 18). Though the body seems to move spontaneously, it does so in active-passivity and responds to the “affective pull” of the world it inhabits (p. 25). Bodies are receptive to their situation and move accordingly; a person’s hand will jerk away when surprised or will lighten its pressure when perceiving tenderness on another. When two sentient beings touch, a co-sensing occurs: when a person reaches out and touches another’s arm her hand is actively touching but also adapting to possibilities that allow the other to be touched. For example, she may have to reach up to caress the other’s face or move slowly in order to not scare him or her away. Connection in touch, then, is complex and involves blurry boundaries—one is both touching and touched, pursuing and responding, and therefore, as bodily awareness is interconnected with haptic experience, others can play a role in shaping a person’s embodied subjectivity.

Kym Maclaren provides further insight into the intersubjective ambiguity in touch by drawing on the Merleau-Pontian idea that the body is both subject and object (2014). Maclaren clarifies that this should not be understood as a dichotomy; instead she suggests thinking about the body as both sensible and sentient, where these “are essentially intertwined: our being-sentient is inseparable from our being-sensible” (2014, p. 98). Touch is a cooperative movement connecting the subject and the world. The touching agent is guided by the being or thing they intend to touch. Maclaren gives the following example: “To feel the softness of the fuzz on a baby’s head, one must not pat vigorously, but rather keep a certain distance and move one’s hand gently back and forth” (p. 99). Although one person may seem more active in the physical connection, touch involves a mutual activity of responding and being affected by the other. This physical connection between two beings is never quite static and each body provides both input and responses, thus neither being can be identified as wholly active or wholly passive.

The concepts of attunement and the blurring of active and passive are still present in this emergent digital touch connectivity. Despite what marketers would have people believe, the experience of using technologically mediated touch differs fundamentally from immediate bodily contact. It takes on a new form, shaped according to the technological capabilities of the device, the priorities of the developers, and ultimately by user-experience (UX). Instead of attuning immediately to the flesh of another, a user attunes to the technology and how the other’s body is perceived, represented, or remediated over a distance.

From a phenomenological standpoint, the devices become inhabited by users’ bodies, thereby changing the world around them (Liberati, 2017). At first, using the digital touch technologies will be clumsy and the devices will be noticeable; the remediation aspect weighs heavily at this stage. For example, as intimate as the Apple Watch’s Digital Touch affordances are, their UI is clunky and feeling the Taptic Engine beat a rhythm on your wrist is markedly different from a tap on your shoulder or a hand resting on the small of your back. However, with more use, and more finely grained design elements, the devices could become incorporated into the person’s body, not just as an unnoticed extension, but as part of the body schema itself. Devices like Hey Bracelet or the Apple Watch could become so integrated by the body as to become mundane ways of communicating—much as the tactile vibration of cell phones or game controllers, once novel, has already become—opening up new ways of being with others in the world. Technologies like the Kissenger and teledildonics like Kiiroo could transcend feeling like a simulation of touch and “feel” more like a real thing, if not the exact same kind of real thing. This fuzzy complexity, this intersubjective mutuality, the role of technology as intermediary and co-actor, and this potential to become something more fundamentally authentic than a marketing gimmick are also why consent issues loom large, as we discuss in the next section.

Ethics, Consent, and Haptic Platforms

Shifting from a strictly phenomenological lens, which focuses on the experience of subjects using haptic technology, we distinguish digital touch from immediate touch in order to highlight that there is a network involved in transmitting a touch message across distances. Digital touch involves platforms, companies, developers, manufacturers, cloud storage sites, the collection and use of data, research, satellites, and the internet. What may seem private and intimate in fact involves a huge expanse of activity undergone with unknown, even unknowable, partners. As Carey Jewitt et al. point out, “[d]igital touch does not only raise questions of trust in the relationship between people but also in the reliability, security and safety of the machines and systems that mediate touch” (2020, p. 116). This foregrounds the issue of how consent is determined in the privacy policies and data collection of these companies. While people may be unlikely to divulge their sexual preferences and activities to a stranger in person, the concern for privacy often wanes online. Pornography sites, for example, account for 30% of web traffic even though such sites can leak user data, such as gender/sexual identity and sexual interests (Maris et al., 2019). Also contributing to users’ uncritical engagement with online privacy, user agreements are often so long and complicated that people typically scroll through without reading and click “agree.” While consent is usually technically stated, we can hardly call it “informed.” The intimate information that can be collected through digital touch devices should be treated with appropriate sensitivity. Kiiroo, for example, states that it collects minimal data because the information is sensitive, but not all companies adhere to this ethos.

The information collected by haptic technologies also helps to create a comprehensive documentation of a person’s identity and body—this not only includes their intimate practices and desires, but can also include their body shape, temperature, texture, and heartbeat. Such metadata is crucial to platform capitalism with its increasing reliance on user data (Srnicek, 2016 , p. 39). This is even more pertinent when such “data-driven intimacy” (p. 279) is articulated to sexuality, what Flore and Pienaar ( 2020 ) term a “sexuotechnical-assemblage” (p. 279). A case in point is the March 2017 class action suit against Canadian company Standard Innovation, for failing to inform its customers that their wireless We-Vibe sex toy was quietly collecting user data such as “time and date of use, the user-selected vibration intensity level and pattern and the temperature of the device” (Perkel, 2017 , n.p.), resulting in a 5-million-dollar settlement (Perkel, 2017 ).

Beyond data consent, haptic platforms also raise questions about how physical touch can or should be determined. The #MeToo movement has spurred macro-level cultural conversations about the nature of sexual consent, ones that need careful consideration when translated to digital haptics. Feminist movements have pushed for a shift from the no-model of consent, where only a verbal “no” draws the boundary between acceptable and unacceptable, to a yes-model, where the absence of an ongoing “yes” indicates lack of consent (Anderson, 2005). Under this model, consent is not the default or settled with a once-given simple “yes” in either non-sexual or sexual touch (Anderson, 2005); it is ongoing, sometimes supported with verbal exchange, sometimes with signs of pleasure or sighs of comfort. It is revoked with a “no,” but also with a shift-away of the eyes, or a tense body. Yet, the yes-model of consent is not without its own troubling ambiguities. In response to the “Yes Means Yes” campaigns adopted in colleges across the United States, critics argue that men continually misread women’s body language (Anderson, 2005) or avoid checking in with their partner to avoid an explicit refusal (Jozkowski, 2015), and people tend to find that voicing consent explicitly can feel transactional and awkward (Willis et al., 2019). A feminist ethics of consent is not black and white, often shifting according to myriad factors including the type of relationship, context, mood, and timing. Accounting for this complexity will be a challenge for those who are developing and using technologies that mediate intimate touch between two or more people.

These ambiguities of consent continue into the digital realm. Even in the absence of more advanced haptic interaction, where one could feel another’s entire body, there are many possibilities for miscommunication and boundary violation. For example, one could increase the speed of a lover’s vibrator without sensing what, in close proximity, would usually be telling body language signals to slow down. Full communication through touch is only possible with attunement to the whole of the contextual elements. Like homographs, which can only be interpreted correctly within a full sentence, certain bodily sensations are only understood within the fullness of the body. Tension, for example, can alternately indicate ecstasy or discomfort. This distinction may be lost over digital devices. In addition, the world of digital haptics might lack scripts, conventions, mores, or laws. In the examples of virtual reality (VR) and sex robotics, we can see how such emergent technologies strain notions of consent.

Virtual Reality

Jordan Belamire describes a virtual groping encounter within QuiVR, a multiplayer HTC Vive VR game:

In between a wave of zombies and demons to shoot down, I was hanging out next to BigBro442, waiting for our next attack. Suddenly, BigBro442’s disembodied helmet faced me dead-on. His floating hand approached my body, and he started to virtually rub my chest. “Stop!” I cried. I must have laughed from the embarrassment and the ridiculousness of the situation. Women, after all, are supposed to be cool, and take any form of sexual harassment with a laugh. But I still told him to stop. This goaded him on, and even when I turned away from him, he chased me around, making grabbing and pinching motions near my chest. Emboldened, he even shoved his hand toward my virtual crotch and began rubbing. There I was, being virtually groped in a snowy fortress with my brother-in-law and husband watching. As it progressed, my joking comments toward BigBro442 turned angrier, and were peppered with frustrated obscenities. At first, my brother-in-law and husband laughed along with me—all they could see was the flat computer screen version of the groping. Outside the total immersion of the QuiVr world, this must have looked pretty funny, and definitely not real. Remember that little digression I told you about how the hundred-foot drop looked so convincing? Yeah. Guess what. The virtual groping feels just as real . Of course, you’re not physically being touched, just like you’re not actually one hundred feet off the ground, but it’s still scary as hell. (Belamire, 2016 ; emphasis added)

This is a clear violation: a lack of consent communicated in multiple ways, verbally, with gestures, and by moving away. But even in more everyday encounters, because the ability to communicate nuanced needs and desires through feeling each other’s bodies is lost, verbal communication would need to increase to ensure ongoing and enthusiastic consent. Relying on verbal communication alone is not enough, however, and reaching the fullness of the yes-model of consent requires further technological advancement, such as a haptic body suit that could transduce nuanced touch, with the Teslasuit as one example of technology moving in that direction (Teslasuit, 2019).

Sex Robotics

A limit case pertinent to these issues could be intimate relationships with robotic beings as “digital others” (Levy, 2007; Liberati, 2018; Viik, 2020). Levy (2020) notes that critics of sex robots, such as Richardson (2022), argue that a robot, as an object, could never consent to sex (p. 191). While this is an as-yet-unresolved ethical question whose answer depends on the shape of future technology, he also poses a parallel question in light of the #metoo-inflected discussion of consent: “How can a robot determine, with any degree of certainty, whether or not a proximate human wants or at least consents to sex?” (Levy, 2020, pp. 191, 197). He also raises the questions of which behaviours are acceptable from a robot and of who is responsible in the event of consent violations by a sex robot (Levy, 2020, p. 191). Pinning his analysis to the notion that advanced robotics could use “sexual scripts” to understand, or, problematically, even infer or “optimize,” consent (Levy, 2020, p. 192), he underlines how multiple senses play a role in the negotiation of consent, from verbal and visual elements to touch and body-language cues (Levy, 2020, p. 194). Most worrying of all, however, is Levy’s suggestion that sexual consent violations committed by an autonomous robot, in other words robotic sexual assault, should be treated as an accident only and remedied by vehicles such as insurance rather than legal proceedings (Levy, 2020, p. 198).

Other issues of consent carry over from the skin-to-skin to the digital world. An abusive or controlling partner or family member could incessantly send touches through an Apple Watch or Hey Bracelet, thereby extending physical control across distances. In some ways, however, digital touch technology might provide an empowering sense of control: being able to quickly shut a device down, or move it away from the body, easily removes one from the other’s touch. Of course, in many relationships marked by abuse and unsafe power dynamics, taking such a drastic stance of control by shutting off a digital touch device could feel nearly impossible. Since the effects of corporeal miscommunication or transgression can be damaging, mitigating their continuance in the digital world is important for users’ safety. The need to consider consent in haptic platforms is clear, but, as demonstrated above, its effects are still emergent and its scope unclear. Thinking through present and future technologies in which haptic consent is being wrangled with can help to unpack the stakes of these issues.

Conclusion: Negotiating Haptic Futures

With new touch technologies, it is now possible to have our touch dispersed. However, it is not possible to touch in exactly the way one would in person; this freedom is stifled by the mediating technology. With digital touch it is not merely two or more people adapting and attuning to each other’s bodily situations, but an entire infrastructure constructing the design and distribution of digital touch. This very infrastructure might be mobilized to facilitate digital and physical consent. A growing movement in the philosophy of technology argues that most decisions made in a technology’s lifespan, from conception to dissemination and use, result from implicit or explicit prioritization of particular values (Friedman & Nissenbaum, 1996; Nissenbaum, 2001; Van de Poel & Kroes, 2014; van Wynsberghe & Robbins, 2014). Values are defined broadly as “what is important to people in their lives” (Friedman & Hendry, 2019, p. 23), which depends “substantively on the interests and desires of human beings within the cultural milieu” (Friedman et al., 2006, p. 2). Privacy, safety, and efficiency are examples of values commonly prioritized in technologies. We argue that physical consent be added to this list when it comes to haptic technologies. Further research might take a Value-Sensitive Design (Friedman & Hendry, 2019) or Design for Values approach (van den Hoven et al., 2015) to consent in haptic technologies to see how it can be translated into design ethics. As we have shown, consent is complex and may be difficult to achieve through the mediation of technology, but design features, such as the addition of a disconnect button, might also enable it.

Clearly, when haptic technology interfaces with aspects of human intimacy, design needs to be proactive about ethical considerations, and not just because of the danger of bad press or lawsuits following lapses or hacks. In the game design field, for example, Jess Marcotte (2018) argues that games should be designed with intersectional feminist principles in mind. This extends to games incorporating touch, as Marcotte elaborates:

In Tune, which I co-designed, is an example of a game that was developed around intersectional feminist principles. In Tune is a game where players are asked to negotiate consent separately from sexual intimacy […]. Players are asked to perform a series of sustained poses, negotiating who will do what to whom, whether the pose needs to be modified, and whether they will perform the pose at all. The game positions consent as an intersectional feminist issue that affects our day-to-day interactions with others and that requires active, ongoing engagement rather than the binary, one-time giving and receiving of consent in sexually intimate contexts. One of the poses asks players to negotiate touching each other’s heads, which finds echoes in a game like Hair Nah, which is about (white) people touching a black woman’s hair without her permission [...]. Games like Hair Nah and In Tune demonstrate some of the ways that one can design intersectionally. (Marcotte, 2018)

Developing and incorporating a feminist ethics of touch, for example, would require that a device have the capacity to mediate or facilitate mutually created consent between users, even if one of the users is itself an artifact, as with sex robots. Linking humanities scholars working on consent with the teams building these technologies is crucial if digital touch is to keep pace with the evolution of consent in the non-digital sphere.

Finally, as communication will adapt to the technologies being used, so too may the nature of consent. For example, if there were an auto-disconnect button, it might soon take the place of a safe word, or a user might be able to individualize settings so that she only receives touch from certain people at certain times. The specific configurations of consent will be shaped by the unique configurations of users, technologies, and sociotechnical conventions, but tech developers need to provide the options to do so. Future academic work might look beyond affordances and the discursive framing of haptic devices to investigate empirically how these technologies are thought out by designers, debated by law- and policy-makers, and experienced by users. One limitation of the current work is that it asks many questions yet answers few. If digital intimacies are shaped by the collision of connection and technologies, how do we map and account for the new terrain of intimacy those encounters create? Generating potential answers through empirical means would empower further inclusion of interdisciplinary perspectives from ethicists, designers, lawmakers, and users in multiple settings, and could help shape digital touch technologies to avoid extending, along with modes of touching, the attendant potential harms of skin-to-skin touch. Similarly, designers should continue to draw on visions of possibilities and abuses from speculative fiction, and actual anecdotes of the same from critical journalism, to help nuance their products and tweak their affordances. While we do not argue that digital touch is the same as its analog analogue, we establish above that it is a real and material experience. As such, we need to take haptic platforms seriously, and thinking about consent is a key piece of that.
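The configurable consent described above (an auto-disconnect control, plus individualized settings governing who may send touch and when) can be made concrete with a small sketch. The following Python fragment is purely illustrative: no device discussed in this article exposes such an API, and every name in it is hypothetical. It simply shows how per-user preferences might gate incoming touch events, with disconnection revoking all consent at once.

```python
from dataclasses import dataclass, field
from datetime import time


@dataclass
class ConsentSettings:
    """Hypothetical per-user consent preferences for a haptic device."""
    allowed_senders: set = field(default_factory=set)  # who may send touch
    allowed_from: time = time(0, 0)                    # start of allowed hours
    allowed_until: time = time(23, 59)                 # end of allowed hours
    connected: bool = True                             # device connection state

    def disconnect(self) -> None:
        # The device-level analogue of a safe word: revokes all consent at once.
        self.connected = False

    def permits(self, sender: str, at: time) -> bool:
        # A touch event is delivered only while connected, only from an
        # approved sender, and only within the user's chosen hours.
        return (
            self.connected
            and sender in self.allowed_senders
            and self.allowed_from <= at <= self.allowed_until
        )
```

Even this toy model makes one design point visible: consent here is a standing, revocable state checked at every event, rather than a one-time grant, which mirrors the ongoing yes-model discussed above.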

Social Science Research Council of Canada, Insight Development Grant. “Exploring Digital Intimacies in the Emergent Field of Platform Studies through Haptics.” Wilfrid Laurier University.

Declarations

The authors declare that they have no conflict of interest.

1 Nationality and patriotism are significant forms of intimacy, for example: borders and boundaries of nations create insides and outsides, forms of belonging, and national narratives (Berlant, 1997). Zoning in cities strongly determines intimacies, defining or breaking up neighbourhoods, lifestyles, economies—such as zoning that shutters LGBTQ businesses (Warner, 1999). Intimacies can emerge from fandoms (Bury, 2005; Rambukkana, 2007), from musical subcultures (Thornton, 1995; Baym, 2010, 2018), from indiscretions (Kipnis, 2003; Wasserman, 2015), or from transactions (Feenberg & Bakardjieva, 2004; Zelizer, 2005).

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Madelaine Ley, Email: [email protected] .

Nathan Rambukkana, Email: nrambukkana@wlu.ca.

  • Al-Saji A. Bodies and sensings: On the uses of Husserlian phenomenology for feminist theory. Continental Philosophy Review. 2010;43(1):13–37. doi:10.1007/s11007-010-9135-8.
  • Anderson MJ. Negotiating sex. Southern California Law Review. 2005;78:101–138.
  • Andreassen R, Petersen MN, Harrison K, Raun T, editors. Mediated intimacies: Connectivities, relationalities and proximities. Routledge; 2017.
  • Apple. (2007). Apple reinvents the phone with iPhone. Apple. https://www.apple.com/newsroom/2007/01/09Apple-Reinvents-the-Phone-with-iPhone/.
  • Apple. (2010). Apple launches iPad. Apple. https://www.apple.com/newsroom/2010/01/27Apple-Launches-iPad/.
  • Apple. (2014). Apple unveils Apple Watch—Apple’s most personal device ever. Apple. https://www.apple.com/ca/newsroom/2014/09/09Apple-Unveils-Apple-Watch-Apples-Most-Personal-Device-Ever/.
  • Apple. (2015a). Apple introduces 15-inch MacBook Pro with Force Touch Trackpad & new $2,399 iMac with Retina 5K display. Apple. https://www.apple.com/ca/newsroom/2015/05/19Apple-Introduces-15-inch-MacBook-Pro-with-Force-Touch-Trackpad-New-1-999-iMac-with-Retina-5K-Display/.
  • Apple. (2015b). Apple introduces iPhone 6s & iPhone 6s Plus. Apple. https://www.apple.com/ca/newsroom/2015/09/09Apple-Introduces-iPhone-6s-iPhone-6s-Plus/.
  • Attwood F. Sex media. Polity; 2017.
  • Barlow, J. P. (1996). Declaration of the independence of Cyberspace [Web post]. http://homes.eff.org/~barlow/Declaration-Final.html.
  • Barney D. The vanishing table, or community in a world that is no world. In: Feenberg A, Barney D, editors. Community in the digital age. Rowman & Littlefield; 2003. pp. 31–52.
  • Baym NK. Personal connections in the digital age. Polity; 2010.
  • Baym NK. Playing to the crowd: Musicians, audiences, and the intimate work of connection. New York University Press; 2018.
  • Belamire, J. (2016). My first virtual reality groping. Mic. https://www.mic.com/articles/157415/my-first-virtual-reality-groping-sexual-assault-in-vr-harassment-in-tech-jordan-belamire.
  • Berlant L. The queen of America goes to Washington City: Essays on sex and citizenship. Duke; 1997.
  • Berlant L. Intimacy: A special issue. Critical Inquiry. 1998;24(2):281–288. doi:10.1086/448875.
  • Bersani L, Philips A. Intimacies. University of Chicago Press; 2008.
  • Bey H. T.A.Z.: The temporary autonomous zone, ontological anarchy, poetic terrorism. Autonomedia; 1991.
  • Bickmore, T. W. (1998). Friendship and intimacy in the digital age [Unpublished manuscript]. Media Lab, MIT. Retrieved May 6, 2011 from http://www.media.mit.edu/~bickmore/Mas714/finalReport.html.
  • Bogost, I., & Montfort, N. (2009). Platform studies: Frequently questioned answers. Paper presented at Digital Arts and Culture, Irvine, California.
  • Burgess J, Matamoros-Fernández A. Mapping sociocultural controversies across digital media platforms: One week of #gamergate on Twitter, YouTube, and Tumblr. Communication Research and Practice. 2016;2(1):79–96. doi:10.1080/22041451.2016.1155338.
  • Bury R. Cyberspaces of their own: Female fandoms online. Peter Lang; 2005.
  • Butler J. Gender trouble: Feminism and the subversion of identity. Routledge; 1990.
  • Butler J. Bodies that matter: On the discursive limits of “sex”. Routledge; 1993.
  • Butler J. Undoing gender. Routledge; 2004.
  • Calhoun C. Introduction: Habermas and the public sphere. In: Calhoun C, editor. Habermas and the public sphere. MIT Press; 1992. pp. 1–48.
  • Chopik WJ. The benefits of social technology use among older adults are mediated by reduced loneliness. Cyberpsychology, Behavior, and Social Networking. 2016. doi:10.1089/cyber.2016.0151.
  • Clark EO. Virtuous vice: Homoeroticism and the public sphere. Duke University Press; 2000.
  • Combahee River Collective. (1977). Combahee River Collective statement [Web post]. http://circuitous.org/scraps/combahee.html.
  • Crawford K. Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology & Human Values. 2016;41(1):77–92. doi:10.1177/0162243915589635.
  • Dobson AS, Robards B, Carah N, editors. Digital intimate publics and social media. Palgrave Macmillan; 2018.
  • Dyson, E., Gilder, E., Keyworth, G., & Toffler, A. (1994). Cyberspace and the American dream: A Magna Carta for the knowledge age. Future Insight, 1(2). http://www.pff.org/issues-pubs/futureinsights/fi1.2magnacarta.html.
  • Feenberg A, Bakardjieva M. Consumers or citizens? The online community debate. In: Feenberg A, Barney D, editors. Community in the digital age. Rowman and Littlefield; 2004. pp. 1–31.
  • Feenberg A, Barney D, editors. Community in the digital age. Rowman and Littlefield; 2004.
  • Flore J, Pienaar K. Data-driven intimacy: Emerging technologies in the (re)making of sexual subjects and “healthy” sexuality. Health Sociology Review. 2020;29(3):279–293. doi:10.1080/14461242.2020.1803101.
  • Fraser N. Rethinking the public sphere: A contribution to the critique of actually existing democracy. In: Calhoun C, editor. Habermas and the public sphere. MIT Press; 1992. pp. 109–142.
  • Friedman B, Hendry DG. Value sensitive design: Shaping technology with moral imagination. MIT Press; 2019.
  • Friedman B, Kahn P, Borning A, Zhang P, Galletta D. Value sensitive design and information systems. Springer. 2006. doi:10.1007/978-94-007-7844-3_4.
  • Friedman B, Nissenbaum H. Bias in computer systems. ACM Transactions on Information Systems. 1996;14(3):330–347. doi:10.1145/230538.230561.
  • Gerst-Emerson K, Jayawardhana J. Loneliness as a public health issue: The impact of loneliness on health care utilization among older adults. American Journal of Public Health. 2015;105:1013–1019. doi:10.2105/AJPH.2014.302427.
  • Gillespie T. The politics of ‘platforms’. New Media & Society. 2010;12(3):347–364. doi:10.1177/1461444809342738.
  • Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press.
  • Gonyea, C. (2010). The future of computing: iPad review [Weblog post]. https://chris.gonyea.com/2010/04/.
  • Habermas, J. (1989). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (T. Burger & F. Lawrence, Trans.). MIT Press.
  • Haraway, D. (1992). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs, and women: The reinvention of nature (pp. 149–182). Routledge.
  • Harrison K, Marsden D, editors. The state of affairs: Explorations in infidelity and commitment. Lawrence Erlbaum; 2004.
  • Hasinoff A. Sexting panic: Rethinking criminalization, privacy, and consent. University of Illinois Press; 2015.
  • Heldke L, O’Connor P, editors. Oppression, privilege, & resistance: Theoretical perspectives on racism, sexism, and heterosexism. McGraw Hill; 2004.
  • Hey. (2021). About Hey. Feel Hey. Retrieved July 26, 2021, from https://feelhey.com/pages/about#gref.
  • Jagose A. Queer theory: An introduction. New York University Press; 1996.
  • Jewitt C, Price S, Leder Mackley K, Yiannoutsou N, Atkinson D. Interdisciplinary insights for digital touch communication. SpringerOpen. 2020. doi:10.1007/978-3-030-24564-1_7.
  • Jones SE, Thiruvathukal GK. Codename revolution: The Nintendo Wii platform. MIT Press; 2012.
  • Jozkowski KN. “Yes means yes”? Sexual consent policy and college students. Change: The Magazine of Higher Learning. 2015;47(2):16–23. doi:10.1080/00091383.2015.1004990.
  • Kiiroo. (2021). Couples. Kiiroo. Retrieved July 26, 2021, from https://www.kiiroo.com/collections/for-couples.
  • Kipnis L. Against love: A polemic. Random House; 2003.
  • Kolko B, editor. Virtual publics: Policy and community in an electronic age. Columbia University Press; 2003.
  • Langlois G, Elmer G. The research politics of social media platforms. Culture Machine. 2013;14:1–17.
  • Leigh-Hunt N, Bagguley D, Bash K, Turner V, Turnbull S, Valtorta N, Caan W. An overview of systematic reviews on the public health consequences of social isolation and loneliness. Public Health. 2017;152:157–171. doi:10.1016/j.puhe.2017.07.035.
  • Levy D. Love + sex with robots: The evolution of human–robot relationships. Harper; 2007.
  • Levy D. Some aspects of human consent to sex with robots. Paladyn, Journal of Behavioral Robotics. 2020;11(1):191–198. doi:10.1515/pjbr-2020-0037.
  • Liberati N. Teledildonics and new ways of “being in touch”: A phenomenological analysis of the use of haptic devices for intimate relations. Science and Engineering Ethics. 2017;23(3):801–823. doi:10.1007/s11948-016-9827-5.
  • Liberati N. Being riajuu: A phenomenological analysis of sentimental relationships with “digital others”. In: Cheok AD, Levy D, editors. Love and sex with robots. LSR 2017. Springer; 2018. pp. 12–25.
  • Lovotics. (n.d.). Kissenger. Lovotics. Retrieved July 26, 2021, from https://sites.google.com/site/lovoticsrobot/kissenger.
  • Luo Y, Hawkley LC, Waite LJ, Cacioppo JT. Loneliness, health, and mortality in old age: A national longitudinal study. Social Science & Medicine. 2012;74(6):907–914. doi:10.1016/j.socscimed.2011.11.028.
  • MacLaren K. Touching matters: Embodiments of intimacy. Emotion, Space and Society. 2014;13:95–102. doi:10.1016/j.emospa.2013.12.004.
  • Marcotte, J. (2018). Queering control(lers) through reflective game design practices. Game Studies: The International Journal of Computer Game Research, 18(3). http://gamestudies.org/1803/articles/marcotte.
  • Maris, E., Libert, T., & Henrichsen, J. (2019). Tracking sex: The implications of widespread sexual data leakage and tracking on porn websites. Preprint.
  • McGlotten S. Virtual intimacies: Love, addiction, and identity @ The Matrix. In: O’Riordan K, Phillips D, editors. Queer online: Media technology and sexuality. Peter Lang; 2007. pp. 123–137.
  • Miguel C. Personal relationships and intimacy in the age of social media. Palgrave Macmillan; 2018.
  • Montfort N, Bogost I. Racing the beam: The Atari video game system. MIT Press; 2009.
  • Nissenbaum H. How computer systems embody values. Computer. 2001;34(3):120–119. doi:10.1109/2.910905.
  • Odzer C. Virtual spaces: Sex and the cyber citizen. Berkley; 1997.
  • O’Riordan K, Phillips D, editors. Queer online: Media technology and sexuality. Peter Lang; 2007.
  • Paasonen S. Carnal resonance: Affect and online pornography. MIT Press; 2011.
  • Penley C, Ross A, editors. Technoculture. University of Minnesota Press; 1991.
  • Perkel, C. (2017). Canadian sex toy maker accused of secretly collecting intimate data settles $5M lawsuit. Toronto Star. https://www.thestar.com/business/2017/03/14/canadian-sex-toy-maker-accused-of-secretly-collecting-intimate-data-settles-5m-lawsuit.html.
  • Phillips W. This is why we can’t have nice things: Mapping the relationship between online trolling and mainstream culture. MIT Press; 2016.
  • Rambukkana, N., & Gauthier, M. (2017). L’adultère à l’ère numérique: Une discussion sur la non/monogamie et le développement des technologies numériques à partir du cas Ashley Madison [Adultery in the digital era: A discussion about non/monogamy and digital technologies based on the website Ashley Madison]. Genre, Sexualité & Société, 17. http://journals.openedition.org/gss/3981.
  • Rambukkana N. Is slash an alternative media? ‘Queer’ heterotopias and the role of autonomous media space in radical world building. Affinities: A Journal of Radical Theory, Culture, and Action. 2007;1(1):69–85.
  • Rambukkana N. Fraught intimacies: Non/monogamy in the public sphere. UBC Press; 2015.
  • Rambukkana N. Hashtag publics: The power and politics of discursive networks. Peter Lang; 2015.
  • Rheingold H. Virtual reality. Touchstone; 1991.
  • Rheingold H. The virtual community: Homesteading on the electronic frontier. Addison-Wesley; 1993.
  • Richardson K. Sex robots: The end of love. Polity; 2022.
  • Semega, J., Kollar, M., Creamer, K., & Mohanty, A. (2019). Income and poverty in the United States: 2018. United States Census Bureau. https://www.census.gov/content/dam/Census/library/publications/2019/demo/p60-266.pdf.
  • Shade L. Gender and commodification of community: Women.com and gURL.com. In: Feenberg A, Barney D, editors. Community in the digital age. Rowman and Littlefield; 2003. pp. 143–160.
  • Snell KDM. The rise of living alone and loneliness in history. Social History. 2017;42(1):2–28. doi:10.1080/03071022.2017.1256093.
  • Sparrow R, Karas L. Teledildonics and rape by deception. Law, Innovation and Technology. 2020;12(1):175–204. doi:10.1080/17579961.2020.1727097.
  • Srnicek N. Platform capitalism. Polity; 2016.
  • Teslasuit. (2019). Teslasuit: Full body haptic VR suit for motion capture and training. Teslasuit. Retrieved August 14, 2019 from https://teslasuit.io.
  • Thornton, S. (1995). Club cultures: Music, media and subcultural capital. Polity.
  • Van de Poel I. Translating values into design requirements. In: Mitchfelder D, McCarty N, Goldberg DE, editors. Philosophy and engineering: Reflections on practice, principles and process. Springer; 2013. pp. 253–266.
  • Van de Poel I, Kroes P. Can technology embody values? In: Kroes P, Verbeek PP, editors. The moral status of technical artefacts. Springer; 2014. pp. 103–124.
  • van den Hoven J, Vermaas PE, van de Poel I. Design for values: An introduction. In: van den Hoven J, Vermaas P, van de Poel I, editors. Handbook of ethics, values, and technological design. Cham: Springer; 2015. pp. 1–7.
  • Van Wynsberghe A, Robbins S. Ethicist as designer: A pragmatic approach to ethics in the lab. Science and Engineering Ethics. 2014;20(4):947–961. doi:10.1007/s11948-013-9498-4.
  • Viik T. Falling in love with robots: A phenomenological study of experiencing technological alterities. Paladyn, Journal of Behavioral Robotics. 2020;11(1):52–65. doi:10.1515/pjbr-2020-0005.
  • Warner M. The trouble with normal: Sex, politics and the ethics of queer life. Harvard University Press; 1999.
  • Warner M. Publics and counterpublics. Zone Books; 2002.
  • Wasserman M. Cyber infidelity: The new seduction. Human & Rousseau; 2015.
  • Willis M, Hunt M, Wodika A, Rhodes DL, Goodman J, Jozkowski KN. Explicit verbal sexual consent communication: Effects of gender, relationship status, and type of sexual behavior. International Journal of Sexual Health. 2019;31(1):60–70. doi:10.1080/19317611.2019.1565793.
  • Yeung W-JJ, Cheung AK-L. Living alone: One-person households in Asia. Demographic Research. 2015;32:1099–1112. doi:10.4054/DemRes.2015.32.40.
  • Zappavigna M. Ambient affiliation: A linguistic perspective on Twitter. New Media & Society. 2011;13(5):788–806. doi:10.1177/1461444810385097.
  • Zelizer V. The purchase of intimacy. Princeton University Press; 2005.


Haptics: Technology and Applications — 2021


Special Issue Information


A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 43,003. Related Special Issue: Haptics: Technology and Applications.


Special Issue Editor


Dear Colleagues,

This Special Issue seeks papers which examine some of the latest advances with respect to haptic actuators, haptic rendering, haptic applications in virtual reality/augmented reality, haptic applications in virtual education/training, and all aspects of haptics, including neuroscience, psychophysics, perception, and interactions. This Special Issue also welcomes papers related to medical and surgical simulations, skills training, rehabilitation robotics, collaborative human–robot interactions, communication, and haptic feedback for design and the arts.

Prof. Dr. Sang-Youn Kim
Guest Editor

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

  • haptic/vibrotactile actuator
  • psychophysics and perception
  • multimodal interaction
  • virtual reality
  • haptic interfaces design
  • haptic rendering and modeling

Published Papers (14 papers)




The Whole World in Your Hand: Major Advances in Haptic Technology

Recent advances in scientific research may allow robotic prosthetics and virtual reality simulations to be even more effective than before. Researchers in Hong Kong have developed a new, glove-like technology that not only allows users to experience sensations in their hands when interacting with virtual objects but customizes the intensity of sensations according to the sensitivity levels of a person’s nerves.

Haptic technology is defined as technology that relies on computer-induced forces, vibrations, or motions to provide people with an artificial sense of touch. This technology, along with virtual reality, has become more relevant in the medical field within the past decade. Not only do surgeons rely on virtual reality to perform surgeries, but haptic technology can vastly improve the lives of amputees with robotic prosthetics. In recent years, researchers have found that robotic prosthetics with the ability to provide an artificial sense of touch to patients significantly decrease the mental effort required to operate the prosthetic. An artificial sense of touch can also improve patients’ overall ability to control their prosthetics.

While scientists have developed haptic technology for clinical application in the past, previous attempts have often been bulky, inconvenient to use, and have not been customizable for each user.

Now, researchers have developed a new iteration of haptic technology that involves an ultrathin, glove-like technology called WeTac. WeTac contains several electrodes throughout the glove structure and provides electrical feedback to users to induce sensations of touch throughout their hands. This technology not only has the potential to improve the outcomes of robotic surgeries but is a significant development in haptic technology that could also be applied to those who are disabled and using robotic prosthetics.

WeTac is an ultrathin, glove-like haptic technology.


The first challenge of creating the WeTac was to come up with a design that could emulate the dynamic and variable sensations that people feel when using their hands.

Consider the experience of shaking a person’s hand. You may only feel the handshake in certain areas of your palm or fingers. These areas of contact may change as you go through the movements of shaking their hand. The pressure of your grip or their grip may alter as well. Beyond this, some regions of our hands are innately more sensitive to touch than others. The feelings we experience in our hands are very dynamic even for something as basic as shaking somebody else’s hand.

The goal of Yao et al. was to design a haptic glove that could capture these dynamic sensations when users interacted with virtual objects. To do so, the WeTac was designed with 32 electrodes spanning the surface of the palm and the fingers. This would allow the researchers to adjust the intensity of electrical signals at 32 different points in the hand and allow them to induce more accurate sensations of touch throughout the hand.

The use of electrodes to create artificial sensations also enabled Yao et al. to create the WeTac in an incredibly lightweight form. Electrodes have previously been used in thin, wearable devices and can sit directly on the skin without causing any irritation. This makes them optimal for creating a lightweight and convenient device.

Electrodes are dispersed throughout the WeTac glove.

By using the electrodes, Yao et al. could induce electrical currents throughout the hand. The idea was that these electrical currents would activate the nerves in a person’s hand, effectively inducing sensations of touch that a person might feel from interacting with a physical object. Yao et al. designed the WeTac so that electrical currents could be produced by a blue control unit that would attach to the user’s wrist. This control unit would have wireless capabilities and could be controlled with a phone or computer. This would allow WeTac users to move freely.

Electrodes throughout the WeTac induce sensations in the hand wirelessly.

To test the device, the first step for Yao et al. was to optimize WeTac’s electrical stimulation settings according to each participant. The sensitivity levels of people’s hands can differ across populations. For example, men typically display decreased sensitivity to touch compared to women. Older people also display decreased sensitivity compared to younger people. To customize the device, Yao et al. measured the average electrical stimulation threshold for each participant and across each of the 32 electrodes in their hands.

As expected, on average, women had lower thresholds for electrical stimulation than men. Younger individuals also displayed lower thresholds. The exception to this pattern was that women who exhibited a greater number of calluses on their hands due to their jobs had higher thresholds. In other words, Yao et al. found that beyond gender and age, hand sensitivity can also differ according to a person’s job or daily activities.
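The calibration procedure described above can be sketched in code. The following is a minimal, hypothetical illustration of an ascending per-electrode threshold search: the electrode count (32) comes from the article, but the `stimulate`/`felt` callbacks, the current range, and the step size are illustrative assumptions rather than details from the WeTac paper.

```python
# Hypothetical sketch of the per-electrode threshold calibration
# described above. The electrode count (32) comes from the article;
# the stimulate/felt callbacks, current range, and step size are
# illustrative assumptions, not details from the WeTac paper.

def calibrate_thresholds(stimulate, felt, n_electrodes=32,
                         start_ma=0.1, step_ma=0.1, max_ma=5.0):
    """Ascending method-of-limits: raise the current at each electrode
    until the wearer reports a sensation, and record that current as
    the electrode's threshold."""
    thresholds = {}
    for electrode in range(n_electrodes):
        current = start_ma
        while current <= max_ma:
            stimulate(electrode, current)   # deliver a brief pulse
            if felt(electrode, current):    # wearer reports a sensation
                thresholds[electrode] = current
                break
            current += step_ma
        else:
            thresholds[electrode] = None    # nothing felt in the safe range
    return thresholds
```

A per-user threshold map produced this way is what would let the device drive sensitive regions (or more sensitive users) with weaker currents than callused or less sensitive ones.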

After calibrating the WeTac according to each volunteer’s sensitivity levels, Yao et al. were ready to test the WeTac in virtual reality simulations. The first simulation involved participants slowly grabbing a virtual tennis ball and a virtual cactus. This simulation would allow the researchers to determine that the WeTac could produce different sensations according to the texture of a stationary virtual object. After running the simulations, the team found that the tennis ball could induce gentle touch sensations, while the cactus would induce a spike sensation that was slightly painful or uncomfortable.

The researchers tested the haptic technology with multiple simulations, two of which involved interacting with a tennis ball or a cactus.

The researchers also tested a simulation where a virtual mouse and pieces of cheese appeared on the participants' hands. The participant would then report the sensations they felt as the virtual mouse traveled across their hand to eat each piece of cheese. This allowed the researchers to determine that the WeTac could also effectively induce sensations of touch for a moving, dynamic object.

Overall, this study demonstrates significant progress in haptic technology. As the WeTac and other lightweight haptic feedback devices continue to be developed, we may begin to see more complex virtual reality technology and robotic prosthetics that utilize haptic feedback and can improve the outcomes of remote/robotic surgeries as well as the lives of amputees.

William A. Haseltine


ORIGINAL RESEARCH article

Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic

Mohammad Motaharifar,

  • 1 Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
  • 2 Department of Electrical Engineering, University of Isfahan, Isfahan, Iran
  • 3 Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
  • 4 School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
  • 5 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada

This paper examines how haptic technology, virtual reality, and artificial intelligence help to reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Training medical skills has therefore always been a challenging task for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method requires, the people involved, including both novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews recent technological breakthroughs, along with new areas in which assistive technologies might provide a viable solution for reducing physical contact in medical institutions during the COVID-19 pandemic and similar crises.

1 Introduction

After the outbreak of the COVID-19 virus in Wuhan, China at the end of 2019, the virus and its mutations rapidly spread across the world. Given that no proven treatment had so far been introduced for COVID-19 patients, prevention policies such as staying home, social distancing, avoiding physical contact, remote working, and travel restrictions were strongly recommended by governments. As a consequence of this global problem, universities initiated policies on how to keep up teaching and learning without exposing their faculty members and students to the virus. Thus, the majority of traditional in-class courses were replaced by online courses. Although the emergency shift of classes reduced the quality of education during the COVID-19 pandemic Hodges et al. (2020), some investigators have proposed ways for university faculty and students to adapt rapidly to the situation and improve the quality of education Zhang et al. (2020).

Nevertheless, the case of remote learning is different in medical universities, as the learning process there does not rely solely on in-class courses. As an illustration, traditional medical training is accomplished by a medical student through attending training courses, watching how the procedure is performed by a trainer, performing the procedure under the supervision of a trainer, and, at the final stage, independently performing the procedure. In fact, the traditional method of surgical training relies on the extensive presence of students in hospital environments and skill labs to practice tasks in real settings such as physical phantoms, cadavers, and patients, which is why medical students are called “residents”. Thus, the aforementioned traditional surgical training methodology requires a substantial amount of physical contact between medical students, expert surgeons, nurses, and patients, and as a result the risk of infection is high among these people. On the other hand, assistive technologies based on virtual reality and haptic feedback have introduced alternative surgical training tools that increase the safety and efficiency of surgical training procedures. Nowadays, the need to reduce physical contact in hospital environments provides another motivation for these assistive technologies. It is therefore beneficial to review them from the perspective of COVID-19.

In this paper, the existing assistive technologies for medical training are reviewed in the context of COVID-19. While there are several motivations for these technologies, such as increasing the safety, speed, and efficiency of training, the new motivations created during the COVID-19 pandemic are the specific focus of this paper. Within the existing literature on COVID-19, our main focus is surgical training technologies that help reduce physical contact during this and similar pandemics. Notably, a number of those studies have analyzed systemic and structural challenges applicable to medical training programs, with little emphasis on the technological aspects of the subject Sharma and Bhaskar (2020), Khanna et al. (2020). On the other hand, methods of remote diagnostics and remote treatment have received a great deal of attention since the COVID-19 pandemic began, and a massive body of literature covers those topics Tavakoli et al. (2020), Feizi et al. (2021), Akbari et al. (2021). In contrast, fewer studies have paid special attention to remote training and remote skill assessment, which is the subject of this paper. For this reason, this paper addresses scientific methods, technologies, and solutions for reducing the amount of physical contact in medical environments that arises from training.

Relevant literature was chosen from articles published by IEEE, Frontiers, Elsevier, SAGE, and Wiley, with special attention to well-known interdisciplinary journals. The search was performed using the keywords “remote medical training,” “skill assessment in surgery,” “virtual and augmented reality for medical training,” “medical training haptic systems,” and “artificial intelligence and machine learning for medical training” until June 30, 2021. The literature was examined to systematically address key novel concepts in remote training, with sufficient attention to the future direction of the subject. Finally, we attempt to review the problem in the COVID-19 context in a way that distinguishes the discussed material from similar literature in a conventional non-COVID context.

The rest of this paper is organized as follows: the clinical motivations for training tools are discussed in Section 2. Virtual and augmented reality and the related areas of use in medical training are described in Section 3. Section 4 explains how haptic technology may be used for medical training, while Section 5 describes some data-based approaches that may be used for skill assessment. Then, machine vision and its relevant methods for medical training are presented in Section 6. Finally, concluding remarks are given in Section 7.

2 The Clinical Motivation

The process of skill development among medical students has always been a challenging issue for medical universities, as a lack of expertise may lead to undesired complications for patients Kotsis and Chung (2013). Moreover, owing to the rapid progress of minimally invasive surgery over the past decades, closed procedures have become the method of choice over traditional open surgeries. In minimally invasive surgery, the instruments enter the body through one or more small incisions, and this type of surgery is applicable to a wide variety of procedures. The foremost advantage of the technique is minimal damage to healthy organs, which leads to less pain, fewer post-operative complications, faster recovery, and better long-term results.

However, the closed surgical technique is more challenging from the surgeon’s point of view, since the surgeon does not have complete and direct access to the surgical site and the tiny incisions limit accessibility. Owing to the limited access, some degrees of freedom are missing and the surgeon’s manipulation capability is considerably reduced. Furthermore, there is a fulcrum effect at the entry point of the instrument: the motion of the tip of the instrument, which is placed inside the organ, is reversed relative to the external part of the instrument, which is handled by the surgeon. This results in more difficult and even awkward instrument handling and requires specific and extensive surgical training. As a result, minimally invasive surgeries demand an advanced level of expertise, the lack of which might cause disastrous complications for the patient. These considerations apply to many medical interventions, especially minimally invasive surgeries. Below, a number of specific areas of surgical operation are described in order to illustrate the complications that might occur during training.

• Eye surgery:

An important category of medical interventions that requires a very high skill level is intraocular eye surgery. Notably, the human eye is a delicate and highly complex organ, and the required accuracy for the majority of intraocular surgeries is on the scale of 50–100 microns. The closed type of surgery is applicable to a number of eye surgeries, such as cataract surgery in the anterior segment and vitreoretinal procedures in the posterior segment. Complications such as posterior capsule rupture (PCR) in cataract surgery and retinal puncture in vitreoretinal procedures are among the relatively frequent complications that might occur due to a surgeon’s lack of surgical skill and dexterity. A study of ophthalmic residents showed that the rate of complications such as retinal injuries is higher for residents with less skill Jonas et al. (2003).

• Laparoscopic Cholecystectomy

Another example is laparoscopic cholecystectomy (LC), which is now the accepted standard procedure across the world and one of the most common general and specialist surgical procedures. However, it is prone to an important complication: bile duct injury (BDI). Although BDI is uncommon, it is one of the most serious iatrogenic surgical complications. In extreme cases, a liver resection or even liver transplantation becomes necessary. BDI is an expensive complication to treat, and its mortality rate is as high as 21% Iwashita et al. (2017).

• Neurosurgery

Neurosurgery is another field that deals with complex cases and requires high accuracy and ability in the surgeon’s performance. In a prospective study of 1,108 neurosurgical cases, 78.5% of errors during neurosurgery were considered preventable Stone and Bernstein (2007) . The most frequent errors reported were technical in nature. The increased use of endoscopy in neurosurgery introduces challenges and increases the potential for errors because of issues such as indirect view, elaborate surgical tools, and a confined workspace.

• Orthopedic surgery

In the field of orthopedics, knee and shoulder arthroscopic surgeries are among the most commonly performed procedures worldwide. There is a steep learning curve associated with arthroscopic surgery for orthopaedic surgery trainees. Extensive hands-on training is typically required to develop surgical competency. The current minimum number of cases may not be sufficient to develop competency in arthroscopic surgery. It is estimated that it takes about 170 procedures before a surgeon develops consultant-level motor skills in knee arthroscopic surgery Yari et al. (2018) . With work-hour restrictions, patient safety concerns, and fellows often taking priority over residents in performing cases, it is challenging for residents to obtain high-level arthroscopic skills by the end of their residency training.

The above motivations show the importance of skill development among medical students. The standard process of procedural skill development in medicine and surgery is shown as a diagram in Figure 1. In the observation stage, medical students attend a clinical environment and watch how the procedure is performed by a trainer. Then, the medical students get involved in the operation as apprentices, while the actual procedure is performed by the trainer. Later, the medical students practice the operation under the direct supervision of the trainer, while the trainer assesses their skill level. The supervised practice and skill assessment steps are repeated until the trainee has enough experience and skill to conduct the procedure without supervision. Finally, after obtaining a sufficient skill level, the trainee is able to perform the operation independently.


FIGURE 1. Process of procedural skill development in medical training and surgery.
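The staged process in Figure 1 can be summarized as a simple loop, sketched below. The stage names follow the figure; the `assess_skill` callback and the competence threshold are hypothetical placeholders, not part of the original description.

```python
def train_to_independence(assess_skill, competence_threshold=0.9,
                          max_rounds=200):
    """Run the supervised-practice / skill-assessment loop of Figure 1
    until the trainee reaches the competence threshold; return the
    number of supervised practice rounds that were needed."""
    for stage in ("observation", "apprenticeship"):
        pass  # trainer performs the procedure; trainee watches or assists
    for rounds in range(1, max_rounds + 1):
        skill = assess_skill(rounds)  # trainer assesses after each practice
        if skill >= competence_threshold:
            return rounds  # trainee may now operate independently
    raise RuntimeError("competence not reached within max_rounds")
```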

Remarkably, a learning curve is associated with each procedure, meaning that performance tends to improve with experience. This concept applies to all medical procedures and specialties, but complex procedures, surgery in particular, are more likely to have gradual learning curves, meaning that improvement and expertise are achieved only after a longer training time. Some of the important factors in the learning curve are the manual dexterity of the surgeon, knowledge of surgical anatomy, structured training and mentoring, and the nature of the procedure. The learning curve is longer for minimally invasive procedures than for open surgical procedures. It is also influenced by the experience of the supporting surgical team. Besides, learning curves depend on the frequency of procedures performed in a specified period. Many studies suggest that complication rates are inversely proportional to the volume of surgical workload.
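As a toy illustration of the learning-curve concept, the exponential model below captures "performance improves with experience and plateaus". The functional form and all parameter values are textbook-style assumptions, not figures taken from the cited studies; a larger `rate` produces the more gradual curve associated with complex, minimally invasive procedures.

```python
import math

def performance(n_procedures, p_start=0.4, p_plateau=0.95, rate=30.0):
    """Modeled performance (0..1) after n procedures. `rate` is the
    number of procedures needed to close ~63% of the remaining gap to
    the plateau: a larger `rate` means a more gradual learning curve."""
    gap = p_plateau - p_start
    return p_plateau - gap * math.exp(-n_procedures / rate)

# A gradual curve (e.g. a minimally invasive procedure) improves more
# slowly than a steep one for the same amount of experience:
gradual = [performance(n, rate=60.0) for n in (0, 30, 120)]
steep = [performance(n, rate=15.0) for n in (0, 30, 120)]
```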

Notably, the above-mentioned process of skill development requires a considerable amount of physical contact between trainees, expert surgeons, nurses, and patients, contact that must be reduced during the COVID-19 pandemic. In addition to the high risk of infection under conventional medical training approaches, the majority of health-care capacity is focused on fighting the COVID-19 virus, and consequently the education requirements of medical universities cannot be entirely fulfilled. As a result, the training efficiency of medical universities will be reduced if they rely solely on conventional training approaches. This has possible side-effects on the future performance of the health-care system, mainly due to an insufficient number of recently graduated students with an adequate level of expertise.

On the other hand, traditional education takes place in hospitals and on real patients, which poses several problems during the COVID-19 pandemic: the hospital environment is contaminated with the virus; hospital staff and physicians are busy and tired and have less training capacity; and the prolonged hospital stays used to train students put patients at greater risk of exposure to the virus, especially if a complication is caused by a resident who has not yet gained sufficient skill. Therefore, training with assistive devices outside the hospital may play an effective role in these situations. The highlighted factors can be significantly improved by assisted learning, especially for minimally invasive procedures. In more complex surgeries, the complications become more serious, the learning curve becomes longer, and the role of assisted learning becomes more prominent.

To solve the above-mentioned problems, assistive training tools provide a variety of solutions through which medical universities are able to continue their education procedures while the risks imposed by the COVID-19 outbreak are reduced. In the following sections, the main assistive training tools, including haptic systems, virtual reality, machine vision, and data mining, are reviewed, and the areas in which those technologies facilitate the training process during the COVID-19 pandemic are detailed. The aim of these technologies is to achieve training efficiency higher than, or at least equal to, that of conventional training methods without risk of infecting the involved parties.

3 Virtual and Augmented Reality

Virtual reality (VR) is employed to create an immersive experience for various applications such as visualization, learning, and education. In virtual reality, a computer-generated graphical scene is visualized using a head-mounted display, and the user can interact with 3D objects located in the virtual world. In addition to VR, augmented reality (AR) adds 3D objects to the real world, creating a different experience by overlaying digital information on real objects in the surrounding environment. Although experiencing 3D objects in VR scenes is still far from interacting with real objects, the VR experience is getting closer to real-world environments with the help of more realistic computer graphics and full-body haptic suits.

VR and AR are attracting growing interest as training techniques in medical fields, unlocking significant benefits such as safety, repeatability, and efficiency Desselle et al. (2020). Furthermore, during the COVID-19 pandemic, remote training and consulting are considered vital advantages of VR/AR-based training methods (Singh et al., 2020).

Some advantages of using VR/AR in medical training are depicted in Figure 2. Safety is the first and most important benefit of VR/AR in medical education: complex medical operations may be performed in a VR-based simulated environment with complete safety and without putting the patient’s life in danger. Repeatability is the second advantage, as any simulation scenario in medical training can be repeated over and over until the trainee is completely satisfied. During the COVID-19 pandemic, it is vital to practice social distancing, which VR/AR-based medical education supports. Medical training and surgery simulation by computer graphics in VR/AR virtual environments also reduces training costs, as no materials other than a computer, a VR headset, and a haptic device are required. Since medical training with VR/AR is performed on a computer, the surgery simulation is always at hand as soon as the computer and VR headset are ready. The efficiency of medical training is therefore increased, as no time is required to prepare an operating room or get a patient ready.


FIGURE 2. VR/AR advantages in medical training.

VR/AR techniques are employed in various surgical training applications, as shown in Figure 3. The first application of AR/VR in surgical training is surgical procedure diagnosis and planning: using AR/VR, the real surgical operation can be simulated in advance without putting the patient’s life in danger. The second application is surgical education and training, where simulation-based environments are developed for training medical students with 3D models of virtual human anatomy. Another application is robotic and tele-surgery, through which surgical consulting becomes possible even from afar. The last application is the visualization of sensor data and images during the surgical operation, which makes effective use of the patient’s medical data possible.


FIGURE 3 . VR/AR applications in surgical training.

It has been shown that a virtual reality simulator significantly improves the learning curve of hip arthroscopy trainees ( Bartlett et al., 2020 ). In that study, twenty-five inexperienced students performed seven arthroscopies of a healthy virtual hip joint weekly. The experimental results indicated that the average total time decreased by nearly 75%, while the number of collisions between the arthroscope and soft tissue decreased by almost 90%.

VR is also employed in orthopedic surgical training, where 37 residents participated in a study to learn the LISS 1 plating surgical process ( Cecil et al., 2018 ). The developed virtual surgical environment is equipped with a haptic device for performing various activities such as assembling the LISS plate, placing the assembled plate correctly inside the patient’s leg, and attaching it to the fractured bone. The test was divided into a pre-test, in which the students became familiar with the surgery process, and a post-test devoted to the actual evaluation phase. The participants had 1 h to finish both tests, and the study reported improved learning of the LISS plating surgical process.

The applicability and effectiveness of VR-based training in orthopedic education is evaluated in ( Lohre et al., 2020 ), in which nineteen orthopedic surgical residents participated. The residents performed a glenoid exposure module on a VR-based simulator using a haptic device as the input controller, and the results were compared with conventional surgery training methods. In terms of learning time, repeating the VR-based surgery experiment 3 to 5 times yielded a reported 570% improvement in training time. Additionally, VR-trained residents finished the glenoid exposure significantly faster than residents trained by conventional education methods.

Orthognathic surgery, one of the most complex surgical procedures, has also been considered for VR-based training ( Medellin-Castillo et al., 2020 ). While conventional OSG 2 learning techniques depend on cadavers or physical models, and surgeons traditionally gain expertise only after years of operating-room experience, employing VR in surgical training can reduce both the learning time and the education cost. In this study, three cases are considered for evaluating VR in OSG: cephalometry training, osteotomy training, and surgery planning. The experimental results indicated that the combination of haptics and VR is effective in improving trainees’ skills and reducing surgery time. Furthermore, surgical errors are reduced when haptic feedback recreates the sense of touch, as trainees can detect landmarks more precisely than with conventional techniques.

In conjunction with VR, AR technology has also been used for training in various medical fields, such as neurosurgery ( Si et al., 2019 ). Anatomical and other sensory information can be visualized for surgeons more effectively, and therefore more accurate decisions can be made during surgery. Although this study is only applicable to simulated environments because of the registration problem, the experiments indicated the simulator’s effectiveness in improving surgeons’ skills.

While the key features of VR/AR have improved training, especially surgical training, some limitations should be considered ( kumar Renganayagalu et al., 2021 ). The first limitation of VR simulators is the cost of VR content production; consequently, most simulators are built for a very specific type of simulation in a limited context. The second limitation is the immaturity of interaction devices for VR simulations, which greatly affects the user experience. Another limitation is the difficulty of using VR devices for long periods of time, as most VR devices are designed for entertainment rather than for long training sessions.

It can be concluded that, in spite of some limitations, VR/AR-based simulators equipped with a haptic device can be used in medical surgery training to achieve skill improvement and training time reduction. Furthermore, during the isolation requirements of the COVID-19 pandemic, VR/AR-based techniques can be effectively employed for medical training.

4 Teleoperated Haptic Systems

Haptic systems provide the sense of touch with remote objects without the need for actual contact. They also enable collaboration between several operators without any physical contact. As depicted in Figure 4 , haptic systems may be classified by the number of operators into single-user, dual-user, or multi-user systems. Single-user haptic systems enable a single human operator to interact with a remote or virtual environment, whereas dual-user or multi-user haptic systems provide a mechanism for the collaboration of two or more human operators. The medical training applications of these systems are presented here.


FIGURE 4 . Single user vs. dual user haptic systems. (A) Single user haptic system. (B) Dual user haptic system.

4.1 Single User Haptic Systems

Single-user haptic systems extend the ability of human operators to interact with remote, virtual, and out-of-reach environments. In the field of surgery training, a number of investigations have proposed haptic training simulators for minimally invasive surgery (MIS) Basdogan et al. (2004) , dental procedures Wang et al. (2014) , sonography Tahmasebi et al. (2008) , and ocular therapies Spera et al. (2020) . As shown in Figure 4A , a typical single-user haptic simulator consists of a human operator, a haptic interface, a graphical interface, and a reference model of the virtual object. Both the graphical interface and the haptic interface utilize the reference model to provide the necessary feedback for the operator: the graphical interface provides visual feedback of the environment, while the haptic interface provides kinesthetic feedback of the interaction between the tool and the surgical field. The role of haptic feedback is thus to recreate the sense of contact with the virtual environment. As a result, the circumstances of an actual operation are provided for medical students while the need for physical presence in clinical environments is eliminated, effectively reducing the risk of infection during the COVID-19 pandemic.
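The kinesthetic feedback described above is typically rendered by computing a reaction force from the reference model at every control cycle. A minimal sketch of a one-dimensional virtual-wall (spring-damper) model follows; the stiffness and damping gains are purely illustrative assumptions, not parameters from any of the cited simulators:

```python
def virtual_wall_force(position, velocity, wall_pos=0.0, k=500.0, b=2.0):
    """Kinesthetic feedback from a virtual wall (spring-damper model).

    position, velocity: tool state along one axis (m, m/s);
    penetration occurs for position > wall_pos.
    k, b: illustrative stiffness (N/m) and damping (N*s/m) gains.
    Returns the force (N) the haptic device should display.
    """
    penetration = position - wall_pos
    if penetration <= 0.0:
        return 0.0  # no contact, no force
    # Push back against penetration; damp only inward motion.
    return -k * penetration - b * max(velocity, 0.0)
```

In a real device such a force would be commanded to the haptic actuators at a high rate (commonly around 1 kHz) to keep the rendered contact stable.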

4.2 Dual User Haptic Systems

The cooperative and joint conduction of an operation, whether for collaboration or training, is a fundamental clinical task that cannot be supported by single-user haptic systems. To make the cooperation of two surgeons possible, the system must be upgraded to a dual-user haptic system by adding another haptic console. A dual-user haptic system is a more recent advancement in haptic technology and consists of two haptic consoles, one for the trainer and one for the trainee Shahbazi et al. (2018a) . Remarkably, traditional collaboration methods require direct physical contact between the people conducting the operation, whereas haptic-based collaboration eliminates this contact, so the people involved are no longer at risk of coronavirus transmission. A commercial dual-user haptic system developed by Intuitive Surgical Inc. ® is the da Vinci Si Surgical System, which supports training and collaboration during minimally invasive surgery. The da Vinci Si builds on existing da Vinci technology and offers a number of enabling features such as leading-edge 3D visualization, advanced motion technology, and sufficient dexterity and workspace. However, the da Vinci Si does not provide active supervision and intervention by the trainer over the trainee’s actions: when the trainee controls the procedure, the trainer has no means to guide the trainee during the procedure.

The issue of trainer supervision and intervention during the operation in dual-user haptic systems has been a topic of active investigation in the past years. A number of studies have utilized the concept of a dominance factor to determine the task dominance of each operator Nudehi et al. (2005) , Khademian and Hashtrudi-Zaad (2012) , Shahbazi et al. (2014b) , Motaharifar et al. (2016) . In these approaches, the trainee is given partial or full task authority by the trainer based on his/her level of expertise. Notably, the task authority provided by these control architectures remains fixed during the operation. Thus, changing the surgeons’ authority, and especially blocking the trainee’s commands, is not possible in the middle of the operation. This might lead to undesired operative complications, especially if the trainee makes a sudden, unpredictable mistake.

Fortunately, a number of investigations have developed control architectures to address this shortcoming of the previously proposed haptic architectures Motaharifar et al. (2019b) , Shahbazi et al. (2014a) , Motaharifar and Taghirad (2020) . As a case in point, an S-shaped function is proposed in Motaharifar et al. (2019b) for adjusting the corrective feedback in order to shape the trainee’s muscle memory. The training approach behind this architecture is to let the trainee freely experience the task and be corrected as needed. Nevertheless, under this scheme the trainee is only granted permission to receive the trainer’s motion profile; that is, the trainee is deprived of any realistic contribution to the surgical procedure. In contrast, several investigations have proposed mechanisms for adjusting the task dominance, through which the trainee is granted partial or full contribution to the task Shahbazi et al. (2014a) , Motaharifar and Taghirad (2020) , Liu et al. (2015) , Lu et al. (2017) , Liu et al. (2020) . Remarkably, these approaches require both the trainer and the trainee to completely perform the operation on their haptic devices, and the actual task authority is determined from the position error between the trainer and the trainee. This constitutes an important limitation, since the trainer is forced to be involved in every detail of each operation, even trivial ones. Such an obligation to precisely perform every part of the surgical procedure has little compatibility with the trainer’s responsibilities of supervisory assistance and interference.
In fact, drawing on the conventional training programs of medical universities, the haptic architecture should be developed such that the trainer intervenes only to prevent a complication caused by the trainee’s mistake. The issue of the trainer’s supervisory assistance and interference is addressed in Motaharifar et al. (2019a) by adjusting the task authority based on the trainer’s hand force. That is, the trainer can grant task authority to the trainee by holding the haptic device loosely, or overrule the trainee’s action by grasping the haptic device tightly. Therefore, active supervision and interference by the trainer is possible without any physical contact between the trainer and the trainee.
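The grip-force-based authority adjustment can be sketched as follows, with a logistic curve standing in for the S-shaped function and a dominance-factor blend of the two operators' commands. The threshold and slope values, and the function names, are illustrative assumptions, not parameters from the cited architectures:

```python
import math

def authority(grip_force, f0=5.0, s=2.0):
    """S-shaped (logistic) map from trainer grip force (N) to task authority.

    f0 and s are illustrative threshold and slope: a loose grip gives
    authority near 0 (trainee in control), a tight grip gives authority
    near 1 (trainer overrules the trainee).
    """
    return 1.0 / (1.0 + math.exp(-s * (grip_force - f0)))

def blended_command(x_trainer, x_trainee, alpha):
    """Dominance-factor blend of the two operators' position commands."""
    return alpha * x_trainer + (1.0 - alpha) * x_trainee
```

With alpha computed from the trainer's measured grip force at each control cycle, the trainer can seize or release authority continuously instead of committing to a fixed dominance factor for the whole operation.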

Although the above investigations address the essential theoretical aspects of dual-user haptic systems, the commercialization of collaborative haptic systems needs more attention. In the past years, some research groups have developed pilot dual-user haptic setups with preliminary clinical evaluation that have the potential for commercialization. For instance, the ARASH-ASiST system provides training and collaboration for two surgeons and is primarily designed for vitreoretinal eye surgical procedures ARASH-ASiST (2019) . The commercialization and widespread utilization of such assistive surgery training tools would considerably benefit health-care systems by decreasing physical contact during the COVID-19 pandemic and by increasing the safety and efficiency of training programs during and after this crisis.

Notwithstanding the fact that teleoperated haptic systems provide key benefits for remote training during the COVID-19 pandemic, they face a number of challenges that motivate future investigations. First, the haptic modality alone is not sufficient to recreate the full sense of actual presence in the surgical room near an expert surgeon. To overcome this challenge and increase the operator’s telepresence, haptic, visual, and auditory components are combined into a multi-modal telepresence and teleaction architecture in Buss et al. (2010) . The choice of control structure and the clinical investigation of such multi-modal architectures are still areas of active research Shahbazi et al. (2018b) , Caccianiga et al. (2021) . On the other hand, the on-line communication system creates another challenge for haptic training systems. Owing to the high-bandwidth requirement of an appropriate on-line haptic system, the majority of existing haptic architectures in applications such as collaborative teleoperation, handwriting, and rehabilitation rely on off-line communication Babushkin et al. (2021) . However, due to the complexity, uncertainty, and diversity of surgical procedures, on-line feedback from an expert surgeon is necessary for safe and efficient training. The advent of 5G technology, with faster and more robust communication networks, may provide enough bandwidth for effective real-time remote surgery training.

5 Data Driven Scoring

A vital element of a training program is evaluating the effectiveness of exercises by introducing a grading system based on the participants’ performance. Conventional qualitative skill assessment methods require physical contact between the trainer and the trainee, since they are based on the trainer’s direct supervision. Systematic approaches, on the other hand, collect the required data using appropriate instruments and analyze the obtained data, eliminating the requirement of physical contact between the trainer and the trainee. Thus, reviewing systematic data-based methods is of utmost importance, as they can be utilized to reduce physical contact during the COVID-19 pandemic. In this section, some state-of-the-art methods in surgical skill evaluation are reviewed. Following the trend of similar research on surgical skill evaluation, we categorize the reviewed methods by two criteria: the first is the type of data the method uses for grading the participant; the second is the feature extraction technique used during the evaluation stage.

Generally speaking, two types of data may be available in robotic-assisted surgery: kinematic and video data. Kinematic data are available whenever a robot or haptic device is involved, and are most commonly captured with IMUs, encoders, force sensors, magnetic field positioning sensors, and similar instruments. Video is generally recorded in all minimally invasive surgeries through endoscopy procedures.

Kinematic data are easier to analyze because their dimensionality is lower than that of video data. Moreover, kinematic information is superior to video for measuring actual 3D trajectories and 3D velocities Zappella et al. (2013) . On the other hand, video data are more convenient to capture, since no additional equipment or sophisticated sensors need to be attached to the surgical tool. Additionally, video data reflect contextual semantic information, such as the presence or absence of another surgical instrument, which cannot be derived from kinematic data Zappella et al. (2013) . To use video data effectively, one should overcome common obstacles such as occlusion and clutter; using multiple cameras, when possible, greatly assists in this procedure Abdelaal et al. (2020) . In conclusion, each type of data has its own merits and limitations, and using kinematic data together with video may result in a richer dataset.

Beyond kinematic and video data, another source of information is often disregarded in the literature: the expert surgeon who conducts the training program can evaluate the trainee’s performance and provide useful feedback. This type of information, which sits at another semantic level compared to the sensory data, is called soft data. Hard and soft information fusion methods can merge the expert’s opinion with the kinematic and video data (hard data) to achieve a better grading system.

After acquiring the data, most surgical skill evaluation methods utilize a feature extraction technique to classify the participant’s skill level, e.g., as expert, intermediate, or novice. The classification problem can be solved with hand-engineered features or with features automatically extracted from the data. Hand-engineered features are interpretable and easy to compute; however, they are hard to define. Specifically, defining a feature that represents skill level regardless of the task is not trivial. Therefore, state-of-the-art methods are commonly based on automatic feature extraction techniques, in which an end-to-end deep neural network unfolds the input data’s spatial and temporal features and classifies the participant into one of the mentioned skill levels. While Table 1 summarizes the different data types and feature extraction techniques, we cover some of the reviewed methods in the next sections.


TABLE 1 . Summary of different sources of data and different feature extraction techniques.

The most convenient hand-engineered features are those provided by descriptive statistics Anh et al. (2020) . In the skill rating system proposed by Brown et al. (2016) , eight values of the force and acceleration signals are calculated: mean, standard deviation, minimum, maximum, range, root-mean-square (RMS), total sum of squares (TSS), and time integral. Together with time features such as task completion time, these values are used as inputs to a random forest classifier to rate the peg transfer scores of 38 participants. In Javaux et al. (2018) , metrics such as mean/maximum velocity and acceleration, tool path length, depth perception, maximum and integral of planar/vertical force, and task completion time are considered as a baseline for skill assessment Lefor et al. (2020) . Another commonly used approach in the literature is statistical testing, such as the Mann-Whitney test Moody et al. (2008) , the Kruskal-Wallis test Javaux et al. (2018) , and Pearson or Spearman correlation Zendejas et al. (2017) . These tests are utilized either to classify the participants directly Moody et al. (2008) or to automatically calculate well-known skill assessment scores such as GOALS and FLS Zendejas et al. (2017) .
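The descriptive-statistics feature set described above can be sketched in a few lines. This is a minimal illustration (the function name, sampling step, and rectangle-rule integral are assumptions, not details from the cited work); in Brown et al.'s system, vectors like these, stacked over trials and combined with task completion time, form the input matrix for a random forest classifier:

```python
import numpy as np

def signal_features(x, dt=0.01):
    """Eight descriptive statistics of a 1-D force or acceleration signal:
    mean, std, min, max, range, RMS, total sum of squares, time integral."""
    x = np.asarray(x, dtype=float)
    return np.array([
        x.mean(),
        x.std(),
        x.min(),
        x.max(),
        x.max() - x.min(),             # range
        np.sqrt(np.mean(x ** 2)),      # root-mean-square
        np.sum((x - x.mean()) ** 2),   # total sum of squares
        np.sum(x) * dt,                # time integral (rectangle rule)
    ])
```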

Since many surgical tasks are periodic by nature, frequency-domain analysis of the data proves effective Zia et al. (2015) . For periodic tasks such as knot tying and suturing, Zia et al. (2015) suggest treating the data as time series and applying the Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) to extract features that assist the skill level classification task. The results show that this approach outperforms many machine-learning-based methods such as Bag of Words (BoW) and Sequential Motion Texture (SMT). In another work by the same authors, symbolic, texture, and frequency features are employed for classification, and a Sequential Forward Selection (SFS) algorithm is then utilized to reduce the number of elements in the feature vector and remove irrelevant data Zia et al. (2016) . Hojati et al. (2019) suggest that since the Discrete Wavelet Transform (DWT) offers simultaneous localization in the time and frequency domains, it is a better choice than the DFT and DCT for feature extraction in surgical skill assessment tasks.
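The DFT/DCT feature idea can be illustrated with a short numpy sketch. The truncation length is an assumption for illustration; the point is that the low-order coefficients capture the dominant rhythm of a periodic gesture such as a suturing stroke:

```python
import numpy as np

def frequency_features(x, n_coeffs=8):
    """Leading DFT magnitudes and DCT-II coefficients of a 1-D motion signal.

    n_coeffs is an illustrative truncation, not a value from the cited work.
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    dft_mag = np.abs(np.fft.rfft(x))[:n_coeffs]  # DFT magnitude spectrum
    n = np.arange(N)
    dct = np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                    for k in range(n_coeffs)])   # DCT-II (unnormalized)
    return np.concatenate([dft_mag, dct])
```

For a signal sampled at 100 Hz over one second, a 5 Hz periodic motion shows up as a peak in the fifth DFT bin, so the feature vector directly encodes the gesture's repetition rate.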

As mentioned before, hand-engineered features are task-specific. For example, the frequency-domain analysis discussed above is only viable when the task is periodic; otherwise, the frequency-domain features must be concatenated with other features. Moreover, identifying the features that correctly reflect participants’ skill levels in different surgical tasks requires intensive domain knowledge. As a result, developing a method in which the essential features are identified automatically is advantageous.

With the recent success of Convolutional Neural Networks (CNNs) in classification problems such as image classification, action recognition, and segmentation, it is natural to apply CNNs to skill assessment. However, unlike in image classification, the improvement brought by end-to-end deep CNNs remains limited compared to hand-engineered features for action recognition Wang et al. (2018) . Similarly, conventional CNNs do not contribute much in surgical skill evaluation problems. For example, Fawaz et al. (2018) proposed a CNN-based approach for dry-lab skill evaluation tasks such as needle passing, suturing, and knot-tying. However, a hand-engineered method with a set of features introduced as holistic features (SMT, DFT, DCT, and Approximate Entropy (ApEn)), suggested by Zia and Essa (2018) , reaches the same accuracy as the CNN-based method on the needle passing and suturing tasks and outperforms it on the knot-tying task.

Wang et al. (2018) suggest that conventional CNNs fall short compared to traditional hand-crafted feature extraction techniques because they only consider appearance (spatial features) and ignore the data’s temporal dynamics. In Wang and Fey (2018) , a parallel deep learning architecture is proposed to recognize the surgical training activity and assess trainee expertise: a gated recurrent unit (GRU) extracts temporal features, while a CNN extracts spatial features. The overall accuracy for the needle passing, suturing, and knot tying tasks is 96% using video data. The problem of extracting spatiotemporal features is also addressed with 3D ConvNets in Funke et al. (2019) , where inflated convolutional layers process video snippets and unfold the classifier’s input data.

To the best of our knowledge, all of the methods proposed in the literature have used single-classifier techniques. However, methods such as classifier fusion have proved useful for medical-related data. In Kazemian et al. (2005) , an OWA-based (ordered weighted averaging) fusion technique is used to combine multiple classifiers and improve accuracy. For a more advanced classifier fusion technique, one can refer to Kazemian et al. (2010) , where methods such as Dempster’s rule of combination and the Choquet integral are compared with more basic techniques. Activity recognition and movement classification is another efficient way to automatically calculate metrics representing surgical skill Khan et al. (2020) . Moreover, detecting instruments in a video and drawing centroids based on the instruments’ orientation and movement can reflect a surgeon’s focus and ability to plan moves; utilizing these centroids and calculating the radius, distance, and relative orientation can aid classification by skill level Lavanchy et al. (2021) .
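The OWA fusion idea can be sketched in a few lines: ordered weighted averaging sorts the classifiers' confidence scores before weighting, so the weight vector interpolates between max, mean, and min behaviour. The weights below are illustrative, not values from the cited studies:

```python
import numpy as np

def owa_fuse(scores, weights):
    """Ordered Weighted Averaging of per-classifier confidence scores.

    scores: one confidence per classifier for the same class.
    weights: OWA weights (summing to 1) applied to the scores sorted
    in descending order. [1, 0, ...] recovers the max, uniform weights
    recover the mean, [..., 0, 1] recovers the min.
    """
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    return float(np.dot(s, np.asarray(weights, dtype=float)))
```

Because the weights attach to rank positions rather than to particular classifiers, the fusion is symmetric in the ensemble members, which distinguishes OWA from an ordinary weighted vote.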

In conclusion, the reviewed techniques can be summarized by the general framework illustrated in Figure 5 . The input data, either kinematic or video, is fed to a feature extraction block. A fusion block Naeini et al. (2014) can enrich the semantics of the data using the expert surgeon’s feedback. Finally, a regression technique or a classifier can be employed to calculate the participant’s score or assign a label reflecting his/her skill level.


FIGURE 5 . A general framework for surgical skill assessment.

6 Machine Vision

The introduction of new hardware capable of running deep learning methods with acceptable performance has allowed artificial intelligence to play a more significant role in intelligent systems Han (2017) . There is undeniably huge potential in employing deep learning methods in a wide range of applications Weng et al. (2019) , Antoniades et al. (2016) , Lotfi et al. (2018) , Lotfi et al. (2020) . In particular, by pairing a camera with a deep learning algorithm, machines can precisely identify and classify objects, enabling an appropriate reaction to be performed or a process to be monitored automatically. For instance, for a patient in a coma, any tiny reaction is crucial to detect; since it is not possible to assign a person to each patient, a camera can solve the problem satisfactorily. In the COVID-19 pandemic situation, artificial intelligence may be used to reduce both physical interactions and the risk of infection, especially in medical training. Considering eye surgery as an example, not only should the novice surgeon closely track how the expert performs, but the expert should also be notified of any mistake made by the novice during surgery. Using computer vision approaches as an interface, the level of close interaction can be minimized effectively: during the training process, the computer vision algorithm may act both as the novice surgeon looking over the expert’s hand and as the expert monitoring and evaluating how the novice performs. This kind of application extends easily to other medical training scenarios, so the demand for reduced close contact is met properly.

Requiring no special preprocessing, deep convolutional neural networks (CNNs) are commonly used to classify images into distinct categories; in medical images, these may include probable lesions Farooq et al. (2017) , Chitra and Seenivasagam (2013) . Moreover, CNNs can detect objects of interest in images, which can be adopted not only to find and localize specific features but also to recognize them if needed. Since most medical training tasks require on-line, long-term monitoring, a camera combined with these powerful approaches lets an expert keep a constant eye on the task assigned to a trainee. Besides, CNN-based methods can be implemented on graphics processing units (GPUs) to process images with applicable performance in terms of both speed and accuracy Chetlur et al. (2014) , Bahrampour et al. (2015) . This reduces latency and makes it possible for the trainer to be notified in time and correct the trainee remotely.

Numerous studies have been carried out on CNN-based image processing. These methods are mainly divided into single-stage and two-stage detectors; the former are known to be fast, while the latter achieve higher accuracy. Figure 6 illustrates the difference between a two-stage and a single-stage detector. Among single-stage approaches, starting with the network of LeCun et al. (1998) as one of the earliest, plenty of different methods have been presented in the literature, among which the single-shot multi-box detector (SSD) Liu et al. (2016) , RetinaNet Lin et al. (2017) , and you only look once (YOLO) Redmon and Farhadi (2018) are notable examples. Some of these approaches offer several structures, from simpler to more complex, to be chosen depending on whether speed or accuracy is of higher importance. Using these methods involves two phases, training and testing: it is crucial to define a proper optimization problem in the first phase, and indispensable to implement the trained CNN efficiently in the second. Methods like Krizhevsky et al. (2012) , Simonyan and Zisserman (2015) , Szegedy et al. (2015) , and Szegedy et al. (2016) suggest specific CNN models to obtain better outcomes. On the other hand, to further improve accuracy, two-stage detectors like Girshick et al. (2014) first determine regions of interest (ROIs) and then identify probable objects in those areas. As a representative, selective search Uijlings et al. (2013) is designed to generate around 2,000 proposal regions, while a classifier is employed in the later stage. Addressing some challenging problems in these detectors, He et al. (2015) , Girshick (2015) , and Ren et al. (2015) were proposed to enhance the results in terms of both accuracy and speed.
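Both detector families share a common post-processing step, non-maximum suppression (NMS), which prunes the many overlapping candidate boxes a detector emits down to one box per object. A minimal sketch of greedy NMS with intersection-over-union (IoU) as the overlap measure:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and drop any remaining box overlapping it by more than `thresh`."""
    order = np.argsort(scores)[::-1]   # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        mask = np.array([iou(boxes[i], boxes[j]) <= thresh for j in rest],
                        dtype=bool)
        order = rest[mask]
    return keep
```

Production detectors run a vectorized variant of this loop on the GPU, but the greedy logic is the same.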


FIGURE 6 . Example of two-stage and single-stage detectors Kathuria (2021) . (A) Two-stage detector (RCNN). (B) Single-stage detector (YOLO).

In a nutshell, when dealing with critical situations such as the current COVID-19 epidemic, it is highly recommended to employ artificial intelligence techniques in image processing, namely deep CNNs, for medical training tasks. By this means, neither is close physical interaction between the expert and the novice necessary, nor is the quality of the training adversely reduced by the restrictions. In fact, the computer vision approach acts as an interface that makes it possible both to learn from the expert and to evaluate the novice remotely.

7 Conclusion and Future Prospects

The faculty members and students of medical universities fall into the high-risk category due to potential exposure to coronavirus through direct contact and aerosol-generating procedures. As a result, many medical schools have suspended their clinical programs or implemented social distancing in their laboratory practices. Furthermore, the current fight against the COVID-19 virus has consumed nearly all the capacity of health-care systems, and some less urgent medical services, including education, have been limited or even paused. Therefore, unless assistive training tools are utilized to support educational procedures, the training efficiency of medical universities will be reduced, with future consequences for the world health-care system.

Practicing medical tasks under current lock-down policies can be enabled by state-of-the-art techniques in haptics, virtual reality, machine vision, and machine learning. Notably, the utilization of these technologies in medical education has been actively researched in the past years in order to increase the safety and efficiency of surgical training procedures, and the COVID-19 pandemic has now created another motivation for these assistive technologies. In this paper, the existing assistive technologies for medical training are reviewed in the COVID-19 context, and a summary of them is presented in Table 2 .


TABLE 2. The main tools and approaches that help to reduce physical contact in medical training.

As reviewed, a surgical simulator system comprising a VR/AR-based graphical interface and a haptic interface can provide the circumstances of an actual surgical operation for medical students without the necessity of attending hospital environments. Furthermore, by augmenting the system with a second haptic console to form a dual-user haptic system, trainees are given the opportunity to collaborate with, and receive guidance cues from, an expert surgeon in a systematic manner. In contrast to traditional collaboration methodologies, haptic-based collaboration does not require physical contact between the people involved, so the risk of infection is reduced. Assessment of the expertise level of medical students is another element of every training program. The necessity of reducing physical contact during the COVID-19 pandemic has also affected skill-assessment methodologies, as the traditional approaches are based on direct observation by a trainer. In contrast, data-based analysis may be utilized as a systematic approach to skill assessment without any need for physical contact, and some of the ongoing methods in surgical skill evaluation have been reviewed in this paper.
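As a minimal sketch of such data-based analysis, the example below computes two kinematic features frequently used in motion-based skill assessment, total path length and mean squared jerk, from a recorded tool-tip trajectory; lower jerk generally indicates smoother, more expert-like motion. The trajectories, sampling rate, and noise level here are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

def kinematic_features(traj, dt=0.01):
    """Return (path length, mean squared jerk) of a sampled trajectory.

    traj: (N, D) array of tool-tip positions sampled every dt seconds.
    Jerk is the third time derivative, estimated by finite differences.
    """
    vel = np.diff(traj, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    return path_len, np.mean(np.sum(jerk ** 2, axis=1))

# Expert-like smooth reach versus the same reach with small tremor added.
t = np.linspace(0.0, 1.0, 101)
smooth = np.stack([t, np.sin(np.pi * t)], axis=1)
rng = np.random.default_rng(0)
noisy = smooth + 0.005 * rng.standard_normal(smooth.shape)

_, jerk_smooth = kinematic_features(smooth)
_, jerk_noisy = kinematic_features(noisy)
assert jerk_noisy > jerk_smooth  # the tremulous trajectory is far less smooth
```

In practice, such features (or learned features from neural networks, as in the reviewed CNN-based approaches) are fed to a classifier that maps each recorded trial to an expertise level, so assessment requires no trainer in the room.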

Biomedical engineering technology has progressed by leaps and bounds during the past several decades, and advancements in remote diagnostics and remote treatment have been at the leading edge of this field. For instance, the robot-assisted da Vinci tele-surgery system has received a great deal of attention in the health-care marketplace, with more than 5 million surgeries performed in the last two decades DaVinci (2021). However, the rate of advancement in medical training, which usually follows traditional methods, has been considerably slower than in other aspects of the medical field, and modern training technologies have received less attention during the past several decades. While remote training and remote skill assessment pose relatively lower risk to the patient than remote diagnostics and remote treatment, the reason behind this lesser attention has been a lack of sufficient motivation. It is hoped that the motivation created for these advanced medical training methods during the COVID-19 crisis is strong enough to continuously increase their adoption among medical universities. Although wide utilization of these technologies requires a considerable amount of time, effort, and investment, immediate decisions and actions are needed to deploy them broadly. Notably, all of the presented approaches and techniques are also intended for use in normal, non-pandemic situations to provide safer and more efficient medical training. Therefore, even after the world recovers from this crisis, these techniques, tools, and approaches deserve further attention, recognition, investigation, and utilization.

There needs to be a global awareness among medical universities that haptic technology and virtual reality, integrated with machine learning and machine vision, provide an excellent systematic medical training apparatus that meets the requirements of health-care systems to enhance the safety, efficiency, and robustness of medical training.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.

Author Contributions

Conceptualization, HT, SFM, and MM; original draft preparation, MM, AN, PA, AI, and FL; review and editing, HT, SFM, BM, and AL.

Funding

This work was supported in part by the National Institute for Medical Research Development (NIMAD) under Grant No. 942314, in part by Tehran University of Medical Sciences, Tehran, Iran, under Grant No. 35949-43-01-97, and in part by a K. N. Toosi University of Technology, Tehran, Iran, research grant.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1 Less invasive stabilization system

2 Orthognathic surgery

Abdelaal, A. E., Avinash, A., Kalia, M., Hager, G. D., and Salcudean, S. E. (2020). A Multi-Camera, Multi-View System for Training and Skill Assessment for Robot-Assisted Surgery. Int. J. CARS 15, 1369–1377. doi:10.1007/s11548-020-02176-1


Akbari, M., Carriere, J., Meyer, T., Sloboda, R., Husain, S., Usmani, N., et al. (2021). Robotic Ultrasound Scanning with Real-Time Image-Based Force Adjustment: Quick Response for Enabling Physical Distancing during the Covid-19 Pandemic. Front. Robotics AI 8, 62. doi:10.3389/frobt.2021.645424

Anh, N. X., Nataraja, R. M., and Chauhan, S. (2020). Towards Near Real-Time Assessment of Surgical Skills: A Comparison of Feature Extraction Techniques. Comput. Methods Programs Biomed. 187, 105234. doi:10.1016/j.cmpb.2019.105234


Antoniades, A., Spyrou, L., Took, C. C., and Sanei, S. (2016). “Deep Learning for Epileptic Intracranial Eeg Data,” in 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP) , Vietri sul Mare, Salerno, Italy , September 13–16, 2016 ( IEEE ), 1–6. doi:10.1109/mlsp.2016.7738824

ARASH-ASiST (2019). Dataset. ARAS Haptics: A System for Eye Surgery Training. Available at: https://aras.kntu.ac.ir/arash-asist/ (Accessed 08 05, 2020).


Babushkin, V., Jamil, M. H., Park, W., and Eid, M. (2021). Sensorimotor Skill Communication: A Literature Review. IEEE Access 9, 75132–75149. doi:10.1109/access.2021.3081449

Bahrampour, S., Ramakrishnan, N., Schott, L., and Shah, M. (2015). Comparative Study of Caffe, Neon, Theano, and Torch for Deep Learning. CoRR . arXiv:1511.06435. Available at: http://arxiv.org/abs/1511.06435 .

Bartlett, J. D., Lawrence, J. E., Yan, M., Guevel, B., Stewart, M. E., Audenaert, E., et al. (2020). The Learning Curves of a Validated Virtual Reality Hip Arthroscopy Simulator. Arch. Orthopaedic Trauma Surg. 140 (6), 761–767. doi:10.1007/s00402-020-03352-3

Basdogan, C., De, S., Kim, J., Muniyandi, M., Kim, H., and Srinivasan, M. A. (2004). Haptics in Minimally Invasive Surgical Simulation and Training. IEEE Comput. Graphics Appl. 24, 56–64. doi:10.1109/mcg.2004.1274062

Brown, J. D., O’Brien, C. E., Leung, S. C., Dumon, K. R., Lee, D. I., and Kuchenbecker, K. J. (2016). Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer. IEEE Trans. Biomed. Eng. 64, 2263–2275. doi:10.1109/TBME.2016.2634861

Buss, M., Peer, A., Schauß, T., Stefanov, N., Unterhinninghofen, U., Behrendt, S., et al. (2010). Development of a Multi-Modal Multi-User Telepresence and Teleaction System. Int. J. Robot. Res. 29, 1298–1316. doi:10.1177/0278364909351756

Caccianiga, G., Mariani, A., de Paratesi, C. G., Menciassi, A., and De Momi, E. (2021). Multi-Sensory Guidance and Feedback for Simulation-Based Training in Robot Assisted Surgery: A Preliminary Comparison of Visual, Haptic, and Visuo-Haptic. IEEE Robot. Autom. Lett. 6, 3801–3808. doi:10.1109/lra.2021.3063967

Cecil, J., Gupta, A., and Pirela-Cruz, M. (2018). An Advanced Simulator for Orthopedic Surgical Training. Int. J. Comput. Assist. Radiol. Surg. 13, 305–319. doi:10.1007/s11548-017-1688-0

Chetlur, S., Woolley, C., Vandermersch, P., Cohen, J., Tran, J., Catanzaro, B., et al. (2014). cuDNN: Efficient Primitives for Deep Learning. CoRR . arXiv: 1410.0759. Available at: http://arxiv.org/abs/1410.0759 .

Chitra, R., and Seenivasagam, V. (2013). Heart Disease Prediction System Using Supervised Learning Classifier. Bonfring Int. J. Softw. Eng. Soft Comput. 3, 01–07. doi:10.9756/bijsesc.4336

DaVinci (2021). Dataset. Enabling Surgical Care to Get Patients Back to what Matters. Available at: https://www.intuitive.com/en-us/products-and-services/da-vinci (Accessed July 4, 2021).

Desselle, M. R., Brown, R. A., James, A. R., Midwinter, M. J., Powell, S. K., and Woodruff, M. A. (2020). Augmented and Virtual Reality in Surgery. Comput. Sci. Eng. 22, 18–26. doi:10.1109/mcse.2020.2972822

Farooq, A., Anwar, S., Awais, M., and Rehman, S. (2017). “A Deep Cnn Based Multi-Class Classification of Alzheimer’s Disease Using Mri,” in 2017 IEEE International Conference on Imaging systems and techniques (IST) (IEEE) , Beijing, China , October 18–20, 2017 , 1–6. doi:10.1109/ist.2017.8261460

Fawaz, H. I., Forestier, G., Weber, J., Idoumghar, L., and Muller, P.-A. (2018). “Evaluating Surgical Skills from Kinematic Data Using Convolutional Neural Networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention , Granada, Spain , September 16–20, 2018 ( Springer ), 214–221. doi:10.1007/978-3-030-00937-3_25

Feizi, N., Tavakoli, M., Patel, R. V., and Atashzar, S. F. (2021). Robotics and Ai for Teleoperation, Tele-Assessment, and Tele-Training for Surgery in the Era of Covid-19: Existing Challenges, and Future Vision. Front. Robot. AI 8, 610677. doi:10.3389/frobt.2021.610677

Funke, I., Mees, S. T., Weitz, J., and Speidel, S. (2019). Video-based Surgical Skill Assessment Using 3d Convolutional Neural Networks. Int. J. Comput. Assist. Radiol. Surg. 14, 1217–1225. doi:10.1007/s11548-019-01995-1

Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition , Columbus, OH , June 23–28, 2014 , 580–587. doi:10.1109/cvpr.2014.81

Girshick, R. (2015). “Fast R-Cnn,” in Proceedings of the IEEE international conference on computer vision , Boston, MA , June 7–12, 2015 , 1440–1448. doi:10.1109/iccv.2015.169

Han, S. (2017). Efficient Methods and Hardware for Deep Learning . Stanford University .

He, K., Zhang, X., Ren, S., and Sun, J. (2015). Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. pattern Anal. machine intell. 37, 1904–1916. doi:10.1109/tpami.2015.2389824

Hodges, C., Moore, S., Lockee, B., Trust, T., and Bond, A. (2020). The Difference between Emergency Remote Teaching and Online Learning. Boulder, CO. Educause Rev. 27 (1), 1–9.

Hojati, N., Motaharifar, M., Taghirad, H., and Malekzadeh, A. (2019). “Skill Assessment Using Kinematic Signatures: Geomagic Touch Haptic Device,” in 2019 7th International Conference on Robotics and Mechatronics (ICRoM) , Tehran, Iran , November 20–22, 2019 ( IEEE ), 186–191. doi:10.1109/icrom48714.2019.9071892

Iwashita, Y., Hibi, T., Ohyama, T., Umezawa, A., Takada, T., Strasberg, S. M., et al. (2017). Delphi Consensus on Bile Duct Injuries during Laparoscopic Cholecystectomy: an Evolutionary Cul-De-Sac or the Birth Pangs of a New Technical Framework? J. Hepato-Biliary-Pancreatic Sci. 24, 591–602. doi:10.1002/jhbp.503

Javaux, A., Joyeux, L., Deprest, J., Denis, K., and Vander Poorten, E. (2018). "Motion-based Skill Analysis in a Fetoscopic Spina-Bifida Repair Training Model," in CRAS, London, United Kingdom, September 10–11, 2018.

Jonas, J. B., Rabethge, S., and Bender, H.-J. (2003). Computer-assisted Training System for Pars Plana Vitrectomy. Acta Ophthalmol. Scand. 81, 600–604. doi:10.1046/j.1395-3907.2003.0078.x

Kathuria, A. (2021). Dataset. Tutorial on Implementing yolo V3 from Scratch in Pytorch . Available at: https://blog.paperspace.com/how-to-implement-a-yolo-object-detector-in-pytorch/ . (Accessed on 01 07, 2021).

Kazemian, M., Moshiri, B., Nikbakht, H., and Lucas, C. (2005). “Protein Secondary Structure Classifiers Fusion Using Owa,” in International Symposium on Biological and Medical Data Analysis , Aveiro, Portugal , November 10–11, 2005 ( Springer ), 338–345. doi:10.1007/11573067_34

Kazemian, M., Moshiri, B., Palade, V., Nikbakht, H., and Lucas, C. (2010). Using Classifier Fusion Techniques for Protein Secondary Structure Prediction. Int. J. Comput. Intell. Bioinf. Syst. Biol. 1, 418–434. doi:10.1504/ijcibsb.2010.038225

Khademian, B., and Hashtrudi-Zaad, K. (2012). Dual-user Teleoperation Systems: New Multilateral Shared Control Architecture and Kinesthetic Performance Measures. Ieee/asme Trans. Mechatron. 17, 895–906. doi:10.1109/tmech.2011.2141673

Khan, A., Mellor, S., King, R., Janko, B., Harwin, W., Sherratt, R. S., et al. (2020). Generalized and Efficient Skill Assessment from Imu Data with Applications in Gymnastics and Medical Training. New York, NY, ACM Trans. Comput. Healthc. 2 (1), 1–21.

Khanna, R. C., Honavar, S. G., Metla, A. L., Bhattacharya, A., and Maulik, P. K. (2020). Psychological Impact of Covid-19 on Ophthalmologists-In-Training and Practising Ophthalmologists in india. Indian J. Ophthalmol. 68, 994. doi:10.4103/ijo.ijo_1458_20

Kotsis, S. V., and Chung, K. C. (2013). Application of See One, Do One, Teach One Concept in Surgical Training. Plast. Reconstr. Surg. 131, 1194. doi:10.1097/prs.0b013e318287a0b3

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). “Imagenet Classification with Deep Convolutional Neural Networks,” in Advances in neural information processing systems , Lake Tahoe, NV , December 3–6, 2012 , 1097–1105.

kumar Renganayagalu, S., Mallam, S. C., and Nazir, S. (2021). Effectiveness of Vr Head Mounted Displays in Professional Training: A Systematic Review. Technol. Knowl. Learn . (Springer), 1–43. doi:10.1007/s10758-020-09489-9

Lavanchy, J. L., Zindel, J., Kirtac, K., Twick, I., Hosgor, E., Candinas, D., et al. (2021). Automation of Surgical Skill Assessment Using a Three-Stage Machine Learning Algorithm. Scientific Rep. 11, 1–9. doi:10.1038/s41598-021-88175-x

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based Learning Applied to Document Recognition. Proc. IEEE 86, 2278–2324. doi:10.1109/5.726791

Lefor, A. K., Harada, K., Dosis, A., and Mitsuishi, M. (2020). Motion Analysis of the Jhu-Isi Gesture and Skill Assessment Working Set Using Robotics Video and Motion Assessment Software. Int. J. Comput. Assist. Radiol. Surg. 15, 2017–2025. doi:10.1007/s11548-020-02259-z

Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P. (2017). “Focal Loss for Dense Object Detection,” in Proceedings of the IEEE international conference on computer vision , 2980–2988. doi:10.1109/iccv.2017.324

Liu, F., Lelevé, A., Eberard, D., and Redarce, T. (2015). “A Dual-User Teleoperation System with Online Authority Adjustment for Haptic Training,” in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , Milan, Italy , August 25–29, 2015 , 1168–1171. doi:10.1109/embc.2015.7318574

Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., et al. (2016). “Ssd: Single Shot Multibox Detector,” in European conference on computer vision ( Springer ), 21–37. doi:10.1007/978-3-319-46448-0_2

Liu, F., Licona, A. R., Lelevé, A., Eberard, D., Pham, M. T., and Redarce, T. (2020). An Energy-Based Approach for N-Dof Passive Dual-User Haptic Training Systems. Robotica 38, 1155–1175. doi:10.1017/s0263574719001309

Lohre, R., Bois, A. J., Athwal, G. S., and Goel, D. P. (2020). Improved Complex Skill Acquisition by Immersive Virtual Reality Training: a Randomized Controlled Trial. JBJS 102, e26. doi:10.2106/jbjs.19.00982

Lotfi, F., Ajallooeian, V., and Taghirad, H. D. (2018). “Robust Object Tracking Based on Recurrent Neural Networks,” in 2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM) , 507–511. doi:10.1109/icrom.2018.8657608

Lotfi, F., Hasani, P., Faraji, F., Motaharifar, M., Taghirad, H., and Mohammadi, S. (2020). “Surgical Instrument Tracking for Vitreo-Retinal Eye Surgical Procedures Using Aras-Eye Dataset,” in 2020 28th Iranian Conference on Electrical Engineering (ICEE) ( IEEE ), 1–6. doi:10.1109/icee50131.2020.9260679

Lu, Z., Huang, P., Dai, P., Liu, Z., and Meng, Z. (2017). Enhanced Transparency Dual-User Shared Control Teleoperation Architecture with Multiple Adaptive Dominance Factors. Int. J. Control. Autom. Syst. 15, 2301–2312. doi:10.1007/s12555-016-0467-y

Medellin-Castillo, H. I., Zaragoza-Siqueiros, J., Govea-Valladares, E. H., de la Garza-Camargo, H., Lim, T., and Ritchie, J. M. (2020). Haptic-enabled Virtual Training in Orthognathic Surgery. Virtual Reality 25, 53–67. doi:10.1007/s10055-020-00438-6

Moody, L., Waterworth, A., McCarthy, A. D., Harley, P. J., and Smallwood, R. H. (2008). The Feasibility of a Mixed Reality Surgical Training Environment. Virtual Reality 12, 77–86. doi:10.1007/s10055-007-0080-8

Motaharifar, M., and Taghirad, H. D. (2020). A Force Reflection Robust Control Scheme with Online Authority Adjustment for Dual User Haptic System. Mech. Syst. Signal Process. 135, 106368. doi:10.1016/j.ymssp.2019.106368

Motaharifar, M., Bataleblu, A., and Taghirad, H. (2016). “Adaptive Control of Dual User Teleoperation with Time Delay and Dynamic Uncertainty,” in 2016 24th Iranian conference on electrical engineering (ICEE) , Shiraz, Iran , May 10–12, 2016 ( IEEE ), 1318–1323. doi:10.1109/iraniancee.2016.7585725

Motaharifar, M., Taghirad, H. D., Hashtrudi-Zaad, K., and Mohammadi, S. F. (2019a). Control of Dual-User Haptic Training System with Online Authority Adjustment: An Observer-Based Adaptive Robust Scheme. IEEE Trans. Control. Syst. Technol. 28 (6), 2404–2415. doi:10.1109/tcst.2019.2946943

Motaharifar, M., Taghirad, H. D., Hashtrudi-Zaad, K., and Mohammadi, S.-F. (2019b). Control Synthesis and ISS Stability Analysis of Dual-User Haptic Training System Based on S-Shaped Function. IEEE/ASME Trans. Mechatron. 24 (4), 1553–1564. doi:10.1109/tmech.2019.2917448

Naeini, M. P., Moshiri, B., Araabi, B. N., and Sadeghi, M. (2014). Learning by Abstraction: Hierarchical Classification Model Using Evidential Theoretic Approach and Bayesian Ensemble Model. Neurocomputing 130, 73–82. doi:10.1016/j.neucom.2012.03.041

Nudehi, S. S., Mukherjee, R., and Ghodoussi, M. (2005). A Shared-Control Approach to Haptic Interface Design for Minimally Invasive Telesurgical Training. IEEE Trans. Control. Syst. Technol. 13, 588–592. doi:10.1109/tcst.2004.843131

Redmon, J., and Farhadi, A. (2018). Yolov3: An Incremental Improvement. CoRR abs/1804.02767 . Available at: http://arxiv.org/abs/1804.02767 .

Ren, S., He, K., Girshick, R., and Sun, J. (2015). “Faster R-Cnn: Towards Real-Time Object Detection with Region Proposal Networks,” in Advances in neural information processing systems , Montreal, Quebec, Canada , December 7–12, 2015 , 91–99.

Shahbazi, M., Atashzar, S. F., Talebi, H. A., and Patel, R. V. (2014a). An Expertise-Oriented Training Framework for Robotics-Assisted Surgery. Proc. IEEE Int. Conf. Rob. Autom. , 5902–5907. doi:10.1109/icra.2014.6907728

Shahbazi, M., Atashzar, S. F., Talebi, H. A., and Patel, R. V. (2014b). Novel Cooperative Teleoperation Framework: Multi-Master/single-Slave System. IEEE/ASME Trans. Mechatron. 20, 1668–1679. doi:10.1109/tmech.2014.2347034

Shahbazi, M., Atashzar, S. F., and Patel, R. V. (2018a). A Systematic Review of Multilateral Teleoperation Systems. IEEE Trans. Haptics 11, 338–356. doi:10.1109/toh.2018.2818134

Shahbazi, M., Atashzar, S. F., Ward, C., Talebi, H. A., and Patel, R. V. (2018b). Multimodal Sensorimotor Integration for Expert-In-The-Loop Telerobotic Surgical Training. IEEE Trans. Robot. 34, 1549–1564. doi:10.1109/tro.2018.2861916

Sharma, D., and Bhaskar, S. (2020). Addressing the Covid-19 burden on Medical Education and Training: the Role of Telemedicine and Tele-Education during and beyond the Pandemic. Front. Public Health 8, 838. doi:10.3389/fpubh.2020.589669

Si, W.-X., Liao, X.-Y., Qian, Y.-L., Sun, H.-T., Chen, X.-D., Wang, Q., et al. (2019). Assessing Performance of Augmented Reality-Based Neurosurgical Training. Vis. Comput. Industry, Biomed. Art 2, 6. doi:10.1186/s42492-019-0015-8

Simonyan, K., and Zisserman, A. (2015). “Very Deep Convolutional Networks for Large-Scale Image Recognition,” in International Conference on Learning Representations , San Diego, CA , May 7–9, 2015 .

Singh, R. P., Javaid, M., Kataria, R., Tyagi, M., Haleem, A., and Suman, R. (2020). Significant Applications of Virtual Reality for Covid-19 Pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. 14 (4), 661–664. doi:10.1016/j.dsx.2020.05.011

Spera, C., Somerville, A., Caniff, S., Keenan, J., and Fischer, M. D. (2020). Virtual Reality Haptic Surgical Simulation for Sub-retinal Administration of an Ocular Gene Therapy. Invest. Ophthalmol. Vis. Sci. 61, 4503. doi:10.1039/d0ay90130j

Stone, S., and Bernstein, M. (2007). Prospective Error Recording in Surgery: an Analysis of 1108 Elective Neurosurgical Cases. Neurosurgery 60, 1075–1082. doi:10.1227/01.neu.0000255466.22387.15

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). “Going Deeper with Convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition , Boston, MA , June 7–12, 2015 , 1–9. doi:10.1109/cvpr.2015.7298594

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016). “Rethinking the Inception Architecture for Computer Vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition , Las Vegas, NV , June 27–30, 2016 , 2818–2826. doi:10.1109/cvpr.2016.308

Tahmasebi, A. M., Hashtrudi-Zaad, K., Thompson, D., and Abolmaesumi, P. (2008). A Framework for the Design of a Novel Haptic-Based Medical Training Simulator. IEEE Trans. Inf. Technol. Biomed. 12, 658–666. doi:10.1109/titb.2008.926496

Tavakoli, M., Carriere, J., and Torabi, A. (2020). Robotics, Smart Wearable Technologies, and Autonomous Intelligent Systems for Healthcare during the Covid-19 Pandemic: An Analysis of the State of the Art and Future Vision. Adv. Intell. Syst. 2, 2000071. doi:10.1002/aisy.202000071

Uijlings, J. R., Van De Sande, K. E., Gevers, T., and Smeulders, A. W. (2013). Selective Search for Object Recognition. Int. J. Comput. Vis. 104, 154–171. doi:10.1007/s11263-013-0620-5

Wang, Z., and Fey, A. M. (2018). “Satr-dl: Improving Surgical Skill Assessment and Task Recognition in Robot-Assisted Surgery with Deep Neural Networks,” in In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , Honolulu, HI , July 17–21, 2018 ( IEEE ), 1793–1796. doi:10.1109/EMBC.2018.8512575

Wang, D., Shi, Y., Liu, S., Zhang, Y., and Xiao, J. (2014). Haptic Simulation of Organ Deformation and Hybrid Contacts in Dental Operations. IEEE Trans. Haptics 7, 48–60. doi:10.1109/toh.2014.2304734

Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., et al. (2018). Temporal Segment Networks for Action Recognition in Videos. IEEE Trans. pattern Anal. machine intell. 41, 2740–2755. doi:10.1109/TPAMI.2018.2868668

Weng, J., Weng, J., Zhang, J., Li, M., Zhang, Y., and Luo, W. (2019). “Deepchain: Auditable and Privacy-Preserving Deep Learning with Blockchain-Based Incentive,” in IEEE Transactions on Dependable and Secure Computing . doi:10.1109/tdsc.2019.2952332

Yari, S. S., Jandhyala, C. K., Sharareh, B., Athiviraham, A., and Shybut, T. B. (2018). Efficacy of a Virtual Arthroscopic Simulator for Orthopaedic Surgery Residents by Year in Training. Orthopaedic J. Sports Med. 6, 2325967118810176. doi:10.1177/2325967118810176

Zappella, L., Béjar, B., Hager, G., and Vidal, R. (2013). Surgical Gesture Classification from Video and Kinematic Data. Med. image Anal. 17, 732–745. doi:10.1016/j.media.2013.04.007

Zendejas, B., Jakub, J. W., Terando, A. M., Sarnaik, A., Ariyan, C. E., Faries, M. B., et al. (2017). Laparoscopic Skill Assessment of Practicing Surgeons Prior to Enrollment in a Surgical Trial of a New Laparoscopic Procedure. Surg. Endosc. 31, 3313–3319. doi:10.1007/s00464-016-5364-1

Zhang, W., Wang, Y., Yang, L., and Wang, C. (2020). Suspending Classes Without Stopping Learning: China’s Education Emergency Management Policy in the Covid-19 Outbreak. Multidisciplinary digital publishing institute, J. Risk Finan. Manag. 13 (3), 1–6.

Zia, A., and Essa, I. (2018). Automated Surgical Skill Assessment in Rmis Training. Int. J. Comput. Assist. Radiol. Surg. 13, 731–739. doi:10.1007/s11548-018-1735-5

Zia, A., Sharma, Y., Bettadapura, V., Sarin, E. L., Clements, M. A., and Essa, I. (2015). “Automated Assessment of Surgical Skills Using Frequency Analysis,” in International Conference on Medical Image Computing and Computer-Assisted Intervention , Munich, Germany , October 5–9, 2015 ( Springer ), 430–438. doi:10.1007/978-3-319-24553-9_53

Zia, A., Sharma, Y., Bettadapura, V., Sarin, E. L., Ploetz, T., Clements, M. A., et al. (2016). Automated Video-Based Assessment of Surgical Skills for Training and Evaluation in Medical Schools. Int. J. Comput. Assist. Radiol. Surg. 11, 1623–1636. doi:10.1007/s11548-016-1468-2

Keywords: COVID-19 pandemic, medical training, haptic, virtual reality, artificial intelligence

Citation: Motaharifar M, Norouzzadeh A, Abdi P, Iranfar A, Lotfi F, Moshiri B, Lashay A, Mohammadi SF and Taghirad HD (2021) Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic. Front. Robot. AI 8:612949. doi: 10.3389/frobt.2021.612949

Received: 01 October 2020; Accepted: 29 July 2021; Published: 12 August 2021.

Copyright © 2021 Motaharifar, Norouzzadeh, Abdi, Iranfar, Lotfi, Moshiri, Lashay, Mohammadi and Taghirad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Seyed Farzad Mohammadi, [email protected] ; Hamid D. Taghirad, [email protected]


IMAGES

  1. (PDF) Relative Contribution of Haptic Technology to Assessment and

    latest research papers on haptic technology

  2. PPT

    latest research papers on haptic technology

  3. (PDF) A review paper on Haptic technology applications

    latest research papers on haptic technology

  4. PPT

    latest research papers on haptic technology

  5. Haptic Technology Market Report: Trends, Forecast and Competitive Analysis

    latest research papers on haptic technology

  6. (PDF) An Application-Based Review of Haptics Technology

    latest research papers on haptic technology

VIDEO

  1. shapeShift: Rich VR Haptics With Shape-Changing Robotic Displays

  2. Mapping the Brain

  3. Ecocem Science Symposium

  4. Multi-Sensory Devices group: 4 year anniversary

  5. Haptic Control for Robot Teleoperation

  6. Stanford Seminar

COMMENTS

  1. Recent advances in multi-mode haptic feedback technologies towards

    In this review, we first introduce the basic physiological mechanism of haptics, including kinesthetic sensation and tactile sensation. Then, we discuss stimulation mechanisms in haptic interfaces which have potentials in future portable and wearable electronics, where the mechanisms can be divided into three types, forced based haptic interfaces [[23], [24], [25]], thermal based haptic ...

  2. Recent Advances and Opportunities of Active Materials for Haptic

    This paper's primary goal is to review the current status and opportunities of active materials or advanced functional materials-based haptic technology. This paper also intends to assess the role of active materials for haptic innovations and their potential contributions to the technological needs of emerging haptic technologies, namely tele ...

  3. Haptic technology and its application in education and learning

    Haptic technology provides a new human-computer interactive method, which allows the user to feel the motion and haptic information in virtual environment with haptic devices, and it's also a new kind of learning means. In this paper we explained the basic concepts of haptic technology and how haptic technology works, followed by a summary of main haptic devices and the key technologies ...

  4. An Application-Based Review of Haptics Technology

    Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. ... A new haptic technology called ...

  5. Touch to Learn: A Review of Haptic Technology's Impact on Skill

    However, a growing trend in this subject is a more profound research of the performance differences between these two haptic modalities, revealing new perspectives on their efficacy. [ 26 , 27 ] For example, Kim et al. [ 26 ] investigated the importance and the perceptual differences of multimedia information to teach students about the ...

  6. Haptic Technology: A comprehensive review on its applications and

    This paper describes how haptic technology works, its devices, applications, and disadvantages. A brief explanation on haptics functions and its implementation in various fields of study is provided in this paper. A description on some of its future applications and a few limitations of this technology is also provided. Previous article in issue.

  7. Active electronic skin: an interface towards ambient haptic feedback on

    This paper presents the concept of an active electronic skin, characterized by three features: richness (multi-modal haptic stimuli), interactivity (bi-directional sensing and actuation ...

  8. Haptics and VR: Technology and Applications

    A haptic device containing haptic actuators is an interface that provides computed haptic information to users. This issue presents the latest haptic research on relevant topics, particularly psychological studies on human touch, haptic simulation using a haptic model and rendering, haptic actuators/devices, and haptic applications.

  9. Haptic Devices: Wearability-Based Taxonomy and Literature Review

    In the last decade, several new haptic devices have been developed, contributing to the definition of more realistic virtual environments. An overview of this topic requires a description of the various technologies employed in building such devices, and of their application domains. This survey describes the current technology underlying haptic devices, based on the concept of "wearability ...

  10. Towards Enabling Haptic Communications over 6G: Issues and ...

    This research paper provides a comprehensive overview of the challenges and potential solutions related to enabling haptic communication over the Tactile Internet in the context of 6G networks. The increasing demand for multimedia services and device proliferation has resulted in limited radio resources, posing challenges in their efficient allocation for Device-to-Device (D2D)-assisted haptic ...

  11. Active mechanical haptics with high-fidelity perceptions for immersive

    Research has shown that authentic and active mechanical haptics can lead to simultaneous physiological and psychological responses, which are difficult to trigger by visual and auditory stimuli ...

  12. (PDF) An Application-Based Review of Haptics Technology

    Haptics or haptic technology is defined as the technology of applying touch sensation while interacting with a physical or virtual environment [1]. Physical interaction may be performed at ...

  13. Applications of Haptic Technology, Virtual Reality, and Artificial

    A dual-user haptic system is a more recent advancement in haptic technology; it consists of two haptic consoles, one for the trainer and one for the trainee (Shahbazi et al., 2018a). Remarkably, traditional collaboration methods require direct physical contact between the persons conducting the operation, whereas the haptic-based ...

  14. Applications of Haptic Technology, Virtual Reality, and Artificial

    This paper examines how haptic technology, virtual reality, and artificial intelligence help to reduce the physical contact in medical training during the COVID-19 Pandemic. Notably, any mistake made by the trainees during the education process might lead to undesired complications for the patient. …

  15. Designing Pedagogically Effective Haptic Systems for Learning: A ...

    Haptic technology enables users to utilize their sense of touch while engaging with a virtual representation of objects in a simulated environment. It is a bidirectional technology in that it facilitates the interaction between the user and these virtual representations by allowing them to apply force onto one another, which is analogous to our real-world interactions with physical objects as ...

  16. An Overview of Wearable Haptic Technologies and Their Performance in

    Re-creating such extensive haptic experiences in a virtual setting requires complex technological solutions, and this is a fast-growing area of interest for researchers and engineers. Because we often interact with the environment using our hands, much of the focus in haptic technology research has been dedicated to hand-based devices.

  17. Haptic Technology: A comprehensive review on its applications and

    This paper describes how haptic technology works, its devices, applications, and disadvantages. A brief explanation of haptic functions and their implementation in various fields of study is provided, along with a description of some future applications and a few limitations of this technology.

  18. Haptics: The Present and Future of Artificial Touch Sensation

    This article reviews the technology behind creating artificial touch sensations and the relevant aspects of human touch. We focus on the design and control of haptic devices and discuss the best practices for generating distinct and effective touch sensations. Artificial haptic sensations can present information to users, help them complete a task, augment or replace the other senses, and add ...

  19. Haptic display for virtual reality: progress and challenges

    Abstract. Immersion, interaction, and imagination are three features of virtual reality (VR). Existing VR systems provide fairly realistic visual and auditory feedback; however, they are poor at haptic feedback, by means of which humans perceive the physical world via abundant haptic properties. Haptic display is an interface aiming to ...

  20. Touching at a Distance: Digital Intimacies, Haptic Platforms, and the

    Drawing together platform studies, digital intimacy studies, phenomenology of touch, and ethics of technology, we argue that these new haptic communication devices require specific ethical consideration of consent. The paper describes several technologies, including Kiiroo teledildonics, the Kissenger, the Apple Watch, and Hey Bracelet ...

  21. Haptics: Technology and Applications

    A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering". The deadline for manuscript submissions was 31 October 2021, but new submissions are still accepted. Related Special Issue: Haptics: Technology and Applications.

  22. The Whole World In Your Hand: Major Advances In Haptic Technology

    WeTac is an ultrathin, glove-like haptic technology. Yao et al. Nature (2022), DOI: 10.1038/s42256-022-00543-y. The first challenge of creating the WeTac was to come up with a design that could ...