Abstract
During robotic ultrasound examinations, maintaining control of the probe's contact pressure and angle is crucial for obtaining consistent images and, in turn, an accurate diagnosis. Although force and torque sensors are commonly used to monitor contact force, their accuracy can be affected by sensor placement and overall system complexity. To address these issues, we propose a sensorless approach that estimates the contact force difference between the two sides of an ultrasound probe. The proposed method uses a deep learning model, specifically a convolutional neural network–long short-term memory (CNN–LSTM) network, that leverages sequential ultrasound images to estimate force differentials. Experiments were conducted on three tissue-mimicking phantoms and an in vivo human arm to train and evaluate the approach. By varying the applied force difference on the phantoms and the human arm, we achieved root mean squared errors of 0.501 and 0.553 N, respectively, in predicting the contact force difference. For performance assessment, we compared our approach against a confidence-map-based method and several CNN–LSTM variants, and showed that it outperforms these baselines in accuracy. The results indicate that the proposed method effectively predicts probe imbalance without relying on physical sensors at inference time and can be deployed to control probes during robotic ultrasound examinations. Our sensorless approach therefore offers a promising path toward more consistent and reliable robotic ultrasound scanning.
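The abstract describes a CNN–LSTM that maps a sequence of ultrasound frames to a scalar force difference. As an illustration only (the paper's actual architecture, layer sizes, and training details are not given here), the following is a minimal sketch of that idea in PyTorch: a small CNN encodes each frame, an LSTM aggregates the frame features over time, and a linear head regresses the force difference in newtons. All layer dimensions and names are hypothetical.

```python
# Hypothetical CNN–LSTM force-difference regressor (illustrative sketch,
# NOT the authors' implementation; all hyperparameters are assumptions).
import torch
import torch.nn as nn

class CnnLstmForceNet(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=32):
        super().__init__()
        # Per-frame CNN encoder: grayscale ultrasound frame -> feature vector
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal aggregation over the frame sequence
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Regression head: predicted left-right contact force difference (N)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                     # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1))         # encode each frame: (b*t, feat_dim)
        out, _ = self.lstm(f.view(b, t, -1))  # run LSTM over the time axis
        return self.head(out[:, -1]).squeeze(-1)  # use last hidden state

model = CnnLstmForceNet()
pred = model(torch.randn(2, 8, 1, 64, 64))    # two sequences of 8 frames each
```

Such a model would typically be trained with a mean squared error loss against sensor-measured force differences, so that at inference time only the image stream is needed, consistent with the sensorless goal stated above.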
| Original language | English |
|---|---|
| Pages (from-to) | 10594-10601 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 10 |
| Issue number | 10 |
| DOIs | |
| State | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2016 IEEE.
Keywords
- Deep learning for visual perception
- force and tactile sensing
- medical robots and systems
- ultrasound scanning