Gesture control offers numerous advantages over physical hardware. However, it has yet to gain popularity because most gesture control systems require extra sensors or depth cameras to detect or capture gesture movement before a meaningful signal can be triggered for the corresponding action. This research proposes a hand gesture control system that uses the object detection algorithm YOLOv3, combined with handcrafted rules, to achieve dynamic gesture control of a computer. The project uses a single RGB camera for hand gesture recognition and localization. Because there is no standard set of gestures for human–computer interaction, the dataset of all gestures used for training, together with their corresponding commands, was custom designed by the authors. Algorithms that integrate gesture commands with virtual mouse and keyboard input through the Pynput library in Python were developed to handle commands such as mouse control and media control. The YOLOv3 model achieved a mean average precision (mAP) of 96.68% on the test set. A rule-based gesture interpretation algorithm was successfully used to transform static gesture recognition into dynamic gesture control.
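
The abstract describes two pieces that sit on top of the YOLOv3 detector: handcrafted rules that turn static per-frame detections into dynamic gestures, and Pynput calls that translate those gestures into virtual mouse and keyboard input. The sketch below shows one way such a rule layer could be wired up; the gesture class names, the RuleEngine structure, the frame-persistence threshold, and the screen resolution are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): turning per-frame static gesture
# detections into dynamic mouse/media commands via pynput. Gesture labels,
# RuleEngine, and its thresholds are hypothetical.
from dataclasses import dataclass
from pynput.mouse import Controller as Mouse, Button
from pynput.keyboard import Controller as Keyboard, Key

mouse = Mouse()
keyboard = Keyboard()

@dataclass
class Detection:
    label: str   # class name predicted by the detector, e.g. "palm_open"
    cx: float    # bounding-box centre, normalised to [0, 1]
    cy: float

class RuleEngine:
    """Tracks how the same static gesture persists and moves across
    consecutive frames, and only then fires a dynamic command."""

    def __init__(self, screen_w=1920, screen_h=1080, hold_frames=3):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.hold_frames = hold_frames  # frames a gesture must persist
        self.prev = None                # previous Detection
        self.streak = 0                 # consecutive frames with same label

    def update(self, det: Detection):
        # Require the same static gesture for a few frames before acting,
        # which filters out single-frame misdetections.
        if self.prev is not None and det.label == self.prev.label:
            self.streak += 1
        else:
            self.streak = 0
        if self.streak >= self.hold_frames:
            self.dispatch(det)
        self.prev = det

    def dispatch(self, det: Detection):
        if det.label == "palm_open":
            # Cursor follows the hand: map the normalised box centre to pixels.
            mouse.position = (int(det.cx * self.screen_w),
                              int(det.cy * self.screen_h))
        elif det.label == "fist":
            mouse.click(Button.left, 1)
            self.streak = 0                    # one click per held gesture
        elif det.label == "two_fingers_up":
            keyboard.tap(Key.media_volume_up)  # media key; platform-dependent
        elif det.label == "two_fingers_down":
            keyboard.tap(Key.media_volume_down)
```

In use, each frame's YOLOv3 output would be parsed into a Detection and passed to RuleEngine.update(), so an action only fires once the same gesture has been seen for several consecutive frames, which is one simple way a static class prediction can be interpreted as a dynamic gesture.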