Dataset Overview
EgoNRG (Egocentric Navigation Robot Gestures) is a comprehensive dataset featuring joint hand and arm segmentations captured from 32 participants (14 female, 18 male) performing 12 gesture-based commands for ground vehicle robot control. The participants were divided into four groups of eight, with each group executing a specific set of four gestures. Ten of the twelve gestures were derived from the Army Field Manual; the remaining two are one deictic gesture and one emblem gesture. The dataset encompasses 3,044 videos and 160,639 annotated frames. The dataset features:
- Joint hand and arm segmentations of each participant's left and right limbs.
- Gestures performed with (1) long sleeves and gloves (replica flame-resistant solid-color clothing and military camouflage) and (2) bare skin, to mimic conditions in real-world industrial and military environments.
- Environments with and without background people visible.
- Data captured in both indoor and outdoor environments at various points throughout the day (morning, midday, and dusk).
- Data captured from four synchronized monochrome cameras, each with a different perspective.
- Gestures performed map directly to standard ground vehicle robot commands (stop, move forward, go left, move in reverse, etc.); see the loading sketch after this list.
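To make the structure concrete, here is a minimal Python sketch of how gesture labels might be mapped to robot commands and how annotated frames might be paired with their segmentation masks. The label strings, directory layout, and file naming below are illustrative assumptions, not the official EgoNRG release structure.

```python
from pathlib import Path

# Hypothetical mapping from gesture labels to robot commands.
# These label strings are assumptions, not the official EgoNRG class names.
GESTURE_TO_COMMAND = {
    "halt": "stop",
    "advance": "move_forward",
    "turn_left": "go_left",
    "back_up": "move_in_reverse",
}

def iter_annotated_frames(root: Path):
    """Yield (frame, mask) path pairs.

    Assumes segmentation masks mirror the frame tree under a parallel
    'masks' directory; the actual release layout may differ.
    """
    frames_dir = root / "frames"
    for frame in sorted(frames_dir.rglob("*.png")):
        mask = root / "masks" / frame.relative_to(frames_dir)
        if mask.exists():
            yield frame, mask

if __name__ == "__main__":
    root = Path("EgoNRG")  # assumed dataset root directory
    for frame, mask in iter_annotated_frames(root):
        gesture = frame.parent.name  # assumes frames are grouped by gesture
        command = GESTURE_TO_COMMAND.get(gesture, "unknown")
        print(f"{frame.name}: gesture={gesture} -> command={command}")
```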