MagicHand: Interact with IoT Devices in Augmented Reality (AR)
We built an AR system that enables intuitive, hand-gesture control of IoT devices.
Traditional mobile and web UIs put a screen between the user and the device being controlled; our prototype instead overlays hand-tracked virtual control panels in situ on smart lighting and audio devices.
The project comprises:
A real-time 2D CNN for hand-gesture recognition (a minimal sketch follows this list).
Lightweight object detection, guided by scene geometry, to locate nearby IoT devices.
Augmented-reality control panels rendered into the user's physical environment.
Qualitative UX demonstrations and quantitative tests across multiple IoT targets.
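To make the gesture-recognition component concrete, the sketch below shows a minimal 2D CNN classifier in PyTorch. The input resolution, layer sizes, and gesture vocabulary are illustrative assumptions, not the deployed MagicHand model.

```python
# Minimal sketch of a 2D CNN gesture classifier, assuming 64x64 grayscale
# hand crops and a small, hypothetical gesture vocabulary. Layer sizes and
# class names are illustrative, not the actual MagicHand model.
import torch
import torch.nn as nn

GESTURES = ["point", "pinch", "open_palm", "fist"]  # hypothetical labels

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 64, 64) grayscale hand crops
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = GestureCNN().eval()
    frame = torch.rand(1, 1, 64, 64)              # stand-in for a camera crop
    with torch.no_grad():
        probs = model(frame).softmax(dim=-1)
    print(GESTURES[int(probs.argmax())], probs.tolist())
```

In a pipeline of this shape, the classifier would run per frame on cropped hand regions and pass the recognized gesture to the virtual-panel interaction logic.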
We are developing a novel, sensor-driven framework to extract and visualize tacit knowledge—the implicit skills and intuitions that expert practitioners acquire over years of experience but struggle to articulate. This work aims to improve training efficiency in domains where traditional teaching methods fall short.
Our approach involves:
Instrumenting expert demonstrators with multimodal sensors, including eye-tracking, EMG, accelerometers, audio, and video.
Capturing and synchronizing expert behaviors during task execution so that the performance variables at play become visible in real time (see the alignment sketch after this list).
Creating enriched instructional videos that reduce cognitive load for novices by making expert strategies visible and replayable.
Defining quantifiable metrics of expertise that distinguish expert from novice performance and can inform AI-based training tools.
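The sketch below illustrates the synchronization and metric ideas: it aligns a simulated accelerometer stream to an EMG clock and computes one candidate smoothness measure (mean squared jerk). The sample rates, the synthetic signals, and the metric choice are illustrative assumptions rather than the project's actual processing pipeline.

```python
# Minimal sketch of aligning two sensor streams (EMG and accelerometer) to a
# common clock and computing one candidate expertise metric: movement
# smoothness via mean squared jerk. Sample rates and the metric are
# illustrative assumptions, not the project's real pipeline.
import numpy as np

def align_to_common_timeline(t_ref, t_src, x_src):
    """Linearly interpolate a source stream onto a reference timeline."""
    return np.interp(t_ref, t_src, x_src)

def mean_squared_jerk(accel, dt):
    """Jerk is the time derivative of acceleration; lower values mean smoother motion."""
    jerk = np.gradient(accel, dt)
    return float(np.mean(jerk ** 2))

if __name__ == "__main__":
    # Synthetic stand-ins: 1 kHz EMG and 100 Hz accelerometer over 5 seconds.
    t_emg = np.arange(0, 5, 1 / 1000)
    t_acc = np.arange(0, 5, 1 / 100)
    emg = np.abs(np.random.randn(t_emg.size))        # rectified EMG envelope
    accel = np.sin(2 * np.pi * 1.0 * t_acc)          # smooth, expert-like motion

    # Resample the accelerometer onto the EMG clock so both streams share one
    # timeline for visualization or joint feature extraction.
    accel_on_emg_clock = align_to_common_timeline(t_emg, t_acc, accel)

    print("aligned samples:", accel_on_emg_clock.shape[0])
    print("mean squared jerk:", mean_squared_jerk(accel, dt=1 / 100))
```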
Our first deployment focuses on glassblowing, a domain with a strong expert-apprentice tradition and notoriously slow skill acquisition. We collect rich sensor data during expert demonstrations of fundamental techniques in a beginner glassblowing course and use this to develop personalized, data-enhanced training content for novices.
Preliminary results reveal measurable patterns of expertise, and future work will explore AI-driven models for automatic expertise assessment, broader applicability to other skill-based domains, and controlled testing of learning outcomes.
Brain-Computer Interfaces (BCIs) offer exciting potential across healthcare, communication, and human-computer interaction—but also introduce novel security risks. In this project, we uncover a previously unexplored physical-layer vulnerability in EEG-based BCIs that enables remote injection of false brain signals using radio-frequency (RF) interference.
Our system transmits amplitude-modulated RF signals that are passively picked up by the EEG hardware’s physical structure, allowing an attacker to influence the signal without needing direct access to the EEG data stream.
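The core modulation idea fits in a few lines. The sketch below generates a fake 10 Hz alpha-band waveform and places it on the envelope of an RF carrier using standard amplitude modulation; the carrier frequency, modulation depth, and sample rate are illustrative and are not the parameters used in the demonstrated attacks.

```python
# Minimal sketch of the amplitude-modulation principle: a low-frequency
# "fake EEG" message (a 10 Hz alpha-band sine) modulates the envelope of an
# RF carrier. Carrier frequency, modulation depth, and sample rate are
# illustrative assumptions, not the actual attack parameters.
import numpy as np

fs = 1_000_000          # simulation sample rate (1 MHz), illustrative
f_carrier = 100_000     # 100 kHz carrier, illustrative
f_message = 10          # 10 Hz "alpha rhythm" to be injected
mod_depth = 0.8         # modulation index

t = np.arange(0, 0.2, 1 / fs)                       # 0.2 s of signal
message = np.sin(2 * np.pi * f_message * t)         # fake EEG-band waveform
carrier = np.sin(2 * np.pi * f_carrier * t)

# Standard AM: s(t) = (1 + m * message(t)) * carrier(t). If the receive
# chain unintentionally rectifies and low-pass filters this signal, the
# recovered envelope resembles the injected low-frequency message.
am_signal = (1 + mod_depth * message) * carrier

print("peak amplitude:", float(np.max(np.abs(am_signal))))
```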
We demonstrate successful attacks on:
Research-grade devices (Neuroelectrics),
Open-source hardware (OpenBCI), and
Consumer-grade headsets (Muse).
By injecting signals into these devices, we took control of:
A virtual keyboard, inputting arbitrary characters,
A drone-control interface, inducing crashes, and
A neurofeedback app, reporting fake meditative states.
This is, to our knowledge, the first remote EEG injection attack at the physical level. Our findings raise important concerns not only for BCI integrity and safety, but also for clinical EEG reliability, where such vulnerabilities could lead to misdiagnosis or inappropriate treatment. This work highlights the urgent need for robust shielding, detection, and validation mechanisms in the next generation of BCI systems.