Description

Recent advances in neuroscience and robotics have allowed initial demonstrations of brain-computer interfaces (BCIs) for controlling wheeled and humanoid robots. However, further advances have proved challenging due to the low throughput of the interfaces and the high number of degrees of freedom (DOF) of the robots. In this paper, we build on our previous work on Hierarchical BCIs (HBCIs), which seek to mitigate this problem. We extend HBCIs to allow training of arbitrarily complex tasks, with training no longer restricted to a particular robot state space (such as Cartesian space for a navigation task). We present two algorithms for learning command hierarchies by automatically extracting patterns from a user's command history. The first algorithm builds an arbitrary-level hierarchical structure (a "control grammar") whose elements can represent skills, whole tasks, collections of tasks, etc. The user "executes" single symbols from this grammar, which produce sequences of lower-level commands. The second algorithm, which is probabilistic, also learns sequences which can be executed as high-level commands, but does not build an explicit hierarchical structure. Both algorithms provide a de facto form of dictionary compression, which enhances the effective throughput of the BCI. We present results from two human subjects who successfully used the hierarchical BCI to control a simulated PR2 robot using brain signals recorded non-invasively through electroencephalography (EEG).
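The abstract does not give implementation details for the first algorithm, but the "dictionary compression" framing suggests a grammar-induction step over the command history. Below is a minimal, hypothetical sketch in Python of one such scheme: a re-pair-style greedy substitution that repeatedly replaces the most frequent adjacent pair of commands with a new high-level symbol, yielding a simple multi-level control grammar. The command names, symbol format, and frequency threshold are illustrative assumptions, not the authors' actual method.

```python
from collections import Counter

def build_control_grammar(history, min_count=2):
    """Greedily replace the most frequent adjacent command pair with a new
    high-level symbol (re-pair-style). Each new symbol expands to a sequence
    of lower-level commands, forming a simple multi-level control grammar.
    Hypothetical sketch; names and thresholds are illustrative only."""
    sequence = list(history)   # e.g. ["fwd", "left", "grasp", ...]
    rules = {}                 # new symbol -> pair of lower-level symbols
    next_id = 0
    while True:
        pairs = Counter(zip(sequence, sequence[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        symbol = f"<G{next_id}>"
        next_id += 1
        rules[symbol] = pair
        # Rewrite the history, replacing non-overlapping occurrences of the pair.
        rewritten, i = [], 0
        while i < len(sequence):
            if i + 1 < len(sequence) and (sequence[i], sequence[i + 1]) == pair:
                rewritten.append(symbol)
                i += 2
            else:
                rewritten.append(sequence[i])
                i += 1
        sequence = rewritten
    return rules, sequence

def expand(symbol, rules):
    """Expand a grammar symbol back into its primitive command sequence."""
    if symbol not in rules:
        return [symbol]
    return [c for part in rules[symbol] for c in expand(part, rules)]

if __name__ == "__main__":
    history = ["fwd", "left", "fwd", "left", "grasp", "fwd", "left", "grasp"]
    rules, compressed = build_control_grammar(history)
    print(rules)       # learned high-level symbols and their expansions
    print(compressed)  # command history rewritten with high-level symbols
    print(expand(compressed[0], rules))
```

In this toy run the repeated pattern "fwd, left" becomes one grammar symbol and "fwd, left, grasp" becomes another, so the user could trigger a three-command skill with a single selection, which is the throughput gain the abstract attributes to the hierarchy.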
