I was lucky to attend the closing keynote by Thad Starner at the 3rd Augmented Human Conference, held in Stuttgart in March 2013.
As part of his talk, Thad pulled out his favorite 2-second rule, which says: once accessing a device takes longer than two seconds, its actual usage drops off exponentially. From this he argued that a wristwatch interface, always sitting on one's wrist and ready to use, should be more successful than a mobile phone, which has to be pulled out of a pocket. He showed a compelling example: a pie-chart interface that uses the 12 positions of an analog watch face to "dial into apps".
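The core of such a dial interface is mapping a touch point on the round watch face to one of the 12 hour positions. Here is a minimal sketch of that mapping; the function name, the coordinate convention (origin at the face center, y pointing up), and the rounding choice are my assumptions, not details of Starner's actual implementation.

```python
import math

def watch_sector(x: float, y: float) -> int:
    """Map a touch point (relative to the watch-face center, y up)
    to one of the 12 analog 'hour' positions, with 12 at the top.
    This is an illustrative sketch, not Starner's implementation."""
    # atan2(x, y) measures the angle clockwise from 12 o'clock
    # when y points up; normalize to [0, 360).
    angle = math.degrees(math.atan2(x, y)) % 360
    # Each hour sector spans 30 degrees, centered on its hour mark.
    hour = round(angle / 30) % 12
    return 12 if hour == 0 else hour
```

For example, a touch straight up from the center selects "12", a touch to the right selects "3", and each app could then be bound to one of the 12 sectors.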
Then he showed the promo video for Google Glass, which uses voice control to record video and display related information.
He went on to discuss teaching muscle memory, applied to the example of learning to play the piano (well, or at least some one-handed melodies ;). Finally, he showed working examples of brain-computer interfaces, which are generally very slow, letting users convey control information at only about 1 bit per second. However, he was able to show how American Sign Language gestures can be picked up from activity in the motor cortex. His ultimate vision is to allow patients who cannot move to communicate with their environment.
Overall, I was intrigued by Thad's lecture style: instead of selling his research, he puts himself in the position of a critical spectator of his own work who gets excited and convinced by his own results. A smart way of presenting, indeed.
Daniel Ashbrook, Kent Lyons, and Thad Starner. 2008. An investigation into round touchscreen wristwatch interaction. In Proceedings of the 10th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '08).