Most game interfaces today are largely symbolic, translating simplified input such as keystrokes into the choreography of full-body character movement. In this paper, we describe a system that directly uses human motion performance to provide a radically different, and much more expressive, interface for controlling virtual characters. Our system takes a data feed from a motion capture system as input and, in real time, translates the performance into corresponding actions in a virtual world. The difficulty with such an approach arises from the need to manage the discrepancy between the real and virtual worlds, leading to two important subproblems: (1) recognizing the user's intention, and (2) simulating the appropriate action based on that intention and the virtual context. We address these problems by first enabling the virtual world's designer to specify possible activities in terms of prominent features of the world, along with associated motion clips depicting interactions. We then integrate the pre-recorded motions with the online performance and dynamic simulation to synthesize seamless interaction of the virtual character with the simulated virtual world. The result is a flexible interface through which a user can make freeform control choices while the resulting character motion maintains both physical realism and the user's personal style.