We introduce a physics-based method to synthesize concurrent object manipulation using a variety of manipulation strategies provided by different body parts, such as grasping objects with the hands, carrying objects on the shoulders, or pushing objects with the elbows or the torso. We design dynamic controllers to physically simulate upper-body manipulation and integrate them with procedurally generated locomotion and hand-grasping motion. The output of the algorithm is a continuous animation of the character manipulating multiple objects and environment features concurrently at various locations in a constrained environment. Capturing how humans deftly exploit the different properties of body parts and objects for multitasking requires solving challenging planning and execution problems. We introduce a graph structure, the manipulation graph, to describe how each object can be manipulated using different strategies; the problem of manipulation planning then reduces to a standard graph traversal. To execute the manipulation plan, our control algorithm optimally schedules and executes multiple tasks based on the dynamic space of the tasks and the state of the character. We introduce a "task consistency" metric to measure the physical feasibility of multitasking. Furthermore, we exploit the redundancy of the control space to improve the character's ability to multitask. As a result, the character does its best to achieve the current tasks while continuously adjusting its motion to improve multitasking consistency for future tasks.
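The idea of reducing manipulation planning to graph traversal can be illustrated with a minimal sketch. The node names and graph below are hypothetical, not taken from the paper: nodes stand for manipulation strategies or object states, edges for feasible transitions between them, and a standard breadth-first search recovers a strategy sequence.

```python
from collections import deque

def plan_strategies(graph, start, goal):
    """Breadth-first search over a manipulation graph.

    graph: dict mapping each node (a strategy or object state)
           to the nodes reachable by one feasible transition.
    Returns the shortest node sequence from start to goal, or None.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical manipulation graph for a single box: the character
# can grasp it from the ground, switch to a shoulder carry or an
# elbow push, and finally place it at the goal location.
box_graph = {
    "on_ground": ["hand_grasp"],
    "hand_grasp": ["shoulder_carry", "elbow_push", "placed_at_goal"],
    "shoulder_carry": ["hand_grasp"],
    "elbow_push": ["placed_at_goal"],
}

plan = plan_strategies(box_graph, "on_ground", "placed_at_goal")
# -> ["on_ground", "hand_grasp", "placed_at_goal"]
```

Any graph-search routine would do here; BFS simply yields the plan with the fewest strategy switches, which is one plausible criterion among several (the actual method schedules tasks dynamically on top of such a plan).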