The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries that supports researchers and developers in creating virtual human conversational characters. The Toolkit is an ongoing, ever-evolving system fueled by basic research performed at the University of Southern California (USC) Institute for Creative Technologies (ICT) and its partners.

Designed for easy mixing and matching with a research project's proprietary or third-party software, the Toolkit provides a common platform on which new technologies can be built. It is our hope that, together as a research community, we can further develop and explore virtual human research and technologies.

Request the ICT Virtual Human Toolkit.

News

Jun 6 2016 - We have released a new version of the Toolkit. It updates Unity to version 5.3 (64-bit) and includes an updated GUI, updated lighting, and a Mecanim preview. Note that a new 3rd-party installer is required. See the Release Notes for details.

Jun 13 2015 - The latest version of the Toolkit uses Unity 5.1, which allows SmartBody to be loaded as a plugin in Unity Free, just as it always has been in Unity Pro. Unity 5 adds many new features, including updated lighting and 64-bit support. In addition, we've updated the characters and added basic character customization.

Feb 5 2015 - Welcome to the new year, and welcome to a new Toolkit version. We updated Unity, made SmartBody animations native Unity objects, and added previews of a new menu and of character selection in the LineUp scene. See the Release Notes for more details.

See Release Notes and News Archive for details.

Architecture

The ICT Virtual Human Toolkit is built on a common modular architecture that lets users employ all modules as is, couple one or more modules with proprietary components, or integrate individual modules into other existing systems. Our technology emphasizes natural language interaction, nonverbal behavior, and perception. Its main modules are listed below; see the Documentation for an overview of the architecture, the messaging API, and other components.
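Under the hood, Toolkit modules communicate by exchanging messages through a central ActiveMQ message broker. As a rough sketch only, the Python snippet below publishes a message to an ActiveMQ topic using the third-party stomp.py client; the topic name and payload are placeholders rather than the Toolkit's actual message vocabulary, which is covered in the Documentation.

    # Rough sketch: publish a message to an ActiveMQ topic via stomp.py.
    # The topic name and payload below are placeholders, not the Toolkit's
    # actual messaging API.
    import stomp

    conn = stomp.Connection([("localhost", 61613)])  # default STOMP port
    conn.connect(wait=True)
    conn.send(destination="/topic/vh.example", body="example message")
    conn.disconnect()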

MultiSense

MultiSense is a multimodal sensing framework, created as a platform for integrating and fusing sensor technologies and for developing probabilistic models of human behavior recognition. MultiSense tracks and analyzes users' facial expressions, body posture, acoustic features, linguistic patterns, and higher-level behavior descriptors (e.g., attention, fidgeting), and communicates its results using the Perception Markup Language (PML).
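PML is an XML format for describing perceived user state. The fragment below is a hypothetical sketch of the kind of information a PML message carries; its element and attribute names are illustrative and do not follow the published schema.

    <!-- Hypothetical PML-style fragment; element names are illustrative. -->
    <pml>
      <percept subject="user1" timestamp="00:01:12.400">
        <attention level="0.8"/>
        <headGaze target="character"/>
        <smile intensity="0.3"/>
      </percept>
    </pml>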

NPCEditor

At the core of the NPCEditor is a statistical text classification algorithm that selects the character's responses based on the user's utterances. Using the provided authoring tool, a character designer specifies a set of responses and, for each response, a set of sample utterances that should trigger it. The NPCEditor also contains a dialogue manager that specifies how the classifier results are used.
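To make the idea concrete, here is a minimal stand-in written in Python with scikit-learn: it picks the authored response whose sample utterances are most similar to the user's input under TF-IDF cosine similarity. This is a deliberately simplified sketch of the concept with invented data, not NPCEditor's actual classification algorithm.

    # Simplified sketch of utterance-to-response selection using TF-IDF
    # cosine similarity. Not NPCEditor's algorithm; data is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical authoring data: sample utterances mapped to response keys.
    training = [
        ("hello", "greet"), ("hi there", "greet"),
        ("what is your name", "name"), ("who are you", "name"),
        ("bye", "farewell"), ("see you later", "farewell"),
    ]
    responses = {"greet": "Hello!", "name": "I'm Brad.", "farewell": "Goodbye!"}

    utterances, labels = zip(*training)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(utterances)

    def select_response(user_utterance):
        # Return the response whose sample utterance scores highest.
        scores = cosine_similarity(vectorizer.transform([user_utterance]), matrix)
        return responses[labels[scores.argmax()]]

    print(select_response("hey, who are you?"))  # -> I'm Brad.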

Nonverbal Behavior Generator (NVBG)

The NVBG is a rule-based system that analyzes a character's text and functional markup in order to propose nonverbal behaviors. The resulting behavior schedule is expressed in Behavior Markup Language (BML).
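BML is an XML format that describes behaviors and how they are timed relative to one another. The simplified fragment below sketches a beat gesture and a head nod synchronized to a line of speech; exact elements and attributes vary across BML versions and realizers, so treat it as illustrative only.

    <!-- Simplified BML fragment for illustration; details vary by realizer. -->
    <bml id="bml1" characterId="Brad">
      <speech id="s1">
        <text>Welcome to the lab.</text>
      </speech>
      <gesture id="g1" lexeme="BEAT" stroke="s1:start"/>
      <head id="h1" lexeme="NOD" start="s1:end"/>
    </bml>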

SmartBody

SmartBody is a character animation library that provides synchronized locomotion, steering, object manipulation, lip syncing, gazing, and nonverbal behavior in real time. It uses Behavior Markup Language (BML) to transform behavior descriptions into real-time animations.
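SmartBody embeds a Python interpreter for scripting. Assuming a script running inside SmartBody, where a global scene object is available, sending BML to a character looks roughly like the sketch below; the names follow SmartBody's scripting examples but may differ between versions.

    # Sketch of driving SmartBody from its embedded Python interpreter.
    # Assumes the global 'scene' object that SmartBody exposes to scripts;
    # details may differ by SmartBody version.
    bml = scene.getBmlProcessor()

    # Request a gaze shift and a head nod; these BML strings are the same
    # kind of markup that NVBG produces.
    bml.execBML('ChrBrad', '<gaze target="user"/>')
    bml.execBML('ChrBrad', '<head type="NOD"/>')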

Unity

The Toolkit uses Unity as its main game engine. The integration extends Unity with tight SmartBody support, the Toolkit's messaging protocol, debugging and authoring tools, and a graphical timeline editor for creating cut-scenes.