Overview

Go to the Download page for instructions on how to obtain the Toolkit. Instructions for Installing the Toolkit and Running the Example Scenario can be found in Tutorials.

The Virtual Human Toolkit has two target audiences: users and developers.

We define users as people who use the provided Toolkit technology as-is, usually either running a component or using it to create new content. Users with basic computer and scripting skills will be able to configure and run systems.

We define developers as people who extend Toolkit components, use Toolkit components in their own systems, or use their own components within the Toolkit. Developers need a strong technical background, including in-depth knowledge of one or more programming languages, knowledge of distributed systems, and familiarity with virtual human related research and technology.


Users

We recommend starting by running the provided example scenario to get a sense of the Toolkit's features and capabilities. See the two tutorials mentioned above for details.

The Virtual Human Toolkit consists of a collection of modules, libraries, tools and third-party software, collectively dubbed Components. We define modules as run-time components that are part of the running system. See the Architecture page to learn more about how these modules relate to each other.

Combined, these modules allow for virtual humans that are capable, in real time, of:

  • Listening to a person talk or type
  • Selecting a verbal response
  • Generating nonverbal behavior for that response
  • Replying using either pre-recorded speech or text-to-speech
  • Realizing the nonverbal behavior in full synchronization with the speech

A virtual human consists of a variety of elements, including art as well as verbal and nonverbal behavior. An easy way to get started is to build a basic virtual human using the VHBuilder tool; see the accompanying video tutorial. For more advanced virtual humans, you can use the NPCEditor. See the Adding a New Line of Dialogue with the NPCEditor and Creating a New Virtual Human with the NPCEditor tutorials for more details.


Developers

See the Users section above on how to create your own virtual humans.

The Toolkit is based on a modular Architecture in which most modules use Virtual Human Messaging to communicate with each other. Those two pages describe the main messages used in the Toolkit. You can create your own modules that listen for and/or send VH messages by using the VHMsg library. VHMsg supports C#, C++, Java, Lisp and Tcl. For more details, see the Developing a New Module tutorial.
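
As an illustration, the sketch below shows the skeleton of a minimal VHMsg module in C#. The class and member names (VHMsg.Client, OpenConnection, SubscribeMessage, SendMessage, MessageEvent) and the message names (vrAllCall, vrComponent) follow the example programs bundled with the Toolkit, but are assumptions here; verify them against the VHMsg samples and the Developing a New Module tutorial before relying on them.

    // Minimal VHMsg module sketch (C#). API and message names are assumptions
    // based on the bundled VHMsg examples; check the samples for exact signatures.
    using System;

    class MinimalModule
    {
        static void Main()
        {
            using (VHMsg.Client vhmsg = new VHMsg.Client())
            {
                vhmsg.OpenConnection();               // connect to the message server (default host)
                vhmsg.SubscribeMessage("vrAllCall");  // register interest in a message type
                vhmsg.MessageEvent += (sender, e) =>
                {
                    // e.s holds the raw message string in the bundled samples
                    Console.WriteLine("received: " + e.s);
                };

                // announce this module to the rest of the running system
                vhmsg.SendMessage("vrComponent minimalModule all");

                Console.WriteLine("Press Enter to quit.");
                Console.ReadLine();
            }
        }
    }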

One of the few exceptions to using VHMsg is SmartBody, which has a direct connection with the renderer: for Unity this is a DLL, while for Ogre, SmartBody runs as a stand-alone process that connects over TCP/IP. The AcquireSpeech speech client also has a direct link to a speech server, such as the PocketSphinx Wrapper.

The folder structure of the Toolkit is as follows:

  • \bin, all compiled run-time code, copied over in post-build steps from the source in \core
  • \core, all run-time modules
  • \data, most data; Unity data is included in \core\vhtoolkitUnity\Assets
  • \lib, all supporting libraries
  • \local_tests, default location of Logger log files
  • \scripts, supporting scripts for builds, installers, etc.
  • \tools, all tools

The root contains an MS Visual Studio 2010 solution. Since not all Toolkit components come with source, set the Solution Configuration to '(1) ReleaseOpenSource'. See the Build Instructions panel for how to compile the Toolkit on various platforms.
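
For reference, a command-line build on Windows might look like the sketch below. The solution file name is an assumption (use whichever .sln is actually in the Toolkit root), and the configuration name must be quoted because it contains spaces and parentheses.

    REM Hedged example only: build the open-source configuration with MSBuild (VS 2010 / .NET 4.0).
    REM The solution name "vhtoolkit.sln" is an assumption; substitute the .sln found in the Toolkit root.
    msbuild vhtoolkit.sln /p:Configuration="(1) ReleaseOpenSource"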

Note that development happens mostly in \core, while the system runs from \bin. This allows for a clean separation, but it means paying extra attention to the location of files during development, debugging and configuration.