Program

The workshop consists of three days of talks and hands-on tutorials.


Wednesday, September 24, 2008
Time | Event | Presenter | Location
9:30 AM – 10:00 AM | BREAKFAST | | 1st Floor Café
10:00 AM – 10:25 AM | Welcome and Overview; Introduction to Virtual Humans | Dr. Jonathan Gratch, Associate Director, Virtual Humans Research, ICT | 1st Floor Screening Room
10:25 AM – 10:50 AM | Virtual Human Creation Pipeline | Mr. Patrick Kenny, Computer Scientist, ICT | 1st Floor Screening Room
10:50 AM – 11:15 AM | Speech Input | Dr. Panayiotis Georgiou, Research Assistant Professor, USC Speech Analysis and Interpretation Laboratory (SAIL) | 1st Floor Screening Room
11:15 AM – 11:30 AM | BREAK | |
11:30 AM – 12:00 PM | Statistical Approaches for Language Understanding in Virtual Humans | Dr. Anton Leuski, Research Scientist, ICT | 1st Floor Screening Room
12:00 PM – 12:25 PM | Dialogue Modeling for Virtual Humans | Mr. Sudeep Gandhe, PhD Student, Computer Science | 1st Floor Screening Room
12:25 PM – 12:45 PM | Perception of Human Nonverbal Behaviors | Dr. Louis-Philippe Morency, Research Scientist, ICT | 1st Floor Screening Room
12:45 PM – 2:00 PM | LUNCH | | 1st Floor Café
2:00 PM – 2:30 PM | Emotion and Cognition | Dr. Jonathan Gratch, Associate Director, Virtual Humans Research, ICT | 1st Floor Screening Room
2:30 PM – 3:00 PM | Nonverbal Behavior | Dr. Stacy Marsella, Project Leader, Information Sciences Institute (ISI) | 1st Floor Screening Room
3:00 PM – 3:30 PM | Knowledge Modeling for Domains | Dr. Eduard Hovy, Director, Natural Language Group, ISI | 1st Floor Screening Room
3:30 PM – 3:50 PM | BREAK | |
3:50 PM – 4:20 PM | SASO-ST Negotiation Demonstration | Mr. Patrick Kenny, Computer Scientist, ICT | 1st Floor VR Theater
4:20 PM – 4:50 PM | SGT Star Demonstration | Mr. Josh Williams, Lead Presenter, Mixed Reality Lab, ICT | 1st Floor Screening Room
4:50 PM – 5:30 PM | Introduction to Virtual Human Toolkit and Workshop Project | Mr. Patrick Kenny, Computer Scientist, ICT | 1st Floor Screening Room
5:30 PM – 6:00 PM | Depart for Dinner | |
6:00 PM – 9:00 PM | Dinner; Presentation: Entertainment and Virtual Characters | Mr. Kim LeMasters, Creative Director, ICT | Marina del Rey Hotel, Marina Room

Thursday, September 25, 2008
Time | Event | Presenter | Location
9:30 AM – 10:00 AM | BREAKFAST | | 1st Floor Café
10:00 AM – 10:15 AM | Welcome and Overview | Dr. Jonathan Gratch, Associate Director, Virtual Humans Research, ICT | 1st Floor Screening Room
10:15 AM – 11:15 AM | TUTORIAL: Introduction to Lab Exercise | Mr. Patrick Kenny, Computer Scientist, ICT | 2nd Floor Computer Lab
11:15 AM – 12:15 PM | TUTORIAL: NPCEditor | Dr. Anton Leuski, Research Scientist, ICT | 2nd Floor Computer Lab
12:15 PM – 1:30 PM | LUNCH | | 1st Floor Café
2:30 PM – 3:50 PM | TUTORIAL: Nonverbal Behavior – FML, BML, NVB, SmartBody | Dr. Stacy Marsella, Project Leader, ISI; Mr. Andrew Marshall, Programmer Analyst, ISI; Ms. Jina Lee, PhD Student, Computer Science | 2nd Floor Computer Lab
3:50 PM – 4:05 PM | BREAK | |
4:05 PM – 5:30 PM | TUTORIAL: Nonverbal Behavior – FML, BML, NVB, SmartBody (continued) | Dr. Stacy Marsella, Project Leader, ISI; Mr. Andrew Marshall, Programmer Analyst, ISI; Ms. Jina Lee, PhD Student, Computer Science | 2nd Floor Computer Lab
5:30 PM – 6:30 PM | Dinner at ICT | | 1st Floor Café
6:30 PM – 8:00 PM | Open Lab Time | Lab Facilitators: Patrick Kenny, Jina Lee, Andrew Marshall, Arno Hartholt | 2nd Floor Computer Lab

Friday, September 26, 2008
Time | Event | Presenter | Location
9:00 AM – 10:00 AM | Open Lab Time (joint with breakfast) | | 2nd Floor Computer Lab
9:30 AM – 10:00 AM | Breakfast | | 1st Floor Screening Room
10:00 AM – 11:00 AM | Open Lab Time | Lab Facilitators: Patrick Kenny, Jina Lee, Andrew Marshall, Arno Hartholt | 2nd Floor Computer Lab
11:00 AM – 11:20 AM | Break; transition to 1st Floor Screening Room | |
11:20 AM – 12:30 PM | Demonstration of Lab Projects | Mr. Patrick Kenny, Computer Scientist, ICT | 1st Floor Screening Room
12:30 PM – 1:00 PM | Wrap-up and Discussions | Dr. Jonathan Gratch, Associate Director, Virtual Humans Research, ICT | 1st Floor Screening Room
1:00 PM – 2:00 PM | LUNCH | | 1st Floor Café
2:00 PM – 2:30 PM | ICT Movie Screening (if interested) | | VR Theater

TALK ABSTRACTS

Virtual Human Creation Pipeline
Presenter: Mr. Patrick Kenny
Creating virtual humans is a large undertaking involving many datasets and modules that must communicate with each other. A distributed environment that lets researchers work on their own module while still using the other modules in the system is therefore important, as is a common framework for message passing. This talk will present the system architecture and show how data flows through the system to create realistic, interactive virtual humans. This pipeline forms the basis for the rest of the components in the workshop and tutorials.
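
The abstract does not spell out the toolkit's actual transport layer, but the publish/subscribe pattern such a pipeline typically relies on can be sketched in a few lines. The toy example below is purely illustrative; the MessageBus class and the topic names are hypothetical, not the toolkit's API.

    # Illustrative sketch only: a toy publish/subscribe bus of the kind a
    # distributed virtual human pipeline relies on. All names are invented.
    from collections import defaultdict

    class MessageBus:
        """Routes named messages to every module subscribed to them."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self._subscribers[topic]:
                handler(payload)

    bus = MessageBus()

    # A language-understanding module consumes recognized speech and
    # publishes an interpretation for the dialogue manager to pick up.
    def nlu_module(utterance):
        bus.publish("nlu.result", {"text": utterance, "act": "question"})

    bus.subscribe("asr.result", nlu_module)
    bus.subscribe("nlu.result", lambda msg: print("Dialogue manager got:", msg))

    bus.publish("asr.result", "hello sergeant")

Each module only knows the message topics it consumes and produces, which is what lets a researcher swap in their own module without touching the rest of the system.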

Speech Recognition or Speech-to-Text Conversion: The First Block of a Virtual Character System
Presenter: Dr. Panayiotis Georgiou
This talk will introduce the field of automatic speech recognition. It aims to provide an overview of the components and theory behind modern LVCSR (Large Vocabulary Continuous Speech Recognition) systems. The topics introduced include: feature extraction, acoustic modeling, lexicon (pronunciation) modeling, language modeling, and the training and decoding process.

For feature extraction, a brief overview is given of signal transformations that result in a more compact and meaningful representation of the speech signal. These features are then used to build the acoustic models. Linguistic knowledge combined with signal analysis aids in creating the lexicon, which captures the transcription-to-pronunciation mappings, while language models encode prior knowledge of how the language is likely to be used. The talk will touch briefly on the challenges faced in training and decoding, and on the specific challenges of building the highly focused, but medium-to-large vocabulary, virtual character tasks.
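
For orientation, the decoding step the talk describes is conventionally framed as a search for the most likely word sequence W given the observed acoustic features O, combining the pieces above (this standard noisy-channel formulation is added here for reference; it does not appear in the original abstract):

    \hat{W} = \arg\max_W P(W \mid O) = \arg\max_W P(O \mid W)\, P(W)

Here P(O | W) is the acoustic model score, computed by expanding each word into its lexicon pronunciations, and P(W) is the language-model prior.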

Statistical Approaches for Language Understanding in Virtual Humans       
Presenter: Dr. Anton Leuski
Interactive virtual characters have been shown to be effective tools in computer-assisted training and simulation. One of the important tasks when creating a believable and engaging virtual character is making it capable of natural interaction with users. Such a character should understand the user's speech and respond appropriately. Virtual characters may play many different roles, ranging from a simple guide capable of limited question-answering dialog on a narrow topic to a sophisticated virtual persona capable of engaging the user in a complex negotiation.
Different character tasks require different approaches to language understanding. In this talk I will describe three such approaches used in virtual humans.

Dialogue Modeling for Virtual Humans
Presenter: Mr. Sudeep Gandhe
The important functions of a dialogue model include: (1) progressively tracking the state of the dialogue, (2) providing a context for interpreting input utterances, and (3) selecting the content and type of the output utterances.

In this talk we will survey computational models for spoken dialogue systems in general, from simple finite-state graphs to frame-based approaches and information-state dialogue modeling. We will look at the theory of speech acts and how they are used in a plan-based spoken dialogue agent. Finally, we will discuss issues specific to dialogue models for virtual humans, including the dialogue models implemented at ICT for projects such as SGT Star and SASO-ST.
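
As a concrete illustration of the simplest of these models, here is a toy finite-state dialogue graph in Python. The states, prompts, and expected inputs are invented for the example and do not correspond to any ICT system.

    # Toy finite-state dialogue model: each state has a prompt and a table
    # of expected user inputs leading to the next state. Purely illustrative.
    STATES = {
        "greet":    {"prompt": "Hello. Do you need directions?",
                     "next": {"yes": "ask_dest", "no": "goodbye"}},
        "ask_dest": {"prompt": "Where would you like to go?",
                     "next": {"lab": "goodbye", "cafe": "goodbye"}},
        "goodbye":  {"prompt": "Goodbye.", "next": {}},
    }

    def run_dialogue():
        state = "greet"
        while STATES[state]["next"]:        # stop at terminal states
            reply = input(STATES[state]["prompt"] + " ").strip().lower()
            # Unrecognized input simply repeats the current state.
            state = STATES[state]["next"].get(reply, state)
        print(STATES[state]["prompt"])

    run_dialogue()

A frame-based system replaces the fixed graph with slots to fill (destination, time, and so on), and information-state approaches generalize further by updating a richer dialogue state with every utterance.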

Perception of Human Nonverbal Behaviors
Presenter: Dr. Louis-Philippe Morency
When people interact with each other, it is common to see acknowledgment given with a simple head gesture or explicit turn-taking signaled with eye-gaze shifts. People use nonverbal behaviors to communicate relevant information, express emotion, and synchronize rhythm between participants. Perception of nonverbal behaviors is a key component of human communication, and novel multimodal interfaces need to recognize and analyze these visual cues to enable more natural human-computer interaction. In this talk I will discuss the technology behind Watson, a real-time library for nonverbal behavior recognition that has become the de facto standard for adding perception to embodied-agent interfaces and has been used successfully by MERL, USC, NTT, the Media Lab, and many other research groups.

Emotion and Cognition
Presenter: Dr. Jonathan Gratch
The last decade has seen an explosion of interest in the role emotion plays in human cognition and social interaction. Recent findings in psychology and neuroscience have emphasized emotion's distinct and complementary role in human cognition when contrasted with rational conceptions of human thought such as decision theory, game theory, and logic. Rather than viewing emotion as a distortion of such rational systems, contemporary research emphasizes emotion's functional role and has worked out a number of the mechanisms through which emotion helps an organism adapt to its physical and social environment. Within computer science, there is growing interest in exploiting these findings to expand classical rational models of intelligent behavior.
In this talk, I will review current findings on the intrapersonal and interpersonal functions of emotion and its potential role in enhancing human-computer interaction. I will then discuss our attempts to model emotion within the context of life-like interactive characters that can engage in socio-emotional interactions with human users for training, psychotherapy, and education.

Nonverbal Behavior
Presenter: Dr. Stacy Marsella
Nonverbal behavior serves a range of functions in face-to-face interaction, such as regulating the interaction, conveying propositional content, and conveying information about the interactants. We will discuss the range of these behaviors, including gaze, gesture, posture, and facial behavior, as well as approaches to modeling in a virtual human the relation between the behaviors and the functions they serve. We will also discuss how to realize nonverbal behavior in a virtual human animation system.

Knowledge Modeling for Domains
Presenter: Dr. Eduard Hovy

Any large project that is built over several years by various people out of a combination of new and legacy code will eventually experience problems of module integration and data standardization. In response, we are creating a single knowledge framework for the entire Virtual Human system. This framework houses the modules and code in a way that is standardized, well integrated, easily extensible, and supports reuse in other projects. Principally, the framework holds the core knowledge representation structures of all the principal modules in one shared space, and supports tools and interfaces that allow system builders to view, edit, debug, and extend the knowledge representations uniformly for all modules at once. The centralized knowledge repository includes an ontology (for concept and instance definitions), a task/plan/script library, natural language processing information in lexicons, a framebank of semantic frame representations and their associated surface realizations, and other information. The framework also supports automated consistency checking of changes and the propagation of new knowledge to all relevant modules in the appropriate formats.
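
To make the idea of a shared repository entry concrete, here is a purely hypothetical sketch of the kind of record such a framework might centralize. The field names and example content are invented for illustration, not the project's actual schema.

    # Hypothetical sketch of a centralized knowledge entry linking the
    # ontology, lexicons, and framebank; invented, not the real schema.
    from dataclasses import dataclass, field

    @dataclass
    class ConceptEntry:
        concept: str                                   # ontology concept id
        parents: list = field(default_factory=list)    # is-a links
        lexemes: list = field(default_factory=list)    # lexicon entries
        frames: list = field(default_factory=list)     # semantic frames

    doctor = ConceptEntry(
        concept="Doctor",
        parents=["Person"],
        lexemes=["doctor", "physician"],
        frames=["Treat(agent=Doctor, patient=Person)"],
    )

Because every module reads the same entry, a change to the concept (say, a new lexeme) can be consistency-checked once and propagated to all modules in their own formats.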

Entertainment and Virtual Characters
Presenter: Mr. Kim LeMasters
Kim LeMasters, the former President of CBS Entertainment, will speak about the relationship between ICT and Hollywood. Mr. LeMasters will address the use of storytelling as a way of informing, educating, and entertaining, with special emphasis on how it relates to transferring tacit knowledge and interacting with Virtual Humans.


LAB TUTORIALS

Introduction to Lab Exercise
This tutorial will give an overview of the main lab project and introduce the Launcher, which starts up the components of the Virtual Human Toolkit. There will be a walk-through of all the components in the Virtual Human system.


NPCEditor
NPCEditor is software for creating virtual characters that can conduct a human-like conversation on a narrow topic. Such a character has a predefined set of responses: it accepts textual input from a user (a question) and selects the appropriate response (an answer) to return to the user. At the core of this process is a statistical text classification algorithm that selects the character's answer based on the user's question. The algorithm requires training data: a set of sample questions for each of the character's responses. It analyzes the text of the sample questions assigned to each answer and creates a mathematical description of the “translation relationship” that defines how the content of a question maps to the content of an answer. When the system receives a new question, it uses this translation information to build a lexical representation of what it believes to be the best answer for that question. It then compares this constructed answer to every stored response and returns the best match.
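
The full translation-based ranking described above is beyond a short example, but the surrounding question-to-answer selection loop can be approximated with standard text retrieval. The sketch below is a deliberately simplified stand-in (TF-IDF similarity between the new question and each answer's training questions), not NPCEditor's actual algorithm, and the sample data is invented.

    # Simplified stand-in for NPCEditor-style response selection: rank
    # stored answers by TF-IDF similarity between the incoming question
    # and each answer's training questions. NOT the real translation model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    training = {  # answer -> sample questions (invented examples)
        "I'm SGT Star.": ["what is your name", "who are you"],
        "I live at ICT.": ["where do you live", "where are you from"],
    }
    answers = list(training)
    question_docs = [" ".join(qs) for qs in training.values()]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(question_docs)

    def select_response(question):
        """Return the stored answer whose training questions best match."""
        q_vec = vectorizer.transform([question])
        scores = cosine_similarity(q_vec, doc_vectors)[0]
        return answers[scores.argmax()]

    print(select_response("tell me who you are"))   # -> "I'm SGT Star."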


There are three main parts of the NPCEditor:
The character editor allows a designer to define a character: its general properties, such as its name and graphical avatar; a set of responses the character can produce; and a set of training questions that map to those responses. The editor includes several novel features to facilitate the annotation process in which the designer links sample questions to the character's responses. First, the editor allows the designer to define arbitrary annotation classes and labels and to assign those labels to questions and responses. Second, the editor constantly reevaluates the translation relationships between linked questions and answers and suggests possible responses for the still-unlinked sample questions.


The communication module handles the character's interface with other software. The module is an API framework that defines how the character receives input and sends out responses. Presently, NPCEditor has plug-ins for email, instant messaging, and a number of proprietary protocols defined on top of Elvin messaging. The required libraries are available as separate downloads from the Internet [b, c, d]. The communication module is easily extended by adding new protocol plug-ins.
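
A new protocol plug-in only has to adapt one transport to the character's receive/respond cycle. The interface below is a hypothetical illustration of that idea, not NPCEditor's actual plug-in API.

    # Hypothetical sketch of a protocol plug-in interface of the sort
    # described above; invented for illustration, not NPCEditor's API.
    from abc import ABC, abstractmethod

    class ProtocolPlugin(ABC):
        """Adapts one transport (IM, email, Elvin, ...) to the character."""

        @abstractmethod
        def receive(self):
            """Block until a user message arrives; return (sender, text)."""

        @abstractmethod
        def send(self, recipient, text):
            """Deliver the character's response over this transport."""

    class ConsolePlugin(ProtocolPlugin):
        """Minimal example transport: the local terminal."""
        def receive(self):
            return ("console-user", input("> "))

        def send(self, recipient, text):
            print(f"[to {recipient}] {text}")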


The agent server monitors incoming connections, directs incoming information to the appropriate virtual character, runs the character's response selection algorithm, and returns the response to the sender. The server interface allows an operator to monitor and record the interactive sessions for further analysis.

SmartBody BML Realizer
The Behavior Markup Language (BML) is a growing standard for controlling the verbal and nonverbal behavior of embodied agents, with an emphasis on coordinated multimodal communication. SmartBody is an open-source BML realizer capable of converting behavior commands into skeletal animation. This tutorial will demonstrate how to set up, configure, and control SmartBody, including the range of BML currently implemented. The tutorial will conclude with some brief comments on how to extend SmartBody with new BML behaviors, new animation controllers, and new renderers.
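
For orientation, a minimal BML block looks like the following. The element and attribute names follow the public BML drafts; the exact set SmartBody supports may differ, so treat this as an illustrative sketch rather than a tested SmartBody input.

    <bml>
      <speech id="sp1">
        <text>Hello there.</text>
      </speech>
      <!-- Nod and gaze at the user, synchronized to the start of speech. -->
      <head id="h1" type="NOD" start="sp1:start"/>
      <gaze id="g1" target="user" start="sp1:start"/>
    </bml>

The realizer resolves the sync points (here, sp1:start) and schedules the skeletal animation so the behaviors line up with the speech.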


Nonverbal Behavior Generator (NVBG)
Believable nonverbal behaviors for virtual humans can create a more immersive experience for users and improve the effectiveness of communication. The Nonverbal Behavior Generator (NVBG) analyzes the syntactic and semantic structure of the surface text, as well as the affective state of the embodied conversational agent (ECA), and annotates the surface text with appropriate nonverbal behaviors. NVBG is portable to virtual human systems that employ the SAIBA framework, works in real time, and is user-extensible, so that users can easily modify or extend the embedded behavior generation rules.
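
The flavor of such behavior generation rules can be conveyed with a toy keyword-based version. The rules and behavior names below are invented for the example and are far simpler than NVBG's actual rule set.

    # Toy illustration of rule-based nonverbal behavior annotation: scan
    # the surface text and attach behaviors. Rules are invented examples.
    RULES = [
        (("no", "not", "never"), "head_shake"),
        (("yes", "sure", "okay"), "head_nod"),
        (("you", "your"), "point_at_listener"),
    ]

    def annotate(text):
        """Attach nonverbal behaviors to an utterance via keyword rules."""
        words = [w.strip(".,!?") for w in text.lower().split()]
        behaviors = [behavior for keywords, behavior in RULES
                     if any(w in words for w in keywords)]
        return {"text": text, "behaviors": behaviors}

    print(annotate("No, you need to wait."))
    # -> {'text': 'No, you need to wait.',
    #     'behaviors': ['head_shake', 'point_at_listener']}

The real system also weighs syntactic structure and the agent's affective state before emitting behaviors, which a flat keyword list cannot capture.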