The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries designed to support researchers and developers in creating virtual human conversational characters. The Toolkit is an ongoing, evolving system fueled by basic research performed at the University of Southern California (USC) Institute for Creative Technologies (ICT) and its partners.
Designed for easy mixing and matching with a research project's proprietary or third-party software, the Toolkit provides a widely accepted platform on which new technologies can be built. It is our hope that, together as a research community, we can further develop and explore virtual human research and technologies.
The ICT Virtual Human Toolkit is built on a common modular architecture that lets users adopt all modules as-is, combine one or more modules with proprietary components, or integrate individual modules into other existing systems. Our technology emphasizes natural language interaction, nonverbal behavior, and perception. Its main modules are listed below. See Documentation for an overview of the architecture, the messaging API, and other components.
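This kind of modular architecture typically relies on modules exchanging messages over a shared bus rather than calling each other directly. The following minimal sketch illustrates that publish/subscribe pattern in general terms; the class, topic, and module names are invented for illustration and are not the Toolkit's actual messaging API.

```python
# Toy publish/subscribe bus: modules register callbacks per topic and
# never reference each other directly, so any module can be swapped out.
from collections import defaultdict


class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver payload to every subscriber of topic."""
        for callback in self._subscribers[topic]:
            callback(payload)


received = []
bus = MessageBus()
# A hypothetical language-understanding module listens for recognized speech...
bus.subscribe("speech.recognized", lambda text: received.append(text))
# ...while a hypothetical speech-recognition module publishes it.
bus.publish("speech.recognized", "hello there")
print(received)  # → ['hello there']
```

Because modules only share topic names and message formats, a proprietary component can replace a Toolkit module simply by subscribing and publishing on the same topics.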
MultiSense: A multimodal sensing framework that serves as a platform for integrating and fusing sensor technologies and for developing probabilistic models of human behavior recognition. MultiSense tracks and analyzes users' facial expressions, body posture, acoustic features, linguistic patterns, and higher-level behavior descriptors (e.g., attention, fidgeting). Its output is encoded in the Perception Markup Language (PML).
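PML is an XML-based representation of perceived user behavior. As a rough sketch of how perception descriptors might be serialized into such a markup document, consider the following; the element and attribute names here are invented for illustration and do not reflect the actual PML schema.

```python
# Illustrative serialization of perception descriptors to a PML-style
# XML document. Element/attribute names are hypothetical.
import xml.etree.ElementTree as ET


def perception_to_xml(user_id, descriptors):
    """Build an XML string from a dict of behavior descriptor scores."""
    root = ET.Element("perception", {"user": user_id})
    for name, value in descriptors.items():
        ET.SubElement(root, "behavior", {"name": name, "value": str(value)})
    return ET.tostring(root, encoding="unicode")


doc = perception_to_xml("user1", {"attention": 0.85, "fidgeting": 0.10})
print(doc)
```

A downstream module (e.g., a dialogue manager) would parse such a document to adapt the character's behavior to the user's perceived state.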
NPCEditor: At the core of NPCEditor is a statistical text classification algorithm that selects the character's responses based on the user's utterances. Using the provided authoring tool, a character designer specifies a set of responses and, for each response, a set of sample utterances that should trigger it. NPCEditor also contains a dialogue manager that specifies how the classifier's results are used.
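The authoring workflow above amounts to training a classifier on (sample utterance, response) pairs and then, at runtime, mapping a new utterance to the best-matching response. The sketch below uses simple token-overlap (Jaccard) similarity as a stand-in for NPCEditor's statistical classifier; it illustrates the data flow only, not the actual algorithm.

```python
# Response selection by utterance classification: a token-overlap sketch.

def tokenize(text):
    """Lowercase and split into a set of word tokens."""
    return set(text.lower().split())


def train(samples):
    """samples: list of (sample_utterance, response_id) pairs."""
    return [(tokenize(utt), resp) for utt, resp in samples]


def classify(model, utterance):
    """Return the response_id whose sample utterance best matches."""
    words = tokenize(utterance)
    # Jaccard similarity between the user's tokens and each sample's tokens.
    best = max(model, key=lambda m: len(words & m[0]) / len(words | m[0]))
    return best[1]


model = train([
    ("what is your name", "introduce_self"),
    ("where are you from", "tell_origin"),
])
print(classify(model, "tell me your name"))  # → introduce_self
```

In a full system the dialogue manager would then decide how to act on the classifier's ranked results, e.g., falling back to an off-topic response when no candidate scores well.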
Several other open-source platforms address similar conversational AI needs; comparable alternatives include the following.
ConvLab is an open-source multi-domain end-to-end dialog system platform, aiming to enable researchers to quickly set up experiments with reusable components and compare...
DeepPavlov is an open-source conversational AI library built on TensorFlow and Keras. It is designed for the development of production-ready chatbots and complex...
Olympus was created at Carnegie Mellon University (CMU) during the late 2000s and benefits from ongoing improvements in functionality. Its main purpose is to help...
A Python framework for sharing, training, and testing dialogue models, from open-domain chitchat to VQA.
A flexible framework that can be used to create, train, and evaluate conversational AI.
PyDial is an open-source end-to-end statistical spoken dialogue system toolkit which provides implementations of statistical approaches for all dialogue system modules....
Rasa Core: A dialogue manager that replaces hand-crafted state machines with machine learning, letting dialogue management improve with every...