CogChar is an open source cognitive character technology platform.
Our characters may be physical (robots), virtual, or both.
Applications include therapeutics, education, research, and fun!

The CogChar project is Java-based but network-integrable, connecting several other open source projects.
Cogchar provides an important ingredient of the Glue.AI software recipe.

CogChar provides a set of connected subsystems for experimental, iterative character creation.
We aim to engineer a highly integrated yet modular system that allows many intelligent humanoid character features to be combined as freely yet as robustly as possible.
We achieve robust modularity using the OSGi standard for Java applications, although our individual features can often be run outside of OSGi.
Our outlook for Android compatibility is very bright, although we can't control or predict what Oracle and Google will do in coming years.
We directly support networking over HTTP and AMQP.
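As a small illustration of the HTTP side of that claim, the sketch below stands up a local HTTP endpoint and queries it using only the Java standard library. It is not Cogchar code: the `/status` path and the JSON payload are hypothetical, and a real Cogchar deployment would expose its own endpoints (e.g. via the Lift webapp described under External Features).

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpStatusDemo {
    public static void main(String[] args) throws Exception {
        // Stand up a tiny local HTTP endpoint reporting a (hypothetical)
        // character status. Port 0 asks the OS for any free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "{\"character\":\"zeno\",\"state\":\"idle\"}".getBytes("UTF-8");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // Query the endpoint with the standard-library HTTP client.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/status")).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
        server.stop(0);
    }
}
```

The same request/response pattern applies whether the peer is a Cogchar bundle, a MechIO robot service, or any other HTTP-speaking component; AMQP messaging would instead go through a broker client library.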

Cogchar by itself is primarily a set of bundles used to assemble your character application.
We do have some runnable demonstrations included in our bundles.
Here is a spreadsheet summary of our user interface components and Glue.AI containers.

External Features

Many of our subsystems come to us through other open source software projects (many of which are part of the Glue.AI effort). Specifically, Cogchar has the following main external ingredients:

Web interaction using the Lift framework (combine or replace with your own bundled webapp using PAX-WEB launcher).

After reading this list, it is perhaps rather obvious that one of the main goals of the Cogchar project is to achieve a useful simulation duality between real robotics worlds in the MechIO space and various onscreen 3D OpenGL worlds. We seek to do that as flexibly as possible, with the definition of world mappings occurring in the semantic space, with minimal need for software extensions.

(Images of Zeno Robot are copyright by RoboKind Robots, and are used here with permission. Please do not reproduce without permission from RoboKind Robots).

Internal Feature Goals

Flexible behavior authoring, editing, sharing, and triggering.

Variable, authorable, testable mappings between various character and world coordinate systems, as well as between symbolic spaces.
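To make the mapping goal above concrete, here is a minimal, self-contained sketch of a testable mapping between a character-local frame and a world frame, reduced to a 2D scale-plus-translation. The class and field names are hypothetical illustrations, not part of the Cogchar API; real mappings would be richer (3D, rotations, authored as semantic data rather than hard-coded).

```java
// Illustrative only: a 2D character-frame <-> world-frame mapping.
public class FrameMapping {
    final double scale;             // world units per character unit
    final double offsetX, offsetY;  // world position of the character origin

    FrameMapping(double scale, double offsetX, double offsetY) {
        this.scale = scale;
        this.offsetX = offsetX;
        this.offsetY = offsetY;
    }

    // Character-local coordinates -> world coordinates.
    double[] toWorld(double x, double y) {
        return new double[] { scale * x + offsetX, scale * y + offsetY };
    }

    // World coordinates -> character-local coordinates (inverse mapping).
    double[] toCharacter(double wx, double wy) {
        return new double[] { (wx - offsetX) / scale, (wy - offsetY) / scale };
    }

    public static void main(String[] args) {
        FrameMapping m = new FrameMapping(2.0, 10.0, -5.0);
        double[] w = m.toWorld(3.0, 4.0);           // forward mapping
        double[] back = m.toCharacter(w[0], w[1]);  // round-trips to the input
        System.out.println(w[0] + "," + w[1] + " / " + back[0] + "," + back[1]);
    }
}
```

Because the forward and inverse mappings are explicit, a mapping defined this way can be unit-tested for round-trip consistency, which is one way to read "testable" in the goal above.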

Fusion of sensor and symbol streams, applied to self-image and model of world.

Character sees, hears, touches and knows in many related symbolic and physical/virtual/numeric dimensions.

My left-eye camera sees some pixels, some of which are recognized as a face, which may be matched to a name, which I may choose to say, producing a certain sound...

I am standing on a floor, so my feet feel pressure from the floor. I am holding my torso erect above my legs, and my head above my torso. I am looking at a person called Samantha. I can describe these facts to you in spoken words, and you can also see these facts reflected in my physical/virtual display, because they are coming from the same world-image, built from the current fused estimate of all available information (both physical and symbolic).