Project 2

Hamlet
with Tines

This second project
will allow you to use motion-tracking to give life to an animated
character. In this case you are going to use OpenGL to help a fork play
Hamlet. I will supply the audio files, but you need to supply the
motion (and emotion) to go with it.

Why a fork? A fork
is quite easy to create in OpenGL so you can concentrate on moving the
fork to give it appropriate emotion for the scenes. You can give your
fork either 3 or 4 tines (the pointy parts) and the rest of the design
is up to you as long as it looks like a fork, and as long as you create
it completely within OpenGL - no importing models. Note that you
cannot add any additional 'body parts' to the fork - no extra arms or
legs, or eyeballs or ears or hair. You can make the fork move or jump
around the set if you want, or it can stand in the same spot. You can
twist or bend the body of the fork, and each of the tines. If you have
any questions about whether your fork design is legitimate then you
should check with Andy.

What does the fork
get to do? The fork gets to play two scenes from Hamlet - one longer
(about a minute) and one shorter (about 30 seconds). The staging of the
scene is up to you - Richard Burton's Hamlet was done on a minimal
stage in street clothes, so you don't need much of a setting to stage
Hamlet. The fork can be alone on a black stage with only minimal props,
or you can create a set and some props. Several of these scenes suggest
props (knife, skull, etc) and sets (graveyard, etc). If you want any
props or backdrops, then you need to create them yourself in OpenGL.
You can set up your lighting any way you wish to get the effect you
want.

There are several
very good renditions of Hamlet that have been preserved on audio and
video. Unfortunately the audio quality on both the Sir Laurence Olivier
version and the Richard Burton version is rather lacking by today's
standards - and they each took their own liberties with the text, as
did Mel Gibson. Fortunately, in 1992 before filming Hamlet, Kenneth
Branagh and friends did a dramatization of the complete play for BBC
radio. That is the version of the audio that we will be using.

For the longer
speech you have three choices:

Frailty thy name is woman

To be or not to be

What a piece of work is man

For the shorter
speech you have three choices:

Alas poor Yorick

The play's the thing

Suits of woe

All of the files
are given in AIFF format (8K, mono, 16-bit) which is compatible with the
SGI machines. The simplest way to play these on an SGI is: playaiff
soundfile.aiff

In order to get the
fork to move, we will use motion capture. The virtual reality equipment
in EVL is designed to monitor the location and orientation of the user
to update the graphics. Instead we can use this equipment to record the
position and orientation of the user and then use those positions and
orientations to move the fork. We have three trackers on most of our VR
devices, typically one for the head and one for each hand, so you can
simultaneously record the position and orientation of three locations.

I am providing a
piece of software to record those positions and orientations into a
text file. The software is called 'capture' and can be found in ~aej.
You start the program by typing 'capture'.

To do the motion
capture, you stand in the CAVE and start the recording by pressing the
left button on the wand and stop the recording by pressing the right
button. Each time you press the left button you will create a new text
file. Be sure to keep the original recordings - you will need to turn
them in. You will probably need to edit the resulting files (e.g.
cropping) and you should note what modifications you have made to those
files. The files will give the locations in the CAVE's coordinate
system where 0,0,0 is at the center of the CAVE on the floor so you
will need to adjust the coordinate system in your playback program. The
sound file "sound.aiff" starts playing automatically when you press the
left button; this allows you to play back the sound and do your acting
to the sound file. You may want to try and do the whole speech at once,
or do it in segments and then connect the files together. You may want
to act out all the 'body parts' of the fork at once, or do them one at
a time. You may end up with one large data file or a bunch of separate
ones. There are lots of options for how to do this.

While you are
recording you will be able to see a representation of where the
computer believes the sensors are so you can adjust your behaviour
accordingly. You will probably want to 'overact' somewhat and be a bit
bit more dramatic in your gestures.

You should do the
long scene straight - that is you should do it seriously. For the
second shorter scene you can either do it seriously or goof on it and
parody the speech. It's up to you.

I have set aside
October 2nd in the lab for people to use the CAVE to do their
recordings. We will set up a schedule and each person will have an hour
to do their recordings.

Once you have the
recordings done you should be able to do the rest of your work on the
SGI O2s that are available in the Computer Science labs on the second
floor of SEL.

Now that you have
all of the motions recorded you can use them to move the fork in your
OpenGL programs (one for the first speech, one for the second
speech). When you run either of the programs it should open up a 500 x
500 pixel resizable window showing an appropriate view of the action
with the fork on stage. Pressing the 'F' key starts the audio playing
and the fork acting. Pressing the left and right arrows allows the
viewer to circle around the scene - you should give the user the 'best
view' to start with but the user is free to change that view.

The data file that
you create gives the time (in seconds) since capture started, then for
each sensor it gives the position (3 floats) and orientation (3
floats). Standing in the center of the CAVE looking at the front
screen, the position X runs left (negative) to right (positive); the
position Y is 0 at the floor and increases upward; the position Z runs
ahead (negative) to behind (positive). The orientation is given in the
order pitch, yaw, roll, where pitch runs down (negative) to up
(positive); yaw runs right (negative) to left (positive); roll runs
clockwise (negative) to counterclockwise (positive).

Before the project
deadline you should create a directory containing the C/C++ source code
for both programs, the two executables, the motion-capture text files,
the final versions of the files used by your applications, and notes on
what changes you made to the motion-capture files to generate those
final versions.

Obviously all of
the code, aside from that which I provide, should be your own. In order
to play the sounds we will use the brain-dead approach of doing a
system call from within the program. It's not the best way, but it's
simple and it seems to work quite well.

Now I would suggest
that the first thing you should do is listen to the audio file for the
scenes and choose the two that you like. What are the main emotions in
that scene? How might you want to stage the scene? If you haven't read
the play in a while, or if you haven't read the play at all, then you
probably should read it for the context - see the links below. If you
want to take the easy way out then you can take a look at how various
productions of Hamlet have staged these scenes, but note that these
audio files come from a BBC Radio production so there is no
accompanying video. I would caution against trying to do a straight
translation of Olivier's or Burton's or Branagh's actions into a fork
- I don't think that will work. Start looking at forks - different
styles of forks. Check the web. Go to stores that sell silverware sets.
Get ideas. Start drawing forks. Start drawing storyboards. See what
parts of the fork you are going to need to move, and how they will need
to move.