And now, on to the oldest, cruftiest, yet can't-live-without-it-iest part of QTJ: QuickDraw. QuickDraw is a graphics API that can be traced all the way back to that first Mac Steve Jobs pulled out of a bag and showed the press more than 20 years ago. You know—back when Mac supported all of two colors: black and white.

Don't worry; it's gotten a lot better since then.

To be fair, a native Mac OS X application being written today from scratch probably would use the shiny new "Quartz 2D" API. And as a Java developer, the included Java 2D API is at least as capable as QuickDraw, with extension packages like Java Advanced Imaging (JAI) only making things better.

The real advantage to understanding QuickDraw is that it's what's used to work with captured images (see Chapter 6) and individual video samples (see Chapter 8). It is also a reasonably capable graphics API in its own right, supporting import from and export to many formats (most of which J2SE lacked until 1.4), affine transformations, compositing, and more.

Getting and Saving Picts

If you had a Mac before Mac OS X, you probably are very familiar with picts, because they were the native graphics file format on the old Mac OS. Taking screenshots would create pict files, as would saving your work in graphics applications. Developers used pict resources in their applications to provide graphics, splash screens, etc.

Actually, a number of tightly coupled concepts relate to picts. The native structure for working with a series of drawing commands is itself called a Picture. This struct, along with the functions that use it, is wrapped by the QTJ class quicktime.qd.Pict. There's also a file format for storing picts, which can contain either drawing commands or bit-mapped images—files in this format usually have a .pct or .pict extension. QTJ's Pict class has methods to read and write these files, and because it's easy to create Picts from Movies, Tracks, GraphicsImporters, SequenceGrabbers (capture devices), etc., it's a very useful class.

How do I do that?

The PictTour.java application, shown in Example 5-1, exercises the basics of getting, saving, and loading Picts.

Note

Compile and run this example with ant run-ch05-picttour from the downloadable book code.

The two Thread.sleep( ) calls are here only as a workaround to a problem I saw while developing this example—reading a file I'd just written proved crashy (maybe the file wasn't fully closed?). Because it's unlikely you'll write a file and immediately reread it, this isn't something you'll want or need to do in your code.

When run, this example prompts the user for a graphics file, which then is displayed in three windows, as shown in Figure 5-1. These represent three different means of loading the pict.

What just happened?

You can get picts in a number of ways in QTJ. The first example here is to use a GraphicsImporter to load an image file in some arbitrary format, and then call getAsPicture( ) to get a Pict object. This is the easiest way to get a Pict from an arbitrary file—if you knew for sure that a given file was in the pict file format, you could use Pict.fromFile( ) instead, but that does not check to ensure the file really is a pict. So, the safe thing to do is to use a GraphicsImporter, let it figure out the format of the source file, and then convert to pict if necessary with getAsPicture( ).
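The safe path just described (importer first, then conversion) boils down to a few calls. Here's a minimal sketch with exception handling omitted; it is not the PictTour listing itself, and the source file name is arbitrary:

```java
// Sketch: load an arbitrary image file and convert it to a Pict.
// Assumes QuickTime for Java is installed and on the classpath.
import quicktime.QTSession;
import quicktime.io.QTFile;
import quicktime.qd.Pict;
import quicktime.std.image.GraphicsImporter;

public class PictFromFileSketch {
    public static void main (String[] args) throws Exception {
        QTSession.open();
        // Let the GraphicsImporter figure out the source format...
        GraphicsImporter gi =
            new GraphicsImporter (new QTFile (new java.io.File ("image.png")));
        // ...then convert to a Pict, regardless of the original format
        Pict pict = gi.getAsPicture();
        QTSession.close();
    }
}
```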

Writing a pict file to disk is easy: just call writeToFile().

Tip

Curiously, this takes a java.io.File rather than the QTFile used by so many other I/O routines in QTJ.

You also can write a Pict to disk by using the GraphicsImporter's saveAsPicture( ) method.

Note

Yes, it is kind of weird to use the "importer" for what is effectively an "export."

The example uses both of these methods to write pict files to disk—Pict.writeToFile( ) creates pict.pict and GraphicsImporter.saveAsPicture( ) creates gipict.pict. Each file is then reloaded with GraphicsImporters. Conveniently, a GraphicsImporter can be used with a QTFactory to create a QTComponent (see Section 4.4 in Chapter 4), which is how the imported picts are shown on-screen.
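In sketch form, the two write paths and the reload look roughly like this. Assume gi is a GraphicsImporter already holding the source image and frame is an AWT Frame; the script-code argument to saveAsPicture( ) is an assumption to verify against the Javadoc:

```java
// Sketch: write the same image as a pict two ways, then reload one copy.
Pict pict = gi.getAsPicture();
pict.writeToFile (new java.io.File ("pict.pict"));       // java.io.File, not QTFile
gi.saveAsPicture (new QTFile (new java.io.File ("gipict.pict")),
                  IOConstants.smSystemScript);           // script code: an assumption
// reload with a fresh GraphicsImporter and display via a QTComponent
GraphicsImporter reimporter =
    new GraphicsImporter (new QTFile (new java.io.File ("pict.pict")));
QTComponent qc = QTFactory.makeQTComponent (reimporter);
frame.add (qc.asComponent());
```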

What about . . .

. . . other ways to get pictures? Look at the Pict class and you'll see several static fromXXX( ) methods that provide Picts from GraphicsImporters, GraphicsExporters, Movies, Tracks, and other QTJ classes.

Also, why does this example go through the hassle of creating absolute path strings and passing those to the QTFile constructor? It's a workaround to an apparent bug in QTJ for Windows: when you use a relative path (like Pict.writeToFile (new File("MyPict.pict"))), QTJ sometimes writes the file not to the current directory, but rather to the last directory it accessed. In this case, that means the directory it read the source image from. Specifying absolute paths works around this problem.

Getting a Pict from a Movie

If you're working with movies, you'll probably want to be able to get a pict from some arbitrary time in the movie. You could use this for identifying movies via thumbnail icons, identifying segments on a timeline GUI, etc. This action is very common, and fortunately it's really easy.

How do I do that?

To grab a movie at a certain time, you just need a one-line call to Movie.getPict( ), as exercised by the dumpToPict( ) method shown here:

Note

Notice I don't say "grab the current movie frame" because the movie could have other on-screen elements like text, sprites, other movies, etc., not just one frame of one video track.

This method stops the movie if it's playing and stores the previous play rate. Then it creates a Pict on the movie's current time and saves it to a file called movie.pict. Then it restarts the movie.
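The steps just described can be sketched as follows; the getPict( ) argument and the rate handling are assumptions drawn from the description above, not a reproduction of the book's listing:

```java
// Sketch of the dumpToPict() logic: pause, snapshot, save, resume.
public void dumpToPict (Movie movie) throws QTException {
    float savedRate = movie.getRate();
    if (savedRate != 0)
        movie.stop();                         // getPict() misbehaves on a playing movie
    Pict pict = movie.getPict (movie.getTime()); // everything on-screen at this time
    pict.writeToFile (new java.io.File ("movie.pict"));
    movie.setRate (savedRate);                // restart at the previous play rate
}
```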

Note

The downloadable book code exercises this in a demo called PictFromMovie. Run it with ant run-ch05-pictfrommovie.

What about . . .

. . . not stopping the movie? I haven't had good results with this call unless the movie is stopped. At best, it makes the playback choppy for a few seconds; at worst, it crashes.

Converting a Movie Image to a Java Image

It's possible you'll want to grab the current display of the movie and get it into a java.awt.Image. A convenient method call has been provided for just this task; unfortunately, it doesn't work very well, so a Pict-based workaround is needed.

How do I do that?

QTJ provides QTImageProducer, an implementation of the AWT ImageProducer interface. ImageProducer dates back to Java 1.0, and was designed to handle latency and unreliability when loading images over the network—issues that are irrelevant in typical desktop cases.

The most straightforward way to get an image from a movie is to get a QTImageProducer from a MoviePlayer, the object typically used to create a lightweight, Swing-ready QTJComponent. The ConvertToImageBad application in Example 5-2 demonstrates this approach.

Note

Makes sense, doesn't it? The MoviePlayer needs to generate AWT images for the lightweight QTJComponent, so that's what you get an ImageProducer from.

This is a negative example. Keep reading for why you don't want to use this code, and for a superior alternative.

What just happened?

The grabMovieImage() method creates a QTImageProducer from the MoviePlayer and hands it to the AWT Toolkit method createImage(). This call returns an AWT Image that (because it's a nice, clean, one-line call) is stuffed into a Swing ImageIcon and put on-screen.

This is more of a "what the heck" than a "what just happened." If your results are anything like mine, you're probably wondering why the movie stopped the first time you snapped a picture, even though the sound continued. Or why, for that matter, subsequent pictures seem to be later in the movie, meaning the decompression and decoding of the video is still working, but that it's just not getting to the screen.

Tip

Or not—maybe they'll have fixed it by the time you read this. At any rate, as of this writing, the QTImageProducer provided by a MoviePlayer is not to be trusted.

A Better Movie-to-Java Image Converter

The code shown in Section 5.3 is error-prone and nasty. Fortunately, a QTImageProducer is also available from the GraphicsImporterDrawer class, which does not have to work with a moving target the way the MoviePlayer does. If only you could use that one instead . . . .

How do I do that?

The example program ConvertToJavaImageBetter has a different implementation of the grabMovieImage( ) method, as shown in Example 5-3.

What just happened?

This isn't a hack. It's close, though.

Once the movie is paused, the key is to get the movie's display into a GraphicsImporter. Once that's done, it's easy to get a QTImageProducer from a GraphicsImporterDrawer and an image from the AWT Toolkit.

Note

Note to self: pitch QuickTime for Java Hacks to O'Reilly!

The problem is getting the image into a GraphicsImporter. If you look at the Javadoc, you might see one way to connect the dots: get a Pict from the Movie, save that to disk, then turn around and import. It would look something like this:
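That disk-based round trip might be sketched like this (the temp file name is arbitrary):

```java
// Sketch: movie -> Pict -> file on disk -> GraphicsImporter.
Pict moviePict = movie.getPict (movie.getTime());
java.io.File tempFile = new java.io.File ("temp.pict");
moviePict.writeToFile (tempFile);
GraphicsImporter importer = new GraphicsImporter (new QTFile (tempFile));
```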

With the pict imported into a GraphicsImporter, you would get a QTImageProducer from the GraphicsImporterDrawer and generate AWT Images from the image producer, without messing up the movie playback.

The drawback of this approach is that you must write data to the hard drive and then read it back, which is obviously much slower than an operation that takes place purely in memory.

In fact, an in-memory equivalent is possible. Look back at the GraphicsImporter Javadoc. Several setData( ) methods allow you to use sources other than just flat files for input to a GraphicsImporter. Two of them allow you to pass in more or less opaque pointers: setDataReference() and setDataHandle(). With these calls, the importer will read from memory the same way it would read from disk.

Note

And they say Java doesn't have pointers!

The trick in this case is to make the GraphicsImporter think it's reading a .pict file from disk when it's actually reading from memory. One gotcha is that pict files have a 512-byte header before their data—the header doesn't have to contain anything meaningful, it just has to be present. So, allocate a byte array 512 bytes longer than the size of the Pict data and copy the bytes over with an offset of 512. The size and contents come from getSize( ) and getBytes( ), respectively (both inherited from QTHandleRef), which describe the native structure the Pict object points to, not the Java object itself.
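The header trick itself involves no QuickTime at all, so it can be shown as plain, runnable Java. This is a QuickTime-free illustration, not code from the example program:

```java
// Pict *files* start with a 512-byte header that can be all zeros, so an
// in-memory copy of Pict data must be padded the same way before a
// GraphicsImporter will accept it as a "file".
public class PictHeaderPad {
    public static final int PICT_HEADER_SIZE = 512;

    public static byte[] padWithHeader (byte[] pictData) {
        byte[] padded = new byte[PICT_HEADER_SIZE + pictData.length];
        // Java arrays are zero-filled, so the fake header needs no explicit writes
        System.arraycopy (pictData, 0, padded, PICT_HEADER_SIZE, pictData.length);
        return padded;
    }

    public static void main (String[] args) {
        byte[] fakePictData = {1, 2, 3, 4};
        byte[] padded = padWithHeader (fakePictData);
        System.out.println (padded.length);             // 516
        System.out.println (padded[0]);                 // 0 (header byte)
        System.out.println (padded[PICT_HEADER_SIZE]);  // 1 (first data byte)
    }
}
```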

Next, you need a GraphicsImporter for the Pict format, and a GraphicsImporterDrawer to provide the QTImageProducer. The example code creates these in its constructor:
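A sketch of that constructor setup; kQTFileTypePicture is QTJ's pict file-type constant in StdQTConstants:

```java
// Sketch: a GraphicsImporter keyed to the pict format, plus a drawer
// to provide the QTImageProducer later.
pictImporter = new GraphicsImporter (StdQTConstants.kQTFileTypePicture);
importDrawer = new GraphicsImporterDrawer (pictImporter);
```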

Build a DataRef to point to the byte array and pass it to the GraphicsImporter with setDataReference( ). You've now replaced the file write and file read with equivalent in-memory operations. Now it's a simple matter of getting a GraphicsImporterDrawer and, from that, a QTImageProducer to create Java images.
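Put together, the in-memory handoff looks something like this sketch. The DataRef constructor arguments shown are assumptions from memory; check the DataRef Javadoc before relying on them:

```java
// Sketch: wrap the padded bytes (512-byte fake header + Pict data) in a
// QTHandle, point a DataRef at them, and hand that to the importer.
byte[] pictBytes = pict.getBytes();
byte[] padded = new byte[512 + pictBytes.length];
System.arraycopy (pictBytes, 0, padded, 512, pictBytes.length);
QTHandle handle = new QTHandle (padded);
DataRef ref = new DataRef (handle,
                           StdQTConstants.kDataRefExtensionMacOSFileType, // assumption
                           ".pict");
pictImporter.setDataReference (ref);
```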

Drawing with Graphics Primitives

In AWT, a Graphics object represents a drawing surface—either on-screen or off-screen—and supplies various methods for drawing on it. QuickTime has a GWorld object that's so similar, the QT developers renamed it QDGraphics just to make Java developers feel at home. As with the AWT class, painting is driven by a callback mentality.

What just happened?

The program sets up an ImageDescription, specifying a color model and size information, and creates a QDGraphics drawing surface according to its specs. Next, a new Pict is created from the QDGraphics and an object called OpenCPicParams, which provides size and resolution information. For on-screen work, the default 72dpi is fine.

Next, it issues a Pict.beginDraw() command, passing in a QDDrawer object. QDDrawer is an interface for setting up callbacks to a draw() method that specifies the QDGraphics to be drawn on. This redraw-oriented API is kind of overkill for this headless, off-screen example, but it does get the job done. The Pict records the drawing commands made in the draw( ) call and saves the result to disk as gworld.pict.

So, what can you do with QDGraphics primitives? Some basics of geometry are shown in this example. QDGraphics work with a system of foreground and background colors, a pen of some number of horizontal and vertical pixels, and a concept of a current position. This example begins with two variants of line drawing: the first drawing a line specified by an offset in horizontal and vertical pixels, and the second drawing a line to a specific point. Next, it draws some text in the default font—note that as with AWT, the text will go above the current point. Finally, the example iterates through some of the simpler shapes available as graphics primitives: ovals, optionally rounded rectangles, and arcs.
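The primitives walk-through above might look like the following inside a QDDrawer callback. Method names follow QuickDraw conventions, but treat the Pict construction and every signature here as assumptions to check against the QDGraphics and Pict Javadoc:

```java
// Pseudocode-flavored sketch of the gworld.pict drawing sequence.
OpenCPicParams params =
    new OpenCPicParams (new QDRect (0, 0, 300, 300));  // default 72 dpi
Pict pict = new Pict (gw, params);                     // record into gw (assumption)
pict.beginDraw (new QDDrawer() {
    public void draw (QDGraphics g) throws QTException {
        g.moveTo (10, 10);
        g.line (50, 0);           // relative: 50 px right, 0 down
        g.lineTo (100, 100);      // absolute: to the point (100, 100)
        g.moveTo (10, 120);
        g.drawText ("Hello, QuickDraw", 0, 16); // text sits above the current point
        g.frameOval (new QDRect (10, 140, 80, 40));
        g.frameRoundRect (new QDRect (10, 190, 80, 40), 10, 10);
        g.frameArc (new QDRect (10, 240, 80, 40), 0, 270);
    }
});
pict.writeToFile (new java.io.File ("gworld.pict"));
```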

What about . . .

. . . drawing an image into the QDGraphics, like with AWT's Graphics.drawImage( ) ? Ah, you're getting ahead of me. That will be covered later in the chapter.

Also, why are all the variables and comments here GWorld and gw instead of QDGraphics and qdg? Like I said at the start of this lab, QDGraphics is something of an analogy to an AWT Graphics. Unfortunately, it's a flawed analogy. It wraps a native drawing surface called a GWorld , and all the calls throughout QTJ that take or return it use the "GWorld" verbiage, such as the setGWorld( ) and getGWorld( ) calls that you'll see throughout the Javadoc. Once you start getting into QTJ, the desire to understand it from QuickTime's point of view, as a GWorld, outweighs the benefits of making an appeal to the AWT Graphics analogy. So, to me, it's a GWorld.

Getting a Screen Capture

One frequently useful source of image data is, unsurprisingly, the screen—or screens, if you're so fortunate. Each screen is represented by an object that can give you its current contents, though it takes a little work to do anything with it.

How do I do that?

ScreenToPNG, shown in Example 5-5, is a headless application that starts up, grabs the screen, and writes out the image to a PNG file called screen.png.

Note

I use PNG for screenshots because it's lossless, widely supported, compressed, and patent-unencumbered.

Notice at the bottom left that I have the DVD Player application running. Apple's tools for doing screen grabs—the Grab application and the Cmd-Shift-3 and Cmd-Shift-4 key combinations—won't work if you have the DVD Player running. However, this proves that those pixels are available to QuickDraw. That said, if you grab the screen while a DVD is playing, you might get tearing (if the capture grabs between frames) or even a blank panel (if the capture catches the repaint at a bad time). If you're going to use this to grab images from DVDs, hit Pause first.

Note

Also, don't do anything with a DVD that will get you or me sued.

What just happened?

The program asks for the main screen by means of the static GDevice.getMain( ) method. From this, you can get a PixMap, which is an object that represents metadata about a stored image, such as its color table, pixel format, packing scheme, etc. This metadata also can be stored as an ImageDescription, which is a structure that many graphics methods take as a parameter. The PixMap also has a pointer to the byte array that holds the image data, which you can retrieve as the wrapper object RawEncodedImage.

Note

Java 2D analogy: a PixMap is like a Raster, an ImageDescription is like a Sample-Model, and an EncodedImage is like a DataBuffer. Not exactly the same, but the same ideas throughout.

So now you have an image of what's on the screen—what can you do with it? The goal is to get that image into a format suitable for a GraphicsExporter. One means of doing this is to render into a QDGraphics and send that to the exporter. To do this, look to the QTImage class, which has methods to compress (from a QDGraphics drawing surface to an EncodedImage) and decompress (from a possibly compressed EncodedImage to a QDGraphics). In this case, use decompress( ) to make a QDGraphics, then pass that to the exporter's setInputPixMap( ) method (yes, despite the name, it takes a QDGraphics, not a PixMap) and do the export.
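The pipeline reads: screen, to PixMap, to QDGraphics, to PNG exporter. Several intermediate calls in this sketch (marked below) are assumptions about the QTJ signatures; treat it as pseudocode to check against the Javadoc:

```java
// Sketch of the screen-grab-to-PNG pipeline described above.
GDevice mainScreen = GDevice.getMain();
PixMap screenMap = mainScreen.getPixMap();                  // assumption
ImageDescription desc = screenMap.getImageDescription();    // assumption
RawEncodedImage screenBits = screenMap.getPixelData();      // assumption
// decompress the (possibly packed) screen pixels into an offscreen surface
QDGraphics gw = new QDGraphics (new QDRect (desc.getWidth(), desc.getHeight()));
QTImage.decompress (screenBits, desc, gw, 0);               // assumption
// despite the name, setInputPixMap( ) takes the QDGraphics
GraphicsExporter exporter =
    new GraphicsExporter (StdQTConstants4.kQTFileTypePNG);
exporter.setInputPixMap (gw);
exporter.setOutputFile (new QTFile (new java.io.File ("screen.png")));
exporter.doExport();
```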

Tip

It's odd that EncodedImage is an interface, yet its relevant methods, like decompress( ), are static in QTImage (which is in another package!). Maybe EncodedImage should have been an abstract class?

What about . . .

. . . getting other screens? If you do have multiple monitors, GDevice has a scheme for iterating through the screens. Call the static GDevice.getList( ) to get—wait for it—not a list of GDevices, but just the first one. You then call its instance method getNext( ) to return another GDevice, and so on, until getNext() returns null.

And why is the PNG file-type constant defined in StdQTConstants4? PNG came late to the QuickTime party and wasn't supported until QuickTime 4. The later constants classes (StdQTConstants4, StdQTConstants5, and StdQTConstants6) define constants that were added in later versions of QuickTime. kQTFileTypeTIFF is also in StdQTConstants4, but most other values you'd want to use are in the original StdQTConstants.

Also, it's getting difficult to remember the various means of converting between EncodedImages, Picts, QDGraphics, etc. To keep track of all this for myself, I created the diagram in Figure 5-6 while writing this chapter and have found myself consulting it frequently since then.

Matrix-Based Drawing

Primitives and copying blocks of pixels are nice, but they're kind of limiting. Oftentimes, you must take pixels and scale them, rotate them, and move them around. Of course, if you've worked with Java 2D, you know this as the concept of affine transformations, which map one set of pixels to another set of pixels, keeping straight lines straight and parallel lines parallel.

If you've really worked with Java 2D's affine transformations, you probably know that they're represented as a linear algebra matrix, with coordinates mapped from source to destination by multiplying and/or adding pixel values against coefficients of the matrix. By changing the coefficients in the matrix to interesting values (or trigonometric functions), you can define different kinds of transformations.

QuickTime does exactly the same thing, with the minor exception that rather than hiding the matrix in a wrapper (like Java 2D's AffineTransform class), it puts the matrix front-and-center throughout the API. One reason for this is that the matrix is also a major part of the file format—tracks in a movie all have a matrix in their metadata to determine how they're rendered at runtime.
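The underlying arithmetic needs no QuickTime at all. In QuickDraw's convention a point (x, y) is treated as the row vector [x y 1] and multiplied against a 3x3 matrix whose coefficients encode scale, rotation, and translation. This QuickTime-free illustration shows the math (the class and method names are mine, not QTJ's):

```java
// Plain-Java model of an affine transform applied to one point.
public class AffineDemo {
    /* Applying | a  b  0 |
                | c  d  0 |  to [x y 1] gives:
                | tx ty 1 |
       x' = a*x + c*y + tx   and   y' = b*x + d*y + ty            */
    public static double[] apply (double a, double b, double c, double d,
                                  double tx, double ty, double x, double y) {
        return new double[] { a * x + c * y + tx, b * x + d * y + ty };
    }

    public static void main (String[] args) {
        // scale by 2 in both dimensions, then translate by (10, 5)
        double[] p = apply (2, 0, 0, 2, 10, 5, 3, 4);
        System.out.println (p[0] + ", " + p[1]);   // 16.0, 13.0
    }
}
```

Changing a, b, c, and d to sines and cosines of an angle is what produces rotation, which is why "interesting values (or trigonometric functions)" define the different transformations.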

QuickTime matrix manipulation can basically do three things for you:

Translation

Move a block of pixels from one location to another

Rotation

Rotate pixels around a given point

Scaling

Make a block of pixels bigger or smaller, or change its shape

Tip

This is a lab, not a lecture, so you don't get the all-singing, all-dancing, all-algebra introduction to matrix theory here. If you must have this, Apple provides a pretty straightforward intro in "The Transformation Matrix," part of the "Introductions to QuickTime" documentation anthology on its web site.

How do I do that?

The example GraphicImportMatrix shows the effect of setting up a Matrix and then using it for drawing operations. A full listing is in Example 5-6.

This headless app begins by importing two PNG files, the number 1 on a green background and the number 2 on cyan. Then it creates a GWorld (oops, I mean a QDGraphics—sorry!) big enough to hold the 2 image, which will serve as the background. Both GraphicsImporters call setGWorld() with the scratchWorld, which allows them to draw( ) into it. A Matrix defines a scale, translate, and rotate transformation for the 1, which is drawn atop the 2. The result is compressed as a PNG and saved as matrix.png, which is shown in Figure 5-7.

What just happened?

Using setMatrix( ) with a GraphicsImporter allows you to tell the importer to use the transformation specified by the Matrix when you call the importer's draw( ) method. Of the three typical transformations, two can be combined into one call—scaling and translating can be expressed with a single call, Matrix.rect() , which defines a mapping from one source rectangle to a target rectangle. In the example, rect( ) maps from the full size of the image to a quarter-size image, centered horizontally and vertically.

Tip

The same thing can be done with separate calls to Matrix.translate( ) and Matrix.scale(), if you prefer.

The example also calls Matrix.rotate() to rotate the scaled and moved box by 30 degrees clockwise.
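The Matrix setup just described might be sketched as follows. The image dimensions and rectangle coordinates are illustrative assumptions, not values from Example 5-6:

```java
// Sketch: map the full-size image into a centered quarter-size
// rectangle, then rotate it 30 degrees clockwise.
Matrix matrix = new Matrix();
QDRect fullSize = new QDRect (0, 0, 320, 240);
QDRect quarter  = new QDRect (120, 90, 160, 120);   // quarter-size, centered
matrix.rect (fullSize, quarter);                    // scale + translate in one call
matrix.rotate (30,                                  // degrees, clockwise
               120 + 80, 90 + 60);                  // about the box's center
oneImporter.setMatrix (matrix);
oneImporter.draw();                                 // draws transformed into the GWorld
```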

Tip

You also can define matrix transformations by calling the various setXXX( ) methods that set individual coordinates in the Matrix, if you've read Apple's Matrix docs and understand each coefficient. But why bother when you've got the convenience calls?

Having set this Matrix on 1's GraphicsImporter, the example draws 2 into scratchWorld as a background, and then draws 1 on top of it, scaled, translated, and rotated.

But what to do with the pixels that have been drawn into the QDGraphics? It's not like the Section 5.5 lab, in which a QDGraphics was wrapped by a Pict that could be saved off to disk. Instead, use QTImage to create an EncodedImage from the drawing surface. In the Section 5.6 lab, QTImage.decompress( ) converted an image to a QDGraphics. In this case, QTImage.compress( ) can return the favor by compressing the possibly huge pixel map into a compressed format.

Compressing is harder than decompressing. You need to know up front how big a byte array will be needed to hold the compressed bytes, so first you call getMaxCompressionSize(). This takes six parameters:

A QDGraphics to compress from.

A QDRect defining the region to be compressed.

Color depth, as an int. Set this to 0 to let QuickTime decide.

Codec quality. These are in StdQTConstants. From worst to best, they are: codecMinQuality, codecLowQuality, codecNormalQuality, codecHighQuality, codecMaxQuality, and codecLosslessQuality. Note that not all codecs support all these values.

Codec type. These constants are identified as XXXCodecType constants in the StdQTConstants classes.

Codec identifier. If you have a CodecComponent object you want to use for the compression, pass it here. Typically, you pass null to let QuickTime decide.

Most of these parameters are used in the subsequent compress( ) call. It goes without saying that you need to use the same values for each call, or else getMaxCompressionSize( ) will lead you to create a byte array that is the wrong size.

Along with many of the preceding parameters, the compress() call takes a RawEncodedImage created from a suitably large byte array. compress( ) puts the compressed and encoded image data into the RawEncodedImage and returns an ImageDescription. Taken together, these are enough to provide an input to a GraphicsExporter, in the form of a call to setInputPtr() .
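The size-then-compress sequence might look like this sketch. The parameter values mirror the six-item list above, but the exact argument order and the RawEncodedImage construction are assumptions to check against the QTImage Javadoc:

```java
// Sketch: measure, allocate, compress, and hand off to an exporter.
QDRect rect = gw.getPortRect();
int maxSize = QTImage.getMaxCompressionSize (gw, rect,
        0,                                    // depth: let QuickTime decide
        StdQTConstants.codecNormalQuality,
        StdQTConstants.kPNGCodecType,
        null);                                // codec: let QuickTime decide
byte[] compressedBytes = new byte[maxSize];
RawEncodedImage compressed = new RawEncodedImage (compressedBytes); // assumption
ImageDescription desc = QTImage.compress (gw, rect,                 // same values!
        StdQTConstants.codecNormalQuality,
        StdQTConstants.kPNGCodecType,
        compressed);
// desc + compressed are now enough for GraphicsExporter.setInputPtr()
```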

Note

Passing pointers again! This is one of those cases where QTJ is very un-Java-like.

Compositing Graphics

Matrix transformations are nice, but you can do more with image drawing. QuickDraw supports a number of graphics modes so that instead of just copying pixels from a source to a destination, you can combine them to create interesting visual effects. The graphics mode defines the combination: blending, translucency, etc.

How do I do that?

Specifying a graphics mode for drawing is trivial. Create a GraphicsMode object and call setGraphicsMode( ) on the GraphicsImporter. In the included example, GraphicImportCompositing.java, the mode is set with the following code:
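In sketch form, that setup amounts to a constructor call and a setter; QDConstants.blend and QDColor.green are the values discussed in this lab:

```java
// Sketch: average overlapping colors when the importer draws.
GraphicsMode blendMode = new GraphicsMode (QDConstants.blend, QDColor.green);
graphicsImporter.setGraphicsMode (blendMode);
// swap in QDConstants.transparent to knock out all green pixels instead:
//   new GraphicsMode (QDConstants.transparent, QDColor.green)
```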

What just happened?

The "blend" GraphicsMode instructs QuickDraw to average out colors where they overlap. In this case, 1's black pixels are lightened up by averaging when averaged with cyan, and the green is slightly tinted where it overlaps with cyan or black.

The QDColor.green argument is irrelevant in this case, but change the first argument to QDConstants.transparent and suddenly the result is very different, as shown in Figure 5-9.

A GraphicsMode takes a constant to specify behavior, and a color that is used by some of the available modes. In the case of transparent, any pixels of the specified color (green in this case) become invisible, allowing the background picture to show through.

Warning

Don't jump to the conclusion that this is similar to transparency in a GIF or a PNG. Those are indexed color formats, where one of the index values can be made transparent. But in such a format, you could have 254 index values that all represented the same shade of green, and a 255th that becomes invisible. In this QuickDraw example, all green pixels are transparent. If you've worked with television equipment, this should be familiar as the chroma key concept frequently used in news and weather, where someone will stand in front of a green wall, and an effects box will replace all green pixels with video from another source.

There are too many supported graphics mode values to list here, but some of the most useful are as follows:

srcCopy

Copies source to destination. This is the normal behavior.

transparent

Punches out specified color and lets background show through.

blend

Mixes foreground and background colors.

addPin

Adds foreground and background colors, up to a maximum value.

subPin

Subtracts the foreground color from the background color, down to a minimum value.
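What these modes compute per color channel (values 0 to 255) can be modeled in plain Java. This is a software illustration of the idea, not QuickDraw's actual implementation; in particular, real blend weighting involves an operator color, which is simplified here to a straight average:

```java
// QuickTime-free model of the per-channel math behind the modes above.
public class GraphicsModes {
    static int srcCopy (int src, int dst) { return src; }                     // replace
    static int blend   (int src, int dst) { return (src + dst) / 2; }         // average
    static int addPin  (int src, int dst) { return Math.min (src + dst, 255); } // pin high
    static int subPin  (int src, int dst) { return Math.max (dst - src, 0); }   // pin low

    public static void main (String[] args) {
        System.out.println (blend  (100, 200));  // 150
        System.out.println (addPin (200, 200));  // 255 (pinned at the max)
        System.out.println (subPin (50,  30));   // 0 (pinned at the min)
    }
}
```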