With the introduction of QuickTime streaming, QuickTime has extended its ability to retrieve data. Traditionally, media data had to exist as local data. With streaming, media data can now exist on remote servers and be retrieved using the HTTP or FTP Internet protocols, as well as the traditional file protocol of previous releases. The location of media is described by a structure called a DataRef, which tells QuickTime where the media data is and how to retrieve it.
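For example, a movie whose data lives on a remote server can be opened with a DataRef built from a URL string and the Movie.fromDataRef factory method. This is a minimal sketch; the URL is a placeholder, not a real server:

import quicktime.QTSession;
import quicktime.std.StdQTConstants;
import quicktime.std.movies.Movie;
import quicktime.std.movies.media.DataRef;

public class OpenRemoteMovie {
    public static void main (String[] args) throws Exception {
        QTSession.open();
        try {
            // QuickTime chooses the data handler (file, HTTP, or FTP)
            // from the URL's scheme. This address is a placeholder.
            DataRef urlRef = new DataRef ("http://www.example.com/movies/sample.mov");
            Movie movie = Movie.fromDataRef (urlRef, StdQTConstants.newMovieActive);
            System.out.println ("Duration: " + movie.getDuration());
        } finally {
            QTSession.close();
        }
    }
}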
QuickTime streaming also adds support for broadcast media data, using the RTSP protocol. Media data in this format is delivered over network protocols, typically as a broadcast of either live or stored video, and QuickTime creates streaming data handlers to deal with the mechanics of retrieving it.

The tracks in a single movie can have data in different locations. For example, one track's media data might be contained in the movie itself; a second track's media data might exist in another local file; and a third track's media data might exist in a remote file and be retrieved through the FTP protocol. In the latter two cases, the movie references media data that is stored elsewhere. A fourth track might retrieve broadcast media data over the network using the RTSP protocol. Your application is free to mix and match these data references for each track's media data as appropriate. This flexibility makes QuickTime a powerful tool for both the development and delivery of media content.
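Opening a broadcast stream follows the same pattern as the sketch above; the rtsp:// scheme in the URL is what causes QuickTime to instantiate the streaming data handler. The address is again a placeholder:

// Same imports as the previous example.
DataRef rtspRef = new DataRef ("rtsp://streaming.example.com/live.sdp");
Movie broadcast = Movie.fromDataRef (rtspRef, StdQTConstants.newMovieActive);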
Dealing With Media
Despite the complexity of QuickTime's data retrieval semantics, the process of dealing with media is quite simple. In the CreateMovie sample code, the movie's data is constructed by inserting the media sample data and its description into the media object, and then inserting the media into the track, as sketched below.
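The following sketch shows the shape of that construction step. It assumes the QTJ Media.addSample and Track.insertMedia signatures that mirror the C calls AddMediaSample and InsertMediaIntoTrack; frameData and imgDesc stand in for the compressed frame and its ImageDescription produced earlier in the sample:

import quicktime.QTException;
import quicktime.std.image.ImageDescription;
import quicktime.std.movies.Movie;
import quicktime.std.movies.Track;
import quicktime.std.movies.media.VideoMedia;
import quicktime.util.QTHandle;

static void addFrameToMovie (Movie movie, QTHandle frameData, ImageDescription imgDesc)
        throws QTException {
    Track videoTrack = movie.addTrack (330, 140, 0);          // width, height, volume
    VideoMedia videoMedia = new VideoMedia (videoTrack, 600); // 600 units per second
    videoMedia.beginEdits();
    videoMedia.addSample (frameData,          // the compressed sample data
            0,                                // offset into that data
            frameData.getSize(),              // size of the sample in bytes
            20,                               // duration per sample (20/600 sec)
            imgDesc,                          // describes this sample's format
            1,                                // number of samples
            0);                               // flags (0 marks a sync sample)
    videoMedia.endEdits();
    // Insert all of the media, at its natural rate, at the start of the track.
    videoTrack.insertMedia (0, 0, videoMedia.getDuration(), 1);
}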
The following is a printout of the ImageDescription for the last frame of the image data that is added to the movie. As you can see, the ImageDescription describes the format, the size, and the dimensions of the image itself:
quicktime.std.image.ImageDescription[cType=rle, temporalQuality=512, spatialQuality=512, width=330, height=140, dataSize=0, frameCount=1, name=Animation, depth=32]
The following is the SoundDescription of the sound track added in this sample code. It describes the format, number of channels, sample size, and sample rate:
quicktime.std.movies.media.SoundDescription[format=twos, numChannels=1, sampleSize=8, sampleRate=22050.0]
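A description like this can also be built directly. The sketch below reconstructs the SoundDescription printed above; the setter names are assumed from the QTJ 1.x API and should be verified against the javadoc, and QTUtils.toOSType converts the four-character 'twos' (twos-complement) format code:

import quicktime.std.movies.media.SoundDescription;
import quicktime.util.QTUtils;

SoundDescription sndDesc = new SoundDescription (QTUtils.toOSType ("twos"));
sndDesc.setNumberOfChannels (1);   // mono
sndDesc.setSampleSize (8);         // 8-bit samples
sndDesc.setSampleRate (22050);     // 22.05 kHz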
If the data is not local to the movie itself, one inserts a sample description (as before, to describe the data) along with a DataRef that tells QuickTime both where the data is located and how to retrieve it when required. Once assembled, QuickTime handles all of the mechanics of retrieving and displaying the media at runtime.
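A sketch of that arrangement follows, assuming the Media constructor that accepts a DataRef and an addSampleReference method mirroring the C call AddMediaSampleReference; the URL, sizes, and durations are illustrative:

import quicktime.QTException;
import quicktime.std.image.ImageDescription;
import quicktime.std.movies.Track;
import quicktime.std.movies.media.DataRef;
import quicktime.std.movies.media.VideoMedia;

static void referenceRemoteSample (Track videoTrack, ImageDescription imgDesc, int frameSize)
        throws QTException {
    // The media is created against a DataRef naming the remote file...
    DataRef remoteRef = new DataRef ("ftp://ftp.example.com/media/frames.dat");
    VideoMedia refMedia = new VideoMedia (videoTrack, 600, remoteRef);
    refMedia.beginEdits();
    // ...and each sample is recorded as a reference (an offset and size
    // within that file) rather than as a copy of the data itself.
    refMedia.addSampleReference (0, frameSize, 20, imgDesc, 1, 0);
    refMedia.endEdits();
}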
When media is displayed, the media handlers use the various rendering services of QuickTime. These rendering services are based on the same data model, in that the sample description is used both to describe the media and to instantiate the appropriate component responsible for rendering data of that specific format. These rendering components are also available to the application itself, outside of movie playback.
For example, the DSequence object renders image data to a destination QDGraphics and is used throughout QuickTime to render visual media. Your application can also use the DSequence object directly by providing the image data, an ImageDescription that describes the format and other characteristics of that data, and, of course, a destination QDGraphics where the data should be drawn. Like QuickTime itself, your application can supply matrix transformations, graphics modes, and clipping regions to be applied in this rendering process.
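A sketch of direct DSequence use follows. The constructor arguments shown mirror the C call DecompressSequenceBegin, and decompressFrameS mirrors DecompressSequenceFrameS; the exact QTJ signatures should be verified against the javadoc. Here encodedFrame stands in for compressed image data your application already holds:

import quicktime.QTException;
import quicktime.qd.QDGraphics;
import quicktime.qd.QDRect;
import quicktime.std.StdQTConstants;
import quicktime.std.image.CodecComponent;
import quicktime.std.image.DSequence;
import quicktime.std.image.ImageDescription;
import quicktime.util.RawEncodedImage;

static void drawFrame (ImageDescription imgDesc, RawEncodedImage encodedFrame)
        throws QTException {
    QDRect bounds = new QDRect (imgDesc.getWidth(), imgDesc.getHeight());
    QDGraphics gWorld = new QDGraphics (bounds);   // offscreen destination
    DSequence seq = new DSequence (imgDesc,        // describes the compressed data
            encodedFrame,                          // the compressed image itself
            gWorld,                                // destination QDGraphics
            bounds,                                // source rectangle
            null,                                  // matrix (null means identity)
            null,                                  // clip region (null means none)
            0,                                     // flags
            StdQTConstants.codecNormalQuality,     // decompression accuracy
            CodecComponent.anyCodec);              // let QuickTime choose the codec
    seq.decompressFrameS (encodedFrame, 0);        // render the frame to gWorld
}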
Similar processes are used and available for all of the other media types (sound, music, and so on) that QuickTime supports, with each media type providing its own extended sample descriptions, media handlers, and rendering services.