The invention relates generally to computer systems, and more particularly to the processing of graphical and other video information for display on computer systems.
The limits of the traditional immediate mode model of accessing graphics on computer systems are being reached, in part because memory and bus speeds have not kept up with the advancements in main processors and/or graphics processors. In general, the current (e.g., WM_PAINT) model for preparing a frame requires too much data processing to keep up with the hardware refresh rate when complex graphics effects are desired. As a result, when complex graphics effects are attempted with conventional graphics models, instead of completing the changes that result in the perceived visual effects in time for the next frame, the changes may be added over different frames, causing results that are visually noticeable and undesirable.
A new model for controlling graphics output is described in U.S. patent application Ser. Nos. 10/184,795, 10/184,796, 10/185,775, 10/401,717, 10/402,322 and 10/402,268, assigned to the assignee of the present invention and hereby incorporated by reference. This new model provides a number of significant improvements in graphics processing technology. For example, U.S. Ser. No. 10/184,795 is generally directed towards a multiple-level graphics processing system and method, in which a higher-level component (e.g., of an operating system) performs computationally intensive aspects of building a scene graph, updating animation parameters and traversing the scene graph's data structures, at a relatively low operating rate, in order to pass simplified data structures and/or graphics commands to a low-level component. Because the high-level processing greatly simplifies the data, the low-level component can operate at a faster rate (relative to the high-level component), such as a rate that corresponds to the frame refresh rate of the graphics subsystem, to process the data into constant output data for the graphics subsystem. When animation is used, instead of having to redraw an entire scene with changes, the low-level processing may interpolate parameter intervals as necessary to obtain instantaneous values that when rendered provide a slightly changed scene for each frame, providing smooth animation.
U.S. Ser. No. 10/184,796 describes a parameterized scene graph that provides mutable (animated) values and parameterized graph containers such that program code that wants to draw graphics (e.g., an application program or operating system component) can selectively change certain aspects of the scene graph description, while leaving other aspects intact. The program code can also reuse already-built portions of the scene graph, with possibly different parameters. As can be appreciated, the ability to easily change the appearance of displayed items via parameterization and/or the reuse of existing parts of a scene graph provide substantial gains in overall graphics processing efficiency.
U.S. Ser. No. 10/185,775 generally describes a caching data structure and related mechanisms for storing visual information via objects and data in a scene graph. The data structure is generally associated with mechanisms that intelligently control how the visual information therein is populated and used. For example, unless specifically requested by the application program, most of the information stored in the data structure has no external reference to it, which enables this information to be optimized or otherwise processed. As can be appreciated, this provides efficiency and conservation of resources, e.g., the data in the cache data structure can be processed into a different format that is more compact and/or reduces the need for subsequent, repeated processing, such as a bitmap or other post-processing result.
While the above improvements provide substantial benefits in graphics processing technology, there still needs to be a way for programs to effectively use this improved graphics model and its other related improvements in a straightforward manner. What is needed is a comprehensive yet straightforward model for programs to take advantage of the many features and graphics processing capabilities provided by the improved graphics model and thereby output complex graphics and audiovisual data in an efficient manner.
Briefly, the present invention provides an object model, and an application programming interface (API) for accessing that object model in a manner that allows program code developers to consistently interface with a scene graph data structure to produce graphics. A base object in the model and API set is a visual, which represents a virtual surface to the user; the scene graph is built of visual objects. Such Visuals include container visual objects, retained visual objects, drawing visual objects and other visual objects. Visuals themselves can hold onto resource objects, such as clip objects, transform objects and so forth, and some types of Visuals (e.g., DrawingVisual, RetainedVisual) can hold on to drawing instruction lists that may reference resource objects, such as images, brushes and/or gradients.
Most resource objects in the scene graph are immutable once created, that is, once they are created they cannot be changed. For those objects that a developer wants to easily change, mutability is provided by a changeables pattern and implementation, as described in copending U.S. patent application entitled “Changeable Class and Pattern to Provide Selective Mutability in Computer Programming Environments” filed concurrently herewith, assigned to the assignee of the present invention and herein incorporated by reference.
Via the application programming interfaces, program code writes drawing primitives such as geometry data, image data, animation data and other data to the visuals. For example, program code writes drawing primitives such as draw line instructions, draw geometry instructions, draw bitmap instructions and so forth into the visuals. Those drawing instructions are often combined with complex data like geometry data that describes how a path is drawn, and they also may reference resources like bitmaps, videos, and so forth.
The code can also specify clipping, opacity and other properties on visuals, and methods for pushing and popping transform, opacity and hit test identification are provided. In addition, the visual may participate in hit testing. The program code also interfaces with the visuals to add child visuals, and thereby build up a hierarchical scene graph. A visual manager processes (e.g., traverses or transmits) the scene graph to provide rich graphics data to lower-level graphics components.
Container visuals provide for a collection of children visuals and in one implementation, are the only visuals that can define hierarchy. The collection of children on a container visual allows for arbitrary insertion, removal and reordering of children visuals.
Drawing visuals are opened with an open call that returns a drawing context (e.g., a reference to a drawing context object) to the caller. In general, a drawing context is a temporary helper object that is used to populate a visual. The program code then uses the drawing context to add drawing primitives to the visual. The open call may clear the contents (children) of a visual, or an append call may be used to open a visual for appending to that current visual. In addition to receiving static values as drawing parameters, drawing contexts can be filled with animation objects.
A retained visual operates in a similar manner to a drawing visual, except that its drawing context is filled when the system requests that it be filled, instead of when the program code wants to fill it. For example, if a particular Visual's content will be needed in rendering a scene, the system will call IRetainedVisual.Render to fill the content of the Visual, replacing any content already in memory.
Thus, different types of primitives may be drawn into a visual using a drawing context, including geometry, image data and video data. Geometry is a type of class that defines a vector graphics skeleton without stroke or fill, e.g., a rectangle. Each geometry object corresponds to a simple shape (LineGeometry, EllipseGeometry, RectangleGeometry), a complex single shape (PathGeometry), or a list of such shapes (GeometryList) with a combine operation specified (e.g., union, intersection, and so forth). These objects form a class hierarchy. There are also shortcuts for drawing frequently used types of geometry, such as a DrawRectangle method.
When geometry is drawn, a brush or pen may be specified. A brush object defines how to graphically fill a plane, and there is a class hierarchy of brush objects. A pen also has a brush specified on it describing how to fill the stroked area. A special type of brush object (the VisualBrush) can reference a visual to define how that brush is to be drawn. A drawing brush makes it possible to fill a shape or control with combinations of other shapes and brushes.
Other benefits and advantages will become apparent from the following detailed description when taken in conjunction with the drawings.
Exemplary Operating Environment
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to the figures, an exemplary system for implementing the invention includes a general-purpose computing device in the form of a computer 110.
The computer 110 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 110 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 110. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120.
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media, such as a hard disk drive, a magnetic disk drive that reads from or writes to a removable magnetic disk, and an optical disk drive that reads from or writes to a removable optical disk.
The drives and their associated computer storage media, discussed above, provide storage of computer-readable instructions, data structures, program modules and other data for the computer 110.
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated. The logical connections depicted include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160 or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, remote application programs may reside on the memory storage device 181.
Interfaces to Scene Graph Data Structures
One aspect of the present invention is generally directed to allowing program code, such as an application or operating system component, to communicate drawing instructions and other information (e.g., image bitmaps) to graphics components in order to render graphical output on the system display. To this end, the present invention provides a number of defined functions and methods, e.g., in the form of application programming interfaces (APIs) to an object model, that enable programs to populate a scene graph with data structures, drawing primitives (commands), and other graphics-related data. When processed, the scene graph results in graphics being displayed on the screen.
In one implementation, the graphics layer architecture 200 includes a high-level composition and animation engine 214, which includes or is otherwise associated with a caching data structure 216. The caching data structure 216 contains a scene graph comprising hierarchically-arranged objects that are managed according to a defined object model, as described below. In general, the visual API layer 212 provides the program code 202 (and the layout system 210) with an interface to the caching data structure 216, including the ability to create objects, open and close objects to provide data to them, and so forth. In other words, the high-level composition and animation engine 214 exposes a unified media API layer 212 by which developers may express intentions about graphics and media to display graphics information, and provide an underlying platform with enough information such that the platform can optimize the use of the hardware for the program code. For example, the underlying platform will be responsible for caching, resource negotiation and media integration.
In one implementation, the high-level composition and animation engine 214 passes an instruction stream and possibly other data (e.g., pointers to bitmaps) to a fast, low-level compositing and animation engine 218. As used herein, the terms “high-level” and “low-level” are similar to those used in other computing scenarios, wherein in general, the lower a software component is relative to higher components, the closer that component is to the hardware. Thus, for example, graphics information sent from the high-level composition and animation engine 214 may be received at the low-level compositing and animation engine 218, where the information is used to send graphics data to the graphics subsystem including the hardware 222.
The high-level composition and animation engine 214 in conjunction with the program code 202 builds a scene graph to represent a graphics scene provided by the program code 202. For example, each item to be drawn may be loaded with drawing instructions, which the system can cache in the scene graph data structure 216. As will be described below, there are various ways to specify this data structure 216, and what is drawn. Further, the high-level composition and animation engine 214 integrates with timing and animation systems 220 to provide declarative (or other) animation control (e.g., animation intervals) and timing control. Note that the animation system allows animated values to be passed essentially anywhere in the system, including, for example, at the element property level 208, inside of the visual API layer 212, and in any of the other resources. The timing system is exposed at the element and visual levels.
The low-level compositing and animation engine 218 manages the composing, animating and rendering of the scene, which is then provided to the graphics subsystem 222. The low-level engine 218 composes the renderings for the scenes of multiple applications, and with rendering components, implements the actual rendering of graphics to the screen. Note, however, that at times it may be necessary and/or advantageous for some of the rendering to happen at higher levels. For example, while the lower layers service requests from multiple applications, the higher layers are instantiated on a per-application basis, whereby it is possible via the imaging mechanisms 204 to perform time-consuming or application-specific rendering at higher levels, and pass references to a bitmap to the lower layers.
In accordance with an aspect of the present invention, a Visual application programming interface (API) provides a set of objects that define the visual tree. The visual tree represents a data structure that can be rendered by the graphics system to a medium (a screen, printer or surface). When rendered, the data in the visual tree is the “scene” a viewer sees. Visual objects (or simply Visuals) contain and manage the other graphical objects that make up a drawn scene, like geometries, primitives, brushes, color gradients, and animations.
Although the present invention also provides access to drawing and rendering services at a higher level of abstraction using a more familiar object and property system, and provides vector graphics objects at the markup level through a markup language (code-named “XAML”), the Visual API will be of most interest to developers who want greater control over drawing functionality than they can easily achieve using the property system or markup.
A DrawingVisual is a Visual that can contain graphical content. This Visual exposes a number of drawing methods. The child objects of a DrawingVisual are organized in a zero-based, z-order space. A RetainedVisual is a Visual that introduces a “retained instruction stream” that can be used for drawing. In simpler terms, the RetainedVisual allows the developer to retain the visual's content and redraw it only when necessary. It is possible to use the RetainedVisual imperatively, like a DrawingVisual, by calling RenderOpen and using the returned DrawingContext for drawing. The RetainedVisual provides validation callback functionality and an InvalidateVisual method. To use validation functionality, the user implements the IRetainedRender interface on the RetainedVisual or a class that derives from it.
A typical application might draw graphics by defining a layout in “XAML” as described in the aforementioned U.S. patent application Ser. No. 10/401,717, and also by specifying some drawing operations in C#. Developers may create Shape elements, or draw geometries using the Geometry classes with primitives. In the following scenario, the code demonstrates drawing an ellipse in the Visual that underlies the Canvas:
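By way of a non-limiting sketch (the original listing is not reproduced here), such a scenario might take the following form in C#, in which a Canvas layout element is given an Ellipse shape element; the sizes and the brush are illustrative assumptions:

// Sketch: create a Canvas element and add an Ellipse shape element to it.
// Assumes the System.Windows.Controls, System.Windows.Shapes and
// System.Windows.Media namespaces.
Canvas canvas = new Canvas();
Ellipse ellipse = new Ellipse();
ellipse.Width = 100;
ellipse.Height = 50;
ellipse.Fill = Brushes.Blue;
canvas.Children.Add(ellipse);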
Using the Visual API, developers can instead draw directly into the Visual (that would otherwise be accessed via the layout element).
To render the content of a DrawingVisual object, an application typically calls the RenderOpen method on the DrawingVisual. RenderOpen returns a DrawingContext with which the application can perform drawing operations. To clear the Visual's contents, the application calls Close on the DrawingContext. After the application calls Close, the DrawingContext can no longer be used.
The following code draws an ellipse (the same ellipse as in the previous example) into a DrawingVisual, using a Geometry object rather than the Ellipse shape. The example creates a DrawingVisual, gets the DrawingVisual's DrawingContext, and calls the DrawingContext's DrawGeometry method to draw the ellipse. Note that you must add the Visual to the visual tree of the top-level object, which in this case is the window.
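A minimal sketch of such code follows; the ellipse's center and radii are illustrative assumptions:

// Create a DrawingVisual, obtain its DrawingContext via RenderOpen,
// and draw an ellipse with DrawGeometry; Close releases the context.
DrawingVisual drawingVisual = new DrawingVisual();
DrawingContext dc = drawingVisual.RenderOpen();
dc.DrawGeometry(
    Brushes.Blue,                                    // fill brush
    null,                                            // no pen, i.e., no stroke
    new EllipseGeometry(new Point(50, 50), 50, 25)); // center, radiusX, radiusY
dc.Close(); // after Close, the DrawingContext can no longer be used
// The drawingVisual must still be added to the visual tree of the
// top-level object (here, the window) before it is rendered.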
The following example further builds on the previous example by adding similar ellipses to a ContainerVisual (note that this example is verbose for clarity). Using ContainerVisual can help organize scene objects and allow the developer to segregate Visual objects on which to perform hit-testing or validation (RetainedVisual objects) from ordinary drawn content, and minimize unnecessary redrawing of content.
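A sketch of such an example follows, assuming the Children collection on ContainerVisual described above and a hypothetical helper that draws one ellipse into a new DrawingVisual:

// Sketch: group two ellipse visuals under a ContainerVisual so that they
// can be inserted, removed and reordered as a unit.
ContainerVisual container = new ContainerVisual();
container.Children.Add(CreateEllipseVisual(new Point(50, 50)));
container.Children.Add(CreateEllipseVisual(new Point(150, 50)));

// Hypothetical helper: draws a single ellipse into a new DrawingVisual.
DrawingVisual CreateEllipseVisual(Point center)
{
    DrawingVisual v = new DrawingVisual();
    DrawingContext dc = v.RenderOpen();
    dc.DrawGeometry(Brushes.Blue, null, new EllipseGeometry(center, 50, 25));
    dc.Close();
    return v;
}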
A RetainedVisual is similar to a DrawingVisual, but allows for selective redrawing of visual content. As its name suggests, the RetainedVisual can retain content for multiple appearances on the medium. It also provides callback and validation functionality. This functionality can help with rendering performance by offering the developer greater control over re-rendering of content.
At a basic level, the user can create and use a RetainedVisual much like a DrawingVisual; that is, the user can call RenderOpen and get a DrawingContext. Alternatively, the user can implement the IRetainedRender interface on a RetainedVisual. By doing so, users ensure that the graphics system will use the value set in the RenderBounds property as the bounds for the content to be rendered at the IRetainedVisual.Render call.
When rendering the scene, the graphics system will examine any child Visual. If the value of the RenderBounds property indicates that a particular Visual's content will be needed in rendering a scene, the system will call IRetainedVisual.Render to fill the content of the Visual, replacing any content already in memory. The application can also call InvalidateVisual directly to flush content from a Visual. If the application has not implemented IRetainedRender on the RetainedVisual, any call to InvalidateVisual will throw an exception.
The following code instantiates a class that implements IRetainedRender on a RetainedVisual and draws into it.
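A sketch of such a class appears below; the exact shapes of the IRetainedRender.Render method and the RenderBounds property are assumptions based on the description above:

// Sketch: a RetainedVisual subclass implementing IRetainedRender. The
// system calls Render only when the content is actually needed,
// replacing any content already in memory.
public class EllipseRetainedVisual : RetainedVisual, IRetainedRender
{
    public EllipseRetainedVisual()
    {
        // Assumed: RenderBounds tells the graphics system the region
        // that this visual's content occupies.
        RenderBounds = new Rect(0, 0, 100, 100);
    }

    public void Render(DrawingContext dc)
    {
        dc.DrawGeometry(Brushes.Blue, null,
            new EllipseGeometry(new Point(50, 50), 50, 25));
    }
}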
The Visual API, like the rest of the graphics system of the present invention, is a managed API and makes use of typical features of managed code, including strong typing and garbage collection. It also takes advantage of the hardware acceleration capability for rendering. To accommodate developers working with existing unmanaged applications, the Visual API provides limited interoperability between the present graphics system and Microsoft Windows® Graphics Device Interface (GDI)-based rendering services.
This interoperability allows developers to host GDI-based windows in Visual-aware applications using the Hwnd Visual object, to write controls and theming that are based on the present invention's drawing and rendering but still work in legacy GDI applications, and to modify GDI HWND-based applications to take advantage of the new rendering features, including hardware acceleration and the color model.
The HwndVisual enables hosting of Win32 content in a Visual-aware application.
As with other objects, you can apply transforms and other property changes to the control once hosted in a Visual.
To draw, the Visual manager 304 processes (e.g., traverses or transmits) the scene graph as scheduled by a dispatcher 308, and provides graphics instructions and other data to the low level component 218 (
Visuals offer services by providing clip, opacity and possibly other properties that can be set and/or read via a get method. In addition, the visual has flags controlling how it participates in hit testing. A Show property is used to show/hide the visual; e.g., when false the visual is invisible, otherwise the visual is visible. Furthermore, these objects (whether Visuals at the Visual API layer or elements at the element layer) exist in a hierarchy. A coordinate system is inherited down through this hierarchy. In this way, a parent can push a coordinate transform that modifies the rendering pass and gets applied to that parent's children.
The transform for a visual is on the connection to that visual. In other words, it is set via the [Get|Set]ChildTransform on the parent's IVisual interface.
Note that the coordinate transforms may be applied in a uniform way to everything, as if it were in a bitmap. Note that this does not mean that transformations always apply to bitmaps, but that what gets rendered is affected by transforms equally. By way of example, if the user draws a circle with a round pen that is one inch wide and then applies a scale in the X direction of two to that circle, the pen will be two inches wide at the left and right and only one inch wide at the top and bottom. This is sometimes referred to as a compositing or bitmap transform (as opposed to a skeleton or geometry scale that affects the geometry only).
With respect to coordinate transformation of a visual, TransformToDescendant transforms a point from the reference visual to a descendant visual. The point is transformed from the post-transformation coordinate space of the reference visual to the post-transformation coordinate space of the descendant visual. TransformFromDescendant transforms a point from the descendant visual up the parent chain to the reference visual. The point is transformed from post-transformation coordinate space of the descendant visual to post-transformation coordinate space of the reference visual. A user may get a Matrix to and from a descendant and from and to any arbitrary visual. Two properties are available that may be used to determine the bounding box of the content of the Visual, namely DescendantBounds, which is the bounding box of the descendants, and ContentBounds which is the bounds of the content. Applying a Union to these provides the total bounds.
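For example (a sketch; the description above places these members on the visual, so the exact return types are assumptions):

// Transform a point from the reference visual's space into a descendant's
// space, and compute the total bounds as the union of the content bounds
// and the descendant bounds.
Point local = new Point(10, 10);
Point inDescendant = visual.TransformToDescendant(descendant).Transform(local);
Rect totalBounds = Rect.Union(visual.ContentBounds, visual.DescendantBounds);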
The clip property sets (and gets) the clipping region of a visual. Any Geometry can be used as a clipping region.
The Opacity property gets/sets the opacity value of the visual, such that the content of the visual is blended on the drawing surface based on the opacity value and the selected blending mode. The BlendMode property can be used to set (or get) the blending mode that is used. For example, an opacity (alpha) value may be set between 0.0 and 1.0, with linear alpha blending set as the mode, e.g., color = alpha*foreground color + (1.0-alpha)*background color. Other services, such as special effects properties, may be included in a visual, e.g., blur, monochrome, and so on.
The various services (including transform, opacity, and clip) can be pushed and popped on a drawing context, and push/pop operations can be nested, as long as there is an appropriate pop call for each push call.
The PushTransform method pushes a transformation. Subsequent drawing operations are executed with respect to the pushed transformation. The pop call pops the transformation pushed by the matching PushTransform call:
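For example (a sketch; dc is a DrawingContext obtained as described above):

dc.PushTransform(new TranslateTransform(100, 0));
// Drawing here is offset 100 units to the right.
dc.DrawGeometry(Brushes.Blue, null, new EllipseGeometry(new Point(50, 50), 50, 25));
dc.Pop(); // removes the translation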
Similarly, the PushOpacity method pushes an opacity value. Subsequent drawing operations are rendered on a temporary surface with the specified opacity value and then composite into the scene. Pop( ) pops the opacity pushed by the matching PushOpacity call:
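For example (a sketch):

dc.PushOpacity(0.5);
// Drawing here is rendered on a temporary surface at 50 percent opacity
// and then composited into the scene.
dc.DrawGeometry(Brushes.Blue, null, new EllipseGeometry(new Point(50, 50), 50, 25));
dc.Pop(); // removes the opacity value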
The PushClip method pushes a clipping geometry. Subsequent drawing operations are clipped to the geometry. The clipping is applied in post transformation space. Pop( ) pops the clipping region pushed by the matching PushClip call:
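For example (a sketch):

dc.PushClip(new RectangleGeometry(new Rect(0, 0, 80, 80)));
// Drawing here is clipped to the rectangle, in post-transformation space.
dc.DrawGeometry(Brushes.Blue, null, new EllipseGeometry(new Point(50, 50), 50, 25));
dc.Pop(); // removes the clipping region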
Note that push operations can be arbitrarily nested as long as the pop operations are matched with a push. For example, the following is valid:
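A sketch of such nesting:

dc.PushTransform(new TranslateTransform(100, 0));
dc.PushOpacity(0.5);
dc.PushClip(new RectangleGeometry(new Rect(0, 0, 80, 80)));
dc.DrawGeometry(Brushes.Blue, null, new EllipseGeometry(new Point(50, 50), 50, 25));
dc.Pop(); // pops the clip
dc.Pop(); // pops the opacity
dc.Pop(); // pops the transform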
A ProxyVisual is a visual that may be added more than once into the scene graph, e.g., below a container visual. Since any visual referred to by a ProxyVisual may be reached by multiple paths from the root, read services (TransformToDescendant, TransformFromDescendant and HitTest) do not work through a ProxyVisual. In essence, there is one canonical path from any visual to the root of the visual tree and that path does not include any ProxyVisuals.
As described above, visuals can be drawn on by populating their drawing contexts with various drawing primitives, including Geometry, ImageSource and MediaData. Furthermore, there is a set of resources and classes that are shared through this entire stack. This includes Pens, Brushes, Geometry, Transforms and Effects. The DrawingContext abstract class exposes a set of drawing operations that can be used to populate a DrawingVisual, ValidationVisual, ImageData, etc. For each drawing operation there are two methods, one that takes constants as arguments, and one that takes animators as arguments.
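For example, a line-drawing operation might be exposed in the following two forms (a sketch; the animator overload's parameter shape is an assumption based on this description):

public abstract class DrawingContext
{
    // Constant form: takes fixed values as arguments.
    public abstract void DrawLine(Pen pen, Point point0, Point point1);

    // Animator form: also takes animation collections, whose
    // instantaneous values are evaluated for each rendered frame.
    public abstract void DrawLine(Pen pen,
        Point point0, PointAnimationCollection point0Animations,
        Point point1, PointAnimationCollection point1Animations);
}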
Geometry is a type of class that defines a vector graphics skeleton, without stroke or fill.
The graphics object model of the present invention includes a Brush object model, which is generally directed towards the concept of covering a plane with pixels. Examples of types of brushes are represented in the hierarchy of FIG. 13, and, under a Brush base class, include GradientBrush, NineGridBrush, SolidColorBrush and TileBrush. GradientBrush includes LinearGradient and RadialGradient objects. DrawingBrush and ImageBrush derive from TileBrush. Alternative arrangements of the classes are feasible, e.g., deriving from TileBrush may be ImageBrush, VisualBrush, VideoBrush, NineGridBrush and DrawingBrush. Note that Brush objects may recognize how they relate to the coordinate system when they are used, and/or how they relate to the bounding box of the shape on which they are used. In general, information such as size may be inferred from the object on which the brush is drawn. More particularly, many of the brush types use a coordinate system for specifying some of their parameters. This coordinate system can either be defined as relative to the simple bounding box of the shape to which the brush is applied, or it can be relative to the coordinate space that is active at the time that the brush is used. These are known, respectively, as RelativeToBoundingBox mode and Absolute mode.
A SolidColorBrush object fills the identified plane with a solid color. If there is an alpha component of the color, it is combined in a multiplicative way with the corresponding opacity attribute in the Brush base class. The following sets forth an example SolidColorBrush object:
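A minimal sketch (the color and opacity values are illustrative):

// A red brush at fifty percent opacity; any alpha in the color is combined
// multiplicatively with the Opacity property of the Brush base class.
SolidColorBrush brush = new SolidColorBrush(Colors.Red);
brush.Opacity = 0.5;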
The GradientBrush objects, or simply gradients, provide a gradient fill, and are drawn by specifying a set of gradient stops, which specify the colors along some sort of progression. The gradient is drawn by performing linear interpolations between the gradient stops in a gamma 2.2 RGB color space; interpolation through other gammas or other color spaces (HSB, CMYK and so forth) is also a feasible alternative. Two types of gradient objects include linear and radial gradients.
In general, gradients are composed of a list of gradient stops. Each of these gradient stops contains a color (with the included alpha value) and an offset. If there are no gradient stops specified, the brush is drawn as a solid transparent black, as if there were no brush specified at all. If there is only one gradient stop specified, the brush is drawn as a solid color with the one color specified. Like other resource classes, the gradient stop class (an example appears below) derives from the Changeable class and thus is selectively mutable, as described in the U.S. patent application entitled “Changeable Class and Pattern to Provide Selective Mutability in Computer Programming Environments.”
The gradient is drawn by performing interpolations between the gradient stops in the specified color space. Gradient stops with offsets in the range of zero to one (0.0 . . . 1.0) are considered, along with the largest stop in the range (−∞ . . . 0.0] and the smallest stop in the range [1.0 . . . +∞). If the set of stops being considered includes a stop which is outside of the range zero to one, an implicit stop is derived at zero (and/or one) which represents the interpolated color which would occur at that stop. Also, if two or more stops are set at the same offset, a hard transition (rather than an interpolation) occurs at that offset. The order in which stops are added determines the behavior at that offset: the first stop to be added is the effective color before the offset, the last stop to be added is the effective color after the offset, and any additional stops at that offset are ignored.
The GradientStop class is a Changeable like other resource classes:
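A sketch of the class follows; the member shapes are assumptions based on the description (a color, an offset, and nested animation collections):

// Sketch of the GradientStop resource class.
public class GradientStop : Changeable
{
    public GradientStop() { }
    public GradientStop(Color color, double offset)
    {
        Color = color;
        Offset = offset;
    }

    public Color Color { get; set; }    // includes the alpha value
    public double Offset { get; set; }  // position along the gradient progression

    // Assumed nested Changeables, per the description that follows:
    public ColorAnimationCollection ColorAnimations { get; set; }
    public DoubleAnimationCollection OffsetAnimations { get; set; }
}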
Like SolidColorBrush, this has nested Changeables in the animation collections.
The GradientSpreadMethod enum specifies how the gradient should be drawn outside of the specified vector or space. There are three possible values, including Pad, in which the end colors (first and last) are used to fill the remaining space, Reflect, in which the stops are replayed in reverse order repeatedly to fill the space, and Repeat, in which the stops are repeated in order until the space is filled. The default value for properties of this type is Pad:
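A sketch of the enum:

public enum GradientSpreadMethod
{
    Pad,     // the first and last colors fill the remaining space
    Reflect, // the stops are replayed in reverse order repeatedly
    Repeat   // the stops are repeated in order until the space is filled
}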
In general, a LinearGradientBrush is used to fill an area with a linear gradient. A linear gradient defines a gradient along a line. The line's end point is defined by the linear gradient's StartPoint and EndPoint properties. By default, the StartPoint of a linear gradient is (0,0), the upper-left corner of the area being filled, and its EndPoint is (1,1), the bottom-right corner of the area being filled.
The ColorInterpolationMode enum defines the interpolation mode for colors within a gradient. The two options are PhysicallyLinearGamma10 and PerceptuallyLinearGamma22.
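A sketch of the enum:

public enum ColorInterpolationMode
{
    PhysicallyLinearGamma10,
    PerceptuallyLinearGamma22
}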
GradientBrush is an abstract base class.
As described above in the Changeables section, GradientBrush is a complex type with respect to Changeables, because its GradientStops property itself holds Changeables. That means that GradientBrush needs to implement the protected methods MakeUnchangeableCore() and PropagateEventHandler(), as well as CloneCore(), that Changeable subclasses implement. It may also choose to implement ValidateObjectState() if there are invalid combinations of GradientStops that make up the collection, for instance.
The LinearGradient specifies a linear gradient brush along a vector. The individual stops specify color stops along that vector.
The markup for LinearGradient allows specification of a LinearGradient with two color stops, at offsets zero and one. If the “LinearGradient” version is used, the start point and end point are specified, respectively. If “HorizontalGradient” is used, the start point is set to 0,0 and the end point is set to 1,0. If “VerticalGradient” is used, the start point is set to 0,0 and the end point is set to 0,1. In these cases, the default MappingMode is used, which is RelativeToBoundingBox. The RadialGradient is similar in programming model to the linear gradient. However, whereas the linear gradient has a start and end point to define the gradient vector, the radial gradient has a circle along with a focal point to define the gradient behavior. The circle defines the end point of the gradient; in other words, a gradient stop at 1.0 defines the color at the circle's circumference. The focal point defines the center of the gradient. A gradient stop at 0.0 defines the color at the focal point.
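In code, the defaults described above might be expressed as follows (a sketch; the four-argument constructor is an assumption):

// "HorizontalGradient" equivalent: stops at offsets zero and one, start
// point (0,0) and end point (1,0), relative to the bounding box.
LinearGradientBrush horizontal = new LinearGradientBrush(
    Colors.Red, Colors.Blue, new Point(0, 0), new Point(1, 0));

// "VerticalGradient" equivalent: start point (0,0) and end point (0,1).
LinearGradientBrush vertical = new LinearGradientBrush(
    Colors.Red, Colors.Blue, new Point(0, 0), new Point(0, 1));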
The markup for RadialGradient allows specification of a RadialGradient with two color stops, at offsets 0 and 1 respectively. The default MappingMode is used, which is RelativeToBoundingBox, as are the default radii, 0.5:
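A sketch of the equivalent code (the two-color constructor is an assumption):

// Two color stops, at offsets 0.0 (the focal point) and 1.0 (the circle's
// circumference), with the default radii of 0.5 relative to the bounding box.
RadialGradientBrush radial = new RadialGradientBrush(Colors.Red, Colors.Blue);
radial.RadiusX = 0.5;
radial.RadiusY = 0.5;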
The TileBrush is an abstract base class which contains logic to describe a tile and a means by which that tile should fill an area. Subclasses of TileBrush contain content, and logically define a way to fill an infinite plane.
The Stretch enum is used to describe how a ViewBox (source coordinate space) is mapped to a ViewPort (destination coordinate space). This is used in TileBrush:
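A sketch of the enum, with the semantics described later in this section:

public enum Stretch
{
    None,          // no scaling is applied to the contents
    Fill,          // scale X and Y independently to fill the ViewPort
    Uniform,       // scale uniformly so the ViewBox fits within the ViewPort
    UniformToFill  // scale uniformly so the ViewBox fills the ViewPort
}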
The TileMode enum is used to describe if and how a space is filled by Tiles. A TileBrush defines where the base Tile is (specified by the ViewPort). The rest of the space is filled based on the TileMode value.
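A sketch of the enum; the member names are assumptions consistent with the tiling and flipping behavior described:

public enum TileMode
{
    None,   // only the base tile is drawn
    Tile,   // the base tile is repeated to fill the remaining space
    FlipX,  // alternate columns of tiles are flipped horizontally
    FlipY,  // alternate rows of tiles are flipped vertically
    FlipXY  // combines FlipX and FlipY
}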
The VerticalAlignment and HorizontalAlignment enums are used to describe how content is positioned within a container vertically and horizontally, respectively:
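Sketches of the two enums (the member names are assumptions based on the alignment behavior described later in this section):

public enum VerticalAlignment
{
    Top,
    Center,
    Bottom
}

public enum HorizontalAlignment
{
    Left,
    Center,
    Right
}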
The TileBrush properties select a rectangular portion of the infinite plane to be a tile (the ViewBox) and describe a destination rectangle (ViewPort) which will be the base Tile in the area being filled. The remaining destination area will be filled based on the TileMode property, which controls if and how the original tile is replicated to fill the remaining space:
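A sketch of the property set follows; the property shapes, and the BrushMappingMode name for the unit enums, are assumptions drawn from the surrounding text:

// Units for the ViewPort and contents (see the Absolute and
// RelativeToBoundingBox modes described above).
public enum BrushMappingMode
{
    Absolute,
    RelativeToBoundingBox
}

// Sketch of the TileBrush properties discussed in this section.
public abstract class TileBrush : Brush
{
    public Rect ViewBox { get; set; }      // source rectangle, in content space
    public Rect ViewPort { get; set; }     // destination rectangle: the base tile
    public BrushMappingMode ViewPortUnits { get; set; }
    public BrushMappingMode ContentUnits { get; set; } // used when ViewBox is unset
    public Stretch Stretch { get; set; }   // how the ViewBox is scaled into the ViewPort
    public TileMode TileMode { get; set; } // if and how the base tile is replicated
    public HorizontalAlignment HorizontalAlignment { get; set; }
    public VerticalAlignment VerticalAlignment { get; set; }
}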
A TileBrush's contents have no intrinsic bounds, and effectively describe an infinite plane. These contents exist in their own coordinate space, and the space which is being filled by the TileBrush is the local coordinate space at the time of application. The content space is mapped into the local space based on the ViewBox, ViewPort, Alignments and Stretch properties. The ViewBox is specified in content space, and this rectangle is mapped into the ViewPort rectangle.
The ViewPort defines the location where the contents will eventually be drawn, creating the base tile for this Brush. If the value of ViewPortUnits is Absolute, the value of ViewPort is considered to be in local space at the time of application. If, instead, the value of ViewPortUnits is RelativeToBoundingBox, then the value of ViewPort is considered to be in the coordinate space where 0,0 is the top/left corner of the bounding box of the object being painted and 1,1 is the bottom/right corner of the same box. For example, consider a RectangleGeometry being filled which is drawn from 100,100 to 200,200. Then, if the ViewPortUnits is Absolute, a ViewPort of (100,100,100,100) would describe the entire content area. If the ViewPortUnits is RelativeToBoundingBox, a ViewPort of (0,0,1,1) would describe the entire content area. If the ViewPort's Size is empty and the Stretch is not None, this Brush renders nothing.
The ViewBox is specified in content space. This rectangle is transformed to fit within the ViewPort as determined by the Alignment properties and the Stretch property. If the Stretch is None, then no scaling is applied to the contents. If the Stretch is Fill, then the ViewBox is scaled independently in both X and Y to be the same size as the ViewPort. If the Stretch is Uniform or UniformToFill, the logic is similar but the X and Y dimensions are scaled uniformly, preserving the aspect ratio of the contents. If the Stretch is Uniform, the ViewBox is scaled to have the more constrained dimension equal to the ViewPort's size. If the Stretch is UniformToFill, the ViewBox is scaled to have the less constrained dimension equal to the ViewPort's size. Another way to think of this is that both Uniform and UniformToFill preserve aspect ratio, but Uniform ensures that the entire ViewBox is within the ViewPort (potentially leaving portions of the ViewPort uncovered by the ViewBox), and UniformToFill ensures that the entire ViewPort is filled by the ViewBox (potentially causing portions of the ViewBox to be outside the ViewPort). If the ViewBox's area is empty, then no Stretch will apply. Alignment will still occur, and it will position the “point” ViewBox.
Once the ViewPort is determined (based on ViewPortUnits) and the ViewBox's destination size is determined (based on Stretch), the ViewBox needs to be positioned within the ViewPort. If the ViewBox is the same size as the ViewPort (if Stretch is Fill, or if it just happens to occur with one of the other three Stretch values), then the ViewBox is positioned at the Origin so as to be identical to the ViewPort. If not, then HorizontalAlignment and VerticalAlignment are considered. Based on these properties, the ViewBox is aligned in both X and Y dimensions. If the HorizontalAlignment is Left, then the left edge of the ViewBox will be positioned at the Left edge of the ViewPort. If it is Center, then the center of the ViewBox will be positioned at the center of the ViewPort, and if Right, then the right edges will meet. The process is repeated for the Y dimension.
If the ViewBox is Empty it is considered unset. If it is unset, then ContentUnits are considered. If the ContentUnits are Absolute, no scaling or offset occurs, and the contents are drawn into the ViewPort with no transform. If the ContentUnits are RelativeToBoundingBox, then the content origin is aligned with the ViewPort Origin, and the contents are scaled by the object's bounding box's width and height.
When filling a space with a TileBrush, the contents are mapped into the ViewPort as above, and clipped to the ViewPort. This forms the base tile for the fill, and the remainder of the space is filled based on the Brush's TileMode. If set, the Brush's transform is applied, which occurs after the other mapping, scaling, offsetting, and so forth.
A VisualBrush is a TileBrush whose contents are specified by a Visual. This Brush can be used to create complex patterns, or it can be used to draw additional copies of the contents of other parts of the scene.
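For example (a sketch; someVisual is assumed to have been built elsewhere, and dc is a DrawingContext as in the earlier examples):

// Fill a rectangle with repeated copies of another visual's content.
VisualBrush visualBrush = new VisualBrush(someVisual);
dc.DrawRectangle(visualBrush, null, new Rect(0, 0, 200, 200));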
ImageBrush is a TileBrush having contents specified by an ImageData. This Brush can be used to fill a space with an Image.
VideoBrush is a TileBrush having contents specified by a VideoData. This Brush can be used to fill a space with a Video.
NineGridBrush is a Brush which always fills the object bounding box with its content image, and the image stretch isn't accomplished purely via a visual scale. The Image source is divided into nine rectangles by four borders (hence the name NineGrid). The contents of the image in each of those nine regions are scaled in 0, 1 or 2 dimensions until they fill the object bounding box: the four corner regions are not scaled, the four edge regions are each scaled in one dimension, and the center region is scaled in both dimensions.
In addition to the nine grid regions pictured above, there is an optional “tenth” grid. This takes the form of an additional image which is centered in the ViewPort and which is not scaled. This can be used to place a shape in the center of a button, etc. This “tenth grid” is called a glyph, and is exposed by the GlyphImageData property:
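A sketch of such a brush follows; the border, image and glyph property names are assumptions based on the description:

// Sketch: a NineGridBrush whose four border members count in from the
// image edges (in image pixels), plus the optional centered, unscaled
// "tenth grid" glyph.
NineGridBrush nineGrid = new NineGridBrush();
nineGrid.ImageSource = buttonFaceImage;   // image assumed built elsewhere
nineGrid.LeftBorder = 8;
nineGrid.RightBorder = 8;
nineGrid.TopBorder = 8;
nineGrid.BottomBorder = 8;
nineGrid.GlyphImageData = checkmarkImage; // optional centered, unscaled glyph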
Note that the border members count in from the edge of the image, in image pixels.
The Pen is an object that takes a Brush and other parameters that describe how to stroke a space/Geometry. Conceptually, a Pen describes how to create a stroke area from a Geometry. A new region is created which is based on the edges of the Geometry, the Pen's Thickness, the PenLineJoin, PenLineCap, and so forth. Once this region is created, it is filled with the Brush.
The PenLineCap determines how the ends of a stroked line are drawn, the PenDashCap determines how the ends of each dash in a dashed, stroked line are drawn, and the PenLineJoin determines how joints are drawn when stroking a line. The DashArrays class comprises static properties which provide access to common, well-known dash styles:
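Sketches of these types follow; the enum members and the DashArrays properties are illustrative assumptions, not a definitive list:

public enum PenLineCap { Flat, Square, Round, Triangle }
public enum PenDashCap { Flat, Round, Triangle }
public enum PenLineJoin { Miter, Bevel, Round }

// Assumed static properties for common dash styles (on/off run lengths).
public static class DashArrays
{
    public static double[] Solid { get { return new double[0]; } }
    public static double[] Dash  { get { return new double[] { 2, 2 }; } }
    public static double[] Dot   { get { return new double[] { 0, 2 }; } }
}

// Using a Pen: a brush plus stroking parameters describe the stroke area.
Pen pen = new Pen(Brushes.Black, 2.0);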
Another brush object is the VisualBrush object. Conceptually, the VisualBrush provides a way to have a visual drawn in a repeated, tiled fashion as a fill.
In one implementation, a VisualBrush's contents have no intrinsic bounds, and effectively describe an infinite plane. These contents exist in their own coordinate space, and the space which is being filled by the VisualBrush is the local coordinate space at the time of application. The content space is mapped into the local space based on the ViewBox, ViewPort, Alignments and Stretch properties. The ViewBox is specified in content space, and this rectangle is mapped into the ViewPort (as specified via the Origin and Size properties) rectangle.
The ViewPort defines the location where the contents will eventually be drawn, creating the base tile for this Brush. If the value of DestinationUnits is UserSpaceOnUse, the Origin and Size properties are considered to be in local space at the time of application. If instead the value of DestinationUnits is ObjectBoundingBox, then an Origin and Size are considered to be in the coordinate space, where 0,0 is the top/left corner of the bounding box of the object being brushed, and 1,1 is the bottom/right corner of the same box. For example, consider a RectangleGeometry being filled which is drawn from 100,100 to 200,200. In such an example, if the DestinationUnits is UserSpaceOnUse, an Origin of 100,100 and a Size of 100,100 would describe the entire content area. If the DestinationUnits is ObjectBoundingBox, an Origin of 0,0 and a Size of 1,1 would describe the entire content area. If the Size is empty, this Brush renders nothing.
The ViewBox is specified in content space. This rectangle is transformed to fit within the ViewPort as determined by the Alignment properties and the Stretch property. If the Stretch is None, then no scaling is applied to the contents. If the Stretch is Fill, then the ViewBox is scaled independently in both X and Y to be the same size as the ViewPort. If the Stretch is Uniform or UniformToFill, the logic is similar but the X and Y dimensions are scaled uniformly, preserving the aspect ratio of the contents. If the Stretch is Uniform, the ViewBox is scaled to have the more constrained dimension equal to the ViewPort's size. If the Stretch is UniformToFill, the ViewBox is scaled to have the less constrained dimension equal to the ViewPort's size. In other words, both Uniform and UniformToFill preserve aspect ratio, but Uniform ensures that the entire ViewBox is within the ViewPort (potentially leaving portions of the ViewPort uncovered by the ViewBox), and UniformToFill ensures that the entire ViewPort is filled by the ViewBox (potentially causing portions of the ViewBox to be outside the ViewPort). If the ViewBox is empty, then no Stretch will apply. Note that alignment will still occur, and it will position the “point” ViewBox.
Once the ViewPort is determined (based on DestinationUnits) and the ViewBox's size is determined (based on Stretch), the ViewBox needs to be positioned within the ViewPort. If the ViewBox is the same size as the ViewPort (if Stretch is Fill, or if it just happens to occur with one of the other three Stretch values), then the ViewBox is positioned at the Origin so as to be identical to the ViewPort. Otherwise, HorizontalAlignment and VerticalAlignment are considered. Based on these properties, the ViewBox is aligned in both X and Y dimensions. If the HorizontalAlignment is Left, then the left edge of the ViewBox will be positioned at the Left edge of the ViewPort. If it is Center, then the center of the ViewBox will be positioned at the center of the ViewPort, and if Right, then the right edges will meet. The process is repeated for the Y dimension.
If the ViewBox is (0,0,0,0), it is considered unset, whereby ContentUnits are considered. If the ContentUnits are UserSpaceOnUse, no scaling or offset occurs, and the contents are drawn into the ViewPort with no transform. If the ContentUnits are ObjectBoundingBox, then the content origin is aligned with the ViewPort Origin, and the contents are scaled by the object's bounding box's width and height.
When filling a space with a VisualBrush, the contents are mapped into the ViewPort as above, and clipped to the ViewPort. This forms the base tile for the fill, and the remainder of the space is filled based on the Brush's TileMode. Finally, if set, the Brush's transform is applied—it occurs after all the other mapping, scaling, offsetting, etc.
The TileMode enumeration is used to describe if and how a space is filled by its Brush. A Brush which can be tiled has a tile rectangle defined, and this tile has a base location within the space being filled. The rest of the space is filled based on the TileMode value.
As generally described above, the graphics object model of the present invention includes a Transform object model, which includes several types of transforms arranged in a hierarchy under a Transform base class, such as translation, rotation, scaling, skewing and matrix-based transforms.
Matrices for 2D computations are represented as 3×3 matrices. For the needed transforms, only six values are needed instead of a full 3×3 matrix. These are named and defined as follows.
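A reconstruction of the six values follows (a sketch; the names and the row-vector layout are assumptions consistent with the Matrix type used by the API):

        | m11      m12      0 |
M  =    | m21      m22      0 |
        | offsetX  offsetY  1 |

Because the last column is fixed at (0, 0, 1), only the six remaining values (m11, m12, m21, m22, offsetX and offsetY) need to be stored.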
When a matrix is multiplied with a point, it transforms that point from the new coordinate system to the previous coordinate system:
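Under the row-vector convention assumed above, the multiplication is:

(x', y', 1) = (x, y, 1) · M

that is,

x' = x·m11 + y·m21 + offsetX
y' = x·m12 + y·m22 + offsetY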
Transforms can be nested to any level. Whenever a new transform is applied it is the same as post-multiplying it onto the current transform matrix:
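In symbols (a sketch; the multiplication order depends on the row- versus column-vector convention and is an assumption consistent with the layout above):

M_combined = M_new · M_current

so that a point is transformed first by the most recently applied (innermost) transform and then by the previously accumulated ones.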
Most places in the API do not take a Matrix directly, but instead use the Transform class, which supports animation.
Conclusion
As can be seen from the foregoing detailed description, there is provided a system, method and object model that provide program code with the ability to interface with a scene graph. The system, method and object model are straightforward to use, yet powerful, flexible and extensible.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
The present invention is a continuation-in-part of U.S. patent application Ser. No. 10/402,268 filed Mar. 27, 2003.
Patent publication: US 2004/0189645 A1, published September 2004 (US).
Related U.S. application data: the present application, Ser. No. 10/693,673, is a continuation-in-part of parent application Ser. No. 10/402,268, filed March 2003 (US).