The present invention relates generally to the creation and editing of visual presentations, and more particularly, to displaying graphics based on user customizations.
Visual aids help people understand information. Conveying information to or among groups of people often requires creating visual presentations that embody the information. Graphics application programs, such as the Microsoft® PowerPoint® presentation application, have helped automate the task of creating such visual presentations. Such graphics application programs allow users to convey information more efficiently and effectively by putting that information in an easily understandable format, referred to herein as a graphic.
A graphic is a visual representation, such as a diagram or other drawing, of an idea. A graphic is typically composed of several graphical elements that represent content embodying the idea, such as, for example, a bulleted list. Each graphical element is a part of the displayed graphic. A graphical element can have both textual and graphical characteristics. Whereas graphical characteristics generally refer to pictorial or other visual features of a graphical element, textual characteristics generally refer to the written matter within the graphical element. Depending on the information and the audience, a user of a graphics application program generally determines a specific graphic that will best teach or convey the underlying information. Generally, conventional graphics application programs provide one of two approaches for creating a graphic.
On one hand, some conventional graphics application programs utilize a manual drawing approach in which users have full flexibility in creating and editing the graphic. As such, a user may position and customize the look of the graphical elements in the graphic as he or she sees fit. By providing such “free rein” on graphic editing, however, this manual approach results in the user having to re-position and re-align those graphical elements in the graphic affected by the customization and/or repositioning of other graphical elements in the graphic. As one may guess, this approach generally requires a great deal of time to manipulate the graphic to render a final product. The user's time is inefficiently spent manipulating the visual aspects of the graphic rather than focusing on the message that is to be portrayed in the graphic. Moreover, this approach requires, at least to some extent, graphical design abilities. Those users that do not have strong design skills are even further limited by the manual approach.
On the other hand, some conventional graphics application programs utilize an automatic drawing approach in which the layout and look for each graphic is automatically defined based on the type of graphic desired by a user and the graphical elements predetermined for the graphic. In this approach, the burden of aligning and positioning graphical elements in the graphic is taken away from the user and placed instead with the application program. However, this approach is problematic in the sense that the user is typically only provided a limited fixed set of graphic definitions to choose from. Additionally, the user is not empowered to customize the graphic based on his or her desires without first abandoning altogether the automatic drawing functionality, thereby defeating the purpose for using this approach in the first place.
It is with respect to these and other considerations that the present invention has been made.
In accordance with the present invention, a computer-implemented method is provided for rendering a graphic on a display screen. The graphic is a visual representation of content in which items may or may not be arranged in a predetermined structure. Various forms of content may be represented using the graphic, but for illustration purposes, the content is described herein as textual content. Upon receipt of the content, the method involves receiving selection of a graphic definition that is to visually represent the content. The selected graphic definition specifies default properties for the appearance and layout of graphical elements for graphics created under the graphic definition. Next, the method creates the graphic to include graphical elements corresponding to the items in the content and according to a customization of at least one of the default properties previously applied to a graphic rendered for the content based on a different graphic definition. The created graphic is then output to a display module for display to a user.
In accordance with embodiments of the invention, the customization is identified by analyzing a set of properties persistent across all possible graphic definitions, wherein this set of properties is specified in a “semantic” model. Thus, the semantic model defines those properties that are applicable to graphics corresponding to all possible graphic definitions. In accordance with yet another embodiment, creation of the graphic may also take into account customizations that are specific to the particular graphic definition for the graphic currently being rendered. These customizations are maintained in a “presentation” model that is retrieved along with the semantic model in response to selection of the associated graphic definition.
In yet further embodiments, the present invention provides a system for visually representing content. The system includes a plurality of possible graphic definitions each specifying default properties for an associated graphic operable to represent the content. The system also includes a semantic model that defines “semantic” properties for all possible graphic definitions such that each associated graphic represents a similar item in the content using a similar semantic property. Additionally, the system according to this embodiment includes a customization engine operable to define graphics according to different graphic definitions and the semantic model.
In accordance with yet another embodiment, the system includes a plurality of presentation models. One or more of the presentation models are associated with one or more of the plurality of possible graphic definitions. Each of the presentation models define presentation properties specific to the graphic definition to which each of the one or more presentation models is associated. In response to selection of a specific graphic definition for display, the customization engine renders a graphic according to the selected definition, the semantic model, which is persistent across all graphic definitions, and one or more presentation models associated with the selected definition. Thus, the graphic is displayed based on the selected definition, but has appearance and layout properties customized as dictated in the associated presentation model(s) and the semantic model.
In accordance with still another embodiment, the present invention is directed to a method for customizing a graphic having graphical elements displayed on a display screen. In response to receiving a request to modify the graphic, the method involves modifying the graphic based on the request while maintaining a customization previously applied to the graphic. Specifically, the customization relates to a property of a first graphical element in the graphic relative to a second graphical element in the graphic. For example, the customization may relate to the positioning or size of the first graphical element relative to the second graphical element. In an embodiment, the modification request embodies an instruction to add a graphical element to the graphic.
The various embodiments of the present invention may be implemented as a computer process, a computing system or as an article of manufacture such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process.
These and various other features as well as advantages, which characterize the present invention, will be apparent from a reading of the following detailed description and a review of the associated drawings.
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout.
In general, the present invention relates to customizing the visual representation of content displayed in one or more graphics. The content may be any form of information, but is described herein as textual data in accordance with an exemplary embodiment of the present invention. The content may be provided by a user (e.g., by keyboard, mouse, etc.), an application program, or a combination of both. Each graphic includes at least one graphical element, which may have textual characteristics, graphical characteristics or both.
In accordance with an embodiment, the present invention provides a computer-implemented method for displaying (referred to herein as, “display process”) a graphic based on user customizations to appearance and layout properties of one or more graphical elements in the graphic. Such properties include color, positioning, size, shape, formatting and other visual attributes associated with the graphical elements.
The display process is embodied in a computer graphics application having a user interface (UI) for creating and editing graphics. The computer graphics application may be either a stand-alone computer application or a sub-component of another computer application, such as, without limitation, a presentation application, a word processing application, a drawing application or a spreadsheet application. Those skilled in the art will appreciate the applicability of the computer graphics application to these other forms of computer applications, which are typically collected in an office suite of applications, such as Microsoft Office® and OpenOffice.
The present invention is described in the general context of computer-executable instructions (e.g., program modules) executed by one or more computers or other devices. The functionality of the program modules may be combined or distributed as desired in various embodiments. The program modules include one or more routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
Referring now to
The graphics pane 106 displays graphical content 108 created by the computer graphics application using content from the content pane 104. The graphical content 108 may be any form of a visual presentation, such as a drawing, diagram, etc., and is referred to herein as a “graphic” for nomenclature purposes. The gallery pane 105 provides the user with a plurality of selectable graphic definitions (e.g., 109a, 109b) that may be applied to the content in the content pane 104 and rendered in the graphics pane 106 as a graphic 108. Each of these three panes (104, 105 and 106) is now described in turn in greater detail relative to operation of the computer graphics application in accordance with various embodiments of the present invention.
The graphics pane 106, which is also referred to in
The graphic 108 is shown in
The content pane 104 is a window, a windowpane, outline view class, or other display area that allows a user to input a body of content 115 (hereinafter referred to as “content”) into the UI 102 of the computer graphics application. As such, the content pane 104 is operable to accept content 115 for use by the computer graphics application in creating the graphic 108. Basically, the content 115 is an idea that the user intends the rendered graphic 108 to convey. In an embodiment, the content 115 includes textual data, which may or may not be arranged based on specific formatting properties, or a “predetermined structure.”
In an embodiment, the content pane 104 is operable to receive input from a user and display that input as the content 115 for editing by the user. In this regard, the content 115 may be either manually entered (e.g., by keyboard) into the content pane 104 by a user or pasted from another area in the computer graphics application or another application program altogether. In accordance with another embodiment, the content 115 in the content pane 104 may be linked to another application or program, such that as the content data in the other program is created or modified, the content 115 within the content pane 104 will automatically appear or be modified. In still other embodiments, the user may manually refresh the linked data, such that the user forces the content data to update in the content pane 104 rather than having the graphics application or other program update automatically. In still other embodiments, the user may request and receive content data from another program, such as a database. Alternatively, the content 115 may be input into the content pane 104 automatically (i.e., without user interaction) by the computer graphics application or by another application.
The gallery pane 105 is a window or other graphical user interface component operable to present various types of graphics definitions, such as the graphic definitions 109a and 109b shown for illustrative purposes. The graphic definitions 109a and 109b may be chosen by a user for application to the content 115 in the content pane 104 to render the graphic 108. In an embodiment, the gallery pane 105 allows a user to switch between the different graphic definitions 109a and 109b and apply the same content to the chosen graphic definition, e.g., 109a and 109b, without needing to recreate each graphic 108 from scratch.
Each graphic definition, e.g., 109a and 109b, is associated with a default set of properties for the graphic 108. In an embodiment, these properties relate to any visual or non-visual characteristic embodying the layout and appearance of graphical elements, e.g., 122-131, within the graphic 108. In response to a user selecting a specific graphic definition 109a or 109b, the computer graphics application uses the selected graphic definition 109a or 109b as the framework for the layout and appearance of the graphic 108. In accordance with an embodiment of the present invention, the computer graphics application dynamically renders the graphic 108 based on the properties defined in the selected definition 109a or 109b, as currently specified according to any customizations that have been applied to either (1) any of these properties that are persistent across all graphic definitions (e.g., 109a and 109b) or (2) any of these properties that are strictly applicable to the selected graphic definition 109a or 109b. Dynamic generation of the graphic 108 thus refers to the different properties that may be specified for the graphic 108 at different points in time at which a specific graphic definition, e.g., 109a and 109b, is selected by the user.
The gallery pane 105 shown in
With the foregoing structures of the UI 102 in mind, operation of the computer graphics application is now described with reference to
The content 115 may be input into the content pane 104 and a graphic definition 109a or 109b may be selected in any sequence without departing from the scope of the present invention. If a graphic definition 109a or 109b is selected by a user prior to any content 115 being entered into the content pane 104, a graphic 108 is displayed without any content or, alternatively, with a set of sample content. In contrast, a user may input data into the content pane 104 for entry as the content 115 prior to selecting a graphic definition 109a or 109b. In an embodiment in this case, the computer graphics application may provide the user with a default choice for the graphic definition 109a or 109b; thus, as the content 115 is entered, the graphics pane 106 may display a graphic 108 of the default graphic definition 109a or 109b that grows in graphical elements (e.g., 122-131) as the user continues to add the content 115. Alternatively, the graphics pane 106 may remain blank (i.e., without graphic 108) until the user selects a graphic definition 109a or 109b from the gallery pane 105.
In an embodiment, the structure of the textual content 115 in the content pane 104 determines the structure and appearance of the graphical elements 122-131 shown in the graphics pane 106. For example, a first layer of the wheel diagram graphic 108 is a parent element 131 corresponding to a first primary line 116a of textual content 115 in the content pane 104. A second layer of the wheel diagram graphic 108 includes elements 126, 127, 128, 129 and 130 that are subordinate to the parent element 131, and thus referred to as “child elements.” The child elements 126, 127, 128, 129 and 130 correspond to the lines 118a of textual content 115 indented under the first line 116a. A third layer of the wheel diagram graphic 108 is also a parent element 125 and corresponds to a second primary line 116b of the textual content 115. Finally, a fourth layer of the wheel diagram graphic 108 includes child elements 122, 123 and 124 that are subordinate to the parent element 125. The child elements 122, 123 and 124 correspond to the lines 118b of the textual content 115 indented under the second primary line 116b. From the foregoing example, it should be appreciated that the textual content 115 in the content pane 104 is represented by various graphical elements 122-131 in the graphic 108 and the structure of the textual content 115 is represented by the structure of the graphical elements 122-131 in the graphic 108.
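By way of illustration only, and not limitation, the following Python sketch suggests one way an indented outline such as the content 115 could be mapped to parent and child graphical elements. The names used (OutlineNode, parse_outline) and the four-space indentation convention are assumptions made for this sketch; it is not the implementation described herein.

```python
# A minimal sketch (not the patented implementation) of mapping an indented
# outline to parent and child elements. OutlineNode and parse_outline are
# hypothetical names chosen only for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class OutlineNode:
    text: str
    children: List["OutlineNode"] = field(default_factory=list)


def parse_outline(lines: List[str], indent: str = "    ") -> List[OutlineNode]:
    """Convert indented text lines into a list of top-level (parent) nodes."""
    roots: List[OutlineNode] = []
    stack: List[OutlineNode] = []  # stack[i] is the most recent node at depth i
    for raw in lines:
        depth = (len(raw) - len(raw.lstrip())) // len(indent)
        node = OutlineNode(raw.strip())
        del stack[depth:]                      # pop deeper levels
        if stack:
            stack[-1].children.append(node)    # indented line -> child element
        else:
            roots.append(node)                 # primary line -> parent element
        stack.append(node)
    return roots


content = [
    "Planning",
    "    Scope",
    "    Budget",
    "Execution",
    "    Build",
    "    Test",
]
for parent in parse_outline(content):
    print(parent.text, "->", [child.text for child in parent.children])
```

Under these assumptions, each primary line yields a parent element and each indented line yields a child element subordinate to the nearest preceding parent, mirroring the correspondence between lines 116a, 118a, 116b and 118b and elements 122-131 described above.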
With the above example in mind, an embodiment of the present invention involves modifying the graphic 108 in response to changes within the textual content 115. For instance, if the indention of the top-most line of those lines 118a shown in
An example of a suitable operating environment in which the invention may be implemented is illustrated in
With reference to
Device 200 may also contain communications connection(s) 212 that allow the device to communicate with other devices. Communications connection(s) 212 is an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media.
Device 200 may also have input device(s) 214 such as keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 216 such as a display, speakers, printer, etc. may also be included. The devices may help form the user interface 102 discussed above. All these devices are well known in the art and need not be discussed at length here.
Computing device 200 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by processing unit 202. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Combinations of any of the above should also be included within the scope of computer readable media.
The computer device 200 may operate in a networked environment using logical connections to one or more remote computers (not shown). The remote computer may be a personal computer, a server computer system, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer device 200. The logical connections between the computer device 200 and the remote computer may include a local area network (LAN) or a wide area network (WAN), but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
When used in a LAN networking environment, the computer device 200 is connected to the LAN through a network interface or adapter. When used in a WAN networking environment, the computer device 200 typically includes a modem or other means for establishing communications over the WAN, such as the Internet. The modem, which may be internal or external, may be connected to the computer processor 202 via the communication connections 212, or other appropriate mechanism. In a networked environment, program modules or portions thereof may be stored in the remote memory storage device. By way of example, and not limitation, remote application programs may reside on a memory device connected to the remote computer system. It will be appreciated that the network connections described are exemplary and other means of establishing a communications link between the computers may be used.
With the computing environment of
More particularly, user interaction 308 with the content pane 104 results in the input of the content 115 into the computer graphics application 100. In response to such input, the computer graphics application 100 displays this content 115 within the content pane 104 for viewing and editing by the user. Also, as described above, the computer graphics application 100 creates a graphic 108 representing this content 115 and displays this graphic 108 through the graphics pane 106. User interaction 310 with the graphics pane 106 results in editing of the graphic 108 displayed therein. As such, the user interaction 310 represents customizations to the graphic 108 displayed in the graphics pane 106. User interaction 312 with the gallery pane 105 results in the selection of a specific graphic definition from a plurality of graphic definitions, e.g., 109a and 109b, the graphical representations of which are displayed through the gallery pane 105 by icon, menu, toolbar, thumbnail or other known selectable UI component. Thus, selection of a specific graphic definition 109a or 109b through the gallery pane 105 yields the rendering of a graphic 108 in the graphics pane 106 based on the selected definition 109a or 109b.
In addition to the user interface components described above, the computer graphics application 100 also includes a customization system 300 and a layout engine 303. The customization system 300 and the layout engine 303 work together to provide the user interface 102 with the appropriate graphic 108 for rendering on the graphics pane 106. To accomplish this, the customization system 300 passes data 301 embodying the appearance and layout properties specified by the selected graphic definition 109a or 109b, and any associated customizations thereto, to the layout engine 303. For nomenclature purposes, this data 301 is hereinafter referred to as “customization data.” The customization data 301 collectively defines the properties based on which the graphic 108 is to be rendered, as specified in the selected graphic definition 109a or 109b and according to any customizations that have been applied to either (1) any of these properties that are persistent across all graphic definitions (referred to below as “semantic” properties) or (2) any of these properties that are strictly applicable to the selected graphic definition 109a or 109b (referred to below as “presentation” properties). A more detailed illustration of the customization system 300 is provided below with reference to
The layout engine 303 interprets the customization data 301 to generate a layout tree 302 for the graphic 108 being rendered. The layout tree 302 is then traversed to identify the appearance and layout properties for use in rendering the graphic 108. In an embodiment, traversal of the layout tree 302 is performed by a component of the layout engine 303 referred to as an “output engine” 304. In this embodiment, the output engine 304 renders the graphic 108 that is to be provided to the graphics pane 106 for display and editing. Upon receipt of the graphic 108, the graphics pane 106 displays the graphic 108 to the user for viewing and editing. The functionality and structure of the layout engine 303 are described in greater detail in accordance with an embodiment of the present invention in co-assigned U.S. patent application for “Method, System and Computer-Readable Medium for Creating and Laying Out a Graphic Within an Application Program,” filed Sep. 30, 2004 and assigned Ser. No. 10/955,271, the disclosure of which is hereby incorporated by reference in its entirety. It should be appreciated that the graphic 108 may be constructed using the customization data 301 by means other than the layout engine 303, which is described above only for illustrative purposes in order to convey an exemplary embodiment of the present invention.
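By way of illustration only, and not limitation, the following Python sketch suggests how customization data might be interpreted into a layout tree that is then traversed by an output stage. The nested-dictionary input format and the names LayoutNode, build_layout_tree and render are assumptions made for this sketch; they are not the structures of the layout engine 303 described herein or in the incorporated application.

```python
# A minimal sketch, under assumed data shapes, of turning customization data
# into a layout tree and traversing it to emit drawing instructions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LayoutNode:
    name: str
    properties: Dict[str, object]            # resolved appearance/layout properties
    children: List["LayoutNode"] = field(default_factory=list)


def build_layout_tree(customization_data: Dict) -> LayoutNode:
    """Recursively convert nested customization data into layout nodes."""
    root = LayoutNode(customization_data["name"], customization_data.get("properties", {}))
    for child in customization_data.get("children", []):
        root.children.append(build_layout_tree(child))
    return root


def render(node: LayoutNode, depth: int = 0) -> None:
    """Depth-first traversal standing in for the output engine's rendering pass."""
    print("  " * depth + f"draw {node.name} with {node.properties}")
    for child in node.children:
        render(child, depth + 1)


customization_data = {
    "name": "wheel",
    "properties": {"fill": "blue"},
    "children": [
        {"name": "spoke-1", "properties": {"size": 1.0}},
        {"name": "spoke-2", "properties": {"size": 1.4}},  # a user-enlarged element
    ],
}
render(build_layout_tree(customization_data))
```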
Referring now to
In an embodiment, the graphical model library 402 is specific to each instance and/or session of the computer graphics application 100. As such, the graphical model library 402 and its components are shown in dashed lines to illustrate instantiation of the library 402 and the models 408, 410 and 404 in memory for each instance of the computer graphics application 100. For example, if a user is creating and/or editing two different graphics 108 at the same time using the computer graphics application 100, a graphical model library 402 and associated models 408, 410 and 404 are created for each of the two different graphics 108. Alternatively, the graphical model library 402 and its components may be persisted across more than one instance and/or session of the computer graphics application 100. The implementation is a matter of choice, and both alternatives are fully contemplated within the scope of the present invention.
The presentation models (e.g., 408 and 410) are data structures that maintain the current properties specific to each graphic definition 109a and 109b that may be selected through the gallery pane 105. Embodiments of the present invention are described illustratively with a 1:1 correlation of presentation models to graphic definitions 109a and 109b. As such,
The semantic model 404 is a data structure that maintains current properties persisted across all graphic definitions 109a and 109b that may be selected by the user through the gallery pane 105. As such, there exists only one semantic model 404 within the graphical model library 402. Again, an embodiment of the present invention noted above involves maintaining a semantic model 404 with each instance and/or session of the computer graphics application 100, and therefore it is possible to have more than one semantic model 404.
In response to receiving a selection of a graphic definition 109a or 109b through the gallery pane 105, the customization engine 412 retrieves the appropriate presentation model 408 or 410 (i.e., the presentation model 408 associated with the selected graphic definition 306) and the semantic model 404. The customization engine 412 then creates the customization data 301 based on the current properties defined for the selected graphic definition 109a or 109b, as specified in the associated presentation model 408, and the semantic model 404.
The properties specified in the presentation models 408 and 410 and the semantic model 404 are dynamically updated based on input from the content pane 104 and the graphics pane 106. With respect to the content pane 104, as the user is adding content 115 through the user interaction 308, the presentation models 408 and 410 and the semantic model 404 are updated to reflect the addition of such content 115. For example, if a hierarchical list has textual content lines “A,” “B” and “C,” then each of the presentation models 408 and 410 and the semantic model 404 in the graphical model library 402 have in-memory representations for a graphical element corresponding to each of the textual content lines “A,” “B” and “C.” In response to a user adding a fourth textual content line “D,” the customization engine 412 updates each of the presentation models 408 and 410 and the semantic model 404 to include an in-memory representation for a graphical element corresponding to this new textual content line. Therefore, the customization data 301 created by the customization engine 412 will include the addition of this new graphical element by virtue of the appropriate presentation model 408 or 410 and the semantic model 404 specifying same.
With respect to the graphics pane 106, as the user is editing a rendered graphic 108 through the user interaction 310, the customization engine 412 updates the presentation model (e.g., 408 or 410) corresponding to the graphic definition 306 associated with the edited graphic 108 or, alternatively, the semantic model 404 to reflect the user's customizations. In this regard, the customization engine 412 updates the appropriate presentation model (e.g., 408 or 410) if the customization is a change to a presentation property, i.e., a “presentation change.” In contrast, the customization engine 412 updates the semantic model 404 if the customization is a change to a semantic property, i.e., a “semantic change.”
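By way of illustration only, and not limitation, the following Python sketch models the bookkeeping described above: adding a content item creates a representation in every model, while an edit is routed either to the shared semantic model or to the presentation model of the graphic definition being edited. The class name, the dictionary-based models and the definition names "list" and "wheel" are assumptions made for this sketch; the categorization of color as semantic and size as presentation merely mirrors the exemplary categorization adopted below.

```python
# A hedged sketch of the two-model bookkeeping. GraphicalModelLibrary and the
# property categories are illustrative assumptions, not the patented code.
from typing import Dict

SEMANTIC_PROPERTIES = {"color"}          # persisted across all graphic definitions
PRESENTATION_PROPERTIES = {"size"}       # specific to one graphic definition


class GraphicalModelLibrary:
    def __init__(self, definitions):
        # one presentation model per graphic definition, one shared semantic model
        self.presentation: Dict[str, Dict[str, Dict[str, object]]] = {d: {} for d in definitions}
        self.semantic: Dict[str, Dict[str, object]] = {}

    def add_content_item(self, item: str) -> None:
        """Adding a content line creates a representation in every model."""
        self.semantic.setdefault(item, {})
        for model in self.presentation.values():
            model.setdefault(item, {})

    def apply_customization(self, definition: str, item: str, prop: str, value) -> None:
        """Route an edit to the semantic model or to the definition's presentation model."""
        if prop in SEMANTIC_PROPERTIES:
            self.semantic.setdefault(item, {})[prop] = value
        else:
            self.presentation[definition].setdefault(item, {})[prop] = value


library = GraphicalModelLibrary(["list", "wheel"])
for line in ["A", "B", "C", "D"]:        # the later-added "D" is handled identically
    library.add_content_item(line)
library.apply_customization("list", "B", "color", "red")   # semantic change
library.apply_customization("list", "C", "size", 1.5)      # presentation change
# The size change is confined to the "list" definition; the color change is shared.
print(library.semantic["B"], library.presentation["list"]["C"], library.presentation["wheel"].get("C"))
```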
Generally, categorization of a change to any property of a graphic 108 as being a “presentation” change or a “semantic” change is a matter of choice and any such categorizations are within the scope of the present invention. For illustration purposes only, an exemplary semantic change is herein described as being a change to the color of a graphical element and an exemplary presentation change is herein described as being a change to the size of a graphical element. With these illustrations in mind,
As noted above, the presentation model 408 and the semantic model 404 are in-memory representations, and thus shown using dashed lines. Each of these models 404 and 408 includes representations corresponding to each of the graphical elements 502, 504, 506, 508 and 510 included in the graphic 500. Specifically, the presentation model 408 includes representations 502′, 504′, 506′, 508′ and 510′ that correspond to graphical elements 502, 504, 506, 508 and 510, respectively, and maintain properties associated with each respective graphical element. These properties are “presentation” properties that are specific only to the graphic definition 109a or 109b to which the graphic 500 belongs. In the exemplary embodiment for illustrating
Likewise, the semantic model 404 includes representations 502″, 504″, 506″, 508″ and 510″ that correspond to graphical elements 502, 504, 506, 508 and 510, respectively, and maintain properties associated with each respective graphical element. These properties are “semantic” properties that are persistent across all graphic definitions 109a and 109b that may be selected through the gallery pane 105. In the exemplary embodiment for illustrating
Because a graphic (e.g., 511) corresponding to the requested graphic definition 109a or 109b has not yet been rendered on the graphics pane 106, a user has not yet had a chance to customize any of the presentation properties. As such, the retrieved presentation model 410 specifies default properties for the graphical elements, according to the associated graphic definition 109a or 109b, and the customization engine 412 creates customization data 301 that does not specify any presentation changes. Thus, because the size change to graphical element 506 is considered for purposes of this illustration a “presentation” change, that particular customization is not persisted to the graphic 511. Indeed, all of the presentation properties specified in the retrieved presentation model 410 are default properties for the selected graphic definition 305. However, because the semantic model 404 has been updated per the semantic change (i.e., color) to the graphic 500 illustrated in
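By way of illustration only, and not limitation, the following Python sketch shows why, under the exemplary categorization above, the color customization persists across a switch of graphic definitions while the size customization does not. The property-resolution order assumed here (definition defaults, then that definition's presentation model, then the shared semantic model) and the definition names are assumptions made for this sketch, not the patented rendering code.

```python
# A small illustrative sketch of definition switching with two models.
DEFAULTS = {
    "list":  {"color": "blue", "size": 1.0},
    "wheel": {"color": "blue", "size": 1.0},
}

presentation_models = {"list": {}, "wheel": {}}   # per-definition customizations
semantic_model = {}                               # customizations shared by all definitions

# Customizations made while the "list" graphic was displayed:
semantic_model["color"] = "red"                   # semantic change (color)
presentation_models["list"]["size"] = 1.5         # presentation change (size)


def resolve(definition: str) -> dict:
    """Effective properties for one graphical element under a given definition."""
    return {**DEFAULTS[definition], **presentation_models[definition], **semantic_model}


print(resolve("list"))    # {'color': 'red', 'size': 1.5}
print(resolve("wheel"))   # {'color': 'red', 'size': 1.0}  -> size reverts, color persists
```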
The examples shown in
For example, the position of a graphical element relative to other graphical elements may constitute a presentation or semantic property that is specified by a presentation model or a semantic model, respectively. In either case, the computer graphics application 100 applies customizations by scaling both the x and y offsets applied to the re-positioned graphical element in response to the addition of a new graphical element to the graphic. Alternatively, the x and y offsets embody a radial offset based on polar coordinates. In an embodiment, positional movements relative to graphics that are rectangular in nature (e.g., square, rectangle, etc.) are applied based on x and y offsets, whereas positional movements relative to graphics that are circular in nature (e.g., oval, circle, etc.) are applied based on a radial offset.
For both radial and linear positional customizations, the distance that a graphical element has been moved relative to its default position is stored in either a presentation or a semantic model, depending on whether positional changes are “presentation” or “semantic” changes. As such, these customizations are maintained with the graphic and, if stored as a semantic change, then across graphics corresponding to other graphic definitions, even after modification of the graphic(s). In response to a change to the layout of a graphic (e.g., adding or deleting a graphical element) in which a graphical element has been re-positioned, the computer graphics application 100 determines a new position for the previously re-positioned graphical element based on the stored relative change. For linear customizations, this process involves using the offset of the previously re-positioned graphical element from another graphical element in the graphic. For radial customizations, this process involves using the radius, the shape position angle, and the angle between graphical elements.
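By way of illustration only, and not limitation, the following Python sketch suggests how a stored positional customization might be re-applied after a layout change, for both the linear and the radial cases. The specific scaling rule and the function names are assumptions made for this sketch.

```python
# A hedged sketch of re-applying a stored positional customization after a
# layout change; the scaling rule and polar form are illustrative assumptions.
import math


def reapply_linear_offset(new_default_xy, stored_offset_xy, scale_x=1.0, scale_y=1.0):
    """Place a previously moved element relative to its new default position."""
    nx, ny = new_default_xy
    dx, dy = stored_offset_xy
    return (nx + dx * scale_x, ny + dy * scale_y)


def reapply_radial_offset(center, default_angle, default_radius,
                          stored_radius_offset, stored_angle_offset):
    """Radial variant: the customization is kept as (radius, angle) deltas."""
    cx, cy = center
    r = default_radius + stored_radius_offset
    theta = default_angle + stored_angle_offset
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))


# Rectangular graphic: an element nudged (10, -5) from its default keeps that
# nudge (optionally scaled) after a new element shifts the defaults.
print(reapply_linear_offset((120, 80), (10, -5)))

# Circular graphic: an element pulled 8 units outward from its default radius
# keeps that pull when its default angular position changes (here, 60 degrees).
print(reapply_radial_offset((0, 0), math.radians(60), 50, 8, 0))
```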
Referring now to
The display process 600 is performed using an operation flow beginning with a start operation 602 and ending with a terminate operation 630. The start operation 602 is initiated in response to a user or another application program launching the computer graphics application 100 to create or edit a graphic 108 representing content 115 entered into the application 100. From the start operation 602, the operation flow passes to a receive content operation 604.
The receive content operation 604 receives the content 115 that the user or application program is requesting to be visually represented in the graphic 108. In an embodiment, this content 115 is textual content, which may or may not be arranged in a format. Exemplary textual content in accordance with this embodiment is a structured list. Also, in an embodiment, the content 115 received by the first receive operation 604 is displayed to a user through the content pane 104 of the user interface 102 for the computer graphics application 100. From the receive content operation 604, the operation flow passes to a create operation 606.
The create operation 606 creates the presentation model 408 or 410 and the semantic model 404 for use with the instance of the computer graphics application 100 launched at the start operation 602. The created semantic model 404 specifies default semantic properties that are persistent across all possible graphic definitions 109a and 109b. The created presentation model 408 or 410 specifies the default presentation properties defined by a selected graphic definition 109a or 109b.
In accordance with an embodiment, the selected graphic definition 109a or 109b on which the created presentation model 408 or 410 is based is a default graphic definition, e.g., 109a or 109b, predetermined for all instances of the computer graphics application 100. In this embodiment, the computer graphics application 100 is pre-programmed such that initiation of the application 100 renders a selection of the default graphic definition 109a or 109b for use by the user until the user requests a graphic switch to another graphic definition 109a or 109b. In accordance with an alternative embodiment, the computer graphics application 100 may present the user with a selection screen (e.g., UI dialog) that allows the user to select a specific graphic definition 109a or 109b for rendering an initial graphic 108 within the graphics pane 106. As such, the create operation 606 creates the presentation model 408 or 410 based on the selected graphic definition 109a or 109b.
After the presentation model 408 or 410 and the semantic model 404 have been created, the operation flow passes in sequence to a render operation 612. The render operation 612 renders the graphic 108 on a display screen for viewing and editing by a user. The visual characteristics (i.e., layout and appearance of graphical elements) of the graphic 108 are defined by the render operation 612 based on the property specifications in the semantic model 404 and the presentation model 408 or 410 created by the create operation 606. As described above, the semantic model 404 is not only used to define certain visual properties, i.e., “semantic properties,” for the graphic 108 displayed by the render operation 612, but for all graphics 108 belonging to all graphic definitions 109a and 109b that may be rendered in the computer graphics application 100. In contrast, however, the presentation model 408 or 410 is used only to define certain visual properties, i.e., “presentation properties,” for the graphic 108 being rendered and for none other. Indeed, a graphic 108 corresponding to the other graphic definition 109a or 109b takes on only those presentation properties specified in the presentation model 408 or 410 corresponding to that graphic definition 109a or 109b. After the graphic 108 is rendered on the display screen, the operation flow passes to a first query operation 614.
The first query operation 614 determines whether the instance of the computer graphics application 100 launched to invoke the start operation 602 has been terminated, thereby signifying that no further input regarding content, customizations or selection of graphic definitions 109a and 109b will be received unless the computer graphics application 100 is subsequently invoked to create a new instance.
If the instance has been terminated, the operation flow concludes at the terminate operation 630. Otherwise, the operation flow branches “No” to a second query operation 616. The second query operation 616 determines whether the graphic 108 currently rendered in the graphics pane 106 has been edited (i.e., customized) in any fashion. If so, the second query operation 616 branches the operation flow “Yes” to a third query operation 618. Otherwise, the second query operation 616 branches the operation flow “No” to a fourth query operation 624.
The third query operation 618 examines the customization detected by the second query operation 616 to determine whether the customization relates to a presentation change or a semantic change. As noted repeatedly above, a presentation change is a change that is intended to affect only the specific graphic definition 109a or 109b to which the graphic 108 currently being rendered corresponds. In contrast, a semantic change is a change that is intended to affect all graphic definitions 109a and 109b that may be selected through the computer graphics application 100. Any property that may relate to a graphic (e.g., 108), or graphical elements thereof, may be labeled either a presentation property, and thus subject to presentation changes, or a semantic property, and thus subject to semantic changes. The implementation is a matter of choice, and for illustrative purposes only, the size of a graphical element is being described herein as an exemplary presentation property and the color of a graphical element is being described herein as an exemplary semantic property.
If the third query operation 618 determines that the customization is a presentation change, the operation flow is branched “P” to a first update operation 620. The first update operation 620 updates the retrieved presentation model 408 or 410 with the customization. On the other hand, if the third query operation 618 determines that the customization is a semantic change, the operation flow is branched “S” to a second update operation 622. The second update operation 622 updates the semantic model 404 created by the create operation 606 with the customization. From both the first update operation 620 and the second update operation 622, the operation flow passes back to the render operation 612, which renders the graphic 108 based on the updated model (i.e., either the presentation model or the semantic model). The operation flow then continues as previously described.
In circumstances when the second query operation 616 branches the operation flow “No,” the fourth query operation 624 is invoked. The fourth query operation 624 determines whether a user or another application program has selected a new graphic definition 109a or 109b for display on the graphics pane 106. Such a selection is interpreted as the user or other application program desiring to view the content 115 received in receive operation 604 based on a different graphic definition 109a or 109b. If the fourth query operation 624 determines that such a selection has been made, the operation flow passes to a switch operation 626.
The switch operation 626 creates the presentation model 408 or 410 (or, retrieves, if this presentation model has already been created) associated with the new selected graphic definition 109a or 109b and then passes the operation flow back to the render operation 612. The render operation 612 then renders the graphic 108 based on the current semantic model 404 (i.e., either the semantic model created by the create operation 606 or an updated version of same) and the presentation model 408 or 410 created or retrieved by the switch operation 626.
However, if the fourth query operation 624 determines that the selection of a new graphic definition 109a or 109b has not occurred, the operation flow branches “No” to a fifth query operation 627. The fifth query operation 627 determines whether a user or another application program has input information resulting in a change to the structure of the content 115. Such a change in structure may result from the addition or deletion of content that did (if removed) or would (if added) correspond to a graphical element in the graphic 108. Such addition or deletion may include formatting changes that result in the addition or deletion of graphical elements. If the fifth query operation 627 detects a change in the content 115 that will result in a structural change to the graphic 108, the operation flow is branched “Yes” to a third update operation 628. Otherwise, the operation flow branches “No” to the first query operation 614 and continues as previously described.
The third update operation 628 updates both the presentation model 408 or 410 currently in use (i.e., either the presentation model created by the create operation 606 or a presentation model created or retrieved by the switch operation 626) and the semantic model 404 to reflect the changes to the content 115. From the third update operation 628, the operation flow passes to the render operation 612, which renders the graphic 108 based on the updated presentation model 408 or 410 and the updated semantic model 404. From the render operation 612, the operation flow continues as previously described.
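By way of illustration only, and not limitation, the following Python sketch recasts the operation flow of the display process 600 as a simple event loop. The event tuples, the Models helper class and its methods are assumptions made for this sketch; it merely mirrors the decision points of operations 612 through 628 described above.

```python
# A schematic sketch of the display process 600 as an event loop; the data
# structures and event names are illustrative assumptions.
class Models:
    """Minimal stand-in for the semantic model and per-definition presentation models."""

    def __init__(self, definitions, initial_definition):
        self.semantic = {}
        self.presentation = {d: {} for d in definitions}
        self.current = initial_definition
        self.items = []

    def add_item(self, item):                 # a content change updates every model
        self.items.append(item)


def render(models):                           # render operation 612 (stub)
    print(f"render {models.current}: items={models.items}, "
          f"semantic={models.semantic}, presentation={models.presentation[models.current]}")


def display_process(events, models):
    render(models)
    for kind, payload in events:
        if kind == "terminate":               # first query operation 614
            break
        if kind == "customize":               # second/third query operations 616, 618
            target = models.semantic if payload["semantic"] else models.presentation[models.current]
            target[payload["prop"]] = payload["value"]      # update operations 620/622
        elif kind == "switch":                # fourth query operation 624 / switch operation 626
            models.current = payload["definition"]
        elif kind == "content":               # fifth query operation 627 / third update operation 628
            models.add_item(payload["item"])
        render(models)                        # flow returns to the render operation 612


display_process(
    events=[
        ("content", {"item": "A"}),
        ("customize", {"semantic": True, "prop": "color", "value": "red"}),
        ("customize", {"semantic": False, "prop": "size", "value": 1.5}),
        ("switch", {"definition": "wheel"}),
        ("terminate", {}),
    ],
    models=Models(["list", "wheel"], "list"),
)
```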
Although the present invention has been described in language specific to structural features, methodological acts, and computer readable media containing such acts, it is to be understood that the present invention defined in the appended claims is not necessarily limited to the specific structure, acts, or media described. One skilled in the art will recognize other embodiments or improvements that are within the scope and spirit of the present invention. For example, the sequence of performance of operations within the display process 600 is shown in accordance with an exemplary embodiment. In accordance with other embodiments, the sequence of performance of these operations may be altered. For instance, the create operation 606 may be performed prior to the receive content operation 604 without departing from the scope of the present invention.
Additionally, while a 1:1 correlation between presentation models (e.g., 408 and 410) and graphic definitions (e.g., 109a and 109b) is described, more than one presentation model (e.g., 408 and 410) may be associated with a single graphic definition 109a or 109b in accordance with an embodiment of the present invention. Furthermore, it should be appreciated that the UI 102 may be constructed to have fewer or more than three panes (e.g., 104, 105 and 106). Indeed, the functionality on any one of these panes (e.g., 104, 105 and 106) may be alternatively or additionally provided in other types of graphical user interface components, such as, for example, toolbars, thumbnails, menu bars, command lines, dialog boxes, etc.
Even further, while the presentation models (e.g., 408, 410) are described herein as being specific to each graphic definition, e.g., 109a and 109b, other embodiments contemplated within the scope of the present invention relate to presentation models (e.g., 408, 410) being persisted across multiple graphic definitions 306. In these embodiments, graphic definitions, e.g., 109a and 109b, having similar characteristics are grouped together in graphic classifications and the presentation models are specific to these classifications rather than individually to the graphic definitions making up the classifications. For example, a classification may group together all graphic definitions, e.g., 109a and 109b, having graphical elements operable for positional movement in a radial manner in order to persist these movements across all graphic definitions in this classification. Likewise, another classification may group together all graphic definitions, e.g., 109a and 109b, operable for positional movement relative to an x-y coordinate system.
This application is a continuation application of U.S. patent application Ser. No. 11/013,655, entitled “Maintaining Graphical Presentations Based On User Customizations,” filed on Dec. 15, 2004, now U.S. Pat. No. 8,134,575 which is a continuation-in-part of U.S. patent application Ser. No. 10/957,103, entitled “Editing The Text Of An Arbitrary Graphic Via A Hierarchical List,” filed on Sep. 30, 2004, the complete disclosures of which are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4531150 | Amano | Jul 1985 | A |
4686522 | Hernandez et al. | Aug 1987 | A |
4996665 | Nomura | Feb 1991 | A |
5214755 | Mason | May 1993 | A |
5426729 | Parker | Jun 1995 | A |
5535134 | Cohn et al. | Jul 1996 | A |
5557722 | DeRose et al. | Sep 1996 | A |
5596691 | Good et al. | Jan 1997 | A |
5619631 | Schott | Apr 1997 | A |
5649216 | Sieber | Jul 1997 | A |
5669006 | Joskowicz et al. | Sep 1997 | A |
5732229 | Dickinson | Mar 1998 | A |
5818447 | Wolf et al. | Oct 1998 | A |
5867386 | Hoffberg et al. | Feb 1999 | A |
5872867 | Bergen | Feb 1999 | A |
5903902 | Orr et al. | May 1999 | A |
5909220 | Sandow | Jun 1999 | A |
5956043 | Jensen | Sep 1999 | A |
5956737 | King et al. | Sep 1999 | A |
5999731 | Yellin et al. | Dec 1999 | A |
6057842 | Knowlton et al. | May 2000 | A |
6057858 | Desrosiers | May 2000 | A |
6081816 | Agrawal | Jun 2000 | A |
6161098 | Wallman | Dec 2000 | A |
6166738 | Robertson et al. | Dec 2000 | A |
6173286 | Guttman et al. | Jan 2001 | B1 |
6189132 | Heng et al. | Feb 2001 | B1 |
6204849 | Smith | Mar 2001 | B1 |
6204859 | Jouppi et al. | Mar 2001 | B1 |
6256650 | Cedar et al. | Jul 2001 | B1 |
6289502 | Garland et al. | Sep 2001 | B1 |
6289505 | Goebel | Sep 2001 | B1 |
6292194 | Powell, III | Sep 2001 | B1 |
6301704 | Chow et al. | Oct 2001 | B1 |
6305012 | Beadle et al. | Oct 2001 | B1 |
6308322 | Serocki et al. | Oct 2001 | B1 |
6320602 | Burkardt et al. | Nov 2001 | B1 |
6324686 | Komatsu et al. | Nov 2001 | B1 |
6405225 | Apfel et al. | Jun 2002 | B1 |
6448973 | Guo et al. | Sep 2002 | B1 |
6593933 | Xu et al. | Jul 2003 | B1 |
6667750 | Halstead et al. | Dec 2003 | B1 |
6691282 | Rochford et al. | Feb 2004 | B1 |
6715130 | Eiche et al. | Mar 2004 | B1 |
6774899 | Ryall et al. | Aug 2004 | B1 |
6819342 | Kitagawa et al. | Nov 2004 | B2 |
6826727 | Mohr et al. | Nov 2004 | B1 |
6826729 | Giesen et al. | Nov 2004 | B1 |
6944830 | Card et al. | Sep 2005 | B2 |
6956737 | Chen et al. | Oct 2005 | B2 |
6957191 | Belcsak et al. | Oct 2005 | B1 |
7055095 | Anwar | May 2006 | B1 |
7107525 | Purvis | Sep 2006 | B2 |
7178102 | Jones et al. | Feb 2007 | B1 |
7209815 | Grier et al. | Apr 2007 | B2 |
7348982 | Schorr et al. | Mar 2008 | B2 |
7379074 | Gerhard et al. | May 2008 | B2 |
7423646 | Saini et al. | Sep 2008 | B2 |
7743325 | Berker et al. | Jun 2010 | B2 |
7747944 | Gerhard et al. | Jun 2010 | B2 |
7750924 | Berker et al. | Jul 2010 | B2 |
8134575 | Wong et al. | Mar 2012 | B2 |
20010051962 | Plotkin | Dec 2001 | A1 |
20020065852 | Hendrickson et al. | May 2002 | A1 |
20020107842 | Biebesheimer et al. | Aug 2002 | A1 |
20020111969 | Halstead, Jr. | Aug 2002 | A1 |
20030065601 | Gatto | Apr 2003 | A1 |
20030079177 | Brintzenhofe et al. | Apr 2003 | A1 |
20040041838 | Adusumilli et al. | Mar 2004 | A1 |
20040111672 | Bowman et al. | Jun 2004 | A1 |
20040133854 | Black | Jul 2004 | A1 |
20040148571 | Lue | Jul 2004 | A1 |
20040205602 | Croeni | Oct 2004 | A1 |
20050001837 | Shannon | Jan 2005 | A1 |
20050007382 | Schowtka et al. | Jan 2005 | A1 |
20050034083 | Jaeger | Feb 2005 | A1 |
20050091584 | Bogdan et al. | Apr 2005 | A1 |
20050094206 | Tonisson | May 2005 | A1 |
20050132283 | Diwan et al. | Jun 2005 | A1 |
20050157926 | Moravec et al. | Jul 2005 | A1 |
20050216832 | Giannetti | Sep 2005 | A1 |
20050240858 | Croft et al. | Oct 2005 | A1 |
20050273730 | Card et al. | Dec 2005 | A1 |
20050289466 | Chen | Dec 2005 | A1 |
20060064642 | Iyer | Mar 2006 | A1 |
20060066627 | Gerhard et al. | Mar 2006 | A1 |
20060066631 | Schorr et al. | Mar 2006 | A1 |
20060066632 | Wong | Mar 2006 | A1 |
20060070005 | Gilbert | Mar 2006 | A1 |
20060209093 | Berker et al. | Sep 2006 | A1 |
20060212801 | Berker et al. | Sep 2006 | A1 |
20060277476 | Lai | Dec 2006 | A1 |
20060294460 | Chao et al. | Dec 2006 | A1 |
20070006073 | Gerhard et al. | Jan 2007 | A1 |
20070055939 | Furlong et al. | Mar 2007 | A1 |
20070112832 | Wong | May 2007 | A1 |
20070186168 | Walsman et al. | Aug 2007 | A1 |
20080046803 | Beauchamp et al. | Feb 2008 | A1 |
20080136822 | Schorr et al. | Jun 2008 | A1 |
20080178107 | Lee et al. | Jul 2008 | A1 |
20080282147 | Schoor | Nov 2008 | A1 |
20080288916 | Tazoe | Nov 2008 | A1 |
20090019453 | Kodaganur | Jan 2009 | A1 |
20090119577 | Almbladh | May 2009 | A1 |
20090327954 | Danton | Dec 2009 | A1 |
20110055687 | Bhandar et al. | Mar 2011 | A1 |
20110225548 | Callens et al. | Sep 2011 | A1 |
Number | Date | Country |
---|---|---|
0 431 638 | Jun 1991 | EP |
1 111 543 | Jun 2001 | EP |
1 111 543 | Nov 2002 | EP |
2001-500294 | Jan 2001 | JP |
2002507289 | Mar 2002 | JP |
2002507301 | Mar 2002 | JP |
2003-052582 | Feb 2003 | JP |
2003044464 | Feb 2003 | JP |
2004-220561 | Aug 2004 | JP |
2005-275890 | Dec 2005 | JP |
10-2004-0041979 | May 2004 | KR |
277871 | Jul 2010 | MX |
2142162 | Nov 1999 | RU |
578067 | Mar 2004 | TW |
200406734 | May 2004 | TW |
WO 8200726 | Mar 1982 | WO |
WO 9500916 | Jan 1995 | WO |
WO 9855953 | Oct 1998 | WO |
WO 0139019 | May 2001 | WO |
WO 0139019 | May 2001 | WO |
WO 03052582 | Jun 2003 | WO |
WO 2004046972 | Jun 2004 | WO |
Number | Date | Country
---|---|---
20120127178 A1 | May 2012 | US |
 | Number | Date | Country
---|---|---|---
Parent | 11013655 | Dec 2004 | US |
Child | 13362879 | US |
 | Number | Date | Country
---|---|---|---
Parent | 10957103 | Sep 2004 | US |
Child | 11013655 | US |