Productivity applications, such as those available in the Microsoft® Office suite of applications, allow users to create a number of different types of documents incorporating various types of data objects. Objects include native objects created by the application, such as text boxes, as well as images and multimedia components. Embedded objects can include objects created with one application and embedded into a document created by another application. Embedding the object ensures that the object retains its original format.
Often, only portions of these objects are seen in the displayed version of the document, with some of the data from the object being hidden for various reasons. Currently, there are only limited mechanisms for selecting these objects and making them visible to the user. To manipulate objects, a user generally must first select the object in the user interface. Often it can be difficult to determine an object's boundaries, making the object difficult to select, especially when one object obscures another. It is also difficult to fully determine what an obscured object is without moving it away from the object covering it. In general, users have a hard time determining what has happened to objects, such as graphics that happen to be covered by other graphics, and understanding the layering of objects.
Technology is disclosed for identifying embedded objects in a user interface when a user interface device is positioned over the object. The technology is included in a computer system having a graphical user interface, a display and a user interface selection device. A method of illustrating a characteristic of an object in a document on the display comprises the steps of retrieving an event indicating the position of the user interface selection device over an object; and displaying an indicator illustrating boundaries of the object in a document. In certain embodiments, the style of indicator displayed is dependent on the type of object over which the selection device is positioned.
In another implementation, a method in a computer system for displaying an indicator of an object on a display device is presented. The indicator shows a location and boundaries of the object. The method may comprise the steps of: determining a position of a user controlled cursor over an object in a document; and displaying an indicator illustrating at least boundaries of the object in a document. In a further implementation, the method may include displaying a first style of indicator with a first type of object and a second style of indicator with a second type of object.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Technology is disclosed for identifying objects in a document in a user interface. The technology identifies the objects when a user interface device is positioned over the object. In one embodiment, an object-type dependent indicator is presented when a user positions a cursor or pointer over an object. The object-dependent indicator can be a halo, or band of color, around the object over which the mouse cursor is currently hovering. In other cases, a translucent mask overlaid on the object is used. In addition, obscured portions of objects are presented. This overcomes shortcomings of previous attempts to address this issue, which do not identify objects on mouse-over or mouse-hover events.
The technology disclosed herein allows users to readily identify objects embedded in documents, and quickly determine the scope and content of those objects. The technology allows one to more readily view and determine the boundaries of obscured objects, such as text boxes and partially hidden images and drawings.
In one implementation, the technology is implemented in a user interface for user productivity applications, such as those which comprise the Microsoft® Office suite of applications, and provides the user with graphical information that can assist the user in determining the scope of objects. Such applications embed objects in documents. A document may be any file, in any format, for storing data for use by an application on a storage medium. In particular, documents refer to any of the files used by the productivity applications referred to herein to store objects which may be rendered.
The present technology will now be described with reference to
The GUI described herein can be implemented on a variety of processing systems.
The present system is operational with numerous other general purpose or special purpose computing systems, environments or configurations. Examples of well known computing systems, environments and/or configurations that may be suitable for use with the present system include, but are not limited to, personal computers, server computers, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, laptop and palm computers, hand held devices including personal digital assistants and mobile telephones, distributed computing environments that include any of the above systems or devices, and the like.
The present system may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The present system may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as ROM 131 and RAM 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation,
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
The application programs 135 stored in system memory 130 may include the GUI for performing the present system as described hereinafter. When one of the application programs including the GUI of the present system is launched, it runs on the operating system 134 while executing on the processing unit 120. An example of an operating system on which the application programs including the present GUI may run is the Macintosh operating system by Apple Computer, Inc., but the application programs including the present GUI may operate on a variety of operating systems including also the Windows® operating system from Microsoft Corporation, or the Linux operating system. The application programs including the present GUI may be loaded into the memory 130 from the CD-ROM drive 155, or alternatively, downloaded from over network 171 or network 173.
The present technology will now be described in reference to the flowcharts of
In this context, an embedded object may refer to a broad range of graphical, linkable objects which may include text, graphics, multimedia, or other content, including objects created from other applications. The present technology provides an interface which may be contextually adjusted for a variety of application programs, e.g., word processing, presentation, spreadsheet, drawing, and/or other application program types. Each embedded object has boundaries which define the size of the object in the document. Such boundaries are editable by the user, generally when a user “selects” the object by clicking on the object in a manner allowed by the application.
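The boundaries described above can be modeled as an axis-aligned rectangle in document coordinates. The following sketch illustrates one such model; the names (`Bounds`, `contains`) are illustrative only and are not part of the disclosure.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Bounds:
    """Axis-aligned bounding box of an embedded object, in document coordinates."""
    left: float
    top: float
    width: float
    height: float

    @property
    def right(self) -> float:
        return self.left + self.width

    @property
    def bottom(self) -> float:
        return self.top + self.height

    def contains(self, x: float, y: float) -> bool:
        """True if point (x, y) falls within the object's boundaries."""
        return self.left <= x <= self.right and self.top <= y <= self.bottom
```

A hit test such as `Bounds(10, 10, 100, 50).contains(50, 30)` is the basic building block for the hover detection described below.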
In general, upon launching an application program, a graphical user interface is presented a user interface on a device such as a monitor 191. As shown in
At step 200 of
At step 202, the technology determines whether a mouse cursor 400 is positioned over an object. Each object is uniquely identified within the environment, and application development environments include mouse tracking capabilities which monitor mouse events linked to the operating system. For example, Microsoft Windows includes MouseDown, MouseEnter, MouseHover, MouseMove and other events which track cursor movement. In step 202, in one embodiment, the MouseHover event is used to determine when the cursor or pointer 400 is positioned over an object in the working environment document.
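Determining which object lies under the cursor at step 202 amounts to hit-testing the cursor position against each object's boundaries, taking the topmost hit when objects overlap. The sketch below assumes objects are kept in a list ordered bottom-to-top; the dictionary keys are illustrative, not from the specification.

```python
def object_under_cursor(objects, x, y):
    """Return the topmost object whose bounds contain (x, y), or None.

    `objects` is assumed ordered bottom-to-top (stacking order), so the
    last object that passes the hit test wins, mirroring the layered
    placement of embedded objects in a document.
    """
    hit = None
    for obj in objects:
        if (obj["left"] <= x <= obj["left"] + obj["width"]
                and obj["top"] <= y <= obj["top"] + obj["height"]):
            hit = obj
    return hit
```

In an event-driven environment, a handler for a hover event would call this with the event's cursor coordinates and then display the appropriate indicator for the returned object.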
If a cursor is positioned over an object, a determination is made at steps 204, 208, 212 and 216 as to the type of object, and a type-dependent indicator is presented to the user to identify the object. Steps 204, 208, 212 and 216 identify whether the object is a text box (204), an image (208), a drawing (212) or some other type of object (216). If the item is a text box at step 204, then a text box indicator is provided at step 206. If the item is an image at step 208, an image indicator is presented at step 210. If the item is a drawing at step 212, a drawing indicator is provided at step 214. For any other type of object “N” at step 216, a custom indicator may be provided at step 218. If the object is undefined, no indicator is provided at step 222 and the method returns to step 202.
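The type-dependent branching of steps 204-222 can be sketched as a simple dispatch table. The particular style assigned to each object type below is an illustrative assumption for the sketch; the specification leaves the styles to the implementation.

```python
# Assumed mapping of object type to indicator style; the specific pairings
# are illustrative, not mandated by the disclosure.
INDICATOR_STYLES = {
    "text_box": "halo",           # band of color around the text box bounds
    "image": "translucent_mask",  # translucent overlay on the image
    "drawing": "halo",
}


def indicator_for(object_type):
    """Select an indicator style for the object type under the cursor.

    Known types get their mapped style; any other defined type falls
    through to a custom indicator (step 218); an undefined object gets
    no indicator at all (step 222).
    """
    if object_type is None:
        return None
    return INDICATOR_STYLES.get(object_type, "custom")
```

For example, `indicator_for("image")` selects the translucent mask, while an unmapped type such as a chart would receive the custom indicator.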
Steps 202-232 of
Application programs generally allow for the user to “select” an object to further manipulate the object. Usually, selecting an object occurs on the MouseDown event either on an object's border or within the object's defined area. If a mouse down event occurs at step 230, object handles or gems may be added at step 232. This is illustrated with respect to
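The handles or gems added at step 232 are conventionally placed at the four corners and four edge midpoints of the selected object's bounding rectangle. The sketch below computes those eight positions; the eight-handle layout is an assumption based on common application behavior, not a requirement of the disclosure.

```python
def selection_handles(left, top, width, height):
    """Return the eight handle ("gem") positions for a selected object:
    the four corners plus the midpoint of each edge of its bounds."""
    xs = (left, left + width / 2, left + width)
    ys = (top, top + height / 2, top + height)
    # Every combination of the three x and three y stops, excluding the
    # center of the rectangle, yields the eight handle positions.
    return [(x, y) for y in ys for x in xs
            if not (x == xs[1] and y == ys[1])]
```

On a MouseDown event over the object, an application would compute these positions from the object's bounds and draw a handle glyph at each.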
Most application programs which support embedded objects also support the capability of organizing such objects in layers. In general, embedded objects inserted into a document are inserted in successive, stacked layers. Generally, objects can be placed in separate layers and freely moved under or over each other. Objects may be managed individually or in groups. In
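The successive, stacked layers described above are commonly represented as an ordered list, with reordering operations to move objects over or under one another. The function names below are illustrative conventions, not terms from the specification.

```python
def bring_to_front(layers, obj):
    """Move `obj` to the top of the stacking order.

    `layers` is an ordered list, bottom layer first; the last element
    is drawn last and therefore appears on top.
    """
    layers.remove(obj)
    layers.append(obj)


def send_to_back(layers, obj):
    """Move `obj` to the bottom of the stacking order."""
    layers.remove(obj)
    layers.insert(0, obj)
```

Grouped objects could be handled the same way by moving each member of the group while preserving their relative order.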
As noted above, unless an object is selected, there is generally no indication of the bounds of the object. For example, a text box object merely appears as text on the document page.
By providing the object indicator on the MouseHover event, the presence of the object is more easily discernible to the user, and the user is better able to manipulate objects. In addition, by providing the indicator in a different color than the document colors, the object is easily separable from other elements of the document. It will be understood that the capability of rendering the halo stroke or transparent box is generally included in the development environment and is well known.
An alternative indicator is shown in
In the example shown in
Translucency is added to the obscured portion of the lower object 320 when displayed, for consistency in indicating the bounds of the object 320. However, by displaying the obscured portion of the lower object, the user can see the entire underlying picture.
In one implementation, the indicator presented in
The technology discussed herein provides the advantage that objects are revealed upon a hover of the pointer. For an image, the technology determines how much of the image is masked, clips the regions of the image that are masked, and replicates those regions above everything else.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.