In today's digital world, the use of graphical user interfaces (GUIs) to display and manage computer information has become ubiquitous. For example, the WINDOWS™ (Microsoft Corporation, Redmond, Wash.) operating systems used in many personal computers employ a GUI desktop that displays information (e.g., text, images, etc.) for a user to view, and provides various icons or indicia with which the user may interact (e.g., a button, an Internet link, etc.). Software applications in these systems count on knowing, in advance, the position from which the user will be viewing the display screen, and will arrange and orient their graphical elements accordingly. For example, for a typical desktop computer, the applications assume that the user will be viewing the display with one particular edge being at the “top,” and will orient text and images with that orientation in mind.
New applications opened on the system assume the same orientation for the display, and present their information aligned in the same manner as other applications (e.g., they share the same understanding of “up” and “down” as other applications on the system). This allows the same user to easily and accurately see the information for the various applications. However, such arrangements may have drawbacks.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, or to limit the appended claims beyond their stated scope.
As described herein, a user interface element can be oriented on a computer display at a location that is determined by the placement of an object on or against the display, and at an orientation that depends on a shadow cast by the object on the display. To accomplish this, a shadow axis may be determined as a line running between the shadow and a point of contact between the object and the display. In some instances, the shadow axis can be displayed to give the user feedback on the orientation that is about to be used.
The interface element can be rotated, taking the shadow axis into consideration, so that the element can be readable by the person placing the object on the display.
The element can also be offset from the actual point of contact (or area of contact). For example, some interface elements may be located above and/or to the side of the point of contact. Other interface elements can have a variable offset that depends on a current value, such as the current volume setting in an audio program.
In some instances, after the interface element is displayed, it can be further moved based on additional movement of the object and/or shadow. For example, further rotation of the shadow can cause re-orientation of the interface element. Movement of the object across the display can cause further corresponding movement of the interface element. Or, the interface can become anchored at its initial position, and further movement can be interpreted as input to the interface (e.g., selecting a new menu option, volume level, etc.).
Various types of interface elements can be used, such as menus, dialog boxes, slider bars, etc. Some interfaces can be radial in nature, and may be centered about the point (or area) of contact.
These and other features will be described in greater detail below.
The features herein are described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that can perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the features may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The features may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to
Computer 110 may include a variety of computer readable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. The system memory 130 may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, may be stored in ROM 131. RAM 132 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation,
The computer 110 may also include other removable/nonremovable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in
When used in a LAN networking environment, the computer 110 may be connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 may include a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
The computing device shown in
The display device 200 may display a computer-generated image on its display surface 201, which allows the device 200 to be used as a display monitor (such as monitor 191) for computing processes, displaying graphical user interfaces, television or other visual images, video games, and the like. The display may be projection-based, and may use a digital light processing (DLP—trademark of Texas Instruments Corporation) technique, or it may be based on other display technologies, such as liquid crystal display (LCD) technology. Where a projection-style display device is used, projector 202 may be used to project light onto the underside of the display surface 201. It may do so directly, or may do so using one or more mirrors. As shown in
In addition to being used as an output display for displaying images, the device 200 may also be used as an input-receiving device. As illustrated in
The device 200 shown in
The device 200 is also shown in a substantially horizontal orientation, with the display surface 201 acting as a tabletop. Other orientations may also be used. For example, the device 200 may be oriented to project a display onto any desired surface, such as a vertical wall. Reflective IR light may also be received from any such oriented surface.
Automatically centering a requested user interface element at the point of contact may, however, result in the user's hand obscuring part of the displayed interface 501. Additionally, there remains the question of orientation. The user may have approached the table from a different side, such as shown in
One such alternative involves shadows.
When a contact is made, the system may also detect the shadow 703 cast by the object making the contact with the display. To detect a shadow, the system may compare light strength or brightness values of pixels surrounding the point of contact, and may identify those pixels receiving dimmer light. For example, some detection systems may use reflected light from below to determine proximity, such that the areas reflecting the most light (brightest) correspond to the point of contact, and areas reflecting less light (darker) correspond to the shadow (in such embodiments, an object's shadow need not depend on an actual shadow cast by lights in the room). The system may identify the range of pixels that make up the shadow 703, and may also identify a single representative (e.g., central) point, or pixel, of the shadow 703. This may be another center of mass calculation. Alternatively, the system may find the darkest point within the shadow 703, and assign that point as the representative point of the shadow. This may be helpful, for example, when the ambient light in the room comes from multiple sources.
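By way of illustration only, the following is a minimal sketch (in Python, with hypothetical function names, a hypothetical search radius, and a hypothetical dimness threshold) of how such a shadow region and its representative point might be located from a per-pixel brightness image; it is not drawn from any particular embodiment described herein.

```python
import numpy as np

def find_shadow_point(brightness, contact_xy, radius=120, dim_thresh=0.35):
    """Locate a representative shadow point near a contact.

    brightness: 2D array of per-pixel reflected-light values in [0, 1]
    contact_xy: (x, y) pixel of the detected contact
    radius: how far around the contact to search (hypothetical value)
    dim_thresh: pixels dimmer than this are treated as shadow (hypothetical)
    """
    h, w = brightness.shape
    cx, cy = contact_xy
    ys, xs = np.mgrid[0:h, 0:w]
    near = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    shadow_mask = near & (brightness < dim_thresh)
    if not shadow_mask.any():
        return None
    # Option 1: center of mass of the shadow pixels
    sy, sx = np.nonzero(shadow_mask)
    centroid = (sx.mean(), sy.mean())
    # Option 2: darkest shadow pixel, which may help when the room
    # contains multiple ambient light sources
    dark_idx = np.argmin(np.where(shadow_mask, brightness, np.inf))
    darkest = (dark_idx % w, dark_idx // w)
    return centroid, darkest
```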
The system may also calculate a shadow axis 704 as a line extending from the shadow 703 to the contact area 702 and beyond. The shadow axis may, for example, connect the representative or central point of the shadow 703 with the point of contact of the contact area 702. The axis 704 need not be displayed, although it may be if desired. For example, the shadow axis may be displayed prior to actually rendering the element, giving the user feedback on the orientation that is about to be used. The system may also calculate an angle θ 705 of the axis 704, as measured with respect to any desired frame of reference. For example, the angle shown in
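Continuing the illustration, a minimal sketch of computing the shadow axis angle from the shadow's representative point and the point of contact, assuming the display's own pixel coordinate system as the frame of reference:

```python
import math

def shadow_axis_angle(contact_xy, shadow_xy):
    """Angle of the line running from the shadow point through the contact
    point, measured from the display's +X axis (an assumed frame of
    reference; any fixed frame could be used)."""
    dx = contact_xy[0] - shadow_xy[0]
    dy = contact_xy[1] - shadow_xy[1]
    return math.degrees(math.atan2(dy, dx))
```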
The contact point, shadow, axis and/or axis angle may be used to determine how a requested user interface element will appear on the display. A requested user interface element may be placed at a location that depends on the determined point of contact, and oriented at an angle that corresponds to, depends on, or equals the axis angle 705.
Such placement and orientation are made possible by definition information used to initially define the user interface element (e.g., prior to its actual use in the system).
The element 801 may be defined with respect to a reference coordinate system 804 (e.g., having X- and Y- dimensions and coordinates using system 804 as the frame of reference in a Cartesian coordinate system, or using any other desired coordinate system). As illustrated in the
As noted above, a user interface element may be defined using one or more offsets from a point of contact. The
The offsets may each be a predetermined fixed value (e.g., 100 pixels, one inch, etc.). Alternatively, the offsets may each be a variable value.
If, alternatively, the volume setting had been higher, such as at a setting of “5”, when the request was made, the software may use different values for A and B such that the contact point 1102b is aligned with the current volume level in the volume control interface element 1101b.
The volume control example above illustrated a shifting of the interface along one axis (i.e., the X-axis of the interface), although shifting along more than one axis is also possible if desired. For example, the interface could also adjust its Y-axis location based on another parameter. Additionally, the axis (or axes) of the interface (including the axis of the shifting) may be rotated by or to an angle depending on the shadow axis rotation described above, such that a user can approach the display from any angle and request, for example, a volume control interface element that appears horizontally aligned from that user's point of view.
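By way of illustration only, the sketch below (hypothetical names, a y-down pixel coordinate system, and an arbitrary pixels-per-volume-step value) shows one way the definition-time offsets, including the variable offset of the volume example, might be rotated by the shadow-axis angle and applied at the contact point:

```python
import math

def place_element(contact_xy, axis_angle_deg, offset_x, offset_y):
    """Rotate the element's definition-time offsets by the shadow-axis
    angle and add them to the contact point to get the element's anchor."""
    a = math.radians(axis_angle_deg)
    rx = offset_x * math.cos(a) - offset_y * math.sin(a)
    ry = offset_x * math.sin(a) + offset_y * math.cos(a)
    return contact_xy[0] + rx, contact_xy[1] + ry

def volume_slider_offset(current_volume, pixels_per_step=30, fixed_y=-40):
    """Variable X offset so the contact point lands on the slider's current
    volume level; both numeric values are hypothetical."""
    return -current_volume * pixels_per_step, fixed_y

# Example: the user touches the display while the volume is at "5"; the
# slider is shifted so the finger sits over the "5" mark, and the whole
# element is rotated to face the user as indicated by the shadow axis.
ox, oy = volume_slider_offset(5)
anchor = place_element((400, 300), 37.0, ox, oy)
```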
Once a requested interface is rendered at the desired position and orientation, the interface may maintain that position for further interaction. In the volume control example described above, the volume control interface element 1101a,b may remain anchored at its initial position, allowing the user to further adjust the volume by sliding his/her finger along the axis of volume control. Although the example above describes a volume control, the same features may be applied to other types of interfaces, such as other slider controls (graphic equalizer settings, video color settings, brightness settings, etc.), pull-down menus, two-dimensional grids, etc. Another example involves a two-dimensional radial interface element 1110, such as that shown in
Because some applications allow users to request interfaces at various locations on the display, it is possible that some users may place a contact point close to the edge of the display, such that the normal offset and shadow axis rotation features described above may cause the interface to partly (or wholly) appear off of the display. Alternatively, there may be other objects resting on the display surface, or graphical elements displayed on the surface, that the resulting interface should avoid overlapping. In these instances, the computer system may further include instructions on how to adjust the position of the requested element if it overlaps another element or runs off the edge of the display. For example, the interface element may be defined to have one or more default movement axes, and may automatically move along the axis (axes) to avoid overlapping or running off the display. For example, an interface may be defined such that it is automatically moved perpendicularly away from an edge of the display (or away from another object/element) by a sufficient distance such that it no longer extends beyond the edge of the display (or overlaps the object/element). The interface element may also rotate to avoid a display edge or another object/element. These “movements” and “rotations” may be rendered on the display, or they may occur invisibly to generate the final result on the screen.
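By way of illustration only, a minimal sketch of the simplest such adjustment, clamping an axis-aligned bounding box so that it stays on the display; a rotated element, or movement along the element's own default movement axes, would require a correspondingly adjusted computation:

```python
def keep_on_display(x, y, width, height, disp_w, disp_h):
    """Shift an element's top-left corner so its bounding box stays fully
    on the display, moving it perpendicularly away from any edge it
    would otherwise cross."""
    x = min(max(x, 0), disp_w - width)
    y = min(max(y, 0), disp_h - height)
    return x, y
```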
As a further feature, changes in the shadow axis may result in further rotation of the displayed element. For example, after the interface element has initially been displayed, the user may change the orientation of his/her hand (e.g., by walking around the table to another side), and the system may continuously or periodically monitor the progress and change in the shadow axis, and may rotate the interface element accordingly.
With the element defined, the process may move to step 1202 to await a request for the element. The request can come in any desired manner. For example, some elements may be defined such that they are requested whenever a certain type of input is made on the display itself, such as a touch and hold of a finger, placement of a predefined object, touching/placing in a predetermined location on the display, etc. Other inputs may also be used, such as a keyboard command, spoken command, foot pedal, etc. Still other inputs may originate as requests made by a software application or the operating system, as opposed to (or in addition to) the user.
If no request is made, the process may return to simply await a request. If a request is made, however, the process may move to step 1203 to identify a contact point of the request. The contact point is generally the location where the user has placed a finger (or other object) to indicate a location for the desired interface element, and may be a single point, pixel or an entire contact area.
The system may also, in step 1204, identify the shadow area of the finger/object used to make the contact. This may also be a single point or pixel. Based on the contact point and shadow, the system can make some high-probability inferences about what sort of object is making the contact. In particular, it is possible to infer with a high degree of accuracy that a contact is a finger touch from the shape, size and aspect ratio of the contact point (along with its shadow). For example, predefined shapes, sizes and/or aspect ratios may be stored in advance, and may be compared to a detected contact to identify matches.
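By way of illustration only, a minimal sketch of such a comparison, using hypothetical size and aspect-ratio thresholds in place of stored templates:

```python
def looks_like_finger(contact_w, contact_h, shadow_len,
                      size_range=(8, 25), aspect_range=(0.6, 1.7),
                      min_shadow=20):
    """Rough heuristic: a fingertip contact is a small, roughly round blob
    trailed by an elongated shadow (the rest of the finger and hand).
    All thresholds are hypothetical and expressed in pixels."""
    size_ok = size_range[0] <= max(contact_w, contact_h) <= size_range[1]
    aspect = contact_w / contact_h if contact_h else 0
    aspect_ok = aspect_range[0] <= aspect <= aspect_range[1]
    return size_ok and aspect_ok and shadow_len >= min_shadow
```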
With the contact point and shadow point, the system may then determine the shadow axis and calculate a rotation angle for the request in step 1205. The system may then move to step 1206, and generate the display of the requested element using the shadow axis as a vertical axis (e.g., rotating the element's frame of reference to align with the shadow axis), and using the offset data (if any) with respect to the point of contact. If desired, the element may also be positioned (or repositioned) to avoid extending beyond the displayable edge of the display surface, or to avoid overlapping other objects or display elements.
The system may also incorporate the user's handedness (e.g., left or right) when interpreting a shadow, and may include an angular offset to account for differences between a right- and left-hand input (e.g., a person's right hand touching a spot directly in front will cast a shadow extending down and right, and the system may incorporate an extra rotation of the element to account for that). The system may be pre-instructed as to the user's handedness (e.g., through a query to the user, predefined configuration parameter, etc.), or the system may determine the user's handedness by comparing the shape of the shadow with a predefined template of right- and left-hands.
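By way of illustration only, a minimal sketch of applying such an angular offset; the 15-degree correction and its sign are hypothetical and would depend on the display's coordinate convention:

```python
def corrected_angle(axis_angle_deg, handedness, correction_deg=15.0):
    """Add an angular offset so that, e.g., a right-handed touch (whose
    shadow trails down and to the right of the finger) still yields an
    element oriented squarely toward the user. The correction value and
    sign convention are hypothetical."""
    if handedness == "right":
        return axis_angle_deg + correction_deg
    if handedness == "left":
        return axis_angle_deg - correction_deg
    return axis_angle_deg
```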
When the requested element is displayed, the process may move to step 1207 to allow further use and/or manipulation of the element. For example, the user may interact with the element by moving a slider bar, selecting an element from a pull-down menu, or by rotating the element by changing the shadow axis (e.g., keeping a finger placed against the display while walking around an edge of a tabletop display). The process may then return to step 1202 to await another request for the element.
The discussion above refers to the use of shadows to infer orientation. Orientation can be inferred through other approaches as well. For example, an object may have a determinable orientation (e.g., a piece of paper, or a pattern printed on the bottom of a physical object). For example, the bottom of a physical object might have an arrow pointing “up.” Rotating the physical object on the display can cause a corresponding rotation in the orientation of an associated interface element. The same can be done with the physical shape of an object (e.g., a physical object might be shaped like an arrow).
In alternative arrangements, a stylus may be used for input, and may include 3-dimensional sensing components to allow the system to detect the 3-dimensional orientation with which the stylus was held during the input. For example, the stylus may have a plurality of electromagnetic circuits responsive to one or more magnetic fields generated by the display, and the display may detect the positions of these circuits to infer the angle at which the stylus is being held.
As another alternative, the area of physical contact may itself be used to infer an orientation. For example, if the user places his/her finger against the screen, the area of contact may be wider towards the user's hand (i.e., the fingertip narrows). Detecting this wider area may allow the system to infer the direction from which the user's finger touched the display, and then use this information to determine the corresponding interface element orientation. Fingerprint patterns (e.g., the whorls, arches and loops) may also be detected and used to infer the orientation based on a predetermined fingerprint orientation.
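By way of illustration only, a minimal sketch of estimating such an orientation from the contact blob alone, using the blob's principal axis (via a small eigen-decomposition of the pixel coordinates) pointed toward its wider end; the function name and input format are hypothetical:

```python
import numpy as np

def orientation_from_contact(mask):
    """Estimate the direction from fingertip toward hand using only the
    contact blob: compute the blob's principal axis and point it toward
    the half of the blob with more pixels, i.e., the wider end nearer
    the hand. mask is a 2D boolean array of contact pixels."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centered = pts - pts.mean(axis=0)
    # principal axis = eigenvector of the covariance with largest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    axis = eigvecs[:, np.argmax(eigvals)]
    # flip the axis so it points toward the heavier (wider) half of the blob
    proj = centered @ axis
    if (proj > 0).sum() < (proj < 0).sum():
        axis = -axis
    return np.degrees(np.arctan2(axis[1], axis[0]))
```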
As a further alternative, areas of the display may be preassigned to different orientations. For example, a rectangular display may be divided into quadrants by diagonals, and within each quadrant interface elements could be oriented to be read by a person located directly facing that quadrant.
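By way of illustration only, a minimal sketch of such a preassignment, dividing a rectangular display into four triangular regions along its diagonals and mapping each region to a fixed reading orientation; the angle labels and the y-down pixel coordinate system are assumptions:

```python
def quadrant_orientation(x, y, disp_w, disp_h):
    """Divide the display into four triangles with its two diagonals and
    return a rotation (degrees) that makes text readable for a viewer
    facing that quadrant. Assumed convention: 0 = viewer at the bottom
    edge, 90 = right edge, 180 = top edge, 270 = left edge."""
    # normalize so the diagonals become the lines v = u and v = 1 - u
    u, v = x / disp_w, y / disp_h
    if v >= u and v >= 1 - u:
        return 0      # bottom quadrant
    if v <= u and v <= 1 - u:
        return 180    # top quadrant
    if u > v:
        return 90     # right quadrant
    return 270        # left quadrant
```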
Using one or more of the features and approaches described above, users' experiences with various orientations can be improved. Although the description above provides illustrative examples and sequences of actions, it should be understood that the various examples and sequences may be rearranged, divided, combined and subcombined as desired. For example, steps and features described may be omitted, or additional steps and features may be added. Accordingly, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.