Desktop computers once reigned as the most common personal computer configuration, leading software developers to create content designed for optimal rendering on a desktop display. For example, website developers often favor rich, dense content for a web page so that the display “real estate” on a viewing device can be utilized to its fullest extent. One factor driving website developers toward rich, dense web page content is the fact that third party entities are willing to pay for their content (e.g., advertisements) to be provided on a content provider's web page. This means that content providers effectively lose money when they choose to leave empty space on a web page.
Designing content that is rich and dense is generally a nonissue with desktop displays. For instance, an average user whose eyes are positioned roughly a foot away from a 19 inch desktop display is capable of unassisted reading of dense content rendered on the display, and is further able to navigate and browse the content by manipulating an on-screen cursor with a mouse or a similar pointing device.
As computing technology has advanced, however, computing devices having a small form factor have become ubiquitous. For example, many individuals own a smart phone (typically with a display size in the range of about 4 to 5 inches) and take it with them everywhere they go. Furthermore, consumers are now becoming familiar with the practice of surfing the Internet from the comfort of their own living room on a home television (TV) display. In either scenario, at least some content that is rendered on the user's display may be difficult to read and/or select when attempting to interact with the content. With respect to small form factor devices, readability and/or selectability issues stem from rendering dense content on a small display. A similar issue arises in the living room TV scenario when a user is situated at a substantial distance from the display that makes it difficult to read and/or select content provided in a rich, dense layout. As a consequence, users continue to experience frustration when navigating and browsing content on their consumer devices.
Described herein are techniques and systems for enabling “hover-based” interaction with content that is rendered on a display of a viewing device. The term “hover” (sometimes called “three-dimensional (3D) touch”) is used to describe a condition where an object is positioned in front of, but not in contact with, the front surface of the display, and is within a predetermined 3D space or volume in front of the display. Accordingly, a hovering object may be defined as an object positioned in front of the display of the computing device within the predetermined 3D space without actually contacting the front surface of the display. The dimensions of the 3D space to which hover interactions are constrained, and particularly a dimension that is perpendicular to the front surface of the display, may depend on the size of the display and/or the context in which the display is used, as will be described in more detail below.
In some embodiments, a process of enabling hover-based interaction with content includes rendering the content on a display, detecting an object in front of, but not in contact with, a front surface of the display, and in response to detecting the object, determining a location on the front surface of the display that is spaced a shortest distance from the object relative to distances from the object to other locations on the front surface. The determined location on the front surface of the display may then be used to determine a portion of the content that is rendered at the location or within a threshold distance from the location, and a magnified window of the portion of the content may then be displayed in a region of the display. In some embodiments, the portion of the content within the magnified window is actionable by responding to user input when the user input is provided within the magnified window. Systems and computer-readable media for implementing the aforementioned process are also disclosed herein.
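For illustration only, the process summarized above might be sketched as follows in TypeScript; all of the types and helper names here (e.g., Hover, nearestLocation, contentNear, showMagnifiedWindow) are hypothetical placeholders introduced for this sketch, not part of any disclosed API.

```typescript
// Minimal sketch of the summarized process (hypothetical types and helpers).

type ContentPortion = { html: string };

interface Hover {
  x: number;          // coordinates parallel to the display, in pixels
  y: number;
  z: number;          // perpendicular distance from the front surface, in inches
}

// Resolve the hovering object to the on-screen location spaced the shortest
// distance from it: the perpendicular projection (x, y) onto the front surface.
function nearestLocation(hover: Hover): { x: number; y: number } {
  return { x: hover.x, y: hover.y };
}

// Hypothetical hit test returning whatever content is rendered at, or within a
// threshold distance from, the location.
function contentNear(loc: { x: number; y: number }, thresholdPx: number): ContentPortion {
  return { html: `<p>content near (${loc.x}, ${loc.y}) within ${thresholdPx}px</p>` };
}

// Display a magnified window of the portion; in a real implementation the portion
// would remain actionable (input provided within the window is handled by it).
function showMagnifiedWindow(portion: ContentPortion): void {
  console.log("magnified window:", portion.html);
}

const hover: Hover = { x: 220, y: 410, z: 0.3 };
showMagnifiedWindow(contentNear(nearestLocation(hover), 20));
```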
By displaying a magnified window in a region of the display in response to detecting an object hovering in front of the display, a user may experience enhanced browsing and navigation of rendered content. Specifically, the rendered content may remain at a lowest zoom level (i.e., zoomed out), and the user may conveniently identify portions of the rendered content that are of interest to the user without changing the zoom level of the rendered content. In other words, the magnified window feature eliminates the steps required to pinch and zoom (and potentially pan) the content in order to find, read, and/or select content rendered on the display, saving the user time and eliminating frustration when browsing content. Upon finding an interesting portion of the content via the magnified window, the user may then have the ability to zoom to the portion of interest via a user input command. Moreover, the magnified window feature also enables content providers to continue to design content that is rich and dense without expending resources on “mobile” versions of their content (e.g., mobile sites) that tend to remove content from their site, which in turn leads to lost revenue.
In some embodiments, the actionable content that is rendered on a display of a viewing device is configured to respond to received hover interactions by modifying the rendered content and/or rendering additional content in response to the detected hover interactions. In this scenario, hover-based interaction with the rendered content may be enabled by a process that includes rendering content on a display, detecting an object in front of, but not in contact with, a front surface of the display, and in response to detecting the object, identifying a pointer event associated with a portion of the content underneath the object. A display-related function associated with the identified pointer event may be determined and performed to modify the rendered portion of the content and/or render additional content on the display. In some embodiments, the hover interaction from the object may be provided within the magnified window such that the portion of the content in the magnified window is modified and/or additional content is rendered within the magnified window as it would be outside of the magnified window.
This Summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.
Embodiments of the present disclosure are directed to, among other things, techniques and systems for enabling “hover-based” interaction with content that is rendered on a display of a viewing device. Although examples are provided herein predominantly with reference to a mobile computing device (e.g., a smart phone), it is to be appreciated that the techniques and systems are not limited to mobile devices. For instance, viewing devices that may benefit from the techniques disclosed herein may include, without limitation, mobile devices (e.g., smart phones, tablet computers, portable media players, wearable computers, etc.), as well as television (TV) displays, displays implemented within moving vehicles (e.g., navigation displays in automobiles, aircraft, etc.), and the like. In this sense, displays described herein over which hover interactions may be detected may be mobile (e.g., integrated into a mobile computing device, vehicle, etc.) or situated (e.g., wall mounted displays).
The characteristics of the hover-based input that may be provided to the variety of devices contemplated herein may vary with the size of the device, the context of the device's use, and/or the hardware (e.g., sensors) enabling such hover-based input. For example, a TV display in a living room may have a large screen size, may be stationary, and may utilize an image capture device (e.g., a depth camera) to detect hover interactions. By contrast, a small, mobile device, such as a smart phone, may utilize a sensor or sensor array embedded in the display itself (e.g., a capacitive-based touch screen sensor with proximity sensing capabilities). It is to be appreciated that, no matter the device type, sensors, or context of use, “hover,” as used herein, may reference a physical state of an object that is positioned within a predetermined 3D space in front of the display without actually contacting the front surface of the display. The dimensions of the predetermined 3D space may be defined by a two-dimensional (2D) area on the display and a distance in a direction perpendicular to the front surface of the display. In this sense, objects that are positioned outside of the 2D area on the display, contacting the display, or beyond a threshold distance in a direction perpendicular to the front surface of the display may be considered to not be in a hover state.
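One way to express the hover condition described above is as a simple predicate over the object's position. The sketch below is illustrative only; the particular 2D area and perpendicular threshold values are assumptions, not values taken from this description.

```typescript
// Hedged sketch: is an object in a "hover" state with respect to the display?
// The 2D hover area and the perpendicular threshold are illustrative assumptions.

interface Rect { left: number; top: number; right: number; bottom: number; } // pixels
interface ObjectPosition { x: number; y: number; z: number; }                // z in inches

function isHovering(pos: ObjectPosition, hoverArea: Rect, maxZInches: number): boolean {
  const insideArea =
    pos.x >= hoverArea.left && pos.x <= hoverArea.right &&
    pos.y >= hoverArea.top && pos.y <= hoverArea.bottom;
  const notTouching = pos.z > 0;            // contact would be a touch event, not a hover
  const withinRange = pos.z <= maxZInches;  // beyond the threshold is not a hover state
  return insideArea && notTouching && withinRange;
}

// Example: a fingertip 0.4 inches above the middle of a 1080x1920 display.
console.log(isHovering({ x: 540, y: 960, z: 0.4 },
                       { left: 0, top: 0, right: 1080, bottom: 1920 },
                       8)); // true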
The techniques and systems described herein may be implemented in a number of ways. Example implementations are provided below with reference to the following figures.
The computing device 102 may be implemented as any number of computing devices (nonlimiting examples of which are shown in
The computing device 102 may be equipped with one or more processors 104 and system memory 106. Depending on the exact configuration and type of computing device, the system memory 106 may be volatile (e.g., random access memory (RAM)), non-volatile (e.g., read only memory (ROM), flash memory, etc.), or some combination of the two. The system memory 106 may include, without limitation, an operating system 108, a browser module 110, program data 112, and a local content store 114 accessible to the processor(s) 104.
The operating system 108 may include a component-based framework 116 that supports components (including properties and events), objects, inheritance, polymorphism, reflection, and provides an object-oriented component-based application programming interface (API), such as that of the Win32™ programming model and the .NET™ Framework commercially available from Microsoft® Corporation of Redmond, Wash. The API provided by the component-based framework 116 may comprise a set of routines, protocols, and/or tools associated with the operating system 108 and/or an application program of the operating system 108 that provides an interface with the operating system 108 and/or associated application programs.
The operating system 108 may further include a hover interface module 118 configured to enable hover-based interaction with a display of the computing device 102 and the content rendered thereon. In general, the operating system 108 may be configured with one or more stacks to drive a standard class of human interface devices (HIDs) (e.g., keyboards, mice, etc.) as well as to enable touch-screen input (i.e., contact-based input with an associated display). The hover interface module 118 additionally enables the computing device 102 to determine and interpret hover-based input received from objects (e.g., a user's finger or hand, a stylus, a pen, a wand, etc.) that hover in front of an associated display, and to perform display-related functions pertaining to the hover-based input. In order to determine and interpret hover-based input from an object, the hover interface module 118 may rely on one or more additional hardware and/or software components of the computing device 102, such as the browser module 110 and one or more hardware sensors of the computing device 102 that are configured to detect a hovering object (i.e., an object in front of, but not contacting, the display of the computing device 102).
The browser module 110 may be configured to receive content, and to render the received content via a browser (e.g., a web browser) on a display of the computing device 102. Execution of the browser module 110 may, for example, provide access to a website by rendering web pages served by the website on an associated display. The browser module 110 may be further configured to interact with the hover interface module 118 via the API of the operating system 108 for enabling hover-based interaction with content rendered via the browser. The content to be rendered may comprise documents, applications, web content, and the like, which may be received/accessed from the local content store 114 when the content is stored locally on the computing device 102, or from remote sources, such as from the other computing devices 120 shown in
In some embodiments, the content received by the browser module 110 may comprise web page content based on hyper text markup language (HTML) code that configures the content to be “actionable” in that the content is responsive to user input. Any suitable scripting language (e.g., JavaScript, Jscript, European Computer Manufacturers Association script (ECMAScript), etc.) or program (e.g., Java applet) may be utilized for enabling actionable content, including content that may be linked to hover functionality. In this sense, the content received by the browser module 110 may be coded with event-driven programming languages to register event handlers/listeners on element nodes inside a document object model (DOM) tree for any type of content. One suitable event model that may be utilized for making content actionable is the World Wide Web Consortium (W3C) model for pointer events, including hover events.
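For instance, content may be made hover-aware by registering W3C pointer event listeners on DOM element nodes. The following is a generic browser-side sketch; the element id, CSS class, and handler behavior are illustrative assumptions rather than anything prescribed here.

```typescript
// Illustrative browser-side sketch: registering W3C pointer event listeners on a DOM
// node so the content responds to hover (pointerover/pointerout) and to contact
// (pointerdown). The element id and the styling applied are assumptions.

const link = document.getElementById("news-headline");

if (link) {
  link.addEventListener("pointerover", () => {
    // Hover-based display modification, e.g., emphasize the element.
    link.classList.add("hover-highlight");
  });

  link.addEventListener("pointerout", () => {
    // Remove the modification when the pointer leaves the element.
    link.classList.remove("hover-highlight");
  });

  link.addEventListener("pointerdown", (ev: PointerEvent) => {
    // ev.pointerType distinguishes "mouse", "pen", and "touch" contacts.
    console.log(`pointerdown from a ${ev.pointerType} input`);
  });
}
```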
In an illustrative example, the content received by the browser module 110 may comprise web page content that includes selectable (i.e., actionable) text that responds to selection input by modifying the selected text with highlighting, text selection grippers, or other suitable display-based modification. As another example, the content on a web page may include links (e.g., hyperlinks) to other web pages or sites, video or audio playback buttons for embedded video/audio content, and so on. Accordingly, upon selection of such actionable content, the content may respond by navigating to another webpage or playing back video/audio files, respectively. When hover events are associated with portions of the content, those portions may be actionable by changing in appearance (i.e., display modification) or by rendering additional content (e.g., a drop down menu, pop-up bubble with information about the content) in response to a cursor being positioned over the content, and these display modifications and/or additional content may disappear from the display when the cursor is moved away from the hover-enabled content.
The computing device 102 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in
In some embodiments, any or all of the system memory 106, removable storage 122 and non-removable storage 124 may store programming instructions, data structures, program modules and other data, which, when executed by the processor(s) 104, implement some or all of the processes described herein.
In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media.
The computing device 102 may also include one or more input devices 126 such as a keyboard, pointing devices (e.g., mouse, touch pad, joystick, etc.), a pen, stylus, or wand, a touch screen (e.g., capacitive, resistive, infrared, surface acoustic wave (SAW), optical), a camera (e.g., 3D sensor), a proximity sensor, a microphone, etc., through which a user may enter commands and information into the computing device 102. Although the input device(s) 126 are shown in
In some embodiments, the input device(s) 126 may include one or more proximity-based sensors 128 configured to detect an object hovering in front of a display of the computing device 102. The proximity-based sensor(s) 128 enable the computing device 102 to differentiate between contact-based touch events and non-contact (i.e., hover) interactions, rather than merely detecting objects near the display and resolving the detected object as a contact-based touch event. In this sense, the computing device 102 may be considered to be “hover-capable” because it may detect hover interactions and touch/contact interactions in a mutually exclusive manner.
The proximity sensor(s) 128 may include any suitable proximity sensing technology. One illustrative example of a suitable proximity sensing technology is a capacitive sensor or sensor array configured to detect an object hovering in front of the display of the computing device 102. Such a capacitive sensor or sensor array may include a two-dimensional (2D) grid of electrodes substantially spanning an area of the display screen of the computing device 102 with voltage applied to the electrodes so that the electrodes are configured to measure capacitance changes at each electrode. Capacitance changes at the electrodes may be influenced by an object (such as a human finger) in proximity to the electrodes such that a location on the front surface of the display that the object is closest to can be pinpointed based on electrodes that measure corresponding capacitance changes. In order to sense a hovering object, a capacitive sensor or sensor array may be based at least in part on self capacitance, which is known to provide stronger signal sensing as compared to mutual capacitance sensors so that an object may be detected in front of the front surface of the display without the object contacting the display. A proximity sensor 128 based on a combination of self capacitance and mutual capacitance may enjoy the benefits of both types of capacitive sensors, namely proximity sensing and multi-touch (i.e., detecting multiple touch locations at the same time), respectively. In some instances, the proximity sensor(s) 128 may be configured to detect an object in front of the display that is at a distance within the range of about 0.001 inches to about 8 inches from the front surface of the display in a direction perpendicular to the front surface.
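As a rough illustration of how a location may be pinpointed from such a grid, the electrode reporting the largest capacitance change can be taken as the point on the front surface closest to the object. The sketch below assumes a uniform electrode grid and pitch; it is not a description of any particular sensor's firmware.

```typescript
// Hedged sketch: pinpointing the on-screen location closest to a hovering object
// from a 2D grid of capacitance deltas. Grid size and pitch are assumptions.

function pinpointLocation(
  deltas: number[][],   // capacitance change measured at each electrode [row][col]
  pitchPx: number       // spacing between electrodes, in display pixels
): { x: number; y: number } {
  let best = { row: 0, col: 0, value: -Infinity };
  for (let row = 0; row < deltas.length; row++) {
    for (let col = 0; col < deltas[row].length; col++) {
      if (deltas[row][col] > best.value) {
        best = { row, col, value: deltas[row][col] };
      }
    }
  }
  // The electrode with the largest change is treated as directly beneath the object.
  return { x: best.col * pitchPx, y: best.row * pitchPx };
}

// Example: a 4x4 electrode patch where the strongest response is at row 2, col 1.
const sample = [
  [0.1, 0.1, 0.0, 0.0],
  [0.2, 0.6, 0.3, 0.1],
  [0.3, 0.9, 0.4, 0.1],
  [0.1, 0.3, 0.2, 0.0],
];
console.log(pinpointLocation(sample, 40)); // { x: 40, y: 80 }
```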
One example of a relatively “long range” input device 126 configured to detect an object positioned in front of the display of the computing device 102 is a depth camera (e.g., the Kinect® sensor used with the Xbox® console system from Microsoft® Corporation of Redmond, Wash.). A depth camera may be configured to capture image data and depth information using any suitable technique such as time-of-flight (ToF), structured light imaging, stereo imaging, and the like. In some instances, the proximity sensor(s) 128 with longer range sensing capabilities may be configured to detect an object in front of the display that is at a distance within the range of about 20 inches to about 170 inches from the front surface of the display.
It is to be appreciated that the input device(s) 126 are not limited to the examples described above, and any suitable proximity sensor(s) 128 may be used to detect an object hovering in front of a display of the computing device 102, including, but not limited to, inductive, magnetic, ultrasonic, or other suitable proximity sensors 128.
The computing device 102 may also include output device(s) 130, such as a display 132 (e.g., a liquid crystal display (LCD), plasma, rear projection, etc.), one or more speakers, a printer, or any other suitable output device coupled communicatively to the processor(s) 104. The output device(s) 130 may generally be configured to provide output to a user of the computing device 102. In some embodiments, the output device(s) 130 may be integrated into the computing device 102 (e.g., an embedded display 132), or provided externally as a peripheral output device 130 (e.g., a peripheral display 132).
When a hover event is detected, the hover interface module 118 may cause performance of a display-related function that is reflected on the display 132 of the computing device 102. Display-related functions that may be performed in response to detection of a hover event include, without limitation, displaying a magnified window 142 of a portion of the content 138 (e.g., a portion of the content 138 underneath the object), modifying the display of the portion of the content 138, and/or rendering additional content in association with the portion of the content 138. The portion of the content 138 rendered within the magnified window 142 of
The computing device 102 may operate in a networked environment and, as such, the computing device 102 may further include communication connections 142 that allow the device to communicate with the other computing devices 120, such as remotely located content providers. The communication connections 142 are usable to transmit and/or receive data, such as content that may be stored in the local content store 114.
In some embodiments, the magnified window 142 is of a width, w, that may be less than an overall width of the display 132. In one example, the width, w, is no greater than about 75% of the width of the display 132. In another example, the width, w, is no greater than about 50% of the width of the display 132. In yet another example, the width, w, is no greater than about 25% of the width of the display 132. Restricting the width, w, to a fraction of the width of the display 132 may allow for optimal magnification of the portion 306 of the content 138 that is rendered within the magnified window 142, and may facilitate selectively browsing the content in a left-to-right manner, or vice versa.
In some embodiments, the region in which the magnified window 142 is displayed may include a bottom boundary that is located a predetermined distance, b, from the location 304 on the front surface 300 corresponding to the object 302. For example, the predetermined distance, b, may be within the range of about 0.2 inches to about 0.5 inches. In another example, the predetermined distance, b, may be no greater than about 1 inch. Rendering the magnified window 142 at a predetermined distance, b, above the location 304 may prevent the object 302 from obstructing the user's view of the content within the magnified window 142 or a portion thereof.
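Combining the two constraints above, the region for the magnified window might be computed roughly as follows. The 50% width fraction reflects one of the example values given; the bottom offset is expressed here in pixels rather than inches, and the clamping at the display edges is an added assumption for illustration.

```typescript
// Hedged sketch: sizing and positioning the magnified-window region.
// widthFraction mirrors an example value above; bottomOffsetPx and edge clamping
// are assumptions made for this illustration.

interface Region { left: number; top: number; width: number; height: number; }

function magnifiedWindowRegion(
  location: { x: number; y: number },  // point on the front surface nearest the object
  displayWidth: number,
  windowHeight: number,
  widthFraction = 0.5,                  // e.g., no greater than about 50% of the display width
  bottomOffsetPx = 40                   // distance b between the location and the bottom boundary
): Region {
  const width = displayWidth * widthFraction;
  // Center the window horizontally over the location, clamped to the display edges.
  const left = Math.min(Math.max(location.x - width / 2, 0), displayWidth - width);
  // The bottom boundary sits bottomOffsetPx above the location so the hovering object
  // does not obstruct the user's view of the magnified content.
  const top = location.y - bottomOffsetPx - windowHeight;
  return { left, top, width, height: windowHeight };
}

console.log(magnifiedWindowRegion({ x: 300, y: 900 }, 1080, 200));
// { left: 30, top: 660, width: 540, height: 200 }
```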
In some embodiments, the hover interface module 118 may identify an input gesture from the object 302 before causing the display of the magnified window 142. In this sense, the magnified window 142 may be displayed in response to receipt of the input gesture, which may take any suitable form. One example input gesture that may trigger the display of the magnified window 142 is in the form of the object 302 remaining in a first position shown in
Although the magnified window 142 shown in
In some embodiments, the user input provided within the magnified window 142, as illustrated in
In some embodiments, the user input provided within the magnified window 142 may comprise a hover interaction from the object 302. For example, the object 302 may hover over the content rendered within the magnified window 142 at a distance from the front surface 300 of the display 132, and the actionable content, if coded to respond to hover events, may respond to the hover-based user input by performing a display-related function, such as changing the appearance of the content (e.g., highlighting the circle of the video playback button), displaying additional content, and so on.
In some embodiments, other criteria may be utilized to selectively render the magnified window 142 on the display 132. One example criterion may be that the object 302 moves across the front surface 300 of the display 132 above a predetermined speed (i.e., the object moves too fast). This criterion was alluded to while describing the movement of the object 302 below a predetermined speed with reference to
For further illustration, Table 1 shows an example of the primary “contact” logic that may be utilized by the hover interface module 118 to determine how to respond to an object 302 interacting with the display 132 via a combination of hover-based input and touch-based input when the magnified window 142 is not currently displayed.
Additionally, the following are example event ordering scenarios that may be followed by the hover interface module 118 in response to various interactions between the object 302 and the display 132:
Touch Down:
Touching down on an element rendered on the front surface 300 of the display 132 may produce the following sequence of events on the hit tested node of the WM_PointerDown message: mousemove, pointerover, mouseover, mouseenter, pointerdown, mousedown.
Lifting Up:
Lifting up from an element rendered on the front surface 300 of the display 132 may produce the following sequence of events on the hit tested node of the WM_PointerUp message: pointerup, mouseup, pointerout, mouseout, mouseleave.
Moving the Contact (Object 302 Remains In-Contact):
Moving the contact on the screen while in-contact (after touching down) may produce the following sequence:
Here, the [ ] brackets indicate events that fire when the new hit-test result is not equal to the previous hit-test result. These events are fired at the previous hit-test result. The { } braces indicate events that fire when the update transitions in/out of the bounds of an element.
Moving the Contact—HOVER:
Moving the object to produce coordinate updates without being in contact with the screen (near-field input devices/objects) may produce the following sequence of events:
Moving the Contact Causing Manipulation to Begin:
When direct manipulation takes over the primary contact for the purposes of manipulation (signaled by a WM_POINTERCAPTURECHANGE message), then the following events may be dispatched: pointerout, mouseout, mouseleave, pointercancel, mouseup. Here, the “mouseup” event targets the HTML element (window).
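To make the event ordering concrete, the sketch below dispatches the touch-down and lift-up sequences listed above as synthetic DOM events against a hit-tested node. This only demonstrates the ordering; it is not how the hover interface module actually injects input, and the coordinates used are arbitrary.

```typescript
// Illustrative sketch: firing the touch-down and lift-up event sequences listed above
// against a hit-tested node. Real input injection happens at a lower level; this only
// demonstrates the ordering of the events.

const TOUCH_DOWN_SEQUENCE = [
  "mousemove", "pointerover", "mouseover", "mouseenter", "pointerdown", "mousedown",
];

const LIFT_UP_SEQUENCE = [
  "pointerup", "mouseup", "pointerout", "mouseout", "mouseleave",
];

function dispatchSequence(target: Element, sequence: string[]): void {
  for (const type of sequence) {
    // PointerEvent is used for the pointer* types; MouseEvent for the mouse* types.
    const ev = type.startsWith("pointer")
      ? new PointerEvent(type, { bubbles: true })
      : new MouseEvent(type, { bubbles: true });
    target.dispatchEvent(ev);
  }
}

// Example: simulate a tap on whatever element sits at (120, 340).
const hit = document.elementFromPoint(120, 340);
if (hit) {
  dispatchSequence(hit, TOUCH_DOWN_SEQUENCE);
  dispatchSequence(hit, LIFT_UP_SEQUENCE);
}
```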
At 1202, the browser module 110 may render content 138 on the display 132 of the computing device 102. As described above, at least some of the content 138 may be actionable by responding to user input. For instance, the content 138 rendered on the display 132 at 1202 may comprise web page content that includes, without limitation, some or all of interactive text (e.g., selectable text), links (e.g., hyperlinks), soft buttons (e.g., video/audio playback buttons), and the like, that, in response to user input, may cause performance of a navigation function (e.g., navigating the browser to a different web page upon selection of a link), a display-related function (e.g., modifying the display of the content 138, displaying additional content, etc.), and so on.
At 1204, the proximity sensor(s) 128 may detect an object 302 hovering in front of the display 132. That is, the object 302 may be in front of, but not in contact with, a front surface 300 of the display 132 at a distance from the front surface 300 such that the proximity sensor(s) 128 may detect the object 302. In one illustrative example, the proximity sensor(s) 128 comprise a capacitive sensor array embedded within or behind the display 132, and the signal strength of the capacitive sensor array may be sufficient to detect objects 302 in proximity (e.g., within the range of about 0.001 inches to about 8 inches) to the front surface 300.
At 1206, the hover interface module 118 may determine a location 304 on the front surface 300 of the display 132 corresponding to the object's position relative to the front surface 300 of the display 132. In some embodiments, the hover interface module 118 may receive the location 304 from data obtained from the proximity sensor(s) 128 (e.g., a position of a particular electrode of the proximity sensor(s) 128). In other embodiments, the hover interface module 118 may access the program data 112 to obtain a pixel location(s) corresponding to a position of the object 302 detected by the proximity sensor(s) 128. In any case, the location 304 may be spaced a shortest distance from the object 302 relative to distances from the object 302 to other locations on the front surface 300 of the display 132. In this manner, the object's position may be resolved to the location 304 based on a direction from the object 302 to the front surface 300 of the display 132 that is perpendicular to the front surface 300.
In some embodiments, the hover interface module 118 may determine whether the location 304 is within a control area 800 of a browser so as to selectively respond to hovering objects 302 within the control area 800, but not respond to objects 302 outside of the control area 800. For example, an object 302 hovering over a navigation bar of a web browser rendered on the display 132 may be determined to be positioned outside of the control area 800. In some embodiments, the hover interface module 118 may further determine whether an input gesture is received from the object 302 when in a hover state. For example, if the object 302 remains within an area 306 surrounding the location 304 for a predetermined period of time (e.g., 50 milliseconds), it may be determined that an input gesture has been provided by the object 302.
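A compact way to express these two checks (the control area test and the dwell-style input gesture) is sketched below. The 50 millisecond dwell time comes from the example above; the dwell radius and the sampling format are assumptions.

```typescript
// Hedged sketch: only respond to hovers inside the control area, and only after the
// object has dwelled near one location for a predetermined period of time.
// dwellMs mirrors the 50 ms example above; radiusPx is an assumption.

interface Rect { left: number; top: number; right: number; bottom: number; }
interface Sample { x: number; y: number; timeMs: number; }

function insideControlArea(p: { x: number; y: number }, area: Rect): boolean {
  return p.x >= area.left && p.x <= area.right && p.y >= area.top && p.y <= area.bottom;
}

function dwellGestureDetected(
  samples: Sample[],   // recent hover locations, oldest first
  dwellMs = 50,        // predetermined period of time
  radiusPx = 24        // assumed size of the area surrounding the location
): boolean {
  if (samples.length < 2) return false;
  const last = samples[samples.length - 1];
  // Walk backwards while the object stays within the radius of the latest location.
  for (let i = samples.length - 1; i >= 0; i--) {
    const s = samples[i];
    if (Math.hypot(s.x - last.x, s.y - last.y) > radiusPx) return false;
    if (last.timeMs - s.timeMs >= dwellMs) return true;
  }
  return false;
}

// Example: three samples clustered together over 60 ms inside the control area.
const area: Rect = { left: 0, top: 100, right: 1080, bottom: 1800 };
const history: Sample[] = [
  { x: 300, y: 500, timeMs: 0 },
  { x: 303, y: 498, timeMs: 30 },
  { x: 302, y: 501, timeMs: 60 },
];
console.log(insideControlArea(history[2], area) && dwellGestureDetected(history)); // true
```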
At 1208, the hover interface module 118 may determine a portion 306 of the content 138 that is rendered at the location 304 or within a threshold distance, h, from the location 304. The portion 306 is to be displayed within a magnified window 142 for facilitating readability of the portion 306 for a user of the computing device 102. As such, the portion 306 is to correspond to the location 304 in that it is a portion of the content 138 that is in relatively close proximity to the location 304. In this sense, the portion 306 of the content 138 determined/selected at 1208 may be directly underneath the object 302 (i.e., at the location 304), or at least within a threshold distance, h, from the location 304 (e.g., directly above the object 302, as shown in
At 1210, the hover interface module 118 may display a magnified window 142 in a region of the display that contains the portion 306 of the content. The portion 306 rendered within the magnified window 142 may be rendered in actionable form (i.e., the portion 306 within the magnified window 142 may be actionable by responding to user input when the user input is provided within the magnified window 142). The process 1200 facilitates convenient browsing of the content 138 rendered on the display 132, especially in circumstances where the content 138 is rich and dense, making readability and selectability an issue on some displays 132.
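As one possible rendering strategy in a browser context, which is an assumption rather than a mechanism prescribed here, the magnified window can be an overlay showing a scaled view of the page centered on the determined location. The sketch below clones the page into a clipped, absolutely positioned container; note that a cloned subtree does not retain listeners added via addEventListener, so a real implementation would instead forward input from the overlay to the underlying content to keep the portion actionable.

```typescript
// Hedged sketch of one assumed way to present the magnified window: a fixed, clipped
// overlay containing a scaled copy of the page, offset so that the content at
// `location` appears centered in the overlay. All sizes and offsets are illustrative.

function showMagnifiedWindow(location: { x: number; y: number }, zoom = 2): HTMLElement {
  const W = Math.round(window.innerWidth * 0.5); // e.g., about 50% of the display width
  const H = 160;

  const overlay = document.createElement("div");
  overlay.style.cssText =
    `position:fixed; width:${W}px; height:${H}px; overflow:hidden; ` +
    "border:1px solid #888; background:#fff; z-index:9999;";
  overlay.style.left = `${Math.min(Math.max(location.x - W / 2, 0), window.innerWidth - W)}px`;
  overlay.style.top = `${location.y - H - 40}px`; // bottom boundary sits above the location

  // Scaled copy of the page, offset so the content at `location` lands at the overlay's center.
  const inner = document.body.cloneNode(true) as HTMLElement;
  inner.style.cssText =
    `position:absolute; transform-origin:0 0; transform:scale(${zoom}); ` +
    `left:${W / 2 - zoom * location.x}px; top:${H / 2 - zoom * location.y}px; ` +
    `width:${window.innerWidth}px;`;
  overlay.appendChild(inner);
  document.body.appendChild(overlay);
  return overlay;
}
```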
For further illustration, Table 2 shows the hover interaction logic that may be utilized by the hover interface module 118 to determine how to respond to an object 302 interacting with the display 132 via a combination of hover-based input and touch-based input.
At 1302, the hover interface module 118 may determine that the object 302 has moved below a predetermined speed across the front surface 300 of the display 132 while maintaining a spaced relationship with the front surface 300 of the display 132. The predetermined speed may be set such that an object 302 moving too fast (e.g., at or above the predetermined speed) across the front surface 300 in a hovering manner may cause a previously displayed magnified window 142 to disappear from the display 132. In order to determine that the object 302 is moving across the front surface 300 of the display 132, the hover interface module 118 may leverage/access data from the proximity sensor(s) 128 that indicate detected locations corresponding to the object 302, and may further reference a clock or similar timer to determine a speed of the object's movement across the front surface 300.
At 1304, the hover interface module 118 may display the magnified window 142 (previously rendered in a first region of the display 132) in another region of the display 132 in response to the detected object movement at 1302. This may be performed at time intervals short enough that, to the user's eye, the magnified window 142 appears to move with the object 302. Each new location of the object 302 may cause movement of the magnified window 142 to a new region of the display 132 at 1304. The process 1300 may allow a user to drag/move an object 302 in a spaced relationship over the front surface 300 of the display 132 to browse different portions of the rendered content 138.
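A sketch of this tracking behavior follows. The speed threshold, the sampling format, and the callback shapes are assumptions made for illustration.

```typescript
// Hedged sketch: move the magnified window with the hovering object when the object
// moves slowly across the front surface; dismiss the window when it moves too fast.
// maxPxPerMs stands in for the "predetermined speed" and is an assumed value.

interface Sample { x: number; y: number; timeMs: number; }

function trackHover(
  prev: Sample,
  curr: Sample,
  moveWindowTo: (loc: { x: number; y: number }) => void,
  dismissWindow: () => void,
  maxPxPerMs = 1.0
): void {
  const dt = Math.max(curr.timeMs - prev.timeMs, 1);
  const speed = Math.hypot(curr.x - prev.x, curr.y - prev.y) / dt;
  if (speed < maxPxPerMs) {
    // Slow movement: re-display the window in a new region so it appears to follow the object.
    moveWindowTo({ x: curr.x, y: curr.y });
  } else {
    // Fast movement: the previously displayed magnified window disappears.
    dismissWindow();
  }
}

// Example usage with stubbed callbacks.
trackHover(
  { x: 100, y: 300, timeMs: 0 },
  { x: 110, y: 302, timeMs: 50 },
  loc => console.log("move window to", loc),
  () => console.log("dismiss window"));
```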
At 1402, the proximity sensor(s) 128 may detect an object 302 hovering in front of the display 132 that is rendering content 138. That is, the object 302 may be in front of, but not in contact with, a front surface 300 of the display 132 at a distance from the front surface 300 such that the proximity sensor(s) 128 may detect the object 302.
At 1404, the hover interface module 118 may display a magnified window 142 in a region of the display that contains a portion 306 of the content 138 in actionable form that corresponds to the position of the object 302 detected at 1402.
At 1406, the hover interface module 118 may determine (based on data received from the proximity sensor(s) 128) that the object 302 has moved farther away from the front surface 300 of the display 132 in a direction perpendicular to the front surface 300 (i.e., the z-direction shown in
At 1408, the hover interface module 118 may decrease a magnification level of the portion 306 of the content 138 within the magnified window 142 such that the magnified window 142 zooms out to reveal more of the content 138 within the magnified window 142 when the object 302 moves farther away from the front surface 300.
At 1410, the hover interface module 118 may determine (based on data received from the proximity sensor(s) 128) that the object 302 has moved closer to the front surface 300 of the display 132 in a direction perpendicular to the front surface 300 (i.e., the z-direction shown in
At 1412, the hover interface module 118 may increase the magnification level of the portion 306 of the content 138 within the magnified window 142 such that the magnified window 142 zooms in to reveal less of the content 138 within the magnified window 142 when the object 302 moves closer to the front surface 300.
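The zoom behavior at 1406 through 1412 can be summarized as a mapping from the object's perpendicular distance to a magnification level. The linear, clamped mapping below is only one plausible choice, and all of the numeric limits are assumptions.

```typescript
// Hedged sketch: map the object's perpendicular (z) distance from the front surface to
// a magnification level for the magnified window. Moving closer zooms in (less content
// shown); moving farther away zooms out. The mapping and limits are assumptions.

function magnificationForDistance(
  zInches: number,
  minZ = 0.1,    // at or below this distance, use the maximum zoom
  maxZ = 2.0,    // at or beyond this distance, use the minimum zoom
  minZoom = 1.5,
  maxZoom = 4.0
): number {
  const clamped = Math.min(Math.max(zInches, minZ), maxZ);
  const t = (clamped - minZ) / (maxZ - minZ); // 0 when closest, 1 when farthest
  return maxZoom - t * (maxZoom - minZoom);
}

console.log(magnificationForDistance(0.2).toFixed(2)); // near the display → close to max zoom
console.log(magnificationForDistance(1.5).toFixed(2)); // farther away → lower zoom level
```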
The environment and individual elements described herein may of course include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein.
Other architectures may be used to implement the described functionality, and are intended to be within the scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, the various functions and responsibilities might be distributed and divided in different ways, depending on circumstances.
A method comprising: rendering content (e.g., web content, document content, application content, etc.) on a display; detecting an object (e.g., finger, hand, pen, stylus, wand, etc.) in front of, but not in contact with, a front surface of the display; determining, at least partly in response to the detecting the object, a location on the front surface of the display that is spaced a shortest distance from the object relative to distances from the object to other locations on the front surface of the display; determining a portion of the content that is rendered at the location or within a threshold distance from the location; and displaying, in a region of the display, a magnified window of the portion of the content.
The method of Example One, wherein the portion of the content includes one or more interactive elements comprising at least one of an embedded link, a video playback button, or an audio playback button, and wherein individual ones of the one or more interactive elements are configured to respond to user input (e.g., touch-based input, hover-based input, etc.) when the user input is provided within the magnified window.
The method of any of the previous examples, alone or in combination, wherein the user input comprises the object contacting the front surface of the display on the individual ones of the one or more interactive elements within the magnified window.
The method of any of the previous examples, alone or in combination, wherein the magnified window comprises a browser window rendering the portion of the content.
The method of any of the previous examples, alone or in combination, wherein the detecting the object in front of, but not in contact with, the front surface of the display comprises determining an input gesture (e.g., the object hovering around a location for a predetermined period of time, a symbol/sign created by the object, a swiping or movement-based gesture, etc.) from the object.
The method of any of the previous examples, alone or in combination, wherein the input gesture is determined by: detecting that the object is at a first position that is within a threshold distance from the front surface measured in a direction perpendicular to the front surface; and determining that the object is within a predetermined area of the first position for a predetermined period of time, the predetermined area being parallel to the front surface of the display.
The method of any of the previous examples, alone or in combination, wherein the front surface of the display comprises a top portion, a bottom portion, a left side, and a right side, a positive vertical direction pointing toward the top portion, and wherein the region includes a bottom boundary that is at a predetermined distance from the location in the positive vertical direction.
The method of any of the previous examples, alone or in combination, further comprising: determining that the object has moved farther away from the front surface of the display in a direction perpendicular to the front surface; and in response to the determining that the object has moved farther away from the front surface, decreasing a magnification level of the portion of the content within the magnified window.
The method of any of the previous examples, alone or in combination, wherein the region has a width that is no greater than about 75% of a width of the display.
The method of any of the previous examples, alone or in combination, wherein the location is a first location, the method further comprising: determining that the object has moved below a predetermined speed across the front surface of the display while maintaining a spaced relationship with the front surface of the display; and in response to the determining that the object has moved, moving the magnified window with the object across the front surface of the display to another region of the display, wherein the magnified window after the moving contains another portion of the content that is rendered at, or within a threshold distance from, a new location on the front surface of the display corresponding to a position of the object after having moved across the front surface of the display.
The method of any of the previous examples, alone or in combination, further comprising causing the magnified view to disappear from the display in response to at least one of: (i) determining that the object moves outside of a control area (e.g., a browser control area) of the display, (ii) determining that the object moves across the front surface of the display above a predetermined speed, (iii) determining that the object contacts the front surface of the display outside of the region, or (iv) determining that the object has moved away from the front surface of the display in a direction perpendicular to the front surface of the display beyond a threshold distance from the front surface of the display measured along the direction.
The method of any of the previous examples, alone or in combination, further comprising: determining that the location is within a threshold distance from a boundary of a control area (e.g., a browser control area) of the display; determining that the object has moved across the front surface of the display while maintaining a spaced relationship with the front surface of the display to a new location that is closer to the boundary of the control area relative to a distance from the location to the boundary of the control area; and in response to the determining that the object has moved to the new location, panning the portion of the content within the magnified window to reveal another portion of the content that is rendered closer to the boundary of the control area relative to a distance from the portion of the content to the boundary of the control area.
The method of any of the previous examples, alone or in combination, further comprising: detecting a first contact from the object on the front surface of the display at the location and a second contact from the object on the front surface of the display at the location, the second contact being detected within a threshold period of time from the detecting the first contact; and in response to the detecting the first contact and the second contact, rendering, on the display, a zoomed-in view of the content around the location.
A system comprising: a display configured to display content (e.g., web content, document content, application content, etc.); one or more sensors (e.g., proximity sensor(s)) configured to detect an object (e.g., finger, hand, pen, stylus, wand, etc.) in front of, but not in contact with, a front surface of the display; one or more processors; and memory storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform acts comprising: determining, at least partly in response to detecting the object in front of, but not in contact with, the front surface of the display, a location on the front surface of the display that is spaced a shortest distance from the object relative to distances from the object to other locations on the front surface of the display; determining a portion of the content that is rendered at the location or within a threshold distance from the location; and causing a presentation, in a region of the display, of a magnified window of the portion of the content.
The system of Example Fourteen: wherein the portion of the content includes one or more interactive elements comprising at least one of an embedded link, a video playback button, or an audio playback button, and wherein individual ones of the one or more interactive elements are configured to respond to user input (e.g., touch-based input, hover-based input, etc.) when the user input is provided within the magnified window.
The system of any of the previous examples, alone or in combination, wherein the user input comprises the object contacting the front surface of the display on the individual ones of the one or more interactive elements within the magnified window.
The system of any of the previous examples, alone or in combination, wherein the magnified window comprises a browser window rendering the portion of the content.
The system of any of the previous examples, alone or in combination, wherein the one or more sensors are further configured to determine a distance between the front surface of the display and the object in a direction perpendicular to the front surface of the display, the acts further comprising: determining that the object has moved farther away from the front surface in the direction; and in response to the determining that the object has moved farther away from the front surface, decreasing a magnification level of the portion of the content within the magnified window.
The system of any of the previous examples, alone or in combination, wherein the location is a first location, the acts further comprising: determining that the object has moved below a predetermined speed across the front surface of the display while maintaining a spaced relationship with the front surface of the display; and in response to the determining that the object has moved, moving the magnified window with the object across the front surface of the display to another region of the display, wherein the magnified window after the moving contains another portion of the content that is rendered at, or within a threshold distance from, a new location on the front surface of the display corresponding to a position of the object after having moved across the front surface of the display.
One or more computer-readable storage media comprising memory storing a plurality of programming instructions that are executable by one or more processors of a computing device to cause the computing device to perform acts comprising: rendering content (e.g., web content, document content, application content, etc.) on a display; detecting an input gesture (e.g., an object hovering around a location for a predetermined period of time, a symbol/sign created by the object, a swiping or movement-based gesture, etc.) from an object (e.g., finger, hand, pen, stylus, wand, etc.) in front of, but not in contact with, a front surface of the display; determining, at least partly in response to the detecting the input gesture, a location on the front surface of the display that is spaced a shortest distance from the object relative to distances from the object to other locations on the front surface of the display; determining a portion of the content that is rendered at the location or within a threshold distance from the location; and displaying, in a region of the display, a magnified window of the portion of the content.
A system comprising: means for displaying content (e.g., web content, document content, application content, etc.); one or more means for detecting (e.g., a proximity sensor(s)) an object (e.g., finger, hand, pen, stylus, wand, etc.) in front of, but not in contact with, a front surface of the means for displaying; one or more means for executing computer-executable instructions (e.g., processor(s), including, for example, hardware processor(s) such as central processing units (CPUs), system on chip (SoC), etc.); and means for storing computer-executable instructions (e.g., memory, computer readable storage media such as RAM, ROM, EEPROM, flash memory, etc.) that, when executed by the one or more means for executing computer-executable instructions, cause the one or more means for executing computer-executable instructions to perform acts comprising: determining, at least partly in response to detecting the object in front of, but not in contact with, the front surface of the means for displaying, a location on the front surface of the means for displaying that is spaced a shortest distance from the object relative to distances from the object to other locations on the front surface of the means for displaying; determining a portion of the content that is rendered at the location or within a threshold distance from the location; and causing a presentation, in a region of the means for displaying, of a magnified window of the portion of the content.
The system of Example Twenty-One, wherein the one or more means for detecting the object are further configured to determine a distance between the front surface of the means for displaying and the object in a direction perpendicular to the front surface of the means for displaying, the acts further comprising: determining that the object has moved farther away from the front surface in the direction; and in response to the determining that the object has moved farther away from the front surface, decreasing a magnification level of the portion of the content within the magnified window.
In closing, although the various embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.