A wide variety of displays for computer systems are available. Display systems often present content on an opaque background screen; however, systems are also available that display content on a transparent background screen.
The figures depict implementations/embodiments of the invention and not the invention itself. Some embodiments are described, by way of example, with respect to the following Figures.
The drawings referred to in this Brief Description should not be understood as being drawn to scale unless specifically noted.
For simplicity and illustrative purposes, the principles of the embodiments are described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one of ordinary skill in the art, that the embodiments may be practiced without limitation to these specific details. Also, different embodiments may be used together. In some instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the description of the embodiments.
For a display screen capable of operating in at least a transparent mode, sensors may be added to the display system that increase the number of possible ways a user can interact with the display system. There are many different ways that a user of a thru-screen display system can (1) move the display screen or alternatively (2) move an object (including the user's hands, an electronic device, etc.) with respect to the display screen. For example, when a user moves the display screen from one position to another, this could trigger an event that causes a change in the displayed user interface, such as the appearance of a new control that was not previously there. Similarly, if a user removes an object that is behind (or underneath) the thru-screen display, an option to show a virtual representation of that object (for example, a virtual keyboard) may automatically appear. Having sensors that can detect these changes and notify the display system can automate these tasks and remove complexity from the user interface.
The present invention describes a method and system capable of automatically modifying the displayed content based on sensor input indicating a current or past physical action.
One benefit of the described embodiments is that content presented on the thru-screen display is controlled automatically in reaction to the user's sensed physical interactions. This is in contrast to some systems where the user controls the displayed content manually by using user interfaces (e.g., a menu) to perform a selection. In one example, the sensed physical interactions do not include selections made by the user via user interfaces.
In some cases, the user's physical interactions are with an interfacing object. One example of an interfacing object is the user's hands; another is a device such as a camera or keyboard. The content that is displayed on the display screen results from the sensed physical event.
Referring to
The sensing system in the thru-screen display can be a combination of hardware-based sensing (including hinge closure sensors, base/monitor position, and keyboard docking) and software-based sensing (such as image analysis of the video streams from the front- and rear-facing cameras). In one example, the display system shown in
In addition, the display system also includes a display generation component 126, wherein based on data 128 from the interaction sensing component 116, the display generation component 126 creates content for display on the display screen 112. The display controller component 130 outputs data 134 from at least the display generation component 126 to the display screen 112. Data (144a, 144b, 150a, 150b) is used by the display generation component 126 to generate content on the display screen. In one example, the displayed content is a visual representation of a physical object that it replaces, where the physical object was previously positioned behind the display screen. In one example, this replacement display content could be shown with the display screen operating with either a transparent or an opaque background. In one example, where the display screen 112 is operating in a transparent mode, the displayed content may be spatially aligned with the object 120 placed behind the display screen.
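As a hedged illustration of the spatial alignment mentioned above (not a description of the actual components 126 or 130), the sketch below assumes a pre-calibrated linear mapping from rear-camera pixel coordinates to display-screen pixel coordinates; the function name and calibration values are placeholders, not part of the disclosure.

    # Hypothetical sketch: mapping an object's bounding box, as seen by a
    # rear-facing camera, into display-screen coordinates so that replacement
    # content can be drawn spatially aligned with the object behind the screen.
    # The scale and offset calibration values below are assumed placeholders.
    def camera_box_to_screen_box(cam_box, scale=(1.6, 1.6), offset=(40.0, 25.0)):
        """cam_box is (x, y, width, height) in camera pixels; returns screen pixels."""
        x, y, w, h = cam_box
        sx, sy = scale
        ox, oy = offset
        return (x * sx + ox, y * sy + oy, w * sx, h * sy)

    # The display generation component could draw the virtual representation
    # of the object inside the returned screen-space box.
    print(camera_box_to_screen_box((100, 80, 200, 120)))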
The display system 100 includes an interaction display control component 118. The interaction display control component 118 is capable of receiving data from the interaction sensors regarding physical interactions by a user, where the interaction sensors are either part of the display system or their information is communicated to the display system controller component. Based on the collected sensor data, the interaction display control component 118 can determine whether the interaction matches a predefined interaction 160. If a predefined interaction meets the interaction criteria 162, then content is modified according to the content modification component 164. In one example, the modifications to the display content are changes to the content that occur when the display screen is powered on and visible to the user.
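A minimal sketch of this control flow follows, assuming a simple polling model; the class and function names are illustrative assumptions and are not identifiers used by the disclosure.

    # Sketch: sensor data is compared against a predefined list of
    # interactions (160); an interaction whose criteria (162) are met
    # triggers the associated content modification (164).
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class PredefinedInteraction:
        name: str
        criterion: Callable[[Dict], bool]      # interaction criteria
        modify_content: Callable[[], str]      # content modification

    class InteractionDisplayControl:
        def __init__(self, interactions: List[PredefinedInteraction]):
            self.interactions = interactions

        def on_sensor_data(self, sensor_data: Dict) -> List[str]:
            """Return the content modifications triggered by one sensor sample."""
            return [i.modify_content()
                    for i in self.interactions
                    if i.criterion(sensor_data)]

    # Example: show a virtual keyboard when the dock current indicates removal.
    keyboard_removed = PredefinedInteraction(
        name="keyboard removed from dock",
        criterion=lambda d: d.get("dock_current_amps", 1.0) < 0.05,
        modify_content=lambda: "display virtual keyboard")

    controller = InteractionDisplayControl([keyboard_removed])
    print(controller.on_sensor_data({"dock_current_amps": 0.0}))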
In one example, the interaction display control component 118 includes a predefined list of interactions 160. For example, in the example shown in
The examples shown in
How the physical interaction is sensed depends on the type, number, and location of the sensors available to the display system. For example, in one embodiment the physical removal of the keyboard from the docking station might be sensed by the change in current in a current sensor located in the docking station. When the sensed current reaches a certain predefined level according to the interaction criteria 162, the system knows that the keyboard has been physically removed from the docking station. In another example, a camera or a plurality of cameras might be positioned in the vicinity of the display screen so that they can capture the area behind the display screen. The cameras (using image recognition software) can continuously monitor the area behind the display screen, and when they sense that the keyboard is removed (the predefined interaction), the virtual keyboard will appear on the display screen. In another example, the keyboard includes an RFID label that can be read by a sensor (an RFID reader) when the keyboard is positioned behind the display screen and that cannot be read when the keyboard is removed from behind the display screen. In another example, the keyboard could be plugged in via a USB plug and the unplugging of the USB plug could be sensed. In another example, the keyboard could be underneath the display screen being charged on an induction charging pad, and a change in the electromagnetic field measurements could indicate that the keyboard was no longer on the charging pad and available for use.
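The current-sensing example above reduces to a simple threshold test, sketched below; the threshold value and the read_dock_current stand-in are assumptions for illustration only, not details from the disclosure.

    # Sketch: threshold-based detection of keyboard removal from the docking
    # station. read_dock_current is a hypothetical callable standing in for
    # whatever driver or firmware call reports the dock's current draw.
    REMOVAL_THRESHOLD_AMPS = 0.05   # assumed predefined level (interaction criteria)

    def keyboard_removed(read_dock_current) -> bool:
        """True when the sensed current falls below the predefined level."""
        return read_dock_current() < REMOVAL_THRESHOLD_AMPS

    # Usage with a stubbed sensor reading:
    if keyboard_removed(lambda: 0.0):
        print("predefined interaction met: show virtual keyboard")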
In one example, instead of using a single type of sensor to confirm the interaction, different sensor types are used to determine whether the interaction conditions have been met. Take, for example, the case where the keyboard is plugged in via a USB cable but the keyboard is not located behind the display screen. If multiple sensor types exist, one type of sensor (e.g., a current detector) might detect the USB connection and another type of sensor (e.g., a camera) might detect that the keyboard is not under the display screen. For this case, in one example the display content might be changed to display a virtual keyboard. Alternatively, for the same case, the display content might be changed to display a message instructing the user to “Move the keyboard underneath the display screen.”
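The multi-sensor case above amounts to a small decision table; a hedged sketch is given below, in which the inputs and the returned actions are illustrative assumptions.

    # Sketch: combining two sensor types. The USB/current sensor reports
    # whether the keyboard is plugged in; a camera with image recognition
    # reports whether the keyboard is located behind the display screen.
    def choose_display_action(usb_connected: bool, keyboard_behind_screen: bool) -> str:
        if not keyboard_behind_screen:
            if usb_connected:
                # One option described above; the alternative for the same
                # case is to show a virtual keyboard instead of the message.
                return "show message: Move the keyboard underneath the display screen."
            return "show virtual keyboard"
        return "no change to displayed content"

    print(choose_display_action(usb_connected=True, keyboard_behind_screen=False))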
Referring to
In the example previously described, examples are given for sensing a change in the keyboard's position. However, in an alternative embodiment, the sensors are not monitoring a change in status; rather, they are monitoring the current status. For example, in this case the physical interaction being monitored is whether the user has or has not physically placed a keyboard behind the display screen. If a keyboard is not behind the display screen, then a virtual keyboard is automatically generated on the display screen.
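A short sketch of this status-driven (rather than change-driven) behavior is given below; keyboard_is_behind_screen stands in for a hypothetical sensor query and is not a term from the disclosure.

    # Sketch: the sensors are polled for the current status, and the display
    # content is derived from that status directly rather than from a
    # detected change of status.
    def content_for_status(keyboard_is_behind_screen: bool) -> str:
        if keyboard_is_behind_screen:
            return "use the physical keyboard; no virtual keyboard needed"
        return "automatically generate a virtual keyboard on the display screen"

    print(content_for_status(keyboard_is_behind_screen=False))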
The automated reaction to the user's interaction (or failure to interact) reduces the need for additional user interactions. For example, instead of the user actively selecting from a series of menus the type of user interface that the user wants displayed on the display screen (for example, a virtual keyboard), the virtual keyboard automatically appears when a predefined physical user action (removal of the physical keyboard) occurs.
In one example, the sensor used to determine whether the user's hand holding a camera is behind the display screen is a camera or a plurality of cameras (not shown) physically located on the frame 154 of the display screen. The event or action that causes the user's hand/camera to be sensed is its movement within the capture boundaries of the camera. In another example (where the back surface of the display screen is touch sensitive), the appearance of the bounding box 310 user interface depends upon sensing the user touching the back of the touch-sensitive display screen.
In one example, different user interfaces appear based on whether the user's hands are positioned in front of or behind the display screen surface. For example, the bounding box display might appear when a camera senses that the user's hands are behind the display screen. When the user removes her hands from behind the display screen, the camera or other image sensing device will recognize that the user's hands are no longer behind the display screen. Responsive to sensing that the user's hands are not behind the display screen, user interface elements that are usable when the user can interact with or touch the front side of the display screen can automatically appear.
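A hedged sketch of this front/behind selection logic is shown below; the boolean flag would come from the camera or other image sensing device, and the user-interface names are illustrative only.

    # Sketch: choose which user-interface elements to present based on whether
    # the user's hands are sensed behind the display screen.
    def select_user_interface(hands_behind_screen: bool) -> str:
        if hands_behind_screen:
            return "bounding-box user interface for interaction behind the screen"
        return "front-side touch user-interface elements"

    print(select_user_interface(hands_behind_screen=True))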
As previously stated, the examples shown in
In one example of the invention, the modified display content is not generated based on sensed values of the user's viewpoint. However, as shown in
Referring to
Some or all of the operations set forth in the method 600 may be contained as utilities, programs or subprograms, in any desired computer accessible medium. In addition, the method 600 may be embodied by computer programs, which may exist in a variety of forms both active and inactive. For example, they may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats. Any of the above may be embodied on a computer readable medium, which include storage devices and signals, in compressed or uncompressed form.
The computing apparatus 700 includes one or more processor(s) 702 that may implement or execute some or all of the steps described in the method 600. Commands and data from the processor 702 are communicated over a communication bus 704. The computing apparatus 700 also includes a main memory 706, such as a random access memory (RAM), where the program code for the processor 702 may be executed during runtime, and a secondary memory 708. The secondary memory 708 includes, for example, one or more hard drives 710 and/or a removable storage drive 712, representing a removable flash memory card, etc., where a copy of the program code for the method 600 may be stored. The removable storage drive 712 reads from and/or writes to a removable storage unit 714 in a well-known manner.
These methods, functions and other steps described may be embodied as machine readable instructions stored on one or more computer readable mediums, which may be non-transitory. Exemplary non-transitory computer readable storage devices that may be used to implement the present invention include but are not limited to conventional computer system RAM, ROM, EPROM, EEPROM and magnetic or optical disks or tapes. Concrete examples of the foregoing include distribution of the programs on a CD ROM or via Internet download. In a sense, the Internet itself is a computer readable medium. The same is true of computer networks in general. It is therefore to be understood that any interfacing device and/or system capable of executing the functions of the above-described examples is encompassed by the present invention.
Although shown stored on main memory 706, any of the memory components described 706, 708, 714 may also store an operating system 730, such as Mac OS, MS Windows, Unix, or Linux; network applications 732; and a display controller component 130. The operating system 730 may be multi-participant, multiprocessing, multitasking, multithreading, real-time and the like. The operating system 730 may also perform basic tasks such as recognizing input from input devices, such as a keyboard or a keypad; sending output to the display 720; controlling peripheral devices, such as disk drives, printers, and image capture devices; and managing traffic on the one or more buses 704. The network applications 732 include various components for establishing and maintaining network connections, such as software for implementing communication protocols including TCP/IP, HTTP, Ethernet, USB, and FireWire.
The computing apparatus 700 may also include input devices 716, such as a keyboard, a keypad, functional keys, etc.; a pointing device, such as a tracking ball or mouse 718; and a display(s) 720, such as the screen display 110 shown for example in
The processor(s) 702 may communicate over a network, for instance, a cellular network, the Internet, a LAN, etc., through one or more network interfaces 724 such as a Local Area Network (LAN), a wireless 802.11x LAN, a 3G mobile WAN or a WiMax WAN. In addition, an interface 726 may be used to receive an image or sequence of images from imaging components 728 such as the image capture device.
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in view of the above teachings. The embodiments are shown and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
This case is a continuation in part of the case entitled “Display System and Method of Displaying Based on Device Interactions” filed on Oct. 29, 2010, having Ser. No. 12/915,311, which is hereby incorporated by reference in its entirety. In addition this case is related to the case entitled “An Augmented Reality Display System and Method of Display” filed on Oct. 22, 2010, having serial number PCT/US2010/053860 and the case entitled “Display System and Method of Displaying Based on Device Interactions” filed on Oct. 29, 2010, having Ser. No. 12/915,311, both cases which are hereby incorporated by reference in their entirety.
Relation | Number | Date | Country
Parent | 12915311 | Oct 2010 | US
Child | 13223130 | | US
Parent | PCT/US2010/053860 | Oct 2010 | US
Child | 12915311 | | US