The present disclosure generally relates to information handling systems, and more particularly relates to merger of user interfaces for multiple computing devices for an augmented user interface surface.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, touchscreen and/or a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the Figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements. Embodiments incorporating teachings of the present disclosure are shown and described with respect to the drawings herein, in which:
The use of the same reference symbols in different drawings may indicate similar or identical items.
The following description in combination with the Figures is provided to assist in understanding the teachings disclosed herein. The description is focused on specific implementations and embodiments of the teachings, and is provided to assist in describing the teachings. This focus should not be interpreted as a limitation on the scope or applicability of the teachings.
Head mounted display devices, wearable around the user's head and/or eyes and having the capability of reflecting projected images as well as allowing the user to see through them, may be used with augmented or virtual reality display systems. A user may see through a head mounted display device to view a base computing device with a separate video display and user interface. A base computing device user interface may display a base user interface element, including but not limited to any type of object, button, tile, program window, softkey or other visual representation of a code object that facilitates interaction between the user and the processor. A base computing device may include, but may not be limited to, any information handling system displaying images and data as described herein, such as a desktop personal computer, a laptop computer, a tablet computer, or a mobile phone. Viewing the base computing device through the head mounted display device while the head mounted display device also reflects projected images may generate a multiple device augmented user interface surface with the multiple device augmented surface management system described in embodiments herein.
In other aspects, a user may interact with the augmented display of the head mounted display device or the base computing system via an augmented user interface input device. The augmented user interface input device may include a finger or hand or other implement within the field of view of a head mounted display device. In some embodiments, the augmented user interface input device may include a stylus or a pen. A plane of operation of the augmented user interface device may be tracked by the multiple device augmented surface management system and be cross referenced with respect to the plane of a base computing device such as a tablet or laptop computer. Further, the plane of operation of the augmented user interface device may also be tracked by the multiple device augmented surface management system and be cross referenced with respect to planes of images or icons for the head mounted display device. As described below, this may be accomplished via a three-dimensional camera system locating the augmented user interface device. In other aspects, a stylus, pen or other augmented user interface device may be fitted with a three-axis sensor or optical sensor to communicate tip location or other related data for the augmented user interface device. For example, such a sensor may be affixed or worn on a finger.
In order to generate a multiple device augmented user interface surface, a multiple device augmented surface management system or method may be in place to reflect the projected images of the head mounted display device in such a way that they do not compromise the user's ability to see and/or manipulate the base computing device. Further, in some embodiments, a multiple device augmented surface management system or method may be used to correlate a base user interface element displayed on the base computing device with the image of that base user interface element, as seen through the head mounted display device, such that manipulation of either the base user interface element or the image of the base user interface element acts to perform the same manipulation on both the base user interface element and the image of the base user interface element. Further, in some embodiments, a multiple device augmented surface management system or method may be used to correlate a base user interface element displayed on the base computing device with a head mounted display user interface element, as seen through the head mounted display device, such that manipulation of either the base user interface element or the head mounted display user interface element acts to perform the same manipulation on both the base user interface element and the head mounted display user interface element. The multiple device augmented surface management system or method may involve generation of three-dimensional maps that provide three-dimensional location data for objects seen through the head mounted display device, including but not limited to, the base computing device, user input devices (such as a keyboard, and a cursor user input device, a mouse, touchpad, or gesture or touch screen input), an augmented user input device (such as a stylus, pen, hand, etc.), and objects not related to any user interface aspects (such as legs, furniture, other individuals, etc.). Such three-dimensional maps may enable the reflection of images through the head mounted display device to not obscure any portion of the base computing device, according to user preferences.
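As one non-limiting illustration, the three-dimensional map described above may be represented as a simple collection of labeled features carrying three-dimensional coordinates. The following Python sketch shows only one such representation; the class and field names are hypothetical and are not required by any embodiment herein.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MappedFeature:
    """One feature located in the three-dimensional map.

    Coordinates are illustrative, expressed relative to the head mounted
    display device. Field names are hypothetical.
    """
    label: str                     # e.g. "base_video_display", "keyboard", "stylus", "furniture"
    is_user_interface: bool        # True for the base computing device and user input devices
    corners_xyz: List[Tuple[float, float, float]]   # three-dimensional corner points

@dataclass
class AugmentedSurfaceMap:
    """Collection of features mapped from one three-dimensional image."""
    features: List[MappedFeature] = field(default_factory=list)

    def non_user_interface_features(self) -> List[MappedFeature]:
        # Regions that reflected images may avoid or freely occupy,
        # according to user preferences.
        return [f for f in self.features if not f.is_user_interface]
```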
In the present disclosure, a multiple device augmented surface management system is established to generate a multiple device augmented user interface surface. In one example embodiment, the multiple device augmented surface management system may involve reflection of images through the head mounted display device such that those images do not obscure the view of the base computing device in any way. In another embodiment, the multiple device augmented surface management system may involve reflection of images through the head mounted display device such that those images may at least partially obscure the base computing device. In one aspect, the reflection of images through the head mounted display device may be coordinated with the user interface display of the base information handling system. In other aspects, the images reflected through the head mounted display device may be two-dimensional or three-dimensional, and/or the multiple device augmented surface management system may correlate a base user interface element displayed on the base computing device with the image of that base user interface element, as seen through the head mounted display device, such that manipulation of either the base user interface element or the image of the base user interface element acts to perform the same manipulation on both the base user interface element and the image of the base user interface element. In another embodiment, a multiple device augmented surface management system or method may be used to correlate a base user interface element displayed on the base computing device with a head mounted display user interface element, as seen through the head mounted display device, such that manipulation of either the base user interface element or the head mounted display user interface element acts to perform the same manipulation on both the base user interface element and the head mounted display user interface element. In one embodiment, the multiple device augmented surface management system is software code executable on one or more application processors, which may reside at the base computing device, at the head mounted display device, or at one or more remote servers and database systems. In other embodiments, some or all of the multiple device augmented surface management system may include firmware executed via processors or controllers, or may be hardcoded as an application specific integrated circuit (ASIC) or other circuit to execute some or all of the operations described in the disclosure herein.
Examples are set forth below with respect to particular aspects of an information handling system for merging of user interfaces for multiple computing devices to create an augmented user interface surface.
Information handling system 100 can include devices or modules that embody one or more of the devices or execute instructions for the one or more systems and modules described above, and operates to perform one or more of the methods described above. The information handling system 100 may execute code 124 for a multiple device augmented surface management system that may operate on servers or systems, remote data centers, or on-box in individual client information handling systems such as a head-mounted display device or a base computing system according to various embodiments herein. In some embodiments, it is understood any or all portions of code 124 for a multiple device augmented surface management system may operate on a plurality of information handling systems 100.
The information handling system 100 may include a processor 102 such as a central processing unit (CPU), a graphics processing unit (GPU), control logic or some combination of the same. Any of the processing resources may operate to execute code that is either firmware or software code. Moreover, the information handling system 100 can include memory such as main memory 104, static memory 106, and drive unit 116 (volatile (e.g. random-access memory, etc.), nonvolatile (read-only memory, flash memory etc.) or any combination thereof). Additional components of the information handling system can include one or more storage devices such as static memory 106 and drive unit 116. The information handling system 100 can also include one or more buses 108 operable to transmit communications between the various hardware components such as any combination of various input and output (I/O) devices. Portions of an information handling system may themselves be considered information handling systems.
As shown, the information handling system 100 may further include a base video display unit 110, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, or a cathode ray tube (CRT). Additionally, the information handling system 100 may include an input device 118, such as a keyboard, and a cursor user input device, such as a mouse, touchpad, or gesture or touch screen input. The information handling system 100 may also include a head mounted display device 130, which may display images using, for example, a curved mirror based reflection, a waveguide based method or a light guide based method. Waveguide methods may further include, but may not be limited to, diffraction optics, holographic optics, polarized optics, and reflective optics. These are just examples, and it is contemplated the head mounted display device may use any method that reflects projected images in order to create an augmented reality.
The information handling system 100 can also include a signal generation device 114, such as a speaker, microphone, ultrasonic speaker, ultrasonic microphone, or remote control. The base video display unit 110 may display a base user interface 112 which incorporates a base user interface element users may manipulate in order to affect communication between the user and the base computing device.
The information handling system 100 can represent a server device whose resources can be shared by multiple client devices, or it can represent an individual client device, such as a desktop personal computer, a laptop computer, a tablet computer, or a mobile phone. In a networked deployment, the information handling system 100 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment.
The information handling system 100 can include a set of instructions 124 that can be executed to cause the computer system to perform any one or more of the methods or computer based functions disclosed herein. For example, information handling system 100 includes one or more application programs 124, and Basic Input/Output System and Firmware (BIOS/FW) code 124. BIOS/FW code 124 functions to initialize information handling system 100 on power up, to launch an operating system, and to manage input and output interactions between the operating system and the other elements of information handling system 100. In a particular embodiment, BIOS/FW code 124 resides in memory 104, and includes machine-executable code that is executed by processor 102 to perform various functions of information handling system 100. In another embodiment (not illustrated), application programs and BIOS/FW code reside in another storage medium of information handling system 100. For example, application programs and BIOS/FW code can reside in static memory 106, drive 116, in a ROM (not illustrated) associated with information handling system 100 or other memory. Other options include application programs and BIOS/FW code sourced from remote locations, for example via a hypervisor or other system, that may be associated with various devices of information handling system 100 partially in memory 104, storage system 106, drive unit 116 or in a storage system (not illustrated) associated with network interface device 120 or any combination thereof. Application programs 124 and BIOS/FW code 124 can each be implemented as single programs, or as separate programs carrying out the various features as described herein. Application program interfaces (APIs) such as the Win32 API may enable application programs 124 to interact or integrate operations with one another.
In an example of the present disclosure, instructions 124 may execute the multiple device augmented surface management system 132 as disclosed herein, and an API may enable interaction between the application program and device drivers and other aspects of the information handling system and software instructions 124 thereon. The computer system 100 may operate as a standalone device or may be connected, such as via a network, to other computer systems or peripheral devices.
Main memory 104 may contain a computer-readable medium (not shown), such as RAM in an example embodiment. An example of main memory 104 includes random access memory (RAM) such as static RAM (SRAM), dynamic RAM (DRAM), non-volatile RAM (NV-RAM), or the like, read only memory (ROM), another type of memory, or a combination thereof. Static memory 106 may contain a computer-readable medium (not shown), such as NOR or NAND flash memory in some example embodiments. The disk drive unit 116 may include a computer-readable medium 122 such as a magnetic disk in an example embodiment. The computer-readable media of the memory, storage devices, and multiple device augmented surface management system 104, 106, 116, and 132 may store one or more sets of instructions 124, such as software code corresponding to the present disclosure.
The disk drive unit 116, and static memory 106, also contain space for data storage, such as data for an information handling system for multiple device augmented surfaces. Data including but not limited to data regarding ambient light measurements, three-dimensional images taken by a three-dimensional camera array, data generated by a user interface regarding coordinates, shape, color, and any other characteristics of appearance of a base user interface element displayed on said user interface at any given time, data regarding the coordinates, shape, color, resolutions, and any other characteristics of appearance of a video display within which a user interface appears, data regarding identification of base user interface elements identified within user interfaces, characteristics such as position and orientation of base user interface elements displayed within a user interface, data regarding orientation of a video display with respect to a head mounted display device, user input regarding preferred orientation of base user interface elements to be displayed by a head mounted display device, and low-level graphics primitives may also be stored in part or in full in data storage 106 or 116. Further, the instructions 124 may embody one or more of the methods or logic as described herein. For example, instructions relating to the hardware implementation of the head mounted display device 130 or subcomponents, or the multiple device augmented surface management system 132 software algorithms may be stored here.
In a particular embodiment, the instructions, parameters, and profiles 124 may reside completely, or at least partially, within the main memory 104, the static memory 106, disk drive 116 and/or within the processor 102 during execution by the information handling system 100. Software applications may be stored in static memory 106 or disk drive 116.
Network interface device 120 represents a NIC disposed within information handling system 100, on a main circuit board of the information handling system, integrated onto another component such as processor 102, in another suitable location, or a combination thereof. Network interface device 120 includes network channels 134 and 136 that provide interfaces to devices that are external to information handling system 100. In a particular embodiment, network channels 134 and 136 are of a different type than memory bus 108 and network interface device 120 translates information from a format suitable to the memory bus 108 to a format suitable to external devices. An example of network channels 134 and 136 includes InfiniBand channels, Fibre Channel channels, Gigabit Ethernet channels, proprietary channel architectures, or a combination thereof. The network interface device 120 can include another information handling system, a data storage system, another network, a grid management system, another suitable resource, or a combination thereof.
Network channels 134 and 136 can be connected to external network resources, and may be connected directly to a multiple device augmented surface management system 132, or indirectly through network interface device 120 to drive unit 116, both of which can also contain computer readable medium 122. While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein.
In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tapes or other storage device to store information received via carrier wave signals such as a signal communicated over a transmission medium. Furthermore, a computer readable medium can store information received from distributed network resources such as from a cloud-based environment. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is equivalent to a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
The information handling system 100 may also include a multiple device augmented surface management system 132 that may be operably connected to the bus 108, may connect to the bus indirectly through the network 128 and the network interface device 120, or may connect directly to the network interface device 120 via the network channels 134 and 136. The multiple device augmented surface management system 132 is discussed in greater detail herein below.
In other embodiments, dedicated hardware implementations such as application specific integrated circuits, programmable logic arrays and other hardware devices can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
When referred to as a “system,” a “device,” a “module,” or the like, the embodiments described herein can be configured as hardware. For example, a portion of an information handling system device may be hardware such as, for example, an integrated circuit (such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a structured ASIC, or a device embedded on a larger chip), a card (such as a Peripheral Component Interconnect (PCI) card, a PCI-express card, a Personal Computer Memory Card International Association (PCMCIA) card, or other such expansion card), or a system (such as a motherboard, a system-on-a-chip (SoC), or a stand-alone device). The system, device, or module can include software, including firmware embedded at a device, such as an Intel® Core class processor, ARM® brand processors, Qualcomm® Snapdragon processors, or other processors and chipsets, or other such device, or software capable of operating a relevant environment of the information handling system. The system, device or module can also include a combination of the foregoing examples of hardware or software. In an example embodiment, the multiple device augmented surface management system 132 above and the several modules described in the present disclosure may be embodied as hardware, software, firmware or some combination of the same. Note that an information handling system can include an integrated circuit or a board-level product having portions thereof that can also be any combination of hardware and software. Devices, modules, resources, or programs that are in communication with one another need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices, modules, resources, or programs that are in communication with one another can communicate directly or indirectly through one or more intermediaries.
In accordance with various embodiments of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
In an embodiment, it is understood that the three-dimensional camera array 310, as shown in
As shown in
As shown in
As shown in
For example, although the display of the base user interface 112 in an embodiment does not change between
As shown in
Each of these submodules may contain further submodules. For example, the user interface merger submodule 605, in an embodiment, may further contain the scene lighting condition submodule 620, the user interface interpreter submodule 625, the three-dimensional mapping submodule 630, and the error adaptation submodule 635. The scene lighting condition submodule 620, in an embodiment, may operate to measure ambient light surrounding the head mounted display device, and adjust the contrast of the three-dimensional image based on that ambient light measurement. The user interface interpreter submodule 625 may operate to identify and correlate a base user interface element displayed within the base user interface with a base user interface element captured in the three-dimensional image taken by the three-dimensional camera array on the head mounted display device, to determine the orientation of the base video display plane with respect to the head mounted display device, and to identify a non-base user interface feature in the three-dimensional image. The three-dimensional mapping submodule 630, in an embodiment, may operate to identify the borders of the base video display in the three-dimensional image and to generate a three-dimensional map of the base video display, base user interface, base user interface element, and non-base user interface feature. The error adaptation submodule 635 in an embodiment, may operate to detect errors in feature identification and mapping, and correct those errors as necessary.
The merged interface rendering submodule 610, in an embodiment, may further contain the view manager submodule 640, the low-level graphics primitives submodule 645, and the remote rendering and load submodule 650. The view manager submodule 640, in an embodiment, may operate to prompt user input regarding the way in which the user chooses to merge the base user interface and the head mounted display device user interface together to form a single augmented user interface surface. The low-level graphics primitives submodule 645 may operate to generate and place within the three-dimensional map generated by the three-dimensional mapping submodule 630 graphics primitives of the head mounted display device user interface elements. These graphics primitives may be placed in the three-dimensional map in multiple orientations, including, but not limited to: (1) solely outside the borders of the base video display; and (2) within and/or outside the borders of the base video display. The remote rendering and load submodule 650, in an embodiment, may operate to render the head mounted display device user interface elements upon the low-level primitives which may be oriented in one of many different ways, including, but not limited to: (1) solely outside the borders of the base video display; and (2) within and/or outside the borders of the base video display.
The merger customization submodule 615, in an embodiment, may further contain the customized low-level graphics primitives submodule 655 and the customized remote rendering and load submodule 660. The customized low-level graphics primitives submodule 655 may operate to generate and place within the three-dimensional map generated by the three-dimensional mapping submodule 630 graphics primitives of the head mounted display device user interface elements, according to customized merged head mounted display device user interface element definitions received by the merger customization submodule 615. The customized remote rendering and load submodule 660, in an embodiment, may operate to render the head mounted display device user interface elements according to customized merged head mounted display device user interface element definitions received by the merger customization submodule 615.
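As a non-limiting illustration of the module hierarchy described above, the following Python sketch arranges the submodules as nested objects. The class names mirror the submodule names used in this description, while the structure itself is only one possible arrangement and is not required by any embodiment herein.

```python
class SceneLightingConditionSubmodule: ...
class UserInterfaceInterpreterSubmodule: ...
class ThreeDimensionalMappingSubmodule: ...
class ErrorAdaptationSubmodule: ...
class ViewManagerSubmodule: ...
class LowLevelGraphicsPrimitivesSubmodule: ...
class RemoteRenderingAndLoadSubmodule: ...
class CustomizedLowLevelGraphicsPrimitivesSubmodule: ...
class CustomizedRemoteRenderingAndLoadSubmodule: ...


class UserInterfaceMergerSubmodule:
    """Measures lighting, interprets the base UI, builds the 3-D map, corrects errors."""
    def __init__(self):
        self.scene_lighting_condition = SceneLightingConditionSubmodule()
        self.user_interface_interpreter = UserInterfaceInterpreterSubmodule()
        self.three_dimensional_mapping = ThreeDimensionalMappingSubmodule()
        self.error_adaptation = ErrorAdaptationSubmodule()


class MergedInterfaceRenderingSubmodule:
    """Prompts for merge preferences, places primitives, renders merged elements."""
    def __init__(self):
        self.view_manager = ViewManagerSubmodule()
        self.low_level_graphics_primitives = LowLevelGraphicsPrimitivesSubmodule()
        self.remote_rendering_and_load = RemoteRenderingAndLoadSubmodule()


class MergerCustomizationSubmodule:
    """Applies user-supplied customization definitions to primitives and rendering."""
    def __init__(self):
        self.customized_low_level_graphics_primitives = CustomizedLowLevelGraphicsPrimitivesSubmodule()
        self.customized_remote_rendering_and_load = CustomizedRemoteRenderingAndLoadSubmodule()


class MultipleDeviceAugmentedSurfaceManagementSystem:
    """Top-level system owning the three submodules described above."""
    def __init__(self):
        self.user_interface_merger = UserInterfaceMergerSubmodule()
        self.merged_interface_rendering = MergedInterfaceRenderingSubmodule()
        self.merger_customization = MergerCustomizationSubmodule()
```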
In an example of the present disclosure, code instructions may execute the multiple device augmented surface management system 132 as disclosed herein, and an API may enable interaction between the application program and device drivers and other aspects of the information handling system and software instructions thereon. In an embodiment, the scene lighting condition submodule 620 may be capable of accessing data including but not limited to ambient light measurements captured by the ambient light detector and the three-dimensional image generated by the three-dimensional camera array. In an embodiment, the user interface interpreter submodule 625 may be capable of accessing data including but not limited to data generated by the base user interface regarding the coordinates, shape, color, and any other characteristics of appearance of each base user interface element as displayed on the user interface at any given time. For example, with respect to the embodiment described in
The user interface interpreter submodule 625 in the embodiment shown in
The three-dimensional mapping submodule 630 shown in
The error adaptation submodule 635 as shown in
At block 710, in an embodiment, the scene lighting condition submodule may receive data regarding ambient light surrounding the head mounted display device, as measured by the ambient light detector. As described herein, the ambient light detector may measure the ambient light surrounding the head mounted display device in an embodiment using any type of electro-optical sensor capable of detecting ambient light surrounding the head mounted display device and converting the ambient light measurement into an electronic signal, including but not limited to photoconductive devices, photovoltaics, photodiodes, or phototransistors.
At block 715, in an embodiment, the scene lighting condition submodule may adjust the contrast of the three-dimensional image based on the ambient light measurement. The scene lighting condition submodule in an embodiment may adjust the three-dimensional image of the base user interface, base user interface element, base video display, and non-base user interface feature as captured by the three-dimensional camera array based on the ambient light measured by the ambient light detector through any one of several known methods for adjusting contrast of digital images, including but not limited to raster graphics editing and vector graphics editing. It is contemplated that many methods for adjusting contrast of digital images to account for ambient light are known in the field of digital imaging.
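As a non-limiting illustration of such a contrast adjustment, the following Python sketch applies a simple linear gain about the image mean, keyed to the ambient light measurement. The scaling rule, the reference illumination value, and the function names are assumptions for illustration only.

```python
import numpy as np

def adjust_contrast_for_ambient_light(image: np.ndarray, ambient_lux: float,
                                      reference_lux: float = 300.0) -> np.ndarray:
    """Scale pixel values about the mean so the captured three-dimensional image
    keeps usable contrast under the measured ambient light.

    image:         H x W x C array of 8-bit pixel intensities from the camera array.
    ambient_lux:   reading from the ambient light detector.
    reference_lux: illumination at which no adjustment is applied (assumed value).
    """
    # Brighter scenes get a gentler gain, dimmer scenes a stronger one (illustrative rule).
    gain = np.clip(reference_lux / max(ambient_lux, 1.0), 0.5, 2.0)
    mean = image.mean(axis=(0, 1), keepdims=True)
    adjusted = (image.astype(np.float32) - mean) * gain + mean
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```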
At block 720, in an embodiment, the user interface interpreter submodule may identify a base user interface element in the three-dimensional image. As described herein, in an embodiment, the base video display may display a base user interface element as part of the base user interface, and the three-dimensional camera array on the head-mounted display device may capture a three-dimensional image of the base video display, the base user interface, the base user interface element, and the non-base user interface feature. For example, in the embodiment shown in
As described herein, in an embodiment, the user interface interpreter submodule may be capable of accessing data including but not limited to data generated by the base user interface regarding the coordinates, shape, color, and any other characteristics of the base video display and the appearance of each base user interface element as displayed on the user interface at any given time. In one embodiment, the user interface interpreter submodule may use this information regarding physical dimensions, colors, shading, and brightness of the base user interface and/or base user interface element displayed within the base user interface, along with an image matching algorithm, to identify any base user interface elements represented by the base user interface and displayed on the base video display that are captured in the three-dimensional image. For example, in the embodiment described with reference to
It is contemplated that the user interface interpreter submodule may use object recognition methods, including but not limited to approaches based on computer aided design like object models, appearance-based methods, and feature-based methods. Approaches based on computer aided design like object models may further include, but may not be limited to edge detection, primal sketch, Marr, Mohan and Nevatia's approach, Lowe's Approach, Olivier Faugeras' approach, generalized cylinders, geons, and Dickinson, Forsyth and Ponce's approach. Appearance-based methods may further include, but may not be limited to edge matching, divide and conquer search, greyscale matching, gradient matching, histograms of receptive field responses, and large modelbases. Feature-based methods may further include, but may not be limited to interpretation trees, hypothesize and test methods, pose consistency, pose clustering, invariance, geometric hashing, scale-invariant feature transform, and speeded up robust features.
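As a non-limiting illustration of one of the appearance-based methods listed above (greyscale or template matching), the following Python sketch locates a base user interface element within the captured image, assuming the OpenCV library is available; in practice a feature-based method may better tolerate the perspective skew discussed below. The function name and match threshold are illustrative.

```python
import cv2
import numpy as np

def locate_base_ui_element(captured_frame: np.ndarray,
                           element_template: np.ndarray,
                           threshold: float = 0.8):
    """Search the camera frame for a base user interface element.

    captured_frame:   greyscale image derived from the three-dimensional camera array.
    element_template: greyscale rendering of the element as reported by the base UI.
    Returns the (x, y) of the best match and its score, or None if below threshold.
    """
    result = cv2.matchTemplate(captured_frame, element_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None           # element not confidently found in this frame
    return max_loc, max_val
```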
At block 725, in an embodiment, the user interface interpreter submodule may further correlate the base user interface element as displayed on the base video display with the image of the base user interface element as captured in the three-dimensional image. In other words, the user interface interpreter submodule may associate the base user interface element displayed within the base user interface, with the image of the base user interface element captured in the three-dimensional image, such that other modules, instructions, or processes may interpret actions taken to manipulate either the base user interface element as displayed within the base user interface, or the image of the base user interface element as shown in the three-dimensional image to affect manipulation of both the base user interface element as displayed within the base user interface, and the image of the base user interface element as shown in the three-dimensional image. For example, in the embodiments described in
At block 730, in an embodiment, the user interface interpreter submodule may also determine the orientation of the plane of the base video display with respect to the head mounted display device. As described herein, the user interface interpreter submodule in an embodiment may access data regarding the dimensions of the base video display, the characteristics of the base user interface and/or base user interface element. For example, in an embodiment, the user interface interpreter submodule may access characteristics including, but not limited to information regarding physical dimensions, colors, shading, and brightness of the base user interface and/or base user interface element. The user interface interpreter submodule in an embodiment may compare these characteristics of the base user interface and/or base user interface element as displayed on the base video display to the characteristics of the base user interface and/or base user interface element as captured in the three-dimensional image. In an embodiment, the comparison between these characteristics as displayed on the base video display and as captured in the three-dimensional image, may be analyzed using an image processing method to determine the plane of the base video display with respect to the three-dimensional camera array and the head-mounted display device at the time of the capture of the three-dimensional image. It is contemplated any image processing method known in the art to be capable of attitude determination of features within an image could be used in an embodiment.
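As a non-limiting illustration of such an image processing method, the following Python sketch (assuming the OpenCV library) recovers the rotation and translation of the base video display plane relative to the head mounted camera from corresponding physical and pixel coordinates of a base user interface element's corners. The function name and camera intrinsics are assumptions.

```python
import cv2
import numpy as np
from typing import Optional, Tuple

def estimate_display_plane_pose(element_corners_mm: np.ndarray,
                                element_corners_px: np.ndarray,
                                camera_matrix: np.ndarray,
                                dist_coeffs: Optional[np.ndarray] = None
                                ) -> Tuple[np.ndarray, np.ndarray]:
    """Estimate the orientation of the base video display plane relative to the
    three-dimensional camera array.

    element_corners_mm: (N, 3) corner coordinates of the base UI element in the
                        display's own flat plane (z = 0), from the base UI data.
    element_corners_px: (N, 2) matching corner locations in the captured image.
    camera_matrix:      3x3 intrinsic matrix of the head mounted camera.
    Returns (rotation_vector, translation_vector) in the camera frame.
    """
    ok, rvec, tvec = cv2.solvePnP(
        element_corners_mm.astype(np.float32),
        element_corners_px.astype(np.float32),
        camera_matrix,
        dist_coeffs if dist_coeffs is not None else np.zeros(5),
    )
    if not ok:
        raise RuntimeError("pose estimation failed for this frame")
    return rvec, tvec
```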
At block 735, in an embodiment, the three-dimensional mapping submodule may identify the borders of the base video display in the three-dimensional image. In an embodiment, the three-dimensional mapping submodule may identify the borders of the base video display 110 as shown in the three-dimensional image. As described herein, the three-dimensional mapping submodule, in an embodiment, may access data including but not limited to data related to three-dimensional image taken by the three-dimensional camera array, and identification of the base video display within that image as made by the user interface interpreter submodule. Also as described herein, in an embodiment, the user interface interpreter submodule may calculate the physical length and height of a base user interface element, as displayed on the base user interface, and the physical length and height of the base user interface itself. For example, in an embodiment as depicted in
The user interface interpreter submodule in an embodiment may use calculations of the physical dimensions of the base user interface element as displayed on the base user interface and the physical dimensions of the base user interface itself to determine the location of a base user interface element with respect to the borders of the base video display. In other words, the user interface interpreter submodule in an embodiment may calculate the orientation of a base user interface element with respect to the borders of the base video display (in physical dimensions). As described herein, the location of the base user interface element with respect to the borders of the base video display, as calculated by the user interface interpreter submodule, does not necessarily take into account the change in orientation of the plane of the base video display as captured in the three-dimensional image from the perspective of the head-mounted display device. For example, in the embodiments described with respect to
As described herein, the head mounted display device, and thus the three-dimensional camera array in an embodiment may view the base user interface within the base video display at an angle other than perfectly perpendicular from and/or directly above the base user interface. As described herein, the head mounted display device in an embodiment may perceive the top of the base user interface as farther away from the head mounted display device than the bottom of the base user interface, particularly if a user tilts the base video display. If the user tilts the device in such a way, the dimensions of the base user interface elements as viewed by the head mounted display device appear skewed when compared to the dimensions of the base user interface elements as displayed by the base user interface. As described herein, the three-dimensional mapping submodule may access data regarding the orientation of a base user interface element with respect to the borders of the base video display (in physical dimensions), as calculated by the user interface interpreter submodule.
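As a non-limiting illustration of accounting for that skew, the following Python sketch (assuming the OpenCV library and the plane orientation estimated in the previous sketch) projects the physical corners of the base video display onto the captured image so that the skewed borders can be identified. Names and units are illustrative.

```python
import cv2
import numpy as np

def project_display_borders(display_width_mm: float, display_height_mm: float,
                            rvec: np.ndarray, tvec: np.ndarray,
                            camera_matrix: np.ndarray) -> np.ndarray:
    """Project the four physical corners of the base video display (flat-plane
    dimensions) onto the captured image using the estimated plane orientation.

    Returns a (4, 2) array of pixel coordinates tracing the skewed border.
    """
    corners_mm = np.array([
        [0.0, 0.0, 0.0],
        [display_width_mm, 0.0, 0.0],
        [display_width_mm, display_height_mm, 0.0],
        [0.0, display_height_mm, 0.0],
    ], dtype=np.float32)
    projected, _ = cv2.projectPoints(corners_mm, rvec, tvec,
                                     camera_matrix, np.zeros(5))
    return projected.reshape(-1, 2)
```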
For example, in the embodiment described with respect to
At block 735, the three-dimensional mapping submodule in an embodiment may identify the borders of the base video display in the three-dimensional image by projecting the orientation of the base user interface element with respect to the borders of the base video display (in physical dimensions) in a flat plane onto the actual plane as captured in the three-dimensional image using the orientation of the plane of the base video display, as calculated in
At block 740, in an embodiment, the user interface interpreter submodule may identify a non-base user interface feature in the three-dimensional image. In an embodiment, the user interface interpreter submodule may identify features within the three-dimensional image other than the base video display, base user interface, or base user interface element as a non-base user interface feature. For example, in the embodiment described in
At block 745, in an embodiment, the three-dimensional mapping submodule may generate a three-dimensional map of the base video display, the base user interface, the base user interface element, and/or any non-base user interface feature captured in the three-dimensional image. As described herein, the three-dimensional mapping submodule, in an embodiment, may access data including but not limited to data related to three-dimensional image taken by the three-dimensional camera array. In an embodiment, at block 745, the three-dimensional mapping submodule may use the three-dimensional image, as adjusted by the scene lighting condition submodule to calculate or determine the physical coordinates of any object (or pixel) captured in the three-dimensional image. The three-dimensional mapping submodule in an embodiment may also access data regarding any identification of the edges of the base video display (as identified by the three-dimensional mapping submodule in
At block 750, in an embodiment, the error adaptation submodule may detect errors in feature identification and mapping and correct those errors as necessary. In an embodiment, an error adaptation submodule may access the three-dimensional map generated by the three-dimensional mapping submodule. The error adaptation submodule may detect errors in feature identification and mapping of those features in an embodiment through any means of observational error or measurement error correction. For example, in an embodiment, an error adaptation submodule may compare identification or mapping of the same feature in several three-dimensional maps captured over a period of time, and detect a statistical deviation in one of the identifications or mappings. In such a scenario, the error adaptation submodule may correct this deviation by adapting that identification or mapping to fall more in line with the identifications or mappings captured both before and after the erroneous identification or mapping. It is contemplated any method to correct the multiple testing problem of statistics may be used, including, but not limited to multiple testing correction, the Bonferroni method, the closed testing procedure, the Boole-Bonferroni bound, and the Holm-Bonferroni method.
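As a non-limiting illustration of one simple observational-error correction (a robust temporal filter, offered here as an example rather than any of the specific statistical methods named above), the following Python sketch compares a feature's most recent mapped position against its recent history and adapts a deviant observation toward its neighbors. The threshold and names are assumptions.

```python
import numpy as np

def correct_mapping_outliers(history: list, deviation_threshold: float = 3.0) -> np.ndarray:
    """Given the same feature's mapped 3-D position over several recent frames,
    flag a statistically deviant observation and replace it with the median of
    the earlier observations.

    history: list of (3,) position vectors, oldest first; the last entry is the
             observation under test.
    """
    positions = np.stack(history)
    median = np.median(positions[:-1], axis=0)
    # Median absolute deviation of the earlier observations (robust spread estimate).
    mad = np.median(np.abs(positions[:-1] - median), axis=0) + 1e-6
    deviation = np.abs(positions[-1] - median) / mad
    if np.any(deviation > deviation_threshold):
        return median            # adapt the erroneous mapping toward its neighbors
    return positions[-1]         # observation looks consistent; keep it
```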
In an embodiment, at block 1030, the low-level graphics primitives submodule may generate a merged head mounted display device user interface element primitive and place that primitive outside the base video display borders. It is understood low-level graphics primitives are used by graphics rendering methods to provide a simple geometric form upon which more elaborate graphics such as surface textures, lighting, surface patterns, or surface images may be overlaid. Using primitives allows rendering techniques to streamline the movement and articulation of subcomponents of complex structures more efficiently. Low-level graphics primitives in an embodiment may include geometric forms including, but not limited to, points, lines, line segments, planes, circles, ellipses, triangles, polygons, spline curves, spheres, cubes, boxes, toroids, cylinders and pyramids. The low-level graphics primitives submodule in an embodiment may generate a primitive having the size, shape, and location of a merged head mounted display device user interface element as it would appear after it is rendered and displayed. For example, with respect to the embodiment described in
At block 1040, in an embodiment, the remote rendering and load submodule may render the head mounted display device user interface element outside the base video display borders. In an embodiment, at block 1040, the remote rendering and load submodule may render the merged head mounted display device user interface element surface texture, lighting, surface pattern, or other surface image onto the graphics primitive for the merged head mounted display device user interface element, which may be located outside the borders of the base video display. For example, with respect to the embodiment described in
In an embodiment, at block 1050, the low-level graphics primitives submodule may generate graphics primitive of a merged head mounted display device user interface element and place that primitive within and/or outside the base video display borders. At block 1050, the low-level graphics primitives submodule in an embodiment may generate a graphics primitive having the size, shape, and location of the merged head mounted display device user interface element as it would appear after it is rendered and displayed. For example, with reference to the embodiment described in
At block 1060, in an embodiment, the remote rendering and load submodule may render and display a merged head mounted display device user interface element within and/or outside the base video display borders. In an embodiment, at block 1060, the remote rendering and load submodule may render the surface texture, lighting, surface pattern, or other surface image onto the graphics primitive for the merged head mounted display device user interface element, which may be located within and/or outside the borders of the base video display. For example, with reference to the embodiment described in
As described herein, the user interface interpreter submodule may associate the base user interface element displayed within the base user interface, with the image of the base user interface element captured in the three-dimensional image, such that other modules, instructions, or processes may interpret actions taken to manipulate either the base user interface element as displayed within the base user interface, or the image of the base user interface element as shown in the three-dimensional image to affect manipulation of both the base user interface element as displayed within the base user interface, and the image of the base user interface element as shown in the three-dimensional image. As further described herein, the multiple device augmented surface management system, which controls the display of merged head mounted display device user interface elements may connect to a bus of the information handling system in order to communicate with the information handling system and any subcomponents of the information handling system, including the base user interface, which controls the display of base user interface elements. As also described herein, the information handling system of an embodiment may receive user input via an input device, directing the manipulation of a base user interface element, or a merged head mounted display device user interface element. For example, with reference to the embodiment described in
As described herein, the information handling system of an embodiment may be in communication with the multiple device augmented surface management system and any of its submodules. Further, in an embodiment, the information handling system may also have access to data generated by any of the multiple device augmented surface management submodules. For example, with reference to the embodiment described in
Thus, the information handling system in an embodiment may have access to any correlation the user interface interpreter submodule may make between a base user interface element and an image of the same base user interface element as captured in a three-dimensional image. For example, with reference to the embodiment described in
In addition, the information handling system in an embodiment may have access to any merged head mounted display device user interface elements the remote rendering and load submodule may render and display. For example, with reference to the embodiment described in
Thus, the information handling system in an embodiment may be able to receive input from a user input device, instructing manipulation of either a base user interface element, a merged head mounted display device user interface element, or both simultaneously. Because the information handling system in an embodiment may be in communication with the base user interface and the submodules directing rendering of the merged head mounted display device user interface elements, the information handling system in an embodiment could instruct the base user interface to manipulate base user interface elements accordingly, instruct the remote rendering and load submodule to manipulate merged head mounted display device user interface elements, or could instruct both the base user interface and the remote rendering and load submodule to manipulate the base user interface elements and merged head mounted display device user interface elements simultaneously.
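As a non-limiting illustration of that routing, the following Python sketch forwards a single manipulation to both a base user interface element and its correlated merged head mounted display device user interface element. The correlation table, the `manipulate` calls, and all names are hypothetical.

```python
class MergedManipulationRouter:
    """Forwards a single manipulation to both correlated user interface elements."""

    def __init__(self, base_user_interface, remote_rendering_and_load):
        self.base_ui = base_user_interface
        self.renderer = remote_rendering_and_load
        # element id displayed by the base UI -> id of its merged HMD counterpart
        self.correlations = {}

    def correlate(self, base_element_id: str, merged_element_id: str) -> None:
        self.correlations[base_element_id] = merged_element_id

    def apply_manipulation(self, element_id: str, manipulation: dict) -> None:
        """Apply e.g. {'action': 'move', 'dx': 40, 'dy': -10} to both elements."""
        base_id = element_id
        merged_id = self.correlations.get(element_id)
        if merged_id is None:
            # the input may have arrived via the merged element's id instead
            reverse = {v: k for k, v in self.correlations.items()}
            base_id, merged_id = reverse.get(element_id), element_id
        if base_id is not None:
            self.base_ui.manipulate(base_id, manipulation)       # hypothetical call
        if merged_id is not None:
            self.renderer.manipulate(merged_id, manipulation)    # hypothetical call
```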
For example, with respect to the embodiment described in
At block 1120, in an embodiment, if the user chooses to customize the merged head mounted display device user interface element, the merger customization submodule may receive a customized merged head mounted display device user interface element definition. The customized merged head mounted display device user interface element definition of a customization definition profile may consist of a program instruction detailing the chosen customization. For example, with respect to the embodiment described in
At block 1130, in an embodiment, the merger customization submodule may store the customized merged head mounted display device user interface element definition in memory. In an embodiment, the merger customization submodule may store the customized merged head mounted display device user interface element interface in any memory to which it has access, including, but not limited to main memory, static memory, or the disk drive unit.
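As a non-limiting illustration, a customized merged head mounted display device user interface element definition might be structured and stored as follows. The fields mirror the customization options discussed in this description, and all names are hypothetical.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class MergedElementCustomization:
    """Customized merged head mounted display device user interface element definition."""
    element_id: str
    size: Optional[Tuple[int, int]] = None            # desired width/height, if specified
    location: Optional[Tuple[int, int, int]] = None   # placement relative to the base video display
    three_dimensional: bool = False                   # render as a 2-D or 3-D object
    show_only_with_application: Optional[str] = None      # display only while this application is shown
    hide_while_application_shown: Optional[str] = None    # display only while this application is absent

def store_customization(definition: MergedElementCustomization, path: str) -> None:
    """Persist the definition to whatever storage the submodule can access."""
    with open(path, "w") as handle:
        json.dump(asdict(definition), handle)
```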
At block 1140, in an embodiment, the customized low-level graphics primitives submodule may generate a merged head mounted display device user interface element graphics primitive according to the customized merged head mounted display device user interface element definition. For example, in an embodiment where customization of a merged head mounted display device user interface element includes specifying the size or location of the merged head mounted display device user interface element with reference to other merged head mounted display device user interface elements, or with reference to the base video display, the customized low-level graphics primitives submodule may generate a merged head mounted display device user interface element graphics primitive having the specified size or location given in the customized merged head mounted display device user interface element definition. In another example, in an embodiment where customization includes specifying whether to view the merged head mounted display device user interface element as a two-dimensional object or a three-dimensional object, the customized low-level graphics primitives submodule may generate a two-dimensional or three-dimensional merged head mounted display device user interface element graphics primitive, depending upon the specified dimensionality given in the customized merged head mounted display device user interface element definition. In another example, in an embodiment where customization includes specifying viewing of a merged head mounted display device user interface element only while viewing a specific application on the base video display, or only while not viewing a specific application on the base video display, the customized low-level graphics primitives submodule may display the merged head mounted display device user interface element only while the base user interface simultaneously displays a specified base user interface element, or the customized low-level graphics primitives submodule may display the merged head mounted display device user interface element only while the base user interface simultaneously does not display a specified base user interface element, depending on the specified correlation between the display of merged head mounted display device user interface elements with base user interface elements given in the customized merged head mounted display device user interface element definition.
At block 1150, in an embodiment, the customized remote rendering and load submodule may render and display a merged head mounted display device user interface element according to the customized merged head mounted display device user interface element definition. For example, in an embodiment where customization of a merged head mounted display device user interface element includes specifying the size or location of the merged head mounted display device user interface element with reference to other merged head mounted display device user interface elements, or with reference to the base video display, the customized remote rendering and load submodule may render and display a merged head mounted display device user interface element having the specified size or location given in the customized merged head mounted display device user interface element definition. In another example, in an embodiment where customization includes specifying whether to view the merged head mounted display device user interface element as a two-dimensional object or a three-dimensional object, the customized remote rendering and load submodule may render and display a two-dimensional or three-dimensional merged head mounted display device user interface element graphics primitive, depending upon the specified dimensionality given in the customized merged head mounted display device user interface element definition. In another example, in an embodiment where customization includes specifying viewing of a merged head mounted display device user interface element only while viewing a specific application on the base video display, or only while not viewing a specific application on the base video display, the customized remote rendering and load submodule may render and display a merged head mounted display device user interface element only while the base user interface simultaneously displays a specified base user interface element, or the customized remote rendering and load submodule may render and display a merged head mounted display device user interface element only while the base user interface simultaneously does not display a specified base user interface element, depending on the specified correlation between the display of merged head mounted display device user interface elements with base user interface elements given in the customized merged head mounted display device user interface element definition. For example, in the embodiment described in
These customizations are only a few examples of a wide range of potential customizations, which can include any specification as to the orientation of merged head mounted display device user interface elements within the merged user interface, and correlation between the display of merged head mounted display device user interface elements with base user interface elements.
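Continuing the customization sketch above, the decision of whether to display a customized merged head mounted display device user interface element for the current view may reduce to a simple check of the definition against the base user interface elements currently displayed. This Python sketch is illustrative only and assumes the MergedElementCustomization definition shown earlier.

```python
def should_display_merged_element(definition: "MergedElementCustomization",
                                  visible_base_applications: set) -> bool:
    """Apply the display-correlation rules from a customization definition."""
    if definition.show_only_with_application is not None and \
            definition.show_only_with_application not in visible_base_applications:
        return False
    if definition.hide_while_application_shown is not None and \
            definition.hide_while_application_shown in visible_base_applications:
        return False
    return True
```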
The blocks of the flow diagrams discussed above need not be performed in any given or specified order. It is contemplated that additional blocks, steps, or functions may be added, some blocks, steps or functions may not be performed, blocks, steps, or functions may occur contemporaneously, and blocks, steps or functions from one flow diagram may be performed within another flow diagram.
Although only a few exemplary embodiments have been described in detail herein, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the embodiments of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the embodiments of the present disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover any and all such modifications, enhancements, and other embodiments that fall within the scope of the present invention. Thus, to the maximum extent allowed by law, the scope of the present invention is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.