The present disclosure generally relates to augmented reality.
In physical environments such as corporate meeting spaces, operational control rooms, and even various rooms in a home, there may be a number of display surfaces on which content may be displayed. Such display surfaces may be completely independent and of one or more different types, including for instance, telepresence units, televisions, monitors, and/or projection surfaces. It may be an inefficient, frustrating, and time-consuming experience to deal with content across multiple, fragmented display surfaces in such a physical environment.
So that the present disclosure may be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings. The appended drawings, however, illustrate only some example features of the present disclosure and are therefore not to be considered limiting, for the description may admit to other effective features.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the elements, items, stages, etc. of a given physical environment, system, user interface, method, etc.
There is provided, in accordance with some embodiments of the presently disclosed subject matter, a system comprising at least one camera adapted to capture at least one first image including one or more display surfaces, at least one touchscreen adapted to detect user input, and at least one processor adapted to display, simultaneously on the at least one touchscreen, an augmented reality user interface including the at least one first image, and at least one other item not captured by the at least one camera, the at least one other item including at least one of: one or more control items, or one or more content items, interpret the user input detected by the at least one touchscreen to include selection of a first item in the at least one first image, and a second content item, determine that the first item is a first display surface of the one or more display surfaces in the at least one first image, and cause the second content item to be moved to, or duplicated to, or moved from, or duplicated from, the first display surface.
Some embodiments of the subject matter may use an augmented reality user interface or a template user interface, to manipulate the layout of content across display surfaces and/or a touchscreen in a physical environment. Additionally or alternatively, the augmented reality user interface may be used for non-layout manipulation of content, as will be described in more detail below. The term augmented reality, as used herein, should be understood to also encompass technology (e.g. mixed reality) which includes augmented reality in combination with other aspect(s) (e.g. virtual reality).
Physical environment 100 includes display surfaces 145 and a portable computer 110. For example, display surfaces 145 may include:
Portable computer 110 may be a tablet computer, such as an iPad, a smartphone, a laptop, or any other suitable portable (i.e. mobile) computer. The term computer, as used herein, refers to any element that includes at least one processor, whether the element is portable or not. Portable computer 110 includes a camera (not shown) adapted to capture image(s), and a touchscreen 115 adapted to detect user input. The term user input, as used herein, may include any appropriate user gesture(s), such as one or more swipes (also referred to as slide(s)), one or more taps, one or more double-taps, one or more long-presses, one or more pinches, one or more reverse-pinches, one or more drag-and-drops, one or more drop-down menu selections, and/or one or more keyed inputs (e.g. via touchscreen keyboard).
Displayed on touchscreen 115 in
The image displayed on touchscreen 115 includes, inter alia, display surfaces 130 and 135. Display surface 130 is completely included in the image, whereas display surface 135 is only partially included in the image. A particular content item may be displayed on touchscreen 115, e.g. due to the camera capturing in the image a display surface (e.g. 135) displaying the particular content item, or due to the particular content item being one of content items 120 not captured by the camera that are displayed on touchscreen 115. Touchscreen 115 may detect user input, the user input being indicative, for instance, of the particular content item, and of a particular display surface 145 (e.g. display surface 130 or 135) included in the image. Alternatively, for example, the particular display surface 145 indicated by the user input may comprise any of the following:
Subsequent to the detection of the user input, the particular content item may be manipulated, for example, by moving or duplicating the particular content item, which may be one of content item(s) 120, to the particular display surface 145 or vice versa; by moving or duplicating the particular content item displayed on the particular display surface 145 to another display surface 145; by moving or duplicating the particular content item displayed on another display surface 145 to the particular display surface 145; and/or by any other content manipulation(s) that will be described in more detail below.
It will be appreciated that physical environment 100 may vary, depending on the embodiment, and may not necessarily be as depicted in
In stage 210, a shared environment state is set up, where the shared environment state includes a virtual model of physical environment 100 and origin point information. The virtual model may include a virtual camera. Possible embodiments of stage 210 will be discussed further below. However, for the sake of the current discussion, it is assumed that a virtual model is available for the remaining stages of method 200, so that a system which performs the remainder of method 200 may use the virtual model. Such a system may include touchscreen 115 of portable computer 110, a camera of portable computer 110 and additional elements. Possible elements of such a system will be discussed in more detail below with reference to
In stage 220, it is determined whether or not to manipulate content on one or more display surfaces 145 and/or on touchscreen 115 (
For example, the augmented reality user interface may provide an association, via the image, to display surfaces 145 physically in physical environment 100 which may be displaying content items, and the augmented reality user interface may further provide control items 125 (e.g. contextual) and/or content items 120, allowing the user to seemingly “touch” a content item at any distance and cause manipulation of the content item. The interpretation (e.g. in real-time) of the user input may be achieved using the previously constructed virtual model.
As mentioned above, the user input is interpreted in stage 260 to include selection of an item in the image. In stage 270, it is determined that the selected item is a particular display surface 145 of the one or more display surfaces 145 in the image. That is, the two dimensional representation of the particular display surface 145 is captured in the image displayed on touchscreen 115 of portable computer 110 and the three dimensional physical entity of the particular display surface 145 exists in physical environment 100. For example, the particular display surface 145 that is being selected may be determined based on the position on the touchscreen that is touched during the user input. When the user touches a position on touchscreen 115 of portable computer 110, it may be determined, using a virtual model of physical environment 100, if the user “touched” a particular display surface 145 physically in physical environment 100. Such a determination may be made in the virtual model via a process called raycasting, where the ray is representative of user input, or via any other appropriate process. With raycasting, a ray is sent from a virtual camera in the virtual model in the direction specified by the touch position. If the ray intersects with an object (e.g. a virtual display surface) in the virtual model, it may be concluded that the user has “touched” the object, and based on the type of user input, the appropriate manipulation of the content item may be performed, as will be explained in more detail below with reference to stage 290.
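For illustrative purposes only, a simplified raycasting computation is sketched below in Python. The pinhole camera model, the field-of-view parameter, and the representation of each virtual display surface as a centered rectangle with orthonormal axes are assumptions made for this sketch; an actual implementation may instead rely on the raycasting facilities of a three dimensional engine.

```python
# Minimal raycasting sketch: a ray from the virtual camera through the touch position
# is tested against rectangular virtual display surfaces in the virtual model.
from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualSurface:
    surface_id: str
    center: np.ndarray   # 3D center of the virtual display surface
    u_axis: np.ndarray   # unit vector along the surface width
    v_axis: np.ndarray   # unit vector along the surface height
    width: float
    height: float

def touch_to_ray(touch_xy, screen_wh, cam_pos, cam_rot, fov_deg=60.0):
    """Convert a touch position on the touchscreen into a world-space ray (assumed pinhole model)."""
    x = (2.0 * touch_xy[0] / screen_wh[0]) - 1.0      # normalized device x in [-1, 1]
    y = 1.0 - (2.0 * touch_xy[1] / screen_wh[1])      # normalized device y in [-1, 1]
    tan_half = np.tan(np.radians(fov_deg) / 2.0)
    aspect = screen_wh[0] / screen_wh[1]
    dir_cam = np.array([x * tan_half * aspect, y * tan_half, -1.0])
    dir_world = cam_rot @ dir_cam                     # rotate into the virtual model's frame
    return cam_pos, dir_world / np.linalg.norm(dir_world)

def raycast(origin, direction, surfaces):
    """Return the id of the nearest virtual surface intersected by the ray, or None."""
    hit, best_t = None, np.inf
    for s in surfaces:
        normal = np.cross(s.u_axis, s.v_axis)
        denom = direction @ normal
        if abs(denom) < 1e-6:
            continue                                  # ray parallel to the surface plane
        t = ((s.center - origin) @ normal) / denom
        if t <= 0 or t >= best_t:
            continue                                  # behind the camera, or a farther hit
        p = origin + t * direction
        if abs((p - s.center) @ s.u_axis) <= s.width / 2 and \
           abs((p - s.center) @ s.v_axis) <= s.height / 2:
            hit, best_t = s.surface_id, t             # touch "lands" on this surface
    return hit
```

If raycast returns the identifier of a virtual display surface, it may be concluded, as described above, that the user has "touched" the corresponding physical display surface 145.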
An example of raycasting will now be discussed with reference to
Referring to
Virtual spatial representation 300 is included in a virtual model of physical environment 100 (
It will be appreciated that virtual spatial representation 300 may vary depending on the embodiment, and may not necessarily be as depicted in
Referring back to method 200 of
In some embodiments, certain control item(s) 125, such as contextual control items may be invoked, depending on the content item which is interpreted as being selected in stage 260. Such invoked control items 125 may be used for notification to the user (e.g. regarding the content item) and/or for user input. For example, confirmation/non-confirmation control items 125 may be invoked to confirm/not-confirm layout manipulation of the content item (see, for example, description of items 8254 and 8255 with reference to
In embodiments where the augmented reality user interface is not used, and templates are used instead, stage 230 and stages 250 to 270 may be omitted.
In stage 280 the virtual model may be updated, at any appropriate point in time after the virtual model was constructed, e.g. after performance of stage 240 or 270. For example, if and when the camera physically on portable computer 110 moves in physical environment 100 (
In stage 290, content may be caused to be manipulated in physical environment 100. For example, causing the manipulation of content may include causing the manipulation of the layout (or in other words the positioning) of content in accordance with the template interpreted in stage 240 as having been selected. As another example, causing the manipulation of content may include causing the manipulation of a content item interpreted in stage 260 as having been selected using the augmented reality user interface. In the latter example, causing such a manipulation may include causing a content item to move from or to, or causing the content item to be duplicated from or to the display surface determined in stage 270. More broadly, in the latter example, causing the manipulation of the content item may include causing any of the following:
In some embodiments, causing the manipulation of content may include operation(s) such as generation of content rendering instruction(s), provision of the content rendering instruction(s) and/or execution of the content rendering instruction(s). Content rendering instructions are instructions regarding the rendering (e.g. displaying) of content. The content rendering instructions may be executed with respect to the content and with respect to touchscreen 115 and/or display surface(s) 145. It is noted that although the terms moving (transferring), mirroring (duplicating), and deleting are used herein, such terms are used to reflect what appears to be occurring to the user. For example, moving a content item from a source (e.g. touchscreen 115 of portable computer 110 or a first display surface 145) to a destination (e.g. a second display surface 145 or touchscreen 115 of portable computer 110) may be achieved by the usage of content rendering instructions to stop displaying the content item at the source, and begin displaying the content item at the destination, or may be achieved in any other appropriate manner which appears to the user as a transfer of the content item. Deleting a content item, for example, may be achieved by the usage of content rendering instructions to stop displaying the content item at the source, or may be achieved in any other appropriate manner which appears to the user as a deletion of the content item. Whether or not the content item is accessible for display at the source (although not being displayed at the source) after the transfer or deletion may vary depending on the embodiment. Duplicating a content item at a destination may be achieved, for example, by the usage of content rendering instructions to begin displaying the content item at the destination, or may be achieved in any other appropriate manner which appears to the user as a duplication of the content item. Whether or not the content item was accessible for display at the destination (although not being displayed at the destination) prior to the transfer or duplication may vary depending on the embodiment.
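By way of a non-limiting sketch, the Python below expresses moving, duplicating, and deleting as lists of content rendering instructions; the instruction names and fields shown are illustrative assumptions rather than a prescribed format.

```python
# Hypothetical content rendering instructions, expressed as simple dictionaries.
def move_instructions(content_id, source_id, destination_id):
    """A 'move' appears to the user as the item leaving the source and appearing at the destination."""
    return [
        {"op": "stop_display", "content": content_id, "surface": source_id},
        {"op": "begin_display", "content": content_id, "surface": destination_id},
    ]

def duplicate_instructions(content_id, destination_id):
    """A 'duplicate' only requires the item to begin displaying at the destination."""
    return [{"op": "begin_display", "content": content_id, "surface": destination_id}]

def delete_instructions(content_id, source_id):
    """A 'delete' only requires the item to stop displaying at the source."""
    return [{"op": "stop_display", "content": content_id, "surface": source_id}]
```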
Stage 290 may additionally or alternatively be achieved by bridging user input on portable computer 110 and content manipulation in physical environment 100. Such bridging may rely, for example, on communication protocols such as WebSockets over WiFi. In some embodiments, the bridge between user input and content positioning in physical environment 100 may be achieved by first bridging user input and content arrangement in the virtual model of physical environment 100. The virtual content arrangement may then be reflected (e.g. in real-time) on display surface(s) 145 physically in physical environment 100. For instance, the conversion of user input into suitable content rendering instructions may rely on the virtual model taking into account any of the following: spatial characteristics of the available display surfaces 145 (e.g. absolute positions, dimensions (i.e. sizes), orientations, relative positions, etc.), non-spatial characteristics of the available display surfaces 145 (e.g. resolutions, color profiles, processing powers, networking capabilities, etc.), the position of the user (e.g. by proxy of position of the camera of portable computer 110) relative to such display surfaces 145 in physical environment 100, etc.
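One possible, non-limiting sketch of such bridging is shown below in Python, using the third-party websockets package to deliver content rendering instructions to a display surface endpoint; the endpoint URL and message format are assumptions made for illustration.

```python
# Sketch of bridging user input to content manipulation: pushing rendering
# instructions to a display surface endpoint over WebSockets (e.g. over WiFi).
import asyncio
import json
import websockets  # third-party package: pip install websockets

async def send_instructions(surface_url, instructions):
    """Deliver a batch of content rendering instructions to one display surface endpoint."""
    async with websockets.connect(surface_url) as ws:
        await ws.send(json.dumps({"instructions": instructions}))
        ack = await ws.recv()          # wait for the display surface to acknowledge
        return json.loads(ack)

# Hypothetical usage, moving a content item from the touchscreen to display surface 130:
# asyncio.run(send_instructions("ws://display-130.local:8765",
#                               move_instructions("item-42", "touchscreen-115", "surface-130")))
```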
In some embodiments, method 200 may include more, fewer, and/or different stages than illustrated in
Certain embodiments of any of stages 250, 260, 270 and 290 will now be described with reference to
The user may desire to insert a content item onto a particular display surface 145 (e.g. display surface 130) of physical environment 100 (
Refer to
For example, content item 6201 (e.g. relating to a basketball game) that is to be selected by the user (in
Referring to
Consequent to drag-and-drop gesture 760 shown in
Refer to
In
In
A camera of portable computer 110 captures an image, inter alia, of display surface 135 displaying a content item 1070. In
As the field of view of the camera of portable computer 110 changes, the image captured by the camera and displayed on touchscreen 115 of portable computer 110 changes as well. The user input may be interpreted as continuing to include selection of content item 1070, and may be interpreted to further include selection of a destination display surface 145 (e.g. interim destination display surface 145 and/or final destination display surface 145) in the changed image, for content item 1070.
Referring to
In
User input by way of control items 1695 may change content items 1690 displayed on touchscreen 115. Optionally such user input may also change content item 1570 displayed on display surface 140 (thereby duplicating content displayed on touchscreen 115 to display surface 140). For example, the user may change the players displayed on touchscreen 115, and the player changes may consequently also be displayed on display surface 140.
Additionally or alternatively, in an augmented reality user interface, an image may be displayed on touchscreen 115. In such a user interface, one or more of content item(s) 1690 and control item(s) that are shown in
In
Additionally or alternatively, due to user input by way of suitable examples of content control items 1825, two dimensional versus three dimensional displaying, displaying suitable for a horizontal display surface 145 (e.g. any of tables 137) versus displaying suitable for a vertical display surface 145 (e.g. wall 150), and/or any other appropriate displaying properties may be toggled for content item 1770.
A discussion of templates now follows. Referring again to stage 230 of method 200 of
User input (e.g. a tap on an icon for a given template, included in a template user interface displaying on touchscreen 115 of portable computer 110) may be interpreted as including the selection of the given template in stage 240. The virtual model is optionally updated in stage 280. In stage 290, the positioning (i.e. layout) of content may be caused to be manipulated in physical environment 100 as defined by the given template. For example, the content positioning may be in accordance with the parameters of the virtual model. As part of the template definition, content may have been tagged with metadata such as priority, relationship, and information density. The metadata describing the content may be used as the basis for the content positioning decisions; the feasibility and optimization of positioning may be based on the spatial (e.g. absolute positions, sizes, orientations, relative positions, etc.) and non-spatial (e.g. resolutions, processing powers, color profiles, networking capabilities, etc.) characteristics of display surfaces 145, as contained in the virtual model. Using the content layout in the virtual model, that layout may be reflected, e.g. in real-time, on display surfaces 145 in physical environment 100.
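A greatly simplified sketch of such template-driven positioning is shown below in Python; the metadata fields and the greedy assignment of higher-priority content to larger, higher-resolution display surfaces are assumptions made for illustration.

```python
# Greedy template layout sketch: higher-priority content is assigned to larger,
# higher-resolution display surfaces recorded in the virtual model.
from dataclasses import dataclass

@dataclass
class ContentMeta:
    content_id: str
    priority: int          # template metadata: higher means more important
    info_density: str      # e.g. "high" or "low"

@dataclass
class SurfaceInfo:
    surface_id: str
    area: float            # spatial characteristic (width * height)
    resolution: tuple      # non-spatial characteristic (pixels wide, pixels high)

def layout_by_template(contents, surfaces):
    """Assign each content item to a display surface based on priority and surface characteristics."""
    ranked_content = sorted(contents, key=lambda c: c.priority, reverse=True)
    ranked_surfaces = sorted(surfaces,
                             key=lambda s: (s.area, s.resolution[0] * s.resolution[1]),
                             reverse=True)
    # zip() truncates if there are more content items than display surfaces.
    return {c.content_id: s.surface_id for c, s in zip(ranked_content, ranked_surfaces)}
```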
In some embodiments, one of such templates may serve as a starting point for content layout, e.g. prior to performance of stage 220 and possible operations with respect to the augmented reality user interface (e.g. with reference to any of stages 240 to 290).
In
In
In
Although a shared environment state may be set up in any appropriate manner, for illustrative purposes, some embodiments of setting up a shared environment state will now be presented, with reference to
Referring again to method 200 of
In stage 202, an origin point (0, 0, 0) may be established to be used to anchor spatial characteristics of physical environment 100 in the virtual model. The origin (0, 0, 0) may be an arbitrary three-dimensional point. The locations of virtual spatial representations of objects in virtual spatial representation 300 (
In stage 204, physical environment 100 may be scanned, in order to determine characteristics of physical environment 100, including display surfaces 145. Spatial characteristics of physical environment 100 may be determined, and optionally non-spatial/non-geometric characteristics of physical environment 100 may be determined. Examples of spatial characteristics for display surfaces 145 may include absolute positions, sizes, orientations, relative positions, etc. Examples of non-spatial/non-geometric characteristics for display surfaces 145 may include resolutions, networking capabilities, processing powers, color profiles, etc.
a) via manual calibration: Memory 2430 for the shared environment state may provide the origin point (if determined in stage 202 of
b) via automated discovery: network-enabled (e.g. WiFi, Bluetooth, etc.) display surfaces 145 such as a display surface 24512 may broadcast the non-spatial capabilities (e.g. resolutions, networking capabilities, processing powers, color profiles, and/or other non-spatial characteristics), dimensions (also referred to herein as sizes), orientations, absolute positions, relative positions, etc., of display surfaces 145 in physical environment 100, through a location service (e.g. a CMX access point 2420 to an external system 2410). For example, display surfaces 145, such as display surface 24512, may be Internet of things (IoT) devices with IoT connectivity. In accordance with automated discovery, processor 2470 (e.g. included in portable computer 110 or in another computer which may or may not also include capturer 2450), knowing the position and orientation thereof in physical environment 100, may infer the positions and orientations of display surfaces 145 relative to processor 2470 using three dimensional math (see the sketch following this list). For example, processor 2470 may be located at the established origin point or may be able to infer the position thereof relative to the established origin point.
c) via a hybrid combination of manual calibration and automated discovery
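For illustrative purposes only, the three dimensional inference referred to in option (b) might resemble the following Python sketch, in which the processor's known pose in the origin frame is used to express a broadcast surface pose relative to the processor; the rotation-matrix representation of orientation is an assumption.

```python
# Sketch: expressing a discovered surface's absolute pose relative to the processor's own pose.
import numpy as np

def relative_pose(processor_pos, processor_rot, surface_pos, surface_rot):
    """
    processor_pos, surface_pos: 3-vectors in the shared origin frame.
    processor_rot, surface_rot: 3x3 rotation matrices (orientations in the origin frame).
    Returns the surface position and orientation expressed in the processor's own frame.
    """
    rel_position = processor_rot.T @ (surface_pos - processor_pos)
    rel_orientation = processor_rot.T @ surface_rot
    return rel_position, rel_orientation

# Hypothetical usage: a surface broadcast at (3, 0, 1) meters, processor at the origin.
# rel_p, rel_r = relative_pose(np.zeros(3), np.eye(3), np.array([3.0, 0.0, 1.0]), np.eye(3))
```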
Stage 204, including scanning of physical environment 100, may be performed once on first run (e.g. in the case of a static physical environment 100), or more than once (e.g. continually) so as to take into account any changes in physical environment 100. If performed more than once, then in subsequent times, an updating of the virtual model in stage 280 may follow.
In stage 206, a virtual model of physical environment 100 may be constructed. The virtual model may include virtual spatial representation 300. Optionally, the virtual model may also include any gathered non-spatial data. The shared environment state may include the virtual model combined with the origin point information.
For example, the information gathered in stage 204 may be used (e.g. by processor 2470) to construct the virtual model. Spatial and optionally non-spatial data regarding physical environment 100, including data regarding display surfaces 145 in physical environment 100, may be used. Standard three-dimensional modeling tools such as Unity® (e.g. executed by processor 2470) may use the gathered spatial data to construct the virtual model of physical environment 100, e.g. including virtual spatial representation 300. A three dimensional engine, such as Unity, may also position the virtual camera within the virtual model, e.g. at the established origin point.
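While a three dimensional engine such as Unity would typically provide the scene-graph and virtual camera facilities for this stage, the following engine-agnostic Python sketch illustrates, under assumed field names, how the spatial and non-spatial data gathered in stage 204 might be combined with the origin point information into a virtual model that includes a virtual camera.

```python
# Engine-agnostic sketch of constructing the virtual model (stage 206) from scanned data.
from dataclasses import dataclass, field

@dataclass
class VirtualDisplaySurface:
    surface_id: str
    position: tuple                      # (x, y, z) relative to the origin point, e.g. in meters
    size: tuple                          # (width, height) in meters
    orientation: tuple                   # e.g. a unit normal vector
    resolution: tuple = (1920, 1080)     # non-spatial characteristics follow
    color_profile: str = "sRGB"
    networking: tuple = ("WiFi",)

@dataclass
class VirtualModel:
    origin: tuple                                    # established in stage 202
    camera_position: tuple                           # virtual camera, e.g. placed at the origin
    camera_orientation: tuple
    surfaces: list = field(default_factory=list)     # the virtual spatial representation

def construct_virtual_model(origin, scanned_surfaces):
    """Build the shared environment state's virtual model from stage 204 scan results."""
    model = VirtualModel(origin=origin,
                         camera_position=origin,
                         camera_orientation=(0.0, 0.0, -1.0))
    for record in scanned_surfaces:                  # each record is a VirtualDisplaySurface
        model.surfaces.append(record)
    return model
```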
When the virtual model includes non-spatial characteristics in addition to the spatial characteristics of physical environment 100, the spatial and non-spatial characteristics may be stored in a same shared environment state memory or the non-spatial characteristics may be stored and accessible in a complementary storage location to the storage location of the spatial characteristics (e.g. the non-spatial characteristics may be stored in a database in a different shared environment state memory than the shared environment state memory which includes the spatial characteristics).
Depending on the embodiment, location service 2410, memory 2430, capturer 2450, and/or processor 2470 of
In some embodiments, any of stages 202, 204 and 206 may be repeated, if appropriate. For example, any of stages 202, 204 and 206 may be repeated if physical environment 100 has changed due to any of display surfaces 145 having changed. For example, a collection of display surfaces 145 may be changed by adding one or more display surfaces 145, removing one or more display surfaces 145, replacing one or more display surfaces 145, upgrading and/or otherwise changing the spatial and/or non-spatial characteristics of one or more display surfaces 145, etc.
System 2500 includes one or more cameras 2510 adapted to capture image(s), where the image(s) may include one or more display surfaces 145. System 2500 further includes one or more touchscreens 2520 adapted to detect user input. System 2500 further includes one or more processors 2530 adapted to display simultaneously on touchscreen(s) 2520 an augmented reality user interface which includes image(s), and includes item(s) not captured by camera(s) 2510. Processor(s) 2530 may include, for example, any of the following: graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), central processing unit(s) (CPU(s)), etc. Processor(s) 2530 is further adapted to interpret the user input detected by touchscreen(s) 2520 to include selection of an item in the image(s), and a content item; determine that the item in the image(s) is a display surface 145 of the display surface(s) 145 in the image(s); and cause the content item to be moved to, or duplicated to, or moved from, or duplicated from, the display surface 145, and/or cause any other manipulation of the layout of the content item. Processor(s) 2530 is optionally also adapted to interpret user input detected by touchscreen(s) 2520 to include selection of a template and to cause layout of content to be manipulated in accordance with the template. Processor(s) 2530 is optionally also adapted to interpret user input detected by touchscreen(s) 2520 to be indicative of a manipulation not necessarily relating to content layout. For example, the user input may be interpreted as relating to toggling one or more displaying properties for a content item, etc. Processor(s) 2530 may be adapted to then cause such a manipulation.
System 2500 further includes one or more memories 2540 for storing software which may be executed by processor(s) 2530 in order to perform one or more function(s) described herein, such as displaying a user interface on touchscreen(s) 2520, interpretation of detected user input, determination of a display surface, and causing manipulation of content (e.g. causing moving, duplicating, deleting, positioning in accordance with a template, resizing, display property toggling, etc.). Software may include firmware, if appropriate. Memory/ies 2540 may further store data such as the shared environment state, etc. Memory/ies 2540 may include, for instance, any of the following: volatile, non-volatile, erasable, non-erasable, removable, non-removable, writeable, re-writeable memory, for short term storing, for long term storing, etc., such as registers, read-only memory (ROM), static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, embedded DRAM, etc.
System 2500 further includes networking circuitry 2550 adapted to communicate with elements external to system 2500. For example, networking circuitry 2550 may be used to communicate with an external system in order to receive or access the shared environment state (e.g. if set up by the external system), or to receive an origin point and/or display surface characteristics (e.g. when setting up the shared environment state). Networking circuitry 2550 may additionally or alternatively be used to communicate with elements external to system 2500, unrelated to the setting up of the shared environment state, e.g. when causing the manipulation of content, when invoking contextual control items, etc. Networking circuitry 2550 may include any appropriate networking circuitry for communication. For instance, networking circuitry 2550 may include antenna(s) and transmitter(s)/receiver(s) for wireless connectivity.
System 2500 optionally also includes additional capturer(s) 2560 in addition to camera(s) 2510, for instance if system 2500 is adapted to set up the shared environment state (stage 210), but scanning of physical environment 100 (e.g. in stage 202 and/or 204) in order to set up the shared environment state is performed by capturer(s) 2560 that are not cameras 2510. In other embodiments, camera(s) 2510 may be adapted to scan in order to set up the shared environment state; or system 2500 may not be adapted to set up the shared environment state, and consequently additional capturer(s) 2560 may be omitted from system 2500.
Depending on the embodiment, system 2500 may perform any of stages 220 to 290. System 2500 or an external system may perform any of stages 202 to 206 of stage 210. If stage 210 is performed by an external system, system 2500 may be adapted to receive or access the shared environment state set up by the external system, e.g. including to receive or access the virtual model, and/or to receive or access origin point data.
In some embodiments, system 2500 may include portable computer 110, whereas in other embodiments system 2500 may include portable computer 110 and other element(s). Portable computer 110 may include camera(s) 2510 and touchscreen(s) 2520. In the former embodiments, processors 2530, memory/ies 2540 and networking circuitry 2550 may also be included in portable computer 110. In the latter embodiments, any of processors 2530, memory/ies 2540 and/or networking circuitry 2550 may be distributed between portable computer 110 and the other element(s), the other element(s) including computer(s). The networking circuitry 2550 in the latter embodiments may be adapted for communication between portable computer 110 and the other element(s), in addition to or instead of being adapted for communication between system 2500 and elements external to system 2500. The other element(s), in the latter embodiments, which may be included in system 2500 may be located in proximity to portable computer 110, or remotely from portable computer 110 (e.g. in a cloud). In the latter embodiments, the functionality of processor(s) 2530 may be distributed in any appropriate manner, in order to enable processor(s) 2530 to collectively perform the functionality. For example, in the latter embodiments, processor(s) 2530 in portable computer 110 may be adapted to display the user interface(s) described herein on touchscreen(s) 2520. In order for processor(s) 2530 in the other element(s) to interpret the user input detected by touchscreen(s) 2520 to include selection of an item in the image(s), and a content item; determine that the item in the image(s) is a display surface 145 in the image(s); and cause the content item to be moved to, or duplicated to, or moved from, or duplicated from, the display surface 145, processor(s) 2530 in portable computer 110 may provide to the processor(s) 2530 in the other element(s), via networking circuitry 2550, an indication of the location(s) on touchscreen(s) 2520 detected as touched by the user. Processor(s) 2530 in the other element(s) may use the indication to interpret the user input to include selection of an item in the image(s) and a content item, determine the display surface in the image(s) and cause the content item to be moved or duplicated. Alternatively for example, in the latter embodiments, processor(s) 2530 in portable computer 110 may interpret the detected user input to include the selection of the content item and may provide to processor(s) 2530 in the other element(s), via networking circuitry 2550, an indication of which content item was selected, and an indication of the location(s) on touchscreen(s) 2520 detected as touched by the user. Processor(s) 2530 in the other element(s) may use the indication to interpret selection of an item in the image(s), determine the display surface in the image(s) and cause the content item to be moved or duplicated.
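As a non-limiting sketch of the indication provided by processor(s) 2530 in portable computer 110 to processor(s) 2530 in the other element(s), the detected touch location(s) might be serialized as in the following Python example; the message fields are illustrative assumptions.

```python
# Hypothetical payload sent from the portable computer's processor(s) to remote processor(s).
import json
import time

def touch_indication(touch_points, selected_content_id=None):
    """Package detected touch location(s), and optionally an already-interpreted content selection."""
    return json.dumps({
        "timestamp": time.time(),
        "touch_points": [{"x": x, "y": y} for (x, y) in touch_points],
        # Present only when the portable computer itself interpreted the content selection.
        "selected_content": selected_content_id,
    })

# Example: a drag-and-drop reported as its start and end touch positions.
# payload = touch_indication([(120, 640), (880, 210)], selected_content_id="item-42")
```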
Advantages of the subject matter may include any of the following. First, users need not independently configure the content on each display surface 145, or rely on proxy control systems such as hardware and software remote controls, unidirectional mirroring (e.g. screen sharing), or rigid video wall management software. Most proxy control systems enable one-to-one interactions between a control device such as portable computer 110 and a particular display surface 145. Selecting a particular display surface 145 in such proxy control systems may include choosing the name/ID of the particular display surface 145 from a list or other abstract representation. Even under proxy control systems where all display surfaces 145 are connected and orchestrated (e.g. video wall), layout controls may remain dedicated to a single static, non-flexible unit (in this case, the single unit is a pre-defined cluster of display surfaces 145). Second, placement control items 125 and content control items 125 may be included in the augmented reality user interface, which has a direct connection to physical environment 100 itself. Therefore, user input with respect to the augmented reality user interface may cause manipulation of content displayed on touchscreen 115 and/or on display surfaces 145. Such an experience may be direct, concrete, and/or substantial for a user. Third, such an experience may result in a greater willingness of a user to adopt technologies such as “connected” collaboration, IoT technology, etc.; and/or such an experience may result in time savings and critical efficiencies, e.g. in professional environments such as enterprise meeting spaces or operational control rooms. For example, IoT connectivity, the scanning of a three dimensional physical environment (e.g. physical environment 100) and subsequent three dimensional virtual spatial representation (e.g. 300) in the virtual model, spatial awareness, wireless connectivity, and augmented reality, may be used to enhance the experience. Other advantages may be apparent from the description herein.
It will be appreciated that the subject matter contemplates, for example, a computer program product comprising a computer readable medium having computer readable program code embodied therein for executing one or more methods disclosed herein; and/or for executing one or more parts of method(s) disclosed herein, e.g. with reference to
In the above description of example embodiments, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. However, it will be appreciated by those skilled in the art that some examples of the subject matter may be practiced without these specific details. In other instances, well-known features have not been described in detail so as not to obscure the subject matter.
It will also be appreciated that various features of the subject matter which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the subject matter which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination.
It will further be appreciated by persons skilled in the art that the presently disclosed subject matter is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the subject matter is defined by the appended claims and equivalents thereof.
This application claims priority to U.S. Provisional Patent Application No. 62/614,508, filed on Jan. 8, 2018, entitled “Manipulation Of Content On Display Surfaces Via Augmented Reality,” the content of which is incorporated herein by reference in its entirety.