This relates generally to delivering recommendations, including, but not limited to, electronic devices that enable the delivery of optimal recommendations in computer-generated reality environments.
A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.
In contrast, a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands).
A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects.
Examples of CGR include virtual reality and mixed reality.
A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment.
In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end.
In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground.
Examples of mixed realities include augmented reality and augmented virtuality.
An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment.
An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be a representative but not photorealistic version of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.
An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include smartphones, tablets, desktop/laptop computers, head-mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback and/or cameras having hand tracking and/or other body pose estimation abilities).
A head-mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head-mounted system may be a head-mounted enclosure (HME) configured to accept an external opaque display (e.g., a smartphone). The head-mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head-mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one implementation, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
CGR technology has the potential to be an integral part of a user's everyday life. Devices that implement CGR can provide information to the user pertaining to many aspects of life, from navigation, to weather, to architecture, to games, and much more. However, the information provided to the user can be overwhelming and may not pertain to the user's interests.
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and a non-transitory memory. The method includes obtaining pass-through image data characterizing a field of view captured by an image sensor. The method also includes determining whether a recognized subject in the pass-through image data satisfies a confidence score threshold associated with a user-specific recommendation profile. The method further includes generating one or more computer-generated reality (CGR) content items associated with the recognized subject in response to determining that the recognized subject in the pass-through image data satisfies the confidence score threshold. The method additionally includes compositing the pass-through image data with the one or more CGR content items, where the one or more CGR content items are proximate to the recognized subject in the field of view.
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and a non-transitory memory. The method includes obtaining a first set of subjects associated with a first pose of the device. The method also includes determining likelihood estimate values for each of the first set of subjects based on user context and the first pose. The method further includes determining whether at least one likelihood estimate value for at least one respective subject in the first set of subjects exceeds a confidence threshold. The method additionally includes generating recommended content or actions associated with the at least one respective subject using at least one classifier associated with the at least one respective subject and the user context in response to determining that the at least one likelihood estimate value exceeds the confidence threshold.
In accordance with some embodiments, an electronic device includes a display, one or more input devices, one or more processors, non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of the operations of any of the methods described herein. In accordance with some embodiments, a non-transitory computer readable storage medium has stored therein instructions which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to perform or cause performance of the operations of any of the methods described herein. In accordance with some embodiments, an electronic device includes: a display, one or more input devices; and means for performing or causing performance of the operations of any of the methods described herein. In accordance with some embodiments, an information processing apparatus, for use in an electronic device with a display and one or more input devices, includes means for performing or causing performance of the operations of any of the methods described herein.
For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
In embodiments described below, pass-through image data characterizing a field of view captured by an image sensor is composited with one or more computer-generated reality (CGR) content items. The one or more CGR content items are associated with a recognized subject in the pass-through image data and the recognized subject in the pass-through image data satisfies a confidence score threshold. In the composited image, the one or more CGR content items are placed proximate to the recognized subject in the field of view. Accordingly, the embodiments described below provide a seamless integration of user-specific content. The user-specific content is generated and displayed to a user based on likelihoods of user interests. For example, a cupcake recipe or nutritional information for a cupcake is generated and displayed to the user when a cupcake is recognized within the user's field of view. As such, the recommended CGR content items generated according to various embodiments described herein allow the user to remain immersed in their experience without having to manually enter search queries or indicate preferences. The seamless integration also reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In embodiments described below, a set of subjects associated with a pose of a device is obtained and likelihood estimate values for each of the set of subjects are determined based on user context and the pose. Recommended content or actions associated with at least one respective subject in the set of subjects are generated. The recommended content or actions are generated using at least one classifier associated with the at least one respective subject in response to determining that at least one likelihood estimate value for the at least one respective subject in the set of subjects exceeds a confidence threshold. As such, the embodiments described below provide a process for generating recommended CGR content based on how likely a user is to be interested in a subject. The content recommendation according to various embodiments described herein thus provides a seamless user experience that requires less time and fewer user inputs when looking for information or a next action. This also reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the CGR device 104 corresponds to a tablet or mobile phone. In various implementations, the CGR device 104 corresponds to a head-mounted system, such as a head-mounted device (HMD) or a head-mounted enclosure (HME) having a tablet or mobile phone inserted therein. In some implementations, the CGR device 104 is configured to present CGR content to a user. In some implementations, the CGR device 104 includes a suitable combination of software, firmware, and/or hardware.
According to some implementations, the CGR device 104 presents, via a display 122, CGR content to the user while the user is virtually and/or physically present within a scene 106. In some implementations, the CGR device 104 is configured to present virtual content (e.g., the virtual cylinder 109) and to enable video pass-through of the scene 106 (e.g., including a representation 117 of the table 107) on a display. In some implementations, the CGR device 104 is configured to present virtual content and to enable optical see-through of the scene 106.
In some implementations, the user holds the CGR device 104 in his/her hand(s). In some implementations, the user wears the CGR device 104 on his/her head. As such, the CGR device 104 includes one or more CGR displays provided to display the CGR content. For example, the CGR device 104 encloses the field-of-view of the user. In some implementations, the CGR device 104 is replaced with a CGR chamber, enclosure, or room configured to present CGR content in which the user does not wear the CGR device 104.
In some implementations, the controller 102 is configured to manage and coordinate presentation of CGR content for the user. In some implementations, the controller 102 includes a suitable combination of software, firmware, and/or hardware. In some implementations, the controller 102 is a computing device that is local or remote relative to the scene 106. For example, the controller 102 is a local server located within the scene 106. In another example, the controller 102 is a remote server located outside of the scene 106 (e.g., a cloud server, central server, etc.). In some implementations, the controller 102 is communicatively coupled with the CGR device 104 via one or more wired or wireless communication channels 144 (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In some implementations, the functionalities of the controller 102 are provided by and/or combined with the CGR device 104.
In some embodiments, the table classifier is selected based on weights assigned to a cluster of classifiers. In some embodiments, the classifiers correspond to entries in a library of objects/subjects, e.g., shapes, numbers, animals, foods, plants, people, dogs, squares, flowers, lighting, or the like. Using one or more classifiers, a subject can be recognized in the image data. During the subject recognition, weights are assigned to different classifiers, and one or more classifiers can be selected based on the weight associated with each classifier. The selected classifier(s) can then be used for recognizing a subject in the image data.
For example, based on the gaze proximate to the region 262, weights are assigned to the table classifier, a cupcake classifier, and a book classifier. As the gaze settles on the table surface, the weight assigned to the table classifier increases, while the weights assigned to the cupcake classifier and the book classifier decrease. Based on the weights assigned to the classifiers, the table classifier is selected for identifying the table subject 230 proximate to the gaze region 262. Having recognized the table 230, the device 104 renders the CGR content 260, such as a recommendation of a chair that may match the style of the table 230, adjacent to the table 230.
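For purposes of illustration only, a minimal sketch of such gaze-driven weighting follows. The additive update rule, the step size, and the classifier names are assumptions made for the sketch and are not details taken from this description.

```python
# Gaze-driven classifier weighting (illustrative sketch; the update rule and
# step size are assumptions, not specified by this description).

GAZE_STEP = 0.1  # weight shifted toward the classifier under the gaze each frame

def update_weights(weights, gazed_classifier):
    """Increase the weight of the classifier whose subject the gaze settles on,
    decrease the others, and renormalize so the weights sum to 1."""
    updated = {}
    for name, weight in weights.items():
        if name == gazed_classifier:
            updated[name] = weight + GAZE_STEP
        else:
            updated[name] = max(weight - GAZE_STEP / (len(weights) - 1), 0.0)
    total = sum(updated.values())
    return {name: value / total for name, value in updated.items()}

def select_classifier(weights):
    """Select the highest-weight classifier for recognizing the gazed subject."""
    return max(weights, key=weights.get)

# Example: the gaze settles on the table surface over several frames.
weights = {"table": 1 / 3, "cupcake": 1 / 3, "book": 1 / 3}
for _ in range(5):
    weights = update_weights(weights, "table")
print(select_classifier(weights))  # -> "table"
```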
In some embodiments, each of the likelihood estimate values is assigned an initial value, e.g., all likelihood estimate values are 0 or the likelihood estimate values are equally distributed.
In some embodiments, user context 505 is specified in a user-specific recommendation profile. In some embodiments, the user-specific recommendation profile includes user history, user-specific lists, user-enabled modules (e.g., career-specific or task-specific modules such as engine repair), and/or the like.
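One possible shape for such a profile is sketched below. The field names are hypothetical and merely mirror the items listed above; this description does not prescribe a particular schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical user-specific recommendation profile; the field names are
# assumptions that mirror the items listed above, not a required schema.
@dataclass
class UserRecommendationProfile:
    user_history: List[str] = field(default_factory=list)     # e.g., prior searches and orders
    user_lists: List[str] = field(default_factory=list)       # e.g., user-specific lists
    enabled_modules: List[str] = field(default_factory=list)  # e.g., "engine_repair"

profile = UserRecommendationProfile(
    user_history=["cupcake recipes", "veterinarian order"],
    enabled_modules=["engine_repair"],
)
```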
In some embodiments, an analyzer 520 includes a plurality of classifiers 522. In some embodiments, the plurality of classifiers 522 correspond to entries in a library of subjects, e.g., shapes, numbers, animals, foods, plants, people, etc. The classifiers are provided to a likelihood estimator 530 along with associated weights, e.g., a dog classifier for identifying a dog, etc.
Along with the inputs from the analyzer 520, the likelihood estimator 530 receives the image data and pose information from the scanner 510 and receives the user context 505. Based on the received information, the likelihood estimator 530 identifies a subject in the field of view that the user is most likely interested in and generates recommended CGR content items 560 for the user to view and/or interact with.
In some embodiments, cascaded caches 550-1, 550-2, 550-3 . . . 550-N are used to facilitate the subject identification and CGR content item recommendation. Subjects and the associated recommendations are stored in the cascaded caches in the order of weights. For example, during one iteration, the first cascaded cache 550-1 stores a subject with the lowest recommendation weight and the last cascaded cache 550-N stores a subject with the highest recommendation weight. As such, the first cascaded cache 550-1 includes information about the subject that is determined to be the least important or relevant to the user at this stage and the last cascaded cache 550-N includes information about the subject that is determined to be the most important or relevant to the user at this stage. During subsequent stages or iterations, the weights are updated and the subjects and their associated recommendations are reordered among the cascaded caches accordingly.
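A minimal sketch of the cascaded caches follows. The sort-and-reassign policy is an assumption; this description only states that subjects are stored in order of their recommendation weights.

```python
# Cascaded caches ordered by recommendation weight (illustrative sketch): the
# first cache holds the lowest-weight subject, the last cache the highest.
# The sort-and-reassign policy below is an assumption.

def assign_to_cascaded_caches(subject_weights, num_caches):
    """Place subjects into caches in ascending order of recommendation weight."""
    ranked = sorted(subject_weights.items(), key=lambda item: item[1])
    caches = [None] * num_caches
    for index, (subject, weight) in enumerate(ranked[:num_caches]):
        caches[index] = {"subject": subject, "weight": weight}
    return caches

# Iteration 1: the cupcake is currently the most relevant subject.
caches = assign_to_cascaded_caches({"book": 0.1, "table": 0.3, "cupcake": 0.6}, 3)
# A later iteration: updated weights promote the table to the last cache.
caches = assign_to_cascaded_caches({"book": 0.1, "cupcake": 0.2, "table": 0.7}, 3)
```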
In some embodiments, fine matching 540 is performed to fine-tune the results from the likelihood estimator 530. In some embodiments, the fine matching 540 is performed remotely (e.g., at a second device) to conserve computational resources of the local device. In such embodiments, an encoder 532 is used to reduce the vector dimensionality for efficient communication of the data to the remote source. Upon receiving the encoded data, a decoder 542 on the remote source decodes the data before fine grained matching is performed. In some embodiments, at the remote source, machine learning is applied across multiple users so that better recommendations can be generated for a particular user.
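As an illustration of the encoder/decoder pair, the sketch below uses a fixed random projection to reduce dimensionality before transmission. This description does not specify the encoding, so the projection, the dimensions, and the reconstruction are all assumptions.

```python
import random

# Hypothetical encoder/decoder for sending feature vectors to a remote
# fine-matching service. A fixed random projection stands in for whatever
# dimensionality reduction an actual implementation would use.
random.seed(0)
INPUT_DIM, CODE_DIM = 128, 16
PROJECTION = [[random.gauss(0, 1) for _ in range(INPUT_DIM)] for _ in range(CODE_DIM)]

def encode(feature_vector):
    """Reduce an INPUT_DIM feature vector to CODE_DIM values before transmission."""
    return [sum(w * x for w, x in zip(row, feature_vector)) for row in PROJECTION]

def decode(code):
    """Approximate reconstruction on the remote side via the transposed projection."""
    return [sum(PROJECTION[i][j] * code[i] for i in range(CODE_DIM))
            for j in range(INPUT_DIM)]

code = encode([0.5] * INPUT_DIM)   # 16 values sent instead of 128
approx = decode(code)              # remote side recovers an approximation for fine matching
```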
In some embodiments, the method 600 is performed by processing logic, including hardware, firmware, software, or a suitable combination thereof. In some embodiments, the method 600 is performed by one or more processors executing code, programs, or instructions stored in a non-transitory computer-readable storage medium (e.g., a non-transitory memory). Some operations in method 600 are, optionally, combined and/or the order of some operations is, optionally, changed. Briefly, the method 600 includes: obtaining pass-through image data characterizing a field of view captured by an image sensor; determining whether a recognized subject in the pass-through image data satisfies a confidence score threshold associated with a user-specific recommendation profile; generating one or more computer-generated reality (CGR) content items associated with the recognized subject in response to determining that the recognized subject in the pass-through image data satisfies the confidence score threshold; and compositing the pass-through image data with the one or more CGR content items, where the one or more CGR content items are proximate to the recognized subject in the field of view.
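Read as a skeleton, the method might look like the following sketch, in which every helper is a trivial stand-in for the corresponding module described later; the helper names, scores, and threshold value are assumptions and do not come from this description.

```python
# Skeleton of method 600 with trivial stand-ins; helper names, scores, and the
# threshold value are assumptions made for the sketch.

def recognize_subject(pass_through_frame, profile):
    # Stand-in: a real implementation would run gaze-weighted classifiers.
    score = 0.9 if "cupcake recipes" in profile["history"] else 0.4
    return "cupcake", score

def generate_cgr_content(subject, profile):
    # Stand-in: e.g., a recipe card or nutritional facts for a recognized cupcake.
    return [f"recipe card for {subject}", f"nutritional info for {subject}"]

def composite(pass_through_frame, cgr_items, near):
    # Stand-in: place the CGR content items proximate to the recognized subject.
    return {"frame": pass_through_frame, "overlays": {near: cgr_items}}

def method_600(pass_through_frame, profile, confidence_threshold):
    subject, score = recognize_subject(pass_through_frame, profile)   # block 604
    if score < confidence_threshold:
        return {"frame": pass_through_frame, "overlays": {}}          # nothing recommended
    items = generate_cgr_content(subject, profile)                    # content generation
    return composite(pass_through_frame, items, near=subject)         # block 624

result = method_600("frame-0", {"history": ["cupcake recipes"]}, confidence_threshold=0.5)
```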
The method 600 begins, at block 602, with the electronic device obtaining scene data. According to some embodiments, the device 104 or a component thereof (e.g., the image capture control module 850) obtains pass-through image data characterizing a field of view captured by an image sensor.
The method 600 continues, at block 604, with the electronic device determining whether a recognized subject in the pass-through image data satisfies a confidence score threshold associated with a user-specific recommendation profile. In other words, the device 104 or a component thereof (e.g., the subject recognition module 854) recognizes a subject in the pass-through image data and determines whether the recognized subject satisfies the confidence score threshold associated with the user-specific recommendation profile.
In some embodiments, the user-specific recommendation profile includes at least one of a context of a user interacting with the device, biometrics of the user, previous searches by the user, or a profile of the user. For example, the context of the user interacting with the device includes a recent order placed by the user from a veterinarian, a cupcake baker, etc. In another example, biometric sensors can be used to measure the biometrics of the user, e.g., elevated blood pressure and/or heart rate indicating the sadness or excitement the user experiences toward a subject. In still another example, the user-specific recommendation profile includes previous searches by the user and the associated actions taken, e.g., the user searched for cupcakes multiple times before but decided to say "no" to the cupcakes on all previous occasions. In yet another example, the metadata in the user profile can provide a priori information for assigning weights and/or likelihood estimate values.
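For instance, such a priori profile information could bias a classifier's confidence score before the threshold comparison. The sketch below does this with arbitrary adjustment values and field names that are assumptions, not values from this description.

```python
# Illustrative profile-adjusted confidence score. The 0.2/0.1 adjustments and
# the field names are arbitrary assumptions used only to show the idea.

def profile_adjusted_score(classifier_score, subject, profile):
    score = classifier_score
    if subject in profile.get("recent_orders", []):
        score += 0.2                       # e.g., a recent cupcake order
    if subject in profile.get("declined_subjects", []):
        score -= 0.1                       # e.g., searched before but always said "no"
    if profile.get("heart_rate_elevated", False):
        score += 0.1                       # biometric cue of excitement toward the subject
    return min(max(score, 0.0), 1.0)

score = profile_adjusted_score(
    0.55,
    "cupcake",
    {"recent_orders": ["cupcake"], "declined_subjects": [], "heart_rate_elevated": True},
)
```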
In some embodiments, the recognized subject in the pass-through image data is recognized by detecting a gaze at a region in the field of view as represented by block 606, obtaining a subset of the pass-through image data corresponding to the region as represented by block 608, and identifying the recognized subject based on the subset of the pass-through image data and a classifier as represented by block 610.
In some embodiments, the method 600 further continues, at block 612, with the electronic device assigning weights to classifiers based on the gaze, where each of the classifiers are associated with a subject in the gaze region, and adjusting the weights to the classifiers based on updates to the gaze. In some embodiments, the method 600 further continues, at block 614, with the electronic device selecting the classifier from the classifiers with a highest weight.
In some embodiments, as represented by block 616, the gaze region includes at least part of the recognized subject.
In some embodiments, as represented by block 620, the recognized subject includes multiple searchable elements, and each is associated with at least one classifier. For example, the picture frame 220 includes multiple searchable elements: the frame itself, the vase in the picture, and the flowers in the pictured vase. In order to differentiate these searchable elements and generate a CGR content item for an element that the user will most likely be interested in, content recommendations are fine-tuned as described above.
The method 600 continues, at block 624, with the electronic device compositing the pass-through image data with the one or more CGR content items. In some embodiments, the electronic device further renders the pass-through image data in the field of view with the one or more CGR content items displayed proximate to the recognized subject. In some other embodiments, the one or more CGR content items are displayed adjacent to the recognized subject according to the field of view of the user using the device. For example, in the case of CGR-enabled glasses, the camera with the image sensor and the user's optical train may be separate. As such, location(s) of the one or more CGR content items can be determined based on the field of view of the image sensor or the user. Alternatively, the fields of view of the image sensor and the user can be reconciled, e.g., one may overlay the other. In such embodiments, location(s) of the one or more CGR content items can be determined based on the fields of view of both the image sensor and the user.
For example, the device 104 or a component thereof (e.g., the CGR content rendering module 858) composites the pass-through image data with the one or more CGR content items and renders the composited result with the one or more CGR content items displayed proximate to the recognized subject in the field of view.
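A simple placement sketch follows. The affine camera-to-display mapping and the bounding-box anchoring are assumptions standing in for whatever calibration a see-through device actually performs.

```python
# Illustrative placement of a CGR content item adjacent to a recognized
# subject. The affine camera-to-display mapping is a simplifying assumption;
# real see-through glasses would calibrate the image sensor against the
# user's optical train.

def camera_to_display(point, scale=1.0, offset=(0.0, 0.0)):
    """Map a point from the image sensor's frame into the user's field of view."""
    x, y = point
    return (x * scale + offset[0], y * scale + offset[1])

def place_content_item(subject_bbox, margin=10):
    """Anchor the content item just to the right of the subject's bounding box."""
    x, y, width, height = subject_bbox
    anchor_in_camera = (x + width + margin, y)
    return camera_to_display(anchor_in_camera)

item_position = place_content_item((120, 80, 64, 48))  # bbox of the recognized subject
```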
In some embodiments, the method 700 is performed by processing logic, including hardware, firmware, software, or a suitable combination thereof. In some embodiments, the method 700 is performed by one or more processors executing code, programs, or instructions stored in a non-transitory computer-readable storage medium (e.g., a non-transitory memory). Some operations in method 700 are, optionally, combined and/or the order of some operations is, optionally, changed. Briefly, the method 700 includes: obtaining a first set of subjects associated with a first pose of the device; determining likelihood estimate values for each of the first set of subjects based on user context and the first pose; determining whether at least one likelihood estimate value for at least one respective subject in the first set of subjects exceeds a confidence threshold; and generating recommended content or actions associated with the at least one respective subject using at least one classifier associated with the at least one respective subject and the user context in response to determining that the at least one likelihood estimate value exceeds the confidence threshold.
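A skeleton of this method, with trivial stand-ins for the pose-driven subject gathering and the classifier step, might look like the sketch below; the uniform initialization and the 0.2 context boost are assumptions made for illustration.

```python
# Skeleton of method 700 with trivial stand-ins; helper names, the uniform
# initialization, and the context boost are assumptions made for the sketch.

def likelihood_estimates(subjects, user_context, pose):
    # Stand-in: start from an even distribution, then bias toward subjects the
    # user context points to (a real estimator would also weigh the pose).
    base = 1.0 / len(subjects)
    return {s: base + (0.2 if s in user_context else 0.0) for s in subjects}

def recommend(subject, user_context):
    # Stand-in for running the classifier associated with the subject.
    return [f"recommended content for {subject}"]

def method_700(subjects, user_context, pose, confidence_threshold):
    estimates = likelihood_estimates(subjects, user_context, pose)       # block 704
    best_subject, best_value = max(estimates.items(), key=lambda kv: kv[1])
    if best_value <= confidence_threshold:                               # block 710
        return []                            # keep iterating with updated context/pose
    return recommend(best_subject, user_context)                         # block 716

content = method_700(["frame", "flower", "vase"], {"flower"}, pose=None,
                     confidence_threshold=0.4)
```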
The method 700 begins, at block 702, with the electronic device obtaining a first set of subjects associated with a first pose of the device. According to some embodiments, the device 104 or a component thereof (e.g., the image capture control module 850) captures image data associated with the first pose of the device, from which the first set of subjects is obtained.
The method 700 continues, at block 704, with the electronic device determining likelihood estimate values for each of the first set of subjects based on user context and the first pose.
In some embodiments, the likelihood estimate values are recursively determined. As represented by block 706, in some embodiments, the likelihood estimate values are recursively determined based on updated user context during multiple time periods.
In some embodiments, the likelihood estimate values are assigned an initial likelihood estimate value (e.g., all likelihood estimate values are 0) or the likelihood estimate values are evenly distributed (e.g., the frame 310, the flower 320, and the vase 330 are initially assigned equal values).
The method 700 continues, at block 710, with the electronic device determining whether at least one likelihood estimate value for at least one respective subject in the first set of subjects exceeds a confidence threshold. For example, given subjects A, B, and C, where the likelihood estimate values are A=0.4, B=0.3, and C=0.3, the device 104 or a component thereof (e.g., the CGR content recommendation module 856) determines whether the highest likelihood estimate value, 0.4 for subject A, exceeds the confidence threshold.
In some embodiments, none of the likelihood estimate values exceed the threshold or multiple likelihood estimate values tie for exceeding the threshold. In such embodiments, more than one iteration is needed to recursively determine updated likelihood estimate values, as described above with reference to steps 706 and 708. In other words, determining whether at least one of the likelihood estimate values exceeds a threshold indicates a convergence to a single likelihood estimate value corresponding to a single subject, as represented by block 714. For example, the device 104 or a component thereof (e.g., the CGR content recommendation module 856) continues to update the likelihood estimate values over subsequent iterations until a single likelihood estimate value corresponding to a single subject exceeds the confidence threshold.
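The recursion sketched below starts from an even distribution and re-estimates each time period with updated user context until exactly one value clears the threshold; the multiplicative boost and the threshold value are assumptions used only to make the convergence concrete.

```python
# Illustrative recursive refinement of likelihood estimate values until a
# single value exceeds the confidence threshold. The multiplicative boost and
# the threshold value are assumptions.

def refine(estimates, observed_interest, boost=1.5):
    """One time period: boost subjects the updated context points to, renormalize."""
    updated = {s: v * (boost if s in observed_interest else 1.0)
               for s, v in estimates.items()}
    total = sum(updated.values())
    return {s: v / total for s, v in updated.items()}

def converge(subjects, interest_per_period, threshold=0.5, max_periods=10):
    estimates = {s: 1.0 / len(subjects) for s in subjects}   # even initial distribution
    for period in range(max_periods):
        above = [s for s, v in estimates.items() if v > threshold]
        if len(above) == 1:                                  # convergence to one subject
            return above[0], estimates
        estimates = refine(estimates,
                           interest_per_period[period % len(interest_per_period)])
    return None, estimates                                   # no convergence yet

subject, estimates = converge(["frame", "flower", "vase"], [{"flower"}])
print(subject)  # -> "flower"
```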
The method 700 continues, at block 716, with the electronic device generating recommended content or actions associated with the at least one respective subject using at least one classifier associated with the at least one respective subject and the user context in response to determining that the at least one likelihood estimate value exceeds the confidence threshold. In some embodiments, the device 104 or a component thereof (e.g., the CGR content rendering module 858) renders the recommended content or actions proximate to the at least one respective subject in the field of view.
In some embodiments, the method 700 continues, at block 722, with the electronic device predicting a different subject based on at least one of updated user context and updated first pose information that exceeds the confidence threshold and generating a set of recommended content or actions associated with the different subject. For example, if the first pose and the second pose indicate the focal point is moving to the right within the field of view, based on the user context, the likelihood estimator predicts the next subject on the right side of the field of view to provide recommended content.
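The prediction step could be sketched as follows; the one-dimensional focal points and the subject positions are assumptions used only to illustrate picking the next subject in the direction of motion.

```python
# Illustrative next-subject prediction from focal-point motion between two
# poses. The 1-D focal points and subject positions are assumptions.

def predict_next_subject(focal_first, focal_second, subject_positions):
    """Pick the nearest subject lying in the direction the focal point is moving."""
    direction = focal_second - focal_first
    if direction == 0:
        return None
    candidates = {s: x for s, x in subject_positions.items()
                  if (x > focal_second) == (direction > 0)}
    if not candidates:
        return None
    return min(candidates, key=lambda s: abs(candidates[s] - focal_second))

# Focal point drifting right: the subject to the right is predicted next so its
# recommended content can be generated ahead of time.
next_subject = predict_next_subject(0.30, 0.45, {"table": 0.20, "window": 0.70})
```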
In some embodiments, the one or more communication buses 804 include circuitry that interconnects and controls communications between system components. The memory 810 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices and, in some embodiments, includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 810 optionally includes one or more storage devices remotely located from the one or more CPUs 802. The memory 810 comprises a non-transitory computer readable storage medium. Moreover, in some embodiments, the memory 810 or the non-transitory computer readable storage medium of the memory 810 stores the following programs, modules, and data structures, or a subset thereof, including an optional operating system 820, an image capture control module 850, an image processing module 852, a subject recognition module 854, a CGR content recommendation module 856, and a CGR content rendering module 858. In some embodiments, one or more instructions are included in a combination of logic and non-transitory memory. The operating system 820 includes procedures for handling various basic system services and for performing hardware dependent tasks.
In some embodiments, the image capture control module 850 is configured to control the functionality of an image sensor or camera assembly to capture images or obtain image data. To that end, the image capture control module 850 includes a set of instructions 851a and heuristics and metadata 851b.
In some embodiments, the image processing module 852 is configured to pre-process raw image data from the image sensor or camera assembly (e.g., convert RAW image data to RGB or YCbCr image data and derive pose information etc.). To that end, the image processing module 852 includes a set of instructions 853a and heuristics and metadata 853b.
In some embodiments, the subject recognition module 854 is configured to recognize subject(s) from the image data. To that end, the subject recognition module 854 includes a set of instructions 855a and heuristics and metadata 855b.
In some embodiments, the CGR content recommendation module 856 is configured to recommend CGR content item(s) associated with the recognized subject(s). To that end, the CGR content recommendation module 856 includes a set of instructions 857a and heuristics and metadata 857b.
In some embodiments, the CGR content rendering module 858 is configured to composite and render the CGR content items in the field of view proximate to the recognized subject. To that end, the CGR content rendering module 858 includes a set of instructions 859a and heuristics and metadata 859b.
Although the image capture control module 850, the image processing module 852, the subject recognition module 854, the CGR content recommendation module 856, and the CGR content rendering module 858 are illustrated as residing on a single computing device, it should be understood that any combination of these modules can reside in separate computing devices in various embodiments. For example, in some embodiments, each of the image capture control module 850, the image processing module 852, the subject recognition module 854, the CGR content recommendation module 856, and the CGR content rendering module 858 can reside on a separate computing device or in the cloud.
While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
This application claims priority to U.S. Provisional Patent App. No. 62/729,960 filed on Sep. 11, 2018, which is hereby incorporated by reference in its entirety.