Computing devices function as a source of information for a user. Many interfaces for presenting information from a computing device use display screens that focus purely on presenting the user with information unrelated to the user's surroundings, such as a list of search results.
Augmented reality (AR) refers to interfaces and displays which provide information to a user in the context of the user's environment. For example, augmented reality systems may provide information to a user about the user's surroundings as a complement to a user's natural vision or hearing.
Examples for hierarchical clustering for view management in augmented reality are described. One disclosed method includes the steps of accessing point of interest (POI) metadata for a plurality of points of interest associated with a scene; generating a hierarchical cluster for at least a portion of the POIs; establishing a plurality of subdivisions associated with the scene; selecting a plurality of POIs from the hierarchical cluster for display based on an augmented reality (AR) viewpoint of the scene, the plurality of subdivisions, and a traversal of at least a portion of the hierarchical cluster; and displaying labels comprising POI metadata associated with the selected plurality of POIs, the displaying based on placements determined using image-based saliency. In another example, a computer-readable medium comprises program code configured to cause a processor to execute such a method.
These illustrative examples are mentioned not to limit or define the scope of this disclosure, but rather to provide examples to aid understanding thereof. Illustrative examples are discussed in the Detailed Description, which provides further description. Advantages offered by various examples may be further understood by examining this specification.
Aspects of the disclosure are illustrated by way of example. The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate one or more certain examples and, together with the description of the example, serve to explain the principles and implementations of the certain examples.
Examples are described herein in the context of hierarchical clustering for view management in augmented reality. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Reference will now be made in detail to implementations of examples as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following description to refer to the same or like items.
In the interest of clarity, not all of the routine features of the examples described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another.
Devices such as digital cameras, phones with embedded cameras, or other camera or sensor devices may be used to identify and track objects in three-dimensional (3D) environments. This may be used to create augmented reality displays where information about objects recognized by a system may be presented to a user that is observing a display of the system. Such information may be presented on an overlay of the real environment in a device's display.
Depending on the environment represented by the display, certain problems may arise with augmented reality. If the amount of information or the number of POIs associated with a certain environment is too large, then the displayed view may become cluttered, and the supplemental information presented by the augmented reality interface or browser may overwhelm other information which may be more important. Additionally, depending on the interface, certain information may interfere with other information. Occlusion, both of annotations presented as part of the augmented reality and of the background or real-world details, may thus be a problem. Further problems may arise when simple filtering by category, tags, or distance from a user makes hidden information disappear completely. Also, the spatial relation of augmented reality information to the real world may be a problem because data points or augmented reality information may not relate to visible POIs.
Various examples may ameliorate or remove these issues from an augmented reality system by providing automatic clutter avoidance. Examples may also provide a “semantic level of detail” where a user may drill down to additional information using a browser interface or other commands. Additionally, examples may combine advantages of ranked search and free viewpoint exploration to improve the presentation of information as part of an augmented reality system.
An augmented reality system as discussed herein may refer to information presented to a user through a wearable headset with glasses, or on a view taken by a camera of a smartphone, tablet device, laptop computer, phablet, or any other such device. The augmented reality system may use sensor information to represent the real world, and then provide information on POIs as part of an output to the user.
Different types of augmented reality systems can display information to a user viewing a scene through a camera, a heads-up display, or even wearable items, like glasses equipped with display equipment, e.g., a projector that can project images onto the lenses or with an ancillary display-capable lens. In an illustrative example of an augmented reality system, a user captures a real-time scene using a camera on a smartphone and views the scene on the smartphone's display. The smartphone processes the image information received from the camera, identifies points of interest (POIs), and generates and displays information related to some of the POIs overlaid on the scene. The user is then able to view that information and gain knowledge about the scene that may not otherwise be apparent from simply viewing the scene itself.
In this illustrative example, the smartphone is configured to generate and display augmented information (or augmented reality information) in a way that provides additional information to the user while attempting to avoid cluttering the screen with too much augmented information and without obscuring the POIs themselves or other augmented information. To do so, the smartphone identifies the POIs in the scene, which may include POIs that are not visible in the scene (e.g., the smartphone detects their location and presence via locationing information), and computes a hierarchical cluster of the identified POIs based on the three-dimensional locations of the identified POIs. The smartphone also subdivides the display screen into multiple “tiles.” In this example, the tiles are not visible to the user, but instead represent logical divisions of all or a portion of the display screen area, for example by subdividing the display screen area into four quadrants. In this example, the tiles are used to manage the amount of augmented reality information that may be displayed on the screen.
In this example, the hierarchical cluster is represented by a tree having a root node and one or more nodes descended from the root node. The smartphone then traverses the hierarchical cluster beginning at the root node and projects information from the traversed nodes onto one or more of the tiles until a maximum number of nodes for each tile has been reached. The information from the traversed nodes, in this example, are displayed on the display screen as labels, and the smartphone optimizes the placement of each of the labels using image-based saliency to avoid occluding important parts of the scene, such as buildings or other POIs, and to avoid occluding other labels or nodes.
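For illustration only, the tile-capped traversal described above may be sketched in Python. The class name, tile assignments, and the per-tile maximum of two labels are illustrative assumptions, not a claimed implementation; the traversal continues past a full tile so that nodes projecting into other tiles may still be displayed.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterNode:
    label: str
    tile: int                          # tile this node's label would occupy
    children: list = field(default_factory=list)

def select_labels(root, max_per_tile=2):
    """Breadth-first traversal from the root node. A node's label is shown
    only if its tile has not reached the per-tile maximum; traversal
    continues regardless so other tiles can still be filled."""
    shown, counts, queue = [], {}, [root]
    while queue:
        node = queue.pop(0)
        if counts.get(node.tile, 0) < max_per_tile:
            shown.append(node.label)
            counts[node.tile] = counts.get(node.tile, 0) + 1
        queue.extend(node.children)
    return shown
```

With a root in tile 0 and three children in tile 1, a cap of two labels per tile would display the root and only the first two tile-1 children, skipping the third.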
However, since the smartphone provides a display of the scene in real-time (or near-real-time) to the user, the information in the scene may change as the user moves or changes the orientation of the camera. When this happens, the smartphone updates the traversal of the hierarchical cluster and may display additional, different, or fewer labels based on the traversal.
In addition, this illustrative example allows the user to interact with one or more labels to expand the node and to explore more deeply into the hierarchical cluster. When a user selects a node to be explored, the smartphone traverses one or more child nodes of the selected node and generates and displays labels associated with those child nodes of the expanded node. Again, the labels are arranged on the screen using image-based saliency. In this case, because additional labels have been displayed on the screen, they may occlude aspects of the scene or other labels. The smartphone therefore reconfigures the layout of the labels and may move existing labels, or may collapse other labels into a node to reduce the amount of augmented information visible on the screen, while presenting the labels associated with the selected node. Thus, this illustrative example provides augmented information to a user but addresses problems with occluding important aspects of the scene or other labels, and also provides a dynamic, interactive augmented reality that updates as the view into a scene changes or based on user interaction with the augmented information.
In this example, the system traverses a cluster tree from the root of the tree and projects nodes or POIs to the screen. As the system traverses the tree, it projects the nodes onto the screen and associates each node with one of the tiles. In this example, the system traverses the tree according to a priority, such as a relative location to the AR viewpoint, a user preference, or another factor, such as sponsored advertising, and selects POIs from the cluster to display. In some examples, the system may display all of the POIs for a scene. For example, there may only be two or three POIs in the scene. In some examples, however, a significant number of POIs may be available. In one example, the system projects POIs or nodes to the screen as it traverses the tree and, upon reaching a threshold number of projected POIs or nodes, stops traversing the tree. In some cases, the system may traverse a tree and project POIs associated with a common parent node; if the system exceeds a threshold number of POIs, the system instead displays the parent node and not the POI child nodes of the parent. A user may subsequently select the displayed parent node to expand it and view the child POI nodes. After selecting nodes for display, the system determines where to display the labels.
Referring to
As an AR view changes, the system traverses the tree according to the new AR view, regenerates the edge and view details, and then adjusts the placement of the POI detail information. Further, the system allows for interactive control by the user. For example, a user may interact with displayed labels presented in the augmented reality view to “open” or “unfold” a node. For example, a label associated with a POI may be displayed and the user may select the label using a user-manipulatable input device, e.g., a mouse or touch screen, or may execute a gesture in space for detection by a camera-based gesture detection system. Selection of the label may cause the system to display additional information within the label, such as user-generated content (e.g., reviews), information about wait times, etc. In some examples, selection of a label associated with a node may cause labels associated with child nodes of the node to be displayed. The system again employs the layout solver to adjust the displayed AR nodes based on the increased label information from the selected node. Thus, selecting a node may open or unfold the node and may cause other AR nodes to be shifted away from their associated POI, to be compressed, such as by being reduced in size or replaced by an icon, or to be removed from the view. If the selected node is closed or refolded, the system again employs the layout solver to adjust the displayed AR nodes based on the changes.
In certain embodiments, the layout solver may update the layout periodically. This may involve an update that appears to be in real time or near-real-time for a user. Such updates may occur periodically, such as every second or every five seconds, or may occur based on events such as changes in location or viewpoint. In some examples, the system may update the AR view after a threshold amount of change in the view or label information occurs. For example, the system may only employ the layout optimizer when edge or other view details within the scene change by a sufficient amount. In some examples, label placement may be impacted by dynamic factors such as lighting. In some examples, the layout optimizer may be executed to provide real-time updates, such as at a rate of at least 24 or 30 times per second, or near-real-time updates, such as at a rate of between 1 and 24 times per second. Further, in some examples, as a view changes, the system may provide a weight to maintaining a node in a position relative to the background so that the node and associated metadata move with the background as a camera or sensor moves across a scene.
These illustrative examples are mentioned not to limit or define the scope of this disclosure, but rather to provide examples to aid understanding thereof. Still further examples are provided in the detailed description below.
A POI as used herein may refer to a physical location. For the purposes of an augmented reality view, the augmented reality view may be considered to display information or metadata from a POI search related to the position or view of a user.
POI information and metadata, which may also be referred to as augmented reality information, refers to characteristics which describe individual POIs, and which may differentiate individual POIs from each other. For example, augmented reality information for a particular POI may include an address, phone number, opening times, food/drinks served, type of food, prices, user reviews, physical descriptions, and so on.
An augmented reality node (or “AR node”) refers to a point or area in an augmented reality display which identifies a POI, and which provides a base for the presentation of metadata associated with the POI, and for the addition of more information to an augmented reality view if an AR node is selected. In some examples, an AR node may thus be considered part of a browser which enables a user to navigate various levels of detail for a particular POI. An AR node may be unfolded, opened, or expanded to provide additional information in the augmented reality view, or closed or collapsed to reduce the amount of information in the AR view. In some examples, an AR node may correspond to a node in a hierarchical cluster tree, or may be a visual manifestation of such a node.
A label or annotation refers to POI metadata or augmented reality information that is displayed on a device output along with an AR node. This may include address and title information, or any information associated with a particular POI. In certain embodiments, a user or the system may select default label information, such as the name of a business, a person, or location title associated with a POI. When a user unfolds or opens an AR node, additional information may be displayed as part of the label.
An augmented reality viewpoint refers to the perspective of the sensor that creates the background view which is displayed with the augmented reality information to create the augmented reality view. The augmented reality viewpoint is thus local to the POIs, even if the display used to show a view to a user is remote.
Referring now to
To improve the presentation of labels for view management of the augmented reality displays illustrated by the examples of
In this example, the precomputation analyzes the real-world locations of the POIs to create a hierarchical cluster of POIs. For example, a node may be established corresponding to a particular city block, e.g., a city block bounded by 4th street, 5th street, Cherry street, and Marshall street, or to a particular location, such as a mall or fair. POIs within the region are identified and may be grouped, for example by relative location or proximity to other POIs in the region. POIs may also be grouped according to different semantic categories, such as restaurants, entertainment, retail, professional, etc. These categories may then be further subdivided, e.g., Italian restaurants, Thai restaurants, retail clothing stores, movie theaters, dentist offices, etc. These can be further subdivided according to reviews, etc. Thus, by arranging the categories in a desired hierarchy (e.g., location, restaurants, Italian, medium price point), child nodes may be added into the hierarchy. Thus, the hierarchy establishes clusters of POIs according to certain criteria, and these hierarchical clusters of POIs are thus precomputed in 3D world space.
In addition to the techniques discussed above, clustering may be accomplished using other methodologies as well. One example technique includes hierarchical k-means clustering. Various techniques may generate hierarchical clusters using one or more metrics, such as a weighted sum of: (1) the POI distance from the user or camera in 3D world space; (2) a semantic similarity or a percent of matching tags or metadata information for given POIs; (3) a geometry of a view or scene presented as part of an augmented reality view, including building models and outlines; (4) a travel time to a particular POI, especially where geography dictates that this time would create a different set of information than distance information (e.g., walls, roadblocks, and rivers creating barriers that must be moved around); (5) addresses; and (6) user- or system-selected weights. In various embodiments, any combination of these and other weights may be used as clustering metrics to compute hierarchical clusters of POIs.
While potentially computationally-intensive, clustering may be performed in real-time or near-real-time. For example, a user may enter an area that does not have precomputed hierarchical clustering information. In one example, the user's AR device may access location-based information via a network connection, such as from a location-based internet search, obtain POIs for an area centered on the user (e.g., a circular area with a radius of 0.5 km from the user's location), and determine a hierarchical clustering that may be used to provide AR labels. In one example, a user may change a preference associated with a clustering to cause the clustering to be re-executed based on the changed preference.
Referring now to
The method 300 begins at block S300. At block S300, an image of a scene is received by the device 1000. In this example, the device 1000 captures video of the scene using its camera 301. However, in some examples, the device may receive images or video of a scene from a remote device over a network connection or from an external camera in communication with the device 1000.
At block S302, the system accesses POI metadata for a plurality of POIs associated with the scene. For example, the system may access one or more data stores and retrieve data records associated with a plurality of POIs. The system may employ a means for accessing POI metadata, such as a database query or file system. As described above, the POI information may be stored locally within the device, or may be accessed from one or more data stores over a network, such as network 1210 shown in
In one example, the mobile device 1000 comprises a GPS receiver and obtains GPS location information from the GPS receiver and associates the GPS location information with the captured images or video. Such GPS information may include a latitude and longitude as well as a directional heading. In some examples, other sensors or components may be employed to obtain location information, such as inertial sensors or WiFi.
At block S304, the device 1000 generates a hierarchical cluster for at least a portion of the plurality of POIs. In this embodiment, the system generates the hierarchical cluster using hierarchical k-means clustering based on at least one of: (1) a distance from an augmented reality viewpoint; (2) a semantic similarity between metadata for POIs; (3) a geometry of the scene; and (4) pre-selected weighting associated with categories of the POI metadata. Some example systems comprise a means for generating a hierarchical cluster using such a technique. In some embodiments, the device 1000 may generate the hierarchical cluster based on other or additional information, such as a driving distance to the POI or the address of the POI or the AR viewpoint.
In some examples, the device 1000 may receive a hierarchical cluster from a remote computing device or server using a network, such as network 1210 or the Internet. In one example, the device 1000 may transmit location and heading information or one or more captured images to a remote computing device, which generates a hierarchical cluster for one or more of the POIs in the scene and transmits the hierarchical cluster to the device 1000.
For example,
As shown in
At block S306, the device 1000 divides an output display space into a plurality of tiles. For example, the device 1000 may divide the output display space into four tiles corresponding to four quadrants of the display space. In some examples, a greater or lesser number of tiles may be employed. Further, tiles of different sizes and shapes may be used in some examples. For example, the device 1000 may divide the output display space into three tiles with one tile representing the left half of the display space, one tile representing the upper right quadrant of the display space, and one tile representing the lower right quadrant of the display space. In some examples, the device 1000 generates tiles based on detected features in the scene. For example, referring to
As is discussed in greater detail below with respect to
After the device 1000 generates a hierarchical cluster for at least a portion of the plurality of POIs, the method proceeds to block S308.
At block S308, the device 1000 displays, in the output display, AR nodes associated with POIs based on the plurality of tiles and the hierarchical cluster for the at least a portion of the plurality of POIs. The device 1000 traverses the hierarchical cluster tree and assigns AR nodes associated with nodes in the hierarchical cluster to tiles based on a location of an associated POI or cluster of POIs in the scene. As the device traverses the hierarchical cluster tree and displays AR nodes in the tiles, the number of AR nodes displayed within a tile may reach a threshold number of AR nodes. In some examples, the device 1000 continues to traverse the hierarchical cluster tree, but skips nodes in the tree that would cause a display of an AR node in a tile that is “full.” Thus, the device 1000 continues to traverse the hierarchical cluster tree and display AR nodes in other tiles.
For example,
In the example of
In some examples, when a number of AR nodes in a tile reaches a threshold, such as a maximum number of nodes for a tile, the device may attempt to collapse the nodes into a single node. For example, if a tile includes a plurality of AR nodes that are all associated with nodes in the hierarchical cluster tree that are child nodes of the same parent node, the device may collapse the AR nodes associated with the child nodes and replace those AR nodes with a single AR node associated with parent node in the hierarchical cluster tree of the child nodes. Thus, in some examples, the device 1000 may attempt to reduce a number of AR nodes displayed within a single tile.
In some examples, the device 1000 may only collapse AR nodes under certain conditions. In one example, the device 1000 may only collapse AR nodes that are associated with POIs more than 0.1 kilometers from the AR viewpoint, or AR nodes associated with POIs that are not visible within the scene, such as indoor stores within a mall or stores that are located on a far side of a building visible in the scene.
In some examples, once all tiles are full, or the hierarchical cluster tree has been fully traversed, the method may proceed to block S310. In some examples, the method proceeds to block S310 once any of the tiles has reached a threshold number of AR nodes.
At block S310 the device 1000 determines placement of labels associated with the nodes using image-based saliency and displays the labels according to the determined placement. In some examples, additional information may be employed to determine the placement of the labels. For example, in one aspect, the device 1000 may employ a means for displaying labels that determines edge information for the scene and may determine placement of the labels based on image-based salience and the edge information.
At block S312, the system receives a selection of an AR node or label. For example, a user may use a mouse or other input device to move a cursor to select an AR node, a user may touch a touch-sensitive input device at a location corresponding to an AR node, or may perform a gesture for a camera-based gesture detection system to select an AR node, such as by pointing in real-world space at an apparent location of the AR node. These and other means for receiving a selection of a node may be incorporated into one or more example systems.
At block S314, the device 1000 unfolds the selected AR node or label in response to the selection. In this example, unfolding the AR node involves obtaining additional information associated with the AR node, displaying at least a portion of the additional information, and adjusting the placement of other augmented information on the display based on the display of the additional information. In some examples, unfolding the AR node involves additional or fewer steps. For example, additional information may already be available such that no additional information needs to be obtained. In some examples, adjusting the placement of other augmented information, including AR nodes or labels, may include animating the rearrangement or may involve collapsing or removing other augmented information.
In this example, the device 1000 identifies information associated with the AR node, such as information from an associated or corresponding node in the hierarchical cluster tree. In some examples, the information may include additional descriptive information about a POI associated with the AR node or one or more child nodes of a node associated with the AR node. For example, the additional information may include user reviews or ratings of a POI, information about hours of operation, an estimated travel time, an address, or any other information about or related to the POI. In some examples, the additional information may include one or more child nodes of a node in the hierarchical cluster tree associated with the AR node.
The device 1000 also displays at least a portion of the additional information associated with the AR node. For example, if the additional information includes additional descriptive information for a label associated with the AR node, the device 1000 may increase the size of the label to accommodate the additional information, or may incorporate user interface controls into the label, such as a scroll bar, to provide access to the additional information. In some examples, unfolding the AR nodes results in additional AR nodes being displayed. For example, an AR node may be associated with a node in the hierarchical cluster tree that has one or more child nodes. Unfolding the AR node may include displaying AR nodes associated with the one or more child nodes, including icons or labels associated with the one or more child nodes. In some examples, displaying the additional information may include ceasing display of the selected AR node, or it may cause a change in appearance of the selected AR node. These and other means for updating the displaying of labels based on opening the selected node, such as those described below, may be employed by one or more systems.
When the device 1000 displays additional information, it may adjust the placement of other augmented information in the display.
Referring now to
In some examples, the determination of which nodes to collapse may be based on various user preferences and system determinations. For example, the system may determine to display fewer than the maximum number of allowable labels. In other embodiments, the system may make other adjustments to the display of certain nodes as part of a single selection. In the example of
Additionally, as is shown by
Referring to
After the device 1000 has unfolded the selected node, the method has completed. However, in some examples, the system may iteratively execute portions of the method of
While the above description for
In still further embodiments, the display of nodes and labels based on saliency as described above may be combined with other inputs to create hybrid displays for augmented reality. For example, a system may have a user input for real-time adjustment of clutter that will collapse or expand nodes in the cluster tree without selection of a specific node.
In some examples described above, an output display space may be divided into multiple tiles. In some examples, however, a real-world environment itself may be divided into tiles that are fixed relative to coordinates in the real-world environment. For example, a device may capture a real-world environment by a camera and assign coordinates to POIs or other features in the environment. As described above with respect to
For example, in one aspect, the device dynamically divides the coordinate space into one or more tiles. In this aspect, the device initializes a coordinate space having no tiles, and after identifying a POI, generates a first tile surrounding the POI. The device may then identify a second POI. After identifying the second POI, the device may place the second POI in the first tile, or in some aspects, the device may generate a second tile for the second POI, which may or may not overlap with the first tile. The device may iteratively generate additional tiles as additional POIs are identified, or may assign one or more of the additional POIs to existing tiles. In some aspects, one or more of the dynamically generated tiles may comprise different shapes. For example, in some aspects, tiles may be polygons, such as rectangles or triangles. In some other aspects, tiles may be circular and may be centered on a respective POI with a radius based on a characteristic of one or more POIs, such as its distance from the device, a relative importance of the POI, or the number of POIs within the tile.
Referring now to
A hierarchical cluster tree will be generated in these examples as described throughout this written description. However, POIs will be associated with coordinates within the coordinate space and thus will be assigned to a tile in the coordinate space. In some examples, nodes will be collapsed or expanded according to predetermined threshold values associated with a maximum number of AR nodes or labels per tile, or per group of tiles. Thus, as described above, the display of AR nodes or labels will operate according to various aspects of this disclosure; however, the tiles will be fixed within the environment or coordinate space, rather than within the output display space. Further, as a user selects AR nodes to expand or collapse, the placement of AR nodes and labels within the environment will be adjusted, such as by moving or resizing labels or collapsing AR nodes into a parent AR node, as described above. Further, because the tiles are fixed within the coordinate space, as the AR viewpoint changes, the set of tiles and associated AR nodes and labels changes based on the AR viewpoint.
Referring now to
Referring now to
Referring now to
In some examples, the mobile device 1000 includes a display device and may provide the augmented images or video to the display device. Further, in some examples, the mobile device 1000 may be configured to transmit the augmented images or video over a wireless link 1016, 1046 using the wireless transceiver 1012 or the SPS transceiver 1042. In one such example, the device may be configured to provide the augmented images or video to the mobile device's display and to substantially simultaneously transmit the augmented images or video wirelessly to another device.
Referring now to
Referring now to
In some examples, a remote device with a camera, such as a smartphone, may be positioned within a scene and may capture images or videos of the scene and transmit the images or video over the network 1210 to a computing device, such as the computing device 900 shown in
While the methods and systems herein are described in terms of software executing on various machines, the methods and systems may also be implemented as specifically-configured hardware, such as a field-programmable gate array (FPGA) configured specifically to execute the various methods. For example, examples can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in a combination thereof. In one example, a device may include a processor or processors. The processor is coupled to a computer-readable medium, such as a random access memory (RAM). The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs for editing an image. Such processors may comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and state machines. Such processors may further comprise programmable electronic devices such as programmable logic controllers (PLCs), programmable interrupt controllers (PICs), programmable logic devices (PLDs), programmable read-only memories (PROMs), electronically programmable read-only memories (EPROMs or EEPROMs), or other similar devices.
Such processors may comprise, or may be in communication with, media, for example computer-readable storage media, that may store instructions that, when executed by the processor, can cause the processor to perform the steps described herein as carried out, or assisted, by a processor. Examples of computer-readable media may include, but are not limited to, an electronic, optical, magnetic, or other storage device capable of providing a processor, such as the processor in a web server, with computer-readable instructions. Other examples of media comprise, but are not limited to, a floppy disk, CD-ROM, magnetic disk, memory chip, ROM, RAM, ASIC, configured processor, all optical media, all magnetic tape or other magnetic media, or any other medium from which a computer processor can read. The processor, and the processing described, may be in one or more structures, and may be dispersed through one or more structures. The processor may comprise code for carrying out one or more of the methods (or parts of methods) described herein.
The foregoing description of some examples has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications and adaptations thereof will be apparent to those skilled in the art without departing from the spirit and scope of the disclosure.
Reference herein to an example or implementation means that a particular feature, structure, operation, or other characteristic described in connection with the example may be included in at least one implementation of the disclosure. The disclosure is not restricted to the particular examples or implementations described as such. The appearance of the phrases “in one example,” “in an example,” “in one implementation,” or “in an implementation,” or variations of the same in various places in the specification does not necessarily refer to the same example or implementation. Any particular feature, structure, operation, or other characteristic described in this specification in relation to one example or implementation may be combined with other features, structures, operations, or other characteristics described in respect of any other example or implementation.
This application claims the benefit of U.S. Provisional Application No. 61/954,549, filed Mar. 17, 2014, entitled “Hierarchical Clustering for View Management in Augmented Reality,” which is incorporated herein by reference in its entirety.