Various methods have been used to organize and navigate images of scenes captured by multiple users. In some examples, images of the same scene captured by multiple users may be combined into groups. Such groups may contain varying numbers of captured images taken from many angles and distances relative to the captured scene. Users may then view the images within a single group without regard to where the images were captured. Thus, current methods of organizing and navigating images result in an unpredictable display of images, which may be unintuitive and jarring to a user.
Embodiments within the disclosure relate generally to presenting images. One aspect includes a method for organizing and navigating image clusters on a device. A set of captured images may be accessed by one or more processing devices. The one or more processing devices may then detect whether images within the set of captured images satisfy a predetermined pattern; group the images in the set of captured images into one or more clusters according to the detected predetermined pattern; receive a request to display a first cluster of the one or more clusters of captured images; and select, in response to the request, a first captured image from the first cluster to display. The first captured image from the first cluster may be provided for display.
One embodiment provides a system for organizing and navigating image clusters. The system includes one or more computing devices; and memory storing instructions, the instructions executable by the one or more computing devices. The instructions include accessing, by the one or more computing devices, a set of captured images; detecting, by the one or more computing devices, whether images within the set of captured images satisfy a predetermined pattern; grouping, by the one or more computing devices, the images in the set of captured images into one or more clusters according to the detected predetermined pattern; receiving, by the one or more computing devices, a request to display a first cluster of the one or more clusters of captured images; selecting, by the one or more computing devices in response to the request, a first captured image from the first cluster to display; and providing the first captured image from the first cluster for display.
Another embodiment provides a system for organizing and navigating image clusters. The system includes one or more computing devices; and memory storing instructions, the instructions executable by the one or more computing devices. The instructions include accessing, by the one or more computing devices, a set of captured images; detecting, by the one or more computing devices, whether images within the set of captured images satisfy a predetermined pattern; grouping, by the one or more computing devices, the images in the set of captured images into one or more clusters according to the detected predetermined pattern; receiving, by the one or more computing devices, a request to display a first cluster of the one or more clusters of captured images; selecting, by the one or more computing devices in response to the request, a first captured image from the first cluster to display; providing the first captured image from the first cluster for display; determining, by the one or more computing devices, from the images within the first cluster, a set of neighboring captured images that are within a predetermined proximity to the first captured image; assigning, by the one or more computing devices, one or more neighboring images of the first captured image from the set of neighboring captured images; and providing, by the one or more computing devices in response to a click or drag event, the one or more neighboring images.
Overview
The technology relates to navigating imagery that is organized into clusters based on common patterns exhibited when imagery is captured. For example, images of various scenes captured by multiple users may be analyzed by one or more computing devices to organize the captured images into clusters that correspond to the same scene and, when combined, satisfy at least one common pattern. A user may then select a cluster, and in response, a captured image within the selected cluster may be displayed on one of the one or more computing devices. As the user pans through the captured images within a cluster, the display may switch to another captured image in accordance with the common pattern assigned to the images within the cluster. Additionally, the user may switch between viewing different clusters. Accordingly, the user is provided with a smooth navigation experience that may mimic the way the multiple users typically capture the scene that is being displayed.
Captured images from one or more users may also be combined into patterns. For example, the captured images of a given scene may be grouped, by one or more computing devices, into clusters based on the type of pattern the captured images satisfy when combined. In this regard, a group of captured images from one or more users may be placed into one or more of a panoramic pattern cluster, a translation pattern cluster, and an orbit pattern cluster.
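By way of example only, the overall grouping and selection flow might be organized as in the following Python sketch. The CapturedImage fields and the three placeholder detector functions are illustrative assumptions standing in for the pattern tests described below; they are not taken from this disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CapturedImage:
    x: float        # capture position on a 2D ground plane
    y: float
    heading: float  # camera direction in radians

@dataclass
class Cluster:
    kind: str                    # "panoramic", "translation", or "orbit"
    images: List[CapturedImage]

# Placeholder detectors; each would implement one of the pattern tests
# described in the sections that follow.
def detect_panoramic_clusters(images: List[CapturedImage]) -> List[Cluster]:
    return []

def detect_translation_clusters(images: List[CapturedImage]) -> List[Cluster]:
    return []

def detect_orbit_clusters(images: List[CapturedImage]) -> List[Cluster]:
    return []

def group_into_clusters(images: List[CapturedImage]) -> List[Cluster]:
    """Group a set of captured images into pattern clusters."""
    clusters: List[Cluster] = []
    for detect in (detect_panoramic_clusters,
                   detect_translation_clusters,
                   detect_orbit_clusters):
        clusters.extend(detect(images))
    return clusters

def select_image_for_display(cluster: Cluster) -> CapturedImage:
    """Select an initial captured image from a requested cluster."""
    return cluster.images[0]   # a simple policy; the selection may vary
```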
In order to determine panoramic pattern clusters, a first panoramic center representing the coordinates from where a camera captured a first captured image may be determined, and a first panoramic circle with a set radius may be found around that center. An updated panoramic center may be found by iteratively averaging the coordinates of the first panoramic center with the coordinates of each additional captured image within the first panoramic circle. A final panoramic circle may then be calculated to include as many of the captured images that were within the first panoramic circle as possible. Additional final panoramic circles may be found by iteratively performing the above process on captured images that do not fall within a determined final panoramic circle.
To determine a translation pattern cluster, a reduced candidate set of the closest neighboring images within a threshold angle may be calculated for, and associated with, each captured image. The reduced candidate sets may be extended to create potential translation pattern clusters of captured images that are within an angle threshold value of the associated captured image. Translation pattern clusters may be determined by keeping potential translation pattern clusters that satisfy threshold criteria. In this regard, potential translation pattern clusters that contain fewer than a threshold number of captured images may be disregarded. The potential translation pattern clusters that contain at least the threshold number of captured images may be made translation pattern clusters.
In order to determine orbit pattern clusters, captured images that are directed towards one or more orbit object centers and provide a smooth translation around the orbit object centers may be found or identified. For example, the captured images may be compared to determine groups of captured images that have overlapping image data. The groups of captured images that contain overlapping image data are considered to be neighboring images. Orbit object centers may be determined by considering each captured image as a ray emanating from the location at which the image was captured, in the direction a camera center was pointed when the captured image was captured. Intersection points between rays of neighboring images may then be found. A clustering algorithm may then find orbit object centers by determining areas that have a sufficient number of intersection points.
For each orbit object center, captured images that contain image data associated with a portion of a scene located at that orbit object center may be associated with that orbit object center. The groups of captured images that contain at least a threshold number of captured images may be analyzed to determine whether the view angle covered by the captured images within each group is greater than a threshold angle value. All groups of images that satisfy these thresholds may be considered orbit pattern clusters.
One or more captured images within the determined clusters may be viewed on the display of a computing device. In this regard, a user may navigate between different clusters or within a single cluster. For example, click targets that represent clusters not currently being viewed may be provided on the display to enable the user to switch between clusters. Additionally, drag targets, which represent neighboring images within a currently viewed cluster may be provided to the user to enable the user to navigate within the currently viewed cluster.
Click targets may be determined based on a currently selected cluster. For example, a user may select an initial cluster to view. A captured image within the initial cluster may be displayed on a computing device. One or more neighboring captured images to the currently displayed captured image, outside of the initial cluster, may be assigned as the one or more click targets.
In order to provide navigation within a panoramic pattern cluster using a drag target, a drag target cost may be computed for each captured image closest to a currently displayed captured image, within the current cluster. For each drag vector direction, the image with the lowest drag target cost may be made a drag target along that drag vector direction. Accordingly, when the user pans the display along a drag vector direction, the drag target with the lowest drag target cost may be displayed on the computing device.
For navigating within orbit and translation pattern clusters, neighboring captured images to the right and left of a currently displayed captured image may be found or identified. The view angles of the neighboring images may be compared with the view angle of the currently displayed image to determine whether the images are located to the left or right of the currently displayed image. The neighboring images that are closest to a target drag distance, either to the left or right, may then be made the left and right drag targets, respectively. Accordingly, when the user pans the display to the left or right, the left or right drag target, respectively, may be displayed on the computing device.
The features described herein may allow for a smooth navigation experience that mimics the way users typically would view the scene as if they were there. By doing so, the user may experience a predictable navigation experience which provides a natural view of a scene. In addition, the navigation experience will be consistent when navigating both large and small groups of captured images of a scene.
Example Systems
The memory 114 may also include data 118 that may be retrieved, manipulated or stored by the processor. The memory may be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.
The instructions 116 may be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the one or more processors. In that regard, the terms “instructions,” “application,” “steps” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by a processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.
Data 118 may be retrieved, stored or modified by the one or more processors 112 in accordance with the instructions 116. For instance, although the subject matter described herein is not limited by any particular data structure, the data may be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents. The data may also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data may comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data.
The one or more processors 112 may be any conventional processors, such as a commercially available CPU. Alternatively, the processors may be dedicated components such as an application specific integrated circuit (“ASIC”) or other hardware-based processor. Although not necessary, one or more of computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, or encoding distorted videos, faster or more efficiently.
Each of the computing devices 110 may be at different nodes of a network 160 and capable of directly and indirectly communicating with other nodes of network 160. Although only a few computing devices are depicted in the figures, a typical system may include a large number of connected computing devices.
As an example, each of the computing devices 110 may include web servers capable of communicating with storage system 150 as well as computing devices 120, 130, and 140 via the network. For example, one or more of server computing devices 110 may use network 160 to transmit and present information to a user, such as user 220, 230, or 240, on a display, such as displays 122, 132, or 142 of computing devices 120, 130, or 140. In this regard, computing devices 120, 130, and 140 may be considered client computing devices and may perform all or some of the features described herein.
Each of the client computing devices 120, 130, and 140 may be configured similarly to the server computing devices 110, with one or more processors, memory and instructions as described above. Each client computing device 120, 130 or 140 may be a personal computing device intended for use by a user 220, 230, 240, and have all of the components normally used in connection with a personal computing device such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, a display such as displays 122, 132, or 142 (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device that is operable to display information), and user input device 124 (e.g., a mouse, keyboard, touch-screen or microphone). The client computing device may also include a camera for recording video streams and/or capturing images, speakers, a network interface device, and all of the components used for connecting these elements to one another.
Although the client computing devices 120, 130 and 140 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing device 120 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet. In another example, client computing device 130 may be a head-mounted computing system. The user may interact with the client computing device, for example, by inputting information using a small keyboard, a keypad, a microphone, visual signals with a camera, or a touch screen.
As with memory 114, storage system 150 may be of any type of computerized storage capable of storing information accessible by the server computing devices 110, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. In addition, storage system 150 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 150 may be connected to the computing devices via the network 160 as shown in the figures.
Example Methods
Operations in accordance with a variety of aspects of the disclosure will now be described. It should be understood that the following operations do not have to be performed in the precise order described below. Rather, various steps can be handled in reverse order or simultaneously or not at all.
Images of various scenes captured by multiple users may be analyzed by one or more computing devices to organize the captured images into clusters that correspond to the same scene and, when combined, satisfy at least one common pattern. Images may be captured by users in accordance with one of three common patterns: a panoramic pattern, a translation pattern, or an orbit pattern. For example, a user may stand in a single spot and capture multiple images of a scene while turning a camera, thereby performing a panoramic pattern. A user, such as user 220 using client computing device 120, may move the computing device in an arcing motion when capturing images of a scene.
Users may also capture multiple images of a scene while moving in a linear direction, thereby performing a translation pattern.
Additionally, a user may capture multiple images of a scene while moving in a circular direction around the scene, thereby performing an orbit pattern.
Captured images from one or more users may also be combined into patterns. For example, the captured images of a given scene may be grouped, by one or more computing devices, into clusters based on the type of pattern the captured images satisfy when combined. In this regard, a group of captured images from one or more users may be placed into a panoramic pattern cluster, a translation pattern cluster, and/or an orbit pattern cluster. In grouping the captured images, the height dimension (altitude) may be omitted because groups of photos are likely to be taken within a small range of heights. Accordingly, the grouping of captured images may be based on a two-dimensional plane representing lateral and longitudinal positions.
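By way of illustration, the two-dimensional projection might be computed as in the following sketch. The equirectangular approximation and the meters-per-degree constants are assumptions made here for simplicity; they are not specified by this disclosure.

```python
import math

def to_local_xy(lat_deg: float, lon_deg: float,
                ref_lat_deg: float, ref_lon_deg: float) -> tuple:
    """Project latitude/longitude onto a local 2D plane in meters, omitting altitude.

    A simple equirectangular approximation around a reference point is used,
    which is reasonable because grouped photos are expected to lie close together.
    """
    meters_per_deg_lat = 111_320.0                                    # approximate
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(ref_lat_deg))
    x = (lon_deg - ref_lon_deg) * meters_per_deg_lon                  # lateral
    y = (lat_deg - ref_lat_deg) * meters_per_deg_lat                  # longitudinal
    return x, y
```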
In order to determine panoramic pattern clusters, a first panoramic center representing the coordinates from where a camera captured a first captured image may be determined. In this regard, a first panoramic circle may be found around the first panoramic center by calculating a circular distance around the first panoramic center with a set radius, such as about 3 meters, or more or less, as shown in the figures. An updated panoramic center may then be found by iteratively averaging the coordinates of the first panoramic center with the coordinates of each additional captured image within the first panoramic circle.
A final panoramic circle may then be found around the updated panoramic center coordinates so as to include as many of the captured images that were within the first panoramic circle as possible. Additional final panoramic circles may be found by iteratively performing the above process on captured images that do not fall within a determined final panoramic circle.
Panoramic pattern clusters may be determined by keeping final panoramic circles that satisfy certain threshold criteria. In this regard, final panoramic circles that contain less than a threshold number of captured images, such as 8, or more or less, may be disregarded. In addition, any final panoramic circles that contain at least the threshold number of captured images may be analyzed to determine whether the view angle, i.e., the total angle of the field of view, covered by the captured images within each final panoramic circle is greater than a threshold angle value, as shown in the figures. Final panoramic circles that satisfy these threshold criteria may be considered panoramic pattern clusters.
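A minimal sketch of this panoramic grouping follows. The 3-meter radius and the threshold of 8 images follow the examples above, while the 180-degree view-angle threshold and the largest-gap approximation of angular coverage are assumptions made for illustration.

```python
import math
from typing import List, Tuple

Image = Tuple[float, float, float]   # (x, y, heading_radians) on the 2D plane

RADIUS_M = 3.0                       # panoramic circle radius (about 3 meters)
MIN_IMAGES = 8                       # threshold number of captured images
MIN_VIEW_ANGLE = math.radians(180)   # assumed threshold view angle

def covered_view_angle(headings: List[float]) -> float:
    """Total view angle covered, approximated as 2*pi minus the largest gap."""
    if len(headings) < 2:
        return 0.0
    hs = sorted(h % (2 * math.pi) for h in headings)
    gaps = [(hs[(i + 1) % len(hs)] - hs[i]) % (2 * math.pi) for i in range(len(hs))]
    return 2 * math.pi - max(gaps)

def panoramic_clusters(images: List[Image]) -> List[List[Image]]:
    remaining = list(images)
    clusters: List[List[Image]] = []
    while remaining:
        seed = remaining[0]
        cx, cy = seed[0], seed[1]
        # Iteratively average the center with each image inside the first circle.
        for img in remaining:
            if math.hypot(img[0] - cx, img[1] - cy) <= RADIUS_M:
                cx, cy = (cx + img[0]) / 2.0, (cy + img[1]) / 2.0
        # Final circle around the updated center.
        inside = [img for img in remaining
                  if math.hypot(img[0] - cx, img[1] - cy) <= RADIUS_M]
        processed = set(inside) | {seed}     # guarantees progress each pass
        remaining = [img for img in remaining if img not in processed]
        if (len(inside) >= MIN_IMAGES and
                covered_view_angle([img[2] for img in inside]) >= MIN_VIEW_ANGLE):
            clusters.append(inside)
    return clusters
```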
To determine a translation pattern cluster, captured images that are within a set angle threshold of each other may be determined. For example, for a first captured image, an initial candidate set of the closest neighboring captured images, to the left and right of the first captured image along a horizontal axis relative to the scene, may be determined, as shown in the figures. The initial candidate set may then be reduced to those neighboring captured images that are within the set angle threshold of the first captured image, and the reduced candidate set may be associated with the first captured image.
The reduced candidate sets may be extended to create potential translation pattern clusters. For example, starting with a first captured image, each captured image within the reduced candidate set associated with the first captured image may be analyzed. The captured images that are within the set angle threshold value and located to the left of the first captured image along the horizontal axis relative to the scene may be combined into a potential translation pattern cluster. If all captured images within the reduced candidate set associated with the first captured image are within the set angle threshold value, the captured images within a neighbor candidate set, associated with the captured image immediately to the left of the first captured image, may also be analyzed. Each captured image of the neighbor candidate set that satisfies the angle threshold value may be placed into the potential translation pattern cluster. In this way, the neighbor candidate sets associated with captured images immediately to the left of the currently analyzed captured image may be iteratively analyzed, starting with the images closest to the currently analyzed image, to determine whether those captured images are within the angle threshold value of the currently analyzed image. This may continue until one of the captured images within the neighbor candidate sets is not within the angle threshold value of the currently analyzed image. All analyzed captured images that satisfy the angle threshold value of the currently analyzed image may then be included in the potential translation pattern cluster. The same process may then be performed to find images to the right of the first captured image.
Translation pattern clusters may be determined by keeping potential translation pattern clusters that satisfy additional threshold criteria. In this regard, potential translation pattern clusters that contain less than a threshold number of captured images, such as 8, or more or less, may be disregarded. The potential translation pattern clusters that contain at least the threshold number of captured images may be considered translation pattern clusters.
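A simplified sketch of this translation grouping follows. Treating the x axis of the 2D plane as the "horizontal axis relative to the scene" and the 15-degree angle threshold are assumptions made for illustration, while the threshold of 8 images follows the example above.

```python
import math
from typing import List, Tuple

Image = Tuple[float, float, float]   # (x, y, heading_radians) on the 2D plane

ANGLE_THRESHOLD = math.radians(15)   # assumed angle threshold value
MIN_IMAGES = 8                       # threshold number of captured images

def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two headings."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def translation_clusters(images: List[Image]) -> List[List[Image]]:
    # Order the capture positions along the assumed horizontal axis.
    ordered = sorted(images, key=lambda img: img[0])
    clusters: List[List[Image]] = []
    start = 0
    while start < len(ordered):
        # Grow a run of neighboring images whose headings stay within the
        # angle threshold of the image currently being analyzed.
        end = start
        while (end + 1 < len(ordered) and
               angle_diff(ordered[end + 1][2], ordered[end][2]) <= ANGLE_THRESHOLD):
            end += 1
        run = ordered[start:end + 1]
        if len(run) >= MIN_IMAGES:        # keep only sufficiently long runs
            clusters.append(run)
        start = end + 1
    return clusters
```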
In order to determine orbit pattern clusters, captured images that are directed towards one or more orbit object centers and provide a smooth translation around the orbit object centers are found. For example, the captured images may be compared to determine groups of captured images that have overlapping image data. Overlapping image data may be found using computer vision algorithms, such as feature detection algorithms and structure from motion (SfM) algorithms. In this regard, feature points for each of the captured images may be determined. The feature points of each captured image may then be compared using SfM algorithms to determine if any of the points are common, meaning they contain overlapping image data. The groups of captured images that contain overlapping image data are considered to be neighboring images.
Orbit object centers may be determined by considering each captured image as a ray emanating from the location at which the image was captured, in the direction a camera center was pointed when the captured image was captured, as shown in the figures. Intersection points between the rays of neighboring images may then be found, and a clustering algorithm may determine the orbit object centers by identifying areas that have a sufficient number of intersection points.
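The ray-intersection step might be sketched as follows. The grid binning used here as the clustering algorithm, the cell size, and the minimum point count are assumptions made for illustration; any clustering algorithm could be substituted.

```python
import math
from typing import List, Optional, Tuple

Image = Tuple[float, float, float]   # (x, y, heading_radians)

def ray_intersection(a: Image, b: Image) -> Optional[Tuple[float, float]]:
    """Intersection point of the two forward-pointing camera rays, if any."""
    ax, ay, ah = a
    bx, by, bh = b
    dax, day = math.cos(ah), math.sin(ah)
    dbx, dby = math.cos(bh), math.sin(bh)
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:            # parallel rays never intersect
        return None
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    s = ((bx - ax) * day - (by - ay) * dax) / denom
    if t < 0 or s < 0:               # intersection must lie in front of both cameras
        return None
    return ax + t * dax, ay + t * day

def orbit_object_centers(neighbor_pairs: List[Tuple[Image, Image]],
                         cell_size: float = 2.0,
                         min_points: int = 5) -> List[Tuple[float, float]]:
    """Cluster ray intersections of neighboring images on a coarse grid."""
    bins = {}
    for a, b in neighbor_pairs:
        p = ray_intersection(a, b)
        if p is not None:
            key = (round(p[0] / cell_size), round(p[1] / cell_size))
            bins.setdefault(key, []).append(p)
    centers = []
    for points in bins.values():
        if len(points) >= min_points:    # a sufficient number of intersections
            cx = sum(x for x, _ in points) / len(points)
            cy = sum(y for _, y in points) / len(points)
            centers.append((cx, cy))
    return centers
```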
Orbit pattern clusters may be determined by associating captured images with orbit object centers. For example, as shown in the figures, for each orbit object center, captured images that contain image data associated with a portion of the scene located at that orbit object center may be associated with that orbit object center.
Orbit pattern clusters may be determined by keeping groups of captured images that satisfy certain threshold criteria. In this regard, groups of captured images that contain less than a threshold number of captured images, such as 8, or more or less, may be disregarded. The groups of captured images that contain at least the threshold number of captured images may be analyzed to determine whether the view angle covered by the captured images within each group is greater than a threshold angle value. All groups of images that satisfy these threshold criteria may be considered orbit pattern clusters.
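The association and thresholding of orbit groups might look like the sketch below. The aiming tolerance and the 90-degree view-angle threshold are assumed values, while the threshold of 8 images follows the example above.

```python
import math
from typing import Dict, List, Tuple

Image = Tuple[float, float, float]   # (x, y, heading_radians)
Point = Tuple[float, float]

MIN_IMAGES = 8                       # threshold number of captured images
MIN_VIEW_ANGLE = math.radians(90)    # assumed threshold view angle

def orbit_clusters(images: List[Image],
                   centers: List[Point],
                   max_aim_error: float = math.radians(20)) -> List[List[Image]]:
    """Associate images with orbit object centers they are aimed at, then keep
    groups that are large enough and cover a wide enough view angle."""
    groups: Dict[int, List[Image]] = {i: [] for i in range(len(centers))}
    for img in images:
        x, y, heading = img
        for i, (cx, cy) in enumerate(centers):
            to_center = math.atan2(cy - y, cx - x)
            err = abs((heading - to_center + math.pi) % (2 * math.pi) - math.pi)
            if err <= max_aim_error:    # image data covers the orbit object center
                groups[i].append(img)
    clusters = []
    for i, members in groups.items():
        if len(members) < MIN_IMAGES:
            continue
        cx, cy = centers[i]
        # View angle around the center covered by the capture positions,
        # approximated as the angular spread of bearings from the center.
        bearings = sorted(math.atan2(y - cy, x - cx) % (2 * math.pi)
                          for x, y, _ in members)
        gaps = [(bearings[(j + 1) % len(bearings)] - bearings[j]) % (2 * math.pi)
                for j in range(len(bearings))]
        covered = 2 * math.pi - max(gaps)
        if covered >= MIN_VIEW_ANGLE:
            clusters.append(members)
    return clusters
```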
One or more captured images within the determined clusters may be viewed on the display of a computing device. In this regard, a user may navigate between different clusters or within a single cluster. For example, click targets, which represent clusters not currently being viewed, may be provided on the display to enable the user to switch between clusters. Click targets may be activated by clicking on the target using a mouse pointer or by tapping on a touch-sensitive screen. Additionally, drag targets, which represent neighboring images within a currently viewed cluster, may be provided to the user to enable the user to navigate within the currently viewed cluster. Drag targets may be activated by dragging on the target using a mouse pointer or a touch-sensitive screen.
Click targets may be determined based on a currently selected cluster. For example, a user may select an initial cluster to view. A captured image within the initial cluster may be displayed on a computing device. One or more neighboring captured images to the currently displayed captured image, outside of the initial cluster, may be assigned as the one or more click targets, as shown in the figures.
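A minimal sketch of click-target assignment follows. The distance cutoff and the maximum number of targets are assumptions made for illustration.

```python
import math
from typing import List, Tuple

Image = Tuple[float, float, float]   # (x, y, heading_radians)

def click_targets(current: Image,
                  current_cluster: List[Image],
                  all_clusters: List[List[Image]],
                  max_distance: float = 10.0,
                  max_targets: int = 3) -> List[Tuple[Image, List[Image]]]:
    """Pick nearby images that belong to other clusters as click targets.

    Returns (neighboring image, its cluster) pairs so that activating a target
    can switch the display to that cluster.
    """
    candidates = []
    for cluster in all_clusters:
        if cluster is current_cluster:
            continue
        for img in cluster:
            d = math.hypot(img[0] - current[0], img[1] - current[1])
            if d <= max_distance:
                candidates.append((d, img, cluster))
    candidates.sort(key=lambda c: c[0])          # nearest candidates first
    return [(img, cluster) for _, img, cluster in candidates[:max_targets]]
```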
In order to provide navigation within a panoramic pattern cluster using a drag target, a drag target cost may be computed for each captured image closest to a currently displayed captured image within the current cluster. In this regard, the drag target cost may include various cost terms. These cost terms may be related to flow fields, i.e., the translational and rotational “flow” of pixels between the currently displayed captured image and the captured images closest to it. Other cost terms that are not based on flow, such as an overlap cost, may also be used. In one example, the target image should overlap with the reference image as much as possible. Thus, the overlap cost may include two cost values: (1) how much of the reference image overlaps with the target image and (2) how much of the target image overlaps with the reference image. As overlap increases, the overlap cost may decrease; thus, this cost value may be an inverted value. For each captured image, the costs may be compounded to create a drag target cost for that captured image.
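By way of example only, the inverted overlap cost might be computed as below. How the overlap fractions themselves are measured, and the simple averaging of the two values, are assumptions made for illustration.

```python
def overlap_cost(ref_overlap_fraction: float, target_overlap_fraction: float) -> float:
    """Combine two overlap measurements into a single cost term.

    ref_overlap_fraction:    fraction of the reference image covered by the target.
    target_overlap_fraction: fraction of the target image covered by the reference.
    Each value is inverted so that more overlap yields a lower cost.
    """
    cost_ref = 1.0 - ref_overlap_fraction
    cost_target = 1.0 - target_overlap_fraction
    return (cost_ref + cost_target) / 2.0

# Example: a candidate covering 75% of the reference while 50% of the candidate
# itself is covered by the reference.
print(overlap_cost(0.75, 0.5))   # 0.375; lower cost means a better drag target
```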
In one example, for each drag vector direction, the image with the lowest drag target cost may be made a drag target along that drag vector direction, as shown in the figures. Accordingly, when the user pans the display along a drag vector direction, the drag target with the lowest drag target cost may be displayed on the computing device.
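A sketch of picking one drag target per drag vector direction follows; the direction labels and image identifiers are hypothetical.

```python
from typing import Dict, List, Tuple

# Each candidate pairs a drag vector direction (e.g., "left", "right", "up",
# "down") with a precomputed drag target cost and an image identifier.
Candidate = Tuple[str, float, str]   # (direction, drag_target_cost, image_id)

def drag_targets(candidates: List[Candidate]) -> Dict[str, str]:
    """For each drag vector direction, keep the image with the lowest cost."""
    best: Dict[str, Tuple[float, str]] = {}
    for direction, cost, image_id in candidates:
        if direction not in best or cost < best[direction][0]:
            best[direction] = (cost, image_id)
    return {direction: image_id for direction, (_, image_id) in best.items()}

# Example usage with hypothetical costs:
print(drag_targets([("left", 0.4, "img_12"), ("left", 0.2, "img_07"),
                    ("right", 0.5, "img_31")]))
# {'left': 'img_07', 'right': 'img_31'}
```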
For navigating within orbit and translation pattern clusters, neighboring captured images to the right and left of a currently displayed captured image are found. For example, a center point for a currently displayed cluster may be determined. The view angles of the neighboring images relative to the center point may then be compared with the view angle of the currently displayed image to determine whether each neighboring image is located to the left or the right of the currently displayed image.
Neighboring images within the orbit and translation pattern clusters may be made either a left or right drag target relative to the currently displayed image. In this regard, the neighboring images which are closest to a target drag distance may be made the left or right drag targets. A target drag distance may be between about 0 and 30 degrees, or more or less. The neighboring images that are closest to the target drag distance, calculated as the view angle difference between each respective neighboring image and the currently displayed image, either to the left or right, may then be made the left and right drag targets, respectively. Accordingly, when the user pans the display to the left or right, the left or right drag target, respectively, may be displayed on the computing device.
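A sketch of selecting the left and right drag targets by view angle follows. The 15-degree target drag distance is an assumed value within the 0 to 30 degree range mentioned above, and the sign convention (positive angle differences treated as "left") is also an assumption.

```python
import math
from typing import List, Optional, Tuple

Neighbor = Tuple[str, float]   # (image_id, view_angle_radians within the cluster)

def signed_angle(a: float, b: float) -> float:
    """Signed difference a - b wrapped to the range (-pi, pi]."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def left_right_drag_targets(current_angle: float,
                            neighbors: List[Neighbor],
                            target_drag: float = math.radians(15)
                            ) -> Tuple[Optional[str], Optional[str]]:
    """Pick the neighbors whose view-angle difference from the currently
    displayed image is closest to the target drag distance, one per side."""
    left_best, right_best = None, None
    left_err, right_err = float("inf"), float("inf")
    for image_id, angle in neighbors:
        diff = signed_angle(angle, current_angle)
        if diff > 0 and abs(diff - target_drag) < left_err:      # assumed: left
            left_err, left_best = abs(diff - target_drag), image_id
        elif diff < 0 and abs(-diff - target_drag) < right_err:  # assumed: right
            right_err, right_best = abs(-diff - target_drag), image_id
    return left_best, right_best
```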
Flow diagram 1200 of the figures is an example flow diagram of some of the aspects described above that may be performed by one or more computing devices, such as server computing devices 110.
Although the technology herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present technology. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present technology as defined by the appended claims.
The present application is a continuation of U.S. patent application Ser. No. 14/779,500, filed on Sep. 23, 2015, which is a national phase entry under 35 U.S.C. § 371 of International Application No. PCT/CN/2015/075241, filed Mar. 27, 2015, the disclosures of which are incorporated herein by reference.