The present invention relates to navigating between panoramic images.
Computer systems exist that include a plurality of panoramic images geo-coded to locations on a map. To navigate between neighboring panoramic images, the user may select a button on a map and a new neighboring panoramic image may be loaded and displayed. Although this technique has benefits, jumping from one image to the next image can be distracting to a user. Accordingly, new navigation methods and systems are needed.
The present invention relates to using image content to facilitate navigation in panoramic image data. In a first embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
In a second embodiment, a method for creating and displaying annotations includes (1) creating a virtual model from a plurality of two-dimensional images; (2) determining an intersection of a ray and the virtual model, wherein the ray extends from a camera viewport of a first image; (3) retrieving a panoramic image; (4) orienting the panoramic image to face the intersection; and (5) displaying the panoramic image.
In a third embodiment, a system creates and displays annotations corresponding to a virtual model, wherein the virtual model was created from a plurality of two-dimensional images. The system includes a navigation controller that determines an intersection of a ray, extended from a camera viewport of a first image, and a virtual model, retrieves a third panoramic image and orients the third panoramic image to face the intersection. The virtual model comprises a plurality of facade planes.
Further embodiments, features, and advantages of the invention, as well as the structure and operation of the various embodiments of the invention are described in detail below with reference to accompanying drawings.
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
The drawing in which an element first appears is typically indicated by the leftmost digit or digits in the corresponding reference number. In the drawings, like reference numbers may indicate identical or functionally similar elements.
The present invention relates to using image content to facilitate navigation in panoramic image data. In the detailed description of the invention that follows, references to “one embodiment”, “an embodiment”, “an example embodiment”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
As described herein, embodiments of the present invention enable users to navigate between panoramic images using image content. In one embodiment, a model is created representing the image content. A user may select an object contained in a first panoramic image. The location of the object is determined by projecting the user's selection onto the model. A second panorama is selected and/or oriented according to that location. In this way, embodiments of this invention enable users to navigate between the first and second panorama using image content.
Several avatars (e.g., cars) 104, 106, 108, and 110 are shown at locations on street 102. Each avatar 104, 106, 108, and 110 has an associated panoramic image geo-coded to the avatar's location on street 102. The panoramic image may include content 360 degrees around the avatar. However, only a portion of the panorama may be displayed to a user at a time, for example, through a viewport. In diagram 100, the portion of the panorama displayed to the user is shown by each avatar's orientation. Avatars 104, 106, 108, and 110 have orientations 124, 126, 122, and 120, respectively.
Avatar 104 has orientation 124 facing a point 118. Avatar 104's viewport would display a portion of a panorama geo-coded to the location of avatar 104. The portion of the panorama displayed in the viewport would contain point 118. Embodiments of the present invention use virtual model 112 to navigate from the position of avatar 104 to the positions of avatars 106, 108, and 110.
In a first embodiment of the present invention, hereinafter referred to as the switching lanes embodiment, a user may navigate between lanes. The switching lanes embodiment enables a user to navigate from avatar 104's panorama to avatar 106's panorama. Avatar 106's panorama is geo-coded to a location similar to avatar 104's panorama, but in a different lane of street 102. Because the panorama is geo-coded to a different location, if avatar 104 and avatar 106 had the same orientation, then their corresponding viewports would display different content. Changing content displayed in the viewport can be disorienting to the user. The switching lanes embodiment orients avatar 106 to face point 118 on virtual model 112. In this way, the portion of the panorama displayed in avatar 106's viewport contains the same content as the portion of the panorama displayed in avatar 104's viewport. As a result, the switching lanes embodiment makes switching between lanes less disorienting.
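For illustration only, the orientation step may be sketched as follows in Python; the flat local coordinate system and the `orient_toward` helper are assumptions made for the sketch rather than part of the specification:

```python
import math

def orient_toward(camera_xy, target_xy):
    """Yaw, in degrees counterclockwise from the +x axis, that turns a
    viewport at camera_xy to face target_xy (e.g., point 118)."""
    dx = target_xy[0] - camera_xy[0]
    dy = target_xy[1] - camera_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

point_118 = (10.0, 10.0)
yaw_104 = orient_toward((0.0, 0.0), point_118)  # avatar 104's lane
yaw_106 = orient_toward((0.0, 3.0), point_118)  # avatar 106's lane
```

Because both yaws aim at the same model point 118, the two viewports display the same content even though the avatars occupy different lanes.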
In a second embodiment of the present invention, hereinafter referred to as the walk-around embodiment, a user may more easily view an object from different perspectives. The user may get the sense that he/she is walking around the object. The walk-around embodiment enables a user to navigate from avatar 104's panorama to avatar 108's panorama. The location of avatar 108 may be, for example, selected by the user. For example, a user may select the location of avatar 108 by selecting a location on a map or pressing an arrow button on a keyboard. Because the panorama is geo-coded to a different location, if avatar 104 and avatar 108 had the same orientation, then their corresponding viewports would display different content, and an object of interest displayed in avatar 104's viewport may not be displayed in avatar 108's viewport. The walk-around embodiment orients avatar 108 to face point 118 on virtual model 112. In this way, the portion of the panorama displayed in avatar 108's viewport contains the same content as the portion of the panorama displayed in avatar 104's viewport. As a result, the user may more easily view an object from different perspectives.
In an embodiment, a transition may be displayed to the user between avatar 104 and avatar 108. The transition may show intermediate panoramas for avatar positions between avatar 104 and avatar 108. The intermediate panoramas may be oriented to face point 118 as well.
In a third embodiment, hereinafter referred to as the click-and-go embodiment, a user may navigate to a second panoramic image at a new location according to the location of an object of a first panorama. The click-and-go embodiment enables a user to navigate from avatar 104's panorama to an avatar 110's panorama. The position of avatar 110 is the position of the closest available panorama to point 118 on virtual model 112. Point 118 may be determined according to a selection by the user in the first panorama.
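For illustration, the panorama-selection step of the click-and-go embodiment may be sketched as a nearest-neighbor search over geo-coded panorama locations; the data layout and flat coordinates are illustrative assumptions:

```python
import math

def closest_panorama(panoramas, point):
    """Return the id of the panorama geo-coded nearest `point`
    (point 118 on the virtual model). `panoramas` maps id -> (x, y)."""
    return min(panoramas, key=lambda pid: math.dist(panoramas[pid], point))

panoramas = {"avatar_104": (0.0, 0.0),
             "avatar_108": (5.0, 2.0),
             "avatar_110": (9.0, 8.0)}
nearest = closest_panorama(panoramas, (10.0, 10.0))  # point 118
```

The selected panorama may then be oriented to face point 118 or left aligned with the street, per the embodiments described herein.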
In embodiments, avatar 110 may have an orientation 120 facing point 118 or a different orientation 128. Orientation 128 may be the orientation of street 102.
By selecting avatar 110 according to point 118 on virtual model 112, the click-and-go embodiment uses virtual model 112 to navigate between panoramic images. As is described below, in an embodiment, virtual model 112 is generated using the content of panoramic images.
In an example, the click-and-go embodiment may enable a user to get a closer look at an object. In this example, the user may select an object in a first panorama, and a second panorama close to the object is loaded. Further, the portion of the second panorama containing the object may be displayed in the viewport. In this way, using the content of the panoramic images to navigate between panoramic images creates a more satisfying and less disorienting user experience.
A ray 212 is extended from a camera viewpoint 210 through point 268. In an example, camera viewpoint 210 may be the focal point of the camera used to take photographic image 266. In that example, the distance between image 266 and camera viewpoint 210 is focal length 270.
A point 204 is the intersection between ray 212 and virtual model 202. Point 204 may be used to navigate between street level panoramic images, as is shown in
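For illustration, the ray construction and its intersection with the facade planes may be sketched as follows; the pinhole camera model, the coordinate conventions, and the plane representation are assumptions made for the sketch:

```python
import numpy as np

def pixel_ray(u, v, focal_length):
    """Unit direction from camera viewpoint 210 through a pixel (u, v)
    measured from the image center -- a simple pinhole-camera sketch."""
    d = np.array([u, v, focal_length], dtype=float)
    return d / np.linalg.norm(d)

def intersect_model(origin, direction, facade_planes):
    """Nearest intersection of the ray with a set of facade planes,
    each given as (point_on_plane, unit_normal)."""
    best_t, best_point = None, None
    for p0, n in facade_planes:
        denom = float(np.dot(n, direction))
        if abs(denom) < 1e-9:          # ray parallel to this plane
            continue
        t = float(np.dot(n, p0 - origin)) / denom
        if t > 0 and (best_t is None or t < best_t):
            best_t, best_point = t, origin + t * direction
    return best_point

# A ray through the image center hits a facade plane 10 units ahead.
origin = np.zeros(3)
ray = pixel_ray(0.0, 0.0, focal_length=500.0)
model = [(np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, 1.0]))]
point_204 = intersect_model(origin, ray, model)
```

Here `point_204` plays the role of point 204: the model point used to select and orient the next panorama.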
In embodiments, the intersection may be used in several ways to navigate between panoramic images. For example, in the switching lanes or walk-around embodiments, a second panoramic image may be selected at step 310. In the switching lanes embodiment, the second panoramic image has a location similar to the first panoramic image, but in a different lane. In the walk-around embodiment, the second panoramic image may be selected, for example, by a user. The second panoramic image is oriented to face the intersection at step 316. After step 316, method 300 ends.
In the click-and-go embodiment, a second panoramic image may be such that it is close to the intersection (for example, within a selected or pre-defined distance of the intersection) at step 308, as described with respect to
Method 400 starts with step 402. In step 402, features of images are identified. In an embodiment, the features are extracted from the images for subsequent comparison. This is described in more detail below with respect to
In step 404, features in neighboring images are matched. In an embodiment, matching features may include constructing a spill tree. This is described in more detail below with respect to
In step 406, the locations of features are calculated, for example, as points in three-dimensional space. In an embodiment, points are determined by computing stereo triangulations using pairs of matching features as determined in step 404. How to calculate points in three-dimensional space is described in more detail below with respect to
In step 408, facade planes are estimated based on the cloud of points calculated in step 406. In an embodiment, step 408 may comprise using an adaptive optimization algorithm or best fit algorithm. In one embodiment, step 408 comprises sweeping a plane, for example, that is aligned to a street as is described below with respect to
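For illustration, the plane sweep may be sketched as follows; the candidate offsets, the distance threshold, and the synthetic point cloud are assumptions made for the sketch:

```python
import numpy as np

def sweep_facade_plane(points, normal, offsets, threshold):
    """Translate a street-aligned plane along its normal and keep the
    offset with the most cloud points within `threshold` of the plane."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    pts = np.asarray(points, dtype=float)
    best_offset, best_count = None, -1
    for d in offsets:
        # Count points whose signed distance to the plane is small.
        count = int(np.sum(np.abs(pts @ n - d) < threshold))
        if count > best_count:
            best_offset, best_count = d, count
    return best_offset, best_count

# Points clustered on a facade 10 units from the street axis.
cloud = [(10.0, y, z) for y in range(5) for z in range(3)]
offset, hits = sweep_facade_plane(cloud, (1, 0, 0),
                                  offsets=np.arange(0.0, 20.0, 0.5),
                                  threshold=0.25)
```

The offset with the highest count gives the estimated facade position; varying the plane's angle, as noted below, handles facades not parallel to the street.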
In step 410, street planes are estimated based on the location of streets. These street planes together with the facade planes estimated in step 408 are used to form a virtual model corresponding to objects shown in a plurality of two-dimensional images.
In one embodiment, images 502 and 504 may be taken from a moving vehicle with a rosette of eight cameras attached. The eight cameras take eight images simultaneously from different perspectives. The eight images may be subsequently stitched together to form a panorama. Image 502 may be an unstitched image from a first camera in the eight camera rosette directed perpendicular to the vehicle. Image 504 may be an unstitched image from a second camera adjacent to the first camera taken during a later point in time.
In an embodiment, the step of extracting features may include interest point detection and feature description. Interest point detection detects points in an image according to a condition and is preferably reproducible under image variations such as variations in brightness and perspective. The neighborhood of each interest point is a feature. Each feature is represented by a feature descriptor. The feature descriptor is preferably distinctive.
In an example, a Speeded Up Robust Features (SURF) algorithm is used to extract features from neighboring images. The SURF algorithm is described, for example, in Herbert Bay, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features”, Proceedings of the Ninth European Conference on Computer Vision, May 2006. The SURF algorithm includes an interest point detection and feature description scheme. In the SURF algorithm, each feature descriptor includes a vector. In one implementation, the vector may be 128-dimensional. In an example where the images are panoramas taken from street level, the SURF algorithm may extract four to five thousand features in each image, resulting in a feature descriptor file of one to two megabytes in size.
In an embodiment, each feature such as feature 512 is represented by a feature descriptor. Each feature descriptor includes a 128-dimensional vector. The similarity between a first feature and a second feature may be determined by finding the Euclidean distance between the vector of the first feature descriptor and the vector of the second feature descriptor.
A match for a feature in the first image among the features in the second image may be determined, for example, as follows. First, the nearest neighbor (e.g., in 128-dimensional space) of a feature in the first image is determined from among the features in the second image. Second, the second-nearest neighbor (e.g., in 128 dimensional space) of the feature in the first image is determined from among the features in the second image. Third, a first distance between the feature in the first image and the nearest neighboring feature in the second image is determined, and a second distance between the feature in the first image and the second nearest neighboring feature in the second image is determined. Fourth, a feature similarity ratio is calculated by dividing the first distance by the second distance. If the feature similarity ratio is below a particular threshold, there is a match between the feature in the first image and its nearest neighbor in the second image.
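For illustration, the four steps above may be sketched directly; a brute-force nearest-neighbor search stands in for the spill tree described below, and the descriptors are synthetic:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """Ratio-test matching: accept a feature's nearest neighbor when the
    nearest distance divided by the second-nearest distance is below
    `ratio` (here within the 0.5-0.95 range discussed in the text)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # step 1-3: distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] / dists[second] < ratio:  # step 4: ratio test
            matches.append((i, int(nearest)))
    return matches

# Synthetic 128-dimensional descriptors; image 2 holds slightly
# perturbed copies of image 1's descriptors.
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 128))
b = a + rng.normal(scale=0.01, size=(5, 128))
pairs = match_features(a, b)
```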
If the threshold on the feature similarity ratio is set too low, not enough matches are determined. If it is set too high, there are too many false matches. In an embodiment, the threshold may be between 0.5 and 0.95 inclusive.
In an embodiment, the nearest neighbor and the second nearest neighbor may be determined by constructing a spill tree of the features in the second image. The spill tree closely approximates the nearest neighbors and efficiently uses processor resources. In an example where the images being compared are panoramic images taken from street level, there may be hundreds of pairs of matched features for each pair of images. For each pair of matched features, a point in three-dimensional space can be determined, for example, using stereo triangulation.
After a ray for each of the matching features is formed, a point in three-dimensional space may be determined.
In example 700, two camera rosettes 702 and 704 are shown. In an embodiment, these two camera rosettes can be the same (e.g., the same camera rosette can be used to take images at different locations and at different points in time). Each camera rosette 702 and 704 includes an image with a matched feature. In example 700, camera rosette 702 includes a feature 706 that is matched to a feature 708 of camera rosette 704. As shown in
In embodiments, as described above, the steps illustrated by examples 600 and 700 are repeated for each pair of matched features to determine a cloud of three-dimensional points.
As described herein, in accordance with an embodiment of the present invention, features are extracted from the first and second images. Matching features are identified, and for each pair of matching features, a three-dimensional point is determined, for example, using stereo triangulation. This results in a cloud of three-dimensional points, such as those illustrated in
In an embodiment, if a position for a façade plane (e.g., a position having a specified number of nearby points) is not found, the angle of the façade plane may be varied relative to the street. Accordingly
As described herein, a virtual model according to the present invention is formed from facade planes. The facade planes may be generated according to image content. In an embodiment, the model may also include one or more street planes (e.g., a plane parallel to the street). In an embodiment, a street plane may be calculated based on a known position of a street (e.g., one may know the position of the street relative to the camera used to take the images). The virtual model may be two-dimensional or three-dimensional.
Server 1024 may include a web server. A web server is a software component that responds to a hypertext transfer protocol (HTTP) request with an HTTP reply. As illustrative examples, the web server may be, without limitation, an Apache HTTP Server, Apache Tomcat, a Microsoft Internet Information Server, a JBoss Application Server, a WebLogic Application Server, or a Sun Java System Web Server. The web server may serve content such as hypertext markup language (HTML) documents, extensible markup language (XML) documents, videos, images, multimedia features, or any combination thereof. This example is strictly illustrative and does not limit the present invention.
Server 1024 may serve map tiles 1014, a program 1016, configuration information 1018, and/or panorama tiles 1020 as discussed below.
Network(s) 1044 can be any network or combination of networks that can carry data communication, and may be referred to herein as a computer network. Network(s) 1044 can include, but is not limited to, a local area network, medium area network, and/or wide area network such as the Internet. Network(s) 1044 can support protocols and technology including, but not limited to, World Wide Web protocols and/or services. Intermediate web servers, gateways, or other servers may be provided between components of system 1000 depending upon a particular application or environment.
Server 1024 is coupled to a panorama database 1028 and model database 1030. Panorama database 1028 stores images. In an example, the images may be photographic images taken from street level. The photographic images taken from the same location may be stitched together to form a panorama. Model database 1030 stores a three-dimensional model corresponding to the images in panorama database 1028. An example of how the three-dimensional model may be generated is discussed in further detail below. Annotation database 1032 stores user-generated annotations.
Each of panorama database 1028, model database 1030, and annotation database 1032 may be implemented on a relational database management system. Examples of relational databases include Oracle, Microsoft SQL Server, and MySQL. These examples are illustrative and are not intended to limit the present invention.
Server 1024 includes a navigation controller 1032. Navigation controller 1032 uses a model in model database 1030 generated from image content to facilitate navigation between panoramas. Navigation controller 1032 receives input as navigation data 1042. Navigation data 1042 contains data about the present position and orientation and data about the desired next position. For example, in the click-and-go embodiment, navigation data 1042 may contain a first panoramic image and the location in that panoramic image where the user would like to go. Navigation data 1042 may be, for example, an HTTP request with data encoded as HTTP parameters.
In response to navigation data 1042, navigation controller 1032 determines the new panorama in panorama database 1028 based on the model in model database 1030. Navigation controller 1032 also determines the orientation to display a second panorama. Navigation controller 1032 outputs the new panorama and the orientation in configuration information 1018 and panorama tiles 1020.
Navigation controller 1032 may include a switching lanes controller 1034, a click-and-go controller 1036, and a walk-around controller 1038. Each of switching lanes controller 1034, click-and-go controller 1036, and walk-around controller 1038 responds to navigation data 1042 according to an embodiment of the present invention.
Switching lanes controller 1034 operates according to the switching lanes embodiment of the present invention. In response to navigation data 1042, switching lanes controller 1034 selects a second panoramic image from panorama database 1028. The second panoramic image is close to the location of the first panoramic image, but in a different lane. In an example, the second panoramic image may be the closest panoramic image in panorama database 1028 that exists in a different lane. Switching lanes controller 1034 determines a location in the model in model database 1030 according to the position and orientation of the first panorama in navigation data 1042. In an embodiment, to determine the location, switching lanes controller 1034 extends a ray from the position in the direction of the orientation, as described with respect to
Click-and-go controller 1036 operates according to the click-and-go embodiment of the present invention. In response to navigation data 1042, click-and-go controller 1036 selects a second panoramic image from panorama database 1028. Click-and-go controller 1036 selects the second panoramic image based on a location in a first panoramic image from navigation data 1042. The location in the first panoramic image may be determined by a user input, such as a mouse. Click-and-go controller 1036 uses the location in the first panoramic image to determine a location in the model in model database 1030, as described with respect to
Walk-around controller 1038 selects a second panoramic image from panorama database 1028 in response to navigation data 1042. The second panoramic image may be selected, for example, according to a position in navigation data 1042 entered by a user. Walk-around controller 1038 determines a location in the model in model database 1030 according to the position and orientation of the first panorama in navigation data 1042. To determine the location, walk-around controller 1038 extends a ray from the position in the direction of the orientation, as described with respect to
In an embodiment, client 1002 may contain a mapping service 1006 and a panorama viewer 1008. Each of mapping service 1006 and panorama viewer 1008 may be a standalone application or may be executed within a browser 1004. In embodiments, browser 1004 may be Mozilla Firefox or Microsoft Internet Explorer. Panorama viewer 1008, for example, can be executed as a script within browser 1004, as a plug-in within browser 1004, or as a program which executes within a browser plug-in, such as the Adobe (Macromedia) Flash plug-in.
Mapping service 1006 displays a visual representation of a map, for example, as a viewport into a grid of map tiles. Mapping service 1006 is implemented using a combination of markup and scripting elements, for example, using HTML and JavaScript. As the viewport is moved, mapping service 1006 requests additional map tiles 1014 from server(s) 1024, assuming the requested map tiles have not already been cached in local cache memory. Notably, the server(s) which serve map tiles 1014 can be the same or different server(s) from the server(s) which serve panorama tiles 1020, configuration information 1018 or the other data involved herein.
In an embodiment, mapping service 1006 can request that browser 1004 proceed to download a program 1016 for a panorama viewer 1008 from server(s) 1024 and to instantiate any plug-in necessary to run program 1016. Program 1016 may be a Flash file or some other form of executable content. Panorama viewer 1008 executes and operates according to program 1016.
Panorama viewer 1008 requests configuration information 1018 from server(s) 1024. The configuration information includes meta-information about a panorama to be loaded, including information on links within the panorama to other panoramas. In an embodiment, the configuration information is presented in a form such as the Extensible Markup Language (XML). Panorama viewer 1008 retrieves visual assets 1020 for the panorama, for example, in the form of panoramic images or in the form of panoramic image tiles. In another embodiment, the visual assets include the configuration information in the relevant file format. Panorama viewer 1008 presents a visual representation on the client display of the panorama and additional user interface elements, as generated from configuration information 1018 and visual assets 1020. As a user interacts with an input device to manipulate the visual representation of the panorama, panorama viewer 1008 updates the visual representation and proceeds to download additional configuration information and visual assets as needed.
Each of browser 1004, mapping service 1006, and panorama viewer 1008 may be implemented in hardware, software, firmware or any combination thereof.
Processing pipeline server 1124 includes a feature extractor 1116, a feature matcher 1118, a point calculator 1120, and a surface estimator 1122. Each of feature extractor 1116, feature matcher 1118, point calculator 1120, and surface estimator 1122 may be implemented in hardware, software, firmware or any combination thereof.
Feature extractor 1116 selects images 1102 from panorama database 1028. In an embodiment, images 1102 may include two images which are street level unstitched panoramic images. The two images may be taken from locations near one another, but from different perspectives. In an embodiment, the images are taken from a moving vehicle with a rosette of eight cameras attached. The eight cameras take eight images simultaneously from different perspectives. The eight images may be subsequently stitched together to form a panorama. The first image may be an unstitched image from a first camera in the eight camera rosette. The second image may be an unstitched image from a second camera adjacent to the first camera taken during a later point in time.
Feature extractor 1116 extracts features from images 1102. In an embodiment, feature extractor 1116 may perform more than one function such as, for example, interest point detection and feature description. Interest point detection detects points in an image according to conditions and is preferably reproducible under image variations such as variations in brightness and perspective. The neighborhood of each interest point is then described as a feature. These features are represented by feature descriptors. The feature descriptors are preferably distinctive.
In an example, a Speeded Up Robust Features (SURF) algorithm may be used to extract features from the images. The SURF algorithm includes an interest point detection and feature description scheme. In the SURF algorithm, each feature descriptor includes a vector. In one implementation, the vector may be 128-dimensional. In an example where the images are panoramas taken from street level, the SURF algorithm may extract four to five thousand features in each image, resulting in a feature descriptor file 1104 of one to two megabytes in size.
Feature matcher 1118 uses each feature descriptor file 1104 to match features in the two images. In an example, each feature is represented by a feature descriptor in feature descriptor file 1104. Each feature descriptor includes a 128-dimensional vector. The similarity between a first feature and a second feature may be determined by finding the Euclidean distance between the vector of the first feature and the vector of the second feature.
A match for a feature in the first image among the features in the second image may be determined as follows. First, feature matcher 1118 determines the nearest neighbor (e.g., in 128-dimensional space) of the feature in the first image determined from among the features in the second image. Second, feature matcher 1118 determines the second-nearest neighbor of the feature in the first image determined from among the features in the second image. Third, feature matcher 1118 determines a first distance between the feature in the first image and the nearest neighboring feature in the second image, and feature matcher 1118 determines a second distance between the feature in the first image and the second nearest neighboring feature in the second image. Fourth, feature matcher 1118 calculates a feature similarity ratio by dividing the first distance by the second distance. If the feature similarity ratio is below a particular threshold, there is a match between the feature in the first image and its nearest neighbor in the second image.
Feature matcher 1118 may determine the nearest neighbor and second nearest neighbor, for example, by constructing a spill tree.
If the threshold on the feature similarity ratio is set too low, feature matcher 1118 may not determine enough matches. If it is set too high, feature matcher 1118 may determine too many false matches. In an embodiment, the threshold may be between 0.5 and 0.95 inclusive. In examples where the images are panoramas taken from street level, there may be several hundred matched features. The matched features are sent to point calculator 1120 as matched features 1106.
Point calculator 1120 determines a point in three-dimensional space for each pair of matched features 1106. To determine a point in three-dimensional space, a ray is formed or determined for each feature, and the point is determined based on the intersection of the rays for the features. In an embodiment, if the rays do not intersect, the point is determined based on the midpoint of the shortest line segment connecting the two rays. The output of point calculator 1120 is a cloud of three-dimensional points 1108 (e.g., one point for each pair of matched features).
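For illustration, the midpoint construction may be sketched as follows; the ray origins and directions are illustrative values rather than measured camera data:

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays: the
    three-dimensional point for a pair of matched features."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:             # parallel rays: no unique point
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = o1 + t1 * d1                  # closest point on ray 1
    p2 = o2 + t2 * d2                  # closest point on ray 2
    return 0.5 * (p1 + p2)

# Two camera positions a few units apart viewing the same feature.
point = triangulate(np.array([-2.0, 0.0, 0.0]), np.array([2.0, 10.0, 0.0]),
                    np.array([ 2.0, 0.0, 0.0]), np.array([-2.0, 10.0, 0.0]))
```

When the rays intersect exactly, as here, the midpoint coincides with the intersection; otherwise it is the point nearest both rays.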
Surface estimator 1122 determines a facade plane based on the cloud of points 1108. Surface estimator 1122 may determine the facade plane by using a best-fit or regression analysis algorithm such as, for example, a least-squares or an adaptive optimization algorithm. Examples of adaptive optimization algorithms include, but are not limited to, a hill-climbing algorithm, a stochastic hill-climbing algorithm, an A-star algorithm, and a genetic algorithm. Alternatively, surface estimator 1122 may determine the facade surface by translating a plane to determine the best position of the plane along an axis, as described above with respect to
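For illustration, a least-squares facade fit, one of the best-fit options mentioned above, may be sketched with a singular value decomposition; the point cloud here is synthetic:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: the centroid together
    with the direction of least variance as the plane normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                    # singular vector of least variance
    return centroid, normal

# Noise-free points lying on the facade plane x = 10.
cloud = [(10.0, y, z) for y in range(4) for z in range(4)]
centroid, normal = fit_plane(cloud)
```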
Surface estimator 1122 may also determine one or more street planes. The street planes and the facade planes together form surface planes 1110. Surface estimator 1122 stores surface planes 1110 in model database 1030.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
The present application is a continuation of U.S. patent application Ser. No. 14/584,183, filed Dec. 29, 2014, which is a continuation of U.S. patent application Ser. No. 13/605,635, filed Sep. 6, 2012 and which issued as U.S. Pat. No. 8,963,915 on Feb. 24, 2015, which is a continuation of U.S. patent application Ser. No. 12/038,325, filed Feb. 27, 2008 and which issued as U.S. Pat. No. 8,525,825 on Sep. 3, 2013, all of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5594844 | Sakai et al. | Jan 1997 | A |
5737533 | de Hond | Apr 1998 | A |
6009190 | Szeliski et al. | Dec 1999 | A |
6111582 | Jenkins | Aug 2000 | A |
6157747 | Szeliski et al. | Dec 2000 | A |
6256043 | Aho et al. | Jul 2001 | B1 |
6308144 | Bronfeld et al. | Oct 2001 | B1 |
6346938 | Chan | Feb 2002 | B1 |
6999078 | Akerman et al. | Feb 2006 | B1 |
7096428 | Foote | Aug 2006 | B2 |
7161604 | Higgins et al. | Jan 2007 | B2 |
7336274 | Kida | Feb 2008 | B2 |
7353114 | Rohlf et al. | Apr 2008 | B1 |
7570261 | Edecker et al. | Aug 2009 | B1 |
7698336 | Nath | Apr 2010 | B2 |
7712052 | Szeliski et al. | May 2010 | B2 |
7746376 | Mendoza et al. | Jun 2010 | B2 |
7843451 | Lafon | Nov 2010 | B2 |
7882286 | Natanzon et al. | Feb 2011 | B1 |
7933897 | Jones et al. | Apr 2011 | B2 |
7966563 | VanBree | Jun 2011 | B2 |
7990394 | Vincent et al. | Aug 2011 | B2 |
8072448 | Zhu et al. | Dec 2011 | B2 |
8319952 | Otani et al. | Nov 2012 | B2 |
8392354 | Salemann | Mar 2013 | B2 |
8447136 | Ofek et al. | May 2013 | B2 |
8525825 | Zhu et al. | Sep 2013 | B2 |
8525834 | Salemann | Sep 2013 | B2 |
8587583 | Newcombe et al. | Nov 2013 | B2 |
8624958 | Mendoza et al. | Jan 2014 | B2 |
8774950 | Kelly et al. | Jul 2014 | B2 |
8818076 | Shenkar | Aug 2014 | B2 |
20020070981 | Kida | Jun 2002 | A1 |
20030063133 | Foote et al. | Apr 2003 | A1 |
20040196282 | Oh | Oct 2004 | A1 |
20040257384 | Park et al. | Dec 2004 | A1 |
20050073585 | Ettinger et al. | Apr 2005 | A1 |
20050128212 | Edecker et al. | Jun 2005 | A1 |
20050210415 | Bree | Sep 2005 | A1 |
20060004512 | Herbst et al. | Jan 2006 | A1 |
20060050091 | Shoemaker et al. | Mar 2006 | A1 |
20060132482 | Oh | Jun 2006 | A1 |
20060271280 | O'Clair | Nov 2006 | A1 |
20070030396 | Zhou et al. | Feb 2007 | A1 |
20070070069 | Samarasekera et al. | Mar 2007 | A1 |
20070076920 | Ofek | Apr 2007 | A1 |
20070143345 | Jones et al. | Jun 2007 | A1 |
20070208719 | Tran | Sep 2007 | A1 |
20070210937 | Smith et al. | Sep 2007 | A1 |
20070250477 | Bailly | Oct 2007 | A1 |
20070273558 | Smith | Nov 2007 | A1 |
20070273758 | Mendoza et al. | Nov 2007 | A1 |
20080002916 | Vincent et al. | Jan 2008 | A1 |
20080033641 | Medalia | Feb 2008 | A1 |
20080106593 | Arfvidsson et al. | May 2008 | A1 |
20080143709 | Fassero et al. | Jun 2008 | A1 |
20080268876 | Gelfand et al. | Oct 2008 | A1 |
20090132646 | Yang et al. | May 2009 | A1 |
20090279794 | Brucher et al. | Nov 2009 | A1 |
20090315995 | Khosravy et al. | Dec 2009 | A1 |
20100076976 | Sotirov et al. | Mar 2010 | A1 |
20100250120 | Waupotitsch et al. | Sep 2010 | A1 |
20100257163 | Ohazama et al. | Oct 2010 | A1 |
20100305855 | Dutton et al. | Dec 2010 | A1 |
20110137561 | Kankainen | Jun 2011 | A1 |
20110202492 | Salemann | Aug 2011 | A1 |
20110254915 | Vincent et al. | Oct 2011 | A1 |
20110270517 | Benedetti | Nov 2011 | A1 |
20110283223 | Vaittinen et al. | Nov 2011 | A1 |
20140152699 | Oh | Jun 2014 | A1 |
20140160119 | Vincent et al. | Jun 2014 | A1 |
20140294263 | Hermosillo Valadez et al. | Oct 2014 | A1 |
20150154796 | Co | Jun 2015 | A1 |
20160148413 | Oh | May 2016 | A1 |
Number | Date | Country |
---|---|---|
101055494 | Oct 2007 | CN |
101090460 | Dec 2007 | CN |
2440197 | Jan 2008 | GB |
2004342004 | Dec 2004 | JP |
2005250560 | Sep 2005 | JP |
2008520052 | Jun 2008 | JP |
2006053271 | May 2006 | WO |
2007044975 | Apr 2007 | WO |
2008147561 | Dec 2008 | WO |
Entry |
---|
International Preliminary Report on Patentability, dated Sep. 10, 2010, International Patent Application No. PCT/US2009/001216, The International Bureau of WIPO, Geneva, Switzerland, 11 pages. |
Notification of the First Office Action, dated Mar. 30, 2012, Chinese Patent Application 200980114885.5, The State Intellectual Property Office of the People's Republic of China, 7 pages (English language translation appended). |
Notification of the Third Office Action, dated Feb. 25, 2013, Chinese Patent Application 200980114885.5, The State Intellectual Property Office of the People's Republic of China, 7 pages (English language translation appended). |
First Office Action, dated Apr. 1, 2013, Japanese Patent Application No. 2010-548720, Japanese Patent Office, 7 pages (English language translation appended). |
The State Intellectual Property Office of the People's Republic of China, “Notification of the Second Office Action,” Appln. No. 200980114885.5, dated Sep. 12, 2012, pp. 1/52-10/52 (English language translation provided—pp. 1-15). |
Yang, Yang, “Research on the Virtual Reality Technology of Digital Tour System,” Chinese Master's Theses Full-Text Database Information Science and Technology, Issue 5, Nov. 15, 2007, pp. 12/52-39/52. |
Cobzas et al., “A Panoramic Model for Remote Robot Environment Mapping and Predictive Display,” Published 2005. |
Kulju et al., “Route Guidance Using a 3D City Model on a Mobile Device,” Published 2002. |
Kimber, et al., “FlyAbout: Spatially Indexed Panoramic Video,” Proceedings of the ninth ACM international conference on Multimedia; 2001; pp. 339-347. |
Lowe et al.; “Fitting Parameterized Three-Dimensional Models to Images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 5, May 1991; pp. 441-450. |
Bay et al., “Surf: Speeded Up Robust Features;” Computer Vision-European Conference on Computer Vision 2006; Jul. 26, 2006; pp. 1-14. |
Zhu et al., U.S. Appl. No. 12/014,513, filed Jan. 15, 2008, entitled “Three-Dimensional Annotations for Street View Data.” |
Kadobayashi, R., et al., “Combined Use of 2D Images and 3D Models for Retrieving and Browsing Digital Archive Contents”, Videometrics VIII, Proceeding of the SPIE—The International Society of Optical Engineering, San Jose, CA, 2005, 10 pages. |
International Search Report and Written Opinion for International Patent Application PCT/US2009/001216, European Patent Office, Netherlands, completed Sep. 8, 2009, dated Sep. 21, 2009, 20 pages. |
Snavely, Noah et al., “Photo tourism: Exploring Photo Collections in 3D,” ACM Transactions on Graphics (SIGGRAPH Proceedings), 25 (3), 2006, pp. 835-846. |
Web Archive Capture (Feb. 26, 2008), <http://phototour.cs.washington.edu/>. |
Wei et al., “A Panoramic Image-Based Approach to Virtual Scene Modeling,” Oct. 2006. |
Cornelis et al., “3D Urban Scene Modeling Integrating Recognition and Reconstruction,” published Oct. 2007. |
Xu Huaiyu et al: “A Virtual Community Building Platform Based on Google Earth”, Hybrid Intelligent Systems, 2009. HIS '09. Fifth International Conference on, IEEE, Piscataway, NJ, USA, Aug. 12, 2009, pp. 349-352, XP031529801. |
U.S. Appl. No. 13/414,658, filed Jul. 3, 2012. |
Notification of First Office Action for Chinese Patent Application No. 201310473478.1 dated Mar. 3, 2016. |
Frueh et al., “Data Processing Algorithms for Generating Textured 3D Building Façade Meshes from Laser Scans and Camera Images,” Published 2004. |
Number | Date | Country | |
---|---|---|---|
20170186229 A1 | Jun 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14584183 | Dec 2014 | US |
Child | 15460727 | US | |
Parent | 13605635 | Sep 2012 | US |
Child | 14584183 | US | |
Parent | 12038325 | Feb 2008 | US |
Child | 13605635 | US |