Providing a thumbnail image that follows a main image

Information

  • Patent Grant
  • Patent Number
    11,163,813
  • Date Filed
    Wednesday, February 21, 2018
  • Date Issued
    Tuesday, November 2, 2021
Abstract
The technology relates to selecting and displaying images captured at different points in time. As an example, a user of a computing device may view a first street level image as viewed from a particular location and oriented in a particular direction. The user may select other time periods for which similar images are available. Upon selecting a particular time period, a second street level image may be displayed concurrently with the first street level image, wherein the second street level image was captured on or around the selected time period. If the user changes the perspective of the first image, an automatic change in perspective of the second image may occur.
Description

Various systems provide users with images of different locations, including panoramic images. For example, a panoramic image may include an image or collection of images having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater. Some panoramic images may provide a 360-degree view of a location.


Some systems allow users to view images in sequences, such as in time or space. In some examples, these systems can provide a navigation experience in a remote or interesting location. Some systems allow users to feel as if they are rotating within a virtual world by clicking toward the edges of a displayed portion of a panorama and having the panorama appear to “rotate” in the direction of the clicked edge, or clicking and dragging on the displayed portion of the panorama and having it appear to “rotate” following the mouse cursor.


SUMMARY

Aspects of the disclosure provide a computer-implemented method for selecting time-distributed panoramas for display. The method may include one or more computing devices receiving a request for a first image at a first location, wherein the first image will be displayed at a first orientation and the first image is associated with a first time, and wherein each of the one or more computing devices includes one or more processors. The method may further include determining at least one other image based at least in part on the first location, wherein each of the at least one other images is associated with a different time than the first time. The method may also include providing for display the first image and an indication that the at least one other images associated with different times are available. A request may be received for a second image associated with a second time, wherein the second image is one of the at least one other images, and wherein the second time is different from the first time. In response to the received request, the second image may be provided for display concurrently with the first image, wherein the second image is associated with the second time. In another example, the second image is provided for display so that the orientation of the second image corresponds to the orientation of the first image.


The method may further include receiving input indicating a change to a second orientation of the first image, and automatically providing for display a transition in the second image so as to correspond to the second orientation of the first image. In another example, the method may include receiving a request for a third image associated with a third time, wherein the third image is one of the at least one other images, and wherein the third time is different from the second time. In response to the received request for a third image, the second image may be replaced with the third image, wherein the third image is provided for display so that it corresponds to the first location and the first orientation of the first image.


In yet another aspect, the method may include providing for display a timeline having a visual indicia of a given time period being displayed. In addition, receiving the request for the third image may include a user moving the visual indicia along the timeline from a first position associated with the second time to a second position associated with the third time. In accordance with one aspect, replacing the second image with the third image may include the second image fading out and the third image fading in as the user moves the visual indicia along the timeline. The method may also include determining that one or more intermediate images are associated with the location of the first image, wherein these intermediate images were captured at an intermediate time period between the second time and the third time. Replacing the second image with the third image may further include replacing the second image with one or more intermediate images and replacing the one or more intermediate images with the third image.


In still another aspect, the method may include providing for display concurrently with the first image one or more supplementary images that relate to the first image. The method may also include, in response to receiving the request for the second image, replacing one or more of the supplementary images with corresponding supplementary images that correspond to the second time.


In another aspect, the method may include providing for display a plurality of supplementary images concurrently with the first image, wherein the supplementary images are selected from the at least one other images, wherein the supplementary images are associated with different time periods, and wherein each supplementary image has an orientation that corresponds to the orientation of the first image.


In still another aspect, the disclosure provides for a non-transitory computer-readable storage medium on which computer readable instructions of a program are stored, the instructions, when executed by one or more processors, cause the one or more processors to perform the methods described above.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an example of a system that may be used in accordance with aspects of the disclosure.



FIG. 2 is a diagram of various devices that may be used in connection with the example system.



FIG. 3 is a screen shot of a street level map view that may be displayed to a user.



FIG. 4 is a screen shot of a street level view that includes a thumbnail image in accordance with aspects of the disclosure.



FIG. 5 is a screen shot wherein the thumbnail presents an image that was captured at an earlier date than the street level view.



FIG. 6 is a screen shot of a street level view and thumbnail view that has been panned in accordance with aspects of the disclosure.



FIG. 7 is a screen shot of the street level view and associated thumbnails displaying images from earlier dates.



FIG. 8 is a flow diagram describing an example of a method that may be used to implement the aspects of the disclosure.





DETAILED DESCRIPTION

Overview


The technology relates to selecting and displaying images captured at different points in time. As an example, a user of a computing device may view a street level panoramic image as viewed from a particular location and oriented in a particular direction. This image may be displayed in a main viewing area. In addition to this main image, the system may also provide another view of the same location, as it appeared at a different point in time. In one aspect, the computing device may display a thumbnail image that was captured at a date other than the date of the image being displayed in the main viewing area. In addition, the thumbnail image may be displayed so that it corresponds to the location and orientation of the panoramic image that is displayed in the main viewing area. As an example, a user may be viewing a popular restaurant and wonder what was at that location before the restaurant. Similarly, the user may wish to see what a particular location looked like during a certain time of year.


In one aspect, the computing device may display an icon indicating that additional street level images from other points in time are available. For example, a clock icon may be displayed in connection with the main image when imagery from another time period is available. Upon the user selecting the clock icon, the computing device may display the thumbnail image that corresponds to the street level image being displayed in the main viewing area.


Corresponding thumbnail images may be selected from among a plurality of panoramic images to identify an image that was captured at a different point in time than the main image but that has a capture location that is closest to the capture location of the main image. Here, a different point in time may refer to a different collection date or a different collection pass near the location of the given panoramic image.


In one example, the user may select the time period of the images to be displayed within the thumbnail viewing area by choosing a particular point along a displayed timeline. The timeline may include all time periods for which corresponding street level images are available, or the user may select a particular range of time periods to be displayed within the timeline. In one aspect, the timeline may contain a plurality of markers indicating points in time for which additional street level images are available.
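
As a minimal illustration (not part of the original disclosure), the Python sketch below derives such timeline markers from a set of stored image records; the captured_at field name is an assumption made for the example.

def build_timeline(images):
    """Return the distinct capture dates, in chronological order, so that each
    date can be drawn as a marker on the displayed timeline."""
    return sorted({image.captured_at.date() for image in images})

Each returned date would then be rendered as one selectable point along the timeline.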


The user may also transition between thumbnail images from different points in time. For example, the user may slide a cursor along the displayed timeline from a first point in time to a second point in time. In one example, the transition of the thumbnail image may occur by fading out the image that corresponds to the first point in time and fading in the thumbnail image that corresponds to the second point in time as the cursor moves between the two time periods. In another example, the computing device may display, in chronological order, some or all of the intermediate images that were taken between two points in time.


In one aspect of the disclosure, the orientation of the thumbnail image may always correspond to that of the main image that is displayed in the main viewing area. Accordingly, if the user causes the main image to pan by 20 degrees to the left, then the thumbnail image may also automatically pan 20 degrees to the left, or the panorama in the thumbnail may change so that the same area that is centered in the main view is pictured in the thumbnail. Other changes in the main viewing area may also automatically occur in the thumbnail image. For example, a change in the level of zoom or the location from which the main image is being viewed may cause a corresponding change in the thumbnail image. Similarly, a change in the thumbnail image may also cause a corresponding change in the main image.
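
One way to picture this coupling of the two views is the following sketch, which mirrors pan and zoom changes from the main view onto the thumbnail; the ViewState structure and field names are illustrative assumptions rather than identifiers defined in the disclosure.

from dataclasses import dataclass

@dataclass
class ViewState:
    heading_deg: float  # compass orientation of the displayed view
    zoom: float         # current zoom level

def pan(main, thumbnail, delta_deg):
    """Pan the main view and mirror the same rotation in the thumbnail view."""
    main.heading_deg = (main.heading_deg + delta_deg) % 360
    thumbnail.heading_deg = (thumbnail.heading_deg + delta_deg) % 360

def set_zoom(main, thumbnail, zoom):
    """Change the main view's zoom level and keep the thumbnail in step."""
    main.zoom = zoom
    thumbnail.zoom = zoom

# Example: panning the main view 20 degrees to the left also pans the thumbnail.
main_view = ViewState(heading_deg=90.0, zoom=1.0)
thumb_view = ViewState(heading_deg=90.0, zoom=1.0)
pan(main_view, thumb_view, -20.0)
assert thumb_view.heading_deg == 70.0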


The computing device may also display a plurality of supplementary thumbnail images that are related to the main image in some way, such as being within a predetermined distance from the main image. In one aspect, the supplementary images may correspond to the point in time that has been selected by the user. Accordingly, as the user selects a particular time period within the timeline, each of the supplementary images may be updated so as to correspond to the selected time period.


Example Systems


FIGS. 1 and 2 include an example system 100 in which the features described herein may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. In this example, system 100 can include one or more computing devices 110, 120, 130, and 140, storage system 150, as well as collection devices 160 and 170. One or more computing devices 110 can contain one or more processors 112, memory 114 and other components typically present in general purpose computing devices. Memory 114 of the one or more computing devices 110 can store information accessible by one or more processors 112, including instructions 116 that can be executed by the one or more processors 112.


Memory can also include data 118 that can be retrieved, manipulated or stored by the processor. The memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.


The instructions 116 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the processor. In that regard, the terms “instructions,” “application,” “steps” and “programs” can be used interchangeably herein. The instructions can be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.


Data 118 can be retrieved, stored or modified by processor 112 in accordance with the instructions 116. For instance, although the subject matter described herein is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents. The data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data.


The one or more processors 112 can include any conventional processors, such as one or more commercially available CPUs and/or GPUs. Alternatively, the processor can be a dedicated component such as an ASIC or other hardware-based processor. Although not necessary, one or more computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc. faster or more efficiently.


Although FIG. 1 functionally illustrates the processor, memory, and other elements of computing device 110 as being within the same block, the processor, computer, computing device, or memory can actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing. For example, the memory can be a hard drive or other storage media located in one or more housings different from those of the one or more computing devices 110. Accordingly, references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel. For example, the computing devices 110 may include server computing devices operating as a load-balanced server farm. Yet further, although some functions described below are indicated as taking place on a single computing device having a single processor, various aspects of the subject matter described herein can be implemented by a plurality of computing devices, for example, communicating information over network 180.


The one or more computing devices 110 can be at various nodes of a network 180 and capable of directly and indirectly communicating with other nodes of network 180. Although only a few computing devices are depicted in FIGS. 1-2, it should be appreciated that a typical system can include a large number of connected computing devices, with each different computing device (as well as collection device) being at a different node of the network 180. The network 180 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks. The network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission of information.


As an example, the one or more computing devices 110 may include one or more web servers that are capable of communicating with storage system 150 as well as computing devices 120, 130, and 140 via the network. For example, one or more server computing devices 110 may use network 180 to transmit and present information to a user, such as user 220, 230, or 240, on a display, such as displays 122, 132, or 142 of computing devices 120, 130, or 140. In this regard, computing devices 120, 130, and 140 may be considered client computing devices and may perform all or some of the features described below.


Each of the client computing devices may be configured similarly to the server computing devices 110, with one or more processors, memory and instructions as described above. Each client computing device 120, 130 or 140 may be a personal computing device intended for use by a user 220, 230, or 240, and have all of the components normally used in connection with a personal computing device such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, a display such as displays 122, 132, or 142 (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device that is operable to display information), and user input device 124 (e.g., a mouse, keyboard, touchscreen or microphone). The client computing device may also include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another.


Although the client computing devices 120, 130 and 140 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing device 120 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet. In another example, client computing device 130 may be a head-mounted computing system. As an example, the user may input information using a small keyboard, a keypad, a microphone, visual signals captured by a camera, or a touch screen.


Storage system 150 may store various types of information. As described in more detail below, the storage system 150 may store images, such as those described above as having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater. In that regard, example panoramic images described herein provide a 360-degree view of a location, though other types of images may also be used. In addition, each panoramic image may be associated with geographic location information indicating the location and, in some cases, the orientation at which the panoramic image was captured (e.g., which part of the panoramic image is oriented towards “North”, etc.) as well as timestamp information indicating the date and time at which the panoramic image was captured.
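
For illustration only, a stored record of this kind might be represented as in the following Python sketch; the field names are assumptions, not identifiers used by the disclosure.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class PanoramicImage:
    image_id: str
    latitude: float           # capture ("snap") location
    longitude: float
    north_offset_deg: float   # which part of the panorama faces North
    captured_at: datetime     # date and time at which the panorama was captured

# Example record from a single collection pass.
example = PanoramicImage("pano_001", 37.4230, -122.0840, 12.5,
                         datetime(2013, 4, 2, 14, 30))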


The storage system 150 may also store 3D geometry data. As explained above and described in more detail below, this 3D geometry data may correspond to points on the surface of any objects in the plurality of panoramic images. The 3D geometry data may provide the position (x,y,z) of points relative to a particular coordinate system (e.g., relative to a position of a LIDAR system that generated the geometry data or a global positioning system (GPS) such as latitude, longitude, and altitude coordinates).


Storage system 150 may also store map information. The map information may be an image-based map or may include a plurality of vectors used to identify the shape, orientation, and other characteristics of streets used to display a map. In this regard, the streets may be divided into discrete road segments. As an example, a collection of such road segments (or vectors) may be used to display a map.


As with memory 114, storage system 150 can be of any type of computerized storage capable of storing information accessible by server 110, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. In addition, storage system 150 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 150 may be connected to the computing devices via the network 180 as shown in FIG. 1 and/or may be directly connected to or incorporated into any of the computing devices 110-140 (not shown).


Collection devices 160 and 170 may include a computing device, configured similarly to one of the server computing devices or client computing devices with a processor and memory storing data and instructions (not shown in FIG. 1 for simplicity). Collection devices 160 and 170 may also provide all or some of the images of storage system 150. Each of the collection devices 160 and 170 may include a camera or other information collection device. For example, collection device 160 may include a camera 162 mounted on a vehicle. As the vehicle is driven along a street, the camera of collection device 160 may capture panoramic images. In this regard, all or some of the panoramic images of storage system 150 may be considered “street level images.” As another example, collection device 170 may include a camera rig attached to a backpack (e.g., for paths and other non-street areas), a smartphone camera, a dedicated camera device, etc., which a person walks, bikes, or otherwise moves around with in order to capture panoramic images. In addition to capturing images, the collection devices and/or camera may be configured to provide each panoramic image with a timestamp indicating the date and time at which the image was captured. The captured panoramic images and timestamps may be uploaded or downloaded to the storage system 150.


Each of collection devices 160 and 170 may include a position system 164 in order to determine the camera's relative or absolute position on a map or on the Earth when an image is captured. For example, the position system 164 may include a GPS receiver to determine the device's latitude, longitude and/or altitude position and provide a two or three dimensional (2D or 3D) location at which each panoramic image was captured by the collection device. Other location systems such as laser-based localization systems, inertial-aided GPS, trilateration/triangulation, or camera-based localization may also be used to identify the location of the vehicle. The location of the vehicle may include an absolute geographical location, such as latitude, longitude, and altitude, as well as relative location information, such as location relative to other cars immediately around it, which can often be determined with less noise than an absolute geographical location.


The positioning system 164 may also include other devices in communication with the camera or collection device, such as an accelerometer, gyroscope or another direction/speed detection device to determine the orientation of the camera 162 when the panoramic image was captured. By way of example only, an acceleration device may determine its pitch, yaw or roll (or changes thereto) relative to the direction of gravity or a plane perpendicular thereto. The device may also track increases or decreases in speed and the direction of such changes.


The location and orientation data provided by the collection device as set forth herein may be associated with the panoramic images as they are being captured and provided automatically to the storage system 150, other computing devices, and combinations of the foregoing. Although camera 162 and position system 164 are depicted within the collection device 160, these components may or may not be included in the same physical housing. In this regard, the position system 164 may be a different device from the camera 162 such that both components output the 2D or 3D location information and orientation information and panoramic images to the collection device, which processes these outputs in order to associate them with one another and provide them to the storage system 150.


In some examples, the collection device 160 may include a LIDAR system for generating the 3D geometry data described above. For example, as a vehicle is driven along the street, a LIDAR system may be used to collect laser data or light intensity information which is converted into three dimensional points, which can then be used to determine point clouds and/or the surfaces of objects. These objects will correspond to objects that are included in a panoramic image that was captured by a camera, such as camera 162, at approximately the same geographic location as the laser data.


Example Methods

In order to provide a user with images, a first plurality of images, including panoramic images, may be captured and stored. These images may be captured by one or more cameras, including cameras mounted on a vehicle (or other device). If the camera is mounted on a vehicle, images may be captured in a sequence as the camera is moved along. Each image may be associated with 2D or 3D location and orientation information corresponding to the geographic location where the panoramic image was captured as well as a timestamp indicating the date and time when the image was captured. For example, as a collection device such as collection device 160 is moved around, such as by driving a vehicle along a street, camera 162 may capture panoramic images. At the same time, the position system 164 may provide GPS coordinates for each panoramic image captured by camera 162. Each series of images captured by the collection device 160 in this way may be considered a separate “run.”


The location at which a particular image was captured, which may be referred to as the “snap location,” may be determined from the GPS coordinates of the collection device at the time the image was taken. In turn, the snap location may be used by the one or more server computing devices to select one or more panoramic images that were captured at approximately the same location but at different points in time. Here, a different point in time refers to images that were captured on different days. In this regard, given a snap location of a first panoramic image, other panoramic images of the storage system 150 may be accessed by the one or more server computing devices 110 in order to identify a second panoramic image having a snap location that is both closest to the snap location of the first panoramic image and has a timestamp which indicates that the second image was captured on a different day than the first panoramic image. In some instances, if the closest panoramic image from a different day is farther away than a specified distance, for example, 15 meters or more, then no second panoramic image may be selected for that particular day.
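
A minimal sketch of this selection step, assuming image records shaped like the PanoramicImage example above and a 15 meter cutoff, might look as follows; the helper names are hypothetical.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two latitude/longitude points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_second_image(first, candidates, max_distance_m=15.0):
    """Pick the candidate closest to the first image's snap location that was
    captured on a different day; return None if none lies within the cutoff."""
    best = None
    best_dist = max_distance_m
    for image in candidates:
        if image.captured_at.date() == first.captured_at.date():
            continue  # the second image must come from a different day
        dist = haversine_m(first.latitude, first.longitude,
                           image.latitude, image.longitude)
        if dist < best_dist:
            best, best_dist = image, dist
    return best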


If there are multiple other runs for a given road segment (e.g., one in August 2011, one in September 2012, etc.), the panoramic images associated with such runs (as identified by the timestamps) may be queried in groups such that a closest panoramic image along the road segment may be identified for each individual run. In this regard, a plurality of such second images, one for each run, may be selected based on a particular first image. Any such second images may be provided for display to a user in conjunction with the first image. For example, a user, such as user 220, may make a request to view a first panoramic image using a client computing device, such as client computing device 120, by selecting an option to view the first panoramic image, by searching for the first panoramic image by entering a particular location into a search engine, by selecting a point on a map corresponding to the 2D or 3D location of the first panoramic image, or in other conventional ways. In response, the client computing device may send a request for or identify the first panoramic image to the one or more server computing devices 110.
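
Continuing the sketch above, grouping candidates by collection day (used here as a stand-in for a run identifier, which the sketch does not model) and keeping the closest image from each group might look like this:

from collections import defaultdict

def select_per_run(first, candidates, max_distance_m=15.0):
    """Return one second image per run, using the collection day as a proxy for the run."""
    runs = defaultdict(list)
    for image in candidates:
        runs[image.captured_at.date()].append(image)
    second_images = []
    for run_date, images in sorted(runs.items()):
        if run_date == first.captured_at.date():
            continue  # skip the run that produced the first image
        best = select_second_image(first, images, max_distance_m)
        if best is not None:
            second_images.append(best)
    return second_images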


The one or more server computing devices may receive the request to identify any second images for the first panoramic image. In one example, any of the selected second images described above may be associated with the first panoramic image such that the server computing device 110 may use the first panoramic image to retrieve any second images. Thus, any second images may be selected in advance by the one or more server computing devices, that is, before the user has requested to view the first image. Alternatively, the selection may be performed in real time (e.g., without delay and in response to the request for or identifying the first panoramic image) in order to keep the closest available images up to date. This may be especially useful as the plurality of images of the storage system 150 may change over time as additional images are collected.


Once identified, the one or more server computing devices may provide the first panoramic image for display, as well as an option to view any of the identified second images for display. For instance, when a user views the first panoramic image, he or she may be provided with an option to view one or more second images in conjunction with the first panoramic image or to switch to a view of one of the second panoramic images. In some examples, any second images, although not immediately displayed, may also be provided to the client computing device before the user has selected the option, in order to allow the client computing device to display them more quickly.



FIG. 3 is an example screen shot 300 which may be displayed to a user, such as user 220, on a display of a client computing device, such as display 122 of client computing device 120. In this example, the screen shot 300 includes a display of a portion of the panoramic image 302, or a view port. This view port includes a particular orientation and zoom level which allows the user to view portions of buildings 304, 306, and 308. In this example, panoramic image 302 may be considered a first panoramic image as described above. The screen shot 300 also includes display box 310 which includes a first part with a clock icon 312 that indicates that a second panoramic image is available for the panoramic image 302. Display box 310 also includes other information such as location data 314 identifying an approximate location of the portion of the panoramic image 302.


By selecting the option to view a second panoramic image, the user may be provided with a display of a portion of a second panoramic image. For example, as shown in example screen shot 400 of FIG. 4, once a user has selected clock icon 312, the display box 310 changes to include a time window 402. Time window 402 includes thumbnail image 404, which may be a portion of a second panoramic image that was selected as having the same snap location as panoramic image 302. In this example, the thumbnail image 404 includes buildings 420, 422, and 424. Here, buildings 424 and 420 may correspond to buildings 304 and 308, respectively, of panoramic image 302. However, building 422 does not correspond to building 306, as in this example, building 422 did not exist at the time that panoramic image 302 was captured. Rather, by the time panoramic image 302 was captured, building 422 had been replaced by building 306.


Time window 402 may also include a timeline 406 or other selection arrangement that provides a number of different functions. In this example, timeline 406 indicates the quantity of available second images for the panoramic image 302. As there are three points 408, 410, and 412 on the timeline, this may indicate that there are images from at least two different dates that correspond to the location of panoramic image 302. A scroll marker 401 may be used to indicate the date that is currently being displayed within time window 402. For example, point 408, as it is slightly larger than points 410 and 412, indicates that scroll marker 401 is at location 408 and that thumbnail image 404 was captured in 2009. Assuming that image 302 is the most recent image available for the current location, point 412 may indicate that panoramic image 302 was captured in 2013. Point 410 may further indicate that another image, captured sometime between 2009 and 2013, is also available for viewing by the user. Thus, by manipulating scroll marker 401 along timeline 406, the user may view other available panoramic images, including the image corresponding to point 410 as well as panoramic image 302 (corresponding to point 412) in the time window 402. Of course, other timelines may include fewer points than available panoramic images such that locations along the timeline between points may also correspond to available panoramic images, and other such mechanisms may be used to indicate to the user that second images are available.


As the user moves scroll marker 401 along timeline 406 from a first date to a second date, the image displayed in time window 402 transitions from a thumbnail image taken on the first date to a thumbnail image taken on the second date. In accordance with one aspect, the transition between thumbnail images may include animation or effects. For example, as scroll marker 401 is moved from a first date to a second date, a first thumbnail image displayed within time window 402 may begin to fade out, while a second thumbnail image corresponding to the second date may fade in, so as to replace the first thumbnail image. In another example, the movement of scroll marker 401 may cause the first thumbnail image to slide out of view within the time window 402 as the second thumbnail image replaces the first thumbnail image by sliding into view.
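
As a rough illustration of the cross-fade, the following sketch computes opacities for the outgoing and incoming thumbnails from the scroll marker's position between two timeline points; the function and parameter names are assumptions made for the example.

def crossfade_opacities(marker_pos, first_pos, second_pos):
    """Return (first_opacity, second_opacity) for a marker between two timeline points.

    At first_pos the first thumbnail is fully opaque; as the marker approaches
    second_pos, the first image fades out while the second fades in."""
    span = second_pos - first_pos
    if span == 0:
        return 0.0, 1.0
    t = (marker_pos - first_pos) / span
    t = min(max(t, 0.0), 1.0)  # clamp the marker to the segment
    return 1.0 - t, t

# Example: halfway between the two points, both thumbnails are at 50% opacity.
print(crossfade_opacities(0.5, 0.0, 1.0))  # (0.5, 0.5)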



FIG. 5 shows screenshot 500 in which a user has slid scroll marker 401 to location 412 from location 408 shown in FIG. 4. As scroll marker 401 moves from location 408 to location 412, the time window 402 transitions from showing a thumbnail image 404 that was taken in 2009 to showing a thumbnail image 504 that was taken in 2013. As described above, the transition between these images may include image 404 fading away within time window 402 as scroll marker 401 moves away from location 408. Similarly, image 504 may begin to fade in within time window 402 as scroll marker 401 approaches location 412. In addition, if any intermediate images are available between 2009 and 2013, time window 402 may also display those intermediate images as scroll marker 401 moves to a location corresponding to the date on which the intermediate image was taken. For example, location 410 may be associated with an image that was taken in the year 2011. As scroll marker 401 moves between location 408 and location 412, time window 402 may transition from image 404 taken in 2009 to an intermediate image taken in 2011, and then transition again from the intermediate image to image 504 taken in 2013.


Since 2013 is the same point in time at which panoramic image 302 was captured, thumbnail image 504 may be based on a portion of panoramic image 302 that is displayed in the main viewing area 510. Accordingly, thumbnail image 504 and the displayed portion of panoramic image 302 are the same.


The location, orientation, and zoom level of the first panoramic image shown in the main viewing area may be used to determine how to display the second thumbnail image within time window 402. For example, in FIG. 4, the displayed snap location, orientation, and zoom level of the second image within time window 402 may be selected in order to correspond to the snap location, orientation, and zoom level of image 302. In addition, if the location, orientation, or zoom level of image 302 changes, the location, orientation, or zoom level of the second image within time window 402 may automatically change as well. For example, if the user pans image 302 to the left by 30 degrees, image 404 shown in time window 402 may automatically pan to the left by 30 degrees as well. This can be seen in FIG. 6, wherein the user has panned image 302 to the left, thereby causing image 404 to pan by a corresponding amount, so that images 302 and 404 maintain corresponding orientations. In this way, the user may easily compare panoramic image 302 and thumbnail image 404 from various camera orientations and levels of zoom. Through this comparison, the user may determine how the displayed location has changed between the date thumbnail image 404 was taken and the date panoramic image 302 was taken. For example, the user may compare panoramic image 302 and thumbnail image 404 of FIG. 4 to determine that building 306 did not exist in 2009, but was instead building 422. The user may then pan within panoramic image 302, causing an automatic panning of thumbnail image 404, so as to determine if any changes have occurred to other objects in the area, including signs, roadways, buildings, trees, etc. In another aspect, the user may pan or zoom within image 404 so as to cause an automatic panning or zooming of image 302.


The three dimensional geometry data associated with a first panoramic image may be used to determine the distance between the point of view of the main viewing area (e.g., where there is no zoom, the point of view would be the actual location of the first panoramic image) and an object within the image. This distance, as well as the three dimensional geometry data associated with the thumbnail image, may then be used to adjust the zoom of the thumbnail image when the thumbnail image is displayed in connection with the first panoramic image. In this regard, the thumbnail image may be displayed such that objects in the thumbnail image will appear to be the same distance from the user as those same objects in the main viewing area of the first panoramic image. For example, as can be seen from screen shot 600 of FIG. 6, the displayed portion of panoramic image 302 corresponds to the displayed portion of panoramic image 404. In this regard, both the orientations and the zoom levels of the panoramic images correspond, even though they are not necessarily the same.
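
A minimal sketch of such a zoom adjustment, under a simple pinhole-camera assumption that apparent size scales with zoom divided by distance, is shown below; it illustrates one possible approach rather than the disclosed implementation.

def match_thumbnail_zoom(main_zoom, main_distance_m, thumb_distance_m):
    """Scale the thumbnail's zoom so that an object centered in both views
    appears roughly as far from the user in the thumbnail as in the main view."""
    if main_distance_m <= 0:
        return main_zoom  # no meaningful distance to match against
    return main_zoom * (thumb_distance_m / main_distance_m)

# Example: an object 30 m from the main viewpoint but 45 m from the thumbnail's
# snap location calls for the thumbnail to zoom in by a factor of 1.5.
print(match_thumbnail_zoom(1.0, 30.0, 45.0))  # 1.5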


Similarly, the orientations of two images may be considered to be corresponding even if the two images do not face the exact same direction. For example, orientations may be slightly different depending on the 2D or 3D location of each panoramic image. In particular, if two images were captured at slightly different locations, the system may adjust the orientation of the thumbnail image so that the same object or area is displayed within the center of each image. This adjustment may be based on the three dimensional geometry data that is associated with each image, as well as the distance between the snap location of each image.
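
One way to realize this adjustment, sketched below under the assumption that a common target point (for example, taken from the 3D geometry data) is known for both images, is to compute the bearing from each snap location toward that target and orient each view accordingly; the names are illustrative.

import math

def bearing_deg(from_lat, from_lon, to_lat, to_lon):
    """Initial compass bearing from one latitude/longitude point toward another."""
    p1, p2 = math.radians(from_lat), math.radians(to_lat)
    dl = math.radians(to_lon - from_lon)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def center_both_views_on(target_lat, target_lon, main, thumb):
    """Return the headings that center the main view and the thumbnail view on
    the same target point, even when their snap locations differ slightly."""
    main_heading = bearing_deg(main.latitude, main.longitude, target_lat, target_lon)
    thumb_heading = bearing_deg(thumb.latitude, thumb.longitude, target_lat, target_lon)
    return main_heading, thumb_heading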


Returning to FIG. 4, in addition to panoramic image 302 and thumbnail image 404, the client computing device 120 may display additional related images 430-436. These related images 430-436 may be images that are selected for display due to their relationship to panoramic image 302 or thumbnail image 404. For example, related images 430-436 may be images that represent different perspectives or camera orientations that are available from the same location as images 302 and 404. Related images 430-436 may also include images that depict nearby locations or related objects. For example, if image 302 shows a particular type of restaurant or a particular restaurant chain, related images 430-436 may include images of other locations that contain the same type of restaurant or the same restaurant chain.


In one aspect, related images 430-436 may correspond with the image being displayed in time window 402. Accordingly, as the user moves scroll marker 401 to different locations along timeline 406, related images 430-436 may change so as to display images that correspond to the time period designated by scroll marker 401. For example, in FIG. 4, scroll marker 401 is at location 408 corresponding to February 2009. Accordingly, one or more of the related images 430-436 may be selected from images that were captured on or around February 2009. In FIG. 5, scroll marker 401 is at location 412, which corresponds to April 2013. Related images 530-536 may therefore be selected from images that were taken on or around April 2013.


In accordance with another aspect, the client user device may display a plurality of related images that were taken at different dates. Similarly to the thumbnail image displayed in time window 402, these related images may correspond to the location, orientation, and zoom level of the first panoramic image displayed in the main viewing area. For example, as shown in screen shot 700 of FIG. 7, related images 730-734 each correspond to the location, orientation, and zoom level of panoramic image 702 and thumbnail image 404. As described above, the location, orientation, and zoom level of the related images 730-734 and panoramic image 702 may each correspond to one another without matching exactly. For example, related images 730-734 and panoramic image 702 may have been captured at slightly different locations, and the orientation and zoom level of related images 730-734 may be adjusted based on this difference in locations. As shown in screen shot 700, each of the related images 730-734 was captured on a different date. Accordingly, a user may easily compare changes that have occurred to a location over a series of dates. The dates of related images may be dependent on the images that are available for the location and orientation being displayed in the main viewing area. Related images 730-734 may include every available image that corresponds to the location and orientation of panoramic image 702 or may include only a subset of available images. For example, related images 730-734 may be selected for display so as to provide no more than one image for a particular interval of time, such as by providing one image for every year.


A visual indicia, such as icons 740 and 742, may be displayed to the user to indicate that additional related images are available. For example, icon 740 indicates that at least one older image is available for the location corresponding to panoramic image 702, and icon 742 indicates that at least one more recent image is available for this location. The user may view the additional images by selecting icon 740 or 742, thereby causing the related images to scroll in the selected direction.


In accordance with one aspect, the user may switch between the displayed thumbnail image and the first panoramic image. For example, a user may select the thumbnail image displayed in the time window 402 or one of the points of timeline 406 using a mouse pointer or a finger on a touchscreen. In response, the client computing device may transition from a display of a first panoramic image to a display of the selected thumbnail image. In between, if needed, the client computing device may request the second panoramic image from the one or more server computing devices as well. In screen shot 700 of FIG. 7, the user has selected thumbnail image 404, thereby causing main viewing area 510 to display the same image as panoramic image 702.


Flow diagram 800 of FIG. 8 is an example of some of the features described above that may be performed by one or more computing devices, such as computing devices 110, described above. As shown in block 802 of this example, the one or more computing devices access a plurality of panoramic images. Each given panoramic image of the plurality of panoramic images may be associated with multi-dimensional location information defined in at least two dimensions, uni-dimensional (1D) location information defined in only one dimension, three dimensional (3D) geometry data corresponding to a surface of an object depicted in the given panoramic image, and time information identifying when the given panoramic image was captured. At block 804, a first panoramic image, of the plurality of images, is provided for display to a user in a main viewing area. The first image may be provided in connection with a request from a user for a panoramic image that is associated with a particular location. The first panoramic image may be the most recent panoramic image available based on the identified time information. At block 806, the user may be provided with an indication that additional images are available for the particular location associated with the first panoramic image, and that these additional images were captured on dates that are different than the date on which the first panoramic image was captured. For example, a timeline of dates may be displayed, wherein each date indicates that one or more additional images are available for the location of the first panoramic image.


A request from the user may be received, wherein the request identifies a particular date for which a second image is to be displayed (block 808). For example, a user may select a particular date within the displayed timeline. A thumbnail image that was captured at or near the selected date may then be provided for display (block 810). This thumbnail image may be provided for display so that the location, orientation, and level of zoom of the thumbnail image correspond to the location, orientation, and level of zoom of the first image. As stated above, correspondence does not require an exact match. For example, the location of the first image may be determined to correspond to the location of the second image if the two images are within a predetermined distance from one another, such as being within ten feet of one another. Similarly, the orientation of each image may be corresponding if the same geographic area or object appears within the center of each of the images. In addition, the level of zoom of each image may be considered as corresponding to one another if each image appears to be approximately the same distance from the same objects shown in each image.
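
The location test described here can be pictured with the short sketch below, which treats two snap locations as corresponding when they lie within ten feet of one another; the small-distance approximation and the function name are assumptions made for the example.

import math

FEET_TO_METERS = 0.3048

def locations_correspond(lat1, lon1, lat2, lon2, max_feet=10.0):
    """Return True when two snap locations are within the predetermined distance
    (ten feet here), using an equirectangular approximation that is adequate at
    this scale."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6371000.0
    dy = math.radians(lat2 - lat1) * 6371000.0
    return math.hypot(dx, dy) <= max_feet * FEET_TO_METERS

# Example: capture points roughly 2 m apart correspond; points about 20 m apart do not.
print(locations_correspond(37.42300, -122.08400, 37.42302, -122.08400))  # True
print(locations_correspond(37.42300, -122.08400, 37.42318, -122.08400))  # False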


The user may change the selected time period by, for example, moving a cursor along the displayed timeline from a first date to a second date. If a change in the selected time period is determined (block 812), the thumbnail image may be replaced with a new thumbnail image, wherein the new thumbnail image corresponds to the newly selected date (block 814). The user may also change the perspective of the first image by, for example, panning the image to a different orientation or by changing the zoom level of the image. If a change in perspective is determined (block 816), the perspective of the thumbnail image may be automatically changed so that the perspective of the thumbnail image continues to correspond with the perspective of the first image (block 818). Blocks 812-818 may be repeated until the user no longer provides any additional input regarding the selected time period to be displayed or changes to the perspective of the first image.


In accordance with one aspect, the displayed images may be altered based on the date on which the image was captured. In particular, older images may be altered so as to appear as if they have aged relative to more recent images. The apparent aging of the images may be exaggerated relative to actual differences in the time periods for each image. For example, returning to FIG. 4, thumbnail image 404, which was captured in 2009, may be altered to appear as if it has aged in comparison to image 302, which was captured in 2013. These images may be altered by implementing any number of effects, such as by reducing the colors within the image, by adjusting the exposure, or by adding scratches, discolorations, or other imperfections to the image. These alterations may occur through the application of one or more filters. In addition, the alterations for each displayed image may be increased as the user goes back in time within time window 402 by viewing increasingly older images. In this way, the user is provided with an additional indication of the relative age of the displayed images.
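
A toy version of such an aging effect, working on a single (R, G, B) pixel so that no imaging library needs to be assumed, might desaturate, tint, and darken more strongly as the image's age increases; the specific amounts chosen here are arbitrary illustrations.

def age_pixel(rgb, years_old, strength=0.08):
    """Fade one (R, G, B) pixel toward a dimmer, less saturated, sepia-like tone,
    with the effect growing in proportion to the image's age."""
    r, g, b = rgb
    amount = min(1.0, years_old * strength)   # 0 = untouched, 1 = fully aged
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # luminance used for desaturation

    def faded(channel, tint):
        # Pull the channel toward a tinted gray, then reduce exposure slightly.
        mixed = channel + (gray * tint - channel) * amount
        return int(mixed * (1 - 0.2 * amount))

    return (faded(r, 1.00), faded(g, 0.92), faded(b, 0.80))

# Example: a pixel from a 2009 image displayed alongside 2013 imagery (4 years older).
print(age_pixel((180, 140, 120), years_old=4))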


Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.

Claims
  • 1. A computer-implemented method of providing a thumbnail image that corresponds to a main image, the method comprising: selecting to display on a graphical interface, by one or more computing devices, the main image at a first geographical location, wherein the main image is associated with imagery of the first geographical location captured from a first orientation and at a first time; determining, by the one or more computing devices, one or more thumbnail images based at least in part on the first geographical location, wherein the one or more thumbnail images are each associated with the imagery of the first geographical location captured from an orientation similar to the first orientation and at a time different than the first time; generating to display, by the one or more computing devices, the main image and an indication that the one or more thumbnail images each associated with its respective time are available; providing to display concurrently with the main image, by the one or more computing devices, a selected one of the one or more thumbnail images; receiving, by the one or more computing devices, input representing a change of the main image from a first zoom level to a second zoom level; determining a distance between a point of view of a main viewing area of the graphical interface to an object within the main image based on three dimensional geometry data associated with the main image; generating, by the one or more computing devices, a corresponding change in zoom level of the selected thumbnail image to the second zoom level, wherein generating the corresponding change in zoom level of the selected thumbnail image to the second zoom level includes adjusting the zoom level of the selected thumbnail image based on the determined distance; and providing to display the selected thumbnail image at the second zoom level, so that the main image and the selected thumbnail image are presented concurrently at the second zoom level.
  • 2. The method of claim 1, wherein the indication that the one or more thumbnail images are available is an icon arranged on the graphical interface.
  • 3. The method of claim 2, wherein the icon is a clock icon.
  • 4. The method of claim 2, wherein: prior to providing the selected thumbnail image, the method further includes receiving a selection of the icon; and upon receiving the selection, generating a time window for display on the graphical interface, the time window including the selected thumbnail image therein.
  • 5. The method of claim 1, further comprising: receiving, by the one or more computing devices, input indicating a change of the main image from the first orientation to a second orientation; and generating, by the one or more computing devices, a corresponding change in orientation of the selected thumbnail image to the second orientation.
  • 6. The method of claim 1, further comprising providing for display a timeline having a visual indicia of a given time period, wherein the timeline indicates a quantity of available thumbnail images associated with different times during the given time period.
  • 7. The method of claim 6, wherein the timeline indicates at least one of an earliest date and a most current date of the thumbnail images during the given time period.
  • 8. The method of claim 6, wherein the timeline includes a scroll marker indicating a date of the selected thumbnail image.
  • 9. The method of claim 8, wherein, in response to receiving an input for a different point in time, the method includes: adjusting a location of the scroll marker on the timeline to correspond to the different point in time, and replacing the selected thumbnail image with a different thumbnail image, the different thumbnail image corresponding to the different point in time.
  • 10. A system of providing a thumbnail image that corresponds to a main image, the system comprising one or more computing devices, the one or more computing devices being configured to: select to display on a graphical interface the main image at a first geographical location, wherein the main image is associated with imagery of the first geographical location captured from a first orientation and at a first time; determine one or more thumbnail images based at least in part on the first geographical location, wherein the one or more thumbnail images are each associated with the imagery of the first geographical location captured from an orientation similar to the first orientation and at a time different than the first time; generate to display the main image and an indication that the one or more thumbnail images each associated with its respective time are available; provide to display, concurrently with the main image, a selected one of the one or more thumbnail images; receive input representing a change of the main image from a first zoom level to a second zoom level; determining a distance between a point of view of a main viewing area of the graphical interface to an object within the main image based on three dimensional geometry data associated with the main image; generate a corresponding change in zoom level of the selected thumbnail image to the second zoom level, wherein generating the corresponding change in zoom level of the selected thumbnail image to the second zoom level includes adjusting the zoom level of the selected thumbnail image based on the determined distance; and provide to display the selected thumbnail image at the second zoom level, so that the main image and the selected thumbnail image are presented concurrently at the second zoom level.
  • 11. The system of claim 10, wherein the one or more computing devices are further configured to generate a time window for display on the graphical interface, the time window including the selected thumbnail image therein.
  • 12. The system of claim 10, wherein the one or more computing devices are further configured to: receive input indicating a change of the main image from the first orientation to a second orientation; and generate a corresponding change in orientation of the selected thumbnail image to the second orientation.
  • 13. The system of claim 10, wherein the one or more computing devices are further configured to provide for display a timeline having a visual indicia of a given time period, the timeline indicating a quantity of available thumbnail images associated with different times during the given time period.
  • 14. The system of claim 13, wherein the timeline includes a scroll marker indicating a date of the selected thumbnail image, and in response to receiving an input for a different point in time, the one or more computing devices are further configured to: adjust a location of the scroll marker on the timeline to correspond to the different point in time, and replace the selected thumbnail image with a different thumbnail image, the different thumbnail image corresponding to the different point in time.
  • 15. A non-transitory computer-readable storage medium on which computer readable instructions of a program are stored, the instructions, when executed by one or more processors, cause the one or more processors to perform a method, the method comprising: selecting to display on a graphical interface the main image at a first geographical location, wherein the main image is associated with imagery of the first geographical location captured from a first orientation and at a first time; determining one or more thumbnail images based at least in part on the first geographical location, wherein the one or more thumbnail images are each associated with the imagery of the first geographical location captured from an orientation similar to the first orientation and at a time different than the first time; generating to display the main image and an indication that the one or more thumbnail images each associated with its respective time are available; providing to display concurrently with the main image, a selected one of the one or more thumbnail images; receiving input representing a change of the main image from a first zoom level to a second zoom level; determining a distance between a point of view of a main viewing area of the graphical interface to an object within the main image based on three dimensional geometry data associated with the main image; generating a corresponding change in zoom level of the selected thumbnail image to the second zoom level, wherein generating the corresponding change in zoom level of the selected thumbnail image to the second zoom level includes adjusting the zoom level of the selected thumbnail image based on the determined distance; and providing, to display, the selected thumbnail image at the second zoom level, so that the main image and the selected thumbnail image are presented concurrently at the second zoom level.
  • 16. The non-transitory computer-readable storage medium of claim 15, wherein the method further comprises: receiving input indicating a change of the main image from the first orientation to a second orientation; and generating a corresponding change in orientation of the selected thumbnail image to the second orientation.
  • 17. The non-transitory computer-readable storage medium of claim 15, wherein the method further comprises providing for display a timeline having visual indicia of a given time period, and the timeline indicates a quantity of available thumbnail images associated with different times during the given time period.
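Claims 10 and 15 recite adjusting the selected thumbnail's zoom level based on a distance determined from three-dimensional geometry data associated with the main image. As a rough, non-authoritative illustration of one way such an adjustment could be computed (the specification does not prescribe this formula, and identifiers such as Vec3, Viewpoint, DepthLookup, and syncThumbnailZoom are hypothetical), the TypeScript sketch below scales the thumbnail's field of view so that the object at the center of the main viewing area keeps a comparable apparent size in both views; the orientation synchronization of claims 12 and 16 would amount to copying the main view's heading and pitch to the thumbnail.

```typescript
// Illustrative names only; Vec3, Viewpoint, and DepthLookup are not taken from the patent text.
type Vec3 = [number, number, number];

interface Viewpoint {
  position: Vec3; // world-space location the imagery was captured from
}

// Returns the world-space point hit by the ray through the given screen pixel,
// backed by the three-dimensional geometry data associated with the main image.
type DepthLookup = (screenX: number, screenY: number) => Vec3 | null;

function dist(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

/**
 * After the main view zooms to `mainFovDeg`, choose a field of view for the
 * thumbnail so that the object at the center of the main viewing area spans
 * roughly the same angular size in both views, even though the thumbnail's
 * imagery may have been captured from a slightly different position.
 */
function syncThumbnailZoom(
  mainView: Viewpoint,
  thumbView: Viewpoint,
  mainFovDeg: number,
  lookupDepth: DepthLookup,
  center: { x: number; y: number },
): number {
  const hit = lookupDepth(center.x, center.y);
  if (!hit) return mainFovDeg; // no geometry available: mirror the zoom level directly

  const dMain = dist(mainView.position, hit);  // distance used to adjust the zoom
  const dThumb = dist(thumbView.position, hit);

  // Apparent half-width of the scene at the object: dMain * tan(fov / 2).
  // Pick the thumbnail fov that reproduces that width from the thumbnail's distance.
  const halfWidth = dMain * Math.tan((mainFovDeg * Math.PI) / 360);
  return (2 * Math.atan(halfWidth / dThumb) * 180) / Math.PI;
}
```

Falling back to the main view's field of view when no geometry is available keeps the two views loosely synchronized even for imagery that lacks depth data.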
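Claims 13, 14, and 17 describe a timeline whose scroll marker indicates the date of the selected thumbnail and which replaces that thumbnail when a different point in time is requested. The following sketch, with the same caveat that TimedThumbnail, TimelineState, markerPosition, and selectTime are assumed names rather than anything drawn from the specification, shows one minimal way to position the marker and to pick the replacement thumbnail nearest the requested time; the quantity indication of claims 13 and 17 would simply be the number of entries in thumbnails falling within the displayed period.

```typescript
// Hypothetical data model; field names are illustrative.
interface TimedThumbnail {
  capturedAt: Date; // time the imagery for this thumbnail was captured
  url: string;      // where the rendered thumbnail can be fetched from
}

interface TimelineState {
  thumbnails: TimedThumbnail[]; // available captures for the current location/orientation
  selectedIndex: number;        // index of the thumbnail currently shown in the time window
}

/** Fraction along the timeline (0..1) at which the scroll marker should sit. */
function markerPosition(state: TimelineState): number {
  const times = state.thumbnails.map(t => t.capturedAt.getTime());
  const min = Math.min(...times);
  const max = Math.max(...times);
  return max === min ? 0 : (times[state.selectedIndex] - min) / (max - min);
}

/** Handle a request for a different point in time on the timeline. */
function selectTime(state: TimelineState, requested: Date): TimelineState {
  // Replace the selected thumbnail with the available capture closest to the requested time.
  let best = 0;
  let bestDelta = Number.POSITIVE_INFINITY;
  state.thumbnails.forEach((t, i) => {
    const delta = Math.abs(t.capturedAt.getTime() - requested.getTime());
    if (delta < bestDelta) {
      bestDelta = delta;
      best = i;
    }
  });
  return { ...state, selectedIndex: best };
}
```

After selectTime runs, re-rendering the marker with markerPosition moves it to the date of the newly selected thumbnail, which matches the behavior claim 14 recites.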
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 14/258,709, filed Apr. 22, 2014, the entire disclosure of which is incorporated herein by reference.

US Referenced Citations (438)
Number Name Date Kind
5710875 Harashima et al. Jan 1998 A
5754174 Carpenter et al. May 1998 A
D399501 Arora et al. Oct 1998 S
5832173 Terasawa et al. Nov 1998 A
D406123 Hodgson Feb 1999 S
5912165 Cabib et al. Jun 1999 A
D418495 Brockel et al. Jan 2000 S
D424543 Hodgson May 2000 S
6075595 Malinen Jun 2000 A
6177932 Galdes et al. Jan 2001 B1
6373568 Miller et al. Apr 2002 B1
6448956 Berman Sep 2002 B1
D464360 Grundel et al. Oct 2002 S
6504571 Narayanaswami et al. Jan 2003 B1
D471225 Gray Mar 2003 S
6769131 Tanaka et al. Jul 2004 B1
6895126 Di Bernardo et al. May 2005 B2
7009699 Wolleschensky et al. Mar 2006 B2
D523442 Hiramatsu Jun 2006 S
D525632 Jost et al. Jul 2006 S
D536340 Jost et al. Feb 2007 S
7225207 Ohazama et al. May 2007 B1
D550236 Armendariz Sep 2007 S
D555664 Nagata et al. Nov 2007 S
D557272 Glaser et al. Dec 2007 S
D558220 Maitlen et al. Dec 2007 S
D561191 Haning et al. Feb 2008 S
D561193 O'Mullan et al. Feb 2008 S
D563975 Vigesaa Mar 2008 S
D566716 Rasmussen et al. Apr 2008 S
7353114 Rohlf et al. Apr 2008 B1
D571819 Scott et al. Jun 2008 S
D572719 Beamish et al. Jul 2008 S
7398156 Funato Jul 2008 B2
D574388 Armendariz et al. Aug 2008 S
D578544 Nathan et al. Oct 2008 S
D593578 Ball et al. Jun 2009 S
D595304 Rasmussen et al. Jun 2009 S
7561169 Carroll Jul 2009 B2
D599812 Hirsch Sep 2009 S
D601165 Truelove et al. Sep 2009 S
D601166 Chen et al. Sep 2009 S
D602495 Um et al. Oct 2009 S
D605657 Danton Dec 2009 S
D606551 Willis Dec 2009 S
7720359 Koyanagi et al. May 2010 B2
RE41428 Mayer et al. Jul 2010 E
D619614 O'Mullan et al. Jul 2010 S
D620950 Rasmussen Aug 2010 S
7840032 Ofek Nov 2010 B2
7912634 Reed et al. Mar 2011 B2
7921108 Wang et al. Apr 2011 B2
7971155 Yoon Jun 2011 B1
D642195 Marks et al. Jul 2011 S
7983489 Aguera y Arcas et al. Jul 2011 B2
D645052 Rasmussen Sep 2011 S
D645470 Matas Sep 2011 S
8064633 Noda et al. Nov 2011 B2
8077918 Kirmse et al. Dec 2011 B2
8085990 Ofek Dec 2011 B2
D652053 Impas et al. Jan 2012 S
8090714 Yang et al. Jan 2012 B2
8103081 Gossage et al. Jan 2012 B2
8145703 Frishert et al. Mar 2012 B2
D656950 Shallcross et al. Apr 2012 S
8155391 Tang et al. Apr 2012 B1
D661702 Asai et al. Jun 2012 S
D661704 Rasmussen Jun 2012 S
8213749 Di Bernardo et al. Jul 2012 B2
D664983 Moreau et al. Aug 2012 S
D665409 Gupta et al. Aug 2012 S
D667432 Phelan Sep 2012 S
D667834 Coffman et al. Sep 2012 S
D667840 Anzures Sep 2012 S
8274524 Cornell et al. Sep 2012 B1
8302007 Barcay et al. Oct 2012 B2
8339394 Lininger Dec 2012 B1
8352465 Jing et al. Jan 2013 B1
D682842 Kurata et al. May 2013 S
D682876 MacNeil May 2013 S
D683356 Rally May 2013 S
8447136 Ofek et al. May 2013 B2
D684161 Truelove et al. Jun 2013 S
D684167 Yang et al. Jun 2013 S
8510041 Anguelov et al. Aug 2013 B1
D689072 Park et al. Sep 2013 S
D689079 Edwards et al. Sep 2013 S
D689082 Stiffler Sep 2013 S
D689085 Pasceri et al. Sep 2013 S
D689089 Impas et al. Sep 2013 S
8543323 Gold et al. Sep 2013 B1
D690737 Wen et al. Oct 2013 S
D692450 Convay et al. Oct 2013 S
D696279 Bortman et al. Dec 2013 S
D696285 Hally Dec 2013 S
8610741 Szeliski et al. Dec 2013 B2
8649663 Saitou et al. Feb 2014 B2
D701879 Foit et al. Apr 2014 S
D701882 Soegiono et al. Apr 2014 S
8711174 Fialho et al. Apr 2014 B2
D706822 Wang Jun 2014 S
D708638 Manzari et al. Jul 2014 S
8791983 Shikata Jul 2014 B2
8817067 Fan et al. Aug 2014 B1
D712920 Sloo et al. Sep 2014 S
D713853 Jaini et al. Sep 2014 S
D715316 Hemeon et al. Oct 2014 S
D715820 Rebstock Oct 2014 S
D715836 Huang et al. Oct 2014 S
8872847 Nash et al. Oct 2014 B2
D716827 Dowd Nov 2014 S
8893026 Lindemann et al. Nov 2014 B2
D719186 Kim Dec 2014 S
8928691 Maurer et al. Jan 2015 B2
8930141 Wither et al. Jan 2015 B2
8965696 van Os et al. Feb 2015 B2
D726204 Prajapati et al. Apr 2015 S
9020745 Johnston et al. Apr 2015 B2
D728616 Gomez et al. May 2015 S
D730378 Xiong et al. May 2015 S
D730379 Xiong et al. May 2015 S
9036000 Ogale et al. May 2015 B1
D731520 Xiong et al. Jun 2015 S
D731524 Brinda et al. Jun 2015 S
D731545 Lim et al. Jun 2015 S
D732062 Kwon Jun 2015 S
D732567 Moon et al. Jun 2015 S
9047692 Seitz et al. Jun 2015 B1
D733740 Lee et al. Jul 2015 S
D733741 Lee et al. Jul 2015 S
D734356 Xiong et al. Jul 2015 S
D735733 Hontz, Jr. Aug 2015 S
9106872 Tsurumi Aug 2015 B2
D738900 Drozd et al. Sep 2015 S
D738901 Amin Sep 2015 S
D738914 Torres et al. Sep 2015 S
9158414 Gluzberg et al. Oct 2015 B1
9171527 Siegel Oct 2015 B2
D743984 Salituri Nov 2015 S
9189839 Sheridan et al. Nov 2015 B1
D745020 Mariet et al. Dec 2015 S
D745038 Abbas Dec 2015 S
D746313 Walmsley et al. Dec 2015 S
D746319 Zhang et al. Dec 2015 S
9215448 Barnes Dec 2015 B2
9218682 Arrasvuori Dec 2015 B2
9218789 Lininger et al. Dec 2015 B1
9225947 Lee et al. Dec 2015 B2
D746856 Jiang et al. Jan 2016 S
9244940 Donsbach et al. Jan 2016 B1
9256961 Lynch Feb 2016 B2
9256983 Lynch Feb 2016 B2
D754720 Yang Apr 2016 S
9311396 Meadow et al. Apr 2016 B2
9317188 Gregotski et al. Apr 2016 B2
9325946 Tanaka et al. Apr 2016 B2
D757784 Lee et al. May 2016 S
9330501 Sahoo et al. May 2016 B2
9330504 Ege May 2016 B2
D760272 Li Jun 2016 S
9363463 Taneichi et al. Jun 2016 B2
9377320 Sheridan et al. Jun 2016 B2
D762238 Day et al. Jul 2016 S
9390519 Lynch Jul 2016 B2
D762702 Hoang et al. Aug 2016 S
D763294 Amin et al. Aug 2016 S
9411419 Kasahara et al. Aug 2016 B2
9418472 Dillard et al. Aug 2016 B2
9424536 Bear et al. Aug 2016 B2
D766263 Rice et al. Sep 2016 S
D767589 Ye et al. Sep 2016 S
9442956 Konig et al. Sep 2016 B2
9454848 Mattila Sep 2016 B2
D768178 Valade et al. Oct 2016 S
D768685 Lee et al. Oct 2016 S
D769279 Woo et al. Oct 2016 S
D769909 Roberts et al. Oct 2016 S
D769931 McMillan et al. Oct 2016 S
9471834 Filip Oct 2016 B1
9477368 Filip et al. Oct 2016 B1
9529803 Kisielius et al. Dec 2016 B2
9532008 Ohnishi Dec 2016 B2
9535587 Dorfman et al. Jan 2017 B2
9551579 Sheridan et al. Jan 2017 B1
9554060 Filip Jan 2017 B2
D780210 Kisielius et al. Feb 2017 S
D780211 Kisielius et al. Feb 2017 S
9569498 Sheridan et al. Feb 2017 B2
D780777 Kisielius et al. Mar 2017 S
D780794 Kisielius et al. Mar 2017 S
D780795 Kisielius et al. Mar 2017 S
D780796 Kisielius et al. Mar 2017 S
D780797 Kisielius et al. Mar 2017 S
D781317 Kisielius et al. Mar 2017 S
D781318 Kisielius et al. Mar 2017 S
D781337 Kisielius et al. Mar 2017 S
9601087 Suzuki et al. Mar 2017 B2
D784395 Laing et al. Apr 2017 S
9641755 Lynch May 2017 B2
9791290 Kraus et al. Oct 2017 B2
9805064 Kojima et al. Oct 2017 B2
9813621 Anderson et al. Nov 2017 B2
9841291 Sheridan et al. Dec 2017 B2
9864481 Misawa Jan 2018 B2
9886794 van Os et al. Feb 2018 B2
9898857 Dillard et al. Feb 2018 B2
9924156 Barnes Mar 2018 B2
9934222 Leong et al. Apr 2018 B2
9972121 Li et al. May 2018 B2
10030990 Lynch Jul 2018 B2
D829737 Kisielius et al. Oct 2018 S
D830399 Kisielius et al. Oct 2018 S
D830407 Kisielius et al. Oct 2018 S
10094675 Hajj et al. Oct 2018 B2
10127722 Shakib et al. Nov 2018 B2
10139985 Mildrew et al. Nov 2018 B2
D835147 Kisielius et al. Dec 2018 S
10163263 Zhu et al. Dec 2018 B2
10176633 Moore et al. Jan 2019 B2
10247568 Fillhardt et al. Apr 2019 B2
20010014185 Chitradon et al. Aug 2001 A1
20010017668 Wilcock et al. Aug 2001 A1
20020047895 Bernardo et al. Apr 2002 A1
20020075322 Rosenzweig et al. Jun 2002 A1
20020122073 Abrams et al. Sep 2002 A1
20020171668 Samra Nov 2002 A1
20030025803 Nakamura et al. Feb 2003 A1
20030030636 Yamaoka Feb 2003 A1
20030117611 Chon et al. Jun 2003 A1
20030142523 Biacs Jul 2003 A1
20040001109 Blancett et al. Jan 2004 A1
20040125133 Pea et al. Jul 2004 A1
20040125148 Pea et al. Jul 2004 A1
20040196282 Oh Oct 2004 A1
20040264919 Taylor et al. Dec 2004 A1
20050063608 Clarke et al. Mar 2005 A1
20050216186 Dorfman et al. Sep 2005 A1
20050232606 Hosoda et al. Oct 2005 A1
20060041591 Rhoads Feb 2006 A1
20060120624 Jojic et al. Jun 2006 A1
20060181546 Jung et al. Aug 2006 A1
20060203335 Martin et al. Sep 2006 A1
20060208926 Poor et al. Sep 2006 A1
20060238379 Kimchi et al. Oct 2006 A1
20060251338 Gokturk et al. Nov 2006 A1
20060266942 Ikeda Nov 2006 A1
20060271287 Gold et al. Nov 2006 A1
20070024722 Eura et al. Feb 2007 A1
20070081081 Cheng Apr 2007 A1
20070096945 Rasmussen et al. May 2007 A1
20070103461 Suzuno et al. May 2007 A1
20070110338 Snavely May 2007 A1
20070113255 Kurosawa May 2007 A1
20070136259 Dorfman et al. Jun 2007 A1
20070150188 Rosenberg Jun 2007 A1
20070216709 Kojima et al. Sep 2007 A1
20070250477 Bailly Oct 2007 A1
20070279438 Takakura et al. Dec 2007 A1
20080002962 Ito et al. Jan 2008 A1
20080016472 Rohlf et al. Jan 2008 A1
20080043020 Snow et al. Feb 2008 A1
20080060004 Nelson et al. Mar 2008 A1
20080066000 Ofek Mar 2008 A1
20080077597 Butler Mar 2008 A1
20080089593 Ohwa Apr 2008 A1
20080091635 James et al. Apr 2008 A1
20080158366 Jung et al. Jul 2008 A1
20080174593 Ham et al. Jul 2008 A1
20080187181 Meadow et al. Aug 2008 A1
20080266142 Sula et al. Oct 2008 A1
20080285886 Allen Nov 2008 A1
20080291201 Lafon Nov 2008 A1
20080291217 Vincent Nov 2008 A1
20080292213 Chau Nov 2008 A1
20090046057 Umezawa Feb 2009 A1
20090063424 Iwamura et al. Mar 2009 A1
20090064014 Nelson et al. Mar 2009 A1
20090135178 Aihara et al. May 2009 A1
20090179895 Zhu et al. Jul 2009 A1
20090202102 Miranda et al. Aug 2009 A1
20090210793 Yee Aug 2009 A1
20090213112 Zhu et al. Aug 2009 A1
20090240431 Chau et al. Sep 2009 A1
20090279794 Brucher et al. Nov 2009 A1
20090284551 Stanton Nov 2009 A1
20090290812 Naaman et al. Nov 2009 A1
20090303251 Balogh et al. Dec 2009 A1
20100064239 Crawford et al. Mar 2010 A1
20100115455 Kim May 2010 A1
20100122208 Herr et al. May 2010 A1
20100149212 Fukuya et al. Jun 2010 A1
20100184451 Wang et al. Jul 2010 A1
20100188503 Tsai et al. Jul 2010 A1
20100215250 Zhu Aug 2010 A1
20100215254 Prokhorov Aug 2010 A1
20100250581 Chau Sep 2010 A1
20100259641 Fujimoto Oct 2010 A1
20100309512 Onoda Dec 2010 A1
20100316357 Saitou et al. Dec 2010 A1
20100325589 Ofek et al. Dec 2010 A1
20110007094 Nash et al. Jan 2011 A1
20110007130 Park et al. Jan 2011 A1
20110007134 Knize et al. Jan 2011 A1
20110010668 Feldstein et al. Jan 2011 A1
20110016398 Hanes Jan 2011 A1
20110050706 Cherna et al. Mar 2011 A1
20110055749 Wallace et al. Mar 2011 A1
20110074707 Watanabe et al. Mar 2011 A1
20110074811 Hanson et al. Mar 2011 A1
20110085778 Iwase et al. Apr 2011 A1
20110123120 Quack May 2011 A1
20110173565 Ofek et al. Jul 2011 A1
20110211040 Lindemann et al. Sep 2011 A1
20110211764 Krupka et al. Sep 2011 A1
20110234832 Ezoe et al. Sep 2011 A1
20110249166 Moriyama Oct 2011 A1
20110254976 Garten Oct 2011 A1
20110292076 Wither Dec 2011 A1
20110302527 Chen et al. Dec 2011 A1
20110316884 Giambalvo Dec 2011 A1
20120011464 Hayashi et al. Jan 2012 A1
20120033032 Kankainen Feb 2012 A1
20120062695 Sakaki Mar 2012 A1
20120075410 Matsumoto et al. Mar 2012 A1
20120092447 Jeong et al. Apr 2012 A1
20120098854 Ohnishi Apr 2012 A1
20120127066 Iida et al. May 2012 A1
20120169769 Minamino et al. Jul 2012 A1
20120188247 Cheung et al. Jul 2012 A1
20120191339 Lee et al. Jul 2012 A1
20120194547 Johnson et al. Aug 2012 A1
20120242783 Seo et al. Sep 2012 A1
20120274625 Lynch Nov 2012 A1
20120281119 Ohba et al. Nov 2012 A1
20120293607 Bhogal et al. Nov 2012 A1
20120299920 Coombe et al. Nov 2012 A1
20120300019 Yang et al. Nov 2012 A1
20120301039 Maunder et al. Nov 2012 A1
20120316782 Sartipi et al. Dec 2012 A1
20130035853 Stout et al. Feb 2013 A1
20130044108 Tanaka et al. Feb 2013 A1
20130076784 Maurer et al. Mar 2013 A1
20130100114 Lynch Apr 2013 A1
20130103303 Lynch Apr 2013 A1
20130106990 Williams et al. May 2013 A1
20130162665 Lynch Jun 2013 A1
20130169668 Lynch Jul 2013 A1
20130169685 Lynch Jul 2013 A1
20130182108 Meadow et al. Jul 2013 A1
20130201216 Nakamura et al. Aug 2013 A1
20130232168 McGregor et al. Sep 2013 A1
20130239057 Ubillos et al. Sep 2013 A1
20130294650 Fukumiya et al. Nov 2013 A1
20130321461 Filip Dec 2013 A1
20130332890 Ramic et al. Dec 2013 A1
20140002439 Lynch Jan 2014 A1
20140002440 Lynch Jan 2014 A1
20140016193 Terashima et al. Jan 2014 A1
20140019301 Meadow et al. Jan 2014 A1
20140019302 Meadow et al. Jan 2014 A1
20140023355 Terashima Jan 2014 A1
20140078177 Yamaji et al. Mar 2014 A1
20140078263 Kim Mar 2014 A1
20140079322 Yamaji et al. Mar 2014 A1
20140118405 Chand May 2014 A1
20140164988 Barnett et al. Jun 2014 A1
20140181259 You Jun 2014 A1
20140210940 Barnes Jul 2014 A1
20140240455 Subbian et al. Aug 2014 A1
20140253542 Jung et al. Sep 2014 A1
20140297575 Rapoport et al. Oct 2014 A1
20140362108 Aguera-Arcas Dec 2014 A1
20140376823 Cui et al. Dec 2014 A1
20150077521 Borchert et al. Mar 2015 A1
20150085068 Becker et al. Mar 2015 A1
20150109328 Gallup et al. Apr 2015 A1
20150109513 Nayar et al. Apr 2015 A1
20150113474 Gallup et al. Apr 2015 A1
20150130848 Sakaniwa et al. May 2015 A1
20150145995 Shahraray et al. May 2015 A1
20150154736 Seitz et al. Jun 2015 A1
20150161807 Pack Jun 2015 A1
20150170615 Siegel Jun 2015 A1
20150185018 Hesch et al. Jul 2015 A1
20150185873 Ofstad et al. Jul 2015 A1
20150185991 Ho et al. Jul 2015 A1
20150235398 Kim et al. Aug 2015 A1
20150248197 Peters et al. Sep 2015 A1
20150254694 Filip Sep 2015 A1
20150262391 Chau Sep 2015 A1
20150278878 Chau Oct 2015 A1
20150294153 Naithani et al. Oct 2015 A1
20150301695 Leong et al. Oct 2015 A1
20150302633 Li et al. Oct 2015 A1
20150304588 Jung et al. Oct 2015 A1
20150310596 Sheridan et al. Oct 2015 A1
20150371389 Siegel et al. Dec 2015 A1
20160005437 Barry et al. Jan 2016 A1
20160014190 Sheory Jan 2016 A1
20160019223 Kisielius et al. Jan 2016 A1
20160019713 Dillard et al. Jan 2016 A1
20160027177 Hutchison Jan 2016 A1
20160042252 Sawhney et al. Feb 2016 A1
20160048934 Gross Feb 2016 A1
20160063516 Terrazas et al. Mar 2016 A1
20160063705 Xu et al. Mar 2016 A1
20160081620 Narayanan et al. Mar 2016 A1
20160098612 Viviani Apr 2016 A1
20160140744 Strelow et al. May 2016 A1
20160156840 Arai et al. Jun 2016 A1
20160179760 Strong et al. Jun 2016 A1
20160209648 Haddick et al. Jul 2016 A1
20160231134 Nguyen Kim et al. Aug 2016 A1
20160321783 Citrin et al. Nov 2016 A1
20160349066 Chung et al. Dec 2016 A1
20160379094 Mittal et al. Dec 2016 A1
20170109612 Mittal et al. Apr 2017 A1
20170116477 Chen et al. Apr 2017 A1
20170132224 Yang May 2017 A1
20170142766 Kim May 2017 A1
20170178404 Dillard et al. Jun 2017 A1
20170256040 Grauer Sep 2017 A1
20170287221 Ghaly et al. Oct 2017 A1
20170300511 Brewington et al. Oct 2017 A1
20170308752 Takeuchi et al. Oct 2017 A1
20170356755 Strawn et al. Dec 2017 A1
20180018754 Leng et al. Jan 2018 A1
20180035074 Barnes, Jr. Feb 2018 A1
20180053293 Ramalingam et al. Feb 2018 A1
20180061126 Huang et al. Mar 2018 A1
20180143023 Bjorke et al. May 2018 A1
20180143756 Mildrew et al. May 2018 A1
20180350126 Oh Dec 2018 A1
20190005719 Fleischman et al. Jan 2019 A1
20190026793 Rollon Jan 2019 A1
20190043259 Wang et al. Feb 2019 A1
20190051029 Schpok Feb 2019 A1
20190087067 Hoyden et al. Mar 2019 A1
Foreign Referenced Citations (2)
Number Date Country
102661748 Sep 2012 CN
1703426 Sep 2006 EP
Non-Patent Literature Citations (46)
Entry
Kim et al., “A unified visualization framework for spatial and temporal analysis in 4D GIS”, Proceedings of 2003 IEEE International Geoscience and Remote Sensing Symposium, v. 6, pp. 3715-3717, 2003. (Year: 2003).
Luttermann et al., “VRML History: Storing and Browsing Temporal 3D-Worlds”, VRML '99, pp. 153-181, 1999. (Year: 1999).
Snavely et al., “Photo Tourism: Exploring Photo Collections in 3D”, ACM Transactions on Graphics, v. 25, n. 3, pp. 835-846, Jul. 2006. (Year: 2006).
Examination Report issued in European Patent Application 15771739.8, dated Jan. 23, 2019, 5 pages.
Rejection Decision for Chinese Patent Application No. 201580020984.2 dated May 28, 2019.
Second Office Action dated Jan. 8, 2019, for Chinese Patent Application No. 201580020984.2.
Wu, et al, “Automatic Alignment of Large-scale Aerial Rasters to Road-maps” Proceedings of the 15th international Symposium on Advances in Geographic information Systems, 2007.
Barclay, et al., “Microsoft TerraServer: A Spatial Data Warehouse”, 2005.
Bauman, “Raster Databases”, 2007.
Ghemawat, et al. “The Google File System”, 2003.
U.S. Appl. No. 11/415,960, Zelirilca et al., “Coverage Mask Generation for Large Images”, filed May 2, 2006.
U.S. Appl. No. 11/437,553, “Large-Scale Image Processing Using Mass Parallelization Techniques”, filed May 19, 2006.
U.S. Appl. No. 11/473,461, Kirmse et al., “Hierarchical Spatial Data Structure and 3D Index Data Versioning for Generating Packet Data”, filed Jun. 22, 2006.
Scranton et al., “Sky in Google Earth: The Next Frontier in Astronomical Data Discovery and Visualization”, http://earth.google.com/sky/, Sep. 10, 2007.
International Search Report, PCT/US09/04817, dated Oct. 8, 2009.
http://ieeexplore.ieee.org/search retrieved from the Internet on Sep. 7, 2010.
Potmesil M., “Maps alive: Viewing geospatial information on the WWW”, Computer Networks and ISDN Systems, North Holland Publishing, Amsterdam, NL, vol. 29, No. 8-13, Sep. 1, 1997 (Sep. 1, 1997), pp. 1327-1342, XP004095328.
Nan L. et al., “A spatial-temporal system for dynamic cadastral management,” Journal of Environmental Management, Academic Press, London, GB, vol. 78, No. 4, Mar. 1, 2006 (Mar. 1, 2006), pp. 373-381, retrieved on Mar. 1, 2006.
Rocchini D. et al., “Landscape change and the dynamics of open formations in a natural reserve,” Landscape and urban Planning, Elsevier, vol. 77, No. 1-2, Jun. 15, 2006 (Jun. 15, 2006), pp. 167-177, retrieved on Jun. 15, 2006.
The extended European search report, Application No. EP 09 81 0353.4, PCT/US2009004817, dated Dec. 5, 2011.
Gail Langran, Nicholas R. Chrisman: “A Framework for temporal Geographic Information”, University of Washington Cartographica, vol. 25, No. 3, Dec. 31, 1988 (Dec. 31, 1988 ), pp. 1-14, Retrieved from the Internet: URL:http://www.unigis.ac.at/fernstudien/unigis_professional/lehrgangs_cd_l..../module//modul2/Temporal%20Geographic%20Information.pdf.
European Examination Report for Application No. 09810353.4 dated Oct. 18, 2012.
Vlahakis et al., “Archeoguide: An Augmented Reality Guide for Archaeological Sites”, IEEE Computer Graphics and Applications, Sep./Oct. 2002, pp. 52-60.
Haval, “Three-Dimensional Documentation of Complex Heritage Structures”, Interpretive Environments, Apr.-Jun. 2000, pp. 52-55.
Magnenat-Thalmann et al., “Real-Time Animation of Ancient Roman Sites”, 2006, pp. 19-30.
Conti et al., “DentroTrento—A virtual Walk Across history”, 2006, pp. 318-321.
European Office Action for Application No. 09810353 dated Oct. 9, 2013.
U.S. Appl. No. 13/854,314, filed Apr. 1, 2013.
U.S. Appl. No. 13/870,419, filed Apr. 25, 2013.
Bhagavathy et al., “Modeling and Detection of Geospatial Objects Using Texture Motifs” 3706 IEEE Transactions on Geoscience and Remote Sensing. vol. 44, No. 12, Dec. 2006.
Blackcoffee Design, 1000 Icons Symbols and Pictograms: Visual Communication for Every Language, Gloucester, MA: Rockport Publishers, 2006, 29, 49, 65, 101.
Iconfinder, “Expand Icons”, [unknown date], Iconfinder [online], [site visited Oct. 19, 2015]. Available from internet: <URL:https://www.iconfinder.com/search/?q=expand>.
Frutiger, Adrian, Signs and Symbols: their design and meaning, New York: Watson-Guptill Publications, 1998, 337, 350.
Dreyfuss, Henry, Symbol Sourcebook, New York: Van Nostrand Reinhold Co., 1972, 28.
Taylor, Frank, New Google Maps Moon Update, Sep. 13, 2007, Google Earth Blog [online], [site visited Oct. 15, 2015]. Available from Internet: <URL: https://www.gearthblog.com/blog/archives/2007/09/new_goolge_maps_moon_update.html>.
Abair, Randy, Google Maps Changes, Sep. 2013 Online Marketing Year in Review, Jan. 2, 2014, Vermont DesignWorks Blog [online], [site visited Oct. 15, 2015]. Available from Internet: <URL: http://www.vtdesignworks.com/blog/seo-2013>.
GordyHanner, Why can't I watch Videos in full screen on Youtube?, Dec. 6, 2010, Youtube [online], [site visited Oct. 15, 2015]. Available from Internet: <URL:https://www.youtube.com/watch?v=8n7nn-3CI2A>.
Clohessy, James W. and Patrick J Cerra, How do you warn 19 million people at the drop of a hat?, ArcNews, Fall 2011, [online], [site visited Oct. 15, 2015]. Available from Internet: <URL:https://www.esri.com/news/arcnews/fall11articles/how-do-you-warn-19-million-people-at-the-drop-of-a-hat.html>.
Icons, Google Design Library, updated, Google Inc. [online], [site visited Oct. 19, 2015]. Available from Internet: <https://www.google.com/design/icons/>.
Thompson, Helen, With Google Maps, Apr. 23, 2014, Smithsonianmag.com [online], [site visited Jul. 19, 2016]. Available from Internet: <http://www.smithsonianmag.com/innovation/google-maps-unveils-time-travel-function-street-view-180951184/?no-ist>.
International Preliminary Report on Patentability for PCT Application No. PCT/US2015/025551, dated Nov. 3, 2016.
Wikipedia, Google Street View, Sep. 3, 2014, wikipedia.com [online], [site visited Nov. 4, 2016]. Available from Internet: <https://en.wikipedia.org/wiki/Google_Street_View>.
Wikipedia, Google Maps Street View redesign, Jun. 10, 2014, wikipedia.com [online], [site visited Nov. 7, 2016]. Available from Internet: <https://en.wikipedia.org/wiki/Google_Maps>.
Snavely et al., “Photo Tourism: Exploring Photo Collections in 3D”, 2006, Particularly see: FIGS. 1 (c), 5, Section 5.1, 12 pages.
First Office Action dated Mar. 20, 2018, for Chinese Patent Application No. 201580020984.2.
Examination Report for European Patent Application No. 15771739.8, dated May 8, 2018. 10 pages.
Related Publications (1)
Number Date Country
20180181568 A1 Jun 2018 US
Continuations (1)
Number Date Country
Parent 14258709 Apr 2014 US
Child 15900924 US