Various systems allow users to view images in sequences, such as in time or space. In some examples, these systems can provide a navigation experience in a remote or interesting location. Some systems allow users to feel as if they are rotating within a virtual world by clicking on the edges of a displayed portion of a panorama and having the panorama appear to “move” in the direction of the clicked edge.
One aspect of the disclosure provides a computer-implemented method of navigating multidimensional spaces. The method includes providing, by one or more processors, a first image of a multidimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multidimensional space extends into the first image such that a second image is connected to the first image along a direction of the overlay line; receiving, by the one or more processors, user input indicating a swipe across a portion of the display, the swipe being defined by a starting pixel and an ending pixel of the display; determining, by the one or more processors, based on the starting pixel and the ending pixel, that the swipe occurred at least partially within an interaction zone of the first image, the interaction zone defining an area around the overlay line at which the user can interact with the multidimensional space; when the swipe occurred at least partially within the interaction zone, determining, by the one or more processors, that the swipe indicates a request to display an image different from the first image; when the swipe indicates a request to display the image different from the first image, selecting, by the one or more processors, the second image based on the starting pixel of the swipe, the ending pixel of the swipe, and a connection graph connecting the first image and the second image along the direction of the overlay line; and providing, by the one or more processors, the second image for display on the display in order to provide a feeling of movement in the multidimensional space.
In one example, the method also includes providing a transition image for display between the first image and the second image, the transition image being provided as a thumbnail image with less detail than the first image and the second image. In another example, the method also includes providing instructions to fade the overlay line out after a threshold period of time has passed without any user action with the overlay line. In this example, after fading the overlay line out, the method includes receiving second user input on the display and providing instructions to redisplay the overlay line in response to the second user input. In another example, the method also includes determining a direction and magnitude of the swipe based on the starting pixel of the swipe and the ending pixel of the swipe, and selecting the second image is further based on the direction and magnitude.
In another example, the method also includes providing the second image with a second overlay line extending across a portion of the second image and indicating a direction in which the multidimensional space extends into the second image such that a third image is connected to the second image along a direction of the second overlay line in the connection graph. In this example, the method includes receiving second user input indicating a second swipe, determining that the second swipe is within a threshold angle of perpendicular to the direction in which the multidimensional space extends into the second image, and when the second swipe is within the threshold angle of perpendicular to the direction in which the multidimensional space extends into the second image, panning across the multidimensional space of the second image. Alternatively, the method also includes receiving second user input indicating a second swipe, determining that the second swipe is within a threshold angle of perpendicular to the direction in which the multidimensional space extends into the second image, and when the second swipe is within the threshold angle of perpendicular to the direction in which the multidimensional space extends into the second image, changing the orientation within the second image. In another alternative, the method also includes receiving second user input indicating a second swipe, determining that the second swipe is within a threshold angle of perpendicular to the direction in which the multidimensional space extends into the second image, and when the second swipe is within the threshold angle of perpendicular to the direction in which the multidimensional space extends into the second image, switching from the second image to a third image located on a second connection graph adjacent to the connection graph, the second image and the third image having no direct connection in the connection graph. In this example, the method also includes providing, for display with the second image, a third overlay line, the third overlay line representing a second navigation path proximate to a current view of the second image, the third overlay line being provided such that the third overlay line and the second overlay line cross over one another when displayed with the second image. In addition, the method includes receiving second user input along the third overlay line indicating a request to transition from an image along the second overlay line to an image along the third overlay line and, in response to the second user input, providing a third image for display, the third image being arranged along the third overlay line in the connection graph. Further, the method includes selecting a set of images for display in series as a transition between the second image and the third image based on connections between images in the connection graph and providing the set of images for display on the display. In addition, prior to providing the set of images, the method also includes filtering the set of images to remove at least one image based on a connection between two images of the set of images in a second connection graph different from the connection graph such that the filtered set of images is provided for display as the transition between the second image and the third image.
Another aspect of the disclosure provides a system. The system includes one or more computing devices, each having one or more processors. The one or more computing devices are configured to provide a first image of a multidimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multidimensional space extends into the first image such that a second image is connected to the first image along a direction of the overlay line; receive user input indicating a swipe across a portion of the display, the swipe being defined by a starting pixel and an ending pixel of the display; determine, based on the starting pixel and the ending pixel, that the swipe occurred at least partially within an interaction zone of the first image, the interaction zone defining an area around the overlay line at which the user can interact with the multidimensional space; when the swipe occurred at least partially within the interaction zone, determine that the swipe indicates a request to display an image different from the first image; when the swipe indicates a request to display the image different from the first image, select the second image based on the starting pixel of the swipe, the ending pixel of the swipe, and a connection graph connecting the first image and the second image along the direction of the overlay line; and provide the second image for display on the display in order to provide a feeling of movement in the multidimensional space.
In one example, the one or more computing devices are further configured to provide a transition image for display between the first image and the second image, the transition image being provided as a thumbnail image with less detail than the first image and the second image. In another example, the one or more computing devices are further configured to provide instructions to fade the overlay line out after a threshold period of time has passed without any user action with the overlay line. In this example, the one or more computing devices are further configured to, after fading the overlay line out, receive second user input on the display and provide instructions to redisplay the overlay line in response to the second user input. In another example, the one or more computing devices are further configured to determine a direction and magnitude of the swipe based on the starting pixel of the swipe and the ending pixel of the swipe and to select the second image further based on the direction and magnitude. In another example, the one or more computing devices are further configured to provide the second image with a second overlay line extending across a portion of the second image and indicating a direction in which the multidimensional space extends into the second image such that a third image is connected to the second image along a direction of the second overlay line in the connection graph.
A further aspect of the disclosure provides a non-transitory, computer-readable storage device on which computer readable instructions of a program are stored. The instructions, when executed by one or more processors, cause the one or more processors to perform a method. The method includes providing a first image of a multidimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multidimensional space extends into the first image such that a second image is connected to the first image along a direction of the overlay line; receiving user input indicating a swipe across a portion of the display, the swipe being defined by a starting pixel and an ending pixel of the display; determining, based on the starting pixel and the ending pixel, that the swipe occurred at least partially within an interaction zone of the first image, the interaction zone defining an area around the overlay line at which the user can interact with the multidimensional space; when the swipe occurred at least partially within the interaction zone, determining that the swipe indicates a request to display an image different from the first image; when the swipe indicates a request to display the image different from the first image, selecting the second image based on the starting pixel of the swipe, the ending pixel of the swipe, and a connection graph connecting the first image and the second image along the direction of the overlay line; and providing the second image for display on the display in order to provide a feeling of movement in the multidimensional space.
Overview
The technology relates to an interface for enabling a user to navigate within a multidimensional environment in a first or third person view. In some examples, the environment may include a three dimensional model rendered by mapping images to the model, or a series of geolocated images (for example, images associated with orientation and location information) with information identifying the two or three dimensional relationships of these images with one another.
To provide for “realistic” motion in multidimensional space, the interface may allow continuous motion, intuitive turning, looking around the scene, and moving forwards and backwards. For example, a reference line may be displayed to indicate to a user a direction in which the user may “traverse” the multidimensional space using touch and/or motion controls. Based on swipes in different directions relative to the line, the interface may readily recognize whether, and in which direction, the user is trying to move, as compared to when the user is simply attempting to change the orientation and look around.
In order to provide the interface, a plurality of geolocated images must be available. In addition to being associated with geolocation information, the images may be connected to one another in one or more image graphs. The graphs may be generated using various techniques, including manual and automated linking based on the location of and distance between images, the manner in which the images were captured (such as where the images are captured by a camera as the camera is moved along a path), and other methods which identify a best image in a set of images for connecting to any given point or orientation in an image.
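By way of illustration only, the sketch below shows one possible representation of such an image graph; the type and method names (for example, `ImageNode`, `ConnectionGraph`, `connect`) are assumptions for illustration and are not prescribed by the disclosure.

```typescript
// Illustrative sketch: one possible representation of an image graph of
// geolocated images. The names here are assumptions, not from the disclosure.

interface ImageNode {
  id: string;
  lat: number; // degrees
  lng: number; // degrees
  headingDeg: number; // orientation of the camera when the image was captured
}

interface Connection {
  from: string; // id of the source image
  to: string; // id of the connected image
  bearingDeg: number; // direction of travel from "from" to "to"
}

class ConnectionGraph {
  private nodes = new Map<string, ImageNode>();
  private edges = new Map<string, Connection[]>();

  addImage(node: ImageNode): void {
    this.nodes.set(node.id, node);
    if (!this.edges.has(node.id)) this.edges.set(node.id, []);
  }

  // Link two images, e.g. consecutive frames captured as a camera was moved
  // along a road, or images linked manually or by proximity.
  connect(fromId: string, toId: string, bearingDeg: number): void {
    this.edges.get(fromId)?.push({ from: fromId, to: toId, bearingDeg });
  }

  // From any given image, which images are connected and in which direction.
  neighbors(id: string): Connection[] {
    return this.edges.get(id) ?? [];
  }
}
```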
One or more server computing devices may access these one or more graphs in order to provide images for display to a user. For example, the user's client computing device may send a request for images identifying a location. The one or more server computing devices may access the one or more image graphs in order to identify an image corresponding to the location. This image may then be provided to the user's client computing device.
In addition to providing the image, the one or more server computing devices may provide the client computing device with instructions for displaying a navigational overlay. This overlay may be represented as a line which indicates to the user a direction in which the user can move in the multidimensional space represented by the image. The line itself may actually correspond to a connection between the image that the user is currently viewing and other images in the one or more image graphs. As an example, this line may correspond to a road along which a camera was moved in order to capture the images identified in the image graph.
The user may then use the line to navigate through the multidimensional space. For example, the line may be used to suggest to a user an interaction zone for moving from the image to a different image in the one or more image graphs. If the user swipes within the interaction zone of the line and generally parallel or within some small angle difference to the line, the user's client computing device may “move” around in the multidimensional space by transitioning to a new image according to the characteristics of the swipe and the image graph. In other examples, a swipe may be identified as a request to rotate the view within the current image or pan in the current image.
The characteristics of the swipe, including direction, magnitude and speed, can be used to define how the view will change. The magnitude of the swipe, or its length in pixels, can be used to determine how far to move the view forward or backward. In addition, the speed of the swipe (pixels per second), and even the acceleration of the swipe, may be used to determine how fast the view appears to move through the multidimensional space.
The direction may be determined by unprojecting (converting from two dimensions to three dimensions) the current and previous screen coordinates onto the y=z plane in normalized device coordinates (NDC) and then projecting down again onto the ground plane. This allows vertical display movement to map to forward movement in the multidimensional space and horizontal display movement to map to lateral movement in the multidimensional space in a predictable way that is independent of scene geometry or the horizon.
As an example, when the user taps a view represented on a display of a client computing device, a line may appear. The user may then swipe along a portion of the display. Using the initial location or pixel(s) of the swipe and other locations along the swipe, the speed of the swipe may be determined. Once the swipe has completed or the user's finger leaves the display, a transition animation, such as zooming and fading into a new image, is displayed in order to transition to a new image.
As the view will typically traverse a plurality of images in order to reach the image identified based on the characteristics of the swipe, full resolution versions of these images may not actually be displayed. Instead, lower resolution versions of the images, such as thumbnail images, may be displayed as part of a transition between a current image and an identified image. When the actual identified image is displayed, this image may be displayed at the full resolution. This may save time and processing power.
The characteristics of the line, such as its opacity, width, color, and location, may be changed in order to allow the user to more easily understand and navigate the multidimensional space. In addition, in order to further reduce the interference of the line with the user's exploration of the multidimensional space, no line may be shown when the user is not interacting with the interface. As an example, the line may be faded in and out as needed based on whether the user is interacting with the display.
In some examples, the connection graph may branch off, such as where there is an intersection of two roads, or rather, an intersection in the one or more image graphs. In order to keep the overlay simple and easy to understand, only lines that are directly connected to the current line within a short distance of the current point of view may appear overlaid on the imagery. In these branch areas, such as at a traffic intersection where two or more roads intersect, a user may want to change from one line to another. Of course, traveling forward and making a 90 degree turn in the middle of an intersection can feel unnatural. In this regard, the one or more image graphs may be used to cut across the corners of an intersection between two lines by displaying images that are not on either line as a transition between the two lines.
The features described herein allow the user to explore the multidimensional space while following a specific, pre-determined motion path, and at the same time prevent the user from getting “stuck” or moving in an invalid way. In addition, the system is able to recognize the difference between when the user is trying to move and when the user is trying to look around. Other systems require multiple types of inputs in order to distinguish these types of movement. These systems may also require a user to point to a specific location in order to move towards that location. This is much less intuitive than allowing a user to swipe in order to move in the multidimensional space and does not allow continuous motion, because the user must tap or click on an arrow each time they want to move.
Further aspects, features and advantages of the disclosure will be appreciated when considered with reference to the following description of embodiments and accompanying figures. The same reference numbers in different drawings may identify the same or similar elements. Furthermore, the following description is not limiting; the scope of the present technology is defined by the appended claims and equivalents. While certain processes in accordance with example embodiments are shown in the figures as occurring in a linear fashion, this is not a requirement unless expressly stated herein. Different processes may be performed in a different order or concurrently. Steps may also be added or omitted unless otherwise stated.
Example Systems
The memory 114 can also include data 118 that can be retrieved, manipulated or stored by the processor. The memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.
The instructions 116 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the one or more processors. In that regard, the terms “instructions,” “application,” “steps” and “programs” can be used interchangeably herein. The instructions can be stored in object code format for direct processing by a processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.
Data 118 can be retrieved, stored or modified by the one or more processors 112 in accordance with the instructions 116. For instance, although the subject matter described herein is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents. The data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data.
The one or more processors 112 can be any conventional processors, such as a commercially available CPU. Alternatively, the processors can be dedicated components such as an application specific integrated circuit (“ASIC”) or other hardware-based processor. Although not necessary, one or more of computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc. faster or more efficiently.
Each of the computing devices 110 can be at a different node of a network 160 and capable of directly and indirectly communicating with other nodes of network 160. Although only a few computing devices are depicted in the figures, a typical system can include a large number of connected computing devices.
As an example, each of the computing devices 110 may include web servers capable of communicating with storage system 150 as well as computing devices 120, 130, and 140 via the network. For example, one or more of server computing devices 110 may use network 160 to transmit and present information to a user, such as user 220, 230, or 240, on a display, such as displays 122, 132, or 142 of computing devices 120, 130, or 140. In this regard, computing devices 120, 130, and 140 may be considered client computing devices and may perform all or some of the features described herein.
Each of the client computing devices 120, 130, and 140 may be configured similarly to the server computing devices 110, with one or more processors, memory and instructions as described above. Each client computing device 120, 130 or 140 may be a personal computing device intended for use by a user 220, 230, or 240, and have all of the components normally used in connection with a personal computing device, such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, a display such as displays 122, 132, or 142 (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device that is operable to display information), and a user input device 124 (e.g., a mouse, keyboard, touch-screen or microphone). The client computing device may also include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another.
Although the client computing devices 120, 130 and 140 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing device 120 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet. In another example, client computing device 130 may be a head-mounted computing system. As an example, the user may input information using a small keyboard, a keypad, a microphone, visual signals with a camera, or a touch screen.
As with memory 114, storage system 150 can be of any type of computerized storage capable of storing information accessible by the server computing devices 110, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. In addition, storage system 150 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 150 may be connected to the computing devices via the network 160.
Storage system 150 may store images and associated information such as image identifiers, orientation and location of the image, orientation and location of the camera that captured the image, as well as intrinsic camera settings (such as focal length, zoom, etc.). In addition to being associated with orientation and location information, the images may be connected to one another in one or more image graphs. In other words, from any given image, these image graphs may indicate which other images are connected to that image and in which direction.
The graphs may be generated using various techniques, including manual and automated linking based on the location of and distance between images, the manner in which the images were captured (such as where the images are captured by a camera as the camera is moved along a path), and other methods which identify a best image in a set of images for connecting to any given point or orientation in an image.
The graphs themselves may include different images and different connections between those images.
Example Methods
As previously discussed, the following operations do not have to be performed in the precise order described below. Rather, as mentioned above, various operations can be handled in a different order or simultaneously, and operations may be added or omitted.
As an example, a client computing device may provide users with an image navigation experience. The client computing device may do so by communicating with one or more server computing devices in order to retrieve and display images. The one or more server computing devices may access the one or more image graphs in order to provide images for display to a user. For example, the user's client computing device may send a request for images identifying a location. The one or more server computing devices may access the one or more image graphs in order to identify an image corresponding to the location. This image may then be provided to the user's client computing device.
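As a rough sketch of this lookup, and assuming only that each stored image carries a latitude and longitude, a server might pick the image whose location is nearest to the requested location as follows; the equirectangular distance approximation and the helper names are illustrative assumptions.

```typescript
// Illustrative sketch: choose the stored image nearest to a requested
// location. Assumes small distances, so an equirectangular approximation
// of great-circle distance is adequate.

interface GeoImage {
  id: string;
  lat: number; // degrees
  lng: number; // degrees
}

function approxDistanceMeters(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const x = toRad(bLng - aLng) * Math.cos(toRad((aLat + bLat) / 2));
  const y = toRad(bLat - aLat);
  return Math.sqrt(x * x + y * y) * R;
}

function findClosestImage(images: GeoImage[], lat: number, lng: number): GeoImage | undefined {
  let best: GeoImage | undefined;
  let bestDist = Infinity;
  for (const img of images) {
    const d = approxDistanceMeters(img.lat, img.lng, lat, lng);
    if (d < bestDist) {
      bestDist = d;
      best = img;
    }
  }
  return best;
}
```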
In addition to providing the image, the one or more server computing devices may provide the client computing device with instructions for displaying a navigational overlay. This overlay may be represented as a line which indicates to the user a direction in which the user can move in the multidimensional space.
The user may then use the line to navigate through the multidimensional space. For example, the line may be used to suggest to a user an interaction zone for moving from the image to a different image in the one or more image graphs. For instance, an interaction zone 602 may be defined as an area extending some number of pixels to either side of a line 502 overlaid on an image 444.
If the user swipes within the interaction zone of the line and generally parallel to, or within some small angle difference of, the line, the user's client computing device may “move” around in the multidimensional space by transitioning to a new image according to the characteristics of the swipe and the image graph.
If the user swipes outside of this interaction zone 602, or rather more than some number of pixels away from the line 502, or at some specified angular distance to the line (for instance, greater than θ1 but less than 90-θ1), rather than transitioning to a new image, the user's client computing device may change the orientation of the view of the current image, or rather rotate the view within the current image. For example, if the user were to swipe his or her finger across the display 122 from the left side towards the right side at an angle that is greater than θ1 but less than 90-θ1 relative to the line 502, rather than moving generally along the line 502, the display may rotate within image 444.
In addition, if the user swipes generally perpendicular to the direction of line 502, or within the small angular distance θ1 from perpendicular (that is, more than 90-θ1 from parallel to the direction of line 502), within or outside of the interaction zone, this may indicate that the user wishes to pan (move sideways) in the current image. For example, if the user were to swipe his or her finger across the display 122 from the left side towards the right side at an angle that is greater than 90-θ1 from parallel to the direction of line 502, rather than moving generally along the line 502 or rotating within image 444, the display may pan within the current image. In this regard, if there is more than one line according to the one or more image graphs, this movement may cause the view to actually “jump” to a different line. For instance, from image 444, the display may jump to image 404 of image graph 420.
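To illustrate this three-way distinction, the sketch below classifies a swipe as movement along the line, rotation, or panning based on its angle relative to the line and whether it falls within the interaction zone; the threshold value and all names are assumptions for illustration.

```typescript
// Illustrative sketch: classify a swipe relative to the overlay line.
// The threshold thetaDeg and all names are assumptions.

type SwipeAction = "move_along_line" | "rotate_view" | "pan_or_jump";

interface Point {
  x: number; // screen pixels
  y: number;
}

// Smallest angle between two directions, in degrees, in the range [0, 90].
function angleBetweenDeg(a: Point, b: Point): number {
  const na = Math.hypot(a.x, a.y);
  const nb = Math.hypot(b.x, b.y);
  if (na === 0 || nb === 0) return 0; // degenerate swipe treated as parallel
  const cos = Math.min(1, Math.abs(a.x * b.x + a.y * b.y) / (na * nb));
  return (Math.acos(cos) * 180) / Math.PI;
}

function classifySwipe(
  start: Point,
  end: Point,
  lineDir: Point, // screen-space direction of the overlay line
  insideInteractionZone: boolean,
  thetaDeg = 20 // small angle threshold θ1 (assumed value)
): SwipeAction {
  const swipeDir = { x: end.x - start.x, y: end.y - start.y };
  const angle = angleBetweenDeg(swipeDir, lineDir); // 0 = parallel, 90 = perpendicular
  if (insideInteractionZone && angle <= thetaDeg) {
    return "move_along_line"; // transition to another image along the line
  }
  if (angle >= 90 - thetaDeg) {
    return "pan_or_jump"; // pan sideways or jump to an adjacent line
  }
  return "rotate_view"; // change orientation within the current image
}
```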
The characteristics of the swipe, including direction, magnitude and speed, can be used to define how the view will change. For instance, while a line is displayed, if the user swipes along the interaction zone, the result may be movement in a direction (within the image graphs) towards the point where the swipe began. In this regard, dragging downward may cause the view to move forward in the multidimensional space, towards the point on the line where the swipe began, while dragging upward may cause the view to move backward.
The magnitude of the swipe, or its length in pixels, can be used to determine how far to move the view forward or backward. For instance, if the swipe does not cross a threshold minimum number of pixels, the result may be no movement. If the swipe meets the threshold minimum number of pixels, the movement in the multidimensional space may correspond to the number of pixels the swipe crosses. However, because the view has perspective, the relationship between the distance in pixels and the distance in the multidimensional space may be exponential (as opposed to linear), as the plane on which the line appears tilts towards the vanishing point in the image. In this regard, the distance in pixels may be converted to a distance in the multidimensional space. An image along the line according to the one or more image graphs that is closest to the distance in the multidimensional space from the original image may be identified as the image to which the view will transition.
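As a simple illustration of converting swipe length to a travel distance and picking the nearest image along the line, consider the sketch below; the faster-than-linear exponent, the minimum-pixel threshold, and the meters-per-pixel scale are assumed tuning values, not values given in the disclosure.

```typescript
// Illustrative sketch: map swipe length in pixels to a travel distance along
// the line and choose the image nearest to that distance.

interface LineImage {
  id: string;
  distanceAlongLineM: number; // meters from the current image along the line
}

function pixelsToWorldMeters(pixels: number, metersPerPixel: number, exponent = 1.3): number {
  // Because of perspective, equal pixel distances cover more ground farther
  // from the camera, so the mapping grows faster than linearly.
  return metersPerPixel * Math.pow(pixels, exponent);
}

function selectTargetImage(
  swipePixels: number,
  imagesAlongLine: LineImage[],
  metersPerPixel: number,
  minPixels = 30
): LineImage | null {
  if (swipePixels < minPixels) return null; // swipe too short: no movement
  const targetMeters = pixelsToWorldMeters(swipePixels, metersPerPixel);
  let best: LineImage | null = null;
  let bestDelta = Infinity;
  for (const img of imagesAlongLine) {
    const delta = Math.abs(img.distanceAlongLineM - targetMeters);
    if (delta < bestDelta) {
      bestDelta = delta;
      best = img;
    }
  }
  return best;
}
```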
In addition, the speed of the swipe (pixels per second), and even the acceleration of the swipe, may be used to determine how fast the view appears to move through the multidimensional space. For instance, the movement may initially correspond to the speed of the swipe, but this speed may slow down and come to a stop at the image identified according to the magnitude of the swipe. In the event that the distance determined based on the magnitude of the swipe is between two images, the speed (or acceleration) of the swipe may be used to determine which image to identify as the image to which the view will transition. For instance, the image farther from the original image may be selected if the speed (or acceleration) is relatively high or greater than some threshold speed (or acceleration). At the same time, the image closer to the original image may be selected if the speed (or acceleration) is relatively low or lower than the threshold speed (or acceleration). In yet another example, where the speed (or acceleration) meets some other threshold value, in response, the view may appear to continuously move through the multidimensional space by transitioning between images along the line in the one or more image graphs according to the speed of the swipe until the user taps the display. This tap may cause the movement to slow down to a stop or immediately stop on the current or next image according to the one or more image graphs. In yet another example, the speed of a swipe made generally perpendicular to a line may be translated into a slower movement through the multidimensional space than the same speed of a swipe made generally parallel to the line. Yet further, the speed of the swipe may be determined based upon where the swipe occurs. In this regard, speed may be determined by measuring meters per pixel at a point on the screen halfway between the bottom of the screen and the horizon, following the intuition that this is the “average” screen position of the swipe. So if the user drags his or her finger exactly over this point, the pixels on the ground will move at the same speed as the user's finger.
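The sketch below illustrates how speed might break a tie between two candidate images, trigger continuous motion, and be scaled by the ground resolution at a reference screen point; all threshold values are assumptions.

```typescript
// Illustrative sketch: use swipe speed to break a tie between candidate
// images, decide whether to keep moving, and scale speed by ground resolution.

interface Candidate {
  id: string;
  distanceAlongLineM: number;
}

// A fast flick favors the farther image; a slow drag favors the nearer one.
function chooseBySpeed(
  nearer: Candidate,
  farther: Candidate,
  speedPxPerSec: number,
  speedThreshold = 800
): Candidate {
  return speedPxPerSec >= speedThreshold ? farther : nearer;
}

// Above this threshold the view keeps transitioning from image to image
// along the line until the user taps the display to stop.
function shouldKeepMoving(speedPxPerSec: number, continuousThreshold = 2000): boolean {
  return speedPxPerSec >= continuousThreshold;
}

// Scale the swipe speed by the ground resolution at a reference screen point,
// e.g. halfway between the bottom of the screen and the horizon, so the
// ground appears to track the user's finger near that point.
function groundSpeedMetersPerSec(speedPxPerSec: number, metersPerPixelAtReference: number): number {
  return speedPxPerSec * metersPerPixelAtReference;
}
```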
The direction may be determined by unprojecting (converting from two dimensions to three dimensions) the current and previous screen coordinates onto the y=z plane in normalized device coordinates (NDC) and then projecting down again onto the ground plane. This allows vertical display movement to map to forward movement in the multidimensional space and horizontal display movement to map to lateral movement in a predictable way that is independent of scene geometry or the horizon.
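One way to realize this, assuming a standard perspective camera with a known inverse view-projection matrix and a y-up world coordinate system, is sketched below; the matrix layout and helper names are generic assumptions rather than details taken from the disclosure.

```typescript
// Illustrative sketch: turn a screen-space drag into a direction on the
// ground plane, assuming a column-major 4x4 inverse view-projection matrix
// and a y-up world.

type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 values, column-major

// Apply a 4x4 matrix to (x, y, z, 1) with perspective divide.
function applyMat4(m: Mat4, x: number, y: number, z: number): Vec3 {
  const w = m[3] * x + m[7] * y + m[11] * z + m[15];
  return [
    (m[0] * x + m[4] * y + m[8] * z + m[12]) / w,
    (m[1] * x + m[5] * y + m[9] * z + m[13]) / w,
    (m[2] * x + m[6] * y + m[10] * z + m[14]) / w,
  ];
}

function screenToGround(
  px: number,
  py: number, // pixel coordinates, origin at top-left
  width: number,
  height: number,
  invViewProj: Mat4
): Vec3 {
  // Pixel -> NDC, flipping y so up is positive.
  const nx = (px / width) * 2 - 1;
  const ny = 1 - (py / height) * 2;
  // Place the point on the y = z plane in NDC, then unproject to world space.
  const world = applyMat4(invViewProj, nx, ny, ny);
  // Project down onto the ground plane by dropping the vertical component.
  return [world[0], 0, world[2]];
}

// Movement direction on the ground: current minus previous touch point.
function dragDirectionOnGround(
  prev: [number, number],
  curr: [number, number],
  width: number,
  height: number,
  invViewProj: Mat4
): Vec3 {
  const a = screenToGround(prev[0], prev[1], width, height, invViewProj);
  const b = screenToGround(curr[0], curr[1], width, height, invViewProj);
  return [b[0] - a[0], 0, b[2] - a[2]];
}
```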
As an example, when the user taps a view represented on a display of a client computing device, a line may appear. The user may then swipe along a portion of the display. Using the initial location or pixel(s) of the swipe and other locations along the swipe, the speed of the swipe may be determined. Once the swipe has completed or the user's finger leaves the display, a transition animation, such as zooming and fading into a new image, is displayed in order to transition to a new image. As an example, if the speed is small or less than a threshold, the next image along the line may be displayed as a new view. If the speed is greater than the threshold, the ending time and position on the display of the swipe are determined. The image closest to this position along the line according to the one or more image graphs is identified as a target image. In this example, the transition animation between images along the line continues until the target image is reached. In some examples, images displayed during the transition animation and the target image may be retrieved in real time from local memory or by providing the location information and requesting images from the one or more server computing devices while the transition animation is being played.
As the view will typically traverse a plurality of images in order to reach the image identified based on the characteristics of the swipe, full resolution versions of these images may not actually be displayed. Instead, lower resolution versions of the images, such as thumbnail images, may be displayed as part of a transition between a current image and an identified image. When the actual identified image is displayed, this image may be displayed at the full resolution. This may save time and processing power.
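A minimal sketch of such a transition, assuming hypothetical `fetchThumbnail`, `fetchFullRes`, and `draw` helpers and a browser-style `ImageBitmap`, might look like the following; the frame delay is an arbitrary assumption.

```typescript
// Illustrative sketch: step through low-resolution thumbnails during the
// transition and only draw the full-resolution image for the final target.

async function playTransition(
  intermediateIds: string[], // images traversed along the line
  targetId: string,
  fetchThumbnail: (id: string) => Promise<ImageBitmap>,
  fetchFullRes: (id: string) => Promise<ImageBitmap>,
  draw: (img: ImageBitmap) => void,
  frameDelayMs = 80
): Promise<void> {
  // Start loading the full-resolution target while thumbnails are shown.
  const fullRes = fetchFullRes(targetId);
  for (const id of intermediateIds) {
    draw(await fetchThumbnail(id)); // lower detail, cheaper to fetch and decode
    await new Promise((resolve) => setTimeout(resolve, frameDelayMs));
  }
  draw(await fullRes); // final view at full resolution
}
```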
The characteristics of the line may be changed in order to allow the user to more easily understand and navigate the multidimensional space. For instance, the opacity of the line may be adjusted in order to allow the user to see more or less of the features below the line, thus reducing the impact of the line on the user's ability to visually explore the multidimensional space. In this regard, the width of the line may correspond to a width of a road on which the line is overlaid. Similarly, the width of the line and interaction zone may be adjusted in order to prevent the line from taking up too much of the display while at the same time making the line thick enough for the user to be able to interact with the line using his or her finger. The color of the line, for instance blue, may be selected in order to complement the current view or to allow the line to stand out from the current view.
The location of the line need not always be identical to a connection line in the one or more image graphs. For instance, where the line is displayed to correspond to the width of a road, the corresponding connections in the one or more image graphs may not actually run down the middle of the road such that the line does not perfectly correspond to the one or more image graphs. In areas where the geometry of the connection lines zig zags, the connection lines may actually be fit with a straighter line as the overlay. In this regard, the line may not pass through the center of each image but may have a smooth appearance when overlaid on the view.
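For example, a zig-zag chain of connection points might be smoothed into a straighter overlay polyline with a simple moving average, as in the sketch below; the window size is an assumption, and the disclosure does not prescribe a particular fitting technique.

```typescript
// Illustrative sketch: smooth a zig-zag chain of connection points into a
// straighter overlay polyline using a moving average.

interface GroundPoint {
  x: number; // ground-plane coordinates
  y: number;
}

function smoothPolyline(points: GroundPoint[], window = 2): GroundPoint[] {
  return points.map((_, i) => {
    const lo = Math.max(0, i - window);
    const hi = Math.min(points.length - 1, i + window);
    let sx = 0;
    let sy = 0;
    for (let j = lo; j <= hi; j++) {
      sx += points[j].x;
      sy += points[j].y;
    }
    const n = hi - lo + 1;
    // The averaged point need not coincide with any image's location, which
    // is why the overlay may not pass through the center of each image.
    return { x: sx / n, y: sy / n };
  });
}
```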
In some examples, the connection graph may branch off, such as where there is an intersection of two roads, or rather, an intersection in the one or more image graphs.
In these branch areas, such as at a traffic intersection where two or more roads intersect, a user may want to change from one line to another. Of course, traveling forward and making a 90 degree turn in the middle of an intersection can feel unnatural. In this regard, the one or more image graphs may be used to cut across the corners of an intersection between two lines by displaying images that are not on either line as a transition.
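As an illustration, a short chain of images for such a transition could be found with a breadth-first search over the image graph, as sketched below under the assumption that the graph exposes a `neighbors` lookup by image id.

```typescript
// Illustrative sketch: find a short chain of images through the image graph
// for transitioning from an image on one line to an image on another, which
// can cut across the corner of an intersection.

function transitionPath(
  neighbors: (id: string) => string[], // adjacency from the image graph
  fromId: string,
  toId: string
): string[] | null {
  const prev = new Map<string, string | null>([[fromId, null]]);
  const queue: string[] = [fromId];
  while (queue.length > 0) {
    const id = queue.shift()!;
    if (id === toId) {
      // Reconstruct the chain of images to display in series as the transition.
      const path: string[] = [];
      for (let cur: string | null = id; cur !== null; cur = prev.get(cur) ?? null) {
        path.unshift(cur);
      }
      return path;
    }
    for (const next of neighbors(id)) {
      if (!prev.has(next)) {
        prev.set(next, id);
        queue.push(next);
      }
    }
  }
  return null; // the two images are not connected in the graph
}
```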
In order to further reduce the interference of the line with the user's exploration of the multidimensional space, no line may be shown when the user is not interacting with the interface. This allows the user to see the entire 3D scene. As an example, if the user taps and drags around (or clicks and drags around, or motions and drags around) anywhere in the scene while the line is not visible, he or she will look around the scene. If the user taps on the image once, the line may appear. Effects such as a shimmer, a brightening then dulling of the line, a thickening then thinning of the line, or quickly making the line more or less opaque and then returning it to normal opacity may be used to indicate the interactive nature of the line to the user. If the user makes a dragging motion within the interaction zone, even when the line is not visible, the line may appear and the image may appear to transition along the line. After some predetermined period of time, such as 2 seconds or more or less, during which no input is received by the client computing device on the display, or a half a second or more or less after a single tap on the display, the line may be faded until it disappears, again, to reduce the impact of the line on the image.
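A minimal sketch of this fade behavior, assuming a hypothetical `setOpacity` callback and using delays loosely following the timings mentioned above, is shown below.

```typescript
// Illustrative sketch: fade the overlay line out after a period of
// inactivity and bring it back on interaction. setOpacity is a hypothetical
// callback; the delays are assumptions based on the timings discussed above.

class LineFader {
  private hideTimer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private setOpacity: (value: number) => void, // 0 = hidden, 1 = fully shown
    private idleDelayMs = 2000, // roughly two seconds without input
    private tapDelayMs = 500 // roughly half a second after a lone tap
  ) {}

  // A drag (or other ongoing interaction) shows the line and restarts the timer.
  onDrag(): void {
    this.show(this.idleDelayMs);
  }

  // A single tap reveals the line but hides it again more quickly.
  onSingleTap(): void {
    this.show(this.tapDelayMs);
  }

  private show(delayMs: number): void {
    this.setOpacity(1);
    if (this.hideTimer) clearTimeout(this.hideTimer);
    this.hideTimer = setTimeout(() => this.setOpacity(0), delayMs);
  }
}
```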
The interface may also provide for other types of navigation in addition to moving between images along the line corresponding to the one or more image graphs. For example, as noted above, a single tap may cause the line to appear. In this example, the single tap may not cause the view to change; rather, the view may appear to remain stationary. At the same time, a double tap may take a user to an image connected in the one or more image graphs to the current image at or near the point of the double tap. Further, if a user is currently facing nearly perpendicular to a line, and the user swipes generally perpendicular to the line, the view may appear to remain perpendicular to the line, allowing the user to ‘strafe’ along the road. In addition, a pinching gesture may zoom in or out of a particular image without actually causing a transition to a new image.
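The gesture handling described above might be dispatched as in the sketch below; the handler interface and the angle threshold are assumptions rather than details from the disclosure.

```typescript
// Illustrative sketch: dispatch the gestures described above.

type Gesture =
  | { kind: "single_tap"; x: number; y: number }
  | { kind: "double_tap"; x: number; y: number }
  | { kind: "swipe"; angleFromLineDeg: number }
  | { kind: "pinch"; scale: number };

interface NavigationUi {
  showLine(): void;
  jumpToNearestConnectedImage(x: number, y: number): void;
  strafeAlongLine(): void;
  moveAlongLine(): void;
  zoom(scale: number): void;
}

function handleGesture(g: Gesture, ui: NavigationUi, thetaDeg = 20): void {
  switch (g.kind) {
    case "single_tap":
      ui.showLine(); // reveal the line; the view itself stays stationary
      break;
    case "double_tap":
      ui.jumpToNearestConnectedImage(g.x, g.y); // jump to a connected image near the tap
      break;
    case "swipe":
      if (g.angleFromLineDeg >= 90 - thetaDeg) {
        ui.strafeAlongLine(); // nearly perpendicular: strafe along the road
      } else {
        ui.moveAlongLine(); // otherwise move along the line
      }
      break;
    case "pinch":
      ui.zoom(g.scale); // zoom without transitioning to a new image
      break;
  }
}
```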
Although the examples above relate to lines, various other overlays may be used to provide the user with an indication of navigable areas in the multidimensional space. For instance, a plurality of scroll bars may be placed on a map, in addition to or along with a toggle switch for switching between looking around (changing orientation) and moving through the multidimensional space, in order to provide visual feedback to the user as he or she moves through the multidimensional space. In another example, rather than a finite line, a wider line which appears to blend laterally into the view may be used. In yet another alternative, discs or short arrow-style indications which do not necessarily appear to extend far into the scene can be used to suggest the interaction zone.
Most of the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. As an example, the preceding operations do not have to be performed in the precise order described above. Rather, various steps can be handled in a different order or simultaneously. Steps can also be omitted unless otherwise stated. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.