The present disclosure relates to rendering stereoscopic images with respect to mapping services.
Example embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings.
According to an aspect of an embodiment, a method may include obtaining a first digital image that depicts a first aerial view of a first area of a setting. The first digital image may have a first center point that corresponds to a first coordinate within the setting. The method may additionally include obtaining a second digital image that depicts a second aerial view of a second area of the setting. The second digital image may have a second center point that corresponds to a second coordinate within the setting. The second coordinate may be laterally offset from the first coordinate by a target offset. Further, the method may include determining an overlapping area where the first area and the second area overlap and obtaining a third digital image based on the overlapping area, the first digital image, and the second digital image. In addition, the method may include generating a first-eye image of a stereoscopic image of the setting based on the first digital image and generating a second-eye image of the stereoscopic image based on the third digital image. The method may also include presenting the stereoscopic image on a screen of an electronic device.
The objects and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims. Both the foregoing general description and the following detailed description are given as examples and are explanatory and are not restrictive of the invention, as claimed.
Aerial view images are often taken of settings and may be used for many different applications. For example, many people use digital mapping applications (“mapping applications”) to help familiarize themselves with an area or to navigate from one point to another. These mapping applications may be included in or accessible via various devices or navigation systems such as desktop computers, smartphones, tablet computers, automobile navigation systems, Global Positioning System (GPS) navigation devices, etc. In some instances these applications may use aerial view images of a setting. Examples of mapping applications include Google Maps®, Google Earth®, Bing Maps®, etc. Other uses for aerial view images may include analysis of the landscape and geography of planets and viewing of different areas for recreational or other purposes, etc.
In addition, humans have a binocular vision system that uses two eyes spaced approximately two and a half inches (approximately 6.5 centimeters) apart. Each eye sees the world from a slightly different perspective. The brain uses the difference in these perspectives to calculate or gauge distance. This binocular vision system is partly responsible for the ability to determine with relatively good accuracy the distance of an object. The relative distance of multiple objects in a field-of-view may also be determined with the help of binocular vision.
Three-dimensional (stereoscopic) imaging takes advantage of the depth perceived by binocular vision by presenting two images to a viewer where one image is presented to one eye (e.g., the left eye) and the other image is presented to the other eye (e.g., the right eye). The images presented to the two eyes may include substantially the same elements, but the elements in the two images may be offset from each other to mimic the offsetting perspective that may be perceived by the viewer's eyes in everyday life. Therefore, the viewer may perceive depth in the elements depicted by the images.
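As a non-limiting illustration of this principle, the following Python sketch composes a red-cyan anaglyph, one of many suitable stereoscopic viewing techniques, from a left-eye/right-eye image pair using the Pillow library; the file names are hypothetical placeholders.

```python
# Sketch: compose a red-cyan anaglyph from a left-eye/right-eye image pair.
# Assumes two same-size RGB images; the file names are hypothetical examples.
from PIL import Image

def make_anaglyph(left_eye: Image.Image, right_eye: Image.Image) -> Image.Image:
    """Red channel from the left-eye image, green/blue from the right-eye image."""
    left = left_eye.convert("RGB")
    right = right_eye.convert("RGB")
    r, _, _ = left.split()    # the left eye contributes red
    _, g, b = right.split()   # the right eye contributes green and blue
    return Image.merge("RGB", (r, g, b))

if __name__ == "__main__":
    left = Image.open("left_eye.png")     # hypothetical input files
    right = Image.open("right_eye.png")
    make_anaglyph(left, right).save("stereo_anaglyph.png")
```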
According to one or more embodiments of the present disclosure, one or more stereoscopic images may be generated based on monoscopic digital images. In some embodiments, the monoscopic digital images may be obtained from a mapping application. The stereoscopic images may each include a first-eye image and a second-eye image that, when viewed using any suitable stereoscopic viewing technique, may result in a user experiencing a three-dimensional effect with respect to the elements included in the stereoscopic images. The monoscopic images may depict an aerial view of a geographic setting of a particular geographic location and the resulting stereoscopic images may provide a three-dimensional (3D) rendering of the geographic setting. The presentation of the stereoscopic images to provide a 3D rendering of geographic settings may help users become better familiarized with the geographic settings. Reference to a "stereoscopic image" in the present disclosure may refer to any configuration of a first-eye image and a second-eye image that, when viewed by their respective eyes, may generate a 3D effect as perceived by a viewer.
In some embodiments, the stereoscopic images may be generated based on the movement of an object (e.g., a vehicle) through the setting, in which the first-eye image for each stereoscopic image may include the object at a first particular location in the setting. Additionally, as described below, the second-eye image for each stereoscopic image may represent the setting offset from the representation of the setting in the corresponding first-eye image. The offset may be determined as if another object that is not actually present (a "virtual object") were next to the object in the first-eye image, at a second particular location that may be laterally offset from the first particular location, and facing substantially the same direction as the object. The second-eye images may thus be generated as if the virtual object were travelling parallel to the object actually travelling through the setting.
In some embodiments, the monoscopic images 102 may include digital images that depict an aerial view of a geographic setting. For example, the monoscopic images 102 may include digital images captured by aircraft, satellites, telescopes, etc., that depict an aerial view of a geographic setting. In some instances, one or more of the monoscopic images 102 may depict the aerial view from a straight top-to-bottom perspective that may be looking straight down or substantially straight down at the geographic setting. In these or other instances, one or more of the monoscopic images 102 may depict the aerial view from a tilted perspective that may not be looking straight down at the geographic setting.
In some embodiments, the stereoscopic image module 104 may be configured to acquire the monoscopic images 102 via a mapping application or another suitable source. For example, in some embodiments, the stereoscopic image module 104 may be configured to access the mapping application via any suitable network such as the Internet to request the monoscopic images 102 from the mapping application. In these or other embodiments, the mapping application and associated monoscopic images 102 may be stored on a same device that may include the stereoscopic image module 104. In these or other embodiments, the stereoscopic image module 104 may be configured to access the mapping application stored on the device to request the monoscopic images 102 from a storage area of the device on which they may be stored.
Additionally or alternatively, the stereoscopic image module 104 may be included with the mapping application in which the stereoscopic image module 104 may obtain the monoscopic images 102 via the mapping application by accessing portions of the mapping application that control obtaining the monoscopic images 102. In other embodiments, the stereoscopic image module 104 may be separate from the mapping application, but may be configured to interface with the mapping application to obtain the monoscopic images 102. Additionally or alternatively, the stereoscopic image module 104 may be integrated or used with any other application that may use aerial view images. The stereoscopic image module 104 may be configured to generate one or more stereoscopic images 108 as indicated below.
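As a hedged illustration of obtaining a monoscopic image via a mapping application over a network, the sketch below requests an aerial-view image by center coordinate, zoom, size, and orientation. The endpoint URL and parameter names are hypothetical stand-ins, not the API of any particular mapping application.

```python
# Sketch: request an aerial-view monoscopic image from a mapping service.
# The endpoint and parameter names are hypothetical stand-ins for whatever
# static-map API the stereoscopic image module actually interfaces with.
import requests

MAP_ENDPOINT = "https://maps.example.com/staticmap"  # hypothetical URL

def fetch_aerial_image(lat: float, lon: float, heading_deg: float,
                       zoom: int = 19, width: int = 640, height: int = 640) -> bytes:
    """Fetch an aerial image centered on (lat, lon), oriented along heading_deg."""
    params = {
        "center": f"{lat},{lon}",    # center point of the requested image
        "zoom": zoom,                # zoom factor of the "camera"
        "size": f"{width}x{height}",
        "heading": heading_deg,      # orientation (e.g., 0 = North)
        "maptype": "satellite",
    }
    response = requests.get(MAP_ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    return response.content         # encoded image bytes
```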
The stereoscopic image module 104 may be configured to generate any number of stereoscopic images 108 based on any number of monoscopic images 102 using the principles described below. Additionally, as indicated above, because the monoscopic images 102 and the stereoscopic images 108 may be aerial-view images of a setting, the stereoscopic images 108 may be stereoscopic aerial-view images that may be rendered with respect to the setting. Additionally or alternatively, in some embodiments, the stereoscopic image module 104 may be configured to generate a series of stereoscopic images 108 that may correspond to a navigation route such that the navigation route may be rendered in 3D.
In these or other embodiments, the stereoscopic image module 104 may be configured to interface with a display module of a device such that the stereoscopic images 108 may be presented on a corresponding display to render the 3D effect. The stereoscopic image module 104 may be configured to present the stereoscopic images 108 according to the particular requirements of the corresponding display and display module.
Therefore, the stereoscopic image module 104 may be configured to generate stereoscopic aerial-view images based on monoscopic digital images as described above. Modifications, additions, or omissions may be made to the configuration described above without departing from the scope of the present disclosure.
In some embodiments, the first digital image 210 may depict a first area of a geographic setting based on one or more properties of a camera that may capture the first digital image 210. For example, the first area may be based on a position of the corresponding camera, a field-of-view of the camera, a zooming factor of the camera, etc.
In some embodiments, the first digital image 210 may depict the first area of the setting according to a first orientation. In particular, in some embodiments, the first orientation may correspond to a navigational direction (e.g., North, South, East, West, Northwest, Northeast, Southwest, Southeast, etc.) that may be used to orient the perspective illustrated in the first area. For example, a first arrow 220 in the drawings may indicate the navigational direction that corresponds to the first orientation of the first digital image 210.
In these or other embodiments, the first digital image 210 may be obtained based on a location of an object in the setting. For example, the object may include a vehicle of a user, an electronic device of the user, etc., that may be configured to receive GPS coordinates of the object. The first digital image 210 may be obtained based on the GPS coordinates such that a first coordinate within the setting that may correspond to a first center point 214 of the first digital image 210 may be based on or may be the GPS coordinates of the object.
In these or other embodiments, the first digital image 210 may be obtained based on a particular direction that may be associated with a navigation route included in the first area such that the first orientation may be based on the particular direction. For example, the first digital image 210 may be obtained such that the first orientation corresponds to the particular direction of the navigation route at a coordinate that may correspond to the first center point 214 of the first digital image 210.
Additionally or alternatively, in some embodiments, the first digital image 210 may be obtained based on a direction of travel of the object. For example, in some embodiments, the first digital image 210 may be obtained such that the first orientation may be based on the direction of travel of the object. In particular, in some embodiments, the first digital image 210 may be obtained such that the navigational direction of the first arrow 220 may correspond to—e.g., be based on, be substantially equal to or equal to, etc.—the direction of travel of the object.
Additionally, the second digital image 212 may depict a second area of the geographic setting based on one or more properties of a camera that may capture the second digital image 212. For example, the second area may be based on a position of the corresponding camera, a field-of-view of the camera, a zooming factor of the camera, etc. Further, in some embodiments, the first digital image 210 and the second digital image 212 may be substantially the same size and may have substantially the same aspect ratio, such that they may both include the same or approximately the same number of pixels in both the horizontal and vertical directions.
In some embodiments, the second digital image 212 may depict the second area of the setting according to a second orientation. In particular, in some embodiments, the second orientation, like the first orientation, may correspond to a navigational direction (e.g., North, South, East, West, Northwest, Northeast, Southwest, Southeast, etc.) that may be used to orient the perspective illustrated in the second area. For example, a second arrow 222 in the drawings may indicate the navigational direction that corresponds to the second orientation of the second digital image 212.
In some embodiments, the first digital image 210 and the second digital image 212 may be such that the first area and the second area may not be the same but may overlap with each other. In these or other embodiments, the first digital image 210 and the second digital image 212 may be such that one or more elements of the overlapping area of the first digital image 210 and the second digital image 212 may be laterally offset from each other. Additionally or alternatively, the lateral offset of the one or more elements may be based on a target lateral offset. The target lateral offset may be based on a target distance between same elements of the first digital image 210 and the second digital image 212 such that, when the stereoscopic image 280 is viewed, a 3D effect may be perceived with respect to the corresponding elements. In these or other embodiments, the target offset may thus be based on a target degree of 3D effect.
By way of example, the first digital image 210 may include the first center point 214 and the second digital image 212 may include a second center point 216. The first center point 214 may correspond to the first coordinate of the setting and the second center point 216 may correspond to a second coordinate of the setting that may be different from the first coordinate of the setting. In some embodiments, the second coordinate of the setting may be laterally offset from the first coordinate of the setting. In some embodiments, the "lateral" nature of the lateral offset of the first coordinate with respect to the second coordinate may be with respect to the first orientation and not the second orientation in instances in which the first and second orientations are rotated with respect to each other and not parallel to each other. Additionally, the second digital image 212 may include a second offset point 218 that may be laterally offset from the second center point 216 by a target offset that may be based on a target degree of a 3D effect. Reference to a lateral offset with respect to a particular orientation between first and second coordinates may indicate that the first coordinate depicted in a digital image with the particular orientation may be horizontally removed in the digital image from the second coordinate with little to no vertical offset in the digital image between the first coordinate and the second coordinate.
Due to the lateral offset, the first coordinate may be depicted in the second digital image 212 but may correspond to the second offset point 218 and not the second center point 216. As such, the first coordinate may be depicted by the first digital image 210 and the second digital image 212 but may not be depicted at the same locations in the first digital image 210 and the second digital image 212. Further, the first coordinate may thus be laterally offset in the first digital image 210 as compared to the second digital image 212 by the target offset.
In some embodiments, the second digital image 212 may be obtained based on the first digital image 210 and the target offset. For example, in some embodiments the second digital image 212 may be requested based on coordinates that may be associated with the first area such that one or more of the coordinates may also be included in the second area but offset by the target offset in the second digital image 212 as compared to their locations in the first digital image 210. In particular, in some embodiments, the second digital image 212 may be obtained based on the first coordinate that may correspond to the first center point 214 and the target offset such that the first coordinate may be offset from the second center point 216 by the target offset and may thus accordingly correspond to the second offset point 218.
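One way the second center coordinate might be computed is sketched below: the first coordinate is offset perpendicular to the first orientation (here, to the right, consistent with the example discussed next) using a simple local flat-earth approximation. The 30-meter target offset and the example coordinates are assumed values for illustration, not values recited in the disclosure.

```python
# Sketch: compute the second image's center coordinate by laterally offsetting
# the first coordinate perpendicular to the first orientation. Uses a local
# equirectangular (flat-earth) approximation; values are illustrative.
import math

EARTH_RADIUS_M = 6_371_000.0

def lateral_offset(lat: float, lon: float, heading_deg: float,
                   offset_m: float) -> tuple[float, float]:
    """Offset (lat, lon) by offset_m to the right of heading_deg (negative = left)."""
    # Direction 90 degrees clockwise from the orientation, i.e., to the right.
    bearing = math.radians(heading_deg + 90.0)
    d_north = offset_m * math.cos(bearing)
    d_east = offset_m * math.sin(bearing)
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + d_lat, lon + d_lon

# Example: a second center point 30 m to the right of the first center point,
# with the first orientation pointing North (heading 0).
second_lat, second_lon = lateral_offset(37.7749, -122.4194, 0.0, 30.0)
```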
In these or other embodiments, the second digital image 212 may be obtained based on a target direction of the target offset. For example, in the illustrated example, the target direction may be to the right such that the second area may be offset to the right as compared to the first area and such that the second offset point 218 that corresponds to the first coordinate may be to the left of the second center point 216. In these or other embodiments, the target direction may be to the left. Additionally or alternatively, in some embodiments, the target direction may be based on whether the first digital image 210 corresponds to the left-eye and the second digital image 212 corresponds to the right-eye or vice versa.
Additionally or alternatively, in some embodiments, the second digital image 212 may be obtained based on the first orientation associated with the first digital image 210. For example, in some embodiments, the second digital image 212 may be obtained based on the first orientation such that the second orientation is substantially parallel to or parallel to the first orientation. In particular, in some embodiments, the second digital image 212 may be obtained such that the navigational direction that may be indicated by the second arrow 222 may be the same as or substantially the same as the navigational direction that may be indicated by the first arrow 220.
As another example, in some embodiments, the second digital image 212 may be obtained based on the first orientation such that the second orientation is rotated with respect to the first orientation. In these or other embodiments, the rotation may have a rotational direction that may be toward the first orientation. For example, the first orientation may correspond to the first arrow 220 pointing substantially north. Additionally, the second digital image 212 may be based on a shift to the right from the first digital image 210. The second orientation in this instance may be such that the second arrow 222 is pointing at least slightly northwest such that the rotational direction of the second orientation may be based on the first orientation. As another example, the first orientation may again correspond to the first arrow 220 pointing substantially north. Additionally, the second digital image 212 may be based on a shift to the left from the first digital image 210. The second orientation in this instance may be such that the second arrow 222 is pointing at least slightly northeast such that the rotational direction of the second orientation may be based on the first orientation. In these or other embodiments, the first orientation may be rotated instead of, or in addition to, the second orientation such that the first orientation and the second orientation may be rotated toward each other.
In some embodiments, the amount of rotation of the first orientation and the second orientation toward each other may be based on a target rotation angle. The target rotation angle may be based on a target 3D effect in some embodiments. Additionally or alternatively, the target rotation angle may be based on a target focal point for the target 3D effect.
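A minimal sketch of one possible way to derive a target rotation angle is given below, assuming a simple converging ("toe-in") model in which each orientation rotates toward a target focal point. The half-baseline/focal-distance formula and the numeric values are assumptions for illustration, not formulas recited in the disclosure.

```python
# Sketch: derive a target rotation angle so the two orientations converge
# ("toe in") on a target focal point. The half-baseline/focal-distance model
# is an assumption, not a formula from the disclosure.
import math

def toe_in_angle_deg(target_offset_m: float, focal_distance_m: float) -> float:
    """Angle by which each orientation rotates toward the other."""
    return math.degrees(math.atan2(target_offset_m / 2.0, focal_distance_m))

# Example: a 30 m lateral offset converging 500 m ahead rotates each
# orientation by roughly 1.7 degrees toward the other.
angle = toe_in_angle_deg(30.0, 500.0)
first_heading = 0.0                     # first arrow pointing North
second_heading = first_heading - angle  # slightly northwest for a rightward shift
```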
Additionally or alternatively, in some embodiments, the second digital image 212 may be obtained based on the location of the object in the setting and a direction of travel of the object in the setting. For example, as indicated above, in some embodiments, the first digital image 210 may be obtained based on the location of the object and the direction of travel of the object in that the first digital image 210 may be centered based on the location of the object and in that the first digital image 210 may have an orientation that is based on the direction of travel of the object. In these and other embodiments, the second digital image 212 may be obtained based on a virtual object that may be travelling parallel to the object.
For example, a virtual location of the virtual object may be obtained based on the location of the object in the setting, the target offset, and the direction of travel of the object. In particular, the virtual location may be laterally offset from the location of the object by the target offset. Additionally, the lateral offset may be with respect to the first orientation of the first digital image 210, which may be based on the direction of travel of the object. As such, the virtual location may be parallel to the location of the object. In these or other embodiments, the second digital image 212 may be obtained based on the virtual location such that the second coordinate, which may correspond to the second center point 216, may correspond to the virtual location of the virtual object. Further, the second orientation of the second digital image 212 may be based on the first orientation of the first digital image 210, such as discussed above. As indicated above, the first orientation may be based on the direction of travel of the object and the second orientation may be based on the first orientation to mimic the virtual object travelling parallel to the object. Additionally or alternatively, in some embodiments, a virtual direction of travel of the virtual object may be obtained based on the direction of travel of the object such that the direction of travel of the virtual object may be substantially parallel to the direction of travel of the object and the second orientation may be obtained based on the virtual direction of travel. As such, the second digital image 212 may be obtained based on a virtual object travelling parallel to the object in some embodiments.
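Building on the sketch above, the following hypothetical helper derives the virtual object's location and direction of travel from the object's pose, reusing lateral_offset() from the earlier sketch; all names are illustrative.

```python
# Sketch: derive a "virtual object" pose from the actual object's location and
# direction of travel, reusing lateral_offset() from the earlier sketch. The
# virtual object travels parallel to the object.
def virtual_object_pose(obj_lat: float, obj_lon: float,
                        obj_heading_deg: float,
                        target_offset_m: float) -> tuple[float, float, float]:
    """Return (lat, lon, heading) of the virtual object next to the object."""
    v_lat, v_lon = lateral_offset(obj_lat, obj_lon, obj_heading_deg,
                                  target_offset_m)
    v_heading = obj_heading_deg          # substantially parallel travel
    return v_lat, v_lon, v_heading

# The second digital image may then be requested centered on (v_lat, v_lon)
# with orientation v_heading, e.g., via fetch_aerial_image() sketched earlier.
```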
In some embodiments, the stereoscopic image 280 may be generated based on the overlapping area of the first digital image 210 and the second digital image 212. For example, in some embodiments, the overlapping area that is included in the first area of the setting associated with the first digital image 210 and the second digital image 212 may be determined. Based on the overlapping area, a first sub-area of the first area may be determined. The first sub-area may include a portion of the first area that is included in the overlapping area. In some embodiments, the first sub-area may include all of or substantially all of the portion of the first area that is included in the overlapping area. Similarly, based on the overlapping area, a second sub-area of the second area may be determined. The second sub-area may include a portion of the second area that is included in the overlapping area. In some embodiments, the second sub-area may include all of or substantially all of the portion of the second area that is included in the overlapping area.
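For the simple case of straight-down views with parallel orientations, the overlapping area and sub-areas reduce to a rectangle intersection, as the following sketch illustrates; real overlaps may be trapezoidal or otherwise irregular, as described below. The coordinate values are hypothetical.

```python
# Sketch: overlapping area as a rectangle intersection. This only models the
# simple case of straight-down views with parallel orientations, where each
# area is an axis-aligned rectangle in a shared ground coordinate frame.
from typing import NamedTuple, Optional

class Rect(NamedTuple):
    left: float
    bottom: float
    right: float
    top: float

def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Return the overlapping area of a and b, or None if they do not overlap."""
    left, right = max(a.left, b.left), min(a.right, b.right)
    bottom, top = max(a.bottom, b.bottom), min(a.top, b.top)
    if left >= right or bottom >= top:
        return None
    return Rect(left, bottom, right, top)

first_area = Rect(0.0, 0.0, 100.0, 100.0)
second_area = Rect(30.0, 0.0, 130.0, 100.0)   # shifted right by the offset
overlap = intersect(first_area, second_area)  # Rect(30.0, 0.0, 100.0, 100.0)
# The first sub-area is the overlap expressed in the first image's pixel frame;
# the second sub-area is the overlap expressed in the second image's frame.
```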
The overlapping area and the resulting first sub-area and second sub-area may be based on a variety of factors, such as camera locations during capture of the first digital image 210 and the second digital image 212, camera rotation during capture of the first digital image 210 and the second digital image 212, the first and second orientations with respect to each other, an amount of offset between the first area and the second area, an amount of tilt in the aerial views of the first digital image 210 and the second digital image 212, a zoom factor of the first digital image 210, a zoom factor of the second digital image 212, a size of the first digital image 210, a size of the second digital image 212, a size of the first area, and a size of the second area.
Below are some examples of how the overlapping area may differ based on one or more of the factors listed above. In the examples given below, the sizes of the first digital image 210 and the second digital image 212, the sizes of the first area and the second area, and the tilt angles and zoom factors associated with the first digital image 210 and the second digital image 212 may be substantially the same. The examples listed below are given to aid understanding; they are not all-inclusive and do not cover every scenario.
In some embodiments, the first digital image 210 and the second digital image 212 may be portions of a digital image that may be captured by a camera at a particular position. For example, a camera 201 may capture a particular digital image of the setting from a particular position above the setting.
The first digital image 210 and the second digital image 212 may be portions of the particular digital image that may be captured by the camera 201 such that the first area depicted by the first digital image 210 and the second area depicted by the second digital image 212 may each be included in a larger area that may be depicted by the particular digital image.
As mentioned above, in some embodiments, the second orientation may be rotated with respect to the first orientation and the rotation may also affect the overlapping area.
As another example, in some embodiments, the first digital image 210 and the second digital image 212 may be captured with a camera at different locations or different rotation angles, which may also affect the size, shape, etc., of the overlapping area.
The capture of the first digital image 210 and the second digital image 212 with the camera at different locations or different rotational positions may result in an overlapping area that includes a first sub-area 247 of the first area and a second sub-area 249 of the second area.
In these or other embodiments, the second orientation may be rotated with respect to the first orientation, which may also affect the overlapping area in instances in which the camera position (e.g., location or rotational position) differs during the capture of the first digital image 210 and the second digital image 212.
The size and shape of the overlapping area and the corresponding first sub-area 247 and the second sub-area 249 may be based on the difference in the first location 203 and the second location 205 or the difference in the first rotational position and the second rotational position. For example, the trapezoidal dimensions of the first sub-area 247 and the second sub-area 249 may vary based on the differences. Additionally, the trapezoidal dimensions may differ depending on whether a change in location of the camera 201 has occurred or a change in rotational position of the camera 201 has occurred.
Additionally, an amount of tilt of the aerial views depicted in the first digital image 210 and the second digital image 212 may also affect the size and shape of the overlapping area. For example, tilted aerial views may depict a first area 251 of the setting in the first digital image 210 and a second area 253 of the setting in the second digital image 212.
The first area 251 and the second area 253 may overlap over an overlapping area 255 that may include a first sub-area of the first area 251 and a second sub-area of the second area 253.
The overlapping area between the first digital image 210 and the second digital image 212 (and consequently the corresponding first sub-area and the corresponding second sub-area) may be determined using any suitable technique. For example, in some embodiments it may be determined based on a comparison of image data included in pixels of the first digital image 210 and the second digital image 212 to determine which elements of the setting may be depicted in both the first digital image 210 and the second digital image 212. Additionally or alternatively, the overlapping area may be determined based on geometric principles associated with camera locations during capture of the first digital image 210 and the second digital image 212, camera rotation during capture of the first digital image 210 and the second digital image 212, the first and second orientations with respect to each other, an amount of offset between the first area and the second area, an amount of tilt in the aerial views of the first digital image 210 and the second digital image 212, a zoom factor of the first digital image 210, a zoom factor of the second digital image 212, a size of the first digital image 210, a size of the second digital image 212, a size of the first area, and a size of the second area.
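As one hedged example of the pixel-comparison approach, the sketch below estimates the translation between two same-size grayscale images with phase correlation; it assumes a pure translation (straight-down views, parallel orientations, same zoom) and is one possible technique, not one mandated by the disclosure.

```python
# Sketch: estimate the translation between two same-size aerial images with
# phase correlation, one concrete instance of comparing image data included
# in the pixels of the two images. Assumes a pure translation.
import numpy as np

def estimate_translation(img_a: np.ndarray, img_b: np.ndarray) -> tuple[int, int]:
    """Return (dy, dx) such that img_b approximates img_a shifted by (dy, dx)."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fb * np.conj(fa)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map wrap-around peak indices to signed shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)
```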
In some embodiments, a third digital image 270 may be obtained based on the overlapping area, the first digital image 210, and the second digital image 212. For example, the third digital image 270 may be obtained such that a third area of the setting depicted by the third digital image 270 is included in the second sub-area and such that a third orientation of the third digital image 270 corresponds to the second orientation.
By way of example, the third digital image 270 may be obtained in this manner with respect to each of the example scenarios given above, with the particular shape, size, and dimensions of the corresponding second sub-area in each scenario dictating how the third digital image 270 is obtained.
The above examples of obtaining the third digital image 270 are not exhaustive or limiting. For example, as indicated above, the size, shape, dimensions, etc., of the second sub-area may vary depending on many different factors. Accordingly, in general, the third digital image 270 may be requested from the mapping application such that the third area is included in the second sub-area associated with the second digital image 212, whatever the shape, size, dimensions, etc., of the second sub-area may be in some embodiments. Additionally or alternatively, in general, the third digital image 270 may also be requested such that the third orientation may be the same as the second orientation. In these and other embodiments, the third digital image 270 may be requested from the mapping application such that it has substantially the same size, aspect ratio, and dimensions as the first digital image 210 while maintaining that the third area is completely included in the corresponding second sub-area. In these and other embodiments, the third digital image 270 may be requested such that the third area may cover as much of the corresponding second sub-area as possible while also having the same size, aspect ratio, and dimensions as the first digital image 210 and/or such that the third orientation is still the same as the second orientation.
Additionally or alternatively, in some embodiments the third digital image 270 may be obtained by performing a series of cropping operations and resizing operations with respect to the second digital image 212. For example, in some embodiments, the second digital image 212 may be cropped to only depict the second sub-area. In these and other embodiments, the cropped second digital image may be resized to have the same resolution, aspect ratio, dimensions, etc., as the first digital image 210 to obtain the third digital image 270. Examples of this principle are included in U.S. Provisional Application No. 62/254,404, entitled "STEREOSCOPIC MAPPING," which was filed on Nov. 12, 2015 and which is incorporated by reference in the present disclosure in its entirety.
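A minimal Pillow sketch of the cropping-and-resizing operations is given below; the file names and the pixel bounds of the second sub-area are hypothetical values that would come from the overlap determination step.

```python
# Sketch of the cropping-and-resizing route to the third digital image using
# Pillow. The file names and the pixel bounds of the second sub-area are
# hypothetical values produced by the overlap determination.
from PIL import Image

second_image = Image.open("second_digital_image.png")   # hypothetical file
first_size = (640, 640)                                 # size of the first image

# Crop the second image down to the second sub-area (left, top, right, bottom).
second_sub_area_px = (80, 0, 640, 640)                  # hypothetical bounds
third_image = second_image.crop(second_sub_area_px)

# Resize so the third image matches the first image's size and aspect ratio.
third_image = third_image.resize(first_size, Image.LANCZOS)
third_image.save("third_digital_image.png")
```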
Further, the above description is given with respect to obtaining the third digital image 270 based on the second sub-area associated with the second digital image 212 and the resolution, aspect ratio, dimensions, etc., of the first digital image 210. However, in other embodiments, the third digital image 270 may instead be obtained based on the first sub-area associated with the first digital image 210 and the resolution, aspect ratio, dimensions, etc., of the second digital image 212.
In some embodiments—e.g., when the third digital image 270 is generated based on the second sub-area—the stereoscopic image 280 may include the first digital image 210 and the third digital image 270. In particular, the first digital image 210 may be used as a first-eye image of the stereoscopic image 280 and the third digital image 270 may be used as a second-eye image of the stereoscopic image 280.
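As an illustration, the sketch below packs the first-eye and second-eye images side by side, one common stereoscopic layout; which image goes to which eye would depend on the target direction of the offset, as discussed above.

```python
# Sketch: pack the first-eye and second-eye images side by side, one common
# layout for stereoscopic displays. Assumes two same-size RGB images.
from PIL import Image

def side_by_side(first_eye: Image.Image, second_eye: Image.Image) -> Image.Image:
    width, height = first_eye.size
    frame = Image.new("RGB", (width * 2, height))
    frame.paste(first_eye, (0, 0))          # e.g., the left eye
    frame.paste(second_eye, (width, 0))     # e.g., the right eye
    return frame
```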
Therefore, the stereoscopic image 280 may be generated based on aerial view images. Additionally, as indicated above, the aerial view images may be obtained based on the movement of an object or based on a navigation path such that the stereoscopic image 280 may be generated for a navigation application in some embodiments. Further, as indicated above, multiple stereoscopic images may be generated in the manner described with respect to the stereoscopic image 280 as the object moves or is simulated as moving along a path in the setting to render a 3D effect with respect to the movement along the path in the setting. In addition, the second digital image 212 may be obtained based on a virtual object as described above, such that the third digital image 270, and thus the stereoscopic image 280, may be generated based on the virtual object and travel of the virtual object.
Modifications, additions, or omissions may be made with respect to the embodiments described above without departing from the scope of the present disclosure.
In general, the processor 350 may include any suitable special-purpose or general-purpose computer, computing entity, or processing device including various computer hardware or software modules and may be configured to execute instructions stored on any applicable computer-readable storage media. For example, the processor 350 may include a microprocessor, a microcontroller, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a Field-Programmable Gate Array (FPGA), or any other digital or analog circuitry configured to interpret and/or to execute program instructions and/or to process data. Although described as a single processor, the processor 350 may include any number of processors configured to perform, individually or collectively, any number of the operations described in the present disclosure.
In some embodiments, the processor 350 may interpret and/or execute program instructions and/or process data stored in the memory 352, the data storage 354, or the memory 352 and the data storage 354. In some embodiments, the processor 350 may fetch program instructions from the data storage 354 and load the program instructions in the memory 352. After the program instructions are loaded into memory 352, the processor 350 may execute the program instructions.
For example, in some embodiments, the stereoscopic image module may be included in the data storage 354 as program instructions. The processor 350 may fetch the program instructions of the stereoscopic image module from the data storage 354 and may load the program instructions of the stereoscopic image module in the memory 352. After the program instructions of the stereoscopic image module are loaded into the memory 352, the processor 350 may execute the program instructions such that the computing system may implement the operations associated with the stereoscopic image module as directed by the instructions.
The memory 352 and the data storage 354 may include computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media may include any available media that may be accessed by a general-purpose or special-purpose computer, such as the processor 350. By way of example, and not limitation, such computer-readable storage media may include tangible or non-transitory computer-readable storage media including RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid state memory devices), or any other storage medium which may be used to carry or store particular program code in the form of computer-executable instructions or data structures and which may be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable storage media. Computer-executable instructions may include, for example, instructions and data configured to cause the processor 350 to perform a certain operation or group of operations.
Modifications, additions, or omissions may be made to the computing system 302 without departing from the scope of the present disclosure. For example, in some embodiments, the computing system 302 may include any number of other components that may not be explicitly illustrated or described.
As indicated above, the embodiments described in the present disclosure may include the use of a special-purpose or general-purpose computer (e.g., the processor 350 described above) including various computer hardware or software modules.
The method 400 may begin at block 402 where a first digital image may be obtained. The first digital image may depict a first aerial view of a first area of a setting. The first digital image may have a first center point that may correspond to a first coordinate within the setting. The first digital image 210 described above is an example of the first digital image that may be obtained. Further, in some embodiments, the first digital image may be obtained in any manner such as described above with respect to obtaining the first digital image 210.
At block 404 a second digital image may be obtained based on the first digital image and based on a target offset. The second digital image may depict a second aerial view of a second area of the setting. The second digital image may have a second center point that may correspond to a second coordinate within the setting. The second coordinate may be laterally offset from the first coordinate by the target offset. In some embodiments, the lateral offset of the second coordinate from the first coordinate may be with respect to a first orientation of the first digital image. The second digital image 212 described above is an example of the second digital image that may be obtained. Further, in some embodiments, the second digital image may be obtained in any manner such as described above with respect to obtaining the second digital image 212.
At block 406, an overlapping area where the first area and the second area overlap may be determined. Examples of the overlapping area are given above.
At block 408, a third digital image may be obtained based on the overlapping area, the first digital image, and the second digital image. The third digital image 270 described above is an example of the third digital image that may be obtained. Further, in some embodiments, the third digital image may be obtained in any manner such as described above with respect to obtaining the third digital image 270.
At block 410, a first-eye image of a stereoscopic image of the setting may be generated based on the first digital image. At block 412, a second-eye image of the stereoscopic image may be generated based on the third digital image. In some embodiments, the first-eye and second-eye images may be generated as described above.
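To tie the blocks together, the following sketch strings the operations of the method 400 end to end, reusing the hypothetical helpers sketched earlier (fetch_aerial_image, lateral_offset, estimate_translation, side_by_side); it is an illustrative composition under the simplifying assumptions noted above, not the claimed method.

```python
# Sketch: the blocks of method 400 strung together, reusing the hypothetical
# helpers sketched earlier. Illustrative only; assumes straight-down views,
# parallel orientations, and a purely horizontal offset between the images.
from io import BytesIO

import numpy as np
from PIL import Image

def generate_stereoscopic_image(lat: float, lon: float, heading_deg: float,
                                target_offset_m: float = 30.0) -> Image.Image:
    # Block 402: obtain the first digital image, centered on the object.
    first = Image.open(BytesIO(fetch_aerial_image(lat, lon, heading_deg)))
    # Block 404: obtain the second digital image, laterally offset to the right.
    lat2, lon2 = lateral_offset(lat, lon, heading_deg, target_offset_m)
    second = Image.open(BytesIO(fetch_aerial_image(lat2, lon2, heading_deg)))
    # Block 406: determine the overlapping area via the pixel-comparison route.
    gray = lambda im: np.asarray(im.convert("L"), dtype=float)
    _, dx = estimate_translation(gray(first), gray(second))
    shift = abs(dx)
    # Block 408: third digital image = second sub-area, resized to match the
    # size and aspect ratio of the first digital image.
    width, height = first.size
    third = second.crop((0, 0, width - shift, height)).resize(
        (width, height), Image.LANCZOS)
    # Blocks 410/412: the first digital image serves as the first-eye image and
    # the third digital image as the second-eye image of the stereoscopic image.
    # The packed frame may then be handed to a display module for presentation.
    return side_by_side(first, third)
```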
Therefore, the method 400 may be used to generate a stereoscopic image according to one or more embodiments of the present disclosure. Modifications, additions, or omissions may be made to the method 400 without departing from the scope of the present disclosure. For example, the functions and/or operations described with respect to the method 400 may be implemented in differing order. Additionally or alternatively, two or more operations may be performed at the same time. Furthermore, the outlined operations and actions are provided only as examples, and some of the operations and actions may be optional, combined into fewer operations and actions, or expanded into additional operations and actions without detracting from the essence of the described embodiments.
As used in the present disclosure, the terms "module" or "component" may refer to specific hardware implementations configured to perform the actions of the module or component and/or software objects or software routines that may be stored on and/or executed by general purpose hardware (e.g., computer-readable media, processing devices, etc.) of the computing system. In some embodiments, the different components, modules, engines, and services described in the present disclosure may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While some of the system and methods described in the present disclosure are generally described as being implemented in software (stored on and/or executed by general purpose hardware), specific hardware implementations or a combination of software and specific hardware implementations are also possible and contemplated. In this description, a "computing entity" may be any computing system as previously defined in the present disclosure, or any module or combination of modules running on a computing system.
Terms used in the present disclosure and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including, but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes, but is not limited to,” etc.).
Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations.
In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” or “one or more of A, B, and C, etc.” is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc.
Further, any disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” should be understood to include the possibilities of “A” or “B” or “A and B.”
Additionally, the terms "first," "second," "third," etc., are not necessarily used herein to connote a specific order or number of elements. Generally, the terms "first," "second," "third," etc., are used to distinguish between different elements as generic identifiers. Absent a showing that the terms "first," "second," "third," etc., connote a specific order, these terms should not be understood to connote a specific order. Furthermore, absent a showing that the terms "first," "second," "third," etc., connote a specific number of elements, these terms should not be understood to connote a specific number of elements. For example, a first widget may be described as having a first side and a second widget may be described as having a second side. The use of the term "second side" with respect to the second widget may be to distinguish such side of the second widget from the "first side" of the first widget and not to connote that the second widget has two sides.
In addition, in the appended claims, the term “non-transitory computer-readable storage media” is used. The term “non-transitory” should be construed to exclude only those types of transitory media that were found to fall outside the scope of patentable subject matter in the Federal Circuit decision of In re Nuijten, 500 F.3d 1346 (Fed. Cir. 2007).
All examples and conditional language recited in the present disclosure are intended for pedagogical objects to aid the reader in understanding the present disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the present disclosure.