1. Field of the Invention
This application relates to the field of video processing. More specifically, the application relates to systems and methods for providing virtual navigation for video.
2. Related Art
It is known to provide apparatus for processing images that is capable of moving the position of a visual point and overlaying graphics upon an image. Previous approaches, however, have been limited in their ability to capture the images. Typically, these approaches have only employed the coordinates of the picture elements on an imaging device and the angle of the imaging device when applying a rotational transformation to the image data. Simply put, only the visual point of the capture device is adjusted in the known approaches. Further, the known approaches are based on static positions of the imaging device or camera and assume that the device captures distortion-free images.
Thus, there is a need in the art for improvements that enable other visual points to be adjusted while compensating for distortion from a capture device that is moving. The aforementioned shortcomings and others are addressed by systems and related methods according to aspects of the invention.
In view of the above, systems and methods are provided for adjusting the parameters of an image capturing device, such as a camera, that captures a sequence of images. Another image is generated from the captured sequence of images using parameters that identify a virtual location, and the generated image is overlaid with a 3D generated image while compensating for distortion that has occurred in the image. The distortion is typically introduced by the optical capture device (e.g., radial distortion), whereas a computer-generated image is ideal and distortion free. In order for an optically captured image and a computer-generated image to be synthesized or combined together seamlessly, the computer-generated image typically needs distortion compensation.
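The distortion compensation described above can be sketched as follows. The application does not specify a distortion model, so this sketch assumes the common polynomial radial model with coefficients k1 and k2; the function names and parameters are illustrative, not part of the disclosure. It distorts the ideal (computer-generated) image coordinates so that the overlay lines up with the lens-distorted captured frame.

```python
import numpy as np

def apply_radial_distortion(points, k1, k2, principal_point, focal_length):
    """Distort ideal (undistorted) pixel coordinates so that a
    computer-generated overlay matches a lens-distorted captured frame.

    points          -- (N, 2) array of pixel coordinates in the ideal image
    k1, k2          -- assumed radial distortion coefficients of the lens
    principal_point -- (cx, cy) pixel location of the optical axis
    focal_length    -- focal length in pixels
    """
    cx, cy = principal_point
    # Normalize pixel coordinates to the camera plane.
    x = (points[:, 0] - cx) / focal_length
    y = (points[:, 1] - cy) / focal_length
    r2 = x * x + y * y
    # Radial scaling: identity at the principal point, grows with radius.
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * scale
    yd = y * scale
    # Map back to pixel coordinates.
    return np.stack([xd * focal_length + cx, yd * focal_length + cy], axis=1)
```

With zero coefficients the mapping is the identity, and a point at the principal point is never displaced, which matches the intuition that radial distortion grows away from the optical axis.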
Other devices, apparatus, systems, methods, features and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
The description below may be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. In the figures, like reference numerals designate corresponding parts throughout the different views.
An approach for adjusting the parameters of an image capturing device, such as a camera, is described. Parameters, such as the tilt angle, may be adjusted via a user interface. The user interface may be a touch panel that enables the parameters to be changed. The processed image that results from the captured video image may also be processed to remove or reduce distortion caused by the lens or by movement.
In
In operation, the VNS 100 captures an image sequence with the video capture unit 102. The parameter setting unit 106 enables adjustment of the parameters used by the video capture unit 102 through user interaction or automatic calculation. The parameters that may be adjusted include, but are not limited to, the virtual capture device position, viewing angle, focal length, and distortion parameters. The image generating unit 104 then generates a different view of the scene using the captured image sequence, the parameters of the video capture unit 102, and the parameter set of the virtual camera.
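The interaction of the units above can be sketched as a small pipeline. The class and attribute names here are assumptions introduced for illustration; the comments map each step back to the reference numerals in the description.

```python
from dataclasses import dataclass

@dataclass
class VirtualCameraParams:
    # Hypothetical parameter set; the description lists position,
    # viewing angle, focal length, and distortion parameters.
    position: tuple = (0.0, 0.0, 0.0)
    viewing_angle_deg: float = 90.0
    focal_length_px: float = 500.0
    distortion: tuple = (0.0, 0.0)

class VirtualNavigationSystem:
    """Sketch of the unit interaction (names are assumptions)."""

    def __init__(self, capture_unit, image_generator, display):
        self.capture_unit = capture_unit      # video capture unit 102
        self.params = VirtualCameraParams()   # parameter setting unit 106
        self.image_generator = image_generator  # image generating unit 104
        self.display = display                # display unit 108

    def step(self):
        # Capture an image sequence, generate the virtual view from it
        # and the virtual camera parameters, then display the result.
        frames = self.capture_unit()
        view = self.image_generator(frames, self.params)
        self.display(view)
```

A caller would wire in concrete capture, generation, and display callables; the sketch only shows the data flow between the units.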
The parameter setting unit 106 is responsible for adjusting the virtual capture device parameters used in the image generating unit 104. The parameter setting unit 106 may adjust parameters by means including, but not limited to, user gesture or automatic calculation. The user gesture may include a finger touch on a touch screen to control the viewing angle. The automatic calculation may include changing the viewing angle according to the distance between an obstacle and the video capture unit 102.
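One way the automatic calculation could work is sketched below. The description only says the viewing angle changes with obstacle distance; the specific rule here (widen the angle linearly as the obstacle nears, clamped between a near and a far distance) and all numeric defaults are assumptions.

```python
def auto_viewing_angle(distance_m, near=0.5, far=5.0,
                       wide_deg=120.0, narrow_deg=60.0):
    """Hypothetical automatic calculation: widen the virtual viewing
    angle as an obstacle approaches the video capture unit.

    distance_m -- measured distance between obstacle and capture unit
    near, far  -- clamping range for the distance (assumed values)
    """
    # Clamp the distance, then interpolate linearly between the
    # wide angle (obstacle close) and the narrow angle (obstacle far).
    d = min(max(distance_m, near), far)
    t = (d - near) / (far - near)
    return wide_deg + t * (narrow_deg - wide_deg)
```

The returned angle could then be written into the virtual capture device parameters by the parameter setting unit 106.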
Turning to
Without loss of generality,
In
The video capturing unit 102 may have an image capturing device such as a video camera. The image capturing device captures a sequence of images that are provided in a digital format by the video capturing unit 102. Non-digital sequences of images may be converted by the video capturing unit 102 into digital image data. The image generating unit 104 generates another digital image using the digital image data from the video capturing unit 102 and the device parameters stored in the parameter setting unit 306. The device parameters that may be entered and stored, or calculated, may include the position, view angle, focal length, distortion parameters, and principal point of the video capture device of the video capturing unit 102 and of the virtual capture device 202 employed by the image generating unit 104.
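The stored parameter sets can be sketched as follows: one record for the physical capture device and one for the virtual capture device, both held by the parameter setting unit. The field and class names are assumptions; only the listed parameters (position, view angle, focal length, distortion parameters, principal point) come from the description.

```python
from dataclasses import dataclass

@dataclass
class DeviceParameters:
    """Parameter set per the description (field names assumed)."""
    position: tuple          # device position, e.g. (x, y, z)
    view_angle_deg: float    # viewing angle in degrees
    focal_length_px: float   # focal length in pixels
    distortion: tuple        # e.g. radial coefficients (k1, k2)
    principal_point: tuple   # pixel coordinates of the optical axis

class ParameterSettingUnit:
    """Holds parameters for the physical capture device and for the
    virtual capture device (a sketch; the storage layout is assumed)."""

    def __init__(self, capture_params, virtual_params):
        self.capture = capture_params
        self.virtual = virtual_params

    def update_virtual(self, **changes):
        # Apply user-entered or automatically calculated changes
        # to the virtual capture device only.
        for name, value in changes.items():
            setattr(self.virtual, name, value)
```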
3D object information may be stored in a 3D object information storing unit 302. The 3D object information storing unit 302 may be implemented as a data store in memory and/or on media such as a digital video disk (DVD), where the data store is a data structure that stores the digital data in memory or in a combination of hardware and software such as removable memory and hard disk drives (HDD). The 3D object information storing unit 302 may be accessed by the 3D object image generating unit 304 in order to generate a 3D image. The 3D object image generating unit 304 generates a view of the 3D object in the focal plane of the virtual capture device 202 using the virtual capture device 202 parameters. The 3D object information storing unit 302 and the 3D object image generating unit 304 may be combined into a single unit, device, or software module, or implemented separately. In other implementations, the 3D object information storing unit 302 may be separate from the VNS 300. The overlay unit 308 may overlay or combine the image generated by the image generating unit 104 and the image generated by the 3D object image generating unit 304. Examples of 3D objects that may be overlaid by the overlay unit 308 include, but are not limited to, boundary boxes, signs, parking markers, and vehicle tracks corresponding to a steering angle. The resulting overlay image or combined image may then be displayed by the display unit 108.
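The overlay step can be sketched as a per-pixel alpha blend. The description only says the two images are overlaid or combined; the specific blending rule and the mask representation here are assumptions.

```python
import numpy as np

def overlay_images(camera_view, object_image, alpha_mask):
    """Combine the generated camera view (image generating unit 104)
    with the rendered 3D-object image (3D object image generating
    unit 304) using a per-pixel alpha mask.

    camera_view  -- (H, W, 3) float array, the background
    object_image -- (H, W, 3) float array, the 3D overlay
    alpha_mask   -- (H, W) float array in [0, 1]; 1 = overlay fully opaque
    """
    a = alpha_mask[..., np.newaxis]  # (H, W) -> (H, W, 1) for broadcasting
    return a * object_image + (1.0 - a) * camera_view
```

Pixels where the mask is zero show the camera view unchanged, so boundary boxes or parking markers only cover the regions they occupy.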
The 3D object information may refer to the point or vertex positions that a computer uses to construct or draw the 3D object. The 3D object image is not drawn by selecting from the captured images, although a subsequence of images may be transformed from the captured source images. Thus the 3D image is generated by the 3D object image generating unit when supplied with device parameters and 3D object information, such as vertex positions.
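Generating the 3D image from vertex positions and device parameters can be sketched with a pinhole projection into the virtual capture device's focal plane. This is a standard projection, not the disclosed method; camera rotation and distortion are omitted, and the function name is an assumption.

```python
def project_vertex(vertex, cam_position, focal_length_px, principal_point):
    """Project one 3D object vertex into the focal plane of the
    virtual capture device (pinhole sketch; rotation omitted).

    vertex          -- (x, y, z) vertex position from the 3D object info
    cam_position    -- (x, y, z) virtual capture device position
    focal_length_px -- focal length in pixels
    principal_point -- (cx, cy) pixel location of the optical axis
    """
    # Express the vertex in camera coordinates (translation only).
    x, y, z = (v - c for v, c in zip(vertex, cam_position))
    if z <= 0:
        return None  # vertex behind the virtual camera: not visible
    cx, cy = principal_point
    # Perspective divide, then map to pixel coordinates.
    return (focal_length_px * x / z + cx, focal_length_px * y / z + cy)
```

Applying this to every vertex of, say, a boundary box yields the 2D outline that the overlay unit 308 would combine with the camera view.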
Turning to
It will be understood, and is appreciated by persons skilled in the art, that one or more processes, sub-processes, or process steps described in connection with
The software in software memory may include an ordered listing of executable instructions for implementing logical functions (that is, “logic” that may be implemented either in digital form such as digital circuitry or source code or in analog form such as analog circuitry or an analog source such as an analog electrical, sound or video signal), and may selectively be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that may selectively fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a “computer-readable medium” is any tangible means that may contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. The tangible computer readable medium may selectively be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device. More specific examples, but nonetheless a non-exhaustive list, of tangible computer-readable media would include the following: a portable computer diskette (magnetic), a RAM (electronic), a read-only memory “ROM” (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic) and a portable compact disc read-only memory “CDROM” (optical). Note that the computer-readable medium may even be paper (punch cards or punch tape) or another suitable medium upon which the instructions may be electronically captured, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
The foregoing description of implementations has been presented for purposes of illustration and description. It is not exhaustive and does not limit the claimed inventions to the precise form disclosed. Modifications and variations are possible in light of the above description or may be acquired from practicing the invention. The claims and their equivalents define the scope of the invention.