This invention relates to a system controller, a multi-camera view system, an automotive vehicle, a method of processing at least two input images, a computer program product and a non-transitory tangible computer readable storage medium.
A multi-camera view system is a system used for displaying an output image on a display by capturing two or more input images by respective two or more cameras. The output image may e.g. be used by a driver of an automotive vehicle to better estimate distances and the presence of obstacles. The output image may be a view from a selected viewpoint.
In such multi-camera view systems, typically a dedicated processing unit deals with the processing of the two or more input images to provide the desired view. The dedicated processing unit typically accesses the two or more input images as captured by the cameras and processes these input images to generate the output image. Transfer of the input images from and/or to the dedicated processing unit is a cumbersome operation requiring relatively high transfer bandwidth and computing power.
The present invention provides a system controller, a multi-camera view system, an automotive vehicle, a method of processing at least two images, a computer program product and a non-transitory tangible computer readable storage medium as described in the accompanying claims.
Specific embodiments of the invention are set forth in the dependent claims.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. In the Figures, elements, which correspond to elements already described, may have the same reference numerals.
The system controller 90 comprises an image resizing unit 20 coupled to the at least two cameras 10, a memory 30 coupled to the image resizing unit 20, and a processing unit 40 coupled to the memory 30.
The at least two cameras 10 are used to capture the at least two input images, respectively. The image resizing unit 20 has an input via which the image resizing unit 20 receives the at least two input images from the cameras 10. The image resizing unit 20 is arranged to output at least two resized images corresponding to the at least two input images received from the cameras 10. The memory 30 stores the at least two resized images. The processing unit 40 generates the output image from the at least two resized images. The output image is outputted to the display 50, e.g. via the controlling unit 60. The display 50 displays the output image. The displayed output image is a view from a selected viewpoint. For example, the controlling unit 60 may select the viewpoint. The image resizing unit 20 is arranged to resize the at least two input images based on the selected viewpoint.
Resizing the at least two input images based on the selected viewpoint may occur in any manner specific for the specific implementation.
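The data flow described above may be sketched as follows. This is an illustrative sketch only, not the claimed implementation; all function names, the sampling-based resize, and the trivial stacking "merge" are hypothetical stand-ins for the image resizing unit 20, the memory 30 and the processing unit 40.

```python
def resize(image, factor):
    """Downscale a 2D image (a list of pixel rows) by an integer factor
    by keeping every factor-th row and column (nearest-neighbour)."""
    return [row[::factor] for row in image[::factor]]

def process_views(input_images, resizing_factors):
    """Resize each captured image 'on the fly' (image resizing unit 20),
    store the resized images (memory 30), and merge them into a single
    output image (processing unit 40); the merge here trivially stacks
    the views row-wise."""
    stored = [resize(img, f) for img, f in zip(input_images, resizing_factors)]
    output = [row for img in stored for row in img]
    return stored, output

cam1 = [[1] * 8 for _ in range(8)]  # 8x8 image from camera 1
cam2 = [[2] * 8 for _ in range(8)]  # 8x8 image from camera 2
stored, output = process_views([cam1, cam2], [2, 4])
```

Note that only the resized images are stored and later accessed, which is the source of the bandwidth saving discussed below.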
The dashed lines in
In one example, the processing unit 40 is coupled to the image resizing unit 20. The processing unit 40 may be arranged to generate at least one resizing factor. The image resizing unit 20 receives the at least one resizing factor to resize the at least two input images based on the selected viewpoint.
In another example, the controlling unit 60 may be arranged to generate the at least one resizing factor based on the selected viewpoint. The image resizing unit 20 may be coupled to the controlling unit 60 for receiving the at least one resizing factor from the controlling unit 60 to resize the at least two input images.
For example, the CPU 70 may comprise at least an input and an output. The GPU 42 may be arranged to generate the at least one resizing factor. The CPU 70 may be arranged to receive via the input the at least one resizing factor from the GPU 42. The CPU 70 may be arranged to output via the output the at least one resizing factor to the image resizing unit 20.
In another example, the GPU 42 may be arranged to generate the at least one resizing factor based on the stored at least two resized images which are resulting from a selected viewpoint. The GPU 42 may retrieve respective sizes of the stored at least two resized images which are used to generate the output image. The GPU 42 may generate the at least one resizing factor from the respective sizes. The resizing factor may for example be updated in the described manner for a selected new viewpoint.
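The derivation of a resizing factor from the sizes of the stored resized images could look like the following sketch. The function name, the (width, height) tuples and the width-ratio rule are illustrative assumptions, not taken from the specification.

```python
def resizing_factors_from_sizes(input_sizes, used_sizes):
    """Hypothetical derivation of one resizing factor per camera: the
    ratio between the captured input width and the width of the stored
    resized image actually used to generate the output image."""
    return [iw / uw for (iw, ih), (uw, uh) in zip(input_sizes, used_sizes)]

# Two cameras capture at 1280x800; for the current viewpoint only
# 640x400 of the first image and 320x200 of the second were used.
factors = resizing_factors_from_sizes([(1280, 800), (1280, 800)],
                                      [(640, 400), (320, 200)])
```

For a newly selected viewpoint, the factors would simply be recomputed from the new used sizes.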
In a further example, the at least one resizing factor may be generated by adapting an image resolution of the output image to a pixel resolution of the display 50. The pixel resolution of the display 50 may e.g. be retrieved by the controlling unit 62.
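Adapting the resizing factor to the display resolution might, under illustrative assumptions, amount to the following: downscale by the smaller of the two axis ratios so no axis drops below the display resolution, and never upscale.

```python
def display_resizing_factor(input_w, input_h, display_w, display_h):
    """Hypothetical resizing factor adapting the image resolution to the
    pixel resolution of display 50: downscale by the smaller of the two
    axis ratios, clamped so the factor never upscales (factor >= 1)."""
    return max(1.0, min(input_w / display_w, input_h / display_h))
```

E.g. a 1920x1080 input shown on a 640x480 display yields a factor of 2.25 (the height ratio, the smaller of 3.0 and 2.25).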
In any of the examples described above, the image resizing unit 20 may resize the at least two input images by using one or more resizing factors. The controlling unit 60 or the processing unit 40 of
The CPU 70 may configure, e.g. by software instructions, the image resizing unit 20 to resize the selected input image by e.g. the respective resizing factor.
Resizing of the at least two input images occurs “on the fly” when the at least two cameras 10 capture the at least two input images. As a consequence, the resized images, and not the input images, are accessed and processed by the processing unit 40 or the GPU 42 to generate the output image. Since the processing unit 40 or the GPU 42 uses resized images for generating the output image, transfer bandwidth from the memory 30 and towards the memory 30 may be substantially reduced. Further, resizing is dependent on the selected viewpoint, e.g. on the output image viewed from a viewpoint on the display 50. The viewpoint can e.g. be automatically selected or selected by a user.
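The bandwidth saving can be made concrete with a back-of-the-envelope calculation. The resolutions, pixel depth, camera count and frame rate below are illustrative numbers chosen for the example, not values from the specification.

```python
def memory_traffic_bytes(width, height, bytes_per_pixel, n_cameras, fps):
    """Bytes per second written to (and later read back from) memory 30
    for n_cameras video streams of the given size and frame rate."""
    return width * height * bytes_per_pixel * n_cameras * fps

# Four cameras, 16-bit pixels, 30 frames per second.
full    = memory_traffic_bytes(1280, 800, 2, 4, 30)  # unresized input images
resized = memory_traffic_bytes(640, 400, 2, 4, 30)   # each axis halved
```

Halving each axis divides the memory traffic by four: roughly 246 MB/s versus 61 MB/s in this example, in each transfer direction.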
The image resizing unit 20 may be arranged to resize the at least two input images based on a real-time selected viewpoint. For example, the image resizing unit 20 may adaptively resize the at least two input images by evaluating a real-time selected viewpoint. Each time the selected viewpoint is changed in the display 50, resizing of the at least two input images may be adapted to the changed selected viewpoint. Adapting the resizing of the input images to real-time selected viewpoints improves memory bandwidth usage, e.g. for viewpoints changing over time.
For some selected viewpoints, the size of an input image, e.g. its image resolution, may be superfluous. An image resolution lower than the input image resolution may be sufficient to display the output image without losing details of each of the at least two input images. Details of one input image may either not be used in the output image or be used at a lower quality, in which case a lower image resolution of the input images may be used.
For example, the processing unit 40 or the GPU 42 may be arranged to merge the at least two input images to generate the view: e.g. a first input image Pic1 and a second input image Pic2 as schematically indicated in the
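A minimal way to picture the merge of two views is a viewpoint-dependent blend of two equally sized grayscale images; the specification does not prescribe any particular merging algorithm, so the weighting scheme and function name below are purely illustrative.

```python
def merge_views(pic1, pic2, weight1):
    """Blend two equally sized grayscale images Pic1 and Pic2.
    weight1 in [0, 1] reflects how much the selected viewpoint faces
    camera 1; the remainder is contributed by camera 2."""
    w2 = 1.0 - weight1
    return [
        [round(weight1 * a + w2 * b) for a, b in zip(r1, r2)]
        for r1, r2 in zip(pic1, pic2)
    ]

pic1 = [[100, 100], [100, 100]]
pic2 = [[0, 0], [0, 0]]
out = merge_views(pic1, pic2, 0.75)  # viewpoint mostly facing camera 1
```

A viewpoint facing camera 1 entirely (weight 1.0) reproduces Pic1; intermediate viewpoints mix the two views.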
The meaning of the “selected viewpoint” is explained hereinafter.
The at least two cameras 10 may be arranged to view from at least two different, adjacent viewing angles. The selected viewpoint corresponds to a selected virtual viewpoint. In response to the selected viewpoint, the at least two input images are merged. The output image may seem to be taken from a virtual camera arranged at the selected virtual viewpoint.
The at least two cameras 10 may be very wide-angle cameras, e.g. fish-eye cameras. Images captured by very wide-angle cameras are distorted. The processing unit 40 or the GPU 42 processes the resized images in order to remove the distortion and generate a view with the desired details. The resized images rendered on the display 50 may be processed with any algorithm known in the art and suitable for the specific implementation.
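As the specification leaves the algorithm open, the sketch below shows only one of the simplest possibilities: nearest-neighbour inverse mapping under a one-parameter radial distortion model. The model, the parameter k and the function name are illustrative assumptions, not the method of the invention.

```python
import math

def undistort(image, k, cx, cy):
    """Remove radial distortion from a square grayscale image (list of
    rows) by inverse mapping: for each output pixel, sample the input at
    r_src = r_dst * (1 + k * r_dst**2) from the centre (cx, cy).
    k = 0 means no distortion; k > 0 samples further out (barrel)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy)
            scale = 1.0 + k * r * r
            sx = int(round(cx + dx * scale))
            sy = int(round(cy + dy * scale))
            if 0 <= sx < w and 0 <= sy < h:  # outside source: leave black
                out[y][x] = image[sy][sx]
    return out
```

Production systems would typically use a calibrated fish-eye camera model and interpolated remapping instead of this nearest-neighbour toy version.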
In an example, in response to the selected viewpoint, selected e.g. via the HMI 80, the CPU 70 may be arranged to calculate the at least one resizing factor based on geometric approximations of the displayed view and to output via the output the at least one resizing factor to the image resizing unit 20.
The HMI 80 may be of any type suitable for the specific implementation. For example, the HMI 80 may be integrated in the display 50 as a touchscreen interface responding to a single-finger and/or multi-finger touch of the user. The HMI 80 may be implemented with buttons, joystick-like apparatuses, or via a touchscreen suitable, for example, to scroll, zoom in, or zoom out the output image on the display 50.
Resizing of the at least two input images may be triggered by the user selecting the viewpoint via the HMI 80. Alternatively, the viewpoint may be selected automatically by the multi-camera view system 100, 110 or 120.
The multi-camera view systems 100, 110 and 120 shown with reference to the
For example, any of the multi-camera view systems 100, 110, 120 may be a surround view system.
The multi-camera view system 100, 110 or 120 may be able to generate a 360-degree, a two-dimensional, or a three-dimensional output image.
The display 50 of the multi-camera view system 100, 110 or 120 may be arranged to display real-time video resulting from the real-time captured at least two input images.
The automotive vehicle 500 may comprise the system controller 92, the display 50 and four cameras 1, 2, 3 and 4. The display 50 may be arranged e.g. at a driver and/or passenger position in order for the driver or a passenger to view the display 50 while driving. The four cameras 1, 2, 3 and 4 are arranged at sides of the automotive vehicle. The four cameras 1, 2, 3 and 4 are arranged to view each from a different viewing angle. For example as shown in
The viewpoint can be selected by the driver and/or passengers, or be automatically selected based on a steering direction or gear position. For example, turning the steering wheel may trigger a side view to be displayed; putting the gear into reverse may trigger a back side view to be displayed.
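Such an automatic selection rule could be sketched as below. The gear labels, the 15-degree steering threshold and the viewpoint names are hypothetical values chosen for illustration, not taken from the specification.

```python
def select_viewpoint(gear, steering_angle_deg):
    """Hypothetical automatic viewpoint selection: reverse gear triggers
    the back view; pronounced steering (negative = left) triggers the
    corresponding side view; otherwise the front view is kept."""
    if gear == "reverse":
        return "back"
    if steering_angle_deg <= -15:
        return "left"
    if steering_angle_deg >= 15:
        return "right"
    return "front"
```

The returned viewpoint would then drive the resizing factors, as described above, so that e.g. the rear camera image is kept at high resolution while reversing.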
The method comprises receiving 200 the at least two input images, resizing 300 the at least two input images to obtain corresponding at least two resized images based on the selected viewpoint, storing 400 the at least two resized images, and generating 450 the output image from the at least two resized images. The method may comprise selecting 150 the viewpoint. The viewpoint may be selected before or after receiving the at least two input images. The viewpoint may be selected e.g. as described with reference to
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the scope of the invention as set forth in the appended claims.
For example, in
The graphics processing unit 42 in
The connections may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise the connections may for example be direct connections or indirect connections.
Because the apparatus implementing the present invention is, for the most part, composed of electronic components and circuits known to those skilled in the art, circuit details have not been explained to any greater extent than that considered necessary for the understanding and appreciation of the underlying concepts of the present invention, and in order not to obfuscate or distract from the teachings of the present invention.
The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system. The computer program may be provided on a data carrier, such as a CD-ROM or diskette, stored with data loadable in a memory of a computer system, the data representing the computer program. The data carrier may further be a data connection, such as a telephone cable or a wireless connection.
The term “program,” as used herein, is defined as a sequence of instructions designed for execution on a computer system. A program, or computer program, may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
Furthermore, although
Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
Furthermore, those skilled in the art will recognize that boundaries between the functionality of the above described operations are merely illustrative. The functionality of multiple operations may be combined into a single operation, and/or the functionality of a single operation may be distributed in additional operations. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
A computer system processes information according to a program and produces resultant output information via I/O devices. A program is a list of instructions such as a particular application program and/or an operating system. A computer program is typically stored internally on computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. A parent process may spawn other, child processes to help perform the overall functionality of the parent process. Because the parent process specifically spawns the child processes to perform a portion of the overall functionality of the parent process, the functions performed by child processes (and grandchild processes, etc.) may sometimes be described as being performed by the parent process.
Also, the invention is not limited to physical devices or units implemented in non-programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code. Furthermore, the devices may be physically distributed over a number of apparatuses, while functionally operating as a single device. Also, devices functionally forming separate devices may be integrated in a single physical device. Also, the units and circuits may be suitably combined in one or more semiconductor devices. However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.