The invention generally relates to a camera system comprising multiple camera sub-modules, as well as to a camera sub-module for such a camera system.
Spherical imaging typically involves a set of image sensors and wide-angle camera objectives spatially arranged to capture part of or the full spherical ambient field, each camera sub-system facing a specific part of the surrounding environment. Typical designs consist of 2 to 6 or more individual camera modules with wide-angle optics, creating a certain degree of image overlap between neighboring camera systems so that the individual images can be merged by image/video stitching algorithms into a stitched spherical video imagery. Image and video stitching is a well-known procedure for digitally merging individual images, and stitching algorithms specifically designed for 360 images and videos exist in many forms and are provided by numerous companies and commercially available software packages.
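By way of a purely illustrative, non-limiting sketch, such conventional stitching may be invoked as follows; the example assumes the OpenCV library and hypothetical input files, and does not form part of the proposed technology:

```python
# Minimal stitching sketch, assuming OpenCV (cv2) is installed.
# The file names are hypothetical placeholders for overlapping wide-angle images.
import cv2

images = [cv2.imread(name) for name in ("cam0.jpg", "cam1.jpg", "cam2.jpg")]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed with status", status)
```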
Due to the spatial separation of each individual camera objective, as indicated in
As illustrated in
In the stitching process, when two images are merged, the image overlap area is associated with a parallax error: objects and background do not spatially coincide in the overlap area, causing the merged image to display errors, see
Zero parallax would require the cameras to be physically merged into the same position in space. In
Image/video stitching algorithms demand computationally intensive processing that scales rapidly with increased image resolution, requiring heavy CPU and GPU loads in real-time processing.
Zero parallax may be one of the design requirements for a high-performance spherical imaging camera with low CPU/GPU loads and ultra-low-latency real-time video processing. There may also be other requirements to consider when building complex, high-performance spherical imaging camera systems in an efficient manner.
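As a rough, non-limiting illustration of the magnitude of the effect, the angular parallax between two cameras separated by a baseline b, viewing an object at distance d, is approximately arctan(b/d); the numerical values below are assumptions for illustration only:

```python
import math

# Rough parallax estimate: two camera objectives separated by baseline_m,
# viewing an object at distance_m in the overlap region (illustrative values).
def parallax_angle_deg(baseline_m: float, distance_m: float) -> float:
    return math.degrees(math.atan2(baseline_m, distance_m))

# Example: a 6 cm objective separation and an object 2 m away give about
# 1.7 degrees of parallax; the error shrinks with distance and vanishes
# only when the baseline itself is zero.
print(parallax_angle_deg(0.06, 2.0))
```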
It is a general object to provide an improved camera system for enabling spherical imaging.
It is a specific object to provide a camera system comprising multiple camera sub-modules.
It is another object to provide a camera sub-module for such a camera system. These and other objects are met by embodiments as defined herein.
According to a first aspect, there is provided a camera system comprising multiple camera sub-modules, wherein each camera sub-module comprises:
In this way, an improved camera system is obtained. The proposed technology more specifically enables complex, high-performance and/or zero-parallax 2D and/or 3D camera systems to be built in an efficient manner.
For example, the camera sub-modules may be spatially arranged such that the output surfaces of the FOTs of the camera sub-modules are directed inwards towards a central part of the camera system, and the sensors are located in the central part of the camera system.
The camera system may thus be adapted, e.g., for immersive and/or spherical 360 degrees monoscopic and/or stereoscopic video content production for virtual, augmented and/or mixed reality applications.
The camera system may also be adapted, e.g., for volumetric capturing and light-field immersive and/or spherical 360 degrees video content production for virtual, augmented and/or mixed reality applications, including Virtual Reality (VR) and/or Augmented Reality (AR) applications.
By way of example, the FOTs may be adapted for conveying photons in the infrared, visible and/or ultraviolet part of the electromagnetic spectrum, and the sensor may be adapted for infrared imaging, visible light imaging and/or ultraviolet imaging.
According to a second aspect, there is provided a camera sub-module for a camera system comprising multiple camera sub-modules, wherein the camera sub-module comprises:
Other advantages offered by the invention will be appreciated when reading the below description of embodiments of the invention.
The invention, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
Throughout the drawings, the same reference numbers are used for similar or corresponding elements.
On a general level, the proposed technology involves the following basic key features, followed by some optional features:
Reference can now be made to the non-limiting examples of
According to a first aspect, there is provided a camera system 10 comprising multiple camera sub-modules 100, wherein each camera sub-module 100 comprises:
In this way, an improved camera system is obtained. The proposed technology more specifically enables complex, high-performance and/or zero-parallax camera systems to be built in an efficient manner.
It should be understood that the expression spherical imaging should be interpreted in a general manner, including imaging by a camera system that has an overall input surface, which generally corresponds to the surface area of a spheroid or a truncated segment thereof.
By way of example, the camera sub-modules may be spatially arranged such that the input surfaces 112 of the FOTs 110 of the camera sub-modules 100 together define an outward facing overall surface area 20, which generally corresponds to the surface area of a sphere or a truncated segment thereof to provide at least partially spherical coverage of the surrounding environment.
For example, the camera sub-modules may be spatially arranged such that the input surfaces 112 of the FOTs 110 of the camera sub-modules 100 together define an outward facing overall surface area 20, with half-spherical to full-spherical coverage of the surrounding environment.
A number of non-limiting examples, where the camera sub-modules are spatially arranged such that the input surfaces of the FOTs of the camera sub-modules together define an outward facing overall surface area, which generally corresponds to the surface area of a spheroid or a truncated segment thereof, are illustrated in
For example, the camera sub-modules may be spatially arranged such that the output surfaces of the FOTs of the camera sub-modules are directed inwards towards a central part of the camera system, and the sensors are located in the central part of the camera system, e.g. see
In other words, the FOTs of the camera sub-modules may be spatially arranged to form a generally spherical three-dimensional geometric form or a truncated segment thereof having an outward facing overall surface area corresponding to the input surfaces of the FOTs.
In a particular set of examples, the FOTs of the camera sub-modules may be spatially arranged to form an at least partly symmetric, semi-regular convex polyhedron composed of two or more types of regular polygons, or a truncated segment thereof.
By way of example, the FOTs of the camera sub-modules may be spatially arranged to form a three-dimensional Archimedean solid or a dual or complementary form of an Archimedean solid, or a truncated segment thereof, and the input surfaces of the FOTs correspond to the facets of the Archimedean solid or of the dual or complementary form of the Archimedean solid, or a truncated segment thereof.
In the following, a set of non-limiting examples of geometric forms is given. For example, the FOTs of the camera sub-modules may be spatially arranged to form any of the following three-dimensional geometric forms, or a truncated segment thereof: cuboctahedron, great rhombicosidodecahedron, great rhombicuboctahedron, icosidodecahedron, small rhombicosidodecahedron, small rhombicuboctahedron, snub cube, snub dodecahedron, truncated cube, truncated dodecahedron, truncated icosahedron, truncated octahedron, truncated tetrahedron, deltoidal hexecontahedron, deltoidal icositetrahedron, disdyakis dodecahedron, disdyakis triacontahedron, pentagonal hexecontahedron, pentagonal icositetrahedron, pentakis dodecahedron, rhombic dodecahedron, rhombic triacontahedron, small triakis octahedron, tetrakis hexahedron, and triakis icosahedron.
Reference can also be made to
It should be understood that the camera sub-modules 100 are schematically shown side-by-side for simplicity of illustration, but in practice they are spatially arranged such that the input surfaces 112 of the FOTs 110 of the camera sub-modules 100 together define an outward facing overall surface area, which generally corresponds to the surface area of a spheroid or a truncated segment thereof. By way of example, the camera system is built for enabling spherical imaging.
The horizontal dashed lines in
By way of example, the camera system 10 may comprise connections for connecting the sensors 120 of the camera sub-modules 100 to signal and/or data processing circuitry.
In a particular example, the camera system 10 comprises signal processing circuitry 130; 135 configured to process the electrical signals of the sensors 120 of the camera sub-modules 100 to enable formation of an electronic image of at least parts of the surrounding environment.
As an example, the signal processing circuitry 130 may be configured to perform signal filtering, analog-to-digital conversion, signal encoding and/or image processing.
As a complement, the camera system may, if desired, include a data processing system 140 connected to the signal processing circuitry 130; 135 and configured to generate the electronic image, e.g. see
In a particular example, the signal processing circuitry 130 comprises one or more signal processing circuits 135, where a set of camera sub-modules 100-1 to 100-K share a signal processing circuit 135 configured to process the electrical signals of the sensors 120 of the set of camera sub-modules 100-1 to 100-K, e.g. as illustrated in
In another particular example, the signal processing circuitry 130 comprises a number of signal processing circuits 135, where each camera sub-module 100 comprises an individual signal processing circuit 135 configured to process the electrical signals of the sensor 120 of the camera sub-module 100, e.g. as illustrated in
The signal and/or data processing may include selecting and/or requesting one or more segments of image data from one or more of the sensors 120 for further processing.
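As a simple, non-limiting sketch of such segment selection, a rectangular region of interest may be read out of a sensor frame as follows; array sizes and coordinates are assumptions for illustration:

```python
import numpy as np

# Select a rectangular region of interest (ROI) from a sensor's pixel array
# for further processing; the frame size below is a placeholder.
def select_roi(frame: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    return frame[y:y + h, x:x + w]

frame = np.zeros((3000, 4000, 3), dtype=np.uint8)  # hypothetical 12-Mpixel frame
roi = select_roi(frame, x=1200, y=800, w=640, h=480)
```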
Optionally, each camera sub-module 100 may include an optical element 150 such as an optical lens or an optical lens system arranged on top of the input surface 112 of the FOT 110, e.g. as illustrated in
As a possible design choice, the number of pixels per optical fiber may be in the range between 1 and 100, e.g. see
In a particular example, the number of pixels per optical fiber is in the range between 1 and 10.
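As a back-of-envelope illustration (an assumption, not a specification), the number of pixels per fiber may be estimated from the squared ratio of fiber diameter to sensor pixel pitch:

```python
# Rough estimate: pixels per optical fiber ~ (fiber diameter / pixel pitch)^2.
def pixels_per_fiber(fiber_diameter_um: float, pixel_pitch_um: float) -> float:
    return (fiber_diameter_um / pixel_pitch_um) ** 2

# Example: 7.5 um fibers over a 2.5 um pixel pitch give about 9 pixels
# per fiber, within the 1-10 range mentioned above.
print(pixels_per_fiber(7.5, 2.5))
```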
By way of example, the camera sub-modules may be spatially arranged to enable zero parallax between images from neighboring camera sub-modules.
It may be desirable to spatially arrange the camera sub-modules such that the input surfaces of the FOTs of neighboring camera sub-modules are seamlessly adjoined, e.g. as illustrated in
Alternatively, or as a complement, the electrical signals of the sensors of neighboring sub-camera modules may be processed to correct for parallax errors caused by small displacement between sub-camera modules.
By way of example, the FOTs may be adapted for conveying photons in the infrared, visible and/or ultraviolet part of the electromagnetic spectrum, and the sensor may be adapted for infrared imaging, visible light imaging and/or ultraviolet imaging.
Accordingly, the sensor may for example be a short-wave, near-, mid-wave and/or long-wave infrared sensor, a visible-light image sensor and/or an ultraviolet sensor.
For example, the camera system may be a video camera system, a video sensor system, a light field sensor, a volumetric sensor and/or a still image camera system.
The camera system may be adapted, e.g., for immersive and/or spherical 360 degrees video content production for virtual, augmented and/or mixed reality applications.
By way of example, the FOTs of the camera sub-modules 100 may be spatially arranged to form a generally spherical three-dimensional geometric form, or a truncated segment thereof, the size of which is large enough to encompass a so-called Inter-Pupil Distance or Inter-Pupillary Distance (IPD). For example, the diameter of the generally round or spherical geometric form should thus be larger than the IPD. This will enable selection of image data from selected parts of the overall imaging surface area of the camera system that correspond to the IPD of a person to allow for three-dimensional imaging effects.
The proposed technology also covers a camera sub-module for building a modular camera or camera system.
According to another aspect, there is thus provided a camera sub-module 100 for a camera system comprising multiple camera sub-modules, wherein the camera sub-module 100 comprises:
For example, reference can once again be made to
By way of example, the camera sub-module 100 may also comprise optional electronic circuitry 130; 135; 140 configured to perform signal and/or data processing of the electrical signals of the sensor, as previously discussed.
In a particular example, the camera sub-module 100 may further comprise an optical element 150 such as an optical lens or an optical lens system arranged on top of the input surface 112 of the FOT 110.
By way of example, the FOT 110 is normally arranged to assume a determined magnification/reduction ratio between input surface 112 and output surface 114.
In the following, the proposed technology will be described with reference to a set of non-limiting examples.
As mentioned by way of example, the proposed technology may be used, e.g., to achieve zero optical parallax for immersive 360 cameras. As an example, such a camera or camera system may involve a set of customized fiber optic tapers in conjunction with image sensors and associated electronics, arranged as camera sub-modules having facets of an Archimedean solid or other relevant three-dimensional geometric form, for covering a region of interest.
In particular, the proposed technology may provide a solution for parallax-free image and video production in immersive 360 camera designs. An advantage is that the need for parallax correction is significantly relaxed, or possibly even eliminated, for real-time live video or post-productions captured from the system. Consequently, a minimum of computing power is needed in the image and video processing, which reduces delays in the real-time video streaming process and also allows for more compact and mobile camera designs compared with current methods.
By way of example, the proposed technology may involve a set of tailor-designed fiber optic tapers in conjunction with image sensors and associated electronics, realizing new designs and video data processing of immersive and/or 360 video content, data streaming and/or cameras.
In a particular, non-limiting example, the proposed technology is based on a set of FOTs designed and spatially arranged as facets of Archimedean solids or other relevant three-dimensional geometric forms. For example, one such form is the truncated icosahedron, see the example of
Fiber optic plates (FOPs) are optical devices comprising a bundle of micron-sized optical fibers. A fiber optic plate is generally composed of a large number of optical fibers fused together into a solid 3D geometry and coupled to an image sensor such as a CCD or CMOS device. A FOP is geometrically characterized by input and output sides of equal size, and it directly conveys light or an image incident on its input surface to its output surface, see
A tapered FOP, normally referred to as a fiber optic taper (FOT), is typically fabricated by heat treatment to have a different size ratio between its input and output surfaces, see
By fiber optic plate and/or fiber optic taper is, in the embodiments herein, normally intended an element, device or unit by means of which light and images are conveyed from one side to the other.
In the example of
In order to keep a high contrast of the FOT through parallel input light, and to provide a large numerical aperture ensuring that as much light as possible is detected by the sensor, an optical element 150 can be added on top of the input surface 112 of the FOT 110, e.g. as illustrated in
The design virtually transposes the sensor pixel array of the sensor to the outer or external surface of element 150 or to surface 112. Herein the term EVPE stands for External Virtual Pixel Element, each of which corresponds to one or more of the pixels 122 of the sensor pixel array.
In a sense, when considering a whole set of camera sub-modules, the outward facing overall surface area can be viewed as an EVPE array or continuum that corresponds to the sensor pixel array defined by the sensors of the camera sub-modules. In other words, the (internal) sensor pixel array of the sensor(s) is virtually transposed to a corresponding (external) array of EVPEs on the outward facing overall surface area, or the other way around.
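A minimal sketch of this virtual transposition is given below; function names and coordinate conventions are assumptions for illustration. A sensor pixel, measured from the optical center of the FOT output surface, maps to an EVPE location on the input surface scaled by the taper's input/output size ratio:

```python
# Map a sensor pixel at (u_mm, v_mm), relative to the center of the FOT
# output surface, to the corresponding EVPE position on the input surface.
def sensor_pixel_to_evpe(u_mm: float, v_mm: float,
                         taper_ratio: float) -> tuple[float, float]:
    # taper_ratio = input surface size / output surface size, e.g. 2.0 for 2:1
    return (u_mm * taper_ratio, v_mm * taper_ratio)

# Example: with a 2:1 taper, a pixel 1.5 mm off-center on the sensor
# corresponds to an EVPE 3.0 mm off-center on the outward facing input surface.
print(sensor_pixel_to_evpe(1.5, 0.0, 2.0))
```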
By way of example, hexagonal and pentagonal shaped FOTs 110 of camera sub-modules may be arranged as part of a truncated icosahedron, e.g. see
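For orientation, the truncated icosahedron has 12 pentagonal and 20 hexagonal facets, i.e. up to 32 camera sub-modules for full-spherical coverage; the per-facet solid angle estimate below assumes, for simplicity, an equal share of the sphere per facet:

```python
import math

# Facet bookkeeping for the truncated icosahedron example.
pentagonal_facets = 12
hexagonal_facets = 20
facets = pentagonal_facets + hexagonal_facets  # 32 camera sub-modules

# Simplifying assumption: each facet covers an equal share of the full
# sphere (4*pi steradians), ignoring the pentagon/hexagon size difference.
solid_angle_sr = 4 * math.pi / facets  # ~0.39 sr per camera sub-module
print(facets, solid_angle_sr)
```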
By way of example, the camera system comprises a data processing system configured to realize spherical 2D (monoscopic) and/or 3D (stereoscopic) image/video output by requesting and/or selecting the image data corresponding to one or more regions of interest of the (parallax-free) outward facing External Virtual Pixel Elements (EVPE:s) as one or more so-called viewports for display.
In other words, the camera system comprises a data processing system configured to request and/or select image data corresponding to one or more regions of interest of the outward facing overall imaging surface area of the camera system for display.
To provide 2D image and/or video output, the data processing system is configured to request and/or select image data corresponding to a region of interest as one and the same viewport for display by a pair of display and/or viewing devices.
To provide 3D image and/or video output, the data processing system is configured to request and/or select image data corresponding to two different regions of interest as two individual viewports for display by a pair of display and/or viewing devices.
For 3D output, the two different regions of interest are normally circular regions, the center points of which are separated by an Inter-Pupil Distance or Inter-Pupillary Distance, IPD. The IPD corresponds to the distance between human eyes, normalized or individualized.
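A minimal, non-limiting sketch of placing two such viewport centers on the spherical EVPE surface is given below; the vector conventions, the 64 mm default IPD and the 10 cm sphere radius are assumptions for illustration (the sphere diameter exceeds the IPD, as required above):

```python
import numpy as np

# Place two circular-viewport centers on the spherical EVPE surface,
# separated by the inter-pupillary distance (IPD), for stereoscopic readout.
def stereo_viewport_centers(view_dir: np.ndarray, up: np.ndarray,
                            radius_m: float, ipd_m: float = 0.064):
    d = view_dir / np.linalg.norm(view_dir)
    right = np.cross(d, up)
    right /= np.linalg.norm(right)
    center = radius_m * d              # point on the sphere along the view direction
    offset = 0.5 * ipd_m * right       # half the IPD to each side
    left_eye, right_eye = center - offset, center + offset
    # Re-project both centers onto the spherical EVPE surface.
    return (radius_m * left_eye / np.linalg.norm(left_eye),
            radius_m * right_eye / np.linalg.norm(right_eye))

left_c, right_c = stereo_viewport_centers(np.array([1.0, 0.0, 0.0]),
                                          np.array([0.0, 0.0, 1.0]),
                                          radius_m=0.10)
```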
By way of example, reference can be made to
In a particular example, surface segments capturing EVPE image data, corresponding to one or more viewports 40, are selected for display. For example, the viewports 40 are the imagery displayed in a pair of VR and/or AR viewing devices.
A pair of VR and/or AR viewing devices is typically designed with two image screens and associated optics, one for each eye. A 2D perception of a scene is achieved by displaying the same imagery (viewport) on both displays. A 3D depth perception of a scene is typically achieved by displaying on each display a viewport corresponding to the image viewed from each eye, displaced by the IPD. From this parallax, the human brain and its visual cortex create the 3D depth perception.
The viewport, composed of EVPE:s, is mapped from sets of camera sub-modules 100 and corresponding sensor elements 120, with region of interest (ROI) functionality allowing for selectable viewport image readouts. 2D and/or 3D viewports are thus realized by using the same viewport for both eyes for 2D monoscopic display, and viewports separated by the IPD for 3D stereoscopic display, e.g. as illustrated in
By way of example, the mapping of EVPE:s can be image-processed by computer implementation 200 to allow for tiled and viewport-dependent streaming.
In order to get a feeling for the expected complexity of possible camera realizations, reference can be made to the following illustrative and non-limiting examples. By way of example, a typical FOT 110 may support image resolutions ranging, e.g., from 20 lp/mm to 250 lp/mm, and typically from 100 lp/mm to 120 lp/mm, but not limited to these values (lp stands for line pairs). Typical fiber optic element 116 sizes may range, e.g., from 2.5 μm to 25 μm, but are not limited to this range. For example, the image resolution of the sensor 120 may typically range from 1 Mpixel to 30 Mpixel, but is not limited to this range. As an example, the camera system 10 may have an angular image resolution typically ranging from 2 pix/degree to 80 pix/degree, but not limited to these values. In this particular example, the number of EVPE:s thus typically ranges from 30 million to 1 billion for a camera system. Based on VR/AR viewing devices with 40 and 100 degrees field of view, the corresponding viewport EVPE density may range, e.g., from 0.6 to 20 Mpixel and 3 to 120 Mpixel, respectively.
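As a sanity check on the resolution figures (an assumption based on the rule of thumb that one resolvable line pair requires roughly two fibers), the limiting resolution of a fiber bundle can be estimated from the fiber size:

```python
# Rule-of-thumb estimate: lp/mm ~ 1000 / (2 * fiber size in um),
# since one line pair requires roughly two fibers.
def fiber_resolution_lp_per_mm(fiber_size_um: float) -> float:
    return 1000.0 / (2.0 * fiber_size_um)

# 25 um fibers -> 20 lp/mm; 5 um -> 100 lp/mm; 2.5 um -> 200 lp/mm,
# consistent with the ranges quoted above.
for size_um in (25.0, 5.0, 2.5):
    print(size_um, fiber_resolution_lp_per_mm(size_um))
```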
It will be appreciated that the methods and devices described above can be combined and re-arranged in a variety of ways, and that the methods can be performed by one or more suitably programmed or configured digital signal processors and other known electronic circuits (e.g. Field Programmable Gate Array (FPGA) devices, Graphic Processing Unit (GPU) devices, discrete logic gates interconnected to perform a specialized function, and/or application-specific integrated circuits).
Many aspects of this invention are described in terms of sequences of actions that can be performed by, for example, elements of a programmable computer system. The steps, functions, procedures and/or blocks described above may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
Alternatively, at least some of the steps, functions, procedures and/or blocks described above may be implemented in software for execution by a suitable computer or processing device such as a microprocessor, Digital Signal Processor (DSP) and/or any suitable programmable logic device such as a FPGA device, a GPU device and/or a Programmable Logic Controller (PLC) device.
It should also be understood that it may be possible to re-use the general processing capabilities of any device in which the invention is implemented. It may also be possible to re-use existing software, e.g. by reprogramming of the existing software or by adding new software components.
It is also possible to provide a solution based on a combination of hardware and software. The actual hardware-software partitioning can be decided by a system designer based on a number of factors including processing speed, cost of implementation and other requirements.
The term ‘processor’ should be interpreted in a general sense as any system or device capable of executing program code or computer program instructions to perform a particular processing, determining or computing task.
The processing circuitry, including one or more processors 210, is thus configured to perform, when executing the computer program 225, well-defined processing tasks such as those described herein, including signal processing and/or data processing such as image processing.
The processing circuitry does not have to be dedicated to only execute the above-described steps, functions, procedures and/or blocks, but may also execute other tasks.
Moreover, this invention can additionally be considered to be embodied entirely within any form of computer-readable storage medium having stored therein an appropriate set of instructions for use by or in connection with an instruction-execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch instructions from a medium and execute the instructions.
The software may be realized as a computer program product, which is normally carried on a non-transitory computer-readable medium, for example a CD, DVD, USB memory, hard drive or any other conventional memory device. The software may thus be loaded into the operating memory of a computer or equivalent processing system for execution by a processor. The computer/processor does not have to be dedicated to only execute the above-described steps, functions, procedures and/or blocks, but may also execute other software tasks.
The flow diagram or diagrams presented herein may be regarded as a computer flow diagram or diagrams, when performed by one or more processors. A corresponding apparatus may be defined as a group of function modules, where each step performed by the processor corresponds to a function module. In this case, the function modules are implemented as a computer program running on the processor.
The computer program residing in memory may thus be organized as appropriate function modules configured to perform, when executed by the processor, at least part of the steps and/or tasks described herein.
Alternatively, it is possible to realize the module(s) predominantly by hardware modules, or alternatively by hardware, with suitable interconnections between relevant modules. Particular examples include one or more suitably configured digital signal processors and other known electronic circuits, e.g. discrete logic gates interconnected to perform a specialized function, and/or Application Specific Integrated Circuits (ASICs) as previously mentioned. Other examples of usable hardware include input/output (I/O) circuitry and/or circuitry for receiving and/or sending signals. The extent of software versus hardware is purely implementation selection.
It is becoming increasingly popular to provide computing services (hardware and/or software) where the resources are delivered as a service to remote locations over a network. By way of example, this means that functionality, as described herein, can be distributed or re-located to one or more separate physical nodes or servers. The functionality may be re-located or distributed to one or more jointly acting physical and/or virtual machines that can be positioned in separate physical node(s), i.e. in the so-called cloud. This is sometimes also referred to as cloud computing, edge computing or fog computing, which is a model for enabling ubiquitous on-demand network access to a pool of configurable computing resources such as networks, servers, storage, applications and general or customized services.
The embodiments described above are to be understood as a few illustrative examples of the present invention. It will be understood by those skilled in the art that various modifications, combinations and changes may be made to the embodiments without departing from the scope of the present invention. In particular, different part solutions in the different embodiments can be combined in other configurations, where technically possible.