The present invention is directed to a method for recording an image using a mobile device.
A mobile device is typically used to record a camera image; such a mobile device is, for example, a digital photo camera (standalone device) or a mobile computing device (smartphone, tablet computer, laptop, etc.) into which a digital photo camera is integrated.
Digital projectors, furthermore, are known for stationary applications, for example in a conference room or in a movie theater. In miniaturized form, digital projectors are used both as standalone devices and integrated into smartphones for mobile applications.
Digital photo cameras and image processing software typically include algorithms for digital photo effect filters to modify a recorded image after recording.
Structured-light projectors may be used for measuring methods for the three-dimensional detection of surface shapes.
A disadvantage of conventional methods and devices is that the recorded images may be processed only subsequently (i.e., after recording), which limits the photographic design options.
An object of the present invention is therefore to expand the photographic design options and to increase the range of functions of the mobile devices.
The example mobile device according to the present invention and the example method according to the present invention for recording an image using a mobile device may have the advantage over the related art that a piece of image information projected into the detection area is added to an object (or a photo scene) before and/or during a recording of the image. The image information is preferably superimposed onto the object in such a way that the image information appears together with the object on the recorded image. In particular, the image information projected onto the object (or into the photo scene before and/or during the image recording) includes decorative image elements and/or image elements supporting the photographer during the recording and/or metrological image elements and/or image elements for reconstructing a spatial, physical characteristic of the photographed object.
According to the present invention, the mobile device (which is, in particular, a smartphone) preferably includes a projection unit designed as a scanning laser projector. This makes it advantageously possible, according to the present invention, to achieve a comparatively great depth of field. According to the present invention, the projection unit and the camera unit are integrated into the mobile device in such a way that the projection area of the projection unit (which is also referred to here as the image field of the projector or the projection image field) and the detection area of the camera unit (which is also referred to here as the image field of the camera or the camera image field) overlap. The projection area is preferably (partially or completely) surrounded by the detection area of the camera unit, the projection area, in particular, completely filling the detection area. The detection area and the projection area are preferably congruent, so that the camera unit is able to optically detect the images—which are generated by the projection unit on an object (for example on a projection screen) positioned within the detection area of the camera unit. In particular, the camera unit and the projection unit form an optical assembly, the projection functionality and the camera functionality being provided by the optical assembly. According to the present invention, the camera unit and, in particular, additionally a camera flash device of the mobile device (preferably a flash device including a light-emitting diode) are synchronized with the projection unit, so that a piece of image information (projection image) is optically superimposed onto the object (or the photo scene). In contrast to a purely electronic or digital superimposition during an image post-processing, this makes it advantageously possible, according to an example embodiment of the present invention, to make the object (or the photo scene) onto which the image information is optically superimposed visible and able to be experienced directly in the real scene—alternatively or in addition to displaying it on a device display.
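Purely as an illustration of the overlapping image fields (not part of the specification), the following Python sketch computes the width of the detection area and of the projection area at a given working distance from their opening angles and checks whether the projection area lies entirely within the detection area; the angles and the distance are assumed values.

```python
import math

def field_width(full_angle_deg: float, distance_m: float) -> float:
    """Width of a field (camera detection area or projector image field)
    at a given working distance, from the full opening angle."""
    return 2.0 * distance_m * math.tan(math.radians(full_angle_deg) / 2.0)

# Illustrative values only: camera with a 70 degree horizontal field of view,
# scanning laser projector with a 45 degree horizontal throw angle.
camera_fov_deg = 70.0
projector_fov_deg = 45.0
distance_m = 1.5  # assumed working distance to the object / projection screen

camera_width = field_width(camera_fov_deg, distance_m)
projection_width = field_width(projector_fov_deg, distance_m)

# With coaxial (or closely spaced) optical axes, the projection area is
# surrounded by the detection area whenever it is the narrower field.
print(f"detection area width:  {camera_width:.2f} m")
print(f"projection area width: {projection_width:.2f} m")
print("projection area inside detection area:", projection_width <= camera_width)
```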
Advantageous example embodiments and refinements of the present invention are described herein with reference to the figures.
According to one preferred refinement of the present invention, it is provided that the image information projected into the projection area in the first method step includes a piece of boundary information for displaying a boundary of the detection area, a projection of the boundary information into the detection area being deactivated during the second method step. This makes it advantageously possible to simplify the recording of the image for the user of the mobile device by using orientation markers, in particular when a display is not visible or unavailable for displaying the image to be recorded.
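A minimal sketch of this refinement is given below, under the assumption of a hypothetical projector handle with a show() method and a hypothetical camera handle; only the construction of the boundary frame uses real NumPy calls. The boundary of the detection area is projected while the user composes the shot and is switched off during the actual exposure.

```python
import numpy as np

PROJ_W, PROJ_H = 1280, 720  # assumed projector resolution (illustrative)

def boundary_frame(thickness: int = 8) -> np.ndarray:
    """White border marking the boundary of the detection area."""
    frame = np.zeros((PROJ_H, PROJ_W, 3), dtype=np.uint8)
    frame[:thickness, :] = 255
    frame[-thickness:, :] = 255
    frame[:, :thickness] = 255
    frame[:, -thickness:] = 255
    return frame

def compose_then_capture(projector, camera):
    """First method step: project the boundary information as orientation markers.
    Second method step: deactivate it and expose the recording medium.
    'projector' and 'camera' are hypothetical device handles for illustration."""
    projector.show(boundary_frame())      # boundary visible while framing the shot
    camera.wait_for_trigger()             # user composes the image and triggers
    projector.show(np.zeros((PROJ_H, PROJ_W, 3), dtype=np.uint8))  # boundary off
    return camera.expose()                # recorded image without the markers
```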
According to one preferred refinement of the present invention, it is provided that another piece of image information is projected into the detection area during the exposure of the recording medium. This makes it advantageously possible for the image information to be optically superimposed onto the detection area (in particular, including an object or a photo scene onto which the image information is superimposed). The additional image information is preferably essentially identical to the image information or it is another piece of image information (dynamically) adapted to the object with respect to the image information.
According to one preferred refinement, it is provided that the camera unit and the projection unit are synchronized, the camera unit and the projection unit being synchronized, in particular, in such a way that the exposure of the recording medium in the second method step is started at a certain line start of the image information projected by the projection unit, the camera unit and the projection unit being synchronized, in particular, in such a way that an integral number of projection images are detected by the recording medium for the duration of the exposure of the recording medium in the second method step. This makes it advantageously possible that the camera begins recording at a certain line start of the projection unit due to a synchronization of the projection operation and the image recording. The certain line start is preferably the image beginning of the image information (i.e., the first pixel). Due to the synchronization, the exposure duration (or recording duration of the camera unit) is preferably coordinated with an image repetition rate of the projection unit, so that an integral number of projection images is projected into the projection area within the recording time (or exposure duration) of the camera unit.
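One way to read this synchronization condition, sketched with assumed values: the exposure duration is rounded to an integral number of projector frame periods, and the exposure is started at the next frame start. The wait_for_frame_start() call and the frame_rate_hz attribute are hypothetical placeholders; the arithmetic itself is the point of the example.

```python
def synchronized_exposure_time(requested_exposure_s: float,
                               frame_rate_hz: float = 60.0) -> float:
    """Round the requested exposure time to an integral number of
    projection frames so the recording medium sees complete images only."""
    frame_period = 1.0 / frame_rate_hz
    n_frames = max(1, round(requested_exposure_s / frame_period))
    return n_frames * frame_period

def expose_synchronized(camera, projector, requested_exposure_s: float):
    """Start the exposure at a defined line start (first pixel) of the
    projected image and expose for a whole number of projection frames.
    'camera' and 'projector' are hypothetical device handles."""
    exposure_s = synchronized_exposure_time(requested_exposure_s,
                                            projector.frame_rate_hz)
    projector.wait_for_frame_start()   # hypothetical: blocks until the first pixel
    return camera.expose(exposure_s)

# Example: a requested 1/40 s exposure becomes 2 frames, i.e. 1/30 s, at 60 Hz.
print(synchronized_exposure_time(1 / 40, 60.0))
```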
According to one preferred refinement, it is provided that a projection mapping method is carried out, the projected image information and/or the projected additional image information being aligned with a contour of an object positioned in the projection area. This makes it advantageously possible that, due to the use of the projection mapping method, the projected image is aligned with the contours of the object (or the photo scene), and different areas within the projection area (for example, the foreground area and the background area) may thus each be illuminated with a different theme (in particular, one adapted to the particular area).
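A minimal projection-mapping sketch using OpenCV and NumPy is given below, assuming that an 8-bit binary mask of the object (for example from the interactive camera function) is already available and that projector and camera are roughly coaxial, so no full geometric calibration is shown; the content image is warped onto the bounding quadrilateral of the largest object contour and suppressed outside the object silhouette.

```python
import cv2
import numpy as np

def map_content_to_object(content: np.ndarray,
                          object_mask: np.ndarray) -> np.ndarray:
    """Warp 'content' onto the largest contour found in 'object_mask' and
    return a projector frame that is black outside the object area.
    object_mask: 8-bit single-channel binary mask in projector coordinates."""
    contours, _ = cv2.findContours(object_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)

    # Bounding quadrilateral of the object contour (target of the warp);
    # in practice the corner ordering may need to be adjusted.
    quad = cv2.boxPoints(cv2.minAreaRect(contour)).astype(np.float32)

    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, quad)

    out_h, out_w = object_mask.shape[:2]
    warped = cv2.warpPerspective(content, H, (out_w, out_h))

    # Keep the mapped content only inside the object silhouette, so that the
    # background area can be filled with a different theme elsewhere.
    return cv2.bitwise_and(warped, warped, mask=object_mask)
```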
According to one preferred refinement, it is provided that the contour of the object is detected by the mobile device, the projection unit being configured, in particular, in such a way that the projected image information and/or additional image information is interactively adapted to the object. This makes it advantageously possible for an interactivity function to be provided by the projection unit, an infrared (IR) source of the projection unit, in particular, being used to provide the interactivity function, so that the projection unit has a camera function in the infrared range in addition to the projection function in the wavelength range visible to the human eye. In particular, the projection unit is designed in such a way that a perspective of the projection essentially exactly matches a perspective of the interactive camera function of the projection unit.
According to one preferred refinement, it is provided that, in the second method step, the recording medium of the camera unit is exposed during a main exposure time interval and during a post-exposure time interval, which follows the main exposure time interval, the detection area, in particular, being illuminated by ambient light, by a camera flash and/or by the projection unit during the post-exposure time interval, the detection area being, in particular, selectively illuminated during the exposure with the aid of the projection unit. This makes it advantageously possible for a flash-like functionality to be provided by the projection unit, a visible image (for example a white image) being generated (which replaces, in particular, a camera flash) and/or the illumination taking place selectively (i.e., only those areas of the object are illuminated which are comparatively dark, appear to be comparatively dark in the image, or whose intensity value is below an intensity threshold value). This makes it advantageously possible to selectively improve the contrast of the image and/or to set the exposure duration or aperture setting of the camera unit independently of the intensity values in comparatively dark areas. In particular, the projection unit is the only light source of the mobile device or a light source in addition to existing ambient light or to an existing camera flash (for example, an LED flash).
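A hedged sketch of the selective illumination: from a grayscale preview that is assumed to be already registered to the projection area, only those areas whose intensity falls below a threshold are filled with light during the post-exposure; the threshold and resolution are assumed values.

```python
import numpy as np

def selective_fill_frame(preview_gray: np.ndarray,
                         threshold: int = 60,
                         fill_level: int = 255) -> np.ndarray:
    """Build a projector frame that illuminates only the comparatively dark
    areas of the scene (intensity below 'threshold') during post-exposure.

    preview_gray: 8-bit grayscale preview image, assumed to be registered
    to the projection area (the registration itself is not shown here)."""
    dark = preview_gray < threshold            # areas needing extra light
    frame = np.zeros_like(preview_gray)
    frame[dark] = fill_level                   # white only where it is dark
    return np.dstack([frame, frame, frame])    # RGB frame for the projector

# Example: a synthetic preview with a dark left half.
preview = np.full((720, 1280), 200, dtype=np.uint8)
preview[:, :640] = 30
fill = selective_fill_frame(preview)
print(fill[:, :640].max(), fill[:, 640:].max())  # 255 (filled) and 0 (untouched)
```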
Another subject of the present invention is a mobile device for recording an image, the mobile device including a camera unit, a detection area being assigned to the camera unit, the mobile device including a projection unit, a projection area being assigned to the projection unit, the detection area and the projection area at least partially overlapping, the mobile device being configured in such a way that a piece of image information is projected into the detection area and a recording medium of the camera unit is subsequently exposed, the mobile device being configured to store the image generated by the exposure of the recording medium.
According to one preferred refinement, it is provided that the mobile device includes a synchronization element for synchronizing the camera unit and the projection unit, the synchronization element being configured, in particular, in such a way that the exposure of the recording medium in the second method step is started at a certain line start of the image information projected by the projection unit, the synchronization element being configured, in particular, in such a way that an integral number of projection images are detected by the recording medium for the duration of the exposure of the recording medium in the second method step. This makes it advantageously possible for the recorded image to reproduce the object onto which the image information is superimposed essentially completely and free of artifacts.
According to one preferred refinement, it is provided that the projection unit is a scanning 3D laser projector, the scanning 3D laser projector being configured to detect an object positioned in the projection area, the 3D laser projector being configured, in particular, in such a way that the projected image information and/or additional image information is interactively adapted to the object. This makes it advantageously possible for another piece of image information, which is adapted to the object, to be projected onto the object by reconstructing a three-dimensional shape of the object. A 3D model of the object is preferably generated as a function of the detection of the object and stored, in particular, in the mobile device. For example, the 3D model is stored, and a representation of the object is created with the aid of a 3D printer, or the representation of the object is integrated into a virtual 3D environment (3D computer game). In this application scenario, the 3D model initially has no effect on the projection image. According to one preferred alternative specific embodiment of the present invention, the projection image is adapted to the (3D) object as a function of the 3D model by generating a structured light image for detecting the (3D) object, preferably with the aid of infrared light, and projecting it onto the (3D) object, the image information generated by the projector (i.e., a visible image projected simultaneously with the structured light image) being adapted to the object.
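The structured-light detection mentioned here can be summarized by the standard triangulation relation z = f * b / d, where f is the camera focal length in pixels, b the projector-camera baseline and d the disparity between the column in which a stripe is projected and the column in which it is observed. The following sketch applies this relation under simplified, rectified geometry; all numbers are illustrative assumptions, not values from the specification.

```python
import numpy as np

def depth_from_stripe_disparity(disparity_px: np.ndarray,
                                focal_length_px: float,
                                baseline_m: float) -> np.ndarray:
    """Classic structured-light triangulation: z = f * b / d.
    disparity_px: per-pixel shift between the column where a stripe is
    projected and the column where the camera observes it."""
    d = np.asarray(disparity_px, dtype=np.float64)
    z = np.full_like(d, np.inf)        # zero disparity -> point at infinity
    valid = d > 0
    z[valid] = focal_length_px * baseline_m / d[valid]
    return z

# Illustrative numbers: 1000 px focal length, 5 cm projector-camera baseline.
disparities = np.array([[50.0, 25.0], [10.0, 0.0]])
print(depth_from_stripe_disparity(disparities, 1000.0, 0.05))
# depths in meters: [[1., 2.], [5., inf]]
```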
The same parts are always provided with the same reference numerals and are therefore generally also named or mentioned only once in each case.
According to the present invention, it is preferably provided that camera unit 10 and projection unit 20 are operated synchronously with each other so that the recording of the image takes place as a function of a time sequence of the variable image content, camera unit 10 and projection unit 20 being connected to a shared controller 30. In
An object 2, which is illuminated with special patterns (stripes in this case) at a comparatively short distance, is illustrated in
If this is not the case, the exposure operation is ended (cf. reference numeral sequence 308′, 310). Otherwise, 308″, a post-exposure, 309, is carried out before the end of the exposure operation, 310. The post-exposure is carried out, for example, if the projected elements in the camera image already have a certain brightness while the object itself appears to be comparatively dark (or has a lower intensity value) in an area outside the projected elements. The post-exposure preferably follows the main exposure. After the end of the exposure, an exposure correction is carried out in method step 310. An image transfer to a mapping algorithm (cf. reference numeral 312) is carried out in method step 311. In method step 313, it is checked whether a trigger is activated. If this is not the case (cf. reference numeral 313′), the method returns to the beginning of the projection-synchronous image recording (cf. reference numeral 302). If the trigger is activated (cf. reference numeral 313″), the image is stored (cf. reference numeral 314).
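Read as pseudocode, the flow described above (main exposure, optional post-exposure, exposure correction, hand-over to the mapping algorithm, trigger check, storage) might look like the following sketch; every call on the camera, mapper and storage objects is a hypothetical placeholder named after the corresponding step, and only the control flow is taken from the description.

```python
def projection_synchronous_recording(camera, mapper, storage):
    """Capture loop sketch of the described flow; all method calls on
    'camera', 'mapper' and 'storage' are hypothetical placeholders."""
    while True:
        frame = camera.main_exposure()                      # main exposure

        # Post-exposure only if the projected elements are already bright
        # while the object outside them still appears comparatively dark.
        if camera.projected_elements_bright(frame) and camera.object_dark(frame):
            frame = camera.post_exposure(frame)

        frame = camera.exposure_correction(frame)           # exposure correction
        mapped = mapper.process(frame)                      # image transfer to the mapping algorithm

        if camera.trigger_activated():                      # trigger check
            storage.save(mapped)                            # store the image
            return mapped
        # otherwise: loop back to the projection-synchronous image recording
```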
The post-exposure operation is illustrated in
The exposure correction according to the method step represented by reference numeral 311 in
Number | Date | Country | Kind
---|---|---|---
102015219538.4 | Oct 2015 | DE | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2016/069144 | 8/11/2016 | WO | 00