This application claims priority from European Application No. 15307108.9, entitled “Method for Controlling Movement Of At Least One Movable Object, Computer Readable Storage Medium and Apparatus Configured To Control Movement Of At Least One Movable Object,” filed on Dec. 22, 2015, the contents of which are hereby incorporated by reference in their entirety.
The present solution relates to a method for controlling a movement of at least one movable object relative to a camera capturing a sequence of images of a scene, and to an apparatus and a computer readable storage medium configured to control movement of at least one movable object relative to a camera capturing a sequence of images of a scene. The solution further relates to a vehicle, in particular an unmanned airborne vehicle, and to a system comprising a plurality of vehicles.
In the area of still image and video capture, using more than one camera opens up a variety of possibilities for acquiring image data that can be used for unusual and creative image and movie effects. Furthermore, watching a scene from different points of view can provide the user with more information, for example when watching sports events. When a scene can be captured from different points of view, applications like the “bullet time” effect can be realized. For this technique, a fixed ring of cameras, almost hidden by a green screen, is traditionally installed. When capturing outdoor images with a non-static multi-camera arrangement, for example in action sports, camera-equipped robots can be used. Even for private use, unmanned airborne vehicles (UAVs) are becoming more and more popular. In a multi-camera system, however, one camera robot might capture a scene in which another camera robot is visible. This might be undesirable, but the removal of the robots from the captured material is extremely time-consuming, in particular for video captures.
Each single frame has to be inspected and manipulated manually in order to remove the robot(s) from the images.
It would be desirable to provide a method for controlling a movement of at least one movable object relative to a camera capturing a scene, a computer readable storage medium and an apparatus configured to control movement of at least one movable object, a vehicle and a system comprising a plurality of vehicles, wherein the quality of the image data captured by the camera is enhanced. Enhancement is desirable in particular with respect to the visibility of undesired objects, such as other image capturing devices forming part of a camera system comprising the camera.
According to one aspect, a method for controlling a movement of at least one movable object relative to a camera capturing a sequence of images of a scene comprises:
retrieving position information of the at least one movable object;
comparing the retrieved position information with data characterizing a position of a viewing zone of the camera;
generating an alert in case the movable object is about to interfere with the viewing zone; and
controlling the movement of the movable object in response to the alert.
The present solution is based on the following considerations. In a scenario in which manually or autonomously controlled robots, which are movable objects, are operated, the control of the robots is influenced so as to avoid visibility of the robots in the viewing zone of the camera. The camera may be a static camera or a camera mounted on one of the robots. In other words, the control of the movable objects is influenced in that each of them keeps clear of the viewing range or visible zone of at least one camera. The camera is in particular carried by another robot, i.e. another movable object. A peer of cameras can be mounted on movable objects, in particular one camera on each, and all robots forming the group communicate such that every robot keeps out of the viewing zone of all the other cameras. In principle, this also applies to movable objects equipped with other devices having a directional characteristic, for example directional microphones.
The image data, for example a stream of images or frames, still pictures, 3D images or 3D movie data, captured by the peer of cameras requires significantly less post-processing, as no undesired robots need to be erased from the frames. High quality image data can be provided. This applies both to still pictures imaging a scene from different viewing angles and to streams of frames. The latter can, for example, provide a basis for visual effects that detach the time and space of the camera from that of its visible subject. At the same time, the system is highly flexible. Unlike traditional arrays comprising a large number of cameras used to capture image data for the mentioned type of visual effect, the system performing the method according to aspects of the present solution can be operated more flexibly under various (environmental) conditions, at significantly lower cost and with a shorter preparation phase.
In an advantageous embodiment, the step of controlling the movement includes modifying a motion, in particular a direction of motion, of the movable object such that the movable object avoids interfering with the viewing zone.
Advantageously, the motion of the movable object, for example of a robot, in particular of an unmanned airborne vehicle, is actively controlled such that movement into the viewing zone of a camera is avoided. In particular for airborne vehicles, it is important to recognize that the viewing zone is a three-dimensional cone defined by the viewing angle of the camera used. Furthermore, said control is in particular performed in real time.
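As a minimal illustration of this cone geometry, the following Python sketch checks whether a point lies within a camera's viewing cone. It is an idealized circular cone test with illustrative parameter names; the formulas discussed later in the description account for the rectangular sensor shape instead.

```python
import numpy as np

def in_viewing_cone(point, apex, axis, half_angle):
    """Idealized test: is `point` inside the 3-D viewing cone with the
    given apex (camera position), unit axis (viewing direction) and
    half-angle (half the viewing angle)? All names are illustrative."""
    v = np.asarray(point, dtype=float) - np.asarray(apex, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return True  # the point coincides with the camera position
    cos_angle = np.dot(v, np.asarray(axis, dtype=float)) / norm
    return cos_angle >= np.cos(half_angle)
```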
According to a further advantageous aspect, a method for controlling at least a first movable object and a second movable object, each having a camera, comprises:
retrieving position information of the first movable object and of the second movable object;
comparing the retrieved position information with data characterizing a position of a viewing zone of the camera of the respective other movable object;
generating an alert in case one of the movable objects is about to interfere with the viewing zone of the camera of the other movable object; and
controlling the movement of the respective movable object in response to the alert.
The method according to this aspect allows controlling a peer of cameras, which are in particular mounted on unmanned airborne vehicles. Every vehicle carries at least one camera. In other words, every movable object carrying a camera of the peer of cameras keeps clear of the viewing zones of all the other cameras, which are mounted on the further movable objects. Unlike traditional camera arrays, this highly flexible system of cameras can be applied to capture image data from various different viewing angles.
In one embodiment, a security zone is defined, said security zone comprising the viewing zone. This is in particular performed for every camera if a peer of cameras is applied. Furthermore, the retrieved position information is compared with the borders of the security zone, it is determined if the at least one movable object interferes with the security zone, and in that case the alert is generated.
Use of a security zone is one practicable way to determine if a movable object is about to interfere with the viewing zone of a camera. In particular, the security zone is chosen such that when the movable object crosses a border of the security zone there remains sufficient time to control movement of the movable object before it interferes with the viewing zone.
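A minimal Python sketch of this two-stage logic, assuming the zones are supplied as membership predicates, which is a simplification of the geometric construction described further below:

```python
def zone_status(position, in_security_zone, in_viewing_zone):
    """Classify a movable object's position relative to a camera's zones.
    The security zone comprises the viewing zone, so a position inside
    the viewing zone is also inside the security zone. Both predicates
    are hypothetical placeholders for the geometric tests given later."""
    if in_viewing_zone(position):
        return "interference"  # already inside the viewing zone
    if in_security_zone(position):
        return "alert"         # border crossed; time remains to steer away
    return "clear"
```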
Similarly, a computer readable storage medium has stored therein instructions enabling controlling a movement of at least one movable object relative to a camera capturing a sequence of images of a scene, wherein the instructions, when executed by a computer, cause the computer to:
retrieve position information of the at least one movable object;
compare the retrieved position information with data characterizing a position of a viewing zone of the camera;
generate an alert in case the movable object is about to interfere with the viewing zone; and
control the movement of the movable object in response to the alert.
Same or similar advantageous aspects, which have been mentioned with respect to the method for controlling a movement of at least one movable object, apply to the computer readable storage medium in a same or similar way. This, in particular, pertains to the advantageous embodiments and aspects mentioned above.
According to another solution of the above problem, there is provided an apparatus configured to control movement of at least one movable object relative to a camera capturing a sequence of images of a scene, the apparatus comprising:
a retrieving unit configured to retrieve position information of the at least one movable object;
a comparing unit configured to compare the retrieved position information with data characterizing a position of a viewing zone of the camera;
a notification unit configured to generate an alert in case the movable object is about to interfere with the viewing zone; and
a movement controller configured to control the movement of the movable object in response to the alert.
With respect to the above-referred computer readable storage medium and the apparatus, same or similar aspects and advantages which have been mentioned relative to the method apply in the same or similar way.
According to another aspect of the present solution, a vehicle, in particular an unmanned airborne vehicle, comprises:
an apparatus configured to control movement of the vehicle according to the above-referred aspects; and
a data communication interface configured to receive information about a viewing zone of at least one camera.
According to an advantageous embodiment, the vehicle further comprises a camera, wherein the data communication interface is further configured to establish a data link, in particular a wireless data link, with further vehicles of the same or similar type, wherein local information about a viewing zone of the camera is communicated to the further vehicles.
The plurality of vehicles, which are in particular unmanned airborne vehicles, can be applied to acquire image data and/or audio/video data from a variety of different viewing angles. The cameras on the unmanned airborne vehicles can form a peer of cameras, which is highly flexible.
Hence, according to another solution of the problem to be solved, there is provided a system comprising a plurality of vehicles, in particular a plurality of unmanned airborne vehicles, configured according to the above-referred aspects.
Furthermore, the above problem is also solved by an apparatus configured to control movement of at least one movable object relative to a camera capturing a sequence of images of a scene, the apparatus comprising a processing device and a memory device having stored therein instructions which, when executed by the processing device, cause the apparatus to:
retrieve position information of the at least one movable object;
compare the retrieved position information with data characterizing a position of a viewing zone of the camera;
generate an alert in case the movable object is about to interfere with the viewing zone; and
control the movement of the movable object in response to the alert.
For a better understanding, the principles of embodiments of the present solution shall now be explained in more detail in the following description with reference to the figures. It is understood that the present solution is not limited to these exemplary embodiments and that the specified features can also expediently be combined and/or modified without departing from the scope of the present solution as defined in the appended claims. In the drawings, the same or similar types of elements or respectively corresponding parts are provided with the same reference numbers so that they do not need to be reintroduced.
Each of the cameras captures a sequence of images or frames of a scene 10, which is located in the first viewing zone 4a and in the second viewing zone 4b.
Furthermore, in the scenario of the figure, the first movable object 2a moves in a first direction of motion 12a and the second movable object 2b moves in a second direction of motion 12b.
In addition to this, there are a first security zone 14a and a second security zone 14b, comprising the first viewing zone 4a and the second viewing zone 4b, respectively. The first and second security zones 14a, 14b are also depicted in the figure.
Further in particular, the movable objects 2a, 2b are vehicles, for example unmanned airborne vehicles, frequently referred to as UAVs. The retrieved position information is forwarded to a comparing unit 22. The comparing unit 22 in particular forms part of a processing unit 23, for example a microcontroller or another suitable device, and is configured to compare the retrieved position information with data characterizing the position of at least one of the viewing zones 4a, 4b of the camera(s).
By way of an example, it is assumed that the apparatus 18 forms part of the first movable object 2a. Its comparing unit 22 compares the position information of this particular movable object 2a with data characterizing the second viewing zone 4b of the second movable object 2b (see the figure).
In addition to the mentioned units in the apparatus 18, there is a notification unit 24, which can also form part of the processing unit 23. The notification unit 24 is configured to generate an alert for the movable object 2a, 2b in case the movable object 2a, 2b is about to interfere with the viewing zone 4a, 4b. Returning to the above example, the notification unit 24 is configured to generate an alert for the first movable object 2a if this particular movable object 2a is about to interfere with the second viewing zone 4b of the second movable object 2b, i.e. of its camera.
In addition to this, there is a movement controller 26, which is configured to control movement of the movable object 2a, 2b responsive to the alert. In particular, a direction of motion 12a, 12b is changed such that an interference with the viewing zone 4a, 4b is avoided. Referring again to the above example, the first direction of motion 12a of the first movable object 2a is changed such that the first movable object 2a avoids entering the second viewing zone 4b of the camera of the second movable object 2b.
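The text leaves the concrete evasive manoeuvre open. The following Python sketch shows one possible choice, an assumption rather than the prescribed behaviour: blending the current direction of motion with a vector pointing laterally away from the other camera's optical axis.

```python
import numpy as np

def avoid_viewing_zone(motion_dir, obj_pos, cam_pos, cam_axis, blend=0.5):
    """Return a new unit direction of motion steering away from a
    camera's viewing cone. cam_axis is the camera's optical axis as a
    unit vector; blend in [0, 1] weights the evasive component. Both
    parameters and their defaults are assumptions of this sketch."""
    t = np.asarray(obj_pos, dtype=float) - np.asarray(cam_pos, dtype=float)
    axis = np.asarray(cam_axis, dtype=float)
    # Component of t perpendicular to the optical axis: points from the
    # axis towards the object, i.e. laterally out of the viewing cone.
    lateral = t - np.dot(t, axis) * axis
    norm = np.linalg.norm(lateral)
    if norm > 0.0:
        lateral /= norm
    new_dir = (1.0 - blend) * np.asarray(motion_dir, dtype=float) + blend * lateral
    norm_new = np.linalg.norm(new_dir)
    return new_dir / norm_new if norm_new > 0.0 else np.asarray(motion_dir, dtype=float)
```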
In the above-described scenario, each movable object 2a, 2b has to be aware of data characterizing the local position of the viewing zones 4a, 4b of the respective other movable objects 2a, 2b. To enable the movable objects 2a, 2b to hold this information, there is a data link 28 between the objects (see the figure).
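A hypothetical message format for the data link 28 might look as follows in Python; the field names are assumptions, since the text only requires that local information about the viewing zones be exchanged:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewingZoneInfo:
    """Local viewing-zone data one movable object broadcasts to the others."""
    sender_id: str        # identifies the sending movable object
    position: np.ndarray  # camera centre in the world coordinate system 32
    rotation: np.ndarray  # 3x3 matrix into the local coordinate system 34a/34b
    K: np.ndarray         # imaging matrix of the camera (cf. formula (3))
    width: int            # sensor resolution camW in pixels
    height: int           # sensor resolution camH in pixels
```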
The movable objects 2a, 2b can be unmanned airborne vehicles forming a system comprising a plurality of similarly configured vehicles. The wireless data link 28 is then established between all the vehicles of the same type. These can form a system of vehicles representing a peer of cameras useful, for example, for capturing the scene 10.
In a method for controlling the movement of a first movable object 2a and a second movable object 2b, each having a camera, as for example depicted in the figure, position information of the first movable object 2a and of the second movable object 2b is retrieved. The retrieved position information is compared with data characterizing the position of the viewing zone 4a, 4b of the camera of the respective other movable object 2a, 2b. An alert is generated in case one of the movable objects 2a, 2b is about to interfere with the viewing zone 4a, 4b of the camera of the other movable object, and the movement of the respective movable object 2a, 2b is controlled in response to the alert.
A practicable solution for determining if a collision of one of the movable objects 2a, 2b with one of the viewing zones 4a, 4b of the other movable objects 2a, 2b is about to occur is the use of security zones 14a, 14b. The security zones 14a, 14b each comprise the respective viewing zone 4a, 4b, and the retrieved position information is compared with the borders of the security zone 14a, 14b. It is determined if one of the movable objects 2a, 2b interferes with the security zone 14a, 14b, and in that case the alert is generated.
Referring again to the above example, the first direction of motion 12a of the first movable object 2a is changed if the first movable object 2a interferes with the second security zone 14b of the second movable object 2b. Hence, information not only about the viewing zones 4a, 4b but also about the security zones 14a, 14b is communicated via the data link 28 to the other movable objects 2a, 2b.
The apparatus 18 configured to control the movement of at least one movable object 2a, 2b in particular further comprises a memory device 30, in particular a non-volatile memory device such as a flash memory or a hard disk, having stored therein instructions which are executable by the processing unit 23, which is a processing device. These instructions cause the processing unit 23 to retrieve position information, to compare the retrieved position information with data characterizing a position of the viewing zone 4a, 4b of the camera, to generate an alert if the movable object 2a, 2b is about to interfere with the viewing zone 4a, 4b, and to control the movement of the movable object 2a, 2b in response to the alert.
Determination of a collision of one of the movable objects 2a, 2b with the viewing zone 4a, 4b of another movable object 2a, 2b, or with the security zone 14a, 14b of said movable object, respectively, will now be explained with reference to the formulas below.
Firstly, imaging of a point M in space having the Cartesian coordinates X, Y and Z is considered. The imaging of this point on a sensor is defined by the below formula (1).

$$\begin{pmatrix} m_x \\ m_y \\ m_w \end{pmatrix} = K \cdot \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} \qquad (1)$$
K is the transformation matrix of the imaging system. In this matrix, f indicates the focal length of the imaging system, which defines the viewing angle, while u0 and v0 represent the pixel coordinates of the optical center in the image plane of the image sensor. Hence, m is the image of the point M on the image sensor, represented in homogeneous coordinates mx, my and mw. The pixel coordinates of the image of the point M are calculated according to formula (2) below.

$$p_b = \begin{pmatrix} m_x / m_w \\ m_y / m_w \end{pmatrix} \qquad (2)$$
In other words, pb is the image of the point M on the image sensor. The transformation matrix or imaging matrix K is defined by formula (3) below, wherein f is again the focal length and u0 and v0 (see formula (1)) are calculated from the values of camW and camH, typically as u0 = camW/2 and v0 = camH/2.

$$K = \begin{pmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (3)$$
These define the resolution of the image sensor in horizontal direction or in the direction of the width of the sensor (camW) and in vertical direction or in the direction of the height of the sensor (camH).
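The imaging matrix K and the projection of formulas (1) to (3) can be sketched in Python as follows; placing the principal point at the sensor centre (u0 = camW/2, v0 = camH/2) is an assumption consistent with the text:

```python
import numpy as np

def imaging_matrix(f, camW, camH):
    """Imaging matrix K per formula (3), with the principal point assumed
    to lie at the sensor centre: u0 = camW / 2, v0 = camH / 2."""
    u0, v0 = camW / 2.0, camH / 2.0
    return np.array([[f,   0.0, u0],
                     [0.0, f,   v0],
                     [0.0, 0.0, 1.0]])

def image_of(K, M):
    """Project a world point M = (X, Y, Z): homogeneous image coordinates
    (mx, my, mw) per formula (1), then pixel coordinates pb by the
    perspective division of formula (2)."""
    mx, my, mw = K @ np.asarray(M, dtype=float)
    return mx / mw, my / mw
```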
The vector tBA pointing from the second movable object 2b to the first movable object 2a is given by formula (4) below, wherein CA and CB denote the positions of the first movable object 2a and of the second movable object 2b, respectively.

$$t_{BA} = C_A - C_B \qquad (4)$$
To determine whether the first movable object 2a is in the viewing zone 4b of the second movable object 2b, the initial or world coordinate system 32 is converted into a second local coordinate system 34b of the second movable object 2b (illustrated by dashed lines in the figure) by applying a rotation and translation matrix RB. The position of the first movable object 2a relative to the camera of the second movable object 2b is then projected according to formula (5) below.

$$\begin{pmatrix} x_b \\ y_b \\ w_b \end{pmatrix} = K_B \cdot R_B \cdot t_{BA} \qquad (5)$$
KB is the imaging matrix of the camera of the second movable object 2b and RB, as mentioned, is the rotation and translation matrix transforming the global coordinate system 32 into the local coordinate system 34b.
The point which is “imaged” onto the sensor of the camera of the second movable object 2b is defined by the vector pointing from the second movable object 2b towards the first movable object 2a, i.e. tBA. Similar to the point M above, the position of the first movable object 2a is calculated in homogeneous coordinates xb, yb and wb.
The corresponding point on the image sensor is represented by formula (6) below.

$$\begin{pmatrix} u_b \\ v_b \end{pmatrix} = \begin{pmatrix} x_b / w_b \\ y_b / w_b \end{pmatrix} \qquad (6)$$
ub and vb are the pixel coordinates of the image of the first movable object 2a on the image sensor of the camera of the second movable object 2b.
Finally, it is determined whether these pixel coordinates of the first movable object 2a are in the image plane, or more precisely within the range of the sensor of the camera of the second movable object 2b.
According to formula (7) below, it is determined whether the pixel coordinates of the position of the first movable object 2a, namely ub and vb, are in the image plane, which means that ub is between 0 and WB−1 and vb is between 0 and HB−1.

$$0 \le u_b \le W_B - 1 \quad \text{and} \quad 0 \le v_b \le H_B - 1 \qquad (7)$$
If formula (7) is fulfilled, the first movable object 2a is visible in the camera of the second movable object 2b. This causes the first movable object 2a to take action in that it leaves the viewing zone 4b of the camera of the second movable object 2b. This is in particular performed by the apparatus 18 forming part of the first movable object 2a, which controls its movement.
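Putting formulas (4) to (7) together, the visibility test can be sketched in Python as follows. Treating RB as a pure rotation (the translation being captured by formula (4)) and rejecting points with wb ≤ 0, i.e. behind the camera, are assumptions of this sketch:

```python
import numpy as np

def is_in_viewing_zone(C_A, C_B, R_B, K_B, W_B, H_B):
    """Is the first movable object (position C_A) visible to the camera
    of the second movable object (position C_B, rotation R_B, imaging
    matrix K_B, sensor resolution W_B x H_B pixels)?"""
    t_BA = np.asarray(C_A, dtype=float) - np.asarray(C_B, dtype=float)  # (4)
    x_b, y_b, w_b = K_B @ (np.asarray(R_B, dtype=float) @ t_BA)         # (5)
    if w_b <= 0.0:
        return False  # behind the camera, so it cannot be imaged
    u_b, v_b = x_b / w_b, y_b / w_b                                     # (6)
    return 0.0 <= u_b <= W_B - 1 and 0.0 <= v_b <= H_B - 1              # (7)
```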
Similar calculations can be performed with respect to the visibility of the second movable object 2b in the first viewing zone 4a of the camera of the first movable object 2a. For this purpose, the vector −tBA is calculated and the initial or world coordinate system 32 is transformed into the first local coordinate system 34a of the first movable object 2a. Subsequently, the imaging matrix of the camera of the first movable object 2a is applied to the position of the second movable object 2b.
One option for defining the security zone 14a, 14b is to alter the imaging matrix KB. By reducing, for example, the focal length f (see formula (1)), the viewing angle of the camera of the second movable object 2b is widened. It is then greater than the viewing angle defining the viewing zone 4b, which is the real viewing angle of the camera. By calculations similar to those outlined with reference to the above formulas (1) to (8), it is determined whether the first movable object 2a is in the second security zone 14b of the second movable object 2b.
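A sketch of this first option in the same style; the margin factor and its default value are assumptions, chosen only to show the mechanism:

```python
def security_imaging_matrix(K_B, margin=0.8):
    """Return a copy of K_B with a virtually shortened focal length
    (0 < margin < 1). A shorter focal length widens the viewing angle,
    so the resulting virtual viewing zone encloses the real one and can
    serve as the security zone 14b."""
    K_sec = K_B.copy()
    K_sec[0, 0] *= margin  # focal length entry acting on the u-direction
    K_sec[1, 1] *= margin  # focal length entry acting on the v-direction
    return K_sec

# Reusing the is_in_viewing_zone sketch from above:
# in_security_zone = is_in_viewing_zone(
#     C_A, C_B, R_B, security_imaging_matrix(K_B), W_B, H_B)
```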
Another option for defining the security zone 14a, 14b is not to alter the imaging matrix KB but to define an offset δ of the position of the movable object 2a, 2b. This is illustrated in the figure.
In the local coordinate system 34b, the position of the second movable object 2b is shifted backwards by the amount δ. Hence, by application of the same imaging matrix KB, a security zone 14b is defined which includes the real viewing zone 4b. Again, if a collision of the first movable object 2a with the second security zone 14b is detected, the first movement vector 12a (see the figure) is changed such that the first movable object 2a does not enter the second viewing zone 4b.
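The offset variant can be sketched analogously; taking the camera's viewing axis to be the local z-axis, i.e. the third row of RB, is an assumption of this sketch:

```python
import numpy as np

def is_in_security_zone_offset(C_A, C_B, R_B, K_B, W_B, H_B, delta):
    """Security-zone test via a position offset: shift the camera centre
    backwards by delta along its viewing axis and reuse the unchanged
    imaging matrix K_B, as described in the text."""
    R = np.asarray(R_B, dtype=float)
    viewing_axis = R[2]  # assumed: the local z-axis is the optical axis
    C_B_shifted = np.asarray(C_B, dtype=float) - delta * viewing_axis
    t_BA = np.asarray(C_A, dtype=float) - C_B_shifted
    x_b, y_b, w_b = K_B @ (R @ t_BA)
    if w_b <= 0.0:
        return False
    u_b, v_b = x_b / w_b, y_b / w_b
    return 0.0 <= u_b <= W_B - 1 and 0.0 <= v_b <= H_B - 1
```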
The above example refers to a situation in which a “collision” between the first movable object 2a and the second viewing zone 4b or the second security zone 14b of the second movable object 2b, respectively, occurs. Similar mechanisms can be implemented for a plurality of movable objects 2a, 2b keeping clear of the viewing zones 4a, 4b and the security zones 14a, 14b of a camera or of a plurality of cameras. This in particular applies to a situation in which there is a peer of cameras, each camera being mounted on an unmanned airborne vehicle representing a movable object.
All named characteristics, including those taken from the drawings alone, and individual characteristics, which are disclosed in combination with other characteristics, are considered alone and in combination as important to the present solution. Embodiments according to the present solution can be fulfilled through individual characteristics or a combination of several characteristics. Features which are combined with the wording “in particular” or “especially” are to be treated as preferred embodiments.
Number | Date | Country | Kind
---|---|---|---
15307108.9 | Dec 2015 | EP | regional