The present application is based on and claims priority of European Patent Application No. 19217997.6 filed on Dec. 19, 2019, the entire contents of which are incorporated herein by reference.
The present disclosure relates to prioritization among cameras of a multi-camera arrangement.
By implementing a surveillance system, it may be possible to see what is happening in a specific surrounding—e.g. a part of a city, an open square, a city block, a road, an industrial site etc.—as it is happening. A multi-camera arrangement with cameras distributed geographically may provide real-time insights and information, which may be of interest, revealing and/or of help in one way or another. As a complement, a policeman or guard may carry alarm-supporting equipment enabling him or her, when deemed warranted—e.g. following an observed and/or reported criminal action—to trigger a geotagged alarm informing the surveillance system of said alarm along with a position of the policeman or guard. Moreover, additionally or alternatively, should for instance an object comprising a position-tracking device—such as e.g. a mobile phone or a vehicle—be observed and/or reported stolen or tampered with, then a geotagged alarm may be triggered in a similar manner, informing the surveillance system of said alarm along with a position of the object.
Commonly, when a geotagged alarm is triggered, an operator of such a surveillance system may select one or more cameras out of the multi-camera arrangement, to capture a surrounding covering the geotagged position.
Manual selection of cameras may, however, be inefficient and time-consuming, and further dependent on the operator's discretion, which is why there is a need for improvement.
It is therefore an object of embodiments herein to provide an approach for prioritizing, in an improved and/or alternative manner, one or more cameras out of a multi-camera arrangement.
The object above may be achieved by the subject-matter disclosed herein. Embodiments are set forth in the appended claims, in the following description and in the drawings.
The disclosed subject-matter relates to a method performed by an assessment system for prioritization among cameras of a multi-camera arrangement. The assessment system obtains respective geographical camera position and camera properties of each of the cameras. The assessment system further receives information data indicating a geographical object position and object features of a physical object positioned in a surrounding in a potential field of view of each of the cameras. Moreover, the assessment system determines—for each of the cameras—by comparing the object position and object features with the respective camera position and camera properties, a respective distance to the object position and a respective expected pixel size of the object at the respective distance. The assessment system furthermore compares respective image data of the surrounding derived from each of the cameras, with the respective expected pixel size. Moreover, the assessment system assigns each of the cameras a respective rating based on to what extent the respective image data corresponds to the respective expected pixel size.
The disclosed subject-matter further relates to an assessment system for—and/or adapted for—prioritization among cameras of a multi-camera arrangement. The assessment system comprises a camera obtaining unit for—and/or adapted for—obtaining respective geographical camera position and camera properties of each of the cameras. The assessment system further comprises a physical object receiving unit for—and/or adapted for—receiving information data indicating a geographical object position and object features of a physical object positioned in a surrounding in a potential field of view of each of the cameras. Moreover, the assessment system comprises an expectations determining unit for—and/or adapted for—determining for each of the cameras, by comparing the object position and object features with the respective camera position and camera properties, a respective distance to the object position and a respective expected pixel size of the object at the respective distance. The assessment system furthermore comprises a comparing unit for—and/or adapted for—comparing respective image data of the surrounding derived from each of the cameras, with the respective expected pixel size. Moreover, the assessment system comprises an assigning unit for—and/or adapted for—assigning each of the cameras a respective rating based on to what extent the respective image data corresponds to the respective expected pixel size.
Furthermore, the disclosed subject-matter relates to a surveillance system comprising an assessment system as described herein.
Moreover, the disclosed subject-matter relates to a computer program product comprising a computer program containing computer program code means arranged to cause a computer or a processor to execute the steps of the assessment system described herein, stored on a computer-readable medium or a carrier wave.
The disclosed subject-matter further relates to a non-volatile computer readable storage medium having stored thereon said computer program product.
Thereby, there is introduced an approach according to which it is assessed which camera is deemed best suited to capture a specific surrounding. That is, since the disclosure relates to prioritization among cameras of a multi-camera arrangement, and there is obtained respective geographical camera position and camera properties of each of the cameras, it may be established where each respective camera is located, along with intrinsic characteristics of the respective camera. Furthermore, since there is received information data indicating a geographical object position and object features of a physical object positioned in a surrounding in a potential field of view of each of the cameras, it may be established where a geotagged physical object—e.g. a policeman or guard carrying alarm-supporting equipment, or a person tampering with, or in possession of a stolen, position-tracking device—is located, along with characteristics of the object, e.g. indicating that said object is a human being. Moreover, since there is determined for each of the cameras, by comparing the object position and object features with the respective camera position and camera properties, a respective distance to the object position and a respective expected pixel size of the object at the respective distance, it may be established at what distance from each respective camera the object is located, and subsequently, each camera's expected size in pixels of the object at that respective distance. That is, the expected pixel size of the object is dependent on the distance to said object in combination with the camera properties, and accordingly, once the respective distance and camera properties are established, so may the respective expected pixel size be. Since there is further determined respective conformity of the respective expected pixel size with respective image data of the surrounding derived from each of the cameras, the respective image data—and/or one or more detected objects thereof—obtained from the cameras of the surrounding, in which the physical object is positioned and/or determined to be positioned, is compared to the respective expected pixel size of the physical object applicable for the respective camera. Accordingly, it may be established how well the respective expected pixel size conforms with the respective image data and/or one or more detected objects thereof, which may equate with to what extent the respective camera is able to capture and/or detect the physical object. Moreover, since each of the cameras is assigned a respective rating based on the respective determined conformity, each camera may be ranked in view of its ability to detect and/or capture the physical object—and/or the surrounding covering the physical object—thus enabling prioritization among said cameras. For instance, the higher the conformity, the higher the rating; and subsequently, the higher the rating, the higher the priority.
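By way of a non-limiting illustration only, the overall flow described above may be sketched as follows in Python, where all identifiers (rate_cameras, geo_distance, expected_pixel_size, conformity_score and the camera/detection structures) are hypothetical and merely exemplify one conceivable realization; the helper functions are sketched further below.

    # Illustrative, non-limiting sketch of the overall prioritization flow.
    # geo_distance, expected_pixel_size and conformity_score are sketched
    # in the detailed description below.

    def rate_cameras(cameras, object_position, object_height_m, detections_per_camera):
        # Assign each camera a rating reflecting how well its image data
        # conforms with the expected pixel size of the physical object.
        ratings = {}
        for cam in cameras:
            distance_m = geo_distance(cam.position, object_position)
            expected_px = expected_pixel_size(cam.properties, object_height_m, distance_m)
            ratings[cam.id] = conformity_score(detections_per_camera[cam.id], expected_px)
        return ratings

A camera with the highest such rating may then be treated as the most prioritized camera, as discussed in the following.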
For that reason, an approach is provided for prioritizing, in an improved and/or alternative manner, one or more cameras out of a multi-camera arrangement.
The technical features and corresponding advantages of the above mentioned method will be discussed in further detail in the following.
The various aspects of the non-limiting embodiments, including particular features and advantages, will be readily understood from the following detailed description and the accompanying drawings, in which:
Non-limiting embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which currently preferred embodiments of the disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Like reference characters refer to like elements throughout. Dashed lines of some boxes in the figures indicate that these units or actions are optional and not mandatory.
In the following, according to embodiments herein which relate to prioritization among cameras of a multi-camera arrangement, there will be disclosed an approach assessing which camera is deemed best suited to capture a specific surrounding.
Referring now to the figures and
Optionally, one or more cameras 21, 22, 23, 2n of the multi-camera arrangement 2 may be, and/or comprise, a—e.g. known—pan-tilt-zoom, PTZ, camera. Thereby, a more flexible field of view may be supported as compared to a fixed field of view.
The assessment system 1—and further the multi-camera arrangement 2—may be comprised in—and/or be in connection with—an exemplifying surveillance system 3. The exemplifying surveillance system 3 may refer to any—e.g. known—surveillance system covering a specific surrounding—such as e.g. a part of a city, an open square, a city block, a road, an industrial site etc.—comprising the necessary software and hardware for video management thereof.
“Assessment system” may refer to “viewability assessment system”, “camera assessment system”, “camera rating system”, “camera selection system” and/or “control system”, whereas the phrase “for prioritization among cameras” may refer to “for rating cameras”, “for assigning ratings to cameras”, “for prioritizing cameras” and/or “for selecting among cameras”. “Cameras of a multi-camera arrangement”, on the other hand, may refer to “cameras comprised in a multi-camera arrangement”, whereas “multi-camera arrangement” may refer to “plurality of cameras”. “Surveillance system” may refer to “camera surveillance system” and/or “surveillance system covering a specific surrounding, area and/or scene”.
The assessment system 1 is—e.g. by means of a camera obtaining unit 101 (shown and further described in the figures)—adapted for obtaining respective geographical camera position 211, 221, 231, 2n1 and camera properties 212, 222, 232, 2n2 of each of the cameras 21, 22, 23, 2n.
For instance, one camera—which may have a first setup of camera properties—may be positioned at a first position, whereas another camera—which may have a second setup of camera properties, or similar—may be positioned at a second position.
The camera positions 211, 221, 231, 2n1 and/or camera properties 212, 222, 232, 2n2 may be obtained in any arbitrary—e.g. known—manner, such as obtained from a data table, database and/or server holding such camera information. Additionally or alternatively, the camera positions 211, 221, 231, 2n1 and/or camera properties 212, 222, 232, 2n2 may be pre-stored in the assessment system 1, and/or derived from respective camera 21, 22, 23, 2n. The respective camera properties 212, 222, 232, 2n2 may refer to any properties of respective camera 21, 22, 23, 2n defining its intrinsic characteristics, such as resolution, zoom and/or image enhancement capability, e.g. noise reduction. Moreover, camera properties of one camera may differ from camera properties of another camera.
“Obtaining” respective geographical camera position and camera properties may refer to “deriving” and/or “determining” respective geographical camera position and camera properties, whereas “geographical camera position” may refer to merely “camera position”. “Camera properties”, on the other hand, may refer to “camera characteristics”, “intrinsic camera features”, “camera parameters” and/or “camera specification”, and according to an example further to “camera optical properties”. The phrase “obtaining respective geographical camera position and camera properties of each of said cameras” may refer to “obtaining, for each of said cameras, a respective geographical camera position and respective camera properties”, and/or to “determining respective geographical camera position and camera properties of each of said cameras from camera data indicating respective geographical camera position and camera properties”.
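Purely as a non-limiting example, the obtained camera positions and camera properties may be represented by records along the following lines; the Python structure and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class CameraProperties:
        image_height_px: int       # vertical resolution, e.g. 1080
        focal_length_mm: float     # intrinsic optics
        sensor_height_mm: float
        supports_ptz: bool = False

    @dataclass
    class Camera:
        id: str
        position: tuple            # geographical position, e.g. (lat, lon) in degrees
        properties: CameraProperties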
The assessment system 1 is—e.g. by means of a physical object receiving unit 102 (shown and further described in the figures)—adapted for receiving information data 43 indicating a geographical object position 41 and object features 42 of a physical object 4 positioned in a surrounding 5 in a potential field of view 213, 223, 233, 2n3 of each of the cameras 21, 22, 23, 2n.
The information data 43 may be received in any arbitrary—e.g. known—manner, such as received—e.g. via wire and/or wirelessly—from the exemplifying surveillance system 3 and/or a position/positioning retrieving system (not shown) associated with said surveillance system 3 and/or the assessment system 1. The physical object 4 may refer to any real object, such as e.g. a moving object and/or target, for instance a human being. According to an example, the physical object 4 may be represented by a vehicle. Moreover, the physical object 4 may be detected—and/or have been detected—in any arbitrary—e.g. known—manner, such as by means of image processing. The object features 42 may refer to any characteristics of the object 4, e.g. object type such as indicating that said object 4 e.g. is a human being, and/or characteristics indicating physical size—and/or proportions—of the object 4. Respective field of view 213, 223, 233, 2n3 of the cameras 21, 22, 23, 2n may be supported by default; additionally or alternatively, for instance should said cameras 21, 22, 23, 2n be represented by PTZ cameras, then said respective field of view 213, 223, 233, 2n3 may be “potential”, i.e. supported following panning, tilting and/or zooming.
“Receiving” information data may refer to “deriving” information data, and according to an example further to “receiving at a first point in time” information data. “Information data”, on the other hand, may refer to “an information message” and/or “an electronic and/or digital information message”, whereas “information data indicating” may refer to “information data revealing and/or reflecting”. “Geographical object position” may refer to merely “object position”, whereas “object features” may refer to “object characteristics”, “object properties” and/or “an object type”. “Physical object” may refer to “real object” and/or “target”, and according to an example further to “moving object” and/or “human being”. “Positioned” in a surrounding, on the other hand, may refer to “determined and/or estimated to be positioned” in a surrounding and/or “geotagged to be positioned” in a surrounding, whereas “surrounding in a potential field of view” may refer to “surrounding covered by a potential field of view” and/or “scene or area of a potential field of view”. “Potential” field of view may refer to “potential respective” field of view and/or “supported” field of view, whereas “potential field of view” may refer to merely “field of view”.
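Similarly, and purely as a non-limiting example, the received information data may be thought of as a message along the following lines; the structure and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class InformationData:
        object_position: tuple     # geotagged (lat, lon) of the physical object
        object_type: str           # object features, e.g. "human"
        object_height_m: float     # physical size, e.g. 1.8 m for a person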
The assessment system 1 is—e.g. by means of an expectations determining unit 103 (shown and further described in the figures)—adapted for determining, for each of the cameras 21, 22, 23, 2n, by comparing the object position 41 and object features 42 with the respective camera position 211, 221, 231, 2n1 and camera properties 212, 222, 232, 2n2, a respective distance D1, D2, D3, Dn to the object position 41 and a respective expected pixel size of the object 4 at the respective distance D1, D2, D3, Dn.
Respective distance D1, D2, D3, Dn to the object position 41 may refer to any arbitrary distance, for instance ranging from tens of metres up to hundreds or even thousands of metres. In exemplifying
“Determining” may in this context refer to “calculating”, whereas “expected pixel size” in this context may refer to “assumed pixel size”. Expected “pixel size”, on the other hand, may refer to expected “size in pixels”, and according to an example further to expected “pixel height” and/or “pixel area distribution”. The phrase “and a respective expected pixel size” may refer to “and, subsequently, a respective expected pixel size”. According to an example, the phrase “a respective expected pixel size of said object” may refer to “a respective expected pixel size of said object in a potential camera image”, “a respective expected pixel size of said object in a vertical or essentially vertical direction”, “a respective expected pixel size of said object in a vertical or essentially vertical direction and/or in a horizontal or essentially horizontal direction”, and/or “a respective expected shape and pixel size of said object”. Moreover, the phrase “determining for each of said cameras, by comparing said object position and object features with said respective camera position and camera properties, a respective distance to said object position and a respective expected pixel size of said object at said respective distance” may refer to “determining for each of said cameras, by comparing said object position with said respective camera position, a respective distance to said object position, and by comparing said camera properties with said object features and said respective distance, a respective expected pixel size of said object at said respective distance”.
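As a non-limiting sketch of how the respective distance and expected pixel size may be determined, the following assumes a great-circle (haversine) distance between the geotagged positions and a simple pinhole camera model; both are merely exemplifying choices, and the function names, as well as the CameraProperties fields from the earlier sketch, are hypothetical.

    import math

    def geo_distance(p1, p2):
        # Haversine great-circle distance in metres between (lat, lon) pairs.
        R = 6371000.0  # mean Earth radius in metres
        lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    def expected_pixel_size(props, object_height_m, distance_m):
        # Pinhole model: the object's projected height on the sensor is
        # focal_length * height / distance (in mm); scaling by the sensor's
        # pixel density converts it to an expected pixel size.
        projected_mm = props.focal_length_mm * object_height_m / distance_m
        return projected_mm / props.sensor_height_mm * props.image_height_px

For instance, a 1.8 m person at 100 m seen through 20 mm optics on a 5 mm, 1080-pixel sensor would be expected to project roughly 0.36 mm onto the sensor, i.e. about 78 pixels.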
As exemplified in the figures, the assessment system 1 is—e.g. by means of a conformity determining unit 104—adapted for determining respective conformity of the respective expected pixel size with respective image data 214, 224, 234, 2n4 of the surrounding 5 derived from each of the cameras 21, 22, 23, 2n.
During the conformity comparison and/or check, one or more objects detected in respective image data 214, 224, 234, 2n4 may for instance be identified as having an object type—e.g. human—corresponding to the object 4, whereby respective pixel sizes of these detected objects may be compared to the respective expected pixel sizes.
For instance, as depicted in exemplifying
Optionally, and as exemplified in
“Determining respective conformity of” may refer to “determining respective resemblance and/or matching of”, “determining a respective conformity of” and/or “comparing”, whereas “respective conformity” may refer to “respective conformity value and/or parameter” and/or merely “conformity”. “With respective image data”, on the other hand, may refer to “with respective at least a first detected object of image data”. According to an example, “with respective image data” may further refer to “with respective pixel size of at least a first detected object of image data”, “with respective pixel size of a portion—such as a head—of at least a first detected object of image data” and/or “with respective pixel size—in a vertical direction and/or in a horizontal direction—of at least a first detected object of image data”. The term “respective image data” may refer to “a respective image” and/or “a respective image frame”, whereas “derived” from each of the cameras may refer to “received and/or obtained” from each of the cameras and/or “captured by and derived” from each of the cameras.
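A minimal, non-limiting sketch of the conformity determination could compare the expected pixel size against the pixel sizes of objects of matching object type detected in the respective image data, for instance as a ratio in [0, 1]; the scoring choice and names are merely illustrative.

    def conformity_score(detected_px_sizes, expected_px):
        # detected_px_sizes: pixel heights of detected objects whose object
        # type matches the physical object (e.g. "human"). Returns the best
        # match as a ratio in [0, 1]; 1.0 means a detection exactly matches
        # the expectation, 0.0 means no matching detection at all.
        best = 0.0
        for detected_px in detected_px_sizes:
            ratio = min(detected_px, expected_px) / max(detected_px, expected_px)
            best = max(best, ratio)
        return best

With this choice, a camera whose image data contains no matching detection receives zero conformity, which naturally de-prioritizes cameras unable to capture the physical object.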
The assessment system 1 is—e.g. by means of an assigning unit 105 (shown and further described in the figures)—adapted for assigning each of the cameras 21, 22, 23, 2n a respective rating based on the respective determined conformity.
“Assigning each of said cameras a respective rating” may refer to “assigning digitally each of said cameras a respective rating” and/or “rating each of said cameras”, whereas “rating” may refer to “priority” and/or “ranking”. “Based on respective determined conformity”, on the other hand, may refer to “based on respective degree of determined conformity” and/or merely to “based on respective conformity”, whereas assigning “a respective rating” may refer to assigning “in an ascending or descending order a respective rating”. “Rating” may further refer to “rating indicative of a priority of the camera”.
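Assigning the ratings may then, purely by way of example, map conformity directly to a rating so that the higher the conformity, the higher the rating; the 0-100 scale below is hypothetical.

    def assign_ratings(conformities):
        # The higher the conformity, the higher the rating; here the rating
        # is simply the conformity expressed on a 0-100 scale.
        return {cam_id: round(100 * c) for cam_id, c in conformities.items()}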
Optionally, the rating of at least one camera of said cameras 21, 22, 23, 2n may additionally be based on at least one additional parameter. Thereby, consideration may be given to other criteria than merely the degree of conformity between respective image data 214, 224, 234, 2n4 and the respective expected pixel size of the physical object 4. Accordingly, depending on the at least one additional parameter, one camera representing a lower degree of conformity than another camera may nonetheless be assigned a more prioritized rating.
To what extent the one or more additional parameters affect the corresponding rating as compared to degree of conformity, may be selected as deemed appropriate for the implementation at hand. The at least one additional parameter may be derived in any arbitrary—e.g. known—manner, for instance from a data table, database and/or server holding such information, from the exemplifying surveillance system 3, from the at least one camera and/or from an external entity. “Be based on” at least one additional parameter may refer to “weigh in” and/or “take into consideration” at least one additional parameter, whereas “based on at least one additional parameter” may refer to “based on additional input data”.
Such an additional parameter may for instance comprise the camera properties 212, 222, 232, 2n2—or a selection thereof—of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to intrinsic characteristics of the camera 21, 22, 23, 2n, such as resolution, zoom and/or image enhancement capability e.g. noise reduction, affecting quality of the image data 214, 224, 234, 2n4 derived therefrom. For instance, one camera 21, 22, 23, 2n with camera properties 212, 222, 232, 2n2 representing high resolution may be considered to have a higher priority than another camera 21, 22, 23, 2n with camera properties 212, 222, 232, 2n2 representing lower resolution.
Additionally or alternatively, such an additional parameter may comprise the distance D1, D2, D3, Dn between the at least one camera 21, 22, 23, 2n and the object position 41. Thereby, consideration may be given to the distance D1, D2, D3, Dn between the physical object 4 and the camera 21, 22, 23, 2n. For instance, one camera 21, 22, 23, 2n with a relatively short distance D1, D2, D3, Dn to the object position 41 may be considered to have a higher priority than another camera 21, 22, 23, 2n with a relatively extensive distance D1, D2, D3, Dn to the object position 41.
Additionally or alternatively, such an additional parameter may comprise the geographical position 211, 221, 231, 2n1 of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to where the at least one camera 21, 22, 23, 2n is positioned, and for instance circumstances applicable for that position 211, 221, 231, 2n1. For instance, one camera 21, 22, 23, 2n positioned in vicinity of plural other cameras 21, 22, 23, 2n may be considered to have a higher priority than another camera 21, 22, 23, 2n positioned in vicinity of few other cameras 21, 22, 23, 2n.
Additionally or alternatively, such an additional parameter may comprise the potential field of view 213, 223, 233, 2n3 of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to coverage ability. For instance, one camera 21, 22, 23, 2n with a rather narrow potential field of view 213, 223, 233, 2n3 may be considered to have a higher priority than another camera 21, 22, 23, 2n with a wider potential field of view 213, 223, 233, 2n3.
Additionally or alternatively, such an additional parameter may comprise a pan, tilt and/or zoom capability of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to the camera's 21, 22, 23, 2n ability to pan, tilt and/or zoom. For instance, one camera 21, 22, 23, 2n with a relatively restricted ability to pan, tilt and/or zoom—e.g. a fixed camera—may be considered to have a higher priority than another camera 21, 22, 23, 2n with a less restricted ability to pan, tilt and/or zoom, e.g. a PTZ camera. Additionally or alternatively, such an additional parameter may comprise a spectrum of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to whether the camera 21, 22, 23, 2n is e.g. a visual light camera, thermal camera or IR camera. For instance, one camera 21, 22, 23, 2n supporting an IR spectrum may be considered to have a higher priority than another camera 21, 22, 23, 2n supporting a visual light spectrum.
Additionally or alternatively, such an additional parameter may comprise an auto tracking capability of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to the camera's 21, 22, 23, 2n ability of auto tracking. For instance, one camera 21, 22, 23, 2n with a relatively high ability to auto track—e.g. the physical object 4—may be considered to have a higher priority than another camera 21, 22, 23, 2n with a lower ability to auto track.
Additionally or alternatively, such an additional parameter may comprise an orientation of the physical object 4 relative to the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to e.g. what direction the physical object 4—e.g. a person tampering with, or in possession of a stolen, position-tracking device—is determined to be turned towards. For instance, one camera 21, 22, 23, 2n which the physical object 4 is determined to be facing or essentially facing may be considered to have a higher priority than another camera 21, 22, 23, 2n from which the physical object 4 is turned away.
Additionally or alternatively, such an additional parameter may comprise an unavailability of the at least one camera 21, 22, 23, 2n. Thereby, consideration may be given to whether a camera 21, 22, 23, 2n is occupied, reserved and/or for any arbitrary reason not available or merely available to a limited extent. For instance, one camera 21, 22, 23, 2n which is available may be considered to have a higher priority than another camera 21, 22, 23, 2n which is unavailable, e.g. occupied.
Additionally or alternatively, such an additional parameter may comprise a current point in time. Thereby, consideration may be given to what time it is, such as date, day of week, time of day etc. For instance, one camera 21, 22, 23, 2n may be considered to have a higher priority than another camera 21, 22, 23, 2n depending on what time and/or date it is.
Additionally or alternatively, such an additional parameter may comprise a weather condition. Thereby, consideration may be given to what weather it is, such as rain, snow and/or fog. For instance, one camera 21, 22, 23, 2n may be considered to have a higher priority than another camera 21, 22, 23, 2n depending on the weather conditions.
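To illustrate how one or more of the additional parameters exemplified above may be weighed in alongside the degree of conformity, consider the following non-limiting weighted-sum sketch; the selection of parameters, the normalizations and the weights are all hypothetical and would be chosen as deemed appropriate for the implementation at hand.

    def weighted_rating(conformity, distance_m, resolution_px, object_facing_camera,
                        w_conf=0.6, w_dist=0.2, w_res=0.1, w_orient=0.1):
        # Each term is normalized to [0, 1]; a shorter distance, a higher
        # resolution and an object facing the camera all raise the rating,
        # so a camera with slightly lower conformity may still rank highest.
        dist_score = 1.0 / (1.0 + distance_m / 100.0)
        res_score = min(resolution_px / 2160.0, 1.0)
        orient_score = 1.0 if object_facing_camera else 0.0
        return (w_conf * conformity + w_dist * dist_score
                + w_res * res_score + w_orient * orient_score)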
Optionally, the assessment system 1 may—e.g. by means of an optional camera selecting unit 106 (shown and further described in the figures)—select a camera 23 out of said cameras 21, 22, 23, 2n based on the respective rating.
“Selecting” a camera may refer to “the assessment system selecting” a camera and/or “selecting subsequently” a camera, whereas the phrase “selecting a camera out of said cameras based on said respective rating” according to an example may refer to “selecting a camera out of said cameras with the highest and/or most prioritized rating”.
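Selecting the camera with the most prioritized rating may then, as a trivial non-limiting sketch, amount to:

    def select_camera(ratings):
        # Returns the id of the camera with the highest (most prioritized)
        # rating, e.g. from the ratings produced by rate_cameras above.
        return max(ratings, key=ratings.get)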
Optionally, as exemplified in the figures, the assessment system 1 may—e.g. by means of an optional subsequent object receiving unit 107—receive, at a subsequent point in time, subsequent information data 83 indicating a geographical subsequent object position 81 and subsequent object features 82 of a subsequent physical object 8 positioned in the surrounding 5.
The subsequent point in time may refer to any arbitrary feasible time, for instance ranging from a few milliseconds up to minutes, hours, days or even years from when the previously discussed actions may have taken place e.g. at an exemplifying first point in time. Moreover, the subsequent information data 83 may be received in any arbitrary—e.g. known—manner, such as received—e.g. via wire and/or wirelessly—from the exemplifying surveillance system 3 and/or a position/positioning retrieving system (not shown) associated with said surveillance system 3 and/or the assessment system 1. The subsequent physical object 8 may refer to any real object, such as e.g. a moving object and/or target, for instance a human being. According to an example, the subsequent physical object 8 may be represented by a vehicle. The subsequent physical object 8, here exemplified as a human being, may—or may not—refer to the previously discussed physical object 4. The subsequent object features 82 may refer to any characteristics of the object 8, e.g. object type, such as indicating that said object 8 e.g. is a human being.
“Receiving” subsequent information data may refer to “deriving” subsequent information data, whereas “subsequent” may refer to “second”. “Subsequent information data”, on the other hand, may refer to “a subsequent information message” and/or “a subsequent electronic and/or digital information message”, whereas “subsequent information data indicating” may refer to “subsequent information data revealing and/or reflecting”. “Geographical subsequent object position” may refer to merely “subsequent object position”, whereas “subsequent object features” may refer to “subsequent object characteristics”, “subsequent object properties” and/or “a subsequent object type”. “Subsequent physical object” may refer to “subsequent real object” and/or “subsequent target”, and according to an example further to “moving subsequent object” and/or “subsequent human being”. “Positioned” in the surrounding, on the other hand, may refer to “positioned essentially” in the surrounding, “determined and/or estimated to be positioned” in the surrounding, and/or “geotagged to be positioned” in the surrounding.
Further optionally, the assessment system 1 may—e.g. by means of an optional subsequent expectations determining unit 108 (shown and further described in the figures)—determine, for the selected camera 23, by comparing the subsequent object position 81 and subsequent object features 82 with the camera position 231 and camera properties 232 of the selected camera 23, a subsequent distance Ds to the subsequent object position 81 and an expected subsequent pixel size of the subsequent object 8 at the subsequent distance Ds.
The subsequent distance Ds to the subsequent object position 81 may refer to any arbitrary distance, for instance ranging from tens of metres up to hundreds or even thousands of metres. The expected subsequent pixel size may refer to any arbitrary size—e.g. in a vertical and/or a horizontal direction—for instance ranging from a few pixels up to hundreds or even thousands of pixels. Moreover, the expected subsequent pixel size may refer to outer contours of the subsequent physical object 8, and further comprise a combination of two or more pixel sizes, for instance a combination of a vertical pixel size and horizontal pixel size, and/or a plurality of vertical pixel sizes and a plurality of horizontal pixel sizes.
“Determining” may in this context refer to “calculating”, whereas “expected subsequent pixel size” in this context may refer to “assumed subsequent pixel size”. Expected subsequent “pixel size”, on the other hand, may refer to expected subsequent “size in pixels”, and according to an example further to expected subsequent “pixel height”. The phrase “and an expected subsequent pixel size” may refer to “and, subsequently, an expected subsequent pixel size”. According to an example, the phrase “an expected subsequent pixel size of said subsequent object” may refer to “an expected subsequent pixel size of said subsequent object in a vertical or essentially vertical direction”. Additionally or alternatively, the foregoing phrase may moreover refer to “an expected subsequent pixel size of said subsequent object in a vertical or essentially vertical direction and/or in a horizontal or essentially horizontal direction”. Said phrase may further, additionally or alternatively, refer to “an expected subsequent shape and pixel size of said subsequent object”. Moreover, the phrase “determining for the selected camera, by comparing said subsequent object position and subsequent object features with said camera position and camera properties of the selected camera, a distance to said subsequent object position and an expected subsequent pixel size of said subsequent object at said subsequent distance” may refer to “determining for said selected camera, by comparing said subsequent object position with said camera position of the selected camera, a subsequent distance to said subsequent object position, and by comparing said camera properties of the selected camera with said subsequent object features and said subsequent distance, an expected subsequent pixel size of said subsequent object at said subsequent distance”. “Subsequent distance” may refer to merely “distance”.
Moreover optionally, and as exemplified in the figures, the assessment system 1 may—e.g. by means of an optional subsequent conformity determining unit 109—determine conformity of the expected subsequent pixel size with subsequent image data 238 of the surrounding 5 derived from the selected camera 23.
For instance, as depicted in exemplifying
According to an example, and as exemplified in
“Determining conformity of” may refer to “determining resemblance and/or matching of”, “determining a conformity of” and/or “comparing”, whereas “conformity” may refer to “conformity value and/or parameter” and/or “subsequent conformity”. “With subsequent image data”, on the other hand, may refer to “with at least a first detected object of said subsequent image data”. According to an example, “with subsequent image data” may further refer to “with a pixel size of at least a first detected object of said subsequent image data”, “with a pixel size of a portion—such as a head—of at least a first detected object of said subsequent image data” and/or “with a pixel size—in a vertical direction and/or in a horizontal direction—of at least a first detected object of said subsequent image data”. The term “subsequent image data” may refer to “a subsequent image” and/or “a subsequent image frame”, whereas “derived” from the selected camera may refer to “received and/or obtained” from the selected camera and/or “captured by and derived” from the selected camera.
Yet further optionally, the assessment system 1 may—e.g. by means of an optional other-camera selecting unit 110 (shown and further described in the figures)—select a camera 21, 22, 2n other than the selected camera 23 out of said cameras 21, 22, 23, 2n, based on the assigned respective rating, when the subsequent conformity is below a viewability threshold.
The viewability threshold may be set to any arbitrary level deemed suitable for the implementation at hand, for instance ranging from tens of percent up to about 95 percent or above.
“Selecting” a camera may refer to “the assessment system selecting” a camera and/or “selecting subsequently” a camera, whereas the phrase “selecting a camera based on the assigned said respective rating” according to an example may refer to “selecting a camera with the next highest and/or next most prioritized rating”. “Subsequent conformity”, on the other hand, may refer to merely “conformity”, whereas “viewability threshold” may refer to “predeterminable viewability threshold”, “viewability threshold value”, “conformity threshold” and/or merely “threshold”. The phrase “when the subsequent conformity is below a viewability threshold” may refer to “should the subsequent conformity fall below a viewability threshold”.
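As a non-limiting sketch of this optional re-selection, the selected camera may be kept while the subsequent conformity stays at or above the viewability threshold, and otherwise the next most prioritized camera may be selected; the threshold value and names below are merely illustrative.

    def reselect_if_needed(selected_id, ratings, subsequent_conformity,
                           viewability_threshold=0.5):
        # Keep the selected camera while it still views the subsequent object
        # well enough; otherwise fall back to the next best rated camera.
        if subsequent_conformity >= viewability_threshold:
            return selected_id
        others = {cam_id: r for cam_id, r in ratings.items() if cam_id != selected_id}
        return max(others, key=others.get)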
As further shown in
Further shown in
In Action 1001, the assessment system 1 obtains—e.g. with support from the camera obtaining unit 101—respective geographical camera position 211, 221, 231, 2n1 and camera properties 212, 222, 232, 2n2 of each of the cameras 21, 22, 23, 2n.
In Action 1002, the assessment system 1 receives—e.g. with support from the physical object receiving unit 102—information data 43 indicating a geographical object position 41 and object features 42 of a physical object 4 positioned in a surrounding 5 in a potential field of view 213, 223, 233, 2n3 of each of the cameras 21, 22, 23, 2n.
In Action 1003, the assessment system 1 determines—e.g. with support from the expectations determining unit 103—for each of the cameras 21, 22, 23, 2n, by comparing the object position 41 and object features 42 with the respective camera position 211, 221, 231, 2n1 and camera properties 212, 222, 232, 2n2, a respective distance D1, D2, D3, Dn to the object position 41 and a respective expected pixel size of the object 4 at the respective distance D1, D2, D3, Dn.
In Action 1004, the assessment system 1 determines—e.g. with support from the conformity determining unit 104—respective conformity of the respective expected pixel size, with respective image data 214, 224, 234, 2n4 of the surrounding 5 derived from each of the cameras 21, 22, 23, 2n.
In Action 1005, the assessment system 1 assigns—e.g. with support from the assigning unit 105—each of the cameras 21, 22, 23, 2n a respective rating based on respective determined conformity.
In optional Action 1006, the assessment system 1 may select—e.g. with support from the optional camera selecting unit 106—a camera 23 out of said cameras 21, 22, 23, 2n based on the respective rating.
In optional Action 1007, the assessment system 1 may receive—e.g. with support from the subsequent object receiving unit 107—at a subsequent point in time, subsequent information data 83 indicating a subsequent object position 81 and subsequent object features 82 of a subsequent physical object 8 positioned in the surrounding 5.
In optional Action 1008, the assessment system 1 may determine—e.g. with support from the subsequent expectations determining unit 108—for the selected camera 23, by comparing the subsequent geographical object position 81 and subsequent object features 82 with the camera position 231 and camera properties 232 of the selected camera 23, a subsequent distance Ds to the subsequent object position 81 and an expected subsequent pixel size of the subsequent object 8 at the subsequent distance Ds.
In optional Action 1009, the assessment system 1 may determine—e.g. with support from the subsequent conformity determining unit 109—conformity of the expected subsequent pixel size with subsequent image data 238 of the surrounding 5 derived from the selected camera 23.
In optional Action 1010, the assessment system 1 may select—e.g. with support from the other-camera selecting unit 110—a camera 21, 22, 2n other than the selected camera 23 out of said cameras 21, 22, 23, 2n, based on the assigned respective rating, when the subsequent conformity is below a viewability threshold.
The person skilled in the art realizes that the present disclosure by no means is limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims. It should furthermore be noted that the drawings are not necessarily to scale and that the dimensions of certain features may have been exaggerated for the sake of clarity. Emphasis is instead placed upon illustrating the principle of the embodiments herein. Additionally, in the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality.