The present inventive concepts relate to the field of stereo sensors, and more particularly to the field of camera heads using stereo cameras.
A stereo sensor, at a high level, is a sensor that forms a single product, result, or output from the simultaneous inputs of a pair of sensors or detectors. For example, a stereo camera is a pair of cameras that generates a single view of an imaged entity or location from image information received from both cameras. Each camera in a stereo camera has a field of view (FOV), and the fields of view of the two cameras can be combined to give an overall field of view for the stereo camera. In a stereo camera, the fields of view tend to overlap.
The “stereo” nature of a stereo sensor allows for the determination of range information. It can also enable imaging in three dimensions, rather than only two.
Stereo cameras are well known, and have been used in many applications. As examples, stereo cameras have been found to have particular utility in providing three-dimensional (3D) imaging for mapping environments and navigating through them. In such uses, it is not uncommon to use multiple stereo cameras to increase the overall field of view of a system that uses such stereo cameras as an input.
For example, U.S. Pat. No. 7,446,766 demonstrates a use of stereo cameras for building evidence grids representing a physical environment and navigating through the environment.
In accordance with one aspect of the present disclosure, provided is a multi-camera head, comprising a head frame, a plurality of stereo cameras mounted to the head frame and arranged around an axis, and at least one stereo camera mounted to a top of the head frame, and across the axis.
In various embodiments, the plurality of stereo cameras can be pitched toward the axis at a pitch angle relative to vertical.
In various embodiments, the pitch angle can be in a range from about 5° to about 15° relative to the vertical axis.
In various embodiments, the pitch angle can be in a range from about 10° to about 12° relative to the vertical axis.
In various embodiments, the pitch angle can be about 11.25° relative to the vertical axis.
In various embodiments, at least one of the plurality of stereo cameras includes two lenses in a plane, where the two lenses are offset at an angle relative to a horizontal axis of the plane.
In various embodiments, the offset angle can be about 45°.
In various embodiments, each of the plurality of stereo cameras can include two lenses in a respective plane, where the two lenses in each respective plane are offset at an angle of about 45° relative to a horizontal axis of the plane.
In various embodiments, the plurality of stereo cameras can be four stereo cameras.
In various embodiments, the multi-camera head can further comprise a body disposed between at least two stereo cameras from the plurality of stereo cameras.
In accordance with another aspect of the invention, provided is a multi-camera head, comprising four stereo cameras mounted to four respective sides of a head frame and arranged around a vertical axis, and an elongated body disposed between a first pair of adjacent sides and a second pair of adjacent sides.
In various embodiments, the multi-camera head can further comprise at least one stereo camera mounted between the four stereo cameras, and across the vertical axis.
In various embodiments, the four stereo cameras can be pitched at a pitch angle toward the vertical axis.
In various embodiments, the pitch angle can be about 11.25° relative to the vertical axis.
In various embodiments, the camera lenses of at least one stereo camera can be offset at an offset angle relative to a horizontal axis.
In various embodiments, the offset angle can be about 45°.
In various embodiments, the multi-camera head can be coupled to a robotic vehicle.
In various embodiments, the multi-camera head can further comprise at least one light stack configured to generate outputs indicating a predetermined condition or state.
In accordance with another aspect of the invention, provided is a computer-implemented method of analyzing a pitch angle of a plurality of stereo cameras disposed around an axis in a multi-camera head. The method comprises modeling the multi-camera head as a point source at the center of a computer-generated sphere, including defining a field of view of each stereo camera, entering a pitch angle for each stereo camera, graphically modeling the sphere, and graphically projecting a field of view of each stereo camera onto the sphere.
In various embodiments, the multi-camera head can include a top stereo camera disposed between the plurality of stereo cameras disposed around the axis, and the method can further comprise graphically projecting a field of view of the top stereo camera onto the sphere.
In various embodiments, the method can further comprise, in response to a user input altering a pitch angle of at least one stereo camera, graphically re-projecting the field of view of each stereo camera onto the sphere to display the altered pitch angle.
In accordance with aspects of the present invention, provided is a multi-camera head as shown in the drawings and described herein.
In accordance with aspects of the present invention, provided is a robotic vehicle having a multi-camera head as shown in the drawings and described herein.
In various embodiments, the robotic vehicle can be an autonomous or unmanned vehicle, e.g., a pallet truck or tugger.
In accordance with aspects of the present invention, provided is a computer-implemented method of analyzing a pitch angle of a plurality of stereo cameras disposed around an axis in a multi-camera head as shown in the drawings and described herein.
The present invention will become more apparent in view of the attached drawings and accompanying detailed description. The embodiments depicted therein are provided by way of example, not by way of limitation, wherein like reference numerals refer to the same or similar elements. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating aspects of the invention. In the drawings:
Hereinafter, aspects of the present invention will be described by explaining illustrative embodiments in accordance therewith, with reference to the attached drawings. While describing these embodiments, detailed descriptions of well-known items, functions, or configurations are typically omitted for conciseness.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another, but not to imply a required sequence of elements. For example, a first element can be termed a second element, and, similarly, a second element can be termed a first element, without departing from the scope of the present invention. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being “on” or “connected” or “coupled” to another element, it can be directly on or connected or coupled to the other element or intervening elements can be present. In contrast, when an element is referred to as being “directly on” or “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
Spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like may be used to describe an element and/or feature's relationship to another element(s) and/or feature(s) as, for example, illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use and/or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” and/or “beneath” other elements or features would then be oriented “above” the other elements or features. The device may be otherwise oriented (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
A “stereo camera” is a type of camera having at least two lenses, each with a separate image sensor, whose outputs are cooperatively processed to form a stereo image. This enables the capture of three-dimensional images, the creation of stereo views and 3D images, and range imaging. The distance between the lenses in a typical stereo camera (the intra-axial distance) is about the distance between a person's eyes (known as the intra-ocular distance), i.e., about 6.35 cm, although stereo cameras can have other intra-axial distances.
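By way of illustration only, range can be recovered from a stereo pair through the disparity between corresponding image points. The following is a minimal sketch assuming an ideal, rectified pinhole-camera pair; the focal length and disparity values are hypothetical, and the 6.35 cm baseline is the typical intra-axial distance noted above.

```python
# Depth from stereo disparity under an ideal rectified pinhole model:
# Z = f * B / d, with f the focal length in pixels, B the baseline
# (intra-axial distance) in meters, and d the disparity in pixels.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float = 0.0635) -> float:
    """Return the range in meters to a point given its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical values: a 700-pixel focal length and a 20-pixel disparity
# give a range of about 2.2 m with the 6.35 cm baseline.
print(depth_from_disparity(20.0, 700.0))
```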
In this embodiment, multi-camera head 100 includes five stereo cameras 110 mounted on different sides of a head frame 150, e.g., sides A through E, as shown in the drawings.
Imaging the ground is not desired in the present embodiment, since it is not particularly useful in the exemplary mapping and navigation context (e.g., for robotic vehicles). Therefore, there is no downwardly projecting camera at the bottom of the head frame. In other embodiments, however, such a downwardly projecting camera could be desirable. In this embodiment, each lens 114a, 114b of each stereo camera 110 is a DSL series lens, by Sunex, and has a field of view of about 90 degrees or more. As will be appreciated by those skilled in the art, the present invention is not limited to these specific stereo cameras.
Each stereo camera 110 includes a printed circuit board (PCB) 112, to which two camera lenses 114a and 114b are mounted. The PCB includes electronics that process image information received via lenses 114a, 114b into digital image information that is output to an apparatus connected to the stereo camera 110, such as a robotic vehicle, e.g., a robotic warehouse vehicle. Such stereo image processing logic is known in the art, so it is not described in detail herein. The stereo cameras 110 are mounted to head frame 150 by screws securing the PCBs 112 to respective frame sides A through E, in this embodiment.
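Although the on-board stereo processing of the PCBs 112 is not detailed here, one common way to compute a disparity map from a rectified image pair is block matching, e.g., using OpenCV. This is an illustrative sketch only, not the head's actual processing logic; the file names are placeholders.

```python
import cv2

# Load a rectified left/right image pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise FileNotFoundError("rectified stereo pair not found")

# Classic block-matching stereo correspondence; numDisparities must be
# a multiple of 16 and blockSize an odd number.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)  # int16, scaled by 16

# Convert to floating-point disparities in pixels.
disparity = disparity.astype("float32") / 16.0
```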
The head frame 150 is made of a rigid material in this embodiment, such as a metal, fiberglass, or molded plastic. Each of the five sides A through E includes a mounting surface to which a respective stereo camera 110 can be mounted. In this embodiment, the mounting surfaces of sides A through D take the form of mounting plates 152. Mounting plates 152 (and sides A through D) are generally vertically disposed in this embodiment. And mounting surface E takes the form of a top frame member or plate 154 that is generally horizontally disposed. A bottom frame member or plate 156 is also provided in this embodiment, which is opposite and substantially parallel to the top frame member 154.
The bottom frame member 156, in this embodiment, defines an opening 158 (partially visible) that accommodates the passage of communication wires or cables, a mast of a robotic vehicle that uses the multi-camera head for data gathering and/or navigation, or a combination thereof. In this embodiment, therefore, it is presumed that a mast will be generally centrally disposed within head frame 150. However, the invention is not so limited. In other embodiments a mast or other support (e.g., a surface of a vehicle, equipment, or other structure) could be mounted at any one or more locations on the head frame, preferably not occluding the view of the cameras.
In this embodiment, a top of each mounting plate 152 is secured to top frame member 154 by screws and a bottom of each mounting plate 152 is secured to bottom frame member 156 by other screws. The resulting structure forms the substantially rigid head frame 150. In other embodiments, as an example, the entire head frame 150 could be a single, cast piece.
As is also visible from the drawings, the mounting plates 152, and thus the stereo cameras 110 mounted to them, are pitched toward the axis. In this embodiment, the pitch angle of each mounting plate 152 is the same as the pitch angle of the camera lenses, because lenses 114a and 114b lie in a plane that is parallel to the associated mounting plate 152. Therefore, the pitch angle of the mounting plate is transferred to the lenses in this embodiment.
In projection patterns (a) through (d), the stereo cameras 110 discussed above were used. In projection pattern (a), α=0° with respect to vertical, i.e., β=90° with respect to horizontal (or the ground surface). In projection pattern (b), α=5° with respect to vertical, i.e., β=85° with respect to horizontal. In projection pattern (c), α=10° with respect to vertical, i.e., β=80° with respect to horizontal. And in projection pattern (d), α=11.25° with respect to vertical, i.e., β=78.75° with respect to horizontal, as in the embodiment described above.
In each of projection patterns (a) through (d), the top camera 110E is as described above. Accordingly, the projection from top camera 110E appears on top of the sphere, and is denoted as ProjE. Projection patterns from the four side stereo cameras 110, one on each of sides A through D, are denoted as ProjA, ProjB, ProjC, and ProjD, respectively.
As can be seen, changing the pitch angle α, β changes the FOV coverage collectively formed by projection patterns ProjA, ProjB, ProjC, and ProjD, and the overall FOV coverage when also considering projection ProjE. The determination of a preferred pitch angle α, β can be a function of many considerations, including how the multi-camera head 100 is to be used. In the present embodiment, given the exemplary stereo cameras, head frame, camera orientations on the frame, and the context of 3D mapping and navigation, the considerations include minimizing distortion, minimizing the number of cameras, and maximizing useful views. Given that, in this embodiment projection pattern (d), with a pitch angle of α=11.25° with respect to vertical (β=78.75° with respect to horizontal), is presently preferred. As will be appreciated by those skilled in the art, a different pitch angle, or no pitch angle (projection pattern (a)), could be preferred in other embodiments.
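The relative FOV coverage of projection patterns (a) through (d) can be estimated numerically. The following is a minimal, self-contained sketch under simplifying assumptions that are not part of the embodiment above: each camera's FOV is approximated as a circular cone of 90° full angle about its boresight (the actual FOVs are rectangular and can exceed 90°), the four side boresights are elevated by the pitch angle α above the horizon at 90° azimuth steps, and the top camera points straight up.

```python
import numpy as np

def boresights(alpha_deg: float) -> np.ndarray:
    """Unit boresight vectors: four side cameras elevated alpha_deg above
    the horizon at 90-degree azimuth steps, plus one top camera (110E)."""
    a = np.radians(alpha_deg)
    dirs = [np.array([np.cos(a) * np.cos(az), np.cos(a) * np.sin(az), np.sin(a)])
            for az in np.radians([0.0, 90.0, 180.0, 270.0])]
    dirs.append(np.array([0.0, 0.0, 1.0]))
    return np.stack(dirs)

def coverage(alpha_deg: float, half_fov_deg: float = 45.0,
             n: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo fraction of the sphere inside at least one FOV cone."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    hits = pts @ boresights(alpha_deg).T >= np.cos(np.radians(half_fov_deg))
    return hits.any(axis=1).mean()

for alpha in (0.0, 5.0, 10.0, 11.25):  # projection patterns (a) through (d)
    print(f"alpha = {alpha:6.2f} deg -> coverage ~ {coverage(alpha):.3f}")
```

Note that coverage fraction alone does not capture distortion or the usefulness of particular views, which the embodiment above also weighs.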
In this embodiment, a multi-camera head 100′ is provided that includes a body 710 between sides. In this embodiment, sides A and D remain substantially adjacent to each other and sides B and C remain substantially adjacent to each other, with body 710 disposed in between. Side E is disposed within a top surface 712 of the body 710, between sides A and D and sides B and C. The body 710 can also include first and second sides 714, 716 and a bottom 718 (see the drawings).
The arrangement and orientation of the stereo cameras 110 can be substantially the same as that described above, as can the pitch angle α (or β) of the lenses 114a, 114b. The axial displacement and heights of the lenses 114a, 114b can also be the same as discussed above.
In various embodiments, a light stack 720 can be provided between sides A and D and/or sides B and C. The light stack can include one or more lights that can be used as external indicators of certain conditions of the multi-camera head 100′ or a system with which the multi-camera head 100′ communicates. In the case where the multi-camera head is coupled to a manned or unmanned vehicle or other piece of mobile equipment, the light stack could include light signals indicating vehicle or equipment statuses or warnings, as examples. Audio outputs can alternatively or additionally be added to the light stack 720, body 710, or otherwise to multi-camera head 100′ (or multi-camera head 100).
In step 1220, a pitch angle of the four side cameras is entered or defined within the computer. In step 1230, a sphere is modeled by the computer, with the multi-camera head at its center. In step 1240, the FOVs of the cameras of the multi-camera head are projected from the center onto the sphere, which can be shown graphically on a computer screen. Projection patterns (a) through (d), discussed above, are examples of such projections.
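A minimal sketch of steps 1230 and 1240 follows, under the same simplifying cone-model assumptions as the earlier coverage sketch: the sphere is drawn as a wireframe, and the edge of each camera's FOV cone is projected onto it from the center. The helper names and the matplotlib rendering are illustrative choices, not elements of the method.

```python
import numpy as np
import matplotlib.pyplot as plt

def fov_ring(boresight: np.ndarray, half_fov_deg: float, n: int = 100) -> np.ndarray:
    """Points on the unit sphere along the edge of a camera's FOV cone."""
    b = boresight / np.linalg.norm(boresight)
    helper = np.array([1.0, 0.0, 0.0]) if abs(b[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(b, helper); u /= np.linalg.norm(u)  # basis perpendicular
    v = np.cross(b, u)                               # to the boresight
    t, ang = np.radians(half_fov_deg), np.linspace(0.0, 2.0 * np.pi, n)
    return (np.cos(t) * b[None, :]
            + np.sin(t) * (np.cos(ang)[:, None] * u + np.sin(ang)[:, None] * v))

# Boresights for alpha = 11.25 degrees (pattern (d)): four side cameras
# plus the top camera 110E pointing straight up.
a = np.radians(11.25)
bores = [np.array([np.cos(a) * np.cos(z), np.cos(a) * np.sin(z), np.sin(a)])
         for z in np.radians([0.0, 90.0, 180.0, 270.0])]
bores.append(np.array([0.0, 0.0, 1.0]))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Step 1230: model the sphere, drawn here as a light wireframe.
th, ph = np.meshgrid(np.linspace(0, np.pi, 15), np.linspace(0, 2 * np.pi, 30))
ax.plot_wireframe(np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th),
                  color="lightgray", linewidth=0.3)
# Step 1240: project each FOV edge onto the sphere from its center.
for bs, label in zip(bores, ["ProjA", "ProjB", "ProjC", "ProjD", "ProjE"]):
    ring = fov_ring(bs, 45.0)
    ax.plot(ring[:, 0], ring[:, 1], ring[:, 2], label=label)
ax.legend()
plt.show()
```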
In some embodiments, the computer could enable graphical interaction with the sphere and/or the FOV projections. For example, a user could be allowed to “grab” a FOV projection and move it, and the computer could adjust the other FOV projections and output the resultant pitch angle. In another embodiment, the computer could be enabled to maximize FOV coverage for the entire sphere or a designated portion thereof. In yet another embodiment, different cameras could be defined within the computer, and the computer could comparatively show FOV projections of the different cameras on the same sphere, or recommend cameras or camera combinations for best achieving a defined set of requirements, e.g., maximizing FOV coverage for the sphere or a designated portion of the sphere.
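As one way the coverage-maximizing variant might be realized, a coarse grid search over candidate pitch angles can be run against a coverage estimator. This self-contained sketch repeats the simplified cone-model estimator from above; the 0° to 20° search range and 0.25° step are assumptions, not values from this disclosure.

```python
import numpy as np

def coverage(alpha_deg: float, half_fov_deg: float = 45.0,
             n: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo sphere-coverage estimate for pitch angle alpha_deg
    (four side cameras at 90-degree azimuth steps plus a top camera)."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    a = np.radians(alpha_deg)
    bores = [np.array([np.cos(a) * np.cos(z), np.cos(a) * np.sin(z), np.sin(a)])
             for z in np.radians([0.0, 90.0, 180.0, 270.0])]
    bores.append(np.array([0.0, 0.0, 1.0]))
    hits = pts @ np.stack(bores).T >= np.cos(np.radians(half_fov_deg))
    return hits.any(axis=1).mean()

# Coarse grid search for the pitch angle maximizing total coverage.
candidates = np.arange(0.0, 20.25, 0.25)  # assumed search range, degrees
best = max(candidates, key=coverage)
print(f"pitch angle maximizing coverage: ~{best:.2f} deg "
      f"(coverage ~ {coverage(best):.3f})")
```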
One or more input devices 1310 are included to provide data, information, and/or instructions to one or more processing elements or devices 1320. Input device 1310 can be or include one or more of a keyboard, mouse, touch screen, keypad, or microphone, as examples.
The processing element/device 1320 can be or include a computer processor or microprocessor, as examples. Processing element or device 1320 can retrieve, receive, and/or store data and information, e.g., in electronic form, from a computer storage system 1330.
Computer storage system 1330 can be or include one or more non-transitory storage media or systems for computer data, information, and instructions, such as electronic memory, optical memory, magnetic memory, and the like. A computer storage medium can, for example, be volatile or non-volatile memory and take the form of a drive, disk, chip, or various forms thereof, and can include read-only memory (ROM) and random access memory (RAM).
The processing element/device 1320 can output data and information to one or more output devices 1340. Such output devices can include any of a variety of computer screens and displays, speakers, communications ports or interfaces, a network, or a separate system, as examples. In the case of touch screens, input devices 1310 and output devices 1340 can be merged into a single device.
In one embodiment, output devices 1340 include a computer display that renders screens including spherical projections like the projection patterns discussed above.
While the foregoing has described what are considered to be the best mode and/or other preferred embodiments, it is understood that various modifications can be made therein and that the invention or inventions may be implemented in various forms and embodiments, and that they may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim that which is literally described and all equivalents thereto, including all modifications and variations that fall within the scope of each claim.
This is a continuation-in-part application that claims the benefit of priority under 35 U.S.C. §120 from United States Design Application serial number 29/398,127, entitled MULTI-CAMERA HEAD, filed on Jul. 26, 2011, which is incorporated herein by reference in its entirety. This is a continuation-in-part application that claims the benefit of priority under 35 U.S.C. §120 from U.S. patent application Ser. No. 13/731,897, filed Dec. 31, 2012, entitled AUTO-NAVIGATING VEHICLE WITH FIELD-OF-VIEW ENHANCING SENSOR POSITIONING AND METHOD OF ACCOMPLISHING SAME, which claimed priority from U.S. Provisional Application 61/581,863, filed Dec. 30, 2011, entitled ROBOTIC VEHICLE WITH OPERATOR FIELD OF VIEW ENHANCING SENSOR POSITIONING AND METHOD OF ACCOMPLISHING SAME, which are incorporated herein by reference in their entireties.
Related U.S. Application Data:
Provisional application No. 61/581,863, filed Dec. 2011 (US).
Parent application Ser. No. 29/398,127, filed Jul. 2011 (US); child application Ser. No. 13/836,619 (US).
Parent application Ser. No. 13/731,897, filed Dec. 2012 (US); child application Ser. No. 29/398,127 (US).