System and method for detecting pedestrians using a single normal camera

Information

  • Patent Grant
  • Patent Number
    10,043,067
  • Date Filed
    Monday, December 3, 2012
  • Date Issued
    Tuesday, August 7, 2018
Abstract
The present application provides a pedestrian detection system and method. A pedestrian detection method includes obtaining an image captured by a camera and identifying a pedestrian candidate in the image. According to the method, the pedestrian candidate is confirmed by transforming the image into a top view image, calculating the actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera, and determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range.
Description
CROSS REFERENCE TO RELATED APPLICATION

The present application is a U.S. National Phase of International Patent application Ser. No. PCT/CN2012/085727, entitled “SYSTEM AND METHOD FOR DETECTING PEDESTRIANS USING A SINGLE NORMAL CAMERA,” and filed on Dec. 3, 2012, the entire contents of which are hereby incorporated by reference for all purposes.


TECHNICAL FIELD

The present application generally relates to a system and method for detecting pedestrians using a single normal camera.


BACKGROUND

Various pedestrian detection technologies have been developed and used in vehicles to detect pedestrians in the vicinity of a vehicle and remind the driver of them. Some solutions are based on radar, some on multiple cameras, some on laser, and some on infrared cameras, but all of these solutions share the same drawback: high cost. Although some conventional solutions using a single normal camera are low cost, these solutions produce many false positives in order to achieve a high detection rate. For examples of such solutions, see N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection”, CVPR, 2005; P. Dollar, C. Wojek, B. Schiele and P. Perona, “Pedestrian Detection: An Evaluation of the State of the Art”, PAMI, 2011; D. Geronimo, A. M. Lopez, A. D. Sappa and T. Graf, “Survey of Pedestrian Detection for Advanced Driver Assistance Systems”, PAMI, 2010; and M. Enzweiler and D. M. Gavrila, “Monocular Pedestrian Detection: Survey and Experiments”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, 2009. In view of the above, there is a need for a more robust method and system for detecting pedestrians using a single normal camera.


SUMMARY

In one embodiment of the present application, a pedestrian detection method is provided. The method includes: obtaining an image captured by a camera; identifying a pedestrian candidate in the image; transforming the image into a top view image; calculating the actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera; and determining whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range.
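For illustration, the following Python sketch shows how these steps might be chained together. The callable parameters (identify, to_top_view, estimate_height) are hypothetical placeholders standing in for the steps recited above, not a disclosed implementation.

```python
def detect_pedestrians(image, camera, identify, to_top_view, estimate_height,
                       height_range=(1.0, 2.4)):
    """Hypothetical glue code for the claimed flow.

    identify, to_top_view, and estimate_height are placeholder callables for
    the candidate-identification, top view transformation, and height
    calculation steps described in this summary."""
    candidates = identify(image)              # identify pedestrian candidates
    top_view = to_top_view(image, camera)     # transform the image into a top view
    return [c for c in candidates             # keep only true positives
            if height_range[0]
            <= estimate_height(c, top_view, camera)
            <= height_range[1]]
```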


In some embodiments, the extrinsic parameters of a camera may include pitch angle α, yaw angle β, and installation height h.


In some embodiments, the image captured by the camera may be transformed into the top view image using intrinsic parameters of the camera, such as focal lengths fu and fv, and optical center coordinates cu and cv. In some embodiments, if the lens of the camera is a fish-eye lens, the top view transformation matrix may be:












$$
{}^{i}_{g}T =
\begin{bmatrix}
-\dfrac{1}{f_u}\,c_2 & \dfrac{1}{f_v}\,s_1 s_2 & -\dfrac{1}{f_u}\,c_2 c_u - \dfrac{1}{f_v}\,c_v s_1 s_2 - c_1 s_2 \\[6pt]
\dfrac{1}{f_u}\,s_2 & \dfrac{1}{f_v}\,s_1 c_1 & -\dfrac{1}{f_u}\,c_u s_2 - \dfrac{1}{f_v}\,c_v s_1 c_2 - c_1 c_2 \\[6pt]
0 & \dfrac{1}{f_v}\,c_1 & -\dfrac{1}{f_v}\,c_v c_1 - s_1
\end{bmatrix}
\qquad \text{Equation (1)}
$$








where c1 = cos α, s1 = sin α, c2 = cos β, and s2 = sin β. If the camera uses a different lens, the top view transformation matrix may be different.


In some embodiments, the coordinates of a point in a top view image may be calculated by multiplying the coordinates of the point in the image by the top view transformation matrix.
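By way of illustration only, the Python sketch below assembles a 3×3 matrix with the entries of Equation (1) and applies it to a pixel in homogeneous coordinates. The function name, the use of homogeneous coordinates with a final normalization, and the numeric parameter values are assumptions made for the example, not values taken from the disclosure.

```python
import numpy as np

def top_view_matrix(alpha, beta, fu, fv, cu, cv):
    """Assemble the top view transformation matrix of Equation (1).

    alpha: pitch angle, beta: yaw angle (radians); fu, fv: focal lengths;
    cu, cv: optical center coordinates."""
    c1, s1 = np.cos(alpha), np.sin(alpha)
    c2, s2 = np.cos(beta), np.sin(beta)
    return np.array([
        [-c2 / fu, s1 * s2 / fv, -c2 * cu / fu - cv * s1 * s2 / fv - c1 * s2],
        [ s2 / fu, s1 * c1 / fv, -cu * s2 / fu - cv * s1 * c2 / fv - c1 * c2],
        [ 0.0,     c1 / fv,      -cv * c1 / fv - s1],
    ])

# Example with arbitrary camera parameters: map one image pixel (u, v).
T = top_view_matrix(alpha=0.2, beta=0.0, fu=800.0, fv=800.0, cu=640.0, cv=360.0)
p = T @ np.array([320.0, 400.0, 1.0])  # image point in homogeneous coordinates
x, y = p[0] / p[2], p[1] / p[2]        # top view coordinates after normalization
```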


In some embodiments, the method may further include: distortion correcting the image to obtain a corrected image; and transforming the corrected image into the top view image.


In some embodiments, the method may further include: generating an alert if the pedestrian candidate is determined to be a true positive.


In one embodiment of the present application, a pedestrian detection system is provided. The pedestrian detection system includes: an output device; and a processing device configured to: obtain an image captured by a camera; identify a pedestrian candidate in the image; transform the image into a top view image; calculate the actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera; determine whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range; and control the output device to generate an alert if the pedestrian candidate is determined to be a true positive.


In some embodiments, the processing device may be further configured to: distortion correct the image to obtain a corrected image; and transform the corrected image into the top view image.


In some embodiments, the pedestrian detection system may further include the camera.


In one embodiment of the present application, a pedestrian detection system is provided. The pedestrian detection system includes: an output device; and a processing device to: obtain an image captured by a camera; identify a pedestrian candidate in the image; transform the image into a top view image; calculate the actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera; determine whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range; and control the output device to generate an alert if the pedestrian candidate is determined to be a true positive.


In one embodiment of the present application, a pedestrian detection system is provided. The pedestrian detection system includes: a device to identify a pedestrian candidate in an image captured by a camera; a device to transform the image into a top view image; a device to calculate the actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera; a device to determine whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range; and an output device to generate an alert if the pedestrian candidate is determined to be a true positive.


Only a single normal camera is required by the method and system of the present application to detect pedestrians, so the cost of a vehicle-mounted pedestrian detection system can be reduced. In addition, the method and system of the present application can be used in existing vehicle models that have only a single camera configured to capture images of the view ahead, so it is very convenient to add this function to such vehicle models. For example, this function can be added simply by updating the software of a Driving Assistant System of an existing vehicle model. Furthermore, the method and system of the present application do not require motion information, so the computational complexity can be greatly decreased.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.



FIG. 1 illustrates a schematic flow chart of a pedestrian detection method 100 according to one embodiment of the present application.



FIG. 2A illustrates an example image captured by a camera.



FIG. 2B illustrates an example image obtained by distortion correcting the image shown in FIG. 2A.



FIG. 3A illustrates that a pedestrian candidate is identified in the corrected image shown in FIG. 2B.



FIG. 3B illustrates a top view image transformed from the corrected image shown in FIG. 2B.



FIG. 4 illustrates a schematic diagram of a vehicle and a pedestrian.



FIG. 5 illustrates a schematic diagram of how to calculate the actual height of a pedestrian candidate.



FIG. 6 illustrates an example image in which a detection result is presented.



FIG. 7 illustrates a schematic block diagram of a system for detecting pedestrians according to one embodiment of the present application.





DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this disclosure.



FIG. 1 illustrates a schematic flow chart of a pedestrian detection method 100 according to one embodiment of the present application.


In 101, obtain an image captured by a camera. FIG. 2A illustrates an example of an image captured by a camera.


In 103, apply distortion correction to the image captured by the camera to obtain a corrected image. In many cases, an image captured by a camera, especially a wide angle camera, has distortion, and distortion correction may be used to reduce the influence of such distortion on subsequent processing. Since distortion correction technologies are well known in the art, they will not be described in detail here. FIG. 2B illustrates an example of a corrected image obtained by distortion correcting the image shown in FIG. 2A.
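As one possible way to carry out this step (an illustration using OpenCV's standard undistortion routine, not necessarily the technique used in the embodiments; the calibration values below are arbitrary examples):

```python
import cv2
import numpy as np

# Assumed calibration data for illustration; in practice the camera matrix K
# and the distortion coefficients come from calibrating the actual camera.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

image = cv2.imread("frame.jpg")               # image captured by the camera (FIG. 2A)
corrected = cv2.undistort(image, K, dist)     # corrected image (cf. FIG. 2B)
```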


In 105, identify a pedestrian candidate in the corrected image. For some examples of such technologies, refer to “Histograms of Oriented Gradients for Human Detection” by Navneet Dalal and Bill Triggs, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. II, pages 886-893, June 2005; “Real-Time Human Detection Using Contour Cues” by Jianxin Wu, Christopher Geyer, and James M. Rehg, Proc. 2011 IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, May 2011, pp. 860-867; and “Fast Pedestrian Detection Using a Cascade of Boosted Covariance Features”, IEEE Transactions on Circuits and Systems for Video Technology, 2008.


In some algorithms, an identified pedestrian candidate may be enclosed by a rectangle; in some algorithms, an identified pedestrian candidate may be enclosed by an oval. FIG. 3A illustrates an example of an image in which a pedestrian candidate is identified and enclosed by a rectangle 201.
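For instance, the HOG-with-linear-SVM people detector bundled with OpenCV (in the spirit of the Dalal-Triggs reference above, though not necessarily the detector used in the embodiments) returns exactly such rectangles:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

corrected = cv2.imread("corrected.jpg")       # e.g. the corrected image of FIG. 2B
rects, weights = hog.detectMultiScale(corrected, winStride=(8, 8), scale=1.05)
# Each entry of rects is (x, y, w, h): a rectangle enclosing a pedestrian
# candidate, comparable to rectangle 201 in FIG. 3A.
```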


In 107, transform the corrected image into a top view image. FIG. 3B illustrates an example of a top view image transformed from the image shown in FIG. 3A. As one can see, only a part of the pedestrian candidate is contained in the top view image shown in FIG. 3B; this part will be referred to as the segmented part hereinafter. How to transform an image into a top view image is well known in the art and will not be described in detail here.
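One common way to produce such a top view, sketched here under the assumption of a planar road surface, is to warp the corrected image with a 3×3 ground-plane homography. In this sketch the homography is derived from four hand-picked point correspondences with arbitrary example coordinates; this is only one of several ways to obtain it (the SUMMARY above derives a transformation from the camera parameters instead).

```python
import cv2
import numpy as np

corrected = cv2.imread("corrected.jpg")

# Four points on the road surface in the corrected image (src) and where they
# should land in the top view (dst); the coordinates are illustrative only.
src = np.float32([[300, 700], [980, 700], [560, 420], [720, 420]])
dst = np.float32([[300, 700], [980, 700], [300, 100], [980, 100]])
H = cv2.getPerspectiveTransform(src, dst)

top_view = cv2.warpPerspective(corrected, H, (1280, 720))   # cf. FIG. 3B
```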



FIG. 4 illustrates a vehicle 301, having a camera 303 mounted thereon, running on a road surface 305, and a pedestrian 307 in front of the vehicle 301. In the illustrative embodiment shown in FIG. 4, the pitch angle of the camera 303 is α, the yaw angle of the camera 303 equals zero, and the installation height of the camera 303 is h. For the sake of convenience, the yaw angle of the camera 303 is set to zero in this embodiment; if the yaw angle does not equal zero, the subsequent computation may be more complicated. The pitch angle of a camera is the angle between the x axis and the projection of the principal axis of the camera on the plane defined by the x axis and the z axis. The yaw angle of a camera is the angle between the x axis and the projection of the principal axis of the camera on the plane defined by the x axis and the y axis.


In 109, calculate an actual height of the pedestrian candidate based on the top view image and the extrinsic parameters of the camera 303. FIG. 5 illustrates the relationship between the various dimensions. In FIG. 5, the installation height h and the pitch angle α of the camera 303 are known, so d can be calculated according to Equation (1).









d = h / tan α  Equation (1)








Referring back to FIG. 3B, d1 represents the actual dimension of the projection of the segmented part on the road surface 305, and d2 represents the actual horizontal distance between the camera 303 and the pedestrian candidate. The ratio r1 = d2/d1 can be calculated from the top view image, and d1 can then be calculated according to Equation (2).










d1 = d / (1 + r1)  Equation (2)








After d1 is calculated, the actual height of the segmented part H1 can be calculated according to Equation (3).

H1 = d1 × tan α  Equation (3)


According to the top view transformation algorithm, the ratio r2 = H1/H2 can be calculated; thus, the actual height H2 of the pedestrian candidate can be calculated according to Equation (4).










H2 = H1 / r2  Equation (4)








In 111, calculate the actual horizontal distance d2 between the camera and the pedestrian candidate. Since the ratio r1 = d2/d1 and d1 are both known, d2 can be calculated according to Equation (5).

d2 = d1 × r1  Equation (5)


According to the above embodiment, the actual height of the pedestrian candidate H2 is calculated based on the ratio r1 of d2 to d1 and extrinsic parameters of the camera. In other words, the actual height of the pedestrian candidate H2 is calculated based on the position of the pedestrian candidate in the top view image and extrinsic parameters of the camera.
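Putting Equations (1) through (5) together, the height and distance recovery can be sketched in a few lines; the numeric inputs (installation height, pitch angle, and the ratios r1 and r2) are arbitrary illustrative values, not data from the disclosure:

```python
import math

def candidate_height_and_distance(h, alpha, r1, r2):
    """Recover the actual height H2 and horizontal distance d2 of a candidate.

    h: installation height of the camera, alpha: pitch angle (radians),
    r1 = d2/d1 and r2 = H1/H2, both measured from the top view image."""
    d = h / math.tan(alpha)     # Equation (1)
    d1 = d / (1.0 + r1)         # Equation (2)
    H1 = d1 * math.tan(alpha)   # Equation (3)
    H2 = H1 / r2                # Equation (4)
    d2 = d1 * r1                # Equation (5)
    return H2, d2

# Example: camera mounted 1.2 m high and pitched down 10 degrees.
H2, d2 = candidate_height_and_distance(h=1.2, alpha=math.radians(10.0),
                                       r1=4.0, r2=0.15)
is_true_positive = 1.0 <= H2 <= 2.4   # predetermined pedestrian height range
```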


In 113, determine whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined height range. If the actual height of the pedestrian candidate is outside the height range, the pedestrian candidate may be determined to be a false positive; otherwise, the pedestrian candidate may be determined to be a true positive. In one embodiment, the height range may be from 1 meter to 2.4 meters. The lower limit and the upper limit of the height range may be set according to the specific situation; for example, they may be set lower for Asia and higher for Europe. For example, the lower limit may be 0.8 meter, 0.9 meter, 1.1 meters, or 1.2 meters, and the upper limit may be 2 meters, 2.1 meters, 2.2 meters, 2.3 meters, or 2.5 meters. The above numbers are only for illustrative purposes and are not intended to be limiting.


In 115, output the result. When a pedestrian is detected, a notice may be presented to a user such as a driver. In some embodiments, a detected pedestrian may be enclosed by a rectangle in the image, and the actual distance between the camera and the detected pedestrian may also be provided in the image as shown in FIG. 6. In some embodiments, a sound alert may be generated when a pedestrian is detected.



FIG. 7 illustrates a system 400 for detecting pedestrians. The system 400 includes a camera 401, a processing device 403, a memory device 405, a sound alert generator 407, and a display device 409. The system 400 may be mounted on a vehicle to detect pedestrians in the vicinity of the vehicle and remind the driver of them.


The camera 401 is configured to capture images. The processing device 403 may be configured to perform steps 103 to 113 of the method 100. The memory device 405 may store an operating system and program instructions.


When a pedestrian is detected, the processing device 403 may send an instruction to control the sound alert generator 407 to generate a sound alert, may control the display device 409 to present the detected pedestrian by enclosing the pedestrian in a rectangle in the image, and may control the display device 409 to present the actual distance between the detected pedestrian and the camera 401. In some embodiments, the actual distance between the detected pedestrian and the vehicle on which the system 400 is mounted may be calculated and presented on the display device 409.
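As an illustration of this kind of presentation (a sketch using OpenCV drawing primitives; the rectangle, distance value, and window name are arbitrary examples, not the disclosed implementation):

```python
import cv2

image = cv2.imread("frame.jpg")
x, y, w, h = 520, 260, 90, 220   # rectangle of a confirmed pedestrian (example values)
d2 = 6.8                         # distance between pedestrian and camera, in meters

cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)   # enclose the pedestrian
cv2.putText(image, f"{d2:.1f} m", (x, y - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)     # show the distance
cv2.imshow("Pedestrian detection", image)                      # present to the driver
cv2.waitKey(1)
```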


There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally a design choice representing cost vs. efficiency tradeoffs. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.


While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims
  • 1. A pedestrian detection method comprising: obtaining an image captured by a camera;identifying a pedestrian candidate in the image;transforming the image into a top view image;calculating an actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera, the actual height of the pedestrian candidate comprising a full height of the pedestrian candidate extending from a head of the pedestrian candidate to a foot of the pedestrian candidate; anddetermining whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range from a first height to a second height.
  • 2. The method of claim 1, wherein the extrinsic parameters of the camera comprise pitch angle α, yaw angle β, and installation height h.
  • 3. The method of claim 1, further comprising: distortion correcting the image to obtain a corrected image; and transforming the corrected image into the top view image.
  • 4. The method of claim 1, further comprising: generating an alert if the pedestrian candidate is determined to be a true positive.
  • 5. The method of claim 1, wherein the first height of the predetermined pedestrian height range is 1 meter and wherein the second height of the predetermined pedestrian height range is 2.4 meters.
  • 6. A pedestrian detection system comprising: an output device; anda processing device configured to: obtain an image captured by a camera; identify a pedestrian candidate in the image; transform the image into a top view image; calculate an actual height of the pedestrian candidate based on the top view image and extrinsic parameters of the camera; determine whether the pedestrian candidate is a true positive by determining whether the actual height of the pedestrian candidate is within a predetermined pedestrian height range, the predetermined pedestrian height range including a non-zero lower limit and a non-zero upper limit that is different from the non-zero lower limit; and control the output device to generate an alert if the pedestrian candidate is determined to be a true positive.
  • 7. The system of claim 6, wherein the extrinsic parameters of the camera comprise pitch angle α, yaw angle β, and installation height h.
  • 8. The system of claim 6, further comprising the camera.
  • 9. The system of claim 6, wherein the processing device is further configured to: distortion correct the image to obtain a corrected image; and transform the corrected image into the top view image.
  • 10. The system of claim 6, wherein the predetermined pedestrian height range is from 1 meter to 2.4 meters.
  • 11. The method of claim 4, wherein the alert is a sound alert, and wherein the extrinsic parameters of the camera comprise pitch angle α, yaw angle β, and installation height h.
  • 12. The method of claim 11, further comprising: distortion correcting the image to obtain a corrected image; and transforming the corrected image into the top view image.
  • 13. The method of claim 1, further comprising presenting the true positive detected pedestrian to a vehicle driver via a display device in a vehicle by enclosing the pedestrian in the image.
  • 14. The method of claim 13, further comprising controlling the display device to present a distance between the true positive detected pedestrian and the camera.
  • 15. The method of claim 14, wherein the pedestrian is enclosed with a rectangle on the display device.
  • 16. The system of claim 6, further comprising the camera, wherein the processing device is further configured to: distortion correct the image to obtain a corrected image; and transform the corrected image into the top view image.
  • 17. The system of claim 16, wherein the predetermined pedestrian height range is from 1 meter to 2.4 meters, and wherein the alert is a sound alert.
  • 18. The system of claim 6, further comprising a display device, wherein the processing device is further configured to present the true positive detected pedestrian to a vehicle driver via the display device in a vehicle by enclosing the pedestrian in the image.
  • 19. The system of claim 18, wherein the processing device is further configured to control the display device to present a distance between the true positive detected pedestrian and the camera.
  • 20. The system of claim 19, wherein the pedestrian is enclosed with a rectangle on the display device.
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2012/085727 12/3/2012 WO 00
Publishing Document Publishing Date Country Kind
WO2014/085953 6/12/2014 WO A
Non-Patent Literature Citations (12)
Entry
D. Varga, T. Szirányi, A. Kiss, L. Spórás, and L. Havasi. A Multi-View Pedestrian Tracking Method in an Uncalibrated Camera Network. Proceedings of the IEEE International Conference on Computer Vision Workshops, 37-44, 2015.
Goubet, E., "Pedestrian Tracking Using Thermal Infrared Imaging," SPIE Defense & Security Symposium, May 2006.
Dalal, N. et al., “Histograms of Oriented Gradients for Human Detection,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR2005, vol. 1, Jun. 2005, 8 pages.
Enzweiler, M. et al., “Monocular Pedestrian Detection: Survey and Experiments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, No. 12, Dec. 2009, 17 pages.
Geronimo, D. et al., “Survey of Pedestrian Detection for Advanced Driver Assistance Systems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 7, Jul. 2010, 20 pages.
Dollar, P., et al., “Pedestrian Detection: An Evaluation of the State of the Art,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 4, Aug. 2011, 20 pages.
ISA State Intellectual Property Office of the People's Republic of China, International Search Report Issued in International Application No. PCT/CN2012/085727, Sep. 12, 2013, WIPO, 2 pages.
Paisitkriangkrai, S. et al., “Fast Pedestrian Detection Using a Cascade of Boosted Covariance Features,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, No. 8, Aug. 2008, 12 pages.
Wu, J. et al., “Real-Time Human Detection Using Contour Cues,” 2011 IEEE International Conference on Robotics and Automation (ICRA), May 2011, 8 pages.
Momeni-K., M. et al., “Height Estimation from a Single Camera View,” Proceedings of the International Conference on Computer Vision Theory and Applications (VISIGRAPP 2012), Feb. 24, 2012, Rome, Italy, 7 pages.
Salih, Y. et al., “Depth and Geometry from a Single 2D Image Using Triangulation,” Proceedings of the 2012 IEEE International Conference on Multimedia and Expo Workshops (ICMEW '12), Jul. 9, 2012, Melbourne, Australia, 5 pages.
European Patent Office, Extended European Search Report Issued in Application No. 12889460.7, dated Jul. 1, 2016, 6 pages.
Related Publications (1)
Number Date Country
20150332089 A1 Nov 2015 US