This application is based on Japanese Patent Application No. 2014-213240 filed with the Japan Patent Office on Oct. 17, 2014, the entire contents of which are incorporated herein by reference.
Field
The present invention relates to a technology for estimating information on a sensing area from an image of a camera.
Related Art
Nowadays, there is an increasing need to estimate a person's flow line, a floor surface, and the shape of a room or a passage based on the result of detecting the person from an image photographed with a camera. This kind of technology is applied to structure recognition of a sensing area in a monitoring camera or an image sensor, and has been incorporated in household electrical appliances.
For example, in a technology disclosed in Unexamined Japanese Patent Publication No. 2013-24534, whether the person moves is detected using the image of the camera, a height of the person is estimated when the person moves, an obstacle in a room is detected from a movement history of the person, and the detection result of the obstacle is used in air conditioning control of an air conditioning apparatus (air conditioner). Unexamined Japanese Patent Publication No. 2002-197463 discloses a method for determining stillness, movement, or a posture of the person by monitoring the change in the coordinate of the top of the head among plural frame images. Unexamined Japanese Patent Publication No. 2013-37406 discloses a method for estimating a height using a detection result of a head in an image.
(1) A technique of calculating an existence position (a position in the real world) of the person by estimating a depth distance (a distance between the camera and the person) from a size of the person in the image and (2) a technique of calculating the existence position of the person through a triangulation principle by photographing the person from two directions using two cameras are well known as typical techniques of calculating the existence position of the person from the image of the camera.
However, in technique (1), distance estimation accuracy is not stable, because the depth distance is estimated based on the size of the person in the image, and the detected size varies largely depending on image quality, resolution, and the physical constitution, body shape, and posture of the person. On the other hand, technique (2) requires two cameras, so that device cost increases compared with a device using a monocular camera. Moreover, the images of the two cameras are separately processed and a result is obtained by combining the pieces of information on the images, so the processing may become complicated.
One or more embodiments of the present invention provide a technology for improving the estimation accuracy by a simple method in the processing of detecting the person from the image of the camera and estimating information on the sensing area based on a detection result.
According to one or more embodiments of the present invention, an area information estimating device includes a camera configured to photograph a sensing area, a person detector configured to detect a person from an image of the camera, and a position calculator configured to calculate an existence position of the person in the sensing area based on a coordinate on the image in which the person is detected by the person detector. At this point, the person detector excludes, from the detection target, a person for whom the relationship between the coordinate and the size on the image does not satisfy a predetermined condition.
In the above configuration, the person for whom the relationship between the coordinate and the size on the image does not satisfy the predetermined condition (that is, data degrading the estimation accuracy) is excluded, so that the information on the sensing area can accurately be estimated. It is only necessary to determine whether the relationship between the coordinate and the size on the image satisfies the predetermined condition. Therefore, the processing is simple and can easily be performed at high speed.
The person detector detects an upper body of the person, and the position calculator may obtain a coordinate on a foot image of the person detected by the person detector from a coordinate and a size on an upper body image of the person, and calculate the existence position of the person in the sensing area from the coordinate on the foot image. In the above configuration, the existence position is calculated from the foot coordinate (that is, the coordinate of a point that is in contact with the floor surface), so that the distance accuracy can be improved. Therefore, the information on the sensing area can accurately be estimated.
The person detector may change the predetermined condition between the case that the detected person is in a sitting position and the case that the detected person is in a standing position. The case of the sitting position differs from the case of the standing position in the coordinate on the image, so that both the existence positions of the person in the sitting position and the person in the standing position can accurately be obtained by changing the predetermined condition according to the coordinate on the image.
The person detector performs upper body detection for detecting the upper body of the person and whole body detection for detecting a whole body of the person. The person detector may determine that the person is in the standing position when the upper body of the person is detected in the upper body detection and the whole body of the person is detected in the whole body detection, and may determine that the person is in the sitting position when the upper body of the person is detected in the upper body detection while the whole body of the person is not detected in the whole body detection. Therefore, the person in the sitting position and the person in the standing position can be classified by the simple method.
According to one or more embodiments of the present invention, an area information estimating device includes a camera configured to photograph a sensing area, a person detector configured to detect a person from an image of the camera, and a position calculator configured to calculate an existence position of the person in the sensing area based on a coordinate on the image in which the person is detected by the person detector. At this point, the person detector sets, as the detection target, only a person whose whole body in a standing position is photographed in the image. In the above configuration, a sitting person or a partially-hidden person can be excluded from the detection target using only the detection result of the whole body. Therefore, the information on the sensing area can accurately be estimated.
The position calculator may calculate the existence position of the person detected by the person detector in the sensing area from a coordinate of a foot image of the person. In the above configuration, the existence position is calculated from the foot coordinate (that is, the coordinate of a point that is in contact with the floor surface), so that the distance accuracy can be improved. Therefore, the information on the sensing area can accurately be estimated.
According to one or more embodiments of the present invention, an area information estimating device includes a camera configured to photograph a sensing area, a person detector configured to detect a person from an image of the camera, and a position calculator configured to calculate an existence position of the person in the sensing area based on a coordinate on the image in which the person is detected by the person detector. At this point, the person detector detects an upper body of the person, and the position calculator obtains a coordinate on a foot image of the person detected by the person detector from a coordinate and a size on an upper body image of the person, and calculates the existence position of the person in the sensing area from the coordinate on the foot image. In the above configuration, the existence position is calculated from the foot coordinate (that is, the coordinate of a point that is in contact with the floor surface), so that the distance accuracy can be improved. Therefore, the information on the sensing area can accurately be estimated.
The position calculator may obtain the coordinate of the foot by a calculation using a ratio based on the upper body size. Generally, because the proportions of the human body are roughly constant, a proper estimation result can be obtained merely by calculating a ratio of the upper body size. Therefore, the processing is simple.
The area information estimating device may further include an estimator configured to estimate an existence allowable area, where a person can exist in the sensing area, based on a distribution of the existence positions of the persons detected in a predetermined period. The estimator may superpose each detected existence position on a plan view in the form of a probability density distribution, and determine an area where a cumulative probability becomes greater than or equal to a predetermined level as the existence allowable area. In this configuration, an influence of the detection error can be reduced by treating each detected existence position as a probability density distribution, and the existence allowable area of the person can accurately be estimated.
An area information estimating device according to one or more embodiments of the present invention may include at least a part of the above configurations and functions. According to one or more embodiments of the present invention, an air conditioning apparatus, a monitoring camera, an image sensor, a robot vision system, a computer vision system, or a household electrical appliance may include the area information estimating device. An area information estimating method according to one or more embodiments of the present invention may include at least a part of the above pieces of processing. A program according to one or more embodiments of the present invention may cause a computer to perform the area information estimating method. According to one or more embodiments of the present invention, a computer-readable recording medium may store such a program in a non-transitory manner. A combination of the above configurations and pieces of processing is within the scope of the present invention, as long as a technical inconsistency is not generated.
As described above, according to one or more embodiments of the present invention, the estimation accuracy can be improved by the simple method in the processing of detecting the person from the image of the camera and estimating information on the sensing area based on the detection result.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Unless explicitly stated to be so limited, the present invention is not limited to a size, a material, a shape, a relative position, and the like of a component described in the following embodiments. In embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid obscuring the invention.
(Apparatus Configuration)
An entire configuration of an air conditioning apparatus (hereinafter, referred to as an “air conditioner”) according to a first embodiment of the present invention will be described with reference to the drawings.
The air conditioner 1 mainly includes an area information estimating device 2, an area information storage 3, an air conditioning controller 4, a temperature sensor 5, a heat exchanger 6, and a blast fan 7. The area information on the room in which the air conditioner 1 is installed is stored in the area information storage 3. The area information estimating device 2 includes a camera 21, a person detector 22, a position calculator 23, a position storage 24, and an estimator 25.
The air conditioning controller 4 is constructed with a processor, a memory, and the like. The processor executes a program to decide a running condition such as a desired temperature, a desired wind direction, and a desired air flow, and controls the heat exchanger 6 and the blast fan 7 such that the air conditioner 1 operates on the running condition. The air conditioning controller 4 can also decide the running condition in consideration of the area information that is estimated by the area information estimating device 2 and stored in the area information storage 3 in addition to a setting condition (such as a setting temperature) input from the remote controller 8 and an indoor temperature obtained from the temperature sensor 5. The area information estimating device 2 is provided with the person detection function, so that the air conditioning controller 4 can also decide the running condition in consideration of a real-time person detection result.
For example, the temperature sensor 5 acquires the indoor temperature with an infrared sensor. The heat exchanger 6 is connected to an outdoor unit (not illustrated) to constitute a refrigerating cycle, and heats or cools air taken in the heat exchanger 6. The blast fan 7 generates an air current to circulate indoor air. The air flow and the wind direction (a vertical direction and a horizontal direction) can be adjusted in the blast fan 7.
The area information estimating device 2 detects a person from an image in which an area (hereinafter, referred to as a “sensing area”) becoming an estimation target is photographed, records an existence position of the person in the sensing area, and generates area information based on a distribution of the existence position. In the first embodiment, an area (hereinafter, referred to as a “floor surface area”) where a floor surface is exposed in the sensing area is estimated as the area information. As used herein, the term “floor surface is exposed” means a state in which an object, such as furniture and fixtures, a post, and a wall, which obstructs passage or activity of the person, does not exist. In other words, the floor surface area is an area (hereinafter, referred to as an “existence allowable area”) where the person can exist (pass) in the sensing area. The air conditioning control is performed in consideration of the floor surface area, which allows heating or cooling to be efficiently performed.
The area information estimating device 2 is constructed with a computer including a processor, a memory, and the camera 21, and the processor executes a program to implement the functions of the person detector 22, the position calculator 23, the position storage 24, and the estimator 25. A part of or all the functional units may be constructed with an ASIC or an FPGA circuit. For example, the area information estimating device 2 is provided as a built-in type board computer in which the processor, the memory storing various programs, a camera module, and an external interface are mounted on a circuit board.
As illustrated in the drawings, the camera 21 is installed so as to look down on the room, and captures an overhead image of the sensing area.
The person detector 22 has the function of detecting the person from the image photographed with the camera 21. Various algorithms have been proposed as techniques of detecting the person from the image. Examples include “face detection”, which detects a face-like portion based on a local density difference around an eye or a mouth or a color feature such as a hair color and a skin color, and “human body detection”, which detects a head, an upper body, or a whole body based on a silhouette (contour). Any algorithm may be adopted in the person detector 22, or plural kinds of algorithms may be combined. The face detection has an advantage of a low chance that an object other than the person is falsely detected. On the other hand, the human body detection (silhouette detection) has an advantage of high robustness against an orientation or a posture of the person. In the human body detection, which one of the head, the upper body, and the whole body is set to a detection target depends on the data used in learning of a classifier. That is, when the upper body is to be detected, the classifier is caused to learn the feature of the silhouette of the upper body using teacher images in which the upper body (for example, a portion above a breast) is photographed. When the whole body is to be detected, it is only necessary to perform the learning using teacher images in which the head to the foot are photographed. Human body detection robust against posture changes can be achieved when teacher images of various orientations and postures, such as a forward-looking orientation, an obliquely-looking orientation, a sideways-looking orientation, a rearward-looking orientation, an erect posture, and a walking posture, are used in the learning.
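As an illustration only (the embodiments do not prescribe a specific detector), such an upper-body/whole-body detection step can be prototyped with OpenCV's pretrained cascade classifiers; treating these stock cascades as stand-ins for the learned classifiers described above is an assumption of this sketch:

```python
import cv2

# A minimal sketch of the detection step. OpenCV's stock cascades stand in
# for the upper-body/whole-body classifiers trained from teacher images.
upper = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_upperbody.xml")
whole = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

def detect_persons(gray_image):
    """Return (upper_body_boxes, whole_body_boxes) as lists of (x, y, w, h)."""
    upper_boxes = upper.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=3)
    whole_boxes = whole.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=3)
    return list(upper_boxes), list(whole_boxes)
```

Running both detections on the same frame is also the starting point of the fourth embodiment described later, where the presence or absence of a whole-body detection distinguishes the standing position from the sitting position.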
The position calculator 23 has the function of calculating the existence position of the person in the sensing area based on a coordinate on the image in which the person is detected by the person detector 22. Particularly, the position calculator 23 performs processing of transforming an image coordinate (X,Y) into a position coordinate (T,L) on the floor surface in a real space by a geometric calculation in which already-known parameters such as the view angle and the installation angle of the camera 21 are used. In an image coordinate system of the first embodiment, an origin O is set to the center at the lower end of the image, an X-axis is set to the horizontal direction of the image, and a Y-axis is set to the vertical direction of the image. As illustrated in the drawings, in the position coordinate system on the floor surface, T denotes a horizontal position [mm] and L denotes a depth position [mm] as viewed from the camera 21.
The position storage 24 is a database in which the person's existence position (T,L) calculated by the position calculator 23 is stored. Not only the information on the person's existence position but also additional information such as a detection clock time may be stored in the position storage 24 while the additional information is correlated with the information on the person's existence position. When the person detection processing is continuously performed for a certain period (for example, several hours to several days), a distribution (history) of the person's existence positions acquired in the period is accumulated in the position storage 24.
The estimator 25 has the function of estimating the floor surface area (person existence allowable area) based on the distribution of the person's existence positions accumulated in the position storage 24. In the case that furniture and fixtures are installed in the room, the area where the person can pass (exist) is limited to the area where furniture and fixtures are not installed, namely, the floor surface area. Accordingly, the floor surface area in the room can be estimated by analyzing the distribution of the person's existence positions accumulated in a certain period. The area information obtained by the estimator 25 is stored in the area information storage 3, and used in the air conditioning control of the air conditioning controller 4.
(Processing Flow)
A flow of air conditioning control processing performed by the air conditioner 1 will be described below with reference to a flowchart.
The camera 21 captures the overhead image of the room (sensing area) in which the air conditioner 1 is installed (Step S10). In the first embodiment, a gray-scale image having a horizontal pixel number Hp of 1200 and a vertical pixel number Vp of 1600 is captured.
The person detector 22 detects the person from the image (Step S11).
The position calculator 23 obtains the existence position (T,L) of each of the detected persons 41a to 41c in the sensing area based on the detection coordinate (Xc,Yc) of each of the persons 41a to 41c and the size S (Step S12).
A specific example of the coordinate calculation performed by the position calculator 23 will be described with reference to the drawings.
It is assumed that, when the person exists at a position of a distance D [mm] in the optical axis direction from the camera 21, the person is detected with the detection coordinate (Xc,Yc) and the size S [pixel]. Because the ratio of a size [pixel] on the image to a size [mm] in the real space becomes S [pixel]:W [mm] at the position separated from the camera 21 by the distance D, where W [mm] is the person body width in the real space, the horizontal position T [mm] of the person is obtained from
T=(X2×W)/S
as illustrated in the drawings, where X2 (=Xc) is the horizontal pixel number from the center of the image to the detection coordinate of the person.
Derivation of the depth position L [mm] will be described below. As illustrated in the drawings, when V [mm] denotes the vertical distance in the real space that corresponds to the entire image height at the distance D, an equation
tan(β/2)=(V/2)/D
holds. Because an equation
Vp/V=S/W
holds at the position separated from the camera 21 by the distance D, when the vertical distance V is eliminated from both the equations, an equation
D=(Vp×W)/(2×S×tan(β/2))
is obtained. Accordingly, a depth position L1 [mm], namely, the component of the distance D projected onto the floor surface, is obtained from L1=D×cos α, where α [degree] is the looking-down angle of the camera 21.
At the position separated from the camera 21 by the distance D, it is assumed that Y2 (=Yc−Vp/2) is the vertical pixel number from the center of the image to the detection coordinate of the person, and that V2 [mm] is the vertical distance in the real space. At this point, because
Y2/V2=S/W
holds, a depth position L2 [mm] is obtained from
L2=V2×sin α=(Y2×W×sin α)/S.
Therefore, the depth position L [mm] of the person can be calculated by L=L1+L2.
In the above coordinate calculation technique, it is assumed that the person body width W is kept constant, and the planar position of the person is estimated based on the size S on the image (that is, the person is regarded as being located farther away from the camera 21 with decreasing size). The coordinate calculation technique has an advantage that the existence position (T,L) can be calculated even if an installation height (a height from the floor surface) of the camera 21 is unclear.
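The calculation described above can be summarized in a short sketch. The following Python function is a direct transcription of the equations; the default values of the body width W, the looking-down angle α, the vertical view angle β, and the vertical pixel number Vp are taken from the conditions listed later in this section, and the function name is illustrative only:

```python
import math

def image_to_floor(xc, yc, s, w_mm=450.0, alpha_deg=25.0, beta_deg=56.6, vp=1600):
    """Transform a detection coordinate (Xc, Yc) [pixel] and a size S [pixel]
    into an existence position (T, L) [mm] on the floor surface.

    With the origin at the center of the lower edge of the image, xc equals
    the horizontal offset X2 from the image center.
    """
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    t = xc * w_mm / s                                 # T = (X2 x W) / S
    d = (vp * w_mm) / (2.0 * s * math.tan(beta / 2))  # distance along the optical axis
    l1 = d * math.cos(alpha)                          # L1: horizontal projection of D
    y2 = yc - vp / 2.0                                # vertical offset from image center
    l2 = (y2 * w_mm * math.sin(alpha)) / s            # L2 = V2 x sin(alpha)
    return t, l1 + l2                                 # L = L1 + L2
```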
When the existence position (T,L) of each of the persons 41a to 41c detected from the image is obtained through the calculation, the flow goes to Step S13, and the position calculator 23 stores each calculated existence position in the position storage 24.
After the pieces of processing in Steps S10 to S13 are repeated in the predetermined period, the estimator 25 estimates the floor surface area (person existence allowable area) based on the pieces of information on the person's existence position accumulated in the position storage 24 (Step S14).
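As an illustration of Step S14 (the embodiment does not fix a concrete algorithm at this point), one simple realization is to discretize the plan view into cells and mark every cell that contains at least one detected existence position; the cell size below is an assumed parameter:

```python
def estimate_floor_cells(positions, grid_mm=200.0):
    """Mark plan-view cells containing at least one existence position (T, L) [mm].

    Returns a set of (column, row) cell indices. The cell size is an
    illustrative assumption, not a value from the embodiment.
    """
    return {(int(t // grid_mm), int(l // grid_mm)) for t, l in positions}
```

The union of the marked cells approximates the floor surface area; the fifth embodiment described later replaces this hard marking with a probability density distribution to reduce the influence of detection errors.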
In Step S15, the air conditioning controller 4 performs the air conditioning control based on the acquired area information. For example, amenity can efficiently be improved by preferentially sending wind to a place where the person passes (exists) frequently. Alternatively, a wind flow may be predicted based on the room shape, and the wind direction may be controlled such that the room is airy.
(Scheme for Accuracy Improvement)
A good result is frequently obtained by the above method. However, in the case that a person largely deviating from a standard body shape is photographed in the image, the estimation accuracy of the area information may be degraded. Because the distance is calculated based on the average person body width W of the adult, the accuracy can be expected only when the size on the image decreases as the person is located farther away, as illustrated in the drawings.
Therefore, in the person detector 22 of the first embodiment, the detection target (data used to estimate the area information) is limited to persons having a body shape within a specific range and being in the standing position, to suppress the degradation of the estimation accuracy of the area information. Specifically, the person detector 22 regards, as noise (data degrading the estimation accuracy), any detected person for whom the relationship between the detection coordinate and the size on the image does not satisfy a predetermined condition, and excludes that person from the detection target.
For example, the relationship between the detection coordinate Yc and the detection size S of the person in the standing position is obtained under the following conditions:
The subject having a person body width (shoulder width) W of 450 mm and a height H of 1750 mm
The camera installation height h of 2200 mm, the camera looking-down angle α of 25 degrees, the camera vertical view angle β of 56.6 degrees, and the vertical pixel number Vp of 1600 pixels
As can be seen from the drawings, the detection size S of the person having the standard body shape decreases as the detection coordinate Yc increases (that is, as the person is located farther away), so that a reference size Sref can be defined for each detection coordinate Yc.
The person detection processing in consideration of the relationship between the detection coordinate Yc and the detection size S on the image will be described below with reference to a flowchart.
The person detector 22 detects the upper body of the person from the image (Step S90). When the illustrated image is input, the persons 70a and 70b having the standard body shape, the small person 71a, and the large person 71b are detected.
Then the person detector 22 determines whether the relationship between the detection coordinate Yc and the detection size S satisfies the predetermined condition for each person detected in Step S90 (Step S91). In the first embodiment, the person detector 22 acquires the reference value Sref corresponding to the detection coordinate Yc from the look-up table, and determines whether the detection size S falls within a permissible range defined with respect to Sref.
The person for whom the determination that “the predetermined condition is not satisfied” is made in Step S91 is removed from the detection result (Step S92). In the illustrated example, the small person 71a and the large person 71b are removed.
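A minimal sketch of Steps S90 to S92 follows. It assumes the reference relationship has been precomputed as a table of Sref values sampled at discrete Yc coordinates, and the ±20% permissible range is an illustrative assumption, not a figure from the embodiment:

```python
def filter_detections(detections, sref_table, tolerance=0.2):
    """Keep only detections whose size S is plausible for their coordinate Yc.

    detections: list of (xc, yc, s) tuples from the upper body detection.
    sref_table: dict mapping a sampled Yc coordinate to the reference size Sref.
    tolerance:  permissible relative deviation from Sref (assumed value).
    """
    kept = []
    for xc, yc, s in detections:
        # Look up the reference size for the nearest tabulated coordinate.
        key = min(sref_table, key=lambda y: abs(y - yc))
        sref = sref_table[key]
        if abs(s - sref) <= tolerance * sref:   # Step S91: condition check
            kept.append((xc, yc, s))            # Step S92 removes the others
    return kept
```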
The subsequent pieces of processing (Steps S12 to S14) are identical to those described above.
The experimental result illustrated in the drawings is obtained under the following settings. An average value of the adult is used for the person body width W [mm] and the height H [mm], and already-known parameters are used for the camera installation height h [mm], the camera looking-down angle α [degree], the camera vertical view angle β [degree], and the vertical pixel number Vp [pixel].
A second embodiment of the present invention will be described below. The second embodiment differs from the first embodiment in that the whole body detection is used as the person detection algorithm and that the existence position on the real space is calculated from the foot coordinate.
The position calculator 23 calculates the vertical coordinate Yb [pixel] of the person's foot from the detection coordinate (Xc,Yc) and detection size (Sh,Sv) of the person.
Yb=Yc−Sv/2
When the vertical coordinate Yb of the foot is obtained, as illustrated in the drawings, the depth position L can be calculated by a geometric calculation using already-known camera parameters such as the installation height h, the looking-down angle α, and the vertical view angle β.
The horizontal position T may be calculated by the same technique as the first embodiment, or calculated from a ratio of the horizontal detection coordinate Xc to the horizontal pixel number Hp.
In the technique of the second embodiment, the sitting person or the partially-hidden person can be excluded from the detection target using only the detection result of the whole body. Additionally, the depth position L is calculated from the foot coordinate (that is, the coordinate of the point that is in contact with the floor surface), so that the distance accuracy can be improved compared with the technique of the first embodiment in which the depth position L is calculated from the detection size S. Therefore, the floor surface area can accurately be estimated in the sensing area.
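The embodiment does not spell out the foot-based depth formula, so the following is a sketch of one standard pinhole-geometry derivation under the stated camera parameters; the formula itself is an assumption of this sketch, not a transcription from the text:

```python
import math

def depth_from_foot(yb, h_mm=2200.0, alpha_deg=25.0, beta_deg=56.6, vp=1600):
    """Estimate the depth position L [mm] from the foot coordinate Yb [pixel].

    Assumed geometry: the ray through Yb makes an angle phi with the optical
    axis; the foot lies where that ray, depressed by (alpha - phi) below the
    horizontal, meets the floor h_mm below the camera. Requires the foot ray
    to point below the horizon (alpha > phi).
    """
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    # Signed angle of the foot ray above the optical axis (Y axis points up).
    phi = math.atan(((yb - vp / 2.0) / (vp / 2.0)) * math.tan(beta / 2))
    return h_mm / math.tan(alpha - phi)
```

For the whole body detection result, Yb = Yc − Sv/2 can be fed directly into such a function.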
A third embodiment of the present invention will be described below. A technique of estimating the foot coordinate from the detection result of the upper body is adopted in the third embodiment.
Processing flows of the person detector 22 and the position calculator 23 of the third embodiment will be described below with reference to the drawings.
In the third embodiment, as illustrated in the drawings, the coordinate Yb of the foot is estimated from the detection coordinate (Xc,Yc) and the detection size S of the upper body by utilizing the roughly constant proportions of the human body. Specifically, the foot coordinate Yb is obtained from
Yb=Yc−5S/2.
As can be seen from the drawings, the foot coordinates Yb1 and Yb2 of the persons 110a and 110b are obtained in this manner even though only the upper bodies are detected.
After the foot coordinates Yb1 and Yb2 are obtained, the position calculator 23 calculates the existence positions of the persons 110a and 110b based on the foot coordinate. The specific calculation method is identical to that of the second embodiment.
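Combining the ratio-based foot estimate with the foot-based depth calculation gives a compact sketch; depth_from_foot is the illustrative helper sketched in the second embodiment, the factor 5/2 follows from Yb = Yc − 5S/2, and the body width of 450 mm is the assumed average used earlier:

```python
def position_from_upper_body(xc, yc, s):
    """Estimate the existence position (T, L) from an upper body detection alone,
    using the roughly constant proportions of the human body."""
    yb = yc - 5.0 * s / 2.0   # estimated foot coordinate (Yb = Yc - 5S/2)
    l = depth_from_foot(yb)   # depth position as in the second embodiment
    t = xc * 450.0 / s        # horizontal position as in the first embodiment
    return t, l
```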
In the technique of the third embodiment, the depth position L is calculated from the foot coordinate (that is, the coordinate of the point that is in contact with the floor surface), so that the distance accuracy can be improved compared with the technique of the first embodiment in which the depth position L is calculated from the detection size S. Therefore, the floor surface area can accurately be estimated in the sensing area. In the third embodiment, similarly to the first embodiment, the person for whom the relationship between the detection coordinate Yc and the detection size S does not satisfy the predetermined condition may be excluded from the detection target. The accuracy can be expected to improve further by the exclusion.
A fourth embodiment of the present invention will be described below. A method in which the upper body detection and the whole body detection are combined is adopted in the fourth embodiment.
In the case that a person 120a in the standing position and a person 120b in the sitting position are located at the same distance (it is assumed that the persons 120a and 120b have the same body shape) as illustrated in the drawings, the upper body sizes S of the two persons are substantially equal, while the detection coordinate Yc of the person 120b in the sitting position is lower than that of the person 120a in the standing position. Therefore, when only the condition for the standing position is used, the person 120b in the sitting position does not satisfy the predetermined condition and is erroneously excluded from the detection target.
In the fourth embodiment, the predetermined condition of the case that the detected person is in the sitting position differs from that of the case that the detected person is in the standing position, whereby the person 120b in the sitting position is also selected as the detection target. A specific processing flow will be described below with reference to the drawings.
The person detector 22 detects the upper body from the image (Step S130). The person detector 22 also detects the whole body from the same image (Step S131). The upper body detection and the whole body detection may be performed in the reverse order.
Then the person detector 22 determines whether the relationship between the detection coordinate Yc and the detection size S satisfies the predetermined condition for each detected person. At this point, the processing for the person in the standing position (detected in both the upper body detection and the whole body detection) differs from the processing for the person in the sitting position (detected only in the upper body detection) (Step S132). For the person in the standing position, the person detector 22 acquires the reference value Sref corresponding to the detection coordinate Yc from a look-up table (or function) for the standing position (Step S133), determines whether the upper body size S falls within a permissible range (Step S134), and excludes the person from the detection target when the size S is out of the permissible range (Step S135). For the person in the sitting position, the person detector 22 acquires the reference value Sref corresponding to the detection coordinate Yc from a look-up table (or function) for the sitting position (Step S136), determines whether the upper body size S falls within a permissible range (Step S137), and excludes the person from the detection target when the size S is out of the permissible range (Step S138).
Then only the remaining detection result is output after the data of the person in the standing position and the data of the person in the sitting position are selected and excluded by the different standards. The subsequent pieces of processing (Steps S12 to S14) are identical to those described above.
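A sketch of the combined flow (Steps S130 to S138) follows, reusing the illustrative filter_detections helper above; splitting standing from sitting by testing whether a whole body detection box overlaps the upper body detection box is an assumption about how the two detections are matched, and the boxes are taken in the detector's own image coordinate system:

```python
def classify_and_filter(upper_boxes, whole_boxes, sref_standing, sref_sitting):
    """Split upper body detections into standing/sitting by the presence of an
    overlapping whole body detection, then apply the posture-specific table."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    standing, sitting = [], []
    for u in upper_boxes:
        x, y, w, h = u
        det = (x + w / 2.0, y + h / 2.0, h)  # center coordinate and size S
        if any(overlaps(u, wb) for wb in whole_boxes):
            standing.append(det)             # Steps S133 to S135: standing table
        else:
            sitting.append(det)              # Steps S136 to S138: sitting table
    return (filter_detections(standing, sref_standing)
            + filter_detections(sitting, sref_sitting))
```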
A fifth embodiment of the present invention will be described below. In the first to fourth embodiments, the floor surface area is estimated using the map in which each detected existence position is plotted as a point on the plan view as illustrated in the drawings. On the other hand, in the fifth embodiment, the estimator 25 superposes each detected existence position on the plan view in the form of a probability density distribution, and determines the area where the cumulative probability becomes greater than or equal to a predetermined level as the floor surface area.
In the configuration of the fifth embodiment, an influence of the detection error can be reduced by recognizing the detected existence position as the probability density distribution, and the floor surface area (person existence allowable area) can accurately be estimated.
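A sketch of the fifth embodiment's estimation follows, assuming each existence position is spread as an isotropic Gaussian on a discretized plan view; the cell size, the standard deviation, the grid extent, and the threshold are all illustrative assumptions:

```python
import numpy as np

def estimate_floor_area(positions, grid_mm=100.0, sigma_mm=300.0,
                        extent_mm=8000.0, threshold=0.5):
    """Superpose each existence position (T, L) [mm] as a Gaussian density on a
    plan-view grid and return a boolean map of cells reaching the threshold."""
    n = int(extent_mm / grid_mm)
    tt, ll = np.meshgrid(np.arange(n) * grid_mm - extent_mm / 2,
                         np.arange(n) * grid_mm)  # plan-view cell centers
    density = np.zeros((n, n))
    for t, l in positions:
        density += np.exp(-((tt - t) ** 2 + (ll - l) ** 2) / (2 * sigma_mm ** 2))
    density /= density.max() if density.max() > 0 else 1.0
    return density >= threshold  # True where the person existence is allowed
```

A single spurious detection then contributes only a small, spatially spread amount of density, so it rarely pushes a cell over the threshold on its own.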
The configurations of the above embodiments are merely illustrated as specific examples of the present invention, and the present invention is not limited to the configurations of the embodiments. Various specific configurations can be formed without departing from the scope of the present invention.
In one or more of the above embodiments, by way of example, the area estimation result is applied to the air conditioning control of the air conditioner. However, the present invention is not limited to the air conditioning control of the air conditioner. One or more embodiments of the present invention can suitably be applied to structure recognition (such as recognition of a person's flow line, a floor surface (including the ground), a work area, and a passage or an obstacle) in the sensing area in the monitoring camera, the image sensor, computer vision, robot vision, and the like. The above embodiments may be combined with one another as long as the technical inconsistency is not generated.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.
Number | Date | Country | Kind |
---|---|---|---|
2014-213240 | Oct 2014 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
7755480 | Aritsuka | Jul 2010 | B2 |
9445164 | Imahara | Sep 2016 | B2 |
20030048926 | Watanabe | Mar 2003 | A1 |
20080193009 | Sonoura | Aug 2008 | A1 |
20100033579 | Yokohata | Feb 2010 | A1 |
20110052076 | Yashiro | Mar 2011 | A1 |
20120020518 | Taguchi | Jan 2012 | A1 |
20120201468 | Oami | Aug 2012 | A1 |
20130170760 | Wang et al. | Jul 2013 | A1 |
20130230245 | Matsumoto | Sep 2013 | A1 |
20130251203 | Tanabiki | Sep 2013 | A1 |
20130329958 | Oami | Dec 2013 | A1 |
20140163424 | Kawaguchi | Jun 2014 | A1 |
20150362706 | Chujo et al. | Dec 2015 | A1 |
Number | Date | Country |
---|---|---|
2 966 503 | Jan 2016 | EP |
2002197463 | Jul 2002 | JP |
2013024534 | Feb 2013 | JP |
2013037406 | Feb 2013 | JP |
Entry |
---|
Partial European Search Report issued in corresponding European Application No. 15185277.9, mailed Apr. 14, 2016 (8 pages). |
Calderara, Simone et al.; “HECOL: Homography and Epipolar-based Consistent Labeling for Outdoor Park Surveillance, Accepted Manuscript”; Computer Vision and Image Understanding; Jul. 7, 2007; pp. 1-41 (42 pages). |
Haritaoglu, I et al.; “W4: Real-Time Surveillance of People and Their Activities”; IEEE Transactions on Pattern Analysis and Machine Intelligence; IEEE Computer Society; USA; vol. 22, No. 8; Aug. 1, 2000; pp. 809-830 (22 pages). |
Extended European Search Report in counterpart European Application No. 15 18 5277.9 issued Jul. 13, 2016 (15 pages). |
Qing Ye et al.; “A method of automatic people counting used in air-conditioning energy-saving”; Computer Engineering and Technology; 2010 2nd International Conference on IEEE; XP031690173; Piscataway, NJ, USA; Apr. 16, 2010; pp. V6-703-V6-708 (6 pages). |
Number | Date | Country
---|---|---
20160110602 A1 | Apr 2016 | US