Background

Field
Embodiments of the subject matter described herein are related generally to pose determination of a subject in images, and more particularly to estimating the three-dimensional pose of a subject's head using images with depth data.
Relevant Background
Many computing devices include cameras that are capable of capturing multiple images, e.g., in a stream of video frames. For example, many personal computers include a camera, such as a web cam, that is capable of capturing images of the user. Additionally, devices such as cellular telephones, smart phones, and tablet computers typically include cameras capable of capturing images of subjects, e.g., the user with a forward-facing camera or other people with a back-facing camera.
Identifying the location of the subject, and more particularly the subject's head, in an image is useful for many applications, such as telepresence and gaming. However, identifying and tracking the position of a subject's head across a series of images is problematic for conventional systems.
A three-dimensional pose, i.e., a pose with six degrees of freedom, of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector for the pose of the head relative to a reference pose are determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided, along with corresponding depth data, to an Extended Kalman filter whose states include a rotation matrix and a translation vector associated with the reference pose for the head, as well as the current orientation and current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.
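The following sketch outlines this flow in Python. It is illustrative only; the helper callables detect_head_features and depth_at, and the ekf object, are hypothetical stand-ins for the face tracker, depth lookup, and Extended Kalman filter described below, not part of any particular library.

```python
# Illustrative outline only (a sketch, not the claimed implementation).
# detect_head_features and depth_at are hypothetical callables standing in
# for the face tracker and depth lookup; ekf stands for the Extended Kalman
# filter described later.
import numpy as np

def track_head_pose(rgbd_frames, ekf, detect_head_features, depth_at):
    """Feed feature points and their log-depths from each RGBD frame into an
    EKF whose state carries the reference pose and the current pose."""
    for rgb, depth in rgbd_frames:
        pts = detect_head_features(rgb)        # arbitrary feature points on the head
        log_z = np.log(depth_at(depth, pts))   # corresponding depth data, in log scale
        ekf.predict()                          # propagate the current orientation/position
        ekf.update(pts, log_z)                 # fuse image and depth measurements
    # Pose of the head with respect to the reference pose, derived from the
    # filter's rotation matrix and translation vector.
    return ekf.pose_relative_to_reference()
```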
In one implementation, a method includes capturing multiple images with depth data of a head of a subject and determining a rotation matrix and a translation vector associated with a pose of the head in the multiple images relative to a reference coordinate frame using the depth data.
In one implementation, an apparatus includes an RGBD camera to capture images with depth data of a head of a subject; and a processor coupled to the RGBD camera to receive multiple images with the depth data of the head of the subject, the processor being configured to determine a rotation matrix and a translation vector associated with a pose of the head in the multiple images relative to a reference coordinate frame using the depth data.
In one implementation, an apparatus includes means for capturing multiple images with depth data of a head of a subject and means for determining a rotation matrix and a translation vector associated with a pose of the head in the multiple images relative to a reference coordinate frame using the depth data.
In one implementation, a non-transitory computer-readable medium includes program code to receive multiple images with depth data of a head of a subject; and program code to determine a rotation matrix and a translation vector associated with a pose of the head in the multiple images relative to a reference coordinate frame using the depth data.
As used herein, a mobile platform refers to any portable electronic device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop or other suitable mobile platform. The mobile platform may be capable of receiving wireless communication and/or navigation signals, such as navigation positioning signals. The term “mobile platform” is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND. Also, “mobile platform” is intended to include all electronic devices, including wireless communication devices, computers, laptops, tablet computers, smart phones etc. which are capable of capturing images with depth data of a subject.
It should be understood, however, that while device 100 is sometimes described herein as a mobile platform, the device 100 may be a stationary device or may include one or more stationary components; e.g., the image processing unit 112 and/or Extended Kalman filter 270 may be stationary, while the RGBD camera 110, although connected to the image processing unit 112 and Extended Kalman filter 270, is itself moveable.
Returning to
If it is determined that there are insufficient corners on the face to track (230), the list of corners to be tracked on the face is refilled (250). A face, however, is substantially “texture-less.” Consequently, a corner detector, which may use FAST (Features from Accelerated Segment Test) corners or feature detectors from OpenCV, will generally identify corners in the background areas of the images, and thus, it is desirable to segment the face from the background before detecting corners to be tracked. Further, it is desirable to avoid T-junctions, which may appear in images of a face. Additionally, as discussed above, the head pose estimation uses an Extended Kalman filter, which tracks corners from the list of good corners. Thus, when refilling the list of corners, it is desirable to retain the corners that are being tracked so that only new corners are added, while avoiding duplicates.
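As one possible illustration of the refill step, the sketch below restricts OpenCV's FAST detector to the face region and merges the detections with the corners already being tracked. The face rectangle, FAST threshold, and minimum-separation distance are assumptions chosen for the example, and T-junction rejection is not attempted here.

```python
# A minimal sketch of refilling the corner list, assuming OpenCV (cv2) and a
# face rectangle (x, y, w, h) supplied by a 2D face tracker. The FAST
# threshold and minimum-separation distance are illustrative values only.
import cv2
import numpy as np

def refill_corners(gray, face_rect, tracked_corners, min_separation=8.0):
    x, y, w, h = face_rect
    # Segment the face from the background so the detector does not return
    # corners in the background areas of the image.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    fast = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = fast.detect(gray, mask)
    # Retain the corners that are already being tracked and add only new,
    # non-duplicate corners.
    corners = list(tracked_corners)
    for kp in keypoints:
        pt = np.asarray(kp.pt)
        if all(np.linalg.norm(pt - np.asarray(c)) > min_separation for c in corners):
            corners.append(tuple(kp.pt))
    return corners
```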
Referring back to
Equations of Motion
$$
\begin{aligned}
\rho^i(t+1) &= \rho^i(t), \quad i = 1 \ldots N, & \rho^i(0) &= \rho_0^i \\
T(t+1) &= \exp(\widehat{\omega}(t))\,T(t) + v(t), & T(0) &= T_0 \\
\Omega(t+1) &= \mathrm{Log}_{SO(3)}\!\big(\exp(\widehat{\omega}(t))\exp(\widehat{\Omega}(t))\big), & \Omega(0) &= \Omega_0 \\
v(t+1) &= v(t) + \alpha_v(t), & v(0) &= v_0 \\
\omega(t+1) &= \omega(t) + \alpha_\omega(t), & \omega(0) &= \omega_0
\end{aligned}
\tag{1}
$$
Measurements
$$
\begin{aligned}
y^i(t) &= \pi\!\big(\exp(\widehat{\Omega}(t))\,y_0^i(t)\,\exp(\rho^i(t)) + T(t)\big) + n^i(t), & i = 1 \ldots N \\
\log z^i(t) &= \log\!\Big([0\;\;0\;\;1]\big(\exp(\widehat{\Omega}(t))\,y_0^i(t)\,\exp(\rho^i(t)) + T(t)\big)\Big) + n_z^i(t), & i = 1 \ldots N
\end{aligned}
\tag{2}
$$
where:

$y_0^i(t) \in \mathbb{R}^2$ are measured, approximately, at the first instant at which the $i$th feature point becomes visible, which for simplicity is assumed here to be time $t=0$ (handling of points that disappear, as well as point features that first appear at times $t>0$, is described later).

$\rho^i(t) \in \mathbb{R}$ is the state variable representing the logarithm of the depth of the $i$th feature point at time $t$. Logarithmic coordinates are chosen both to enforce that the depth of visible points, $\exp(\rho)$, is positive, and to rectify the distribution of the depth-measurement uncertainty from the range portion of the device.

$T(t) \in \mathbb{R}^3$ is the state variable representing the translation vector from the camera frame at time $t$ to the world frame.

$\Omega(t) \in \mathfrak{so}(3)$ is the state variable representing the rotation from the camera frame at time $t$ to the world frame.

$v(t) \in \mathbb{R}^3$ is the linear velocity of the camera at time $t$.

$\omega(t) \in \mathbb{R}^3$ is the angular velocity of the camera at time $t$.

$\alpha_v$ and $\alpha_\omega$ are assumed to be zero-mean Gaussian noise processes.

$y^i(t) \in \mathbb{R}^2$ are the measured image coordinates, in the camera coordinate frame, of the $i$th feature point at time $t$.

$\log z^i(t) \in \mathbb{R}$ is the logarithm of the measured depth, in the camera coordinate frame, of the $i$th feature point at time $t$. In practice, the range data is pre-processed to yield the log-depth, and the noise is assumed to be additive in log scale.
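A compact numerical sketch of the state propagation in eq. 1 and the measurement prediction in eq. 2 follows, using SciPy's Rotation class for the exponential and logarithm on SO(3). Only the mean prediction is shown: the noise terms are set to zero, the Kalman covariance propagation and update are omitted, and the initial image coordinates $y_0^i$ are assumed to be homogenized to [u, v, 1] so the geometry is well defined.

```python
# A numerical sketch of eq. 1 (state propagation) and eq. 2 (measurement
# prediction), using SciPy's Rotation for exp/log on SO(3). Only the mean
# prediction is shown; the EKF covariance propagation/update is omitted.
import numpy as np
from scipy.spatial.transform import Rotation

def propagate_state(rho, T, Omega, v, omega):
    """One step of eq. 1 with the noise terms alpha_v = alpha_omega = 0."""
    R_w = Rotation.from_rotvec(omega)                              # exp(omega^)
    rho_next = rho                                                 # log-depths are constant
    T_next = R_w.apply(T) + v                                      # exp(omega^) T(t) + v(t)
    Omega_next = (R_w * Rotation.from_rotvec(Omega)).as_rotvec()   # Log_SO(3)(exp(omega^) exp(Omega^))
    return rho_next, T_next, Omega_next, v, omega

def predict_measurements(rho, T, Omega, y0):
    """Eq. 2: predicted image coordinates and log-depths of the N features.
    y0 is an N x 3 array of homogeneous image coordinates [u, v, 1]."""
    # 3D points in the current camera frame: exp(Omega^) y0 exp(rho) + T
    X = Rotation.from_rotvec(Omega).apply(y0 * np.exp(rho)[:, None]) + T
    y_pred = X[:, :2] / X[:, 2:3]      # perspective projection pi(.)
    log_z_pred = np.log(X[:, 2])       # [0 0 1] selects the depth component
    return y_pred, log_z_pred
```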
The Extended Kalman filter (270) produces a translation vector (T) and rotation matrix (R) from the current camera position to the world coordinates. As illustrated in
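For illustration, converting this world-frame output into the pose of the head relative to a stored reference pose (for example, the pose captured when tracking began) could be done as in the following sketch; the reference-pose variables are assumptions made for the example.

```python
# A sketch of expressing the filter output (rotation R and translation T from
# the current camera frame to world coordinates) relative to a stored
# reference pose (R_ref, T_ref); the reference-pose variables are assumptions.
import numpy as np

def pose_relative_to_reference(R, T, R_ref, T_ref):
    """Return the transform from the current camera frame to the reference
    frame, given that X_world = R @ X_cam + T for each pose."""
    R_rel = R_ref.T @ R              # rotation of the current frame w.r.t. the reference
    T_rel = R_ref.T @ (T - T_ref)    # translation of the current frame w.r.t. the reference
    return R_rel, T_rel
```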
The device 100 also includes a control unit 160 that is connected to and communicates with the RGBD camera 110. The control unit 160 accepts and processes images and depth data captured by the RGBD camera 110 and controls the display 102. The control unit 160 may be provided by a processor 161 and associated memory 164, hardware 162, software 165, firmware 163, and a bus 160b. The control unit 160 may include an image processing unit 112 that performs various aspects of the process described above, such as the 2D face tracker (210), refilling corners (250) and optical flow (240), as described in
The image processing unit 112 and Extended Kalman filter 270 are illustrated separately from processor 161 for clarity, but may be part of the processor 161 or implemented in the processor based on instructions in the software 165 which is run in the processor 161. It will be understood as used herein that the processor 161 can, but need not necessarily include, one or more microprocessors, embedded processors, controllers, application specific integrated circuits (ASICs), digital signal processors (DSPs), and the like. The term processor is intended to describe the functions implemented by the system rather than specific hardware. Moreover, as used herein the term “memory” refers to any type of computer storage medium, including long term, short term, or other memory associated with the device 100, and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
The methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware 162, firmware 163, software 165, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in memory 164 and executed by the processor 161. Memory may be implemented within or external to the processor 161. If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include non-transitory computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Although the present invention is illustrated in connection with specific embodiments for instructional purposes, the present invention is not limited thereto. Various adaptations and modifications may be made without departing from the scope of the invention. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description.
This application claims priority under 35 USC 119 to U.S. Provisional Application No. 61/487,170, filed May 17, 2011, and entitled “Mapping, Localization and Pose Estimation by a Filtering Method Integrating Optical and Range Data,” and U.S. Provisional Application No. 61/562,959, filed Nov. 22, 2011, and entitled “Head Pose Estimation Using RGBD Camera,” both of which are assigned to the assignee hereof and which are incorporated herein by reference.