Imaging systems based on light waves are becoming more widely used for object detection, as semiconductor processes have become fast enough to support such systems. Some imaging systems are capable of providing dozens of images per second, making them useful for object tracking as well. While the resolution of such imaging systems may be relatively low, applications using these systems are able to take advantage of their speed of operation.
Mobile devices such as notebook computers or smartphones are not easily adapted to using such imaging systems, due to the power requirements of the imaging systems and the limited power storage capability of the mobile devices. The greatest contributor to the high power requirement of light-based imaging systems is the illumination source, which may be applied at a constant power level and/or constant frequency during operation. Further, such systems may be applied with a constant maximum lateral resolution (i.e., number of pixels) for best performance in worst-case usage scenarios. This power demand often exceeds the power storage capabilities of mobile devices, diminishing the usefulness of the imaging systems as applied to the mobile devices.
The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
For this discussion, the devices and systems illustrated in the figures are shown as having a multiplicity of components. Various implementations of devices and/or systems, as described herein, may include fewer components and remain within the scope of the disclosure. Alternatively, other implementations of devices and/or systems may include additional components, or various combinations of the described components, and remain within the scope of the disclosure.
This disclosure is related to imaging systems (imaging systems using emitted electromagnetic (EM) radiation, for example) that are arranged to detect, recognize, and/or track objects in a preselected area relative to the imaging systems. For example, an imaging system may be used to detect and recognize a human hand in an area near a computing device. The imaging system may recognize when the hand is making a gesture, and track the hand-gesture combination as a replacement for a mouse or other input to the computing device.
In one implementation, the imaging system uses distance calculations to detect, recognize, and/or track objects, such as a human hand, for example. The distance calculations may be based on receiving reflections of emitted EM radiation, as the EM radiation is reflected off objects in the preselected area. For example, the distance calculations may be based on the speed of light and the travel time of the reflected EM radiation.
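As an illustrative calculation (not a limitation of the disclosure), because the emitted radiation travels to the object and back, the distance may be expressed as d = (c × t) / 2, where c is the speed of light and t is the measured round-trip travel time; a round-trip time of approximately 6.7 nanoseconds therefore corresponds to an object approximately one meter away.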
Representative implementations of devices and techniques provide adaptable settings for example imaging devices and systems. The adaptable settings may be associated with various operating modes of the imaging devices and systems and may be used to conserve power. Operating modes may be defined based on whether an object is detected within a preselected area, for example. In one implementation, operating modes are defined based on whether a human hand is detected within the preselected area.
Operating modes may be associated with parameters such as power levels, modulating frequencies, duty cycles, and the like of the emitted EM radiation. One or more parameters of the emitted EM radiation may be dynamically and automatically adjusted based on a present operating mode and subsequent operating modes. For example, a higher power mode may be used by an imaging system when a desired object is detected and a lower power mode may be used when no object is detected. In one implementation, a resolution of a sensor component may be adjusted based on the operating modes.
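Purely as a non-limiting illustration of how such mode-dependent settings might be organized, the following sketch (in Python) represents the operating modes as a simple lookup table; the mode names, parameter names, numeric values, and the imager object are hypothetical placeholders that do not appear in the disclosure:

```python
# Hypothetical association of operating modes with emission and sensor
# parameters. All names and values below are illustrative placeholders.
OPERATING_MODES = {
    "standby":    {"peak_power_mw": 10,  "duty_cycle": 0.05, "mod_freq_mhz": 10, "resolution": (40, 30)},
    "detect":     {"peak_power_mw": 100, "duty_cycle": 0.25, "mod_freq_mhz": 20, "resolution": (160, 120)},
    "track_hand": {"peak_power_mw": 300, "duty_cycle": 0.50, "mod_freq_mhz": 80, "resolution": (320, 240)},
}

def apply_mode(imager, mode_name: str) -> None:
    """Push the parameter set of the selected operating mode to a
    (hypothetical) imager object exposing emitter and sensor settings."""
    params = OPERATING_MODES[mode_name]
    imager.set_emitter(peak_power_mw=params["peak_power_mw"],
                       duty_cycle=params["duty_cycle"],
                       mod_freq_mhz=params["mod_freq_mhz"])
    imager.set_sensor(resolution=params["resolution"])
```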
Various implementations and arrangements for imaging systems, devices, and techniques are discussed in this disclosure. Techniques and devices are discussed with reference to example light-based imaging systems and devices illustrated in the figures. However, this is not intended to be limiting, and is for ease of discussion and illustrative convenience. The techniques and devices discussed may be applied to any of various imaging device designs, structures, and the like (e.g., radiation based, sonic emission based, particle emission based, etc.) and remain within the scope of the disclosure.
Implementations are explained in more detail below using a plurality of examples. Although various implementations and examples are discussed here and below, further implementations and examples may be possible by combining the features and elements of individual implementations and examples.
In various implementations, the imaging system 102 may be integrated with the mobile device 104, or may have some components separate or remote from the mobile device 104. For example, some processing for the imaging system 102 may be located remotely (e.g., cloud, network, etc.). In another example, some outputs from the imaging system may be transmitted, displayed, or presented on a remote device or at a remote location.
As discussed herein, a mobile device 104 refers to a mobile computing device such as a laptop computer, smartphone, or the like. Examples of a mobile device 104 may include without limitation mobile computing devices, laptop or notebook computers, hand-held computing devices, tablet computing devices, netbook computing devices, personal digital assistant (PDA) devices, reader devices, smartphones, mobile telephones, media players, wearable computing devices, and so forth. The implementations are not limited in this context. Further, stationary computing devices are also included within the scope of the disclosure as a computing device 104, with regard to implementations of an imaging system 102. Stationary computing devices may include without limitation, stationary computers, personal or desktop computers, televisions, set-top boxes, gaming consoles, audio/video systems, appliances, and the like.
An example object 106 may include any item that an imaging system 102 may be arranged to detect, recognize, track, and/or the like. Such items may include human body parts, such as all or a portion of a human hand, for example. Other examples of an object 106 may include a mouse, a puck, a wand, a controller, a game piece, sporting equipment, and the like. In various implementations, the imaging system 102 may also be arranged to detect, recognize, and/or track a gesture of the object 106. A gesture may include any movement, position, or configuration of the object 106 that is expressive of an idea. For example, a gesture may include positioning a human hand in an orientation or configuration (e.g., pointing with one or more fingers, making an enclosed shape with one or more portions of the hand, etc.) and/or moving the hand in a pattern (e.g., in an elliptical motion, in a substantially linear motion, etc.). Gestures may also be made with other objects 106 when they are positioned, configured, and/or moved in an expressive manner.
The imaging system 102 may be arranged to detect, recognize, and/or track an object 106 that is within a preselected area 108 relative to the mobile device 104. A preselected area 108 may be chosen to encompass an area within which human hands or other objects 106 are likely to be present, for example. In one case, the preselected area 108 may encompass an area where hands may be present to make gestures as a replacement for a mouse or other input device. This area may be to the front of, to the side of, or around the mobile device 104, for example.
The illustration of FIG. 1 shows an example environment in which an imaging system 102 may be used with a mobile device 104 to detect, recognize, and/or track an object 106 within a preselected area 108.
As discussed above, the techniques, components, and devices described herein with respect to an imaging system 102 are not limited to the illustration in FIG. 1, and may be applied to other devices and designs without departing from the scope of the disclosure.
If included in an implementation, the illumination module 202 is arranged to emit electromagnetic (EM) radiation (e.g., light radiation) to illuminate the preselected area 108. In an implementation, the illumination module 202 is a light emitter, for example. In one implementation, the light emitter comprises a light-emitting diode (LED). In another implementation, the light emitter comprises a laser emitter. In one implementation, the illumination module 202 illuminates the entire environment (e.g., the preselected area 108) with each light pulse emitted. In an alternate implementation, the illumination module 202 illuminates the environment in stages or scans.
In various implementations, different forms of EM radiation may be emitted from the illumination module 202. In one implementation, infrared light is emitted. For example, the light radiation may comprise one or more modulated infrared light pulses. The illumination module 202 may be switched on for a short interval, allowing the emitted light pulse to illuminate the preselected area 108, including any objects 106 within the preselected area. Infrared light provides illumination to the preselected area 108 that is not visible to the human eye, and so is not distracting. In other implementations, other types or frequencies of EM radiation may be emitted that provide visual feedback or the like. As mentioned above, in alternate implementations, other energy forms (e.g., radiation based, sonic emission based, particle emission based, etc.) may be emitted by the illumination module 202.
In an implementation, the illumination module 202 is arranged to illuminate one or more objects 106 that may be present in the preselected area 108, to detect the objects 106. In one implementation, a parameter or characteristic of the output of the illumination module 202 (a light pulse, for example) is arranged to be automatically and dynamically adjusted based on whether an object 106 is detected in the preselected area 108. For example, to conserve power, the power output or integration time of the illumination module 202 may be reduced when no object 106 is detected in the preselected area 108 and increased when an object 106 is detected in the preselected area 108. In one implementation, at least one of an illumination time, a duty cycle, a peak power, and a modulation frequency of the light pulse is adjusted based on whether an object 106 is detected within the preselected area 108. In another implementation, at least one of the illumination time, the duty cycle, the peak power, and the modulation frequency of the light pulse is further adjusted based on whether a human hand is detected within the preselected area 108.
In one implementation, operating modes are defined for the imaging system 102 that are associated with the parameters, characteristics, and the like (e.g., power levels, modulating frequencies, etc.), for the output of the illumination module 202, based on whether an object 106 is detected in the preselected area 108.
As shown in FIG. 3, for example, the imaging system 102 may transition between operating modes based on whether an object 106 and/or a human hand is detected within the preselected area 108.
In one implementation, as shown in the state diagram 300 of FIG. 3, three operating modes are defined: a first operating mode associated with no object 106 being detected within the preselected area 108, a second operating mode associated with an object 106 being detected within the preselected area 108, and a third operating mode associated with a human hand being detected within the preselected area 108.
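A minimal sketch of this three-mode selection (in Python, assuming detection flags produced elsewhere by the sensor and control modules) might look as follows; it illustrates the behavior described above rather than a required implementation:

```python
def select_operating_mode(object_detected: bool, hand_detected: bool) -> str:
    """Illustrative mode selection: lowest power when nothing is detected,
    higher power when an object is detected, highest power (and resolution)
    when a human hand is detected. The mode names are hypothetical."""
    if hand_detected:
        return "track_hand"   # third operating mode
    if object_detected:
        return "detect"       # second operating mode
    return "standby"          # first operating mode
```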
If included in an implementation, the optics module 204 is arranged to receive the EM radiation when the EM radiation is reflected off of an object 106. In some implementations, the optics module 204 may include one or more optics, lenses, or other components to focus or direct the reflected EM waves. For example, in other alternate implementations, the optics module 204 may include a receiver, a waveguide, an antenna, and the like.
As shown in FIG. 2, the imaging system 102 may include a sensor module 206 in an implementation. In an implementation, the sensor module 206 is comprised of multiple photosensitive pixels, where each of the pixels may receive a portion of the reflected EM radiation and act as an individual image sensor.
In an implementation, the sensor module 206 (or the individual pixels of the sensor module 206) provides a measure of the time for the EM radiation to travel from the illumination module 202, to the object 106, and back to the sensor module 206. Accordingly, in such an implementation, the imaging system 102 comprises a three-dimensional range imaging device arranged to detect an object 106 within the preselected area 108 based on time-of-flight principles.
For example, in one implementation, the sensor module 206 is an image sensor arranged to detect an object 106 within the preselected area 108 based on receiving the reflected EM radiation. The sensor module 206 can detect whether an object is in the preselected area 108 based on the time that it takes for the EM radiation emitted from the illumination module 202 to be reflected back to the sensor module 206. This can be compared to the time that it takes for the EM radiation to return to the sensor module 206 when no object is in the preselected area 108.
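As a simplified sketch of this comparison (in Python, assuming a hypothetical calibrated background round-trip time measured with an empty preselected area), detection might be expressed as:

```python
def object_in_area(measured_time_ns: float,
                   background_time_ns: float,
                   tolerance_ns: float = 0.5) -> bool:
    """Return True if the reflection returns measurably sooner than the
    background reflection recorded when the preselected area is empty.
    The tolerance value is an illustrative placeholder."""
    return measured_time_ns < (background_time_ns - tolerance_ns)
```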
In an implementation, the sensor module 206 is arranged to recognize a gesture of at least one human hand or an object 106 within the preselected area 108 based on receiving the reflection of the EM pulse. For example, the sensor module 206 can recognize a human hand, an object 106, and/or a gesture based on the imaging of each individual pixel of the sensor module 206. The combination of each pixel as an individual imaging sensor can result in an image of a hand, a gesture, and the like, based on reflection times of portions of the EM radiation received by the individual pixels. This, in combination with the frame rate of the sensor module 206, allows tracking of the image of a hand, an object, a gesture, and the like. In other implementations, the sensor module 206 can recognize multiple objects, hands, and/or gestures with imaging from the multiple individual pixels.
Further, in an implementation, the sensor module 206 is arranged to distinguish gestures of one or more human hands from other objects 106 within the preselected area 108 and to exclude the other objects 106 when the gestures of the human hands are recognized. In other implementations, the sensor module 206 may be arranged to distinguish other objects 106 in the preselected area 108, and exclude any other items detected.
In one implementation, the sensor module 206 is arranged to determine a distance of a detected object 106 from the imaging system 102, based on receiving the reflected EM radiation. For example, the sensor module 206 can determine the distance of a detected object 106 by multiplying the speed of light by half of the time taken for the EM radiation to travel from the illumination module 202, to the object 106, and back to the sensor module 206, accounting for the round trip of the radiation. In one implementation, each pixel of the sensor module 206 is arranged to measure the time for a portion of the EM radiation to travel from the illumination module 202, to the object 106, and back to the pixel.
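For illustration, the per-pixel distance computation may be sketched as follows (in Python with NumPy; the array of per-pixel round-trip times is assumed to be provided by the sensor module and is not defined in the disclosure):

```python
import numpy as np

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_map(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (in seconds) into per-pixel
    distances (in meters); the factor of 0.5 accounts for the radiation
    traveling to the object and back."""
    return 0.5 * SPEED_OF_LIGHT_M_PER_S * round_trip_times_s
```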
In an implementation, a lateral resolution of the sensor module 206 is adjustable based on the operating mode of the imaging system 102. As shown in the state diagram 300 of FIG. 3, for example, the lateral resolution may be reduced (e.g., set to a minimum resolution) in the first operating mode, increased in the second operating mode, and further increased (e.g., set to a maximum resolution) in the third operating mode.
In an additional implementation, to conserve power, the frame rate (in frames per second) and/or the latency of the sensor module 206 may also be adjusted based on the operating mode of the imaging system 102. As shown in FIG. 3, for example, the frame rate may be reduced in the first operating mode, increased in the second operating mode, and further increased in the third operating mode.
In another implementation, power to the modulation drivers for the pixels (and/or to the illumination source/emitter) may be adjusted in a like manner based on the operating mode of the imaging system 102. For example, the power may be reduced (e.g., to a minimum power) in the first operating mode, increased in the second operating mode, and further increased (e.g., to a maximum power) in the third operating mode.
In a further implementation, the sensor module 206 may perform binning of the pixels configured to receive the reflection of the EM radiation. For example, the binning may include combining a group of adjacent pixels and processing the group of pixels as a single composite pixel. The increased pixel area may result in higher sensor sensitivity, and therefore reduce the illumination demand, allowing a power reduction in the emitted EM radiation. This power reduction may be in the form of reduced peak power, reduced integration time, or the like.
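A minimal sketch of such binning (in Python with NumPy) is shown below; the 2x2 binning factor and the choice to sum, rather than average, the combined pixels are illustrative assumptions, since the disclosure does not specify how the composite pixel value is formed:

```python
import numpy as np

def bin_pixels(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Combine each factor-by-factor block of adjacent pixels into a single
    composite pixel by summing, trading lateral resolution for sensitivity."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor  # crop so both dimensions divide evenly
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))
```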
If included in an implementation, the control module 208 may be arranged to provide controls and/or processing to the imaging system 102. For example, the control module 208 may control the operating modes of the imaging system 102, control the operation of the other modules (202, 204, 206), and/or process the signals and information output by the other modules (202, 204, 206). In various implementations, the control module 208 is arranged to communicate with one or more of the illumination module 202, optics module 204, and sensor module 206. In some implementations, the control module 208 may be integrated into one or more of the other modules (202, 204, 206), or be remote to the modules (202, 204, 206).
In one implementation, the control module 208 is arranged to determine the operating mode of the imaging system 102 based on whether the EM radiation is reflected off an object 106. Further, the control module 208 may be arranged to determine the operating mode of the imaging system 102 based on whether the object 106 is a human hand. As discussed with respect to the state diagram 300 in FIG. 3, the control module 208 may select the first, second, or third operating mode accordingly.
In an implementation, the control module 208 is arranged to detect, recognize, and/or track a gesture made by one or more hands, or by an object 106. In various implementations, the control module 208 may be programmed to recognize some objects 106 and exclude others. For example, the control module 208 may be programmed to exclude all other objects when at least one human hand is detected. The control module 208 may also be programmed to recognize and track certain gestures associated with inputs or commands to the mobile device 104, and the like. In one example, the control module 208 may set the imaging system 102 to the third operating mode when tracking a gesture, to ensure the best performance, and provide the most accurate read of the gesture.
In one implementation, the control module 208 is arranged to calculate a distance of the object 106 from the imaging system 102, based on the measured time of the reflected EM radiation. Accordingly, the control module 208 may be arranged to convert the current signal output from the sensor module 206 (or from the pixels of the sensor module 206) to a distance of the object 106 from the imaging system 102. Further, in an implementation, the control module 208 may be arranged to convert the current signal to a three-dimensional image of the object 106. In one implementation, the control module 208 is arranged to output the calculated distance and/or the three-dimensional image of the object 106. For example, the imaging system 102 may be arranged to output a distance, a three-dimensional image of the detected object 106, tracking coordinates of the object 106, and so forth, to a display device, to another system arranged to process the information, or the like.
In various implementations, additional or alternative components may be used to accomplish the disclosed techniques and arrangements.
The order in which the process is described is not intended to be construed as a limitation, and any number of the described process blocks can be combined in any order to implement the process, or alternate processes. Additionally, individual blocks may be deleted from the process without departing from the spirit and scope of the subject matter described herein. Furthermore, the process can be implemented in any suitable hardware, software, firmware, or combination thereof, without departing from the scope of the subject matter described herein.
At block 402, the process includes emitting electromagnetic (EM) radiation to illuminate a preselected area. In one example, the EM radiation may be emitted by an emitter (such as illumination module 202) comprising an LED or laser emitter, for example. In various implementations, the EM radiation comprises a modulated infrared light pulse. In various implementations, the preselected area may be relative to a computing device (such as mobile device 104), such as to provide an input to the computing device, for example.
At block 404, the process includes receiving a reflection of the EM radiation. For example, the reflection of the EM radiation may be received by an imaging sensor (such as sensor module 206). The EM reflection may be received by the imaging sensor via optics, a receiver, an antenna, or the like, for instance.
In various implementations, the process may include detecting, recognizing, and/or tracking an object, a human hand, and/or a gesture of the object or human hand.
At block 406, the process includes adjusting one or more parameters of the EM radiation based on whether the reflection of the EM radiation is reflected off an object within the preselected area. In various implementations, the one or more parameters of the EM radiation may include an illumination time, a duty cycle, a peak power, and a modulation frequency of the electromagnetic radiation. One or more parameters may be increased when an object is detected, and decreased when no object is detected, for example.
In a further implementation, the process includes adjusting the one or more parameters of the EM radiation based on whether the reflection of the EM radiation is reflected off a human hand within the preselected area. One or more parameters may be further increased when a hand is detected, and decreased when no hand is detected, for example.
In one implementation, the process includes adjusting one or more parameters of the imaging sensor based on whether the reflection of the EM radiation is reflected off an object within the preselected area. In various implementations, the one or more parameters of the imaging sensor may include a lateral resolution (in number of pixels), a depth resolution (in distance, for example), and a frame rate (in frames per second, for example).
In another implementation, the process includes binning pixels configured to receive the reflection of the EM radiation. For example, the binning may include combining the signals from a group of adjacent pixels and processing the combined signal of the group of pixels as a single composite pixel.
In an implementation, the process further includes measuring a time from emitting the EM radiation to receiving the reflection of the EM radiation and calculating a distance of an object based on the measured time. In a further implementation, the process includes outputting imaging information, such as a distance, a three-dimensional image of the detected object, tracking coordinates of the object, and so forth, to a display device, to another system arranged to process the information, or the like.
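Purely as an illustrative sketch of how the blocks of process 400 might fit together in a repeating cycle (in Python; the imager object, its methods, and the mode names are hypothetical placeholders, not elements of the disclosure):

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def imaging_cycle(imager, background_time_s: float, mode: str = "standby"):
    """One illustrative pass through blocks 402-406: emit, receive, measure,
    and choose the operating mode (and thus the parameters) for the next cycle."""
    imager.configure(mode)                                  # apply mode-dependent parameters
    imager.emit_pulse()                                     # block 402: illuminate the preselected area
    times_s = imager.read_round_trip_times()                # block 404: per-pixel round-trip times
    distances_m = 0.5 * SPEED_OF_LIGHT_M_PER_S * times_s    # time-of-flight distances
    object_found = bool(times_s.min() < background_time_s)  # anything closer than the empty background?
    hand_found = object_found and imager.looks_like_hand(distances_m)
    if hand_found:                                           # block 406: adjust parameters for next cycle
        next_mode = "track_hand"
    elif object_found:
        next_mode = "detect"
    else:
        next_mode = "standby"
    return distances_m, next_mode
```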
In alternate implementations, other techniques may be included in the process 400 in various combinations, and remain within the scope of the disclosure.
Although the implementations of the disclosure have been described in language specific to structural features and/or methodological acts, it is to be understood that the implementations are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as representative forms of implementing example devices and techniques. It is to be noted that each of the claims may stand as a separate embodiment. However, other embodiments may be provided by combining one or more features of an independent or dependent claim with features of another claim, even when no reference is made to that claim.