This disclosure relates to identifying and processing touchless user input commands to perform tasks and related functions.
Conventionally, electronic and computer-based devices are operated via remote controls. However, those separate devices are expensive, lose battery power, and tend to become lost over time. One way to eliminate the remote control is to include a motion detector, camera, or other type of input interface designed to receive touchless commands. Certain devices and related processing algorithms that support touchless commands are limited in their ability to identify user hand gestures. For example, known touchless user input technology has a limited capability to identify a hand, finger, and/or palm movement and to distinguish such a movement from other types of hand movements. This limited identification functionality of conventional interfaces has, in turn, limited the growth of the types of applications that can be integrated with hand gestures or user input gesture commands in general.
One embodiment of the present invention may include a method of detecting an input gesture command. The method may include obtaining at least one digital image, from a digital camera, of a pre-defined controlled movement area, comparing, via a processor, the at least one digital image to at least one pre-stored background image previously obtained from the digital camera of the same pre-defined controlled movement area, and identifying, via the processor, at least one pixel difference between the at least one digital image and the at least one pre-stored background image. The method may also include designating, via the processor, the at least one digital image as having a detected input gesture command.
Another example embodiment of the present invention may include an apparatus configured to detect an input gesture command including a digital camera and a receiver configured to receive at least one digital image from the digital camera of a pre-defined controlled movement area. The apparatus may also include a processor configured to compare the at least one digital image to at least one pre-stored background image previously obtained from the digital camera of the same pre-defined controlled movement area, identify at least one pixel difference between the at least one digital image and the at least one pre-stored background image, and designate the at least one digital image as having a detected input gesture command.
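For purposes of illustration only, the compare-and-designate flow summarized above may be sketched in a few lines of Python. The function name, array shapes, and the use of NumPy are assumptions made for this sketch and are not part of the disclosed embodiments.

```python
import numpy as np

def detect_gesture_frame(new_frame: np.ndarray, background: np.ndarray) -> bool:
    """Designate the new frame as having a detected input gesture command
    if at least one pixel differs from the pre-stored background image."""
    # Identify pixel differences between the digital image and the
    # pre-stored background image of the same controlled movement area.
    differences = new_frame.astype(np.int16) - background.astype(np.int16)
    # Any non-zero difference designates the frame as gesture inclusive.
    return bool(np.any(differences != 0))

# Example: an 8x8 background and a new frame with a single changed pixel.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[3, 4] = 255
print(detect_gesture_frame(frame, background))  # True
```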
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of a method, apparatus, and system, as represented in the attached figures, is not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments of the invention.
The features, structures, or characteristics of the invention described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of the phrases “example embodiments”, “some embodiments”, or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “example embodiments”, “in some embodiments”, “in other embodiments”, or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In addition, while the term “message” has been used in the description of embodiments of the present invention, the invention may be applied to many types of network data, such as packet, frame, datagram, etc. For purposes of this invention, the term “message” also includes packet, frame, datagram, and any equivalents thereof. Furthermore, while certain types of messages and signaling are depicted in exemplary embodiments of the invention, the invention is not limited to a certain type of message, and the invention is not limited to a certain type of signaling.
Example embodiments of the present invention provide touchless control communication devices, algorithms, and computer-based operations. Examples of touchless input commands may include a user providing input via hand gestures, arm gestures, finger movements, fist movements, wrist movements, or a combination thereof. The commands may be detected by a standalone device configured to detect the user input via infrared feedback signals. The user may perform hand movements to enact a conference room control function (e.g., begin a presentation, turn a presentation slide, lower a screen, dim the lights, etc.). Other uses for the touchless input commands may include residential household controls, gaming, etc.
The device 100 may also include a 12-volt power source and an IR sensor to detect movement in the proximity of the controlled movement area 150. The IR sensor 120 may alert the CMOS camera 130 to begin recording and to digitally capture various frames that are believed to include new hand gesture input commands. An audio feedback unit (not shown) may alert the user of the identified commands to allow the user to confirm or deny the command. Such a device may be configured to identify hand commands (e.g., finger movements) such as right, left, up, down, toward, away, and other movements. The response time of the device may be around 200 ms.
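A minimal sketch of this trigger-and-capture flow appears below; ir_motion_detected(), capture_frames(), and audio_feedback() are hypothetical stand-ins for the IR sensor 120, the CMOS camera 130, and the audio feedback unit, respectively, and are not part of the disclosed embodiments.

```python
import time

def ir_motion_detected() -> bool:
    # Hypothetical stand-in for the IR sensor 120; a real driver might
    # poll a GPIO line or handle a hardware interrupt.
    return True

def capture_frames(count: int) -> list:
    # Hypothetical stand-in for the CMOS camera 130 recording frames.
    return [None] * count

def audio_feedback(command: str) -> None:
    # Hypothetical audio feedback unit announcing the identified command
    # so the user may confirm or deny it.
    print(f"Identified command {command!r}; confirm or deny?")

# The IR sensor wakes the camera, frames are captured and processed, and
# the result is announced, ideally within the ~200 ms response budget.
if ir_motion_detected():
    start = time.monotonic()
    frames = capture_frames(5)
    audio_feedback("5")  # placeholder for the recognition result
    print(f"elapsed: {(time.monotonic() - start) * 1000:.1f} ms")
```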
The background frame may be normalized so that all of its pixels have a zero value. Each pixel of the recently obtained (new) digital frame is then compared to the corresponding pixel of the normalized background frame, and the differences appear in the new frame as non-zero values. In practice, the sum of the two frames (i.e., the accumulated pixel differences) must meet or exceed a predetermined threshold value in order for the new frame to be deemed a gesture-inclusive frame. If the camera position were to change, an automated re-calibration procedure would need to be conducted to identify the background of the new camera position. The camera 130 may also perform automatic, periodic calibration by obtaining snapshots and using them as the new basis for the background frame. A frame may be considered a message or data packet of digital data derived from one or more still images captured by a digital camera snapshot.
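The normalization, threshold test, and re-calibration described above may be sketched as follows; the class name and the default threshold value are illustrative assumptions, and grayscale NumPy frames are assumed.

```python
import numpy as np

class BackgroundSubtractor:
    def __init__(self, background: np.ndarray, threshold: int = 1000):
        # Storing the background as an offset normalizes it to all-zero
        # pixels: background - offset == 0 everywhere.
        self.offset = background.astype(np.int32)
        self.threshold = threshold  # predetermined difference value

    def is_gesture_frame(self, frame: np.ndarray) -> bool:
        # After normalization, the new frame's residual values are the
        # pixel differences; their sum must meet or exceed the threshold
        # for the frame to be deemed gesture inclusive.
        residual = np.abs(frame.astype(np.int32) - self.offset)
        return int(residual.sum()) >= self.threshold

    def recalibrate(self, snapshot: np.ndarray) -> None:
        # Automatic or periodic calibration: a fresh snapshot becomes the
        # new basis for the background frame (e.g., if the camera moves).
        self.offset = snapshot.astype(np.int32)

background = np.zeros((8, 8), dtype=np.uint8)
print(BackgroundSubtractor(background).is_gesture_frame(background))  # False
```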
The linear representation may include formatting the hand image onto a linear grid. The hand identification may also include identifying one point as the center of the hand (i.e., the fist), which is marked on the grid. Points above the fist may be identified as the appendages and points below it as the arm. The points are connected to form a linear representation around a centered fist. The linear representation may then serve as the basis for comparison against known or pre-stored user command gestures.
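One way such a grid representation might be computed is sketched below, using the centroid of a binary hand mask as the fist center; the function name and the centroid heuristic are assumptions for illustration only.

```python
import numpy as np

def linear_representation(hand_mask: np.ndarray):
    """Reduce a binary hand mask to grid points around a centered fist."""
    points = np.argwhere(hand_mask)     # (row, col) gridpoints of the hand
    center = points.mean(axis=0)        # fist center marked on the grid
    # Rows increase downward, so smaller row indices lie above the fist.
    appendages = points[points[:, 0] < center[0]]   # appendages (fingers)
    arm = points[points[:, 0] > center[0]]          # arm below the fist
    return center, appendages, arm

# Example: a crude mask with a fist and one raised appendage.
mask = np.zeros((10, 10), dtype=bool)
mask[5:8, 4:7] = True   # fist
mask[2:5, 5] = True     # one appendage above the fist
center, appendages, arm = linear_representation(mask)
print(center, len(appendages), len(arm))
```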
Once a linear representation is obtained, a series of IF-THEN logic commands may be used to identify the specific command intended by the user. For example, if the arm endpoint is located at gridpoint (x, y), the center of the fist is at gridpoint (a, b), and one appendage is identified as being above the others by an appreciable distance, then the logic may indicate a “1” as the resulting command. If instead two appendages are identified, the command may be “2”, and if no appendages are identified, the command may be “0”. To avoid false commands, the “0” may be used as an enter function. For example, if the number “5” turns a conference room projector to the “ON” state, then a command of “5” followed by a “0” may be used to indicate an enter function. If the command is accepted, an audible indicator may be used to confirm the input.
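This IF-THEN mapping and the “0”-as-enter convention might be realized as follows; the appendage counts and the command table are illustrative assumptions based on the example above.

```python
def classify(appendage_count: int) -> str:
    # IF-THEN mapping from the linear representation to a command digit.
    if appendage_count == 1:
        return "1"
    elif appendage_count == 2:
        return "2"
    elif appendage_count == 0:
        return "0"   # reserved as the enter function
    return str(appendage_count)

def accept_commands(appendage_counts):
    """Act on a command only when it is followed by the '0' enter gesture."""
    pending = None
    for count in appendage_counts:
        command = classify(count)
        if command == "0" and pending is not None:
            print(f"Command {pending!r} accepted")  # audible confirmation
            pending = None
        else:
            pending = command

# Example: "5" (projector ON) followed by "0" (enter).
accept_commands([5, 0])
```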
The control box 302 may include a CMOS camera 390 that is configured to capture images and provide background frames to a background identification and cancellation module 380. The movements captured in subsequent frames may be identified via a movement and shape identification module 370. The shapes may be converted to linear representations via the conversion module 360 and submitted to a linear control and register conversion module 350. The data images may be obtained and compared to pre-stored image data in a database (not shown) as a basis to determine what type of commands are being received. The master controller 330 may then transfer a command to a remote controlled device 310/320 based on the identified user input gesture command.
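The module chain of the control box 302 can be expressed as a simple pipeline sketch; the function bodies below are placeholders that mirror modules 380 through 330, not implementations of them.

```python
def background_cancellation(frame):          # module 380
    return frame

def movement_shape_identification(frame):    # module 370
    return frame

def to_linear_representation(shape):         # module 360
    return shape

def linear_control_register(linear):         # module 350: compare against
    return "5"                               # pre-stored image data

def master_controller(command):              # module 330
    print(f"Transferring {command!r} to remote controlled device 310/320")

# Camera frame -> background cancellation -> shape identification ->
# linear conversion -> register lookup -> master controller.
frame = object()  # stand-in for a frame from the CMOS camera 390
master_controller(
    linear_control_register(
        to_linear_representation(
            movement_shape_identification(
                background_cancellation(frame)))))
```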
The digital camera may be triggered to obtain the at least one digital image based on a movement detected by an infrared (IR) sensor coupled to the processor associated with the digital camera. The content of the at least one digital image may be converted to a linear representation to identify the type of input gesture command provided by the user. The linear representation may include a plurality of gridpoints used to identify the user's body part used for the input gesture command. The linear representation may be compared to a pre-stored linear representation to identify the type of input gesture command. Once identified, the command may be transmitted to a remote device.
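Comparing a captured linear representation against pre-stored representations might use a nearest-template match such as the sketch below; the template vectors, distance metric, and transmit stub are assumptions for illustration only.

```python
import numpy as np

# Hypothetical pre-stored linear representations keyed by command type.
TEMPLATES = {
    "1": np.array([0.0, 1.0, 0.0]),
    "2": np.array([0.0, 1.0, 1.0]),
    "5": np.array([1.0, 1.0, 1.0]),
}

def identify_command(linear: np.ndarray) -> str:
    # The nearest pre-stored template (Euclidean distance) identifies the
    # type of input gesture command.
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - linear))

def transmit(command: str) -> None:
    # Stand-in for transmitting the command to the remote device.
    print(f"Transmitting {command!r} to remote device")

transmit(identify_command(np.array([0.1, 0.9, 0.2])))  # -> "1"
```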
The operations of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a computer program executed by a processor, or in a combination of the two. A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (“RAM”), flash memory, read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), registers, hard disk, a removable disk, a compact disk read-only memory (“CD-ROM”), or any other form of storage medium known in the art.
An exemplary storage medium may be coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (“ASIC”). In the alternative, the processor and the storage medium may reside as discrete components.
One example method of operation is illustrated in the accompanying flow diagram.
Although an exemplary embodiment of the system, method, and non-transitory computer readable medium of the present application has been illustrated in the accompanying drawings and described in the foregoing detailed description, it will be understood that the present invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit or scope of the invention as set forth and defined by the following claims.
While preferred embodiments of the present invention have been described, it is to be understood that the embodiments described are illustrative only and the scope of the invention is to be defined solely by the appended claims when considered with a full range of equivalents and modifications (e.g., protocols, hardware devices, software platforms etc.) thereto.
This application claims priority to U.S. provisional patent application Ser. No. 61/478,841 entitled TOUCHLESS CONTROL, filed Apr. 25, 2011, the entire contents of which are herein incorporated by reference.