The present invention generally relates to input processing systems and more particularly to processing input from a plurality of users via image processing.
Computing system applications interact with users by receiving user input, processing the input, and providing a result. As systems become more advanced and more people embrace technology, applications have evolved to engage multiple users simultaneously. Multiple users may interact with a computing system at the same time, and each input may relate to a particular user. For example, two users may use game controllers to play a computer game that allows them to compete against each other during a game session. Each user provides input with a respective gaming controller.
Technology has evolved to allow different types of interaction with computing systems. Rather than receiving input from a remote gaming controller for each user, for example, some computing systems utilize a single input mechanism such as a touch screen. When only one user is engaging with the computing device, the single input mechanism receives input from that single user. When multiple users engage the computing system through a single input mechanism, however, it is very difficult to determine which user is providing the input.
There is a need in the art for a system that allows multiple users to easily and efficiently interact with a computing device using a single input mechanism.
In an embodiment, input may be received by first identifying a plurality of users physically in the presence of the device. An input may be received by the device from a first user of the plurality of physically present users. A physical state of one of the plurality of users may be detected and associated with the input.
In an embodiment, a system for detecting input may include a display device, a camera, a processor and modules stored in memory and executable by the processor. The camera may capture color image data and provide the image data to the processor. A feature detection module is executable to detect a physical feature of a user. A user focus detection module detects the point of focus of a user's eyes. An input processing module receives and processes an input from a user.
Embodiments of the invention determine which user of multiple users provided input through a single input device. The computing system may include a mechanism for capturing images of the one or more users. The images may be processed to determine which user provided an input using the input device. For example, the images may be processed to identify each user's head and eyes, and to determine the point of focus of each user's eyes. The user whose eyes are focused on the input device is identified as providing the input. In embodiments where the input mechanism is a touch screen, the user whose eyes are focused on the portion of the touch screen that was touched is identified as providing the input.
Embodiments of the invention may be used with several types of computing devices.
User focus module 510 may analyze images of a user's eye to determine where the user is focusing. The front of a human eye includes a black pupil, a colored iris around the pupil, and a white sclera around the iris. A computing device may analyze the area and location of the sclera to determine whether a user is focused up, down, left, or right. For example, when a user's eyes are focused on an object to the user's right, an image captured of the user's eyes will show more of the sclera on the right side of the eye in the image (the user's left side) than on the left side, because the pupils will have shifted toward the left side of the image (the user's right).
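As a rough illustration of this sclera-based estimate (not the disclosed implementation; the brightness threshold and pixel-counting approach below are assumptions), a coarse horizontal gaze direction can be guessed by comparing how much sclera is visible on either side of the detected pupil:

```python
import numpy as np

def estimate_horizontal_gaze(eye_gray, pupil_x, sclera_threshold=170):
    """Coarse gaze estimate from sclera imbalance in a grayscale eye crop.

    eye_gray: 2-D numpy array holding a grayscale crop of one eye.
    pupil_x:  column index of the detected pupil center within the crop.
    Returns 'user-right', 'user-left', 'center', or 'unknown'.
    """
    # Bright pixels are treated as sclera (the white of the eye).
    sclera = eye_gray >= sclera_threshold

    left_area = int(sclera[:, :pupil_x].sum())    # image-left of the pupil
    right_area = int(sclera[:, pupil_x:].sum())   # image-right of the pupil
    total = left_area + right_area
    if total == 0:
        return "unknown"

    # More visible sclera on the image-right side means the pupil has shifted
    # toward the image-left, i.e. the user is looking toward the user's right.
    imbalance = (right_area - left_area) / total
    if imbalance > 0.2:
        return "user-right"
    if imbalance < -0.2:
        return "user-left"
    return "center"
```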
Input processing module 530 receives input and processes the input. The input may be selection of a designated hot spot on a touch screen, a button, a wireless signal, or some other input. Input processing module 530 may receive information from other modules which identify a user that has provided a recent input. The input processing module then processes the input as the identified user's action.
Feature library module 540 may include facial and eye masks, templates, models, and other data used to process an image and identify a user's physical feature and the feature state, such as the direction in which a user's eyes are focused.
A method for detecting a human head via image processing may begin with analyzing an image for shapes resembling a human head. Shapes may be identified using contrast detection, motion detection, and other techniques. Once a potential head shape is detected, the head candidate is analyzed for features common to most human heads. The features may include contrast, shading or other features present where a nose, mouth, or eyes may be. If the candidate head satisfies a threshold level of features, the head candidate may be identified as a participating user. Other methods for detecting faces in images are known in the art.
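Face detection of this kind is widely available in off-the-shelf libraries. The following sketch uses OpenCV's Haar-cascade detector purely as an illustrative stand-in for the head-candidate step described above; it is not the disclosed method, and the cascade file and parameter values are assumptions:

```python
import cv2

def detect_head_candidates(frame_bgr, min_size=(60, 60)):
    """Return bounding boxes (x, y, w, h) of likely faces in a color frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # scaleFactor and minNeighbors act like the "threshold level of features":
    # raising minNeighbors rejects weaker head candidates.
    return cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=min_size)
```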
An eye area of each user is located at step 615. Detecting an eye area of a participating user's head may involve searching for a contrast, brightness, or other image property level at about the area of the head where the eyes are located. Once the user's eyes are located, eye behavior may be calibrated for each participating user at step 620. Calibration may include on-screen instructions to a participating user indicating a distance range from the computing device within which the user's face should be kept, instructions to look at a particular point on the screen, and other directions. The calibration may have a user look at different points or hot spots on the display of the computing device, and analyze the images of the user's eyes when the focus of the user's eyes is known. For example, the user may be instructed to look at each of several known points on the display in turn while images of the user's eyes are captured and analyzed.
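One hypothetical way to use such calibration data is to fit a simple affine mapping from measured pupil offsets to the known screen positions the user was asked to look at; the variable names and the least-squares fit below are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def fit_gaze_mapping(pupil_offsets, screen_points):
    """Fit an affine map from pupil offsets (dx, dy) to screen coordinates.

    pupil_offsets: list of (dx, dy) measured while the user looked at each
                   calibration point.
    screen_points: the matching (x, y) positions on the display.
    Returns a 3x2 matrix M such that [dx, dy, 1] @ M approximates (x, y).
    Requires at least three non-degenerate calibration points.
    """
    A = np.hstack([np.asarray(pupil_offsets, dtype=float),
                   np.ones((len(pupil_offsets), 1))])
    B = np.asarray(screen_points, dtype=float)
    M, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M

def gaze_to_screen(M, dx, dy):
    """Map a newly observed pupil offset to an estimated point of focus."""
    return np.array([dx, dy, 1.0]) @ M
```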
Tracking of user eyes begins at step 625. The tracking involves capturing consecutive images of the user. The images may be processed to track and maintain knowledge of the user's eye location and focus. In some embodiments, the images are captured repeatedly and stored, but are then discarded if no input is received from any user.
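The capture-and-discard behavior can be modeled with a small bounded buffer of recent frames; in the sketch below (the buffer size and interface are assumptions), the oldest frames are dropped automatically and the buffered frames are handed to the analysis step only when an input arrives:

```python
from collections import deque

class FrameBuffer:
    """Keep only the most recent frames; older frames are discarded."""

    def __init__(self, max_frames=30):
        self._frames = deque(maxlen=max_frames)

    def add(self, frame):
        # Appending beyond maxlen silently drops the oldest frame.
        self._frames.append(frame)

    def consume(self):
        """Return and clear the buffered frames when an input is received."""
        frames = list(self._frames)
        self._frames.clear()
        return frames
```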
Input is detected at a screen hot spot at step 630. The input may include a user touching a hot spot on a touch screen for one of computing console 120, tablet computer 220, and mobile device 320. The hot spot may be a particular image object displayed on the screen, such as an image of a character, ball, virtual item, text, or other object. A user whose eyes are focused on the hot spot location of the input is identified at step 635. The point of eye focus may be determined as discussed above with respect to the calibration process. A user corresponding to the particular input may be identified in a variety of ways. For example, the eye focus for each user may be determined, and the user with the eye focus closest to the hot spot may be selected. Alternatively, the eye focus for each user may be determined until an eye focus is detected within a threshold distance of the hot spot at which input was received. In some embodiments, a likelihood of input may be determined for each user based on the user's input history, the user's eye focus, whether an input is expected from the user, and so on. Once an input is associated with an identified user, the input at the hot spot is processed for the particular user at step 640.
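The closest-focus and threshold-distance selection described above can be illustrated with a short sketch (the pixel threshold and data layout are assumptions):

```python
import math

def identify_input_user(hot_spot, user_focus_points, max_distance=150):
    """Pick the user whose estimated gaze point is nearest the touched hot spot.

    hot_spot: (x, y) screen location of the input.
    user_focus_points: mapping of user id -> (x, y) estimated point of focus.
    max_distance: threshold in pixels beyond which no user is matched.
    Returns the matching user id, or None if no gaze is close enough.
    """
    best_user, best_dist = None, float("inf")
    for user_id, (fx, fy) in user_focus_points.items():
        dist = math.hypot(fx - hot_spot[0], fy - hot_spot[1])
        if dist < best_dist:
            best_user, best_dist = user_id, dist
    return best_user if best_dist <= max_distance else None
```

For instance, `identify_input_user((640, 360), {"user_830": (100, 500), "user_835": (655, 350)})` would return `"user_835"`, since that user's estimated focus lies closest to the touched hot spot and within the threshold.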
When input is received, the computing device 800 determines the user focus and identifies the input as coming from the user focusing on the hot spot that received the input. For example, user 835 has provided input at hot spot 855 by pressing the screen at hot spot 855. Upon receiving the input, the processing device will analyze images captured from the color camera 810, IR camera 815, or both. From the images, each user's focus will be determined. If, after processing the images, user 830 is determined to have focus 870 and user 835 is determined to have focus 880, which corresponds to hot spot 855, the input received at hot spot 855 will be associated with user 835. By determining where users' eyes are focused, an input received through a device used by a plurality of players may be associated with one of those players.
Cameras 910 may include one or more cameras able to capture a series of photos suitable for image processing analysis. The cameras may be embedded within the computing system or mounted externally to the system. The images captured by cameras 910 may be provided via bus 960 to processor 915, which may execute modules stored in memory 920 to analyze the images for feature detection.
IR device 935 may include an IR camera that is able to capture images in very low light conditions. The IR images may be processed similarly to color camera images for user feature detection. The images captured by IR device 935 may be sent to processor 915 for processing via bus 960.
The components shown in FIG. 9 are depicted as being connected via a single bus 960.
Mass storage device 925, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 915. Mass storage device 925 can store the system software for implementing embodiments of the present invention for purposes of loading that software into main memory 920.
Portable storage device 930 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, or digital video disc, to input and output data and code to and from the computer system 900 of FIG. 9.
Input devices 945 provide a portion of a user interface. Input devices 945 may include an alpha-numeric keypad, such as a keyboard, for inputting alpha-numeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the system 900 as shown in
Display system 950 may include a liquid crystal display (LCD) or other suitable display device. Display system 950 receives textual and graphical information, and processes the information for output to the display device. Display system 950 may include a touch screen device which receives input by detecting a touch on the surface of the display. The pixels receiving the touch are communicated to processor 915 via bus 960.
Peripherals 955 may include any type of computer support device to add additional functionality to the computer system. For example, peripheral device(s) 955 may include a modem or a router.
The components contained in the computer system 900 of
The foregoing detailed description of the technology herein has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims appended hereto.
This application is a continuation and claims the priority benefit of U.S. patent application Ser. No. 15/296,552 filed Oct. 18, 2016, now U.S. Pat. No. 10,496,159, which is a continuation and claims the priority benefit of U.S. patent application Ser. No. 13/464,703 filed May 4, 2012, now U.S. Pat. No. 9,471,763, the disclosures of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5471542 | Ragland | Nov 1995 | A |
6456262 | Bell | Sep 2002 | B1 |
6637883 | Tengshe et al. | Oct 2003 | B1 |
8922480 | Freed et al. | Dec 2014 | B1 |
9471763 | Norden | Oct 2016 | B2 |
10496159 | Norden | Dec 2019 | B2 |
20050024586 | Teiwes | Feb 2005 | A1 |
20090273687 | Tsukizawa et al. | Nov 2009 | A1 |
20100066667 | MacDougall et al. | Mar 2010 | A1 |
20100125816 | Bezos | May 2010 | A1 |
20100130280 | Arezina et al. | May 2010 | A1 |
20100165093 | Sugio et al. | Jul 2010 | A1 |
20100225595 | Hodges et al. | Sep 2010 | A1 |
20110254865 | Yee | Oct 2011 | A1 |
20120008139 | Miziolek et al. | Jan 2012 | A1 |
20120033853 | Kaneda et al. | Feb 2012 | A1 |
20120081392 | Arther | Apr 2012 | A1 |
20120105490 | Pasquero | May 2012 | A1 |
20120256967 | Baldwin | Oct 2012 | A1 |
20130003352 | Lee et al. | Jan 2013 | A1 |
20130033524 | Wang et al. | Feb 2013 | A1 |
20130057573 | Chakravarthula et al. | Mar 2013 | A1 |
20130145304 | DeLuca | Jun 2013 | A1 |
20130201305 | Sibecas et al. | Aug 2013 | A1 |
20130293467 | Norden | Nov 2013 | A1 |
20170139476 | Norden | May 2017 | A1 |
Number | Date | Country |
---|---|---|
102034088 | Apr 2011 | CN |
412015 | Oct 2015 | IN |
2006-201966 | Aug 2006 | JP |
2007-136000 | Jun 2007 | JP |
2012-065781 | Apr 2012 | JP |
2015-521312 | Jul 2015 | JP |
10-2015-0032661 | Mar 2015 | KR |
WO 2011130594 | Oct 2011 | WO |
WO 2013165646 | Nov 2013 | WO |
Entry |
---|
Chinese Patent Application No. 201380022450.4 Fourth Office Action dated Mar. 5, 2019. |
Chinese Patent Application No. 201380022450.4 Third Office Action dated May 3, 2018. |
Chinese Patent Application No. 201380022450.4 Second Office Action dated Sep. 22, 2017. |
European Patent Application No. 13785175.4 Extended European Search Report dated Dec. 11, 2015. |
Korean Patent Application No. 2014-7030872 Notice of Preliminary Rejection dated May 3, 2016. |
Japanese Patent Application No. 2015-510289 Notification of Reason(s) for Refusal dated Dec. 8, 2015. |
PCT Application No. PCT/US2013/035331 International Search Report and Written Opinion dated Jan. 30, 2015. |
U.S. Appl. No. 13/464,703 Office Action dated Jan. 7, 2016. |
U.S. Appl. No. 13/464,703 Office Action dated Jul. 27, 2015. |
U.S. Appl. No. 13/464,703 Final Office Action dated May 23, 2014. |
U.S. Appl. No. 13/464,703 Office Action dated Nov. 20, 2013. |
U.S. Appl. No. 15/296,552 Office Action dated Dec. 14, 2018. |
U.S. Appl. No. 15/296,552 Final Office Action dated Dec. 1, 2017. |
U.S. Appl. No. 15/296,552 Office Action dated May 18, 2017. |
Brazilian Patent Application No. BR1120140273430 Preliminary Office Action dated Jan. 14, 2020. |
Brazilian Patent Application No. BR1120140273430 Search Report dated Jan. 14, 2020. |
India Patent Application No. 2224/MUMNP/2014 Examination Search Report dated Dec. 30, 2019. |
Number | Date | Country | |
---|---|---|---|
20200249751 A1 | Aug 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15296552 | Oct 2016 | US |
Child | 16701832 | US | |
Parent | 13464703 | May 2012 | US |
Child | 15296552 | US |