Embodiments described herein generally relate to the field of electronic devices, and more particularly, to determining a number of users and their respective positions relative to a device.
Embodiments are illustrated by way of example and not by way of limitation in the FIGURES of the accompanying drawings, in which like references indicate similar elements and in which:
The FIGURES of the drawings are not necessarily drawn to scale, as their dimensions can be varied considerably without departing from the scope of the present disclosure.
In an example, there is disclosed a system, an apparatus, and a method for determining a number of users and their respective positions relative to a device. One example embodiment includes acquiring touch point data from a hand of a user, clustering the touch point data, and determining a respective position of the user by mapping the clustered touch point data to a pre-defined hand pattern. The touch point data can include a plurality of touch points and a distance between each touch point is used to cluster the touch point data. In one example, the touch point data may be acquired using a touch sensor and the touch sensor can be a touch display.
In addition, a touch point clustering module may be used to cluster the touch point data. The clustered touch point data can be re-configured when a finger touch point is classified as a thumb touch point. Further, the clustered touch point data may be prevented from being mapped to more than one hand pattern using a pattern conflict resolution module where the pattern conflict resolution module uses a horizontal span and a vertical span to determine the correct hand pattern. Hand geometric statistics can be used to remove false positives from the clustered touch point data.
Large screen devices, such as adaptive all-in-ones or big tablets, can be used as multi-user devices and can be used in a tabletop mode that allows a user to lay the system completely flat on a tabletop or other surface. These capabilities allow users to use the system for multi-user gaming, shared art, browsing, content creation, and presentation applications, and, if required, the system can be used as a lay-flat surface. Supporting these usages has various challenges. Sometimes it can be difficult to know the number of users playing a multi-user game. Also, in a tabletop mode, a user can be positioned at any of the four sides of the surface and it can be difficult to know a user's current position in order to orient the display accordingly. Currently, in multi-user games, users explicitly specify the number of players through the application user interface. Similarly, in tabletop mode, users explicitly adjust the device orientation using control panel functions. Current solutions take the required parameters (position and count) from each user explicitly through some application user interface (UI) with multiple and monotonous steps. What is needed is a system and method that allows a device to identify the number of users and their positions around the device by having the users perform only a few simple steps. It would be beneficial if the system could automatically determine the number of users and their positions.
The foregoing is offered by way of non-limiting examples in which the system and method of the present specification may usefully be deployed. The following disclosure provides many different embodiments, or examples, for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Further, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Different embodiments may have different advantages, and no particular advantage is necessarily required of any embodiment.
In the examples of the present specification, a system and method can be provided that allow an electronic device to identify the number of users and their positions around a device by having each user perform only a few simple steps; the system then automatically determines the number of users and their positions. This allows the system to avoid the multiple initial setup steps typically required and hence makes the overall system faster and easier to operate. The system can provide a user experience that is more intuitive, elegant, and faster than existing solutions and does not take the required parameters (position and count) from each user explicitly through some application UI with multiple and monotonous steps. In an example, the system can be configured to automatically determine the number of users and their respective positions around a device by having each user perform a multi-finger touch on the device.
The multi-finger touch can be a pre-defined touch gesture that can be analyzed to determine the number of users and their respective positions. The pre-defined touch gesture can be almost any type of multi-finger touch by the user. In a specific example, the most natural and conflict-free touch gesture is for a user to put their hand on a display and hold it on the display for a short time (e.g., about 200 milliseconds (ms) to about 1500 ms). The pre-defined touch gesture data can be processed in a hardware accelerated (e.g., graphics processing unit execution units (GPU-EUs)) environment for a real-time or near real-time response. In other examples, the processing can be slower.
The processing of the touch point data can be done in a GPU. A touch point clustering phase clusters touch points into different subgroups based on their shortest distance from each other. A thumb position correction phase can include correction logic in which the system considers the fact that the thumb and index finger of a user can be close together and that their positions on the horizontal axis may need to be reordered in order to match pre-defined hand patterns. A hand pattern matching (e.g., geometric recognition) phase maps each of the subgroups to possible hand patterns. For example, on a square display, the system may have four possible hand patterns, one for each side of the display. A pattern conflict resolution phase considers various other parameters, such as the horizontal span and vertical span of the cluster, to determine the most appropriate mapped pattern for the subgroup. A false positive removal phase can remove false positives by using various hand geometric statistics and comparing the geometric statistics with corresponding values of the current touch point subgroup.
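For illustration only, the following is a minimal sketch, in Python, of how the five phases described above might be composed. The function and type names (for example, cluster, correct_thumb, match_patterns, resolve_conflict, is_false_positive) are assumptions of this sketch and are not taken from the disclosure; the phase implementations are supplied as callables so the skeleton stays agnostic to how each phase is realized.

```python
# Hypothetical orchestration of the five touch-processing phases.
# All names are illustrative assumptions; only the phase ordering
# follows the description above.
from typing import Callable, List, Sequence, Tuple

Point = Tuple[float, float]          # (x, y) touch coordinate
Subgroup = List[Point]               # touch points attributed to one hand

def process_touch_points(
    points: Sequence[Point],
    cluster: Callable[[Sequence[Point]], List[Subgroup]],
    correct_thumb: Callable[[Subgroup], Subgroup],
    match_patterns: Callable[[Subgroup], List[str]],
    resolve_conflict: Callable[[Subgroup, List[str]], str],
    is_false_positive: Callable[[Subgroup], bool],
) -> List[Tuple[Subgroup, str]]:
    """Run the phases in order and return one (subgroup, orientation) per hand."""
    hands: List[Tuple[Subgroup, str]] = []
    for group in cluster(points):                            # phase 1: touch point clustering
        group = correct_thumb(group)                         # phase 2: thumb position correction
        candidates = match_patterns(group)                   # phase 3: hand pattern matching
        orientation = resolve_conflict(group, candidates)    # phase 4: pattern conflict resolution
        if not is_false_positive(group):                     # phase 5: false positive removal
            hands.append((group, orientation))
    return hands  # len(hands) is the hand count; orientations suggest user positions
```

In this sketch, the hand count is simply the number of surviving subgroups, and each subgroup's resolved pattern indicates the side of the display, and therefore the suggested position, of the corresponding user.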
The output of the touch point data processing is a hand count, which represents the number of users, and the orientation of each hand with respect to the device. From the orientation of each hand with respect to the device, a suggested position of each user around the device can be determined. The output of the touch processing is made available to multi-user applications and background services through a user's touch software development kit (SDK) to enable various use cases.
The system can be configured to provide a new user experience of interacting with the system through a hand touch gesture to indicate the presence of the user around the system. The system can be agnostic to the touch hardware, operating system (OS), and application software stack, and can be ported to almost any platform. The processing of the touch gesture data can be done using a touch point clustering algorithm to identify a number of hands, and the algorithm can detect hand orientation in n*log(n) time complexity (where “n” is the number of users). Various phases of the touch point clustering and hand orientation detection algorithms can be implemented in GPU-EUs for a hardware accelerated response and for pre-OS secure application usage possibilities (e.g., High-Definition Multimedia Interface (HDMI) TV contents).
In an example, users place their hands on a touch screen. A touch sensor sends the raw touch data to a touch driver. The touch driver can pass the data to a graphics driver through a private interface and memory map. The graphics driver initiates touch kernel processing in GPU-EUs using the touch input data. Touch kernel processing passes the touch input through the various phases, including touch point clustering, thumb position correction, hand pattern matching, conflict resolution, and false positive removal. The output of this step is a hand count and the orientation of each hand. Once the output of the touch kernel processing is available, it propagates to a user mode component of a user's touch SDK. The user's touch SDK then sends notifications to the registered processes so they can take appropriate actions.
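As a purely hypothetical illustration of the final notification step, the sketch below shows one shape a user's touch SDK registration interface could take; the class and method names (UsersTouchSDK, register, publish) are assumptions and are not defined by the disclosure.

```python
# Hypothetical sketch of the notification path from the user-mode SDK
# component to registered processes. Names and signatures are illustrative.
from typing import Callable, List

HandOrientations = List[str]                    # e.g., ["bottom", "left"]
Callback = Callable[[int, HandOrientations], None]

class UsersTouchSDK:
    def __init__(self) -> None:
        self._callbacks: List[Callback] = []

    def register(self, callback: Callback) -> None:
        """A process registers to be notified when new results are available."""
        self._callbacks.append(callback)

    def publish(self, hand_count: int, orientations: HandOrientations) -> None:
        """Called by the user-mode component once touch kernel output arrives."""
        for notify in self._callbacks:
            notify(hand_count, orientations)

# Example: a multi-user game adjusts its player setup when users are detected.
sdk = UsersTouchSDK()
sdk.register(lambda count, sides: print(f"{count} players detected at {sides}"))
sdk.publish(2, ["bottom", "top"])
```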
The following detailed description sets forth example embodiments of apparatuses, methods, and systems relating to detection of display rotation mechanisms or devices for an electronic device. Features such as structure(s), function(s), and/or characteristic(s), for example, are described with reference to one embodiment as a matter of convenience; various embodiments may be implemented with any suitable one or more of the described features.
Turning to
Display 106 may be a liquid crystal display (LCD) display screen, a light-emitting diode (LED) display screen, an organic light-emitting diode (OLED) display screen, a plasma display screen, or any other suitable display screen system. In addition, display 106 may be a touch display. Electronic device 100 can include a battery and various electronics (e.g., a wireless module (e.g., Wi-Fi module, Bluetooth module, etc.), processor, memory, camera, a microphone, speakers, etc.).
Turning to
By placing first user's hand 116 on touch sensor 120, user identification module 104 can be configured to recognize that a user wants to be identified and user identification module 104 can begin the process of recognizing the presence of the user. In another example, a presence indicator 122 may be selected, and presence indicator 122 can signal user identification module 104 that a user wants to be identified and that user identification module 104 should begin the process of recognizing the presence of the user. Touch sensor 120 and/or the touch features of display 106 can detect first user's hand 116 and second user's hand 118 as touch points, and user identification module 104 can group the touch points into two subgroups, one for each user.
Turning to
Touch point clustering module 128 can be configured to cluster the touch point data into different subgroups based on the shortest distance between each of the touch points in the touch point data. Each cluster or subgroup can represent a hand of a user. Thumb position correction module 130 can be configured to correct the clustering when it determines that the thumb and index finger touch points are close together and their positions on a horizontal axis should be reordered in order to match a pre-defined hand pattern in pre-defined hand patterns 124. Thumb position correction module 130 can be configured to be applied to each subgroup individually. Hand pattern matching module 132 can be configured to map each of the subgroups to one of the possible hand patterns in pre-defined hand patterns 124.
It is possible that a subgroup could be mapped to more than one of the possible hand patterns in pre-defined hand patterns 124. Pattern conflict resolution module 134 can be configured to resolve the conflict by considering various other parameters, such as the horizontal span and vertical span of the subgroup, to determine the most appropriate mapped pattern for the subgroup. False positive removal module 136 can be configured to use various hand geometric statistics and compare those statistics with corresponding values of a touch point subgroup to remove false positives. For example, an average vertical distance between the various touch points of a subgroup cannot be more than a pre-defined number of inches.
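The following is a hedged sketch of how pattern conflict resolution module 134 and false positive removal module 136 might apply the span and geometric-statistic checks described above. The tie-breaking rule, the assumption that the "first side" and "third side" patterns correspond to hands spreading mostly horizontally, and the threshold and dots-per-inch values are illustrative assumptions only.

```python
# Illustrative span-based conflict resolution and a single geometric
# false-positive check. Thresholds and the side naming are assumptions.
from typing import List, Tuple

Point = Tuple[float, float]

def resolve_pattern_conflict(group: List[Point], candidates: List[str]) -> str:
    """Pick one pattern when a subgroup maps to more than one candidate.

    Assumes at least one candidate pattern was matched.
    """
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    h_span = max(xs) - min(xs)                  # horizontal span of the subgroup
    v_span = max(ys) - min(ys)                  # vertical span of the subgroup
    # Assumed rule: a hand reaching in from the top or bottom edge spreads its
    # fingertips mostly horizontally; a hand from the left or right edge
    # spreads them mostly vertically.
    horizontal_sides = {"first side", "third side"}
    if h_span >= v_span:
        preferred = horizontal_sides
    else:
        preferred = set(candidates) - horizontal_sides
    for side in candidates:
        if side in preferred:
            return side
    return candidates[0]

def is_false_positive(group: List[Point],
                      max_avg_vertical_inches: float = 3.0,
                      dots_per_inch: float = 96.0) -> bool:
    """Reject a subgroup whose average vertical gap exceeds a pre-defined limit."""
    ys = sorted(p[1] for p in group)
    gaps = [b - a for a, b in zip(ys, ys[1:])]
    if not gaps:
        return False
    avg_vertical_inches = (sum(gaps) / len(gaps)) / dots_per_inch
    return avg_vertical_inches > max_avg_vertical_inches
```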
Turning to
To create first touch point data 140 and second touch point data 142, touch point clustering module 128 can determine the distance between the touch points acquired by touch sensor module 126 and use the distance to separate the touch points into subgroups. Touch point clustering module 128 can be configured to pair touch points 144a-j and calculate the distance between each pair. For example, touch points 144a and 144b have been paired and the distance between them calculated as D1. Likewise, touch points 144b and 144c have been paired with distance D2, touch points 144c and 144d with distance D3, touch points 144d and 144e with distance D4, touch points 144e and 144f with distance D5, touch points 144f and 144g with distance D6, touch points 144g and 144h with distance D7, touch points 144f and 144i with distance D8, and touch points 144i and 144j with distance D9.
Each pair of touch points can be sorted based on the distance between each pair. User identification module 104 can determine the pair of touch points with the largest distance between them and create two subgroups where each subgroup includes one touch point from the pair. For example, as illustrated in
The pairs of touch points (touch points 144a and 144b, touch points 144b and 144c, etc.) are sorted starting with the least distant pair of touch points, and a sub-list is prepared that covers all the points with the minimum distance possible between the points. The process iterates over this sub-list multiple times, adding at least one node to one of the subgroups SG1 and SG2 on each pass. A node is added to a subgroup if one of the points in a pair is already present in the subgroup. For example, touch point 144d would be added to first touch point data 140 because touch point 144d is paired with touch point 144e. This would cause touch point 144c to be added to first touch point data 140 because touch point 144c is paired with touch point 144d. Touch point 144g would be added to second touch point data 142 because touch point 144g is paired with touch point 144f. The process continues until each touch point has been added to a subgroup. Touch point 144f is paired with touch point 144e, but each was made the seed of its own subgroup because the distance between them was the largest of all the touch point pairs.
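As an illustration of the clustering just described, the sketch below seeds two subgroups from the most distant pair of touch points and then repeatedly walks the pairs in order of increasing distance, adding a point to a subgroup whenever its partner is already a member. The function name and the restriction to the two-hand case shown in this example are assumptions of the sketch.

```python
# Distance-based clustering into two subgroups (one per hand), following the
# seeding and iteration described above. Assumes at least two distinct points.
import math
from itertools import combinations
from typing import List, Tuple

Point = Tuple[float, float]

def cluster_two_hands(points: List[Point]) -> Tuple[List[Point], List[Point]]:
    """Split touch points into two subgroups based on pairwise distances."""
    # Pair the touch points and calculate the distance of each pair.
    pairs = [(math.dist(a, b), a, b) for a, b in combinations(points, 2)]
    pairs.sort(key=lambda pair: pair[0])        # least distant pairs first

    # The most distant pair seeds the two subgroups, one point in each.
    _, seed1, seed2 = pairs[-1]
    sg1, sg2 = [seed1], [seed2]
    assigned = {seed1, seed2}

    # Iterate over the sorted pairs until every point belongs to a subgroup;
    # a point joins a subgroup when its partner in a pair is already a member.
    while len(assigned) < len(points):
        for _, a, b in pairs:
            for group in (sg1, sg2):
                if a in group and b not in assigned:
                    group.append(b)
                    assigned.add(b)
                elif b in group and a not in assigned:
                    group.append(a)
                    assigned.add(a)
    return sg1, sg2
```

With ten touch points from two hands, this sketch would produce two subgroups of five points each, corresponding to first touch point data 140 and second touch point data 142.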
Turning to
In some examples, when touch point data is acquired, thumb touch point 152 and index finger touch point 154 are so close that a slight right or left shift of their positions on horizontal axis 150 can confuse the hand pattern matching logic and prevent a match from being found.
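A hedged sketch of one way the thumb position correction could be applied follows; the assumption that the two horizontally closest touch points are the thumb and index finger, and the strategy of retrying the pattern check with those two points swapped, are choices of this sketch rather than details taken from the disclosure.

```python
# Illustrative thumb position correction: if the subgroup fails a pattern
# check, retry with the two horizontally closest points (assumed to be the
# thumb and index finger) swapped along the horizontal axis.
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def correct_thumb_position(group: List[Point],
                           matches_pattern: Callable[[List[Point]], bool]) -> List[Point]:
    ordered = sorted(group, key=lambda p: p[0])          # order along the horizontal axis
    if len(ordered) < 2 or matches_pattern(ordered):
        return ordered
    # Find the adjacent pair with the smallest horizontal gap (assumed thumb/index).
    gaps = [ordered[i + 1][0] - ordered[i][0] for i in range(len(ordered) - 1)]
    i = gaps.index(min(gaps))
    swapped = ordered[:i] + [ordered[i + 1], ordered[i]] + ordered[i + 2:]
    return swapped if matches_pattern(swapped) else ordered
```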
Turning to
To determine if touch point data (e.g., first touch point data 140) matches third side touch point data hand pattern 156, the touch point data is sorted on the x-axis in increasing order and checked to determine if the points follow the Down-Up pattern on the y-axis. To determine if touch point data (e.g., first touch point data 140) matches second side touch point data hand pattern 160, the touch point data is sorted on the y-axis in increasing order and checked to determine if the points follow the Down-Up pattern on the x-axis. To determine if touch point data (e.g., first touch point data 140) matches first side touch point data hand pattern 158, the touch point data is sorted on the x-axis in increasing order and checked to determine if the points follow the Up-Down pattern on the y-axis. To determine if touch point data (e.g., first touch point data 140) matches fourth side touch point data hand pattern 162, the touch point data is sorted on the y-axis in increasing order and checked to determine if the points follow the Up-Down pattern on the x-axis.
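A minimal sketch of the four pattern checks follows. It interprets "Down-Up" as values that decrease and then increase exactly once (a single valley) and "Up-Down" as values that increase and then decrease exactly once (a single peak); that interpretation, along with the function names, is an assumption of the sketch.

```python
# Illustrative hand pattern matching for the four sides of the display.
from typing import List, Tuple

Point = Tuple[float, float]

def _follows(values: List[float], first: str) -> bool:
    """True if the values move once in the `first` direction ('down' or 'up')
    and then reverse direction exactly once."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]
    if not signs:
        return False
    want = -1 if first == "down" else 1
    flips = sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)
    return signs[0] == want and flips == 1

def match_hand_patterns(group: List[Point]) -> List[str]:
    """Return every side pattern the subgroup is consistent with."""
    ys_by_x = [p[1] for p in sorted(group, key=lambda p: p[0])]   # sorted on the x-axis
    xs_by_y = [p[0] for p in sorted(group, key=lambda p: p[1])]   # sorted on the y-axis
    candidates = []
    if _follows(ys_by_x, "down"):
        candidates.append("third side")    # Down-Up on the y-axis
    if _follows(xs_by_y, "down"):
        candidates.append("second side")   # Down-Up on the x-axis
    if _follows(ys_by_x, "up"):
        candidates.append("first side")    # Up-Down on the y-axis
    if _follows(xs_by_y, "up"):
        candidates.append("fourth side")   # Up-Down on the x-axis
    return candidates
```

When more than one candidate is returned, the pattern conflict resolution described earlier chooses among them.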
Turning to
Turning to
Turning to
Turning to
Turning to
In this example of
ARM ecosystem SOC 1100 may also include a subscriber identity module (SIM) I/F 1130, a boot read-only memory (ROM) 1135, a synchronous dynamic random access memory (SDRAM) controller 1140, a flash controller 1145, a serial peripheral interface (SPI) master 1150, a suitable power control 1155, a dynamic RAM (DRAM) 1160, and flash 1165. In addition, one or more example embodiments include one or more communication capabilities, interfaces, and features such as instances of Bluetooth™ 1170, a 3G modem 1175, a global positioning system (GPS) 1180, and an 802.11 Wi-Fi 1185.
In operation, the example of
Turning to
System control logic 1206, in at least one embodiment, can include any suitable interface controllers to provide for any suitable interface to at least one processor 1204 and/or to any suitable device or component in communication with system control logic 1206. System control logic 1206, in at least one example embodiment, can include one or more memory controllers to provide an interface to system memory 1208. System memory 1208 may be used to load and store data and/or instructions, for example, for system 1200. System memory 1208, in at least one example embodiment, can include any suitable volatile memory, such as suitable dynamic random access memory (DRAM) for example. System control logic 1206, in at least one example embodiment, can include one or more I/O controllers to provide an interface to display device 1210, touch controller 1202, and non-volatile memory and/or storage device(s) 1232.
Non-volatile memory and/or storage device(s) 1232 may be used to store data and/or instructions, for example within software 1228. Non-volatile memory and/or storage device(s) 1232 may include any suitable non-volatile memory, such as flash memory for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disc drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives for example.
Power management controller 1218 may include power management logic 1230 configured to control various power management and/or power saving functions disclosed herein or any part thereof. In at least one example embodiment, power management controller 1218 is configured to reduce the power consumption of components or devices of system 1200 that may either be operated at reduced power or turned off when the electronic device is in a closed configuration. For example, in at least one example embodiment, when the electronic device is in a closed configuration, power management controller 1218 performs one or more of the following: power down the unused portion of the display and/or any backlight associated therewith; allow one or more of processor(s) 1204 to go to a lower power state if less computing power is required in the closed configuration; and shutdown any devices and/or components that are unused when an electronic device is in the closed configuration.
Communications interface(s) 1216 may provide an interface for system 1200 to communicate over one or more networks and/or with any other suitable device. Communications interface(s) 1216 may include any suitable hardware and/or firmware. Communications interface(s) 1216, in at least one example embodiment, may include, for example, a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
System control logic 1206, in at least one example embodiment, can include one or more I/O controllers to provide an interface to any suitable input/output device(s) such as, for example, an audio device to help convert sound into corresponding digital signals and/or to help convert digital signals into corresponding sound, a camera, a camcorder, a printer, and/or a scanner.
For at least one example embodiment, at least one processor 1204 may be packaged together with logic for one or more controllers of system control logic 1206. In at least one example embodiment, at least one processor 1204 may be packaged together with logic for one or more controllers of system control logic 1206 to form a System in Package (SiP). In at least one example embodiment, at least one processor 1204 may be integrated on the same die with logic for one or more controllers of system control logic 1206. For at least one example embodiment, at least one processor 1204 may be integrated on the same die with logic for one or more controllers of system control logic 1206 to form a System on Chip (SoC).
For touch control, touch controller 1202 may include touch sensor interface circuitry 1222 and touch control logic 1224. Touch sensor interface circuitry 1222 may be coupled to detect touch input over a first touch surface layer and a second touch surface layer of a display (i.e., display device 1210). Touch sensor interface circuitry 1222 may include any suitable circuitry that may depend, for example, at least in part on the touch-sensitive technology used for a touch input device. Touch sensor interface circuitry 1222, in one embodiment, may support any suitable multi-touch technology. Touch sensor interface circuitry 1222, in at least one embodiment, can include any suitable circuitry to convert analog signals corresponding to a first touch surface layer and a second touch surface layer into any suitable digital touch input data. Suitable digital touch input data for at least one embodiment may include, for example, touch location or coordinate data.
Touch control logic 1224 may be coupled to help control touch sensor interface circuitry 1222 in any suitable manner to detect touch input over a first touch surface layer and a second touch surface layer. Touch control logic 1224 for at least one example embodiment may also be coupled to output in any suitable manner digital touch input data corresponding to touch input detected by touch sensor interface circuitry 1222. Touch control logic 1224 may be implemented using any suitable logic, including any suitable hardware, firmware, and/or software logic (e.g., non-transitory tangible media), that may depend, for example, at least in part on the circuitry used for touch sensor interface circuitry 1222. Touch control logic 1224 for at least one embodiment may support any suitable multi-touch technology.
Touch control logic 1224 may be coupled to output digital touch input data to system control logic 1206 and/or at least one processor 1204 for processing. At least one processor 1204 for at least one embodiment may execute any suitable software to process digital touch input data output from touch control logic 1224. Suitable software may include, for example, any suitable driver software and/or any suitable application software. As illustrated in
Note that in some example implementations, the functions outlined herein may be implemented in conjunction with logic that is encoded in one or more tangible, non-transitory media (e.g., embedded logic provided in an application-specific integrated circuit (ASIC), in digital signal processor (DSP) instructions, software [potentially inclusive of object code and source code] to be executed by a processor, or other similar machine, etc.). In some of these instances, memory elements can store data used for the operations described herein. This can include the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), a DSP, an erasable programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) or an ASIC that can include digital logic, software, code, electronic instructions, or any suitable combination thereof.
It is imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., height, width, length, materials, etc.) have been offered for purposes of example and teaching only. Each of these may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.
One particular example implementation of an electronic device may include acquiring touch point data from a hand of a user, clustering the touch point data, and determining a respective position of the user by mapping the clustered touch point data to a pre-defined hand pattern. The touch point data can include a plurality of touch points and a distance between each touch point is used to cluster the touch point data. In one example, the touch point data may be acquired using a touch sensor and the touch sensor can be a touch display.
Example A1 is an electronic device that includes a touch sensor to acquire touch point data from a hand of a user, a touch point clustering module to cluster the touch point data, and a hand pattern module to determine a respective position of the user by mapping the clustered touch point data to a pre-defined hand pattern.
In Example A2, the subject matter of Example A1 may optionally include where the touch point data includes a plurality of touch points and a distance between each touch point is used to cluster the touch point data.
In Example A3, the subject matter of any of the preceding ‘A’ Examples can optionally include a thumb position correction module to correctly configure the clustered touch point data when a finger touch point is classified as a thumb touch point.
In Example A4, the subject matter of any of the preceding ‘A’ Examples can optionally include a pattern conflict resolution module to help prevent the clustered touch point data from being mapped to more than one hand pattern.
In Example A5, the subject matter of any of the preceding ‘A’ Examples can optionally include where the pattern conflict resolution module uses a horizontal span and a vertical span to determine the correct hand pattern.
In Example A6, the subject matter of any of the preceding ‘A’ Examples can optionally include a false positive removal module to remove false positives.
In Example A7, the subject matter of any of the preceding ‘A’ Examples can optionally include where the false positive removal module uses hand geometric statistics to remove false positives.
In Example A8, the subject matter of any of the preceding ‘A’ Examples can optionally include where the touch point data is received from a touch display.
Example M1 is a method that includes acquiring touch point data from a hand of a user, clustering the touch point data, and determining a respective position of the user by mapping the clustered touch point data to a pre-defined hand pattern.
In Example M2, the subject matter of any of the preceding ‘M’ Examples can optionally include where the touch point data includes a plurality of touch points and a distance between each touch point is used to cluster the touch point data.
In Example M3, the subject matter of any of the preceding ‘M’ Examples can optionally include where the touch point data is acquired using a touch sensor.
In Example M4, the subject matter of any of the preceding ‘M’ Examples can optionally include where the touch sensor is a touch display.
In Example M5, the subject matter of any of the preceding ‘M’ Examples can optionally include where a touch point clustering module is used to cluster the touch point data.
In Example M6, the subject matter of any of the preceding ‘M’ Examples can optionally include re-configuring the clustered touch point data when a finger touch point is classified as a thumb touch point.
In Example M7, the subject matter of any of the preceding ‘M’ Examples can optionally include preventing the clustered touch point data from being mapped to more than one hand pattern using a pattern conflict resolution module.
In Example M8, the subject matter of any of the preceding ‘M’ Examples can optionally include where the pattern conflict resolution module uses a horizontal span and a vertical span to determine the correct hand pattern.
In Example M9, the subject matter of any of the preceding ‘M’ Examples can optionally include removing false positives from the clustered touch point data.
In Example M10, the subject matter of any of the preceding ‘M’ Examples can optionally include using hand geometric statistics to remove false positives from the clustered touch point data.
In Example M11, the subject matter of any of the preceding ‘M’ Examples can optionally include where the first region of interest is a face and the method further includes tracking the face using a facial recognition module as the face moves through the image.
In Example M12, the subject matter of any of the preceding ‘M’ Examples can optionally include where the first region of interest is an object and the method further includes tracking the object using an object recognition module as the object moves through the image.
In Example M13, the subject matter of any of the preceding ‘M’ Examples can optionally include determining a configuration of an electronic device using the angle value.
In Example M14, the subject matter of any of the preceding ‘M’ Examples can optionally include displaying the detected rotation of display portion on a display.
Example C1 is one or more computer readable media having instructions stored thereon, the instructions, when executed by a processor, cause the processor to acquire touch point data from a hand of a user, cluster the touch point data, wherein the touch point data includes a plurality of touch points and a distance between each touch point is used to cluster the touch point data, and determine a respective position of the user by mapping the clustered touch point data to a pre-defined hand pattern.
In Example C2, the subject matter of any of the preceding ‘C’ Examples can optionally include where the touch point data is acquired using a touch sensor.
Example X1 is a machine-readable storage medium including machine-readable instructions to implement a method or realize an apparatus as in any one of the Examples A1-A8, M1-M14.
Example Y1 is an apparatus comprising means for performing of any of the Example methods M1-M14.
In Example Y2, the subject matter of Example Y1 can optionally include the means for performing the method comprising a processor and a memory. In Example Y3, the subject matter of Example Y2 can optionally include the memory comprising machine-readable instructions.
Number | Date | Country | Kind
---|---|---|---
2682/CHE/2014 | Jun 2014 | IN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2015/030443 | May 13, 2015 | WO | 00