The present disclosure relates to devices, systems, and methods for contactless user interfacing and more particularly to devices, systems, and methods for contactless interfacing for user self-help.
According to an aspect of the present disclosure, a contactless user interface system for ordering merchandise without user contact with peripherals may include a user interface display configured for presenting visual information to the user, the visual information comprising at least one selectable merchandise option; and a sensor system for capturing contactless user inputs, the sensor system comprising at least one sensor configured to capture contactless user inputs including user position and user contactless gesture, the sensor system configured to detect penetration of an activation zone as an indication that the user intends to provide contactless user input to the user interface display, wherein the sensor system is configured for direct tracking of a user's hand to directly sense the position of the user's hand in at least a portion of an area outside from the activation zone, wherein the sensor system is configured to communicate one or more user input signals indicating direct tracking of the user's hand and indicating the detected penetration of the activation zone. The contactless user interface system may include a control system including a processor configured to execute instructions stored on memory to govern contactless user interface system operations. The control system may be arranged in communication with the sensor system to receive the one or more user input signals and configured to determine a target interface element applied by the user for contactless engagement with the visual information of the user interface display, based on indication of direct tracking of the user's hand and indication of penetration of the activation zone of the one or more user input signals.
In some embodiments, the control system may be configured to define the activation zone as a region ahead of the user interface display extending beyond a face of the user interface display by a predetermined distance. The activation zone may be defined by the control system for enhanced monitoring to determine a focal point of the target interface element, wherein the focal point of target interface element may be applied as a precise point of indication for user contactless activation of visual information.
In some embodiments, the focal point of the target interface element may include a point of the user's hand, or of an instrument held in the user's hand, that is closest to the user interface display within the activation zone as the precise point of indication for user contactless engagement with the visual information. The control system may be configured to define the activation zone as a 3-dimensional region. In some embodiments, the control system may be configured to define the activation zone as a region projecting farther from the user interface display near at least one of a top section and a bottom section of the user interface display than in a mid-section of the display.
In some embodiments, the control system may be configured to determine an acceleration of the user's hand or instrument held in the user's hand within the activation zone based on the one or more user input signals. The control system may be configured to determine user intent to cause activation of visual information of the user interface display based on the determined acceleration within the activation zone. The control system may be configured to determine user intent to cause activation of an area on the user interface display by comparison of the determined acceleration within the activation zone with a predetermined threshold acceleration. The control system may be configured to determine the predetermined threshold acceleration based on at least one of penetration of the activation zone and direct tracking of the user's hand based on the user input signal.
In some embodiments, penetration of the user's hand within the activation zone may include at least one of depth of penetration into the activation zone and a hand configuration of the user. The hand configuration of the user may include the number of fingers of the user's hand extended to indicate the visual information for activation on the user interface display. In some embodiments, depth of penetration may include distance between the user interface display and a lead digit of the user's hand. In some embodiments, the control system may be configured to determine that the lead digit of the user's hand is the closest finger to the user interface display. In some embodiments, the control system may be configured to determine that the lead digit of the user's hand is not the closest finger to the user interface display, based on the user's hand configuration within the activation zone.
In some embodiments, the control system may be configured to define the activation zone based on information gathered by the sensor system regarding the user. The control system may be configured to actively define the activation zone. The control system may be configured to actively define the activation zone based on information gathered by the sensor system regarding the user.
According to another aspect of the present disclosure, a method of contactless user interfacing for ordering merchandise without user contact with peripherals may include presenting visual information to a user via a user interface display, the visual information comprising at least one selectable merchandise option; and capturing contactless user inputs including user position and user contactless gesture, via a sensor system, wherein capturing contactless user inputs includes detecting penetration of an activation zone as an indication that the user intends to provide contactless user input to the user interface display, and directly tracking a user's hand to directly sense the position of the user's hand in at least a portion of an area outside from the activation zone, and communicating one or more user input signals indicating direct tracking of the user's hand and indicating the detected penetration of the activation zone. The method may include determining, via a control system, a target interface element applied by the user for contactless engagement with the visual information of the user interface display, based on indication of direct tracking of the user's hand and indication of penetration of the activation zone of the one or more user input signals.
In some embodiments, capturing contactless gestures may include defining the activation zone as a region ahead of the user interface display extending beyond a face of the user interface display by a predetermined distance. The activation zone may be defined by the control system for enhanced monitoring to determine a focal point of the target interface element, wherein the focal point of target interface element may be applied as a precise point of indication for user contactless activation of visual information. The focal point of the target interface element may include a point of the user's hand, or of an instrument held in the user's hand, that is closest to the user interface display within the activation zone as the precise point of indication for user contactless engagement with the visual information.
In some embodiments, the activation zone may be defined as a 3-dimensional region. In some embodiments, defining the activation zone may include defining a region projecting farther from the user interface display near at least one of a top section and a bottom section of the user interface display than in a mid-section of the display.
In some embodiments, determining a target selection device may include determining an acceleration of the user's hand or instrument held in the user's hand within the activation zone based on the one or more user input signals. Determining a target selection device may include determining user intent to cause activation of visual information of the user interface display based on the determined acceleration within the activation zone. Determining user intent to cause activation may include determining intent to cause activation of an area on the user interface display by comparison of the determined acceleration within the activation zone with a predetermined threshold acceleration.
In some embodiments, determining intent to cause activation may include determining the predetermined threshold acceleration based on at least one of penetration of the activation zone and direct tracking of the user's hand based on the user input signal. In some embodiments, penetration of the user's hand within the activation zone may include at least one of depth of penetration into the activation zone and a hand configuration of the user. In some embodiments, the hand configuration of the user may include the number of fingers of the user's hand extended to indicate the visual information for activation on the user interface display.
In some embodiments, depth of penetration may include distance between the user interface display and a lead digit of the user's hand. In some embodiments, determining a target selection device may include determining that the lead digit of the user's hand is the closest finger to the user interface display. In some embodiments, determining a target selection device may include determining that the lead digit of the user's hand is not the closest finger to the user interface display, based on the user's hand configuration within the activation zone.
In some embodiments, capturing contactless gestures includes defining the activation zone based on information gathered by the sensor system regarding the user. In some embodiments, defining the activation zone may include actively defining the activation zone. In some embodiments, actively defining the activation zone may include actively defining the activation zone based on information gathered by the sensor system regarding the user.
According to another aspect of the present disclosure, a contactless user interface system for selecting merchandise without user contact with peripherals may include a user interface display configured for presenting visual information to the user, the visual information comprising at least one selectable merchandise option; a sensor system for capturing contactless user inputs, the sensor system comprising at least one sensor configured to capture contactless user inputs including user position and user contactless gesture; and a control system. The control system may include a processor configured to execute instructions stored on memory to govern contactless user interface system operations, the control system arranged in communication with the sensor system to receive the one or more user input signals for determining user inputs as commands.
In some embodiments, the control system may be configured to determine which fingertip of a user's hand is furthest removed from the center of the palm of the corresponding hand, and to identify the determined fingertip as a target interface element. In response to capture of multiple user hands close in time with each other by the sensor system, the control system may be configured to determine which one of the multiple user hands corresponds with a determined fingertip that is closest to the user interface display. In response to determination that one determined fingertip of one corresponding hand is the closest to the user interface display of multiple hands, the control system may be configured to designate the one determined fingertip as the primary target interface element.
In some embodiments, in response to determination that another determined fingertip of one corresponding hand is newly the closest to the user interface display of multiple hands, the control system may be configured to re-designate the another determined fingertip as the primary target interface element. Re-designation may be undertaken only after at least a predetermined time pause from a previous designation. Activation of a command by a primary target interface element may be undertaken only after a predetermined time pause from a previous command to avoid unintended activations. Activation of a command by a primary target interface element of a corresponding hand may be undertaken only after a predetermined time pause from a previous command to avoid unintended activations from the same corresponding hand. Activation of a command by a primary target interface element of another corresponding hand that is re-designated may be undertaken without the predetermined time pause.
In some embodiments, activation of a command by a primary target interface element of a corresponding hand may be undertaken only after a predetermined time pause from a previous command to avoid unintended activations from the same corresponding hand. Activation of a command by a primary target interface element of another corresponding hand that is re-designated may be undertaken without the predetermined time pause. In some embodiments, the contactless user interface system may be implemented as a portion of a virtual kiosk.
According to another aspect of the present disclosure, a virtual kiosk may include a contactless user interface system for selecting merchandise without user contact with peripherals, the system comprising: a user interface display configured for presenting virtual visual information to the user, the virtual visual information comprising at least one selectable merchandise option; a sensor system for capturing contactless user inputs, the sensor system comprising at least one sensor configured to capture contactless user inputs including user position and user contactless gesture, the sensor system configured to detect penetration of an activation zone as an indication that the user intends to provide contactless user input to the user interface display, wherein the sensor system is configured for direct tracking of a user's hand to directly sense the position of the user's hand in at least a portion of an area outside from the activation zone, wherein the sensor system is configured to communicate one or more user input signals indicating direct tracking of the user's hand and indicating the detected penetration of the activation zone; and a control system. The control system may include a processor configured to execute instructions stored on memory to govern contactless user interface system operations, the control system arranged in communication with the sensor system to receive the one or more user input signals and configured to determine a target interface element applied by the user for contactless engagement with the visual information of the user interface display, based on indication of direct tracking of the user's hand and indication of penetration of the activation zone of the one or more user input signals.
In some embodiments, the activation zone may be defined as a virtual zone. The activation zone may be defined as a physical zone. The user's hand may be defined as a virtual hand. The user's hand may be defined as a physical hand. In some embodiments, at least one of the sensor system and the control system may be a virtual system. At least one of the sensor system and the control system may be a physical system. In some embodiments, the user interface display may be a virtual display. The user interface display may be a physical display.
Additional features of the present disclosure will become apparent to those skilled in the art upon consideration of illustrative embodiments exemplifying the best mode of carrying out the disclosure as presently perceived.
The concepts described in the present disclosure are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
The detailed description particularly refers to the accompanying figures in which:
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
Referring to
In the exemplary depiction of
Users are often familiar with touchscreen displays, such as touchscreen displays for tablets, smartphones, or touchscreen kiosks, but may have limited experience with contactless user interfacing. Users will often presume that display screens require touching, and have difficulty receiving baseline instructions about how to interface with a display without prior experience or in-person instruction. For example, attempts to use physical signage near the display and even instructional videos are often overlooked or ignored by users.
Additionally, users may find it difficult to learn new techniques for interfacing. If a user is familiar with swipe gestures for scrolling pages, teaching another type of gesture for scrolling pages can be challenging, particularly in the context of the relatively brief time period expected for kiosk interaction. Frustration can occur rather quickly if user expectations for the time required to interact with the display are exceeded. Thus, a balance is required between user expectations and the amount of training required for the user to conduct basic interfacing. Accordingly, it can be understood that introduction of contactless interfacing to users traditionally familiar with touchscreens can be challenging. Achieving an intuitive, yet sensitive contactless interface experience can require consideration of these challenges.
Referring now to
The sensor system 16 includes a sensor 20 for capturing contactless user inputs. The sensor 20 is illustratively embodied as a camera arranged to capture the user's hand location and movements. In the illustrative embodiment, the sensor 20 is configured for direct hand tracking to directly sense the position of the user's hand(s). One non-exhaustive example of a suitable device for use as sensor 20 may include the Leap Motion Controller as marketed by Ultra Leap of Mountain View, California. In the illustrative embodiment, the sensor 20 is configured for image capture and analysis including within the visible spectrum and/or the non-visible spectrum (e.g., infrared, ultraviolet, microwave, x-ray). In some embodiments, the sensor 20 may comprise non-image based sensing such as by radar, lidar, and/or sonar. In some embodiments, the sensor 20 may capture location and/or movements beyond the user's hand, for example, the user's wrists, arms, torso, etc. In the illustrative embodiment, the camera 20 is arranged above the display 14 (and above the user), but in some embodiments, may have any suitable position for capturing contactless user inputs.
In
In the illustrative embodiment, the activation zone 22 is defined as a region in front of the display 14 that is within the user's reach for gesture (i.e., not specifically requiring actual reach to touch the display 14). The control system 18 can define the activation zone 22 as a region extending beyond a face 24 of the display 14. As suggested in
In the illustrative embodiment as shown in
For example, it has been observed that users tend to be less definitive in their gestures near the top and/or bottom regions of the display. Manifestation of such issues can take various forms but can include the user providing a lesser extent of motion in the region, a shorter dwell time within the area of interest, and/or generally less emphatic gestures. These challenges can be at least partly attributed to the user's reach, for example, the comfortable reach distance dr between the user's shoulder and the user's intended target selection element (e.g., pointing finger). Although the reach distance dr is indicated by a straight line in
The control system 18 may, additionally or alternatively, define the activation zone 22 actively based on the user. For example, the control system 18 may actively determine the predetermined spacing of the activation zone 22 based on the individual user. The sensor 20 can capture information about the user for communication to the control system 18. The control system 18 determines the predetermined projected distance(s) d1 based on the information about the user. Information about the user may include geometries and/or movements which indicate the user's comfortable range of motion, for example, height, stance, gait, arm length, hand dominance, posture, mobility limitations (e.g., cane, wheelchair, etc.), and/or other suitable aspects bearing on the user's reach.
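By way of illustration only, a minimal sketch of one way such an activation zone could be defined is given below. The band structure, the numeric depths, and the scaling by an estimated arm length are assumptions for illustration and are not specified by the disclosure.

    from dataclasses import dataclass

    @dataclass
    class ActivationZone:
        # Projected depth of the zone, in meters, for each vertical band of the display.
        depths_m: list

    def define_activation_zone(bands=5, base_depth_m=0.10, extra_edge_depth_m=0.05,
                               user_arm_length_m=None):
        # Scale the depths to the individual user's estimated comfortable reach
        # (hypothetical rule; the disclosure only says the zone may be actively
        # defined from user geometry such as height or arm length).
        scale = 1.0
        if user_arm_length_m is not None:
            scale = user_arm_length_m / 0.65  # 0.65 m assumed as a nominal arm length
        depths = []
        for i in range(bands):
            # edge_factor is 0.0 at the mid-section and 1.0 at the top/bottom sections,
            # so the zone projects farther from the display near its top and bottom.
            edge_factor = abs(i / (bands - 1) - 0.5) * 2.0
            depths.append(scale * (base_depth_m + extra_edge_depth_m * edge_factor))
        return ActivationZone(depths_m=depths)

    # Example: a user with an estimated 0.70 m arm length.
    print([round(d, 3) for d in define_activation_zone(user_arm_length_m=0.70).depths_m])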
Although the activation zone 22 is illustratively defined having symmetry in the vertical direction in
Referring now to
Referring still to the illustrative embodiment of
The field of view of the sensor system 16 comprises one or more activation subfields 32i for defining the activation zone 22. The activation subfields 32i include subfields 32A-C. The activation subfields 32A-C are illustratively directed to specific areas of the region for occupation of the activation zone 22. Each activation subfield 32A-C is illustratively directed to a different region of the activation zone 22 to provide complete coverage in defining the activation zone 22. In the illustrative embodiment as shown in
The control system 18 can apply the activation subfields 32i to define the activation zone 22. The control system 18 can define the activation zone 22 as a region of enhanced scrutiny for determining user hand positions and/or gestures. In the illustrative embodiment, the sensor 20 includes a camera capturing image data, and the control system 18 applies an enhanced rate of computational analysis to the image data within the activation zone 22. For example, the control system 18 can apply multiple times the processing power to computational image analysis for the image data within the activation zone 22. Because user hand positions and/or gestures are relatively quick and change in real time, enhancing the computational resources applied to the precise activation zone 22 allows the image information in that focused area to be considered more intensely. This can enhance accuracy, precision, and/or speed of determination of the user's actual hand position and/or movements from which contactless interfacing can be achieved. In some embodiments, the control system 18 may provide enhanced scrutiny to data from the activation zone 22 by analyzing such data differently from data in direct hand tracking, for example, by different techniques for edge finding, body skeletal tracking, and/or even applying multiple different techniques in parallel to the data from the activation zone 22.
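As a rough illustration of this enhanced-scrutiny idea, the sketch below assigns a larger per-sample analysis budget to tracked points that fall within the activation zone; the coordinate convention (z measured normal to the display face) and the pass counts are assumptions, not values from the disclosure.

    def analysis_budget(point_xyz, zone_depth_m, base_passes=1, zone_passes=4):
        # Return how many analysis passes to spend on a tracked point. Points whose
        # distance from the display face (z, assumed measured normal to the face)
        # lies within the activation-zone depth receive a multiple of the baseline
        # processing, as a simple stand-in for "enhanced scrutiny".
        _, _, z = point_xyz
        return zone_passes if 0.0 <= z <= zone_depth_m else base_passes

    # A point 4 cm from the display face, inside a 10 cm deep activation zone:
    print(analysis_budget((0.20, 0.30, 0.04), zone_depth_m=0.10))  # -> 4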
In particular, the control system 18 in defining the activation zone 22 can determine the user's hand configuration with enhanced reliability. Determining the user's hand configuration includes determining the shape of the user's hand position for determining the intended focal point, as the target interface element, for interfacing with the display 14. The intended focal point is generally discussed herein as being a point of the user's hand, but may also include instruments such as a stylus or other object being held by the user.
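One simple way to formalize the focal-point determination described here is sketched below: among the tracked points of the hand and of any held instrument, the point inside the activation zone that is closest to the display face is taken as the target interface element. The point format and distance convention are illustrative assumptions.

    def focal_point(hand_points, instrument_points, zone_depth_m):
        # The focal point (target interface element) is the tracked point of the
        # hand, or of an instrument such as a stylus held in the hand, that is
        # closest to the display face while inside the activation zone. Each point
        # is (x, y, z) with z the distance normal to the display face; returns None
        # if nothing has penetrated the zone.
        candidates = [p for p in list(hand_points) + list(instrument_points)
                      if 0.0 <= p[2] <= zone_depth_m]
        if not candidates:
            return None
        return min(candidates, key=lambda p: p[2])

    # Example: an extended fingertip at 3 cm wins over a knuckle at 6 cm.
    print(focal_point([(0.10, 0.20, 0.06), (0.12, 0.22, 0.03)], [], zone_depth_m=0.10))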
Referring to
For example, as suggested in
As suggested in
Each of the non-limiting examples of manners of addressing the display 14 from
As shown in
Referring now to
Referring now to
In box 202, the control system 18 can determine penetration of the activation zone 22. Determining penetration may include analyzing data from the sensor system 16 to determine that the user's hand is within the activation zone 22. Determining penetration may include analyzing information from the sensor system 16 to determine a user's hand configuration within the activation zone, for example, the particular position, shape, and/or arrangement of the user's hand within the activation zone 22. Although direct hand tracking and determination of penetration may be performed separately and/or in wholly or partly shared operations, these operations are shown in distinct boxes, but may each be performed together, in parallel, and/or cyclically relative to other operations.
In box 204, the control system 18 can determine the target interface element based on the direct hand tracking and determination of penetration of the activation zone 22. The control system 18 can analyze data of direct hand tracking from the sensor system 16 and the determination of penetration of the activation zone 22 to determine a precise point which the user intends as the point for interfacing with the display 14. In analyzing the information, the control system 18 may compare the information of the direct hand tracking and the determination of penetration. The control system 18 may perform testing on one or more of the information of direct hand tracking and the determination of penetration, for example, by generating predictions based on one or more of the information and verifying such predictions to determine reliability. The control system 18 may conduct statistical analysis, machine learning, and/or may resolve ambiguities and/or disparities in considering the information of direct hand tracking and/or the determination of penetration of the activation zone 22. Accordingly, the control system 18 determines the target interface element based on direct hand tracking and penetration of the activation zone 22.
The target interface element can then be treated as the focal point for user interfacing. For example, when the user addresses the display 14 with a hand configuration which is less definitive, the control system 18 can determine the intended focal point of the user's hand for selecting and/or manipulating visual information such as icons on the display, scrolling pages, and/or generally interfacing in a similar manner as for touchscreen operations, yet without the need to contact the display.
In box 206, the control system 18 may detect threshold acceleration indicating a desired interaction with display 14. In the illustrative embodiment, the control system 18 may detect threshold acceleration of the target interface element. For example, once the control system 18 has determined the target interface element as a certain part of the user's hand or instrument (e.g., pen) held by the user, the acceleration of the target interface element at or above a predetermined threshold can be applied to determine user intent for a contactless operation, such as icon selection, scrolling, etc. In some embodiments, the detection of threshold acceleration can be applied together with the direct hand tracking and the determination of penetration of the activation zone 22 to determine the target interface element, for example, by identifying the highest point of acceleration applied by the user in gesturing above a minimum threshold. The control system 18 may utilize the detection of threshold acceleration as additional indication of the focal point that the user intends for interaction with the display 14. In some embodiments, threshold detection of acceleration may include multiple predetermined thresholds applied by the control system 18, for example, one threshold acceleration detection applied in determining the target interface element and another threshold acceleration detection applied in determining activation of visual information of the display 14.
The control system 18 may actively define the predetermined threshold for threshold acceleration determination. The control system 18 may define the predetermined threshold acceleration based on the direct hand tracking and/or penetration of the activation zone 22. For example, the user's manner of addressing the display 14 may affect the threshold acceleration for indicating activation of visual information. More specifically, if the user addresses the display 14 with their palm facing upwards, the threshold acceleration indicating that the user actually intends to select an icon may be lower than if the palm is facing downwards. Similarly, the user's body position, posture, height, and/or other aspects may be considered in determining the predetermined threshold acceleration. Accordingly, the control system 18 can define the relevant predetermined threshold acceleration based on the user.
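A hedged sketch of this adaptive acceleration test follows: acceleration is estimated from successive samples of the target interface element, and the predetermined threshold is relaxed when the user addresses the display palm-up. The finite-difference estimate and the numeric thresholds are assumptions for illustration.

    def estimate_acceleration(z_samples_m, dt_s):
        # Estimate the acceleration magnitude (m/s^2) of the target interface
        # element from three consecutive distance samples spaced dt_s apart,
        # using a second-order finite difference.
        z0, z1, z2 = z_samples_m
        return abs((z2 - 2.0 * z1 + z0) / (dt_s * dt_s))

    def activation_threshold(palm_up):
        # The predetermined threshold acceleration depends on how the user
        # addresses the display: a palm-up approach is allowed a gentler gesture.
        # The numeric values are assumptions for illustration only.
        return 2.0 if palm_up else 4.0  # m/s^2

    def is_activation(z_samples_m, dt_s, palm_up):
        return estimate_acceleration(z_samples_m, dt_s) >= activation_threshold(palm_up)

    # Three samples taken 1/120 s apart with the palm facing down:
    print(is_activation((0.080, 0.070, 0.055), 1.0 / 120.0, palm_up=False))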
In box 208, the control system 18 can determine user intent for activation of visual information of the display 14 based on the determined target interface element. Activation of visual information can include any sort of interaction with the visual information of the display 14, for example, selecting, manipulating, moving, altering, scrolling, zooming, focusing, and/or any other interactions with visual information of the display 14. For example, based on a determination that the target interface element is a pen held by the user's hand, the control system 18 can determine that the user has gestured for selection of a particular icon on the display 14. Accordingly, the user's gestures can be more accurately, precisely, and/or reliably determined for interfacing.
In some embodiments, the control system 18 may apply the threshold acceleration in determining activation of visual information. For example, the threshold acceleration may be applied to the target interface element determined to be a pen held by the user, e.g., within the activation zone 22. The control system 18 may determine activation upon the user's pen achieving the threshold acceleration for icon selection, scrolling, manipulation, etc. Threshold accelerations for different operations may vary.
In some embodiments, the control system 18 may determine the precise visual information for selection based on the distance between the target interface element and the display 14. For example, the control system 18 may determine the target interface element as one of the user's fingers and may determine the distance between that finger and the display 14, such as the distance of the target interface element normal to the surface 24 of the display 14. Based on this determined distance between the target interface element and the display 14, the control system 18 can reliably determine the particular visual information of the display 14 with which the user intends to interact. For example, based on the distance of the target interface element normal to the surface 24 of the display 14, the control system 18 may determine that one icon is closest to the user's intended target selection element, and thus that icon is intended to be interacted with. In some embodiments, the control system 18 may apply the determined distance between the target interface element and the display 14 in determining the configuration of the activation zone 22, for example, to define one or more of the predetermined distances di.
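To illustrate, the short sketch below projects the target interface element onto the plane of the display face and selects the icon whose center is nearest to that projection; the coordinate layout and icon representation are assumptions.

    import math

    def nearest_icon(target_xyz, icons):
        # Project the target interface element onto the display face by dropping
        # the normal distance z, then return the icon whose center is closest in
        # the plane of the display. `icons` maps an icon name to its (x, y) center.
        tx, ty, _z = target_xyz
        return min(icons, key=lambda name: math.hypot(icons[name][0] - tx,
                                                      icons[name][1] - ty))

    icons = {"order": (0.10, 0.30), "help": (0.40, 0.30)}
    print(nearest_icon((0.12, 0.28, 0.05), icons))  # -> "order"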
Referring now to
In box 302, the control system 18 can determine the definition of the activation zone 22 based on the detected aspects of the user's comfortable range of motion. The control system 18 can consider all available information, including known or pre-selected aspects of the user's comfortable range of motion to define the activation zone 22. For example, the control system 18 may consider statistical data regarding the detected or inputted age, gender, weight, season, geographic location, among other aspects of the user in determining definition of the activation zone 22.
In box 304, the control system 18 can define the activation zone 22 based on the determined definition. The control system 18 can define the activation zone 22 by executing the data analysis on the information received from the sensor system 16 concerning the defined activation zone 22. The control system 18 can actively define the activation zone 22 by conducting operations as discussed regarding boxes 300-304 in cycles to provide updated definition of the activation zone 22.
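A minimal sketch of this cyclical definition process, corresponding loosely to the operations of boxes 300-304, is given below; the three callables are hypothetical stand-ins for reading sensor information, determining aspects of the user's comfortable range of motion, and defining the zone.

    def run_zone_definition_cycle(read_sensor, determine_reach, define_zone, cycles=3):
        # Cyclically re-define the activation zone from fresh sensor information:
        # gather information about the user, determine aspects of the user's
        # comfortable range of motion, and define the zone from those aspects.
        zone = None
        for _ in range(cycles):
            user_data = read_sensor()
            reach = determine_reach(user_data)
            zone = define_zone(reach)
        return zone

    # Example with trivial stand-in callables:
    print(run_zone_definition_cycle(lambda: {"arm_m": 0.70},
                                    lambda data: data["arm_m"],
                                    lambda reach: {"depth_m": round(0.15 * reach / 0.65, 3)}))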
Referring to
In
CENTER OF PALM MEASUREMENT REGARDING THE DETERMINATION OF THE FINGER OF INTEREST (INTERFACING WITH MULTIPLE USERS/HANDS)—User contactless interfacing can face challenges related to the particular mode of interface, e.g., manner of the hand, used to communicate with the system. For example, some users may bring multiple hands into the range of detection of the system, and/or multiple users may be positioned near the detection range of the system simultaneously, and may attempt to interact with the system simultaneously. Among other scenarios, one exemplary situation may include when a guardian with their child nearby is interfacing with the system.
In the case of multiple hands, all hands may be detected as full 3D models with joints (e.g., hand, fingers) in space for every frame the application runs. Then, for each hand, the system (e.g., via program) may see (e.g., observe/detect) which fingertip is furthest removed from the center of the palm of the corresponding hand, and may identify this furthest fingertip as the pointing finger. For each pointing finger on each hand, the system may take the fingertip location and measure how far away this point is from the surface of the screen or display; this distance is the proximity.
The system may take the fingertip position that has the closest proximity as the main finger. If at any point another hand's fingers get closer in proximity, or a finger on that hand is further removed from the palm by stretching, the system may re-designate this finger as the primary finger. In the illustrative embodiment, this logic may run on a per-hand, per-finger basis and may be repeated, for example, at about 120 Hz, or 120 times a second. Often, there may be one hand with one pointing finger, and in exemplary instances, the sensor system may be configured to handle (e.g., gather, detect) 10 hands at the same time, including up to 50 fingers, of which 10 are pointing fingers, with one primary finger at all times. In some instances, this may be the case unless there are no hands detected at a given time.
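For illustration only, the per-hand, per-finger logic described above might be sketched as follows, with each hand represented by a palm center and a set of fingertip positions; the data layout, the use of the z coordinate as proximity to the screen, and the helper names are assumptions rather than the actual implementation.

    def pointing_finger(hand):
        # For one hand model, the pointing finger is the fingertip furthest removed
        # from the center of the palm of that corresponding hand.
        px, py, pz = hand["palm_center"]
        def distance_from_palm(tip):
            x, y, z = tip
            return ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) ** 0.5
        return max(hand["fingertips"], key=distance_from_palm)

    def primary_finger(hands):
        # Across all detected hands, the primary finger is the pointing fingertip
        # closest to the display face (smallest z). Intended to be re-run every
        # frame, e.g. at about 120 Hz, so re-designation happens automatically.
        if not hands:
            return None
        return min((pointing_finger(h) for h in hands), key=lambda tip: tip[2])

    hands = [
        {"palm_center": (0.10, 0.30, 0.20),
         "fingertips": [(0.10, 0.38, 0.12), (0.10, 0.36, 0.18)]},
        {"palm_center": (0.50, 0.30, 0.25),
         "fingertips": [(0.50, 0.40, 0.10), (0.50, 0.35, 0.22)]},
    ]
    print(primary_finger(hands))  # -> (0.5, 0.4, 0.1), the pointing fingertip nearest the screen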
PAUSE TIME—In some embodiments, a pause time may be implemented to assist with managing errant communications. For example, a brief pause may be implemented to reset the system and/or prevent capturing unintended gestures. Such pauses can prevent capturing accidental double entry (“taps”).
INTERRUPTION OF PAUSE TIME—Pause time can be interrupted (even though it is just a split second) if another hand decides to interact with the screen. The same hand should be prevented from invoking an accidental click; if another hand tries to invoke a click, this is acceptable because it reflects a different intention. The pausing mechanism is there to prevent non-intentional interactions.
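A minimal sketch of this pause behavior, under the assumption of a fixed pause duration and a per-hand identifier, is shown below: a click from the same hand is suppressed until the pause elapses, while a click from a different hand interrupts the pause and is allowed immediately.

    class ClickGate:
        # Per-hand pause time to suppress accidental double entries ("taps").
        # The pause duration and hand-identifier scheme are assumed for illustration.

        def __init__(self, pause_s=0.5):
            self.pause_s = pause_s
            self.last_hand = None
            self.last_time_s = None

        def allow_click(self, hand_id, now_s):
            # A different hand interrupts the pause: its click reflects a new intention.
            if self.last_hand is None or hand_id != self.last_hand:
                self._record(hand_id, now_s)
                return True
            # The same hand must wait out the pause to avoid an accidental click.
            if now_s - self.last_time_s >= self.pause_s:
                self._record(hand_id, now_s)
                return True
            return False

        def _record(self, hand_id, now_s):
            self.last_hand, self.last_time_s = hand_id, now_s

    gate = ClickGate()
    print(gate.allow_click("hand-A", 0.00))  # True: first click
    print(gate.allow_click("hand-A", 0.10))  # False: same hand within the pause
    print(gate.allow_click("hand-B", 0.15))  # True: a different hand interrupts the pause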
VIRTUAL REALITY—In one exemplary practical application, a virtual kiosk may be configured in a virtual store. A system, such as the system 12 as disclosed within U.S. Provisional Patent Application No. 63/146,195, may be implemented, e.g., one-to-one, in virtual reality for operation with a virtual screen/kiosk. Indeed, in some instances, such an implementation as a virtual store may provide advantages, such as to accuracy, speed, timing, or otherwise, over implementation as a physical kiosk. Additionally, virtual reality (VR) implementations may include screens of various shapes, and interactions in VR may be handled in different ways. For example, in VR, space and/or distance can be relative, such that a subject (user) can move, but additionally, the kiosk can move. Indeed, the subject (user) may themselves comprise one or more of the sensors, for example, of the sensor system (e.g., sensor system 16 as disclosed within U.S. Provisional Patent Application No. 63/146,195). In various virtual kiosk or virtual reality implementations of contactless user interfacing, the particular manner of modelling, sensing, and/or detecting user intent of the hand (or selection object) can provide advantages to the user and/or particular to the VR space. This can be true for many varieties of sensor location and/or type of measurement.
Within the present disclosure, the various hardware indicated may take various forms. Examples of suitable processors may include one or more microprocessors, integrated circuits, and systems-on-a-chip (SoC), among others. Examples of suitable memory may include one or more of primary storage and/or non-primary storage (e.g., secondary, tertiary, etc. storage); permanent, semi-permanent, and/or temporary storage; and/or memory storage devices including but not limited to hard drives (e.g., magnetic, solid state), optical discs (e.g., CD-ROM, DVD-ROM), RAM (e.g., DRAM, SRAM, DRDRAM), ROM (e.g., PROM, EPROM, EEPROM, Flash EEPROM), and volatile and/or non-volatile memory; among others. Communication circuitry includes components for facilitating processor operations; for example, suitable components may include transmitters, receivers, modulators, demodulators, filters, modems, analog-to-digital converters, operational amplifiers, and/or integrated circuits.
Clause 1. A contactless user interface system for ordering merchandise without user contact with peripherals, the system comprising: a user interface display configured for presenting visual information to the user, the visual information comprising at least one selectable merchandise option; a sensor system for capturing contactless user inputs, the sensor system comprising at least one sensor configured to capture contactless user inputs including user position and user contactless gesture, the sensor system configured to detect penetration of an activation zone as an indication that the user intends to provide contactless user input to the user interface display, wherein the sensor system is configured for direct tracking of a user's hand to directly sense the position of the user's hand in at least a portion of an area outside from the activation zone, wherein the sensor system is configured to communicate one or more user input signals indicating direct tracking of the user's hand and indicating the detected penetration of the activation zone; a control system including a processor configured to execute instructions stored on memory to govern contactless user interface system operations, the control system arranged in communication with the sensor system to receive the one or more user input signals and configured to determine a target interface element applied by the user for contactless engagement with the visual information of the user interface display, based on indication of direct tracking of the user's hand and indication of penetration of the activation zone of the one or more user input signals.
Clause 2. The system of clause 1, wherein the control system is configured to define the activation zone as a region ahead of the user interface display extending beyond a face of the user interface display by a predetermined distance.
Clause 3. The system of any preceding clause, wherein the activation zone is defined by the control system for enhanced monitoring to determine a focal point of the target interface element, wherein the focal point of target interface element is applied as a precise point of indication for user contactless activation of visual information.
Clause 4. The system of any preceding clause, wherein the focal point of the target interface element includes a point of the user's hand, or of an instrument held in the user's hand, that is closest to the user interface display within the activation zone as the precise point of indication for user contactless engagement with the visual information.
Clause 5. The system of any preceding clause, wherein the control system is configured to define the activation zone as a 3-dimensional region.
Clause 6. The system of any preceding clause, wherein the control system is configured to define the activation zone as a region projecting farther from the user interface display near at least one of a top section and a bottom section of the user interface display than in a mid-section of the display.
Clause 7. The system of any preceding clause, wherein the control system is configured to determine an acceleration of the user's hand or instrument held in the user's hand within the activation zone based on the one or more user input signals.
Clause 8. The system of any preceding clause, wherein the control system is configured to determine user intent to cause activation of visual information of the user interface display based on the determined acceleration within the activation zone.
Clause 9. The system of any preceding clause, wherein the control system is configured to determine user intent to cause activation of an area on the user interface display by comparison of the determined acceleration within the activation zone with a predetermined threshold acceleration.
Clause 10. The system of any preceding clause, wherein the control system is configured to determine the predetermined threshold acceleration based on at least one of penetration of the activation zone and direct tracking of the user's hand based on the user input signal.
Clause 11. The system of any preceding clause, wherein penetration of the user's hand within the activation zone includes at least one of depth of penetration into the activation zone and a hand configuration of the user.
Clause 12. The system of any preceding clause, wherein the hand configuration of the user includes the number of fingers of the user's hand extended to indicate the visual information for activation on the user interface display.
Clause 13. The system of any preceding clause, wherein depth of penetration includes distance between the user interface display and a lead digit of the user's hand.
Clause 14. The system of any preceding clause, wherein the control system is configured to determine that the lead digit of the user's hand is the closest finger to the user interface display.
Clause 15. The system of any preceding clause, wherein the control system is configured to determine that the lead digit of the user's hand is not the closest finger to the user interface display, based on the user's hand configuration within the activation zone.
Clause 16. The system of any preceding clause, wherein the control system is configured to define the activation zone based on information gathered by the sensor system regarding the user.
Clause 17. The system of any preceding clause, wherein the control system is configured to actively define the activation zone.
Clause 18. The system of any preceding clause, wherein the control system is configured to actively define the activation zone based on information gathered by the sensor system regarding the user.
Clause 19. A method of contactless user interfacing for ordering merchandise without user contact with peripherals, the method comprising: presenting visual information to a user via a user interface display, the visual information comprising at least one selectable merchandise option; capturing contactless user inputs including user position and user contactless gesture, via a sensor system, wherein capturing contactless user inputs includes detecting penetration of an activation zone as an indication that the user intends to provide contactless user input to the user interface display, and directly tracking a user's hand to directly sense the position of the user's hand in at least a portion of an area outside from the activation zone, and communicating one or more user input signals indicating direct tracking of the user's hand and indicating the detected penetration of the activation zone; and determining, via a control system, a target interface element applied by the user for contactless engagement with the visual information of the user interface display, based on indication of direct tracking of the user's hand and indication of penetration of the activation zone of the one or more user input signals.
Clause 20. The method of clause 19, wherein capturing contactless gestures includes defining the activation zone as a region ahead of the user interface display extending beyond a face of the user interface display by a predetermined distance.
Clause 21. The method of any preceding clause, wherein the activation zone is defined by the control system for enhanced monitoring to determine a focal point of the target interface element, wherein the focal point of target interface element is applied as a precise point of indication for user contactless activation of visual information.
Clause 22. The method of any preceding clause, wherein the focal point of the target interface element includes a point of the user's hand, or of an instrument held in the user's hand, that is closest to the user interface display within the activation zone as the precise point of indication for user contactless engagement with the visual information.
Clause 23. The method of any preceding clause, wherein the activation zone is defined as a 3-dimensional region.
Clause 24. The method of any preceding clause, wherein defining the activation zone includes defining a region projecting farther from the user interface display near at least one of a top section and a bottom section of the user interface display than in a mid-section of the display.
Clause 25. The method of any preceding clause, wherein determining a target selection device includes determining an acceleration of the user's hand or instrument held in the user's hand within the activation zone based on the one or more user input signals.
Clause 26. The method of any preceding clause, wherein determining a target selection device includes determining user intent to cause activation of visual information of the user interface display based on the determined acceleration within the activation zone.
Clause 27. The method of any preceding clause, wherein determining user intent to cause activation includes determining intent to cause activation of an area on the user interface display by comparison of the determined acceleration within the activation zone with a predetermined threshold acceleration.
Clause 28. The method of any preceding clause, wherein determining intent to cause activation includes determining the predetermined threshold acceleration based on at least one of penetration of the activation zone and direct tracking of the user's hand based on the user input signal.
Clause 29. The method of any preceding clause, wherein penetration of the user's hand within the activation zone includes at least one of depth of penetration into the activation zone and a hand configuration of the user.
Clause 30. The method of any preceding clause, wherein the hand configuration of the user includes the number of fingers of the user's hand extended to indicate the visual information for activation on the user interface display.
Clause 31. The method of any preceding clause, wherein depth of penetration includes distance between the user interface display and a lead digit of the user's hand.
Clause 32. The method of any preceding clause, wherein determining a target selection device includes determining that the lead digit of the user's hand is the closest finger to the user interface display.
Clause 33. The method of any preceding clause, wherein determining a target selection device includes determining that the lead digit of the user's hand is not the closest finger to the user interface display, based on the user's hand configuration within the activation zone.
Clause 34. The method of any preceding clause, wherein capturing contactless gestures includes defining the activation zone based on information gathered by the sensor system regarding the user.
Clause 35. The method of any preceding clause, wherein defining the activation zone includes actively defining the activation zone.
Clause 36. The method of any preceding clause, wherein actively defining the activation zone includes actively defining the activation zone based on information gathered by the sensor system regarding the user.
Clause 37. A contactless user interface system for selecting merchandise without user contact with peripherals, the system comprising: a user interface display configured for presenting visual information to the user, the visual information comprising at least one selectable merchandise option; a sensor system for capturing contactless user inputs, the sensor system comprising at least one sensor configured to capture contactless user inputs including user position and user contactless gesture; a control system including a processor configured to execute instructions stored on memory to govern contactless user interface system operations, the control system arranged in communication with the sensor system to receive the one or more user input signals for determining user inputs as commands.
Clause 38. The contactless user interface system of clause 37, wherein the control system is configured to determine which fingertip of a user's hand is furthest removed from the center of the palm of the corresponding hand, and to identify the determined fingertip as a target interface element.
Clause 39. The contactless user interface system of any preceding clause, wherein in response to capture of multiple user hands close in time with each other by the sensor system, the control system is configured to determine which one of the multiple user hands corresponds with a determined fingertip that is closest to the user interface display.
Clause 40. The contactless user interface system of any preceding clause, wherein in response to determination that one determined fingertip of one corresponding hand is the closest to the user interface display of multiple hands, the control system is configured to designate the one determined fingertip as the primary target interface element.
Clause 41. The contactless user interface system of any preceding clause, wherein in response to determination that another determined fingertip of one corresponding hand is newly the closest to the user interface display of multiple hands, the control system is configured to re-designate the another determined fingertip as the primary target interface element.
Clause 42. The contactless user interface system of any preceding clause, wherein re-designation is undertaken only after at least a predetermined time pause from a previous designation.
Clause 43. The contactless user interface system of any preceding clause, wherein activation of a command by a primary target interface element is undertaken only after a predetermined time pause from a previous command to avoid unintended activations.
Clause 44. The contactless user interface system of any preceding clause, wherein activation of a command by a primary target interface element of a corresponding hand is undertaken only after a predetermined time pause from a previous command to avoid unintended activations from the same corresponding hand.
Clause 45. The contactless user interface system of any preceding clause, wherein activation of a command by a primary target interface element of another corresponding hand that is re-designated is undertaken without the predetermined time pause.
Clause 46. The contactless user interface system of any preceding clause, wherein activation of a command by a primary target interface element of a corresponding hand is undertaken only after a predetermined time pause from a previous command to avoid unintended activations from the same corresponding hand.
Clause 47. The contactless user interface system of any preceding clause, wherein activation of a command by a primary target interface element of another corresponding hand that is re-designated is undertaken without the predetermined time pause.
Clause 48. The contactless user interface system of any preceding clause, implemented as a portion of a virtual kiosk.
Clause 49. A virtual kiosk comprising: a contactless user interface system for selecting merchandise without user contact with peripherals, the system comprising: a user interface display configured for presenting virtual visual information to the user, the virtual visual information comprising at least one selectable merchandise option; a sensor system for capturing contactless user inputs, the sensor system comprising at least one sensor configured to capture contactless user inputs including user position and user contactless gesture, the sensor system configured to detect penetration of an activation zone as an indication that the user intends to provide contactless user input to the user interface display, wherein the sensor system is configured for direct tracking of a user's hand to directly sense the position of the user's hand in at least a portion of an area outside from the activation zone, wherein the sensor system is configured to communicate one or more user input signals indicating direct tracking of the user's hand and indicating the detected penetration of the activation zone; a control system including a processor configured to execute instructions stored on memory to govern contactless user interface system operations, the control system arranged in communication with the sensor system to receive the one or more user input signals and configured to determine a target interface element applied by the user for contactless engagement with the visual information of the user interface display, based on indication of direct tracking of the user's hand and indication of penetration of the activation zone of the one or more user input signals.
Clause 50. The virtual kiosk of clause 49, wherein the activation zone is defined as a virtual zone.
Clause 51. The virtual kiosk of any preceding clause, wherein the activation zone is defined as a physical zone.
Clause 52. The virtual kiosk of any preceding clause, wherein the user's hand is defined as a virtual hand.
Clause 53. The virtual kiosk of any preceding clause, wherein the user's hand is defined as a physical hand.
Clause 54. The virtual kiosk of any preceding clause, wherein at least one of the sensor system and the control system is a virtual system.
Clause 55. The virtual kiosk of any preceding clause, wherein at least one of the sensor system and the control system is a physical system.
Clause 56. The virtual kiosk of any preceding clause, wherein the user interface display is a virtual display.
Clause 57. The virtual kiosk of any preceding clause, wherein the user interface display is a physical display.
While certain illustrative embodiments have been described in detail in the figures and the foregoing description, such an illustration and description is to be considered as exemplary and not restrictive in character, it being understood that only illustrative embodiments have been shown and described and that all changes and modifications that come within the spirit of the disclosure are desired to be protected. There are a plurality of advantages of the present disclosure arising from the various features of the methods, systems, and articles described herein. It will be noted that alternative embodiments of the methods, systems, and articles of the present disclosure may not include all of the features described yet still benefit from at least some of the advantages of such features. Those of ordinary skill in the art may readily devise their own implementations of the methods, systems, and articles that incorporate one or more of the features of the present disclosure.
This Utility Patent Application claims the benefit of priority to each of Provisional Application No. 63/146,195, filed on Feb. 5, 2021, entitled “DEVICES, SYSTEMS, AND METHODS FOR CONTACTLESS INTERFACING”, and Provisional Application No. 63/281,112, filed on Nov. 19, 2021, entitled “TIMING/MEASUREMENT/VR DEVICES, SYSTEMS, AND METHODS FOR CONTACTLESS INTERFACING,” the contents of each of which are hereby incorporated by reference in their entireties, including but without limitation, those portions related to interfacing.