This invention relates to three dimensional (3D) user interface systems. More specifically, this invention relates to providing haptic feedback to a user of a 3D user interface system.
Operating systems can be found on almost any device that contains a computing system from cellular phones and video game consoles to supercomputers and web servers. A device's operating system (OS) is a collection of software that manages computer hardware resources and provides common services for user application programs. The OS typically acts as an interface between the hardware and the programs requesting input or output (I/O), CPU resources, and memory allocation. When an application executes on a computer system with an operating system, the application's code is usually executed directly by the hardware and can make system calls to the OS or be interrupted by it. The portion of the OS code that interacts directly with the computer hardware and implements services for applications is typically referred to as the kernel of the OS. The portion that interfaces with the applications and users is known as the shell. The user can interact with the shell using a variety of techniques including (but not limited to) using a command line interface or a graphical user interface (GUI).
Most modern computing devices support graphical user interfaces (GUIs). GUIs are typically rendered using one or more interface objects. Actions in a GUI are usually performed through direct manipulation of graphical elements such as icons. In order to facilitate interaction, the GUI can incorporate one or more interface objects referred to as interaction elements that are visual indicators of user action or intent (such as a pointer), or affordances showing places where the user may interact. The term affordance here is used to refer to the fact that the interaction element suggests actions that can be performed by the user within the GUI.
A GUI typically uses a series of interface objects to represent in a consistent manner the ways in which a user can manipulate the information presented to the user via the user interface. In the context of traditional personal computers employing a keyboard and a pointing device, the most common combination of such objects in GUIs is the Window, Icon, Menu, Pointing Device (WIMP) paradigm. The WIMP style of interaction uses a pointing device, most often a mouse, trackball, and/or trackpad, to control the position of a pointer, and presents information organized in windows and/or tabs and represented with icons. Available commands are listed in menus, and actions can be performed by making gestures with the pointing device.
The term user experience is generally used to describe a person's emotions about using a product, system or service. With respect to user interface design, the ease with which a user can interact with the user interface is a significant component of the user experience of a user interacting with a system that incorporates the user interface. A user interface in which task completion is difficult due to an inability to accurately convey input to the user interface can lead to a negative user experience, as can a user interface that rapidly leads to fatigue.
Touch interfaces, such as touch screen displays and trackpads, enable users to interact with GUIs via two dimensional (2D) gestures (i.e. gestures that contact the touch interface). The ability of the user to directly touch an interface object displayed on a touch screen can obviate the need to display a cursor. In addition, the limited screen size of most mobile devices has created a preference for applications that occupy the entire screen instead of being contained within windows. As such, most mobile devices that incorporate touch screen displays do not implement WIMP interfaces. Instead, mobile devices utilize GUIs that incorporate icons and menus and that rely heavily upon a touch screen user interface to enable users to identify the icons and menus with which they are interacting.
Multi-touch GUIs are capable of receiving and utilizing multiple temporally overlapping touch inputs from multiple fingers, styluses, and/or other such manipulators (as opposed to inputs from a single touch, single mouse, etc.). The use of a multi-touch GUI may enable the utilization of a broader range of touch-based inputs than a single-touch input device that cannot detect or interpret multiple temporally overlapping touches. Multi-touch inputs can be obtained in a variety of different ways including (but not limited to) via touch screen displays and/or via trackpads (pointing device).
In many GUIs, scrolling and zooming interactions are performed by interacting with interface objects that permit scrolling and zooming actions. Interface objects can be nested together such that one interface object (often referred to as the parent) contains a second interface object (referred to as the child). The behavior that is permitted when a user touches an interface object or points to the interface object is typically determined by the interface object and the requested behavior is typically performed on the nearest ancestor object that is capable of the behavior, unless an intermediate ancestor object specifies that the behavior is not permitted. The zooming and/or scrolling behavior of nested interface objects can also be chained. When a parent interface object is chained to a child interface object, the parent interface object will continue zooming or scrolling when a child interface object's zooming or scrolling limit is reached.
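As a concrete illustration of the behavior-resolution and chaining rules above, the following sketch walks a scroll request up a hierarchy of nested interface objects. The `InterfaceObject` class, its attribute names, and the scroll limits are hypothetical and are not taken from any particular GUI framework.

```python
class InterfaceObject:
    """Hypothetical nested interface object, used only to illustrate the rules above."""

    def __init__(self, name, can_scroll=False, allows_descendant_scroll=True,
                 scroll_limit=0, parent=None, chain_to_parent=False):
        self.name = name
        self.can_scroll = can_scroll                              # object is capable of the scroll behavior
        self.allows_descendant_scroll = allows_descendant_scroll  # an intermediate ancestor may forbid it
        self.scroll_limit = scroll_limit                          # maximum scroll offset (arbitrary units)
        self.scroll_offset = 0
        self.parent = parent
        self.chain_to_parent = chain_to_parent                    # parent continues when this object's limit is reached

    def request_scroll(self, amount):
        """Perform the scroll on the nearest ancestor capable of it, honoring chaining."""
        obj = self
        while obj is not None and not obj.can_scroll:
            if not obj.allows_descendant_scroll:
                return                                            # an intermediate ancestor forbids the behavior
            obj = obj.parent
        if obj is None:
            return
        remaining = obj.scroll_limit - obj.scroll_offset
        applied = min(amount, remaining)
        obj.scroll_offset += applied
        leftover = amount - applied
        if leftover > 0 and obj.chain_to_parent and obj.parent is not None:
            obj.parent.request_scroll(leftover)                   # chained parent continues scrolling


# Example: a child list inside a scrollable page, chained to its parent.
page = InterfaceObject("page", can_scroll=True, scroll_limit=500)
lst = InterfaceObject("list", can_scroll=True, scroll_limit=100, parent=page, chain_to_parent=True)
lst.request_scroll(150)   # the list scrolls 100, and the remaining 50 is handed to the page
```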
The evolution of 2D touch interactions has led to the emergence of 3D user interfaces that are capable of 3D interactions. A variety of machine vision techniques have been developed to perform three dimensional (3D) gesture detection using image data captured by one or more digital cameras (RGB and/or IR), or one or more 3D sensors such as time-of-flight cameras and structured light cameras. Detected gestures can be static (i.e. a user placing her or his hand in a specific pose) or dynamic (i.e. a user transitioning her or his hand through a prescribed sequence of poses). Based upon changes in the pose of the human hand and/or changes in the pose of a part of the human hand over time, the image processing system can detect dynamic gestures.
An advance in the art is made by a vibrotactile system to augment a 3D input system in accordance with embodiments of this invention. In accordance with embodiments of this invention, an image processing system configured to conduct 3D gesture based interactive sessions that provides haptic feedback includes a memory containing an image processing application and a processor. The image processing application stored in the memory configures the processor to determine a haptic response and signal a haptic feedback device to provide the determined response in the following manner. The processor receives image data of a target object. The captured image data is used to detect a 3D gesture in the image data. A haptic feedback response is determined based upon the detected 3D gesture. A haptic response signal that indicates the determined haptic feedback response is generated and provided to a haptic feedback device.
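The processing flow summarized above can be sketched as follows. This is a minimal illustration rather than the disclosed image processing application; the gesture names, the `HapticResponse` structure, and the caller-supplied `detect_gesture` and `send_signal` callables are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class HapticResponse:
    pattern: str       # e.g. "wake_up", "boundary_warning", "tap"
    intensity: float   # 0.0 .. 1.0

def determine_feedback(gesture: str,
                       position: Optional[Tuple[float, float, float]]) -> Optional[HapticResponse]:
    """Map a detected 3D gesture (and, optionally, its spatial position) to a haptic response."""
    table = {
        "initialization": HapticResponse("wake_up", 0.8),
        "tap": HapticResponse("tap", 1.0),
        "scroll": HapticResponse("pipe_slide", 0.6),
    }
    return table.get(gesture)

def process_frame(image_data,
                  detect_gesture: Callable,
                  send_signal: Callable[[HapticResponse], None]) -> None:
    """Illustrative pipeline: image data -> 3D gesture -> haptic feedback response -> response signal."""
    gesture, position = detect_gesture(image_data)   # detector and transport are supplied by the caller
    if gesture is None:
        return
    response = determine_feedback(gesture, position)
    if response is not None:
        send_signal(response)                        # provide the haptic response signal to the device
```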
In accordance with embodiments of this invention, a spatial position of the target object in a 3D interaction zone may also be determined. In accordance with these embodiments, the haptic feedback response is determined based upon the spatial position of the target object.
In accordance with some embodiments of this invention, the spatial position of the target object is used to determine that the target object is proximate a boundary of the 3D interaction zone and the haptic feedback is an indication that the target object is proximate the boundary. In accordance with some of these embodiments, the 3D interaction zone is determined based upon motions of the target object. In accordance with some of these embodiments, the 3D interaction zone is defined based upon the motions of the target object during an initialization gesture. In accordance with some embodiments, a size of the 3D interaction zone is determined based upon a scale of at least one of a plurality of templates that matches a part of the target object visible in a sequence of frames of video data captured during detection of an initialization gesture and a distance of the part of the target object visible in the sequence of frames of video data from the camera used to capture the sequence of frames of video data. In accordance with some of these embodiments, a size of the 3D interaction zone is determined based upon a region in 3D space in which motion of the target object is observed during the initialization gesture. In accordance with some of these embodiments, the 3D interaction zone is a predetermined size determined based upon human physiology.
In accordance with some embodiments, the 3D gesture detected is a pointing gesture that is interacting with a gesture responsive interface object. In accordance with some embodiments, the 3D gesture detected is a pointing gesture that is interacting with an affordance. In accordance with some embodiments, an interactive response to the detected 3D gesture is determined and the haptic feedback response is determined based upon the interactive response. In accordance with some of these embodiments, the interactive response to the detected 3D gesture is an interaction with a gesture reactive interface object. In accordance with some of these embodiments, the detected 3D gesture targets a gesture reactive interface object and the interactive response is to unlock a new set of 3D gestures. In accordance with some embodiments, the detected 3D gesture is a tapping motion and the interactive response is interacting with a gesture reactive interface object. In accordance with some of these embodiments, the detected 3D gesture is a pointing motion, the interactive response is a tap and the determined haptic feedback is a specific vibration that simulates the sense of tapping on a surface. In accordance with some embodiments, the detected 3D gesture is a movement along a substantially vertical axis, the interactive response is determined to be a scrolling action and the haptic feedback response is a specific vibration that simulates the sense of moving a finger through an interior surface of a pipe. In accordance with some embodiments, the detected 3D gesture is a movement along a substantially horizontal axis, the interactive response is determined to be a scrubbing action and the haptic feedback response is a specific vibration that simulates the sense of rubbing a finger across a surface.
In accordance with some embodiments the haptic response signal is provided to the haptic feedback device by transmitting the haptic response signal to the haptic feedback device via a wireless connection. In accordance with some embodiments, the haptic response signal is provided to the haptic feedback device via a haptic feedback device interface.
In accordance with some embodiments, the haptic feedback device is a vibrotactile ring. In accordance with some of these embodiments, the vibrotactile ring includes vibrating motors situated around a circumference of the vibrotactile ring. A controller in the ring is configured to receive the haptic response signal and generate signals that selectively drive each of the vibrating motors to generate the determined haptic response. In accordance with some of these embodiments, the controller generates specific signals to drive each of the plurality of vibrating motors to provide a specific vibration pattern as the haptic response. In accordance with some of these embodiments, the processor is configured to provide the haptic response signal via a wireless connection to the vibrotactile ring and the vibrotactile ring further comprises a wireless receiver that receives the haptic response signal via the wireless connection and provides the haptic response signal to the controller. In accordance with some of these embodiments, the processor is configured to detect a hand pose associated with the 3D gesture in the captured image data and determine the orientation of the vibrotactile ring relative to a 3D interaction zone based upon the hand pose, and generate a haptic response signal based upon the orientation of the vibrotactile ring relative to the 3D interaction zone.
Turning now to the drawings, a 3D user interface that couples to a wearable device configured to provide haptic feedback/guidance is described. For purposes of this discussion, the terms 3D user interface, 3D gesture based user interface, and Natural User Interface (NUI) are used interchangeably throughout this description to describe a system that captures images of a user and determines when certain 3D gestures are made that indicate specific interactions with the user interface. The present disclosure describes a 3D user interface system that senses the position of a hand, finger(s), any other body part(s), and/or an arbitrary object; correlates the position information to the display context; and transmits a pre-defined vibration pattern to a wearable haptic feedback device. The haptic feedback device may or may not be in contact with the user's skin to stimulate the user's sense of touch and/or provide haptic guidance to the user in order to augment the 3D gesture based user interface experience.
Real-Time Gesture Based Interactive Systems
A real-time gesture based interactive system in accordance with an embodiment of the invention is illustrated in
Based upon the location and pose of a detected human hand, the image processing system 120 can detect 3D gestures including (but not limited to) an initialization gesture indicating that the user is commencing gesture based interaction with the system and gestures that indicate the user's intention to interact with a specific interface object within the user interface. 3D gestures can be static (i.e. a user placing her or his hand in a specific pose) or dynamic (i.e. a user transitioning her or his hand through a prescribed sequence of poses). Based upon changes in the pose of the human hand and/or changes in the pose of a part of the human hand over time, the image processing system can detect dynamic gestures. In a number of embodiments, the real-time gesture based interactive system 100 includes a display 180 via which the real-time gesture based interactive system can present a user interface incorporating gesture reactive interface objects to the user. As noted above, the presentation of gesture reactive interface objects and/or the manner in which a user can interact with the interface objects changes as a reaction or in response to the 3D gesture input provided by the user. 3D gesture based user interfaces that incorporate gesture reactive interface objects are disclosed in U.S. patent application Ser. No. 13/965,157 entitled “Systems and Methods for Implementing Three-Dimensional (3D) Gesture Based Graphical User Interfaces (GUI) that Incorporate Gesture Reactive Interface Objects”, filed Aug. 12, 2013, the disclosure of which is hereby incorporated by reference in its entirety.
In accordance with many embodiments, the manner in which a gesture reactive interface object is displayed within the user interface and/or the size of the target zone associated with the interface object is determined based on a variety of factors including (but not limited to) the distance of the user from the display, the location of the display, the size of the display, the resolution of the display, the displayed content, the provider of the displayed content, and/or user-tunable factors. In accordance with other embodiments, the manner in which the gesture reactive interface object is displayed and/or the interactions permitted by the interface object are modified when a user targets the interface object via a 3D gesture. In the context of a real-time gesture based interactive system in accordance with many embodiments of the invention, the concept of targeting an interface object is separate from the concept of selecting the interface object. A 3D gesture that targets an interface object is a 3D gesture that (like a pointing device) points a cursor at an interface object, but does not select the interface object. In accordance with many embodiments, a selection process is utilized in which an object is targeted by a first targeting 3D gesture, the user interface is modified to inform the user that the object is selectable and/or the time remaining in which the interface object is selectable, and the selection process is completed using a separate second selection 3D gesture. In this way, the user interface is not simply providing a user experience that treats 3D gestures as another form of pointer input, such as the input that would be received via a mouse or a trackpad. Instead, gesture reactive interface objects respond to 3D gestures in a way that provides a user experience in which the process of selecting interface objects is easier and less tiring to complete.
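A minimal sketch of the two-gesture targeting and selection process described above is given below. The gesture labels and the two-second selectable window are illustrative assumptions; the disclosed system may use different gestures and timing.

```python
import time

class SelectionStateMachine:
    """Sketch of the two-gesture selection described above: a targeting gesture makes an
    interface object selectable for a limited time, and a separate selection gesture
    completes the selection. Gesture names and the 2-second window are assumptions."""

    def __init__(self, selectable_seconds: float = 2.0):
        self.selectable_seconds = selectable_seconds
        self.targeted_object = None
        self.targeted_at = 0.0

    def on_gesture(self, gesture: str, interface_object=None):
        now = time.monotonic()
        if gesture == "target" and interface_object is not None:
            self.targeted_object = interface_object       # first gesture: target the object
            self.targeted_at = now
            return ("show_selectable", interface_object)  # UI informs the user it is selectable
        if gesture == "select" and self.targeted_object is not None:
            if now - self.targeted_at <= self.selectable_seconds:
                obj, self.targeted_object = self.targeted_object, None
                return ("selected", obj)                  # second gesture: complete the selection
            self.targeted_object = None                   # the selectable window has expired
        return ("idle", None)
```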
In accordance with many embodiments of this invention, the real-time gesture based interactive system 100 also includes a haptic feedback device 190 and a haptic feedback device interface 185. The interactive system can continuously monitor user input received via 3D gestures and provide haptic feedback (where appropriate) to assist the user when interacting with the 3D gesture based user interface. A signal is then generated by the image processing system 120 that indicates the type of haptic feedback to provide to the user. The signal is provided by the image processing system 120 to a haptic feedback device interface 185. The haptic feedback device interface 185 then provides the signal to the haptic feedback device 190 to generate the signaled haptic feedback.
Although a specific real-time gesture based interactive system including two cameras is illustrated in
Image Processing System
Image processing systems in accordance with many embodiments of the invention can be implemented using a variety of software configurable computing devices including (but not limited to) personal computers, tablet computers, smart phones, embedded devices, Internet devices, wearable devices, and consumer electronics devices such as (but not limited to) televisions, projectors, disc players, set top boxes, glasses, watches, and game consoles. An image processing system in accordance with an embodiment of the invention is illustrated in
The image processing system 120 also includes memory 210 which can take the form of one or more different types of storage including semiconductor and/or disk based storage. In accordance with the illustrated embodiment, the processor 235 is configured using an operating system 230. Where the image processing system is part of an embedded system, the image processing system may not utilize an operating system. Referring back to
The 3D gesture tracking application 220 processes image data received via the camera interface 240 to identify 3D gestures such as hand gestures including initialization gestures and/or the orientation and distance of individual fingers. These 3D gestures can be processed by the processor 235, which can detect an initialization gesture and initiate an initialization process that can involve defining a 3D interaction zone in which a user can provide 3D gesture input to the image processing system. Following the completion of the initialization process, the processor can commence tracking 3D gestures that enable the user to interact with a user interface generated by the operating system 230 and/or the interactive application 225.
Based on the gestures identified by the 3D gesture tracking application, the haptic feedback application 225 generates signals that are provided to a haptic feedback device to provide haptic feedback to the user. The signals may include (but are not limited to) signals that indicate the initialization process is complete, signals that indicate that the hand is moving outside the 3D interaction zone, signals indicating that a function is being performed, and other signals that provide haptic feedback based upon detected gestures.
In accordance with many embodiments, the interactive application 215 and the operating system 230 configure the processor 235 to generate and render an initial user interface using a set of interface objects. The interface objects can be modified in response to a detected interaction with a targeted interface object and an updated user interface rendered. Targeting and interaction with interface objects can be performed via a 3D gesture based input modality using the 3D gesture tracking application 220. In accordance with several embodiments, the 3D gesture tracking application 220 and the operating system 230 configure the processor 235 to capture image data using an image capture system via the camera interface 240, and detect a targeting 3D gesture in the captured image data that identifies a targeted interface object within a user interface. The processor 235 can also be configured to then detect a 3D gesture in captured image data that identifies a specific interaction with the targeted interface object. Based upon the detected 3D gesture, the 3D gesture tracking application 220 and/or the operating system 230 can then provide an event corresponding to the appropriate interaction with the targeted interface objects to the interactive application to enable it to update the user interface in an appropriate manner. In addition, the haptic feedback application 245 can use an identified 3D gesture to determine an appropriate haptic feedback (if any) to provide in response to the identified 3D gesture and generate a haptic response signal to send to the haptic feedback device. Although specific techniques for configuring an image processing system using an operating system, a 3D gesture tracking application, an interactive application, and a haptic feedback application are described above with reference to
In accordance with many embodiments, the processor receives frames of video via the camera interface 245 from at least one camera or other type of image capture device. The camera interface can be any of a variety of interfaces appropriate to the requirements of a specific application including (but not limited to) the USB 2.0 or 3.0 interface standards specified by USB-IF, Inc. of Beaverton, Oreg., and the MIPI-CSI2 interface specified by the MIPI Alliance. In accordance with a number of embodiments, the received frames of video include image data represented using the RGB color model represented as intensity values in three color channels. In accordance with several embodiments, the received frames of video data include monochrome image data represented using intensity values in a single color channel. In accordance with several embodiments, the image data represents visible light. In accordance with other embodiments, the image data represents intensity of light in non-visible portions of the spectrum including (but not limited to) the infrared, near-infrared, and ultraviolet portions of the spectrum. In certain embodiments, the image data can be generated based upon electrical signals derived from other sources including but not limited to ultrasound signals. In several embodiments, the received frames of video are compressed using the Motion JPEG video format (ISO/IEC JTC1/SC29/WG10) specified by the Joint Photographic Experts Group. In a number of embodiments, the frames of video data are encoded using a block based video encoding scheme such as (but not limited to) the H.264/MPEG-4 Part 10 (Advanced Video Coding) standard jointly developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC JTC1 Motion Picture Experts Group. In certain embodiments, the image processing system receives RAW image data. In several embodiments, the camera systems that capture the image data also include the capability to capture dense depth maps and the image processing system is configured to utilize the dense depth maps in processing the image data received from the at least one camera system. In several embodiments, the camera systems include 3D sensors that capture dense depth maps including (but not limited to) time-of-flight cameras.
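For illustration, the following sketch reads frames from a camera and produces both color and monochrome intensity values, assuming OpenCV is available; the actual camera interface may instead be a USB or MIPI-CSI2 driver as described above.

```python
import cv2  # assumes OpenCV is available; not part of the disclosed camera interface

def read_frames(device_index: int = 0):
    """Yield frames from a camera as arrays of intensity values (BGR color channels in OpenCV)."""
    capture = cv2.VideoCapture(device_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            # frame.shape is (rows, cols, 3) for color data; convert to a single
            # channel when monochrome intensity values are sufficient.
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            yield frame, grey
    finally:
        capture.release()
```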
In accordance with many embodiments, the display interface 250 is utilized to drive a display device that can be integrated within the image processing system and/or external to the image processing system. In a number of embodiments, the HDMI High Definition Multimedia Interface specified by HDMI Licensing, LLC of Sunnyvale, Calif. is utilized to interface with the display device. In other embodiments, any of a variety of display interfaces appropriate to the requirements of a specific application can be utilized.
In accordance with embodiments, the haptic feedback device interface can be any of a variety of interfaces appropriate to the requirements of a specific application including (but not limited to) the USB 2.0 or 3.0 interface standards specified by USB-IF, Inc. of Beaverton, Oreg., and the MIPI-CSI2 interface specified by the MIPI Alliance. In accordance with some embodiments, the haptic feedback device interface includes any of a variety of interfaces appropriate to the requirements of a particular haptic feedback device.
In accordance with many embodiments, the wireless transmitter interface can be any of a variety of appropriate interfaces for communication with another device such as, but not limited to Bluetooth®. In accordance with certain embodiments, the wireless transmitter interface allows communication with a haptic feedback device via a wireless communication.
Although a specific image processing system is illustrated in
Haptic Feedback Device
In accordance with some embodiments of this invention, the haptic feedback device 190 may be a ring, glove, watch, or other article that is worn by the user, includes one or more vibration motors that vibrate to generate haptic feedback, and is communicatively connected to the device that includes the image processing system. In accordance with some other embodiments of this invention, the haptic feedback device 190 may be embedded in the device that includes the image processing system. Some examples of such devices include, but are not limited to, laptops, desktops, smart phones, or similar devices. The components of a haptic feedback device 190 in accordance with embodiments of this invention are shown in
The transceiver 320 is a component that allows the controller 315 to receive and send signals to and from other processing systems in accordance with some embodiments of this invention. In accordance with many embodiments, the transceiver receives wireless signals from and may transmit wireless signals to the image processing system. The wireless signals may be IR signals, RF signals, or any other type of wireless signal that may be used to communicate between processing devices.
A Pulse Width Modulator is in communication with controller 315 and is controlled by controller 315 to generate signals applied to vibration motors 330 to provide the desired haptic feedback. Vibration motors 330 are one or more electro-mechanical devices that vibrate or provide some other stimuli based upon signals received from the pulse width modulator to provide the desired haptic feedback to the user. Power supply 335 is a battery or other supply of electricity used to power the various components of haptic feedback device 190.
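As a simple illustration of the relationship between a desired vibration intensity and the pulse width modulated drive signal, the sketch below computes a duty cycle and per-period on/off times. The 200 Hz drive frequency is an assumption; a real controller would program values like these into a hardware PWM peripheral rather than compute them in Python.

```python
def vibration_to_pwm(intensity: float, frequency_hz: float = 200.0):
    """Convert a desired vibration intensity (0.0-1.0) into PWM parameters for one motor.

    Returns the duty cycle and the on/off times per period. Purely illustrative."""
    duty = max(0.0, min(1.0, intensity))   # clamp to a valid duty cycle
    period_s = 1.0 / frequency_hz
    on_time_s = duty * period_s            # time the drive signal is high each period
    off_time_s = period_s - on_time_s
    return duty, on_time_s, off_time_s


# Example: drive a motor at 60% intensity.
duty, on_s, off_s = vibration_to_pwm(0.6)
```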
Although specific haptic feedback devices in accordance with embodiments of the invention are described above with reference to
Vibrotactile Ring
In accordance with some embodiments of this invention, haptic feedback is provided by a vibrotactile ring that is worn by the user and is wirelessly connected to the image processing system. A vibrotactile ring in accordance with some of these embodiments of the invention is illustrated in
In accordance with the shown embodiment, the base 405 includes a top deck 415 and a bottom deck 420. Top deck 415 houses the processing and transceiver components of vibrotactile ring 400. The bottom deck 420 houses the power supply of the ring. The components in the top and bottom decks of base 405 may be arranged in other manners depending on the exact configuration of the components and/or design of the ring.
Processes for Providing Haptic Feedback
In accordance with a number of embodiments of this invention, a 3D gesture based user interface system provides haptic feedback based upon the gestures captured by the system. A process for providing haptic feedback performed by an image processing system in accordance with embodiments of this invention is shown in
With specific regard to the process 500, the system captures signal data (510) of a targeted object in an interactive area. In accordance with some of these embodiments, the signal data includes image data captured by an image capture device. 3D gestures are detected using the signal data (515). The spatial position of the targeted object, such as a hand or fingers, within a 3D interaction zone may also optionally be determined (520).
The manner in which interface objects respond to a detected 3D gesture, and/or the interface objects that are substituted into the 3D gesture based user interface in response to the detected 3D gesture, is determined (525). The response of gesture responsive interface objects to a 3D gesture and/or the modification of interface objects used to render a display in response to a 3D gesture can be collectively referred to as the interface response of a 3D gesture based user interface in response to a detected 3D gesture. Examples of interface responses include, but are not limited to, initializing the 3D gesture system, selecting an interactive object in the 3D gesture based user interface, and manipulating an interactive object in the user interface.
The haptic feedback provided to the user via the haptic feedback device is determined based upon one or more parameters including the detected 3D gesture, the spatial position of the target object in the 3D interaction zone during the 3D gesture, and/or the determined interface response to the detected 3D gesture (530). The haptic feedback is a tactile response to the detected gesture to enhance the user experience of the 3D gesture based user interface. The tactile response may be used to provide tactile feedback when performing a 3D gesture, to indicate a current state of the interaction between the user and a targeted gesture reactive interface object, and/or guide the user in the use of the 3D gesture based user interface in any other way appropriate to the requirements of a specific application. The following are some specific examples of haptic feedback that can be utilized in accordance with embodiments of this invention. When a gesture is detected and the system is initialized, a certain vibration pattern is provided as haptic feedback. When the spatial position of the target object during the gesture moves out of the interactive area, another certain vibration pattern is provided as haptic feedback. The vibration pattern provided may indicate the direction in which the target object moved out of the 3D interaction zone in accordance with some of these embodiments. In addition, other vibration patterns may be provided as the target object is approaching a border of the 3D interaction zone to warn a user that the border is approaching. In accordance with some of these embodiments, the vibration patterns provided may gain intensity as the target object approaches the border. When a particular action is detected such as a tap or a swipe, a vibration pattern that mimics the feeling of the function on a touch pad may be provided as haptic feedback. Any number of other gestures, spatial relations, interface responses, and/or combinations of these inputs may invoke particular haptic feedback.
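One of the feedback rules described above, warning the user with increasing intensity as the target object approaches a border of the 3D interaction zone, can be sketched as follows. The zone bounds, the 20 mm warning margin, and the axis/border naming are assumptions for illustration.

```python
def boundary_warning(position_mm, zone_min_mm, zone_max_mm, warn_margin_mm=20.0):
    """Return (border, intensity) for the nearest border when the target object is within
    the warning margin, or None when it is comfortably inside the 3D interaction zone.
    Intensity grows from 0.0 to 1.0 as the target object approaches the border."""
    axes = ("x", "y", "z")
    strongest = None
    for axis, p, lo, hi in zip(axes, position_mm, zone_min_mm, zone_max_mm):
        for border, distance in ((f"{axis}_min", p - lo), (f"{axis}_max", hi - p)):
            if distance < warn_margin_mm:
                intensity = min(1.0, max(0.0, 1.0 - distance / warn_margin_mm))
                if strongest is None or intensity > strongest[1]:
                    strongest = (border, intensity)   # direction of the nearest border + intensity
    return strongest


# Example: a hand 5 mm from the left (x_min) border of a 160 mm x 90 mm x 200 mm zone.
print(boundary_warning((5.0, 45.0, 100.0), (0.0, 0.0, 0.0), (160.0, 90.0, 200.0)))
```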
After the haptic feedback response is determined, a haptic response signal to send to the haptic feedback device is generated and provided to the haptic feedback device (535). This may be performed by looking up a code for the desired response in a table or other memory, or by generating a specific haptic code based upon the determined haptic feedback response. In accordance with some embodiments where the haptic feedback device is connected to the image processing system, the haptic response signal is provided to the haptic feedback device through a haptic feedback device interface. In accordance with other embodiments where the haptic feedback device is communicatively connected to the image processing system, the haptic response signal is transmitted as a message. In accordance with some of these embodiments, the haptic feedback device is wirelessly connected to the image processing system and the haptic response signal is sent via a message over the wireless connection.
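A minimal sketch of looking up a haptic code and providing it to the haptic feedback device is shown below. The wake-up and left exit codes follow the example given later in this description; the remaining table entries and the destination address are invented for illustration, and a UDP datagram stands in for the transport, whereas the actual system may use a haptic feedback device interface or a wireless (e.g. Bluetooth) connection.

```python
import socket
import struct

# Hypothetical table mapping determined haptic feedback responses to one-byte codes.
# Codes 0 (wake-up) and 1 (left exit) follow the example later in this description.
HAPTIC_CODES = {
    "wake_up": 0,
    "exit_left": 1,
    "exit_right": 2,   # illustrative
    "tap": 3,          # illustrative
}

def send_haptic_code(response: str, address=("192.168.0.42", 9000)) -> None:
    """Look up the code for the determined response and send it as a one-byte message."""
    code = HAPTIC_CODES[response]
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(struct.pack("B", code), address)
```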
After the haptic response signal is provided to the haptic feedback device, process 500 is repeated for the next detected gesture. In accordance with some embodiments of this invention, the process allows ongoing haptic feedback during a 3D gesture based interactive session to prompt the user for inputs. For example, the haptic feedback may be used during an initialization process to prompt gestures that may be used to define the 3D interaction zone. Further, the haptic feedback may be used to direct the user to move the target object into a particular area of the 3D interaction zone in accordance with some embodiments. One skilled in the art will recognize that the user may be prompted to do any number of actions using the haptic feedback in accordance with embodiments of this invention.
Examples of Haptic Feedback
In accordance with some embodiments of this invention, the haptic feedback device is a vibrotactile ring as shown with respect to
In certain embodiments, a 3D interaction zone is defined in 3D space and motion of a finger and/or gestures within a plane in the 3D interaction zone parallel to the plane of the display can be utilized to determine the location on which to overlay a target on a display. A feature of systems in accordance with many embodiments of the invention is that they can utilize a comparatively small interaction zone. In several embodiments, the interaction zone is a predetermined 2D or 3D space defined relative to a tracked hand such that a user can traverse the entire 2D or 3D space using only movement of the user's finger and/or wrist. Utilizing a small interaction zone can enable a user to move a target from one side of a display to another in an ergonomic manner. Larger movements, such as arm movements, can lead to fatigue during interactions of even short duration. In several embodiments, the size of the interaction zone is determined based upon the distance of the tracked hand from a reference camera and the relative position of the tracked hand in the field of view. In addition, constraining a gesture based interactive session to a small interaction zone can reduce the overall computational load associated with tracking the human hand during the gesture based interactive session. When an initialization gesture is detected, an interaction zone can be defined based upon the motion of the tracked hand. In several embodiments, the interaction zone is defined relative to the mean position of the tracked hand during the initialization gesture. In a number of embodiments, the interaction zone is defined relative to the position occupied by the tracked hand at the end of the initialization gesture. In certain embodiments, the interaction zone is a predetermined size. In many embodiments, the interaction zone is a predetermined size determined based upon human physiology. In several embodiments, a 3D interaction zone corresponds to a 3D volume that is no greater than 160 mm×90 mm×200 mm. In certain embodiments, the size of the 3D interaction zone is determined based upon the scale of at least one of a plurality of templates that matches a part of a human hand visible in a sequence of frames of video data captured during detection of an initialization gesture and the distance of the part of the human hand visible in the sequence of frames of video data from the camera used to capture the sequence of frames of video data. In a number of embodiments, the size of a 3D interaction zone is determined based upon the region in 3D space in which motion of the human hand is observed during the initialization gesture. In many embodiments, the size of the interaction zone is determined based upon a 2D region within a sequence of frames of video data in which motion of the part of a human hand is observed during the initialization gesture. In systems that utilize multiple cameras and that define a 3D interaction zone, the interaction zone can be mapped to a 2D region in the field of view of each camera. During subsequent hand tracking, the images captured by each camera can be cropped to the interaction zone to reduce the number of pixels processed during the gesture based interactive session. Although specific techniques are discussed above for defining interaction zones based upon hand gestures that do not involve gross arm movement (i.e. primarily involve movement of the wrist and finger without movement of the elbow or shoulder), any of a variety of processes can be utilized for defining interaction zones and utilizing the interaction zones in conducting 3D gesture based interactive sessions as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
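The sketch below combines two of the embodiments described above: it centers a 3D interaction zone on the mean tracked hand position during the initialization gesture and sizes the zone from the observed motion region, clamped so that it is no greater than 160 mm×90 mm×200 mm. Combining these embodiments in one function, and the per-axis clamping, are assumptions for illustration.

```python
def define_interaction_zone(hand_positions_mm, max_size_mm=(160.0, 90.0, 200.0)):
    """Define a 3D interaction zone from hand positions tracked during an initialization
    gesture: centered on the mean position, sized by the observed motion region, and
    clamped to the stated maximum extent per axis. Illustrative only."""
    n = len(hand_positions_mm)
    centre = [sum(p[i] for p in hand_positions_mm) / n for i in range(3)]
    observed = [max(p[i] for p in hand_positions_mm) - min(p[i] for p in hand_positions_mm)
                for i in range(3)]
    size = [min(observed[i], max_size_mm[i]) for i in range(3)]
    zone_min = tuple(centre[i] - size[i] / 2.0 for i in range(3))
    zone_max = tuple(centre[i] + size[i] / 2.0 for i in range(3))
    return zone_min, zone_max


# Example: positions (in mm) sampled while the user traces a small initialization gesture.
samples = [(10.0, 20.0, 300.0), (60.0, 50.0, 320.0), (35.0, 80.0, 310.0)]
print(define_interaction_zone(samples))
```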
An example of a 3D interaction zone for a 3D interactive system having a vibrotactile ring as a haptic feedback device in accordance with embodiments of this invention is shown in
Haptic feedback that warns when a target object is moving out of the 3D interaction zone in accordance with embodiments of this invention is shown in
The haptic feedback in response to a wake up gesture in accordance with embodiments of this invention is shown in
A haptic feedback in response to the detection of a tap gesture in accordance with embodiments of this invention is shown in
A haptic feedback in response to the detection of a gesture indicating a scrolling action in accordance with embodiments of this invention is shown in
A haptic feedback in response to the detection of a gesture indicating a scrubbing action in accordance with embodiments of this invention is shown in
Timing Diagram of Haptic Feedback
An example of typical haptic feedback provided over a period of time by a 3D gesture based user interface having a vibrotactile ring as a haptic feedback device in accordance with embodiments of this invention is shown in
Haptic Feedback Response
A 3D gesture based user interface detects a gesture and generates a haptic response signal that indicates to a haptic feedback device the haptic feedback response to provide. The conversion of haptic response signals to a haptic feedback response in a system including a 3D gesture based user interface incorporating a vibrotactile ring as a haptic feedback device in accordance with embodiments of this invention is shown in
In the illustrated embodiment, the code for a wake-up response is 0. However, other codes can be used without departing from this embodiment. The PWM pattern associated with the wake-up code is shown by timing pulse 1410 applied to a first or left motor, timing pulse 1415 applied to a second or top motor, timing pulse 1420 applied to a third or right motor, and timing pulse 1425 applied to a fourth or bottom motor. As can be seen from pulses 1410, 1415, 1420, and 1425, the pulse for each subsequent motor is delayed until at least the end of the previous signal to cause a sequential vibration that travels around the circumference of the vibrotactile ring.
Also in accordance with the shown embodiment, the code for a left exit in which the target object approaches a left boundary of the 3D interaction zone is 1. However, other codes may be used. The vibrotactile ring receives the code and associates the code 1 with the left exit response. The PWM pattern associated with the left exit code is shown by timing pulse 1435 applied to a first or left motor, timing pulse 1440 applied to a second or top motor, timing pulse 1445 applied to a third or right motor, and timing pulse 1450 applied to a fourth or bottom motor. As can be seen from pulses 1435, 1440, 1445, and 1450, only a sustained pulse is applied to the first or left motor. There is no pulse applied to the other motors.
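The code-to-pattern association described above can be represented as a small table of per-motor pulse schedules, as sketched below. Only the wake-up code 0 (a pulse travelling sequentially around the ring) and the left exit code 1 (a sustained pulse on the left motor) come from this description; the motor indexing, pulse durations, and duty cycles are assumptions.

```python
# Each schedule lists (motor, start_ms, duration_ms, duty) tuples. Motors are indexed
# 0=left, 1=top, 2=right, 3=bottom around the circumference of the ring.
PATTERNS = {
    0: [  # wake-up: each pulse starts after the previous one ends, travelling around the ring
        (0, 0, 100, 0.8),
        (1, 100, 100, 0.8),
        (2, 200, 100, 0.8),
        (3, 300, 100, 0.8),
    ],
    1: [  # left exit: a single sustained pulse on the left motor only
        (0, 0, 600, 1.0),
    ],
}

def schedule_for_code(code: int):
    """Return the PWM schedule the ring's controller would apply for a received haptic code."""
    return PATTERNS.get(code, [])
```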
Various Types of Haptic Feedback Devices
As discussed above, the haptic feedback device may be embedded in a system that includes the image processing system, such as, but not limited to, laptop computers, desktop computers, tablets, smartphones, and wearable computing devices such as (but not limited to) rings, wrist bands, watches, and glasses in accordance with embodiments of this invention. In accordance with many embodiments of this invention, the haptic feedback device may be implemented using components already incorporated into a device. For example, the haptic feedback device may utilize built-in vibration motors (or other devices for generating vibrations) present in most smartphones, tablets, and eBook readers. In accordance with these embodiments, the image processing system would send a signal to an application resident on the device that activates the built-in vibration motors within the device when a gesture is detected. Furthermore, the haptic feedback device may be embedded in a device that is proximate the display to provide haptic feedback to a user. For example, the haptic feedback device may be embedded in the steering wheel of an automobile with a digital dashboard. This would allow the user to use gestures to control the digital dashboard without looking at the dashboard, allowing the user to concentrate on the road.
A smart-watch that includes a 3D gesture based user interface and has one or more vibration motors embedded in the smart-watch in accordance with embodiments of this invention is shown in
Although certain specific features and aspects of a haptic feedback system for a 3D gesture based user interface have been described herein, many additional modifications and variations would be apparent to those skilled in the art. For example, the features and aspects described herein may be implemented independently, cooperatively or alternatively without deviating from the spirit of the disclosure. It is therefore to be understood that the system may be practiced otherwise than as specifically described. Thus, the foregoing description should be considered in all respects as illustrative and not restrictive, the scope of the claims to be determined as supported by this disclosure and the claims' equivalents, rather than the foregoing description.
The current application claims priority to U.S. Provisional Patent Application No. 61/860,872, filed Jul. 31, 2013, the disclosure of which is incorporated herein by reference.
Number | Date | Country |
---|---|---|
61/860,872 | Jul 2013 | US |