The field of the present invention is curved and flexible touch surface systems using optical touch sensors.
Large curved displays reduce outer edge distortions and provide a panoramic view. Curved display screens are also used in certain mobile phones. One of the issues in the use of curved screens in commercial electronics is how accurately such a screen can work with a touch sensor.
A flexible display is an electronic visual display which is flexible in nature, as opposed to the more prevalent traditional flat-screen displays used in most electronic devices. In recent years there has been a growing interest from numerous consumer electronics manufacturers to apply this display technology in e-readers, mobile phones and other consumer electronics (Wikipedia, “Flexible display”). The flexible nature of the display prevents manufacturers from adding conventional touch sensors to flexible displays.
The Wi-Fi Alliance launched the Miracast certification program at the end of 2012. WI-FI ALLIANCE and MIRACAST are registered trademarks of WI-FI Alliance Corporation California. Devices that are Miracast-certified can communicate with each other, regardless of manufacturer. Adapters are available that plug into High-Definition Multimedia Interface (HDMI) or Universal Serial Bus (USB) ports, allowing devices without built-in Miracast support to connect via Miracast. Miracast employs the peer-to-peer Wi-Fi Direct standard to send video and audio. WI-FI DIRECT is a registered trademark of WI-FI Alliance Corporation California. IPv4 is used on the Internet layer. On the transport layer, TCP or UDP are used. On the application layer, the stream is initiated and controlled via RTSP, with RTP used for the data transfer.
The present invention enables touch and gesture input on a Miracast-connected TV, monitor or projector to be detected and communicated back to the server, laptop, tablet or smartphone that originally sent the displayed image.
There is thus provided in accordance with an embodiment of the present invention a touch system having a curved touch surface, including a housing, a curved surface near the housing, light emitters mounted in the housing projecting light beams out of the housing over and across the curved surface, such that at least some of the light beams are incident upon and reflected by the curved surface when crossing over the curved surface, light detectors mounted in the housing detecting reflections, by a reflective object touching the curved surface, of the light beams projected by the light emitters, lenses mounted and oriented in the housing relative to the light emitters and to the light detectors such that (i) there is a particular angle of entry at which each light detector receives a maximal light intensity when light beams enter a lens corresponding to the light detector at the particular angle of entry, and (ii) there are target positions, associated with emitter-detector pairs, on the curved surface, whereby for each emitter-detector pair, when the object is located at the target position associated with the emitter-detector pair, then light beams emitted by the light emitter of that pair are reflected by the object into the lens corresponding to the light detector of that pair at the particular angle of entry, and a processor connected to the light emitters and to the light detectors, synchronously co-activating emitter-detector pairs, and calculating a location of the object touching the curved surface by determining an emitter-detector pair among the co-activated emitter-detector pairs, for which the light detector of the pair detects a maximal amount of light, and by identifying the target position associated with the pair.
In certain embodiments of the invention, the curved surface is a retractable surface.
In certain embodiments of the invention, the curved surface includes a first portion that is flat and a second portion that is curved, the second portion being further from the housing than the first portion, and when the object touches the second portion of the curved surface, some of the light reflected by the object is incident upon and reflected by the curved surface while crossing the curved surface toward the light detectors.
In certain embodiments of the invention, the processor is configured to calculate the location of the object by additionally determining positions associated with co-activated emitter-detector pairs that neighbor the thus-identified position, and calculating a weighted average of the thus-identified position and the thus-determined neighboring positions, wherein each position's weight in the average corresponds to a degree of detection of the reflected light beam for the emitter-detector pair to which that position is associated.
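By way of illustration only, the following Python sketch shows one way such a weighted-average location calculation might be implemented; the data structures and names (target_positions, detections, neighbors) are hypothetical and are not part of the claimed system.

```python
# Illustrative sketch (hypothetical data structures): locate a touch by finding
# the emitter-detector pair with maximal detection and averaging its target
# position with those of neighboring co-activated pairs, weighted by detection.

def locate_touch(target_positions, detections, neighbors):
    """target_positions: dict pair_id -> (x, y) on the curved surface
       detections:       dict pair_id -> detected light amount
       neighbors:        dict pair_id -> list of neighboring pair_ids"""
    best = max(detections, key=detections.get)          # pair with maximal detection
    pairs = [best] + [p for p in neighbors[best] if p in detections]
    total = sum(detections[p] for p in pairs)
    x = sum(detections[p] * target_positions[p][0] for p in pairs) / total
    y = sum(detections[p] * target_positions[p][1] for p in pairs) / total
    return x, y

if __name__ == "__main__":
    positions  = {"A": (10.0, 5.0), "B": (12.0, 5.0), "C": (10.0, 7.0)}
    detections = {"A": 200, "B": 50, "C": 30}
    neighbors  = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
    print(locate_touch(positions, detections, neighbors))  # approx. (10.36, 5.21)
```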
There is additionally provided in accordance with an embodiment of the present invention a touch system having a flexible touch surface, including a housing, a flexible surface near the housing, light emitters mounted in the housing projecting light beams out of the housing over and across the flexible surface such that, when the flexible surface is concavely flexed, at least some of the light beams are incident upon and reflected by the flexible surface as they cross over the flexible surface, light detectors mounted in the housing detecting reflections, by a reflective object touching the flexible surface, of the light beams projected by the emitters, lenses mounted and oriented in the housing relative to the light emitters and to the light detectors such that (i) there is a particular angle of entry at which each light detector receives a maximal light intensity when light beams enter a lens corresponding to the light detector at the particular angle of entry, and (ii) there are target positions, associated with emitter-detector pairs, on the flexible surface, whereby for each emitter-detector pair, when the object is located at the target position associated with the emitter-detector pair, then light beams emitted by the light emitter of that pair are reflected by the object into the lens corresponding to the light detector of that pair at the particular angle of entry, and a processor connected to the light emitters and to the light detectors, synchronously co-activating emitter-detector pairs, and calculating a location of the object touching the flexible surface by determining an emitter-detector pair among the co-activated emitter-detector pairs, for which the light detector of the pair detects a maximal amount of light, and by identifying the target position associated with the pair.
In certain embodiments of the invention, the flexible surface is retractable into the housing.
In certain embodiments of the invention, when the flexible surface is concavely curved and the object is touching a curved portion of the flexible surface, some of the light reflected by the object is incident upon and reflected by the flexible surface while crossing the flexible surface toward the light detectors.
In certain embodiments of the invention, the processor is configured to calculate a location of the object by additionally determining positions associated with co-activated emitter-detector pairs that neighbor the thus-identified position, and calculating a weighted average of the thus-identified position and the thus-determined neighboring positions, wherein each position's weight in the average corresponds to a degree of detection of the reflected light beam for the emitter-detector pair to which that position is associated.
There is further provided in accordance with an embodiment of the present invention a method of generating a three-dimensional image of an object using an optical proximity sensor in the shape of a bar, the sensor including a linear array of interleaved individually activatable light emitters and photodiode detectors mounted in the bar, a plurality of lenses through which light emitted by the emitters is projected into a planar airspace outside the bar, and through which light reflected by an object in the planar airspace is projected onto the photodiode detectors, wherein each lens is paired with: (i) a respective one of the emitters, to maximize outgoing light emission in a specific direction at an angle, designated θ, relative to the bar, the angle θ being the same for each lens-emitter pair, and (ii) respective first and second ones of the photodiode detectors, to maximize incoming reflected light detection at respective first and second specific directions at respective angles, designated φ1 and φ2, relative to the bar, the angle φ1 being the same for each first detector-lens pair, and the angle φ2 being the same for each second detector-lens pair, the method including repeatedly moving the proximity sensor, such that light emitted by the emitters is projected into a different planar airspace after each move, repeatedly selectively activating the emitters and the photodiode detectors, repeatedly identifying locations on the object in the planar airspace, based on outputs of the photodiode detectors and the known angles θ, φ1 and φ2, and combining the identified locations to generate a three-dimensional image of the object, based on the orientations of the different planar airspaces.
In certain embodiments of the invention, the method further includes repeatedly identifying a shape of the object in the planar airspace, based on the identified locations, and combining the identified shapes to generate a three-dimensional image of the object, based on the orientations of the different planar airspaces.
In certain embodiments of the invention, the combining of the identified locations creates a point cloud of the object.
In certain embodiments of the invention, the method further includes repeatedly identifying a size of the object in the planar airspace, based on outputs of the photodiode detectors and the known angles θ, φ1 and φ2, and combining the identified sizes in the different planar airspaces to derive a three-dimensional volume that contains the object, based on orientations of the planar airspaces.
In certain embodiments of the invention, the method further includes repeatedly identifying a distance between the object and the bar in the planar airspace, based on outputs of the photodiode detectors and the known angles θ, φ1 and φ2, and combining the identified distances in the different planar airspaces to derive a location of the object, based on orientations of the planar airspaces.
In certain embodiments of the invention, the light emitters are laser diodes.
In certain embodiments of the invention, the sensor bar further includes at least one inertial measurement unit (IMU), the method further including repeatedly identifying the orientation of the planar airspace based on outputs of the at least one IMU.
There is yet further provided in accordance with an embodiment of the present invention a handheld 3D scanner including a housing that can be lifted and moved by a human hand, a linear array of interleaved individually activatable light emitters and photodiode detectors mounted in the housing, a plurality of lenses, mounted in the housing, through which light emitted by the emitters is projected into a planar airspace outside the housing, and through which light reflected by the object in the planar airspace is directed onto the photodiode detectors, wherein each lens is paired with: (i) a respective one of the emitters, to maximize outgoing light emission in a specific direction at an angle, designated θ, relative to the lens, the angle θ being the same for each lens-emitter pair, and (ii) respective first and second ones of the photodiode detectors, to maximize incoming reflected light detection at respective first and second specific directions at respective angles, designated φ1 and φ2, relative to the lens, the angle φ1 being the same for each first detector-lens pair, and the angle φ2 being the same for each second detector-lens pair, at least one inertial measurement unit mounted in the housing and tracking the changing orientations of the planar airspace when the housing is lifted and moved, a processor (i) connected to the emitters and photodiode detectors repeatedly selectively activating the emitters and the photodiode detectors, repeatedly identifying a shape of the object in the planar airspace, based on outputs of the photodiode detectors and the known angles θ, φ1 and φ2, (ii) connected to the inertial measurement unit, and (iii) combining the identified shapes to generate a three-dimensional image of the object, based on the tracked orientations of the planar airspaces.
There is moreover provided in accordance with an embodiment of the present invention a handheld 3D scanner including a housing that can be lifted and moved by a human hand, a linear array of interleaved individually activatable light emitters and photodiode detectors mounted in the housing, a plurality of lenses, mounted in the housing, through which light emitted by the emitters is projected into a planar airspace outside the housing, and through which light reflected by the object in the planar airspace is directed onto the photodiode detectors, wherein each lens is paired with: (i) a respective one of the emitters, to maximize outgoing light emission in a specific direction at an angle, designated θ, relative to the lens, the angle θ being the same for each lens-emitter pair, and (ii) respective first and second ones of the photodiode detectors, to maximize incoming reflected light detection at respective first and second specific directions at respective angles, designated φ1 and φ2, relative to the lens, the angle φ1 being the same for each first detector-lens pair, and the angle φ2 being the same for each second detector-lens pair, at least one inertial measurement unit mounted in the housing tracking the changing orientations of the planar airspace when the housing is lifted and moved, a processor (i) connected to the emitters and photodiode detectors repeatedly selectively activating the emitters and the photodiode detectors, repeatedly identifying locations on the object in the planar airspace, based on outputs of the photodiode detectors and the known angles θ, φ1 and φ2, (ii) connected to the inertial measurement unit, and (iii) combining the identified locations to generate a three-dimensional point cloud of the object, based on the tracked orientations of the planar airspaces.
There is additionally provided in accordance with an embodiment of the present invention a scanning system including a rotating stand on which an object is placed, a linear array of interleaved individually activatable light emitters and photodiode detectors, a plurality of lenses through which light emitted by the emitters is projected into a planar airspace above the rotating stand and through which light reflected by the object in the planar airspace is directed onto the photodiode detectors, wherein each lens is paired with: (i) a respective one of the emitters, to maximize outgoing light emission in a specific direction at an angle, designated θ, relative to the lens, the angle θ being the same for each lens-emitter pair, and (ii) respective first and second ones of the photodiode detectors, to maximize incoming reflected light detection at respective first and second specific directions at respective angles, designated φ1 and φ2, relative to the lens, the angle φ1 being the same for each first detector-lens pair, and the angle φ2 being the same for each second detector-lens pair, a motor connected to the stand incrementally rotating the object, such that light emitted by the emitters is reflected by a different portion of the object after each move, a processor connected to the emitters and the photodiode detectors repeatedly selectively activating the emitters and the photodiode detectors, repeatedly identifying a shape of the object in the planar airspace, based on outputs of the photodiode detectors and the known angles θ, φ1 and φ2, and combining the identified shapes to generate a three-dimensional image of the object, based on those portions of the object that intersect the planar airspaces.
There is further provided in accordance with an embodiment of the present invention a fitting room system including the scanning system described in the previous paragraph, wherein the object is a shopper for clothes, wherein the processor extracts body measurements of the shopper from the three-dimensional image of the object, wherein the processor is communicatively coupled to a database of clothes indexed by at least one of the body measurements, and wherein the processor runs a query on the database that returns database items that match at least one of the body measurements, and a display connected to the processor displaying the database items returned by the query.
In certain embodiments of the fitting room system, the processor creates an avatar of the shopper based on the body measurements and renders an image of the avatar wearing the database items returned by the query.
There is yet further provided in accordance with an embodiment of the present invention an avatar generator including the scanning system described hereinabove wherein the processor creates an avatar of the object based on the three-dimensional image of the object.
In certain embodiments of the avatar generator, the processor is connected to a network, and the processor outputs the avatar over the network to a computer storing a user profile to which the avatar is added.
There is moreover provided in accordance with an embodiment of the present invention a touch pad for determining locations of multiple objects concurrently touching the pad, including a housing, an exposed surface mounted in the housing, two proximity sensor bars mounted in the housing along different edges of the exposed surface, each proximity sensor bar including a plurality of light pulse emitters projecting light out of the housing along a detection plane over and parallel to the exposed surface, a plurality of light detectors detecting reflections of the light projected by the emitters, by a reflective object passing through the detection plane, a plurality of lenses oriented relative to the emitters and the detectors in such a manner that each emitter-detector pair has a target position in the detection plane associated therewith, the target position being such that when the object is located at the target position, light emitted by the emitter of that pair passes through one of the lenses and is reflected by the object back through one of the lenses to the detector of that pair, wherein the target positions associated with emitter-detector pairs of each of the proximity sensor bars comprise some common positions, and a processor mounted in the housing and connected to the emitters and to the detectors of the two proximity sensor bars, the processor synchronously co-activating respective emitter-detector pairs of the two proximity sensor bars that are associated with common target positions, and determining locations of multiple objects that are concurrently passing through the detection plane.
In certain embodiments of the touch pad invention, the proximity sensor bars are mounted along adjacent edges of the exposed surface.
In certain embodiments of the touch pad invention, the proximity sensor bars are mounted along opposite edges of the exposed surface.
There is additionally provided in accordance with an embodiment of the present invention an interactive computer system including a first processor configured (i) to render a graphical user interface (GUI), and (ii) to respond to input to the GUI, a display device that renders the GUI on a surface, a first wireless transmitter and receiver transmitting the GUI from the first processor to the display device, wherein the transmitter is connected to the first processor and the receiver is connected to the display device, a proximity sensor, including a housing mounted along an edge of the surface, a plurality of light emitters mounted in the housing operable when activated to project light out of the housing over the surface, a plurality of light detectors mounted in the housing operable when activated to detect amounts of arriving light, and a second processor connected to the emitters and to the detectors, configured (i) to selectively activate the light emitters and the light detectors, and (ii) to identify the location of an object touching the surface, based on amounts of light detected by the activated detectors when the object reflects light projected by the activated light emitters back into the housing, a second wireless transmitter and receiver transmitting the identified location to the first processor as input to the GUI, wherein the transmitter is connected to the second processor and the receiver is connected to the first processor.
In certain embodiments of the computer system, the first processor, the first transmitter and the second receiver are mounted in a mobile phone, and the display device is one of the group consisting of: a TV, a computer monitor and a projector.
In certain embodiments of the computer system, the first processor and the first transmitter are mounted in a mobile phone, the second receiver is mounted in a dongle that is removably connected to the mobile phone and the display device is one of the group consisting of: a TV, a computer monitor and a projector.
In certain embodiments of the computer system, the first processor, the first transmitter and the second receiver comprise a server, and the display device is a thin client.
There is further provided in accordance with an embodiment of the present invention an interactive computer system including a display, a housing mounted along an edge of the display, a proximity sensor mounted in the housing, including a plurality of light emitters operable when activated to project light out of the housing over the display, and a plurality of light detectors operable when activated to detect amounts of arriving light, a processor mounted in the housing configured (i) to render a graphical user interface (GUI), (ii) to selectively activate the light emitters and the light detectors, and (iii) to identify one or more locations of an object touching the display, based on amounts of light detected by the activated detectors when the object reflects light projected by the activated light emitters back into the housing, and a wireless transmitter and receiver transmitting the GUI from the processor to the display, wherein the transmitter is connected to the processor and the receiver is connected to the display and wherein the processor is further configured to respond to the identified one or more locations as input to the GUI.
There is yet further provided in accordance with an embodiment of the present invention an optical assembly for detecting locations of objects in any of multiple parallel spatial planes, featuring a reflectance-based sensor that emits light into a single detection plane of the sensor and detects reflections of the emitted light, reflected by an object located in the detection plane of the sensor, a light re-director positioned away from the sensor that re-directs light emitted by the sensor into one or more spatial planes parallel to the detection plane of the sensor and, when the object is located in the one or more spatial planes, re-directs light reflected by the object into the detection plane of the sensor, and a processor connected to the sensor that controls light emitted by the sensor and, when an object passes through one or more of the spatial planes, the processor identifies both (i) the spatial planes through which the object passes, and (ii) the location of the object within the spatial planes through which it passes.
In certain embodiments of the optical assembly, the reflectance-based sensor includes an array of interleaved light emitters and photodiodes.
In certain embodiments of the optical assembly, the reflectance-based sensor includes a time-of-flight sensor.
In certain embodiments of the optical assembly, the reflectance-based sensor includes two cameras.
In certain embodiments of the optical assembly, the light re-director is a folding mirror.
In certain embodiments of the optical assembly, the light re-director is a beam splitter.
There is moreover provided in accordance with an embodiment of the present invention a method for detecting locations of objects in any of multiple parallel spatial planes, featuring: providing an optical assembly including (a) a reflectance-based sensor that emits light into a single detection plane of the sensor and detects reflections of the emitted light, reflected by an object located in the detection plane of the sensor, and (b) a light re-director positioned away from the sensor that re-directs light emitted by the sensor into one or more spatial planes parallel to the detection plane of the sensor and, when the object is located in the one or more spatial planes, re-directs light reflected by the object into the detection plane of the sensor, providing a processor connected to the sensor that controls light emitted by the sensor and processes light detected by the sensor, and when an object passes through one or more of the spatial planes, detecting, by the processor, both (i) the spatial planes through which the object passes, and (ii) the location of the object within the spatial planes through which it passes, including: detecting, by the processor, one or more virtual locations of the object within the detection plane of the sensor, based on light reflected by the object that is re-directed to the detection plane of the sensor and detected by the sensor, and transforming, by the processor, the one or more virtual locations of the object within the detection plane of the sensor to corresponding one or more real locations of the object within one or more spatial planes parallel to the detection plane of the sensor, based on the position of the light re-director relative to the sensor.
The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
The following table catalogs the numbered elements and lists the figures in which each numbered element appears. Similarly numbered elements represent elements of the same type, but they need not be identical elements.
Numbered elements in the 1000's are stages in a process flow.
Throughout this description, the terms “source” and “emitter” are used to indicate the same light emitting elements, inter alia LEDs, VCSELs and lasers, and the terms “sensor” and “detector” are used to indicate the same light detecting elements, inter alia photodiodes.
Reference is made to
The amount of light that travels from one source to one sensor depends on how centered the reflecting object is on the source's beam, and how centered it is on the sensor's detection corridor. Such a source/sensor pair is referred to as a “hotspot”. The location at which a reflective object generates the highest amount of light for a source/sensor pair is referred to as the “hotspot location” or the “target position” for that source/sensor pair. A proximity sensor according to the present invention measures the transmitted amount of light for each source/sensor pair, and each such measurement is referred to as a “hotspot signal value”. The measurements are normalized so that all hotspot signal values have the same range.
Since light that hits an object is reflected diffusely and reflections are maximally detected in two narrow corridors at opposite sides of the light beam, the present specification refers to a forward direction detection based on all of the narrow detection corridors in a first direction, and a backward direction detection based on all of the narrow detection corridors in a second direction. Stated differently, the forward direction includes all detections of source/sensor pairs in which the sensor of the pair has a higher location index than the source of the pair, and the backward direction includes all detections of source/sensor pairs in which the sensor of the pair has a lower location index than the source of the pair. The forward direction may be left or right, depending on device orientation. A hotspot where the sensor looks in the backward direction is referred to as a “backward hotspot”, and a hotspot where the sensor looks in the forward direction is referred to as a “forward hotspot”.
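As a minimal illustration of this indexing convention, the following Python sketch classifies a source/sensor pair as forward or backward from its location indices along the sensor bar; the index arguments are assumptions for illustration.

```python
# Illustrative sketch: classify a source/sensor pair by comparing location
# indices along the sensor bar (index convention assumed for illustration).

def pair_direction(source_index: int, sensor_index: int) -> str:
    """Forward: the sensor has a higher location index than the source.
       Backward: the sensor has a lower location index than the source."""
    if sensor_index > source_index:
        return "forward"
    if sensor_index < source_index:
        return "backward"
    return "undefined"  # co-located elements do not form a hotspot pair

print(pair_direction(3, 5))  # forward
print(pair_direction(5, 3))  # backward
```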
Reference is made to
Reference is made to
Reference is made to
The signal value relationship between two vertically adjacent hotspots corresponds to a curve in
In order to account for such curvature, the location between the crossings is found using the same method, but from the relationships of horizontally adjacent hotspots. The curves are now those in
In some embodiments of the invention, an object is moved throughout the detection area and signal values for all locations in the detection area are recorded and stored. In such cases, an object's current location is derived by matching the current source/sensor pair detection signal values with the stored recorded values and selecting the corresponding stored location in the detection area.
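A minimal sketch of such a lookup is shown below, assuming the recorded calibration values and their locations are held in arrays; the least-squares matching criterion and the array layout are illustrative assumptions rather than the prescribed method.

```python
# Illustrative sketch (hypothetical data layout): match the current vector of
# source/sensor pair signal values against stored calibration recordings and
# return the stored location whose recording is closest (least-squares match).

import numpy as np

def lookup_location(current_signals, recorded_signals, recorded_locations):
    """current_signals:    (P,) array, one value per source/sensor pair
       recorded_signals:   (N, P) array, one row per calibration location
       recorded_locations: (N, 2) array of (x, y) calibration locations"""
    distances = np.linalg.norm(recorded_signals - current_signals, axis=1)
    return recorded_locations[np.argmin(distances)]

recorded = np.array([[10.0, 80.0, 20.0], [5.0, 60.0, 90.0]])
locations = np.array([[12.0, 30.0], [48.0, 30.0]])
print(lookup_location(np.array([6.0, 62.0, 85.0]), recorded, locations))  # [48. 30.]
```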
Reference is made to
The mapping transform takes the vertical (
All hotspots that have a signal value above a certain threshold, and that are stronger than all of their eight immediate neighbors, are evaluated for possible object detections. All six triangles that use the maximum hotspot are screened as possible contributors to the detection. Each triangle is given a weight that is calculated as the product of all of its hotspot signal values. The highest three are kept, and their weights are reduced by that of the fourth highest. The kept triangles are evaluated, and their results are consolidated into a weighted average, using the weights used for screening.
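The following Python sketch illustrates the screening and consolidation steps just described; the triangle geometry and the per-triangle location evaluation are abstracted into the inputs, which are hypothetical.

```python
# Illustrative sketch of the screening step described above: each candidate
# triangle's weight is the product of its hotspot signal values; the three
# strongest triangles are kept, their weights are reduced by the fourth
# strongest, and the per-triangle location estimates are consolidated by a
# weighted average. Triangle geometry/evaluation is abstracted into the inputs.

def consolidate(triangles):
    """triangles: list of (signal_values, (x, y)) where signal_values are the
       hotspot signals of one triangle and (x, y) its location estimate."""
    weighted = []
    for signals, loc in triangles:
        w = 1.0
        for s in signals:
            w *= s
        weighted.append((w, loc))
    weighted.sort(key=lambda t: t[0], reverse=True)
    fourth = weighted[3][0] if len(weighted) > 3 else 0.0
    kept = [(w - fourth, loc) for w, loc in weighted[:3]]
    total = sum(w for w, _ in kept)
    x = sum(w * loc[0] for w, loc in kept) / total
    y = sum(w * loc[1] for w, loc in kept) / total
    return x, y

tris = [([9, 8, 7], (1.0, 1.0)), ([9, 8, 2], (1.5, 1.0)),
        ([9, 3, 7], (1.0, 1.5)), ([4, 3, 2], (2.0, 2.0)),
        ([2, 2, 1], (2.5, 2.5)), ([1, 1, 1], (3.0, 3.0))]
print(consolidate(tris))  # weighted toward the strongest triangle at (1.0, 1.0)
```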
Using a robot to place a stylus at known locations opposite a proximity sensor of the present invention and recording the resulting detection signals enables quantifying the accuracy of the algorithm. The recorded sample signal values are sent as input to the algorithm in random order, and the calculated detection locations based on these inputs are compared to the actual sample locations.
Reference is made to
Reference is made to
Reference is made to
As explained above with respect to
In order to determine how to interpolate the detected amounts of light, detection sensitivities are calculated in the vicinities of the hotspots using a calibration tool that places a calibrating object having known reflective properties at known locations in the detection zone outside and adjacent to proximity sensor 501. At each known location, a plurality of source/sensor pairs is synchronously activated and amounts of light detected by neighboring activated sensors are measured. Repetitive patterns in the relative amounts of light detected by the neighboring activated sensors as the object moves among the known locations are identified. These patterns are used to formulate detection sensitivities of proximity sensor 501 in the vicinities of the hotspots, which are used to determine how to interpolate the amounts of light detected in order to calculate the location of a proximal object.
Reference is made to
In some embodiments, a calibration tool, either that illustrated in
In addition to determining interpolation methods, the calibration tools are used to map the locations of the hotspots that correspond to the source/sensor pairs. Often the locations of the hotspots are shifted from their expected locations due to mechanical issues such as imprecise placement or alignment of a light source or light detector within proximity sensor 501. When used to this end, each proximity sensor unit needs to be calibrated and the calibration tool of
Reference is made to
A proximity sensor according to the present invention is used to estimate a partial circumference of a proximal object. Reference is made to
As described above, each hotspot location is associated with one forward source/sensor pair and one backward source/sensor pair. In
The reflection values are used to generate a two-dimensional pixel image of reflection values indicating where reflective surfaces are positioned. For example, when all hotspot locations for all source/sensor pairs in proximity sensor 501 are assigned their respective, normalized reflection values, the result is a two-dimensional image. The reflection values in different embodiments are normalized within a range determined by the number of bits provided for each pixel in the two-dimensional image, e.g., 0-255 for 8-bit pixel values, and 0-1023 for 10-bit pixel values.
Reference is made to
Because both forward and backward source/sensor pairs correspond to each hotspot location, the reflection value for that location in the two-dimensional image can be derived in different ways. Namely, the forward-direction source/sensor pair can be used, or the backward-direction source/sensor pair can be used. In some embodiments, the average of these two values is used, and in other embodiments the maximum of these two values is used, such that some pixels derive their values from forward-direction source/sensor pairs, and other pixels derive their values from backward-direction source/sensor pairs.
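A minimal sketch of building such a pixel image is shown below, assuming the forward and backward reflection values are available as arrays; the array shapes are assumptions, while the choice between maximum and average, and the 8-bit and 10-bit ranges, follow the alternatives described above.

```python
# Illustrative sketch: build a two-dimensional pixel image of reflection values,
# taking either the maximum or the average of the forward and backward
# source/sensor pair values at each hotspot, normalized to the pixel bit range.

import numpy as np

def reflection_image(forward, backward, mode="max", bits=8):
    """forward, backward: 2D arrays of raw reflection values per hotspot."""
    if mode == "max":
        combined = np.maximum(forward, backward)
    else:
        combined = (forward + backward) / 2.0
    peak = combined.max()
    if peak == 0:
        return np.zeros_like(combined, dtype=np.uint16)
    scale = (1 << bits) - 1                    # 255 for 8-bit, 1023 for 10-bit
    return np.round(combined / peak * scale).astype(np.uint16)

fwd = np.array([[0.0, 3.2], [1.1, 0.4]])
bwd = np.array([[0.5, 2.8], [1.5, 0.2]])
print(reflection_image(fwd, bwd))           # 8-bit pixel values
print(reflection_image(fwd, bwd, bits=10))  # 10-bit pixel values
```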
Certain reflection values for source/sensor pairs are not caused by a reflective object at the corresponding hotspot, but rather by stray reflections at entirely different locations.
Reference is made to
This state is determined by the fact that source/sensor pair 104/202 has a significant detected reflection value, indicating that a reflective object is at corresponding location 940, and therefore, light beam 401 does not arrive at location 944. Moreover, because the lenses and the sensors are configured such that the maximum detection arrives at the sensor when it is reflected at angle θ1, it is clear that the source/sensor pair detecting the maximum reflection from among all source/sensor pairs that share a common source is the pair detecting reflections from an object at, or near, the corresponding hotspot location. Indeed, in the example shown in
In general, an emitted light path LP, such as path 401 in
Similarly, a reflected light path RP, such as path 402 in
In this manner, the two-dimensional pixel image is refined and begins to represent the contour of the object facing the sensor. Reference is made to
The next step is to filter the pixels in this image to obtain sub-pixel precision for the location of the object's contour between hotspot locations. After calculating sub-pixel values, various edge detection filters are applied to the two-dimensional pixel image to identify the edges of the object facing the sensor and discard stray reflections. Known edge detection filters include Sobel, Canny, Prewitt, Laplace and gradient filters. This edge information is used to determine a length of this portion of the object, i.e., a partial circumference of the object, and its location.
The length of the detected portion of the object is calculated using different methods, in accordance with different embodiments of the invention. Some embodiments determine the number of pixels, or sub-pixels, along the detected portion of the object. Other embodiments calculate the sum of the distances between each pair of neighboring pixels, or sub-pixels, along the detected portion of the object. Still other embodiments determine an equation for a curve that passes through each of the pixels, or sub-pixels, along the detected portion of the object, and calculate the length of the partial circumference of the object according to this equation.
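As an illustration of the second method listed above, the following sketch sums the distances between neighboring contour points; the contour representation (an ordered array of points) is an assumption.

```python
# Illustrative sketch: estimate the partial circumference as the sum of the
# distances between each pair of neighboring contour points (pixels or
# sub-pixels) ordered along the detected portion of the object.

import numpy as np

def partial_circumference(contour_points):
    """contour_points: (N, 2) array of ordered (x, y) points along the contour."""
    pts = np.asarray(contour_points, dtype=float)
    segment_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return segment_lengths.sum()

# Quarter of a circle of radius 10, sampled at 1-degree steps; the true arc
# length is 0.5 * pi * 10, approximately 15.708.
theta = np.radians(np.arange(0, 91))
arc = np.column_stack((10 * np.cos(theta), 10 * np.sin(theta)))
print(partial_circumference(arc))
```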
In some embodiments, an estimate of the partial circumference is calculated based on three points: the point on the object for which there is a maximum detection value and the two outermost points along the partial circumference.
Reference is made to
In other embodiments of the invention, the shape of the proximity sensor is not a straight line, but rather circular or wave-shaped, providing a 3D detection volume instead of a 2D detection plane. In such alternative embodiments, the emitters and receivers are still alternated as they are in proximity sensor 501, and each emitter is paired with each of the receivers as a source/sensor pair having a corresponding hotspot within a 3D volume above the proximity sensor.
Reference is made to
Reference is made to
Reference is made to
Proximity sensors according to the present invention have numerous applications for touch screens, control panels and new user interface surfaces. The proximity sensor can be mounted, e.g., on a wall or a window, or placed on a notebook, and it will provide touch and gesture detection on that item. These detected gestures are then used as input to electronic systems. For example, a gesture along a wall is used to dim the lighting in the room by mounting the sensor along an edge of the wall and communicating the detected gestures to the lighting system. Significantly, the proximity sensor is mounted along only one edge of the detection area, reducing component cost and providing more flexibility for industrial design of touch screens and touch-sensitive control panels.
Reference is made to
Input accessory 510 detects grasping and flexing gestures by having the user hold input accessory 510 in a manner that its detection plane 971 extends above and across the prone fingers of hand 810, as illustrated in
The processor is connected to the laser diode emitters and photodiode detectors, to (i) selectively activate the laser diode emitters and photodiode detectors, (ii) detect a plurality of grasping and flexing gestures performed when holding the accessory in the palm of hand 810 as one or more of the hand's fingers pass through the projection plane, based on detection signals generated by the photodiode detectors, and (iii) transmit input signals to the VR system, corresponding to the detected grasping and flexing gestures.
Reference is made to
When camera 977 captures an image of input accessory 510, image processing unit 978 extracts information, namely, (i) an identifier, from QR code 972, identifying the type of keyboard or trackpad to be rendered on display 976, and (ii) the location of QR code 972 in the wearer's field of view 974. Display 976 renders the thus identified keyboard anchored at or near the QR code location in the wearer's field of view 974 such that gestures performed on the virtual keyboard or trackpad are inside detection plane 971 of accessory 510, and the virtual keys are seen by the wearer as being actuated. Accessory 510 also includes communication circuitry that sends the detected gesture input to headset 975 or to another unit of the VR system.
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Proximity sensors according to the present invention are used to generate 3D information about one or more proximal objects by repeatedly moving the proximity sensor, such that light emitted by the emitters is projected into a different planar airspace after each move, repeatedly selectively activating the emitters and the photodiode detectors, repeatedly identifying a shape of the object in the planar airspace, based on outputs of the photodiode detectors and the known angles of maximum emitted light intensity and maximum detected reflections, and combining the identified shapes in the different planar airspaces to generate a three-dimensional image of the object, based on orientations of the planar airspaces. The generated 3D information identifies the object's shape, a 3D volume in which the object is contained, and the object's location.
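A minimal sketch of combining per-plane detections into 3D data follows, assuming each scan's plane orientation is available as a rotation matrix and origin (e.g., derived from an inertial measurement unit); the pose representation and names are assumptions for illustration.

```python
# Illustrative sketch: accumulate a 3D point cloud from per-scan 2D detections.
# Each scan contributes points (u, v) in the sensor's detection plane together
# with that plane's pose (rotation matrix and origin in world coordinates).

import numpy as np

def accumulate_point_cloud(scans):
    """scans: list of (points_uv, rotation, origin) where points_uv is (N, 2),
       rotation is a 3x3 matrix mapping plane coordinates to world coordinates,
       and origin is the plane origin in world coordinates."""
    cloud = []
    for points_uv, rotation, origin in scans:
        planar = np.column_stack((points_uv, np.zeros(len(points_uv))))  # (u, v, 0)
        cloud.append(planar @ rotation.T + origin)
    return np.vstack(cloud)

identity = np.eye(3)
tilted = np.array([[1.0, 0.0, 0.0],       # detection plane rotated 90 degrees
                   [0.0, 0.0, -1.0],      # about the x-axis
                   [0.0, 1.0, 0.0]])
scans = [(np.array([[1.0, 2.0], [3.0, 4.0]]), identity, np.zeros(3)),
         (np.array([[1.0, 2.0]]), tilted, np.array([0.0, 0.0, 5.0]))]
print(accumulate_point_cloud(scans))
```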
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
As proximity sensor bar 510 is translated along the height of a scanned object, three types of detection event are generated: NULL, OBJECT and SHADOW. A NULL event is generated when noise or no signal is detected, indicating that no object is situated in the proximity sensor detection plane. An OBJECT event is generated when a high reflection is detected, indicating that an object is situated in the proximity sensor detection plane. A SHADOW event is on the border between a NULL event and an OBJECT event, i.e., when some reflection is detected, but it is unclear if that reflection should be treated as noise or as an object reflection. SHADOW events are generated, inter alia, when proximity sensor light beams pass near an object, for example when the detection plane is just beyond the top or bottom of the object.
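By way of illustration, the following sketch classifies a scan's peak reflection value into these three event types; the threshold values are hypothetical and would in practice depend on the sensor and its calibration.

```python
# Illustrative sketch: classify a scan's peak reflection value into the three
# detection events described above. Threshold values are illustrative only.

NOISE_THRESHOLD = 20     # below this: indistinguishable from noise
OBJECT_THRESHOLD = 120   # at or above this: clearly an object reflection

def classify_detection(peak_reflection: int) -> str:
    if peak_reflection < NOISE_THRESHOLD:
        return "NULL"        # no object in the detection plane
    if peak_reflection >= OBJECT_THRESHOLD:
        return "OBJECT"      # an object intersects the detection plane
    return "SHADOW"          # borderline, e.g. the plane grazes the object's edge

for value in (5, 60, 200):
    print(value, classify_detection(value))
```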
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
At step 1003 (“Frame Design”) multiple 2D images of step 1002, captured when the proximity sensor was placed at different locations around the object but along the same 2D plane, are selected and indexed to facilitate combining them into a 2D image referred to as a “frame”. Thus, a frame represents a slice of the object along a 2D plane. Different frames represent parallel slices of the object at different heights that can be combined into a 3D image of the object. When constructing a frame, it may become apparent that certain ones of the scans are problematic. For example, the reconstructed frame of a solid object should have a closed shape, but in one of the scans, the detected edge does not span the gap indicated by the neighboring scans on either side of that scan. In these cases, at step 1004 (“Frame Noise Filtering”) the problematic scan information is either removed from the frame construction or the data in the problematic scan is filtered. Also at step 1004, noise arising from combining 2D information from different scans into a coherent frame is filtered out (“Noisy frame filtering”).
At step 1005 multiple frames are combined to create a 3D image of the object (“3D Image Construction”) as discussed hereinabove. This involves the steps of identifying the center of gravity of the object in each frame (“center of gravity”) and using the center of gravity in adjacent frames to align those frames (“3D object alignment”). When the distance between the centers of gravity in adjacent frames is within a defined tolerance level, e.g., 5 mm, the frames are stacked with their centers of gravity aligned one above the other. When the distance between the centers of gravity in adjacent frames is above the defined tolerance, it is understood that the object is not perpendicular to the 2D planes of the frames and the adjacent frames are stacked accordingly. In certain cases, pixels from some reconstructed frames extend beyond the 3D domain. In such cases, these outlying pixels are translated to the nearest edge of the 3D domain (“Object transition”), as explained hereinbelow with reference to
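A simplified sketch of the frame-stacking rule described in this step follows; the frame representation (arrays of 2D contour points) and the exact alignment rule are assumptions made for illustration.

```python
# Illustrative sketch of the frame-stacking step: compute each frame's center of
# gravity; when adjacent centers differ by no more than the tolerance (5 mm in
# the example above), stack the frames with their centers aligned, otherwise
# keep the offset on the assumption that the object is slanted.

import numpy as np

TOLERANCE_MM = 5.0

def stack_frames(frames):
    """frames: list of (N_i, 2) arrays of contour points, one per height slice."""
    stacked = [np.asarray(frames[0], dtype=float)]
    for frame in frames[1:]:
        frame = np.asarray(frame, dtype=float)
        offset = frame.mean(axis=0) - stacked[-1].mean(axis=0)
        if np.linalg.norm(offset) <= TOLERANCE_MM:
            frame = frame - offset   # small shift: align centers of gravity
        # larger shifts are kept: the object is assumed not to be perpendicular
        stacked.append(frame)
    return stacked

slice_a = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
slice_b = slice_a + np.array([2.0, 0.0])    # within tolerance: snapped into line
slice_c = slice_a + np.array([12.0, 0.0])   # above tolerance: offset preserved
for s in stack_frames([slice_a, slice_b, slice_c]):
    print(s.mean(axis=0))                   # centers of gravity after stacking
```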
At step 1006 the combined 3D image is filtered to smooth any discontinuities resulting from combining the different frames (“3D Object Continuity”). In some embodiments of the invention this is done by translating object pixels at both sides of a discontinuity so that they represent a single common location on the object. This reduces the size of the object.
At step 1007 lighting effects are added to the constructed 3D image for better visualization (“3D Object Visualization”). At step 1008, the system creates a 3D image from the combined 2D images. This 3D image uses the z-axis either to plot the height of the object (“length view”), or alternatively, to indicate the intensity of the detected signals at each mapped 2D coordinate (“signal view”).
Reference is made to
In certain embodiments of the invention, only a limited amount of memory, e.g., a screen buffer, is available to store the entire reconstructed 3D model of the scanned object. During reconstruction, if a portion of the object juts out beyond the 3D domain, it is translated to the nearest edge of the 3D domain. The following example clarifies this case.
Suppose the scanned object is shaped as an upside-down “L”. The proximity sensor scans this object from all four sides at different heights from the bottom up. Each set of four scans at a single height is combined into a 2D slice of the object at the given height. These slices are combined from bottom to top, with the first slices of the object being mapped to the 3D domain. When the upper slices of the object are added to the model, outermost pixels on the roof of the upside-down “L” are outside the 3D domain. These pixels are translated to the nearest edge of the 3D domain. This is a dynamic process that is performed while adding the different 2D scans into a 3D model.
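A minimal sketch of translating out-of-domain pixels to the nearest edge of the 3D domain follows; the domain bounds are hypothetical values chosen for illustration.

```python
# Illustrative sketch: when a reconstructed point falls outside the fixed 3D
# domain (e.g. a pre-allocated screen buffer), translate it to the nearest edge
# of the domain by clamping each coordinate to the domain bounds.

import numpy as np

DOMAIN_MIN = np.array([0.0, 0.0, 0.0])
DOMAIN_MAX = np.array([255.0, 255.0, 255.0])

def clamp_to_domain(points):
    """points: (N, 3) array of reconstructed 3D coordinates."""
    return np.clip(points, DOMAIN_MIN, DOMAIN_MAX)

roof_pixels = np.array([[260.0, 10.0, 250.0],    # juts out beyond x = 255
                        [100.0, 10.0, 250.0]])
print(clamp_to_domain(roof_pixels))  # first point moved to the domain edge
```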
Reference is made to
At step 1014 the process combines the frames of step 1013 into a 3D object. This process involves copying each successive frame into a pre-allocated memory beginning at the initial launching point calculated at step 1011 and ensuring that the object edges in neighboring frames are connected. In some embodiments, the degree of alignment between the center of gravity of each newly added frame and the center of gravity of the current 3D object is measured. Misalignment up to a threshold is assumed to arise from an actual variation in the contour of the object, but a large misalignment is assumed to arise due to errors that occurred when the scans were performed. In this latter case, the new frame is translated so that its center of gravity is aligned with the center of gravity of the 3D object. At step 1014 the system checks if, after adding each new frame to the 3D object, any of the pixels in the new frame extend outside the 3D domain. If they do, those pixels are moved to the nearest edge of the 3D domain, thereby essentially shrinking the object. At step 1014 the system converts the 2D coordinates of the object to 3D coordinates.
Reference is made to
Reference is made to
Next, the system retrieves the scan information for the next frame, namely, four scans taken at an incrementally higher height than the previous scan. This process repeats until all of the scan information has been transformed to the shared 3D domain.
Reference is made to
Reference is made to
As discussed hereinabove with respect to
Signal difference = detection_signal962 − detection_signal961
Thus, at the beginning of the tracked movement the fingertip is fully detected at both hotspots 961 and 962, and the difference between them is 0. As the fingertip moves toward hotspot 963 it remains fully detected at hotspot 962 and detection at hotspot 961 is gradually reduced. The detection values are 8-bit values.
Signal difference = detection_signal963 − detection_signal962
Thus, at the beginning of the tracked movement the fingertip is not detected at hotspot 963 but is fully detected at hotspot 962, and the difference is −255. As the fingertip moves toward hotspot 963 it remains fully detected at hotspot 962 and detection at hotspot 963 gradually increases.
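The following sketch illustrates how these two signal differences might be combined into a single progress value for the tracked movement, using 8-bit detection values; the interpolation rule is a simplified assumption and not necessarily the method used in practice.

```python
# Illustrative sketch: track a glide gesture across three adjacent hotspots by
# the two signal differences described above, using 8-bit detection values.

def glide_progress(sig_961: int, sig_962: int, sig_963: int) -> float:
    """Returns an estimated position from 0.0 (over hotspots 961/962) to 1.0
       (over hotspots 962/963) based on the two signal differences."""
    diff_forward = sig_962 - sig_961    # grows from 0 toward +255
    diff_onward = sig_963 - sig_962     # grows from -255 toward 0
    # Average the two normalized differences into a single progress value.
    return ((diff_forward / 255.0) + (diff_onward + 255) / 255.0) / 2.0

print(glide_progress(255, 255, 0))    # 0.0  (start of the tracked movement)
print(glide_progress(128, 255, 128))  # 0.5  (midway toward hotspot 963)
print(glide_progress(0, 255, 255))    # 1.0  (fingertip reaches hotspot 963)
```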
Although the signal plots in
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Proximity sensor bar 501 is positioned along the bottom edge of TV 832. This proximity sensor bar has been described hereinabove, inter alia, with reference to
Wireless channel 630 is enabled by communications chips in TV 832, or by an adapter that plugs into TV 832, e.g., via High-Definition Multimedia Interface (HDMI) or Universal Serial Bus (USB) ports on the TV. The described system with smartphone 833 and TV 832 is exemplary. In other embodiments of the invention smartphone 833 is replaced by a server, a laptop or a tablet. Similarly, in certain embodiments of the invention TV 832 is replaced by a projector or other display device. For example, when TV 832 is replaced by a projector, proximity sensor bar 501 is placed along an edge of the surface on which GUI 902 is projected to detect user interactions with that GUI.
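By way of illustration only, the following sketch shows a generic way the identified touch coordinates might be reported from the sensor bar's processor back to smartphone 833 over the wireless link; the message format, port and address are hypothetical, and this is a plain IP example rather than the Miracast user-input back channel itself.

```python
# Illustrative sketch (hypothetical message format, port and address): report a
# detected touch location from the processor in the sensor bar back to the
# device that sourced the displayed GUI, over an ordinary UDP socket.

import json
import socket

HOST_ADDRESS = ("192.168.0.10", 5005)   # hypothetical address of smartphone 833

def report_touch(x: float, y: float, event: str = "down") -> None:
    message = json.dumps({"type": "touch", "event": event, "x": x, "y": y})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), HOST_ADDRESS)

if __name__ == "__main__":
    report_touch(0.42, 0.87)   # normalized coordinates within GUI 902
```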
In certain embodiments of the invention, a second proximity sensor bar is placed along a second edge of GUI 902, either adjacent to, or opposite, the edge along which proximity sensor bar 501 is placed, as described hereinabove with reference to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
Reference is made to
In some embodiments of the invention, sensor 501 includes interleaved emitters and detectors as illustrated in
Reference is made to
Reference is made to
Reference is made to
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
This application claims priority benefit of U.S. Provisional Patent Application No. 62/425,087, entitled OPTICAL PROXIMITY SENSOR AND ASSOCIATED USER INTERFACE, filed on Nov. 22, 2016 by inventors Thomas Eriksson, Alexander Jubner, Rozita Teymourzadeh, Stefan Holmgren, Lars Sparf and Bengt Henry Hjalmar Edlund. This application claims priority benefit of U.S. Provisional Patent Application No. 62/462,034, entitled 3D IMAGING WITH OPTICAL PROXIMITY SENSOR, filed on Feb. 22, 2017 by inventors Thomas Eriksson, Alexander Jubner, Rozita Teymourzadeh, Stefan Holmgren, Lars Sparf, Bengt Henry Hjalmar Edlund.

This application is a continuation-in-part of U.S. patent application Ser. No. 14/960,369, now U.S. Pat. No. 9,645,679, entitled INTEGRATED LIGHT GUIDE AND TOUCH SCREEN FRAME AND MULTI-TOUCH DETERMINATION METHOD, filed on Dec. 5, 2015 by inventors Thomas Eriksson, Alexander Jubner, John Karlsson, Lars Sparf, Saska Lindfors and Robert Pettersson. U.S. patent application Ser. No. 14/960,369 is a continuation of U.S. patent application Ser. No. 14/588,462, now U.S. Pat. No. 9,207,800, entitled INTEGRATED LIGHT GUIDE AND TOUCH SCREEN FRAME AND MULTI-TOUCH DETERMINATION METHOD, filed on Jan. 2, 2015 by inventors Thomas Eriksson, Alexander Jubner, John Karlsson, Lars Sparf, Saska Lindfors and Robert Pettersson. U.S. patent application Ser. No. 14/588,462 claims priority benefit of U.S. Provisional Patent Application No. 62/054,353, entitled INTEGRATED LIGHT GUIDE AND TOUCH SCREEN FRAME AND MULTI-TOUCH DETERMINATION METHOD, filed on Sep. 23, 2014 by inventors Saska Lindfors, Robert Pettersson, John Karlsson and Thomas Eriksson.

This application is a continuation-in-part of U.S. patent application Ser. No. 15/000,815, now U.S. Pat. No. 9,921,661, entitled OPTICAL PROXIMITY SENSOR AND ASSOCIATED USER INTERFACE, filed on Jan. 19, 2016 by inventors Thomas Eriksson, Alexander Jubner, Rozita Teymourzadeh, Håkan Sven Erik Andersson, Per Rosengren, Xiatao Wang, Stefan Holmgren, Gunnar Martin Fröjdh, Simon Fellin, Jan Tomas Hartman, Oscar Sverud, Sangtaek Kim, Rasmus Dahl-Örn, Richard Berglind, Karl Erik Patrik Nordström, Lars Sparf, Erik Rosengren, John Karlsson, Remo Behdasht, Robin Kjell Åman, Joseph Shain, Oskar Hagberg and Joel Rozada.

U.S. patent application Ser. No. 15/000,815 claims priority benefit from: U.S. Provisional Patent Application No. 62/107,536 entitled OPTICAL PROXIMITY SENSORS and filed on Jan. 26, 2015 by inventors Stefan Holmgren, Oscar Sverud, Sairam Iyer, Richard Berglind, Karl Erik Patrik Nordström, Lars Sparf, Per Rosengren, Erik Rosengren, John Karlsson, Thomas Eriksson, Alexander Jubner, Remo Behdasht, Simon Fellin, Robin Kjell Åman and Joseph Shain; U.S. Provisional Patent Application No. 62/197,813 entitled OPTICAL PROXIMITY SENSOR and filed on Jul. 28, 2015 by inventors Rozita Teymourzadeh, Håkan Sven Erik Andersson, Per Rosengren, Xiatao Wang, Stefan Holmgren, Gunnar Martin Fröjdh and Simon Fellin; and U.S. Provisional Patent Application No. 62/266,011 entitled OPTICAL PROXIMITY SENSOR and filed on Dec. 11, 2015 by inventors Thomas Eriksson, Alexander Jubner, Rozita Teymourzadeh, Håkan Sven Erik Andersson, Per Rosengren, Xiatao Wang, Stefan Holmgren, Gunnar Martin Fröjdh, Simon Fellin and Jan Tomas Hartman.

U.S. patent application Ser. No. 15/000,815 is a continuation-in-part of U.S. patent application Ser. No. 14/630,737, entitled LIGHT-BASED PROXIMITY DETECTION SYSTEM AND USER INTERFACE and filed on Feb. 25, 2015 by inventors Thomas Eriksson and Stefan Holmgren. U.S. patent application Ser. No. 14/630,737 is a continuation of U.S. patent application Ser. No. 14/140,635, now U.S. Pat. No. 9,001,087, entitled LIGHT-BASED PROXIMITY DETECTION SYSTEM AND USER INTERFACE and filed on Dec. 26, 2013 by inventors Thomas Eriksson and Stefan Holmgren. U.S. patent application Ser. No. 14/140,635 is a continuation of U.S. patent application Ser. No. 13/732,456, now U.S. Pat. No. 8,643,628, entitled LIGHT-BASED PROXIMITY DETECTION SYSTEM AND USER INTERFACE and filed on Jan. 2, 2013 by inventors Thomas Eriksson and Stefan Holmgren. U.S. patent application Ser. No. 13/732,456 claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/713,546, entitled LIGHT-BASED PROXIMITY DETECTION SYSTEM AND USER INTERFACE and filed on Oct. 14, 2012 by inventor Stefan Holmgren.

U.S. patent application Ser. No. 15/000,815 is a continuation-in-part of U.S. patent application Ser. No. 14/726,533, now U.S. Pat. No. 9,678,601, entitled OPTICAL TOUCH SCREENS and filed on May 31, 2015 by inventors Robert Pettersson, Per Rosengren, Erik Rosengren, Stefan Holmgren, Lars Sparf, Richard Berglind, Thomas Eriksson, Karl Erik Patrik Nordström, Gunnar Martin Fröjdh, Xiatao Wang and Remo Behdasht. U.S. patent application Ser. No. 14/726,533 is a continuation of U.S. patent application Ser. No. 14/311,366, now U.S. Pat. No. 9,063,614, entitled OPTICAL TOUCH SCREENS and filed on Jun. 23, 2014 by inventors Robert Pettersson, Per Rosengren, Erik Rosengren, Stefan Holmgren, Lars Sparf, Richard Berglind, Thomas Eriksson, Karl Erik Patrik Nordström, Gunnar Martin Fröjdh, Xiatao Wang and Remo Behdasht. U.S. patent application Ser. No. 14/311,366 is a continuation of PCT Patent Application No. PCT/US14/40579, entitled OPTICAL TOUCH SCREENS and filed on Jun. 3, 2014 by inventors Robert Pettersson, Per Rosengren, Erik Rosengren, Stefan Holmgren, Lars Sparf, Richard Berglind, Thomas Eriksson, Karl Erik Patrik Nordström, Gunnar Martin Fröjdh, Xiatao Wang and Remo Behdasht.

This application is a continuation-in-part of U.S. patent application Ser. No. 14/880,231, now U.S. Pat. No. 10,004,985, entitled GAMING DEVICE and filed on Oct. 11, 2015 by inventors Stefan Holmgren, Sairam Iyer, Richard Berglind, Karl Erik Patrik Nordström, Lars Sparf, Per Rosengren, Erik Rosengren, John Karlsson, Thomas Eriksson, Alexander Jubner, Remo Behdasht, Simon Fellin, Robin Åman and Joseph Shain. U.S. patent application Ser. No. 14/880,231 is a divisional of U.S. patent application Ser. No. 14/312,787, now U.S. Pat. No. 9,164,625, entitled OPTICAL PROXIMITY SENSORS and filed on Jun. 24, 2014 by inventors Stefan Holmgren, Sairam Iyer, Richard Berglind, Karl Erik Patrik Nordström, Lars Sparf, Per Rosengren, Erik Rosengren, John Karlsson, Thomas Eriksson, Alexander Jubner, Remo Behdasht, Simon Fellin, Robin Åman and Joseph Shain.

U.S. patent application Ser. No. 15/000,815 is a continuation-in-part of U.S. patent application Ser. No. 14/555,731, now U.S. Pat. No. 9,741,184, entitled DOOR HANDLE WITH OPTICAL PROXIMITY SENSORS and filed on Nov. 28, 2014 by inventors Sairam Iyer, Stefan Holmgren and Per Rosengren. U.S. patent application Ser. No. 15/000,815 is a continuation-in-part of U.S. patent application Ser. No. 14/791,414, entitled OPTICAL PROXIMITY SENSOR FOR TOUCH SCREEN AND ASSOCIATED CALIBRATION TOOL and filed on Jul. 4, 2015 by inventors Per Rosengren, Xiatao Wang and Stefan Holmgren.

U.S. patent application Ser. No. 14/312,787 is a continuation-in-part of U.S. patent application Ser. No. 13/775,269, now U.S. Pat. No. 8,917,239, entitled REMOVABLE PROTECTIVE COVER WITH EMBEDDED PROXIMITY SENSORS and filed on Feb. 25, 2013 by inventors Thomas Eriksson, Stefan Holmgren, John Karlsson, Remo Behdasht, Erik Rosengren and Lars Sparf. U.S. patent application Ser. No. 14/312,787 is also a continuation of PCT Patent Application No. PCT/US14/40112, entitled OPTICAL PROXIMITY SENSORS and filed on May 30, 2014 by inventors Stefan Holmgren, Sairam Iyer, Richard Berglind, Karl Erik Patrik Nordström, Lars Sparf, Per Rosengren, Erik Rosengren, John Karlsson, Thomas Eriksson, Alexander Jubner, Remo Behdasht, Simon Fellin, Robin Åman and Joseph Shain.

PCT Application No. PCT/US14/40112 claims priority benefit from: U.S. Provisional Patent Application No. 61/986,341, entitled OPTICAL TOUCH SCREEN SYSTEMS and filed on Apr. 30, 2014 by inventors Sairam Iyer, Karl Erik Patrik Nordström, Lars Sparf, Per Rosengren, Erik Rosengren, Thomas Eriksson, Alexander Jubner and Joseph Shain; U.S. Provisional Patent Application No. 61/972,435, entitled OPTICAL TOUCH SCREEN SYSTEMS and filed on Mar. 31, 2014 by inventors Sairam Iyer, Karl Erik Patrik Nordström, Lars Sparf, Per Rosengren, Erik Rosengren, Thomas Eriksson, Alexander Jubner and Joseph Shain; U.S. Provisional Patent Application No. 61/929,992, entitled CLOUD GAMING USER INTERFACE and filed on Jan. 22, 2014 by inventors Thomas Eriksson, Stefan Holmgren, John Karlsson, Remo Behdasht, Erik Rosengren, Lars Sparf and Alexander Jubner; U.S. Provisional Patent Application No. 61/846,089, entitled PROXIMITY SENSOR FOR LAPTOP COMPUTER AND ASSOCIATED USER INTERFACE and filed on Jul. 15, 2013 by inventors Richard Berglind, Thomas Eriksson, Simon Fellin, Per Rosengren, Lars Sparf, Erik Rosengren, Joseph Shain, Stefan Holmgren, John Karlsson and Remo Behdasht; U.S. Provisional Patent Application No. 61/838,296, entitled OPTICAL GAME ACCESSORIES USING REFLECTED LIGHT and filed on Jun. 23, 2013 by inventors Per Rosengren, Lars Sparf, Erik Rosengren, Thomas Eriksson, Joseph Shain, Stefan Holmgren, John Karlsson and Remo Behdasht; and U.S. Provisional Patent Application No. 61/828,713, entitled OPTICAL TOUCH SCREEN SYSTEMS USING REFLECTED LIGHT and filed on May 30, 2013 by inventors Per Rosengren, Lars Sparf, Erik Rosengren and Thomas Eriksson.

The contents of these applications are hereby incorporated by reference in their entireties.
U.S. Patent Documents
Number | Name | Date | Kind |
---|---|---|---|
4243879 | Carroll et al. | Jan 1981 | A |
4267443 | Carroll et al. | May 1981 | A |
4301447 | Funk et al. | Nov 1981 | A |
4588258 | Hoopman | May 1986 | A |
4641426 | Hartman et al. | Feb 1987 | A |
4672364 | Lucas | Jun 1987 | A |
4703316 | Sherbeck | Oct 1987 | A |
4761637 | Lucas et al. | Aug 1988 | A |
4928094 | Smith | May 1990 | A |
5036187 | Yoshida et al. | Jul 1991 | A |
5070411 | Suzuki | Dec 1991 | A |
5103085 | Zimmerman | Apr 1992 | A |
5162783 | Moreno | Nov 1992 | A |
5194863 | Barker et al. | Mar 1993 | A |
5220409 | Bures | Jun 1993 | A |
5414413 | Tamaru et al. | May 1995 | A |
5463725 | Henckel et al. | Oct 1995 | A |
5559727 | Deley et al. | Sep 1996 | A |
5577733 | Downing | Nov 1996 | A |
5603053 | Gough et al. | Feb 1997 | A |
5729250 | Bishop et al. | Mar 1998 | A |
5748185 | Stephan et al. | May 1998 | A |
5825352 | Bisset et al. | Oct 1998 | A |
5880462 | Hsia | Mar 1999 | A |
5889236 | Gillespie et al. | Mar 1999 | A |
5900863 | Numazaki | May 1999 | A |
5914709 | Graham et al. | Jun 1999 | A |
5936615 | Waters | Aug 1999 | A |
5943044 | Martinelli et al. | Aug 1999 | A |
5946134 | Benson et al. | Aug 1999 | A |
5977888 | Fujita et al. | Nov 1999 | A |
5988645 | Downing | Nov 1999 | A |
6010061 | Howell | Jan 2000 | A |
6035180 | Kubes | Mar 2000 | A |
6091405 | Lowe et al. | Jul 2000 | A |
6161005 | Pinzon | Dec 2000 | A |
6333735 | Anvekar | Dec 2001 | B1 |
6340979 | Beaton et al. | Jan 2002 | B1 |
6362468 | Murakami et al. | Mar 2002 | B1 |
6421042 | Omura et al. | Jul 2002 | B1 |
6429857 | Masters et al. | Aug 2002 | B1 |
6492978 | Selig et al. | Dec 2002 | B1 |
6646633 | Nicolas | Nov 2003 | B1 |
6690365 | Hinckley et al. | Feb 2004 | B2 |
6690387 | Zimmerman et al. | Feb 2004 | B2 |
6707449 | Hinckley et al. | Mar 2004 | B2 |
6757002 | Oross et al. | Jun 2004 | B1 |
6762077 | Schuurmans et al. | Jul 2004 | B2 |
6788292 | Nako et al. | Sep 2004 | B1 |
6803906 | Morrison et al. | Oct 2004 | B1 |
6836367 | Seino et al. | Dec 2004 | B2 |
6864882 | Newton | Mar 2005 | B2 |
6874683 | Keronen et al. | Apr 2005 | B2 |
6875977 | Wolter et al. | Apr 2005 | B2 |
6947032 | Morrison et al. | Sep 2005 | B2 |
6954197 | Morrison et al. | Oct 2005 | B2 |
6972401 | Akitt et al. | Dec 2005 | B2 |
6972834 | Oka et al. | Dec 2005 | B1 |
6985137 | Kaikuranta | Jan 2006 | B2 |
7030861 | Westerman et al. | Apr 2006 | B1 |
7046232 | Inagaki et al. | May 2006 | B2 |
7133032 | Cok | Nov 2006 | B2 |
7162124 | Gunn, III et al. | Jan 2007 | B1 |
7170590 | Kishida | Jan 2007 | B2 |
7176905 | Baharav et al. | Feb 2007 | B2 |
7184030 | McCharles et al. | Feb 2007 | B2 |
7221462 | Cavallucci | May 2007 | B2 |
7225408 | O'Rourke | May 2007 | B2 |
7232986 | Worthington et al. | Jun 2007 | B2 |
7339580 | Westerman et al. | Mar 2008 | B2 |
7352940 | Charters et al. | Apr 2008 | B2 |
7369724 | Deane | May 2008 | B2 |
7372456 | McLintock | May 2008 | B2 |
7429706 | Ho | Sep 2008 | B2 |
7518738 | Cavallucci et al. | Apr 2009 | B2 |
7619617 | Morrison et al. | Nov 2009 | B2 |
7659887 | Larsen et al. | Feb 2010 | B2 |
7855716 | McCreary et al. | Dec 2010 | B2 |
7924264 | Ohta | Apr 2011 | B2 |
8022941 | Smoot | Sep 2011 | B2 |
8091280 | Hanzel et al. | Jan 2012 | B2 |
8115745 | Gray | Feb 2012 | B2 |
8120625 | Hinckley | Feb 2012 | B2 |
8139045 | Jang et al. | Mar 2012 | B2 |
8169404 | Boillot | May 2012 | B1 |
8193498 | Cavallucci et al. | Jun 2012 | B2 |
8243047 | Chiang et al. | Aug 2012 | B2 |
8269740 | Sohn et al. | Sep 2012 | B2 |
8289299 | Newton | Oct 2012 | B2 |
8316324 | Boillot | Nov 2012 | B2 |
8350831 | Drumm | Jan 2013 | B2 |
8426799 | Drumm | Apr 2013 | B2 |
8471814 | LaFave et al. | Jun 2013 | B2 |
8482547 | Christiansson et al. | Jul 2013 | B2 |
8508505 | Shin et al. | Aug 2013 | B2 |
8558815 | Van Genechten et al. | Oct 2013 | B2 |
8581884 | Fahraeus et al. | Nov 2013 | B2 |
8648677 | Su et al. | Feb 2014 | B2 |
8922340 | Salter et al. | Dec 2014 | B2 |
8933876 | Galor et al. | Jan 2015 | B2 |
9050943 | Muller | Jun 2015 | B2 |
9207800 | Eriksson et al. | Dec 2015 | B1 |
9223431 | Pemberton-Pigott | Dec 2015 | B2 |
D776666 | Karlsson et al. | Jan 2017 | S |
20010002694 | Nakazawa et al. | Jun 2001 | A1 |
20010022579 | Hirabayashi | Sep 2001 | A1 |
20010026268 | Ito | Oct 2001 | A1 |
20010028344 | Iwamoto et al. | Oct 2001 | A1 |
20010043189 | Brisebois et al. | Nov 2001 | A1 |
20010055006 | Sano et al. | Dec 2001 | A1 |
20020067348 | Masters et al. | Jun 2002 | A1 |
20020109843 | Ehsani et al. | Aug 2002 | A1 |
20020152010 | Colmenarez et al. | Oct 2002 | A1 |
20020175900 | Armstrong | Nov 2002 | A1 |
20030034439 | Reime et al. | Feb 2003 | A1 |
20030174125 | Torunoglu et al. | Sep 2003 | A1 |
20030231308 | Granger | Dec 2003 | A1 |
20030234346 | Kao | Dec 2003 | A1 |
20040046960 | Wagner et al. | Mar 2004 | A1 |
20040056199 | O'Connor et al. | Mar 2004 | A1 |
20040090428 | Crandall, Jr. et al. | May 2004 | A1 |
20040140961 | Cok | Jul 2004 | A1 |
20040198490 | Bansemer et al. | Oct 2004 | A1 |
20040201579 | Graham | Oct 2004 | A1 |
20050024623 | Xie et al. | Feb 2005 | A1 |
20050073508 | Pittel et al. | Apr 2005 | A1 |
20050093846 | Marcus et al. | May 2005 | A1 |
20050104860 | McCreary et al. | May 2005 | A1 |
20050122308 | Bell et al. | Jun 2005 | A1 |
20050133702 | Meyer | Jun 2005 | A1 |
20050174473 | Morgan et al. | Aug 2005 | A1 |
20050271319 | Graham | Dec 2005 | A1 |
20060001654 | Smits | Jan 2006 | A1 |
20060018586 | Kishida | Jan 2006 | A1 |
20060028455 | Hinckley et al. | Feb 2006 | A1 |
20060077186 | Park et al. | Apr 2006 | A1 |
20060132454 | Chen et al. | Jun 2006 | A1 |
20060161870 | Hotelling et al. | Jul 2006 | A1 |
20060161871 | Hotelling et al. | Jul 2006 | A1 |
20060229509 | Al-Ali et al. | Oct 2006 | A1 |
20060236262 | Bathiche et al. | Oct 2006 | A1 |
20060238517 | King et al. | Oct 2006 | A1 |
20060244733 | Geaghan | Nov 2006 | A1 |
20070024598 | Miller et al. | Feb 2007 | A1 |
20070052693 | Watari | Mar 2007 | A1 |
20070077541 | Champagne et al. | Apr 2007 | A1 |
20070084989 | Lange et al. | Apr 2007 | A1 |
20070103436 | Kong | May 2007 | A1 |
20070146318 | Juh et al. | Jun 2007 | A1 |
20070152984 | Ording et al. | Jul 2007 | A1 |
20070176908 | Lipman et al. | Aug 2007 | A1 |
20080008472 | Dress et al. | Jan 2008 | A1 |
20080012835 | Rimon et al. | Jan 2008 | A1 |
20080012850 | Keating, III | Jan 2008 | A1 |
20080013913 | Lieberman et al. | Jan 2008 | A1 |
20080016511 | Ryder et al. | Jan 2008 | A1 |
20080055273 | Forstall | Mar 2008 | A1 |
20080056068 | Yeh et al. | Mar 2008 | A1 |
20080068353 | Lieberman et al. | Mar 2008 | A1 |
20080080811 | Deane | Apr 2008 | A1 |
20080089587 | Kim et al. | Apr 2008 | A1 |
20080093542 | Lieberman et al. | Apr 2008 | A1 |
20080096620 | Lee et al. | Apr 2008 | A1 |
20080100572 | Boillot | May 2008 | A1 |
20080100593 | Skillman et al. | May 2008 | A1 |
20080117183 | Yu et al. | May 2008 | A1 |
20080122792 | Izadi et al. | May 2008 | A1 |
20080122796 | Jobs et al. | May 2008 | A1 |
20080122803 | Izadi et al. | May 2008 | A1 |
20080134102 | Movold et al. | Jun 2008 | A1 |
20080158172 | Hotelling et al. | Jul 2008 | A1 |
20080158174 | Land et al. | Jul 2008 | A1 |
20080211779 | Pryor | Sep 2008 | A1 |
20080221711 | Trainer | Sep 2008 | A1 |
20080224836 | Pickering | Sep 2008 | A1 |
20080259053 | Newton | Oct 2008 | A1 |
20080273019 | Deane | Nov 2008 | A1 |
20080278460 | Arnett et al. | Nov 2008 | A1 |
20080297487 | Hotelling et al. | Dec 2008 | A1 |
20090009944 | Yukawa et al. | Jan 2009 | A1 |
20090027357 | Morrison | Jan 2009 | A1 |
20090058833 | Newton | Mar 2009 | A1 |
20090066673 | Molne et al. | Mar 2009 | A1 |
20090096994 | Smits | Apr 2009 | A1 |
20090102815 | Juni | Apr 2009 | A1 |
20090122027 | Newton | May 2009 | A1 |
20090135162 | Van De Wijdeven et al. | May 2009 | A1 |
20090139778 | Butler et al. | Jun 2009 | A1 |
20090153519 | Suarez Rovere | Jun 2009 | A1 |
20090166098 | Sunder | Jul 2009 | A1 |
20090167724 | Xuan et al. | Jul 2009 | A1 |
20090189857 | Benko et al. | Jul 2009 | A1 |
20090195402 | Izadi et al. | Aug 2009 | A1 |
20090198359 | Chaudhri | Aug 2009 | A1 |
20090280905 | Weisman et al. | Nov 2009 | A1 |
20090322673 | Cherradi El Fadili | Dec 2009 | A1 |
20100002291 | Fukuyama | Jan 2010 | A1 |
20100013763 | Futter et al. | Jan 2010 | A1 |
20100023895 | Benko et al. | Jan 2010 | A1 |
20100031203 | Morris et al. | Feb 2010 | A1 |
20100079407 | Suggs | Apr 2010 | A1 |
20100079409 | Sirotich et al. | Apr 2010 | A1 |
20100079412 | Chiang et al. | Apr 2010 | A1 |
20100095234 | Lane et al. | Apr 2010 | A1 |
20100134424 | Brisebois et al. | Jun 2010 | A1 |
20100149073 | Chaum et al. | Jun 2010 | A1 |
20100185341 | Wilson et al. | Jul 2010 | A1 |
20100238138 | Goertz et al. | Sep 2010 | A1 |
20100238139 | Goertz et al. | Sep 2010 | A1 |
20100289755 | Zhu et al. | Nov 2010 | A1 |
20100295821 | Chang et al. | Nov 2010 | A1 |
20100299642 | Merrell et al. | Nov 2010 | A1 |
20100302185 | Han et al. | Dec 2010 | A1 |
20100321289 | Kim et al. | Dec 2010 | A1 |
20110005367 | Hwang et al. | Jan 2011 | A1 |
20110043325 | Newman et al. | Feb 2011 | A1 |
20110043826 | Kiyose | Feb 2011 | A1 |
20110044579 | Travis et al. | Feb 2011 | A1 |
20110050639 | Challener et al. | Mar 2011 | A1 |
20110050650 | McGibney et al. | Mar 2011 | A1 |
20110057906 | Raynor et al. | Mar 2011 | A1 |
20110063214 | Knapp | Mar 2011 | A1 |
20110074734 | Wassvik et al. | Mar 2011 | A1 |
20110074736 | Takakura | Mar 2011 | A1 |
20110075418 | Mallory et al. | Mar 2011 | A1 |
20110087963 | Brisebois | Apr 2011 | A1 |
20110090176 | Christiansson et al. | Apr 2011 | A1 |
20110116104 | Kao et al. | May 2011 | A1 |
20110122560 | Andre et al. | May 2011 | A1 |
20110128234 | Lipman et al. | Jun 2011 | A1 |
20110128729 | Ng | Jun 2011 | A1 |
20110148820 | Song | Jun 2011 | A1 |
20110157097 | Hamada et al. | Jun 2011 | A1 |
20110163956 | Zdralek | Jul 2011 | A1 |
20110163996 | Wassvik et al. | Jul 2011 | A1 |
20110169773 | Luo | Jul 2011 | A1 |
20110169780 | Goertz et al. | Jul 2011 | A1 |
20110169781 | Goertz et al. | Jul 2011 | A1 |
20110175533 | Holman et al. | Jul 2011 | A1 |
20110175852 | Goertz et al. | Jul 2011 | A1 |
20110179368 | King et al. | Jul 2011 | A1 |
20110179381 | King | Jul 2011 | A1 |
20110205175 | Chen | Aug 2011 | A1 |
20110205186 | Newton et al. | Aug 2011 | A1 |
20110221706 | McGibney et al. | Sep 2011 | A1 |
20110227487 | Nichol et al. | Sep 2011 | A1 |
20110227874 | Fahraeus et al. | Sep 2011 | A1 |
20110242056 | Lee et al. | Oct 2011 | A1 |
20110248151 | Holcombe et al. | Oct 2011 | A1 |
20110310005 | Chen et al. | Dec 2011 | A1 |
20120050226 | Kato | Mar 2012 | A1 |
20120056821 | Goh | Mar 2012 | A1 |
20120068971 | Pemberton-Pigott | Mar 2012 | A1 |
20120068973 | Christiansson et al. | Mar 2012 | A1 |
20120071994 | Lengeling | Mar 2012 | A1 |
20120086672 | Tseng et al. | Apr 2012 | A1 |
20120098753 | Lu | Apr 2012 | A1 |
20120098794 | Kleinert et al. | Apr 2012 | A1 |
20120116548 | Goree et al. | May 2012 | A1 |
20120131186 | Klos et al. | May 2012 | A1 |
20120162078 | Ferren et al. | Jun 2012 | A1 |
20120176343 | Holmgren et al. | Jul 2012 | A1 |
20120188203 | Yao et al. | Jul 2012 | A1 |
20120188205 | Jansson et al. | Jul 2012 | A1 |
20120212457 | Drumm | Aug 2012 | A1 |
20120212458 | Drumm | Aug 2012 | A1 |
20120218229 | Drumm | Aug 2012 | A1 |
20120262408 | Pasquero et al. | Oct 2012 | A1 |
20120306793 | Liu et al. | Dec 2012 | A1 |
20130044071 | Hu et al. | Feb 2013 | A1 |
20130127788 | Drumm | May 2013 | A1 |
20130127790 | Wassvik | May 2013 | A1 |
20130135259 | King et al. | May 2013 | A1 |
20130141395 | Holmgren et al. | Jun 2013 | A1 |
20130215034 | Oh et al. | Aug 2013 | A1 |
20130234171 | Heikkinen et al. | Sep 2013 | A1 |
20140049516 | Heikkinen et al. | Feb 2014 | A1 |
20140104160 | Eriksson et al. | Apr 2014 | A1 |
20140104240 | Eriksson et al. | Apr 2014 | A1 |
20150153777 | Liu et al. | Jun 2015 | A1 |
20150185945 | Lauber | Jul 2015 | A1 |
20150227213 | Cho | Aug 2015 | A1 |
20160154475 | Eriksson et al. | Jun 2016 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
0601651 | Jun 1994 | EP |
1906632 | Apr 2008 | EP |
10148640 | Jun 1998 | JP |
11232024 | Aug 1999 | JP |
3240941 | Dec 2001 | JP |
2003029906 | Jan 2003 | JP |
1020120120097 | Nov 2012 | KR |
1012682090000 | May 2013 | KR |
1020130053363 | May 2013 | KR |
1020130053364 | May 2013 | KR |
1020130053367 | May 2013 | KR |
1020130053377 | May 2013 | KR |
1020130054135 | May 2013 | KR |
1020130054150 | May 2013 | KR |
1020130133117 | Dec 2013 | KR |
WO8600446 | Jan 1986 | WO |
WO8600447 | Jan 1986 | WO |
WO2008004103 | Jan 2008 | WO |
WO2008133941 | Nov 2008 | WO |
WO2010011929 | Jan 2010 | WO |
WO2010015408 | Feb 2010 | WO |
WO2010134865 | Nov 2010 | WO |
WO2012017183 | Feb 2012 | WO |
WO2012089957 | Jul 2012 | WO |
WO2012089958 | Jul 2012 | WO |
WO2014041245 | Mar 2014 | WO |
WO2014194151 | Dec 2014 | WO |
WO 2015161070 | Oct 2015 | WO |
WO 2016048590 | Mar 2016 | WO |
Other Publications
Entry |
---|
PCT Application No. PCT/US2017/59625, Search Report and Written Opinion, dated Jan. 25, 2018, 13 pages. |
Hodges et al., “ThinSight: Versatile Multitouch Sensing for Thin Form-Factor Displays.” UIST'07, Oct. 7-10, 2007. <http://www.hci.iastate.edu/REU09/pub/main/telerobotics_team_papers/thinsight_versatile_multitouch_sensing_for_thin_formfactor_displays.pdf>. |
Moeller et al., ZeroTouch: An Optical Multi-Touch and Free-Air Interaction Architecture, Proc. CHI 2012: Proceedings of the 2012 Annual Conference Extended Abstracts on Human factors in Computing Systems, May 5, 2012, pp. 2165-2174, ACM, New York, NY, USA. |
Moeller et al., ZeroTouch: A Zero-Thickness Optical Multi-Touch Force Field, CHI EA '11: Proceedings of the 2011 Annual Conference Extended Abstracts on Human factors in Computing Systems, May 2011, pp. 1165-1170, ACM, New York, NY, USA. |
Moeller et al., IntangibleCanvas: Free-Air Finger Painting on a Projected Canvas, CHI EA '11: Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems, May 2011, pp. 1615-1620, ACM, New York, NY, USA. |
Moeller et al., Scanning FTIR: Unobtrusive Optoelectronic Multi-Touch Sensing through Waveguide Transmissivity Imaging, TEI '10: Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction, Jan. 2010, pp. 73-76, ACM, New York, NY, USA. |
Van Loenen et al., Entertable: A Solution for Social Gaming Experiences, Tangible Play Research and Design for Tangible and Tabletop Games: Proceedings of the Workshop at the 2007 Intelligent User Interfaces Conference, Jan. 27, 2007, pp. 16-19. |
Butler et al., “SideSight: Multi-touch Interaction Around Smart Devices.” UIST'08, Oct. 2008. http://131.107.65.14/en-us/um/people/shahrami/papers/sidesight.pdf. |
Johnson, “Enhanced Optical Touch Input Panel”, IBM Technical Disclosure Bulletin vol. 28, No. 4, Sep. 1985, pp. 1760-1762. |
Rakkolainen et al., Mid-air display experiments to create novel user interfaces, Multimed. Tools Appl. (2009) 44: 389. doi:10.1007/s11042-009-0280-1. |
Hasegawa et al., SIGGRAPH 2015: Emerging Technologies, Article 18, Jul. 31, 2015, ACM, New York, NY, USA. ISBN: 978-1-4503-3635-2, doi: 10.1145/2782782.2785589. |
Prior Publication Data
Number | Date | Country |
---|---|---|
20170262134 A1 | Sep 2017 | US |
Provisional Applications
Number | Date | Country |
---|---|---|
62462034 | Feb 2017 | US | |
62425087 | Nov 2016 | US | |
62266011 | Dec 2015 | US | |
62197813 | Jul 2015 | US | |
62107536 | Jan 2015 | US | |
62054353 | Sep 2014 | US | |
61986341 | Apr 2014 | US | |
61972435 | Mar 2014 | US | |
61929992 | Jan 2014 | US | |
61846089 | Jul 2013 | US | |
61838296 | Jun 2013 | US | |
61828713 | May 2013 | US | |
61713546 | Oct 2012 | US |
Divisions
 | Number | Date | Country |
---|---|---|---|
Parent | 14312787 | Jun 2014 | US |
Child | 14880231 | US |
Continuations
 | Number | Date | Country |
---|---|---|---|
Parent | 14588462 | Jan 2015 | US |
Child | 14960369 | US | |
Parent | 14311366 | Jun 2014 | US |
Child | 14726533 | US | |
Parent | PCT/US2014/040579 | Jun 2014 | US |
Child | 14311366 | US | |
Parent | PCT/US2014/040112 | May 2014 | US |
Child | 14312787 | US | |
Parent | 14140635 | Dec 2013 | US |
Child | 14630737 | US | |
Parent | 13732456 | Jan 2013 | US |
Child | 14140635 | US |
Continuations-in-Part
 | Number | Date | Country |
---|---|---|---|
Parent | 15000815 | Jan 2016 | US |
Child | 15588646 | US | |
Parent | 14960369 | Dec 2015 | US |
Child | 15000815 | US | |
Parent | 14880231 | Oct 2015 | US |
Child | 14960369 | US | |
Parent | 14791414 | Jul 2015 | US |
Child | 15000815 | US | |
Parent | 14726533 | May 2015 | US |
Child | 14791414 | US | |
Parent | 14630737 | Feb 2015 | US |
Child | 14726533 | US | |
Parent | 14555731 | Nov 2014 | US |
Child | 15000815 | US | |
Parent | 13775269 | Feb 2013 | US |
Child | 14312787 | US |