This relates generally to electronic devices, and more particularly, to electronic devices with optically-based user interface components.
Typical user interfaces for electronic devices such as cameras, computers, and televisions are based on buttons, switches, or touch technologies such as capacitive or resistive touch sensors that form a portion of a device display. In some devices, optical interface components based on light beam occlusion or light reflection have been provided.
Interface components based on buttons and switches may require aesthetically undesirable external protrusions. Interface components based on resistive and capacitive touch technologies can be expensive to implement, can require touches to a display that can affect optical performance, and can add to the weight and bulk of a device, particularly in large devices such as televisions.
It would therefore be desirable to be able to provide improved interfaces for electronic devices.
Electronic devices such as digital cameras, computers, cellular telephones, televisions or other electronic devices may be provided with camera-based optical user interface components. These camera-based optical user interface components may include one, two, three, four or more user interface camera modules that, in combination, gather user input data in response to three-dimensional user gestures in a given volume of space in the vicinity of the device. Each user interface camera module may include diffractive optical elements that redirect light onto one or more image sensors that have arrays of image pixels. The pixels in the image sensors may include photosensitive elements such as photodiodes that convert the incoming light into digital data. Image sensors may have any number of pixels (e.g., hundreds or thousands or more). A typical image sensor may, for example, have hundreds of thousands or millions of pixels (e.g., megapixels).
Processing circuitry 16 may be used to determine and track a three-dimensional position of the user input member using the continuously captured images. Processing circuitry 16 may alter the operation of the device (e.g., by altering visual content displayed on display 12 or by launching software applications for device 10) based on the determined three-dimensional position of the user input member and/or based on changes in the determined three-dimensional position of the user input member.
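For illustration only, the following Python sketch shows one way such tracked positions and position changes might be mapped to interface actions; the data structure, function names, and threshold values are assumptions made for the example and are not taken from this description.

```python
# Illustrative sketch only: maps a tracked 3-D position and its change
# between frames to simple device actions. Names and thresholds are
# assumptions, not taken from the application text.

from dataclasses import dataclass

@dataclass
class Position3D:
    x: float  # millimeters, parallel to the display
    y: float  # millimeters, parallel to the display
    z: float  # millimeters, height above the display surface

def update_interface(previous: Position3D, current: Position3D,
                     cursor_height_mm: float = 15.0) -> str:
    """Return an action based on the tracked position and its change."""
    dz = current.z - previous.z
    if current.z <= cursor_height_mm and dz < -5.0:
        return "select"          # rapid downward motion near the display
    if current.z <= cursor_height_mm:
        return "highlight"       # hovering close to the display
    return "move_cursor"         # track lateral motion at larger heights

# Example: a fingertip dropping from 40 mm to 10 mm above the display.
print(update_interface(Position3D(100, 80, 40), Position3D(100, 80, 10)))
```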
Processing circuitry 16 may include one or more integrated circuits (e.g., image processing circuits, microprocessors, storage devices such as random-access memory and non-volatile memory, etc.) and may be implemented using components that are separate from camera modules 18 and/or that form part of camera modules 18. Image data that has been captured by camera modules 18 may be processed and stored using processing circuitry 16. Processed image data may, if desired, be provided to external equipment (e.g., a computer or other device) using wired and/or wireless communications paths coupled to processing circuitry 16.
Each camera module 18 may be sensitive to a given color of light or a given lighting pattern generated by a light source (e.g., an infrared light source) in device 10. In this way, camera modules 18 may be provided with the ability to track multiple objects (e.g., multiple fingers or multiple hands) simultaneously and control the operation of the device based on the motions of the multiple objects.
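As a rough illustration of per-channel object tracking (a simplification of the color or pattern sensitivity described above), the sketch below locates one bright object per color channel of an RGB frame; the threshold and the synthetic test frame are assumptions for the example.

```python
# Minimal sketch, assuming each tracked object appears predominantly in a
# distinct color channel of an RGB frame. Uses NumPy only; the threshold
# is arbitrary.

import numpy as np

def channel_centroids(frame: np.ndarray, threshold: int = 128):
    """Return a (row, col) centroid per color channel, or None if absent."""
    centroids = []
    for channel in range(frame.shape[2]):
        mask = frame[:, :, channel] > threshold
        if mask.any():
            rows, cols = np.nonzero(mask)
            centroids.append((rows.mean(), cols.mean()))
        else:
            centroids.append(None)
    return centroids

# Example: a synthetic 100x100 frame with one bright red blob near (20, 30).
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[15:25, 25:35, 0] = 255
print(channel_centroids(frame))
```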
Each camera module 18 may have a field-of-view that includes a volume of space located adjacent to an outer surface of some or all of display 12 (or other work surface), as shown in the cross-sectional side view of device 10.
Diffractive elements 34 in each camera module 18 may provide that camera module with a field-of-view that has a first extreme edge at or near the surface of display 12 (or other work area of device 10) and a second (outer) extreme edge several inches above the surface of display 12 (or other work area). For example, a first camera module 18 may have a field-of-view with a first edge that extends along the surface of the display and an outer edge 21 that extends at an angle away from the surface of the display. A second camera module 18 may have a field-of-view with a first edge that extends along the surface of the display and an outer edge 23 that extends at an additional angle away from the surface of the display. In this way, a camera module that is positioned in a plane with the display or slightly protruding above the plane of the display may be provided with a field-of-view that extends across the outer surface of the display.
The fields-of-view of multiple camera modules 18 may overlap to form an overlap region such as gesture tracking volume 24. When a user input member such as user finger 20 is located within volume 24, image data from multiple camera modules 18 may be combined (e.g., using processing circuitry 16) to determine a three-dimensional location (i.e., distances in the x, y, and z directions) of the user input member within volume 24.
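One conventional way to combine views from two camera modules into a three-dimensional location is simple triangulation, sketched below under assumed geometry in which each module reports an in-plane bearing angle and one module reports an elevation angle; this illustrates the general principle rather than the specific computation performed by processing circuitry 16.

```python
# Triangulation sketch (not the application's specific method): two camera
# modules at known (x, y) positions on the display plane each report a
# bearing angle to the finger; intersecting the two in-plane rays gives
# (x, y), and one module's elevation angle gives the height z.

import math

def triangulate(cam_a, bearing_a, cam_b, bearing_b, elevation_a):
    """cam_a/cam_b: (x, y) positions; angles in radians, bearings in-plane.
    Assumes the two bearings are not parallel."""
    ax, ay = cam_a
    bx, by = cam_b
    dax, day = math.cos(bearing_a), math.sin(bearing_a)  # unit ray from A
    dbx, dby = math.cos(bearing_b), math.sin(bearing_b)  # unit ray from B
    # Solve ax + t*dax = bx + s*dbx and ay + t*day = by + s*dby for t.
    denom = dax * dby - day * dbx
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    x, y = ax + t * dax, ay + t * day
    z = t * math.tan(elevation_a)  # t is the in-plane range from camera A
    return x, y, z

# Example: modules at two corners of a display, finger near the center.
print(triangulate((0.0, 0.0), math.radians(45),
                  (200.0, 0.0), math.radians(135),
                  elevation_a=math.radians(10)))
```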
Image sensor 30 may be formed on a semiconductor substrate (e.g., a silicon image sensor integrated circuit die). Image sensor 30 may contain an array of image pixels configured to receive light of a given color by providing each image sensor with a color filter. The color filters that are used for image sensor pixel arrays in the image sensors may, for example, be red filters, blue filters, and green filters. Each filter may form a color filter layer that covers some or all of the image sensor pixel array. Other filters such as white color filters, dual-band IR cutoff filters (e.g., filters that allow visible light and a range of infrared light emitted by LED lights), etc. may also be used.
Image sensor 30 may have one or more image pixel arrays with any number of image pixels (e.g., complementary metal-oxide semiconductor (CMOS) image pixels, charge-coupled device (CCD) image pixels, etc.).
Image sensor 30 may transfer captured image data to other circuitry in device 10 (e.g., processing circuitry 16) over path 50.
Diffractive element 34 may include grating structures or other structures that redirect light from a first angle into a second angle as the light passes through the diffractive element. In this way, diffractive element 34 may orient the field-of-view of the camera module along a surface of the device such as an outer surface of the display.
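For context, the amount of redirection available from a grating structure can be estimated with the standard grating relation d(sin θ_out − sin θ_in) = mλ. The short calculation below uses an assumed pitch and wavelength purely as an example; the values are not taken from this description.

```python
# Illustrative only: first-order diffraction angle from the standard
# transmission-grating relation d*(sin(theta_out) - sin(theta_in)) = m*lambda.
# The pitch and wavelength are assumed values for the example.

import math

def diffracted_angle_deg(pitch_um, wavelength_um, incident_deg, order=1):
    sin_out = math.sin(math.radians(incident_deg)) + order * wavelength_um / pitch_um
    return math.degrees(math.asin(sin_out))

# Near-infrared light (0.85 um) at normal incidence on a 1.0 um pitch
# grating is redirected by roughly 58 degrees in the first order.
print(diffracted_angle_deg(pitch_um=1.0, wavelength_um=0.85, incident_deg=0.0))
```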
Using camera modules 18, device 10 may detect the presence of member 20 at a first position 52. In response to detecting member 20 at position 52, device 10 (e.g., circuitry 16) may highlight a region such as region 41 using a visual marker such as circle 42 that surrounds the region. Marker 42 may have a size such as radius 53 that corresponds to the detected distance D52 to position 52. In response to detecting movement of member 20 (e.g., to a second position 54), device 10 may move marker 42 in a corresponding direction and at a corresponding speed to highlight a second region 41′. Region 41′ may include a displayed icon 40. Device 10 may highlight a displayed icon 40 near the center of marker 42 using an additional marker such as circle 44.
In response to detecting movement of member 20 toward display 12 in direction 56, device 10 (e.g., circuitry 16) may change the size of marker 42 (as indicated by arrows 55). In response to detecting that member 20 has moved to a distance that is equal to cursor height CH, device 10 may remove marker 42 from display 12 leaving only marker 44 around an icon 40 to be selected. In response to detecting a “clicking” motion at or within distance CH (as indicated by arrows 58), the highlighted icon 40 may be selected. Selecting the highlighted icon may include opening a user file or launching a software application (as examples).
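A minimal sketch of this marker and selection behavior is shown below, assuming a particular cursor height CH, a particular mapping from height to marker radius, and a simple dip-and-return test for the clicking motion; all of these values are placeholders rather than values from this description.

```python
# Hedged sketch of the marker behavior described above: the outer marker's
# radius tracks the finger's height above the display, the outer marker is
# removed once the finger is at or below cursor height CH, and a quick
# downward-then-upward motion inside CH is treated as a click. All numbers
# are assumptions for illustration.

CURSOR_HEIGHT_CH_MM = 20.0
PIXELS_PER_MM = 4.0  # assumed mapping from height to marker radius

def marker_radius_px(height_mm: float):
    """Radius of the outer highlight circle, or None when it is removed."""
    if height_mm <= CURSOR_HEIGHT_CH_MM:
        return None  # only the inner icon marker remains
    return height_mm * PIXELS_PER_MM

def is_click(height_samples_mm, dip_mm: float = 8.0) -> bool:
    """Detect a dip-and-return in height while inside the cursor zone."""
    inside = [h for h in height_samples_mm if h <= CURSOR_HEIGHT_CH_MM]
    return (len(inside) >= 3
            and min(inside) <= inside[0] - dip_mm
            and inside[-1] >= min(inside) + dip_mm)

print(marker_radius_px(60.0))             # large circle while far away
print(marker_radius_px(15.0))             # None: icon marker only
print(is_click([18.0, 12.0, 6.0, 15.0]))  # True: dip and return near display
```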
Illustrative steps that may be used in operating an electronic device having camera-based touch-free user input components are described below.
At step 100, a three-dimensional position of a user input object such as member 20 of
At step 102, user input data may be generated based on the determined three-dimensional position of the user input object. Generating user input data based on the determined three-dimensional position of the user input object may include generating user input data based on the absolute three-dimensional position of the user input object and/or on changes in the determined three-dimensional position of the user input object.
At step 104, the device may be operated using the generated user input data. As examples, operating the device using the generated user input data may include changing display content on a device display, changing display content on a remote display, opening a user file, launching a software application, manipulating electronic documents, powering off the device or changing the operational mode of the device (e.g., from a three-dimensional user gesture input mode to a two-dimensional cursor mode).
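The three steps may be viewed as a loop that repeats while a user input object remains in the gesture tracking volume. The following sketch ties steps 100, 102, and 104 together using placeholder functions for the capture, interpretation, and device-control stages; the helper names are not part of this description.

```python
# Sketch of the three steps above as a processing loop. The helper
# functions are placeholders standing in for the camera-module capture,
# gesture interpretation, and device-control stages.

def operate_device(capture_position, generate_input, apply_input):
    previous = None
    while True:
        # Step 100: determine the 3-D position of the user input object
        # from images captured by two or more camera modules.
        position = capture_position()
        if position is None:
            break  # no object in the gesture tracking volume; stop the demo
        # Step 102: generate user input data from the absolute position
        # and/or its change since the previous frame.
        user_input = generate_input(position, previous)
        # Step 104: operate the device (update display content, launch an
        # application, change modes, and so on).
        apply_input(user_input)
        previous = position

# Example with stub stages that print what they would do.
samples = iter([(10, 20, 50), (12, 20, 30), None])
operate_device(lambda: next(samples),
               lambda pos, prev: {"pos": pos, "moved": prev is not None},
               print)
```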
Processor system 300, which may be a digital still or video camera system, may include a lens such as lens 396 for focusing an image onto a pixel array such as pixel array 201 when shutter release button 397 is pressed. Processor system 300 may include a central processing unit such as central processing unit (CPU) 395. CPU 395 may be a microprocessor that controls camera functions and one or more image flow functions and communicates with one or more input/output (I/O) devices 391 over a bus such as bus 393. Imaging device 200 may also communicate with CPU 395 over bus 393. System 300 may include random access memory (RAM) 392 and removable memory 394. Removable memory 394 may include flash memory that communicates with CPU 395 over bus 393. Imaging device 200 may be combined with CPU 395, with or without memory storage, on a single integrated circuit or on a different chip. Although bus 393 is illustrated as a single bus, it may be one or more buses or bridges or other communication paths used to interconnect the system components.
Various embodiments have been described illustrating electronic devices having touch-free user input devices such as camera-based user input devices. Camera-based user input devices may include two or more camera modules mounted in the device or positioned at various positions around a work space for the device. The work space may include a portion of the device such as the device display or may include a non-device surface. The field-of-view of each camera module may partially overlap the field-of-view of one or more other camera modules. User gestures performed in this overlap region (sometimes referred to as a gesture tracking volume) may be imaged using the camera modules. Processing circuitry in the device may generate user input data based on the imaged user gestures in the gesture tracking volume and modify the operation of the device using the generated user input data.
The camera modules may be mounted in a housing structure at various locations around the display. The camera modules may have outer surfaces that are parallel to the surface of the display or angled with respect to the surface of the display. Each camera module may include an image sensor, one or more lenses, and a diffractive element that redirects the field-of-view of that camera module. The field-of-view of each camera module may include an extreme edge that runs along the surface of the display.
Operating the display based on the user input data may include highlighting and/or selecting displayed icons on the display based on the user gestures in the gesture tracking volume or otherwise modifying the operation of the display based on touch-free user input gestures.
The foregoing is merely illustrative of the principles of this invention which can be practiced in other embodiments.
This application claims the benefit of provisional patent application No. 61/551,136, filed Oct. 25, 2011, which is hereby incorporated by reference herein in its entirety.