The present application relates to control elements, and more specifically to salient control elements, such as may be used with a mobile device or other electronic device.
Mobile devices are increasingly relied upon to perform a range of functions, including serving as a camera, a phone, a texting device, an e-reader, and a navigation device, to name just a few examples. As a result, the number of control elements (such as buttons and other tactile elements provided to the user) has increased. New users can become frustrated in trying to learn which buttons or other control elements control which features of the device. Because more control elements are present, there is a greater chance that an incorrect control element will be actuated.
Described below are implementations of a salient control element, as well as a mobile device with such a control element, that address shortcomings in conventional control elements and mobile devices.
In one implementation, a salient control element for a mobile device comprises at least one button actuatable by a user to execute a mobile device function. The button has at least a first active state in which the button is extended or retracted relative to a surrounding surface and a second inactive state in which the button is substantially flush with the surrounding surface. The button is reconfigurable between the active state and the inactive state based upon a triggering event. The triggering event comprises at least one of receiving signals indicating a position, motion or orientation of the device, signals indicating a mode of operation or time, signals indicating that a predetermined application or service is active, signals indicating a current wireless communication, or signals indicating the mobile device is in a predetermined venue.
The mobile device can have a front surface that includes a display, adjoining side surfaces and a back surface, and the at least one button can be provided on one of the adjoining side surfaces or the back surface.
The at least one button can be a first button, and there can be at least a second button. The first and second buttons can be positioned on an adjoining side surface and separately configurable such that the first button can be configured to extend as a shutter release button for a camera in a first mode and the first and second buttons can be configured to extend as volume control buttons in a second mode. The triggering event for the first mode can comprise inertial measurement signals indicating that the mobile device is in a landscape orientation. The triggering event for the first mode can comprise signals indicating that the mobile device is in a camera mode.
The predetermined venue can comprise a motor vehicle, an aircraft or proximity to an intelligent device. The predetermined venue can comprise presence within the near field communication range of another device. The predetermined venue can comprise presence within range of a gaming device, and the button can be reconfigured from a retracted inactive state to an extended active state as a gaming control.
The button can comprise a microfluidicly actuated element.
The button can be a first button, and there can be at least a second button. The first and second buttons can be positioned on a rear side of the device and can be configured to allow the user to input characters by blind typing or swipe writing.
The button can be positioned on one side of a cover attached to the device and movable between a closed position covering a display and an open position in which the display is visible. The button can be active when the display is visible.
The button can be a first button, and there can be multiple other buttons arranged on the cover in a keyboard pattern.
According to another implementation, a salient control element for a mobile device comprises at least one control element actuatable by a user to control operation of the mobile device. The control element has at least a first active state in which the control element is tactilely discernible to a user and a second inactive state in which the control element is substantially undiscernible relative to the surrounding surface. The control element is reconfigurable between the active state and the inactive state based upon a triggering event. The triggering event can comprise at least one of receiving signals indicating a position, motion or orientation of the device, signals indicating a mode of operation or time, signals indicating that a predetermined application or service is active, signals indicating a current wireless communication, or signals indicating the mobile device is in a predetermined venue.
The control element can comprise an element that can sense deflection. The control element can comprise an element that can sense pressure. The control element can comprise a force sensing resistive element that can sense an applied force. The control element can comprise a piezoelectric element.
According to another implementation, a salient notification element for a mobile device comprises at least one notification element having at least a first active state in which the element is extended or retracted relative to a surrounding surface and a second inactive state in which the element is substantially flush with the surrounding surface. The element is configured to change from the inactive state to the active state by extending or retracting to be tactilely detectable to the user upon occurrence of a predetermined event. The element remains in the active state until reset by the user.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The foregoing and other objects, features, and advantages will become more apparent from the following detailed description, which proceeds with reference to the accompanying figures.
In the example of
According to one example, the button 16 is configured as a shutter release actuatable to take a photograph. The trigger to configure the salient control element 14 of
In one exemplary implementation, after the device 10 is turned on and its IMU is polled, it is determined whether the device is being held in a way to take a photo. If so, one or more salient control elements are configured (for example, and as described above, one or more buttons can be raised). When the operation is complete, such as when the user lowers the device, the IMU will indicate a change in position and the buttons can be rendered inactive (e.g., retracted, according to this example).
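By way of non-limiting illustration, the polling flow just described can be sketched in Python as follows. The read_imu and set_button_state functions, the grip heuristic and the button name are hypothetical stand-ins rather than any actual device API.

```python
import time

# Hypothetical hooks standing in for real device interfaces (assumptions only).
def read_imu():
    """Return the latest inertial sample; stubbed with fixed values here."""
    return {"orientation": "landscape", "stationary": True}

def set_button_state(name, extended):
    """Drive the actuator for a named salient button (stubbed as a print)."""
    print(f"{name} -> {'extended' if extended else 'retracted'}")

def camera_grip_detected(imu):
    # Landscape orientation while held steady suggests a photo-taking grip.
    return imu["orientation"] == "landscape" and imu["stationary"]

def update_shutter_button(camera_mode_active, currently_extended):
    """Decide whether the shutter-release button should change state."""
    should_extend = camera_mode_active and camera_grip_detected(read_imu())
    if should_extend != currently_extended:
        set_button_state("shutter_release", extended=should_extend)
    return should_extend

if __name__ == "__main__":
    extended = False
    for _ in range(3):  # stand-in for a periodic polling loop
        extended = update_shutter_button(camera_mode_active=True,
                                         currently_extended=extended)
        time.sleep(0.2)
```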
As another example, in a different context, the control element 14 can be an alarm clock/timer button that extends upon the alarm clock/timer reaching a predetermined time. In this example, the control element 14 is actuatable by the user to turn off the alarm or the timer. As examples only, the triggering event(s) can include reaching a predetermined time, having an active alarm or timer application or service running, the orientation of the device and/or whether the device is in a propped-up position.
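A minimal sketch of such an alarm/timer trigger check follows; the function name, arguments and times are chosen purely for illustration.

```python
from datetime import datetime

def alarm_trigger_active(alarm_time, alarm_app_running, propped_up, now=None):
    """Combine the triggers listed above: the set time has been reached, the
    alarm/timer service is running, and the device is in a propped-up position."""
    now = now or datetime.now()
    return alarm_app_running and propped_up and now >= alarm_time

# At 07:00:05 with the alarm service running and the phone propped up, the
# salient button would be extended so the user can silence the alarm by touch.
if alarm_trigger_active(datetime(2024, 1, 1, 7, 0), alarm_app_running=True,
                        propped_up=True, now=datetime(2024, 1, 1, 7, 0, 5)):
    print("extend alarm/timer button")
```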
In
The buttons 16, 18 can be configured for any suitable operation. For example, assuming the camera operation of
The trigger to change the function of the salient control element 14 to volume control for the mode of operation of
In
In
As above, appropriate triggers for reconfiguring the salient control elements 32, 34 from their inactive states in
In many implementations, the trigger for the device 10 to change the state of the salient control element 14 or 30 includes a position, orientation or motion of the device 10, such as is detected by the inertial measurement unit (IMU) of the device 10 or other similar circuit. The IMU detects, e.g., whether the device is in landscape or portrait orientation, whether the device is in motion or stationary, whether the device has been tilted to a predetermined angle, whether the device has been rotated by a predetermined amount about one of its axes, etc.
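As a rough illustration of how orientation and tilt triggers can be derived from raw accelerometer components, the following Python sketch uses simplified formulas and an illustrative threshold; it is not drawn from any particular IMU driver.

```python
import math

def orientation_from_accel(ax, ay, az):
    """Classify landscape vs. portrait from the gravity components (m/s^2),
    a simplified stand-in for what an IMU driver typically reports."""
    return "landscape" if abs(ax) > abs(ay) else "portrait"

def tilt_angle_deg(ax, ay, az):
    """Angle between the device's screen normal (z axis) and gravity."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

def tilted_past_threshold(ax, ay, az, threshold_deg=45.0):
    # A trigger of the kind described above: fire once the device has been
    # tilted beyond a predetermined angle.
    return tilt_angle_deg(ax, ay, az) > threshold_deg

# Device held upright in portrait: gravity lies mostly along the y axis.
print(orientation_from_accel(0.2, 9.7, 0.5))   # -> portrait
print(tilted_past_threshold(0.2, 9.7, 0.5))    # screen near vertical -> True
```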
In addition, the trigger may include input from one or more touch-sensitive areas of the device. For example, the trigger could include detection that the user's palm is in contact with the display 12 of the device 10, as in the example of
Other examples of triggering events include an incoming wireless communication (e.g., receiving a text message, an email message or a telephone call) or a change in near field communication state (e.g., entering into or exiting from a connected near field communication state with a nearby device).
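One simple way to express such triggers is an event-to-action mapping, sketched below with hypothetical event and button names; the set_button_state callback is the same kind of actuator stub used in the earlier sketch.

```python
# Hypothetical mapping from trigger events to salient-button reconfigurations.
TRIGGER_ACTIONS = {
    "incoming_call":    [("answer", True), ("decline", True)],
    "text_received":    [("reply", True)],
    "nfc_connected":    [("nfc_action", True)],
    "nfc_disconnected": [("nfc_action", False)],
}

def handle_trigger(event, set_button_state):
    """Reconfigure salient buttons when a wireless or near field event arrives."""
    for name, extended in TRIGGER_ACTIONS.get(event, []):
        set_button_state(name, extended)

handle_trigger("incoming_call",
               lambda name, ext: print(name, "extended" if ext else "retracted"))
```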
In
According to one usage, the user responds to the notifications. The user can respond by manually actuating each button 36 to indicate that the user has received the notification and to reset the notification button. Alternatively, or in addition, the notification buttons can be programmed to reset automatically, e.g., to retract or to extend after a period of time, after the device is moved from its face down position, etc.
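The two reset paths can be sketched as follows; the class, its timeout value and the button name are illustrative assumptions rather than a prescribed implementation.

```python
import time

class NotificationButton:
    """One salient notification element: it stays extended until the user
    presses it, or optionally auto-resets after a timeout."""

    def __init__(self, name, auto_reset_after=None):
        self.name = name
        self.auto_reset_after = auto_reset_after  # seconds, or None for manual-only
        self._extended_since = None

    def notify(self):
        self._extended_since = time.monotonic()   # extend the element

    def press(self):
        self._extended_since = None               # user acknowledgment: retract

    def tick(self):
        # Automatic reset path, e.g. after a period of time has elapsed.
        if (self._extended_since is not None and self.auto_reset_after is not None
                and time.monotonic() - self._extended_since > self.auto_reset_after):
            self._extended_since = None

    @property
    def extended(self):
        return self._extended_since is not None

btn = NotificationButton("missed_call", auto_reset_after=30.0)
btn.notify()
print(btn.extended)   # True until pressed or until 30 s elapse
```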
In
In
Concerning other aspects relating to trigger events and conditions, a mobile device can be configured to cause one or more salient control elements to be activated based on the location of the mobile device and/or the mobile device's proximity to another intelligent device. For example, the mobile device can be configured to cause salient control elements to become active when the user is present at a location associated with the user through a software application or service on the mobile device. As just one example, when the user leaves or arrives at her residence, salient control elements are presented for arming or disarming a security system or home automation program. Such salient control elements could include one or more rear side control elements that protrude from or are recessed into the rear surface. As another example, the salient control elements can be configured, upon detection of a nearby TV, to serve as controls for the TV. For example, salient control elements on a rear side of a mobile device could become active upon entering within a predetermined range of a connected TV. More generally, the salient control elements can be configured to be responsive to other intelligent devices within a predetermined range, or to other devices connected to the mobile device, such as by near field and other types of communication.
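For illustration only, the location and proximity checks described above might be expressed as follows; the coordinates, radius and device identifier are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def security_buttons_active(device_pos, home_pos, radius_m=50.0):
    # Arriving at (or leaving) the residence geofence triggers the arm/disarm
    # controls described above.
    return haversine_m(*device_pos, *home_pos) <= radius_m

def tv_controls_active(nearby_device_ids, paired_tv_id="livingroom-tv"):
    # Rear-side buttons become TV controls when a paired TV is within range.
    return paired_tv_id in nearby_device_ids

print(security_buttons_active((47.6205, -122.3493), (47.6206, -122.3490)))  # True
print(tv_controls_active({"livingroom-tv", "soundbar"}))                    # True
```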
Similarly, the salient control elements can be configured to respond to other specific venues. For example, one or more salient control elements can be configured to become active while the user of the mobile device is driving an automobile, e.g., to present a reduced command set for safe but effective operation. In another example, the salient control elements may be configured to provide access to only limited device functions, e.g., if it is detected that the user is using the mobile device on an aircraft.
In some implementations, the salient control elements 14 and 30 are implemented as controllable microfluidic members capable of being reconfigured by instructions from a controller. For example, the described buttons can be configured to extend or retract as required by changing the fluid pressure in associated fluid circuits. Such fluid circuits can be configured to operate using liquids, gases or a combination thereof. In some implementations, the user's multiple contacts with the control elements (e.g., repeated taps) or other actions involving the control elements cause a pumping action that extends or retracts at least one control element.
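A simplified sketch of pressure-driven extension and retraction follows; the linear pressure model, its constants and the pump/valve driver call are illustrative assumptions only.

```python
def required_pressure_kpa(target_height_mm, mm_per_kpa=0.5, baseline_kpa=0.0):
    """Very rough linear model of the chamber pressure needed to raise a
    button to a target height; the compliance constant is illustrative."""
    return baseline_kpa + target_height_mm / mm_per_kpa

class MicrofluidicButton:
    def __init__(self, set_chamber_pressure):
        # set_chamber_pressure(kPa) is a hypothetical pump/valve driver call.
        self._set_pressure = set_chamber_pressure

    def extend(self, height_mm=1.5):
        self._set_pressure(required_pressure_kpa(height_mm))

    def retract(self):
        self._set_pressure(0.0)   # vent the chamber so the button sits flush

btn = MicrofluidicButton(lambda kpa: print(f"chamber pressure -> {kpa:.1f} kPa"))
btn.extend()    # chamber pressure -> 3.0 kPa
btn.retract()   # chamber pressure -> 0.0 kPa
```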
In some implementations, other approaches are used to provide buttons or other control elements having at least two states, i.e., an active state and an inactive state. Desirably, a button in the active state has a highly tactile character and is distinguishable from a button in an inactive state. In addition to control elements characterized as “buttons,” it is also possible to configure them to have at least one tactilely perceptible edge.
In some implementations, the degree of deflection and/or pressure exerted by a user at a specified location is detected and/or measured, and if above a threshold, a contact is registered. In some implementations, the detected or measured contact includes a user's sliding motion.
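A minimal sketch of threshold-based contact registration and slide detection, with illustrative threshold values:

```python
def contact_registered(force_n, threshold_n=0.8):
    """Register a press only when the measured deflection or force exceeds a
    threshold, which helps reject incidental grip pressure."""
    return force_n >= threshold_n

def detect_slide(samples, min_travel_mm=3.0):
    """samples: sequence of (position_mm, force_n) readings along the element.
    A slide is a sustained contact whose position travels far enough."""
    touching = [x for x, f in samples if contact_registered(f)]
    if len(touching) < 2:
        return False
    return max(touching) - min(touching) >= min_travel_mm

print(contact_registered(1.2))                              # firm press -> True
print(detect_slide([(0.0, 1.0), (2.0, 1.1), (4.5, 0.9)]))   # sliding contact -> True
```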
The control elements can be implemented using artificial muscle, which is defined herein to describe materials and/or devices that expand, contract, rotate or otherwise move due to an external stimulus, such as voltage, current, pressure or temperature. Such materials and devices include electro-active polymers, dielectric elastomer actuators, relaxor ferroelectric polymers, liquid crystal elastomers, pneumatic artificial muscles, ionic polymer metal composites, shape memory alloys, and electric field-activated electrolyte-free artificial muscles, to name a few examples.
In addition, capacitive touch panel, electromagnetic induction touch panel and other similar technologies can be employed to implement the control elements and related components. Force sensing resistive elements and/or piezoelectric elements can be used.
In some implementations, there are cues that provide the user sufficient information as to the current function of the salient control elements. For example, if other indications show that the device is in a camera mode, then a single raised button provided in the usual location of the shutter release button (see
With reference to
A computing system may have additional features. For example, the computing system 400 includes storage 440, one or more input devices 450, one or more output devices 460, and one or more communication connections 470. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing system 400. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing system 400, and coordinates activities of the components of the computing system 400.
The tangible storage 440 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing system 400. The storage 440 stores instructions for the software 480 implementing one or more innovations described herein.
The input device(s) 450 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device, having one or more salient control elements, that provides input to the computing system 400. For video encoding, the input device(s) 450 may be a camera, video card, TV tuner card, or similar device that accepts video input in analog or digital form, or a CD-ROM or CD-RW that reads video samples into the computing system 400. The output device(s) 460 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing system 400.
The communication connection(s) 470 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier.
The innovations can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing system on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing system.
For the sake of presentation, the detailed description uses terms like “determine” and “use” to describe computer operations in a computing system. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
In view of the many possible embodiments to which the disclosed principles may be applied, it should be recognized that the illustrated embodiments are only preferred examples and should not be taken as limiting in scope. Rather, the scope is defined by the following claims.
Number | Date | Country
---|---|---
61821641 | May 2013 | US