The present disclosure relates generally to computer user interfaces, and more specifically to user interfaces for communicating with other users.
Users can communicate electronically with one another by way of messages, such as text messages and messages containing videos or pictures. However, there is a need to enhance electronic communications by improving how emotions are communicated electronically between users.
Some techniques for electronic communications using electronic devices, however, are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which can optionally include multiple key presses or keystrokes, and thereby delay communication of messages with a recipient. Existing techniques require more time than necessary, wasting user time and device energy, and leading to delays in communication.
Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for electronic communications. Such methods and interfaces optionally complement or replace other methods for electronic communications. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For example, the methods, systems, and user interfaces described herein provide for enhanced communication of emotions between users by permitting greater expression beyond the constraints of traditional video, images, and textual messages. For instance, users can draw and/or add graphics to live or otherwise recently-captured video and images for quick and efficient communication with a recipient. Such communications can optionally be ephemeral and expire over time, which can optionally decrease user inhibitions for expression. Further, the systems, methods, and user interfaces described herein allow for quick and easy editing of the recently-captured video and/or image, for example by adding drawings and/or graphics based on touch inputs immediately after, during, and/or before capture. In this way, communication delays can optionally be minimized, since edited communications can optionally be sent quickly to a recipient, and user expressions or emotions can optionally remain more live and authentic. In another aspect, the efficiency of such methods and interfaces in editing, capturing, and communicating with external devices can optionally conserve power and increase the time between battery charges of the device. Other benefits are also contemplated.
Example methods are disclosed herein. An example method includes, at an electronic device having a touch-sensitive display and a camera, displaying, on the touch-sensitive display, a drawing area, where the drawing area includes a digital viewfinder that presents camera image data received from the camera; while displaying the drawing area, detecting a first touch input, at a first location in the drawing area, representing a first stroke; in response to detecting the first touch input, displaying a visual representation, at the first location in the drawing area, of the first stroke; while displaying the drawing area, detecting a user request to capture the camera image data presented in the digital viewfinder; in response to detecting the user request, capturing the camera image data presented in the digital viewfinder; and sending data representing the captured camera image data and the first stroke to an external device, where the sent data indicates a portion of the captured camera image data that corresponds to the first location of the first stroke.
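By way of illustration only, the following Swift sketch models one plausible shape for the sent data: stroke points are normalized against the viewfinder dimensions so that the external device can locate the stroke on the corresponding portion of the captured frame. Every type, property, and function name here is invented for the example and is not drawn from the disclosure.

```swift
import Foundation

// Hypothetical payload pairing captured camera image data with a drawn stroke.
// Stroke points are normalized to [0, 1] so the receiver can map the stroke
// onto the corresponding portion of the captured frame at any display size.
struct StrokePoint: Codable {
    let x: Double  // 0.0 = left edge of the captured frame, 1.0 = right edge
    let y: Double  // 0.0 = top edge, 1.0 = bottom edge
}

struct DrawnMessage: Codable {
    let imageData: Data        // captured camera image data
    let stroke: [StrokePoint]  // the first stroke, in the order it was drawn
}

// Normalize a touch location (in view points) against the viewfinder size.
func normalizedPoint(x: Double, y: Double, viewWidth: Double, viewHeight: Double) -> StrokePoint {
    StrokePoint(x: x / viewWidth, y: y / viewHeight)
}

let stroke = [normalizedPoint(x: 150, y: 300, viewWidth: 375, viewHeight: 600)]
let message = DrawnMessage(imageData: Data(), stroke: stroke)
let payload = try! JSONEncoder().encode(message)  // bytes to send to the external device
print("payload size:", payload.count, "bytes")
```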
An example method includes, at an electronic device having a touch-sensitive display and a camera, displaying, on the touch-sensitive display, an image in a digital viewfinder, where the image is based on camera image data received from the camera; detecting a first touch input at a first location in the digital viewfinder; in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a recording mode, displaying, in the digital viewfinder, a visual representation corresponding to the first touch input at the first location; and in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a non-recording mode, altering the image displayed in the digital viewfinder by adjusting a characteristic of the camera image data.
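A minimal Swift sketch of the mode-dependent branching follows; treating an exposure bias as the adjusted "characteristic of the camera image data" is an assumption made for concreteness, as is every name in the snippet.

```swift
import Foundation

// Hypothetical branching: the same touch draws while recording and adjusts the
// displayed image while not recording.
enum CameraMode { case recording, nonRecording }

struct ViewfinderState {
    var strokePoints: [(x: Double, y: Double)] = []
    var exposureBias = 0.0  // stand-in for an adjustable image characteristic
}

func handleTouch(at point: (x: Double, y: Double), mode: CameraMode, state: inout ViewfinderState) {
    switch mode {
    case .recording:
        // Recording mode: display a visual representation of the touch at its location.
        state.strokePoints.append(point)
    case .nonRecording:
        // Non-recording mode: alter the image by adjusting a characteristic of the
        // camera image data (here, a mock exposure tied to vertical touch position).
        state.exposureBias = (0.5 - point.y) * 2.0
    }
}

var state = ViewfinderState()
handleTouch(at: (x: 0.4, y: 0.25), mode: .recording, state: &state)
handleTouch(at: (x: 0.4, y: 0.25), mode: .nonRecording, state: &state)
print(state.strokePoints.count, state.exposureBias)  // 1 0.5
```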
An example method includes, at an electronic device having a touch-sensitive display and a camera, displaying, on the touch-sensitive display, a text messaging user interface associated with a contact, where the text messaging user interface includes a message transcript area and a compact drawing area, where the compact drawing area includes an expand affordance corresponding to an enlarged drawing area; detecting a first user input corresponding to the expand affordance; in response to detecting the first user input, replacing the displayed text messaging user interface with display of the enlarged drawing area, where the enlarged drawing area includes a camera affordance; detecting a second user input corresponding to the camera affordance; and in response to detecting the second user input, displaying a digital viewfinder, in the enlarged drawing area, that presents camera image data received from the camera.
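Read as a sketch, this affordance-driven flow resembles a small state machine. The Swift below illustrates that reading with invented screen and input names; it is not the claimed implementation.

```swift
import Foundation

// Hypothetical state machine for the described flow: the compact drawing area
// expands into an enlarged drawing area, whose camera affordance reveals a
// digital viewfinder. All names are illustrative only.
enum MessagingScreen {
    case transcriptWithCompactDrawingArea
    case enlargedDrawingArea(showingViewfinder: Bool)
}

enum UserInput { case tapExpandAffordance, tapCameraAffordance }

func transition(from screen: MessagingScreen, on input: UserInput) -> MessagingScreen {
    switch (screen, input) {
    case (.transcriptWithCompactDrawingArea, .tapExpandAffordance):
        // First input: replace the text messaging user interface with the enlarged drawing area.
        return .enlargedDrawingArea(showingViewfinder: false)
    case (.enlargedDrawingArea, .tapCameraAffordance):
        // Second input: display the digital viewfinder inside the enlarged drawing area.
        return .enlargedDrawingArea(showingViewfinder: true)
    default:
        return screen  // other combinations leave the screen unchanged
    }
}

var screen = MessagingScreen.transcriptWithCompactDrawingArea
screen = transition(from: screen, on: .tapExpandAffordance)
screen = transition(from: screen, on: .tapCameraAffordance)
print(screen)  // enlargedDrawingArea(showingViewfinder: true)
```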
An example method includes, at an electronic device having a touch-sensitive display, receiving, at the electronic device, message data from a contact, the message data including visual information capable of playback over time; displaying, on the touch-sensitive display, the message data including the visual information in a text messaging user interface of a messaging application, where the text messaging user interface includes a text message transcript associated with the contact, and where displaying the message data including the visual information comprises displaying a looped playback of the visual information in the text message transcript; in accordance with a determination that a status of the message data including the visual information meets display criteria, maintaining the looped playback of the visual information in the text message transcript; and in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, ceasing displaying the looped playback of the visual information in the text message transcript.
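One plausible reading of the display criteria is an expiry window for an ephemeral message, consistent with the ephemeral communications described earlier. The Swift sketch below assumes that reading; the lifetime value and all names are invented.

```swift
import Foundation

// A received message whose visual information loops in the transcript until
// its status no longer meets the display criteria; here the criteria are read
// as an expiry window measured from receipt (an assumption).
struct ReceivedVisualMessage {
    let receivedAt: Date
    let lifetime: TimeInterval  // how long looped playback persists, in seconds
}

func shouldKeepLooping(_ message: ReceivedVisualMessage, now: Date = Date()) -> Bool {
    // Meets the display criteria while the message has not yet expired.
    now.timeIntervalSince(message.receivedAt) < message.lifetime
}

let message = ReceivedVisualMessage(receivedAt: Date(), lifetime: 120)
if shouldKeepLooping(message) {
    print("maintain looped playback in the transcript")
} else {
    print("cease displaying the looped playback")
}
```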
Exemplary devices are disclosed herein. An example device includes a touch-sensitive display; a camera; one or more processors; a memory; and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors and include instructions for displaying, on the touch-sensitive display, a drawing area, where the drawing area includes a digital viewfinder that presents camera image data received from the camera; while displaying the drawing area, detecting a first touch input, at a first location in the drawing area, representing a first stroke; in response to detecting the first touch input, displaying a visual representation, at the first location in the drawing area, of the first stroke; while displaying the drawing area, detecting a user request to capture the camera image data presented in the digital viewfinder; in response to detecting the user request, capturing the camera image data presented in the digital viewfinder; and sending data representing the captured camera image data and the first stroke to an external device, where the sent data indicates a portion of the captured camera image data that corresponds to the first location of the first stroke.
An example electronic device comprises a touch-sensitive display; a camera; one or more processors; a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for displaying, on the touch-sensitive display, an image in a digital viewfinder, where the image is based on camera image data received from the camera; detecting a first touch input at a first location in the digital viewfinder; in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a recording mode, displaying, in the digital viewfinder, a visual representation corresponding to the first touch input at the first location; and in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a non-recording mode, altering the image displayed in the digital viewfinder by adjusting a characteristic of the camera image data.
An example electronic device comprises a touch-sensitive display; a camera; one or more processors; a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for displaying, on the touch-sensitive display, a text messaging user interface associated with a contact, where the text messaging user interface includes a message transcript area and a compact drawing area, where the compact drawing area includes an expand affordance corresponding to an enlarged drawing area; detecting a first user input corresponding to the expand affordance; in response to detecting the first user input, replacing the displayed text messaging user interface with display of the enlarged drawing area, where the enlarged drawing area includes a camera affordance; detecting a second user input corresponding to the camera affordance; and in response to detecting the second user input, displaying a digital viewfinder, in the enlarged drawing area, that presents camera image data received from the camera.
An example electronic device comprises a touch-sensitive display; one or more processors; a memory; and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for receiving, at the electronic device, message data from a contact, the message data including visual information capable of playback over time; displaying, on the touch-sensitive display, the message data including the visual information in a text messaging user interface of a messaging application, where the text messaging user interface includes a text message transcript associated with the contact, and where displaying the message data including the visual information comprises displaying a looped playback of the visual information in the text message transcript; in accordance with a determination that a status of the message data including the visual information meets display criteria, maintaining the looped playback of the visual information in the text message transcript; and in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, ceasing displaying the looped playback of the visual information in the text message transcript.
Example non-transitory computer readable storage media are disclosed herein. A non-transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to display, on a touch-sensitive display, a drawing area, where the drawing area includes a digital viewfinder that presents camera image data received from a camera; while displaying the drawing area, detect a first touch input, at a first location in the drawing area, representing a first stroke; in response to detecting the first touch input, display a visual representation, at the first location in the drawing area, of the first stroke; while displaying the drawing area, detect a user request to capture the camera image data presented in the digital viewfinder; in response to detecting the user request, capture the camera image data presented in the digital viewfinder; and send data representing the captured camera image data and the first stroke to an external device, where the sent data indicates a portion of the captured camera image data that corresponds to the first location of the first stroke.
An example non-transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to display, on a touch-sensitive display, an image in a digital viewfinder, where the image is based on camera image data received from a camera; detect a first touch input at a first location in the digital viewfinder; in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a recording mode, display, in the digital viewfinder, a visual representation corresponding to the first touch input at the first location; and in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a non-recording mode, alter the image displayed in the digital viewfinder by adjusting a characteristic of the camera image data.
An example non-transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to display, on a touch-sensitive display, a text messaging user interface associated with a contact, where the text messaging user interface includes a message transcript area and a compact drawing area, where the compact drawing area includes an expand affordance corresponding to an enlarged drawing area; detect a first user input corresponding to the expand affordance; in response to detecting the first user input, replace the displayed text messaging user interface with display of the enlarged drawing area, where the enlarged drawing area includes a camera affordance; detect a second user input corresponding to the camera affordance; and in response to detecting the second user input, display a digital viewfinder, in the enlarged drawing area, that presents camera image data received from a camera.
An example non-transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to receive, at the electronic device, message data from a contact, the message data including visual information capable of playback over time; display, on a touch-sensitive display, the message data including the visual information in a text messaging user interface of a messaging application, where the text messaging user interface includes a text message transcript associated with the contact, and where displaying the message data including the visual information comprises displaying a looped playback of the visual information in the text message transcript; in accordance with a determination that a status of the message data including the visual information meets display criteria, maintain the looped playback of the visual information in the text message transcript; and in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, cease displaying the looped playback of the visual information in the text message transcript.
In accordance with some embodiments, an electronic device comprises one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above. In accordance with some embodiments, a non-transitory computer readable storage medium stores one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the device to perform any of the methods described above. In accordance with some embodiments, an electronic device comprises means for performing any of the methods described above.
Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. An example transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to display, on a touch-sensitive display, a drawing area, where the drawing area includes a digital viewfinder that presents camera image data received from a camera; while displaying the drawing area, detect a first touch input, at a first location in the drawing area, representing a first stroke; in response to detecting the first touch input, display a visual representation, at the first location in the drawing area, of the first stroke; while displaying the drawing area, detect a user request to capture the camera image data presented in the digital viewfinder; in response to detecting the user request, capture the camera image data presented in the digital viewfinder; and send data representing the captured camera image data and the first stroke to an external device, where the sent data indicates a portion of the captured camera image data that corresponds to the first location of the first stroke.
An example transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to display, on a touch-sensitive display, an image in a digital viewfinder, where the image is based on camera image data received from a camera; detect a first touch input at a first location in the digital viewfinder; in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a recording mode, display, in the digital viewfinder, a visual representation corresponding to the first touch input at the first location; and in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a non-recording mode, alter the image displayed in the digital viewfinder by adjusting a characteristic of the camera image data.
An example transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to display, on a touch-sensitive display, a text messaging user interface associated with a contact, where the text messaging user interface includes a message transcript area and a compact drawing area, where the compact drawing area includes an expand affordance corresponding to an enlarged drawing area; detect a first user input corresponding to the expand affordance; in response to detecting the first user input, replace the displayed text messaging user interface with display of the enlarged drawing area, where the enlarged drawing area includes a camera affordance; detect a second user input corresponding to the camera affordance; and in response to detecting the second user input, display a digital viewfinder, in the enlarged drawing area, that presents camera image data received from a camera.
An example transitory computer readable storage medium stores one or more programs. The one or more programs comprise instructions, which when executed by one or more processors of an electronic device, cause the device to receive, at the electronic device, message data from a contact, the message data including visual information capable of playback over time; display, on a touch-sensitive display, the message data including the visual information in a text messaging user interface of a messaging application, where the text messaging user interface includes a text message transcript associated with the contact, and where displaying the message data including the visual information comprises displaying a looped playback of the visual information in the text message transcript; in accordance with a determination that a status of the message data including the visual information meets display criteria, maintain the looped playback of the visual information in the text message transcript; and in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, cease displaying the looped playback of the visual information in the text message transcript.
Thus, devices are provided with faster, more efficient methods and interfaces for electronic communications, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces can optionally complement or replace other methods for electronic communications.
For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments.
There is a need for electronic devices that provide efficient methods and interfaces for electronic communications. For example, there is a need to quickly compose electronic communications that extend beyond text messages. There is a need to connect with other users through electronic communications while still conveying emotion. In some cases, such techniques can reduce the cognitive burden on a user who produces electronic communications, thereby enhancing productivity. Further, such techniques can reduce processor and battery power otherwise wasted on redundant user inputs.
Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays.
As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
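As a hedged illustration of the estimation strategies just described, the Swift sketch below combines several force-sensor readings into a weighted average, and compares a substitute measurement (contact area) against a threshold either in the proxy's own units or after conversion to an estimated force. Every constant and name is an invented placeholder, not a value from the disclosure.

```swift
import Foundation

// Weighted average of several force-sensor readings, yielding an estimated
// force of the contact. Weights might reflect each sensor's distance from the
// contact point; here they are arbitrary.
func estimatedForce(readings: [Double], weights: [Double]) -> Double {
    precondition(readings.count == weights.count, "one weight per sensor reading")
    let weightedSum = zip(readings, weights).map { $0 * $1 }.reduce(0, +)
    return weightedSum / weights.reduce(0, +)
}

// Substitute measurement used directly: the threshold is expressed in the
// proxy's own units (contact area in mm^2).
func exceedsAreaThreshold(contactAreaMM2: Double, thresholdMM2: Double = 80) -> Bool {
    contactAreaMM2 > thresholdMM2
}

// Substitute measurement converted first: the proxy is mapped to an estimated
// force, then compared against a threshold in force units (newtons).
func exceedsForceThreshold(contactAreaMM2: Double,
                           newtonsPerMM2: Double = 0.01,
                           thresholdNewtons: Double = 1.0) -> Bool {
    contactAreaMM2 * newtonsPerMM2 > thresholdNewtons
}

print(estimatedForce(readings: [0.8, 1.2, 1.0], weights: [1, 2, 1]))  // ≈ 1.05
print(exceedsAreaThreshold(contactAreaMM2: 95))                       // true
```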
As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in
Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution-Data Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212,
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g., 208,
A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons are, optionally, user-customizable. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
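Purely as an illustrative sketch, press-duration dispatch of the kind described can be modeled as below; the 0.8-second boundary is an invented value, not one taken from the referenced application.

```swift
import Foundation

// Hypothetical dispatch on press duration: a quick press takes the lock/unlock
// path, while a longer press takes the power path.
enum PushButtonAction { case toggleScreenLock, togglePower }

func action(forPressDuration duration: TimeInterval) -> PushButtonAction {
    duration < 0.8 ? .toggleScreenLock : .togglePower
}

print(action(forPressDuration: 0.2))  // toggleScreenLock
print(action(forPressDuration: 2.0))  // togglePower
```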
Touch-sensitive display 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch screen 112. Touch screen 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects.
Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112. In an exemplary embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, Calif.
A touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. patents: U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output.
A touch-sensitive display in some embodiments of touch screen 112 is described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A Computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164.
Device 100 optionally also includes one or more contact intensity sensors 165.
Device 100 optionally also includes one or more proximity sensors 166.
Device 100 optionally also includes one or more tactile output generators 167.
Device 100 optionally also includes one or more accelerometers 168.
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 (
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
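The speed and velocity determinations described above can be illustrated with a short Swift sketch over timestamped contact samples; the sample type and values are invented for the example.

```swift
import Foundation

// Timestamped contact samples between a finger-down and a finger-up event.
struct ContactSample {
    let time: TimeInterval  // seconds
    let x: Double           // position in points
    let y: Double
}

// Velocity (magnitude and direction) between consecutive samples, in points/second.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double) {
    let dt = b.time - a.time
    guard dt > 0 else { return (dx: 0, dy: 0) }
    return (dx: (b.x - a.x) / dt, dy: (b.y - a.y) / dt)
}

// Speed (magnitude only) between consecutive samples.
func speed(from a: ContactSample, to b: ContactSample) -> Double {
    let v = velocity(from: a, to: b)
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}

let down = ContactSample(time: 0.00, x: 100, y: 200)  // finger-down event
let drag = ContactSample(time: 0.05, x: 130, y: 160)  // finger-dragging event
print(velocity(from: down, to: drag))  // ≈ (dx: 600.0, dy: -800.0)
print(speed(from: down, to: drag))     // ≈ 1000.0
```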
In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter).
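A minimal sketch of software-defined intensity thresholds follows, including a single system-level parameter that rescales the whole set at once, as the text describes. The threshold names and values are assumptions made for the example.

```swift
import Foundation

// Thresholds held as software parameters rather than fixed actuator properties,
// so they can be tuned without any change to the physical hardware.
struct IntensityThresholds {
    var lightPress: Double = 0.3  // e.g., treated as a mouse-click threshold
    var deepPress: Double = 0.7

    // One system-level "click intensity" setting rescales the whole set at once.
    mutating func applySystemClickIntensity(_ factor: Double) {
        lightPress *= factor
        deepPress *= factor
    }
}

var thresholds = IntensityThresholds()
thresholds.applySystemClickIntensity(1.5)  // user prefers firmer presses
print(thresholds.lightPress, thresholds.deepPress)  // ≈ 0.45 1.05
```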
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event.
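The tap-versus-swipe contact patterns described above can be sketched as a simple classifier over the event sequence; the movement cutoff ("slop") is an invented value, and liftoff near the finger-down position stands in for "substantially the same position."

```swift
import Foundation

// Events in the order the contact/motion description gives them.
enum TouchEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}

enum Gesture { case tap, swipe, unknown }

func classify(_ events: [TouchEvent], slop: Double = 10) -> Gesture {
    guard case .fingerDown(let x0, let y0)? = events.first,
          case .fingerUp(let x1, let y1)? = events.last else { return .unknown }
    let travel = ((x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0)).squareRoot()
    // Liftoff at (substantially) the same position as finger-down: a tap.
    return travel <= slop ? .tap : .swipe
}

let tap: [TouchEvent] = [.fingerDown(x: 50, y: 50), .fingerUp(x: 52, y: 51)]
let swipe: [TouchEvent] = [.fingerDown(x: 50, y: 50),
                           .fingerDrag(x: 120, y: 52),
                           .fingerUp(x: 200, y: 55)]
print(classify(tap), classify(swipe))  // tap swipe
```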
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
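As a non-authoritative sketch of the code-based lookup just described, the Swift below resolves application-supplied graphic codes, coordinate data, and property data into draw commands standing in for screen image data; all names are invented.

```swift
import Foundation

// An application-supplied request: a code identifying a stored graphic plus
// coordinate data and another graphic property.
struct GraphicRequest {
    let code: Int
    let x: Double
    let y: Double
    let opacity: Double
}

struct GraphicsStore {
    private var graphics: [Int: String] = [:]  // stand-in for stored graphic data

    mutating func register(code: Int, graphic: String) {
        graphics[code] = graphic
    }

    // Resolve each request to a draw command; the command list stands in for
    // the screen image data handed to the display controller.
    func render(_ requests: [GraphicRequest]) -> [String] {
        requests.compactMap { request -> String? in
            guard let graphic = graphics[request.code] else { return nil }
            return "draw \(graphic) at (\(request.x), \(request.y)) alpha \(request.opacity)"
        }
    }
}

var store = GraphicsStore()
store.register(code: 7, graphic: "soft-key-icon")
print(store.render([GraphicRequest(code: 7, x: 40, y: 88, opacity: 1.0)]))
```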
Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof: contacts module 137; telephone module 138; video conference module 139; e-mail client module 140; instant messaging (IM) module 141; workout support module 142; camera module 143; image management module 144; browser module 147; calendar module 148; widget modules 149 (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and user-created widget 149-6); widget creator module 150; search module 151; video and music player module 152; notes module 153; map module 154; and/or online video module 155.
Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephone module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch screen 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety.
Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. For example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152).
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
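By way of illustration only, the following Swift sketch models the hit-view search just described. The View type and the recursive walk are hypothetical stand-ins for the device's actual view hierarchy, not an implementation of hit view determination module 172; the sketch simply returns the deepest view whose bounds contain the location of the initiating sub-event.

```swift
// Minimal, hypothetical model of hit-view determination: the hit view is
// the deepest view in the hierarchy whose frame contains the touch point.
import CoreGraphics

final class View {
    let frame: CGRect
    let subviews: [View]
    init(frame: CGRect, subviews: [View] = []) {
        self.frame = frame
        self.subviews = subviews
    }

    /// Returns the deepest descendant (or self) containing `point`,
    /// or nil if the point lies outside this view entirely.
    func hitView(for point: CGPoint) -> View? {
        guard frame.contains(point) else { return nil }
        // Convert to this view's local coordinate space for children.
        let local = CGPoint(x: point.x - frame.origin.x,
                            y: point.y - frame.origin.y)
        // Later subviews are drawn on top, so search them first.
        for subview in subviews.reversed() {
            if let hit = subview.hitView(for: local) {
                return hit   // a lower-level (deeper) view wins
            }
        }
        return self          // no child contains the point; self is the hit view
    }
}
```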
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
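The sub-event sequences above can be pictured as small state machines. The following sketch is a simplified, hypothetical model of event definitions 186 (the SubEvent and EventDefinition names are invented for illustration); it reports whether an observed sequence is still possible, fully recognized, or failed. Real definitions also constrain touch phases and timing, which this sketch omits.

```swift
// Hypothetical sketch of event definitions as predefined sub-event sequences.
enum SubEvent { case touchBegin, touchEnd, touchMove, touchCancel }

struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

// "Event 1": double tap = begin, end, begin, end on the displayed object.
let doubleTap = EventDefinition(
    name: "event 1 (double tap)",
    sequence: [.touchBegin, .touchEnd, .touchBegin, .touchEnd])

// "Event 2": drag = begin, movement, liftoff (a real drag allows many moves).
let drag = EventDefinition(
    name: "event 2 (drag)",
    sequence: [.touchBegin, .touchMove, .touchEnd])

enum RecognitionState { case possible, recognized, failed }

/// Toy comparator: .possible while the observed prefix still matches,
/// .recognized on a full match, .failed otherwise.
func compare(observed: [SubEvent], against def: EventDefinition) -> RecognitionState {
    if observed.count > def.sequence.count { return .failed }
    for (seen, expected) in zip(observed, def.sequence) where seen != expected {
        return .failed
    }
    return observed.count == def.sequence.count ? .recognized : .possible
}
```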
In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
In some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (or deferring the sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
In some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device 100.
Although some of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
Exemplary techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Serial No. PCT/US2013/040061, titled “Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, published as WIPO Publication No. WO/2013/169849, and International Patent Application Serial No. PCT/US2013/069483, titled “Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed Nov. 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in their entirety.
In some embodiments, device 500 has one or more input mechanisms 506 and 508. Input mechanisms 506 and 508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device 500 to be worn by a user.
Input mechanism 508 is, optionally, a microphone, in some examples. Personal electronic device 500 optionally includes various sensors, such as GPS sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to I/O section 514.
Memory 518 of personal electronic device 500 can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 1000-1300.
As used here, the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500.
As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355), the input is directed to the user interface element over which the cursor is located.
As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
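As a concrete, non-authoritative illustration of the two paragraphs above, the following Swift sketch computes a characteristic intensity from raw intensity samples after an unweighted sliding-average smoothing pass. The function names and the three-sample window are assumptions made for the example.

```swift
// Unweighted sliding-average smoothing over raw intensity samples,
// suppressing narrow spikes or dips before the characteristic is taken.
func slidingAverage(_ samples: [Double], window: Int = 3) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    return samples.indices.map { i in
        let lo = max(0, i - window + 1)
        let slice = samples[lo...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

enum Characteristic { case maximum, mean, top10Percentile }

/// Characteristic intensity of a contact, per one of the characteristics
/// named in the description (maximum, mean, or top-10-percentile value).
func characteristicIntensity(of samples: [Double],
                             using characteristic: Characteristic) -> Double? {
    let smoothed = slidingAverage(samples)
    guard !smoothed.isEmpty else { return nil }
    switch characteristic {
    case .maximum:
        return smoothed.max()
    case .mean:
        return smoothed.reduce(0, +) / Double(smoothed.count)
    case .top10Percentile:
        // Value at the 90th percentile of the sorted, smoothed samples.
        let sorted = smoothed.sorted()
        let index = Int(Double(sorted.count - 1) * 0.9)
        return sorted[index]
    }
}
```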
The intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero.
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a “down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an “up stroke” of the respective press input).
In some embodiments, the display of representations 578A-578C includes an animation. For example, representation 578A is initially displayed in proximity to application icon 572B.
In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an “up stroke” of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
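The hysteresis behavior can be summarized in a few lines. The sketch below is a minimal model, assuming the 75% hysteresis ratio given above as an example; it recognizes the down stroke when intensity crosses the press-input threshold and the up stroke only when intensity falls below the lower hysteresis threshold, so brief dips (jitter) do not end the press.

```swift
// Minimal sketch of intensity hysteresis for press detection.
struct PressDetector {
    let pressThreshold: Double
    // Hysteresis threshold at 75% of the press threshold (example ratio).
    var hysteresisThreshold: Double { pressThreshold * 0.75 }
    private(set) var isPressed = false

    /// Feed one intensity sample; returns true when a complete press
    /// (down stroke followed by up stroke) has just been recognized.
    mutating func process(intensity: Double) -> Bool {
        if !isPressed && intensity >= pressThreshold {
            isPressed = true            // down stroke
        } else if isPressed && intensity <= hysteresisThreshold {
            isPressed = false           // up stroke below the hysteresis level
            return true                 // respective operation fires here
        }
        return false
    }
}

// Example: with a press threshold of 1.0, a brief dip to 0.8 (still above
// the 0.75 hysteresis level) does not end the press; the drop to 0.5 does.
var detector = PressDetector(pressThreshold: 1.0)
for sample in [0.2, 1.1, 0.8, 1.05, 0.5] {
    _ = detector.process(intensity: sample)
}
```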
For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold.
As used herein, an “installed application” refers to a software application that has been downloaded onto an electronic device (e.g., devices 100, 300, and/or 500) and is ready to be launched (e.g., become opened) on the device. In some embodiments, a downloaded application becomes an installed application by way of an installation program that extracts program portions from a downloaded package and integrates the extracted portions with the operating system of the computer system.
As used herein, the terms “open application” or “executing application” refer to a software application with retained state information (e.g., as part of device/global internal state 157 and/or application internal state 192). An open or executing application is, optionally, any one of the following types of applications:
As used herein, the term “closed application” refers to software applications without retained state information (e.g., state information for closed applications is not stored in a memory of the device). Accordingly, closing an application includes stopping and/or removing application processes for the application and removing state information for the application from the memory of the device. Generally, opening a second application while in a first application does not close the first application. When the second application is displayed and the first application ceases to be displayed, the first application becomes a background application.
Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Additionally and/or alternatively, the touch inputs 608 detected in the drawing area 606 include a detected characteristic intensity, such as an intensity profile of various values over time and/or a single detected value. Merely by way of example, a preview or visual representation of the first stroke can optionally reflect a corresponding characteristic intensity of the first touch input and translate the detected characteristic intensity into a graphic rendering (e.g., an intensity of an animated fire varies with a fluctuating characteristic intensity of the touch input 608). In another example, an animated graphic is associated with the characteristic intensity of the touch input 608 exceeding an intensity threshold (e.g., the graphic is displayed only when the characteristic intensity exceeds the threshold). In a further example, a stroke thickness and/or color varies dynamically in accordance with the characteristic intensity of its touch input 608. In other examples, a color of the visual representation of the first stroke is static and/or displayed in accordance with a color corresponding to a selected color affordance, such as an affordance from the plurality of color affordances 614 provided above the drawing area 606.
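Merely as an illustration of how characteristic intensity might drive rendering, the following sketch maps a normalized intensity to a stroke width, an opacity, and a gate for an animated graphic. The numeric ranges and the threshold value are invented for the example; the description only requires that rendering vary with intensity and that some graphics gate on an intensity threshold.

```swift
// Illustrative mapping from a touch's characteristic intensity to stroke
// rendering parameters. All numeric values are assumptions for the sketch.
import CoreGraphics

struct StrokeStyle {
    var width: CGFloat
    var alpha: CGFloat
    var showsAnimatedGraphic: Bool
}

func strokeStyle(forIntensity intensity: CGFloat,
                 graphicThreshold: CGFloat = 0.8) -> StrokeStyle {
    let clamped = min(max(intensity, 0), 1)
    return StrokeStyle(
        width: 2 + clamped * 10,                 // thicker with harder presses
        alpha: 0.4 + clamped * 0.6,              // more opaque with harder presses
        showsAnimatedGraphic: clamped >= graphicThreshold)  // e.g., animated fire
}
```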
It is further noted that additional, subsequent inputs, or multiple touch inputs, are optionally detected. For example, a second touch input can optionally be detected an intervening amount of time after the first touch input is detected. The second touch input can optionally be detected at a second location in the drawing area 606 and represent a second stroke, such as a line sketch or graphic. Visual representations corresponding to the first stroke and the second stroke are optionally separated by the intervening amount of time, and the second stroke can optionally be detected before, during, and/or after recording of the video. It is contemplated that, in response to detecting the second touch input, the visual representation corresponding to the second touch input is displayed at the second location in the drawing area. The visual representation corresponding to the second stroke can optionally fade out or remain displayed independent of the visual representation corresponding to the first stroke. Additionally, the second stroke can optionally include characteristic kinematics and/or a characteristic intensity in similar fashion as the first stroke.
As described previously, the device 600 displays a visual representation in response to the touch inputs in the drawing area 606. Such visual representations can optionally include a line described by movement of the finger across the touch-sensitive display 602 that is within the drawing area 606. In another example, the visual representation includes an animated graphic that is pre-generated or predetermined, and/or a still graphic displayed at the first location in the drawing area 606. Animated graphics can optionally be displayed in accordance with one or more characteristics such as a characteristic intensity, a characteristic kinematic, and/or a duration of the corresponding first touch input or second touch input. Various animated graphics contemplated herein and described further below include, for example, a beating heart, a breaking heart, and/or a fireball. It is contemplated that by displaying such visual representations with their corresponding one or more characteristics, users communicating electronically through the electronic touch communication functionalities described herein can optionally convey emotions that further enhance the messages exchanged among one another.
Further, as described herein, visual representations can optionally include still graphics. For example, the visual representation of the first stroke can include a still graphic that is displayed in accordance with an orientation or angle of its corresponding touch input (e.g., an angle defined between a multiple-finger touch input). Various still graphics contemplated herein can optionally include, merely by way of example, a heart, a kiss, a tear drop, and/or any other still graphic. Such still graphics can optionally be selected to help users convey emotions in electronic communications, such as enhancing emotions conveyed in textual messages, video messages, and/or picture messages. In other examples as shown above, visual representations can optionally correspond to lines, whereby a first stroke includes a first endpoint corresponding to an initiation of the first touch input, a second endpoint corresponding to liftoff termination of the first touch input, and a line corresponding to movement of the first touch input across the touch-sensitive display. Display of the visual representation of the line includes displaying characteristic kinematics of the movement of the first touch input from the first endpoint to the second endpoint in response to detecting the first touch input.
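A stroke that preserves characteristic kinematics can be modeled as a list of timestamped points, as in the hypothetical sketch below (the Stroke and StrokePoint types are illustrative, not the device's actual data model). Playback then retraces the line from the first endpoint to the second at the speed it was originally drawn.

```swift
// Hypothetical stroke model: timestamped points preserve the movement's
// characteristic kinematics for later replay.
import CoreGraphics
import Foundation

struct StrokePoint {
    let location: CGPoint
    let timestamp: TimeInterval   // relative to the stroke's first touch
}

struct Stroke {
    var points: [StrokePoint] = []

    var firstEndpoint: CGPoint? { points.first?.location }   // touch began
    var secondEndpoint: CGPoint? { points.last?.location }   // liftoff

    /// The point to display `elapsed` seconds into playback, so replay
    /// reproduces the original drawing speed rather than a constant rate.
    func point(at elapsed: TimeInterval) -> CGPoint? {
        points.last(where: { $0.timestamp <= elapsed })?.location
    }
}
```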
Further, the device 600 can optionally associate the captured camera image data with the first location of the first stroke. For example, the device 600 can optionally overlay or superimpose the first stroke at the first location with the picture or recorded video. In another example, the device 600 can optionally associate an object in the digital viewfinder 610 with the visual representation of the first stroke. For instance, the device 600 can optionally associate the visual representation of the first stroke with a tracked point, such as a visually-tracked point, a mesh-marked point, and/or any of such points that can optionally correspond to an object in the camera image data presented in the digital viewfinder. Merely by way of example, the device 600 can optionally track objects while capturing the camera image data. For instance, a first stroke can optionally be detected after detection of the user request to capture the camera image data and/or while the user is recording video. In other examples, the first stroke can optionally be detected prior to capturing the camera image data, during a “set-up” period prior to recording, so that a user can associate one or more visual representations with tracked points before recording begins and the visual representations are displayed automatically during recording. In accordance with a determination that the tracked point associated with the first stroke is displayed in the digital viewfinder (e.g., the camera is panned and the tracked point is within the digital viewfinder), the device 600 can optionally display the visual representation of the first stroke in the drawing area 606 at the tracked point. For instance, displaying the visual representation of the first stroke in the drawing area 606 can optionally include displaying the visual representation of the first stroke over the digital viewfinder 610 presented in the drawing area 606. During recording, the device 600 can optionally continue to update display of the visual representation of the first stroke to coincide with the tracked point as the tracked point traverses the digital viewfinder 610 (e.g., due to camera panning). The visual representation can optionally fade out while the tracked point is still within the viewfinder, in which case the visual representation can optionally be redisplayed after the tracked point has been detected to exit and then reenter the digital viewfinder 610. In other cases, the visual representation can optionally include an animated graphic that loops playback at the tracked point and/or otherwise is maintained for display at the tracked point.
In accordance with a determination that the tracked point associated with the first stroke is not displayed in the digital viewfinder 610 (e.g., the tracked point has moved out of view of the digital viewfinder), the device 600 can optionally cease to display the visual representation of the first stroke. For instance, in one example, if a visual representation corresponding to the stroke is applied to an object and the camera pans away from the object such that the object is no longer presented in the digital viewfinder 610, then the representation of the first stroke is removed from display (e.g., removed before it fades out on its own). When the camera pans back to the same object, which is now re-presented in the digital viewfinder 610, the visual representation of the first stroke reappears on the object. It is contemplated that the recorded video reflects the appearance and removal of the visual representation as it was displayed during the video capture.
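The show/hide behavior for a tracked point can be reduced to a small check, sketched below under the assumption that the tracker reports the point's current viewfinder coordinates (or nil when tracking is lost); the names are illustrative only.

```swift
// Sketch: a drawn stroke is pinned to a visually tracked point; it is shown
// at that point while the point is on screen and hidden while panned away.
import CoreGraphics

struct TrackedStroke {
    let strokeID: Int
    var trackedPoint: CGPoint?   // nil when tracking has lost the object
}

/// Returns where to draw the stroke's visual representation,
/// or nil to cease displaying it (object out of the viewfinder).
func overlayPosition(for stroke: TrackedStroke,
                     viewfinderBounds: CGRect) -> CGPoint? {
    guard let point = stroke.trackedPoint,
          viewfinderBounds.contains(point) else { return nil }
    return point
}
```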
In another example, the visual representation of the first stroke can optionally be associated with a direction. For instance, while capturing the camera image data, the device 600 can optionally associate the visual representation of the first stroke with a compass point associated with a viewing direction of the camera image data captured in the digital viewfinder. The compass point can optionally be based on a compass direction corresponding to the electronic device that is detected and registered as corresponding to the first stroke when the first touch input is detected at the first location of the drawing area. In accordance with a determination that the viewfinder is pointed in the direction of the compass point, the device 600 can optionally display the visual representation of the first stroke in the drawing area 606. Further, in accordance with a determination that the viewfinder is no longer pointed in the direction of the compass point, the device 600 can optionally cease to display the visual representation of the first stroke.
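A corresponding sketch for the compass-point variant follows; the 30-degree tolerance is an assumption made for illustration, since the description does not specify how closely the viewing direction must match the registered compass point.

```swift
// Sketch: the stroke is registered against the device heading at draw time
// and shown only while the viewfinder points in (roughly) that direction.
func strokeIsVisible(strokeHeading: Double,
                     currentHeading: Double,
                     toleranceDegrees: Double = 30) -> Bool {
    // Smallest angular difference on the 0..<360 compass circle.
    let raw = abs(strokeHeading - currentHeading)
        .truncatingRemainder(dividingBy: 360)
    let delta = min(raw, 360 - raw)
    return delta <= toleranceDegrees
}
```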
In another example, in accordance with various embodiments described herein, the preview can optionally be looped. For instance, a preview of a recorded video can optionally include a looped playback of the video. In some examples, the first touch input is detected in the drawing area 606 while displaying the preview that loops the playback of the video. The device 600 can optionally display a playback of the visual representation of the first stroke at the first location in the drawing area 606. The preview of the visual representation can optionally be displayed before, during, and/or after capturing the camera image data. The preview of the visual representation can optionally correspond to a sketch, an animated graphic, or a still graphic of the first touch input, and playback of the visual representation can optionally include a looped playback. In some examples, the looped playback is maintained until the data is sent to the external device or upon user request to cancel the preview. Further, the looped playback can optionally reflect characteristic kinematics and/or characteristic intensity in the visual representation.
As described above, the preview can optionally further include captured camera image data. For instance, the device 600 can optionally cease capturing the camera image data when recording is finished or a picture is taken. In some cases, described further below, the recording ends automatically in accordance with a recording timer, such as a 10-second timer, and/or ends manually from user input. In such cases, after ceasing capturing the camera image data, the displayed digital viewfinder 610 in the drawing area 606 can optionally be automatically replaced with the preview in the drawing area 606.
Further, the preview can optionally include maintaining display of the visual representation at the first location for a duration of a single loop of the captured camera image data. For instance, the first stroke can optionally be detected prior to recording video or taking a still image, and/or the visual representation of the first stroke can optionally include an animation characteristic that causes the visual representation to be stamped onto the recorded video until the video ends or the still image is removed from display. In another example, the visual representation of the first stroke can optionally be displayed in the preview with an animated effect rather than with the characteristic kinematics. The animated effect can optionally include initially flashing (e.g., flash, burn effect) the visual representation onto the display at the first location and maintaining display of the visual representation at the first location for the duration of the captured camera image data. In that case, looping display of the visual representation of the first stroke in the preview mode includes replaying the flash-on effect and maintaining the visual representation for the remaining duration of the captured camera image data, such that the visual representation of the first stroke appears stamped on as a still graphic after the animated flash-on effect. In practice, any visual representation can optionally be permanently displayed by simply entering its corresponding touch input prior to initiating the digital viewfinder 610.
In other examples, the device 600 can optionally display the visual representation of the first stroke for a portion of the preview. For instance, displaying the preview can optionally include displaying the visual representation at the first location for at least a partial duration of a single loop of the captured camera image data. First strokes that are detected prior to recording video and/or taking a still image can optionally have their visual representations displayed for a predetermined period of time before fade-out and/or according to a duration of the first touch input. In other cases, where the first stroke is detected while recording the video, the visual representation can optionally be displayed for a remaining duration of the recorded video until the video ends, fade out after a predetermined period of time, or be displayed for a duration corresponding to the first touch input. In still other cases, the first stroke can optionally be detected during the preview mode.
It is further noted that the visual representation of the second stroke can optionally be displayed in the preview. In one example, the device 600 can optionally display a preview including the captured camera image data, the visual representation of the first stroke at the first location, and the visual representation of the second stroke at the second location, where the visual representation of the first stroke and the visual representation of the second stroke are displayed in the preview in the order that their corresponding first touch input and second touch input were detected in the drawing area. For example, when the captured camera image data corresponds to a still image, the visual representation of the second stroke can optionally be displayed immediately after the visual representation of the first stroke, without a pause corresponding to the intervening amount of time between detection of the first stroke and detection of the second stroke. The visual representation of the second stroke can optionally fade out after a predetermined period of time or remain displayed after its input. In another example, the captured camera image data includes a video, and displaying the preview includes looping playback of the visual representation of the first stroke and the visual representation of the second stroke, with the intervening amount of time, over a looped playback of the video. For instance, when the captured camera image data corresponds to a recorded video, the visual representations are displayed so as to be timed with the video recording such that they appear on the frames during which they were received.
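The ordered, looped playback described above can be sketched as a simple schedule keyed to the loop clock: each stroke becomes visible at its recorded offset, preserving the intervening amount of time on every loop. The types below are hypothetical stand-ins for the device's actual playback machinery.

```swift
// Illustrative playback schedule: strokes replay at their offsets from the
// start of each loop of the preview, preserving intervening time.
import Foundation

struct ScheduledStroke {
    let strokeID: Int
    let startOffset: TimeInterval   // when its touch input was detected
}

/// Returns the IDs of strokes that should be visible at `loopTime`,
/// the current position within one loop of the preview.
func strokesToShow(at loopTime: TimeInterval,
                   schedule: [ScheduledStroke]) -> [Int] {
    schedule.filter { $0.startOffset <= loopTime }.map { $0.strokeID }
}
```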
In some examples, the data is manually sent. For example, the device 600 can optionally display a send affordance 634 in the drawing area 606 or preview and detect a third touch input 636 corresponding to selection of the send affordance 634. Sending the data representing the captured camera image data and the first stroke to the external device occurs in response to detecting the third touch input 636. For example, the device 600 sends both visual representations along with the intervening amount of time. In another example, the device 600 sends the preview based on the touch inputs and captured camera image data.
In another example, the data includes flattened data. For instance, prior to sending data representing the captured camera image data and the first stroke to the external device, and in accordance with a determination of a status of the external device, such as a status indicating that the external device is unable to receive non-encoded data, the device 600 can optionally encode the captured camera image data with the visual representation of the first stroke. In some cases, flattening of the video and visual representations is achieved with a custom video compositor. Alternatively, the device 600 requests a server to flatten the data. In another aspect, sending data representing the captured camera image data and the first stroke to the external device includes sending the encoded captured camera image data. For instance, in some cases, the data representing the captured camera image data and the first stroke comprises a separate data package for each of the captured camera image data and the first stroke. The electronic device can optionally determine that the external device is unable to receive the separate data packages. For example, in some cases the external device can optionally be outside of a network connection that permits sending and/or receiving of such separate data packages. In that case, the electronic device can optionally flatten the still image and/or recorded video with the first stroke in order to provide an encoded video to the external device. The encoded video can optionally be generated at the electronic device and/or at a server in communication with the electronic device. For example, the electronic device can optionally instruct the server to generate the encoded video.
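As a hedged sketch of this send decision (with invented type names standing in for whatever transport the device actually uses), the following chooses between sending separate media and stroke packages and sending a single flattened, encoded video produced by a supplied compositor closure, which could run locally or delegate the work to a server:

```swift
// Sketch of the flatten-or-separate send decision. All names are
// hypothetical; the description only requires the two behaviors.
import Foundation

struct MediaPackage { let imageData: Data }    // still image or video
struct StrokePackage { let strokeData: Data }  // stroke locations/kinematics

enum OutgoingPayload {
    case separate(MediaPackage, StrokePackage)
    case flattened(Data)   // encoded video with strokes composited in
}

func payload(media: MediaPackage,
             strokes: StrokePackage,
             recipientAcceptsSeparatePackages: Bool,
             flatten: (MediaPackage, StrokePackage) -> Data) -> OutgoingPayload {
    if recipientAcceptsSeparatePackages {
        // Recipient can re-render the strokes over the media itself.
        return .separate(media, strokes)
    }
    // Otherwise composite the strokes into an encoded video, e.g., with a
    // custom video compositor locally or by delegating to a server.
    return .flattened(flatten(media, strokes))
}
```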
In another example, the device 600 removes sent data after expiry. In response to a determination that at least a portion of the sent data has been provided in the message transcript area 638 for a predetermined period of time, the device 600 removes display of that portion of the sent data, such as the still image or the looped playback of the sent data, from the message transcript area. Other examples are possible.
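A minimal sketch of this expiry behavior follows; the two-minute lifetime is an assumed example value, as the description only requires a predetermined period of time.

```swift
// Minimal sketch of ephemeral display: a message's media is removed from
// the transcript once it has been shown for a predetermined period.
import Foundation

struct TranscriptEntry {
    let shownAt: Date
    let lifetime: TimeInterval = 120   // assumed expiry period (2 minutes)
    var isExpired: Bool { Date().timeIntervalSince(shownAt) >= lifetime }
}

/// Drops expired entries; callers would re-run this on a timer or on
/// each transcript refresh.
func prune(_ transcript: [TranscriptEntry]) -> [TranscriptEntry] {
    transcript.filter { !$0.isExpired }
}
```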
As described below, method 1000 provides an intuitive way for electronic communications with video and/or still images. The method reduces the cognitive burden on a user for electronic communications, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to communicate electronically in a faster and more efficient manner conserves power and increases the time between battery charges.
As shown in the method 1000, the device (1002) can optionally display, on the touch-sensitive display screen (e.g., display screen 602), a drawing area, wherein the drawing area includes a digital viewfinder (e.g., digital viewfinder 610) that presents camera image data received from the camera (e.g., camera 604). The device (1004) can optionally, while displaying the drawing area, detect a first touch input, at a first location in the drawing area, representing a first stroke. The device (1006) can optionally, in response to detecting the first touch input, display a visual representation, at the first location in the drawing area, of the first stroke. The device (1008) can optionally, while displaying the drawing area, detect a user request to capture the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610). The device (1010) can optionally, in response to detecting the user request, capture the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610). The device (1012) can optionally send data representing the captured camera image data and the first stroke to an external device, wherein the sent data indicates a portion of the captured camera image data that corresponds to the first location of the first stroke.
In some embodiments, the device associates the captured camera image data with the first location of the first stroke.
In some embodiments, the device associates the first stroke with a tracked point in the digital viewfinder (e.g., digital viewfinder 610) that corresponds to the first location of the first stroke.
In some embodiments, while capturing the camera image data: in accordance with a determination that the tracked point associated with the first stroke is displayed in the digital viewfinder (e.g., digital viewfinder 610), the device displays the visual representation of the first stroke in the drawing area at the tracked point; and in accordance with a determination that the tracked point associated with the first stroke is not displayed in the digital viewfinder (e.g., digital viewfinder 610), the device ceases to display the visual representation of the first stroke.
In some embodiments, while capturing the camera image data: the device associates the first stroke with a compass point indicative of a first viewing direction of the digital viewfinder (e.g., digital viewfinder 610), wherein the compass point is based on a compass direction detected at the electronic device (e.g., device 600); in accordance with a determination that the first viewing direction is displayed in the digital viewfinder (e.g., digital viewfinder 610), the device displays the visual representation of the first stroke in the drawing area at a position corresponding to the compass point; and in accordance with a determination that the first viewing direction is not displayed in the digital viewfinder (e.g., digital viewfinder 610), the device ceases to display the visual representation of the first stroke in the drawing area.
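A minimal sketch of the compass-point determination follows, assuming heading values in degrees and a horizontal field of view of roughly 60 degrees; both values are illustrative assumptions.

```swift
import CoreLocation
import CoreGraphics

/// Illustrative sketch: a stroke anchored to the compass heading at which
/// it was drawn.
struct CompassAnchoredStroke {
    let headingDegrees: CLLocationDirection   // heading when the stroke was drawn
    let normalizedLocation: CGPoint
}

/// Decides, for each frame, whether the stroke's heading lies inside the
/// viewfinder's current horizontal field of view.
func isHeadingVisible(strokeHeading: CLLocationDirection,
                      currentHeading: CLLocationDirection,
                      horizontalFOV: Double = 60) -> Bool {
    // Smallest signed angular difference between the two headings.
    var delta = (strokeHeading - currentHeading).truncatingRemainder(dividingBy: 360)
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    // Visible when within half the field of view of the viewfinder center.
    return abs(delta) <= horizontalFOV / 2
}
```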
In some embodiments, the first stroke includes characteristic kinematics based on the detected first touch input, and the visual representation of the first stroke includes the characteristic kinematics (e.g.,
In some embodiments, the first stroke includes a characteristic intensity based on the detected first touch input, and the visual representation of the first stroke is based on the characteristic intensity (e.g.,
In some embodiments, wherein the first touch input is detected prior to capturing the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610), the device maintains display of the visual representation of the first stroke at the first location in the drawing area while capturing the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610) (e.g.,
In some embodiments, wherein the first touch input is detected prior to capturing the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610), the device maintains display of the visual representation of the first stroke at the first location in the drawing area for a partial duration of time while capturing the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610) (e.g.,
In some embodiments, wherein the first touch input is detected while capturing the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610), and wherein capturing the camera image data comprises recording a video of the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610), the device displays the visual representation of the first stroke at the first location in the drawing area for at least a period of time while capturing the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610) (e.g.,
In some embodiments, the first touch input is detected while displaying the captured camera image data in the drawing area, and the displayed captured camera image data includes at least one of a still image captured by the camera (e.g., camera 604) and a playback of a video recorded by the camera (e.g., camera 604).
In some embodiments, the playback of the video is a looped playback of the video.
In some embodiments, the device displays, on the touch-sensitive display screen (e.g., display screen 602), a playback of the visual representation of the first stroke at the first location in the drawing area.
In some embodiments, the playback of the visual representation is a looped playback of the visual representation.
In some embodiments, the device ceases capturing the camera image data; and after ceasing capturing the camera image data, replaces the displayed digital viewfinder (e.g., digital viewfinder 610) in the drawing area with display of a preview based on overlaying the playback of the visual representation on the displayed captured camera image data.
In some embodiments, displaying the preview includes maintaining display of the visual representation at the first location for a duration of a single loop of the captured camera image data (e.g.,
In some embodiments, displaying the preview includes displaying the visual representation at the first location for at least a partial duration of a single loop of the captured camera image data.
In some embodiments, the visual representation of the first stroke is displayed with a color corresponding to a selected color affordance (e.g.,
In some embodiments, after detecting the first touch input, the device detects a second touch input, at a second location in the drawing area, representing a second stroke, wherein the first touch input and the second touch input are separated by an intervening amount of time; and in response to detecting the second touch input, displays the visual representation, at the second location in the drawing area, of the second stroke.
In some embodiments, the device displays a preview comprising the captured camera image data, the visual representation of the first stroke at the first location, and the visual representation of the second stroke at the second location, wherein the visual representation of the first stroke and the visual representation of the second stroke are displayed in the preview in an order that their corresponding first touch input and second touch input were detected in the drawing area.
In some embodiments, the captured camera image data comprises a video, and displaying the preview comprises looping playback of the visual representation of the first stroke and the visual representation of the second stroke with the intervening amount of time over a looped playback of the video.
In some embodiments, the visual representation of the first stroke comprises an animated graphic that is displayed in accordance with one or more characteristics selected from the group consisting of a characteristic intensity, a characteristic kinematic, and a duration of the corresponding first touch input or second touch input.
In some embodiments, the animated graphic is a beating heart (e.g.,
In some embodiments, the animated graphic is a breaking heart (e.g.,
In some embodiments, the animated graphic is a fireball (e.g.,
In some embodiments, the visual representation of the first stroke comprises a still graphic that is displayed in accordance with an orientation of the corresponding first touch input or the corresponding second touch input.
In some embodiments, the still graphic is a heart (e.g.,
In some embodiments, the still graphic is a kiss (e.g.,
In some embodiments, the still graphic is a tear drop.
In some embodiments, the visual representation of the first stroke includes a first endpoint corresponding to an initiation of the first touch input, a second endpoint corresponding to liftoff termination of the first touch input, and a line corresponding to movement of the first touch input across the touch-sensitive display screen (e.g., display screen 602), wherein display of the visual representation of the line includes displaying characteristic kinematics of the movement of the first touch input from the first endpoint to the second endpoint (e.g.,
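As one illustrative rendering of such a stroke, the sketch below replays a line from the first endpoint to the second endpoint over the recorded touch duration. This reproduces the overall timing of the movement; a keyframe animation over individually timestamped points would be needed to reproduce the full characteristic kinematics.

```swift
import UIKit

/// Illustrative sketch: replay a stroke by animating a shape layer's
/// strokeEnd over the recorded duration of the touch.
func replayStroke(points: [CGPoint], duration: TimeInterval,
                  in view: UIView, color: UIColor) {
    guard points.count > 1 else { return }
    let path = UIBezierPath()
    path.move(to: points[0])
    points.dropFirst().forEach { path.addLine(to: $0) }

    let layer = CAShapeLayer()
    layer.path = path.cgPath
    layer.strokeColor = color.cgColor
    layer.fillColor = nil
    layer.lineWidth = 4
    layer.lineCap = .round
    view.layer.addSublayer(layer)

    // Draw the line from the first endpoint to the second endpoint over the
    // original touch duration, reproducing the movement's overall timing.
    let anim = CABasicAnimation(keyPath: "strokeEnd")
    anim.fromValue = 0
    anim.toValue = 1
    anim.duration = duration
    layer.add(anim, forKey: "draw")
}
```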
In some embodiments, the device displays a send affordance in the drawing area; detects a third touch input corresponding to selection of the send affordance; wherein sending data representing the captured camera image data and the first stroke to the external device (e.g., device 600) occurs in response to detecting the third touch input (e.g.,
In some embodiments, prior to sending data representing the captured camera image data and the first stroke to the external device (e.g., device 600) and in accordance with a determination of a status of the external device (e.g., device 600), the device encodes the captured camera image data with the visual representation of the first stroke, wherein sending data representing the captured camera image data and the first stroke to the external device (e.g., device 600) includes sending the encoded captured camera image data.
In some embodiments, the device displays, on the touch-sensitive display screen (e.g., display screen 602) at the electronic device (e.g., device 600), a text messaging user interface associated with a contact of the external device (e.g., device 600), wherein the text messaging user interface includes a message transcript area associated with the contact; and displays at least a portion of the sent data in the message transcript area (e.g.,
In some embodiments, the device loops playback, in the message transcript area, of the at least a portion of the sent data while the at least a portion of the sent data is a most recent data communication in the message transcript area that includes a visual representation of a touch input; and in response to a determination that the at least a portion of the sent data is no longer the most recent data communication comprising the visual representation of the touch input, ceases looping playback of the at least a portion of the sent data and replaces the looped playback with a still frame based on the at least a portion of the sent data (e.g.,
In some embodiments, in response to a determination that the at least a portion of the sent data has been provided in the message transcript area for a predetermined period of time, the device removes display of the at least a portion of the sent data from the message transcript area (e.g.,
Note that details of the processes described above with respect to method 1000 (e.g.,
Turning now to
As shown in
The processing unit 1406 is configured to: enable display of, on the touch-sensitive display unit, a drawing area, wherein the drawing area includes a digital viewfinder that presents camera image data received from the camera unit (e.g., camera unit 1404). The processing unit 1406 is further configured to: while displaying the drawing area, detect a first touch input, at a first location in the drawing area, representing a first stroke. The processing unit 1406 is further configured to: in response to detecting the first touch input, enable display of a visual representation, at the first location in the drawing area, of the first stroke. The processing unit 1406 is further configured to: while displaying the drawing area, detect a user request to capture the camera image data presented in the digital viewfinder. The processing unit 1406 is further configured to: in response to detecting the user request, capture the camera image data presented in the digital viewfinder. The processing unit 1406 is further configured to: send data representing the captured camera image data and the first stroke to an external device, wherein the sent data indicates a portion of the captured camera image data that corresponds to the first location of the first stroke.
In some embodiments, the processing unit 1406 is further configured to: associate the captured camera image data with the first location of the first stroke.
In some embodiments, the processing unit 1406 is further configured to: associate the first stroke with a tracked point in the digital viewfinder that corresponds to the first location of the first stroke.
In some embodiments, the processing unit 1406 is further configured to: while capturing the camera image data: in accordance with a determination that the tracked point associated with the first stroke is displayed in the digital viewfinder, enable display of the visual representation of the first stroke in the drawing area at the tracked point; and in accordance with a determination that the tracked point associated with the first stroke is not displayed in the digital viewfinder, cease to enable display of the visual representation of the first stroke.
In some embodiments, the processing unit 1406 is further configured to: while capturing the camera image data: associate the first stroke with a compass point indicative of a first viewing direction of the digital viewfinder, wherein the compass point is based on a compass direction detected at the electronic device; in accordance with a determination that the first viewing direction is displayed in the digital viewfinder, enable display of the visual representation of the first stroke in the drawing area at a position corresponding to the compass point; and in accordance with a determination that the first viewing direction is not displayed in the digital viewfinder, cease to enable display of the visual representation of the first stroke in the drawing area.
In some embodiments, the first stroke includes characteristic kinematics based on the detected first touch input, and the visual representation of the first stroke includes the characteristic kinematics.
In some embodiments, the first stroke includes a characteristic intensity based on the detected first touch input, and the visual representation of the first stroke is based on the characteristic intensity.
In some embodiments, the first touch input is detected prior to capturing the camera image data presented in the digital viewfinder, wherein the processing unit 1406 is further configured to: maintain display of the visual representation of the first stroke at the first location in the drawing area while capturing the camera image data presented in the digital viewfinder.
In some embodiments, the first touch input is detected prior to capturing the camera image data presented in the digital viewfinder, wherein the processing unit 1406 is further configured to: maintain display of the visual representation of the first stroke at the first location in the drawing area for a partial duration of time while capturing the camera image data presented in the digital viewfinder.
In some embodiments, the first touch input is detected while capturing the camera image data presented in the digital viewfinder, wherein capturing the camera image data comprises recording a video of the camera image data presented in the digital viewfinder, and wherein the processing unit 1406 is further configured to: enable display of the visual representation of the first stroke at the first location in the drawing area for at least a period of time while capturing the camera image data presented in the digital viewfinder.
In some embodiments, the first touch input is detected while displaying the captured camera image data in the drawing area, wherein the displayed captured camera image data includes at least one of a still image captured by the camera and a playback of a video recorded by the camera.
In some embodiments, the playback of the video is a looped playback of the video.
In some embodiments, the processing unit 1406 is further configured to: enable display of, on the touch-sensitive display unit, a playback of the visual representation of the first stroke at the first location in the drawing area.
In some embodiments, the playback of the visual representation is a looped playback of the visual representation.
In some embodiments, the processing unit 1406 is further configured to: cease capturing the camera image data; and after ceasing capturing the camera image data, replace the displayed digital viewfinder in the drawing area with display of a preview based on overlaying the playback of the visual representation on the displayed captured camera image data.
In some embodiments, displaying the preview includes maintaining display of the visual representation at the first location for a duration of a single loop of the captured camera image data.
In some embodiments, displaying the preview includes displaying the visual representation at the first location for at least a partial duration of a single loop of the captured camera image data.
In some embodiments, the visual representation of the first stroke is displayed with a color corresponding to a selected color affordance.
In some embodiments, the processing unit 1406 is further configured to: after detecting the first touch input, detect a second touch input, at a second location in the drawing area, representing a second stroke, wherein the first touch input and the second touch input are separated by an intervening amount of time; and in response to detecting the second touch input, enable display of the visual representation, at the second location in the drawing area, of the second stroke.
In some embodiments, the processing unit 1406 is further configured to: enable display of a preview comprising the captured camera image data, the visual representation of the first stroke at the first location, and the visual representation of the second stroke at the second location, wherein the visual representation of the first stroke and the visual representation of the second stroke are displayed in the preview in an order that their corresponding first touch input and second touch input were detected in the drawing area.
In some embodiments, the captured camera image data comprises a video, further wherein displaying the preview comprises looping playback of the visual representation of the first stroke and the visual representation of the second stroke with the intervening amount of time over a looped playback of the video.
In some embodiments, the visual representation of the first stroke comprises an animated graphic that is displayed in accordance with one or more characteristics selected from the group consisting of a characteristic intensity, a characteristic kinematic, and a duration of the corresponding first touch input or second touch input.
In some embodiments, the animated graphic is a beating heart.
In some embodiments, the animated graphic is a breaking heart.
In some embodiments, the animated graphic is a fireball.
In some embodiments, the visual representation of the first stroke comprises a still graphic that is displayed in accordance with an orientation of the corresponding first touch input or the corresponding second touch input.
In some embodiments, the still graphic is a heart.
In some embodiments, the still graphic is a kiss.
In some embodiments, the still graphic is a tear drop.
In some embodiments, the visual representation of the first stroke includes a first endpoint corresponding to an initiation of the first touch input, a second endpoint corresponding to liftoff termination of the first touch input, and a line corresponding to movement of the first touch input across the touch-sensitive display unit, wherein display of the visual representation of the line includes displaying characteristic kinematics of the movement of the first touch input from the first endpoint to the second endpoint.
In some embodiments, the processing unit 1406 is further configured to: enable display of a send affordance in the drawing area; detect a third touch input corresponding to selection of the send affordance; wherein sending data representing the captured camera image data and the first stroke to the external device occurs in response to detecting the third touch input.
In some embodiments, the processing unit 1406 is further configured to: prior to sending data representing the captured camera image data and the first stroke to the external device and in accordance with a determination of a status of the external device, encode the captured camera image data with the visual representation of the first stroke; wherein sending data representing the captured camera image data and the first stroke to the external device includes sending the encoded captured camera image data.
In some embodiments, the processing unit 1406 is further configured to: enable display of, on the touch-sensitive display unit at the electronic device, a text messaging user interface associated with a contact of the external device, wherein the text messaging user interface includes a message transcript area associated with the contact; and enable display of at least a portion of the sent data in the message transcript area.
In some embodiments, the processing unit 1406 is further configured to: loop playback, in the message transcript area, of the at least a portion of the sent data while the at least a portion of the sent data is a most recent data communication in the message transcript area that includes a visual representation of a touch input; and in response to a determination that the at least a portion of the sent data is no longer the most recent data communication comprising the visual representation of the touch input, cease looping playback of the at least a portion of the sent data and replace the looped playback with a still frame based on the at least a portion of the sent data.
In some embodiments, the processing unit 1406 is further configured to: in response to a determination that the at least a portion of the sent data has been provided in the message transcript area for a predetermined period of time, remove display of the at least a portion of the sent data from the message transcript area.
The operations described above with reference to
Turning now to
For example, as shown in
For example, in response to detecting the first touch input 646 and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a recording mode (e.g., recording a video), the device 600 displays, in the digital viewfinder 610, a visual representation of an animated/still graphic or a line. The visual representation communicates at least some information regarding the touch input 646; it is not simply a generic image that is displayed in response to detecting the touch input 646 without conveying anything more about that input. The visual representation corresponding to the first touch input 646 is displayed at the first location by overlaying the visual representation on the camera image data shown in the digital viewfinder 610.
In some examples, the visual representation is displayed in the digital viewfinder 610 for a duration of the first touch input 646 and fades upon detection of lift-off of the first touch input 646. In other cases, the visual representation is displayed in the digital viewfinder 610 for a predetermined period of time before fading. For example, the visual representation is displayed for a predetermined period of time after detection of lift-off of the first touch input 646 and then fades. In a different example, the visual representation is maintained in the digital viewfinder 610 while the digital viewfinder is displayed. The visual representation can optionally include an animation based on characteristic kinematics of the first touch input 646, and/or an animation based on characteristic intensity of the first touch input 646. Further, display of the visual representation can optionally include looping playback of the visual representation. The visual representation can optionally include or otherwise correspond to certain sound or tactile sensations output by, for example, tactile generator 167 of
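One possible fade behavior is sketched below; the hold delay and fade duration are illustrative assumptions, not values from the present disclosure.

```swift
import UIKit

/// Illustrative sketch: keep the stroke visible for the duration of the
/// touch, then fade it out a fixed delay after lift-off.
func handleTouchEnded(strokeView: UIView,
                      holdAfterLiftoff: TimeInterval = 1.0,
                      fadeDuration: TimeInterval = 0.5) {
    UIView.animate(withDuration: fadeDuration,
                   delay: holdAfterLiftoff,
                   options: [.curveEaseOut],
                   animations: { strokeView.alpha = 0 },
                   completion: { _ in strokeView.removeFromSuperview() })
}
```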
As demonstrated in
As shown in
Turning back to
Turning back to
Turning now to
Turning back to
Turning now to
As described below, method 1100 provides an intuitive way for electronic communications with video and/or still image. The method reduces the cognitive burden on a user for electronic communications, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to communicate electronically in a faster and more efficient manner conserves power and increases the time between battery charges.
As shown in the method 1100, the device (1102) can optionally display, on the touch-sensitive display screen (e.g., display screen 602), an image in a digital viewfinder (e.g., digital viewfinder 610), wherein the image is based on camera image data received from the camera (e.g., camera 604). The device (1104) can optionally detect a first touch input at a first location in the digital viewfinder (e.g., digital viewfinder 610). The device (1106) can optionally, in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera (e.g., camera 604) is a recording mode, display, in the digital viewfinder (e.g., digital viewfinder 610), a visual representation corresponding to the first touch input at the first location. The device (1108) can optionally, in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera (e.g., camera 604) is a non-recording mode, alter the image displayed in the digital viewfinder (e.g., digital viewfinder 610) by adjusting a characteristic of the camera image data.
In some embodiments, the device determines whether the first touch input is detected while the operational mode of the camera (e.g., camera 604) is the recording mode or the non-recording mode (e.g.,
In some embodiments, the first touch input is a single-finger input and the characteristic is a focus of the camera image data (e.g.,
In some embodiments, the first touch input is a multiple-finger input and the characteristic is an optical magnification of the camera image data.
In some embodiments, the visual representation is displayed in the digital viewfinder (e.g., digital viewfinder 610) for a duration of the first touch input and fades upon detection of lift-off of the first touch input.
In some embodiments, the visual representation is displayed in the digital viewfinder (e.g., digital viewfinder 610) for a predetermined period of time before fading.
In some embodiments, the visual representation is maintained in the digital viewfinder (e.g., digital viewfinder 610) while the digital viewfinder (e.g., digital viewfinder 610) is displayed.
In some embodiments, the visual representation includes an animation based on characteristic kinematics of the first touch input.
In some embodiments, the visual representation includes an animation based on characteristic intensity of the first touch input (e.g.,
In some embodiments, displaying the visual representation includes looping playback of the visual representation.
In some embodiments, displaying the visual representation includes outputting at least one of an audio output and a haptic output associated with the visual representation.
In some embodiments, the first touch input is a single-finger tap at the first location in the digital viewfinder (e.g., digital viewfinder 610) and the visual representation is at least one of a circle, an ellipse, and an oval at the first location (e.g.,
In some embodiments, the first touch input is a single-finger hold that exceeds a predetermined duration and the visual representation is a teardrop at the first location of the first touch input.
In some embodiments, the first touch input is a single-finger contact having characteristic kinematics describing a movement of the single-finger contact beginning at the first location in the digital viewfinder (e.g., digital viewfinder 610) and the visual representation is a line beginning at the first location with the characteristic kinematics.
In some embodiments, the first touch input is a multiple-finger contact in the digital viewfinder (e.g., digital viewfinder 610) and the visual representation is centered at the first location between touch contacts of the multiple-finger contact (e.g.,
In some embodiments, the visual representation is oriented along an angle defined by the touch contacts on the touch-sensitive display screen (e.g., display screen 602) (e.g.,
In some embodiments, the multiple-finger contact is a two-finger contact on the touch-sensitive display screen (e.g., display screen 602) and the visual representation is a kiss that is displayed at the first location for a duration of the two-finger contact and fades upon lift-off of the two-finger contact (e.g.,
In some embodiments, the multiple-finger contact is a two-finger double-tap contact on the touch-sensitive display screen (e.g., display screen 602) and the visual representation is a stamped image at the first location that is angled according to the angle defined by the two-finger contact, further wherein the stamped image does not fade from display of the image in the digital viewfinder (e.g., digital viewfinder 610) (e.g.,
In some embodiments, the stamped image is a stamped kiss. (e.g.,
In some embodiments, wherein the operational mode is the recording mode, further wherein the first touch input includes a varying characteristic intensity that fluctuates based on a varying intensity of the first touch input on the touch-sensitive display screen (e.g., display screen 602), the device displays the visual representation at the first location, wherein the visual representation is the animated graphic that is rendered according to the varying characteristic intensity of the first touch input at the first location (e.g.,
In some embodiments, the first touch input corresponds to a press-and-hold input with the varying characteristic intensity at the first location and the visual representation is an animated fireball having a variable color scheme and size that are scaled in accordance with the varying characteristic intensity of the press-and-hold input (e.g.,
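A minimal sketch of such intensity-driven rendering follows, assuming a platform that reports per-touch force; the scaling factors and hue mapping are illustrative assumptions.

```swift
import UIKit

/// Illustrative sketch: scale an animated graphic with the touch's varying
/// characteristic intensity, as in the fireball example.
func updateFireball(layer: CALayer, for touch: UITouch) {
    guard touch.maximumPossibleForce > 0 else { return }
    let normalized = touch.force / touch.maximumPossibleForce   // 0...1
    // Size and color warmth both track the current intensity.
    let scale = 0.5 + 1.5 * normalized
    layer.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
    layer.backgroundColor = UIColor(hue: 0.08 * (1 - normalized),  // orange toward red
                                    saturation: 1, brightness: 1, alpha: 1).cgColor
}
```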
In some embodiments, the visual representation is a beating heart that loops for a duration of the first touch input (e.g.,
In some embodiments, the visual representation is a multiple-part animation having at least a first part and a second part, wherein the first part is based on a first detected aspect of the first touch input and the second part is distinct from the first part and is based on a subsequently detected aspect of the first touch input (e.g.,
In some embodiments, the multiple-part animation is a breaking heart animation, wherein: displaying the first part includes looping a beating heart animation at the first location for a duration of time corresponding to the first touch input on the touch-sensitive display screen (e.g., display screen 602) at the first location, and displaying the second part includes ceasing looping of the beating heart animation and replacing the beating heart animation with display of a breaking heart animation based on the subsequently detected aspect, wherein the subsequently detected aspect is a movement of the first touch input that meets a predefined distance threshold (e.g.,
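The two-part gating described above might be tracked as sketched below; the 60-point distance threshold is an assumption for illustration.

```swift
import UIKit

/// Illustrative sketch: loop a beating heart while the finger holds, then
/// switch to a breaking heart once the touch moves past a distance threshold.
final class HeartGestureTracker {
    private var startPoint: CGPoint?
    private let breakDistance: CGFloat = 60   // assumed threshold

    func touchBegan(at point: CGPoint, showBeatingHeart: () -> Void) {
        startPoint = point
        showBeatingHeart()                     // part one: looping beat animation
    }

    func touchMoved(to point: CGPoint, showBreakingHeart: () -> Void) {
        guard let start = startPoint else { return }
        if hypot(point.x - start.x, point.y - start.y) >= breakDistance {
            showBreakingHeart()                // part two: replace with break animation
            startPoint = nil
        }
    }
}
```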
Note that details of the processes described above with respect to method 1100 (e.g.,
Turning now to
As shown in
The processing unit 1506 is configured to: enable display (e.g., with display enabling unit 1508) of, on the touch-sensitive display unit, an image in a digital viewfinder, wherein the image is based on camera image data received from the camera. The processing unit 1506 is further configured to: detect (e.g., with detecting unit 1510) a first touch input at a first location in the digital viewfinder. The processing unit 1506 is further configured to: in response to detecting the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a recording mode, enable display (e.g., with display enabling unit 1508) of, in the digital viewfinder, a visual representation corresponding to the first touch input at the first location. The processing unit 1506 is further configured to: in response to detecting (e.g., with detecting unit 1510) the first touch input and in accordance with a determination that the first touch input is detected while an operational mode of the camera is a non-recording mode, alter (e.g., with image adjusting unit 1512) the image displayed in the digital viewfinder by adjusting a characteristic of the camera image data.
In some embodiments, the processing unit 1506 is further configured to: determine (e.g., with determining unit 1514) whether the first touch input is detected while the operational mode of the camera is the recording mode or the non-recording mode.
In some embodiments, the first touch input is a single-finger input and the characteristic is a focus of the camera image data.
In some embodiments, the first touch input is a multiple-finger input and the characteristic is an optical magnification of the camera image data.
In some embodiments, the visual representation is displayed (e.g., with display enabling unit 1508) in the digital viewfinder for a duration of the first touch input and fades upon detection of lift-off of the first touch input.
In some embodiments, the visual representation is displayed (e.g., with display enabling unit 1508) in the digital viewfinder for a predetermined period of time before fading.
In some embodiments, the visual representation is maintained in the digital viewfinder while the digital viewfinder is displayed (e.g., with display enabling unit 1508).
In some embodiments, the visual representation includes an animation based on characteristic kinematics of the first touch input.
In some embodiments, the visual representation includes an animation based on characteristic intensity of the first touch input.
In some embodiments, displaying the visual representation includes looping playback of the visual representation.
In some embodiments, displaying the visual representation includes outputting (e.g., with outputting unit 1516) at least one of an audio output and a haptic output associated with the visual representation.
In some embodiments, the first touch input is a single-finger tap at the first location in the digital viewfinder and the visual representation is at least one of a circle, an ellipse, and an oval at the first location.
In some embodiments, the first touch input is a single-finger hold that exceeds a predetermined duration and the visual representation is a teardrop at the first location of the first touch input.
In some embodiments, the first touch input is a single-finger contact having characteristic kinematics describing a movement of the single-finger contact beginning at the first location in the digital viewfinder and the visual representation is a line beginning at the first location with the characteristic kinematics.
In some embodiments, the first touch input is a multiple-finger contact in the digital viewfinder and the visual representation is centered at the first location between touch contacts of the multiple-finger contact.
In some embodiments, the visual representation is oriented along an angle defined by the touch contacts on the touch-sensitive display unit.
In some embodiments, the multiple-finger contact is a two-finger contact on the touch-sensitive display unit and the visual representation is a kiss that is displayed at the first location for a duration of the two-finger contact and fades upon lift-off of the two-finger contact.
In some embodiments, the multiple-finger contact is a two-finger double-tap contact on the touch-sensitive display unit and the visual representation is a stamped image at the first location that is angled according to the angle defined by the two-finger contact, further wherein the stamped image does not fade from display of the image in the digital viewfinder.
In some embodiments, the stamped image is a stamped kiss.
In some embodiments, the operational mode is the recording mode, further wherein the first touch input includes a varying characteristic intensity that fluctuates based on a varying intensity of the first touch input on the touch-sensitive display unit, wherein the processing unit 1506 is further configured to: enable display of the visual representation at the first location, wherein the visual representation is the animated graphic that is rendered according to the varying characteristic intensity of the first touch input at the first location.
In some embodiments, the first touch input corresponds to a press-and-hold input with the varying characteristic intensity at the first location and the visual representation is an animated fireball having a variable color scheme and size that are scaled in accordance with the varying characteristic intensity of the press-and-hold input.
In some embodiments, the visual representation is a beating heart that loops for a duration of the first touch input.
In some embodiments, the visual representation is a multiple-part animation having at least a first part and a second part, wherein the first part is based on a first detected aspect of the first touch input and the second part is distinct from the first part and is based on a subsequently detected aspect of the first touch input.
In some embodiments, the multiple-part animation is a breaking heart animation, wherein: displaying the first part includes looping a beating heart animation at the first location for a duration of time corresponding to the first touch input on the touch-sensitive display unit at the first location, and displaying the second part includes ceasing looping of the beating heart animation and replacing the beating heart animation with display of a breaking heart animation based on the subsequently detected aspect, wherein the subsequently detected aspect is a movement of the first touch input that meets a predefined distance threshold.
The operations described above with reference to
Turning now to
As shown at
Further, turning now to
As further shown in
As further shown in both
In another aspect, the color adjustment interface 680 includes a brightness adjustment affordance 684. For example, the brightness adjustment affordance 684 can optionally include a brightness slider bar that indicates a brightness level of visual representations, such as strokes, in the drawing area. A default brightness level can optionally be set at 50 percent brightness. Dragging the slider bar rightward dims the color of the visual representations toward no brightness, or black, while dragging leftward brightens the color of the visual representations toward full brightness, or white. In another aspect, the brightness level is adjusted while the saturation level is fixed. In a further aspect, it is contemplated that a background color of the enlarged drawing canvas and/or compact drawing canvas is black. In another embodiment, the background color can optionally be user-selected.
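For illustration, applying such a slider value while holding saturation fixed might look like the following sketch; the hue-saturation-brightness decomposition is an implementation assumption.

```swift
import UIKit

/// Illustrative sketch: apply a brightness slider value (0...1, default 0.5)
/// to a stroke color while holding hue and saturation fixed.
func applyBrightness(_ brightness: CGFloat, to color: UIColor) -> UIColor {
    var hue: CGFloat = 0, saturation: CGFloat = 0
    var oldBrightness: CGFloat = 0, alpha: CGFloat = 0
    guard color.getHue(&hue, saturation: &saturation,
                       brightness: &oldBrightness, alpha: &alpha) else {
        return color   // color not expressible in HSB; leave unchanged
    }
    return UIColor(hue: hue, saturation: saturation,
                   brightness: brightness, alpha: alpha)
}
```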
Referring back to
Turning now to
Turning now to
In a further example, as shown in
Turning now to
Turning now to
As described below, method 1200 provides an intuitive way for electronic communications with video and/or still image. The method reduces the cognitive burden on a user for electronic communications, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to communicate electronically in a faster and more efficient manner conserves power and increases the time between battery charges.
As shown in the method 1200, the device (1202) displays, on the touch-sensitive display screen (e.g., display screen 602), a text messaging user interface associated with a contact, wherein the text messaging user interface includes a message transcript area, and a compact drawing area, wherein the compact drawing area includes an expand affordance corresponding to an enlarged drawing area. The device (1204) detects a first user input corresponding to the expand affordance. The device (1206), in response to detecting the first user input, replaces the displayed text messaging interface with display of the enlarged drawing area, wherein the enlarged drawing area includes a camera affordance. The device (1208) detects a second user input corresponding to the camera affordance. The device (1210), in response to detecting the second user input, displays a digital viewfinder (e.g., digital viewfinder 610), in the enlarged drawing area, that presents camera image data received from the camera (e.g., camera 604).
In some embodiments, the device detects user input on an affordance representing the compact drawing area in the text messaging user interface; and in response to detecting the user input on the affordance, displays the compact drawing area in the text messaging user interface.
In some embodiments, the affordance representing the compact drawing area is displayed in a menu bar of the text messaging user interface, further wherein the menu bar includes a camera affordance corresponding to a camera roll, a text entry field corresponding to a soft keyboard for composing a textual message to the contact, and a record audio affordance that initiates recording of an audio message to the contact.
In some embodiments, the device receives an input corresponding to a request to display the soft keyboard; in response to the user request to display the soft keyboard, ceases to display the compact drawing area; and displays the soft keyboard.
In some embodiments, the text messaging user interface includes at least a first portion and a second portion displayed below the first portion, wherein the message transcript area is displayed in the first portion and the compact drawing area is displayed in the second portion.
In some embodiments, wherein the enlarged drawing area includes a minimize affordance, the device, in response to detecting user input on the minimize affordance, replaces display of the enlarged drawing area with display of the text messaging user interface.
In some embodiments, in response to detecting user input on the minimize affordance, the device displays the compact drawing area.
In some embodiments, the enlarged drawing area includes an enlarged drawing canvas and the compact drawing area includes a compact drawing canvas, wherein the enlarged drawing canvas and the compact drawing canvas have a common aspect ratio.
In some embodiments, the device displays a text entry field in the enlarged drawing area; in response to detection of user input on the text entry field, displays a soft keyboard in the enlarged drawing area; and in response to detecting a set of user inputs on the soft keyboard corresponding to composition of a textual message and a request to send the textual message, composes the textual message and sends the textual message to the contact while maintaining display of the enlarged drawing area.
In some embodiments, the device displays a legend of one or more indicators in the compact drawing area, wherein each indicator represents a type of touch input and a visual representation corresponding to the type of touch input.
In some embodiments, the enlarged drawing area and the compact drawing area include display of a plurality of color affordances and an indicator representing a currently-selected color affordance.
In some embodiments, in response to detecting a user input on any one of the plurality of color affordances, wherein the user input corresponds to changing a color represented by the color affordance, the device displays a color selection interface, wherein the color selection interface includes a plurality of selectable colors; and in response to detecting user input corresponding to selection of a color of the plurality of colors, updates the color affordance with the selected color.
In some embodiments, the color adjustment interface includes a brightness adjustment affordance.
In some embodiments, the device detects a first touch input, at a first location in the compact drawing area, representing a first stroke; in response to detecting the first touch input, displays a visual representation, at the first location in the compact drawing area, of the first stroke; and automatically sends data corresponding to the visual representation of the first stroke to an external device (e.g., device 600) associated with the contact.
In some embodiments, wherein the camera affordance is displayed at a first brightness level, the device, while displaying the enlarged drawing area, in response to detecting a third user input in the enlarged drawing area, dims the camera affordance to a second brightness level less than the first brightness level; and after a predetermined period of time has elapsed since cessation of the third user input, restores the camera affordance to the first brightness level.
In some embodiments, while displaying the digital viewfinder (e.g., digital viewfinder 610) in the enlarged drawing area, the device displays a record video affordance that toggles on and off recording a video based on the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610), a still image capture affordance that takes a picture based on the camera image data presented in the digital viewfinder (e.g., digital viewfinder 610), and a camera flip affordance that toggles activation of a front or back camera (e.g., camera 604).
In some embodiments, in response to user selection of the record video affordance that toggles on recording of the video, the device displays a countdown timer representing a remaining time until recording automatically ceases.
In some embodiments, in response to user selection of the record video affordance that toggles on recording of the video, the device displays an animated progress bar that fills horizontally, indicating the remaining duration until recording automatically ceases.
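One way such a progress bar might be driven is sketched below; the ten-second maximum duration and the bar geometry are assumptions for illustration.

```swift
import UIKit

/// Illustrative sketch: animate a progress bar that fills horizontally over
/// the maximum recording duration, then stop recording when it completes.
func startRecordingProgress(bar: UIView, container: UIView,
                            maxDuration: TimeInterval = 10,
                            stopRecording: @escaping () -> Void) {
    bar.frame = CGRect(x: 0, y: container.bounds.height - 4,
                       width: 0, height: 4)
    container.addSubview(bar)
    UIView.animate(withDuration: maxDuration, delay: 0,
                   options: [.curveLinear], animations: {
        bar.frame.size.width = container.bounds.width
    }, completion: { finished in
        if finished { stopRecording() }   // recording ceases when the bar is full
    })
}
```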
In some embodiments, upon detection of user input on a displayed exit affordance corresponding to the digital viewfinder (e.g., digital viewfinder 610), the device ceases display of the digital viewfinder (e.g., digital viewfinder 610) in the enlarged drawing area and replaces the digital viewfinder (e.g., digital viewfinder 610) with an enlarged drawing canvas.
Note that details of the processes described above with respect to method 1200 (e.g.,
Turning now to
As shown in
The processing unit 1606 is configured to: enable display (e.g., with display enabling unit 1608) of, on the touch-sensitive display unit, a text messaging user interface associated with a contact, wherein the text messaging user interface includes: a message transcript area, and a compact drawing area, wherein the compact drawing area includes an expand affordance corresponding to an enlarged drawing area. The processing unit 1606 is further configured to: detect (e.g., with detecting unit 1610) a first user input corresponding to the expand affordance. The processing unit 1606 is further configured to: in response to detecting the first user input, replace (e.g., with display enabling unit 1608) the displayed text messaging interface with display of the enlarged drawing area, wherein the enlarged drawing area includes a camera affordance. The processing unit 1606 is further configured to: detect (e.g., with detecting unit 1610) a second user input corresponding to the camera affordance. The processing unit 1606 is further configured to: in response to detecting the second user input, enable display (e.g., with display enabling unit 1608) of a digital viewfinder, in the enlarged drawing area, that presents camera image data received from the camera.
In some embodiments, the processing unit 1606 is further configured to: detect (e.g., with detecting unit 1610) user input on an affordance representing the compact drawing area in the text messaging user interface; and in response to detecting the user input on the affordance, enable display (e.g., with display enabling unit 1608) of the compact drawing area in the text messaging user interface.
In some embodiments, the affordance representing the compact drawing area is displayed in a menu bar of the text messaging user interface, further wherein the menu bar includes a camera affordance corresponding to a camera roll, a text entry field corresponding to a soft keyboard for composing a textual message to the contact, and a record audio affordance that initiates recording of an audio message to the contact.
In some embodiments, the processing unit 1606 is further configured to: receive (e.g., with receiving unit 1612) an input corresponding to a request to display the soft keyboard; in response to the user request to display the soft keyboard, cease to display (e.g., with display enabling unit 1608) the compact drawing area; and enable display (e.g., with display enabling unit 1608) of the soft keyboard.
In some embodiments, the text messaging user interface includes at least a first portion and a second portion displayed below the first portion, wherein the message transcript area is displayed in the first portion and the compact drawing area is displayed in the second portion.
In some embodiments, the enlarged drawing area includes a minimize affordance, wherein the processing unit 1606 is further configured to, in response to detecting user input on the minimize affordance, replace display (e.g., with display enabling unit 1608) of the enlarged drawing area with display of the text messaging user interface.
In some embodiments, the processing unit 1606 is further configured to: in response to detecting user input on the minimize affordance, enable display (e.g., with display enabling unit 1608) of the compact drawing area.
In some embodiments, the enlarged drawing area includes an enlarged drawing canvas and the compact drawing area includes a compact drawing canvas, wherein the enlarged drawing canvas and the compact drawing canvas have a common aspect ratio.
In some embodiments, the processing unit 1606 is further configured to: enable display (e.g., with display enabling unit 1608) of a text entry field in the enlarged drawing area; in response to detection (e.g., with detecting unit 1610) of user input on the text entry field, enable display (e.g., with display enabling unit 1608) of a soft keyboard in the enlarged drawing area; and in response to detecting (e.g., with detecting unit 1610) a set of user inputs on the soft keyboard corresponding to composition of a textual message and a request to send the textual message, compose (e.g., with composing unit 1614) the textual message and send (e.g., with sending unit 1616) the textual message to the contact while maintaining display of the enlarged drawing area.
In some embodiments, the processing unit 1606 is further configured to: enable display of a legend of one or more indicators in the compact drawing area, wherein each indicator represents a type of touch input and a visual representation corresponding to the type of touch input.
In some embodiments, the enlarged drawing area and the compact drawing area include display of a plurality of color affordances and an indicator representing a currently-selected color affordance.
In some embodiments, the processing unit 1606 is further configured to: in response to detecting a user input on any one of the plurality of color affordances, wherein the user input corresponds to changing a color represented by the color affordance, enable display (e.g., with display enabling unit 1608) of a color selection interface, wherein the color selection interface includes a plurality of selectable colors; and in response to detecting (e.g., with detecting unit 1610) user input corresponding to selection of a color of the plurality of colors, update (e.g., with updating unit 1618) the color affordance with the selected color.
In some embodiments, the color adjustment interface includes a brightness adjustment affordance.
In some embodiments, the processing unit 1606 is further configured to: detect (e.g., with detecting unit 1610) a first touch input, at a first location in the compact drawing area, representing a first stroke; in response to detecting the first touch input, enable display (e.g., with display enabling unit 1608) of a visual representation, at the first location in the compact drawing area, of the first stroke; and automatically send (e.g., with sending unit 1616) data corresponding to the visual representation of the first stroke to an external device associated with the contact.
In some embodiments, the camera affordance is displayed at a first brightness level, wherein the processing unit 1606 is further configured to: while displaying the enlarged drawing area, in response to detecting (e.g., with detecting unit 1610) a third user input in the enlarged drawing area, dim (e.g., with brightness adjusting unit 1620) the camera affordance to a second brightness level less than the first brightness level; and after a predetermined period of time has elapsed since cessation of the third user input, restore (e.g., with brightness adjusting unit 1620) the camera affordance to the first brightness level.
In some embodiments, the processing unit 1606 is further configured to: while displaying (e.g., with display enabling unit 1608) the digital viewfinder in the enlarged drawing area, enable display (e.g., with display enabling unit 1608) of a record video affordance that toggles on and off recording a video based on the camera image data presented in the digital viewfinder, a still image capture affordance that takes a picture based on the camera image data presented in the digital viewfinder, and a camera flip affordance that toggles activation of a front or back camera.
In some embodiments, the processing unit 1606 is further configured to: in response to user selection of the record video affordance that toggles on recording of the video, enable display (e.g., with display enabling unit 1608) of a countdown timer representing a remaining time until recording automatically ceases.
In some embodiments, the processing unit 1606 is further configured to: in response to user selection of the record video affordance that toggles on recording of the video, enable display (e.g., with display enabling unit 1608) of an animated progress bar that fills horizontally, indicating the remaining duration until recording automatically ceases.
In some embodiments, the processing unit 1606 is further configured to: upon detection of user input on a displayed exit affordance corresponding to the digital viewfinder, cease display (e.g., with display enabling unit 1608) of the digital viewfinder in the enlarged drawing area and replace the digital viewfinder with an enlarged drawing canvas.
The operations described above with reference to
Turning now to
Turning now to
In some embodiments, the visual information includes a still image and a visual representation corresponding to a touch input received at an external device associated with the contact 632. In that case, displaying the looped playback of the visual information includes overlaying the still image with a looped playback of the visual representation. In other examples, the visual information includes an encoded video, wherein the encoded video includes a visual representation of a touch input detected at an external device associated with the contact and at least one of a still image and a recorded video captured at the external device. For example, the visual representation is flattened on the still image and/or the recorded video during an encoding process.
In another example, the message data 704 includes audio information. In that case, the device 600 displays a sound affordance 708 in the enlarged drawing area or overlaid on the visual information displayed in the text message transcript. The device 600 can optionally detect a user input on the sound affordance 708 and, in response to detecting the user input, cause output of the audio information through a speaker. For example, the audio information can optionally be output through a speaker at the electronic device 600 or through a speaker in communication with the electronic device 600. If the message data is looped, output of the audio information can optionally begin at a portion of the audio information that corresponds to a currently displayed frame. In another example, audio information can optionally be output in response to detecting a user gesture (e.g., press-and-hold) on the displayed visual information in the message transcript. In another example, the audio information can optionally be played back automatically during playback of the visual information 706.
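A minimal sketch of starting audio at the currently displayed frame of a looped playback follows; the use of separate video and audio players is an assumption for illustration.

```swift
import AVFoundation

/// Illustrative sketch: output the audio starting at the portion that matches
/// the currently displayed frame of a looping, muted visual playback.
/// `videoPlayer` loops the visual information; `audioPlayer` holds the audio.
func playAudioInSync(videoPlayer: AVPlayer, audioPlayer: AVAudioPlayer) {
    let frameTime = videoPlayer.currentTime().seconds   // seconds into the loop
    let loopLength = audioPlayer.duration
    guard loopLength > 0 else { return }
    // Begin audio at the same offset within the loop as the visible frame.
    audioPlayer.currentTime = frameTime.truncatingRemainder(dividingBy: loopLength)
    audioPlayer.play()
}
```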
As shown at
As demonstrated in
In some embodiments, in accordance with a determination that a status of the message data 704 including the visual information 706 meets display criteria, the device 600 maintains the looped playback of the visual information in the text message transcript. For example, such criteria can optionally include that the message data is a most recent sent or received visual information communication with the contact 632 and/or that the message data has not yet expired. In some examples, the device 600 determines whether the status of the message data 704 meets the display criteria. The display criteria include a criterion that is met when the message data 704 is a most-recently-communicated message data in the text message transcript 638 with the contact 632. For example, the message data was the latest that was sent or received. For example, while looping playback of the visual information, if subsequent visual information is received at or sent by the electronic device, then the device 600 can optionally cease looped playback of the visual information 706 and display the received subsequent data by looping playback of the subsequent visual information.
In another embodiment, the display criteria include a criterion that is met when the message data has not yet expired. For example, the device 600 determines whether an expiration period has elapsed, where the expiration period begins when the user first views the message data 704 in the text message transcript 638. In some examples, the expiration period is two minutes.
In a further example, in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, the device 600 ceases displaying the looped playback of the visual information in the text message transcript 638. For example, the message data 704 may no longer be the most recently communicated data and/or the expiration period may have elapsed. In that case, looping is ceased in order to conserve power and/or memory at the device 600, and the device 600 can optionally remove the message data 704 from the text message transcript 638 and/or replace the looped playback with a still image representing the message data. For example, in accordance with the determination that the status of the message data including the visual information does not meet the display criteria, the device 600 can optionally replace the looped playback of the visual information 706 with a still image of the visual information 706. For example, a still frame of the looped playback is displayed when the status no longer meets the criterion for most-recently-communicated message data but still meets the criterion for not-yet-expired. In another example, upon detecting user selection of the still image of the visual information in the text message transcript, the device 600 can optionally replace display of the text messaging user interface 640 with display of the enlarged drawing area, where looped playback of the visual representation is displayed in the enlarged drawing area. That is, in response to user selection of the still frame, the device 600 can optionally resume playback or looped playback in the enlarged drawing area. Playback of the visual information 706 can optionally be paused while in the enlarged drawing area. Further, an exit affordance 702 on the visual information can optionally allow the user to return to a blank canvas to respond to the contact 632. Other examples can optionally be contemplated.
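The two criteria and the resulting presentation states described above can be summarized as a small state computation. The following Swift sketch is one illustrative reading of that behavior, with the two-minute expiration period hard-coded as an assumption:

```swift
import Foundation

enum MessagePresentation {
    case loopedPlayback   // both criteria met
    case stillFrame       // no longer most recent, but not yet expired
    case removed          // expiration period has elapsed
}

struct MessageStatus {
    var isMostRecentlyCommunicated: Bool
    var firstViewedAt: Date?              // set when the user first views the message
    let expirationPeriod: TimeInterval = 120  // assumed two-minute period

    func isExpired(now: Date = Date()) -> Bool {
        // The expiration clock starts at first view, per the description above.
        guard let viewed = firstViewedAt else { return false }
        return now.timeIntervalSince(viewed) >= expirationPeriod
    }

    func presentation(now: Date = Date()) -> MessagePresentation {
        if isExpired(now: now) { return .removed }
        return isMostRecentlyCommunicated ? .loopedPlayback : .stillFrame
    }
}
```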
In some examples, the message data 704 including the visual information 706 is received by the device 600 and presented in the text message transcript as a static image or as a selectable graphical affordance. The static image can optionally include a single frame that is based on the visual information, where the single frame can optionally include the first touch input and/or a video or image based on camera image data. Upon user selection of the static image (e.g., a tap on the still image, or a press-and-hold gesture on the still image), the device can initiate playback of the visual information, with or without audio information being output. In some examples, a looped playback is initiated, where the visual information loops in the text message transcript. In some examples, a single playback is initiated in the text message transcript, and subsequent playbacks require subsequent user inputs (e.g., subsequent user taps) on the static image. In some examples, the looped playback is maintained as long as a subsequent user input on the looped playback is not detected. For example, the subsequent user input can optionally include a subsequent tap on the looped playback that stops the playback and replaces it with a still image, which can correspond to the frame at which playback was stopped. In some examples, a tap on the still image initiates playback of the visual information in the text message interface, while a distinct user input (e.g., a tap-and-hold gesture) opens playback of the visual information in the full-screen drawing area view. In some cases, the still image fades or disappears from the text message transcript after the expiration period elapses (e.g., after the message data expires). In practice, providing the static, still image representative of playback of the visual information can conserve power at the mobile device.
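The tap-to-play, tap-to-stop, and tap-and-hold behaviors described here amount to a simple toggle that remembers the frame at which playback stopped. A minimal Swift sketch, with the gesture distinction reduced to an enum for illustration:

```swift
import Foundation

enum TranscriptGesture { case tap, tapAndHold }

enum InlineState {
    case still(frameIndex: Int)   // static representative frame
    case looping(frameIndex: Int) // looped playback in the transcript
}

// Sketch: a tap toggles inline playback; a tap-and-hold opens the
// full-screen (enlarged drawing area) view instead.
func handle(_ gesture: TranscriptGesture,
            state: InlineState) -> (state: InlineState, openFullScreen: Bool) {
    switch (gesture, state) {
    case (.tap, .still(let frame)):
        return (.looping(frameIndex: frame), false)
    case (.tap, .looping(let frame)):
        // Stopping replaces playback with the frame where it stopped.
        return (.still(frameIndex: frame), false)
    case (.tapAndHold, let s):
        return (s, true)  // open playback in the enlarged drawing area
    }
}
```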
Turning now to
As described below, method 1300 provides an intuitive way for electronic communications with video and/or still images. The method reduces the cognitive burden on a user for electronic communications, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to communicate electronically in a faster and more efficient manner conserves power and increases the time between battery charges.
As shown in the method 1300, the device (1302) receives, at the electronic device (e.g., device 600), message data from a contact, the message data including visual information capable of playback over time. The device (1304) displays, on the touch-sensitive display screen (e.g., display screen 602), the message data including the visual information in a text messaging user interface of a messaging application, wherein the text messaging user interface includes a text message transcript associated with the contact, further wherein displaying the message data including the visual information comprises displaying a looped playback of the visual information in the text message transcript. The device (1306), in accordance with a determination that a status of the message data including the visual information meets display criteria, maintains the looped playback of the visual information in the text message transcript. The device (1308), in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, ceases displaying the looped playback of the visual information in the text message transcript.
In some embodiments, the display criteria include a criterion that is met when the message data is a most-recently-communicated message data in the text message transcript with the contact.
In some embodiments, the display criteria include a criterion that is met when the message data has not yet expired.
In some embodiments, the device determines whether the status of the message data meets the display criteria.
In some embodiments, the device, in accordance with the determination that the status of the message data including the visual information does not meet the display criteria, replaces the looped playback of the visual information with a still image of the visual information.
In some embodiments, the device detects user selection of the still image of the visual information in the text message transcript; and in response to detection of the user selection, replaces display of the text messaging user interface with display of an enlarged drawing area, wherein looped playback of the visual representation is displayed in the enlarged drawing area.
In some embodiments, the device, in response to detecting a user request to retain display of the visual information of the message data in the text message transcript, maintains display of the visual information in the text message transcript.
In some embodiments, the device, in accordance with the determination that the status of the message data including the visual information does not meet the display criteria, removes the visual information from the text message transcript.
In some embodiments, the device detects user selection of the looped playback of the visual information in the text message transcript; and in response to detection of the user selection, replaces display of the text messaging user interface with display of an enlarged drawing area, wherein looped playback of the visual representation is displayed in the enlarged drawing area.
In some embodiments, the device, while displaying the enlarged drawing area, detects a user request to reply to the contact; and in response to detecting the user request, replaces display of the visual information in the enlarged drawing area with a blank drawing canvas in the enlarged drawing area.
In some embodiments, the message data further comprises audio information, and the device: displays a sound affordance; detects a user input on the sound affordance; and in response to detecting the user input, causes output of the audio information through a speaker.
In some embodiments, the visual information comprises a recorded video and a visual representation corresponding to a touch input received at an external device (e.g., device 600) associated with the contact, further wherein displaying the looped playback of the visual information includes overlaying a looped playback of the recorded video with a looped playback of the visual representation.
In some embodiments, the visual information includes a still image and a visual representation corresponding to a touch input received at an external device (e.g., device 600) associated with the contact, further wherein displaying the looped playback of the visual information includes overlaying the still image with a looped playback of the visual representation.
In some embodiments, the visual information includes an encoded video, wherein the encoded video includes a visual representation of a touch input detected at an external device (e.g., device 600) associated with the contact and at least one of a still image and a recorded video captured at the external device (e.g., device 600).
Note that details of the processes described above with respect to method 1300 (e.g.,
Turning now to
As shown in
The processing unit 1706 is configured to: receive (e.g., with receiving unit 1708), at the electronic device, message data including visual information capable of playback over time from a contact. The processing unit 1706 is further configured to: enable display (e.g., with display enabling unit 1710) of, on the touch-sensitive display unit, the message data including the visual information in a text messaging user interface of a messaging application, wherein the text messaging user interface includes a text message transcript associated with the contact, further wherein displaying the message data including the visual information comprises displaying a looped playback of the visual information in the text message transcript. The processing unit 1706 is further configured to: in accordance with a determination that a status of the message data including the visual information meets display criteria, maintain (e.g., with display enabling unit 1710) the looped playback of the visual information in the text message transcript. The processing unit 1706 is further configured to: in accordance with a determination that the status of the message data including the visual information does not meet the display criteria, cease displaying (e.g., with display enabling unit 1710) the looped playback of the visual information in the text message transcript.
In some embodiments, the display criteria include a criterion that is met when the message data is a most-recently-communicated message data in the text message transcript with the contact.
In some embodiments, the display criteria include a criterion that is met when the message data has not yet expired.
In some embodiments, the processing unit 1706 is further configured to: determine (e.g., with determining unit 1712) whether the status of the message data meets the display criteria.
In some embodiments, the processing unit 1706 is further configured to: in accordance with the determination that the status of the message data including the visual information does not meet the display criteria, replace (e.g., with display enabling unit 1710) the looped playback of the visual information with a still image of the visual information.
In some embodiments, the processing unit 1706 is further configured to: detect (e.g., with detecting unit 1714) user selection of the still image of the visual information in the text message transcript; and in response to detection of the user selection, replace display (e.g., with display enabling unit 1710) of the text messaging user interface with display of an enlarged drawing area, wherein looped playback of the visual representation is displayed in the enlarged drawing area.
In some embodiments, the processing unit 1706 is further configured to: in response to detecting a user request to retain display of the visual information of the message data in the text message transcript, maintain display (e.g., with display enabling unit 1710) of the visual information in the text message transcript.
In some embodiments, the processing unit 1706 is further configured to: in accordance with the determination that the status of the message data including the visual information does not meet the display criteria, remove (e.g., with transcript editing unit 1716) the visual information from the text message transcript.
In some embodiments, the processing unit 1706 is further configured to: detect user selection of the looped playback of the visual information in the text message transcript; and in response to detection of the user selection, replace display (e.g., with display enabling unit 1710) of the text messaging user interface with display of an enlarged drawing area, wherein looped playback of the visual representation is displayed in the enlarged drawing area.
In some embodiments, the processing unit 1706 is further configured to: while displaying the enlarged drawing area, detect a user request to reply to the contact; and in response to detecting the user request, replace display (e.g., with display enabling unit 1710) of the visual information in the enlarged drawing area with a blank drawing canvas in the enlarged drawing area.
In some embodiments, the message data further comprises audio information, and the processing unit 1706 is further configured to: enable display (e.g., with display enabling unit 1710) of a sound affordance; detect a user input on the sound affordance; and in response to detecting the user input, cause output of the audio information through a speaker.
In some embodiments, the visual information comprises a recorded video and a visual representation corresponding to a touch input received at an external device associated with the contact, further wherein displaying the looped playback of the visual information includes overlaying (e.g., with overlaying unit 1718) a looped playback of the recorded video with a looped playback of the visual representation.
In some embodiments, the visual information includes a still image and a visual representation corresponding to a touch input received at an external device associated with the contact, further wherein displaying the looped playback of the visual information includes overlaying (e.g., with overlaying unit 1718) the still image with a looped playback of the visual representation.
In some embodiments, the visual information includes an encoded video, wherein the encoded video includes a visual representation of a touch input detected at an external device associated with the contact and at least one of a still image and a recorded video captured at the external device.
The operations described above with reference to
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the delivery to users of invitational content or any other content that can optionally be of interest to them. The present disclosure contemplates that in some instances, this gathered data can optionally include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, home addresses, or any other identifying information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to deliver targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables calculated control of the delivered content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide location information for targeted content delivery services. In yet another example, users can select to not provide precise location information, but permit the transfer of location zone information.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information.
This application is a continuation of U.S. patent application Ser. No. 15/608,817, entitled “DIGITAL TOUCH ON LIVE VIDEO”, filed May 30, 2017, which claims priority to U.S. provisional patent application 62/349,075, entitled “DIGITAL TOUCH ON LIVE VIDEO”, filed Jun. 12, 2016, the contents of which are hereby incorporated by reference in their entirety. This application relates to the following co-pending applications: U.S. patent application Ser. No. 14/839,918, entitled “Electronic Touch Communication,” filed Aug. 28, 2015; U.S. patent application Ser. No. 14/839,921, entitled “Electronic Touch Communication,” filed Aug. 28, 2015; and U.S. patent application Ser. No. 14/839,919, entitled “Electronic Touch Communication,” filed Aug. 28, 2015. The contents of these applications are hereby incorporated by reference in their entireties.
Provisional application: 62/349,075, filed Jun. 2016, US.
Parent application: Ser. No. 15/608,817, filed May 2017, US. Child application: Ser. No. 16/745,060, US.