The present disclosure relates generally to computer user interfaces, and more specifically to techniques for navigating, viewing, and editing a collection of media items, including aggregated content items.
As the storage capacity and processing power of devices continue to increase, and as effortless media sharing between interconnected devices becomes more prevalent, the size of users' libraries of media items (e.g., photos and videos) continues to grow.
However, as libraries of media items continue to grow, creating an archive of the user's life and experiences, the libraries can become cumbersome to navigate. For example, many libraries arrange media items by default in a substantially inflexible manner. A user browsing for media can desire to see media that is related to a current context across different time periods. However, some interfaces require the user to navigate to an excessive number of different media directories or interfaces to locate the content that they seek. This is inefficient and wastes the user's time and device resources. It is therefore desirable to facilitate presentation of media items in a contextually relevant way and thereby provide an improved interface for engaging with media content.
Further, some techniques for navigating, viewing, and/or editing a collection of media items using electronic devices are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for navigating, viewing, and editing a collection of media items, including aggregated content items (e.g., aggregated media items). Such methods and interfaces optionally complement or replace other methods for navigating, viewing, and editing a collection of media items. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges.
In accordance with some embodiments, a method is described. The method comprises: at a computer system that is in communication with a display generation component and one or more input devices: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content that is separate from the content items; while playing the visual content of the first aggregated content item and the audio content, detecting, via the one or more input devices, a user input; and in response to detecting the user input: modifying audio content that is playing while continuing to play visual content of the first aggregated content item.
In accordance with some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content that is separate from the content items; while playing the visual content of the first aggregated content item and the audio content, detecting, via the one or more input devices, a user input; and in response to detecting the user input: modifying audio content that is playing while continuing to play visual content of the first aggregated content item.
In accordance with some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content that is separate from the content items; while playing the visual content of the first aggregated content item and the audio content, detecting, via the one or more input devices, a user input; and in response to detecting the user input: modifying audio content that is playing while continuing to play visual content of the first aggregated content item.
In accordance with some embodiments, a computer system is described. The computer system is configured to communicate with a display generation component and one or more input devices, and comprises: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content that is separate from the content items; while playing the visual content of the first aggregated content item and the audio content, detecting, via the one or more input devices, a user input; and in response to detecting the user input: modifying audio content that is playing while continuing to play visual content of the first aggregated content item.
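By way of a non-limiting illustration, the following Swift sketch models the behavior described in the preceding embodiments: an aggregated content item is an ordered sequence of items selected from a larger set by selection criteria, an audio track separate from the items plays alongside the visual content, and a user input modifies the audio while visual playback continues. All type and member names here are hypothetical and introduced solely for illustration; they are not part of the disclosure.

```swift
// Hypothetical types for illustration; not part of the disclosure.
struct ContentItem {
    let name: String
}

struct AggregatedContentItem {
    let items: [ContentItem]  // ordered sequence of selected content items
    init(source: [ContentItem], selectionCriteria: (ContentItem) -> Bool) {
        self.items = source.filter(selectionCriteria)
    }
}

final class AggregatedItemPlayer {
    private(set) var currentIndex = 0   // position in the visual sequence
    private(set) var audioTrack: String // audio that is separate from the items
    let aggregatedItem: AggregatedContentItem

    init(aggregatedItem: AggregatedContentItem, audioTrack: String) {
        self.aggregatedItem = aggregatedItem
        self.audioTrack = audioTrack
    }

    // Advances the visual content through the ordered sequence.
    func advanceVisualContent() {
        if currentIndex + 1 < aggregatedItem.items.count { currentIndex += 1 }
    }

    // Modifies the audio content in response to a user input; currentIndex is
    // not touched, so visual playback continues uninterrupted.
    func modifyAudio(to newTrack: String) {
        audioTrack = newTrack
    }
}
```

For example, a user input directed to an audio control might call modifyAudio(to:) while the visual sequence continues to advance.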
In accordance with some embodiments, a method is described. The method comprises: at a computer system that is in communication with a display generation component and one or more input devices: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a media library that includes photos and/or videos taken by a user of the computer system, wherein the first plurality of content items is selected based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content; after playing at least a portion of the visual content of the first aggregated content item, detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria; and subsequent to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria: in accordance with a determination that a playback condition of a first set of one or more playback conditions is met, playing visual content of a second aggregated content item different from the first aggregated content item, wherein the second aggregated content item comprises an ordered sequence of a second plurality of content items different from the first plurality of content items, and further wherein the second plurality of content items is selected from the media library that includes photos and/or videos taken by a user of the computer system, wherein the second plurality of content items is selected based on a second set of selection criteria.
In accordance with some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a media library that includes photos and/or videos taken by a user of the computer system, wherein the first plurality of content items is selected based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content; after playing at least a portion of the visual content of the first aggregated content item, detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria; and subsequent to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria: in accordance with a determination that a playback condition of a first set of one or more playback conditions is met, playing visual content of a second aggregated content item different from the first aggregated content item, wherein the second aggregated content item comprises an ordered sequence of a second plurality of content items different from the first plurality of content items, and further wherein the second plurality of content items is selected from the media library that includes photos and/or videos taken by a user of the computer system, wherein the second plurality of content items is selected based on a second set of selection criteria.
In accordance with some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a media library that includes photos and/or videos taken by a user of the computer system, wherein the first plurality of content items is selected based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content; after playing at least a portion of the visual content of the first aggregated content item, detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria; and subsequent to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria: in accordance with a determination that a playback condition of a first set of one or more playback conditions is met, playing visual content of a second aggregated content item different from the first aggregated content item, wherein the second aggregated content item comprises an ordered sequence of a second plurality of content items different from the first plurality of content items, and further wherein the second plurality of content items is selected from the media library that includes photos and/or videos taken by a user of the computer system, wherein the second plurality of content items is selected based on a second set of selection criteria.
In accordance with some embodiments, a computer system is described. The computer system is configured to communicate with a display generation component and one or more input devices, and comprises: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a media library that includes photos and/or videos taken by a user of the computer system, wherein the first plurality of content items is selected based on a first set of selection criteria; while playing the visual content of the first aggregated content item, playing audio content; after playing at least a portion of the visual content of the first aggregated content item, detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria; and subsequent to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria: in accordance with a determination that a playback condition of a first set of one or more playback conditions is met, playing visual content of a second aggregated content item different from the first aggregated content item, wherein the second aggregated content item comprises an ordered sequence of a second plurality of content items different from the first plurality of content items, and further wherein the second plurality of content items is selected from the media library that includes photos and/or videos taken by a user of the computer system, wherein the second plurality of content items is selected based on a second set of selection criteria.
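As a rough illustration of the playback-continuation logic described above, the following sketch checks whether playback of a first aggregated content item meets a termination criterion and, if a playback condition is also met, advances to a second aggregated content item. The specific criteria and conditions shown (end of content, user stop, autoplay enabled, viewer presence) are assumptions standing in for the unspecified "termination criteria" and "playback conditions."

```swift
import Foundation

// Hypothetical stand-ins for the termination criteria and playback conditions.
struct PlaybackState {
    var elapsed: TimeInterval
    var duration: TimeInterval
    var userStopped: Bool
}

func meetsTerminationCriteria(_ state: PlaybackState) -> Bool {
    // e.g., the visual content reached its end, or playback was stopped.
    return state.userStopped || state.elapsed >= state.duration
}

func playbackConditionMet(autoplayEnabled: Bool, viewerStillPresent: Bool) -> Bool {
    return autoplayEnabled && viewerStillPresent
}

func advanceIfAppropriate(state: PlaybackState,
                          autoplayEnabled: Bool,
                          viewerStillPresent: Bool,
                          playNextAggregatedItem: () -> Void) {
    guard meetsTerminationCriteria(state) else { return }
    if playbackConditionMet(autoplayEnabled: autoplayEnabled,
                            viewerStillPresent: viewerStillPresent) {
        // Plays the second aggregated content item, selected by a second
        // set of selection criteria from the same media library.
        playNextAggregatedItem()
    }
}
```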
In accordance with some embodiments, a method is described. The method comprises: at a computer system that is in communication with a display generation component and one or more input devices: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, detecting, via the one or more input devices, a user input; and in response to detecting the user input: pausing playback of the visual content of the first aggregated content item; and displaying, via the display generation component, a user interface, wherein displaying the user interface includes concurrently displaying a plurality of representations of content items in the first plurality of content items, including: a first representation of a first content item of the first plurality of content items, and a second representation of a second content item of the first plurality of content items.
In accordance with some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, detecting, via the one or more input devices, a user input; and in response to detecting the user input: pausing playback of the visual content of the first aggregated content item; and displaying, via the display generation component, a user interface, wherein displaying the user interface includes concurrently displaying a plurality of representations of content items in the first plurality of content items, including: a first representation of a first content item of the first plurality of content items, and a second representation of a second content item of the first plurality of content items.
In accordance with some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, detecting, via the one or more input devices, a user input; and in response to detecting the user input: pausing playback of the visual content of the first aggregated content item; and displaying, via the display generation component, a user interface, wherein displaying the user interface includes concurrently displaying a plurality of representations of content items in the first plurality of content items, including: a first representation of a first content item of the first plurality of content items, and a second representation of a second content item of the first plurality of content items.
In accordance with some embodiments, a computer system is described. The computer system is configured to communicate with a display generation component and one or more input devices, and comprises: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: playing, via the display generation component, visual content of a first aggregated content item, wherein the first aggregated content item comprises an ordered sequence of a first plurality of content items that are selected from a set of content items based on a first set of selection criteria; while playing the visual content of the first aggregated content item, detecting, via the one or more input devices, a user input; and in response to detecting the user input: pausing playback of the visual content of the first aggregated content item; and displaying, via the display generation component, a user interface, wherein displaying the user interface includes concurrently displaying a plurality of representations of content items in the first plurality of content items, including: a first representation of a first content item of the first plurality of content items, and a second representation of a second content item of the first plurality of content items.
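The pause-and-browse behavior of this embodiment might be modeled as a simple state transition, as in the hedged sketch below: a user input received during playback pauses the visual content and presents a user interface that concurrently displays representations (e.g., thumbnails) of at least a first and a second content item. The types are hypothetical.

```swift
// Hypothetical types; the representations could be thumbnails, for example.
enum PlayerMode {
    case playing(currentIndex: Int)
    case pausedWithGrid(representations: [String]) // shown concurrently
}

func handleUserInput(in mode: PlayerMode, itemRepresentations: [String]) -> PlayerMode {
    switch mode {
    case .playing:
        // Pause visual playback and display a user interface that concurrently
        // shows representations of the items in the first plurality.
        return .pausedWithGrid(representations: itemRepresentations)
    case .pausedWithGrid:
        return mode // already paused; no change in this simplified sketch
    }
}
```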
Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, devices are provided with faster, more efficient methods and interfaces for navigating, viewing, and editing media items, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace other methods for navigating, viewing, and editing media items.
For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments.
There is a need for electronic devices that provide efficient methods and interfaces for navigating, viewing, and editing content items (e.g., media items (e.g., photos and/or videos)). For example, there is a need for techniques that eliminate extensive manual effort by a user to retrieve media content that is related to a current context, and/or techniques that eliminate extensive manual effort by a user to modify content items, such as aggregated content items. Such techniques can reduce the cognitive burden on a user who navigates, views, and/or edits content items, thereby enhancing productivity. Further, such techniques can reduce processor and battery power otherwise wasted on redundant user inputs.
Below,
The processes described below enhance the operability of the devices and make the user-device interfaces more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, and/or additional techniques. These techniques also reduce power usage and improve battery life of the device by enabling the user to use the device more quickly and efficiently.
In addition, in methods described herein where one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated in multiple repetitions so that over the course of the repetitions all of the conditions upon which steps in the method are contingent have been met in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, then a person of ordinary skill would appreciate that the claimed steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of system or computer readable medium claims where the system or computer readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating steps of a method until all of the conditions upon which steps in the method are contingent have been met. A person having ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the contingent steps have been performed.
Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with a display generation component. The display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection. In some embodiments, the display generation component is integrated with the computer system. In some embodiments, the display generation component is separate from the computer system. As used herein, “displaying” content includes causing to display the content (e.g., video data rendered or decoded by display controller 156) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content.
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays.
As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
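One plausible reading of the weighted-average substitute measurement described above is sketched below; the sensor weights, units, and threshold value are illustrative assumptions rather than values taken from the disclosure.

```swift
// Hypothetical sensor model; weights, units, and threshold are illustrative.
struct ForceSample {
    let reading: Double // raw sensor value
    let weight: Double  // e.g., based on proximity to the contact location
}

// Weighted average of readings from multiple force sensors, used as an
// estimated intensity of the contact.
func estimatedIntensity(from samples: [ForceSample]) -> Double {
    let totalWeight = samples.reduce(0) { $0 + $1.weight }
    guard totalWeight > 0 else { return 0 }
    return samples.reduce(0) { $0 + $1.reading * $1.weight } / totalWeight
}

// Compares the estimate (or a substitute measurement) against a threshold.
func exceedsIntensityThreshold(_ samples: [ForceSample], threshold: Double = 1.0) -> Bool {
    return estimatedIntensity(from: samples) >= threshold
}
```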
As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in
Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs (such as computer programs (e.g., including instructions)) and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution-Data Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212,
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, depth camera controller 169, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some embodiments, input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g., 208,
A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons is, optionally, user-customizable. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
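For illustration only, the distinction between a quick press and a longer press could be reduced to a duration comparison, as in the sketch below; the 0.8-second cutoff and the action names are assumptions, not values from the disclosure.

```swift
import Foundation

// Illustrative only; the duration cutoff and action names are assumptions.
enum ButtonAction {
    case toggleScreenLock // quick press
    case togglePower      // longer press
}

func actionForPress(duration: TimeInterval,
                    longPressThreshold: TimeInterval = 0.8) -> ButtonAction {
    return duration < longPressThreshold ? .toggleScreenLock : .togglePower
}
```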
Touch-sensitive display 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch screen 112. Touch screen 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects.
Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112. In an exemplary embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, Calif.
A touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to the multi-touch sensitive touchpads described in the following: U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output.
A touch-sensitive display in some embodiments of touch screen 112 is described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A Computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164.
Device 100 optionally also includes one or more depth camera sensors 175.
Device 100 optionally also includes one or more contact intensity sensors 165.
Device 100 optionally also includes one or more proximity sensors 166.
Device 100 optionally also includes one or more tactile output generators 167.
Device 100 optionally also includes one or more accelerometers 168.
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 (
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
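A minimal sketch of the motion quantities mentioned above (speed, velocity, and acceleration of a point of contact derived from a series of contact data) follows; the contact sample format is an assumption introduced for illustration.

```swift
import Foundation

// Hypothetical contact sample; positions in points, timestamps in seconds.
struct ContactSample {
    let x: Double
    let y: Double
    let timestamp: TimeInterval
}

struct Motion {
    let velocity: (dx: Double, dy: Double) // points per second
    let speed: Double                      // magnitude of the velocity
}

// Velocity and speed between two consecutive contact samples.
func motion(from a: ContactSample, to b: ContactSample) -> Motion? {
    let dt = b.timestamp - a.timestamp
    guard dt > 0 else { return nil }
    let v = (dx: (b.x - a.x) / dt, dy: (b.y - a.y) / dt)
    return Motion(velocity: v, speed: (v.dx * v.dx + v.dy * v.dy).squareRoot())
}

// Acceleration as the change in velocity over the intervening time interval.
func acceleration(from m1: Motion, to m2: Motion, over dt: TimeInterval) -> (Double, Double)? {
    guard dt > 0 else { return nil }
    return ((m2.velocity.dx - m1.velocity.dx) / dt,
            (m2.velocity.dy - m1.velocity.dy) / dt)
}
```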
In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter).
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event.
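The contact-pattern idea can be illustrated with a hedged sketch that classifies a tap versus a swipe from a sequence of contact events; the event names and the movement tolerance are assumptions, not values from the disclosure.

```swift
// Hypothetical event and gesture types; the 10-point tolerance is an assumption.
enum ContactEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}

enum Gesture {
    case tap
    case swipe
    case unknown
}

func classify(_ events: [ContactEvent], tolerance: Double = 10) -> Gesture {
    guard case .fingerDown(let x0, let y0)? = events.first,
          case .fingerUp(let x1, let y1)? = events.last else { return .unknown }
    let netMovement = ((x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0)).squareRoot()
    let hasDrag = events.contains {
        if case .fingerDrag = $0 { return true } else { return false }
    }
    // A tap: finger-down followed by finger-up at substantially the same position.
    if !hasDrag && netMovement <= tolerance { return .tap }
    // A swipe: one or more finger-dragging events with net movement.
    if hasDrag && netMovement > tolerance { return .swipe }
    return .unknown
}
```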
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
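One possible, purely illustrative model of the code-to-graphic mapping described above is sketched below; the structures and names are hypothetical and not drawn from the disclosure.

```swift
// Hypothetical structures modeling graphics referenced by code.
struct GraphicProperties {
    var x: Double = 0
    var y: Double = 0
    var opacity: Double = 1.0
}

struct DrawCommand {
    let assetName: String
    let properties: GraphicProperties
}

final class GraphicsStore {
    private var assetsByCode: [Int: String] = [:]

    func register(code: Int, assetName: String) {
        assetsByCode[code] = assetName
    }

    // Resolves codes received from applications, together with coordinate and
    // property data, into draw commands; unknown codes are skipped.
    func commands(for requests: [(code: Int, properties: GraphicProperties)]) -> [DrawCommand] {
        return requests.compactMap { request -> DrawCommand? in
            guard let asset = assetsByCode[request.code] else { return nil }
            return DrawCommand(assetName: asset, properties: request.properties)
        }
    }
}
```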
Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephone module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch screen 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety.
Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. For example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152).
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
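To illustrate the hit-view determination described above, the following is a minimal Swift sketch under simplified assumptions (an absolute-coordinate view tree, with hypothetical View, Point, and hitView(for:in:) names); it is an illustrative sketch only, not the implementation of hit view determination module 172.

```swift
// Minimal sketch of hit-view determination over a simplified view tree.
// Coordinates are absolute here for simplicity.
struct Point { var x: Double; var y: Double }

final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    var subviews: [View] = []

    init(name: String, frame: (x: Double, y: Double, width: Double, height: Double)) {
        self.name = name
        self.frame = frame
    }

    func contains(_ p: Point) -> Bool {
        p.x >= frame.x && p.x < frame.x + frame.width &&
        p.y >= frame.y && p.y < frame.y + frame.height
    }
}

// Returns the lowest (deepest) view in the hierarchy that contains the point,
// i.e., the view that should receive all sub-events of the gesture.
func hitView(for point: Point, in root: View) -> View? {
    guard root.contains(point) else { return nil }
    // Prefer the deepest subview that also contains the point.
    for subview in root.subviews.reversed() {
        if let hit = hitView(for: point, in: subview) {
            return hit
        }
    }
    return root
}

// Example: a window containing a list, which contains a row.
let row = View(name: "row", frame: (x: 0, y: 40, width: 320, height: 44))
let list = View(name: "list", frame: (x: 0, y: 0, width: 320, height: 480))
list.subviews = [row]
let window = View(name: "window", frame: (x: 0, y: 0, width: 320, height: 480))
window.subviews = [list]

print(hitView(for: Point(x: 10, y: 50), in: window)?.name ?? "none")  // "row"
```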
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
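As an illustration of an event definition expressed as an ordered sequence of sub-events, the following Swift sketch checks a simplified double-tap definition (touch begin, liftoff, touch begin, liftoff within a timing window); the types and the 0.3 second window are hypothetical and are not the actual event comparator 184 or event definitions 186.

```swift
import Foundation

// Simplified sub-events of a touch-based gesture.
enum SubEvent {
    case touchBegin(time: TimeInterval)
    case touchEnd(time: TimeInterval)
    case touchMove
    case touchCancel
}

struct DoubleTapDefinition {
    // Each phase (touch or liftoff) must complete within this window.
    let maxPhaseDuration: TimeInterval = 0.3

    // Returns true if the ordered sub-events match: begin, end, begin, end,
    // with each consecutive pair inside the allowed phase duration.
    func matches(_ events: [SubEvent]) -> Bool {
        guard events.count == 4 else { return false }
        var times: [TimeInterval] = []
        for (index, event) in events.enumerated() {
            switch (index % 2, event) {
            case (0, .touchBegin(let t)), (1, .touchEnd(let t)):
                times.append(t)
            default:
                return false   // movement, cancellation, or out-of-order phase
            }
        }
        // Check that each phase transition happened within the allowed window.
        return zip(times, times.dropFirst()).allSatisfy { $1 - $0 <= maxPhaseDuration }
    }
}

let definition = DoubleTapDefinition()
let tap2 = [SubEvent.touchBegin(time: 0.00), .touchEnd(time: 0.10),
            .touchBegin(time: 0.25), .touchEnd(time: 0.32)]
print(definition.matches(tap2))   // true
```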
In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
In some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
In some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device 100.
Although some of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
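The following Swift sketch illustrates, under hypothetical event names, how a finger gesture and its mouse-based replacement can be treated as the same logical input; it is only an illustrative sketch of the substitution described above.

```swift
// Raw inputs from different input devices.
enum RawInput {
    case tap(x: Double, y: Double)
    case mouseClick(x: Double, y: Double)
    case swipe(fromX: Double, toX: Double)
    case mouseDrag(fromX: Double, toX: Double)
}

// Device-independent gestures that the user interface responds to.
enum LogicalGesture: Equatable {
    case select(x: Double, y: Double)
    case horizontalSwipe(distance: Double)
}

func logicalGesture(for input: RawInput) -> LogicalGesture {
    switch input {
    case .tap(let x, let y), .mouseClick(let x, let y):
        return .select(x: x, y: y)
    case .swipe(let fromX, let toX), .mouseDrag(let fromX, let toX):
        return .horizontalSwipe(distance: toX - fromX)
    }
}

// A tap and a mouse click at the same point map to the same logical gesture.
print(logicalGesture(for: .tap(x: 100, y: 40)) ==
      logicalGesture(for: .mouseClick(x: 100, y: 40)))   // true
```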
Exemplary techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Serial No. PCT/US2013/040061, titled “Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, published as WIPO Publication No. WO/2013/169849, and International Patent Application Serial No. PCT/US2013/069483, titled “Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed Nov. 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, device 500 has one or more input mechanisms 506 and 508. Input mechanisms 506 and 508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device 500 to be worn by a user.
Input mechanism 508 is, optionally, a microphone, in some examples. Personal electronic device 500 optionally includes various sensors, such as GPS sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to I/O section 514.
Memory 518 of personal electronic device 500 can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 700-1100.
As used herein, the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500.
As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355) while the cursor is over a particular user interface element, the particular user interface element is adjusted in accordance with the detected input.
As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
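As an illustration of the comparison described above, the following Swift sketch computes one possible characteristic intensity (a mean of intensity samples) and maps it to one of three operations using two thresholds; the operation names and threshold values are hypothetical.

```swift
// Three possible operations chosen by comparing a characteristic intensity
// against two thresholds (names are illustrative only).
enum ContactOperation {
    case preview      // below the first threshold
    case peek         // between the thresholds
    case commit       // above the second threshold
}

struct IntensityThresholds {
    let first: Double
    let second: Double
}

// One possible characteristic: the mean of the sampled intensities.
// A maximum, top-10-percentile value, or time-weighted average could be
// substituted, as the description above notes.
func characteristicIntensity(of samples: [Double]) -> Double {
    guard !samples.isEmpty else { return 0 }
    return samples.reduce(0, +) / Double(samples.count)
}

func operation(for samples: [Double], thresholds: IntensityThresholds) -> ContactOperation {
    let intensity = characteristicIntensity(of: samples)
    if intensity > thresholds.second { return .commit }
    if intensity > thresholds.first { return .peek }
    return .preview
}

let thresholds = IntensityThresholds(first: 0.3, second: 0.7)
print(operation(for: [0.1, 0.2, 0.25], thresholds: thresholds))    // preview
print(operation(for: [0.4, 0.5, 0.6], thresholds: thresholds))     // peek
print(operation(for: [0.8, 0.9, 0.95], thresholds: thresholds))    // commit
```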
As used herein, an “installed application” refers to a software application that has been downloaded onto an electronic device (e.g., devices 100, 300, and/or 500) and is ready to be launched (e.g., become opened) on the device. In some embodiments, a downloaded application becomes an installed application by way of an installation program that extracts program portions from a downloaded package and integrates the extracted portions with the operating system of the computer system.
As used herein, the terms “open application” or “executing application” refer to a software application with retained state information (e.g., as part of device/global internal state 157 and/or application internal state 192). An open or executing application is, optionally, any one of the following types of applications: an active application, which is currently displayed on a display screen of the device on which the application is being used; a background application (or background process), which is not currently displayed but for which one or more processes are being processed by one or more processors; or a suspended or hibernated application, which is not running but has state information that is stored in memory (volatile and non-volatile, respectively) and that can be used to resume execution of the application.
As used herein, the term “closed application” refers to software applications without retained state information (e.g., state information for closed applications is not stored in a memory of the device). Accordingly, closing an application includes stopping and/or removing application processes for the application and removing state information for the application from the memory of the device. Generally, opening a second application while in a first application does not close the first application. When the second application is displayed and the first application ceases to be displayed, the first application becomes a background application.
Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Media library user interface 604 also includes selectable options 606I and 606J. Option 606I is selectable to toggle the aspect ratios at which media items are presented in media library user interface 604.
Media library user interface 604 also includes selectable options 606E-606H. Option 606E is selectable to display media library user interface 604. Option 606F is selectable to display a curated content user interface presenting the user with one or more media items that have been selected and/or curated for the user based on selection criteria. Option 606G is selectable to display one or more collections of media items (e.g., one or more albums). The one or more collections, in various embodiments, include one or more user-defined collections and/or one or more automatically generated collections. Option 606H is selectable to allow a user to search for media items in the media library (e.g., perform a keyword search for media items).
Curated content user interface 610 also includes one or more featured media items, including featured media item 612C. Featured media items are media items (e.g., photos and/or videos) from a user's media library that have been selected (e.g., automatically selected) for presentation to the user based on one or more selection criteria. In some embodiments, featured media items presented in curated content user interface 610 change over time (e.g., change from one day to the next, or from one week to the next).
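The following Swift sketch illustrates one way featured media items could be selected from a library using simple selection criteria, with the selection varying from day to day; the criteria and scoring shown are hypothetical and are not the selection criteria of any particular embodiment.

```swift
// A media item with two illustrative attributes used as selection criteria.
struct MediaItem {
    let identifier: Int
    let isFavorite: Bool
    let qualityScore: Double   // e.g., 0.0 ... 1.0
}

func featuredItems(from library: [MediaItem], count: Int, dayOfYear: Int) -> [MediaItem] {
    // Score each item, nudging the score by a day-dependent offset so the
    // featured selection rotates over time.
    let scored = library.map { item -> (MediaItem, Double) in
        var score = item.qualityScore + (item.isFavorite ? 0.5 : 0.0)
        score += Double((item.identifier + dayOfYear) % 7) * 0.01
        return (item, score)
    }
    return scored.sorted { $0.1 > $1.1 }.prefix(count).map { $0.0 }
}

let library = (1...20).map {
    MediaItem(identifier: $0, isFavorite: $0 % 5 == 0, qualityScore: Double($0) / 20.0)
}
print(featuredItems(from: library, count: 3, dayOfYear: 120).map(\.identifier))
```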
Options 636E-636H correspond to different duration options for the first aggregated content item, and are selectable to modify and/or specify a duration of the first aggregated content item. For example, the first aggregated content item currently has a duration corresponding to option 636F (e.g., a medium duration), and the specified duration corresponds to 38 media items. Option 636E is selectable to shorten the duration of the first aggregated content item by decreasing the number of media items in the first aggregated content item (e.g., from 38 media items to 24 media items). Option 636G is selectable to increase the duration of the first aggregated content item by increasing the number of media items in the first aggregated content item. In the depicted embodiment, option 636G corresponds to a specific time duration (e.g., 1 minute 28 seconds), and the time duration corresponds to a maximum time duration that is allowable for sharing the first aggregated content item. Option 636H is selectable to increase the duration of the first aggregated content item to match a duration of the audio track that has been applied to the first aggregated content item.
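The following Swift sketch illustrates the duration options described above, using the example counts (24 and 38 media items) together with a hypothetical per-item display time; it is only an illustrative sketch.

```swift
// Duration options roughly mirroring options 636E-636H described above.
enum DurationOption {
    case short
    case medium
    case long(maximumShareableSeconds: Double)
    case fullSong(trackSeconds: Double)
}

func mediaItemCount(for option: DurationOption, secondsPerItem: Double = 2.3) -> Int {
    switch option {
    case .short:
        return 24
    case .medium:
        return 38
    case .long(let maximumShareableSeconds):
        // Cap the item count so the total stays within the shareable maximum.
        return Int(maximumShareableSeconds / secondsPerItem)
    case .fullSong(let trackSeconds):
        // Size the item count so playback spans the whole audio track.
        return Int((trackSeconds / secondsPerItem).rounded())
    }
}

print(mediaItemCount(for: .medium))                               // 38
print(mediaItemCount(for: .long(maximumShareableSeconds: 88)))    // 38 (1 min 28 s)
print(mediaItemCount(for: .fullSong(trackSeconds: 204)))          // 89
```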
Recipes user interface 642 includes recipe indication 644A, which indicates the currently applied recipe (e.g., the currently applied combination of visual filter and audio track).
As discussed above, and demonstrated in the figures, when a user swipes between different recipes in the recipe user interface 642, the user can switch between combinations of visual filters and audio tracks to be applied to the first aggregated content item. In some embodiments, in addition to changing the visual filter and the audio track that is applied to the first aggregated content item, when a user swipes between different recipes (e.g., different combinations of visual filters and audio tracks), electronic device 600 also changes other audio and/or visual characteristics of playback of the first aggregated content item, such as the types of visual transitions that are applied between media items presented during playback of the first aggregated content item. For example, a first recipe (e.g., a first visual filter/audio track combination) can utilize a first set of visual transitions (e.g., fade in, fade out), while a second recipe can utilize a second set of visual transitions different from the first set (e.g., swipe in, swipe out). In some embodiments, visual transitions applied between media items are selected based on audio characteristics of an audio track that is part of the applied visual filter/audio track combination. For example, higher energy or faster audio tracks (e.g., audio tracks with a beats-per-minute value that exceeds a threshold) can utilize a first set of visual transitions, while lower energy or slower audio tracks (e.g., audio tracks with a beats-per-minute value below the threshold) can utilize a second set of visual transitions.
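The following Swift sketch illustrates selecting a set of visual transitions from the tempo of the applied audio track, as described above; the transition names and the 110 beats-per-minute threshold are hypothetical.

```swift
// Illustrative visual transitions grouped by energy level.
enum VisualTransition {
    case fadeIn, fadeOut        // gentler transitions
    case swipeIn, swipeOut      // more energetic transitions
}

func transitions(forBeatsPerMinute bpm: Double, threshold: Double = 110) -> [VisualTransition] {
    // Faster, higher-energy tracks get the more energetic transition set;
    // slower tracks get the gentler one.
    bpm > threshold ? [.swipeIn, .swipeOut] : [.fadeIn, .fadeOut]
}

print(transitions(forBeatsPerMinute: 128))   // energetic set: swipe in, swipe out
print(transitions(forBeatsPerMinute: 84))    // gentler set: fade in, fade out
```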
As noted above, while displaying visual filter selection user interface 670, electronic device 600 maintains playback (e.g., audio and visual playback) of the first aggregated content item, and different ones of tiles 674A-674O depict playback of the visual content of the first aggregated content item with a different visual filter applied.
As described below, method 700 provides an intuitive way for viewing and editing content items. The method reduces the cognitive burden on a user for viewing and editing content items, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to view and edit content items faster and more efficiently conserves power and increases the time between battery charges.
The computer system plays (702), via the display generation component, visual content of a first aggregated content item (e.g., media item 628A).
While playing the visual content of the first aggregated content item (704) (e.g., media item 628A), the computer system plays (706) audio content that is separate from the content items.
While playing the visual content of the first aggregated content item and the audio content (708), the computer system detects (710), via the one or more input devices, a user input (e.g., 648) (e.g., a gesture (e.g., via a touch-sensitive display and/or a touch-sensitive surface) (e.g., a tap gesture, a swipe gesture) and/or a voice input).
In response to detecting the user input (712), the computer system modifies (714) audio content that is playing (e.g., a non-volume audio parameter (e.g., an audio parameter different from volume)) (e.g., changes the audio content from a first audio track to a second audio track different from the first audio track (e.g., from a first music track to a second music track different from the first music track)) while continuing to play visual content of the first aggregated content item.
In some embodiments, in response to detecting the user input (e.g., 648), the computer system modifies (716) a visual parameter of playback of visual content of the first aggregated content item.
In some embodiments, playing the visual content of the first aggregated content item (e.g., prior to detecting the user input) includes displaying the visual content with a first visual filter applied to a first region (e.g., a first display region) of the visual content.
In some embodiments, modifying the visual parameter of playback of visual content of the first aggregated content item while continuing to play visual content of the first aggregated content item includes displaying the visual content with a second visual filter different from the first visual filter applied to the first region of the visual content.
In some embodiments, playing audio content that is separate from the content items while playing the visual content of the first aggregated content item includes playing a first audio track separate from the content items while playing the visual content of the first aggregated content item.
In some embodiments, modifying audio content that is playing while continuing to play visual content of the first aggregated content item includes playing a second audio track separate from the content items and different from the first audio track (e.g., audio track 2).
In some embodiments, the first predefined combination (e.g., MEMORY RECIPE 1 OF 6) and the second predefined combination are part of an ordered plurality of predefined combinations.
In some embodiments, the computer system applies the first predefined combination to the first aggregated content item (e.g., playing the first audio track and displaying the visual content of the first aggregated content item with the first visual filter applied to the first region); while the first predefined combination is applied to the first aggregated content item, the computer system detects the user input; and in response to detecting the user input, the computer system applies the second predefined combination to the first aggregated content item (e.g., playing the second audio track and displaying the visual content of the first aggregated content item with the second visual filter applied to the first region). In some embodiments, in response to detecting the user input, the computer system ceases to apply the first predefined combination (e.g., ceasing playing the first audio track, and ceasing applying the first visual filter to the first region). In some embodiments, the second predefined combination is applied in response to the user input based on the second predefined combination being adjacent to the first predefined combination in the order (e.g., in accordance with a determination that the second predefined combination is adjacent to the first predefined combination in the order). In some embodiments, the user input comprises a direction, and the direction of the user input is indicative of a request to apply a next predefined combination in the order, and the second predefined combination is applied in response to the user input based on the second predefined combination being immediately subsequent to the first predefined combination in the order. In some embodiments, the user input comprises a (e.g., different) direction, and the direction of the user input is indicative of a request to apply a previous predefined combination in the order, and the second predefined combination is applied in response to the user input based on the second predefined combination being immediately before the first predefined combination in the order.
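The following Swift sketch illustrates an ordered list of predefined filter/track combinations and moving to the adjacent combination in response to a directional input, as described above; the type names and recipe contents are hypothetical, and clamping at the ends of the order is an assumption rather than a stated behavior.

```swift
// A predefined combination of a visual filter and an audio track.
struct Recipe {
    let visualFilter: String
    let audioTrack: String
}

enum SwipeDirection { case next, previous }

struct RecipeCarousel {
    let recipes: [Recipe]
    private(set) var currentIndex: Int = 0

    init(recipes: [Recipe]) {
        self.recipes = recipes
    }

    var current: Recipe { recipes[currentIndex] }

    // Applies the adjacent recipe in the order; clamps at the ends so a swipe
    // past the first or last recipe leaves the selection unchanged.
    mutating func swipe(_ direction: SwipeDirection) {
        switch direction {
        case .next:
            currentIndex = min(currentIndex + 1, recipes.count - 1)
        case .previous:
            currentIndex = max(currentIndex - 1, 0)
        }
    }
}

var carousel = RecipeCarousel(recipes: [
    Recipe(visualFilter: "Vibrant", audioTrack: "Track 1"),
    Recipe(visualFilter: "Black & White", audioTrack: "Track 2"),
    Recipe(visualFilter: "Warm", audioTrack: "Track 3"),
])
carousel.swipe(.next)
print(carousel.current.visualFilter)   // "Black & White"
```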
In some embodiments, the first visual filter (e.g., 646B) is selected to be part of the first predefined combination with the first audio track (e.g., audio track 1) based on one or more audio characteristics of the first audio track (e.g., beats per minute and/or sound wave characteristics) and one or more visual characteristics of the first visual filter (e.g., exposure, brightness, saturation, hue, and/or contrast). In some embodiments, the second visual filter is selected to be part of the second predefined combination with the second audio track based on one or more audio characteristics of the second audio track and one or more visual characteristics of the second visual filter. Selecting a first visual filter to pair with the first audio track based on one or more audio characteristics of the first audio track improves the quality of filter/audio track combinations provided to a user, thereby providing an improved means for selection by the user. Otherwise, additional inputs would be required to locate the desired combination of visual filter and audio track.
In some embodiments, playing the visual content of the first aggregated content item (e.g., prior to detecting the user input) comprises: concurrently displaying, via the display generation component: the visual content (e.g., 628A) with the first visual filter (e.g., 646B) applied to the first region of the visual content; and the visual content with the second visual filter (e.g., 646C) applied to a second region of the visual content different from the first region.
In some embodiments, playing the visual content of the first aggregated content item (e.g., prior to detecting the user input) further comprises: while concurrently displaying the visual content (e.g., 628A) with the first visual filter (e.g., 646B) applied to the first region and the second visual filter (e.g., 646C) applied to the second region, displaying, via the display generation component, the visual content with a third visual filter (e.g., 646A) different from the first visual filter and the second visual filter applied to a third region of the visual content different from the first region and the second region.
In some embodiments, playing the visual content of the first aggregated content item (e.g., 628A, 628B, 628C, 628D) (e.g., prior to detecting the user input) includes applying transitions of a first visual transition type (e.g., a crossfade, a fade to black, an exposure bleed, a pan, a scale, and/or a rotate) to the visual content of the first aggregated content item (e.g., applying a first type of visual transition between content items in the first aggregated content item), and modifying the visual parameter of playback of visual content of the first aggregated content item while continuing to play visual content of the first aggregated content item includes modifying the transitions to a second visual transition type different from the first visual transition type (e.g., applying a second type of visual transition between content items in the first aggregated content item). In some embodiments, playing the visual content of the first aggregated content item (e.g., prior to detecting the user input) includes: displaying a first content item of the first aggregated content item (e.g., a first image and/or a first video), displaying a transition from the first content item to a second content item of the first aggregated content item, wherein the transition is of the first visual transition type, and after displaying the transition from the first content item to the second content item, displaying the second content item. After detecting the user input and modifying the visual parameter of playback of visual content of the first aggregated content item (e.g., including modifying the first visual transition type to the second visual transition type): the computer system displays a third content item of the first aggregated content item, after displaying the third content item, the computer system displays a transition from the third content item to a fourth content item of the first aggregated content item, wherein the transition is of the second visual transition type different from the first visual transition type. Modifying visual transitions applied to visual content of the first aggregated content item in response to detecting a user input enables a user to quickly modify visual transitions applied to the visual content of the first aggregated content item, thereby reducing the number of inputs needed for modifying visual transitions that are applied to the visual content.
In some embodiments, the first visual transition type is selected from a plurality of visual transition types based on the audio content (e.g., track 1).
In some embodiments, the first visual transition type is selected from a first set of visual transition types based on a tempo (e.g., beats per minute information) for the audio content (e.g., track 1).
In some embodiments, playing the visual content (e.g., 628A, 628B, 628C, 628D) of the first aggregated content item (e.g., prior to detecting the user input) includes: displaying the visual content (e.g., 628A) with the first visual filter applied on one side of a divider and the second visual filter applied on the other side of the divider.
In some embodiments, in response to detecting the user input (e.g., 648), the computer system shifts the divider in concurrence with the user input (e.g., shifting the blank space between visual filters 646B, 646C).
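The following Swift sketch illustrates a divider that separates two filtered regions and shifts in concurrence with a horizontal swipe, as described above; the names and dimensions are hypothetical.

```swift
// A split preview in which a divider separates the region showing the current
// visual filter from the region showing the incoming one.
struct SplitPreview {
    let width: Double
    var dividerX: Double   // x position of the boundary between the two filters

    // Moves the divider with the swipe, clamped to the visible area.
    mutating func updateDivider(forSwipeTranslation dx: Double) {
        dividerX = min(max(dividerX + dx, 0), width)
    }

    // Fraction of the screen showing the incoming filter.
    var incomingFilterFraction: Double { (width - dividerX) / width }
}

var preview = SplitPreview(width: 390, dividerX: 390)
preview.updateDivider(forSwipeTranslation: -130)
print(preview.incomingFilterFraction)   // ≈ 0.33
```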
In some embodiments, prior to detecting the user input (e.g., 648), the first aggregated content item is configured to display a first content item (e.g., 628A) of (or, optionally, each content item of) the first plurality of content items for a first duration of time (e.g., one second, or three seconds); and modifying the visual parameter of playback of visual content of the first aggregated content item comprises configuring the first aggregated content item to display the first content item (e.g., 628A) (or, optionally, each content item) for a second duration of time that is different from the first duration of time (e.g., two seconds, or four seconds). In some embodiments, prior to detecting the user input, the first aggregated content item is configured to display a second content item of the first plurality of content items for a third duration of time; and modifying the visual parameter of playback of visual content of the first aggregated content item comprises, in response to detecting the user input, configuring the first aggregated content item to display the second content item for a fourth duration of time that is different from the third duration of time. In some embodiments, the second duration of time is shorter than the first duration of time based on a determination that the user input causes playing of faster audio content (e.g., modifying the audio content includes playing new audio content that has a faster tempo (e.g., a greater beats per minute value) than the audio content). In some embodiments, the second duration of time is longer than the first duration of time based on a determination that the user input causes playing of slower audio content (e.g., modifying the audio content includes playing new audio content that has a slower tempo (e.g., a lower beats per minute value) than the audio content). Modifying the duration of time that content items are displayed in response to detecting a user input enables a user to quickly modify the duration of time that content items are displayed, thereby reducing the number of inputs needed for modifying display durations for content items.
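The following Swift sketch illustrates scaling the per-item display duration inversely with the tempo of the newly applied audio content, as described above; the reference tempo and base duration are hypothetical.

```swift
// Faster audio shortens how long each content item is displayed; slower audio
// lengthens it.
func displayDuration(perItemAtReferenceTempo baseSeconds: Double = 3.0,
                     referenceBeatsPerMinute: Double = 100,
                     newBeatsPerMinute: Double) -> Double {
    // Scale inversely with tempo so items track the pace of the audio.
    baseSeconds * (referenceBeatsPerMinute / newBeatsPerMinute)
}

print(displayDuration(newBeatsPerMinute: 150))   // 2.0 (faster track, shorter display)
print(displayDuration(newBeatsPerMinute: 75))    // 4.0 (slower track, longer display)
```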
In some embodiments, the user input (e.g., 648, 650A, 650B) comprises a gesture (e.g., via a touch-sensitive display and/or a touch sensitive surface) (e.g., a tap gesture, a swipe gesture, and/or a different gesture) (e.g., a touchscreen gesture and/or a non-touchscreen gesture such as a mouse click or hover gesture). Modifying audio content in response to detecting a gesture enables a user to quickly modify audio content that is applied to visual content, thereby reducing the number of inputs needed for modifying audio content that is applied to visual content. Modifying audio content in response to detecting a gesture provides the user with feedback about the current state of the device (e.g., that the device has detected the gesture).
In some embodiments, modifying audio content that is playing while continuing to play visual content of the first aggregated content item comprises changing the audio content from a first audio track (e.g., track 1) to a second audio track different from the first audio track.
In some embodiments, changing the audio content from the first audio track to the second audio track comprises: ceasing playing the first audio track at a first playback position of the first audio track; and initiating playing of the second audio track.
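The following Swift sketch illustrates changing from the first audio track, stopped at its current playback position, to the second audio track; the brief volume ramp on the outgoing track is a hypothetical smoothing detail and is not stated above.

```swift
// A snapshot of which track is playing, where it is, and at what volume.
struct AudioPlaybackState {
    var trackName: String
    var positionSeconds: Double
    var volume: Double
}

func changeTrack(from current: AudioPlaybackState,
                 to newTrackName: String,
                 crossfadeSteps: Int = 4) -> [AudioPlaybackState] {
    // Ramp the outgoing track down, then hand playback to the new track at 0:00.
    var states: [AudioPlaybackState] = []
    for step in stride(from: crossfadeSteps - 1, through: 0, by: -1) {
        states.append(AudioPlaybackState(trackName: current.trackName,
                                         positionSeconds: current.positionSeconds,
                                         volume: Double(step) / Double(crossfadeSteps)))
    }
    states.append(AudioPlaybackState(trackName: newTrackName, positionSeconds: 0, volume: 1.0))
    return states
}

let transition = changeTrack(
    from: AudioPlaybackState(trackName: "Track 1", positionSeconds: 42.5, volume: 1.0),
    to: "Track 2")
for state in transition {
    print("\(state.trackName) @ \(state.positionSeconds)s, volume \(state.volume)")
}
```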
In some embodiments, the computer system detects, via the one or more input devices, one or more duration setting inputs (e.g., one or more inputs selecting options 636E-636H) (e.g., one or more tap inputs and/or one or more non-tap inputs) (e.g., while playing the visual content of the first aggregated content item and the audio content). In response to detecting the one or more duration setting inputs, the computer system modifies a duration (e.g., length) of the first aggregated content item (e.g., a duration of the visual content of the first aggregated content item) (e.g., from a first duration to a second duration). In some embodiments, prior to detecting the one or more duration setting inputs, the rate at which content of the first aggregated content item is displayed would result in the computer system taking a first duration to play the first aggregated content and, after detecting the one or more duration setting inputs, the rate at which content of the first aggregated content item is displayed would result in the computer system taking a second duration (different from the first duration) to play the first aggregated content. Modifying the duration of the first aggregated content item in response to detecting a user input enables a user to quickly modify the duration of the first aggregated content item, thereby reducing the number of inputs needed for modifying the duration of the aggregated content item.
In some embodiments, modifying audio content that is playing while continuing to play visual content of the first aggregated content item comprises changing the audio content from a first audio track (e.g., a first music track and/or a first song) to a second audio track (e.g., a second music track and/or a second song) different from the first audio track while continuing to play visual content of the first aggregated content item, wherein the first audio track has a first duration (e.g., length), and the second audio track has a second duration (e.g., length) different from the first duration. In response to detecting the user input, the computer system modifies a duration (e.g., length) of the first aggregated content item (e.g., a duration of the visual content of the first aggregated content item) based on the second duration (e.g., option 636H “full song”) (e.g., modifying the duration of the first aggregated content item to the second duration (e.g., to equal the second duration)). In some embodiments, modifying the duration of the first aggregated content item includes modifying, for each content item of at least a subset of the first plurality of content items, a respective duration that the content item is configured to be displayed (e.g., modifying a duration a first content item is to be displayed, modifying a duration a second content item is to be displayed). In some embodiments, modifying the duration of the first aggregated content item includes modifying the number of content items to be displayed in the first aggregated content item (e.g., modifying the number of content items in the first plurality of content items). Automatically modifying the duration of the first aggregated content item based on the duration of the second audio track allows the user to quickly modify the duration of the first aggregated content item without further user inputs.
In some embodiments, while playing the audio content, the computer system detects, via the one or more input devices, one or more duration fitting inputs (e.g., one or more inputs selecting option 636H) (e.g., one or more tap inputs and/or one or more non-tap inputs) (e.g., while playing the visual content of the first aggregated content item and the audio content). In response to detecting the one or more duration fitting inputs, and in accordance with a determination that the audio content has a first duration, the computer system modifies a duration (e.g., length) of the first aggregated content item (e.g., a duration of the visual content of the first aggregated content item) from a second duration different from the first duration to the first duration (e.g., based on a determination that the audio content has the first duration). In some embodiments, in response to detecting the one or more duration fitting inputs, and in accordance with a determination that the audio content has a third duration different from the first duration and the second duration, the computer system modifies the duration of the first aggregated content item from the second duration to the third duration. Modifying the duration of the first aggregated content item in response to detecting a user input enables a user to quickly modify the duration of the first aggregated content item, thereby reducing the number of inputs needed for modifying the duration of the aggregated content item.
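The following Swift sketch illustrates fitting the aggregated content item to the duration of the audio content by redistributing display time across the content items; the numbers used are hypothetical.

```swift
import Foundation

// Spread the audio duration evenly across the content items so that total
// visual playback matches the length of the audio content.
func perItemDisplayTime(fittingAudioSeconds audioSeconds: Double, itemCount: Int) -> Double {
    guard itemCount > 0 else { return 0 }
    return audioSeconds / Double(itemCount)
}

// Example: a 3 minute 24 second track spread across 38 media items.
let seconds = perItemDisplayTime(fittingAudioSeconds: 204, itemCount: 38)
print(String(format: "%.2f seconds per item", seconds))   // "5.37 seconds per item"
```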
In some embodiments, while playing the visual content of the first aggregated content item (e.g., 628A, 628B, 628C, 628D) and the audio content that is separate from the content items (e.g., audio track 1, audio track 2, or audio track 3), the computer system displays, via the display generation component, a first selectable object that is selectable to display a visual filter selection user interface, wherein the visual filter selection user interface comprises: a first user interface object displaying continued playing of the visual content of the first aggregated content item with a first visual filter applied to the visual content; and a second user interface object displaying continued playing of the visual content of the first aggregated content item with a second visual filter different from the first visual filter applied to the visual content.
In some embodiments, the first user interface object is displayed in a first region of the visual filter selection user interface, and the second user interface object is displayed in a second region of the visual filter selection user interface that does not overlap the first region. In some embodiments, displaying the visual filter selection user interface comprises concurrently displaying, with the first user interface object and the second user interface object, a third user interface object (e.g., corresponding to a third visual filter different from the first and second visual filters) displaying the continued playing of the visual content of the first aggregated content item with a third visual filter different from the first and second visual filters applied to the visual content. In some embodiments, the method further comprises: while displaying the visual filter selection user interface including the first user interface object and the second user interface object, detecting, via the one or more input devices, a user input corresponding to selection of the first user interface object; and in response to detecting the user input: ceasing display of the visual filter selection user interface (e.g., ceasing display of the second user interface object); and displaying continued playing of the visual content of the first aggregated content item with the first visual filter applied to the visual content. In some embodiments, selection of the first user interface object and/or selection of the second user interface object maintains continued playing of the audio content that is separate from the content items (e.g., selection of a user interface object in the visual filter selection user interface does not affect audio content that is playing). In some embodiments, selection of the first user interface object causes second audio content different from the audio content to play (e.g., selection of a user interface object in the visual filter selection user interface changes audio content that is playing and/or applied to the first aggregated content item).
In some embodiments, while playing the visual content of the first aggregated content item and the audio content that is separate from the content items, the computer system displays, via the display generation component, a second selectable object (e.g., 644C) that is selectable to display a plurality of audio track options (e.g., corresponding to a plurality of audio tracks) (e.g., each audio track option corresponds to a respective audio track). In some embodiments, while displaying the second selectable object, the computer system detects, via the one or more input devices, a second selection input (e.g., 652) corresponding to selection of the second selectable object (e.g., a tap input) (e.g., a non-tap input). In response to detecting the second selection input, the computer system displays an audio track selection user interface (e.g., 654) (in some embodiments, while continuing playing the visual content of the first aggregated content item) (in some embodiments, in response to detecting the second selection input, pausing playing of the visual content of the first aggregated content item). The audio track selection user interface comprises: a third user interface object (e.g., 656A) corresponding to a first audio track, wherein the third user interface object is selectable to initiate a process for applying the first audio track to the first aggregated content item (e.g., playing the first audio track while playing the visual content of the first aggregated content item); and a fourth user interface object (e.g., 656B) corresponding to a second audio track different from the first audio track, wherein the fourth user interface object is selectable to initiate a process for applying the second audio track to the first aggregated content item (e.g., playing the second audio track while playing the visual content of the first aggregated content item). Concurrently displaying the third user interface object corresponding to the first audio track and the fourth user interface object corresponding to the second audio track enables a user to quickly select a desired audio track, thereby reducing the number of inputs needed for selecting an audio track.
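The audio track selection user interface described above can be pictured, under illustrative assumptions, as a list of selectable rows, each corresponding to one track; the AudioTrackOption and AudioTrackPicker names below are hypothetical stand-ins, not the disclosed implementation.

```swift
import Foundation

// Hypothetical model of the audio track selection user interface: each row
// corresponds to one audio track (title plus album art) that can be applied
// to the first aggregated content item.
struct AudioTrackOption {
    let title: String        // track title shown in the row
    let albumArt: String     // album art asset shown alongside the title
}

struct AudioTrackPicker {
    var options: [AudioTrackOption]
    var appliedIndex: Int? = nil

    // Selecting a row initiates applying that track to the aggregated content item.
    mutating func select(_ index: Int) {
        guard options.indices.contains(index) else { return }
        appliedIndex = index
    }
}

var trackPicker = AudioTrackPicker(options: [
    AudioTrackOption(title: "Track 1", albumArt: "art-1"),
    AudioTrackOption(title: "Track 2", albumArt: "art-2"),
])
trackPicker.select(1)
print(trackPicker.appliedIndex ?? -1)   // 1
```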
In some embodiments, the audio track selection user interface further comprises a fifth user interface object corresponding to a third audio track different from the first and second audio tracks, wherein the fifth user interface object is selectable to initiate a process for applying the third audio track to the first aggregated content item (e.g., playing the third audio track while playing the visual content of the first aggregated content item). In some embodiments, the second selection input is detected while the visual content of the first aggregated content item is displayed with a first visual filter applied, and selection of the third user interface object and/or selection of the fourth user interface object maintains application of the first visual filter to the visual content of the first aggregated content item (e.g., selection of a user interface object in the audio track selection user interface does not affect a visual filter that is applied to the visual content). In some embodiments, selection of the third user interface object causes a second visual filter different from the first visual filter to be applied to the visual content (e.g., selection of a user interface object in the audio track selection user interface changes a visual filter that is applied to the visual content of the first aggregated content item). In some embodiments, the first audio track and the second audio track are selected for inclusion in the audio track selection user interface based on visual content of the first aggregated content item (e.g., song suggestions are generated and/or provided based on visual content included in the first aggregated content item) (e.g., climbing related songs for a first aggregated content item about a climbing trip, or surfing related songs about a first aggregated content item about a surfing trip).
In some embodiments, the third user interface object (e.g., 656A) includes display of a track title (e.g., a song title) corresponding to the first audio track; and the fourth user interface object (e.g., 656B) includes display of a track title (e.g., a song title) corresponding to the second audio track. In some embodiments, the third user interface object further displays album art corresponding to the first audio track; and the fourth user interface object further displays album art corresponding to the second audio track. Displaying the third user interface object including the track title corresponding to the first audio track and the fourth user interface object including the track title corresponding to the second audio track enables a user to quickly select a desired audio track, thereby reducing the number of inputs needed for selecting an audio track.
In some embodiments, while displaying the audio track selection user interface (e.g., 654), including the third user interface object (e.g., 656A-656N) and the fourth user interface object (e.g., 656A-656N), the computer system detects, via the one or more input devices, a third selection input (e.g., 660) (e.g., a tap input and/or a non-tap input). In response to detecting the third selection input: in accordance with a determination that the third selection input corresponds to selection of the third user interface object, the computer system plays the first audio track from the beginning of the first audio track; and in accordance with a determination that the third selection input corresponds to selection of the fourth user interface object, the computer system plays the second audio track from the beginning of the second audio track. Playing the first audio track from the beginning of the first audio track or playing the second audio track from the beginning of the second audio track in response to the third selection input enables a user to quickly listen to and select a desired audio track, thereby reducing the number of inputs needed for selecting an audio track.
In some embodiments, the computer system plays the first audio track from the beginning of the first audio track and/or plays the second audio track from the beginning of the second audio track while playing the visual content of the first aggregated content item. In some embodiments, modifying audio content in response to the user input includes changing the audio content from a first audio track to a second audio track, and the second audio track is started from a playback position that is not a beginning position of the second audio track (e.g., a certain set of user inputs causes switching of the audio track mid-track (e.g., a user input corresponding to changing from a first predefined combination of a first visual filter and a first audio track to a second predefined combination of a second visual filter and a second audio track causes switching of the audio track mid-track) (e.g., causes the second audio track to start playing from a playback position that is not a beginning of the second audio track (e.g., greater than a threshold duration of time into the second audio track))). In contrast, selection of an audio track from the audio track selection user interface causes the selected audio track to play from the beginning of the audio track.
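The distinction drawn above, between starting a track from its beginning when it is chosen in the track selection user interface and picking it up mid-track when it changes as part of a predefined filter-and-track combination, can be sketched as follows; the enum and function names are illustrative only.

```swift
import Foundation

// Sketch of the two ways a new audio track can begin, per the description above.
enum TrackChangeSource {
    case trackPickerSelection                    // chosen from the selection UI
    case combinationSwap(elapsed: TimeInterval)  // filter/track combination change
}

func startPosition(for source: TrackChangeSource) -> TimeInterval {
    switch source {
    case .trackPickerSelection:
        return 0                                 // always from the beginning
    case .combinationSwap(let elapsed):
        return elapsed                           // continue mid-track
    }
}

print(startPosition(for: .trackPickerSelection))            // 0.0
print(startPosition(for: .combinationSwap(elapsed: 37.5)))  // 37.5
```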
In some embodiments, while displaying the audio track selection user interface (e.g., 654), including the third user interface object (e.g., 656A-656N) and the fourth user interface object (e.g., 656A-656N), the computer system detects, via the one or more input devices, a fourth selection input (e.g., 660) corresponding to selection of the third user interface object (e.g., 656D). In response to detecting the fourth selection input, in accordance with a determination that a user of the computer system (e.g., a user account logged into the computer system) is not subscribed to an audio service (e.g., an audio service that provides access to the first audio track and/or a predefined audio service), the computer system initiates a process to display a prompt for the user to subscribe to the audio service (e.g.,
In some embodiments, while displaying the audio track selection user interface (e.g., 654), including the third user interface object (e.g., 656A-656N) and the fourth user interface object (e.g., 656A-656N), the computer system detects, via the one or more input devices, a fifth selection input (e.g., 660) (e.g., a tap input and/or a non-tap input) corresponding to selection of the third user interface object (e.g., 656D). In response to detecting the fifth selection input, and in accordance with a determination that a user of the computer system (e.g., a user account logged into the computer system) is not subscribed to an audio service (e.g., an audio service that provides access to the first audio track), the computer system initiates a process to display a preview user interface (e.g., 662), wherein displaying the preview user interface includes playing a preview of the first aggregated content item in which the first audio track (e.g., track 3,
In some embodiments, while playing the visual content of the first aggregated content item and the audio content that is separate from the content items, the computer system displays a fifth user interface object (e.g., 632D) that is selectable to cause the computer system to enter an editing mode. In some embodiments, entering the editing mode includes displaying an editing user interface.
In some embodiments, subsequent to displaying the fifth user interface object (e.g., while displaying the fifth user interface object and/or after displaying and no longer displaying the fifth user interface object), the computer system detects, via the one or more input devices, a second user input (e.g., 648 (e.g., a swipe gesture)) (e.g., a gesture (e.g., via a touch-sensitive display and/or a touch-sensitive surface) (e.g., a tap gesture, a swipe gesture) and/or a voice input). In response to detecting the second user input, and in accordance with a determination that the computer system is in the editing mode (e.g.,
In some embodiments, while playing the visual content of the first aggregated content item (e.g.,
While displaying the sixth selectable user interface object, the computer system detects, via the one or more input devices, a sixth selection input (e.g., a tap input and/or a non-tap input) corresponding to selection of the sixth user interface object (e.g., 680). In response to detecting the sixth selection input, the computer system pauses playing of the visual content of the first aggregated content item (e.g., displaying the visual content of the first aggregated content item in a paused state). In some embodiments, the computer system also pauses playing of the audio content separate from the content items. In response to detecting the sixth selection input, the computer system replaces display of the fifth user interface object (e.g., 632D) (e.g., a “recipes” option) with a seventh user interface object (e.g., 632G) (e.g., an aspect ratio toggle option) that is selectable to modify an aspect ratio of the visual content of the first aggregated content item. While displaying the seventh user interface object (e.g., and while the visual content of the first aggregated content item is paused and/or while displaying visual content of the first aggregated content item in the paused state), the computer system detects, via the one or more input devices, a seventh selection input (e.g., 683) (e.g., a tap input and/or a non-tap input) corresponding to selection of the seventh user interface object. In response to detecting the seventh selection input, the computer system displays, via the display generation component, the visual content of the first aggregated content item (e.g., 628C,
In some embodiments, while playing of the visual content of the first aggregated content item is paused, while displaying the seventh user interface object, and while displaying the visual content of the first aggregated content item in the second aspect ratio, the computer system displays, via the display generation component, an eighth user interface object that is selectable to resume playing of the visual content of the first aggregated content item; while displaying the eighth user interface object, the computer system detects, via the one or more input devices, an eighth selection input (e.g., a tap input and/or a non-tap input) corresponding to selection of the eighth user interface object; and in response to detecting the eighth selection input: the computer system displays, via the display generation component, the visual content of the first aggregated content item transitioning from being displayed at the second aspect ratio to being displayed at the first aspect ratio, and resumes playing of the visual content of the first aggregated content item (e.g., in the first aspect ratio) (in some embodiments, also resuming playing of the audio content that is separate from the content items).
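A minimal sketch of the pause, aspect-ratio toggle, and resume behavior described above follows; the state names (AggregatedItemPlayer, first/second aspect ratio) are hypothetical, and the rule that the toggle is only available while paused reflects the preceding paragraphs.

```swift
import Foundation

// Hypothetical pause / aspect-ratio / resume state machine: the aspect-ratio
// toggle is offered while playback is paused, and resuming restores the
// first aspect ratio.
enum AspectRatio { case first, second }

struct AggregatedItemPlayer {
    private(set) var isPlaying = true
    private(set) var aspectRatio: AspectRatio = .first

    mutating func pause() {                    // sixth selection input
        isPlaying = false
    }

    mutating func toggleAspectRatio() {        // seventh selection input
        guard !isPlaying else { return }       // only available while paused
        aspectRatio = (aspectRatio == .first) ? .second : .first
    }

    mutating func resume() {                   // eighth selection input
        aspectRatio = .first                   // transition back to the first aspect ratio
        isPlaying = true                       // resume visual (and audio) playback
    }
}

var player = AggregatedItemPlayer()
player.pause()
player.toggleAspectRatio()
player.resume()
print(player.isPlaying, player.aspectRatio)    // true first
```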
In some embodiments, while playing the visual content of the first aggregated content item, the computer system detects, via the one or more input devices, a pause input (e.g., 680) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to a request to pause playing of the visual content of the first aggregated content item (e.g., a tap input selecting a pause option). In response to detecting the pause input, the computer system pauses playing of the visual content of the first aggregated content item (e.g.,
In some embodiments, displaying the visual navigation user interface element (e.g., 682) includes concurrently displaying: a representation of a first content item of the first plurality of content items, and a representation of a second content item (e.g., different from the first content item) of the first plurality of content items (e.g.,
In some embodiments, in response to detecting the pause input (e.g., 1226), the computer system displays, via the display generation component, and concurrently with the visual navigation user interface element (e.g., 1228) (in some embodiments, while playing of the visual content of the first aggregated content item is paused), a duration control option (e.g., 1232A). While displaying the duration control option, the computer system detects, via the one or more input devices, a duration control input (e.g., 1242) (e.g., one or more remote control inputs and/or one or more non-remote control inputs) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to a selection of the duration control option. In response to detecting the duration control input, the computer system concurrently displays, via the display generation component: a first playback duration option (e.g., 1233A-1244E) corresponding to a first playback duration (e.g., a short playback duration option); and a second playback duration option (e.g., 1244A-1244E) corresponding to a second playback duration different from the first playback duration (e.g., a long playback duration option). In some embodiments, selection of the first playback duration option and/or the second playback duration option causes the first aggregated content item to be modified based on the selected playback duration option (e.g., increases and/or decreases the number of content items included in the first aggregated content item based on the selected playback duration option). Concurrently displaying the first playback duration option and the second playback duration option enables a user to quickly set the playback duration for the first aggregated content item, thereby reducing the number of inputs needed for setting a playback duration.
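The effect of selecting a playback duration option, namely changing how many content items the aggregated content item includes, can be sketched as below; the option labels, counts, and the trim-or-extend policy are illustrative assumptions rather than the disclosed algorithm.

```swift
import Foundation

// Hypothetical playback duration options and a simple policy for applying one.
struct PlaybackDurationOption {
    let label: String
    let itemCount: Int
}

let durationOptions = [
    PlaybackDurationOption(label: "Short", itemCount: 10),
    PlaybackDurationOption(label: "Long", itemCount: 30),
]

// Trim or extend the item list to match the selected option, drawing any
// additional items from the rest of the media library.
func applyDurationOption(_ option: PlaybackDurationOption,
                         to items: [String],
                         library: [String]) -> [String] {
    if items.count >= option.itemCount {
        return Array(items.prefix(option.itemCount))
    }
    let extras = library.filter { !items.contains($0) }
    return items + extras.prefix(option.itemCount - items.count)
}

let shortened = applyDurationOption(durationOptions[0],
                                    to: (1...30).map { "item\($0)" },
                                    library: (1...50).map { "item\($0)" })
print(shortened.count)   // 10
```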
In some embodiments, in response to detecting the pause input (e.g., 1226), the computer system displays, via the display generation component, and concurrently with the visual navigation user interface element (e.g., 1228) (in some embodiments, while playing of the visual content of the first aggregated content item is paused), an audio track control option (e.g., 1232B). While displaying the audio track control option, the computer system detects, via the one or more input devices, an audio track control input (e.g., 1248) (e.g., one or more remote control inputs and/or one or more non-remote control inputs) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to a selection of the audio track control option. In response to detecting the audio track control input, the computer system concurrently displays, via the display generation component, a first audio track option (e.g., 1250A-1250E) corresponding to a first audio track; and a second audio track option (e.g., 1250A-1250E) corresponding to a second audio track different from the first audio track. In some embodiments, selection of the first audio track option causes the first audio track to be applied to the first aggregated content item (e.g., causes the first audio track to play while visual content of the first aggregated content item is played), and selection of the second audio track option causes the second audio track to be applied to the first aggregated content item (e.g., causes the second audio track to play while visual content of the first aggregated content item is played). Concurrently displaying the first audio track option and the second audio track option enables a user to quickly set the audio track applied to the first aggregated content item, thereby reducing the number of inputs needed for setting the audio track.
In some embodiments, playing the visual content of the first aggregated content item includes: displaying, via the display generation component, at a first time, a first content item (e.g., 628A,
In some embodiments, while playing the visual content of the first aggregated content item (e.g., 628C,
In some embodiments, the first gesture is a long press gesture (e.g., 686) (e.g., sustained contact with a touchscreen display, sustained contact with a touchpad, and/or sustained click of a mouse); and modifying playing of the visual content of the first aggregated content item in the first manner includes maintaining display of a currently displayed content item during (e.g., for some or all of the duration of) the long press gesture (e.g.,
In some embodiments, while maintaining display of the currently displayed content item during the long press gesture (e.g., 686), the computer system detects, via the one or more input devices, termination of the long press gesture. After detecting termination of the long press gesture (e.g., in response to detecting termination of the long press gesture), the computer system modifies a playback duration for one or more subsequent content items (e.g., all subsequent content items) to be displayed subsequent to the currently displayed content item (e.g., decreasing a playback duration for the one or more subsequent content items (e.g., decreasing the amount of time that each content item of the one or more subsequent content items will be displayed)). In some embodiments, prior to detecting the long press gesture, a first subsequent content item configured to be displayed subsequent to the currently displayed content item is configured to be displayed for a first duration of time during playback of the visual content; and, after detecting the long press gesture, the first subsequent content item is configured to be displayed for a second duration of time different from the first duration of time (e.g., a second duration of time shorter than the first duration of time). Automatically adjusting playback durations for one or more subsequent content items in response to termination of a long press gesture that caused extended display of a content item allows a user to adjust playback of the visual content to account for the extended playback duration of the content item without further user inputs.
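One way to picture the compensation described above, in which the display durations of subsequent content items are shortened after a long press has extended the current item, is the sketch below; the even redistribution and the 0.5-second floor are illustrative assumptions, not the disclosed policy.

```swift
import Foundation

// Reduce the display durations of the remaining content items to absorb the
// extra time spent holding the current item during the long press gesture.
func adjustedDurations(forSubsequentItems durations: [TimeInterval],
                       extraTimeSpentOnHeldItem extra: TimeInterval) -> [TimeInterval] {
    guard !durations.isEmpty, extra > 0 else { return durations }
    let reductionPerItem = extra / Double(durations.count)
    return durations.map { max(0.5, $0 - reductionPerItem) }
}

// A 3-second hold spread across four remaining 2-second items leaves 1.25 s each.
print(adjustedDurations(forSubsequentItems: [2, 2, 2, 2], extraTimeSpentOnHeldItem: 3))
// [1.25, 1.25, 1.25, 1.25]
```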
In some embodiments, the first gesture is a first tap gesture (e.g., 688, 690) (e.g., a tap gesture in a first region of a touch-screen display). Modifying playing of the visual content of the first aggregated content item in the first manner includes navigating to a previous content item in the ordered sequence of content items in the first aggregated content item (e.g.,
In some embodiments, the first gesture is a first swipe gesture (e.g., a swipe gesture in a first direction); and modifying playing of the visual content of the first aggregated content item in the first manner includes navigating to a previous content item in the ordered sequence of content items in the first aggregated content item (e.g.,
In some embodiments, modifying playing of the visual content of the first aggregated content item in the first manner comprises modifying playing of the visual content of the first aggregated content item in the first manner while continuing to play the audio content that is separate from the content items (e.g.,
In some embodiments, while displaying, via the display generation component, a first content item of the first aggregated content item (e.g., during playing of the visual content of the first aggregated content item), the computer system detects, via the one or more input devices, a third user input (e.g., a long press input, a tap input, a swipe input, and/or a different input); and in response to detecting the third user input (e.g., 614), the computer system concurrently displays, via the display generation component: a tagging option (e.g., 616D) that is selectable to initiate a process for identifying a person depicted in the first content item (e.g., tagging a person depicted in the first content item); and a removal option (e.g., 616E, 616F) that is selectable to initiate a process for removing one or more content items from the first aggregated content item that depict a person that is also depicted in the first content item. Displaying a tagging option that is selectable to initiate a process for identifying a person depicted in the first content item enables a user to quickly identify people depicted in the first content item, thereby reducing the number of inputs needed to tag and/or identify depicted people. Displaying a removal option that is selectable to initiate a process for removing one or more content items from the first aggregated content item that depict a person that is also depicted in the first content item enables a user to quickly and easily remove content items that depict particular people, thereby reducing the number of inputs needed to remove such content items.
In some embodiments, the removal option is a “feature this person less” option that reduces the number of instances (e.g., number of content items) in the first aggregated content item in which the person is depicted. In some embodiments, the removal option reduces the number of instances (e.g., the number of content items) in the first aggregated content item in which only the person is depicted (and no other people are depicted). In some embodiments, the removal option is a “never feature this person” option in which all instances (e.g., all content items) in which the person is depicted are removed from the first aggregated content item.
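The two removal behaviors described above ("feature this person less" versus "never feature this person") can be sketched as simple filters over the content items; the people depicted in each item are assumed to be already identified, and the type names below are illustrative.

```swift
import Foundation

// Hypothetical content item annotated with the people it depicts.
struct DepictedContentItem {
    let id: Int
    let people: Set<String>
}

enum RemovalOption {
    case featureLess(person: String)   // remove items in which only this person appears
    case neverFeature(person: String)  // remove every item in which this person appears
}

func apply(_ option: RemovalOption, to items: [DepictedContentItem]) -> [DepictedContentItem] {
    switch option {
    case .featureLess(let person):
        return items.filter { !($0.people.count == 1 && $0.people.contains(person)) }
    case .neverFeature(let person):
        return items.filter { !$0.people.contains(person) }
    }
}

let depictedItems = [
    DepictedContentItem(id: 1, people: ["Anna"]),
    DepictedContentItem(id: 2, people: ["Anna", "Ben"]),
    DepictedContentItem(id: 3, people: ["Ben"]),
]
print(apply(.featureLess(person: "Anna"), to: depictedItems).map(\.id))   // [2, 3]
print(apply(.neverFeature(person: "Anna"), to: depictedItems).map(\.id))  // [3]
```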
In some embodiments, in response to detecting the third user input, the computer system displays the tagging option (e.g., without displaying the removal option). In some embodiments, in response to detecting the third user input, the computer system displays the removal option (e.g., without displaying the tagging option). In some embodiments, the tagging option and/or the removal option are accessible by interacting with a content item in a media library user interface and/or by interacting with a content item in a featured photos user interface.
Note that details of the processes described above with respect to method 700 (e.g.,
In
Next content item user interface 800 includes countdown timer 802A that indicates for a user that, without further user input, a next aggregated content item (e.g., “PALM SPRINGS 2017”) will begin playing at the end of the countdown timer 802A. Next content item user interface 800 also includes replay option 802B, that is selectable to replay the first aggregated content item, and share option 802C, that is selectable to initiate a process for sharing the first aggregated content item via one or more communications mediums.
In
In
In
In
In
As described below, method 900 provides an intuitive way for navigating and viewing content items. The method reduces the cognitive burden on a user for navigating and viewing content items, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to navigate and view content items faster and more efficiently conserves power and increases the time between battery charges.
The computer system plays (902), via the display generation component, visual content of a first aggregated content item (e.g., media item 826Z of the first aggregated content item in
While playing the visual content of the first aggregated content item (904), the computer system plays (906) audio content (e.g.,
After playing at least a portion of the visual content of the first aggregated content item (908), the computer system detects (910) that playback of the visual content of the first aggregated content item meets one or more termination criteria (e.g., detecting that playback of the first aggregated content item has completed, detecting that playback of the first aggregated content item has surpassed a threshold playback time, and/or detecting that less than a threshold duration of time remains in the first aggregated content item).
Subsequent to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (912) (e.g., in response to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria): in accordance with a determination that a playback condition of a first set of one or more playback conditions is met (914) (e.g., in accordance with a determination that a threshold duration of time has elapsed since detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria, in accordance with a determination that a threshold duration of time has elapsed since playback of the visual content of the first aggregated content item has completed, and/or in accordance with a determination that a user input has been received corresponding to a request to begin playing visual content of a second aggregated content item), the computer system plays (916) visual content of a second aggregated content item different from the first aggregated content item (e.g.,
In some embodiments, subsequent to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (e.g., has finished): in accordance with the determination that the playback condition of the first set of one or more playback conditions is met, and/or while playing the visual content of the second aggregated content item, the computer system plays second audio content (e.g., different from the audio content) (e.g., automatically and/or without user input) (e.g., and ceasing playback of the audio content that was being played during visual playback of the visual content of the first aggregated content item) (e.g., outputting and/or causing output (e.g., via one or more speakers, one or more headphones, and/or one or more earphones) of an audio track while the visual content of the second aggregated content item is being displayed via the display generation component) (e.g., audio content that corresponds to and/or is part of the second aggregated content item (e.g., audio from one or more videos incorporated into the aggregated content item) and/or audio content that is separate from the second aggregated content item (e.g., an audio track that is overlaid on the second aggregated content item and/or played while visual content of the second aggregated content item is played and/or displayed)).
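A minimal sketch of the termination-and-auto-advance behavior described in the preceding paragraphs is given below: playback of the next aggregated content item begins only after the current item meets a termination criterion and a playback condition (here, a countdown elapsing with no canceling input) is met. The thresholds and function names are illustrative assumptions.

```swift
import Foundation

// Hypothetical playback status used to evaluate the termination criteria.
struct PlaybackStatus {
    var elapsed: TimeInterval
    var total: TimeInterval
    var finished: Bool
}

func meetsTerminationCriteria(_ status: PlaybackStatus,
                              remainingThreshold: TimeInterval = 2) -> Bool {
    status.finished || (status.total - status.elapsed) < remainingThreshold
}

func shouldAutoPlayNextItem(terminated: Bool,
                            secondsSinceTermination: TimeInterval,
                            userCanceledAutoplay: Bool,
                            countdown: TimeInterval = 5) -> Bool {
    terminated && !userCanceledAutoplay && secondsSinceTermination >= countdown
}

let status = PlaybackStatus(elapsed: 58.5, total: 60, finished: false)
print(meetsTerminationCriteria(status))                                  // true
print(shouldAutoPlayNextItem(terminated: true, secondsSinceTermination: 5,
                             userCanceledAutoplay: false))               // true
```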
In some embodiments, the computer system detects, via the one or more input devices, an image capture input (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to a request to capture image data using a camera; and in response to detecting the image capture input, the computer system adds a new content item (e.g., a new photo and/or a new video) (e.g., a new photo and/or a new video that is captured using a camera in response to detecting the image capture input) to the media library (e.g., media library user interface 604). Automatically adding a new content item to the media library in response to detecting an image capture input allows a user to save captured images without requiring additional input.
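The capture behavior described above can be pictured, under illustrative assumptions, as a capture input producing a new content item that is appended to the media library; the types and the capturePhoto() helper are hypothetical stand-ins.

```swift
import Foundation

// Hypothetical media item and library used only for this sketch.
struct CapturedMediaItem {
    let id: UUID
}

struct CapturedMediaLibrary {
    private(set) var items: [CapturedMediaItem] = []
    mutating func add(_ item: CapturedMediaItem) { items.append(item) }
}

func capturePhoto() -> CapturedMediaItem {
    CapturedMediaItem(id: UUID())
}

var mediaLibrary = CapturedMediaLibrary()
mediaLibrary.add(capturePhoto())    // in response to the image capture input
print(mediaLibrary.items.count)     // 1
```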
In some embodiments, prior to playing the visual content of the second aggregated content item (e.g.,
In some embodiments, while displaying the timer (e.g., 802A), the computer system detects, via the one or more input devices, a first input (e.g., 808A, 808B, 808C, 808D, 808E) (e.g., a tap input and/or a non-tap input); and in response to detecting the first input, the computer system cancels automatic playback of the second aggregated content item (e.g.,
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (in some embodiments, prior to playing visual content of the second aggregated content item), the computer system displays, via the display generation component, a first user interface object (e.g., 804A) corresponding to (e.g., corresponding uniquely to) the second aggregated content item (in some embodiments, while continuing playing the audio content). While displaying the first user interface object, the computer system detects, via the one or more input devices, a second input (e.g., 808A) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the first user interface object. In response to detecting the second input, the computer system plays visual content of the second aggregated content item (e.g., FIG. 8F) (e.g., without waiting for the first set of one or more playback conditions to be met and/or without waiting for a displayed countdown timer to expire). Displaying a first user interface object that is selectable to play visual content of the second aggregated content item enables a user to quickly select a next aggregated content item to be played, thereby reducing the number of inputs needed for selecting a next aggregated content item.
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria, the computer system displays, via the display generation component, a first user interface object (e.g., 804A) corresponding to (e.g., corresponding uniquely to) the second aggregated content item. In some embodiments, the computer system displays the first user interface object while continuing playing the audio content. While displaying the first user interface object, the computer system detects, via the one or more input devices, a third input (e.g., 808B, 808C, 808D, 808E) (e.g., one or more tap inputs and/or one or more non-tap inputs) that does not correspond to selection of the first user interface object (e.g., at a location on a displayed user interface that does not correspond to the first user interface object) (e.g., that does not correspond to selection of any user interface object). In response to detecting the third input, the computer system cancels automatic playback of (e.g., forgoing automatically playing) visual content of the second aggregated content item (e.g.,
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (e.g., in response to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria), the computer system displays, via the display generation component, a replay user interface object (e.g., 802B). In some embodiments, the computer system displays the replay user interface object while continuing playing the audio content. In some embodiments, the computer system displays, concurrently with the replay user interface object, a first user interface object corresponding to the second aggregated content item (and selectable to begin playing visual content of the second aggregated content item). While displaying the replay user interface object, the computer system detects, via the one or more input devices, a fourth input (e.g., 808B) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the replay user interface object. In response to detecting the fourth input, the computer system plays visual content of the first aggregated content item from the beginning of the first aggregated content item (e.g.,
In some embodiments, the second aggregated content item (e.g., Palm Springs 2017 in
In some embodiments, prior to playing visual content of the second aggregated content item (e.g., immediately prior to playing visual content of the second aggregated content item), the computer system gradually ceases (e.g., fading) playing the audio content (e.g., track 3 in
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (e.g., in response to detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria) (in some embodiments, prior to playing visual content of the second aggregated content item), the computer system displays, via the display generation component, a first user interface object (e.g., 804A) corresponding to (e.g., corresponding uniquely to) the second aggregated content item. In some embodiments, the computer system displays the first user interface object while continuing playing the audio content. While displaying the first user interface object, the computer system detects, via the one or more input devices, a fifth input (e.g., 808D) (e.g., one or more swipe inputs and/or one or more non-swipe inputs). In response to detecting the fifth input, the computer system displays, via the display generation component, a second user interface object (e.g., 804D, 804E) corresponding to (e.g., corresponding uniquely to) a third aggregated content item different from the first aggregated content item and the second aggregated content item, wherein the third aggregated content item comprises an ordered sequence of a third plurality of content items different from the first plurality of content items and the second plurality of content items, and further wherein the third plurality of content items is selected from the media library that includes photos and/or videos taken by a user of the device, wherein the third plurality of content items is selected based on a third set of selection criteria (e.g., different from the first set of selection criteria and/or the second set of selection criteria). In some embodiments, the third aggregated content item depicts an ordered sequence of a plurality of photos and/or videos and/or an automatically generated collection of photos and/or videos (e.g., a collection of photos and/or videos that are automatically aggregated and/or selected from the set of content items based on one or more shared characteristics). In some embodiments, the plurality of photos and/or videos that make up the third plurality of content items are selected from a set of photos and/or videos that are associated with the computer system (e.g., stored on the computer system, associated with a user of the computer system, and/or associated with a user account associated with (e.g., signed into) the computer system) (e.g., selected from the same set of photos and/or videos from which the first plurality of content items of the first aggregated content item were selected). In some embodiments, while displaying the second user interface object, the computer system detects, via the one or more input devices, a user input corresponding to selection of the second user interface object, and in response to detecting the user input corresponding to selection of the second user interface object, the computer system plays visual content of the third aggregated content item. Displaying a second user interface object corresponding to a third aggregated content item in response to detecting the fifth input enables a user to quickly select a next content item to be played, thereby reducing the number of inputs needed for selecting a next content item.
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (and, optionally, prior to playing visual content of the second aggregated content item), the computer system concurrently displays, via the display generation component (and, optionally, while continuing playing the audio content): a first user interface object (e.g., 804A) corresponding to (e.g., corresponding uniquely to) the second aggregated content item; and a second user interface object (e.g., 804B-804E) corresponding to (e.g., corresponding uniquely to) a third aggregated content item different from the first aggregated content item and the second aggregated content item, wherein the third aggregated content item comprises an ordered sequence of a third plurality of content items different from the first plurality of content items and the second plurality of content items, and further wherein the third plurality of content items is selected from the media library that includes photos and/or videos taken by a user of the device, wherein the third plurality of content items is selected based on a third set of selection criteria (e.g., different from the first set of selection criteria and/or the second set of selection criteria). Displaying a first user interface object corresponding to a second aggregated content item and a second user interface object corresponding to a third aggregated content item enables a user to quickly select a next content item to be played, thereby reducing the number of inputs needed for selecting a next content item.
In some embodiments, while concurrently displaying the first user interface object (e.g., 804A) and the second user interface object (e.g., 804B-804E), the computer system continues playing the audio content (e.g.,
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (in some embodiments, prior to playing visual content of the second aggregated content item), the computer system displays, via the display generation component, at a first time, a first user interface object (e.g., 804A) corresponding to (e.g., corresponding uniquely to) the second aggregated content item (in some embodiments, while continuing playing the audio content), wherein displaying the first user interface object includes concurrently displaying: a first content item of the second plurality of content items in the second aggregated content item (e.g., image of user in water in
In some embodiments, at the second time, the computer system displays, via the display generation component, the title information (e.g., 627) in a first display region (e.g.,
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria, the computer system concurrently displays, via the display generation component (and, optionally, prior to playing visual content of the second aggregated content item and/or while continuing playing the audio content): a first user interface object (e.g., 804A) corresponding to (e.g., corresponding uniquely to) the second aggregated content item; and a share user interface object (e.g., 802C) that is selectable to initiate a process for sharing the first aggregated content item (e.g., sharing the first aggregated content via one or more communications mediums (e.g., text message, electronic mail, near field wireless communication and/or file transfer, uploading to a shared media album, and/or uploading to a third party platform)). In some embodiments, while concurrently displaying the first user interface object and the share user interface object, the computer system detects, via the one or more input devices, an input corresponding to selection of the share user interface object; and in response to detecting the input, the computer system displays, via the display generation component, a share user interface, wherein displaying the share user interface includes concurrently displaying: a first share object corresponding to a first communication medium and a second share object corresponding to a second communication medium. Displaying the share user interface object that is selectable to initiate a process for sharing the first aggregated content item enables a user to quickly share the first aggregated content item, thereby reducing the number of inputs needed for sharing the first aggregated content item.
In some embodiments, while concurrently displaying the first user interface object (e.g., 804A) and the share user interface object (e.g., 802C), the computer system detects, via the one or more input devices, a sixth input (e.g., 808E) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the share user interface object. In response to detecting the sixth input, in accordance with a determination that audio content applied to the first aggregated content item is not permitted to be shared by a user of the computer system (e.g., a user account logged into the computer system) (e.g., the user of the computer system is not authorized to share the audio content applied to the first aggregated content item), the computer system displays, via the display generation component, an indication that the audio content applied to the first aggregated content item is not permitted to be shared by the user (e.g., 818,
In some embodiments, while concurrently displaying the first user interface object (e.g., 804A) and the share user interface object (e.g., 802C), the computer system detects, via the one or more input devices, a seventh input (e.g., 808E) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the share user interface object. In response to detecting the seventh input, in accordance with a determination that audio content applied to the first aggregated content item is not permitted to be shared by a user of the computer system (e.g., a user account logged into the computer system) (e.g., the user of the computer system is not authorized to share the audio content applied to the first aggregated content item), the computer system displays, via the display generation component, a playback duration option (e.g., 820A, 802B) that is selectable to initiate a process for shortening a playback duration of the first aggregated content item (e.g., shorten the playback duration of the first aggregated content item to less than a threshold playback duration) (e.g., decrease the number of content items included in the first aggregated content item (e.g., to less than a threshold number of content items)). In some embodiments, while displaying the playback duration option, the computer system detects, via the one or more input devices, an input corresponding to selection of the playback duration option; and, in response to detecting the input, the computer system modifies the first aggregated content item to decrease the playback duration of the first aggregated content item (e.g., decrease the number of content items included in the first aggregated content item). In some embodiments, in response to detecting the seventh input, and in accordance with a determination that audio content applied to the first aggregated content item is permitted to be shared by the user of the computer system, the computer system displays a sharing user interface comprising one or more selectable objects that are selectable to initiate a process and/or further a process for sharing the first aggregated content item via one or more communication mediums (e.g., a first selectable object that is selectable to initiate a process for sharing the first aggregated content item via a first communication medium, and a second selectable object that is selectable to initiate a process for sharing the first aggregated content item via a second communication medium). Displaying a playback duration option in accordance with a determination that audio content applied to the first aggregated content item is not permitted to be shared by the user of the computer system provides the user with feedback about the current state of the device (e.g., that the device has determined that the audio content applied to the first aggregated content item is not permitted to be shared by the user).
In some embodiments, while concurrently displaying the first user interface object (e.g., 804A) and the share user interface object (e.g., 802C), the computer system detects, via the one or more input devices, an eighth input (e.g., 808E) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the share user interface object. In response to detecting the eighth input, in accordance with a determination that audio content applied to the first aggregated content item is not permitted to be shared by a user of the computer system (e.g., a user account logged into the computer system) (e.g., the user of the computer system is not authorized to share the audio content applied to the first aggregated content item), the computer system displays, via the display generation component, an audio content option (e.g., 820B) that is selectable to initiate a process for selecting different audio content to be applied to the first aggregated content item. In some embodiments, in response to detecting the eighth input, and in accordance with a determination that audio content applied to the first aggregated content item is permitted to be shared by the user of the computer system, the computer system displays a sharing user interface comprising one or more selectable objects that are selectable to initiate a process and/or further a process for sharing the first aggregated content item via one or more communication mediums (e.g., a first selectable object that is selectable to initiate a process for sharing the first aggregated content item via a first communication medium, and a second selectable object that is selectable to initiate a process for sharing the first aggregated content item via a second communication medium). In some embodiments, while displaying the audio content option, the computer system detects, via the one or more input devices, an input corresponding to selection of the audio content option; and, in response to detecting the input, the computer system concurrently displays, via the display generation component, a first audio content option corresponding to first audio content and a second audio content option corresponding to second audio content (e.g., different from the first audio content). In some embodiments, while concurrently displaying the first audio content option and the second audio content option, the computer system detects, via the one or more input devices, a selection input; and in response to detecting the selection input: in accordance with a determination that the selection input corresponds to selection of the first audio content option, the computer system applies the first audio content to the first aggregated content item (e.g., without applying the second audio content); and in accordance with a determination that the selection input corresponds to selection of the second audio content option, the computer system applies the second audio content to the first aggregated content item (e.g., without applying the first audio content). In some embodiments, the first audio content option and the second audio content option are selected for display based on a determination that the user is authorized to share the first audio content and the second audio content. 
Displaying an audio content option in accordance with a determination that audio content applied to the first aggregated content item is not permitted to be shared by the user of the computer system provides the user with feedback about the current state of the device (e.g., that the device has determined that the audio content applied to the first aggregated content item is not permitted to be shared by the user).
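The share-flow branching described in the preceding paragraphs can be sketched as follows: when the applied audio content is not permitted to be shared by the user, the system surfaces an indication together with remediation options (shorten the item or choose different audio); otherwise it proceeds to a normal share sheet. The strings, option names, and communication mediums listed below are illustrative.

```swift
import Foundation

// Hypothetical result of invoking the share option.
enum ShareFlowResult {
    case shareSheet(mediums: [String])
    case audioNotShareable(indication: String, options: [String])
}

func shareFlowResult(audioIsShareableByUser: Bool) -> ShareFlowResult {
    if audioIsShareableByUser {
        return .shareSheet(mediums: ["Messages", "Mail", "Shared Album"])
    }
    return .audioNotShareable(
        indication: "The applied audio can't be shared.",
        options: ["Shorten playback duration", "Choose different audio"]
    )
}

print(shareFlowResult(audioIsShareableByUser: false))
```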
In some embodiments, while concurrently displaying the first user interface object (e.g., 804A) and the share user interface object (e.g., 802C), the computer system detects, via the one or more input devices, a ninth input (e.g., 808E) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the share user interface object. In response to detecting the ninth input, and in accordance with a determination that the first plurality of content items in the first aggregated content item includes a first content item that is not saved locally to the computer system, the computer system displays, via the display generation component, a sync option (e.g., 824A) that is selectable to initiate a process for saving the first content item to the media library. In some embodiments, while displaying the sync option, the computer system detects, via the one or more input devices, an input corresponding to selection of the sync option; and, in response to detecting the input, the computer system saves the first content item to the computer system. In some embodiments, in accordance with a determination that the first plurality of content items in the first aggregated content item includes one or more content items that are not saved locally to the computer system, the computer system displays, via the display generation component, a sync option that is selectable to initiate a process for saving the one or more content items to the computer system; while displaying the sync option, the computer system detects, via the one or more input devices, an input corresponding to selection of the sync option; and, in response to detecting the input, the computer system saves the one or more content items to the computer system. Displaying a sync option in accordance with a determination that the first plurality of content items in the first aggregated content item includes a first content item that is not saved locally to the computer system provides the user with feedback about the current state of the device (e.g., that the device has determined that the first plurality of content items includes a first content item that is not saved locally to the computer system).
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (and, optionally, prior to playing visual content of the second aggregated content item), the computer system displays, via the display generation component, a preview object (e.g., 1276A) displaying an animated preview of visual content of the second aggregated content item (e.g., a moving preview and/or a preview video). In some embodiments, the computer system displays the preview object displaying an animated preview of visual content of the second aggregated content item while continuing to play the audio content. In some embodiments, while displaying the preview object, the computer system detects, via the one or more input devices, a selection input corresponding to selection of the preview object; and in response to detecting the selection input, the computer system plays visual content of the first aggregated content item. Displaying a preview object displaying an animated preview of visual content of the second aggregated content item enables a user to quickly preview and select a next aggregated content item to be played, thereby reducing the number of inputs needed for viewing and selecting a next aggregated content item.
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (in some embodiments, prior to playing visual content of the second aggregated content item), the computer system displays, via the display generation component, a places object (e.g., 1282D, 1282E) corresponding to a geographic location and that is selectable to display one or more aggregated content item options corresponding to the geographic location. In some embodiments, the computer system displays the places object while continuing playing the audio content. In some embodiments, while displaying the places object, the computer system detects, via the one or more input devices, a selection input corresponding to selection of the places object; and in response to detecting the selection input, the computer system displays, via the display generation component, a first option representative of a fourth aggregated content item corresponding to the geographic location. In some embodiments, the computer system displays, via the display generation component, concurrently with the places object, a second places object corresponding to a second geographic location different from the geographic location and that is selectable to display one or more aggregated content item options corresponding to the second geographic location; while concurrently displaying the places object and the second places object, the computer system detects a selection input; and in response to detecting the selection input: in accordance with a determination that the selection input corresponds to selection of the places option, the computer system displays, via the display generation component, a first option representative of a fourth aggregated content item corresponding to the geographic location (e.g., without displaying the second option); and in accordance with a determination that the selection input corresponds to selection of the second places option, displaying, via the display generation component, a second option representative of a fifth aggregated content item corresponding to the second geographic location (e.g., without displaying the first option). Displaying a places object corresponding to a geographic location that is selectable to display one or more aggregated content item options corresponding to the geographic location enables a user to quickly view and select aggregated content items corresponding to a particular geographic location, thereby reducing the number of inputs needed for selecting a next aggregated content item.
In some embodiments, subsequent to (e.g., in response to) detecting that playback of the visual content of the first aggregated content item meets one or more termination criteria (in some embodiments, prior to playing visual content of the second aggregated content item), the computer system displays, via the display generation component, a first people object (e.g., 1282A, 1282B, 1282C) corresponding to a first person and that is selectable to display one or more aggregated content item options corresponding to the first person. In some embodiments, the computer system displays the first people object while continuing to play the audio content. In some embodiments, while displaying the first people object, the computer system detects, via the one or more input devices, a selection input corresponding to selection of the first people object; and in response to detecting the selection input, the computer system displays, via the display generation component, a first option representative of a fourth aggregated content item corresponding to the first person. In some embodiments, the computer system displays, via the display generation component, concurrently with the first people object, a second people object corresponding to a second person different from the first person and that is selectable to display one or more aggregated content item options corresponding to the second person; while concurrently displaying the first people object and the second people object, the computer system detects a selection input; and in response to detecting the selection input: in accordance with a determination that the selection input corresponds to selection of the first people option, the computer system displays, via the display generation component, a first option representative of a fourth aggregated content item corresponding to the first person (e.g., without displaying the second option); and in accordance with a determination that the selection input corresponds to selection of the second people option, displaying, via the display generation component, a second option representative of a fifth aggregated content item corresponding to the second person (e.g., without displaying the first option). Displaying a people object corresponding to a first person that is selectable to display one or more aggregated content item options corresponding to the first person enables a user to quickly view and select aggregated content items corresponding to a particular person, thereby reducing the number of inputs needed for selecting a next aggregated content item.
In some embodiments, the computer system displays, via the display generation component, a media library user interface (e.g., 1208). In accordance with a determination that a first setting (e.g., 1220) is enabled (e.g., a “show library” option), the media library user interface provides access to (e.g., displays, and/or displays one or more options that are selectable to cause display and/or initiate a process for displaying) a plurality of aggregated content items (e.g., 1210A, 1212A-1212E) including the first aggregated content item, and the media library that includes photos and/or videos taken by the user of the computer system (e.g., 1210D, 604). In accordance with a determination that the first setting (e.g., 1220) is disabled, the media library user interface provides access to the plurality of aggregated content items without providing access to the media library that includes photos and/or videos taken by the user of the computer system (e.g., provides access to the plurality of aggregated content items that are generated using the photos and/or videos in the media library, but does not provide access to the individual photos and/or videos and/or the full set of individual photos and/or videos that make up the media library). Providing a first setting that can remove access to the media library enhances security by restricting access to the media library by an unauthorized user. Providing improved security enhances the operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access) which, additionally, reduces power usage and improves battery life of the device by limiting the performance of restricted operations.
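As an illustrative sketch of this setting-dependent behavior (using hypothetical Swift types and property names that are not part of the disclosure), a media library interface might expose aggregated content items unconditionally while gating the underlying library behind the first setting:

```swift
// Minimal sketch, assuming hypothetical names; not the actual implementation.
struct MediaLibraryInterface {
    var showLibraryEnabled: Bool          // corresponds to a "show library" setting
    var aggregatedContentItems: [String]  // e.g., titles of aggregated content items
    var libraryMediaItems: [String]       // individual photos/videos in the library

    /// Sections the interface provides access to under the current setting.
    var accessibleSections: [String] {
        var sections = ["Aggregated content items (\(aggregatedContentItems.count))"]
        if showLibraryEnabled {
            sections.append("Media library (\(libraryMediaItems.count) items)")
        }
        return sections
    }
}

let interface = MediaLibraryInterface(
    showLibraryEnabled: false,
    aggregatedContentItems: ["Beach Day", "Palm Springs 2017"],
    libraryMediaItems: ["IMG_0001.HEIC", "IMG_0002.MOV"]
)
print(interface.accessibleSections)
// ["Aggregated content items (2)"] — the library section is omitted while the setting is disabled.
```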
Note that details of the processes described above with respect to method 900 (e.g.,
In
In
In
Options 1012G-1012J correspond to different duration options for the first aggregated content item, and are selectable to modify and/or specify a duration of the first aggregated content item. For example, the first aggregated content item currently has a duration corresponding to option 1012G (e.g., a short duration), and the specified duration is a duration of 10 media items. Option 1012H is selectable to increase the duration of the first aggregated content item by increasing the number of media items in the first aggregated content item (e.g., from 10 media items to 30 media items). Option 1012I is selectable to even further increase the duration of the first aggregated content item by increasing the number of media items in the first aggregated content item. In the depicted embodiment, option 1012I corresponds to a specific time duration (e.g., 1 minute 28 seconds), and the time duration corresponds to a maximum time duration that is allowable for sharing the first aggregated content item. Option 1012J is selectable to increase the duration of the first aggregated content item to match a duration of the audio track that has been applied to the first aggregated content item. In
In
Add media items user interface 1015 includes a plurality of tiles 1018A-1018O representative of a plurality of media items (e.g., photos and/or videos) that are not currently included in the first aggregated content item. In the depicted embodiment, the plurality of media items that are represented in the add media items user interface 1015 are selected for inclusion in the add media items user interface 1015 based on content depicted in each media item, and the relevance of the media item to the first aggregated content item. Add media items user interface 1015 also includes option 1016C, which is selectable to display representations (e.g., tiles) of all photos in the user's media library, and option 1016D, which is selectable to display a plurality of media item collections (e.g., albums) stored on electronic device 600. In
In
In
In
In
In
In
In
In
In
As shown above, content grid user interface 1004 and various options presented within content grid user interface 1004 allow a user to add, remove, and/or re-order media items within the first aggregated content item. Furthermore, addition, removal, and/or re-ordering of media items within the first aggregated content item can also cause a change in visual transitions presented between media items during playback of the first aggregated content item. For example, in some embodiments, visual transitions between two adjacent media items in the first aggregated content item can be selected based on a level of similarity between the two media items. For instance, if the two media items are determined to be similar, visual transitions of a first type may be used between the two media items, whereas if the two media items are determined not to be substantially similar, then visual transitions of a second type may be used between the two media items.
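A minimal sketch of this transition-selection logic is shown below; the Swift types, the specific similarity test, and the one-hour threshold are assumptions made for illustration rather than details of the disclosed embodiments:

```swift
import Foundation

// Sketch under stated assumptions: choose a transition style for each adjacent pair
// of media items based on whether the pair is judged similar.
enum TransitionType { case firstType, secondType }   // e.g., crossfade vs. fade to black

struct MediaItem {
    var captureDate: Date
    var locationIdentifier: String?
}

// Assumption: items are "similar" if captured close in time or at the same place.
func areSimilar(_ a: MediaItem, _ b: MediaItem) -> Bool {
    let closeInTime = abs(a.captureDate.timeIntervalSince(b.captureDate)) < 60 * 60
    let samePlace = a.locationIdentifier != nil && a.locationIdentifier == b.locationIdentifier
    return closeInTime || samePlace
}

/// Returns one transition per adjacent pair in the ordered sequence.
func transitionTypes(for orderedItems: [MediaItem]) -> [TransitionType] {
    guard orderedItems.count > 1 else { return [] }
    return (1..<orderedItems.count).map { index in
        areSimilar(orderedItems[index - 1], orderedItems[index]) ? .firstType : .secondType
    }
}
```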
In
As described below, method 1100 provides an intuitive way for viewing and editing content items. The method reduces the cognitive burden on a user for viewing and editing content items, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to view and edit content items faster and more efficiently conserves power and increases the time between battery charges.
The computer system plays (1102), via the display generation component, visual content of a first aggregated content item (e.g., 628A,
While playing the visual content of the first aggregated content item (1104), the computer system detects (1106), via the one or more input devices, a user input (e.g., 1002) (e.g., a gesture (e.g., via a touch-sensitive display and/or a touch-sensitive surface) (e.g., a tap gesture, a swipe gesture) and/or a voice input) (e.g., a user input corresponding to selection of an option and/or a user input corresponding to a request to pause playback of the first aggregated content item).
In response to detecting the user input (1108), the computer system pauses (1110) playback of the visual content of the first aggregated content item (e.g., freezing and/or ceasing video playback of the visual content of the first aggregated content item); and displays (1112), via the display generation component, a user interface (e.g., 1004) (e.g., replacing display of the visual content of the first aggregated content item with display of the user interface, and/or overlaying the user interface on the visual content of the first aggregated content item), wherein displaying the user interface includes concurrently displaying a plurality of representations of content items in the first plurality of content items (e.g., without displaying content items that are not in the first plurality of content items), including: a first representation of a first content item (e.g., 1008A-1008O) of the first plurality of content items, and a second representation of a second content item (e.g., 1008A-1008O) of the first plurality of content items. In some embodiments, the user input is detected while a respective content item of the plurality of content items is being displayed (e.g., within and/or as part of playback of the first aggregated content item). In some embodiments, the user interface includes a representation of the respective content item. In some embodiments, in accordance with a determination that the user input was detected while the respective content item was displayed, the user interface includes a representation of the respective content item. Displaying the user interface including concurrently displaying the plurality of representations of content items in the first plurality of content items provides the user with feedback about the current state of the device (e.g., that the first aggregated content item being played by the device includes the first plurality of content items).
In some embodiments, the first content item corresponds to a first playback position (e.g., a first playback time) of the first aggregated content item. In some embodiments, the second content item corresponds to a second playback position (e.g., a second playback time) of the first aggregated content item different from the first playback position. In some embodiments, while concurrently displaying the first representation of the first content item (e.g., 1008A-1008O) and the second representation of the second content item (e.g., 1008A-1008O), the computer system detects, via the one or more input devices, a selection input (e.g., 1058) (e.g., one or more tap inputs and/or one or more non-tap inputs). In response to detecting the selection input: in accordance with a determination that the selection input corresponds to selection of the first representation of the first content item (e.g., a tap input on the first representation of the first content item and/or a remote control input while the first representation of the first content item is selected and/or in focus), the computer system plays visual content of the first aggregated content item from the first playback position (e.g.,
In some embodiments, the computer system displays, via the display generation component, an add content option (e.g., 1012D) that is selectable to initiate a process for adding one or more content items to the first aggregated content item. While displaying the add content option, the computer system detects, via the one or more input devices, a second selection input (e.g., 1014) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of the add content option. In response to detecting the second selection input, the computer system displays, via the display generation component, representations of a plurality of content items (e.g., 1018A-1018O) that are not included in the first aggregated content item, including concurrently displaying: a third representation of a third content item (e.g., 1018A-1018O), and a fourth representation of a fourth content item (e.g., 1018A-1018O). In some embodiments, the computer system ceases display of the plurality of representations of content items in the first plurality of content items and/or replaces display of the plurality of representations of content items in the first plurality of content items with display of representations of the plurality of content items that are not included in the first aggregated content item (e.g., a view of a media library of the user). While concurrently displaying the third representation of the third content item and the fourth representation of the fourth content item, the computer system detects, via the one or more input devices, a first set of inputs (e.g., 1020, 1024) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to a request to add the third content item to the first aggregated content item (e.g., without adding the fourth content item to the first aggregated content item). In response to detecting the first set of inputs, the computer system modifies the first aggregated content item to include the third content item (e.g.,
In some embodiments, displaying the third representation of the third content item (e.g., 1018A-1018O) comprises: in accordance with a determination that the third content item satisfies one or more relevance criteria with respect to the first aggregated content item (e.g., based on metadata associated with the third content item (e.g., location data and/or time data) (e.g., location data associated with the third content item corresponds to location data for the first aggregated content item and/or time data associated with the third content item corresponds to time data for the first aggregated content item)), displaying the representation of the third content item in a first manner (e.g., highlighting the representation of the third content item (e.g., displaying the third content item with a first set of colors and/or at a first brightness level)); and in accordance with a determination that the third content item does not satisfy the one or more relevance criteria with respect to the first aggregated content item (e.g., based on metadata associated with the third content item (e.g., location data and/or time data) (e.g., location data associated with the third content item does not correspond to location data for the first aggregated content item and/or time data associated with the third content item does not correspond to time data for the first aggregated content item)), displaying the representation of the third content item in a second manner different from the first manner (e.g., visually deemphasizing the representation of the third content item (e.g., displaying the third content item with a second set of colors and/or at a second brightness level (e.g., darker than the first brightness level))). In some embodiments, displaying the fourth representation of the fourth content item (e.g., 1018A-1018O) comprises: in accordance with a determination that the fourth content item satisfies the one or more relevance criteria with respect to the first aggregated content item (e.g., based on metadata associated with the fourth content item (e.g., location data and/or time data) (e.g., location data associated with the fourth content item corresponds to location data for the first aggregated content item and/or time data associated with the fourth content item corresponds to time data for the first aggregated content item)), displaying the representation of the fourth content item in the first manner; and in accordance with a determination that the fourth content item does not satisfy the one or more relevance criteria with respect to the first aggregated content item (e.g., based on metadata associated with the fourth content item (e.g., location data and/or time data) (e.g., location data associated with the fourth content item does not correspond to location data for the first aggregated content item and/or time data associated with the fourth content item does not correspond to time data for the first aggregated content item)), displaying the representation of the fourth content item in the second manner. Displaying the fourth content item in the first manner in accordance with a determination that the fourth content item satisfies the one or more relevance criteria provides the user with feedback about the current state of the device (e.g., that the device has determined that the fourth content item satisfies the one or more relevance criteria with respect to the first aggregated content item).
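The relevance check described above could be sketched as follows; the types and the choice to treat a matching location or capture time as sufficient are illustrative assumptions, not the disclosed implementation:

```swift
import Foundation

enum DisplayManner { case highlighted, deemphasized }   // "first manner" vs. "second manner"

struct CandidateMetadata {
    var locationIdentifier: String?
    var captureDate: Date?
}

struct AggregatedItemContext {
    var locationIdentifiers: Set<String>
    var dateRange: ClosedRange<Date>
}

/// Returns how a suggested media item should be displayed in the add-media interface.
func displayManner(for candidate: CandidateMetadata,
                   relativeTo context: AggregatedItemContext) -> DisplayManner {
    let locationMatches = candidate.locationIdentifier.map { context.locationIdentifiers.contains($0) } ?? false
    let timeMatches = candidate.captureDate.map { context.dateRange.contains($0) } ?? false
    return (locationMatches || timeMatches) ? .highlighted : .deemphasized
}
```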
In some embodiments, the computer system displays, via the display generation component, a related content option (e.g., 1012D) that is selectable to initiate a process for displaying additional content related to the first aggregated content item (e.g., selection of add photos option 1012D displays (e.g., in
In some embodiments, while displaying the plurality of representations of content items in the first plurality of content items (e.g., 1008A-1008P), the computer system detects, via the one or more input devices, a fourth selection input (e.g., 1036A, 1036B) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of one or more content items of the first plurality of content items including the first content item. In response to detecting the fourth selection input, the computer system displays, via the display generation component, a share option (e.g., 1034C) that is selectable to initiate a process for sharing (e.g., to one or more external computer systems and/or one or more users) the selected one or more content items via one or more communication mediums (e.g., text message, electronic mail, near field wireless communication and/or file transfer, uploading to a shared media album, and/or uploading to a third party platform). In some embodiments, while displaying the share option (e.g., and while the one or more content items are selected), the computer system detects, via the one or more input devices, a selection input corresponding to selection of the share option; and in response to detecting the selection input, the computer system displays, via the display generation component, a share user interface, wherein displaying the share user interface comprises concurrently displaying: a first option that is selectable to initiate a process for sharing the selected one or more content items via a first communication medium (e.g., text message, electronic mail, near field wireless communication and/or file transfer, uploading to a shared media album, and/or uploading to a third party platform); and a second option that is selectable to initiate a process for sharing the selected one or more content items via a second communication medium different from the first communication medium. Displaying a share option that is selectable to initiate a process for sharing the selected one or more content items via one or more communication mediums enables a user to quickly share content items, thereby reducing the number of inputs needed to share content items.
In some embodiments, while displaying the plurality of representations of content items in the first plurality of content items (e.g., 1008A-1008P), the computer system detects, via the one or more input devices, a fifth selection input (e.g., 1036A, 1036B) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to selection of one or more content items of the first plurality of content items including the first content item. In response to detecting the fifth selection input, the computer system displays, via the display generation component, a remove option (e.g., 1034D) that is selectable to initiate a process for removing the selected one or more content items from the first aggregated content item (e.g., such that the removed content items are no longer displayed when the first aggregated content item is played). In some embodiments, subsequent to displaying the remove option (e.g., while displaying the remove option), the computer system detects, via the one or more input devices, one or more inputs corresponding to a request to remove the selected one or more content items from the first aggregated content item; and in response to detecting the one or more inputs, the computer system modifies the first aggregated content item to remove the selected one or more content items. Displaying a remove option that is selectable to initiate a process for removing the selected one or more content items from the first aggregated content item enables a user to quickly remove items from the first aggregated content item, thereby reducing the number of inputs needed to remove content items from the first aggregated content item.
In some embodiments, prior to displaying the user interface, the first content item is positioned at a first sequential position in the ordered sequence of the first plurality of content items. In some embodiments, playing the visual content of the first aggregated content item includes sequentially displaying the content items of the first plurality of content items according to the ordered sequence. In some embodiments, displaying the user interface comprises displaying the first representation of the first content item at a first display position corresponding to the first sequential position (e.g., tile 1008A, representative of media item 628A, is displayed at a first position, tile 1008K, representative of a media item, is displayed at an 11th position). While displaying the plurality of representations of content items in the first plurality of content items, the computer system detects, via the one or more input devices, a gesture (e.g., 1026) (e.g., a hold and drag gesture and/or a different gesture) corresponding to the first representation of the first content item (e.g., 1008K). In response to detecting the gesture: the computer system moves the first representation of the first content item from the first display position to a second display position different from the first display position (e.g.,
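A brief sketch of the reordering step, assuming the ordered sequence is modeled as a simple array (the helper below is hypothetical):

```swift
/// Moves the element at `sourcePosition` to `destinationPosition`, returning the
/// updated ordered sequence; both positions are zero-based.
func movingItem<Element>(in orderedSequence: [Element],
                         from sourcePosition: Int,
                         to destinationPosition: Int) -> [Element] {
    var items = orderedSequence
    let moved = items.remove(at: sourcePosition)
    items.insert(moved, at: destinationPosition)
    return items
}

// Usage: dragging the tile at the 5th display position to the 2nd display position
// also moves the underlying media item to the 2nd sequential playback position.
let updated = movingItem(in: ["A", "B", "C", "D", "E"], from: 4, to: 1)
// ["A", "E", "B", "C", "D"]
```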
In some embodiments, while displaying the user interface (e.g., 1004), the computer system detects, via the one or more input devices, a set of user inputs (e.g., 1010) (e.g., one or more tap inputs and/or one or more non-tap inputs). In response to detecting the set of user inputs, the computer system concurrently displays, via the display generation component: a first content length option (e.g., 1012G) corresponding to a first number of content items (e.g., 10 content items, 15 content items, and/or 20 content items), wherein displaying the first content length option comprises displaying the first number of content items (in some embodiments, the first number of content items is indicative of the number of content items to be included in the first aggregated content item if the first content length option is selected); and a second content length option (e.g., 1012H) corresponding to a second number of content items different from the first number of content items (e.g., 25 content items, 30 content items, and/or 35 content items), wherein displaying the second content length option comprises displaying the second number of content items (in some embodiments, the second number of content items is indicative of the number of content items to be included in the first aggregated content item if the second content length option is selected). In some embodiments, while concurrently displaying the first content length option and the second content length option, the computer system detects, via the one or more input devices, a selection input; and in response to detecting the selection input: in accordance with a determination that the selection input corresponds to selection of the first content length option, the computer system modifies the first aggregated content item to include the first number of content items (e.g., adding content items to and/or removing content items from the first aggregated content item so that the first aggregated content item includes (e.g., exactly) the first number of content items); and in accordance with a determination that the selection input corresponds to selection of the second content length option, the computer system modifies the first aggregated content item to include the second number of content items (e.g., adding content items to and/or removing content items from the first aggregated content item so that the first aggregated content item includes (e.g., exactly) the second number of content items). Displaying a first content length option and a second content length option enables a user to quickly modify the length of the first aggregated content item, thereby reducing the number of inputs needed to modify the length of the first aggregated content item.
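One illustrative way to model such content length options, including the item-count options described here as well as the fixed-duration and match-audio options described above for options 1012G-1012J, is an enumeration that resolves to a target number of content items (the names and values below are assumptions):

```swift
import Foundation

// Illustrative model of content length options: a target item count, a fixed time
// limit (e.g., a maximum duration allowable for sharing), or matching the duration
// of the applied audio track.
enum ContentLengthOption {
    case itemCount(Int)
    case fixedDuration(TimeInterval)
    case matchAudioTrack
}

func targetItemCount(for option: ContentLengthOption,
                     averageItemDuration: TimeInterval,
                     audioTrackDuration: TimeInterval) -> Int {
    switch option {
    case .itemCount(let count):
        return count
    case .fixedDuration(let seconds):
        return max(1, Int(seconds / averageItemDuration))
    case .matchAudioTrack:
        return max(1, Int(audioTrackDuration / averageItemDuration))
    }
}

// Example: a "short" option of 10 items versus a longer option of 30 items.
let shortLength = targetItemCount(for: .itemCount(10), averageItemDuration: 3, audioTrackDuration: 88)
let longLength  = targetItemCount(for: .itemCount(30), averageItemDuration: 3, audioTrackDuration: 88)
```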
In some embodiments, while displaying the user interface (e.g., 1004), the computer system detects, via the one or more input devices, a second set of user inputs (e.g., 1010) (e.g., one or more tap inputs and/or one or more non-tap inputs). In response to detecting the second set of user inputs: the computer system concurrently displays, via the display generation component: a third content length option (e.g., 1012G) corresponding to a first number of content items (e.g., a first playback duration); and a fourth content length option (e.g., 1012H) corresponding to a second number of content items different from the first number of content items (e.g., a second playback duration different from the first playback duration). While concurrently displaying the third content length option (e.g., 1012G) and the fourth content length option (e.g., 1012H), the computer system detects, via the one or more input devices, a sixth selection input (e.g., one or more tap inputs and/or one or more non-tap inputs). In response to detecting the sixth selection input: in accordance with a determination that the sixth selection input corresponds to selection of the third content length option, the computer system modifies the user interface (e.g., 1004) to display representations of the first number of content items (e.g., display exactly the first number of content items); and in accordance with a determination that the sixth selection input corresponds to selection of the fourth content length option, the computer system modifies the user interface (e.g., 1004) to display representations of the second number of content items (e.g., display exactly the second number of content items). Modifying the user interface to display representations of the first number of content items or the second number of content items in response to the sixth selection input provides the user with feedback about the current state of the device (e.g., that the device has modified the first aggregated content item to include the first number of content items or the second number of content items in response to the sixth selection input).
In some embodiments, subsequent to displaying the user interface (e.g., 1004) (e.g., while displaying the user interface), the computer system detects, via the one or more input devices, a third set of inputs (e.g., 1014, 1020, 1024, 1036A, 1036B, 1052, 1056) (e.g., one or more tap inputs and/or one or more non-tap inputs) corresponding to a request to add a first set of one or more additional content items to the first aggregated content item and/or remove a first set of one or more removed content items from the first aggregated content item. In response to detecting the third set of inputs, the computer system modifies the first aggregated content item to include the one or more additional content items and/or modifies the first aggregated content item to exclude the one or more removed content items (e.g.,
In some embodiments, in response to detecting the fourth set of inputs, in accordance with a determination that the fourth set of inputs includes a request to decrease the duration of the first aggregated content item, the computer system reduces the duration of the first aggregated content item by removing the second set of removed content items without removing any of the first set of one or more additional content items (e.g., removes the second set of removed content items without removing tile 1008P, which has been manually added by a user). Automatically removing content from the first aggregated content item in response to a user input corresponding to a request to modify the duration of the first aggregated content item allows a user to quickly and effectively modify the duration of the first aggregated content item without further user input.
In some embodiments, in response to detecting the fourth set of inputs, in accordance with a determination that the fourth set of inputs includes a request to increase the duration of the first aggregated content item, the computer system increases the duration of the first aggregated content item by adding the second set of additional content items without adding any of the first set of one or more removed content items (e.g., adds the second set of additional content items without adding the media items corresponding to tiles 1008B and 1008E, which have been manually removed by a user). Automatically adding or removing content from the first aggregated content item in response to a user input corresponding to a request to modify the duration of the first aggregated content item allows a user to quickly and effectively modify the duration of the first aggregated content item without further user input.
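A hedged sketch of this resizing behavior follows; the types and the relevance-score ordering are assumptions, but the sketch preserves the described constraint that manually added items are not automatically removed and manually removed items are not automatically re-added:

```swift
struct CuratedItem {
    var identifier: String
    var relevanceScore: Double
    var manuallyAdded: Bool
}

func resized(_ current: [CuratedItem],
             manuallyRemovedIdentifiers: Set<String>,
             candidates: [CuratedItem],
             toCount targetCount: Int) -> [CuratedItem] {
    if current.count > targetCount {
        // Shrink: drop the least relevant automatically selected items first,
        // never touching items the user added manually.
        var kept = current
        let removableIndices = kept.indices
            .filter { !kept[$0].manuallyAdded }
            .sorted { kept[$0].relevanceScore < kept[$1].relevanceScore }
        for index in removableIndices.prefix(kept.count - targetCount).sorted(by: >) {
            kept.remove(at: index)
        }
        return kept
    } else {
        // Grow: add the most relevant candidates, skipping anything the user removed.
        let existing = Set(current.map { $0.identifier })
        let additions = candidates
            .filter { !existing.contains($0.identifier)
                && !manuallyRemovedIdentifiers.contains($0.identifier) }
            .sorted { $0.relevanceScore > $1.relevanceScore }
            .prefix(targetCount - current.count)
        return current + Array(additions)
    }
}
```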
In some embodiments, playing the visual content of the first aggregated content item comprises: displaying, via the display generation component, the first content item; and subsequent to displaying the first content item, (e.g., immediately after displaying the first content item and/or while displaying the first content item) displaying, via the display generation component, a transition from the first content item to the second content item (e.g., a subsequent content item and/or a next content item in the ordered sequence of the first plurality of content items), wherein: in accordance with a determination that the second content item satisfies one or more similarity criteria with respect to the first content item (e.g., similarity in content, similarity in location, and/or similarity in date and/or time of capture), the transition from the first content item to the second content item is of a first visual transition type (e.g., transitions between media items represented by tiles 1008C, 1008D, 1008E, and 1008P are of the first visual transition type based on similarities between these media items) (e.g., a crossfade, a fade to black, an exposure bleed, a pan, a scale, and/or a rotate); and in accordance with a determination that the second content item does not satisfy the one or more similarity criteria with respect to the first content item, the transition from the first content item to the second content item is of a second visual transition type different from the first visual transition type (e.g., transitions between media items represented by tiles 1008P and 1008K in
In some embodiments, the one or more similarity criteria includes one or more of: a time-based similarity criteria (e.g., similarity in date and/or time when content items were captured); a location-based similarity criteria (e.g., similarity in geographic location where content items were captured); and/or a content-based similarity criteria (e.g., similarity in content depicted in the content items). Automatically selecting transition types based on time-based similarity criteria, location-based similarity criteria, and/or content-based similarity criteria improves the quality of visual transitions suggested to a user, and allows a user to apply transition types without further user input.
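A sketch combining the three kinds of criteria, with assumed thresholds and simplified location and content representations:

```swift
import Foundation

struct AnalyzedMediaItem {
    var captureDate: Date
    var latitude: Double
    var longitude: Double
    var contentLabels: Set<String>   // e.g., scene or subject classifications
}

/// Returns true if any of the time-based, location-based, or content-based
/// similarity criteria is satisfied (the thresholds here are illustrative).
func satisfiesSimilarityCriteria(_ a: AnalyzedMediaItem, _ b: AnalyzedMediaItem) -> Bool {
    let timeBased = abs(a.captureDate.timeIntervalSince(b.captureDate)) < 30 * 60
    let locationBased = abs(a.latitude - b.latitude) < 0.01 && abs(a.longitude - b.longitude) < 0.01
    let contentBased = !a.contentLabels.isDisjoint(with: b.contentLabels)
    return timeBased || locationBased || contentBased
}
```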
In some embodiments, the transition from the first content item to the second content item is of the first visual transition type; and playing the visual content of the first aggregated content item further comprises: subsequent to displaying the transition from the first content item to the second content item (e.g., immediately after displaying the transition from the first content item to the second content item and/or while displaying the transition from the first content item to the second content item), displaying, via the display generation component, the second content item; subsequent to displaying the second content item (e.g., immediately after displaying the second content item and/or while displaying the second content item), displaying, via the display generation component, a transition from the second content item to a third content item different from the first and second content items (e.g., a subsequent content item and/or a next content item in the ordered sequence of the first plurality of content items), wherein: in accordance with a determination that the third content item satisfies one or more similarity criteria with respect to the second content item, the transition from the second content item to the third content item is of the first visual transition type (e.g., a crossfade, a fade to black, an exposure bleed, a pan, a scale, and/or a rotate) (e.g., maintain the same transition type between the first and second content items and between the second and third content items based on similarity between the first, second, and third content items). Automatically selecting transition types based on similarity criteria between content items improves the quality of visual transitions suggested to a user and allows a user to apply transition types without further user input.
In some embodiments, displaying the transition from the second content item to the third content item further comprises: in accordance with a determination that the third content item does not satisfy the one or more similarity criteria with respect to the second content item, the transition from the second content item to the third content item is of a third visual transition type different from the first visual transition type (e.g., a crossfade, a fade to black, an exposure bleed, a pan, a scale, and/or a rotate). Automatically selecting transition types based on similarity criteria between content items improves the quality of visual transitions suggested to a user and allows a user to apply transition types without further user input.
In some embodiments, while playing the visual content of the first aggregated content item (e.g.,
In some embodiments, displaying the user interface (e.g., 1222 in
In some embodiments, while displaying the video navigation user interface element (e.g., 1228), including concurrently displaying the first representation of the first content item (e.g., 1230A-1230I) and the second representation of the second content item (e.g., 1230A-1230I), the computer system detects, via the one or more input devices, a first set of navigation inputs (e.g., 1234, 1238) (e.g., one or more swipe gesture inputs and/or one or more directional inputs). In response to detecting the first set of navigation inputs: at a first time (e.g., at a start of the first set of navigation inputs), the computer system concurrently displays, via the display generation component: the first representation of the first content item in a first manner (e.g., tile 1230B in
In some embodiments, prior to detecting the one or more navigation inputs (e.g., 1234, 1238), the computer system concurrently displays, via the display generation component: the first representation of the first content item in the first manner (e.g., tile 1230B in
In some embodiments, after concurrently displaying the paused visual content of the first aggregated content item (e.g., 1224A,
In some embodiments, while concurrently displaying the paused visual content of the first aggregated content item (e.g., 1224A,
Note that details of the processes described above with respect to method 1100 (e.g.,
Media browsing user interface 1208 includes selectable options 1210A, 1210B, 1210C, 1210D. Option 1210A is selectable to display representations of one or more aggregated content items. Option 1210B is selectable to display representations of one or more shared media items (e.g., media items that have been shared with a user and/or have been shared by the user). Option 1210C is selectable to display representations of one or more collections of media items (e.g., albums). Option 1210D is selectable to display representations of media items in a media library. In some embodiments, a setting (e.g., setting 1220 shown in
Media browsing user interface 1208 also includes tiles 1212A-1212E. Each tile 1212A-1212E is representative of a respective aggregated content item. For example, tile 1212A is representative of a first aggregated content item, tile 1212B is representative of a second aggregated content item, and so forth. In some embodiments, each tile 1212A-1212E displays a preview (e.g., an animated preview and/or a moving preview) of its corresponding aggregated content item (e.g., when a focus selection is on the respective tile 1212A-1212E). In
In
In
Option 1232A is selectable to display a plurality of duration options. Option 1232B is selectable to display a plurality of audio track options. Option 1232C is selectable to display a plurality of menu options. Option 1232D is selectable to display a plurality of aggregated content item options. Option 1232E is selectable to display one or more people options and/or one or more places options that allow a user to view aggregated content items pertaining to particular people and/or places. In
In
In
In
In
In
In
In
In
In
In
In
In
In
In
In
Next content item user interface 1270 includes countdown timer 1274 that indicates to a user that, without further user input, a next aggregated content item (e.g., “PALM SPRINGS 2017”) will begin playing at the end of the countdown timer 1274. In
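As a rough sketch (hypothetical class, not the disclosed implementation), the countdown behavior can be modeled as a timer that automatically starts the next aggregated content item unless it is cancelled by further user input:

```swift
import Foundation

final class NextItemCountdown {
    private var remainingSeconds: Int
    private var timer: Timer?
    private let playNextItem: () -> Void

    init(seconds: Int, playNextItem: @escaping () -> Void) {
        self.remainingSeconds = seconds
        self.playNextItem = playNextItem
    }

    /// Begins counting down; when it reaches zero, the next item starts playing.
    /// (In a command-line context, a run loop must be running for the timer to fire.)
    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { [weak self] _ in
            guard let self else { return }
            self.remainingSeconds -= 1
            if self.remainingSeconds <= 0 {
                self.cancel()
                self.playNextItem()
            }
        }
    }

    /// Cancels auto-play, e.g., when the user selects a different option first.
    func cancel() {
        timer?.invalidate()
        timer = nil
    }
}

// Usage: start a countdown that would begin the next aggregated content item after 10 seconds.
let countdown = NextItemCountdown(seconds: 10) { print("Playing next aggregated content item") }
countdown.start()
```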
In
In
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the presentation of media content or any other content that may be of interest to users. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to present targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables users to have calculated control of the presented content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of media content presentation services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and presented to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content presentation services, or publicly available information.
This application claims priority to U.S. Provisional Patent Application No. 63/195,645, entitled “AGGREGATED CONTENT ITEM USER INTERFACES,” filed on Jun. 1, 2021, the contents of which is hereby incorporated by reference in its entirety.