The present disclosure relates generally to computer user interfaces, and more specifically to techniques for displaying visual effects.
Visual effects are used to enhance a user's experience when capturing and viewing media using electronic devices. Visual effects can alter the appearance of captured image data, or can present an idealized or entirely fictional version of an environment captured in an image.
Some techniques for displaying visual effects using electronic devices, however, are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for displaying visual effects. Such methods and interfaces optionally complement or replace other methods for displaying visual effects. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges.
A method is described. The method is performed at an electronic device having a camera, a display apparatus, and one or more input devices. The method comprises: displaying, via the display apparatus, a messaging user interface of a message conversation including at least a first participant, the messaging user interface including a camera affordance; detecting, via the one or more input devices, a first input directed to the camera affordance; in response to detecting the first input, displaying a camera user interface, the camera user interface including a capture affordance; detecting, via the one or more input devices, a second input directed to the capture affordance; in response to detecting the second input: capturing image data using the camera; ceasing to display the capture affordance; and displaying a send affordance at a location in the camera user interface that was previously occupied by the capture affordance; detecting, via the one or more input devices, a third input directed to the send affordance; and in response to detecting the third input, initiating a process to send the captured image data to the first participant.
A non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a camera, a display apparatus, and one or more input devices, the one or more programs including instructions for: displaying, via the display apparatus, a messaging user interface of a message conversation including at least a first participant, the messaging user interface including a camera affordance; detecting, via the one or more input devices, a first input directed to the camera affordance; in response to detecting the first input, displaying a camera user interface, the camera user interface including a capture affordance; detecting, via the one or more input devices, a second input directed to the capture affordance; in response to detecting the second input: capturing image data using the camera; ceasing to display the capture affordance; and displaying a send affordance at a location in the camera user interface that was previously occupied by the capture affordance; detecting, via the one or more input devices, a third input directed to the send affordance; and in response to detecting the third input, initiating a process to send the captured image data to the first participant.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a camera, a display apparatus, and one or more input devices, the one or more programs including instructions for: displaying, via the display apparatus, a messaging user interface of a message conversation including at least a first participant, the messaging user interface including a camera affordance; detecting, via the one or more input devices, a first input directed to the camera affordance; in response to detecting the first input, displaying a camera user interface, the camera user interface including a capture affordance; detecting, via the one or more input devices, a second input directed to the capture affordance; in response to detecting the second input: capturing image data using the camera; ceasing to display the capture affordance; and displaying a send affordance at a location in the camera user interface that was previously occupied by the capture affordance; detecting, via the one or more input devices, a third input directed to the send affordance; and in response to detecting the third input, initiating a process to send the captured image data to the first participant.
An electronic device is described. The electronic device comprises: a camera; a display apparatus; one or more input devices; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display apparatus, a messaging user interface of a message conversation including at least a first participant, the messaging user interface including a camera affordance; detecting, via the one or more input devices, a first input directed to the camera affordance; in response to detecting the first input, displaying a camera user interface, the camera user interface including a capture affordance; detecting, via the one or more input devices, a second input directed to the capture affordance; in response to detecting the second input: capturing image data using the camera; ceasing to display the capture affordance; and displaying a send affordance at a location in the camera user interface that was previously occupied by the capture affordance; detecting, via the one or more input devices, a third input directed to the send affordance; and in response to detecting the third input, initiating a process to send the captured image data to the first participant.
An electronic device is described. The electronic device comprises: a camera; a display apparatus; one or more input devices; means for displaying, via the display apparatus, a messaging user interface of a message conversation including at least a first participant, the messaging user interface including a camera affordance; means for detecting, via the one or more input devices, a first input directed to the camera affordance; means, responsive to detecting the first input, for displaying a camera user interface, the camera user interface including a capture affordance; means for detecting, via the one or more input devices, a second input directed to the capture affordance; means, responsive to detecting the second input, for: capturing image data using the camera; ceasing to display the capture affordance; and displaying a send affordance at a location in the camera user interface that was previously occupied by the capture affordance; means for detecting, via the one or more input devices, a third input directed to the send affordance; and means, responsive to detecting the third input, for initiating a process to send the captured image data to the first participant.
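By way of illustration only, the following Swift sketch models the affordance flow recited above: an input on the camera affordance presents the camera user interface, an input on the capture affordance captures image data and replaces that affordance with a send affordance at the same location, and an input on the send affordance initiates sending the captured image data to the first participant. The type and member names (e.g., MessagingCameraFlow, didTapCaptureAffordance) are assumptions introduced for the sketch and do not appear in the disclosure.

```swift
import Foundation

// Simplified model of the affordance flow: camera -> capture -> send.
struct CapturedImage { let data: Data }

enum CameraUIAffordance {
    case capture   // shown before an image is captured
    case send      // shown at the same location after capture
}

final class MessagingCameraFlow {
    private(set) var isCameraUIVisible = false
    private(set) var visibleAffordance: CameraUIAffordance = .capture
    private(set) var capturedImage: CapturedImage?
    let firstParticipant: String

    init(firstParticipant: String) {
        self.firstParticipant = firstParticipant
    }

    // First input: the camera affordance in the messaging user interface.
    func didTapCameraAffordance() {
        isCameraUIVisible = true
        visibleAffordance = .capture
    }

    // Second input: the capture affordance captures image data, ceases to be
    // displayed, and a send affordance appears in its place.
    func didTapCaptureAffordance(cameraOutput: Data) {
        guard isCameraUIVisible, visibleAffordance == .capture else { return }
        capturedImage = CapturedImage(data: cameraOutput)
        visibleAffordance = .send
    }

    // Third input: the send affordance initiates sending the captured image
    // data to the first participant in the conversation.
    func didTapSendAffordance(send: (CapturedImage, String) -> Void) {
        guard let image = capturedImage, visibleAffordance == .send else { return }
        send(image, firstParticipant)
    }
}
```

A caller would construct the flow with the first participant's identifier and feed it the detected inputs in order; the actual devices described here would of course drive real camera and messaging components rather than this in-memory state.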
A method is described. The method is performed at an electronic device having a camera and a display apparatus. The method comprises: displaying, via the display apparatus, a camera user interface, the camera user interface including: a camera display region including a representation of image data captured via the camera; and a first affordance associated with a first camera display mode; while a subject is positioned within a field of view of the camera and a representation of the subject and a background are displayed in the camera display region, detecting a gesture directed to the first affordance; in response to detecting the gesture directed to the first affordance, activating the first camera display mode, wherein activating the first camera display mode includes: displaying an avatar selection region including a selected one of a plurality of avatar options; and displaying a representation of the selected avatar option on the representation of the subject in the camera display region; while the first camera display mode is active, detecting a change in pose of the subject; and in response to detecting the change in pose of the subject, changing an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background.
A non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a camera and a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a camera user interface, the camera user interface including: a camera display region including a representation of image data captured via the camera; and a first affordance associated with a first camera display mode; while a subject is positioned within a field of view of the camera and a representation of the subject and a background are displayed in the camera display region, detecting a gesture directed to the first affordance; in response to detecting the gesture directed to the first affordance, activating the first camera display mode, wherein activating the first camera display mode includes: displaying an avatar selection region including a selected one of a plurality of avatar options; and displaying a representation of the selected avatar option on the representation of the subject in the camera display region; while the first camera display mode is active, detecting a change in pose of the subject; and in response to detecting the change in pose of the subject, changing an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a camera and a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a camera user interface, the camera user interface including: a camera display region including a representation of image data captured via the camera; and a first affordance associated with a first camera display mode; while a subject is positioned within a field of view of the camera and a representation of the subject and a background are displayed in the camera display region, detecting a gesture directed to the first affordance; in response to detecting the gesture directed to the first affordance, activating the first camera display mode, wherein activating the first camera display mode includes: displaying an avatar selection region including a selected one of a plurality of avatar options; and displaying a representation of the selected avatar option on the representation of the subject in the camera display region; while the first camera display mode is active, detecting a change in pose of the subject; and in response to detecting the change in pose of the subject, changing an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background.
An electronic device is described. The electronic device comprises: a camera; a display apparatus; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display apparatus, a camera user interface, the camera user interface including: a camera display region including a representation of image data captured via the camera; and a first affordance associated with a first camera display mode; while a subject is positioned within a field of view of the camera and a representation of the subject and a background are displayed in the camera display region, detecting a gesture directed to the first affordance; in response to detecting the gesture directed to the first affordance, activating the first camera display mode, wherein activating the first camera display mode includes: displaying an avatar selection region including a selected one of a plurality of avatar options; and displaying a representation of the selected avatar option on the representation of the subject in the camera display region; while the first camera display mode is active, detecting a change in pose of the subject; and in response to detecting the change in pose of the subject, changing an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background.
An electronic device is described. The electronic device comprises: a camera; a display apparatus; one or more input devices; means for displaying, via the display apparatus, a camera user interface, the camera user interface including: a camera display region including a representation of image data captured via the camera; and a first affordance associated with a first camera display mode; means, while a subject is positioned within a field of view of the camera and a representation of the subject and a background are displayed in the camera display region, for detecting a gesture directed to the first affordance; means, responsive to detecting the gesture directed to the first affordance, for activating the first camera display mode, wherein activating the first camera display mode includes: displaying an avatar selection region including a selected one of a plurality of avatar options; and displaying a representation of the selected avatar option on the representation of the subject in the camera display region; means, while the first camera display mode is active, for detecting a change in pose of the subject; and means, responsive to detecting the change in pose of the subject, for changing an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background.
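The following Swift sketch illustrates, under stated assumptions, the avatar display mode described above: activating the mode shows an avatar selection region and drives a selected avatar from the subject's pose while the camera background remains displayed. The Pose fields and the AvatarOption and CameraDisplayState names are hypothetical; they are not the pose-tracking representation used by the disclosed devices.

```swift
import Foundation

// Hypothetical pose signal used to change the avatar's appearance.
struct Pose {
    var headYaw: Double          // radians
    var headPitch: Double        // radians
    var mouthOpenAmount: Double  // 0...1
}

struct AvatarOption { let name: String }

struct CameraDisplayState {
    var avatarModeActive = false
    var avatarOptions: [AvatarOption] = []
    var selectedAvatarIndex = 0
    var avatarPose: Pose?            // drives the avatar drawn over the subject
    let backgroundVisible = true     // the camera background stays displayed

    // Activating the mode shows the avatar selection region and places the
    // selected avatar option over the representation of the subject.
    mutating func activateAvatarMode(options: [AvatarOption], initialPose: Pose) {
        avatarModeActive = true
        avatarOptions = options
        avatarPose = initialPose
    }

    // While the mode is active, a detected change in the subject's pose
    // changes the avatar's appearance; the background is unaffected.
    mutating func subjectPoseDidChange(to newPose: Pose) {
        guard avatarModeActive else { return }
        avatarPose = newPose
    }
}
```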
A method is described. The method is performed at an electronic device having a display apparatus. The method comprises: displaying, via the display apparatus, a media user interface, the media user interface including: a media display region including a representation of a media item; and an effects affordance; detecting a gesture directed to the effects affordance; in response to detecting the gesture directed to the effects affordance, displaying a plurality of effects options for applying effects to the media item concurrently with a representation of the media item, including: in accordance with a determination that the media item is associated with corresponding depth data, the plurality of effects options includes a respective effects option for applying effects based on the depth data; and in accordance with a determination that the media item is not associated with corresponding depth data, the respective effects option is not available for activation in the plurality of effects options.
A non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a media user interface, the media user interface including: a media display region including a representation of a media item; and an effects affordance; detecting a gesture directed to the effects affordance; in response to detecting the gesture directed to the effects affordance, displaying a plurality of effects options for applying effects to the media item concurrently with a representation of the media item, including: in accordance with a determination that the media item is associated with corresponding depth data, the plurality of effects options includes a respective effects option for applying effects based on the depth data; and in accordance with a determination that the media item is not associated with corresponding depth data, the respective effects option is not available for activation in the plurality of effects options.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a media user interface, the media user interface including: a media display region including a representation of a media item; and an effects affordance; detecting a gesture directed to the effects affordance; in response to detecting the gesture directed to the effects affordance, displaying a plurality of effects options for applying effects to the media item concurrently with a representation of the media item, including: in accordance with a determination that the media item is associated with corresponding depth data, the plurality of effects options includes a respective effects option for applying effects based on the depth data; and in accordance with a determination that the media item is not associated with corresponding depth data, the respective effects option is not available for activation in the plurality of effects options.
An electronic device is described. The electronic device comprises: a display apparatus; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display apparatus, a media user interface, the media user interface including: a media display region including a representation of a media item; and an effects affordance; detecting a gesture directed to the effects affordance; in response to detecting the gesture directed to the effects affordance, displaying a plurality of effects options for applying effects to the media item concurrently with a representation of the media item, including: in accordance with a determination that the media item is associated with corresponding depth data, the plurality of effects options includes a respective effects option for applying effects based on the depth data; and in accordance with a determination that the media item is not associated with corresponding depth data, the respective effects option is not available for activation in the plurality of effects options.
An electronic device is described. The electronic device comprises: a display apparatus; one or more input devices; means for displaying, via the display apparatus, a media user interface, the media user interface including: a media display region including a representation of a media item; and an effects affordance; means for detecting a gesture directed to the effects affordance; means, responsive to detecting the gesture directed to the effects affordance, for displaying a plurality of effects options for applying effects to the media item concurrently with a representation of the media item, including: in accordance with a determination that the media item is associated with corresponding depth data, the plurality of effects options includes a respective effects option for applying effects based on the depth data; and in accordance with a determination that the media item is not associated with corresponding depth data, the respective effects option is not available for activation in the plurality of effects options.
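A minimal Swift sketch of the gating behavior described above follows: the set of effects options is built alongside the media item, and the depth-based option is present but not available for activation when the item has no corresponding depth data. The MediaItem and EffectsOption names and the specific option titles are assumptions made for the example.

```swift
import Foundation

// Hypothetical media item; depthData is nil when no depth data is associated.
struct MediaItem {
    let imageData: Data
    let depthData: Data?
}

struct EffectsOption {
    let title: String
    let requiresDepthData: Bool
    var isEnabled: Bool
}

// Builds the effects options displayed concurrently with the media item.
// The depth-based option is disabled when the item lacks depth data.
func effectsOptions(for item: MediaItem) -> [EffectsOption] {
    let hasDepth = item.depthData != nil
    return [
        EffectsOption(title: "Stickers", requiresDepthData: false, isEnabled: true),
        EffectsOption(title: "Avatar", requiresDepthData: true, isEnabled: hasDepth),
    ]
}
```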
A method is described. The method is performed at an electronic device having a display apparatus. The method comprises: displaying, via the display apparatus, a live video communication user interface of a live video communication application, the live video communication user interface including: a representation of a subject participating in a live video communication session, and a first affordance; detecting a gesture directed to the first affordance; and in response to detecting the gesture directed to the first affordance: activating a camera effects mode; and increasing a size of the representation of the subject participating in the live video communication session.
A non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a live video communication user interface of a live video communication application, the live video communication user interface including: a representation of a subject participating in a live video communication session, and a first affordance; detecting a gesture directed to the first affordance; and in response to detecting the gesture directed to the first affordance: activating a camera effects mode; and increasing a size of the representation of the subject participating in the live video communication session.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a live video communication user interface of a live video communication application, the live video communication user interface including: a representation of a subject participating in a live video communication session, and a first affordance; detecting a gesture directed to the first affordance; and in response to detecting the gesture directed to the first affordance: activating a camera effects mode; and increasing a size of the representation of the subject participating in the live video communication session.
An electronic device is described. The electronic device comprises: a display apparatus; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display apparatus, a live video communication user interface of a live video communication application, the live video communication user interface including: a representation of a subject participating in a live video communication session, and a first affordance; detecting a gesture directed to the first affordance; and in response to detecting the gesture directed to the first affordance: activating a camera effects mode; and increasing a size of the representation of the subject participating in the live video communication session.
An electronic device is described. The electronic device comprises: a display apparatus; one or more input devices; means for displaying, via the display apparatus, a live video communication user interface of a live video communication application, the live video communication user interface including: a representation of a subject participating in a live video communication session, and a first affordance; means for detecting a gesture directed to the first affordance; and means, responsive to detecting the gesture directed to the first affordance, for: activating a camera effects mode; and increasing a size of the representation of the subject participating in the live video communication session.
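The following Swift sketch illustrates the paired behavior described above for the live video communication interface: a single input on the first affordance both activates the camera effects mode and enlarges the subject's representation. The LiveVideoUIState name and the 1.5x scale factor are illustrative assumptions, not values from the disclosure.

```swift
import CoreGraphics

struct LiveVideoUIState {
    var cameraEffectsModeActive = false
    var subjectViewSize: CGSize

    // A gesture directed to the first affordance activates the camera effects
    // mode and increases the size of the subject's representation.
    mutating func didTapEffectsAffordance() {
        cameraEffectsModeActive = true
        subjectViewSize = CGSize(width: subjectViewSize.width * 1.5,
                                 height: subjectViewSize.height * 1.5)
    }
}
```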
A method is described. The method is performed at an electronic device having a camera and a display apparatus. The method comprises: displaying, via the display apparatus, a representation of image data captured via the camera, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, and wherein displaying the representation of the virtual avatar includes: in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject, in order for the depth-based display criteria to be met, including as part of the representation of the virtual avatar, a representation of the first portion of the virtual avatar that is displayed in place of the first portion of the subject; and in accordance with a determination, based on the depth data, that the first portion of the virtual avatar does not satisfy the set of depth-based display criteria for the first portion of the subject, excluding, from the representation of the virtual avatar, the representation of the first portion of the virtual avatar and displaying the first portion of the subject in the region that would have been occupied by the first portion of the virtual avatar.
A non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a camera and a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a representation of image data captured via the camera, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, and wherein displaying the representation of the virtual avatar includes: in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject, in order for the depth-based display criteria to be met, including as part of the representation of the virtual avatar, a representation of the first portion of the virtual avatar that is displayed in place of the first portion of the subject; and in accordance with a determination, based on the depth data, that the first portion of the virtual avatar does not satisfy the set of depth-based display criteria for the first portion of the subject, excluding, from the representation of the virtual avatar, the representation of the first portion of the virtual avatar and displaying the first portion of the subject in the region that would have been occupied by the first portion of the virtual avatar.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a camera and a display apparatus, the one or more programs including instructions for: displaying, via the display apparatus, a representation of image data captured via the camera, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, and wherein displaying the representation of the virtual avatar includes: in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject, in order for the depth-based display criteria to be met, including as part of the representation of the virtual avatar, a representation of the first portion of the virtual avatar that is displayed in place of the first portion of the subject; and in accordance with a determination, based on the depth data, that the first portion of the virtual avatar does not satisfy the set of depth-based display criteria for the first portion of the subject, excluding, from the representation of the virtual avatar, the representation of the first portion of the virtual avatar and displaying the first portion of the subject in the region that would have been occupied by the first portion of the virtual avatar.
An electronic device is described. The electronic device comprises: a camera; a display apparatus; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display apparatus, a representation of image data captured via the camera, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, and wherein displaying the representation of the virtual avatar includes: in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject, in order for the depth-based display criteria to be met, including as part of the representation of the virtual avatar, a representation of the first portion of the virtual avatar that is displayed in place of the first portion of the subject; and in accordance with a determination, based on the depth data, that the first portion of the virtual avatar does not satisfy the set of depth-based display criteria for the first portion of the subject, excluding, from the representation of the virtual avatar, the representation of the first portion of the virtual avatar and displaying the first portion of the subject in the region that would have been occupied by the first portion of the virtual avatar.
An electronic device is described. The electronic device comprises: a camera; a display apparatus; means for displaying, via the display apparatus, a representation of image data captured via the camera, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; means for displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, and wherein displaying the representation of the virtual avatar includes: means, in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject, in order for the depth-based display criteria to be met, for including as part of the representation of the virtual avatar, a representation of the first portion of the virtual avatar that is displayed in place of the first portion of the subject; and means, in accordance with a determination, based on the depth data, that the first portion of the virtual avatar does not satisfy the set of depth-based display criteria for the first portion of the subject, for excluding, from the representation of the virtual avatar, the representation of the first portion of the virtual avatar and displaying the first portion of the subject in the region that would have been occupied by the first portion of the virtual avatar.
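The per-portion decision described above can be pictured with the following Swift sketch, in which each region of the frame carries the subject's measured depth and the avatar's simulated depth, and the avatar portion is included only when it lies in front of the corresponding portion of the subject. The Region type, the region names, and the convention that smaller values mean closer to the camera are assumptions made for the example.

```swift
import Foundation

enum DisplayedContent { case avatarPortion, subjectPortion }

struct Region {
    let name: String                  // e.g., "head", "neck" (hypothetical regions)
    let subjectDepth: Float           // from the depth data for the subject
    let avatarSimulatedDepth: Float   // simulated depth assigned to the avatar
}

// A portion of the avatar is included only when its simulated depth places it
// in front of the corresponding portion of the subject; otherwise that region
// shows the subject instead (e.g., hair in front of an avatar's shoulder).
func composite(_ regions: [Region]) -> [String: DisplayedContent] {
    var result: [String: DisplayedContent] = [:]
    for region in regions {
        let avatarIsInFront = region.avatarSimulatedDepth < region.subjectDepth
        result[region.name] = avatarIsInFront ? .avatarPortion : .subjectPortion
    }
    return result
}
```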
Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, devices are provided with faster, more efficient methods and interfaces for displaying visual effects, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace other methods for displaying visual effects.
For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments.
There is a need for electronic devices that provide efficient methods and interfaces for displaying visual effects. For example, while programs already exist for displaying visual effects, these programs are inefficient and difficult to use compared to the techniques below, which allow a user to display visual effects in various applications. Such techniques can reduce the cognitive burden on a user who displays visual effects in an application, thereby enhancing productivity. Further, such techniques can reduce processor and battery power otherwise wasted on redundant user inputs.
Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays.
As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
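One of the substitute-measurement strategies mentioned above, combining readings from multiple force sensors with a weighted average and comparing the estimate against an intensity threshold, can be sketched in Swift as follows. The reading structure, the weighting scheme, and the units are assumptions for the example, not the device's actual calibration.

```swift
import Foundation

// Hypothetical per-sensor reading; the weight might reflect, for example,
// the sensor's distance from the detected contact.
struct ForceSensorReading {
    let force: Double
    let weight: Double
}

// Weighted average of the individual sensor readings (the estimated force).
func estimatedContactIntensity(_ readings: [ForceSensorReading]) -> Double {
    let totalWeight = readings.reduce(0) { $0 + $1.weight }
    guard totalWeight > 0 else { return 0 }
    let weightedSum = readings.reduce(0) { $0 + $1.force * $1.weight }
    return weightedSum / totalWeight
}

// The substitute measurement can be compared directly against an intensity
// threshold expressed in the same units.
func exceedsIntensityThreshold(_ readings: [ForceSensorReading],
                               threshold: Double) -> Bool {
    estimatedContactIntensity(readings) > threshold
}
```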
As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in
Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212,
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, depth camera controller 169, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g., 208,
A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons are, optionally, user-customizable. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
Touch-sensitive display 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch screen 112. Touch screen 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects.
Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112. In an exemplary embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, Calif.
A touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output.
A touch-sensitive display in some embodiments of touch screen 112 is described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A Computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164.
Device 100 optionally also includes one or more depth camera sensors 175.
In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint's Z-axis where its corresponding two-dimensional pixel is located. In some embodiments, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0-255). For example, the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor) in the “three dimensional” scene. In other embodiments, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user's face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
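A minimal Swift sketch of reading a depth map as described above follows, assuming the 8-bit convention in which 255 is the closest point to the viewpoint and 0 the most distant. The DepthMap type, the row-major layout, and the crude nearest-region extraction are assumptions for the example rather than the device's depth representation.

```swift
import Foundation

struct DepthMap {
    let width: Int
    let height: Int
    let pixels: [UInt8]   // row-major, width * height values in 0...255

    // Normalized depth: 0.0 is farthest from the viewpoint, 1.0 is closest.
    func normalizedDepth(x: Int, y: Int) -> Double {
        Double(pixels[y * width + x]) / 255.0
    }

    // Indices of pixels within `tolerance` of the closest value; a crude way
    // to isolate the nearest object (e.g., the contours of a user's face).
    func nearestRegion(tolerance: UInt8 = 10) -> [Int] {
        guard let closest = pixels.max() else { return [] }
        let cutoff = closest >= tolerance ? closest - tolerance : 0
        return pixels.indices.filter { pixels[$0] >= cutoff }
    }
}
```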
Device 100 optionally also includes one or more contact intensity sensors 165.
Device 100 optionally also includes one or more proximity sensors 166.
Device 100 optionally also includes one or more tactile output generators 167.
Device 100 optionally also includes one or more accelerometers 168.
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 (
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter).
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event.
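As a rough illustration of the contact-pattern idea described above, the following sketch classifies a finished sequence of sub-events as a tap or a swipe; the SubEvent type, the movement threshold, and the classification logic are simplifying assumptions, not the contact/motion module's actual implementation.

```swift
import Foundation

// Illustrative sub-event stream for a single contact.
enum SubEvent {
    case fingerDown(CGPoint)
    case fingerDrag(CGPoint)
    case fingerUp(CGPoint)
}

enum Gesture { case tap, swipe, none }

// Classify a finished contact: a tap is finger-down followed by finger-up at
// substantially the same position; a swipe is finger-down, one or more
// finger-dragging events, then finger-up.
func classify(_ events: [SubEvent], tapSlop: Double = 10) -> Gesture {
    guard case let .fingerDown(start)? = events.first,
          case let .fingerUp(end)? = events.last else { return .none }
    let dx = Double(end.x - start.x), dy = Double(end.y - start.y)
    let moved = (dx * dx + dy * dy).squareRoot()
    let hasDrag = events.dropFirst().dropLast().contains {
        if case .fingerDrag = $0 { return true }
        return false
    }
    return (hasDrag && moved > tapSlop) ? .swipe : .tap
}
```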
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephone module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch screen 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety.
Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. For example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152).
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
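The hit view selection just described can be illustrated with a small sketch that walks a view hierarchy and returns the lowest view containing the initial sub-event's location; the ViewNode type and the traversal order are assumptions made for this sketch only, not the hit view determination module's actual data structures.

```swift
import Foundation

// Illustrative view node with a frame in a shared (e.g., window) coordinate space.
final class ViewNode {
    let frame: CGRect
    let subviews: [ViewNode]
    init(frame: CGRect, subviews: [ViewNode] = []) {
        self.frame = frame
        self.subviews = subviews
    }

    // Return the lowest view in the hierarchy that contains the initiating
    // sub-event's location; that view is treated as the hit view.
    func hitView(for point: CGPoint) -> ViewNode? {
        guard frame.contains(point) else { return nil }
        for subview in subviews {
            if let hit = subview.hitView(for: point) { return hit }
        }
        return self
    }
}
```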
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
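For illustration, the event definitions described above (a double tap as an ordered sequence of touch-begin/touch-end phases, a drag as touch begin, movement, and touch end) might be encoded along the following lines; the types and the 0.3-second phase duration are hypothetical values chosen for this sketch.

```swift
import Foundation

// Illustrative encoding of an event definition as an ordered sequence of
// sub-event phases, each limited to a predetermined duration.
enum SubEventPhase { case touchBegin, touchMove, touchEnd, touchCancel }

struct EventDefinition {
    let name: String
    let phases: [SubEventPhase]
    let maxPhaseDuration: TimeInterval
}

// Event 1: double tap = begin, end, begin, end on the same displayed object.
let doubleTap = EventDefinition(
    name: "double tap",
    phases: [.touchBegin, .touchEnd, .touchBegin, .touchEnd],
    maxPhaseDuration: 0.3
)

// Event 2: drag = touch begin, one or more movements, then touch end.
func matchesDrag(_ observed: [SubEventPhase]) -> Bool {
    guard observed.first == .touchBegin, observed.last == .touchEnd else { return false }
    let middle = observed.dropFirst().dropLast()
    return !middle.isEmpty && middle.allSatisfy { $0 == .touchMove }
}
```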
In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
In some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
In some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Each of the above-identified elements in
Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device 100.
It should be noted that the icon labels illustrated in
Although some of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
Exemplary techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Serial No. PCT/US2013/040061, titled “Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, published as WIPO Publication No. WO/2013/169849, and International Patent Application Serial No. PCT/US2013/069483, titled “Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed Nov. 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in their entirety.
In some embodiments, device 500 has one or more input mechanisms 506 and 508. Input mechanisms 506 and 508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device 500 to be worn by a user.
Input mechanism 508 is, optionally, a microphone, in some examples. Personal electronic device 500 optionally includes various sensors, such as GPS sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to I/O section 514.
Memory 518 of personal electronic device 500 can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 700, 900, 1100, 1300, and 1500 (
As used here, the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500 (
As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in
As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
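As an illustrative sketch of the above, the following code smooths a series of intensity samples with an unweighted sliding average and then derives a characteristic intensity from the smoothed values; the window size and the choice of the maximum as the statistic are assumptions, and any of the other statistics listed above (mean, top 10 percentile, and so on) could be substituted.

```swift
import Foundation

// Smooth raw intensity samples with an unweighted sliding average.
// The window size is an illustrative value, not a device parameter.
func slidingAverage(_ samples: [Double], window: Int = 3) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    return samples.indices.map { i in
        let lo = max(0, i - window + 1)
        let slice = samples[lo...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

// Characteristic intensity taken as the maximum of the smoothed samples;
// another statistic could be used instead, as described above.
func characteristicIntensity(of samples: [Double]) -> Double {
    return slidingAverage(samples).max() ?? 0
}
```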
The intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero.
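The relationship among these thresholds can be sketched as a simple classification of a characteristic intensity; the numeric values below are placeholders only, since, as noted earlier, the thresholds are software-adjustable rather than fixed by hardware.

```swift
// Illustrative classification of a characteristic intensity against the
// contact-detection, light press, and deep press thresholds described above.
enum PressType { case none, detected, lightPress, deepPress }

func pressType(for intensity: Double,
               contactThreshold: Double = 0.05,
               lightPressThreshold: Double = 0.3,
               deepPressThreshold: Double = 0.7) -> PressType {
    switch intensity {
    case ..<contactThreshold:    return .none       // contact no longer detected
    case ..<lightPressThreshold: return .detected   // contact detected, no press
    case ..<deepPressThreshold:  return .lightPress
    default:                     return .deepPress
    }
}
```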
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a “down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an “up stroke” of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an “up stroke” of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
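A minimal sketch of the hysteresis behavior described above follows; the 75% hysteresis ratio matches one of the example proportions given, and the simple state-machine structure is an assumption of the sketch rather than the device's actual implementation.

```swift
// Press detection with intensity hysteresis: the press is recognized when the
// intensity rises above the press-input threshold, and released only when it
// falls below a lower hysteresis threshold, avoiding accidental "jitter" inputs.
struct PressDetector {
    let pressThreshold: Double
    let hysteresisRatio: Double
    private(set) var isPressed = false

    init(pressThreshold: Double, hysteresisRatio: Double = 0.75) {
        self.pressThreshold = pressThreshold
        self.hysteresisRatio = hysteresisRatio
    }

    // Returns true when a complete press (down stroke then up stroke) is recognized.
    mutating func update(intensity: Double) -> Bool {
        if !isPressed, intensity >= pressThreshold {
            isPressed = true                                   // down stroke
        } else if isPressed, intensity <= pressThreshold * hysteresisRatio {
            isPressed = false                                  // up stroke, ignoring jitter
            return true
        }
        return false
    }
}
```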
For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold.
Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
In some examples, electronic device 600 includes a depth camera, such as an infrared camera, a thermographic camera, or a combination thereof. In some examples, the device further includes a light-emitting device (e.g., light projector), such as an IR flood light, a structured light projector, or a combination thereof. The light-emitting device is, optionally, used to illuminate the subject during capture of the image by a visible light camera and a depth camera (e.g., an IR camera), and the information from the depth camera and the visible light camera is used to determine a depth map of different portions of the subject captured by the visible light camera. In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint's Z-axis where its corresponding two-dimensional pixel is located. In some examples, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0-255). For example, the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., camera) in the “three dimensional” scene. In other examples, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user's face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction. In some embodiments, the lighting effects described herein are displayed using disparity information from two cameras (e.g., two visual light cameras) for rear facing images and using depth information from a depth camera combined with image data from a visual light camera for front facing images (e.g., selfie images). In some embodiments, the same user interface is used when the two visual light cameras are used to determine the depth information and when the depth camera is used to determine the depth information, providing the user with a consistent experience, even when using dramatically different technologies to determine the information that is used when generating the lighting effects. In some embodiments, while displaying the camera user interface with one of the lighting effects applied, the device detects selection of a camera switching affordance and switches from the front facing cameras (e.g., a depth camera and a visible light camera) to the rear-facing cameras (e.g., two visible light cameras that are spaced apart from each other) (or vice versa) while maintaining display of the user interface controls for applying the lighting effect and replacing display of the field of view of the front facing cameras with the field of view of the rear facing cameras (or vice versa).
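The rear-facing, two-camera case mentioned above relies on disparity between the two visible light cameras. The standard stereo relation depth = focal length × baseline / disparity can be sketched as follows; the calibration values in the example are placeholders, not device 600's actual parameters.

```swift
// Recover depth from the disparity observed between two spaced-apart visible
// light cameras; this is the textbook pinhole stereo model, not a claim about
// the device's calibration or rectification pipeline.
func depth(fromDisparity disparity: Double,
           focalLengthPixels: Double,
           baselineMeters: Double) -> Double? {
    guard disparity > 0 else { return nil }   // zero disparity = point at infinity
    return focalLengthPixels * baselineMeters / disparity
}

// Example with placeholder values: a 12 mm baseline, ~2800 px focal length, and
// 40 px disparity place the point roughly 0.84 m from the cameras.
let exampleDepth = depth(fromDisparity: 40, focalLengthPixels: 2800, baselineMeters: 0.012)
```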
In
In
In
Camera application user interface 615 also includes a region above image display region 620 that includes camera-specific affordances 617 and done affordance 618 for exiting camera application user interface 615. Camera-specific affordances include affordance 617-1 associated with a camera flash function, affordance 617-2 associated with a camera mode function, affordance 617-3 associated with a timer function, and affordance 617-4 associated with a filter function.
Camera application user interface 615 also includes camera options region 625 positioned below image display region 620. Camera options region 625 includes camera selector affordance 627 for switching between cameras (e.g., a rear-facing camera and camera 602), and camera option affordances 619 associated with different capture modes in which a camera can record image data. For example, video affordance 619-1 is associated with a function for activating a video recording capture mode of the camera, and photo affordance 619-2 is associated with a function for activating a still image capture mode of the camera. In the embodiments discussed below with respect to
Camera options region 625 further includes effects affordance 622 for enabling and disabling a mode in which device 600 displays visual effects in image display region 620. This mode of device 600 is often referred to herein as an effects mode.
Camera options region 625 also includes capture affordance 621, which can be selected to capture image data represented in image display region 620. In some embodiments, device 600 captures the image data in a manner based on the currently enabled capture mode (e.g., video recording capture mode or image capture mode). In some embodiments, device 600 captures the image data depending on the type of gesture detected on capture affordance 621. For example, if device 600 detects a tap gesture on capture affordance 621, device 600 captures a still image of the image data represented in image display region 620 at the time the tap gesture occurs. If device 600 detects a tap-and-hold gesture on capture affordance 621, device 600 captures a video recording of the image data represented in image display region 620 during a period of time for which the tap-and-hold gesture persists. In some embodiments, the video recording stops when the finger lifts off of the affordance. In some embodiments, the video recording continues until a subsequent input (e.g., a tap input) is detected at a location corresponding to the affordance. In some embodiments, the captured image (e.g., still image or video recording) is then inserted into message-compose field 608 to be subsequently sent to a participant in the message conversation. In some embodiments, the captured image is sent directly to the participant in the message conversation without inserting the captured image in message-compose field 608.
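As an illustration of the capture behavior just described (a tap captures a still image; a tap-and-hold records video for as long as the touch persists), the following sketch uses a hypothetical CaptureSession protocol and hold threshold; it is not device 600's actual capture pipeline.

```swift
import Foundation

// Hypothetical capture back end; the protocol and method names are assumptions.
protocol CaptureSession {
    func captureStillImage()
    func startVideoRecording()
    func stopVideoRecording()
}

final class CaptureAffordanceHandler {
    private let session: CaptureSession
    private let holdThreshold: TimeInterval
    private var touchDownTime: Date?
    private var isRecording = false

    init(session: CaptureSession, holdThreshold: TimeInterval = 0.5) {
        self.session = session
        self.holdThreshold = holdThreshold
    }

    func touchDown(at time: Date = Date()) {
        touchDownTime = time
    }

    // Called while the touch is held; once the hold threshold elapses, treat the
    // gesture as tap-and-hold and begin recording video.
    func touchStillDown(at time: Date = Date()) {
        guard let down = touchDownTime, !isRecording,
              time.timeIntervalSince(down) >= holdThreshold else { return }
        isRecording = true
        session.startVideoRecording()
    }

    // On liftoff: a short press captures a still image; ending a hold stops the video.
    func touchUp() {
        defer { touchDownTime = nil }
        if isRecording {
            isRecording = false
            session.stopVideoRecording()
        } else {
            session.captureStillImage()
        }
    }
}
```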
In
In
Device 600 also highlights effects affordance 622 to indicate visual effects are enabled for display, and updates camera options region 625 by replacing camera option affordances 619 with visual effects option affordances 624. The visual effects option affordances include avatar effects affordance 624-1 and sticker effects affordance 624-2. Visual effects option affordances 624 correspond to different visual effects that can be applied to the image displayed in image display region 620. Selecting one of the visual effects option affordances (e.g., 624-1 or 624-2) displays a menu with visual effects options corresponding to the selected visual effects option affordance.
A user can activate or deactivate the effects mode of device 600 by selecting effects affordance 622. When effects affordance 622 is highlighted, an effects mode of device 600 is enabled to display visual effects in image display region 620. If a user taps on highlighted affordance 622, effects affordance 622 is no longer highlighted, and the effects mode is disabled such that visual effects are not enabled for display in image display region 620. In some embodiments, when the effects mode is enabled, device 600 updates the image shown in image display region 620 to display one or more visual effects that have been applied to the image (including visual effects that are applied to a live image stream) and, when the effects mode is disabled, device 600 removes or hides the visual effects from the image shown in image display region 620.
In
In
Avatar options 630 correspond to a virtual avatar visual effect applied to a representation of the subject in image display region 620. Specifically, each avatar option 630 corresponds to a virtual avatar that, when selected, is transposed onto the face of the subject in the image display region, while other portions of the image in the image display region (such as the background or other portions of the user, such as their body) remain displayed. A user (e.g., subject 632) positioned in the field-of-view of camera 602 can control visual aspects of the virtual avatar by changing the pose (e.g., rotation or orientation) of their face, including moving various facial features (e.g., winking, sticking out their tongue, smiling, etc.). Details for controlling display and movement of virtual avatars are provided in U.S. patent application Ser. No. 15/870,195, which is hereby incorporated by reference for all purposes.
In some embodiments, a virtual avatar is a representation of the user that can be graphically depicted (e.g., a graphical representation of the user). In some embodiments, the virtual avatar is non-photorealistic (e.g., is cartoonish). In some embodiments, the virtual avatar includes an avatar face having one or more avatar features (e.g., avatar facial features). In some embodiments, the avatar features correspond (e.g., are mapped) to one or more physical features of a user's face such that detected movement of the user's physical features (e.g., as determined based on a camera such as a depth sensing camera) affects the avatar feature (e.g., affects the feature's graphical representation).
In some examples, a user is able to manipulate characteristics or features of a virtual avatar using a camera sensor (e.g., camera module 143, optical sensor 164) and, optionally, a depth sensor (e.g., depth camera sensor 175). As a user's physical features (such as facial features) and position (such as head position, head rotation, or head tilt) change, the electronic device detects the changes and modifies the displayed image of the virtual avatar to reflect the changes in the user's physical features and position. In some embodiments, the changes to the user's physical features and position are indicative of various expressions, emotions, context, tone, or other non-verbal communication. In some embodiments, the electronic device modifies the displayed image of the virtual avatar to represent these expressions, emotions, context, tone, or other non-verbal communication.
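As a rough sketch of this face-driven control loop, per-frame facial feature coefficients and a head pose detected by the device could be mirrored onto the avatar as shown below; the types (TrackedFace, VirtualAvatar) and the string-keyed feature mapping are illustrative assumptions, not the disclosed implementation.

```swift
// Hypothetical sketch: tracked facial feature coefficients and head pose are
// mirrored onto the corresponding features of a virtual avatar each frame.
struct TrackedFace {
    var featureCoefficients: [String: Float]        // e.g., ["jawOpen": 0.7, "eyeBlinkLeft": 1.0]
    var headRotation: (pitch: Float, yaw: Float, roll: Float)
}

struct VirtualAvatar {
    var featureValues: [String: Float] = [:]
    var rotation: (pitch: Float, yaw: Float, roll: Float) = (0, 0, 0)

    mutating func update(from face: TrackedFace) {
        // Map each detected physical feature onto the corresponding avatar feature.
        for (feature, value) in face.featureCoefficients {
            featureValues[feature] = value
        }
        // Mirror the head orientation so the avatar follows the user's pose.
        rotation = face.headRotation
    }
}
```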
In some embodiments, the virtual avatars are customizable avatars (e.g., customizable avatar 835). Customizable avatars are virtual avatars that can be selected and customized by a user, for example, to achieve a desired appearance (e.g., to look like the user). The customizable avatars generally have an appearance of a human character, rather than a non-human character such as an anthropomorphic construct of an animal or other nonhuman object. Additionally, features of the avatar can be created or changed, if desired, using an avatar editing user interface (e.g., such as the avatar editing user interface discussed below with respect to
In some embodiments, the virtual avatars are non-customizable avatars. Non-customizable avatars are virtual avatars that can be selected by a user, but generally are not fundamentally configurable, though their appearance can be altered via face tracking, as described in more detail below. Instead, non-customizable avatars are preconfigured and generally do not have feature components that can be modified by a user. In some embodiments, the non-customizable avatars have an appearance of a non-human character, such as an anthropomorphic construct of an animal or other nonhuman object (e.g., see robot avatar 633, rabbit avatar 634). Non-customizable avatars cannot be created by a user or modified to achieve a significant change in their physical appearance, physical construct, or modeled behavior.
Because robot avatar 630-3 is selected in
As shown in
Once device 600 detects the user's head returning to the field-of-view of camera 602, device 600 continues updating the selected avatar (e.g., robot avatar 633) based on changes detected in the user's face, as shown in
In
In
In
In
In
For example, in
In
In
When visual effects are enabled for device 600 (e.g., effects affordance 622 is shown highlighted), applied visual effects (such as avatars and stickers, for example) can be removed or hidden from image display region 620 by un-selecting highlighted effects affordance 622 (e.g., selecting effects affordance 622 when it is highlighted to disable visual effects). For example, in
In some embodiments, after the visual effects mode is disabled (e.g., by un-selecting highlighted effects affordance 622), the removed visual effects can be restored, for example, by reselecting effects affordance 622 within a predetermined amount of time. For example, in
In
In some embodiments, when the image is captured (e.g., stored as a media item), device 600 encodes depth data into the media item. Storing the depth data in the media permits the later application of depth-based effects (e.g., effects based on the location of objects (e.g., the user's face) in the z direction). In some embodiments, when the image is captured while an effect is applied, the effect is directly encoded in the visual (e.g., RGB) information for improved compatibility with other devices.
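One way to picture the two storage strategies described above is the following sketch; the types and the flag name are hypothetical and do not represent an actual media format.

```swift
import Foundation

// Hypothetical sketch: either keep the depth map alongside the visual data so
// depth-based effects can be applied later, or bake the effects directly into
// the RGB data for broader compatibility.
struct CapturedMediaItem {
    var rgbData: Data            // visual (RGB) information, with or without effects baked in
    var depthMap: [Float]?       // optional per-pixel depth data
    var effectsBakedIntoRGB: Bool
}

func makeMediaItem(rgb: Data, depth: [Float]?, bakeEffectsForCompatibility: Bool) -> CapturedMediaItem {
    if bakeEffectsForCompatibility {
        // Effects are rendered into the RGB data so other devices can display them as-is.
        return CapturedMediaItem(rgbData: rgb, depthMap: depth, effectsBakedIntoRGB: true)
    } else {
        // Depth is preserved so depth-based effects can be applied after capture.
        return CapturedMediaItem(rgbData: rgb, depthMap: depth, effectsBakedIntoRGB: false)
    }
}
```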
In
Device 600 also replaces the camera-specific affordances (e.g., affordances 617 shown in
Device 600 displays camera options region 625, including visual effects option affordances 624. Visual effects option affordances 624 can be selected to display their respective option menus, which can be used to modify captured media item 620-2 (as well as recorded video media item 620-4 discussed below).
Device 600 also updates camera options region 625 to replace capture affordance 621 and camera selector affordance 627 with markup affordance 677, edit affordance 678, and send affordance 680. Markup affordance 677 allows a user to mark-up media item 620-2. Edit affordance 678 allows a user to edit media item 620-2 such as by cropping the image or adjusting other characteristics of media item 620-2. As seen in
Send affordance 680 allows the user to immediately send media item 620-2 to the recipient indicated by recipient identifier 606. For example, in
In some embodiments, media item 620-2 is not immediately sent to the participant in the messaging conversation. For example, in
In some embodiments, sending media item 685 includes sending the media item with encoded depth data in the media item. Sending the media item with depth data in the media permits the later application (e.g., later application by the recipient) of depth-based effects (e.g., effects based on the location of objects (e.g., the user's face) in the z direction). In some embodiments, when the media item is sent, the effects are directly encoded in the visual (e.g., RGB) information for improved compatibility with other devices.
In some embodiments, after sending representation of media item 685, device 600 displays messaging user interface 603 as shown in
In some embodiments, application dock 690 remains displayed in messaging user interface 603 until a user selects message compose field 608 or keyboard region 612 (actions associated with composing text for a message). For example, as shown in
In some embodiments, application display region 699 includes graphical objects 6102 that can be selected for use in messaging user interface 603. In some embodiments, the type of graphical object displayed in application display region 699 depends on the application affordance that was selected to invoke display of the application display region. In the embodiment illustrated in
As previously mentioned, the foregoing embodiments described with respect to
In
In
In
Device 600 also removes stop affordance 6114 and image capture affordance 6116, and displays camera options region 625 having video scrubber 6120 for recorded video media item 620-4, effects affordance 622, edit affordance 678, markup affordance 677, and send affordance 680. Camera options region 625 also includes visual effects option affordances 624. Visual effects option affordances 624 can be selected to display their respective option menus, which can be used to modify captured media item 620-4.
As discussed above with respect to
As described below, method 700 provides an intuitive way for displaying visual effects in a messaging application. The method reduces the cognitive burden on a user for applying visual effects to an image for sending in a messaging application, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to display visual effects in an image faster and more efficiently conserves power and increases the time between battery charges.
The electronic device (e.g., 600) displays (702), via the display apparatus (e.g., 601), a messaging user interface (e.g., 603) of a message conversation including at least a first participant, the messaging user interface including a camera affordance (e.g., 609, a selectable icon associated with a function for activating a camera application).
The electronic device (e.g., 600) detects (704), via the one or more input devices, a first input (e.g., 616, a touch gesture on a touch screen display at a location that corresponds to the camera affordance) directed to the camera affordance.
In response to detecting the first input (e.g., 616), the electronic device (e.g., 600) displays (706) a camera user interface (e.g., 615). The camera user interface includes (708) a capture affordance (e.g., 621, a selectable icon associated with a function for capturing image data using the camera of the electronic device).
In some embodiments, the camera user interface (e.g., 615) includes (710) an effects mode affordance (e.g., 622, a selectable icon associated with a function for activating a mode in which various visual effects are available for modifying image data) associated with a mode in which visual effects are enabled for display in the captured image data. Including an effects mode affordance in the camera user interface enables the user to recognize that certain effects (e.g., visual effects) can be applied to an image via the camera user interface. Providing additional control of the device enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the visual effects can be added to a representation of image data within a field-of-view of the camera. In some embodiments, the visual effects can be added to captured image data. In some embodiments, the visual effects are based on depth data. In some embodiments, the electronic device detects, via the one or more input devices, a selection (e.g., 623) of the effects mode affordance. In some embodiments, in response to detecting the selection of the effects mode affordance, the electronic device transitions the electronic device from a first camera mode (e.g., standard camera mode) to a second camera mode, different from the first camera mode (e.g., an effects camera mode; a mode in which various visual effects are available for modifying image data). In some embodiments, while the device is in the second camera mode, a visual indication that the second camera mode is operative is displayed (e.g., the effects mode affordance is highlighted).
In some embodiments, further in response to detecting selection (e.g., 623) of the effects mode affordance (e.g., 622), the electronic device (e.g., 600) ceases to display the one or more camera mode affordances. In some embodiments, further in response to detecting selection of the effects mode affordance, the electronic device displays a plurality of effects option affordances (e.g., 624, selectable icons each associated with a function for creating a visual effect). In some embodiments, the effects option affordances include a sticker affordance (e.g., 624-2) and/or an avatar affordance (e.g., 624-1) at a location in the camera user interface (e.g., 615) that was previously occupied by the one or more camera mode affordances. In some embodiments, the locations at which the effects option affordances are displayed are any locations in a particular region (e.g., camera effects region 625) in which the camera mode affordances were previously displayed. In some embodiments, the locations at which respective effects option affordances are displayed are the same locations that respective camera mode affordances were displayed in the region. In some embodiments, displaying the effects option affordances includes replacing the camera mode affordances with the effects option affordances.
In some embodiments, the electronic device (e.g., 600) detects, via the one or more input devices, selection (e.g., 654) of a first one of the effects option affordances (e.g., stickers affordance 624-2). In some embodiments, in response to detecting selection of the first one of the effects option affordances, the electronic device ceases to display the plurality of effects option affordances (e.g., 624) and displays a plurality of selectable graphical icons (e.g., 658, stickers). Ceasing to display the plurality of effects option affordances and displaying the plurality of selectable graphical icons in response to detecting selection of the first one of the effects option affordances enables the user to quickly and easily recognize that the first one of the effects option affordances relates to graphical icon (e.g., sticker) options, thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, displaying the stickers includes displaying a region (e.g., sticker menu 656) over the effects option affordances, where the region includes a plurality of sticker options that can be selected for display on an image represented in the camera user interface. In some embodiments, a user selects (e.g., 660) a sticker (e.g., 658-1) by tapping on the sticker, and the sticker is automatically displayed on the image (e.g., at a default location such as the center of the image). In some embodiments, a user selects a sticker by touching the sticker and dragging it from the sticker menu onto the image. In some embodiments, while displaying the capture affordance and further in response to detecting selection of the first one of the effects option affordances, the electronic device ceases to display the capture affordance.
In some embodiments, the electronic device (e.g., 600) detects, via the one or more input devices, selection (e.g., 626) of a second one of the effects option affordances (e.g., avatar affordance 624-1). In some embodiments, in response to detecting selection of the second one of the effects option affordances, the electronic device ceases to display the plurality of effects option affordances (e.g., 624) and displays an avatar selection region (e.g., avatar menu 628) having a plurality of avatar affordances (e.g., 630, displayed in a linear arrangement) (e.g., affordances that represent avatars). Ceasing to display the plurality of effects option affordances and displaying an avatar selection region in response to detecting selection of the second one of the effects option affordances enables the user to quickly and easily recognize that the second of the effects option affordances relates to avatar selection, thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the avatar affordances correspond to avatars that are customizable, non-customizable, or a combination thereof. In some embodiments, displaying the avatar selection region includes displaying the region (e.g., avatar menu 628) over the effects option affordances, the region including a plurality of avatar affordances that can be selected to display a corresponding avatar on an image represented in the camera user interface.
In some embodiments, the electronic device (e.g., 600) detects, via the one or more input devices, a swipe input (e.g., 646, a vertical swipe gesture) on the avatar selection region (e.g., 628). In some embodiments, in response to detecting the swipe input on the avatar selection region, the electronic device increases a size of the avatar selection region (e.g., 628-1) and displays the plurality of avatar affordances (e.g., 630) arranged in a matrix. Increasing a size of the avatar selection region and displaying the plurality of avatar affordances arranged in a matrix in response to detecting the swipe input on the avatar selection region enables the user to (concurrently) view one or more additional selectable avatars that were not (concurrently) visible in the avatar selection region. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, increasing the size of the avatar display region includes extending the avatar display region in a vertical direction to present a full-screen display of the avatar display region, with the avatar affordances displayed in a matrix in the avatar display region.
In some embodiments, the camera user interface (e.g., 615) further includes a first representation of image data (e.g., a live camera preview 620-1). Providing the first representation of image data (e.g., a live camera preview) provides visual feedback about one or more modifications (to an image) made by the user prior to saving/confirming the modifications. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, further in response to detecting selection of the effects mode affordance (e.g., 624-1, 624-2), the electronic device (e.g., 600), in accordance with a determination that the first representation of image data corresponds to image data obtained from a second camera (e.g., a rear-facing camera), ceases to display the first representation of image data and displays a second representation of image data (e.g., a live camera preview), the second representation of image data corresponding to image data obtained from the camera (e.g., a front-facing camera). In some embodiments, a representation of image data corresponding to the front-facing camera includes a representation of a user positioned in the field-of-view of the front-facing camera.
In some embodiments, while the electronic device (e.g., 600) is in the second camera mode (e.g., an effects camera mode; a mode in which various visual effects are available for modifying image data), the electronic device receives a request to transition (e.g., a selection of an active visual effects affordance) to the first camera mode (e.g., the normal mode, a non-effects mode). In some embodiments, in response to receiving the request to transition to the first camera mode, the electronic device transitions the electronic device from the second camera mode to the first camera mode. In some embodiments, in accordance with a first visual effect being active (e.g., actively applied to captured image data or a preview of image data for capture), the electronic device deactivates (e.g., disabling, ceasing to display the displayed first visual effect) the first visual effect.
In some embodiments, after deactivating the first visual effect, the electronic device (e.g., 600) detects, via the one or more input devices, subsequent selection of the effects mode affordance (e.g., 624-1, 624-2). In some embodiments, in response to detecting the subsequent selection of the effects mode affordance, the electronic device, in accordance with a determination that the subsequent selection of the effects mode affordance occurs within a predetermined amount of time after deactivating the first visual effect, re-activates the first visual effect. Re-activating the first visual effect in accordance with the determination that the subsequent selection of the effects mode affordance occurs within a predetermined amount of time after deactivating the first visual effect enables a user to quickly and easily revert back to a previous visual effect (e.g., without having to re-select/re-create the effect). Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, after selecting the effects mode affordance to remove the visual effects from the image, if the effects mode affordance is selected again within a predetermined time period, the removed visual effects are restored to the image.
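The time-limited restoration described above might be modeled along these lines; the 10-second window and the names are illustrative assumptions only.

```swift
import Foundation

// Hypothetical sketch: a deactivated visual effect is restored only if the
// effects mode affordance is re-selected within a predetermined window.
struct EffectRestorer {
    let restoreWindow: TimeInterval = 10.0          // example value for the predetermined time
    private(set) var deactivatedEffect: String? = nil
    private var deactivationTime: Date? = nil

    mutating func deactivate(_ effect: String, at time: Date = Date()) {
        deactivatedEffect = effect
        deactivationTime = time
    }

    // Called when the effects mode affordance is selected again; returns the
    // effect to re-activate, or nil if the window has elapsed.
    mutating func reactivateIfEligible(at time: Date = Date()) -> String? {
        guard let effect = deactivatedEffect,
              let deactivatedAt = deactivationTime,
              time.timeIntervalSince(deactivatedAt) <= restoreWindow else { return nil }
        deactivatedEffect = nil
        deactivationTime = nil
        return effect
    }
}
```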
The electronic device (e.g., 600) detects (712), via the one or more input devices, a second input (e.g., 676, a touch gesture on a touch screen display at a location that corresponds to the capture affordance) directed to the capture affordance (e.g., 621).
In response to detecting the second input (714), the electronic device (e.g., 600) captures (716) image data using the camera (e.g., 602). In some embodiments, capturing the image data includes, in accordance with a value of a characteristic (e.g., a duration of contact) of the second input meeting a first capture mode criteria (e.g., less than a threshold duration), capturing the image data in a first image capture mode (e.g., a photo capture mode, a still image capture mode). In some embodiments, capturing the image data includes, in accordance with the value of the characteristic of the second input meeting a second capture mode criteria (e.g., greater than a second threshold duration), capturing the image data in a second image capture mode (e.g., a video capture mode, a continuous capture mode). In some embodiments, the capture affordance (e.g., 621) is a multi-function capture affordance. In some embodiments, the electronic device captures a photo (e.g., a still image) when a tap is detected on the capture affordance. In some embodiments, the electronic device captures a video (e.g., a continuous image) when a press-and-hold gesture is detected on the capture affordance.
In response to detecting the second input (e.g., 676) (714), the electronic device (e.g., 600) ceases (718) to display the capture affordance (e.g., 621).
In some embodiments, after capturing image data using the camera (e.g., 602), the electronic device (e.g., 600) displays a mark-up affordance (e.g., 677), an edit affordance (e.g., 678), and a retake affordance (e.g., 679). In some embodiments, while displaying the mark-up affordance, the edit affordance, and the retake affordance, the electronic device receives, via the one or more input devices, a fourth user input. In some embodiments, in response to detecting the fourth user input, in accordance with the fourth user input corresponding to the edit affordance, the electronic device initiates a process for editing the captured image data. In some embodiments, the process for editing includes displaying one or more affordances for editing the captured image data. In some embodiments, in response to detecting the fourth user input, in accordance with the fourth user input corresponding to the mark-up affordance, the electronic device initiates a process for marking-up the captured image data. In some embodiments, the process for marking-up includes displaying one or more affordances for marking up the captured image data. In some embodiments, in response to detecting the fourth user input, in accordance with the fourth user input corresponding to the retake affordance, the electronic device initiates a process for retaking the captured image data. In some embodiments, initiating the process for retaking the captured image data includes capturing new image data and replacing the captured image data with the new image data.
In response to detecting the second input (e.g., 676) (714), the electronic device (e.g., 600) displays (720) a send affordance (e.g., 680, a selectable icon associated with a function for sending captured image data to a participant in a conversation, or for presenting the captured image data in a compose region prior to subsequent sending) at a location in the camera user interface (e.g., 615) that was previously occupied by the capture affordance (e.g., 621). Displaying the send affordance at the location in the camera user interface that was previously occupied by the capture affordance provides visual feedback that the captured image is ready to be transmitted to an intended recipient. Providing visual feedback to the user without cluttering the UI enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the location at which the send affordance (e.g., 680) is displayed is any location in a particular region (e.g., camera effects region 625) in which the capture affordance was previously displayed. In some embodiments, the location at which the send affordance is displayed is the same location that the capture affordance was displayed in the region. In some embodiments, displaying the send affordance includes replacing the capture affordance with the send affordance.
In some embodiments, while displaying the send affordance (e.g., 680), the electronic device (e.g., 600) displays (722) a representation of the first participant (e.g., an icon, picture, avatar, or other identifier associated with the first participant). Displaying the representation of the first participant while displaying the send affordance enables the user to quickly and easily recognize the intended recipient, thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the representation of the first participant serves as an indication to the user that the captured photo will be sent to the first participant. In some embodiments, the representation of the first participant is not displayed (724) prior to capturing the image data using the camera (e.g., 602). In some embodiments, the camera user interface (e.g., 615) includes camera-specific affordances (e.g., 619, corresponding to filters, lighting options, timer options, etc.) that are displayed prior to capturing the image data. In some embodiments, displaying the representation of the first participant replaces the displayed camera-specific affordances with the representation of the first participant.
The electronic device (e.g., 600) detects (726), via the one or more input devices, a third input (e.g., a touch gesture on a touch screen display at a location that corresponds to the send affordance 680) directed to the send affordance.
In response to detecting the third input, the electronic device (e.g., 600) initiates (728) a process (e.g., immediately sending or presenting the captured image data in a compose region prior to subsequent sending) to send the captured image data to the first participant.
In some embodiments, prior to detecting the third input directed to the send affordance (e.g., 680), the electronic device (e.g., 600) displays a done affordance (e.g., a selectable icon associated with a function for closing the camera user interface to display the messaging user interface). In some embodiments, the electronic device detects, via the one or more input devices, selection of the done affordance. In some embodiments, in response to detecting selection of the done affordance, the electronic device displays the messaging user interface (e.g., 603), the messaging user interface having a message-compose region (e.g., 608). In some embodiments, in response to detecting selection of the done affordance, the electronic device displays a representation of the captured image data in the message-compose region. In some embodiments, selecting the done affordance closes the camera user interface and displays the captured image data in a message-compose field of the messaging user interface, without sending the captured image data.
In some embodiments, prior to detecting the third user input, the electronic device (e.g., 600) is in a first image capture mode (e.g., photo capture mode, video capture mode). In some embodiments, further in response to detecting the third user input, the electronic device maintains the first image capture mode. In some embodiments, the electronic device can be configured (e.g., user configured) to capture image data according to a plurality of modes (e.g., photo capture mode, video capture mode). In some embodiments, a selection of an image capture mode is persistent, even when the electronic device transitions from a first camera mode (e.g., a standard camera mode, a non-effects camera mode) to a second camera mode (e.g., an effects camera mode).
In some embodiments, initiating the process to send the captured image data to the first participant includes sending the captured image data to the first participant (e.g., without displaying the messaging user interface 603). In some embodiments, selecting the send affordance (e.g., 680) from the camera user interface immediately sends the captured image data to another participant in a message conversation without displaying any intermediate user interface or requiring further input from the user.
In some embodiments, initiating the process to send the captured image data to the first participant includes re-displaying the messaging user interface (e.g., 603), where the messaging user interface further includes a keyboard region (e.g., 612) and an application menu affordance (e.g., 610, a selectable icon associated with a function for displaying an application menu user interface). Re-displaying the messaging user interface as part of initiating the process to send the captured image data to the first participant provides visual feedback that the captured image data is being sent via the message conversation. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the application menu affordance is displayed adjacent a message-compose field in the messaging user interface. In some embodiments, the electronic device (e.g., 600) detects, via the one or more input devices, selection of the application menu affordance. In some embodiments, in response to detecting selection of the application menu affordance, the electronic device displays an application menu region adjacent (e.g., above) the keyboard region, where the application menu region has a plurality of application affordances (e.g., selectable icons each associated with a function for initiating an application associated with the respective application affordance). In some embodiments, the application affordances include stickers affordances and avatar affordances.
In some embodiments, the electronic device (e.g., 600) detects a fourth input on the messaging user interface (e.g., 603). In some embodiments, in accordance with a determination that the fourth input corresponds to a location of the keyboard region (e.g., 612) or a location of a message-compose region (e.g., 608) in the messaging user interface, the electronic device ceases to display the application menu region and displays a text-suggestion region (e.g., a region having a listing of suggested words for convenient selection by a user) at a location in the messaging user interface that was previously occupied by the application menu region.
In some embodiments, the electronic device (e.g., 600) detects selection of one of the plurality of application affordances (e.g., a stickers affordance or an avatar affordance) in the application menu region. In some embodiments, in response to detecting selection of the application affordance (e.g., 610), the electronic device ceases to display the keyboard region (e.g., 612) and displays an application display region at a location in the messaging user interface (e.g., 603) that was previously occupied by the keyboard region, where the application display region includes a plurality of graphical objects (e.g., avatars or stickers) corresponding to the selected application affordance. In some embodiments, the selected application affordance is a stickers affordance and the graphical objects displayed in the application display region are stickers. In some embodiments, the selected application affordance is an avatar affordance and the graphical objects displayed in the application display region are avatars (e.g., customizable avatars and/or non-customizable avatars).
Note that details of the processes described above with respect to method 700 (e.g.,
In
In
Camera application user interface 815 includes image display region 820 which displays a representation of image data such as, for example, streamed image data (e.g., a live camera preview, live camera recording, or live video communications session) representing objects positioned within a field-of-view of a camera (e.g., a rear-facing camera or camera 602), or a media item such as, for example, a photograph or a video recording. In the embodiment illustrated in
Camera application user interface 815 also includes a region above image display region 820 that includes camera-specific affordances 817. Camera-specific affordances include affordance 817-1 associated with a camera flash function, affordance 817-2 associated with a camera mode function, affordance 817-3 associated with a timer function, and affordance 817-4 associated with a filter function.
Camera application user interface 815 also includes camera options region 825 (similar to camera options region 625) positioned below image display region 820 (similar to image display region 620). Camera options region 825 includes camera selector affordance 827 for switching between cameras (e.g., a rear-facing camera and camera 602), and camera option affordances 819 associated with different capture modes in which a camera can record image data. For example, video affordance 819-1 is associated with a function for activating a video recording capture mode of the camera, and photo affordance 819-2 is associated with a function for activating a still image capture mode of the camera. In the embodiments discussed below with respect to
Camera options region 825 further includes effects affordance 822 for enabling and disabling a mode (visual effects mode, effects mode) of device 600 in which device 600 is enabled or disabled for displaying visual effects in image display region 820. Effects affordance 822 is similar to effects affordance 622 and, therefore, has the same functionality as effects affordance 622, unless specified otherwise. Accordingly, effects affordance 822 can be selected to enable display of visual effects, and deselected to disable display of visual effects.
Camera options region 825 also includes capture affordance 821, which functions in a manner similar to capture affordance 621 discussed above. Capture affordance 821 can be selected to capture image data represented in image display region 820. In some embodiments, device 600 captures the image data in a manner based on the currently enabled capture option (e.g., video recording capture mode or image capture mode). In some embodiments, device 600 captures the image data depending on the type of gesture detected on capture affordance 821. For example, if device 600 detects a tap gesture on capture affordance 821, device 600 captures a still image of the image data represented in image display region 820 at the time the tap gesture occurs. If device 600 detects a tap-and-hold gesture on capture affordance 821, device 600 captures a video recording of the image data represented in image display region 820 during a period of time for which the tap-and-hold gesture persists. In some embodiments, the video recording stops when the finger lifts off of the affordance. In some embodiments, the video recording continues until a subsequent input (e.g., a tap input) is detected at a location corresponding to the affordance. In some embodiments, the captured image (e.g., still image or video recording) can be shared with other devices, for example, using a messaging application.
In
In
Device 600 also highlights effects affordance 822 to indicate visual effects are enabled for display, and updates camera options region 825 by replacing camera option affordances 819 with visual effects option affordances 824. The visual effects option affordances include avatar effects affordance 824-1 and sticker effects affordance 824-2. Visual effects option affordances 824 are similar to visual effects option affordances 624 described above. Visual effects option affordances 824 correspond to different visual effects that can be applied to the image displayed in image display region 820. Selecting one of the visual effects option affordances (e.g., 824-1 or 824-2) displays a menu with visual effects options corresponding to the selected visual effects option affordance.
In
Avatar options 830 have a similar functionality to avatar options 630. Thus, avatar options 830 correspond to a virtual avatar visual effect applied to a representation of the subject in image display region 820. Specifically, each avatar option 830 corresponds to a virtual avatar that, when selected, is transposed onto the face of the subject in the image display region, while other portions of the image in the image display region (such as a background or other portions of the user, such as their body) remain displayed. A user (e.g., subject 832) positioned in the field-of-view of camera 602 can control visual aspects of the virtual avatar by changing the pose (e.g., rotation or orientation) of their face as discussed above.
Avatar options 830 can be scrolled by gestures on the avatar options menu 828. For example, a horizontal gesture 844a is shown in
In
In
As shown in
In some embodiments, when device 600 no longer detects the subject's face within the field-of-view of camera 602, device 600 again applies a blurring effect 803 (similar to the blurring effect 644) to the background and displays prompt 838 instructing the user to return their face to the field-of-view of camera 602. In the embodiment shown in
In some embodiments, when device 600 detects the user's face returning to the field-of-view of camera 602, device 600 displays avatar 835 moving from the center position of image display region 820 to the position of the user's face, and resumes modifying the avatar based on detected changes to the user's face.
In some embodiments, avatar options menu 828 can be expanded with a vertical gesture (e.g., 805) to display an enlarged version of the avatar options menu as shown in
In some embodiments, device 600 displays different avatars on the user's head in response to detecting swipe gestures on image display region 820. For example, in
As the avatar options 830 begin to scroll, the currently selected avatar option (e.g. robot avatar option 830-3 in
In the embodiment illustrated in
In some embodiments, applied visual effects can include a lighting effect such as shadow 851 shown on the subject's neck below applied custom avatar 835 or light reflections on glasses. As device 600 modifies avatar 835 to mirror the real-time movements of the user, device 600 also modifies the lighting effects on avatar 835 and those projected onto the subject, including moving displayed locations of reflections and shadows based on a relative position of a modeled light source and avatar 835.
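For illustration of the shadow placement, a simple projection of the avatar's position onto a ground plane along the direction of a modeled light source is sketched below; the function and its parameters are hypothetical, and actual lighting would be computed by the rendering pipeline.

```swift
// Hypothetical sketch: project the avatar position onto a ground plane along
// the ray from a modeled light source to find where its shadow should appear.
func shadowPosition(avatarPosition: SIMD3<Float>,
                    lightPosition: SIMD3<Float>,
                    groundY: Float) -> SIMD2<Float> {
    let direction = avatarPosition - lightPosition
    guard direction.y != 0 else {
        // Degenerate case (light level with the avatar): drop the shadow straight down.
        return SIMD2<Float>(avatarPosition.x, avatarPosition.z)
    }
    let t = (groundY - lightPosition.y) / direction.y
    let hit = lightPosition + t * direction
    return SIMD2<Float>(hit.x, hit.z)   // shadow location on the ground plane
}
```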
As shown in
Device 600 modifies avatar 835 when different avatar feature options are selected. For example, in
In response to detecting selection of done affordance 812, device 600 exits avatar editing user interface 808 and returns to camera application user interface 815 showing the selected avatar option 830-5 and corresponding avatar 835 updated based on the hair texture option selected in the avatar editing user interface.
In response to detecting selection of cancel icon 850 in
In response to detecting selection of sticker effects affordance 824-2, device 600 displays sticker options menu 856 having a scrollable listing of stickers 858 in
In
In
In
Device 600 also replaces the camera-specific affordances (e.g., affordances 817 shown in
Device 600 also updates camera options region 825 to replace capture affordance 821 and camera selector affordance 827 with markup affordance 877, edit affordance 878, and share affordance 880. Markup affordance 877 allows a user to mark-up media item 820-2. Edit affordance 878 allows a user to edit media item 820-2 such as by cropping the image or adjusting other characteristics of media item 820-2. Share affordance 880 allows a user to send media item 820-2 to another device, such as, for example in a messaging application or email application.
Device 600 displays camera options region 825, including visual effects option affordances 824. Visual effects option affordances 824 can be selected to display their respective option menus, which can be used to modify captured media item 820-2 (as well as recorded video media item 820-4 discussed below). For example,
In
Stickers can be added to recorded media items that are in a video format in a manner that is similar to that described above for media item 820-2 (still image). For example,
In
In some embodiments, displayed stickers can have different modeled behaviors in a video media item (or live video stream). For example, some stickers have an appearance of being applied to the display (e.g., 601) and remain static as objects in the image data move. An example of such a sticker is demonstrated by heart sticker 858-3 in
Other stickers have the appearance of being applied to the display and moving to follow an object (e.g., an item in the field of view of the camera including, for example, an avatar or a representation of the subject) in the image. In some embodiments, the sticker is placed at a location remote from the object the sticker follows. An example of such a sticker is demonstrated by helmet sticker 858-1 in
Yet other stickers have the appearance of being applied to an object in the field of view of camera 602 and move to follow the object within the field of view (e.g., having an appearance of depth as the sticker adjusts with the object in the image). An example of such a sticker is demonstrated by rabbit sticker 858-2 in
In some embodiments, a sticker's behavior changes based on its position relative to objects in the media item (or live camera preview or field of view of the camera). In some embodiments, the behavior of a sticker changes in response to detecting changes in the position of the sticker relative to the object. Examples of such stickers are shown in
In some embodiments, a sticker or other virtual object that is applied to a location of the representation of the field of view of one or more cameras that includes a respective object (e.g., a face, hand, or other body part of a user of the device) being tracked in three dimensions (e.g., via depth information from a depth sensor) is attached to the respective object such that the size and/or orientation of the virtual object changes as the distance of the respective object from the one or more cameras and/or the orientation of the respective object with respect to the one or more cameras changes in addition to moving laterally (e.g., side to side and/or up and down) as the respective object moves laterally (e.g., side to side and/or up and down) in the field of view of the one or more cameras. For example, as shown in
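The three behaviors described above (remaining static on the display, following the tracked object laterally, and being attached to the object in three dimensions) could be sketched as follows; the types and the per-frame update inputs are illustrative assumptions, not the disclosed implementation.

```swift
// Hypothetical sketch of the three sticker behaviors described above.
enum StickerBehavior {
    case staticOnDisplay            // stays put as objects in the image move
    case followsLaterally           // tracks side-to-side and up/down motion only
    case attachedInThreeDimensions  // also scales and rotates with the tracked object
}

struct Sticker {
    var behavior: StickerBehavior
    var position: SIMD2<Float>      // location in the displayed image
    var scale: Float = 1.0
    var rotation: Float = 0.0       // radians

    // Per-frame update given the tracked object's lateral offset, the scale
    // change implied by its distance from the camera, and its rotation.
    mutating func update(objectOffset: SIMD2<Float>, objectScale: Float, objectRotation: Float) {
        switch behavior {
        case .staticOnDisplay:
            break                            // unaffected by movement in the scene
        case .followsLaterally:
            position += objectOffset         // lateral tracking only
        case .attachedInThreeDimensions:
            position += objectOffset
            scale *= objectScale             // appears closer or farther with the object
            rotation += objectRotation       // turns with the object
        }
    }
}
```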
In
In
As shown in
In
As demonstrated in
In some embodiments, stickers can have a behavior that is determined based on conditions (e.g., position of the sticker relative to other objects, the presence (or absence) of objects when the sticker is placed, etc.) of the sticker's placement. For example, in some embodiments, a sticker can have a first type of behavior when positioned remote from an object or region, and a second type of behavior when positioned on the object or region. In some embodiments, a sticker can have a third type of behavior if an object is not present when the sticker is placed. In some embodiments, the behavior of the sticker can change based on changes to the sticker's placement (e.g., relative to an object).
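A compact sketch of choosing a behavior from the conditions of placement, under the assumption (with hypothetical names) that the relevant object is the subject's face:

```swift
// Hypothetical sketch: the sticker's behavior is determined by the conditions
// of its placement and can change when the sticker is repositioned.
enum PlacementBehavior {
    case remoteFromObject   // first type: follows lateral and forward/backward movement only
    case onObject           // second type: also follows the object's rotation (pitch and yaw)
    case noObjectPresent    // third type: remains where it was placed
}

func behaviorForPlacement(isOnTrackedFace: Bool, faceDetected: Bool) -> PlacementBehavior {
    guard faceDetected else { return .noObjectPresent }
    return isOnTrackedFace ? .onObject : .remoteFromObject
}

// Example: dragging a sticker from the subject's shoulder onto their face
// switches it from .remoteFromObject to .onObject; dragging it away reverses that.
```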
For example,
In
When the stickers 858 are positioned away from (e.g., not on) the subject's face, they have a first type of behavior. For example, the stickers follow movement of the subject's face laterally (e.g., side to side and up/down), forwards (e.g., towards camera 602), and backwards (e.g., away from camera 602), but not rotational movement of the subject's face (e.g., not following the pitch and yaw of the subject's face).
In
In some embodiments, device 600 provides an indication when the sticker moves to a location that corresponds to the object the sticker is following. For example, in
In some embodiments, when device 600 detects that the position of the sticker (or other visual effect) moves to a location that corresponds to the object the sticker is following, device 600 modifies the appearance of the sticker based on the position of the object and modifies the behavior of the sticker (in some embodiments, the behavior of the sticker is modified after detecting termination of input 889). Device 600 also modifies the appearance and behavior of the sticker in an opposite manner when the sticker is moved from the location that corresponds to the object the sticker is following, to a location remote from the object.
In
Device 600 also changes the behavior of the sticker to a second type of behavior (e.g., a behavior different than the first type of behavior). As shown in
In some embodiments, brackets 890 persist with input 889 while input 889 is positioned on the subject's face (e.g., within the brackets). Thus, when device 600 detects termination of input 889, device 600 ceases displaying brackets 890, as shown in
In
In
In
The display of visual effects, as discussed herein, is similar across different embodiments. For example, unless specified otherwise, visual effects can be displayed and manipulated in a similar manner in a camera application, a messaging application, an avatar editing application, a live video messaging application, or any other application discussed herein. Additionally, visual effects can be displayed and manipulated in a similar manner across different types of image data. For example, unless specified otherwise, visual effects can be displayed and manipulated in a similar manner in a live camera preview, a media item, streamed image data, or any other image data discussed herein. For example,
In
In
As described below, method 900 provides an intuitive way for displaying visual effects in a camera application. The method reduces the cognitive burden on a user for applying visual effects to an image viewed in a camera application, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to display visual effects in an image faster and more efficiently conserves power and increases the time between battery charges.
The electronic device (e.g., 600) displays (902), via the display apparatus (e.g., 601), a camera user interface (e.g., 815). The camera user interface includes (904) a camera display region (e.g., 820) including a representation (e.g., 835) of image data captured via the camera (e.g., 602).
In some embodiments, the image data includes (906) depth data (e.g., image data that includes a depth aspect (e.g., depth data independent of RGB data) of a captured image or video). In some embodiments, the image data includes at least two components: an RGB component that encodes the visual characteristics of a captured image, and depth data that encodes information about the relative spacing relationship of elements within the captured image (e.g., the depth data encodes that a user is in the foreground, and background elements, such as a tree positioned behind the user, are in the background). In some embodiments, the depth data is a depth map. In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint's z-axis where its corresponding two-dimensional pixel is located. In some examples, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0-255). For example, the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., camera) in the “three dimensional” scene. In other examples, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user's face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
In some embodiments, the depth data has a first depth component (e.g., a first portion of depth data that encodes a spatial position of the subject in the camera display region; a plurality of depth pixels that form a discrete portion of the depth map, such as a foreground or a specific object) that includes the representation of the subject in the camera display region (e.g., 820). In some embodiments, the depth data has a second depth component (e.g., a second portion of depth data that encodes a spatial position of the background in the camera display region; a plurality of depth pixels that form a discrete portion of the depth map, such as a background), separate from the first depth component, the second depth component including the representation of the background in the camera display region. In some embodiments, the first depth component and second depth component are used to determine a spatial relationship between the subject in the camera display region and the background in the camera display region. This spatial relationship can be used to distinguish the subject from the background. This distinction can be exploited to, for example, apply different visual effects (e.g., visual effects having a depth component) to the subject and background. In some embodiments, all areas of the image data that do not correspond to the first depth component (e.g., areas of the image data that are out of range of the depth camera) are segmented out (e.g., excluded) from the depth map.
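As a simplified sketch of using such a depth map to separate the first depth component (the subject) from the second (the background), one could threshold the 0-255 depth values described above; the threshold and type names are assumptions for illustration.

```swift
// Hypothetical sketch: classify pixels as subject (foreground) or background by
// thresholding depth values, where 255 is closest to the viewpoint and 0 is farthest.
struct DepthMap {
    let width: Int
    let height: Int
    let pixels: [UInt8]     // width * height depth values

    // Returns a mask that is true where a pixel is considered part of the subject.
    func subjectMask(nearThreshold: UInt8 = 128) -> [Bool] {
        return pixels.map { $0 >= nearThreshold }
    }
}

// A depth-based visual effect (e.g., blurring the background) can then be applied
// only to pixels whose mask value is false.
```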
In some embodiments, the representation (e.g., 835) of image data captured via the camera (e.g., 602) is a live camera preview (e.g., a stream of image data that represents what is in the field of view of the camera).
In some embodiments, while the first camera display mode is active, the electronic device (e.g., 600) detects a swipe gesture on the camera display region (e.g., 820). In some embodiments, in response to detecting the swipe gesture on the camera display region, the electronic device (e.g., 600) changes an appearance of the displayed representation of the selected avatar option in the camera display region from a first appearance (e.g., an appearance based on the currently selected avatar option) to a second appearance (e.g., an appearance based on a different avatar option (e.g., a null avatar option or an avatar option corresponding to a different avatar, including avatars of different types (e.g., customizable, non-customizable))), where the second appearance corresponds to a different one of the plurality of avatar options (e.g., a different avatar option included in the avatar selection region). Changing the appearance of the displayed representation of the selected avatar option in response to detecting a swipe gesture on the camera display region provides the user with a quick and easy method to change a representation of a selected avatar. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when the different one of the plurality of avatar options is a null avatar option, the device (e.g., 600) ceases to display the representation of the avatar on the representation of the subject (e.g., the device foregoes replacing image data of the user's head with a virtual avatar). In some embodiments, when the different one of the plurality of avatar options is an avatar option of a different avatar character (including a customizable or non-customizable avatar character), the device replaces the selected avatar character with the different avatar character (e.g., the device replaces the representation of the avatar with a representation of a different avatar). In some embodiments, replacing the selected avatar character with the different avatar character includes displaying an animation of the different avatar character moving to the center of the screen. In some embodiments, replacing the selected avatar character with the different avatar character includes displaying an animation of the different avatar character moving to the user's head. In some embodiments, replacing the selected avatar character with the different avatar character includes blurring the background while the selected avatar is being replaced. Displaying an animation (e.g., the different avatar character moving to the center of the screen, the different avatar character moving to the user's head, blurring the background) once/while the selected avatar character is replaced with the different avatar character provides visual feedback that the avatar character is being changed. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the currently selected avatar option corresponds to an avatar of a first type (e.g., a customizable avatar), and the different avatar option corresponds to an avatar of a second type (e.g., a non-customizable avatar).
In some embodiments, changing the appearance of the displayed representation of the selected avatar option in the camera display region (e.g., 820) from the first appearance to the second appearance includes moving a first version of the representation of the selected avatar option, off of the display, the first version having the first appearance. In some embodiments, changing the appearance of the displayed representation of the selected avatar option in the camera display region from the first appearance to the second appearance includes moving a second version of the representation of the selected avatar option to substantially the center of the display, the second version having the second appearance. Moving the first version of the representation of the selected avatar option off of the display and moving the second version of the representation of the selected avatar option to substantially the center of the display provides visual feedback that the first version is being replaced by the second version. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, changing the appearance of the displayed representation of the selected avatar option from the first appearance to the second appearance includes moving a first version of the representation of the selected avatar off of the display, the first version having the first appearance. In some embodiments, changing the appearance of the displayed representation of the selected avatar option from the first appearance to the second appearance includes moving a second version of the representation of the selected avatar option to substantially the position of the representation of the subject displayed in the camera display region (e.g., 820), the second version having the second appearance. Moving the first version of the representation of the selected avatar option off of the display and moving the second version of the representation of the selected avatar option to substantially the position of the representation of the subject displayed in the camera display region provides visual feedback that the first version is being replaced by the second version. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, changing the appearance of the displayed representation of the selected avatar option from the first appearance to the second appearance includes modifying the visual appearance of the background displayed in the camera display region (e.g., 820) (e.g., blurring the background, desaturating the background).
The camera user interface also includes (908) a first affordance (e.g., an affordance that corresponds to a virtual avatar) associated with a first camera display mode (e.g., a mode in which image data of the user's head is replaced with a virtual avatar).
In some embodiments, the camera user interface (e.g., 815) further includes a sticker affordance (e.g., 824-2, an affordance that corresponds to a function for enabling the display of stickers) associated with a sticker display mode (e.g., a mode in which stickers are enabled to be applied to the image data). In some embodiments, while displaying the image data (and optionally a representation of the selected avatar option) in the camera display region (e.g., 820), the electronic device (e.g., 600) detects a gesture (e.g.,
In some embodiments, while displaying the representation of the selected sticker option (e.g., 858-1, 858-2, 858-3, 858-4, 858-5, 858-6) on the image data in the camera display region (e.g., 820), the device (e.g., 600) detects lateral movement of the subject (e.g., 832) in the field of view of the one or more cameras (e.g., 602). In response to detecting the lateral movement of the subject in the field of view of the one or more cameras, the device moves the representation of the selected sticker option laterally in accordance with the movement of the subject in the field of view of the one or more cameras (e.g., without regard to a relationship of the sticker to the subject) (e.g., see helmet sticker 858-1 in
In some embodiments, while displaying the representation of the selected sticker option (e.g., 858-1, 858-2, 858-3, 858-4, 858-5, 858-6) on the image data in the camera display region (e.g., 820), the device (e.g., 600) detects rotation of the subject (e.g., 832) in the field of view of the one or more cameras (e.g., 602) (e.g., rotation relative to an axis perpendicular to the display; e.g., the subject turning their head). In response to detecting the rotation of the subject in the field of view of the one or more cameras, the device performs one or more of the following steps. In accordance with a determination that the representation of the selected sticker option has (e.g., was placed with) a first relationship to the subject (e.g., the sticker was initially (or is currently) placed at a location on the display that corresponds to the subject; e.g., the sticker is placed on the representation of the subject's face or other designated area (e.g., shown with brackets (e.g., 890))), the device rotates the representation of the selected sticker option in accordance with a magnitude and direction of the rotation of the subject (e.g., the sticker rotates and turns to follow the pitch and yaw of the subject's face) (e.g., see glasses sticker 858-4 in
In some embodiments, while displaying the representation of the selected sticker option (e.g., 858-1, 858-2, 858-3, 858-4, 858-5, 858-6) on the image data in the camera display region (e.g., 820), the device (e.g., 600) detects movement of the subject (e.g., 832) toward (or away from) the one or more cameras (e.g., 602). In response to detecting the movement of the subject toward (or away from) the one or more cameras, the device performs one or more of the following steps. In accordance with a determination that the representation of the selected sticker option has (e.g., was placed with) the first relationship to the subject (e.g., the sticker was initially (or is currently) placed at a location on the display that corresponds to the subject; e.g., the sticker was placed when the representation of the subject's face was present (e.g., detected) within the field of view of the camera), the device enlarges (or shrinks) the representation of the selected sticker option in accordance with a magnitude of movement of the subject toward (or away from) the one or more cameras. For example, the rabbit sticker 858-2 enlarges as shown in
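For illustration only, the sticker-tracking behavior described above can be sketched in Swift as follows. The types (SubjectPose, Sticker) and the specific update rules are hypothetical stand-ins introduced for this sketch and are not part of the embodiments described herein; the sketch only assumes that lateral movement of the subject always translates the sticker, while rotation and scaling apply only when the sticker has the first relationship to the subject.

struct SubjectPose {
    var lateralOffset: (x: Double, y: Double)   // lateral movement of the subject in the field of view
    var yaw: Double                             // rotation of the subject's face about the vertical axis (radians)
    var pitch: Double                           // rotation about the horizontal axis (radians)
    var distanceScale: Double                   // > 1 when the subject moves toward the camera, < 1 when away
}

struct Sticker {
    var position: (x: Double, y: Double)
    var yaw: Double = 0
    var pitch: Double = 0
    var scale: Double = 1
    var isPlacedOnSubject: Bool                 // the "first relationship" to the subject

    // Update the displayed sticker when the subject's pose changes.
    mutating func update(for pose: SubjectPose) {
        // Lateral movement: the sticker follows the subject regardless of its relationship to the subject.
        position.x += pose.lateralOffset.x
        position.y += pose.lateralOffset.y

        guard isPlacedOnSubject else { return }

        // Rotation: the sticker turns to follow the pitch and yaw of the subject's face.
        yaw = pose.yaw
        pitch = pose.pitch

        // Distance: the sticker enlarges or shrinks with movement toward or away from the camera.
        scale *= pose.distanceScale
    }
}

// Example: the subject steps to the left, turns their head, and leans toward the camera.
var glassesSticker = Sticker(position: (x: 160, y: 240), isPlacedOnSubject: true)
glassesSticker.update(for: SubjectPose(lateralOffset: (x: -12, y: 0),
                                       yaw: 0.2, pitch: -0.05, distanceScale: 1.1))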
While a subject is positioned within a field of view of the camera (e.g., 602) and a representation of the subject and a background (e.g., objects in the field of view of the camera other than the subject) are displayed in the camera display region (e.g., 820), the electronic device (e.g., 600) detects (910) a gesture directed to the first affordance. In some embodiments, the electronic device detects (e.g., recognizes) that the subject is positioned in the field of view.
In some embodiments, the camera user interface (e.g., 815), while displaying the capture affordance, further includes a camera display region (e.g., 820) including a representation of a live preview of a field of view of the camera (e.g., a stream of image data that represents what is in the field of view of the camera). In some embodiments, while a subject is positioned within the field of view of the camera (e.g., the electronic device detects/recognizes that the subject is positioned in the field of view) and a representation of the subject and a background (e.g., objects in the field of view of the camera other than the subject) are displayed in the camera display region, the electronic device (e.g., 600) displays a representation of a selected avatar on the representation of the subject in the camera display region (e.g., a displayed head or face portion of the user is replaced with (or overlaid by (e.g., opaquely, transparently, translucently)) a head of a virtual avatar that corresponds to the selected avatar). In some embodiments, while displaying the representation of the selected avatar on the representation of the subject in the camera display region, the electronic device receives a request to display an avatar selection region. In some embodiments, in response to receiving the request to display an avatar selection region, the electronic device ceases to display the capture affordance and displays (e.g., at a location in the camera user interface that was previously occupied by the capture affordance) an avatar selection region (e.g., avatar menu 828) having a plurality of avatar affordances. In some embodiments, in response to (or in conjunction with) the avatar selection region no longer being displayed, the capture affordance is displayed (e.g., re-displayed).
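A minimal sketch of the capture-affordance/avatar-selection swap described above, written in Swift with hypothetical names (CameraBottomBar, CameraViewModel); it is illustrative only and does not describe an actual implementation.

enum CameraBottomBar {
    case captureAffordance
    case avatarSelectionRegion(selectedAvatarIndex: Int)
}

struct CameraViewModel {
    var bottomBar: CameraBottomBar = .captureAffordance

    // A request to display the avatar selection region replaces the capture affordance.
    mutating func showAvatarSelection(selecting index: Int = 0) {
        bottomBar = .avatarSelectionRegion(selectedAvatarIndex: index)
    }

    // When the avatar selection region is no longer displayed, the capture affordance
    // is re-displayed at the location it previously occupied.
    mutating func dismissAvatarSelection() {
        bottomBar = .captureAffordance
    }
}

var cameraModel = CameraViewModel()
cameraModel.showAvatarSelection()     // capture affordance ceases to be displayed
cameraModel.dismissAvatarSelection()  // capture affordance is re-displayed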
In some embodiments, the camera user interface (e.g., 815), while displaying the capture affordance (e.g., 821), further includes a camera display region (e.g., 820) including a representation of a live preview of a field of view of the camera (e.g., a stream of image data that represents what is in the field of view of the camera). In some embodiments, while a subject is positioned within the field of view of the camera and a representation of the subject and a background (e.g., objects in the field of view of the camera other than the subject) are displayed in the camera display region, the electronic device (e.g., 600) displays a representation of a selected avatar on the representation of the subject in the camera display region (e.g., a displayed head or face portion of the user is replaced with (or overlaid by (e.g., opaquely, transparently, translucently)) a head of a virtual avatar that corresponds to the selected avatar). In some embodiments, the electronic device detects (e.g., recognizes) that the subject is positioned in the field of view. In some embodiments, while displaying the representation of the selected avatar on the representation of the subject in the camera display region, the electronic device detects a change in pose (e.g., position and/or orientation) of the subject. In some embodiments, the change in pose is detected when the user moves their head or any facial features. In some embodiments, in response to detecting the change in pose of the subject, the electronic device changes an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background (e.g., as described with respect to method 900 and
In response to detecting the gesture directed to the first affordance, the electronic device (e.g., 600) activates the first camera display mode. Activating the first camera display mode includes displaying (914) an avatar selection region (e.g., 829) (e.g., including a selected one of a plurality of avatar options (e.g., affordances that represent different virtual avatars that can be selected to appear over the user's head in the camera display region (e.g., 820) of the camera user interface (e.g., 815))).
In some embodiments, the avatar selection region (e.g., 829) further includes an option for ceasing to display the representation of the selected avatar option on the representation of the subject in the camera display region. In some embodiments, the electronic device (e.g., 600) receives a user input corresponding to selection of the option for ceasing to display the representation of the selected avatar option on the representation of the subject in the camera display region (e.g., 820). In some embodiments, in response to receiving a user input corresponding to selection of the option for ceasing to display the representation of the selected avatar option on the representation of the subject in the camera display region, the electronic device ceases to display the representation of the selected avatar option on the representation of the subject in the camera display region.
In some embodiments, the avatar selection region (e.g., 829) includes a null avatar option (e.g., 830-2). When the null avatar option is selected, no avatar is displayed on the representation of the subject in the camera display region (e.g., 820) (e.g., the device forgoes replacing image data of the user's head with a virtual avatar). In some embodiments, the avatar selection region includes a “cancel” affordance (e.g., an “x” icon located in the corner of the avatar selection region). When the cancel affordance is selected, the device ceases to display the avatar selection region and, optionally, ceases to display any selected avatar on the representation of the subject (e.g., the device foregoes replacing image data of the user's head with a virtual avatar).
In some embodiments, activating the first camera display mode (e.g., an avatar display mode in which image data of the user's head is replaced with a virtual avatar) further includes, prior to displaying the representation of the selected avatar option on the representation of the subject in the camera display region (e.g., 820), displaying (916) the representation of the subject in the camera display region without displaying a representation of the selected avatar option on the representation of the subject. In some embodiments, after entering the avatar display mode, the device initially displays the representation of the subject without an avatar (e.g., the device foregoes replacing image data of the user's head with a virtual avatar). In some embodiments, the avatar option that is initially selected when entering the avatar display mode corresponds to a null avatar option. When the null avatar option is selected, the device foregoes replacing image data of the subject's head with a virtual avatar.
Activating the first camera display mode includes displaying (918) a representation of the selected avatar option on the representation of the subject in the camera display region (e.g., 820) (e.g., a displayed head or face portion of the user is replaced with (or overlaid by (e.g., opaquely, transparently, translucently)) a head of a virtual avatar that corresponds to the selected avatar option). Displaying the representation of the selected avatar option on the representation of the subject in the camera display region enables the user to quickly and easily recognize that the selected avatar option relates to the representation of the subject. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, displaying a representation of the selected avatar option on the representation of the subject includes using depth information obtained using one or more depth cameras of the electronic device (e.g., 600).
In some embodiments, activating the first camera display mode further includes displaying the selected avatar option with a static appearance (e.g., the avatar appearance does not change based on detected changes in the user's face) in the avatar selection region (e.g., 829). In some embodiments, activating the first camera display mode further includes updating the selected avatar option to have a dynamic appearance that changes based on the detected change in pose of the subject (e.g., the avatar changes to mirror the detected changes in the user's face). In some embodiments, activating the first camera display mode further includes displaying an animation of the selected avatar having the dynamic appearance moving from the avatar selection region to the representation of the subject (e.g., a representation of the user's face) in the camera display region (e.g., 820). In some embodiments, the avatar continues to track changes in the user's face during the animated movement from the avatar selection region to the user's face in the camera display region.
In some embodiments, updating the selected avatar option to have a dynamic appearance that changes based on the detected change in pose of the subject includes initially displaying the avatar option having the dynamic appearance with an initial pose that corresponds to (e.g., matches) a pose of the avatar option having the static appearance, prior to changing the appearance of the avatar option based on the detected change in pose of the subject.
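The static-to-dynamic hand-off described above can be sketched as follows; FacePose and AvatarOption are hypothetical types introduced for this illustration, and the sketch assumes the dynamic avatar simply mirrors the most recently detected face pose, starting from the pose shown while static so there is no visible jump.

struct FacePose {
    var yaw = 0.0, pitch = 0.0, mouthOpen = 0.0
}

struct AvatarOption {
    var staticPose: FacePose        // pose shown while the option sits idle in the avatar selection region
    var displayedPose: FacePose
    var isDynamic = false

    // When the avatar option becomes dynamic, it starts from the pose it had while
    // static, before tracking of the subject's face begins.
    mutating func beginTracking() {
        displayedPose = staticPose
        isDynamic = true
    }

    // Mirror detected changes in the subject's face once the option is dynamic.
    mutating func apply(detectedPose: FacePose) {
        guard isDynamic else { return }
        displayedPose = detectedPose
    }
}

var monkeyAvatar = AvatarOption(staticPose: FacePose(), displayedPose: FacePose())
monkeyAvatar.beginTracking()
monkeyAvatar.apply(detectedPose: FacePose(yaw: 0.3, pitch: 0.0, mouthOpen: 0.4))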
While the first camera display mode is active, the electronic device (e.g., 600) detects (920) a change in pose (e.g., position and/or orientation) of the subject. In some embodiments, the change in pose is detected when the user moves their head or any facial features.
In response to detecting the change in pose of the subject, the electronic device (e.g., 600) changes (922) an appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background (e.g., 836) (e.g., the virtual avatar displayed on the user is responsive to detected changes in the user's head and face such that a change in the user's head or face effects a change in the displayed virtual avatar while still displaying the background). Changing the appearance of the displayed representation of the selected avatar option based on the detected change in pose of the subject while maintaining display of the background enables the user to quickly and easily recognize that movements of the avatar correspond to and/or are based on detected movements of the user. Providing additional control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the camera user interface includes one or more features/functions of the camera user interface described with respect to the embodiment shown in
In some embodiments, the electronic device (e.g., 600) detects a horizontal swipe gesture on the avatar selection region (e.g., 829). In some embodiments, in response to detecting the horizontal swipe gesture, the electronic device displays an avatar creation affordance associated with a function for adding a new avatar option to the plurality of avatar options. Displaying the avatar creation affordance associated with a function for adding a new avatar option to the plurality of avatar options in response to detecting a horizontal swipe gesture enables the user to quickly and easily access the avatar creation affordance from the avatar selection region. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, a horizontal swipe gesture on the avatar selection region scrolls the displayed avatar options to reveal an avatar creation affordance. In some embodiments, the avatar creation affordance can be selected to create a new avatar. When the new avatar is created, a new avatar option representing the created avatar is added to the plurality of avatar options (e.g., 830) in the avatar selection region.
In some embodiments, while the first camera display mode is active, the electronic device (e.g., 600) detects a swipe gesture on the avatar selection region (e.g., 829). In some embodiments, in response to detecting the swipe gesture on the avatar selection region, the electronic device changes an appearance of the displayed representation of the selected avatar option in the camera display region (e.g., 820) from a first appearance (e.g., an appearance based on the currently selected avatar option) to a second appearance (e.g., an appearance based on a different avatar option (e.g., a null avatar option or an avatar option corresponding to a different avatar, including avatars of different types (e.g., customizable, non-customizable))), where the second appearance corresponds to a different one of the plurality of avatar options (e.g., a different avatar option included in the avatar selection region). Changing the appearance of the displayed representation of the selected avatar option in response to detecting a swipe gesture on the avatar selection region enables the user to quickly and easily change the appearance of the selected avatar option. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, when the different one of the plurality of avatar options is a null avatar option (e.g., 830-2), the device ceases to display the representation of the avatar on the representation of the subject (e.g., the device forgoes replacing image data of the user's head with a virtual avatar). In some embodiments, when the different one of the plurality of avatar options is an avatar option of a different avatar character (including a customizable or non-customizable avatar character), the device replaces the selected avatar character with the different avatar character (e.g., the device replaces the representation of the avatar with a representation of a different avatar). In some embodiments, replacing the selected avatar character with the different avatar character includes displaying an animation of the different avatar character moving to the center of the screen. In some embodiments, replacing the selected avatar character with the different avatar character includes displaying an animation of the different avatar character moving to the user's head. In some embodiments, replacing the selected avatar character with the different avatar character includes blurring the background (e.g., 836) while the selected avatar is being replaced. In some embodiments, the currently selected avatar option corresponds to an avatar of a first type (e.g., a customizable avatar), and the different avatar option corresponds to an avatar of a second type (e.g., a non-customizable avatar).
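For illustration, handling of a swipe on the avatar selection region might be modeled as follows in Swift; AvatarChoice, AvatarSelection, and the avatar names are hypothetical, and the animation and background-blur behavior is left to the caller.

enum AvatarChoice {
    case none                                   // the null avatar option
    case avatar(name: String, customizable: Bool)
}

struct AvatarSelection {
    var options: [AvatarChoice]
    var selectedIndex: Int

    // A swipe moves the selection and returns the newly selected choice so the caller
    // can animate the replacement (e.g., move the new avatar to the subject's head and
    // blur the background while the swap is in progress).
    mutating func handleSwipe(towardNext: Bool) -> AvatarChoice {
        let delta = towardNext ? 1 : -1
        selectedIndex = min(max(selectedIndex + delta, 0), options.count - 1)
        return options[selectedIndex]
    }
}

var avatarSelection = AvatarSelection(
    options: [.none,
              .avatar(name: "monkey", customizable: false),
              .avatar(name: "custom avatar", customizable: true)],
    selectedIndex: 1)

switch avatarSelection.handleSwipe(towardNext: false) {
case .none:
    print("cease displaying an avatar on the representation of the subject")
case .avatar(let name, _):
    print("animate \(name) onto the representation of the subject")
}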
In some embodiments, while the first camera display mode is active, in response to a determination that the subject is no longer positioned in the field of view of the camera (e.g., face tracking is lost), the electronic device (e.g., 600) displays an animation of the representation of the selected avatar option moving to a center location in the camera display region (e.g., 820). Displaying an animation of the representation of the selected avatar option moving to a center location in the camera display region in response to a determination that the subject is no longer positioned in the field of view of the camera provides visual feedback to the user that the user is no longer being detected by the camera. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, when the user is no longer detected in the field of view of the camera, the avatar moves to the center of the camera display region. In some embodiments, the background is blurred when the user is no longer detected in the field-of-view of the camera.
In some embodiments, while the first camera display mode is active, in response to a determination that the subject is no longer positioned in the field of view of the camera (e.g., face tracking is lost), the electronic device (e.g., 600) modifies the visual appearance of the background (e.g., 836) displayed in the camera display region (e.g., 820) (e.g., blurring the background, desaturating the background).
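A sketch, under stated assumptions, of the behavior when face tracking is lost; AvatarOverlayState and the particular blur and saturation values are illustrative assumptions rather than parameters of any described embodiment.

struct AvatarOverlayState {
    var avatarCenter: (x: Double, y: Double)
    var backgroundBlurRadius: Double = 0
    var backgroundSaturation: Double = 1
}

func handleTrackingLost(_ state: inout AvatarOverlayState,
                        displaySize: (width: Double, height: Double)) {
    // Move the avatar toward the center of the camera display region.
    state.avatarCenter = (x: displaySize.width / 2, y: displaySize.height / 2)
    // Modify the background to signal that the subject is no longer detected.
    state.backgroundBlurRadius = 12
    state.backgroundSaturation = 0.4
}

var overlay = AvatarOverlayState(avatarCenter: (x: 120, y: 180))
handleTrackingLost(&overlay, displaySize: (width: 375, height: 667))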
In some embodiments, while the first camera display mode is active and the representation of the selected avatar option (e.g., a representation of a customizable avatar option selected from the avatar selection region) is displayed on the representation of the subject in the camera display region (e.g., 820) (e.g., image data of the user's head is replaced with the customizable avatar), the electronic device (e.g., 600) detects a touch gesture (e.g., a tap gesture) on the selected avatar option in the avatar selection region (e.g., 829). In some embodiments, in response to detecting the touch gesture, the electronic device displays an avatar editing user interface (e.g., a user interface for editing one or more features of the selected avatar option (e.g., a selected customizable avatar) having a plurality of options (e.g., edit affordances that are selectable to modify various features of the customizable avatar) for editing the selected avatar option). Displaying an avatar editing user interface in response to detecting a touch gesture on the selected avatar option in the avatar selection region enables a user to quickly and easily access the avatar editing user interface to edit the avatar. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the camera user interface (e.g., 815) further includes a second affordance (e.g., 824-2, an affordance that corresponds to a function for displaying stickers) associated with a second camera display mode (e.g., a mode in which virtual effects (e.g., stickers) are applied to the image data). In some embodiments, while the subject is positioned within the field of view of the camera (e.g., 602) and the representation of the subject and the background (e.g., 836) are displayed in the camera display region (e.g., 820), the electronic device (e.g., 600) detects a gesture directed to the second affordance. In some embodiments, in response to detecting the gesture directed to the second affordance, the electronic device activates the second camera display mode, where activating the second camera display mode includes displaying a visual effects selection region including a plurality of graphical objects (e.g., stickers).
In some embodiments, while the second camera display mode is active, the electronic device (e.g., 600) detects a selection of one of the plurality of graphical objects (e.g., a sticker) in the visual effects selection region (e.g., 824). In some embodiments, in response to detecting the selection, the electronic device displays a representation of the selected graphical object in the camera display region (e.g., 820). In some embodiments, the selected sticker is displayed in the camera display region during a live camera preview (e.g., 820-1). In some embodiments, displaying the sticker in the live camera preview includes immediately displaying the sticker at a default location (e.g., the center of the screen) of the camera display region. In some embodiments, displaying the sticker in the live camera preview includes displaying an animation of the sticker moving from the visual effects selection region to a location on the camera display region. In some embodiments, this animation is determined based on a drag gesture of the user selection of the sticker (e.g., a gesture in which the user touches the sticker and drags it to a location on the camera display region).
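The two placement behaviors described above (default center placement versus drag-determined placement) can be sketched as follows; StickerGesture and placementTarget are hypothetical names introduced for this illustration.

enum StickerGesture {
    case tap                                    // simple selection of a sticker option
    case drag(to: (x: Double, y: Double))       // the user touches the sticker and drags it into the preview
}

func placementTarget(for gesture: StickerGesture,
                     displaySize: (width: Double, height: Double)) -> (x: Double, y: Double) {
    switch gesture {
    case .tap:
        // Immediately display the sticker at a default location (the center of the camera display region).
        return (x: displaySize.width / 2, y: displaySize.height / 2)
    case .drag(let destination):
        // Animate the sticker from the visual effects selection region to the dragged-to location.
        return destination
    }
}

let target = placementTarget(for: .drag(to: (x: 90, y: 300)),
                             displaySize: (width: 375, height: 667))
print(target)   // (x: 90.0, y: 300.0)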
In some embodiments, the representation of image data captured via the camera (e.g., 602) is a media item (e.g., 820-2, a still image or recorded video). In some embodiments, the camera user interface (e.g., 815) further includes a third affordance (e.g., an affordance that corresponds to a function for displaying stickers) associated with a third camera display mode (e.g., a mode in which virtual effects (e.g., stickers) are applied to a photograph or recorded video). In some embodiments, the electronic device (e.g., 600) detects a gesture directed to the third affordance. In some embodiments, in response to detecting the gesture directed to the third affordance, the electronic device activates the third camera display mode, where activating the third camera display mode includes displaying a visual effects selection region including a plurality of graphical objects (e.g., stickers).
In some embodiments, while the third camera display mode is active, the electronic device (e.g., 600) detects a selection of one of the plurality of graphical objects (e.g., a sticker) in the visual effects selection region. In some embodiments, in response to detecting the selection, the electronic device displays a representation of the selected graphical object on the media item (e.g., 820-2) in the camera display region (e.g., 820). In some embodiments, the selected sticker is displayed in the camera display region when viewing a photograph or recorded video. In some embodiments, displaying the sticker on the photograph or recorded video includes immediately displaying the sticker at a default location (e.g., the center of the screen) of the camera display region. In some embodiments, displaying the sticker in the photograph or recorded video includes displaying an animation of the sticker moving from the visual effects selection region to a location on the camera display region. In some embodiments, this animation is determined based on a drag gesture of the user selection of the sticker (e.g., a gesture in which the user touches the sticker and drags it to a location on the camera display region).
Note that details of the processes described above with respect to method 900 (e.g.,
In
In
Media item 1010 is an image that does not include encoded depth data (e.g., it was captured by a camera that does not encode depth data into captured media items (e.g., a camera other than camera 602)). Thus, media item 1010 does not include depth data, which, as discussed herein, is used to enable certain visual effects in an image.
In
In
In
Because media item 1010 does not include depth data to enable depth-based visual effects, no depth-based visual effects are displayed in media item 1010 when effects affordance 1016 is selected. Thus, media item 1010 remains unchanged in
In
In
In
In
In
In
In some embodiments, device 600 modifies avatar 1037 based on detected changes in a user's face positioned in the field-of-view of camera 602, which is encoded in the depth data of media item 1028. Thus, although media item 1028 is described in this embodiment as a still image, it should be appreciated that media item 1028 is not limited to a still image and may include other media items such as a recorded video, including a recorded video having depth data. Similarly, device 600 can modify the position of stickers applied to media item 1028 based on detected changes in the position of objects in the media item, which is encoded in the depth data.
Visual effects, including depth-based visual effects, can be applied to media item 1028 and edited in accordance with the embodiments discussed herein. For example, avatar effects affordance 1024-1 can be selected to remove, modify, and/or switch the selected avatar (e.g., avatar 1037) in accordance with the various embodiments disclosed herein. Additionally, sticker effects affordance 1024-2 can be selected to remove, modify, and/or add stickers to media item 1028 in accordance with the various embodiments disclosed herein.
In
In
In
In
In
In
In some embodiments, the screen effects can interact with visual effects and objects in media item 1028 based on the depth of the objects and visual effects in media item 1028. For example, a confetti screen effect can show confetti falling in front of, and behind, objects in media item 1028 (e.g., subject 1062) and visual effects (stickers and an avatar), and also falling on top of these objects and visual effects. For example, the confetti can be displayed falling on the avatar and falling off the side of the avatar based on a physics model of the falling confetti.
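A toy sketch of depth-aware confetti behavior of the kind described above; the depth layering, the tolerance value, and the simple landing test are assumptions made purely for illustration and are not a physics model actually used by any described embodiment.

struct ConfettiPiece {
    var x: Double, y: Double
    var depth: Double           // smaller values are closer to the camera
    var atRest: Bool = false
}

struct SceneObject {            // e.g., the subject, a placed sticker, or the avatar
    var topY: Double
    var minX: Double, maxX: Double
    var depth: Double
}

func step(_ piece: inout ConfettiPiece, objects: [SceneObject], deltaTime: Double) {
    guard !piece.atRest else { return }
    piece.y += 120 * deltaTime  // constant fall speed, purely for the sketch
    for object in objects {
        // A piece at roughly the object's depth can land on top of it; pieces that are
        // clearly in front of or behind the object simply pass it and keep falling.
        if abs(piece.depth - object.depth) < 0.1,
           piece.x >= object.minX, piece.x <= object.maxX,
           piece.y >= object.topY {
            piece.y = object.topY
            piece.atRest = true
        }
    }
}

var piece = ConfettiPiece(x: 100, y: 0, depth: 0.5)
let avatarHead = SceneObject(topY: 140, minX: 80, maxX: 160, depth: 0.5)
step(&piece, objects: [avatarHead], deltaTime: 2.0)   // the piece lands on top of the avatar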
As described below, method 1100 provides an intuitive way for displaying visual effects in a media item viewing mode. The method reduces the cognitive burden on a user for displaying visual effects in an image or video, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to display visual effects faster and more efficiently conserves power and increases the time between battery charges.
The electronic device (e.g., 600) displays (1102), via the display apparatus (e.g., 601), a media user interface (e.g., 1005). The media user interface includes (1104) a media display region (e.g., 1008) including a representation (e.g., 1010) of a media item (e.g., a still image or video). In some embodiments, the depth data corresponding to the media item is obtained by a camera of the electronic device after detecting a prior selection of the effects affordance.
In some embodiments, the media item is a recorded image or video, and the effects are applied based on the depth data after the media item is recorded. In some embodiments, visual effects such as stickers, virtual avatars, and full screen effects can be added to image data, or changed to a different visual effect (e.g., replacing a sticker with a virtual avatar), after the image data is captured (e.g., recorded).
The media user interface (e.g., 1005) includes (1106) an effects affordance (e.g., 1016, an affordance associated with a function for activating an image display mode (e.g., a mode in which depth data is displayed when the image data contains depth data)).
The electronic device (e.g., 600) detects (1108) a gesture (e.g., 1021) directed to the effects affordance (e.g., 1016). In some embodiments, the respective effects option (e.g., 1024-1) corresponds (1110) to an effect for displaying an avatar in (e.g., overlaid on) the media item. In some embodiments, when displaying an avatar in the media item, image data of a person's head is replaced with a virtual avatar. Displaying the avatar in the media item, where image data of a person's head is replaced with a virtual avatar, provides visual feedback that the avatar relates to and/or is associated with the person being replaced. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the avatar is customizable. In some embodiments, the avatar is non-customizable.
In some embodiments, the respective effects option (e.g., 1024-3) corresponds (1112) to an effect for displaying a plurality of virtual objects (e.g., 1062, confetti, balloons, etc.) moving in (e.g., animatedly overlaid on) the media item. In some embodiments, a trajectory of the plurality of objects moving in the media item is modified based on a presence of at least one of an object in (e.g., represented in; identified in) the media item (e.g., an object that is encoded in the original media item, not an object that is the product of an effect applied to the media item, such as a person in the original image or video, but not a virtual avatar) or a visual effect (e.g., an avatar) applied to the media item. In some embodiments, objects such as confetti or balloons are displayed in front of, behind, and/or on a user in the image. In some embodiments, the image includes another effect, such as an avatar, and objects such as confetti or balloons are displayed in front of, behind, and/or landing on the avatar.
In some embodiments, the respective effects option (e.g., 1024-2) corresponds (1114) to an effect for displaying one or more selectable graphical icons (e.g., 1042, stickers) in (e.g., overlaid on) the media item.
In response to detecting the gesture directed to the effects affordance, the electronic device (e.g., 600) displays (1116) a plurality of effects options (e.g., stickers affordance, avatar affordance) for applying effects to the media item concurrently with a representation of the media item, including, in accordance with a determination (1118) that the media item is associated with corresponding depth data (e.g., as described herein), the plurality of effects options includes a respective effects option (e.g., 1024) for applying effects (e.g., stickers, virtual avatars, full screen effects, etc.) based on the depth data. In some embodiments, in response to detecting the gesture, the electronic device activates an image display mode (e.g., a depth-data-based image display mode).
In some embodiments, a sticker affordance (e.g., 1024-2) is selectable to display a plurality of sticker options (e.g., 1042) that can be displayed on the media item (e.g., still image or video) based on depth data. For example, a sticker can be placed on the media item and modified based on depth data associated with the media item. In some embodiments, a sticker is associated with a relative position of an object in a video. Movement of the object in the video has a depth component that is used to modify a displayed aspect (e.g., size, orientation, position, etc.) of the sticker based on the movement of the object. For example, the sticker is displayed on the object, and as the object moves away from the camera (e.g., 602) (e.g., backwards), the sticker gets smaller to give the appearance the sticker is moving away from the camera with the object. In some embodiments, an avatar affordance (e.g., 1024-1) is selectable to display a plurality of avatar options (e.g.,
In response to detecting the gesture directed to the effects affordance, the electronic device (e.g., 600) displays (1116) a plurality of effects options (e.g., 1024, stickers affordance, avatar affordance) for applying effects to the media item concurrently with a representation of the media item, including, in accordance with a determination (1120) that the image data does not include the depth data, the respective effects option is not available for activation in the plurality of effects options (e.g., the respective effects option is excluded from the displayed plurality of effects options or is disabled in the displayed plurality of effects options). The respective effects option not being available for activation in the plurality of effects options in accordance with a determination that the image data does not include the depth data provides feedback that the image data does not include the needed depth data. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, affordances that correspond to depth data (e.g., the effects affordance) are not displayed or are not selectable when the image data does not include depth data.
In some embodiments, the plurality of effects options (e.g., 1024) includes an option (e.g., 1020) for adding labels to (e.g., overlaid on) the media item. In some embodiments, text labels can be added to the image or video. In some embodiments, the plurality of effects options includes an option for applying one or more image filters to (e.g., overlaid on) the media item.
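For illustration, the way the displayed effects options might depend on whether the media item carries depth data can be sketched as follows; MediaItem, EffectsOption, and the particular grouping of options are assumptions for this sketch (the disclosure also contemplates showing depth-based options in a disabled state rather than excluding them).

struct MediaItem {
    var hasDepthData: Bool
}

enum EffectsOption: CaseIterable {
    case avatar, stickers, fullScreenEffects    // effects applied based on depth data
    case textLabel, imageFilter                 // available regardless of depth data

    var requiresDepthData: Bool {
        switch self {
        case .avatar, .stickers, .fullScreenEffects: return true
        case .textLabel, .imageFilter: return false
        }
    }
}

// Options that require depth data are excluded when the media item does not include
// depth data (alternatively, they could be displayed but disabled).
func availableOptions(for item: MediaItem) -> [EffectsOption] {
    EffectsOption.allCases.filter { item.hasDepthData || !$0.requiresDepthData }
}

let withoutDepth = availableOptions(for: MediaItem(hasDepthData: false))
// withoutDepth contains only .textLabel and .imageFilter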
Note that details of the processes described above with respect to method 1100 (e.g.,
In
In
In
In
In
Avatar options 1218 may be selected to apply a corresponding avatar to the subject's face in device image data 1201 in a manner similar to that described above with respect to
In
After detecting selection (via input 1221) of close icon 1220 in
In
Stickers 1230 may be selected to apply a corresponding sticker to device image data 1201 in a manner similar to that described above with respect to
In some embodiments, selected stickers 1230 are not visible to other participants in the live communication session until the user places the sticker in device image data 1201. In some embodiments, modifications to placed stickers 1230 are not visible to other participants until the modification is complete. In some embodiments, once a selected sticker appears over device image data 1201 in user interface 1200, the sticker is visible to participants in the video communication session, even if the user has not yet placed the sticker 1230. Similarly, modifications to placed stickers are visible such that continued adjustments of the sticker are visible to other participants in the live video communication session, even if the user is still modifying placement of the sticker.
In
Device image data 1201 can be enlarged again (by again switching positions with participant image data 1204) in response to receiving gesture 1233 on window 1202, as shown in
The foregoing description for displaying visual effects in a live video communication session also applies to a live video communication session having three or more participants.
In
In
In
Visual effects can be applied to device image data 1201 using visual effects option affordances 1214, as explained above. For example, device 600 can apply stickers in a manner consistent with that described above with respect to
In
In
In
As described below, method 1300 provides an intuitive way for displaying visual effects in a live video communication session. The method reduces the cognitive burden on a user for displaying visual effects, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to display visual effects faster and more efficiently conserves power and increases the time between battery charges.
The electronic device (e.g., 600) displays (1302), via the display apparatus (e.g., 601), a live video communication user interface (e.g., 1200) of a live video communication application. The live video communication user interface includes (1304) a representation (e.g., 1201) of a subject participating in a live video communication session. Including a representation of a subject participating in a live video communication session enables the user to quickly and easily recognize the other participant(s) of the live video communication session. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the representation of the subject participating in the live video communication session includes (1306) image data captured by a camera (e.g., 602) associated with the electronic device. In some embodiments, the subject is a user of the electronic device. In some embodiments, the representation of the subject participating in the live video communication session includes image data transmitted to the electronic device from a second electronic device. In some embodiments, the second electronic device is a device of another user, and the subject is the other user.
In some embodiments, the live video communication user interface (e.g., 1200) further includes a representation (e.g., 1204) of a second participant in the live video communication session and a representation of a third participant in the live video communication session. In some embodiments, displayed sizes of the representations of the second and third participants in the live video communication session are adjusted so all representations of the participants can fit on the screen. Adjusting the sizes of the representations of the second and third participants to fit on the screen allows the user to simultaneously view their reactions to the visual effects applied to the representation of the user, thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by allowing the user to easily view the reactions of other participants without manual inputs) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
The live video communication user interface (e.g., 1200) includes (1308) a first affordance (e.g., 1208, an effects affordance) (e.g., an affordance associated with a function for activating a camera effects mode (e.g., a mode in which various camera effects can be applied to the representation of a user in a live video communication session)).
In some embodiments, prior to displaying the first affordance (e.g., 1208), the electronic device (e.g., 600) detects a first input (e.g., 1205) on the live video communication user interface (e.g., 1200) (e.g., a tap gesture on the live video communication user interface to display video call options), the first input corresponding to a request to display one or more options (e.g., an option to end the call, an option to switch a camera view, etc.) associated with the live video communication session. In some embodiments, in response to detecting the first input, the electronic device displays the one or more options (e.g., 1208, 1210, 1212) associated with the live video communication session. In some embodiments, the electronic device displays the first affordance (e.g., 1208).
The electronic device (e.g., 600) detects (1310) a gesture (e.g., 1213) directed to the first affordance (e.g., 1208). In response to detecting (1312) the gesture directed to the first affordance, the electronic device activates (1314) a camera effects mode.
In some embodiments, in response to detecting the gesture directed to the first affordance (e.g., 1208), the electronic device (e.g., 600) displays a first visual-effect affordance associated with a first type of visual effect and a second visual-effect affordance associated with a second type of visual effect that is different from the first type of visual effect and, optionally, a third visual-effect affordance associated with a third type of visual effect that is different from the first type of visual effect and the second type of visual effect (e.g., a sticker affordance 1214-2, an avatar affordance 1214-1, an affordance associated with a full-screen effect). In some embodiments, a sticker affordance is associated with a visual effect in which a sticker is displayed in the representation of the subject participating in the live video communication session. In some embodiments, an avatar affordance is associated with a visual effect in which a virtual avatar is displayed on the representation of the subject participating in the live video communication session. In some embodiments, a full-screen effect includes a visual effect in which graphical objects such as confetti or balloons are displayed in front of, behind, and/or on a participant in the live video communication session.
In some embodiments, the electronic device (e.g., 600) detects a selection (e.g., 1215) of one of the affordances (e.g., a sticker affordance 1214-2, an avatar affordance 1214-1, an affordance associated with a full-screen effect) associated with a type of visual effect. In some embodiments, in response to detecting the selection of the affordance associated with the visual effect, the electronic device displays a plurality of visual effect options (e.g., 1218) corresponding to the visual effect. Displaying a plurality of visual effect options corresponding to the visual effect in response to detecting a selection of the affordance associated with the visual effect allows the user to quickly and easily access corresponding visual effect options. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, a sticker affordance is associated with a visual effect that includes displaying a representation of a static graphical object (e.g., a hat, a star, glasses, etc.) in image data (e.g., the representation of the subject participating in the live video communication session). In some embodiments, an avatar affordance is associated with a visual effect that includes displaying a representation of a virtual avatar (e.g., a customizable virtual avatar or a non-customizable virtual avatar) such that image data of a person's head is replaced with a graphical representation of the virtual avatar. In some embodiments, a full-screen effect includes a visual effect in which graphical objects such as confetti or balloons are displayed in front of, behind, and/or on a participant in the live video communication session.
In response to detecting (1312) the gesture directed to the first affordance, the electronic device (e.g., 600) increases (1316) a size of the representation (e.g., 1201) of the subject participating in the live video communication session. Increasing the size of the representation of the subject participating in the live video communication session in response to detecting (1312) the gesture directed to the first affordance enables the user to quickly and easily adjust the size of the representation of the subject. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, increasing the size of the representation of the subject includes switching the position of the displayed representation of the subject with the position of a displayed participant in the live video communication session.
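A minimal sketch of the size/position swap described above; WindowLayout and its fields are hypothetical, and the sketch assumes exactly two displayed video tiles.

struct WindowLayout {
    var primary: String      // the participant shown enlarged
    var thumbnail: String    // the participant shown in the smaller window

    // Increase the size of the subject's representation by exchanging its position
    // with that of the other displayed participant.
    mutating func swapForEffectsMode() {
        let previousPrimary = primary
        primary = thumbnail
        thumbnail = previousPrimary
    }
}

var layout = WindowLayout(primary: "remote participant", thumbnail: "subject (device camera)")
layout.swapForEffectsMode()   // the subject's representation is now displayed enlarged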
In some embodiments, while the camera effects mode is activated (1318), the electronic device (e.g., 600) detects (1320) a selection of an effects option affordance (e.g., a selectable icon associated with a function for displaying a visual effect in the representation of the subject participating in the live video communication session). In some embodiments, the effects option affordance is a stickers affordance, an avatar affordance, or an affordance associated with a full-screen effect such as confetti or balloons.
In some embodiments, in response to detecting selection of the effects option affordance, the electronic device modifies (1322) an appearance of the representation of the subject participating in the live video communication session based on a visual effect (e.g., displaying a sticker, avatar, or full-screen effect) associated with the selected effects option affordance. Modifying an appearance of the representation of the subject participating in the live video communication session based on a visual effect associated with the selected effects option affordance provides visual feedback that application of the selected visual effect was successful. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, in response to detecting selection of a sticker affordance, the representation of the user participating in the live video communication session is modified to display a selected sticker. In some embodiments, in response to detecting selection of an avatar affordance, the representation of the user participating in the live video communication session is modified to display an avatar positioned on the face of the user. In some embodiments, in response to detecting selection of an affordance associated with a full-screen effect, a full-screen effect is displayed in the representation of the user participating in the live video communication session (e.g., confetti is displayed falling in front of, behind, and on the representation of the user).
In some embodiments, the electronic device (e.g., 600) detects a second input on the live video communication user interface (e.g., 1200), the second input corresponding to a request to reduce the size of the representation of the subject participating in the live video communication session. In some embodiments, in response to detecting the second input, the electronic device concurrently displays the representation of the subject participating in the live video communication session having the modified appearance based on the visual effect associated with the selected effects option affordance and one or more representations of respective participants in the live video communication session. Concurrently displaying the representation of the subject participating in the live video communication session and one or more representations of respective participants in the live video communication session in response to detecting the second input enables the user to quickly and easily view (simultaneously) other participants of the live video communication. Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the representation of the user in the live video communication session is reduced so that it is displayed on the screen with representations of other participants in the live video communication session.
In some embodiments, while the camera effects mode is activated (1318), the electronic device (e.g., 600) modifies (1324) an appearance of the representation (e.g., 1201) of the subject participating in the live video communication session to display one or more visual effects. In some embodiments, when the visual effect is a sticker effect, the appearance of the representation of the subject participating in the live video communication session is modified to include display of a static graphical object (e.g., a sticker). In some embodiments, the static graphical object (e.g., sticker) interacts with the representation of the subject participating in the live video communication session. In some embodiments, when the visual effect is an avatar effect, the appearance of the representation of the subject participating in the live video communication session is modified to display a representation of a virtual avatar (e.g., a customizable virtual avatar or a non-customizable virtual avatar) replacing the subject's head. In some embodiments, when the visual effect is a full-screen effect, the appearance of the representation of the subject participating in the live video communication session is modified to display graphical objects (e.g., graphical confetti or graphical balloons) displayed in front of, behind, and/or on a participant in the live video communication session.
In some embodiments, the modified appearance is sent/transmitted to other participants in the live video communication session. In some embodiments, transmitting the data includes transmitting the image data (e.g., a real-time stream of image data from the field of view of the camera) along with data (e.g., separate data) representing the modifications made based on the selected visual effect. In some embodiments, transmitting the data includes transmitting composite video data that includes the image data from the field of view of the camera combined with data representing the modifications made based on the selected visual effect.
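The two transmission strategies mentioned above can be sketched with hypothetical types as follows; neither CameraFrame nor EffectDescription is asserted to be an actual wire format, and the sketch only distinguishes sending separate effect data from sending composited video.

struct CameraFrame {
    var pixels: [UInt8]
}

struct EffectDescription {
    var kind: String                     // e.g., "sticker", "avatar", "full-screen"
    var parameters: [String: Double]
}

enum OutgoingStream {
    // Send the raw frame plus separate data describing the applied visual effects,
    // leaving compositing to the receiver.
    case frameWithEffects(frame: CameraFrame, effects: [EffectDescription])
    // Send composite video in which the effects are already rendered into the frame.
    case compositedFrame(CameraFrame)
}

let frame = CameraFrame(pixels: [])
let avatarEffect = EffectDescription(kind: "avatar", parameters: ["yaw": 0.1])
let outgoing = OutgoingStream.frameWithEffects(frame: frame, effects: [avatarEffect])
_ = outgoing   // hand off to the transport layer (not modeled here)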
In some embodiments, while the camera effects mode is activated, the electronic device (e.g., 600) modifies an appearance of the representation (e.g., 1200) of the subject participating in the live video communication session to display a virtual avatar. In some embodiments, the electronic device detects a change in a face in a field of view of one or more cameras (e.g., 602) of the electronic device. In some embodiments, the electronic device changes an appearance of the virtual avatar based on the detected change in the face. Changing the appearance of the virtual avatar based on the detected change in the face provides visual feedback that the virtual avatar is based on/associated with the face. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the virtual avatar is modified to mirror movement of the subject participating in the live video communication session. In some embodiments, the change in the face is detected using one or more depth cameras and/or depth maps, as discussed herein.
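As a non-limiting sketch of the avatar-tracking behavior described above, the following Swift example mirrors detected face changes onto the virtual avatar's pose; the `FacePose` and `AvatarPose` names and their fields are assumptions made for illustration, not the names used by any particular face-tracking framework.

```swift
// Hypothetical face-tracking values detected in the camera's field of view.
struct FacePose {
    var yaw: Double           // radians, head turned left/right
    var pitch: Double         // radians, head tilted up/down
    var mouthOpenness: Double // 0.0 (closed) ... 1.0 (fully open)
}

struct AvatarPose {
    var yaw = 0.0
    var pitch = 0.0
    var jawOpen = 0.0
}

// Mirror the detected change in the subject's face onto the virtual avatar so
// that the avatar visibly tracks the face it is based on.
func updatedAvatarPose(for face: FacePose) -> AvatarPose {
    AvatarPose(yaw: face.yaw, pitch: face.pitch, jawOpen: face.mouthOpenness)
}
```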
In some embodiments, while the camera effects mode is activated, the electronic device (e.g., 600) displays a first visual effect (e.g., a sticker) in the representation of the subject participating in the live video communication session. In some embodiments, the electronic device detects an input (e.g., a touch input) corresponding to the first visual effect. In some embodiments, in response to detecting the input corresponding to the first visual effect, in accordance with a determination that the input is a first type (e.g., a touch-and-drag gesture), the electronic device modifies a location of the first visual effect in the representation of the subject participating in the live video communication session based on a magnitude (e.g., a distance the gesture is moved) and direction of the input. In some embodiments, in response to detecting the input corresponding to the first visual effect, in accordance with a determination that the input is a second type (e.g., a pinch or de-pinch gesture), the electronic device modifies a size of the first visual effect based on the magnitude (e.g., the adjusted distance between the contact points of the pinch/de-pinch gesture) of the input. Modifying the location of the first visual effect in the representation of the subject participating in the live video communication session based on a magnitude of the input or modifying the size of the first visual effect based on the magnitude (e.g., the adjusted distance between the contact points of the pinch/de-pinch gesture) of the input based on the type of the input enables the user to quickly and easily adjust the location or the size of a visual effect (by simply changing the type of the input). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
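One way the input-type dispatch described above could be modeled is sketched below in Swift; the `StickerGesture` cases and the `apply` function are illustrative assumptions rather than a disclosed implementation. A first-type input (drag) adjusts the sticker's location by the gesture's magnitude and direction, while a second-type input (pinch or de-pinch) scales its size.

```swift
// Hypothetical gesture events; the enum cases and field names are assumptions.
enum StickerGesture {
    case drag(dx: Double, dy: Double)   // magnitude and direction of the drag
    case pinch(scale: Double)           // ratio of current to initial finger spacing
}

struct Sticker {
    var x: Double
    var y: Double
    var size: Double
}

// Dispatch on the type of the input: a drag modifies the sticker's location,
// a pinch/de-pinch modifies its size.
func apply(_ gesture: StickerGesture, to sticker: inout Sticker) {
    switch gesture {
    case let .drag(dx, dy):
        sticker.x += dx
        sticker.y += dy
    case let .pinch(scale):
        sticker.size *= scale
    }
}

var sticker = Sticker(x: 100, y: 100, size: 1.0)
apply(.drag(dx: 20, dy: -5), to: &sticker)   // moved by the drag's magnitude and direction
apply(.pinch(scale: 1.5), to: &sticker)      // enlarged by the de-pinch
```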
In some embodiments, modifying the location of the first visual effect includes one or more of the following steps: prior to detecting termination of the input (e.g., 889), displaying movement of the first visual effect (e.g., 858-4) based on the magnitude and direction of the input; and in accordance with a determination that the first visual effect (e.g., a sticker (858-4)) moves across a border region of a predetermined location (e.g., a location corresponding to a portion of the representation of the subject (e.g., the subject's face)), generating an indication (e.g., display a bracket (e.g., 890) or generate a haptic response (e.g., 892)) that the first visual effect has crossed (e.g., or is crossing) the border region (e.g., displaying a bracket around the representation of the subject's face as shown in
Generating an indication that the first visual effect has crossed (or is crossing) the border region provides visual and/or haptic feedback to the user that the behavior and placement of the sticker has changed, without requiring the user to terminate the gesture and experiment with the modeled behavior. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
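The border-crossing indication can be understood with the following illustrative Swift sketch, in which the sticker position is tested against a predetermined region before and after each incremental move; the `Region` type and the returned description strings are hypothetical placeholders for whatever visual or haptic output is actually generated.

```swift
// A simplified screen-space region standing in for the predetermined location
// (e.g. the displayed representation of the subject's face).
struct Region {
    var minX, minY, maxX, maxY: Double
    func contains(x: Double, y: Double) -> Bool {
        x >= minX && x <= maxX && y >= minY && y <= maxY
    }
}

// Before the drag terminates, compare the sticker position before and after
// each incremental move; when the border of the region is crossed, return a
// description of the indication (visual bracket and/or haptic) to generate.
func borderCrossingIndication(oldX: Double, oldY: Double,
                              newX: Double, newY: Double,
                              faceRegion: Region) -> String? {
    let wasInside = faceRegion.contains(x: oldX, y: oldY)
    let isInside = faceRegion.contains(x: newX, y: newY)
    guard wasInside != isInside else { return nil }
    return isInside ? "display bracket and generate haptic (entered region)"
                    : "remove bracket and generate haptic (left region)"
}
```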
In some embodiments, modifying the location of the first visual effect includes displaying movement of the first visual effect from a first location to a second location. In some embodiments, modifying the size of the first visual effect includes displaying a transition of the first visual effect from a first displayed size to a second displayed size. In some embodiments, when a sticker is being moved on the display or resized, the sticker movement and/or resizing is displayed such that the user and other participants in the live video communication session can see the changes, including the intermediate movement/resizing as the sticker is being modified.
In some embodiments, modifying the location of the first visual effect includes transitioning the first visual effect from appearing at a first location to appearing at a second location, without displaying the first visual effect appearing at an intermediate location. In some embodiments, modifying the size of the first visual effect includes transitioning the first visual effect from a first displayed size to a second displayed size, without displaying the first visual effect having an intermediate size. In some embodiments, when a sticker is being moved on the display or resized, the sticker movement and/or resizing is displayed such that only the user can see the changes, including the intermediate movement/resizing as the sticker is being modified, but other participants in the live video communication session cannot see the intermediate movement/resizing of the sticker. Thus, other participants only see the sticker (or updates to the sticker) after it has been modified.
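A minimal sketch of the two update behaviors (intermediate states shown locally, only the final state sent to other participants) is given below in Swift; the `StickerSink` protocol and `StickerBroadcaster` class are assumed names introduced only for illustration.

```swift
struct StickerState { var x: Double; var y: Double; var size: Double }

// Hypothetical sinks for the local preview and for the remote participants.
protocol StickerSink { func show(_ state: StickerState) }

// While the sticker is being dragged or resized, the local preview receives
// every intermediate state; remote participants receive only the final state
// once the gesture ends, matching the second behavior described above.
final class StickerBroadcaster {
    let localPreview: StickerSink
    let remoteStream: StickerSink

    init(localPreview: StickerSink, remoteStream: StickerSink) {
        self.localPreview = localPreview
        self.remoteStream = remoteStream
    }

    func gestureChanged(to state: StickerState) {
        localPreview.show(state)      // intermediate movement/resizing, local only
    }

    func gestureEnded(at state: StickerState) {
        localPreview.show(state)
        remoteStream.show(state)      // remote participants see only the result
    }
}
```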
In some embodiments, a plurality of participants are participating in the live video communication session, the plurality of participants including the subject (e.g., a user of the electronic device) and a first remote participant (e.g., a user of a second electronic device, remote from the first electronic device). In some embodiments, the live video communication user interface (e.g., 1200) further includes a representation of the first remote participant. In some embodiments, the representation of the first remote participant includes image or video data received from a remote device/a remote camera. In some embodiments, further in response to detecting the gesture directed to the first affordance, the electronic device reduces a size of the representation of the first remote participant.
In some embodiments, a plurality of participants are participating in the live video communication session, the plurality of participants including the subject (e.g., a user of the electronic device) and a first remote participant (e.g., a user of a second electronic device, remote from the first electronic device), and where the representation of the subject is a live preview of a field of view of a camera (e.g., 602) of the electronic device (e.g., 600) (e.g., a stream of image data that represents what is in the field of view of the camera). In some embodiments, after modifying the appearance of the representation (e.g., 1201) of the subject participating in the live video communication session based on a visual effect associated with the selected effects option affordance, the electronic device transmits data corresponding to the modified appearance of the representation of the subject participating in the live video communication session to at least the remote participant of the plurality of participants. In some embodiments, transmitting the data includes transmitting the image data (e.g., a real-time stream of image data from the field of view of the camera) along with data (e.g., separate data) representing the modifications made based on the selected visual effect. In some embodiments, transmitting the data includes transmitting composite video data that includes the image data from the field of view of the camera combined with data representing the modifications made based on the selected visual effect.
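For illustration, the two transmission alternatives described above could be represented by a payload type such as the hypothetical Swift enumeration below; the names are assumptions rather than a disclosed format.

```swift
import Foundation

// Image data accompanied by separate effect data (receiver applies the effect),
// or video frames already composited with the selected visual effect.
enum OutgoingVideoPayload {
    case separate(frame: Data, effectData: Data)
    case composite(frame: Data)

    var isPreComposited: Bool {
        if case .composite = self { return true }
        return false
    }
}
```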
In some embodiments, the electronic device (e.g., 600) displays the live video communication user interface (e.g., 1200) without the first affordance. In some embodiments, the electronic device detects a touch input on the live video communication user interface (e.g., 1206) (e.g., a tap gesture on the live video communication user interface to display video call options). In some embodiments, in response to detecting the touch input, in accordance with a determination that the camera effects mode is activated, the electronic device displays a live video communication options user interface including the first affordance and a plurality of visual effects affordances (e.g., a sticker affordance, an avatar affordance, an affordance associated with a full-screen effect). In some embodiments, in response to detecting the touch input, in accordance with a determination that the camera effects mode is not activated, the electronic device displays the live video communication options user interface including the first affordance and excluding the plurality of visual effects affordances. Displaying the live video communication options user interface including the first affordance and either including the plurality of visual effects affordances or excluding the plurality of visual effects affordances based on a determination that the camera effects mode is or is not activated indicates whether or not the camera effects mode is currently activated. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
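A simple, hypothetical Swift sketch of this conditional options user interface is shown below; the `Affordance` cases and the function name are illustrative assumptions.

```swift
// Hypothetical identifiers for items in the options user interface.
enum Affordance {
    case effectsToggle, sticker, avatar, fullScreenEffect
}

// The first affordance (the effects toggle) is always shown; the visual
// effects affordances are included only when the camera effects mode is
// activated, and excluded otherwise.
func optionsAffordances(cameraEffectsModeActivated: Bool) -> [Affordance] {
    var items: [Affordance] = [.effectsToggle]
    if cameraEffectsModeActivated {
        items += [.sticker, .avatar, .fullScreenEffect]
    }
    return items
}

// Example: with the effects mode off, only the toggle appears.
_ = optionsAffordances(cameraEffectsModeActivated: false)   // [.effectsToggle]
```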
Note that details of the processes described above with respect to method 1300 (e.g.,
The embodiment illustrated in
In
In
As shown in
In some embodiments, the dynamic modification of avatar 1421 is achieved using one or more depth sensors (e.g., depth camera sensor 175) to capture an initial depth map of the objects in the field of view of camera 602 (including the subject (corresponding to representation of subject 1422) and background (corresponding to background 1426)). The initial depth map is then modified (e.g., using one or more of a blurring, fading, or smoothing transition of the initial depth map) to decrease instances of abrupt transitions between displaying and hiding portions of the avatar. This provides a more fluid, dynamic appearance of avatar 1421, particularly as various portions of the avatar are hidden or displayed in response to movement of the subject.
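By way of example only, the smoothing of the initial depth map could resemble the following Swift sketch, which applies a small box blur over a grid of depth values; the `DepthMap` type and `smoothed` function are hypothetical and stand in for whatever blurring, fading, or smoothing transformation is actually used.

```swift
// A depth map as a row-major grid of depth values (smaller = closer).
struct DepthMap {
    var width: Int
    var height: Int
    var values: [Double]   // count == width * height

    func value(_ x: Int, _ y: Int) -> Double { values[y * width + x] }
}

// One possible smoothing pass (a 3x3 box blur) over the initial depth map,
// reducing abrupt transitions between hiding and showing avatar portions.
func smoothed(_ map: DepthMap) -> DepthMap {
    var out = map
    for y in 0..<map.height {
        for x in 0..<map.width {
            var sum = 0.0
            var count = 0.0
            for dy in -1...1 {
                for dx in -1...1 {
                    let nx = x + dx, ny = y + dy
                    guard nx >= 0, nx < map.width, ny >= 0, ny < map.height else { continue }
                    sum += map.value(nx, ny)
                    count += 1
                }
            }
            out.values[y * map.width + x] = sum / count
        }
    }
    return out
}
```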
In
Device 600 displays avatar 1421 having long hair that hangs in front of, and behind, the representation of the subject's shoulders. The position of certain portions of the hair, relative to the representation of the subject's shoulders, is determined based on depth data that indicates the spatial positioning of avatar 1421 (including the avatar hair) relative to the depth position of representation of subject 1422 (and specific portions of representation of subject 1422 (e.g., representations of the subject's neck and/or shoulders)). In some embodiments, portions of the avatar that are dynamically displayed (e.g., portions of the avatar that can be either displayed or hidden depending on the spatial relationship with representation of subject 1422) are shown having a blending effect 1432 at locations adjacent to representation of subject 1422. This blending effect smooths a displayed transition between the portion of the avatar and the representation of subject 1422.
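The blending effect at locations adjacent to the representation of the subject can be illustrated with the following Swift sketch, which blends hair visibility over a narrow band around the depth boundary rather than cutting it off abruptly; the function and parameter names (including the `blendBand` width) are assumptions made for illustration.

```swift
// For a given pixel, decide how much of a dynamically displayed avatar hair
// portion to show, given its simulated depth and the subject's depth at that
// pixel (smaller values are closer to the camera). A narrow band around equal
// depth is blended instead of switching abruptly between shown and hidden.
func hairOpacity(hairDepth: Double, subjectDepth: Double, blendBand: Double = 0.02) -> Double {
    let difference = subjectDepth - hairDepth    // > 0 means the hair is in front
    if difference >= blendBand { return 1.0 }    // fully shown
    if difference <= -blendBand { return 0.0 }   // fully hidden behind the subject
    return (difference + blendBand) / (2 * blendBand)  // blended near the boundary
}

// Hair just behind the shoulder fades out rather than vanishing at once.
_ = hairOpacity(hairDepth: 0.57, subjectDepth: 0.56)   // ≈ 0.25
```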
Device 600 modifies avatar 1421 in response to detected changes in the head and face of the subject. For example, as shown in
In some embodiments, portions of avatar 1421 are persistently displayed regardless of any spatial relationship to representation of subject 1422 or any other objects in the depth map. For example, although representation of subject 1422 is wearing hat 1423, which includes a bill that sticks out in front of representation of subject 1422, avatar head 1421-3 and avatar face 1421-4 are persistently displayed in front of the representation of the subject's head and hat 1423. This prevents objects in the field of view of camera 602, particularly objects on representation of subject 1422 (or portions of the representation of the subject), from appearing through portions of avatar 1421 (e.g., specifically, portions of avatar 1421 that should always be displayed to render an appearance of avatar 1421 positioned on representation of subject 1422). In some embodiments, the persistently displayed portions of avatar 1421 can include the avatar's face (1421-4), head (1421-3), and portions of the avatar's hair (1421-1).
As another example,
In
In
In
Unicorn avatar 1435 also includes shadow 1436 displayed on a portion of unicorn avatar 1435 and on representation of subject 1432 (e.g., a representation of a shadow cast onto the representation of the subject by the avatar). In some embodiments, a displayed shadow has a shape and position determined based on the shape of the avatar and a relative position of the avatar and representation of subject 1432 to a light source (e.g., a light source detected in the field of view of camera 602 or a simulated light source). As shown in
As described below, method 1500 provides an intuitive way for displaying visual effects in a camera application. The method reduces the cognitive burden on a user for applying visual effects to an image viewed in a camera application, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to display visual effects in an image faster and more efficiently conserves power and increases the time between battery charges.
The electronic device (e.g., 600) displays (1502), via the display apparatus (e.g., 601), a representation of image data (e.g., 1420-1) captured via the one or more cameras (e.g., 602). In some embodiments, the representation includes a representation of a subject (e.g., 1422) (e.g., a representation of at least a portion of a subject) and the image data corresponds to depth data (e.g., the image data includes data captured by a visible light camera and a depth camera) that includes depth data for the subject (e.g., information about the relative depth positioning of one or more portions of the subject with respect to other portions of the subject and/or to other objects within the field of view of the one or more cameras). In some embodiments, depth data is in the form of a depth map or depth mask.
In some embodiments, the electronic device (e.g., 600) includes one or more depth sensors (e.g., 175, 602). In some embodiments, prior to displaying a representation of the virtual avatar (e.g., 1421), the electronic device captures initial depth data (e.g., an initial or unmodified depth map and/or depth mask corresponding to the image data captured by the one or more cameras (e.g., 602); an initial or unmodified depth mask of the subject) for the subject via the one or more depth sensors. The electronic device generates the depth data for the subject by modifying the initial depth data for the subject. In some embodiments, modifying the initial depth data can decrease instances of abrupt transitions between including and excluding the representation of the first portion of the virtual avatar (e.g., 1421-2), particularly as the pose of the subject changes with respect to the electronic device. Modifying the initial depth data for the subject allows for smoother transitions in displaying the representation of the virtual avatar as the pose of the user changes, thereby improving the visual feedback of detected changes in the subject (represented by the corresponding changes to the virtual avatar). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, modifying the initial depth data for the subject includes performing one or more transformations on the initial depth data selected from the group consisting of blurring (e.g., defocusing the initial depth data to blend the transitions between different levels of the depth data; e.g., blurring the values (e.g., greyscale values) of an initial depth mask) the initial depth data, fading out (e.g., modulating the depth data downwards to reduce the depth values) the initial depth data, and smoothing (e.g., applying a mathematical function to blend the initial depth data, particularly at the transitions between different depth layers of the initial depth data) the initial depth data.
The electronic device (e.g., 600) displays (1504), via the display apparatus (e.g., 601), a representation of a virtual avatar (e.g., 1421) (e.g., a visual representation of a virtual avatar construct that can include some or all of the construct, when represented) that is displayed in place of (e.g., occludes or is displayed on top of) at least a portion of (e.g., with at least a portion of the virtual avatar partially or completely overlaying (e.g., obscuring) at least a portion of the subject) the representation of the subject (e.g., 1422, 1423). Displaying a visual representation of the virtual avatar over at least a portion of the representation of the subject provides the user with visual feedback of how the virtual avatar looks when overlaid on the subject. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the virtual avatar is placed at a simulated depth (e.g., at a location selected so that the virtual avatar is displayed slightly in front of the representation of the subject in a depth dimension of the user interface) relative to the representation of the subject as determined based on the depth data for the subject.
In some embodiments, in accordance with a determination, based on the depth data, that a first portion of the virtual avatar (e.g., 1421-2) (e.g., an avatar hair portion) satisfies a set of depth-based display criteria, the device (e.g., 600) includes (1506) as part of the representation of the virtual avatar (e.g., 1421), a representation of the first portion of the virtual avatar (e.g., 1421-2) that is displayed in place of the first portion of the subject (e.g., the first portion of the representation of the subject) (for example, a portion 1421-2 of avatar hair is displayed over a portion of a representation of subject's hand 1425 as shown in
In some embodiments, in accordance with a determination, based on the depth data, that the first portion of the virtual avatar (e.g., 1421-2) does not satisfy the set of depth-based display criteria for the first portion of the subject (e.g., 1425) (e.g., because the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is behind the corresponding first portion of the subject), the device (e.g., 600) excludes (1508), from the representation of the virtual avatar (e.g., 1421), the representation of the first portion of the virtual avatar (e.g., hair is not displayed because it is positioned behind the subject's shoulder 1422-1 at region 1424-1) (e.g., additional avatar hair 1421-2 is not shown in
In some embodiments, the first portion of the virtual avatar (e.g., 1421) (e.g., an avatar head) moves based on movement of the subject. In some embodiments, the first portion of the virtual avatar moves based on the movement of the subject's head or the representation of the subject's head.
In some embodiments, the representation of the virtual avatar includes a representation of a second portion (e.g., 1421-1, 1421-3, 1421-4, 1435-1) (e.g., a top of an avatar head (1421-3)) of the virtual avatar that is displayed over a second portion (e.g., 1425, 1423) of the representation of the subject without regard to whether or not the depth data indicate that the second portion of the virtual avatar has a simulated depth that is in front of or behind the corresponding second portion of the representation of the subject. A representation of a second portion of the virtual avatar such as the top of the avatar's head is persistently displayed. This allows the second portion of the virtual avatar to always be displayed, even if the representation of the subject includes an object that is positioned closer to the camera (e.g., 602) than the avatar (e.g., a hat (1423) positioned on the representation of the subject's head will be covered by the avatar (1421)). Persistently displaying a portion of the virtual avatar provides the user with more control of the device by allowing the user to display a selected avatar without having to adjust depth settings of the device to ignore certain objects. Providing additional control of the device without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the second portion of the virtual avatar (e.g., an avatar head) is persistently displayed in place of the corresponding second portion of the representation of the subject. In some embodiments, portions of the avatar that are displayed irrespective of depth-based criteria are persistently displayed over the subject to avoid displaying portions of the representation of the subject (e.g., a hat (1423), the subject's hair) protruding through the virtual avatar, even when a spatial relationship of the portions of the virtual avatar and the portions of the representation of the subject would otherwise indicate that the portions of the virtual avatar should be obscured by the portions of the subject. In some embodiments, the second portion of the virtual avatar (e.g., an avatar head) moves based on movement of the subject (e.g., 1422) (e.g., based on movement of the subject's head or based on movement of the representation of the subject's head).
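As a sketch only, the depth-based inclusion rule together with the persistent-display override could be expressed as in the following Swift example; the `AvatarPortion` type, its fields, and the sample depth values are hypothetical.

```swift
struct AvatarPortion {
    var name: String
    var simulatedDepth: Double      // smaller = closer to the camera
    var alwaysDisplayed: Bool       // e.g. the avatar face/head portions
}

// Persistently displayed portions ignore the depth comparison; other portions
// are included only when their simulated depth places them in front of the
// corresponding portion of the representation of the subject.
func shouldDisplay(_ portion: AvatarPortion, subjectDepthAtPortion: Double) -> Bool {
    if portion.alwaysDisplayed { return true }
    return portion.simulatedDepth < subjectDepthAtPortion
}

// Example: hair behind the shoulder is excluded; the head is always shown.
let hair = AvatarPortion(name: "hair strand", simulatedDepth: 0.60, alwaysDisplayed: false)
let head = AvatarPortion(name: "head", simulatedDepth: 0.60, alwaysDisplayed: true)
let shoulderDepth = 0.55
_ = shouldDisplay(hair, subjectDepthAtPortion: shoulderDepth)   // false
_ = shouldDisplay(head, subjectDepthAtPortion: shoulderDepth)   // true
```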
In some embodiments, the first portion of the virtual avatar (e.g., 1421-2) (e.g., a portion that is included or excluded based on depth data) is a first sub-portion of a first avatar feature (e.g., an element of the virtual avatar such as avatar hair, an avatar ear, an avatar accessory (e.g., avatar earrings)) and the second portion (e.g., 1421-1, 1421-3, 1421-4, 1435-1, 1435-2) of the virtual avatar (e.g., a portion that is not included or excluded based on depth data; that is included independent of the depth data) is a second sub-portion of the first avatar feature (e.g., avatar hair). In some embodiments, the first sub-portion is a portion (e.g., 1435-3) of the virtual avatar (e.g., 1435) that is positioned on the backside of the virtual avatar when the virtual avatar is in a neutral position (e.g., as shown in
In some embodiments, the virtual avatar (e.g., 1421, 1435) includes an avatar hair feature (e.g., avatar hair that is long) that includes the first portion (e.g., 1421-1) of the virtual avatar. The electronic device (e.g., 600) displays the representation of the virtual avatar by displaying a first portion of the avatar hair feature (e.g., 1421-1) and conditionally displaying a second portion of the avatar hair feature (e.g., 1421-2) based on whether or not a simulated depth of the second portion of the avatar hair feature is in front of or behind a third portion of the representation of the subject (e.g., 1422-1) (e.g., neck, shoulders, and/or body) based on the depth data for the subject (e.g., displaying a representation of a persistent portion of the avatar hair feature and, variably including (or excluding) depending on depth, the first portion of the virtual avatar). The device thus determines whether the second portion of the avatar hair feature is in front of or behind a third portion of the representation of the subject, such as the neck, shoulders, or body of the subject. Determining the visibility of the second portion of the hair feature prior to displaying allows the user-device interface to be more efficient in only displaying portions of the avatar hair feature that will be visible to the user. Providing visual feedback of the virtual avatar allows the user to see the resulting image when the avatar hair feature is displayed with the representation of the subject. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the virtual avatar includes an avatar neck feature (e.g., 1435-3) (e.g., a neck of an equine avatar (e.g., unicorn or horse); e.g., an avatar mane) that includes the first portion (e.g., 1435-4) of the virtual avatar. The electronic device displays the representation of the virtual avatar (e.g., 1435) by conditionally displaying a portion of the avatar neck feature based on whether or not a simulated depth of the portion of the avatar neck feature is in front of or behind a fourth portion (e.g., 1422-1) of the representation of the subject (e.g., neck or shoulder) based on the depth data for the subject (e.g., displaying a representation of a persistent portion of the avatar neck feature and, variably including (or excluding) depending on depth, the first portion of the virtual avatar). The device thus determines whether the portion of the avatar neck feature is in front of or behind a fourth portion of the representation of the subject, such as the neck of the subject. Determining the visibility of the portion of the neck feature prior to displaying allows the user-device interface to be more efficient in only displaying portions of the avatar neck feature that will be visible to the user. Providing visual feedback of the virtual avatar allows the user to see the resulting image when the avatar neck feature is displayed with the representation of the subject. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first portion (e.g., 1421-2, 1435-3) of the virtual avatar (e.g., 1421, 1435) includes an obscured portion (e.g., 1435-4) of the virtual avatar (e.g., the back of the avatar's neck) that is not displayed when the portion of the representation of the subject (e.g., the subject's head) has a pose that (directly) faces the one or more cameras (e.g., the subject's head is positioned forward, facing the camera). Obscured portions of the avatar are not displayed because the user would not be able to see that portion of the avatar. Determining the visibility of the first portion of the virtual avatar prior to displaying allows the user-device interface to be more efficient in only displaying portions of the virtual avatar that will be visible to the user. Providing visual feedback of the virtual avatar allows the user to see the resulting image when the virtual avatar is displayed with the representation of the subject. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the obscured portion of the virtual avatar includes the back of the avatar's neck (e.g., 1435-2, 1435-3) or portions of the virtual avatar (e.g., back of avatar hair) that are positioned behind the subject's neck or head. In some embodiments, this prevents the back of the avatar's neck (or portions of the avatar on the back of the avatar head or positioned behind the subject's neck) from being displayed protruding through the representation of the subject's neck when the subject's head is tilted up (e.g., looking up).
In some embodiments, displaying the representation of the virtual avatar (e.g., 1421) further includes modifying the visual appearance (e.g., blending, blurring, feathering, or otherwise gradually changing the degree of hiding) of a third portion (e.g., 1432) of the virtual avatar that is adjacent to the first portion (e.g., 1421-1) of the virtual avatar (e.g., and also adjacent at least a portion of the representation of the subject (e.g., 1422-1)) to an appearance that is based on both the appearance of the avatar and the appearance of the representation of the subject. In some embodiments, a portion of avatar hair (e.g., 1432) is blended with the representation of the subject (e.g., 1422) at a portion of the representation of the virtual avatar where the portion of the avatar hair intersects the shoulders (1422-1) of the displayed representation of the subject. In some embodiments, a bottom portion of the avatar head is blended (e.g., 1434) with the representation of the subject at a portion of the representation of the virtual avatar where the bottom portion of the avatar head intersects the displayed representation of the subject's neck (e.g.,
In some embodiments, the electronic device (e.g., 600) detects a change in pose of a head portion of the subject (e.g., 1422) (e.g., the subject's head turns to the side). In response to the electronic device detecting the change in pose of the head portion of the subject, the electronic device modifies (e.g., increasing or decreasing), based on the depth data and the change in pose, an amount of the virtual avatar (e.g., 1421-2) that is excluded from (e.g., a size that is either included or excluded from the representation of the virtual avatar) the representation of the virtual avatar (e.g., the avatar's hair). Updating the displayed virtual avatar based on changes in the pose of the head portion of the subject provides visual feedback of the virtual avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, modification includes increasing or decreasing an amount of the avatar's hair that is displayed in the representation of the virtual avatar (e.g., 1421) when the avatar's head (e.g., 1421-3) is turned to the side to match the movement of the subject's head. In some embodiments, the displayed amount of the first portion of the avatar (e.g., the avatar's hair) is modified depending on whether the portion of the avatar is obscured by a portion of the representation of the subject in response to the change in pose. For example, a displayed amount (e.g., size) of a portion of the avatar's hair is decreased when the portion of the avatar hair is obscured by the user's neck or shoulders (e.g., 1422-1) in response to turning the avatar's head (e.g., turning the avatar's head causes previously displayed hair positioned in front of the subject's shoulders to no longer be displayed because turning the head positioned the avatar hair behind the subject's shoulders (e.g., 1424-1)). Additionally, a displayed amount of the portion of the avatar's hair increases when the portion of the hair (e.g., 1424-2) that was previously hidden behind the subject's shoulders, neck, or head is visible as a result of the avatar head turning to the side (e.g., hair positioned behind the subject's shoulders is now visible because turning the avatar's head caused the avatar hair to be positioned in front of the subject's shoulders).
In some embodiments, the device (e.g., 600) detects (1510) a change in pose of the subject. In response to detecting the change in pose of the subject (e.g., 1422) (e.g., detecting a movement of a hand (e.g., 1425) over the user's shoulder (e.g., 1422-1); e.g., detecting a turning or tilting of the subject's head), the electronic device (e.g., 600) modifies (1512) the displayed representation of the virtual avatar (e.g., 1421) based on the change in pose. Updating the displayed virtual avatar based on changes in the pose of the subject provides visual feedback of the virtual avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, in accordance with a determination, based on the depth data that takes into account the change in pose of the subject, that the first portion (e.g., 1421-2) of the virtual avatar (e.g., 1421) satisfies the set of depth-based display criteria, the electronic device updates (1514) an appearance of the representation of the virtual avatar from a first appearance (e.g.,
In some embodiments, the electronic device (e.g., 600) detects a change (e.g., a change in pose (e.g., orientation, rotation, translation, etc.); e.g., a change in a facial expression) in the portion of the representation of the subject (e.g., 1422). The electronic device changes an appearance of the virtual avatar (e.g., 1421, 1435) based on the detected change in the portion of the representation of the subject (e.g., modifying, in real time, a position and/or facial expression of the virtual avatar based on the detected change in the portion of the representation of the subject). Updating the displayed virtual avatar based on changes in the expressions of the subject provides visual feedback of the virtual avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the electronic device (e.g., 600) displays, via the display apparatus (e.g., 601), a representation of a shadow (e.g., 1430, 1436) cast by the virtual avatar (e.g., 1421, 1435) that is displayed on at least a fifth portion (e.g., the subject's chest, neck, or shoulder) of the representation of the subject. The device displays a representation of a shadow cast by the virtual avatar over a portion of the representation of the subject to provide a more realistic representation of the displayed virtual avatar with a simulated light source. Providing visual feedback of the virtual avatar allows the user to see the resulting image. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the representation of the shadow cast by the virtual avatar is overlaid on the representation of the subject with an opacity less than 100%. In some embodiments, the shadow is displayed on a portion of the subject that is determined based on a relative position of the displayed virtual avatar and a simulated light source that is, optionally, determined based on a detected light source in the field of view of the camera. In some embodiments, one or more characteristics (e.g., position, intensity, shape, etc.) of the displayed representation of the shadow are based on a shape of the virtual avatar. In some embodiments, a shape of the displayed shadow is determined based on the shape of the virtual avatar such that different avatars appear to cast shadows of different shapes.
In some embodiments, one or more characteristics (e.g., position, intensity, shape, etc.) of the displayed representation of the shadow (e.g., 1430, 1436) are based on a lighting condition (e.g., a detected amount of ambient light, a detected light source, etc.) in the field of view of the one or more cameras (e.g., 602). In some embodiments, the position of the shadow is determined based on a position of a light source in the field of view of the camera. For example, if a light source (e.g., a detected light source or a modeled light source) is positioned behind the representation of the subject (e.g., 1422) in the field of view of the camera, the shadow is positioned on the representation of the subject opposite from the position of the light source relative to the representation of the subject. In some embodiments, the intensity of the shadow is determined based on the brightness of the lighting conditions detected in the field of view of the one or more cameras (e.g., the shadow is more intense (distinct, darker, etc.) for brighter lighting conditions, and less intense for darker lighting conditions).
In some embodiments, one or more characteristics (e.g., position, intensity, shape, etc.) of the displayed representation of the shadow (e.g., 1430, 1436) are based on the depth data. In some embodiments, the position and/or shape of the shadow is determined using the depth data (e.g., in the form of a depth map or depth mask) to provide a more realistic representation of the shadow effect that is based on the three-dimensional positioning of the representation of the subject (e.g., 1422) in the field of view of the one or more cameras (e.g., so that the shadow of the avatar appears to fall onto the subject based on a simulated distance from the avatar to the subject and a simulated distance from the light source to the avatar).
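Purely as an illustrative model (not a disclosed rendering algorithm), the following Swift sketch derives a shadow offset opposite a light source and an intensity that follows the ambient brightness; all names, the 2D simplification, and the specific opacity formula are assumptions made for this example.

```swift
import Foundation

// Resulting shadow placement and strength, in a simplified 2D screen space.
struct ShadowParameters {
    var offsetX: Double   // offset of the shadow relative to the avatar
    var offsetY: Double
    var opacity: Double   // kept below 1.0 so the subject remains visible
}

// The shadow falls on the side of the avatar opposite the light source, is
// pushed further when the avatar sits further in front of the subject, and is
// more distinct (darker) under brighter lighting conditions.
func shadowParameters(lightX: Double, lightY: Double,
                      avatarX: Double, avatarY: Double,
                      avatarToSubjectDistance: Double,
                      ambientBrightness: Double) -> ShadowParameters {
    let dx = avatarX - lightX
    let dy = avatarY - lightY
    let length = max(sqrt(dx * dx + dy * dy), 0.0001)
    let scale = avatarToSubjectDistance / length
    let brightness = max(0.0, min(1.0, ambientBrightness))
    let opacity = min(0.75, 0.2 + 0.5 * brightness)
    return ShadowParameters(offsetX: dx * scale, offsetY: dy * scale, opacity: opacity)
}
```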
Note that details of the processes described above with respect to method 1500 (e.g.,
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is the gathering and use of data available from various sources for sharing with other users. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to better represent a user in a conversation. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of sending an avatar, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
This application is a continuation of U.S. application Ser. No. 16/599,433, entitled “Creative Camera,” filed Oct. 11, 2019, which is a continuation of U.S. application Ser. No. 16/143,097, now U.S. Pat. No. 10,523,879, entitled “Creative Camera,” filed Sep. 26, 2018, which claims the benefit of: U.S. Provisional Application No. 62/668,227, entitled “Creative Camera,” filed May 7, 2018; and U.S. Provisional Application No. 62/679,934, entitled “Creative Camera,” filed Jun. 3, 2018, the contents of which are hereby incorporated by reference in their entireties.
20110074710 | Weeldreyer et al. | Mar 2011 | A1 |
20110074807 | Inada et al. | Mar 2011 | A1 |
20110074830 | Rapp et al. | Mar 2011 | A1 |
20110109581 | Ozawa et al. | May 2011 | A1 |
20110119610 | Hackborn et al. | May 2011 | A1 |
20110138332 | Miyagawa | Jun 2011 | A1 |
20110157379 | Kimura | Jun 2011 | A1 |
20110176039 | Lo | Jul 2011 | A1 |
20110187879 | Ochiai | Aug 2011 | A1 |
20110199495 | Laberge et al. | Aug 2011 | A1 |
20110221755 | Geisner et al. | Sep 2011 | A1 |
20110242369 | Misawa et al. | Oct 2011 | A1 |
20110248992 | Van et al. | Oct 2011 | A1 |
20110249073 | Cranfill et al. | Oct 2011 | A1 |
20110249078 | Abuan et al. | Oct 2011 | A1 |
20110252344 | Van | Oct 2011 | A1 |
20110256848 | Bok et al. | Oct 2011 | A1 |
20110296163 | Abernethy et al. | Dec 2011 | A1 |
20110304632 | Evertt et al. | Dec 2011 | A1 |
20120002898 | Côté et al. | Jan 2012 | A1 |
20120017180 | Flik et al. | Jan 2012 | A1 |
20120019551 | Pettigrew et al. | Jan 2012 | A1 |
20120026378 | Pang et al. | Feb 2012 | A1 |
20120036480 | Warner et al. | Feb 2012 | A1 |
20120056830 | Suzuki et al. | Mar 2012 | A1 |
20120056997 | Jang | Mar 2012 | A1 |
20120069028 | Bouguerra | Mar 2012 | A1 |
20120075328 | Goossens | Mar 2012 | A1 |
20120079378 | Goossens | Mar 2012 | A1 |
20120113762 | Frost | May 2012 | A1 |
20120127189 | Park et al. | May 2012 | A1 |
20120127346 | Sato et al. | May 2012 | A1 |
20120133797 | Sato et al. | May 2012 | A1 |
20120162242 | Amano et al. | Jun 2012 | A1 |
20120169776 | Rissa et al. | Jul 2012 | A1 |
20120188394 | Park et al. | Jul 2012 | A1 |
20120194559 | Lim | Aug 2012 | A1 |
20120206452 | Geisner et al. | Aug 2012 | A1 |
20120206495 | Endo et al. | Aug 2012 | A1 |
20120206619 | Nitta et al. | Aug 2012 | A1 |
20120206621 | Chen et al. | Aug 2012 | A1 |
20120210263 | Perry et al. | Aug 2012 | A1 |
20120235990 | Yamaji | Sep 2012 | A1 |
20120243802 | Fintel et al. | Sep 2012 | A1 |
20120256967 | Baldwin et al. | Oct 2012 | A1 |
20120274830 | Kameyama et al. | Nov 2012 | A1 |
20120293611 | Lee | Nov 2012 | A1 |
20120293686 | Karn et al. | Nov 2012 | A1 |
20120299945 | Aarabi | Nov 2012 | A1 |
20120309520 | Evertt et al. | Dec 2012 | A1 |
20120314047 | Kasahara et al. | Dec 2012 | A1 |
20130010170 | Matsuzawa et al. | Jan 2013 | A1 |
20130038546 | Mineo | Feb 2013 | A1 |
20130038759 | Jo et al. | Feb 2013 | A1 |
20130055119 | Luong | Feb 2013 | A1 |
20130076908 | Bratton et al. | Mar 2013 | A1 |
20130083222 | Matsuzawa et al. | Apr 2013 | A1 |
20130088413 | Raffle et al. | Apr 2013 | A1 |
20130088614 | Lee | Apr 2013 | A1 |
20130101164 | Leclerc et al. | Apr 2013 | A1 |
20130135315 | Bares et al. | May 2013 | A1 |
20130141362 | Asanuma et al. | Jun 2013 | A1 |
20130141513 | Setton et al. | Jun 2013 | A1 |
20130141524 | Karunamuni et al. | Jun 2013 | A1 |
20130147933 | Kulas et al. | Jun 2013 | A1 |
20130155308 | Wu et al. | Jun 2013 | A1 |
20130155474 | Roach et al. | Jun 2013 | A1 |
20130157646 | Ferren et al. | Jun 2013 | A1 |
20130159900 | Pendharkar | Jun 2013 | A1 |
20130165186 | Choi | Jun 2013 | A1 |
20130179831 | Izaki | Jul 2013 | A1 |
20130194378 | Brown | Aug 2013 | A1 |
20130198210 | Lee et al. | Aug 2013 | A1 |
20130201104 | Ptucha et al. | Aug 2013 | A1 |
20130201203 | Warner | Aug 2013 | A1 |
20130201307 | Schloter et al. | Aug 2013 | A1 |
20130210563 | Hollinger | Aug 2013 | A1 |
20130222663 | Rydenhag et al. | Aug 2013 | A1 |
20130234964 | Kim et al. | Sep 2013 | A1 |
20130239057 | Ubillos et al. | Sep 2013 | A1 |
20130246948 | Chen et al. | Sep 2013 | A1 |
20130265467 | Matsuzawa et al. | Oct 2013 | A1 |
20130286161 | Lv et al. | Oct 2013 | A1 |
20130290905 | Luvogt et al. | Oct 2013 | A1 |
20130293686 | Blow et al. | Nov 2013 | A1 |
20130305189 | Kim | Nov 2013 | A1 |
20130322218 | Burkhardt et al. | Dec 2013 | A1 |
20130336545 | Pritikin et al. | Dec 2013 | A1 |
20130342730 | Lee et al. | Dec 2013 | A1 |
20130346916 | Williamson et al. | Dec 2013 | A1 |
20140009639 | Lee | Jan 2014 | A1 |
20140022399 | Rashid et al. | Jan 2014 | A1 |
20140028872 | Lee et al. | Jan 2014 | A1 |
20140028885 | Ma et al. | Jan 2014 | A1 |
20140033043 | Kashima | Jan 2014 | A1 |
20140033100 | Noda et al. | Jan 2014 | A1 |
20140037178 | Park | Feb 2014 | A1 |
20140043368 | Yu | Feb 2014 | A1 |
20140043517 | Yim et al. | Feb 2014 | A1 |
20140047389 | Aarabi | Feb 2014 | A1 |
20140049536 | Neuman et al. | Feb 2014 | A1 |
20140055554 | Du et al. | Feb 2014 | A1 |
20140063175 | Jafry et al. | Mar 2014 | A1 |
20140063313 | Choi et al. | Mar 2014 | A1 |
20140071061 | Lin et al. | Mar 2014 | A1 |
20140071325 | Kawahara et al. | Mar 2014 | A1 |
20140078144 | Berriman et al. | Mar 2014 | A1 |
20140092272 | Choi | Apr 2014 | A1 |
20140095122 | Appleman et al. | Apr 2014 | A1 |
20140118560 | Bala et al. | May 2014 | A1 |
20140118563 | Mehta et al. | May 2014 | A1 |
20140123005 | Forstall et al. | May 2014 | A1 |
20140132735 | Lee et al. | May 2014 | A1 |
20140137013 | Matas | May 2014 | A1 |
20140143693 | Goossens et al. | May 2014 | A1 |
20140152886 | Morgan-Mar et al. | Jun 2014 | A1 |
20140176469 | Lim | Jun 2014 | A1 |
20140176565 | Adeyoola et al. | Jun 2014 | A1 |
20140192233 | Kakkori et al. | Jul 2014 | A1 |
20140205207 | Bhatt | Jul 2014 | A1 |
20140218371 | Du et al. | Aug 2014 | A1 |
20140229831 | Chordia et al. | Aug 2014 | A1 |
20140232838 | Jorgensen et al. | Aug 2014 | A1 |
20140240471 | Srinivasa et al. | Aug 2014 | A1 |
20140240531 | Nakai et al. | Aug 2014 | A1 |
20140267126 | Aberg et al. | Sep 2014 | A1 |
20140267618 | Esteban et al. | Sep 2014 | A1 |
20140267867 | Lee et al. | Sep 2014 | A1 |
20140281983 | Xian et al. | Sep 2014 | A1 |
20140282223 | Bastien et al. | Sep 2014 | A1 |
20140285698 | Geiss | Sep 2014 | A1 |
20140300635 | Suzuki | Oct 2014 | A1 |
20140300722 | Garcia | Oct 2014 | A1 |
20140300779 | Yeo et al. | Oct 2014 | A1 |
20140327639 | Papakipos et al. | Nov 2014 | A1 |
20140333671 | Phang et al. | Nov 2014 | A1 |
20140333824 | Xiu | Nov 2014 | A1 |
20140336808 | Taylor et al. | Nov 2014 | A1 |
20140351720 | Yin | Nov 2014 | A1 |
20140351753 | Shin et al. | Nov 2014 | A1 |
20140354845 | Mølgaard et al. | Dec 2014 | A1 |
20140359438 | Matsuki | Dec 2014 | A1 |
20140362091 | Bouaziz et al. | Dec 2014 | A1 |
20140362274 | Christie et al. | Dec 2014 | A1 |
20140364228 | Rimon | Dec 2014 | A1 |
20140368601 | Decharms | Dec 2014 | A1 |
20140368719 | Kaneko et al. | Dec 2014 | A1 |
20140372856 | Radakovitz et al. | Dec 2014 | A1 |
20150011204 | Seo et al. | Jan 2015 | A1 |
20150022649 | Koppal | Jan 2015 | A1 |
20150033192 | Bohannon et al. | Jan 2015 | A1 |
20150035825 | Zhou et al. | Feb 2015 | A1 |
20150036883 | Deri et al. | Feb 2015 | A1 |
20150037545 | Sun | Feb 2015 | A1 |
20150042571 | Lombardi et al. | Feb 2015 | A1 |
20150042852 | Lee et al. | Feb 2015 | A1 |
20150043046 | Iwamoto | Feb 2015 | A1 |
20150043806 | Sunkavalli et al. | Feb 2015 | A1 |
20150058754 | Rauh | Feb 2015 | A1 |
20150062052 | Bernstein et al. | Mar 2015 | A1 |
20150067513 | Zambetti et al. | Mar 2015 | A1 |
20150070362 | Hirai | Mar 2015 | A1 |
20150077502 | Jordan et al. | Mar 2015 | A1 |
20150078621 | Choi et al. | Mar 2015 | A1 |
20150078726 | Shakib et al. | Mar 2015 | A1 |
20150082193 | Wallace et al. | Mar 2015 | A1 |
20150082446 | Flowers et al. | Mar 2015 | A1 |
20150085174 | Shabtay et al. | Mar 2015 | A1 |
20150091896 | Tarquini et al. | Apr 2015 | A1 |
20150092077 | Feder et al. | Apr 2015 | A1 |
20150109417 | Zirnheld | Apr 2015 | A1 |
20150116353 | Miura et al. | Apr 2015 | A1 |
20150116448 | Gottlieb | Apr 2015 | A1 |
20150135109 | Zambetti et al. | May 2015 | A1 |
20150135234 | Hall | May 2015 | A1 |
20150138079 | Lannsjö | May 2015 | A1 |
20150145950 | Murphy et al. | May 2015 | A1 |
20150146079 | Kim | May 2015 | A1 |
20150149899 | Bernstein et al. | May 2015 | A1 |
20150149927 | Walkin et al. | May 2015 | A1 |
20150150141 | Szymanski et al. | May 2015 | A1 |
20150154448 | Murayama et al. | Jun 2015 | A1 |
20150172534 | Miyakawa et al. | Jun 2015 | A1 |
20150181135 | Shimosato | Jun 2015 | A1 |
20150189138 | Xie et al. | Jul 2015 | A1 |
20150194186 | Lee et al. | Jul 2015 | A1 |
20150208001 | Nonaka et al. | Jul 2015 | A1 |
20150212723 | Lim et al. | Jul 2015 | A1 |
20150213001 | Levy et al. | Jul 2015 | A1 |
20150213604 | Li | Jul 2015 | A1 |
20150220249 | Snibbe et al. | Aug 2015 | A1 |
20150248198 | Somlai-fisher et al. | Sep 2015 | A1 |
20150248235 | Offenberg et al. | Sep 2015 | A1 |
20150248583 | Sekine et al. | Sep 2015 | A1 |
20150249775 | Jacumet | Sep 2015 | A1 |
20150249785 | Mehta et al. | Sep 2015 | A1 |
20150253740 | Nishijima et al. | Sep 2015 | A1 |
20150254855 | Patankar et al. | Sep 2015 | A1 |
20150256749 | Frey et al. | Sep 2015 | A1 |
20150277686 | Laforge et al. | Oct 2015 | A1 |
20150281145 | Ji | Oct 2015 | A1 |
20150286724 | Knaapen et al. | Oct 2015 | A1 |
20150289104 | Jung et al. | Oct 2015 | A1 |
20150301731 | Okamoto et al. | Oct 2015 | A1 |
20150302624 | Burke | Oct 2015 | A1 |
20150310583 | Hume et al. | Oct 2015 | A1 |
20150312182 | Langholz | Oct 2015 | A1 |
20150312184 | Langholz et al. | Oct 2015 | A1 |
20150312185 | Langholz et al. | Oct 2015 | A1 |
20150317945 | Andress et al. | Nov 2015 | A1 |
20150334075 | Wang et al. | Nov 2015 | A1 |
20150334291 | Cho et al. | Nov 2015 | A1 |
20150341536 | Huang et al. | Nov 2015 | A1 |
20150347824 | Saari et al. | Dec 2015 | A1 |
20150350141 | Yang et al. | Dec 2015 | A1 |
20150350533 | Harris et al. | Dec 2015 | A1 |
20150362998 | Park et al. | Dec 2015 | A1 |
20150365587 | Ha et al. | Dec 2015 | A1 |
20150370458 | Chen | Dec 2015 | A1 |
20150370529 | Zambetti et al. | Dec 2015 | A1 |
20160005211 | Sarkis et al. | Jan 2016 | A1 |
20160006987 | Li et al. | Jan 2016 | A1 |
20160012567 | Siddiqui et al. | Jan 2016 | A1 |
20160026371 | Lu et al. | Jan 2016 | A1 |
20160030844 | Nair et al. | Feb 2016 | A1 |
20160034133 | Wilson et al. | Feb 2016 | A1 |
20160044236 | Matsuzawa et al. | Feb 2016 | A1 |
20160048598 | Fujioka et al. | Feb 2016 | A1 |
20160048599 | Fujioka et al. | Feb 2016 | A1 |
20160048725 | Holz et al. | Feb 2016 | A1 |
20160048903 | Fujioka et al. | Feb 2016 | A1 |
20160050169 | Ben Atar et al. | Feb 2016 | A1 |
20160050351 | Lee et al. | Feb 2016 | A1 |
20160050446 | Fujioka et al. | Feb 2016 | A1 |
20160065832 | Kim et al. | Mar 2016 | A1 |
20160065861 | Steinberg et al. | Mar 2016 | A1 |
20160065930 | Chandra et al. | Mar 2016 | A1 |
20160077725 | Maeda | Mar 2016 | A1 |
20160080639 | Choi et al. | Mar 2016 | A1 |
20160086387 | Os et al. | Mar 2016 | A1 |
20160088280 | Sadi et al. | Mar 2016 | A1 |
20160092035 | Crocker et al. | Mar 2016 | A1 |
20160092043 | Missig et al. | Mar 2016 | A1 |
20160098094 | Minkkinen | Apr 2016 | A1 |
20160117829 | Yoon et al. | Apr 2016 | A1 |
20160127636 | Ito et al. | May 2016 | A1 |
20160127638 | Guo et al. | May 2016 | A1 |
20160132200 | Walkin et al. | May 2016 | A1 |
20160132201 | Shaw et al. | May 2016 | A1 |
20160134840 | Mcculloch | May 2016 | A1 |
20160142649 | Yim | May 2016 | A1 |
20160148384 | Bud et al. | May 2016 | A1 |
20160150215 | Chen et al. | May 2016 | A1 |
20160162039 | Eilat et al. | Jun 2016 | A1 |
20160163084 | Corazza et al. | Jun 2016 | A1 |
20160173869 | Srikanth et al. | Jun 2016 | A1 |
20160187995 | Rosewall | Jun 2016 | A1 |
20160188181 | Smith | Jun 2016 | A1 |
20160212319 | Harris et al. | Jul 2016 | A1 |
20160217601 | Tsuda et al. | Jul 2016 | A1 |
20160219217 | Williams et al. | Jul 2016 | A1 |
20160225175 | Kim et al. | Aug 2016 | A1 |
20160226926 | Singh et al. | Aug 2016 | A1 |
20160227016 | Kim et al. | Aug 2016 | A1 |
20160227121 | Matsushita | Aug 2016 | A1 |
20160247288 | Omori et al. | Aug 2016 | A1 |
20160247309 | Li et al. | Aug 2016 | A1 |
20160255268 | Kang et al. | Sep 2016 | A1 |
20160259413 | Anzures et al. | Sep 2016 | A1 |
20160259497 | Bauer et al. | Sep 2016 | A1 |
20160259498 | Foss et al. | Sep 2016 | A1 |
20160259499 | Kocienda et al. | Sep 2016 | A1 |
20160259518 | King et al. | Sep 2016 | A1 |
20160259519 | Foss et al. | Sep 2016 | A1 |
20160259527 | Kocienda et al. | Sep 2016 | A1 |
20160259528 | Foss et al. | Sep 2016 | A1 |
20160267067 | Mays et al. | Sep 2016 | A1 |
20160275724 | Adeyoola et al. | Sep 2016 | A1 |
20160283097 | Voss et al. | Sep 2016 | A1 |
20160284123 | Hare et al. | Sep 2016 | A1 |
20160307324 | Higuchi et al. | Oct 2016 | A1 |
20160327911 | Eim et al. | Nov 2016 | A1 |
20160328875 | Fang et al. | Nov 2016 | A1 |
20160337570 | Tan et al. | Nov 2016 | A1 |
20160337582 | Shimauchi et al. | Nov 2016 | A1 |
20160353030 | Tang et al. | Dec 2016 | A1 |
20160357282 | Block et al. | Dec 2016 | A1 |
20160357353 | Miura et al. | Dec 2016 | A1 |
20160357387 | Bovet et al. | Dec 2016 | A1 |
20160360097 | Penha et al. | Dec 2016 | A1 |
20160360116 | Penha et al. | Dec 2016 | A1 |
20160366323 | Chen et al. | Dec 2016 | A1 |
20160366344 | Pan et al. | Dec 2016 | A1 |
20160370974 | Stenneth | Dec 2016 | A1 |
20160373650 | Kim et al. | Dec 2016 | A1 |
20170011773 | Lee | Jan 2017 | A1 |
20170013179 | Kang et al. | Jan 2017 | A1 |
20170018289 | Morgenstern | Jan 2017 | A1 |
20170019604 | Kim et al. | Jan 2017 | A1 |
20170024872 | Olsson et al. | Jan 2017 | A1 |
20170026565 | Hong et al. | Jan 2017 | A1 |
20170034449 | Eum et al. | Feb 2017 | A1 |
20170039686 | Miura et al. | Feb 2017 | A1 |
20170041677 | Anderson et al. | Feb 2017 | A1 |
20170046065 | Zeng et al. | Feb 2017 | A1 |
20170048450 | Lee et al. | Feb 2017 | A1 |
20170048461 | Lee et al. | Feb 2017 | A1 |
20170048494 | Boyle et al. | Feb 2017 | A1 |
20170054960 | Chien et al. | Feb 2017 | A1 |
20170061635 | Oberheu | Mar 2017 | A1 |
20170064200 | Castillo et al. | Mar 2017 | A1 |
20170064205 | Choi et al. | Mar 2017 | A1 |
20170082983 | Katzer et al. | Mar 2017 | A1 |
20170083086 | Mazur et al. | Mar 2017 | A1 |
20170094019 | Ahmed et al. | Mar 2017 | A1 |
20170094161 | Graham et al. | Mar 2017 | A1 |
20170109912 | Lee et al. | Apr 2017 | A1 |
20170111567 | Pila | Apr 2017 | A1 |
20170111616 | Li et al. | Apr 2017 | A1 |
20170140214 | Matas et al. | May 2017 | A1 |
20170164888 | Matsuda et al. | Jun 2017 | A1 |
20170178287 | Anderson | Jun 2017 | A1 |
20170180811 | Quirino et al. | Jun 2017 | A1 |
20170186162 | Mihic et al. | Jun 2017 | A1 |
20170193684 | Du et al. | Jul 2017 | A1 |
20170206095 | Gibbs et al. | Jul 2017 | A1 |
20170220212 | Yang et al. | Aug 2017 | A1 |
20170230576 | Sparks et al. | Aug 2017 | A1 |
20170230585 | Nash et al. | Aug 2017 | A1 |
20170236298 | Vetter | Aug 2017 | A1 |
20170237888 | Harris et al. | Aug 2017 | A1 |
20170243389 | Wild et al. | Aug 2017 | A1 |
20170244896 | Chien et al. | Aug 2017 | A1 |
20170244897 | Jung et al. | Aug 2017 | A1 |
20170255169 | Lee et al. | Sep 2017 | A1 |
20170257596 | Murata et al. | Sep 2017 | A1 |
20170264817 | Yan et al. | Sep 2017 | A1 |
20170269715 | Kim et al. | Sep 2017 | A1 |
20170272654 | Poindexter, Jr. | Sep 2017 | A1 |
20170285764 | Kim et al. | Oct 2017 | A1 |
20170285916 | Xu | Oct 2017 | A1 |
20170286913 | Liu et al. | Oct 2017 | A1 |
20170287220 | Khalid et al. | Oct 2017 | A1 |
20170302840 | Hasinoff et al. | Oct 2017 | A1 |
20170315772 | Lee et al. | Nov 2017 | A1 |
20170323266 | Seo | Nov 2017 | A1 |
20170324784 | Taine et al. | Nov 2017 | A1 |
20170336926 | Chaudhri et al. | Nov 2017 | A1 |
20170336928 | Chaudhri et al. | Nov 2017 | A1 |
20170336961 | Heo et al. | Nov 2017 | A1 |
20170337554 | Mokhasi et al. | Nov 2017 | A1 |
20170352379 | Oh et al. | Dec 2017 | A1 |
20170354888 | Benedetto et al. | Dec 2017 | A1 |
20170358071 | Yamaoka et al. | Dec 2017 | A1 |
20170359504 | Manzari et al. | Dec 2017 | A1 |
20170359505 | Manzari et al. | Dec 2017 | A1 |
20170359506 | Manzari et al. | Dec 2017 | A1 |
20170366729 | Itoh | Dec 2017 | A1 |
20180004404 | Delfino et al. | Jan 2018 | A1 |
20180007315 | Kim et al. | Jan 2018 | A1 |
20180021684 | Benedetto | Jan 2018 | A1 |
20180034867 | Zahn et al. | Feb 2018 | A1 |
20180035031 | Kwak et al. | Feb 2018 | A1 |
20180047200 | O'hara et al. | Feb 2018 | A1 |
20180052571 | Seol et al. | Feb 2018 | A1 |
20180059903 | Lim et al. | Mar 2018 | A1 |
20180067633 | Wilson et al. | Mar 2018 | A1 |
20180074693 | Jones et al. | Mar 2018 | A1 |
20180077332 | Shimura et al. | Mar 2018 | A1 |
20180081515 | Block et al. | Mar 2018 | A1 |
20180091728 | Brown et al. | Mar 2018 | A1 |
20180091732 | Wilson et al. | Mar 2018 | A1 |
20180095649 | Valdivia et al. | Apr 2018 | A1 |
20180096487 | Nash et al. | Apr 2018 | A1 |
20180107367 | Rinneberg et al. | Apr 2018 | A1 |
20180109722 | Laroia et al. | Apr 2018 | A1 |
20180113577 | Burns et al. | Apr 2018 | A1 |
20180114543 | Novikoff | Apr 2018 | A1 |
20180120661 | Kilgore et al. | May 2018 | A1 |
20180121060 | Jeong et al. | May 2018 | A1 |
20180124299 | Brook | May 2018 | A1 |
20180129224 | Hur | May 2018 | A1 |
20180131878 | Charlton et al. | May 2018 | A1 |
20180146132 | Manzari et al. | May 2018 | A1 |
20180152611 | Li et al. | May 2018 | A1 |
20180165862 | Sawaki | Jun 2018 | A1 |
20180184008 | Kondo | Jun 2018 | A1 |
20180184061 | Kitsunai et al. | Jun 2018 | A1 |
20180189549 | Inomata | Jul 2018 | A1 |
20180191944 | Carbonell et al. | Jul 2018 | A1 |
20180198985 | Ishitsuka | Jul 2018 | A1 |
20180199025 | Holzer et al. | Jul 2018 | A1 |
20180213144 | Kim et al. | Jul 2018 | A1 |
20180213161 | Kanda et al. | Jul 2018 | A1 |
20180227479 | Parameswaran et al. | Aug 2018 | A1 |
20180227482 | Holzer et al. | Aug 2018 | A1 |
20180227505 | Baltz et al. | Aug 2018 | A1 |
20180234608 | Sudo et al. | Aug 2018 | A1 |
20180246639 | Han et al. | Aug 2018 | A1 |
20180253194 | Javadi | Sep 2018 | A1 |
20180267703 | Kamimaru et al. | Sep 2018 | A1 |
20180268589 | Grant | Sep 2018 | A1 |
20180270420 | Lee et al. | Sep 2018 | A1 |
20180278823 | Horesh | Sep 2018 | A1 |
20180284979 | Choi et al. | Oct 2018 | A1 |
20180288310 | Goldenberg | Oct 2018 | A1 |
20180302551 | Yamajo et al. | Oct 2018 | A1 |
20180302568 | Kim et al. | Oct 2018 | A1 |
20180308282 | Yokoi | Oct 2018 | A1 |
20180324353 | Kim et al. | Nov 2018 | A1 |
20180329587 | Ko et al. | Nov 2018 | A1 |
20180335901 | Manzari et al. | Nov 2018 | A1 |
20180335927 | Anzures et al. | Nov 2018 | A1 |
20180335929 | Scapel et al. | Nov 2018 | A1 |
20180335930 | Scapel et al. | Nov 2018 | A1 |
20180336715 | Rickwald et al. | Nov 2018 | A1 |
20180343383 | Ito et al. | Nov 2018 | A1 |
20180349008 | Manzari et al. | Dec 2018 | A1 |
20180349795 | Boyle et al. | Dec 2018 | A1 |
20180352165 | Zhen et al. | Dec 2018 | A1 |
20180376122 | Park et al. | Dec 2018 | A1 |
20190007589 | Kadambala et al. | Jan 2019 | A1 |
20190028650 | Bernstein et al. | Jan 2019 | A1 |
20190029513 | Gunnerson et al. | Jan 2019 | A1 |
20190050045 | Jha et al. | Feb 2019 | A1 |
20190051032 | Chu et al. | Feb 2019 | A1 |
20190058827 | Park | Feb 2019 | A1 |
20190082097 | Manzari et al. | Mar 2019 | A1 |
20190089873 | Misawa et al. | Mar 2019 | A1 |
20190108684 | Callaghan | Apr 2019 | A1 |
20190114740 | Ogino et al. | Apr 2019 | A1 |
20190121216 | Shabtay et al. | Apr 2019 | A1 |
20190138259 | Bagaria et al. | May 2019 | A1 |
20190139207 | Jeong et al. | May 2019 | A1 |
20190141030 | Cockerill et al. | May 2019 | A1 |
20190149706 | Rivard et al. | May 2019 | A1 |
20190158735 | Wilson et al. | May 2019 | A1 |
20190174054 | Srivastava et al. | Jun 2019 | A1 |
20190199926 | An et al. | Jun 2019 | A1 |
20190205861 | Bace | Jul 2019 | A1 |
20190206031 | Kim et al. | Jul 2019 | A1 |
20190222769 | Srivastava et al. | Jul 2019 | A1 |
20190235743 | Ono | Aug 2019 | A1 |
20190235748 | Seol et al. | Aug 2019 | A1 |
20190266807 | Lee et al. | Aug 2019 | A1 |
20190289201 | Nishimura et al. | Sep 2019 | A1 |
20190289271 | Paulus et al. | Sep 2019 | A1 |
20190318538 | Li et al. | Oct 2019 | A1 |
20190339847 | Scapel et al. | Nov 2019 | A1 |
20190342507 | Dye et al. | Nov 2019 | A1 |
20190347868 | Scapel et al. | Nov 2019 | A1 |
20190379821 | Kobayashi et al. | Dec 2019 | A1 |
20190379837 | Kim et al. | Dec 2019 | A1 |
20200045245 | Van Os et al. | Feb 2020 | A1 |
20200053288 | Kim et al. | Feb 2020 | A1 |
20200059605 | Liu et al. | Feb 2020 | A1 |
20200068095 | Nabetani | Feb 2020 | A1 |
20200068121 | Wang | Feb 2020 | A1 |
20200082599 | Manzari | Mar 2020 | A1 |
20200089302 | Kim et al. | Mar 2020 | A1 |
20200104038 | Kamath et al. | Apr 2020 | A1 |
20200105003 | Stauber et al. | Apr 2020 | A1 |
20200106952 | Missig et al. | Apr 2020 | A1 |
20200128191 | Sun et al. | Apr 2020 | A1 |
20200142577 | Manzari et al. | May 2020 | A1 |
20200204725 | Li | Jun 2020 | A1 |
20200221020 | Manzari et al. | Jul 2020 | A1 |
20200226848 | Van Os et al. | Jul 2020 | A1 |
20200234481 | Scapel et al. | Jul 2020 | A1 |
20200234508 | Shaburov et al. | Jul 2020 | A1 |
20200236278 | Yeung et al. | Jul 2020 | A1 |
20200242788 | Jacobs et al. | Jul 2020 | A1 |
20200244879 | Hohjoh | Jul 2020 | A1 |
20200285806 | Radakovitz et al. | Sep 2020 | A1 |
20200285851 | Lin et al. | Sep 2020 | A1 |
20200335133 | Vaucher | Oct 2020 | A1 |
20200336660 | Dong et al. | Oct 2020 | A1 |
20200358963 | Manzari et al. | Nov 2020 | A1 |
20200380768 | Harris et al. | Dec 2020 | A1 |
20200380781 | Barlier et al. | Dec 2020 | A1 |
20200410763 | Hare et al. | Dec 2020 | A1 |
20200412975 | Al Majid et al. | Dec 2020 | A1 |
20210005003 | Chong et al. | Jan 2021 | A1 |
20210056769 | Scapel et al. | Feb 2021 | A1 |
20210058351 | Viklund et al. | Feb 2021 | A1 |
20210065448 | Goodrich et al. | Mar 2021 | A1 |
20210065454 | Goodrich et al. | Mar 2021 | A1 |
20210096703 | Anzures et al. | Apr 2021 | A1 |
20210099568 | Depue et al. | Apr 2021 | A1 |
20210099761 | Zhang | Apr 2021 | A1 |
20210146838 | Goseberg et al. | May 2021 | A1 |
20210152505 | Baldwin et al. | May 2021 | A1 |
20210168108 | Antmen et al. | Jun 2021 | A1 |
20210195093 | Manzari et al. | Jun 2021 | A1 |
20210264656 | Barlier et al. | Aug 2021 | A1 |
20210287343 | Kaida | Sep 2021 | A1 |
20210318798 | Manzari et al. | Oct 2021 | A1 |
20210335055 | Scapel et al. | Oct 2021 | A1 |
20210349426 | Chen et al. | Nov 2021 | A1 |
20210349427 | Chen et al. | Nov 2021 | A1 |
20210349611 | Chen et al. | Nov 2021 | A1 |
20210349612 | Triverio | Nov 2021 | A1 |
20210373750 | Manzari et al. | Dec 2021 | A1 |
20210375042 | Chen | Dec 2021 | A1 |
20210390753 | Scapel et al. | Dec 2021 | A1 |
20220006946 | Missig et al. | Jan 2022 | A1 |
20220044459 | Zacharia et al. | Feb 2022 | A1 |
20220053142 | Manzari et al. | Feb 2022 | A1 |
20220103758 | Manzari et al. | Mar 2022 | A1 |
20220124241 | Manzari et al. | Apr 2022 | A1 |
20220262022 | Stauber et al. | Aug 2022 | A1 |
20220264028 | Manzari et al. | Aug 2022 | A1 |
20220276041 | Dryer et al. | Sep 2022 | A1 |
20220294992 | Manzari et al. | Sep 2022 | A1 |
20220319100 | Manzari et al. | Oct 2022 | A1 |
20220345785 | Yang et al. | Oct 2022 | A1 |
20220353425 | Manzari et al. | Nov 2022 | A1 |
20220382440 | Manzari et al. | Dec 2022 | A1 |
20220382443 | Clarke et al. | Dec 2022 | A1 |
20220392132 | Sepulveda et al. | Dec 2022 | A1 |
20230004270 | Chen et al. | Jan 2023 | A1 |
20230020616 | Manzari et al. | Jan 2023 | A1 |
20230043249 | Van Os et al. | Feb 2023 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
2015101639 | Dec 2015 | AU |
2013368443 | Mar 2016 | AU |
2017100683 | Jan 2018 | AU |
2015297035 | Jun 2018 | AU |
2356232 | Mar 2002 | CA |
2729392 | Aug 2011 | CA |
2965700 | May 2016 | CA |
2729392 | May 2017 | CA |
1437365 | Aug 2003 | CN |
1499878 | May 2004 | CN |
1901717 | Jan 2007 | CN |
101055646 | Oct 2007 | CN |
101068311 | Nov 2007 | CN |
101282422 | Oct 2008 | CN |
101300830 | Nov 2008 | CN |
101310519 | Nov 2008 | CN |
101329707 | Dec 2008 | CN |
101355655 | Jan 2009 | CN |
101364031 | Feb 2009 | CN |
101388965 | Mar 2009 | CN |
101576996 | Nov 2009 | CN |
101681462 | Mar 2010 | CN |
101692681 | Apr 2010 | CN |
101742053 | Jun 2010 | CN |
101778220 | Jul 2010 | CN |
101883213 | Nov 2010 | CN |
101931691 | Dec 2010 | CN |
102035990 | Apr 2011 | CN |
201788344 | Apr 2011 | CN |
102075727 | May 2011 | CN |
102084327 | Jun 2011 | CN |
102088554 | Jun 2011 | CN |
102142149 | Aug 2011 | CN |
102271241 | Dec 2011 | CN |
102272700 | Dec 2011 | CN |
102298797 | Dec 2011 | CN |
102428655 | Apr 2012 | CN |
102457661 | May 2012 | CN |
102474560 | May 2012 | CN |
102567953 | Jul 2012 | CN |
202330968 | Jul 2012 | CN |
102622085 | Aug 2012 | CN |
102625036 | Aug 2012 | CN |
102750070 | Oct 2012 | CN |
102854979 | Jan 2013 | CN |
102855079 | Jan 2013 | CN |
103037075 | Apr 2013 | CN |
103051837 | Apr 2013 | CN |
103051841 | Apr 2013 | CN |
103052961 | Apr 2013 | CN |
103297719 | Sep 2013 | CN |
103309602 | Sep 2013 | CN |
103324329 | Sep 2013 | CN |
103516894 | Jan 2014 | CN |
103685925 | Mar 2014 | CN |
103702039 | Apr 2014 | CN |
103703438 | Apr 2014 | CN |
103777742 | May 2014 | CN |
103927190 | Jul 2014 | CN |
103947190 | Jul 2014 | CN |
103970472 | Aug 2014 | CN |
104182741 | Dec 2014 | CN |
104246793 | Dec 2014 | CN |
104270597 | Jan 2015 | CN |
104346080 | Feb 2015 | CN |
104346099 | Feb 2015 | CN |
104376160 | Feb 2015 | CN |
104423946 | Mar 2015 | CN |
104461288 | Mar 2015 | CN |
104753762 | Jul 2015 | CN |
104754203 | Jul 2015 | CN |
104813322 | Jul 2015 | CN |
104836947 | Aug 2015 | CN |
104903834 | Sep 2015 | CN |
104952063 | Sep 2015 | CN |
105100462 | Nov 2015 | CN |
105138259 | Dec 2015 | CN |
105190511 | Dec 2015 | CN |
105190700 | Dec 2015 | CN |
105229571 | Jan 2016 | CN |
105245774 | Jan 2016 | CN |
105338256 | Feb 2016 | CN |
105391937 | Mar 2016 | CN |
105474163 | Apr 2016 | CN |
105493138 | Apr 2016 | CN |
105589637 | May 2016 | CN |
105611215 | May 2016 | CN |
105611275 | May 2016 | CN |
105620393 | Jun 2016 | CN |
105630290 | Jun 2016 | CN |
105637855 | Jun 2016 | CN |
105653031 | Jun 2016 | CN |
105765967 | Jul 2016 | CN |
105794196 | Jul 2016 | CN |
105981372 | Sep 2016 | CN |
105991915 | Oct 2016 | CN |
106067947 | Nov 2016 | CN |
106161956 | Nov 2016 | CN |
10625540 | Dec 2016 | CN |
106210184 | Dec 2016 | CN |
106210550 | Dec 2016 | CN |
106257909 | Dec 2016 | CN |
106303280 | Jan 2017 | CN |
106303690 | Jan 2017 | CN |
106341611 | Jan 2017 | CN |
106375662 | Feb 2017 | CN |
106412214 | Feb 2017 | CN |
106412412 | Feb 2017 | CN |
106412445 | Feb 2017 | CN |
106445219 | Feb 2017 | CN |
106791357 | May 2017 | CN |
106791377 | May 2017 | CN |
106791420 | May 2017 | CN |
106921829 | Jul 2017 | CN |
107077274 | Aug 2017 | CN |
107079141 | Aug 2017 | CN |
107533356 | Jan 2018 | CN |
107566721 | Jan 2018 | CN |
107580693 | Jan 2018 | CN |
107770448 | Mar 2018 | CN |
107800945 | Mar 2018 | CN |
107820011 | Mar 2018 | CN |
107924113 | Apr 2018 | CN |
108353126 | Jul 2018 | CN |
108391053 | Aug 2018 | CN |
108513070 | Sep 2018 | CN |
108549522 | Sep 2018 | CN |
108668083 | Oct 2018 | CN |
108712609 | Oct 2018 | CN |
108848308 | Nov 2018 | CN |
108886569 | Nov 2018 | CN |
109005366 | Dec 2018 | CN |
109061985 | Dec 2018 | CN |
109313530 | Feb 2019 | CN |
109496425 | Mar 2019 | CN |
109639970 | Apr 2019 | CN |
109644229 | Apr 2019 | CN |
111901475 | Nov 2020 | CN |
201670652 | Dec 2017 | DK |
201670753 | Jan 2018 | DK |
201670755 | Jan 2018 | DK |
201670627 | Feb 2018 | DK |
0579093 | Jan 1994 | EP |
0651543 | May 1995 | EP |
0651543 | Dec 1997 | EP |
1215867 | Jun 2002 | EP |
1278099 | Jan 2003 | EP |
1429291 | Jun 2004 | EP |
1592212 | Nov 2005 | EP |
1736931 | Dec 2006 | EP |
0651543 | Sep 2008 | EP |
2194508 | Jun 2010 | EP |
2416563 | Feb 2012 | EP |
2430766 | Mar 2012 | EP |
2454872 | May 2012 | EP |
2482179 | Aug 2012 | EP |
2487613 | Aug 2012 | EP |
2487913 | Aug 2012 | EP |
2430766 | Dec 2012 | EP |
2579572 | Apr 2013 | EP |
2634751 | Sep 2013 | EP |
2640060 | Sep 2013 | EP |
2682855 | Jan 2014 | EP |
2830297 | Jan 2015 | EP |
2843530 | Mar 2015 | EP |
2950198 | Dec 2015 | EP |
2966855 | Jan 2016 | EP |
2972677 | Jan 2016 | EP |
2430766 | Mar 2016 | EP |
2990887 | Mar 2016 | EP |
3008575 | Apr 2016 | EP |
3012732 | Apr 2016 | EP |
3026636 | Jun 2016 | EP |
3033837 | Jun 2016 | EP |
3047884 | Jul 2016 | EP |
3051525 | Aug 2016 | EP |
3101958 | Dec 2016 | EP |
3104590 | Dec 2016 | EP |
3107065 | Dec 2016 | EP |
3033837 | Mar 2017 | EP |
3190563 | Jul 2017 | EP |
3209012 | Aug 2017 | EP |
3211587 | Aug 2017 | EP |
2194508 | Dec 2017 | EP |
3333544 | Jun 2018 | EP |
2556665 | Aug 2018 | EP |
3033837 | Oct 2018 | EP |
3393119 | Oct 2018 | EP |
3135028 | Jan 2019 | EP |
2482179 | Mar 2019 | EP |
3457680 | Mar 2019 | EP |
3012732 | May 2019 | EP |
3008575 | Jul 2019 | EP |
3120217 | Apr 2020 | EP |
3633975 | Apr 2020 | EP |
2682855 | Feb 2021 | EP |
3787285 | Mar 2021 | EP |
2307383 | May 1997 | GB |
2515797 | Jan 2015 | GB |
2519363 | Apr 2015 | GB |
2523670 | Sep 2015 | GB |
53-31170 | Mar 1978 | JP |
56-621 | Jan 1981 | JP |
H02-179078 | Jul 1990 | JP |
3007616 | Feb 1995 | JP |
H099072 | Jan 1997 | JP |
H09116792 | May 1997 | JP |
10-506472 | Jun 1998 | JP |
H11-109066 | Apr 1999 | JP |
H11355617 | Dec 1999 | JP |
2000-76460 | Mar 2000 | JP |
2000-162349 | Jun 2000 | JP |
2000-207549 | Jul 2000 | JP |
2000-244905 | Sep 2000 | JP |
2001-144884 | May 2001 | JP |
2001-245204 | Sep 2001 | JP |
2001-273064 | Oct 2001 | JP |
2001-298649 | Oct 2001 | JP |
2001-313886 | Nov 2001 | JP |
2002-251238 | Sep 2002 | JP |
2003-008964 | Jan 2003 | JP |
2003-9404 | Jan 2003 | JP |
2003-018438 | Jan 2003 | JP |
2003-032597 | Jan 2003 | JP |
2003-219217 | Jul 2003 | JP |
2003-233616 | Aug 2003 | JP |
2003-241293 | Aug 2003 | JP |
2003-248549 | Sep 2003 | JP |
2004-015595 | Jan 2004 | JP |
2004-28918 | Jan 2004 | JP |
2004-135074 | Apr 2004 | JP |
2004-184396 | Jul 2004 | JP |
2005-031466 | Feb 2005 | JP |
2005-191641 | Jul 2005 | JP |
2005-191985 | Jul 2005 | JP |
2005-521890 | Jul 2005 | JP |
2005-311699 | Nov 2005 | JP |
2006-520053 | Aug 2006 | JP |
3872041 | Jan 2007 | JP |
2007-028211 | Feb 2007 | JP |
2007-124398 | May 2007 | JP |
2007-528240 | Oct 2007 | JP |
2008-066978 | Mar 2008 | JP |
2008-236534 | Oct 2008 | JP |
2009-105919 | May 2009 | JP |
2009-212899 | Sep 2009 | JP |
2009-273023 | Nov 2009 | JP |
2009-545256 | Dec 2009 | JP |
2010-117444 | May 2010 | JP |
2010-119147 | May 2010 | JP |
2010-160581 | Jul 2010 | JP |
2010-182023 | Aug 2010 | JP |
2010-268052 | Nov 2010 | JP |
2011-087167 | Apr 2011 | JP |
2011-091570 | May 2011 | JP |
2011-124864 | Jun 2011 | JP |
2011-517810 | Jun 2011 | JP |
2011-525648 | Sep 2011 | JP |
2011-209887 | Oct 2011 | JP |
2011-211552 | Oct 2011 | JP |
2012-038292 | Feb 2012 | JP |
2012-079302 | Apr 2012 | JP |
2012-089973 | May 2012 | JP |
2012-124608 | Jun 2012 | JP |
2012-147379 | Aug 2012 | JP |
2013-3671 | Jan 2013 | JP |
2013-070303 | Apr 2013 | JP |
2013-92989 | May 2013 | JP |
2013-97760 | May 2013 | JP |
2013-101528 | May 2013 | JP |
2013-106289 | May 2013 | JP |
2013-232230 | Nov 2013 | JP |
2013-546238 | Dec 2013 | JP |
2014-023083 | Feb 2014 | JP |
2014-206817 | Oct 2014 | JP |
2014-212415 | Nov 2014 | JP |
2015-001716 | Jan 2015 | JP |
2015-005255 | Jan 2015 | JP |
2015-022716 | Feb 2015 | JP |
2015-25897 | Feb 2015 | JP |
2015-050713 | Mar 2015 | JP |
2015-076717 | Apr 2015 | JP |
2015-091098 | May 2015 | JP |
2015-146619 | Aug 2015 | JP |
2015-149095 | Aug 2015 | JP |
2015-180987 | Oct 2015 | JP |
2015-201839 | Nov 2015 | JP |
2016-066978 | Apr 2016 | JP |
2016-072965 | May 2016 | JP |
2016-129315 | Jul 2016 | JP |
2016-136324 | Jul 2016 | JP |
2016-175175 | Oct 2016 | JP |
2017-034474 | Feb 2017 | JP |
2017-527917 | Sep 2017 | JP |
2017-531225 | Oct 2017 | JP |
6240301 | Nov 2017 | JP |
6266736 | Jan 2018 | JP |
2018-514838 | Jun 2018 | JP |
2018-106365 | Jul 2018 | JP |
2018-107711 | Jul 2018 | JP |
2018-116067 | Jul 2018 | JP |
2018-121235 | Aug 2018 | JP |
2019-062556 | Apr 2019 | JP |
2019-145108 | Aug 2019 | JP |
2020-42602 | Mar 2020 | JP |
6982047 | Nov 2021 | JP |
10-2004-0046272 | Jun 2004 | KR |
10-2004-0107489 | Dec 2004 | KR |
10-2005-0086630 | Aug 2005 | KR |
10-2008-0050336 | Jun 2008 | KR |
10-2010-0086052 | Jul 2010 | KR |
10-2011-0028581 | Mar 2011 | KR |
10-2012-0025872 | Mar 2012 | KR |
10-2012-0048397 | May 2012 | KR |
10-2012-0054406 | May 2012 | KR |
10-2012-0057696 | Jun 2012 | KR |
10-2012-0093322 | Aug 2012 | KR |
10-2012-0132134 | Dec 2012 | KR |
10-2013-0033445 | Apr 2013 | KR |
101341095 | Dec 2013 | KR |
10-2014-0019631 | Feb 2014 | KR |
10-2014-0033088 | Mar 2014 | KR |
10-2014-0049340 | Apr 2014 | KR |
10-2014-0049850 | Apr 2014 | KR |
10-2014-0062801 | May 2014 | KR |
10-2015-0008996 | Jan 2015 | KR |
10-2015-0014290 | Feb 2015 | KR |
10-2015-0024899 | Mar 2015 | KR |
20150024899 | Mar 2015 | KR |
10-2015-0067197 | Jun 2015 | KR |
101540544 | Jul 2015 | KR |
101587115 | Jan 2016 | KR |
10-2016-0016910 | Feb 2016 | KR |
10-2016-0019145 | Feb 2016 | KR |
10-2016-0020396 | Feb 2016 | KR |
10-2016-0020791 | Feb 2016 | KR |
20160047891 | May 2016 | KR |
10-2016-0075583 | Jun 2016 | KR |
101674959 | Nov 2016 | KR |
10-2017-0081391 | Jul 2017 | KR |
10-2017-0123125 | Nov 2017 | KR |
10-1799223 | Nov 2017 | KR |
10-2017-0135975 | Dec 2017 | KR |
10-2018-0017227 | Feb 2018 | KR |
10-2018-0037076 | Apr 2018 | KR |
10-1875907 | Jul 2018 | KR |
10-2018-0095331 | Aug 2018 | KR |
10-2018-0108847 | Oct 2018 | KR |
10-2018-0137610 | Dec 2018 | KR |
10-2019-0034248 | Apr 2019 | KR |
10-2019-0114034 | Oct 2019 | KR |
102338576 | Dec 2021 | KR |
1610470 | Nov 1990 | SU |
9840795 | Sep 1998 | WO |
9939307 | Aug 1999 | WO |
03085460 | Oct 2003 | WO |
2005043892 | May 2005 | WO |
2007120981 | Oct 2007 | WO |
2007126707 | Nov 2007 | WO |
2008014301 | Jan 2008 | WO |
2008020655 | Feb 2008 | WO |
2008109644 | Sep 2008 | WO |
2009073607 | Jun 2009 | WO |
2009114239 | Sep 2009 | WO |
2009133710 | Nov 2009 | WO |
2010059426 | May 2010 | WO |
2010077048 | Jul 2010 | WO |
2010102678 | Sep 2010 | WO |
2010077048 | Oct 2010 | WO |
2010131869 | Nov 2010 | WO |
2010134275 | Nov 2010 | WO |
2011007264 | Jan 2011 | WO |
2010131869 | Feb 2011 | WO |
2010059426 | May 2011 | WO |
2011127309 | Oct 2011 | WO |
2012001947 | Jan 2012 | WO |
2012006251 | Jan 2012 | WO |
2012019163 | Feb 2012 | WO |
2012051720 | Apr 2012 | WO |
2012170354 | Dec 2012 | WO |
2013082325 | Jun 2013 | WO |
2013120851 | Aug 2013 | WO |
2013152453 | Oct 2013 | WO |
2013152454 | Oct 2013 | WO |
2013152455 | Oct 2013 | WO |
2013189058 | Dec 2013 | WO |
2014053063 | Apr 2014 | WO |
2014066115 | May 2014 | WO |
2014094199 | Jun 2014 | WO |
2014105276 | Jul 2014 | WO |
2014159779 | Oct 2014 | WO |
2014160819 | Oct 2014 | WO |
2014165141 | Oct 2014 | WO |
2014200734 | Dec 2014 | WO |
2014200798 | Dec 2014 | WO |
2015023044 | Feb 2015 | WO |
2015026864 | Feb 2015 | WO |
2015034960 | Mar 2015 | WO |
2015059349 | Apr 2015 | WO |
2015080744 | Jun 2015 | WO |
2015085042 | Jun 2015 | WO |
2015112868 | Jul 2015 | WO |
2014200798 | Aug 2015 | WO |
2015144209 | Oct 2015 | WO |
2015166684 | Nov 2015 | WO |
2015183438 | Dec 2015 | WO |
2015187494 | Dec 2015 | WO |
2015190666 | Dec 2015 | WO |
2016022203 | Feb 2016 | WO |
2016022204 | Feb 2016 | WO |
2016022205 | Feb 2016 | WO |
2016028806 | Feb 2016 | WO |
2016028807 | Feb 2016 | WO |
2016028808 | Feb 2016 | WO |
2016028809 | Feb 2016 | WO |
2016036218 | Mar 2016 | WO |
2016042926 | Mar 2016 | WO |
2016045005 | Mar 2016 | WO |
2016057062 | Apr 2016 | WO |
2016064435 | Apr 2016 | WO |
2016073804 | May 2016 | WO |
2016101124 | Jun 2016 | WO |
2016101131 | Jun 2016 | WO |
2016101132 | Jun 2016 | WO |
2016073804 | Jul 2016 | WO |
2016144385 | Sep 2016 | WO |
2016145129 | Sep 2016 | WO |
2016161556 | Oct 2016 | WO |
2016172619 | Oct 2016 | WO |
2016203282 | Dec 2016 | WO |
2016204936 | Dec 2016 | WO |
2017058834 | Apr 2017 | WO |
2017071559 | May 2017 | WO |
2017153771 | Sep 2017 | WO |
2017201326 | Nov 2017 | WO |
2017218193 | Dec 2017 | WO |
2018006053 | Jan 2018 | WO |
2018048838 | Mar 2018 | WO |
2018049430 | Mar 2018 | WO |
2018057268 | Mar 2018 | WO |
2018057272 | Mar 2018 | WO |
2018099037 | Jun 2018 | WO |
2018144339 | Aug 2018 | WO |
2018159864 | Sep 2018 | WO |
2018212802 | Nov 2018 | WO |
2019050562 | Mar 2019 | WO |
2019216999 | Nov 2019 | WO |
2019216997 | Nov 2019 | WO |
2020227386 | Nov 2020 | WO |
2021050190 | Mar 2021 | WO |
Other Publications
Entry |
---|
Intention to Grant received for Danish Patent Application No. PA202070623, dated Jul. 20, 2022, 2 pages. |
Notice of Allowance received for Japanese Patent Application No. 2020-159338, dated Jul. 19, 2022, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 16/144,629, dated Jul. 25, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/373,163, dated Jul. 27, 2022, 8 pages. |
Examiner-Initiated Interview Summary received for U.S. Appl. No. 17/356,322, dated Sep. 29, 2022, 4 pages. |
Intention to Grant received for European Patent Application No. 21733324.4, dated Sep. 13, 2022, 7 pages. |
Notice of Allowance received for Danish Patent Application No. PA202070623, dated Sep. 20, 2022, 2 pages. |
Office Action received for Danish Patent Application No. PA202070625, dated Sep. 23, 2022, 4 pages. |
Office Action received for European Patent Application No. 21163791.3, dated Sep. 20, 2022, 6 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,671, dated Jun. 13, 2022, 7 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/144,629, dated Jun. 23, 2022, 5 pages. |
Decision on Appeal received for Korean Patent Application No. 10-2021-7002582, mailed on May 13, 2022, 29 pages (2 pages of English Translation and 27 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2021-565919, dated Jun. 13, 2022, 4 pages (2 pages of English Translation and 2 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2020-0123852, dated Jun. 9, 2022, 10 pages (4 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2020-0123857, dated Jun. 9, 2022, 12 pages (5 pages of English Translation and 7 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2020-0123887, dated Jun. 9, 2022, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
[B612] Addition of facial recognition bear/cat stamps and AR background function having moving sparkles or hearts, Available Online at: <URL: https://apptopi.jp/2017/01/22/b612>, Jan. 22, 2017, 11 pages. |
Advisory Action received for U.S. Appl. No. 13/082,035, dated Jun. 19, 2015, 5 pages. |
Advisory Action received for U.S. Appl. No. 13/082,035, dated Oct. 23, 2013, 3 pages. |
Advisory Action received for U.S. Appl. No. 16/144,629, dated Dec. 13, 2019, 9 pages. |
Advisory Action received for U.S. Appl. No. 16/144,629, dated Jan. 6, 2021, 10 pages. |
Advisory Action received for U.S. Appl. No. 16/259,771, dated Feb. 26, 2020, 3 pages. |
Advisory Action received for U.S. Appl. No. 16/259,771, dated Jul. 14, 2020, 6 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/259,771, dated May 5, 2020, 10 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,671, dated Aug. 2, 2021, 5 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 13/082,035, dated Apr. 4, 2013, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 13/082,035, dated Aug. 1, 2016, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 13/082,035, dated Jan. 29, 2015, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 13/082,035, dated Oct. 30, 2013, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 14/866,560, dated Jan. 30, 2019, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 14/866,560, dated Jul. 26, 2018, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 14/866,560, dated May 14, 2019, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 14/866,560, dated Oct. 21, 2019, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/144,629, dated Jul. 2, 2020, 5 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/144,629, dated Nov. 23, 2020, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/519,850, dated Jun. 26, 2020, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/528,941, dated Jun. 19, 2020, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/528,941, dated Nov. 10, 2020, 2 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/584,100, dated Feb. 19, 2020, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/586,344, dated Feb. 27, 2020, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/663,062, dated Dec. 18, 2020, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/733,718, dated Nov. 2, 2020, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/833,436, dated Jul. 1, 2021, 2 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/027,317, dated Dec. 21, 2020, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,654, dated Feb. 1, 2021, 2 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,765, dated Sep. 22, 2021, 5 pages. |
Brief Communication Regarding Oral Proceedings received for European Patent Application No. 19172407.9, mailed on Nov. 9, 2020, 1 page. |
Brief Communication Regarding Oral Proceedings received for European Patent Application No. 19172407.9, mailed on Nov. 20, 2020, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2017100683, dated Jan. 16, 2018, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2019100420, dated Jul. 3, 2019, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2019100497, dated Jul. 29, 2019, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2019100794, dated Dec. 19, 2019, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2019101019, dated Nov. 12, 2019, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2019101667, dated Mar. 20, 2020, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2020100189, dated May 12, 2020, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2020100675, dated Jun. 30, 2020, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2020101715, dated Oct. 6, 2020, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2020104220, dated Apr. 1, 2021, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2021103004, dated Sep. 13, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/273,453, dated Dec. 21, 2017, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/273,453, dated Feb. 8, 2018, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/273,453, dated Nov. 27, 2017, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/273,503, dated Nov. 2, 2017, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/273,503, dated Nov. 24, 2017, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/713,490, dated May 1, 2019, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 15/858,175, dated Sep. 21, 2018, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/142,288, dated Jul. 30, 2019, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/143,097, dated Nov. 8, 2019, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/191,117, dated Dec. 9, 2019, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/191,117, dated Feb. 28, 2020, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/191,117, dated Nov. 20, 2019, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/519,850, dated Nov. 2, 2020, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/519,850, dated Sep. 8, 2020, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/582,595, dated Apr. 7, 2020, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/582,595, dated Apr. 22, 2020, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/583,020, dated Mar. 24, 2020, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,044, dated Apr. 16, 2020, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,044, dated Jan. 29, 2020, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,044, dated Mar. 4, 2020, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,100, dated Feb. 21, 2020, 9 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,693, dated Feb. 21, 2020, 15 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,693, dated Mar. 4, 2020, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/584,693, dated Mar. 20, 2020, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/586,314, dated Apr. 8, 2020, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/586,314, dated Mar. 4, 2020, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/586,344, dated Apr. 7, 2020, 4 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/586,344, dated Jan. 23, 2020, 4 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/586,344, dated Mar. 17, 2020, 4 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/663,062, dated Jul. 21, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Aug. 18, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/825,879, dated Aug. 13, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/825,879, dated Sep. 15, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Aug. 10, 2021, 4 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Aug. 13, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Jul. 28, 2021, 4 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Jun. 14, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/027,484, dated May 14, 2021, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/027,484, dated May 28, 2021, 5 pages. |
Decision of Refusal received for Japanese Patent Application No. 2018-545502, dated Feb. 25, 2019, 11 pages. |
Decision on Appeal received for Japanese Patent Application No. 2018-225131, mailed on Mar. 11, 2021, 5 pages. |
Decision on Appeal received for Japanese Patent Application No. 2018-545502, mailed on Mar. 25, 2021, 3 pages. |
Decision on Appeal received for U.S. Appl. No. 16/259,771, mailed on Aug. 19, 2021, 12 pages. |
Decision to Grant received for Danish Patent Application No. PA201670627, dated Nov. 29, 2018, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201670753, dated Mar. 6, 2019, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201670755, dated Mar. 6, 2019, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201870372, dated Jun. 17, 2020, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201870375, dated Jul. 24, 2019, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201870377, dated May 14, 2019, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201970601, dated Feb. 3, 2021, 2 pages. |
Decision to Grant received for Danish Patent Application No. PA201970603, dated May 21, 2021, 2 pages. |
Decision to Grant received for European Patent Application No. 18176890.4, dated Jul. 9, 2020, 3 pages. |
Decision to Grant received for European Patent Application No. 18183054.8, dated Jan. 21, 2021, 3 pages. |
Decision to Grant received for European Patent Application No. 18209460.7, dated Apr. 9, 2021, 2 pages. |
Decision to Grant received for European Patent Application No. 18214698.5, dated Sep. 10, 2020, 3 pages. |
Decision to Grant received for European Patent Application No. 19172407.9, dated Jun. 17, 2021, 2 pages. |
Decision to Grant received for Japanese Patent Application No. 2018-243463, dated Aug. 17, 2020, 2 pages. |
Decision to Grant received for Japanese Patent Application No. 2020-070418, dated Feb. 8, 2021, 3 pages. |
Decision to Grant received for Japanese Patent Application No. 2020-184470, dated Jul. 1, 2021, 3 pages. |
Decision to Grant received for Japanese Patent Application No. 2020-184471, dated Jul. 1, 2021, 3 pages. |
Decision to Grant received for Japanese Patent Application No. 2020-193703, dated Aug. 10, 2021, 3 pages. |
Decision to Grant received for Japanese Patent Application No. 2021-051385, dated Jul. 8, 2021, 3 pages. |
Decision to Refuse received for European Patent Application No. 19724959.2, dated Jun. 22, 2021, 13 pages. |
Decision to Refuse received for Japanese Patent Application No. 2018-225131, dated Jul. 8, 2019, 6 pages. |
Decision to Refuse received for Japanese Patent Application No. 2018-243463, dated Jul. 8, 2019, 5 pages. |
Decision to Refuse received for Japanese Patent Application No. 2018-545502, dated Jul. 8, 2019, 5 pages. |
European Search Report received for European Patent Application No. 18209460.7, dated Mar. 15, 2019, 4 pages. |
European Search Report received for European Patent Application No. 18214698.5, dated Mar. 21, 2019, 5 pages. |
European Search Report received for European Patent Application No. 19172407.9, dated Oct. 9, 2019, 4 pages. |
European Search Report received for European Patent Application No. 19181242.9, dated Nov. 27, 2019, 4 pages. |
European Search Report received for European Patent Application No. 20168021.2, dated Jul. 8, 2020, 4 pages. |
European Search Report received for European Patent Application No. 20206196.6, dated Dec. 8, 2020, 4 pages. |
European Search Report received for European Patent Application No. 20206197.4, dated Nov. 30, 2020, 4 pages. |
European Search Report received for European Patent Application No. 20210373.5, dated Apr. 13, 2021, 4 pages. |
European Search Report received for European Patent Application No. 21157252.4, dated Apr. 16, 2021, 4 pages. |
European Search Report received for European Patent Application No. 21163791.3, dated May 6, 2021, 5 pages. |
Examiner-Initiated Interview Summary received for U.S. Appl. No. 16/528,941, dated Dec. 1, 2020, 2 pages. |
Examiner's Answer to Appeal Brief received for U.S. Appl. No. 16/144,629, mailed on Jul. 21, 2021, 21 pages. |
Examiner's Answer to Appeal Brief received for U.S. Appl. No. 16/259,771, mailed on Oct. 23, 2020, 15 pages. |
Extended European Search Report received for European Patent Application No. 17853657.9, dated May 28, 2020, 9 pages. |
Extended European Search Report received for European Patent Application No. 19204230.7, dated Feb. 21, 2020, 7 pages. |
Extended European Search Report received for European Patent Application No. 19212057.4, dated Feb. 27, 2020, 8 pages. |
Extended European Search Report received for European Patent Application No. 20168009.7, dated Sep. 11, 2020, 12 pages. |
Final Office Action received for U.S. Appl. No. 13/082,035, dated Apr. 16, 2015, 24 pages. |
Final Office Action received for U.S. Appl. No. 13/082,035, dated Aug. 15, 2013, 24 pages. |
Final Office Action received for U.S. Appl. No. 14/866,560, dated Oct. 9, 2018, 22 pages. |
Final Office Action received for U.S. Appl. No. 15/728,147, dated Aug. 29, 2018, 39 pages. |
Final Office Action received for U.S. Appl. No. 15/728,147, dated May 28, 2019, 45 pages. |
Final Office Action received for U.S. Appl. No. 16/116,221, dated Mar. 22, 2019, 35 pages. |
Final Office Action received for U.S. Appl. No. 16/144,629, dated Sep. 11, 2020, 22 pages. |
Final Office Action received for U.S. Appl. No. 16/528,941, dated Jul. 13, 2020, 15 pages. |
Final Office Action received for U.S. Appl. No. 16/833,436, dated Sep. 21, 2021, 29 pages. |
Final Office Action received for U.S. Appl. No. 17/031,671, dated Sep. 7, 2021, 27 pages. |
Intention to Grant received for Danish Patent Application No. PA201670753, dated Oct. 29, 2018, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201670755, dated Nov. 13, 2018, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201870372, dated Feb. 13, 2020, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201870375, dated Jun. 3, 2019, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201870375, dated Mar. 26, 2019, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201870377, dated Mar. 26, 2019, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201970593, dated Apr. 13, 2021, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201970601, dated Sep. 21, 2020, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA201970603, dated Jan. 13, 2021, 2 pages. |
Intention to Grant received for Danish Patent Application No. PA202070611, dated May 5, 2021, 2 pages. |
Intention to Grant received for European Patent Application No. 17809168.2, dated Jun. 25, 2021, 8 pages. |
Intention to Grant received for European Patent Application No. 18176890.4, dated Feb. 28, 2020, 8 pages. |
Intention to Grant received for European Patent Application No. 18183054.8, dated Nov. 5, 2020, 6 pages. |
Intention to Grant received for European Patent Application No. 18209460.7, dated Jan. 15, 2021, 8 pages. |
Intention to Grant received for European Patent Application No. 18214698.5, dated Apr. 21, 2020, 8 pages. |
Intention to Grant received for European Patent Application No. 19172407.9, dated Feb. 11, 2021, 9 pages. |
Intention to Grant received for European Patent Application No. 20168021.2, dated Apr. 15, 2021, 8 pages. |
Intention to Grant received for European Patent Application No. 20168021.2, dated Sep. 20, 2021, 8 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2017/035321, dated Dec. 27, 2018, 11 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2017/049795, dated Apr. 4, 2019, 16 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2018/015591, dated Dec. 19, 2019, 10 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2019/023793, dated Nov. 19, 2020, 12 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2019/024067, dated Nov. 19, 2020, 12 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2019/049101, dated Mar. 25, 2021, 17 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2020/014176, dated Jul. 29, 2021, 9 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2019/023793, dated Aug. 27, 2019, 17 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2019/024067, dated Oct. 9, 2019, 18 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2019/049101, dated Dec. 16, 2019, 26 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/014176, dated Mar. 26, 2020, 12 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/031643, dated Dec. 2, 2020, 33 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/031643, dated Nov. 2, 2020, 34 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2021/031212, dated Sep. 21, 2021, 21 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2017/035321, dated Aug. 17, 2017, 3 pages. |
Invitation to Pay Additional Fees and Partial International Search Report received for PCT Patent Application No. PCT/US2019/049101, dated Oct. 24, 2019, 17 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2017/049795, dated Nov. 3, 2017, 3 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2019/023793, dated Jul. 5, 2019, 11 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2019/024067, dated Jul. 16, 2019, 13 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2020/031643, dated Sep. 9, 2020, 30 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2021/031212, dated Jul. 28, 2021, 19 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2021/034304, dated Aug. 20, 2021, 16 pages. |
Invitation to Pay Search Fees received for European Patent Application No. 18704732.9, dated Jun. 2, 2021, 3 pages. |
Invitation to Pay Search Fees received for European Patent Application No. 19724959.2, dated Feb. 25, 2020, 3 pages. |
Minutes of the Oral Proceedings received for European Patent Application No. 19181242.9, mailed on Dec. 15, 2020, 6 pages. |
Minutes of the Oral Proceedings received for European Patent Application No. 19724959.2, mailed on Jun. 14, 2021, 6 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/528,257, dated Jul. 30, 2021, 12 pages. |
Non-Final Office Action received for U.S. Appl. No. 13/082,035, dated Apr. 21, 2016, 25 pages. |
Non-Final Office Action received for U.S. Appl. No. 13/082,035, dated Dec. 19, 2012, 19 pages. |
Non-Final Office Action received for U.S. Appl. No. 13/082,035, dated Sep. 11, 2014, 23 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/866,560, dated Apr. 19, 2018, 10 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/866,560, dated Apr. 30, 2019, 23 pages. |
Non-Final Office Action received for U.S. Appl. No. 15/273,522, dated Nov. 30, 2016, 15 pages. |
Non-Final Office Action received for U.S. Appl. No. 15/273,544, dated May 25, 2017, 18 pages. |
Non-Final Office Action received for U.S. Appl. No. 15/728,147, dated Feb. 22, 2018, 20 pages. |
Non-Final Office Action received for U.S. Appl. No. 15/728,147, dated Jan. 31, 2019, 41 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/116,221, dated Nov. 13, 2018, 27 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/142,288, dated Nov. 20, 2018, 15 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/142,305, dated Nov. 23, 2018, 32 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/142,328, dated Nov. 8, 2018, 18 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/143,097, dated Feb. 28, 2019, 17 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/144,629, dated Mar. 13, 2020, 24 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/144,629, dated Mar. 29, 2019, 18 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/259,771, dated May 8, 2019, 11 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/519,850, dated Mar. 23, 2020, 8 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/528,941, dated Dec. 7, 2020, 15 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/582,595, dated Nov. 26, 2019, 17 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/583,020, dated Nov. 14, 2019, 9 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/733,718, dated Sep. 16, 2020, 25 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/825,879, dated May 5, 2021, 12 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/833,436, dated Mar. 29, 2021, 27 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/027,317, dated Nov. 17, 2020, 17 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,654, dated Nov. 19, 2020, 12 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,671, dated Apr. 30, 2021, 27 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,765, dated Jun. 28, 2021, 32 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/091,460, dated Sep. 10, 2021, 10 pages. |
Notice of Acceptance received for Australian Patent Application No. 2017286130, dated Apr. 26, 2019, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2017330212, dated Apr. 28, 2020, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2018279787, dated Dec. 10, 2019, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2019213341, dated Aug. 25, 2020, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2019265357, dated Dec. 24, 2020, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2019266049, dated Nov. 24, 2020, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020213402, dated Sep. 21, 2020, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020267151, dated Dec. 9, 2020, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020277216, dated Mar. 15, 2021, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021201167, dated Mar. 15, 2021, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021203210, dated Jul. 9, 2021, 3 pages. |
Notice of Allowance received for Brazilian Patent Application No. 112018074765-3, dated Oct. 8, 2019, 2 pages. |
Notice of Allowance received for Chinese Patent Application No. 201780002533.5, dated Apr. 14, 2020, 2 pages. |
Notice of Allowance received for Chinese Patent Application No. 201810566134.8, dated Apr. 7, 2020, 3 pages. |
Notice of Allowance received for Chinese Patent Application No. 201810664927.3, dated Jul. 19, 2019, 2 pages. |
Notice of Allowance received for Chinese Patent Application No. 201811512767.7, dated Jul. 27, 2020, 4 pages. |
Notice of Allowance received for Chinese Patent Application No. 201910379481.4, dated Nov. 9, 2020, 6 pages. |
Notice of Allowance received for Chinese Patent Application No. 201911202668.3, dated Feb. 4, 2021, 5 pages. |
Notice of Allowance received for Chinese Patent Application No. 201911219525.3, dated Sep. 29, 2020, 2 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010218168.5, dated Aug. 25, 2021, 6 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010287953.6, dated Mar. 18, 2021, 7 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010287958.9, dated Aug. 27, 2021, 6 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010287961.0, dated Mar. 9, 2021, 8 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010287975.2, dated Mar. 1, 2021, 7 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010600151.6, dated Aug. 13, 2021, 2 pages. |
Notice of Allowance received for Japanese Patent Application No. 2018-171188, dated Jul. 16, 2019, 3 pages. |
Notice of Allowance received for Japanese Patent Application No. 2018-184254, dated Jun. 15, 2020, 4 pages. |
Notice of Allowance received for Japanese Patent Application No. 2019-511767, dated Mar. 30, 2020, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2018-7026743, dated Mar. 20, 2019, 7 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2018-7028849, dated Feb. 1, 2019, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2018-7034780, dated Jun. 19, 2019, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2018-7036893, dated Jun. 12, 2019, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2019-7005369, dated Oct. 26, 2020, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2019-7027042, dated Nov. 26, 2020, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2019-7035478, dated Apr. 24, 2020, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2020-0052618, dated Mar. 23, 2021, 5 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2020-0143726, dated Nov. 10, 2020, 5 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2020-0155924, dated Nov. 23, 2020, 7 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2020-7021870, dated Apr. 26, 2021, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2020-7031855, dated Mar. 22, 2021, 5 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2020-7032147, dated May 12, 2021, 4 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2021-7000954, dated Aug. 18, 2021, 5 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2021-7019525, dated Jul. 13, 2021, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 16/528,941, dated Aug. 10, 2021, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 16/528,941, dated May 19, 2021, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 13/082,035, dated Oct. 5, 2016, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 14/866,560, dated Nov. 15, 2019, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,453, dated Oct. 12, 2017, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,503, dated Aug. 14, 2017, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,522, dated Mar. 28, 2017, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,522, dated May 19, 2017, 2 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,522, dated May 23, 2017, 2 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,544, dated Mar. 13, 2018, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 15/273,544, dated Oct. 27, 2017, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 15/713,490, dated Mar. 20, 2019, 15 pages. |
Notice of Allowance received for U.S. Appl. No. 15/728,147, dated Aug. 19, 2019, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 15/858,175, dated Jun. 1, 2018, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 15/858,175, dated Sep. 12, 2018, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/110,514, dated Apr. 29, 2019, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 16/110,514, dated Mar. 13, 2019, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 16/116,221, dated Nov. 22, 2019, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 16/116,221, dated Sep. 20, 2019, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 16/142,288, dated Jun. 24, 2019, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 16/142,288, dated Mar. 27, 2019, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 16/142,288, dated May 1, 2019, 4 pages. |
Notice of Allowance received for U.S. Appl. No. 16/142,305, dated Apr. 3, 2019, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 16/142,305, dated May 1, 2019, 2 pages. |
Notice of Allowance received for U.S. Appl. No. 16/142,328, dated Apr. 5, 2019, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 16/143,097, dated Aug. 29, 2019, 23 pages. |
Notice of Allowance received for U.S. Appl. No. 16/143,201, dated Feb. 8, 2019, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 16/143,201, dated Nov. 28, 2018, 14 pages. |
Notice of Allowance received for U.S. Appl. No. 16/191,117, dated Oct. 29, 2019, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 16/519,850, dated Aug. 26, 2020, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/582,595, dated Mar. 20, 2020, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 16/583,020, dated Apr. 1, 2020, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 16/583,020, dated Feb. 28, 2020, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,044, dated Dec. 11, 2019, 15 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,044, dated Mar. 30, 2020, 16 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,044, dated Nov. 14, 2019, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,100, dated Apr. 8, 2020, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,100, dated Jan. 14, 2020, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,693, dated Jan. 14, 2020, 15 pages. |
Notice of Allowance received for U.S. Appl. No. 16/584,693, dated May 4, 2020, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 16/586,314, dated Apr. 1, 2020, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/586,314, dated Jan. 9, 2020, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 16/586,344, dated Dec. 16, 2019, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 16/586,344, dated Mar. 27, 2020, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 16/663,062, dated Mar. 24, 2021, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Feb. 5, 2021, 14 pages. |
Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Jul. 29, 2021, 26 pages. |
Notice of Allowance received for U.S. Appl. No. 16/825,879, dated Jul. 13, 2021, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 16/825,879, dated Sep. 28, 2021, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Jul. 23, 2021, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Jun. 1, 2021, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/027,317, dated Apr. 12, 2021, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 17/027,317, dated Jan. 13, 2021, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/027,484, dated May 3, 2021, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 17/031,654, dated Feb. 10, 2021, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/031,654, dated May 27, 2021, 8 pages. |
Office Action received for Australian Patent Application No. 2017100683, dated Sep. 20, 2017, 3 pages. |
Office Action received for Australian Patent Application No. 2017100684, dated Jan. 24, 2018, 4 pages. |
Office Action received for Australian Patent Application No. 2017100684, dated Oct. 5, 2017, 4 pages. |
Office Action Received for Australian Patent Application No. 2017286130, dated Jan. 21, 2019, 4 pages. |
Office Action received for Australian Patent Application No. 2017330212, dated Feb. 21, 2020, 2 pages. |
Office Action received for Australian Patent Application No. 2019100794, dated Oct. 3, 2019, 4 pages. |
Office Action received for Australian Patent Application No. 2019213341, dated Jun. 30, 2020, 6 pages. |
Office Action received for Australian Patent Application No. 2020100189, dated Apr. 1, 2020, 3 pages. |
Office Action received for Australian Patent Application No. 2020100720, dated Jul. 9, 2020, 7 pages. |
Office Action received for Australian Patent Application No. 2020100720, dated Sep. 1, 2020, 5 pages. |
Office Action received for Australian Patent Application No. 2020101043, dated Aug. 14, 2020, 5 pages. |
Office Action received for Australian Patent Application No. 2020101043, dated Oct. 30, 2020, 4 pages. |
Office Action received for Australian Patent Application No. 2020201969, dated Sep. 25, 2020, 5 pages. |
Office Action received for Australian Patent Application No. 2020239717, dated Jun. 23, 2021, 7 pages. |
Office Action received for Australian Patent Application No. 2020239749, dated Jul. 16, 2021, 5 pages. |
Office Action received for Australian Patent Application No. 2020260413, dated Jun. 24, 2021, 2 pages. |
Office Action received for Australian Patent Application No. 2020277216, dated Dec. 17, 2020, 5 pages. |
Office Action received for Australian Patent Application No. 2021103004, dated Aug. 12, 2021, 5 pages. |
Office Action received for Chinese Patent Application No. 201780002533.5, dated Apr. 25, 2019, 17 pages. |
Office Action received for Chinese Patent Application No. 201780002533.5, dated Feb. 3, 2020, 6 pages. |
Office Action received for Chinese Patent Application No. 201780002533.5, dated Sep. 26, 2019, 21 pages. |
Office Action received for Chinese Patent Application No. 201810566134.8, dated Aug. 13, 2019, 14 pages. |
Office Action received for Chinese Patent Application No. 201810664927.3, dated Mar. 28, 2019, 11 pages. |
Office Action received for Chinese Patent Application No. 201811446867.4, dated Dec. 31, 2019, 12 pages. |
Office Action received for Chinese Patent Application No. 201811446867.4, dated Sep. 8, 2020, 9 pages. |
Office Action received for Chinese Patent Application No. 201811512767.7, dated Dec. 20, 2019, 14 pages. |
Office Action received for Chinese Patent Application No. 201811512767.7, dated Jun. 4, 2020, 6 pages. |
Office Action received for Chinese Patent Application No. 201910379481.4, dated Mar. 2, 2020, 18 pages. |
Office Action received for Chinese Patent Application No. 201910691865.X, dated Aug. 4, 2021, 10 pages. |
Office Action received for Chinese Patent Application No. 201910691865.X, dated Feb. 4, 2021, 16 pages. |
Office Action received for Chinese Patent Application No. 201910691865.X, dated Jul. 8, 2020, 17 pages. |
Office Action received for Chinese Patent Application No. 201910691872.X, dated Jun. 3, 2020, 10 pages. |
Office Action received for Chinese Patent Application No. 201910692978.1, dated Apr. 3, 2020, 19 pages. |
Office Action received for Chinese Patent Application No. 201910692978.1, dated Nov. 4, 2020, 4 pages. |
Office Action received for Chinese Patent Application No. 201911199054.4, dated Jan. 20, 2021, 19 pages. |
Office Action received for Chinese Patent Application No. 201911199054.4, dated Jun. 10, 2021, 13 pages. |
Office Action received for Chinese Patent Application No. 201911202668.3, dated Aug. 4, 2020, 13 pages. |
Office Action received for Chinese Patent Application No. 201911219525.3, dated Jul. 10, 2020, 7 pages. |
Office Action received for Chinese Patent Application No. 202010218168.5, dated Feb. 9, 2021, 21 pages. |
Office Action received for Chinese Patent Application No. 202010287950.2, dated Aug. 10, 2021, 12 pages. |
Office Action received for Chinese Patent Application No. 202010287950.2, dated Feb. 20, 2021, 22 pages. |
Office Action received for Chinese Patent Application No. 202010287953.6, dated Jan. 14, 2021, 14 pages. |
Office Action received for Chinese Patent Application No. 202010287958.9, dated Jan. 5, 2021, 16 pages. |
Office Action received for Chinese Patent Application No. 202010287961.0, dated Dec. 30, 2020, 16 pages. |
Office Action received for Chinese Patent Application No. 202010287975.2, dated Dec. 30, 2020, 17 pages. |
Office Action received for Chinese Patent Application No. 202010330318.1, dated Jul. 13, 2021, 12 pages. |
Office Action received for Chinese Patent Application No. 202010330318.1, dated Mar. 31, 2021, 13 pages. |
Office Action received for Chinese Patent Application No. 202010330318.1, dated Nov. 19, 2020, 18 pages. |
Office Action received for Chinese Patent Application No. 202010600151.6, dated Apr. 29, 2021, 11 pages. |
Office Action received for Chinese Patent Application No. 202010600197.8, dated Jul. 2, 2021, 14 pages. |
Office Action received for Chinese Patent Application No. 202010601484.0, dated Jun. 3, 2021, 13 pages. |
Office Action received for Chinese Patent Application No. 202011480411.7, dated Aug. 2, 2021, 12 pages. |
Office Action received for Danish Patent Application No. PA201670627, dated Apr. 5, 2017, 3 pages. |
Office Action received for Danish Patent Application No. PA201670627, dated Nov. 6, 2017, 2 pages. |
Office Action received for Danish Patent Application No. PA201670627, dated Oct. 11, 2016, 8 pages. |
Office Action received for Danish Patent Application No. PA201670753, dated Dec. 20, 2016, 7 pages. |
Office Action received for Danish Patent Application No. PA201670753, dated Jul. 5, 2017, 4 pages. |
Office Action received for Danish Patent Application No. PA201670753, dated Mar. 23, 2018, 5 pages. |
Office Action received for Danish Patent Application No. PA201670755, dated Apr. 6, 2017, 5 pages. |
Office Action received for Danish Patent Application No. PA201670755, dated Apr. 20, 2018, 2 pages. |
Office Action received for Danish Patent Application No. PA201670755, dated Dec. 22, 2016, 6 pages. |
Office Action received for Danish Patent Application No. PA201770563, dated Aug. 13, 2018, 5 pages. |
Office Action received for Danish Patent Application No. PA201770563, dated Jan. 28, 2020, 3 pages. |
Office Action received for Danish Patent Application No. PA201770563, dated Jun. 28, 2019, 5 pages. |
Office Action received for Danish Patent Application No. PA201770719, dated Aug. 14, 2018, 6 pages. |
Office Action received for Danish Patent Application No. PA201770719, dated Feb. 19, 2019, 4 pages. |
Office Action received for Danish Patent Application No. PA201770719, dated Jan. 17, 2020, 4 pages. |
Office Action received for Danish Patent Application No. PA201770719, dated Jun. 30, 2021, 3 pages. |
Office Action received for Danish Patent Application No. PA201770719, dated Nov. 16, 2020, 5 pages. |
Office Action received for Danish Patent Application No. PA201870366, dated Aug. 22, 2019, 3 pages. |
Office Action received for Danish Patent Application No. PA201870366, dated Dec. 12, 2018, 3 pages. |
Office Action received for Danish Patent Application No. PA201870367, dated Dec. 20, 2018, 5 pages. |
Office Action received for Danish Patent Application No. PA201870368, dated Dec. 20, 2018, 5 pages. |
Office Action received for Danish Patent Application No. PA201870368, dated Oct. 1, 2019, 6 pages. |
Office Action received for Danish Patent Application No. PA201870372, dated Aug. 20, 2019, 2 pages. |
Office Action received for Danish Patent Application No. PA201870372, dated Jan. 31, 2019, 4 pages. |
Office Action received for Danish Patent Application No. PA201870374, dated Feb. 6, 2019, 5 pages. |
Office Action received for Danish Patent Application No. PA201870374, dated Jun. 17, 2019, 5 pages. |
Office Action received for Danish Patent Application No. PA201870375, dated Jan. 31, 2019, 4 pages. |
Office Action received for Danish Patent Application No. PA201870377, dated Jan. 31, 2019, 4 pages. |
Office Action received for Danish Patent Application No. PA201870623, dated Jan. 30, 2020, 2 pages. |
Office Action received for Danish Patent Application No. PA201870623, dated Jul. 12, 2019, 4 pages. |
Office Action received for Danish Patent Application No. PA201970592, dated Mar. 2, 2020, 5 pages. |
Office Action received for Danish Patent Application No. PA201970592, dated Oct. 26, 2020, 5 pages. |
Office Action received for Danish Patent Application No. PA201970593, dated Apr. 16, 2020, 2 pages. |
Office Action received for Danish Patent Application No. PA201970593, dated Feb. 2, 2021, 2 pages. |
Office Action received for Danish Patent Application No. PA201970593, dated Mar. 10, 2020, 4 pages. |
Office Action received for Danish Patent Application No. PA201970595, dated Mar. 10, 2020, 4 pages. |
Office Action received for Danish Patent Application No. PA201970600, dated Mar. 9, 2020, 5 pages. |
Office Action received for Danish Patent Application No. PA201970601, dated Aug. 13, 2020, 3 pages. |
Office Action received for Danish Patent Application No. PA201970601, dated Jan. 31, 2020, 3 pages. |
Office Action received for Danish Patent Application No. PA201970601, dated Nov. 11, 2019, 8 pages. |
Office Action received for Danish Patent Application No. PA201970603, dated Nov. 4, 2020, 3 pages. |
Office Action received for Danish Patent Application No. PA201970605, dated Mar. 10, 2020, 5 pages. |
Office Action received for Danish Patent Application No. PA202070611, dated Dec. 22, 2020, 7 pages. |
Office Action received for Danish Patent Application No. PA202070623, dated Aug. 24, 2021, 3 pages. |
Office Action received for Danish Patent Application No. PA202070624, dated Jun. 16, 2021, 5 pages. |
Office Action received for Danish Patent Application No. PA202070625, dated Jun. 16, 2021, 3 pages. |
Office Action received for European Patent Application No. 17809168.2, dated Jan. 7, 2020, 5 pages. |
Office Action received for European Patent Application No. 17809168.2, dated Oct. 8, 2020, 4 pages. |
Office Action received for European Patent Application No. 17853657.9, dated Apr. 1, 2021, 6 pages. |
Office Action received for European Patent Application No. 18176890.4, dated Oct. 16, 2018, 8 pages. |
Office Action received for European Patent Application No. 18183054.8, dated Feb. 24, 2020, 6 pages. |
Office Action received for European Patent Application No. 18183054.8, dated Nov. 16, 2018, 8 pages. |
Office Action received for European Patent Application No. 18209460.7, dated Apr. 10, 2019, 7 pages. |
Office Action received for European Patent Application No. 18209460.7, dated Apr. 21, 2020, 5 pages. |
Office Action received for European Patent Application No. 18214698.5, dated Apr. 2, 2019, 8 pages. |
Office Action received for European Patent Application No. 18704732.9, dated Sep. 7, 2021, 10 pages. |
Office Action received for European Patent Application No. 19172407.9, dated Oct. 18, 2019, 7 pages. |
Office Action received for European Patent Application No. 19204230.7, dated Sep. 28, 2020, 6 pages. |
Office Action received for European Patent Application No. 19212057.4, dated Mar. 9, 2021, 6 pages. |
Office Action received for European Patent Application No. 19724959.2, dated Apr. 23, 2020, 10 pages. |
Office Action received for European Patent Application No. 20168009.7, dated Apr. 20, 2021, 6 pages. |
Office Action received for European Patent Application No. 20168009.7, dated Sep. 13, 2021, 8 pages. |
Office Action received for European Patent Application No. 20168021.2, dated Jul. 22, 2020, 8 pages. |
Office Action received for European Patent Application No. 20206196.6, dated Jan. 13, 2021, 10 pages. |
Office Action received for European Patent Application No. 20206197.4, dated Aug. 27, 2021, 6 pages. |
Office Action received for European Patent Application No. 20206197.4, dated Jan. 12, 2021, 9 pages. |
Office Action received for European Patent Application No. 20210373.5, dated May 10, 2021, 9 pages. |
Office Action received for European Patent Application No. 21157252.4, dated Apr. 23, 2021, 8 pages. |
Office Action received for European Patent Application No. 21163791.3, dated Jun. 2, 2021, 8 pages. |
Office Action received for Indian Patent Application No. 201814036470, dated Feb. 26, 2021, 7 pages. |
Office Action received for Indian Patent Application No. 201814036472, dated Jul. 8, 2021, 8 pages. |
Office Action received for Indian Patent Application No. 201917053025, dated Mar. 19, 2021, 7 pages. |
Office Action received for Indian Patent Application No. 202018006172, dated May 5, 2021, 6 pages. |
Office Action received for Japanese Patent Application No. 2018-182607, dated Apr. 6, 2020, 6 pages. |
Office Action received for Japanese Patent Application No. 2018-182607, dated Jul. 20, 2020, 5 pages. |
Office Action received for Japanese Patent Application No. 2018-182607, dated Sep. 8, 2021, 7 pages. |
Office Action received for Japanese Patent Application No. 2018-184254, dated Mar. 2, 2020, 8 pages. |
Office Action received for Japanese Patent Application No. 2018-225131, dated Aug. 17, 2020, 21 pages. |
Office Action received for Japanese Patent Application No. 2018-225131, dated Mar. 4, 2019, 10 pages. |
Office Action received for Japanese Patent Application No. 2018-545502, dated Aug. 17, 2020, 14 pages. |
Office Action received for Japanese Patent Application No. 2019-203399, dated Aug. 10, 2021, 4 pages. |
Office Action received for Japanese Patent Application No. 2019-215503, dated Feb. 5, 2021, 12 pages. |
Office Action received for Japanese Patent Application No. 2019-215503, dated Jul. 3, 2020, 12 pages. |
Office Action received for Japanese Patent Application No. 2020-070418, dated Aug. 3, 2020, 22 pages. |
Office Action received for Japanese Patent Application No. 2020-120086, dated May 21, 2021, 6 pages. |
Office Action received for Japanese Patent Application No. 2020-120086, dated Nov. 20, 2020, 6 pages. |
Office Action received for Japanese Patent Application No. 2020-184470, dated May 10, 2021, 3 pages. |
Office Action received for Japanese Patent Application No. 2020-184471, dated May 10, 2021, 3 pages. |
Office Action received for Japanese Patent Application No. 2020-193703, dated Apr. 19, 2021, 4 pages. |
Office Action received for Korean Patent Application No. 10-2018-7026743, dated Jan. 17, 2019, 5 pages. |
Office Action received for Korean Patent Application No. 10-2018-7034780, dated Apr. 4, 2019, 11 pages. |
Office Action received for Korean Patent Application No. 10-2018-7036893, dated Apr. 9, 2019, 6 pages. |
Office Action received for Korean Patent Application No. 10-2019-7005369, dated Mar. 13, 2020, 12 pages. |
Office Action received for Korean Patent Application No. 10-2019-7027042, dated May 13, 2020, 6 pages. |
Office Action received for Korean Patent Application No. 10-2019-7035478, dated Jan. 17, 2020, 17 pages. |
Office Action received for Korean Patent Application No. 10-2020-0052618, dated Aug. 18, 2020, 11 pages. |
Office Action received for Korean Patent Application No. 10-2020-7031855, dated Nov. 24, 2020, 6 pages. |
Office Action received for Korean Patent Application No. 10-2020-7032147, dated Feb. 16, 2021, 6 pages. |
Office Action received for Korean Patent Application No. 10-2021-0022053, dated Mar. 1, 2021, 11 pages. |
Office Action received for Korean Patent Application No. 10-2021-7000954, dated Jan. 28, 2021, 5 pages. |
Office Action received for Korean Patent Application No. 10-2021-7002582, dated Apr. 16, 2021, 13 pages. |
Office Action received for Korean Patent Application No. 10-2021-7020693, dated Jul. 14, 2021, 7 pages. |
Office Action received for Taiwanese Patent Application No. 100111887, dated Oct. 7, 2013, 23 pages. |
Pre-Appeal Review Report received for Japanese Patent Application No. 2018-182607, mailed on Jan. 21, 2021, 4 pages. |
Pre-Appeal Review Report received for Japanese Patent Application No. 2018-225131, mailed on Jan. 24, 2020, 8 pages. |
Pre-Appeal Review Report received for Japanese Patent Application No. 2018-545502, mailed on Jan. 24, 2020, 8 pages. |
Procamera Capture the Moment, Online Available at: http://www.procamera-app.com/procamera_manual/ProCamera_Manual_EN.pdf, Apr. 21, 2016, 63 pages. |
Record of Oral Hearing received for U.S. Appl. No. 16/259,771, mailed on Aug. 4, 2021, 15 pages. |
Result of Consultation received for European Patent Application No. 19172407.9, dated Nov. 5, 2020, 17 pages. |
Result of Consultation received for European Patent Application No. 19204230.7, dated Nov. 16, 2020, 3 pages. |
Result of Consultation received for European Patent Application No. 19204230.7, dated Sep. 24, 2020, 5 pages. |
Result of Consultation received for European Patent Application No. 19724959.2, dated Sep. 4, 2020, 3 pages. |
Result of Consultation received for European Patent Application No. 19181242.9, dated Dec. 1, 2020, 12 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201770563, dated Oct. 10, 2017, 9 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870366, dated Aug. 27, 2018, 9 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870367, dated Aug. 27, 2018, 9 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870368, dated Sep. 6, 2018, 7 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870372, dated Sep. 14, 2018, 8 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870372, dated Sep. 17, 2018, 10 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870374, dated Aug. 27, 2018, 9 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870375, dated Aug. 23, 2018, 8 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870377, dated Sep. 4, 2018, 8 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201870623, dated Dec. 20, 2018, 8 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201970592, dated Nov. 7, 2019, 8 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201970593, dated Oct. 29, 2019, 10 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201970595, dated Nov. 8, 2019, 16 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201970600, dated Nov. 5, 2019, 11 pages. |
Search Report and Opinion received for Danish Patent Application No. PA201970605, dated Nov. 12, 2019, 10 pages. |
Search Report and Opinion received for Danish Patent Application No. PA202070623, dated Dec. 21, 2020, 9 pages. |
Search Report and Opinion received for Danish Patent Application No. PA202070624, dated Dec. 10, 2020, 10 pages. |
Search Report and Opinion received for Danish Patent Application No. PA202070625, dated Dec. 17, 2020, 9 pages. |
Search Report received for Danish Patent Application No. PA201770719, dated Oct. 17, 2017, 9 pages. |
Summons to Attend Oral Proceedings received for European Patent Application No. 19172407.9, mailed on Jun. 24, 2020, 14 pages. |
Summons to Attend Oral Proceedings received for European Patent Application No. 19181242.9, mailed on Jun. 16, 2020, 12 pages. |
Summons to Attend Oral Proceedings received for European Patent Application No. 19204230.7, mailed on May 25, 2021, 10 pages. |
Summons to Attend Oral Proceedings received for European Patent Application No. 19724959.2, mailed on Feb. 1, 2021, 9 pages. |
Summons to Attend Oral Proceedings received for European Patent Application No. 19724959.2, mailed on Mar. 31, 2021, 3 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 15/713,490, dated May 30, 2019, 2 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 16/143,201, dated Dec. 13, 2018, 2 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 16/143,201, dated Dec. 19, 2018, 2 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 16/143,201, dated Jan. 10, 2019, 2 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Mar. 9, 2021, 21 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Mar. 29, 2021, 2 pages. |
Supplementary European Search Report received for European Patent Application No. 18176890.4, dated Sep. 20, 2018, 4 pages. |
Ali et al. “Facial Expression Recognition Using Human to Animated-Character Expression Translation”, Oct. 12, 2019, 8 pages. |
Android Police,“Galaxy S9+ In-Depth Camera Review”, See Especially 0:43-0:53; 1:13-1:25; 1:25-1:27; 5:11-5:38; 6:12-6:26, Available Online at <https://www.youtube.com/watch?v=GZHYCdMCv-w>, Apr. 19, 2018, 3 pages. |
Applivgames,““Super Mario Run” Stickers for iMessage: Free Delivery Started!”, Available online at: <https://games.app-liv.jp/archives/178627>, Sep. 13, 2016, 3 pages. |
AstroVideo,“AstroVideo enables you to use a low-cost,low-light video camera to capture astronomical images.”, Available online at: https://www.coaa.co.uk/astrovideo.htm, Retrieved on: Nov. 18, 2019, 5 pages. |
Carretero et al. “Preserving Avatar Genuineness in Different Display Media”, Mobile Networks and Applications, Kluwer Academic Publishers, BO, vol. 13, No. 6, Jul. 15, 2008, pp. 627-634. |
Channel Highway,“Virtual Makeover in Real-time and in full 3D”, Available online at:—https://www.youtube.com/watch?v=NgUbBzb5qZg, Feb. 16, 2016, 1 page. |
Clover Juli, “Moment Pro Camera App for iOS Gains Zebra Striping for Displaying Over and Underexposed Areas”, Online Available at: https://web.archive.org/web/20190502081353/https://www.macrumors.com/2019/05/01/momentcamera-app-zebra-striping-and-more/, May 1, 2019, 8 pages. |
Contents Pocket,“Line Stamp Information”, Available online at: <https://web.archive.org/web/20150404080541/http://contents-pocket.net/linestamp.html>, Apr. 2015, 2 pages. |
Digital Trends,“ModiFace Partners With Samsung To Bring AR Makeup To The Galaxy S9”, Available online at:—https://www.digitaltrends.com/mobile/modiface-samsung-partnership-ar-makeup-galaxy-s9/, 2018, 16 pages. |
Enterbrain,“No. 5 Create your own Avatar Mii Studio”, vol. 26, No. 11, p. 138, Feb. 24, 2011, 4 pages. |
Fedko Daria, “AR Hairstyles”, Online Available at <https://www.youtube.com/watch?v=FrS6tHRbFE0>, Jan. 24, 2017, 2 pages. |
Feng et al. “3D Direct Human-Computer Interface Paradigm Based on Free Hand Tracking”, Chinese Journal of Computers, vol. 37, No. 6, Jun. 30, 2014, 15 pages. |
Flatlinevertigo,“Black Desert Online :: Intro to Hair Customization”, Online Available at: <https://www.youtube.com/watch?v=9MCbfd_eMEg>, Sep. 9, 2015, 3 pages. |
Gadgets Portal,“Galaxy J5 Prime Camera Review! (vs J7 Prime) 4K”, Available Online at :—https://www.youtube.com/watch?v=Rf2Gy8QmDqc, Oct. 24, 2016, 3 pages. |
Gao et al. “Automatic Unpaired Shape Deformation Transfer”, ACM Transactions on Graphics, Online available at: https://doi.org/10.1145/3272127.3275028, 2018, 11 pages. |
Gavin's Gadgets,“Honor 10 Camera App Tutorial—How to use All Modes + 90 Photos Camera Showcase”, See Especially 2:58-4:32, Available Online at <https://www.youtube.com/watch?v=M5XZwXJcK74>, May 26, 2018, 3 pages. |
Gibson Andrews, "Aspect Ratio: What it is and Why it Matters", Retrieved from <https://web.archive.org/web/20190331225429/https:/digital-photography-school.com/aspect-ratio-what-it-is-and-why-it-matters/>, Paragraphs: "Adjusting aspect ratio in-camera", "Cropping in post-processing", Mar. 31, 2019, 10 pages. |
GSM Arena,“Honor 10 Review : Camera”, Available Online at <https://web.archive.org/web/20180823142417/https://www.gsmarena.com/honor_10-review-1771p5.php>, Aug. 23, 2018, 11 pages. |
Hall Brent, “Samsung Galaxy Phones Pro Mode (S7/S8/S9/Note 8/Note 9): When, why, & How To Use It”, See Especially 3:18-5:57, Available Online at <https://www.youtube.com/watch?v=KwPxGUDRkTg>, Jun. 19, 2018, 3 pages. |
Helpvideostv,“How to Use Snap Filters on Snapchat”, Retrieved from <https://www.youtube.com/watch?v=oR-7cIWPszU&feature=youtu.be>, Mar. 22, 2017, pp. 1-2. |
Hernández Carlos, “Lens Blur in the New Google Camera App”, Available online at: https://research.googleblog.com/2014/04/lens-blur-in-new-google-camera-app.html, https://ai.googleblog.com/2014/04/1ens-blur-in-new-google-camera-app.html, Apr. 16, 2014, 6 pages. |
Huawei Mobile PH,“Huawei P10 Tips & Tricks: Compose Portraits With Wide Aperture (Bokeh)”, Available Online at <https://www.youtube.com/watch?v=WM4yo5-hrrE>, Mar. 30, 2017, 2 pages. |
Iluvtrading,“Galaxy S10 / S10+: How to Use Bright Night Mode for Photos (Super Night Mode)”, Online Available at: https://www.youtube.com/watch?v=SfZ7Us1S1Mk, Mar. 11, 2019, 4 pages. |
Iluvtrading,“Super Bright Night Mode: Samsung Galaxy S1O vs Huawei P30 Pro (Review/How to/Explained)”, Online Available at: https://www.youtube.com/watch?v=d4r3PWioY4Y, Apr. 26, 2019, 4 pages. |
Imagespacetv,“Olympus OM-D E-M1 Mark II—Highlights & Shadows with Gavin Hoey”, Online available at: https://www.youtube.com/watch?v=goEhh1n--hQ, Aug. 3, 2018, 3 pages. |
KK World,“Redmi Note 7 Pro Night Camera Test I Night Photography with Night Sight & Mode”, Online Available at: https://www.youtube.com/watch?v=3EKjGBjX3PY, Mar. 26, 2019, 4 pages. |
Koti Kotresh, “Colour with Asian Paints.A Mobail App by Android Application—2018”, Available Online at <https://www.youtube.com/watch?v=M6EIO7ErYd0&feature=youtu.be&t=81 >, May 6, 2018, 2 pages. |
Kozak Tadeusz, “When You're Video Chatting on Snapchat, How Do You Use Face Filters?”, Quora, Online Available at: https://www.quora.com/When-youre-video-chatting-on-Snapchat-how-do-you-use-face-filters, Apr. 29, 2018, 1 page. |
Kyoko Makino, “How to Make a Lookalike Face Icon for Your Friend”, ASCII, Japan Weekly, ASCII Media Works Inc. vol. 24, pp. 90-93, Jul. 17, 2014, 7 pages. |
Lang Brian, “How to Audio & Video Chat with Multiple Users at the Same Time in Groups”, Snapchat 101, Online Available at: <https://smartphones.gadgethacks.com/how-to/snapchat-101-audio-video-chat-with-multiple-users-same-time-groups-0184113/>, Apr. 17, 2018, 4 pages. |
Mitsuru Takeuchi, “Face Shape Selection for Automatic Avatar Generation”, 13th Annual Conference Proceedings of Virtual Reality Society of Japan tournament Papers [DVD-ROM], The Virtual Reality Society of Japan, Sep. 24, 2008, 7 pages. |
Mobiscrub,“Galaxy S4 mini camera review”, Available Online at :—https://www.youtube.com/watch?v=KYKOydw8QT8, Aug. 10, 2013, 3 pages. |
Mobiscrub,“Samsung Galaxy S5 Camera Review—HD Video”, Available Online on:—https://www.youtube.com/watch?v=BFgwDtNKMjg, Mar. 27, 2014, 3 pages. |
Modifacechannel,“Sephora 3D Augmented Reality Mirror”, Available Online at: https://www.youtube.com/watch?v=wwBO4PU9EXI, May 15, 2014, 1 page. |
Neurotechnology,“Sentimask SDK”, Available at: https://www.neurotechnology.com/sentimask.html, Apr. 22, 2018, 5 pages. |
Noh et al. “Expression Cloning”, Proceedings of the 28th annual conference on Computer Graphics and Interactive Techniques, ACM Siggraph, Los Angeles, CA, USA, Aug. 12-17, 2001, 12 pages. |
Osxdaily,“How to Zoom the Camera on iPhone”, Available Online at: https://osxdaily.com/2012/04/18/zoom-camera-iphone/, Apr. 18, 2012, 6 pages. |
Paine Steve, “Samsung Galaxy Camera Detailed Overview—User Interface”, Retrieved from: <https://www.youtube.com/watch?v=td8UYSySulo&feature=youtu.be>, Sep. 18, 2012, pp. 1-2. |
PC World,“How to make AR Emojis on the Samsung Galaxy S9”, You Tube, Available Online: https://www.youtube.com/watch?v=8wQICfulkz0, Feb. 25, 2018, 2 pages. |
Phonearena,“Sony Xperia Z5 camera app and UI overview”, Retrieved from <https://www.youtube.com/watch?v=UtDzdTsmkfU&feature=youtu.be>, Sep. 8, 2015, pp. 1-3. |
Pumarola et al. “GANimation: Anatomically-aware Facial Animation from a Single Image”, Proceedings of the European Conference on Computer Vision (ECCV), Jul. 24, 2018, 16 pages. |
Pyun et al. “An Example-Based Approach for Facial Expression Cloning”, SIGGRAPH Symposium on Computer Animation, The Eurographics Association (2003), 2003, 10 pages. |
Rosa et al. “Stripe Generator—a Free Tool for the Web Design Community”, Available online at: http://www.stripegenerator.com/, Mar. 28, 2019, 10 pages. |
Schiffhauer Alexander, “See the Light with Night Sight”, Available online at: https://www.blog.google/products/pixel/see-light-night-sight, Nov. 14, 2018, 6 pages. |
Shaw et al. "Skills for Closeups Photography", Watson-Guptill Publications, Nov. 1999, 5 pages. |
shiftdelete.net,"Oppo Reno 10x Zoom Ön Inceleme—Huawei P30 Pro'ya rakip mi geliyor?", Available online at <https://www.youtube.com/watch?v=ev2wlUztdrg>, See especially 5:34-6:05, Apr. 24, 2019, 2 pages. |
Singh Lovepreet, “Samsung Galaxy Watch: How to Change Watch Face—Tips and Tricks”, Online available at: <https://www.youtube.com/watch?pp=desktop&v=IN7gPxlZ1qU>, Retrieved on Dec. 10, 2020, Dec. 4, 2018, 1 page. |
Slashgear,“Samsung AR Emoji demo on the Galaxy S9”, Available Online at <https://www.youtube.com/watch?v=GQwNKzY4C9Y>, Feb. 25, 2018, 3 pages. |
Smart Reviews,“Honor10 AI Camera's In Depth Review”, See Especially 2:37-2:48; 6:39-6:49, Available Online at <https://www.youtube.com/watch?v=oKFqRvxeDBQ>, May 31, 2018, 2 pages. |
Snapchat Lenses,"How To Get All Snapchat Lenses Face Effect Filter on Android", Retrieved from: <https://www.youtube.com/watch?v=0PfnF1RIntw&feature=youtu.be>, Sep. 21, 2015, pp. 1-2. |
Sony,“User Guide, Xperia XZ3, H8416/H9436/H9493”, Sony Mobile Communications Inc. Retrieved from <https://www-support-downloads.sonymobile.com/h8416/userguide_EN_H8416-H9436-H9493_2_Android9.0.pdf>, See pp. 86-102. 2018, 121 pages. |
Spellburst,“The Sims 3: Create a Sim With Me | #2—Dark Fairy + Full CC List!”, Available online at: <https://www.youtube.com/watch?v=Dy_5g9B-wkA>, Oct. 9, 2017, 2 pages. |
Tech With Brett,“How to Create Your AR Emoji on the Galaxy S9 and S9+”, Available online at: <https://www.youtube.com/watch?v=HHMdcBpC8MQ>, Mar. 16, 2018, 5 pages. |
Techtag,“Samsung J5 Prime Camera Review | True Review”, Available online at :—https://www.youtube.com/watch?v=a_p906ai6PQ, Oct. 26, 2016, 3 pages. |
Techtag,“Samsung J7 Prime Camera Review (Technical Camera)”, Available Online at:—https://www.youtube.com/watch?v=AJPcLP8GpFQ, Oct. 4, 2016, 3 pages. |
Telleen et al. “Synthetic Shutter Speed Imaging”, University of California, Santa Cruz, vol. 26, No. 3, 2007, 8 pages. |
The Nitpicker,"Sony Xperia XZ3 | in-depth Preview", Available online at <https://www.youtube.com/watch?v=TGCKxBuiO5c>, See especially 12:40-17:25, Oct. 7, 2018, 3 pages. |
Tico et al. "Robust method of digital image stabilization", Nokia Research Center, ISCCSP, Malta, Mar. 12-14, 2008, pp. 316-321. |
Tsuchihashi et al. "Generation of Caricatures by Automatic Selection of Templates for Shapes and Placement of Facial Parts", Technical Report of the Institute of Image Information and Television Engineers, Japan, The Institute of Image Information and Television Engineers, vol. 33, No. 11, pp. 77-80, Feb. 8, 2009, 7 pages. |
Vickgeek,“Canon 80D Live View Tutorial | Enhance your image quality”, Available online at:—https://www.youtube.com/watch?v=JGNCiy6Wt9c, Sep. 27, 2016, 3 pages. |
Vidstube,“Bitmoji Clockface on Fitbit Versa Sense/Versa 3/Versa 2”, Available online at: <https://www.youtube.com/watch?v=4V_xDnSLeHE>, Retrieved on Dec. 3, 2020, Jun. 30, 2019, 1 page. |
Vivo India,“Bokeh Mode | Vivo V9”, Available Online at <https://www.youtube.com/watch?v=B5AIHhH5Rxs>, Mar. 25, 2018, 3 pages. |
Wong Richard, “Huawei Smartphone (P20/P10/P9 ,Mate 10/9) Wide Aperture Mode Demo”, Available Online at <https://www.youtube.com/watch?v=eLY3LsZGDPA>, May 7, 2017, 2 pages. |
Woolsey Amanda, “How To Customize The Clock on the Apple Watch”, Available online at: <https://www.youtube.com/watch?v=t-3Bckdd9B4>, Retrieved on Dec. 11, 2020, Apr. 25, 2015, 1 page. |
Xeetechcare,“Samsung Galaxy S10—Super Night Mode & Ultra Fast Charging!”, Online Available at: https://www.youtube.com/watch?v=3bguV4FX6aA, Mar. 28, 2019, 4 pages. |
X-Tech,“Test Make up via Slick Augmented Reality Mirror Without Putting It on”, Available Online at: http://x-tech.am/test-make-up-via-slick-augmented-reality-mirror-without-putting-it-on/, Nov. 29, 2014, 5 pages. |
Zhang et al. “Facial Expression Retargeting from Human to Avatar Made Easy”, IEEE Transactions On Visualization And Computer Graphics, Aug. 2020, 14 pages. |
Zhao et al. “An Event-related Potential Comparison of Facial Expression Processing between Cartoon and Real Faces”, Online available at https://www.biorxiv.org/content/10.1101/333898v2, Jun. 18, 2018, 31 pages. |
ZY News,“Generate Cartoon Face within Three Seconds, You are the New-generation Expression Emperor”, Online available at: <http://inews.ifeng.com/48551936/news.shtml>, Apr. 22, 2016, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/093,408, dated Jul. 1, 2022, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/373,163, dated Jun. 27, 2022, 5 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/370,505, dated Jul. 6, 2022, 14 pages. |
Notice of Acceptance received for Australian Patent Application No. 2019338180, dated Jun. 27, 2022, 3 pages. |
Office Action received for Australian Patent Application No. 2021202254, dated Jun. 20, 2022, 2 pages. |
Office Action received for Indian Patent Application No. 202118028159, dated Jun. 27, 2022, 6 pages. |
Office Action received for Korean Patent Application No. 10-2022-7010505, dated Jun. 14, 2022, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Corrected Notice of Allowance received for U.S. Appl. No. 17/373,163, dated Jul. 15, 2022, 5 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021203177, dated Jul. 14, 2022, 3 pages. |
Notice of Allowance received for U.S. Appl. No. 16/833,436, dated Jul. 7, 2022, 8 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 16/833,436, dated Jul. 14, 2022, 2 pages. |
Final Office Action received for U.S. Appl. No. 17/031,765, dated Sep. 12, 2022, 37 pages. |
Hourunranta et al., “Video and Audio Editing for Mobile Applications”, IEEE International Conference on Multimedia and Expo, ICME 2006, Jul. 9, 2006, pp. 1305-1308. |
Hurwitz, Jon, “Interface For Small-Screen Media Playback Control”, Online available at: https://www.tdcommons.org/cgi/viewcontent.cgi?article=4231&context=dpubs_series, Technical Disclosure Commons, Apr. 17, 2020, pp. 1-9. |
Lein et al., “Patternizer”, Available online at: https://patternizer.com/, Apr. 2016, 5 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/093,408, dated Sep. 14, 2022, 46 pages. |
Notice of Allowance received for Japanese Patent Application No. 2019-215503, dated Aug. 26, 2022, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 16/833,436, dated Sep. 8, 2022, 8 pages. |
Final Office Action received for U.S. Appl. No. 16/259,771, dated Aug. 12, 2022, 25 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2022/024964, dated Aug. 4, 2022, 17 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/356,322, dated Aug. 11, 2022, 17 pages. |
Notice of Allowance received for U.S. Appl. No. 17/483,684, dated Aug. 16, 2022, 9 pages. |
Office Action received for Chinese Patent Application No. 202111323807.5, dated Jul. 15, 2022, 12 pages (6 pages of English Translation and 6 pages of Official Copy). |
Office Action received for European Patent Application No. 20206196.6, dated Aug. 10, 2022, 13 pages. |
Office Action received for Korean Patent Application No. 10-2022-7023077, dated Jul. 25, 2022, 6 pages (2 pages of English Translation and 4 pages of Official Copy). |
Final Office Action received for U.S. Appl. No. 17/093,408, dated May 18, 2022, 41 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021201295, dated May 10, 2022, 3 pages. |
Notice of Allowance received for Chinese Patent Application No. 202180002106.3, dated May 5, 2022, 6 pages (3 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 17/373,163, dated May 11, 2022, 8 pages. |
Office Action received for Australian Patent Application No. 2021203177, dated May 4, 2022, 7 pages. |
Office Action received for Korean Patent Application No. 10-2022-7003364, dated Apr. 22, 2022, 14 pages (6 pages of English Translation and 8 pages of Official Copy). |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/259,771, dated Apr. 18, 2022, 2 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/833,436, dated Jan. 27, 2022, 2 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,765, dated Dec. 15, 2021, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/093,408, dated Mar. 1, 2022, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/220,596, dated Aug. 18, 2021, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/373,163, dated Apr. 11, 2022, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/144,629, dated Apr. 21, 2022, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/528,257, dated Feb. 3, 2022, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/091,460, dated Feb. 16, 2022, 6 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/091,460, dated Feb. 25, 2022, 6 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/220,596, dated Nov. 4, 2021, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/220,596, dated Nov. 18, 2021, 27 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/354,376, dated Apr. 11, 2022, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/354,376, dated Feb. 16, 2022, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/354,376, dated Mar. 23, 2022, 6 pages. |
Decision on Appeal received for U.S. Appl. No. 16/144,629, mailed on Jan. 18, 2022, 8 pages. |
Decision to Grant received for Danish Patent Application No. PA201770719, dated Feb. 3, 2022, 2 pages. |
Decision to Grant received for European Patent Application No. 20168021.2, dated Feb. 3, 2022, 2 pages. |
Decision to Grant received for Japanese Patent Application No. 2018-182607, dated Apr. 13, 2022, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Decision to Grant received for Japanese Patent Application No. 2019-566087, dated Jan. 26, 2022, 2 pages (1 page of English Translation and 1 page of Official Copy). |
Decision to Refuse received for European Patent Application No. 19204230.7, dated Feb. 4, 2022, 15 pages. |
Examiner-Initiated Interview Summary received for U.S. Appl. No. 17/220,596, dated Oct. 7, 2021, 2 pages. |
Extended European Search Report received for European Patent Application No. 22151131.4, dated Mar. 24, 2022, 6 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2021/046877, dated Mar. 1, 2022, 17 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2021/046877, dated Jan. 5, 2022, 10 pages. |
Minutes of the Oral Proceedings received for European Patent Application No. 19204230.7, mailed on Feb. 2, 2022, 9 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/259,771, dated Jan. 25, 2022, 20 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,671, dated Apr. 1, 2022, 32 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,765, dated Mar. 29, 2022, 33 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/220,596, dated Jun. 10, 2021, 31 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/373,163, dated Jan. 27, 2022, 14 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020294208, dated Mar. 2, 2022, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2022200966, dated Feb. 25, 2022, 3 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010287950.2, dated Mar. 22, 2022, 7 pages (3 pages of English Translation and 4 pages of Official Copy). |
Notice of Allowance received for Chinese Patent Application No. 202010600197.8, dated Feb. 9, 2022, 5 pages (1 page of English Translation and 4 pages of Official Copy). |
Notice of Allowance received for Chinese Patent Application No. 202011480411.7, dated Feb. 18, 2022, 6 pages (3 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Japanese Patent Application No. 2020-159825, dated Mar. 25, 2022, 5 pages (1 page of English Translation and 4 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2021-7020693, dated Dec. 27, 2021, 5 pages (1 page of English Translation and 4 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2021-7023617, dated Dec. 21, 2021, 6 pages (2 pages of English Translation and 4 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2021-7035687, dated Dec. 30, 2021, 5 pages (1 page of English Translation and 4 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2021-7036337, dated Apr. 5, 2022, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2022-7002829, dated Feb. 12, 2022, 6 pages (1 page of English Translation and 5 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 16/144,629, dated Apr. 7, 2022, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 16/528,257, dated Jan. 14, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/091,460, dated Apr. 28, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/091,460, dated Feb. 4, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/220,596, dated Oct. 21, 2021, 43 pages. |
Notice of Allowance received for U.S. Appl. No. 17/354,376, dated Jan. 27, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/354,376, dated Mar. 4, 2022, 5 pages. |
Notice of Allowance received for U.S. Appl. No. 17/354,376, dated Mar. 30, 2022, 5 pages. |
Office Action received for Australian Patent Application No. 2019338180, dated Feb. 18, 2022, 3 pages. |
Office Action received for Australian Patent Application No. 2020239717, dated Dec. 15, 2021, 6 pages. |
Office Action received for Australian Patent Application No. 2020239717, dated Mar. 16, 2022, 4 pages. |
Office Action received for Australian Patent Application No. 2020239749, dated Jan. 21, 2022, 4 pages. |
Office Action received for Australian Patent Application No. 2020294208, dated Dec. 17, 2021, 2 pages. |
Office Action received for Australian Patent Application No. 2021107587, dated Feb. 1, 2022, 6 pages. |
Office Action received for Australian Patent Application No. 2021201295, dated Jan. 14, 2022, 3 pages. |
Office Action received for Chinese Patent Application No. 201910315328.5, dated Nov. 30, 2021, 21 pages (10 pages of English Translation and 11 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201910691872.X, dated Nov. 10, 2021, 16 pages (9 pages of English Translation and 7 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202010287950.2, dated Nov. 19, 2021, 8 pages (5 pages of English Translation and 3 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202011480411.7, dated Jan. 12, 2022, 7 pages (4 pages of English Translation and 3 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202110766668.7, dated Feb. 16, 2022, 12 pages (6 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202110820692.4, dated Mar. 15, 2022, 18 pages (9 pages of English Translation and 9 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202180002106.3, dated Feb. 16, 2022, 12 pages (6 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Danish Patent Application No. PA202070624, dated Feb. 4, 2022, 4 pages. |
Office Action received for Danish Patent Application No. PA202070625, dated Feb. 8, 2022, 2 pages. |
Office Action received for European Patent Application No. 20206197.4, dated Mar. 18, 2022, 7 pages. |
Office Action received for Indian Patent Application No. 201818025015, dated Feb. 4, 2022, 7 pages. |
Office Action received for Indian Patent Application No. 201818046896, dated Feb. 2, 2022, 7 pages. |
Office Action received for Indian Patent Application No. 202014041530, dated Dec. 8, 2021, 7 pages. |
Office Action received for Indian Patent Application No. 202118021941, dated Mar. 23, 2022, 5 pages. |
Office Action received for Indian Patent Application No. 202118046032, dated Apr. 25, 2022, 6 pages. |
Office Action received for Indian Patent Application No. 202118046033, dated Apr. 25, 2022, 7 pages. |
Office Action received for Indian Patent Application No. 202118046044, dated Apr. 25, 2022, 6 pages. |
Office Action received for Japanese Patent Application No. 2020-159338, dated Dec. 8, 2021, 9 pages (5 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2020-159823, dated Dec. 23, 2021, 8 pages (4 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2020-159824, dated Dec. 17, 2021, 13 pages (7 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2020-159825, dated Dec. 10, 2021, 4 pages (2 pages of English Translation and 2 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2021-092483, dated Apr. 1, 2022, 8 pages (4 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2021-7036337, dated Dec. 8, 2021, 6 pages (2 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2022-7006310, dated Mar. 8, 2022, 6 pages (2 pages of English Translation and 4 pages of Official Copy). |
Record of Oral Hearing received for U.S. Appl. No. 16/144,629, mailed on Jan. 28, 2022, 13 pages. |
Demetriou Soteris, “Analyzing & Designing the Security of Shared Resources on Smartphone Operating Systems”, Dissertation, University of Illinois at Urbana-Champaign, online available at: https://www.ideals.illinois.edu/bitstream/handle/2142/100907/DEMETRIOU-DISSERTATION-2018.pdf?sequence=1&isAllowed=n, 2018, 211 pages. |
Dutta Tushar S., “Warning! iOS Apps with Camera Access Permission Can Spy on You”, Online available at: https://web.archive.org/web/20180219092123/https://techviral.net/ios-apps-camera-can-spy/, Feb. 19, 2018, 3 pages. |
Ilovex, “Stripe Generator”, a tool that makes it easy to create striped materials, Online available at: https://www.ilovex.co.jp/blog/system/webconsulting/stripe-generator.html, May 2, 2012, 3 pages (Official Copy Only) See Communication Under 37 CFR § 1.98(a) (3). |
King Julie A., “How to Check the Exposure Meter on Your Nikon D5500”, Online available at: https://www.dummies.com/article/home-auto-hobbies/photography/how-to-check-the-exposuremeter-on-your-nikon-d5500-142677, Mar. 26, 2016, 6 pages. |
Messelodi et al., “A Kalman filter based background updating algorithm robust to sharp illumination changes.”, International Conference on Image Analysis and Processing. Springer, Berlin, Heidelberg, 2005, pp. 163-170. |
Whitacre Michele, “Photography 101 | Exposure Meter”, Online available at: https://web.archive.org/web/20160223055834/http://www.michelewhitacrephotographyblog.com, Feb. 23, 2016, 4 pages. |
Wu et al., “Security Threats to Mobile Multimedia Applications: Camera-Based Attacks on Mobile Phones”, IEEE Communications Magazine, Available online at: http://www.ieeeprojectmadurai.in/BASE/ANDROID/Security%20Threats%20to%20Mobile.pdf, Mar. 2014, pp. 80-87. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,765, dated May 23, 2022, 5 pages. |
Certificate of Examination received for Australian Patent Application No. 2021107587, dated Apr. 29, 2022, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/484,279, dated Feb. 15, 2022, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/484,279, dated Feb. 28, 2022, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/484,307, dated Apr. 20, 2022, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/484,307, dated Feb. 10, 2022, 7 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/484,321, dated Mar. 24, 2022, 2 pages. |
Extended European Search Report received for European Patent Application No. 22154034.7, dated May 11, 2022, 14 pages. |
Intention to Grant received for European Patent Application No. 20168009.7, dated May 17, 2022, 9 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020239749, dated May 27, 2022, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2022202377, dated May 11, 2022, 3 pages. |
Notice of Allowance received for Japanese Patent Application No. 2021-510849, dated May 16, 2022, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 17/091,460, dated May 23, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/483,684, dated Apr. 27, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/484,279, dated Jan. 26, 2022, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 17/484,279, dated May 13, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/484,307, dated Mar. 8, 2022, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 17/484,307, dated Nov. 30, 2021, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 17/484,321, dated Nov. 30, 2021, 10 pages. |
Office Action received for Danish Patent Application No. PA202070623, dated May 23, 2022, 3 pages. |
Summons to Attend Oral Proceedings received for European Patent Application No. 19181242.9, mailed on May 19, 2022, 7 pages. |
Supplemental Notice of Allowance received for U.S. Appl. No. 17/484,321, dated Mar. 1, 2022, 6 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,671, dated Nov. 8, 2021, 5 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/190,879, dated Oct. 26, 2021, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/528,257, dated Nov. 18, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Nov. 17, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/190,879, dated Nov. 19, 2021, 2 pages. |
Decision to Grant received for European Patent Application No. 17809168.2, dated Oct. 21, 2021, 3 pages. |
Decision to Grant received for Japanese Patent Application No. 2019-203399, dated Oct. 20, 2021, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Examiner's Pre-Review Report received for Japanese Patent Application No. 2019-215503, dated Aug. 20, 2021, 15 pages (8 pages of English Translation and 7 pages of Official Copy). |
Final Office Action received for U.S. Appl. No. 17/031,765, dated Oct. 29, 2021, 34 pages. |
Intention to Grant received for European Patent Application No. 19181242.9, dated Oct. 28, 2021, 16 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2020/031643, dated Nov. 18, 2021, 27 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2021/031096, dated Oct. 13, 2021, 16 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2021/034304, dated Oct. 11, 2021, 24 pages. |
Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2021/031096, dated Aug. 19, 2021, 8 pages. |
Nikon Digital Camera D7200 User's Manual, Online available at: https://download.nikonimglib.com/archive3/dbHI400jWws903mGr6q98a4k8F90/D7200UM_SG(En)05.pdf, 2005, 416 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/093,408, dated Dec. 8, 2021, 37 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/190,879, dated Oct. 13, 2021, 10 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020260413, dated Oct. 14, 2021, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021254567, dated Nov. 17, 2021, 3 pages. |
Notice of Allowance received for Chinese Patent Application No. 202010601484.0, dated Nov. 23, 2021, 2 pages (1 page of English Translation and 1 page of Official Copy). |
Notice of Allowance received for Japanese Patent Application No. 2020-120086, dated Nov. 15, 2021, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2021-0022053, dated Nov. 23, 2021, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 16/733,718, dated Oct. 20, 2021, 24 pages. |
Notice of Allowance received for U.S. Appl. No. 16/835,651, dated Nov. 10, 2021, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/190,879, dated Nov. 10, 2021, 8 pages. |
Office Action received for Australian Patent Application No. 2020239717, dated Sep. 28, 2021, 6 pages. |
Office Action received for Danish Patent Application No. PA201770719, dated Nov. 16, 2021, 2 pages. |
Office Action received for European Patent Application No. 20210373.5, dated Dec. 9, 2021, 7 pages. |
Office Action received for Indian Patent Application No. 201817024430, dated Sep. 27, 2021, 8 pages. |
Office Action received for Indian Patent Application No. 201818045872, dated Oct. 13, 2021, 7 pages. |
Office Action received for Japanese Patent Application No. 2019-566087, dated Oct. 18, 2021, 10 pages (6 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2021-7002582, dated Oct. 29, 2021, 6 pages (3 pages of English Translation and 3 pages of Official Copy). |
Theunlockr, “Galaxy Watch Complete Walkthrough: The Best Watch They've Made So Far”, Available online at: https://www.youtube.com/watch?v=xiECIfe1SN4, Sep. 11, 2018, 27 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/144,629, dated Aug. 24, 2022, 6 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/483,684, dated Aug. 24, 2022, 6 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/479,897, dated Aug. 30, 2022, 10 pages. |
Notice of Allowance received for Chinese Patent Application No. 201910315328.5, dated Aug. 24, 2022, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2020-159823, dated Aug. 15, 2022, 6 pages (3 pages of English Translation and 3 pages of Official Copy). |
Notice of Acceptance received for Australian Patent Application No. 2020239717, dated Jun. 1, 2022, 3 pages. |
Notice of Allowance received for Korean Patent Application No. 10-2022-7016421, dated May 25, 2022, 6 pages (2 pages of English Translation and 4 pages of Official Copy). |
Office Action received for European Patent Application No. 20210373.5, dated May 31, 2022, 5 pages. |
Decision to Refuse received for Japanese Patent Application No. 2020-159824, dated Sep. 30, 2022, 6 pages (3 pages of English Translation and 3 pages of Official Copy). |
Notice of Acceptance received for Australian Patent Application No. 2022215297, dated Sep. 26, 2022, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2022220279, dated Sep. 27, 2022, 3 pages. |
Notice of Allowance received for Japanese Patent Application No. 2021-092483, dated Sep. 30, 2022, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Japanese Patent Application No. 2021-565919, dated Oct. 3, 2022, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2022-7006310, dated Sep. 20, 2022, 8 pages (2 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Brazilian Patent Application No. BR122018076550-0, dated Sep. 28, 2022, 7 pages (1 page of English Translation and 6 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2021-166686, dated Oct. 3, 2022, 3 pages (2 pages of English Translation and 1 page of Official Copy). |
Advisory Action received for U.S. Appl. No. 17/031,765, dated Dec. 12, 2022, 7 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 16/599,433, dated Apr. 20, 2021, 7 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,671, dated Dec. 9, 2022, 5 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,765, dated Nov. 16, 2022, 5 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/093,408, dated Jan. 5, 2023, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/356,322, dated Dec. 27, 2022, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/370,505, dated Oct. 17, 2022, 4 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/479,897, dated Oct. 31, 2022, 3 pages. |
Brief Communication Regarding Oral Proceedings received for European Patent Application No. 19181242.9, dated Oct. 5, 2022, 4 pages. |
Certificate of Examination received for Australian Patent Application No. 2020100720, dated Nov. 11, 2020, 2 pages. |
Certificate of Examination received for Australian Patent Application No. 2020101043, dated Dec. 22, 2020, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/599,433, dated Aug. 13, 2021, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/599,433, dated Oct. 14, 2021, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/663,062, dated Apr. 14, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 16/825,879, dated Jul. 23, 2021, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/566,094, dated Jan. 5, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/566,094, dated Jan. 23, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/740,032, dated Nov. 3, 2022, 6 pages. |
Decision of Refusal received for Japanese Patent Application No. 2018-243463, dated Feb. 25, 2019, 8 pages (5 pages of English Translation and 3 pages of Official Copy). |
Decision to Grant received for Danish Patent Application No. PA201970593, dated Sep. 7, 2021, 2 pages. |
Drunk Beauty Flower Digital Technology, “iPhone Xs Max Camera Tips, Tricks, Features and Complete Tutorial”, Available online at: https://www.ixigua.com/6606874981844386308?wid_try=1, Oct. 2, 2018, 2 pages (Official Copy Only) (See Communication under 37 CFR § 1.98(a) (3)). |
European Search Report received for European Patent Application No. 22184844.3, dated Nov. 4, 2022, 4 pages. |
European Search Report received for European Patent Application No. 22184853.4, dated Nov. 14, 2022, 5 pages. |
Extended Search Report received for European Patent Application No. 17809168.2, dated Jun. 28, 2018, 9 pages. |
Final Office Action received for U.S. Appl. No. 16/144,629, dated Sep. 18, 2019, 22 pages. |
Final Office Action received for U.S. Appl. No. 16/259,771, dated Nov. 18, 2019, 13 pages. |
Final Office Action received for U.S. Appl. No. 17/031,671, dated Nov. 15, 2022, 27 pages. |
Final Office Action received for U.S. Appl. No. 17/356,322, dated Nov. 29, 2022, 19 pages. |
Final Office Action received for U.S. Appl. No. 17/479,897, dated Jan. 10, 2023, 15 pages. |
Here are Warez Files: Eve Online Character Creator, Online Available at: <http://theherearewarezfiles.blogspot.com/2014/03/eve-online-character-creator-download.html>, Mar. 3, 2014, 7 pages. |
Intention to Grant received for Danish Patent Application No. PA201670627, dated Jun. 11, 2018, 2 pages. |
Intention to Grant received for European Patent Application No. 18704732.9, dated Dec. 6, 2022, 10 pages. |
Intention to Grant received for European Patent Application No. 19181242.9, dated Nov. 17, 2022, 9 pages. |
Intention to Grant received for European Patent Application No. 20168009.7, dated Oct. 31, 2022, 9 pages. |
Intention to Grant received for European Patent Application No. 20206197.4, dated Dec. 15, 2022, 10 pages. |
Intention to Grant received for European Patent Application No. 20210373.5, dated Jan. 10, 2023, 12 pages. |
Intention to Grant received for European Patent Application No. 21733324.4, dated Jan. 9, 2023, 9 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2011/031616, dated Oct. 18, 2012, 6 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2017/035321, dated Dec. 27, 2018, 11 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2021/031096, dated Nov. 24, 2022, 11 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2021/031212, dated Nov. 24, 2022, 16 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2021/034304, dated Dec. 15, 2022, 19 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2017/049795, dated Dec. 27, 2017, 26 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2011/031616, dated Aug. 30, 2011, 8 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2017/035321, dated Oct. 6, 2017, 15 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2018/015591, dated Jun. 14, 2018, 14 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2022/030589, dated Sep. 5, 2022, 26 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2022/030704, dated Nov. 9, 2022, 19 pages. |
Invitation to Pay Additional Fees and Partial International Search Report received for PCT Patent Application No. PCT/US2022/030704, dated Sep. 15, 2022, 12 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/528,941, dated Jan. 30, 2020, 14 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/599,433, dated Jan. 28, 2021, 16 pages. |
Non-Final Office Action received for U.S. Appl. No. 16/663,062, dated Oct. 28, 2020, 14 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/041,412, dated Dec. 5, 2022, 13 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/461,014, dated Dec. 7, 2022, 22 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/510,168, dated Dec. 6, 2022, 11 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/944,765, dated Jan. 18, 2023, 9 pages. |
Notice of Acceptance received for Australian Patent Application No. 2020201969, dated Mar. 26, 2021, 3 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021202254, dated Nov. 16, 2022, 3 pages. |
Notice of Allowance received for Brazilian Patent Application No. BR122018076550-0, dated Jan. 3, 2022, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Notice of Allowance received for Chinese Patent Application No. 201910692978.1, dated Feb. 4, 2021, 6 pages (3 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Chinese Patent Application No. 202110820692.4, dated Nov. 16, 2022, 2 pages (1 page of English Translation and 1 page of Official Copy). |
Notice of Allowance received for Chinese Patent Application No. 202111323807.5, dated Jan. 10, 2023, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2020-0123852, dated Nov. 28, 2022, 7 pages (2 pages of English Translation and 5 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2020-0123887, dated Nov. 28, 2022, 7 pages (2 pages of English Translation and 5 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2022-7010505, dated Dec. 26, 2022, 7 pages (2 pages of English Translation and 5 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2022-7023077, dated Nov. 1, 2022, 8 pages (2 pages of English Translation and 6 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 16/599,433, dated May 14, 2021, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 16/599,433, dated Oct. 4, 2021, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 16/663,062, dated Jul. 13, 2021, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 17/483,684, dated Oct. 24, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/566,094, dated Nov. 22, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/732,191, dated Nov. 9, 2022, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 17/740,032, dated Oct. 13, 2022, 11 pages. |
Office Action received for Australian Patent Application No. 2021290292, dated Nov. 24, 2022, 2 pages. |
Office Action received for Chinese Patent Application No. 201780058426.4, dated Dec. 2, 2022, 11 pages (5 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201811446867.4, dated May 6, 2020, 10 pages (5 pages of English Translation and 5 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201910691872.X, dated Jun. 23, 2021, 10 pages (5 pages of English Translation and 5 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201910691872.X, dated Mar. 24, 2021, 19 pages (9 pages of English Translation and 10 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201910692978.1, dated Apr. 3, 2020, 19 pages (8 pages of English Translation and 11 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201911199054.4, dated Jul. 3, 2020, 15 pages (9 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202110766668.7, dated Sep. 15, 2022, 18 pages (9 pages of English Translation and 9 pages of Official Copy). |
Office Action received for Danish Patent Application No. PA201670755, dated Oct. 20, 2017, 4 pages. |
Office Action received for European Patent Application No. 19181242.9, dated Dec. 6, 2019, 9 pages. |
Office Action received for European Patent Application No. 19769316.1, dated Jan. 12, 2023, 10 pages. |
Office Action received for European Patent Application No. 22184844.3, dated Nov. 16, 2022, 7 pages. |
Office Action received for European Patent Application No. 22184853.4, dated Nov. 25, 2022, 7 pages. |
Office Action received for Indian Patent Application No. 202215010325, dated Oct. 10, 2022, 7 pages. |
Office Action received for Japanese Patent Application No. 2021-153573, dated Oct. 17, 2022, 4 pages (2 pages of English Translation and 2 pages of Official Copy). |
Office Action received for Japanese Patent Application No. 2022-027861, dated Nov. 21, 2022, 4 pages (2 pages of English Translation and 2 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2020-0123857, dated Dec. 16, 2022, 8 pages (4 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2020-7021870, dated Nov. 11, 2020, 11 pages (5 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2021-7006145, dated Oct. 12, 2022, 14 pages (6 pages of English Translation and 8 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2022-7003364, dated Dec. 26, 2022, 8 pages (3 pages of English Translation and 5 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2022-7009437, dated Nov. 30, 2022, 6 pages (2 pages of English Translation and 4 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2022-7043663, dated Jan. 6, 2023, 12 pages (5 pages of English Translation and 7 pages of Official Copy). |
Pavlakos et al., “Expressive Body Capture: 3D Hands, Face, and Body from a Single Image”, In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), online available at: https://arxiv.org/abs/1904.05866, 2019, pp. 10975-10985. |
Search Report and Opinion received for Danish Patent Application No. PA201970603, dated Nov. 15, 2019, 9 pages. |
Sony Xperia XZ3 Camera Review—The Colors, Duke, The Colors!, Android Headlines—Android News & Tech News, Available online at <https://www.youtube.com/watch?v=mwpYXzWVOgw>, See especially 1:02-1:27, 2:28-2:30, Nov. 3, 2018, 3 pages. |
Supplementary European Search Report received for European Patent Application No. 18183054.8, dated Oct. 11, 2018, 4 pages. |
Zollhöfer et al., “State of the Art on Monocular 3D Face Reconstruction, Tracking, and Applications”, In Computer Graphics Forum, May 2018 (vol. 37, No. 2), online available at: https://studios.disneyresearch.com/wp-content/uploads/2019/03/State-of-the-Art-on-Monocular-3D-Face-Reconstruction-Tracking-and-Applications-1.pdf, 2018, 28 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/041,412, dated Jan. 31, 2023, 7 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/566,094, dated Feb. 8, 2023, 2 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/721,039, dated Feb. 2, 2023, 65 pages. |
Notice of Acceptance received for Australian Patent Application No. 2021290292, dated Jan. 23, 2023, 3 pages. |
Notice of Allowance received for U.S. Appl. No. 17/356,322, dated Feb. 2, 2023, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 17/370,505, dated Feb. 2, 2023, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 17/740,032, dated Feb. 1, 2023, 9 pages. |
Office Action received for Chinese Patent Application No. 2022100630/0.6, dated Jan. 5, 2023, 12 pages (6 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2020-0124139, dated Jan. 17, 2023, 10 pages (5 pages of English Translation and 5 pages of Official Copy). |
Result of Consultation received for European Patent Application No. 22184844.3, dated Feb. 1, 2023, 3 pages. |
Office Action received for Indian Patent Application No. 202015008746, dated Mar. 6, 2023, 7 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/721,039, dated Mar. 10, 2023, 3 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/461,014, dated Feb. 21, 2023, 3 pages. |
Board Opinion received for Chinese Patent Application No. 201811446867.4, dated Feb. 14, 2023, 11 pages (4 pages of English Translation and 7 pages of Official Copy). |
Corrected Notice of Allowance received for U.S. Appl. No. 17/041,412, dated Mar. 23, 2023, 7 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/041,412, dated Mar. 31, 2023, 6 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/356,322, dated Feb. 15, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/356,322, dated Mar. 8, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/370,505, dated Apr. 4, 2023, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/370,505, dated Mar. 8, 2023, 5 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/510,168, dated Mar. 16, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/510,168, dated Mar. 29, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/566,094, dated Mar. 7, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/740,032, dated Feb. 15, 2023, 6 pages. |
Decision to Grant received for European Patent Application No. 19181242.9, dated Mar. 23, 2023, 3 pages. |
Final Office Action received for U.S. Appl. No. 17/093,408, dated Mar. 2, 2023, 51 pages. |
Final Office Action received for U.S. Appl. No. 17/461,014, dated Apr. 6, 2023, 24 pages. |
Intention to Grant received for European Patent Application No. 20168009.7, dated Feb. 28, 2023, 10 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2021/046877, dated Apr. 6, 2023, 12 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,671, dated Mar. 17, 2023, 34 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/031,765, dated Mar. 28, 2023, 31 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/542,947, dated Mar. 2, 2023, 59 pages. |
Notice of Allowance received for Japanese Patent Application No. 2021-153573, dated Feb. 17, 2023, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Japanese Patent Application No. 2022-027861, dated Feb. 13, 2023, 3 pages (1 page of English Translation and 2 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2020-0123852, dated Mar. 9, 2023, 7 pages (2 pages of English Translation and 5 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2020-0123857, dated Feb. 21, 2023, 6 pages (1 page of English Translation and 5 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2021-7006145, dated Mar. 6, 2023, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 17/041,412, dated Mar. 15, 2023, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 17/510,168, dated Feb. 13, 2023, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/566,094, dated Feb. 23, 2023, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 17/732,191, dated Feb. 27, 2023, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 17/941,962, dated Mar. 10, 2023, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 17/944,765, dated Apr. 5, 2023, 9 pages. |
Office Action received for Australian Patent Application No. 2022200965, dated Feb. 14, 2023, 4 pages. |
Office Action received for Australian Patent Application No. 2022218463, dated Mar. 17, 2023, 2 pages. |
Office Action received for Chinese Patent Application No. 202110766668.7, dated Jan. 20, 2023, 11 pages (6 pages of English Translation and 5 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 202210849242.2, dated Jan. 20, 2023, 12 pages (6 pages of English Translation and 6 pages of Official Copy). |
Office Action received for European Patent Application No. 20704768.9, dated Mar. 24, 2023, 8 pages. |
Office Action received for Indian Patent Application No. 202015008747, dated Mar. 15, 2023, 10 pages. |
Office Action received for Indian Patent Application No. 202117009020, dated Feb. 6, 2023, 7 pages. |
Office Action received for Indian Patent Application No. 202215026505, dated Feb. 8, 2023, 9 pages. |
Office Action received for Japanese Patent Application No. 2021-187533, dated Feb. 6, 2023, 7 pages (4 pages of English Translation and 3 pages of Official Copy). |
Pre-Appeal Review Report received for Japanese Patent Application No. 2020-159823, dated Jan. 12, 2023, 4 pages (2 pages of English Translation and 2 pages of Official Copy). |
Droid Life, “20+ Galaxy S9, S9+ Tips and Tricks”, Available Online at: https://www.youtube.com/watch?v=sso0mYTfV6w, Mar. 22, 2018, pp. 1-33. |
Gauging Gadgets, “How to Customize Watch Faces—Garmin Venu Tutorial”, Online Available at: https://www.youtube.com/watch?v=dxajKKulaP0, Jan. 7, 2020, 14 pages. |
Takahashi et al., “Neural network modeling of altered facial expression recognition in autism spectrum disorders based on predictive processing framework”, Scientific Reports, online available at: https://www.nature.com/articles/s41598-021-94067-x, Jul. 26, 2021, 14 pages. |
Applicant-Initiated Interview Summary received for U.S. Appl. No. 17/031,765, dated Apr. 17, 2023, 4 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/041,412, dated Apr. 12, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/941,962, dated Apr. 14, 2023, 6 pages. |
Hearing Notice received for Indian Patent Application No. 201817024430, mailed on Apr. 6, 2023, 2 pages. |
Office Action received for Chinese Patent Application No. 202211072958.2, dated Apr. 5, 2023, 11 pages (6 pages of English Translation and 5 pages of Official Copy). |
Office Action received for Indian Patent Application No. 202215026045, dated Mar. 31, 2023, 8 pages. |
Decision to Grant received for Japanese Patent Application No. 2021-166686, dated Apr. 20, 2023, 2 pages (1 page of English Translation and 1 page of Official Copy). |
Number | Date | Country |
---|---|---|
20220070385 A1 | Mar 2022 | US |
Number | Date | Country |
---|---|---|
62679934 | Jun 2018 | US |
62668227 | May 2018 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 16599433 | Oct 2019 | US |
Child | 17525664 | | US |
Parent | 16143097 | Sep 2018 | US |
Child | 16599433 | | US |