This disclosure relates generally to creating media presentations for display on mobile devices.
A mobile device can be held by a user in portrait or landscape orientation. The mobile device includes sensors that can detect the orientation of the mobile device. Upon detection of the device orientation, an application running on the mobile device can rotate and resize a graphical user interface (GUI) to fully use a display aspect ratio. Many mobile devices include a media player application that plays media presentations such as, for example, a video or slideshow. Some applications can automatically create a media presentation for the user.
A media presentation can be a composition of many different media assets and special effects (FX), including video clips, digital photographs, graphics, animation and transitions. Media assets are selected by authors or automated processes to create a media presentation that is aesthetically pleasing to users. A media presentation that is optimized for a particular device orientation, however, may result in a poor viewing experience for a user who is viewing the presentation in a different device orientation.
The disclosed embodiments are directed to dynamic aspect media presentations.
In an embodiment, a method comprises: obtaining, by a mobile device, a first set of instructions for playing a media presentation in a first format, the media presentation including a composition of media assets; playing, by a media player of the mobile device, the media presentation in a media player view in the first format according to the first set of instructions; obtaining, by the mobile device, a switching event trigger signal; responsive to the switching event trigger signal: obtaining, by the mobile device, a second set of instructions for playing the media presentation in a second format that is different than the first format; and playing, by the media player, the media presentation in the media player view in the second format according to the second set of instructions.
In an embodiment, a system comprises: one or more processors; memory coupled to the one or more processors and configured to store instructions, which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: obtaining a first set of instructions for playing a media presentation in a first format, the media presentation including a composition of media assets; playing, by a media player, the media presentation in a media player view in the first format according to the first set of instructions; obtaining a switching event trigger signal; responsive to the switching event trigger signal: obtaining a second set of instructions for playing the media presentation in a second format that is different than the first format; and playing, by the media player, the media presentation in the media player view in the second format according to the second set of instructions.
Other embodiments can include an apparatus, computing device and non-transitory, computer-readable storage medium.
Particular embodiments disclosed herein may provide one or more of the following advantages. Two or more dynamic aspect media presentations are created for display on a mobile device. The visual aesthetics of each presentation are optimized for a particular orientation view, such as landscape or portrait. During playback by a media player executing on the mobile device, one of the media presentations is selected for display based on a current orientation of the mobile device. Each time the device orientation changes, the media presentation also changes. In an embodiment, an animated GUI transition is used to hide the sometimes visually jarring transition between media presentations from the user. The result is an improved user experience where the user can view an aesthetically pleasing media presentation regardless of the orientation of their mobile device.
The details of one or more implementations of the subject matter are set forth in the accompanying drawings and the description below. Other features, aspects and advantages of the subject matter will become apparent from the description, the drawings and the claims.
At the top of
During the playing of segments 102-104, software switch (SW) 105 selects one of instruction sets 106, 107 to be executed. The switch can occur while a set of instructions is executed, or between instructions, and with or without a media asset change. For example, at the first time t0, the mobile device is in landscape orientation and instruction set 106a is selected by SW 105 to be executed by the media player. Instruction set 106a includes compositing instructions (e.g., a composition graph) for compositing media assets for segment 102, which can include one or more of frame instructions 110a, animation instructions 111a and other special effects (FX) instructions 112a. Frame instructions 110a specify a portion or portions of one or more media assets (e.g., photographs, video) to be composited in segment 102. Animation instructions 111a specify frames in segment 102 to be animated (if at all) and other FX instructions 112a specify other FX to apply to segment 102 (if any).
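By way of illustration, an instruction set such as 106a might be modeled as a small data structure grouping frame, animation and other FX instructions. The following Swift sketch is illustrative only; the type and property names are assumptions and do not come from the disclosure.

```swift
import Foundation
import CoreGraphics

// Illustrative model of one instruction set (e.g., 106a); all names are hypothetical.
struct FrameInstruction {
    let assetIdentifier: String   // which media asset to composite into the segment
    let cropRect: CGRect          // portion of the asset to show (e.g., a face crop)
}

struct AnimationInstruction {
    let startRect: CGRect         // visible region at the start of the segment
    let endRect: CGRect           // visible region at the end (pan/zoom target)
    let duration: TimeInterval    // seconds
}

enum EffectInstruction {
    case crossDissolve(duration: TimeInterval)
    case colorFilter(name: String)
}

// One instruction set per segment and per format (landscape or portrait).
struct InstructionSet {
    let frames: [FrameInstruction]
    let animations: [AnimationInstruction]
    let effects: [EffectInstruction]
}
```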
Instructions 110a, 111a, 112a are examples of instructions that could be included in an instruction set. In practice, however, each instruction set can include more or fewer instructions. Note that in
Continuing with the example, at the second time t1, and while instruction set 106a is executing, the user rotates the mobile device from landscape orientation to portrait orientation and instruction set 107b is selected by SW 105 to be executed by the media player, including executing one or more of frame instructions 110d, animation instructions 111d and other FX instructions 112d. Instruction set 107b includes a frame instruction to generate face crop 109 from a media asset used in segment 102. For example, during the display of segment 102 and the execution of instruction set 106a, the user rotates the mobile device to portrait orientation, causing frame instruction 110d to be executed, resulting in face crop 109 being displayed in portrait view in segment 103. Face crop 109 was selected as a media asset because it has an aesthetically pleasing appearance in portrait view.
At the third time t2, and while instruction set 107b is executing, the user rotates the mobile device from portrait orientation back to landscape orientation and instruction set 106c is selected by SW 105 to be executed by the media player, including executing one or more of frame instructions 110e, animation instructions 111e and other FX instructions 112e.
Segments, media assets and instruction sets can be stored on the mobile device and/or downloaded from a network-based media server. As described in more detail below, the media assets (e.g., photographs, video clips) used in segments can be edited and animated in a manner to ensure an aesthetically pleasing display in landscape or portrait views. For example, animation can differ between views to best highlight the chosen presentation for each view and content.
In an embodiment, instruction sets are built for each orientation and SW 105 switches between the instruction sets as described above. In other embodiments, the instruction sets are generated on demand. In general, system 100 composites a media presentation for a plurality of display sizes or aspect ratios with dynamic instantaneous switching, where one of the switch triggers is rotation of the mobile device. Another switch trigger could be a change in display size or aspect ratio made by a user or application. For example, the user or application may choose a different display size or aspect ratio for exporting a media presentation from the mobile device. In an embodiment, the system may choose alternate media assets between a plurality of media presentations, where the system has determined that the alternate media assets are similar to the media assets in the presentations but more appropriate for the specific presentation. For example, in the case where the user has shot both a landscape and portrait version of the same object, the system would choose the appropriate version between the instruction sets.
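A minimal sketch of this switching behavior follows, assuming one instruction set has already been built per orientation for each segment. It reuses the illustrative InstructionSet type from the earlier sketch; the remaining names are hypothetical.

```swift
import UIKit

// SW 105 analogue: pick the instruction set matching the current device orientation.
struct SegmentInstructionSets {
    let landscape: InstructionSet   // e.g., 106a
    let portrait: InstructionSet    // e.g., 107b
}

final class InstructionSwitch {
    var segments: [SegmentInstructionSets] = []

    // Select the instruction set to execute for the current segment, based on
    // the current device orientation (one possible switch trigger).
    func instructionSet(forSegment index: Int) -> InstructionSet {
        let segment = segments[index]
        return UIDevice.current.orientation.isLandscape ? segment.landscape
                                                        : segment.portrait
    }
}
```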
In an embodiment, media player view 201 is created by a media player for displaying media. Upon detecting the orientation of mobile device 100, an orientation signal is generated (e.g., by the mobile device operating system) that indicates the orientation of mobile device 100. In the example shown in
During the GUI transition, the composite of media player view 201, snapshot 202 and blur layer 203 rotates 90 degrees and scales from a portrait display aspect ratio to a landscape display aspect ratio. Snapshot 202 and blur layer 203 are used together to hide the visually jarring artifacts that may occur when transitioning between media presentations. During the rotation and scaling, the blur radius of the blur layer increases linearly (becomes more blurred) and the opacity of snapshot 202 and blur layer 203 decreases linearly (becomes more transparent).
In an embodiment, alternatives to the rotation transition described above can be used, including but not limited to: removing the blur layer, animating a zoom to a crop while rotating and using a second player instead of a snapshot.
The creation of a dynamic, aspect-aware media presentation includes analyzing various media assets in a collection of media assets using various algorithms, editing the media assets based on results of the analysis and other metadata (e.g., user-defined metadata tags), compositing the edited media assets into frames using layout rules and optionally applying transitions, animation and other FX to the frames.
In an embodiment, an automated process analyzes the content and metadata of a corpus of media assets to create one or more collections of media assets from which media presentations (e.g., video presentations) can be created. Various methods can be used, alone or in combination, to match media assets in the corpus (e.g., videos, photographs) to a collection, including but not limited to: location-bounded collections (e.g., media assets captured within a geographic region defined by a radial distance or geofence), time-bounded collections (e.g., media assets captured within a particular time range and/or date range), time-bounded and location-bounded collections (e.g., media assets associated with holidays and events at the family home), content-defined collections (e.g., media assets containing people smiling), user-metadata based collections (e.g., media assets having user-defined metadata tags) and people-based collections (e.g., media assets that include the face of a particular individual or individuals). In an embodiment, the quality of media assets is considered when forming collections, such as whether the assets are blurry or noisy. In an embodiment, the quantity of media assets is considered when forming collections.
In an embodiment, a collection of media assets can be created by matching attributes of a template collection, such as a theme (e.g., holidays, smiling people), to content and/or metadata of media assets in a corpus of media assets. For example, if a template collection has a theme of “Christmas”, the corpus of media assets can be searched to find media assets that are tagged with metadata and/or include content that matches the “Christmas” theme. The matching media assets can be included in the template collection and used to generate orientation-based media presentations. Additional techniques for grouping image assets into templates and generating media presentations are described in co-pending U.S. Provisional Patent Application No. 62/235,550, for “Novel Media Compositing Application,” which provisional patent application is hereby incorporated by reference in its entirety.
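As a rough illustration of this attribute matching, the following Swift sketch filters a corpus against a template's tags and date range. MediaAsset, TemplateCollection and their fields are hypothetical stand-ins for the asset metadata described above, not structures from the disclosure.

```swift
import Foundation

// Hypothetical metadata for one asset in the corpus.
struct MediaAsset {
    let identifier: String
    let captureDate: Date
    let tags: Set<String>          // user-defined or content-derived metadata tags
}

// Hypothetical template collection defined by matching attributes.
struct TemplateCollection {
    let theme: String              // e.g., "Christmas"
    let requiredTags: Set<String>
    let dateRange: ClosedRange<Date>?
}

// Return identifiers of assets whose metadata matches the template's attributes.
func matchAssets(in corpus: [MediaAsset], to template: TemplateCollection) -> [String] {
    corpus.filter { asset in
        let tagsMatch = !asset.tags.isDisjoint(with: template.requiredTags)
        let dateMatches = template.dateRange?.contains(asset.captureDate) ?? true
        return tagsMatch && dateMatches
    }
    .map(\.identifier)
}
```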
Some media assets (e.g., photographs) can be captured in landscape and portrait orientations. Contained in each media asset are objects of interest, which can be identified using various algorithms, such as saliency detection, face recognition and/or user-defined metadata tags. Saliency detection is a known image analysis technique that attempts to determine where the average viewer will look in a photograph to identify an object of interest. Face recognition detects faces in photographs or video frames using a feature vector describing geometric characteristics of a human face. The media assets can be edited to fit into a particular orientation view to ensure that the objects of interest are the primary focus in the view and not lost off-screen. For example, if the title of a media presentation is “Natalie—Christmas 2017”, media assets that include Natalie's face can be used to create both portrait and landscape media presentations. In some cases, a media asset that was captured in portrait orientation is used in a landscape media presentation and vice-versa. Typically, a portrait media asset cannot be scaled to fit the landscape aspect ratio without changing the aspect ratio of the media asset, which may result in a distorted image. In other cases, a media asset that was captured in landscape orientation may not fit in a portrait aspect ratio without editing (e.g., image cropping). The techniques described below allow an automated process to choose any media asset in a collection, regardless of how the media asset was captured, and include it in either a portrait or landscape view without compromising the visual aesthetics of the overall media presentation.
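One way such editing could be performed is sketched below: given a region of interest (for example, a detected face or salient region), compute the smallest crop at the target aspect ratio that keeps the region fully on screen. The function name and behavior are assumptions for illustration, not the disclosed algorithm.

```swift
import CoreGraphics

// Fit a region of interest into a target aspect ratio (width / height) without
// distorting the asset; illustrative only.
func cropRect(around regionOfInterest: CGRect,
              in assetSize: CGSize,
              targetAspect: CGFloat) -> CGRect {
    // Smallest rect with the target aspect that contains the region of interest,
    // clamped so it never exceeds the asset itself.
    var width = max(regionOfInterest.width, regionOfInterest.height * targetAspect)
    width = min(width, assetSize.width)
    var height = width / targetAspect
    if height > assetSize.height {
        height = assetSize.height
        width = height * targetAspect
    }

    // Center the crop on the region of interest, then clamp it to the asset bounds.
    var origin = CGPoint(x: regionOfInterest.midX - width / 2,
                         y: regionOfInterest.midY - height / 2)
    origin.x = min(max(0, origin.x), assetSize.width - width)
    origin.y = min(max(0, origin.y), assetSize.height - height)
    return CGRect(origin: origin, size: CGSize(width: width, height: height))
}
```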
In an embodiment, a processed union of important spatial regions in a media asset is used to determine the positioning of the media asset in multiple orientations. In another embodiment, a processed union of important regions of interest in N media assets is used to determine the potential for combining those N media assets into a composition.
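A brief sketch of the union idea, with illustrative helper names: combine the regions of interest into a single rect, then test whether that rect can remain fully visible in a candidate composition frame.

```swift
import CoreGraphics

// Union of the important regions across one or more media assets; names are illustrative.
func unionOfInterest(_ regions: [CGRect]) -> CGRect {
    regions.reduce(CGRect.null) { $0.union($1) }
}

// Whether the union of important regions can stay fully visible in a frame of the given size.
func fits(_ union: CGRect, inFrameOfSize frameSize: CGSize) -> Bool {
    union.width <= frameSize.width && union.height <= frameSize.height
}
```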
In an embodiment, in addition to formatting for display aspect ratio, a media presentation can be formatted to avoid other on-screen elements like physical device characteristics, titles, etc.
Process 700 includes obtaining a first set of instructions for playing a media presentation in a first format (701), playing, by a media player, the media presentation in a media player view in the first format according to the first set of instructions (702) and obtaining a switching event trigger signal (703). Responsive to the switching event trigger signal, process 700 continues by obtaining a second set of instructions for playing the media presentation in a second format that is different than the first format (704), and playing, by the media player, the media presentation in the media player view in the second format according to the second set of instructions (705). In an embodiment, the switching event trigger signal can be generated in response to sensor data from, for example, an accelerometer, gyro sensor or magnetometer that indicates the orientation of the mobile device. In an embodiment, the switching event trigger signal can be generated by an application or operating system that indicates a size or aspect ratio of a device screen or media player view. For example, the user or an application may export the media presentation to a device with a different screen size or aspect ratio, such as a television screen or computer monitor. When the export is requested, a switching event trigger signal can be generated to trigger the second set of instructions to be selected for displaying the media presentation. The first and second set of instructions can include instructions for framing (e.g., cropping), animating (e.g., panning and zooming) and other special effects. Each instruction set is designed to generate a view for a particular format (e.g., portrait or landscape view) that is aesthetically pleasing to a viewer.
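A hedged Swift sketch of process 700 follows, using a device rotation notification as the switching event trigger. MediaPlayerController and its play(using:) method are hypothetical placeholders for the media player, and InstructionSet is the illustrative type from the earlier sketch.

```swift
import UIKit

// Hypothetical stand-in for the media player described above.
final class MediaPlayerController {
    func play(using instructions: InstructionSet) { /* composite and render the segment */ }
}

final class DynamicAspectController {
    private let player = MediaPlayerController()
    private var observer: NSObjectProtocol?
    var landscapeInstructions: InstructionSet?   // first format (701)
    var portraitInstructions: InstructionSet?    // second format (704)

    func start() {
        // (701)-(702): obtain and play the first set of instructions.
        play(for: UIDevice.current.orientation)

        // (703): a device rotation is one possible switching event trigger.
        UIDevice.current.beginGeneratingDeviceOrientationNotifications()
        observer = NotificationCenter.default.addObserver(
            forName: UIDevice.orientationDidChangeNotification,
            object: nil, queue: .main) { [weak self] _ in
            // (704)-(705): obtain and play the instruction set for the new format.
            self?.play(for: UIDevice.current.orientation)
        }
    }

    private func play(for orientation: UIDeviceOrientation) {
        let instructions = orientation.isLandscape ? landscapeInstructions
                                                   : portraitInstructions
        if let instructions = instructions {
            player.play(using: instructions)
        }
    }
}
```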
Process 800 can begin by generating a snapshot of a media player view (802), compositing the snapshot on the media player view with a first opacity value (804) and compositing a blur layer on the snapshot with a first blur radius and a first blur opacity value (806). The snapshot is a still image of the current content displayed by the media player view prior to initiation of the GUI transition. The blur layer can be a Gaussian blur layer with an adjustable blur radius and adjustable opacity.
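These first steps might look roughly like the following Swift sketch. The view names are illustrative, and UIVisualEffectView is used here as a stand-in for the adjustable-radius Gaussian blur layer described in the text.

```swift
import UIKit

// Steps 802-806 (illustrative): snapshot the media player view and stack a blur layer on top.
func beginTransition(on playerView: UIView) -> (snapshot: UIView, blur: UIVisualEffectView) {
    // (802) Snapshot of the content currently displayed by the media player view.
    let snapshot = playerView.snapshotView(afterScreenUpdates: false)
        ?? UIView(frame: playerView.bounds)
    snapshot.frame = playerView.bounds

    // (804) Composite the snapshot on the media player view with a first opacity value.
    snapshot.alpha = 1.0
    playerView.addSubview(snapshot)

    // (806) Composite a blur layer on the snapshot with a first blur and opacity value.
    let blur = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
    blur.frame = playerView.bounds
    blur.alpha = 1.0
    playerView.addSubview(blur)

    return (snapshot, blur)
}
```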
Process 800 continues by rotating and scaling the media player view, snapshot and blur layer (808), and during the rotation and scaling, adjusting the first snapshot opacity value to a second snapshot opacity value, and adjusting the first blur radius and the first blur opacity value to a second blur radius and a second blur opacity value (810). For example, during rotation and scaling, the blur radius of the blur layer can be increased linearly while the opacity of the snapshot and blur layer is simultaneously decreased linearly.
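Continuing the sketch above, the rotation, scaling and fades could be driven by a single animation block; the duration, curve and transform math below are illustrative assumptions.

```swift
import UIKit

// Steps 808-810 (illustrative): rotate and scale the composite while the snapshot and
// blur layer fade out, hiding the switch between media presentations.
func animateTransition(playerView: UIView,
                       snapshot: UIView,
                       blur: UIVisualEffectView,
                       targetBounds: CGRect) {
    UIView.animate(withDuration: 0.35, animations: {
        // (808) Rotate 90 degrees and scale toward the new display aspect ratio.
        let rotation = CGAffineTransform(rotationAngle: .pi / 2)
        let scaleX = targetBounds.width / playerView.bounds.height
        let scaleY = targetBounds.height / playerView.bounds.width
        playerView.transform = rotation.scaledBy(x: scaleX, y: scaleY)

        // (810) Adjust the snapshot and blur opacities toward their second (transparent) values.
        snapshot.alpha = 0.0
        blur.alpha = 0.0
    }, completion: { _ in
        // Remove the transition layers and leave the player view in the new format.
        snapshot.removeFromSuperview()
        blur.removeFromSuperview()
        playerView.transform = .identity
        playerView.bounds = targetBounds
    })
}
```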
Process 900 can begin by obtaining a media asset collection and associated metadata (902) and editing and compositing media assets in the collection into frames of dynamic aspect media presentations (906). In an embodiment, an automated process analyzes the content and metadata of media assets to define one or more collections of media assets from which media presentations can be created. Various methods can be used, alone or in combination, to match media assets to a collection, including but not limited to: location-bounded collections, time-bounded collections, time-bounded and location-bounded collections, content-defined collections, user-metadata based collections and people-based collections. Editing can include cropping objects of interest from image assets, where the objects of interest are identified through image analysis (e.g., saliency detection, face detection) and/or the metadata. In an embodiment, media assets can be composited in a frame in a manner that ensures that objects of interest are displayed prominently in the frame (e.g., front and center) and avoids cutting off a portion of the media asset. Some examples of orientation-based layouts include placing N media assets side-by-side in a landscape view or sequentially in multiple portrait views, as sketched below.
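A minimal sketch of that side-by-side versus sequential layout rule; the frame geometry and function names are illustrative assumptions.

```swift
import CoreGraphics

// Landscape rule (illustrative): N assets placed side-by-side in a single frame.
func landscapeLayout(assetCount n: Int, frameSize: CGSize) -> [CGRect] {
    let slotWidth = frameSize.width / CGFloat(n)
    return (0..<n).map { i in
        CGRect(x: CGFloat(i) * slotWidth, y: 0, width: slotWidth, height: frameSize.height)
    }
}

// Portrait rule (illustrative): one full-frame view per asset, shown sequentially.
func portraitLayout(assetCount n: Int, frameSize: CGSize) -> [[CGRect]] {
    (0..<n).map { _ in [CGRect(origin: .zero, size: frameSize)] }
}
```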
Process 900 continues by optionally adding animation, transitions and other FX to each media presentation (908). In an embodiment, transitions (e.g., fade in/out, dissolve, iris) and animation (e.g., a panning and zooming effect) can be added to the frames to provide a Ken Burns effect. For example, panning and zooming can be added to portrait and landscape media presentations in a manner that maintains continuity and flow after transitioning. This can be accomplished by panning in the same direction at the same speed and starting the panning effect at the same location in corresponding frames of the orientation-based media presentations. Likewise, zooming can start at the same location in the corresponding frames and at the same zoom level or depth, and also zoom at the same zoom rate in each orientation-based media presentation.
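To illustrate this continuity, both presentations can derive the visible region at a given playback time from the same pan and zoom parameters. The struct and units below are assumptions for illustration only.

```swift
import Foundation
import CoreGraphics

// Shared Ken Burns parameters, used by both the portrait and landscape presentations
// so that a mid-segment switch preserves continuity; names and units are illustrative.
struct KenBurnsMove {
    let startCenter: CGPoint     // normalized (0-1) point in the media asset
    let panVelocity: CGVector    // normalized units per second, same in both formats
    let startZoom: CGFloat       // starting zoom level, same in both formats
    let zoomRate: CGFloat        // zoom change per second, same in both formats
}

// Visible center and zoom at playback time t; identical for both presentations.
func visibleCenterAndZoom(for move: KenBurnsMove, at t: TimeInterval) -> (CGPoint, CGFloat) {
    let center = CGPoint(x: move.startCenter.x + move.panVelocity.dx * CGFloat(t),
                         y: move.startCenter.y + move.panVelocity.dy * CGFloat(t))
    return (center, move.startZoom + move.zoomRate * CGFloat(t))
}
```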
In some embodiments, collection generator 1001 and layout generator 1002 perform an automated process that analyzes media assets (e.g., analyzes the content and/or metadata of the media assets) to define one or more collections and produces multiple dynamic aspect media presentations. In performing their respective operations, these modules in some embodiments use context identifier 1003 and scoring engine 1004. More specifically, to define the collections, collection generator 1001 uses one or more media asset collection templates (hereinafter also referred to as “templates”) in media collection template storage 1009 to associate each media asset stored in media asset storage 1008 with one or more template instances. In some embodiments, a template in media collection template storage 1009 is defined by reference to a set of media matching attributes. Collection generator 1001 compares a template's attribute set with the content and/or metadata of the media assets to identify media assets that match the template attributes. When a sufficient number of media assets match the attribute set of a template, system 1000 defines a template instance by reference to the matching media assets, and stores this template instance in media asset collection storage 1010. In some embodiments, a template instance includes a list of media asset identifiers that identify the media assets that matched the instance's template attribute set.
In some embodiments, collection generator 1001 defines multiple template instances for a template. For instance, the templates can include (1) location-bounded templates (e.g., videos and/or photos captured within a geographic region with a particular radius), (2) time-bounded templates (e.g., videos and/or photos captured within a particular time range and/or date range), (3) time-bounded and location-bounded templates (e.g., Christmas in Tahoe), (4) content-defined templates (e.g., videos and/or photos containing people smiling) and (5) user-defined metadata based templates (e.g., media assets from photo albums created by the user, media assets shared by a user with others, media assets having particular user-defined metadata tags). Collection generator 1001 stores the definition of the template instances that it generates in media asset collection storage 1010. In an embodiment, collection generator 1001 repeats its collection operation to update the template instance definitions in media asset collection storage 1010. For instance, collection generator 1001 can repeat its collection operation periodically (e.g., every hour, six hours, twelve hours, twenty-four hours). Conjunctively, or alternatively, collection generator 1001 performs its collection operation in response to an application or a user request.
In some embodiments, collection generator 1001 performs its collection operation each time a new media asset is stored, or a certain number of media assets are stored, in media asset storage 1008. For example, in some embodiments, system 1000 is implemented on a mobile device that captures a variety of media assets (e.g., still photos, burst-mode photos, video clips). Each time the mobile device captures a media asset (e.g., a photo, a video clip, etc.), collection generator 1001 tries to associate the captured media asset with one or more template instances.
Based on a template definition, layout generator 1002 generates user interface (UI) layouts that identify the defined template instances as media asset collections for which system 1000 can display orientation-based media presentations. At any given time, layout generator 1002 generates a UI layout that identifies a subset of the defined template instances that would be contextually relevant to a user of the device at that time, based on contextual attributes provided by context identifier 1003 (e.g., the current time, a future time, the current location of the device, a future predicted location of the device based on future events) and one or more scores computed by scoring engine 1004. In an embodiment, an aggregate score is calculated by scoring engine 1004 as a weighted sum of multiple scores. Some examples of scores included in the aggregate score include but are not limited to: a contextual score (e.g., based on contextual attributes), a quality score (e.g., based on the quality of media assets) and a quantity score (e.g., based on the number of media assets). Further details regarding the calculation of scores are described in co-pending U.S. Provisional Patent Application No. 62/235,550.
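A minimal sketch of such a weighted aggregate score follows; the weights and the individual score definitions are assumptions for illustration, not values from the disclosure or the referenced application.

```swift
import Foundation

// Component scores for one template instance; the semantics are illustrative.
struct TemplateInstanceScores {
    let contextual: Double   // relevance to current time/location
    let quality: Double      // e.g., penalizes blurry or noisy assets
    let quantity: Double     // e.g., rewards a sufficient number of assets
}

// Weighted sum of the component scores; the default weights are assumed, not disclosed.
func aggregateScore(_ s: TemplateInstanceScores,
                    weights: (contextual: Double, quality: Double, quantity: Double)
                        = (0.5, 0.3, 0.2)) -> Double {
    weights.contextual * s.contextual
        + weights.quality * s.quality
        + weights.quantity * s.quantity
}
```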
System 1000 can generate multiple dynamic aspect media presentations and store them in video storage 1012. Responsive to a switching event trigger signal of a mobile device, one of the presentations stored in video storage 1012, and corresponding audio in storage 1013, are rendered by rendering engine 1007. In an embodiment, a fully rendered file is generated for each device orientation for subsequent playback by a media player of a mobile device.
Sensors, devices and subsystems can be coupled to peripherals interface 1106 to facilitate multiple functionalities. For example, one or more motion sensors 1110, light sensor 1112 and proximity sensor 1114 can be coupled to peripherals interface 1106 to facilitate motion sensing (e.g., acceleration, rotation rates), lighting and proximity functions of the mobile device. Location processor 1115 can be connected to peripherals interface 1106 to provide geopositioning. In some implementations, location processor 1115 can be a GNSS receiver, such as a Global Positioning System (GPS) receiver chip. Electronic magnetometer 1116 (e.g., an integrated circuit chip) can also be connected to peripherals interface 1106 to provide data that can be used to determine the direction of magnetic North. Electronic magnetometer 1116 can provide data to an electronic compass application. Motion sensor(s) 1110 can include one or more accelerometers and/or gyros configured to determine change of speed and direction of movement of the mobile device. Barometer 1117 can be configured to measure atmospheric pressure around the mobile device.
Camera subsystem 1120 and an optical sensor 1122, e.g., a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as capturing photographs and recording video clips.
Communication functions can be facilitated through one or more wireless communication subsystems 1124, which can include radio frequency (RF) receivers and transmitters (or transceivers) and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 1124 can depend on the communication network(s) over which a mobile device is intended to operate. For example, architecture 1100 can include communication subsystems 1124 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi™ or Wi-Max™ network and a Bluetooth™ network. In particular, the wireless communication subsystems 1124 can include hosting protocols, such that the mobile device can be configured as a base station for other wireless devices.
Audio subsystem 1126 can be coupled to a speaker 1128 and a microphone 1130 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording and telephony functions. Audio subsystem 1126 can be configured to receive voice commands from the user.
I/O subsystem 1140 can include touch surface controller 1142 and/or other input controller(s) 1144. Touch surface controller 1142 can be coupled to a touch surface 1146 or pad. Touch surface 1146 and touch surface controller 1142 can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch surface 1146. Touch surface 1146 can include, for example, a touch screen. I/O subsystem 1140 can include a haptic engine or device for providing haptic feedback (e.g., vibration) in response to commands from a processor.
Other input controller(s) 1144 can be coupled to other input/control devices 1148, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of speaker 1128 and/or microphone 1130. Touch surface 1146 or other controllers 1144 (e.g., a button) can include, or be coupled to, fingerprint identification circuitry for use with a fingerprint authentication application to authenticate a user based on their fingerprint(s).
In one implementation, a pressing of the button for a first duration may disengage a lock of the touch surface 1146; and a pressing of the button for a second duration that is longer than the first duration may turn power to the mobile device on or off. The user may be able to customize a functionality of one or more of the buttons. The touch surface 1146 can, for example, also be used to implement virtual or soft buttons and/or a virtual touch keyboard.
In some implementations, the mobile device can present recorded audio and/or video files, such as MP3, AAC and MPEG files. In some implementations, the mobile device can include the functionality of an MP3 player. Other input/output and control devices can also be used.
Memory interface 1102 can be coupled to memory 1150. Memory 1150 can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices and/or flash memory (e.g., NAND, NOR). Memory 1150 can store operating system 1152, such as iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. Operating system 1152 may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, operating system 1152 can include a kernel (e.g., UNIX kernel).
Memory 1150 may also store communication instructions 1154 to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers, such as, for example, instructions for implementing a software stack for wired or wireless communications with other devices. Memory 1150 may include graphical user interface instructions 1156 to facilitate graphic user interface processing; sensor processing instructions 1158 to facilitate sensor-related processing and functions; phone instructions 1160 to facilitate phone-related processes and functions; electronic messaging instructions 1162 to facilitate electronic-messaging related processes and functions; web browsing instructions 1164 to facilitate web browsing-related processes and functions; media processing instructions 1166 to facilitate media processing-related processes and functions; GNSS/Location instructions 1168 to facilitate generic GNSS and location-related processes and functions; and camera instructions 1170 to facilitate camera-related processes and functions. Memory 1150 further includes media player instructions 1172, and orientation-based media presentation instructions 1174 for performing the features and processes described in reference to
Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. Memory 1150 can include additional instructions or fewer instructions. Furthermore, various functions of the mobile device may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language (e.g., SWIFT, Objective-C, C#, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, a browser-based web application, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor or a retina display device for displaying information to the user. The computer can have a touch surface input device (e.g., a touch screen) or a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The computer can have a voice input device for receiving voice commands from the user.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
A system of one or more computers can be configured to perform particular actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
One or more features or steps of the disclosed embodiments may be implemented using an Application Programming Interface (API). An API may define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation. The API may be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter may be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters may be implemented in any programming language. The programming language may define the vocabulary and calling convention that a programmer will employ to access functions supporting the API. In some implementations, an API call may report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc.
As described above, some aspects of the subject matter of this specification include gathering and use of data available from various sources to improve services a mobile device can provide to a user. The present disclosure contemplates that in some instances, this gathered data may identify a particular location or an address based on device usage. Such personal information data can include location-based data, addresses, subscriber account identifiers, or other identifying information.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
In the case of advertisement delivery services, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.