Many users have devices, such as mobile phones, tablets, glasses or other wearable devices, etc., capable of capturing multimedia content. In an example, a user may capture video of a college campus while walking to class. In another example, a user may capture a photo of friends and/or other bystanders at a restaurant. In this way, various types of multimedia content, depicting entities (e.g., a person, a business, military equipment or personnel, documents, a prototype car, a monument, etc.), may be captured. Such multimedia content may be published and/or shared with other users. In an example, a user may post a video to a social network. In another example, a user may share an image through an image sharing service. Accordingly, entities, such as bystanders, may inadvertently be captured within multimedia content and then undesirably exposed through multimedia content made available to other individuals (e.g., a bystander walking across the college campus may not want photos of herself posted and/or tagged through a social network). Moreover, such tagging may occur in an automated fashion, such as where a social network utilizes automatic tagging and/or recognition algorithms, such as facial recognition algorithms. In this manner, a bystander may be recognized and/or tagged, such as being at a particular location at a particular time, where the bystander would instead prefer to remain anonymous with her whereabouts remaining undisclosed.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Among other things, one or more systems and/or techniques for providing and/or applying privacy preferences for an entity are provided herein. In an example of providing a privacy preference, a multimedia device may capture multimedia content associated with an entity. In an example, the multimedia device may have created the multimedia content (e.g., the multimedia device, such as a mobile phone, may comprise a camera used to capture a photo depicting a group of people at a baseball game). In another example, the multimedia device may have captured the multimedia content by obtaining the multimedia content from a device that created the multimedia content or from another source (e.g., the photo may have been transferred to the multimedia device, such as from a laptop, using a memory device, a download process, email, etc.).
A privacy preference provider component may be configured to receive a query from the multimedia device (e.g., the privacy preference provider component may be hosted by a server remotely accessible to the multimedia device and/or a local instantiation of the privacy preference provider component may be hosted locally on the multimedia device). The query may specify an entity identifier of the entity associated with the multimedia content. In an example, the entity identifier may correspond to John who was recognized based upon photo recognition, voice recognition, and/or other types of recognition. In another example, the entity identifier may have been identified by the multimedia device based upon a signal broadcast from a device associated with the entity (e.g., a device, such as John's mobile phone, may have broadcast an RF signal, a Bluetooth signal, or other signal comprising the entity identifier).
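By way of illustration only, the following sketch shows one way such a query might be formed and transmitted from the multimedia device to a remotely hosted privacy preference provider component. The JSON-over-HTTP interface, field names, helper functions, and endpoint URL are assumptions made for this sketch and are not defined by this disclosure.

```python
# Hypothetical sketch of a query a multimedia device might send to a privacy
# preference provider component; names and fields are illustrative only.
import json
import urllib.request


def build_query(entity_id: str, content_id: str) -> dict:
    """Bundle the entity identifier (e.g., recognized or decoded from a
    broadcast signal) with a reference to the captured multimedia content."""
    return {"entity_id": entity_id, "content_id": content_id}


def send_query(query: dict, provider_url: str) -> dict:
    """POST the query to a remotely hosted provider and return its
    privacy-preference response (assumes a JSON-over-HTTP interface)."""
    request = urllib.request.Request(
        provider_url,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


# Example usage (the URL is a placeholder, not a real endpoint):
# preferences = send_query(build_query("john-1234", "photo-42"),
#                          "https://provider.example/privacy/query")
```

A local instantiation of the privacy preference provider component could be consulted in the same manner by invoking its lookup function directly rather than issuing a network request.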
The privacy preference provider component may be configured to identify an entity profile matching the entity identifier (e.g., John may have set up an entity profile specifying that users may publish pictures of John, but cannot tag John and cannot log activities of John, such as through social networks). Accordingly, the privacy preference provider component may provide a privacy preference, such as a no tagging privacy preference and a no logging privacy preference, to the multimedia device. In this way, the multimedia device may apply the privacy preference to the multimedia content.
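Continuing the illustration, a minimal sketch of the provider-side lookup is shown below; the profile structure and preference names (publishing allowed, tagging and logging denied) are hypothetical and merely mirror the example above.

```python
# Minimal sketch of the provider-side lookup; names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class EntityProfile:
    entity_id: str
    preferences: Dict[str, str] = field(default_factory=dict)


class PrivacyPreferenceProvider:
    def __init__(self, profiles: Dict[str, EntityProfile]):
        self.profiles = profiles  # entity identifier -> entity profile

    def handle_query(self, entity_id: str) -> Dict[str, str]:
        """Identify the entity profile matching the entity identifier and
        return its privacy preferences (empty if no profile matches)."""
        profile = self.profiles.get(entity_id)
        return profile.preferences if profile else {}


# Example mirroring the scenario above: publishing allowed, tagging and
# logging denied for the entity identified as "john-1234".
provider = PrivacyPreferenceProvider({
    "john-1234": EntityProfile("john-1234", {
        "publishing": "allow", "tagging": "deny", "logging": "deny",
    })
})
print(provider.handle_query("john-1234"))
```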
It may be appreciated that the multimedia device may be configured to identify and/or apply privacy preferences based upon a variety of information, such as a signal broadcast from a device associated with the entity (e.g., the device may broadcast a privacy preference to blur photos of the user), object recognition of a privacy object (e.g., an amulet may be identified as specifying that the user is not to be tagged and/or that photos of the user are to be blurred), gesture recognition of a gesture (e.g., John may cross his arms, indicating that video of John is not to be captured), etc.
To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings.
The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are generally used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, structures and devices are illustrated in block diagram form in order to facilitate describing the claimed subject matter.
As devices, such as cell phones, tablets, wearables and/or other devices, become increasingly connected and capable of capturing information about entities (e.g., posting a photo of a person to a social network; sharing a video of a person through a video sharing service; streaming an audio recording of a song through a website; etc.), privacy concerns arise. For example, a user may capture a photo of John at a restaurant. The user may upload the photo to a social network, tag John in the photo, and/or allow various services to track and/or profile John based upon the photo, which may go against the desires of John who may wish to not have his photo taken, shared, tagged, etc. (e.g., John may not wish to be associated with a specific location at a certain time and/or with certain individuals documented in the photo). Accordingly, as provided herein, privacy preferences for entities may be provided and/or applied to multimedia content.
An embodiment of applying a privacy preference for an entity is illustrated by an exemplary method 200 of
At 206, a privacy preference for the entity may be identified. In an example of identifying a privacy preference, a signal broadcast from a device associated with the entity may be identified (e.g., an app of a mobile phone may cause the phone to broadcast an RF signal, a Bluetooth signal, or other type of signal; a privacy device, such as an amulet, may broadcast the signal; etc.). The signal may be evaluated to identify the privacy preference for the entity. For example, the signal may be received and/or decoded by the multimedia device comprising the multimedia content. The decoded signal may specify the privacy preference for the entity (e.g., a no facial recognition privacy preference). In this way, the multimedia device may directly identify the privacy preference based upon the signal specifying the privacy preference. In another example of identifying a privacy preference, a signal broadcast from a device associated with the entity may be identified. The signal may be evaluated to identify an entity identifier for the entity. For example, the multimedia device may receive and/or decode the signal to obtain the entity identifier. The entity identifier may correspond to a unique identifier used by a privacy preference provider component, such as a cloud service accessible to the multimedia device, to associate the entity with an entity profile comprising privacy preferences for the entity. In this way, the privacy preference provider component may use the entity identifier to identify the privacy preference for the entity (e.g., a publishing privacy preference that restricts publishing of photos of the entity with respect to particular websites, social networks, email, messaging, etc.).
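By way of illustration, a sketch of how a multimedia device might evaluate a decoded broadcast payload is shown below, covering both cases described above: a payload that directly specifies a privacy preference and a payload that carries an entity identifier to be resolved by the privacy preference provider component. The "PREF:"/"ID:" framing is an assumption made for this sketch rather than a defined signal format.

```python
# Illustrative parser for a decoded broadcast payload (e.g., from an RF or
# Bluetooth advertisement). The "PREF:"/"ID:" framing is an assumption.
from typing import Optional, Tuple


def parse_privacy_signal(payload: bytes) -> Tuple[Optional[str], Optional[str]]:
    """Return (privacy_preference, entity_id); at most one is expected."""
    text = payload.decode("utf-8", errors="replace")
    if text.startswith("PREF:"):
        # Case 1: the signal directly specifies a privacy preference.
        return text[len("PREF:"):], None
    if text.startswith("ID:"):
        # Case 2: the signal carries an entity identifier that the privacy
        # preference provider component resolves to privacy preferences.
        return None, text[len("ID:"):]
    return None, None


preference, entity_id = parse_privacy_signal(b"PREF:no_facial_recognition")
assert preference == "no_facial_recognition" and entity_id is None
```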
In another example of identifying a privacy preference, a recognition technique, such as facial recognition and/or voice recognition, may be performed on the multimedia content to identify an entity identifier for the entity. For example, facial recognition may identify a user John as being depicted within a photo. In this way, the privacy preference provider component may use the entity identifier to identify the privacy preference for the entity (e.g., a no tagging privacy preference for a particular social network specified by John). In another example of identifying a privacy preference, gesture recognition may be performed on the multimedia content to identify a gesture associated with the entity. For example, a photo may depict a user crossing their arms in a particular manner, which may be identified as a no photography gesture. Such a no photography gesture and/or other gestures may be universally identifiable gestures that may be recognizable to society as activating privacy protection technology. The gesture may be evaluated to identify the privacy preference for the entity (e.g., a no photography privacy preference). In another example of identifying a privacy preference, object recognition may be performed on the multimedia content to identify a privacy object associated with the entity (e.g., a visual amulet, a sticker, a tee-shirt, a bracelet, a military label, etc.). For example, a prototype car may comprise a label, bar code, QR code, etc. as the privacy object. The privacy object may be evaluated to identify the privacy preference for the entity (e.g., the multimedia device may match the label to a privacy object database to identify a no logging privacy preference specifying that activity and locational data for the prototype car cannot be logged, a social media privacy preference specifying that photos of the prototype car cannot be uploaded to a particular social network, and/or other privacy preferences). In this way, a wide variety of techniques may be performed to identify one or more privacy preferences for the entity associated with the multimedia content. It is to be appreciated that the instant application, including the scope of the appended claims, is not intended to be limited by the examples provided herein. Rather, more than merely the examples provided are contemplated herein.
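As one non-limiting illustration of the privacy object example, the sketch below matches a recognized label or QR code against a privacy object database; the database contents and preference names are invented for illustration.

```python
# Sketch of matching a recognized privacy object (e.g., a label or QR code on
# a prototype car) against a privacy object database; contents are invented.
from typing import Dict, List

PRIVACY_OBJECT_DATABASE: Dict[str, List[str]] = {
    "qr:prototype-car-77": ["no_logging", "no_social_media_upload"],
    "amulet:series-a": ["no_tagging", "blur_depiction"],
}


def lookup_privacy_object(object_code: str) -> List[str]:
    """Return the privacy preferences registered for a recognized privacy
    object, or an empty list when the object is unknown."""
    return PRIVACY_OBJECT_DATABASE.get(object_code, [])


print(lookup_privacy_object("qr:prototype-car-77"))
# ['no_logging', 'no_social_media_upload']
```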
At 208, the privacy preference may be applied to the multimedia content. In an example, a blur effect may be applied to a depiction of the entity within the multimedia content based upon a no photography privacy preference. In another example, audio of the entity within video multimedia content may be muffled, filtered, etc. based upon a no audio privacy preference. In another example, a tag restriction may be applied to a depiction of the entity within the multimedia content based upon a no tagging privacy preference. In another example, facial recognition on a depiction of the entity within the multimedia content may be restricted based upon a no facial recognition privacy preference (e.g., the entity may be wearing a privacy object, such as an amulet, design, logo, etc., specifying the no facial recognition privacy preference). In another example, a log activity restriction may be applied to the multimedia content with respect to the entity based upon a no logging privacy preference (e.g., a social network and/or other service may be blocked from logging information about a user as having eaten at a restaurant as depicted by the multimedia content). In another example, a profiling activity restriction may be applied to the multimedia content with respect to the entity based upon a no profiling privacy preference (e.g., a social network and/or other service may be blocked from building and/or updating a profile for a user depicted within the multimedia content). In this way, privacy preferences for entities may be applied to multimedia content. It will be appreciated that the instant application, including the scope of the appended claims, is not intended to be limited to or by the examples provided herein. For example, an object can have any shape, form, configuration, etc. (e.g., universally known, agreed upon, etc.) to indicate one or more privacy preferences. Moreover, it is contemplated that a privacy preference may be applied based upon a current law, regulation, mandate, etc. for a particular location, such as a state within which the multimedia content was created. At 210, the method ends.
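By way of illustration only, the sketch below applies a small set of such privacy preferences to a photo; it assumes that the Pillow imaging library is available, that a bounding box for the entity's depiction has already been located by a separate recognition step, and that the preference names match those used in the sketches above.

```python
# Minimal sketch of applying privacy preferences to a photo. Assumes Pillow is
# installed and that face_box was produced by a prior recognition step.
from PIL import Image, ImageFilter


def apply_preferences(photo_path: str, face_box: tuple, preferences: list) -> dict:
    image = Image.open(photo_path).convert("RGB")
    metadata = {"allow_tagging": True, "allow_logging": True, "allow_profiling": True}

    if "blur_depiction" in preferences or "no_photography" in preferences:
        # Blur only the region depicting the entity.
        region = image.crop(face_box)
        image.paste(region.filter(ImageFilter.GaussianBlur(radius=12)), face_box)
    if "no_tagging" in preferences:
        metadata["allow_tagging"] = False
    if "no_logging" in preferences:
        metadata["allow_logging"] = False
    if "no_profiling" in preferences:
        metadata["allow_profiling"] = False

    image.save(photo_path)
    return metadata
```

Restrictions such as no tagging, no logging, and no profiling are recorded here as metadata flags that downstream services would be expected to honor; the flag names are illustrative only.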
The privacy preference provider component 314 may receive a query 312 from the multimedia device 306. The query 312 may specify the entity identifier. The privacy preference provider component 314 may be configured to query an entity profile repository 318, comprising one or more entity profiles, to identify 316 an entity profile matching the entity identifier. The entity profile may comprise one or more privacy preferences, such as a privacy preference 320, specified by the entity 302 (e.g., through a configuration interface and/or otherwise, such as discussed with respect to
The privacy preference provider component 414 may receive a query 408 from the multimedia device 406. The query 408 may specify the entity identifier. The privacy preference provider component 414 may be configured to query an entity profile repository 418, comprising one or more entity profiles, to identify 416 an entity profile matching the entity identifier. The entity profile may comprise one or more privacy preferences, such as a privacy preference 420, specified by the entity 402 (e.g., through a configuration interface and/or otherwise, such as discussed with respect to
Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device is illustrated in
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims.
As used in this application, the terms “component,” “module,” “system,” “interface,” and/or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Although not required, embodiments are described in the general context of “computer readable instructions” being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
In other embodiments, device 912 may include additional features and/or functionality. For example, device 912 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in
The term “computer readable media” as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 918 and storage 920 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 912. Computer storage media does not, however, include propagated signals. Rather, computer storage media excludes propagated signals. Any such computer storage media may be part of device 912.
Device 912 may also include communication connection(s) 926 that allows device 912 to communicate with other devices. Communication connection(s) 926 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 912 to other computing devices. Communication connection(s) 926 may include a wired connection or a wireless connection. Communication connection(s) 926 may transmit and/or receive communication media.
The term “computer readable media” may include communication media. Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
Device 912 may include input device(s) 924 such as a keyboard, mouse, pen, voice input device, touch input device, infrared camera, video input device, and/or any other input device. Output device(s) 922 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 912. Input device(s) 924 and output device(s) 922 may be connected to device 912 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device(s) 924 or output device(s) 922 for computing device 912.
Components of computing device 912 may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of computing device 912 may be interconnected by a network. For example, memory 918 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 930 accessible via a network 928 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 912 may access computing device 930 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 912 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 912 and some at computing device 930.
Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.
Further, unless specified otherwise, “first,” “second,” and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object.
Moreover, “exemplary” is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used herein, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.