This background description is provided for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, material described in this section is neither expressly nor impliedly admitted to be prior art to the present disclosure or the appended claims.
Current techniques for sharing visual media, such as photos and video clips, can be time consuming and cumbersome. If a mother of a young child wants to share photos with the child's four grandparents and three living great-grandparents, for example, she may have to select, through various cumbersome interfaces, whether to share the photo and, further, how to share it with each of the seven interested grandparents and great-grandparents. Thus, one grandparent may want photos sent via text, another through email, another downloaded to a digital picture frame, and another through printed hardcopies. To share the photo with the desired people and in the desired ways, the mother selects one grandparent's cell number from a contact list, enters another's email address, finds another's URL from which the digital picture frame retrieves photos, and enters still another's physical address to send the printed hardcopies.
Techniques and apparatuses for sharing visual media are described with reference to the following drawings. The same numbers are used throughout the drawings to reference like features and components:
This document describes techniques that allow a user to quickly and easily share visual media. In some cases, the techniques share visual media with an interested person automatically and without needing interaction from the user, such as selecting the person or the manner in which to share an image. Further, the interested person need not be in the visual media; instead, the interested person can simply be someone who has a previously established interest in a person or object that is within the visual media. For example, a video clip or photo of a grandchild can be automatically shared with the grandchild's grandmother without an explicit selection by the person taking the video or photo.
The following discussion first describes an operating environment, followed by techniques that may be employed in this environment, and concludes with example user interfaces and apparatuses.
In more detail, remote device 104 of
In more detail, sharing module 112 receives or determines interest associations 118 and preferred communication 120 for each of entities 116 relative to persons 122 and objects 124. Sharing module 112 can determine these interest associations 118 and preferred communications 120 based on a history of explicitly selected sharing of other visual media that also include person 122 or object 124, an explicit selection to automatically share visual media having the person 122 or object 124 (e.g., by a user or controller of the visual media), or an indication received from an entity.
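The relationship among entities 116, interest associations 118, and preferred communications 120 can be sketched as a simple data structure. The following is a hypothetical illustration only; names such as `Entity` and `entities_to_share_with` are not from this disclosure, and recognized persons and objects are represented as string labels for simplicity.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Hypothetical record for an entity 116 with interest associations 118
    and a preferred communication 120 (names are illustrative)."""
    name: str
    interests: set = field(default_factory=set)   # persons 122 / objects 124
    preferred_communication: str = "email"        # e.g., "text", "frame_url"

def entities_to_share_with(recognized, entities):
    """Return (entity name, channel) pairs for each entity whose interest
    associations overlap the persons/objects recognized in visual media."""
    return [(e.name, e.preferred_communication)
            for e in entities
            if e.interests & set(recognized)]

grandma = Entity("Grandma", {"grandchild"}, "text")
coach = Entity("Coach", {"bicycle"})
print(entities_to_share_with({"grandchild", "dog"}, [grandma, coach]))
# → [('Grandma', 'text')]
```

Under this sketch, a newly captured photo's recognized labels are simply intersected with each entity's interests to decide who receives the media and over which channel.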
Visual media 114 includes photos 126, videos 128, and slideshows/highlights 130. Videos 128 and slideshows/highlights 130 can include audio, and can also include various modifications, such as songs added to a slideshow, transitions between images or video in a highlight reel, and so forth. Other types of visual media can also be included; those shown are illustrated by way of example only.
Remote CRM 110 also includes facial recognition engine 132 and object recognition engine 134. Sharing module 112 may use these engines to recognize persons and objects (e.g., persons 122 and objects 124) within visual media 114. While these engines can recognize people and objects without assistance, in some cases prior tagging by users (e.g., a user capturing the visual media or others, local or remote) can assist the engines and improve accuracy, or even supplant them, in which case sharing module 112 may forgo use of these engines. Accuracy can also affect sharing, which is described further below.
As noted in part above, time-consuming and explicit selection of entities with which to share, as well as of their preferred communication by which to receive media, can be avoided by the user if he or she desires. Sharing module 112 may share automatically or responsive to selection (e.g., in an easy-to-use interface detailed below) and in other manners detailed herein.
With regard to the example computing device 102 of
Computing device 102 includes or is able to communicate with a display 202 (eight are shown in
These and other capabilities, as well as ways in which entities of
Example Methods for Sharing Visual Media
At 302, visual media is captured at a mobile computing device and through a visual-media capture device. Thus, a user may capture a photo of herself and two friends on a bike trip through her smartphone 102-3 (shown in
At 304, a person or object in the visual media is recognized. As noted in part above, sharing module 112 may recognize persons and objects in the captured visual media, such as by using facial recognition engine 132 and object recognition engine 134 of
In some cases this recognizing can be performed in conjunction with, or simply selected by, a user or other entity. Thus, the method may proceed from operation 304 or 306 to operation 308. At 308, a recognized person is confirmed or selected. A recognized person can be recognized with a high confidence or a less-than-high confidence. Sharing module 112 is capable of assigning a confidence (e.g., a probability) that a recognition is correct. This confidence can be used to determine whether or not to present a user interface enabling selection to confirm an identity of the recognized person or object prior to sharing the visual media with an interested entity (e.g., at 308). For probabilities below some threshold of confidence (e.g., 99, 95, or 90 percent), sharing module 112 may determine not to share the visual media without an explicit selection from a user, thereby attempting to avoid sending media to a person that is not interested in the media.
Assume, for this example, that the threshold is 95% to share media without an explicit selection. In such a case, sharing module 112 can present a user interface asking for an explicit selection to share; this is illustrated in
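The confidence gate described above might be sketched as follows. The 0.95 threshold comes from the example in the text; the function name and the returned labels are hypothetical, not terms from this disclosure.

```python
AUTO_SHARE_THRESHOLD = 0.95  # example threshold from the text

def sharing_decision(confidence):
    """Gate sharing on recognition confidence: share automatically at or
    above the threshold; otherwise ask the user for an explicit selection."""
    if confidence >= AUTO_SHARE_THRESHOLD:
        return "share_automatically"
    return "prompt_for_confirmation"

print(sharing_decision(0.97))  # → share_automatically
print(sharing_decision(0.90))  # → prompt_for_confirmation
```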
At 306, an entity having an interest in a person or object is determined. An interest can be determined based on a history of sharing visual media (e.g., captured prior to newly captured visual media) having a recognized person or object, as noted above. Other manners can be used, such as a prior explicit selection to have visual media shared, such as selecting visual media that has a recognized grandchild to be automatically shared with a grandmother.
Still other manners can be used, such as based on an indication being received from the entity through near-field communication or a personal-area network from a mobile device associated with the entity. Assume, for example, that two kids, Calvin and John, are at a park, have recently met, and are having great fun playing together. Assume also that each kid has a parent at the park watching them—Calvin's Dad and John's Mom. Assume further that one of the parents, Calvin's Dad, takes pictures of both kids—both Calvin and John. John's Mom can ask for the photo of both of the kids—and, with a simple tap of the two parents' phones together (NFC) or a PAN communication (e.g., prompting a user interface to select the interest and share), John's Mom can be made an entity 116 having an interest association 118 with her son John (person 122). Here we assume that the preferred communication 120 by which to share the photo is the same as the manner in which the indication is received, though that is not required. Responsive to receiving this indication of interest, the particular photo is shared by Calvin's Dad's smartphone (e.g., smartphone 102-3). With the interest, entity, and preferred communication established, additional photos can be shared automatically. As will be described in greater detail below, when visual media has the other parent's child (John) recognized in it, the other media can be shared, even automatically, from the first parent's device (Calvin's Dad) to the other person's device (John's Mom).
Note also that determining an entity in this manner may be used as an aid in recognizing persons or objects without user interaction. Continuing the example of the two parents and two kids, when John's Mom indicates her interest in John, sharing module 112 may note this for future facial recognition. As Calvin and John are the only two people in the photo, and Calvin is already known and recognized, John's face can be noted, whether with a name or without, as a person 122 with which the particular entity (John's Mom) has an interest. Then, when recognizing faces in other photos or videos taken by Calvin's Dad (especially that same day), a baseline for John can be known and used by facial recognition engine 132.
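The process of elimination just described, in which a single unrecognized face is attributed to the person of interest, can be sketched as below. The function name is hypothetical and faces are represented as string identifiers purely for illustration.

```python
def infer_person_of_interest(faces_in_media, known_faces):
    """If exactly one face in the media is unrecognized, attribute it to
    the person the newly added entity is interested in (process of
    elimination); return None when the inference would be ambiguous."""
    unknown = [f for f in faces_in_media if f not in known_faces]
    return unknown[0] if len(unknown) == 1 else None

# Calvin is already known and recognized, so the remaining face is John's.
print(infer_person_of_interest(["calvin", "face_17"], {"calvin"}))  # → face_17
```

The inferred face can then seed a recognition baseline for later photos, whether or not a name is ever attached to it.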
Returning to methods 300, at 310 the visual media is shared with the determined entity. This sharing can be through transmitter/transceiver 210, such as through a cellular network, the internet (e.g., through a social media network associated with the entity), NFC, PAN, and so forth.
For the example of Calvin and John, assume that Calvin's Dad takes a short video of the boys at 302. Sharing module 112, at 304, recognizes that John is in the video. At 306, sharing module 112 determines that John's Mom has an interest in John based on the prior-received indication. At 310, sharing module 112 shares, even automatically and without further interaction from Calvin's Dad, the video with John's Mom. Note how simple and easy this can make sharing visual media with interested entities. Instead of Calvin's Dad having to take down John's Mom's email address and so forth, later remember to send media to her, then enter her email, find the video, select the video, and so forth, the visual media is immediately sent to John's Mom.
Alternatively or additionally, methods 300 may receive a selection or de-selection of a determined entity prior to sharing the visual media at operation 310. This is shown generally at operation 312. In some cases this is performed through operations 314, 316, and 318.
At 314, a user interface having a visual identifier for the determined entity is presented. This is illustrated in
Interest associations 118 are illustrated for these entities in
Returning to methods 300, at 316 selection through the visual identifier to select or de-select to share with the entity is enabled. Thus, Bella (assuming the method is operating on or through her mobile device) can de-select Maria, Mark, or Ryan to share photo 402. At 318, selection to select or de-select the determined entity is received. Here assume that Bella taps on Maria's visual identifier (her thumbnail), thereby de-selecting to share photo 402 to Maria. Responsive to this selection, de-selection, or simply to accept the determined entities as presented, sharing module 112 shares the visual media.
Note that entities, while described as persons, need not be. Thus, an entity may be an album or database having an interest association with persons and objects. Assume, for example, that Bella selects that any visual media having a bicycle or helmet be automatically shared with a database, such as her triathlon team's shared database. Bella may also select that visual media having similar objects be shared with a database, e.g., that her photos and videos having the same or similar objects or types of objects be compiled in the database. Thus, Bella's media that includes flowers can automatically be stored in a flower album, or media of herself in a self-titled album.
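Routing media to albums or databases by recognized objects, as in Bella's example, might look like the following sketch. The function name and the set-overlap test are assumptions for illustration, not details specified by this disclosure.

```python
from collections import defaultdict

def file_into_albums(media_items, album_interests):
    """Route each media item to every album (entity) whose interest set
    overlaps the objects recognized in that item."""
    albums = defaultdict(list)
    for item_id, objects in media_items:
        for album, interests in album_interests.items():
            if interests & objects:
                albums[album].append(item_id)
    return dict(albums)

media = [("ride.jpg", {"bicycle", "helmet"}), ("garden.jpg", {"flower"})]
interests = {"triathlon_db": {"bicycle", "helmet"}, "flowers": {"flower"}}
print(file_into_albums(media, interests))
# → {'triathlon_db': ['ride.jpg'], 'flowers': ['garden.jpg']}
```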
Example Device-to-Device Sharing
As noted in part above, the apparatuses and techniques enable device-to-device sharing of visual media. This is but one example of the many ways in which visual media can be shared.
At 902, an indication of interest in a person or object is received. This indication can be received at a first mobile device and from a second mobile device, such as through NFC or PAN communication. Examples of an indication received through these communications are set forth above, such as through tapping two mobile devices together.
At 904, visual media associated with the first mobile device that includes the indicated person or object is determined. This can be performed by sharing module 112 as noted above, such as to determine, by selection or process of elimination, a person or object of interest to a person associated with a mobile device from which the indication is received. Thus, John's Mom indicates an interest in a photo just taken of Calvin and John by Calvin's Dad, and sharing module 112 determines that the person of interest is John based on Calvin having been recognized previously and known to Calvin's Dad's facial recognition engine 132 and sharing module 112. Or, for example, sharing module 112 may determine that a person associated with the second mobile device is both the entity and the person of interest (e.g., Mark taps Mark's phone with Bella's phone to receive media that has Mark in it).
At 906, the visual media that includes the indicated person or object is shared with the second mobile device by the first mobile device. Concluding the above example, Calvin's Dad's smartphone shares the video of Calvin and John with John's Mom. Note also that other, later-taken or prior-captured visual media may also be shared, either automatically or responsive to selection.
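Operations 902 through 906 taken together can be sketched as a single routine. The callback-based `send` parameter and the tag-set representation of recognized persons are illustrative assumptions; in practice the transmission would occur over NFC, PAN, or another network as described above.

```python
def handle_interest_indication(person_of_interest, media_library, send):
    """Sketch of operations 902-906: an indication of interest has been
    received (902); find media that includes the indicated person (904)
    and share each matching item with the second device (906)."""
    matches = [item for item, recognized in media_library.items()
               if person_of_interest in recognized]
    for item in matches:
        send(item)  # e.g., transmit over NFC, PAN, or a cellular network
    return matches

sent = []
library = {"park.mp4": {"calvin", "john"}, "home.jpg": {"calvin"}}
handle_interest_indication("john", library, sent.append)
print(sent)  # → ['park.mp4']
```

Later-captured media could be run through the same routine to share automatically, per the discussion of prior-established interests above.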
Example Device
Example device 1000 can be implemented in a fixed or mobile device being one or a combination of a media device, desktop computing device, television set-top box, video processing and/or rendering device, appliance device (e.g., a closed-and-sealed computing resource, such as some digital video recorders or global-positioning-satellite devices), gaming device, electronic device, vehicle, workstation, laptop computer, tablet computer, smartphone, video camera, camera, computing watch, computing ring, computing spectacles, and netbook.
Example device 1000 can be integrated with electronic circuitry, a microprocessor, memory, input-output (I/O) logic control, communication interfaces and components, other hardware, firmware, and/or software needed to run an entire device. Example device 1000 can also include an integrated data bus (not shown) that couples the various components of the computing device for data communication between the components.
Example device 1000 includes various components such as an input-output (I/O) logic control 1002 (e.g., to include electronic circuitry) and microprocessor(s) 1004 (e.g., microcontroller or digital signal processor). Example device 1000 also includes a memory 1006, which can be any type of random-access memory (RAM), a low-latency nonvolatile memory (e.g., flash memory), read-only memory (ROM), and/or other suitable electronic data storage. Memory 1006 includes or has access to sharing module 112, visual media 114, facial recognition engine 132, and/or object recognition engine 134. Sharing module 112 is capable of performing one or more actions described for the techniques, though other components may also be included.
Example device 1000 can also include various firmware and/or software, such as an operating system 1008, which, along with other components, can be computer-executable instructions maintained by memory 1006 and executed by microprocessor 1004. Example device 1000 can also include other various communication interfaces and components, such as wireless LAN (WLAN) or wireless PAN (WPAN) components, other hardware, firmware, and/or software.
Other example capabilities and functions of these entities are described with reference to descriptions and figures above. These entities, either independently or in combination with other modules or entities, can be implemented as computer-executable instructions maintained by memory 1006 and executed by microprocessor 1004 to implement various embodiments and/or features described herein.
Alternatively or additionally, any or all of these components can be implemented as hardware, firmware, fixed logic circuitry, or any combination thereof that is implemented in connection with the I/O logic control 1002 and/or other signal processing and control circuits of example device 1000. Furthermore, some of these components may act separate from device 1000, such as when remote (e.g., cloud-based) services perform one or more operations for sharing module 112. For example, photos and videos are not required to all be in one location; some may be on a user's smartphone, some on a server, and some downloaded to another device (e.g., a laptop or desktop). Further, some images may be taken by a device, indexed, and then stored remotely, such as to save memory resources on the device.
Although techniques and apparatuses for sharing visual media have been described in language specific to structural features and/or methodological acts, the appended claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing techniques and apparatuses for sharing visual media.
This application claims the benefit of U.S. Provisional Application Ser. No. 61/986,135, filed Apr. 30, 2014, the entire contents of which are hereby incorporated herein by reference.