The present disclosure relates generally to content distribution over a network, and more particularly to techniques for enabling the synchronized display and manipulation of images between nodes in a network.
With the popularity of online social networks and media sharing/distribution networks, more and more people are sharing their personal media content (e.g., pictures, videos, etc.) with friends, family, and others. The most popular method for achieving this sharing experience has been through the uploading of content by a user, from platforms such as a PC or a mobile phone through the Wide Area Network (“WAN”), to an online service such as Facebook or YouTube. Once uploaded, other people can gain access to the content through a method determined by the service. Unfortunately, this experience is predominantly “static” in nature; in other words, upon the availability of the content, other users access the content asynchronously, with little or no interaction with the original uploader. Accordingly, it would be desirable to have improved techniques for content sharing that provide a more dynamic and interactive user experience.
Embodiments of the present invention provide techniques for enabling synchronized, curated media sharing experiences between nodes in a network in a real-time and interactive fashion. In one embodiment, a first computing device can receive, from a user, a selection of one or more images, and can cause the one or more images to be presented synchronously on the first computing device and one or more second computing devices. While a first image in the one or more images is concurrently presented on the displays of the first computing device and the one or more second computing devices, the first computing device can receive, from the user, an input signal corresponding to an image zoom or pan operation to be performed with respect to the first image, and can update the display of the first computing device to reflect the image zoom or pan operation. The first computing device can then transmit, to the one or more second computing devices, a command for updating the displays of the one or more second computing devices to reflect the image zoom or pan operation.
A further understanding of the nature and advantages of the embodiments disclosed herein can be realized by reference to the remaining portions of the specification and the attached drawings.
In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of embodiments of the present invention. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details, or can be practiced with modifications or equivalents thereof.
Embodiments of the present invention provide techniques for enabling synchronized, curated media sharing experiences between nodes in a network in a real-time and interactive fashion. In one set of embodiments, a first computing device can receive, from a user (i.e., curator), a selection of a group of images resident on the first computing device. The selected group of images can correspond to images that the curator would like to share with other individuals in an interactive, “slideshow” presentation format. The first computing device can further establish connections with one or more second computing devices over a network. In a particular embodiment, the network can be an ad-hoc, wireless peer-to-peer (P2P) network. In other embodiments, the network can be any type of computer network conventionally known in the art. The first computing device can then cause the selected images to be presented in a synchronized manner on the first computing device and the one or more second computing devices.
For example, in one embodiment, the first computing device can cause a first image in the selected group of images to be displayed concurrently on an output device of the first computing device and on output devices of the one or more second computing devices. The first computing device can subsequently receive, from the curator, an input signal (e.g., a “swipe right or left” gesture) to transition from the first image to a second image in the selected group of images. Upon receiving the input signal, the first computing device can display the second image on the output device of the first computing device. At the same time, the first computing device can transmit a command identifying the second image to the one or more second computing devices, thereby causing those computing devices to simultaneously (or near simultaneously) transition to displaying the second image. In this manner, both the curator and the users operating the second computing devices (i.e., viewers) can view the same sequence of images at substantially the same time.
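The transition flow described above can be illustrated with a minimal sketch. The message format and field names below (e.g., "DISPLAY_IMAGE", "image_id") are illustrative assumptions, not part of the disclosure:

```python
import json

def make_transition_command(image_id):
    """Curator side: build a hypothetical wire message telling
    viewer devices which image to display next."""
    return json.dumps({"type": "DISPLAY_IMAGE", "image_id": image_id})

def handle_command(raw, display):
    """Viewer side: parse the command and update the local display state."""
    msg = json.loads(raw)
    if msg["type"] == "DISPLAY_IMAGE":
        display["current_image"] = msg["image_id"]
    return display

# The curator swipes to image "img-002"; every viewer applies the same command,
# so all devices transition at substantially the same time.
viewer_display = {"current_image": "img-001"}
cmd = make_transition_command("img-002")
handle_command(cmd, viewer_display)
```

In practice the same command would be broadcast to every connected viewer device, which is what keeps the devices showing the same image sequence.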
While a particular image is being displayed on the first computing device and the one or more second computing devices, the curator (and/or one of the viewers) can enter, on his/her respective computing device, an input signal for manipulating or otherwise modifying the presented image. Examples of such image manipulation/modification functions include resizing the image (i.e., zooming in or out), panning the image, rotating the image, annotating (i.e., “doodling” on) the image, and the like. In response, the image manipulations or modifications can be displayed on the computing device where the input signal was entered, as well as propagated, in real-time or near real-time, to the other connected computing devices. Thus, the image manipulations/modifications can be concurrently viewed on all of these devices.
Further, while a particular image is being presented on the first computing device and the one or more second computing devices, one of the viewers can enter, on his/her respective computing device, an input signal (e.g., a “swipe down” gesture) for locally saving the presented image on the device. In certain embodiments, this feature can be controlled by a content sharing policy that is defined by the curator. If the content sharing policy allows local saving of the image, the image can be stored in a user-defined local storage location.
Network 110 can be any type of data communications network known in the art, such as a local area network (LAN), a wide-area network (WAN), a virtual network (e.g., VPN), or the Internet. In certain embodiments, network 110 can comprise a collection of interconnected networks.
In operation, peer devices 102-108 can communicate to enable various networked image sharing functions in accordance with embodiments of the present invention. For example, as described above, one peer device (e.g., 102) can be operated by an individual (i.e., “curator”) that wishes to share images in a slideshow presentation format with one or more users of the other peer devices (e.g., 104-108). In this case, the curator can invoke an image sharing application 112 on peer device 102 (i.e., the “curator device”) and select, from a collection of images resident on curator device 102, the images he/she wishes to share. The curator can further cause curator device 102 to search for other peer devices (i.e., “viewer devices”) on network 110 that have the same image sharing application 112 installed and wish to connect to curator device 102. Once such viewer devices are found, the curator can select one or more of the viewer devices to join an image sharing session. Curator device 102 can then enter a “slideshow” mode and cause the selected images to be displayed, in synchrony, on curator device 102 and the participating viewer devices.
In one embodiment, the curator operating curator device 102 can control the flow of the image slideshow by providing an input signal on curator device 102 (e.g., a “swipe left or right” gesture) for transitioning to the next or previous image. In response, curator device 102 can send a command to the connected viewer devices to simultaneously (or near simultaneously) transition to the appropriate image. In another embodiment, the curator (or a viewer operating one of the viewer devices) can provide an input signal for modifying or otherwise manipulating a particular image being displayed during the slideshow. This image manipulation/modification can be propagated and displayed in real-time (or near real-time) on all of the connected devices. In yet another embodiment, a viewer operating one of the viewer devices can provide an input signal (e.g., a “swipe down” gesture) for locally saving the original version of a particular image being displayed during the slideshow. The specific processing steps that can be performed by devices 102-108 to carry out these functions are described in further detail below.
It should be appreciated that systems 100 and 150 are illustrative and not intended to limit embodiments of the present invention. For example, the various components depicted in systems 100 and 150 can have other capabilities or include other subcomponents that are not specifically described. One of ordinary skill in the art will recognize many variations, modifications, and alternatives.
Bus subsystem 204 can provide a mechanism for letting the various components and subsystems of computing device 200 communicate with each other as intended. Although bus subsystem 204 is shown schematically as a single bus, alternative embodiments of the bus subsystem can utilize multiple busses.
Network interface subsystem 216 can serve as an interface for communicating data between computing device 200 and other computing devices or networks. Embodiments of network interface subsystem 216 can include wired (e.g., coaxial, twisted pair, or fiber optic Ethernet) and/or wireless (e.g., Wi-Fi, cellular, Bluetooth, etc.) interfaces.
User interface input devices 212 can include a keyboard, pointing devices (e.g., mouse, trackball, touchpad, etc.), a scanner, a barcode scanner, a touch-screen incorporated into a display, audio input devices (e.g., voice recognition systems, microphones, etc.), and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and mechanisms for inputting information into computing device 200.
User interface output devices 214 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices, etc. The display subsystem can be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), or a projection device. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computing device 200.
Storage subsystem 206 can include a memory subsystem 208 and a file/disk storage subsystem 210. Subsystems 208 and 210 represent non-transitory computer-readable storage media that can store program code and/or data that provide the functionality of embodiments of the present invention.
Memory subsystem 208 can include a number of memories including a main random access memory (RAM) 218 for storage of instructions and data during program execution and a read-only memory (ROM) 220 in which fixed instructions are stored. File storage subsystem 210 can provide persistent (i.e., non-volatile) storage for program and data files, and can include a magnetic or solid-state hard disk drive, an optical drive along with associated removable media (e.g., CD-ROM, DVD, Blu-Ray, etc.), a removable flash memory-based drive or card, and/or other types of storage media known in the art.
It should be appreciated that computing device 200 is illustrative and not intended to limit embodiments of the present invention. Many other configurations having more or fewer components than device 200 are possible.
At block 306, curator device 102 can set up a network for communicating with one or more viewer devices (e.g., devices 104-108).
In an alternative embodiment (not shown), the onboarding process can follow an “invite” model. In this model, the curator device 102 does not broadcast itself as a group owner. Instead, the curator device 102 sends invitations to one or more other users that have been selected by the curator for participating in the image sharing session. For example, the users may be selected from the curator's contact list, Facebook friends list, etc. Upon receiving the invitations, those users can connect, via their respective viewer devices, to the network/session created by the curator device 102.
Once viewer devices 104-108 have joined the image sharing session, curator device 102 can enter synchronized slideshow mode and display the first image in the selected group of images on an output device (e.g., touchscreen display) of device 102 (block 310). At substantially the same time as block 310, curator device 102 can transmit all of the images selected at block 304, along with image metadata (e.g., name, date, unique identifier, etc.) to the connected viewer devices (block 312). In addition, curator device 102 can send a command to the connected viewer devices instructing them to display the first image (block 314). In this manner, all of the devices in the session can be synchronized to display the same image.
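The curator-side startup sequence (blocks 310-314) can be sketched as follows. The message types and metadata fields are illustrative assumptions chosen for this example:

```python
import json

def start_slideshow(images):
    """Hypothetical sketch of blocks 310-314: build one payload carrying
    the selected images plus their metadata, followed by a command
    instructing viewers to display the first image."""
    payload = {
        "type": "IMAGE_SET",
        "images": [
            {"id": img["id"], "name": img["name"], "date": img["date"]}
            for img in images
        ],
    }
    first_cmd = {"type": "DISPLAY_IMAGE", "image_id": images[0]["id"]}
    # Both messages would be transmitted to every connected viewer device.
    return json.dumps(payload), json.dumps(first_cmd)

images = [
    {"id": "img-001", "name": "beach.jpg", "date": "2012-05-16"},
    {"id": "img-002", "name": "sunset.jpg", "date": "2012-05-16"},
]
payload_msg, display_msg = start_slideshow(images)
```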
After some period of time, curator device 102 can receive, from the curator, an input signal (e.g., a “swipe left or right” gesture) to transition to the next (or previous) image in the slideshow (block 316). Alternatively, this image transition signal can be generated automatically by device 102. Upon receiving/generating this signal, curator device 102 can update its display to show the next image (block 318). Further, curator device 102 can send a command identifying the next image to the connected viewer devices, thereby causing those devices to simultaneously (or near simultaneously) transition to displaying the next image (block 320). In this manner, the viewer devices can remain in sync with curator device 102 as the curator and/or device 102 navigates through the slideshow.
Blocks 316-320 can be repeated until the end of the slideshow has been reached (or until the curator terminates the session) (block 322). Curator device 102 can then send a message to the connected viewer devices indicating that the session has ended (block 324) and exit the synchronized slideshow mode (block 326).
It should be appreciated that process 300 is illustrative and that variations and modifications are possible. For example, although block 312 indicates that curator device 102 transmits all of the images in the slideshow to the connected viewer devices at once, in other embodiments the images may be transmitted on an as-needed basis (e.g., immediately prior to display on the viewer devices). In yet other embodiments, the images may be transmitted in batches (e.g., three images at a time, ten images at a time, etc.). This batch size may be configurable based on a number of different factors, such as the total number of images in the slideshow, the available storage space on each viewer device, and so on. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
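The batched-transmission variant can be captured by a small helper that splits the slideshow into fixed-size batches. This is a sketch only; the function name and the choice of batch size are assumptions:

```python
def batch_images(image_ids, batch_size):
    """Split the slideshow's image identifiers into fixed-size batches
    for as-needed transmission. In a real system the batch size could
    be tuned to the slideshow length or available viewer storage."""
    return [image_ids[i:i + batch_size]
            for i in range(0, len(image_ids), batch_size)]

# Ten images transmitted three at a time yields batches of 3, 3, 3, and 1.
batches = batch_images([f"img-{n:03d}" for n in range(10)], 3)
```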
At block 352, viewer device 104-108 can launch image sharing application 112 (i.e., the same application running on curator device 102). At block 354, viewer device 104-108 can discover that an image sharing session is being broadcast by curator device 102 in the discovery phase. In response, viewer device 104-108 can connect to the session (block 356). In situations where multiple sessions are being broadcast concurrently by multiple curator devices, the user of viewer device 104-108 can select one session out of the multiple sessions to join.
At block 358, viewer device 104-108 can enter synchronized slideshow mode and can receive image data from curator device 102 corresponding to the data sent at block 312. Further, viewer device 104-108 can receive a command from curator device 102 identifying a particular image to display (corresponding to the command sent at block 314 or 320) (block 360). Viewer device 104-108 can then display the image on an output device (e.g., touchscreen display) of the device (block 362). Blocks 360 and 362 can be repeated until a message is received from curator device 102 indicating that the session has ended (corresponding to the message sent at block 324) (block 364). If the session has ended, viewer device 104-108 can exit the synchronized slideshow mode (block 366).
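The viewer-side loop of blocks 360-366 can be sketched as an event loop that displays commanded images until the session-end message arrives. Message names are hypothetical, and the list of shown images stands in for actual screen updates:

```python
import json

def run_viewer(incoming):
    """Hypothetical viewer-side loop: process each incoming message,
    displaying commanded images until the session ends. `incoming` is
    any iterable of raw messages (a stand-in for the network link)."""
    shown = []
    for raw in incoming:
        msg = json.loads(raw)
        if msg["type"] == "SESSION_END":
            break  # corresponds to the message sent at block 324
        if msg["type"] == "DISPLAY_IMAGE":
            shown.append(msg["image_id"])  # stand-in for updating the display
    return shown

stream = [
    json.dumps({"type": "DISPLAY_IMAGE", "image_id": "img-001"}),
    json.dumps({"type": "DISPLAY_IMAGE", "image_id": "img-002"}),
    json.dumps({"type": "SESSION_END"}),
]
shown = run_viewer(stream)
```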
In certain embodiments, during the course of a synchronized slideshow, either the curator or a user operating a connected viewer device can provide, via his/her respective device, one or more input signals for manipulating or modifying a currently displayed image. Examples of such image manipulation and modification functions include image zooming, image panning, image rotation, image annotations or “doodling,” and more.
At block 402, curator device 102 can receive, from the curator, an input signal indicating that the currently displayed image should be zoomed in/out, panned in a particular direction, or rotated. In the case of a zooming operation, the input signal can be a “pinch-to-zoom” gesture that is typically performed on touchscreen devices. In the case of a panning operation, the input signal can be a swiping gesture. In the case of a rotation operation, the input signal can correspond to a physical rotation of curator device 102 (e.g., from landscape to portrait orientation, or vice versa).
At block 404, curator device 102 can update the display of the image to reflect the zooming, panning, or rotation operation. At substantially the same time, curator device 102 can transmit a command to the connected viewer devices identifying the image manipulation operation (e.g., zooming, panning, or rotation), as well as including data needed to replicate the operation (block 406). In certain embodiments, this data can include, e.g., coordinate information and/or vectors corresponding to the input gesture received at block 402.
At block 452, viewer device 104-108 can receive an image manipulation command from curator device 102 (corresponding to the command sent at block 406). As noted above, this command can identify an image manipulation operation to be performed with respect to the image currently displayed on viewer device 104-108, as well as data (e.g., coordinates, vectors, etc.) for carrying out the operation. In the case of an image zoom or pan operation, viewer device 104-108 can automatically update the display of the image to reflect the operation (block 454). In the case of an image rotation operation, viewer device 104-108 can provide an indication to the device user (via, e.g., a visible “rotation” symbol, an audible tone, etc.) that he/she should rotate viewer device 104-108 in order to view the image with the same orientation as the curator.
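The manipulation protocol (blocks 402-406 on the curator side, 452-454 on the viewer side) can be sketched as below. The command schema, field names, and view-state representation are illustrative assumptions; note how zoom and pan are applied directly while rotation only raises a prompt, matching the behavior described above:

```python
import json

def make_manipulation_command(op, **data):
    """Curator side (block 406): identify the operation and include
    the data needed to replicate it (e.g., coordinates or vectors
    derived from the input gesture)."""
    return json.dumps({"type": "MANIPULATE", "op": op, "data": data})

def apply_manipulation(raw, view):
    """Viewer side (blocks 452-454): zoom and pan update the view
    automatically; rotation instead sets a flag that would surface a
    visible "rotation" symbol or audible tone to the device user."""
    msg = json.loads(raw)
    op, data = msg["op"], msg["data"]
    if op == "zoom":
        view["scale"] *= data["factor"]
    elif op == "pan":
        view["x"] += data["dx"]
        view["y"] += data["dy"]
    elif op == "rotate":
        view["show_rotate_prompt"] = True
    return view

view = {"scale": 1.0, "x": 0, "y": 0, "show_rotate_prompt": False}
apply_manipulation(make_manipulation_command("zoom", factor=2.0), view)
apply_manipulation(make_manipulation_command("pan", dx=10, dy=-5), view)
```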
At block 502, curator device 102 can receive, from the curator, an input signal for entering an image augmentation/annotation mode for the currently displayed image. In response, curator device 102 can send a command to the connected viewer devices instructing them to also enter this mode (block 504).
At block 506, curator device 102 can enter the image augmentation/annotation mode. Curator device 102 can then receive, from the curator, one or more annotations or “doodles” to be superimposed on the currently displayed image (block 508). Examples of such annotations or doodles can include mustaches, hats, hairstyles, glasses, eyes, eye-lashes, noses, mouths, lips, ears, scars, texts, text bubbles, and so on. In certain embodiments, the annotations or doodles can be drawn “freehand” by the curator via the touchscreen display of curator device 102. In other embodiments, the curator can select and apply the annotations/doodles from a preconfigured group of symbols/images (e.g., emoticons) or text (e.g., letters or numbers).
At block 510, curator device 102 can update the display of the image to reflect the received annotations/doodles. At substantially the same time, curator device 102 can transmit an image augmentation command to the connected viewer devices that includes data needed to replicate the annotations/doodles (block 512).
Blocks 508-512 can be repeated until the curator either transitions to the next image in the slideshow, or enters an input signal indicating that the image augmentation/annotation mode should be exited (block 514). Curator device 102 can then exit the mode (block 516).
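The annotation replication of blocks 508-512 can be sketched by representing a freehand "doodle" as a stroke, i.e., a list of touch points. The stroke format, message names, and per-image annotation store are illustrative assumptions:

```python
import json

def make_annotation_command(image_id, stroke):
    """Block 512 sketch: package an annotation ("doodle") as a stroke,
    here a list of (x, y) touch points, for transmission to viewers."""
    return json.dumps({"type": "ANNOTATE", "image_id": image_id,
                       "stroke": stroke})

def apply_annotation(raw, annotations):
    """Viewer side: superimpose the received stroke on the local copy
    of the image (represented here as a per-image list of strokes)."""
    msg = json.loads(raw)
    annotations.setdefault(msg["image_id"], []).append(msg["stroke"])
    return annotations

# A freehand doodle drawn as a short sequence of [x, y] points.
annotations = {}
cmd = make_annotation_command("img-001", [[40, 60], [50, 58], [60, 60]])
apply_annotation(cmd, annotations)
```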
In some cases, the image annotation process can be carried out by a viewer device (e.g., 104-108).
At block 552, viewer device 104-108 can receive, from the viewer, an input signal for entering an image augmentation/annotation mode for the currently displayed image. In response, viewer device 104-108 can send a command to curator device 102 indicating its intent to enter this mode (block 554). Upon receiving this command, curator device 102 can forward it to all of the other connected viewer devices.
At block 556, viewer device 104-108 can enter the image augmentation/annotation mode. Viewer device 104-108 can then receive, from the viewer, one or more annotations or “doodles” to be superimposed on the currently displayed image (block 558), in a manner that is substantially similar to block 508.
At block 560, viewer device 104-108 can update the display of the image to reflect the received annotations/doodles. At substantially the same time, viewer device 104-108 can transmit an image augmentation command to curator device 102 that includes data needed to replicate the annotations/doodles (block 562). In response, curator device 102 can forward this command and its associated data to the other connected viewer devices so that they can render the annotation/doodle on their respective output devices.
Blocks 558-562 can be repeated until the viewer enters an input signal indicating that the image augmentation/annotation mode should be exited (block 564). Viewer device 104-108 can then exit this mode (block 566).
In some cases, during the course of a synchronized slideshow, a viewer operating a connected viewing device (e.g., 104-108) may wish to locally save the currently displayed image.
At block 602, curator device 102 can receive, from the curator, a selection or definition of a content sharing policy for images to be shared with viewer devices 104-108. The content sharing policy can indicate, e.g., whether the images may be locally saved by a viewer device during the course of a synchronized slideshow. In one embodiment, the content sharing policy can apply different rules to different individual images, such that local saving is enabled or disabled on a per image basis. In alternative embodiments, the content sharing policy can apply a single rule to a group of images.
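A content sharing policy of the kind described at block 602 can be sketched as a default rule for the group of images plus optional per-image overrides. The class and field names are illustrative assumptions:

```python
class SharingPolicy:
    """Illustrative sketch of a curator-defined content sharing policy:
    a single default rule for the whole group of images, optionally
    overridden on a per-image basis (block 602)."""

    def __init__(self, default_allow_save, per_image=None):
        self.default_allow_save = default_allow_save
        # Maps image identifiers to per-image overrides of the default.
        self.per_image = per_image or {}

    def allows_save(self, image_id):
        """Return whether a viewer device may locally save this image."""
        return self.per_image.get(image_id, self.default_allow_save)

# Saving is allowed by default, but disabled for one specific image.
policy = SharingPolicy(default_allow_save=True,
                       per_image={"img-002": False})
```

A viewer device receiving this policy would consult `allows_save` before honoring a "swipe down" save gesture, as described for blocks 654-658 below.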
At block 604, curator device 102 can transmit the content sharing policy to viewer devices 104-108. This transmission may occur at the start of the synchronized slideshow. At a later point during the slideshow, curator device 102 can receive a notification indicating that a local save was attempted by one of the viewer devices (block 606).
At block 652, viewer device 104-108 can receive, from the curator device, the content sharing policy transmitted at block 604.
At block 654, viewer device 104-108 can receive, from the viewer operating the device, an input signal indicating that the currently displayed image should be locally saved. In one embodiment, this input signal can correspond to a “swipe down” gesture on the touchscreen display of the viewer device.
In response, viewer device 104-108 can check the content sharing policy received from curator device 102; if local saving of the current image is allowed, viewer device 104-108 can store the image locally (e.g., on a storage device resident on device 104-108) (block 656). Viewer device 104-108 can then transmit a notification to curator device 102 indicating that local saving of the image was completed/attempted (block 658).
In a further embodiment, while a synchronized slideshow is in progress between curator device 102 and viewer devices 104-108, viewer devices 104-108 can enter an “offline viewing mode.” In this mode, a viewer device can “stay” on a particular image in the slideshow, even if curator device 102 has moved on to the next image. In addition, while in this mode, the viewer using the viewer device can zoom, pan, rotate, or otherwise manipulate the image in any manner, completely independently of curator device 102. Once the viewer wishes to “catch up” with the latest image in the synchronized slideshow, the viewer can activate a “resume” or “catch up” control, which will cause the viewer device to jump to the image that is currently being displayed on curator device 102.
At block 704, the viewer can freely manipulate the current image (e.g., zoom, pan, rotate, etc.), independently of the curator device's status.
At block 706, while the viewer is viewing or manipulating the current image, viewer device 104-108 can receive a command from curator device 102 to display the next image in the slideshow. In a particular embodiment, the receipt of this command can be accompanied by an audible tone that is played by viewer device 104-108 (thereby informing the viewer that the curator has moved on to another image). Upon receiving the command, viewer device 104-108 can cache a copy of the next image in local storage (block 708).
At block 710, viewer device 104-108 may receive an input signal from the viewer indicating that he/she wishes to catch up with the curator. If so, the process can move on to block 712, where viewer device 104-108 can display the copy of the next image that was cached at block 708.
If viewer device 104-108 does not receive any input signal from the viewer at block 710, the process can loop back to block 704. This can continue until the viewer finally decides to catch up with the curator.
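The offline viewing mode of blocks 704-712 can be sketched as a small state machine: while the viewer lingers on one image, incoming display commands are cached rather than applied, and "catch up" jumps straight to the newest cached image. All names here are illustrative assumptions:

```python
class OfflineViewer:
    """Sketch of the offline viewing mode: display commands received
    while offline are cached (block 708) instead of applied; catching
    up (block 712) jumps to the image currently on the curator device."""

    def __init__(self, current_image):
        self.current_image = current_image
        self.offline = False
        self.latest_cached = None

    def on_display_command(self, image_id):
        if self.offline:
            self.latest_cached = image_id  # cache locally, don't display
        else:
            self.current_image = image_id

    def catch_up(self):
        # "Resume"/"catch up" control: display the cached latest image
        # and rejoin the synchronized slideshow.
        if self.latest_cached is not None:
            self.current_image = self.latest_cached
            self.latest_cached = None
        self.offline = False

viewer = OfflineViewer("img-003")
viewer.offline = True                 # viewer stays on img-003
viewer.on_display_command("img-004")  # curator moves on; cached only
viewer.on_display_command("img-005")
viewer.catch_up()                     # viewer jumps straight to img-005
```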
The remaining figures in the present disclosure (
The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. For example, although certain embodiments have been described with respect to particular process flows and steps, it should be apparent to those skilled in the art that the scope of the present invention is not strictly limited to the described flows and steps. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added, or omitted. As another example, although certain embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are possible, and that specific operations described as being implemented in software can also be implemented in hardware and vice versa.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. Other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as set forth in the following claims.
The present application claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 61/647,704, filed May 16, 2012, entitled “NETWORK IMAGE SHARING WITH SYNCHRONIZED IMAGE DISPLAY AND MANIPULATION,” the entire contents of which are incorporated herein by reference for all purposes.