1. Field of the Invention
The present invention relates to interacting with media content and, more particularly, to interacting with media content so as to provide a unified experience of the media content across different devices.
2. Description of the Related Art
Powered by recent advances in digital media technology, there is a rapid increase in the variety of different ways of interacting with digital media content, such as images (e.g., photos), text, audio items (e.g., audio files, including music or songs), or videos (e.g., movies). In the past, consumers were constrained to interacting with digital media content at their desktop or in the living room of their home. Today, portability lets people enjoy digital media content at any time and in any place, using a variety of different media devices.
While the portability of media content and the availability of a variety of different media devices with different sizes, weights and capabilities offer many options to users, some challenges still remain. One difficulty is that interaction with media content across different devices may be tedious, difficult or confusing to some users. Further, while in any given set of circumstances one device may be preferred over another, changing from one device to another tends to be difficult, confusing or inconvenient.
For example, while a full size device may provide a rich experience of a football game video at home, circumstances change when a viewer is interrupted by a need to leave home, for example, to catch a ride to the airport. Under such changed circumstances, a portable device would be needed to view the video. The user would need to provide the football game video to the portable device and thereafter start playback of the video while riding to the airport. Hence, a significant amount of care and effort is required for a user to change between devices.
Thus, there is a need for improved techniques for interacting with media content across different devices.
Improved techniques are disclosed for interacting with media content so as to provide a unified experience of the media content across different devices. The media content may comprise digital media content, such as images (e.g., photos), text, audio items (e.g., audio files, including music or songs), or videos (e.g., movies). One of the devices may comprise a handheld multifunction device capable of various media activities, such as playing or displaying each of images (e.g., photos), text, audio items (e.g., audio files, including music or songs), and videos (e.g., movies) in digital form. Another one of the devices may comprise a non-handheld base computing unit, which is also capable of such various media activities.
The invention can be implemented in numerous ways, including as a method, system, device, apparatus (including graphical user interface), or computer readable medium. Several embodiments of the invention are discussed below.
As a computer readable medium including at least computer program code stored therein for presenting media content on a display of another device, one embodiment includes at least: computer program code for displaying content on a display of a portable multifunction device; computer program code for detecting a predefined gesture with respect to the portable multifunction device; and computer program code for communicating a status of the portable multifunction device to a remote device in response to detection of the predefined gesture with respect to the portable multifunction device.
As a computer implemented method, one embodiment includes at least the acts of: displaying media content on a touch screen display of a portable multifunction device; communicating a status of the portable multifunction device to a remote device with a remote display; and displaying the media content on the remote display in response to a predefined gesture on the touch screen display.
As a computer implemented method, another embodiment includes at least the acts of: displaying media content on a remote display of a remote device; communicating a status of the remote device to a portable multifunction device with a touch screen display; and displaying the media content on the touch screen display in response to a predefined gesture on the touch screen display.
As a computer implemented method, yet another embodiment includes at least the acts of: providing a first device with a first display, and a second device with a second display; displaying media content on the first display of the first device; detecting a presence of the first device or the second device, or detecting a proximity of the first device and the second device; detecting a predefined gesture of a user; and displaying the media content on the second display in response to detecting the predefined gesture and detecting the presence or the proximity.
As a computer readable medium including at least computer program code for managing display of media content on a first device with a first display, and a second device with a second display, one embodiment includes at least: computer program code for displaying media content on the first display of the first device; computer program code for detecting a presence of the first device or the second device, or detecting a proximity of the first device and the second device; computer program code for detecting a predefined gesture of a user; and computer program code for displaying the media content on the second display in response to detecting the predefined gesture and detecting the presence or the proximity.
As a computer system, one embodiment includes at least: a first device hosting media content and having a first display; a first user interface for controlling display of the media content on the first display; a second device having a second display; at least one first sensor for sensing a predefined gesture of a user; at least one second sensor for sensing a presence of the first device or the second device, or for sensing a proximity of the first device and the second device; and control logic coupled with the first and second sensors and configured for facilitating display of the media content on the second display in response to detecting the predefined gesture and detecting the presence or the proximity.
Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention.
The invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
Improved techniques are disclosed for interacting with media content so as to provide a unified experience of the media content across different devices. The media content may comprise digital media content, such as images (e.g., photos), text, audio items (e.g., audio files, including music or songs), or videos (e.g., movies). One of the devices may comprise a handheld multifunction device capable of various media activities, such as playing or displaying each of images (e.g., photos), text, audio items (e.g., audio files, including music or songs), and videos (e.g., movies) in digital form. Another one of the devices may comprise a non-handheld base computing unit, which is also capable of such various media activities.
Embodiments of the invention are discussed below with reference to
Control logic 140 of the first device 130 may utilize one or more of a plurality of sensors 150. Further, the control logic 140 of the first device 130 may be coupled with one or more of a plurality of sensors 150 for presence or proximity recognition and for gesture recognition (a presence or proximity recognition and gesture recognition component 142 of the control logic 140 of the first device 130 may be used), media activity status recognition (using a media activity status recognition component 144 of the control logic 140 of the first device 130) or media content distribution (using a media content distribution component 146 of the control logic 140 of the first device 130).
Similarly, a second user interface 220 may be coupled with a second device 230 for controlling operation of one or more of a plurality of media activities 222 of the second device 230. The second device may comprise a remote device. More specifically, the second device or remote device may comprise a non-handheld base computing unit, capable of various media activities. As examples, the second device can pertain to a desktop computer, a large display screen, a set-top box, or a portable computer. Control logic 240 of the second device 230 may utilize one or more of the plurality of sensors 150. Further, the control logic 240 of the second device 230 may be coupled with one or more of the plurality of sensors 150 for presence or proximity recognition and for gesture recognition (a presence or proximity recognition and gesture recognition component 242 of the control logic 240 of the second device 230 may be used), media activity status recognition (using a media activity status recognition component 244 of the control logic 240 of the second device 230) or media content distribution (using a media content distribution component 246 of the control logic 240 of the second device 230).
Media activity status 112 of media content displayed on one device may be sensed, and may be transferred to and recognized by the other device, so that the other device may display the media content according to the transferred media activity status 112. The media activity status 112 may comprise status of progress of the one device in playing media content, which may be sensed and may be transferred to and recognized by the other device, so that the other device may play the media content according to such progress. For example, such media activity status 112 may comprise current status of progress of playing a particular video. For example, the first device 130 may have played the particular video up to an event (e.g., a touchdown event). Such progress may be sensed and may be transferred to and recognized by the second device 230, so that the second device 230 may continue playing the particular video according to such progress, at the point of the event. The foregoing may provide a unified experience of the media content across different devices, wherein the first and second devices 130, 230 may be different devices.
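For purposes of illustration only, the transfer of media activity status described above may be sketched as follows. The sketch is a minimal, hypothetical model: the class names, fields, and the sample playback position are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: resuming playback on a second device according to
# a transferred media activity status. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class MediaActivityStatus:
    content_id: str      # identifies the media content (e.g., a video)
    position_s: float    # progress: playback position in seconds
    playing: bool        # whether playback was in progress

class Device:
    def __init__(self, name):
        self.name = name
        self.status = None

    def play(self, content_id, position_s=0.0):
        # begin (or resume) playing media content at a given position
        self.status = MediaActivityStatus(content_id, position_s, True)

    def transfer_status_to(self, other):
        # sense the local media activity status and convey it to the
        # other device, which may resume playback at the same point
        other.status = self.status
        return other.status

first = Device("handheld multifunction device")
second = Device("non-handheld base computing unit")
first.play("football_game.mp4", position_s=1325.0)  # played up to an event
resumed = first.transfer_status_to(second)
```

In this sketch the second device receives the same content identifier and playback position, so it can continue the video at the point of the event rather than restarting it.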
In particular, the plurality of sensors 150 may comprise a software sensor for sensing the media activity status of media content displayed on the first device 130. The media activity status of the first device 130 may be sensed by the software sensor, and may be transferred and recognized using the media activity status recognition component 244 of the control logic 240 of the second device 230, so that the second device 230 may display the media content according to the transferred media activity status 112.
Similarly, the plurality of sensors 150 may further comprise a software sensor for sensing the media activity status of media content displayed on the second device 230. The media activity status of the second device 230 may be sensed by the software sensor, and may be transferred and recognized using the media activity status recognition component 144 of the control logic 140 of the first device 130, so that the first device 130 may display the media content according to the transferred media activity status 112.
Further, the plurality of sensors 150 may comprise one or more software sensors S1, S2, . . . , SN for sensing presence of media content stored in long term memory of the first device 130, and/or may comprise one or more software sensors S1, S2, . . . , SN for sensing presence of media content stored in long term memory of the second device 230. If the software sensors sense that media content stored in the first device 130 is not already stored in the second device 230 (i.e. is absent), then media content 114 may be distributed to the second device 230 using the media content distribution component 146 of the control logic 140 of the first device 130, so that the second device 230 may display the media content 114.
Similarly, if the software sensors sense that media content stored in the second device 230 is not already stored in the first device 130 (i.e., is absent), then media content 114 may be distributed to the first device 130 using the media content distribution component 246 of the control logic 240 of the second device 230, so that the first device 130 may display the media content 114.
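The absence-triggered distribution described in the two paragraphs above may be sketched, purely for illustration, as follows. The in-memory store standing in for "long term memory" and the function names are assumptions of this sketch, not part of the disclosure.

```python
# Hypothetical sketch: distribute media content only when the software
# sensors find it absent on the receiving device. Names are illustrative.
class MediaStore:
    def __init__(self, items=None):
        self.items = dict(items or {})   # content_id -> media data

    def has(self, content_id):
        # software-sensor role: sense presence of stored media content
        return content_id in self.items

def distribute_if_absent(source, target, content_id):
    """Copy content from source to target only if the target lacks it."""
    if source.has(content_id) and not target.has(content_id):
        target.items[content_id] = source.items[content_id]
        return True   # content was distributed
    return False      # already present (or not available): nothing to do

first_store = MediaStore({"movie1": b"...video data..."})
second_store = MediaStore()
sent = distribute_if_absent(first_store, second_store, "movie1")
```

A second call with the same content identifier would return False, since the content is no longer absent on the receiving device.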
In one embodiment, the plurality of sensors 150 may comprise one or more software sensors for sensing media content shown in an active window display of the user interface 120 of the first device 130, and may comprise one or more software sensors for sensing media content shown in an active window display of the user interface 220 of the second device 230. The control logic 140 may be configured for transferring to the second device 230 the media content shown in the active window display of the first device 130. The control logic 240 may be configured for transferring to the first device 130 the media content shown in the active window display of the second device 230.
In light of the foregoing, it should be understood that the control logic 140 may be configured for automatically determining the media content for transfer to the second device 230, and transferring the media content to the second device 230 (or may be configured for automatically determining the media content for transfer to the first device, and transferring the media content to the first device). Media content may be distributed wirelessly, using wireless communication electronics. For example, near field communication electronics or Bluetooth™ electronics or WiFi networking electronics may be used.
In discussions of the control logic 140 of the first device 130 and of the control logic 240 of the second device 230, as well as discussions of any other logics herein, it should be understood that &#8220;logic&#8221; includes but is not limited to hardware, firmware, software and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another logic, method, and/or system. For example, based on a desired application or needs, logic may include a software controlled microprocessor, discrete logic such as an application specific integrated circuit (ASIC), a programmed logic device, a memory device containing instructions, or the like. Logic may include one or more gates, combinations of gates, or other circuit components. Logic may also be fully embodied as software or software components. Where multiple logical logics are described, it may be possible to incorporate the multiple logical logics into one physical logic.
Further, a media application framework 160 of the first device 130 may be employed to provide media application services and/or functionality to the plurality of media activities 122 of the first device 130. The media application framework 160 of the first device 130 may control the plurality of media activities 122 of the first device 130. Similarly, a media application framework 260 of the second device 230 may be employed to provide media application services and/or functionality to the plurality of media activities 222 of the second device 230. The media application framework 260 of the second device 230 may control the plurality of media activities 222 of the second device 230. As shown in
The plurality of sensors 150 may comprise a software sensor 216 or a plurality of software sensors for sensing media content or media activity status. One or more of the devices may have displayed one or more active windows that highlight particular media content (e.g., a photograph from a photograph library, a photograph that was taken by a device camera or camera functionality, or an audio or video track). One or more software sensors may sense particular or highlighted media content, or may sense media content within an active window.
Further, the plurality of sensors may comprise a software sensor for sensing the media activity status in an active display window of the media activity. In particular, the software sensor may sense media activity status of progress of the media activity of playing media content in an active display window of one device, so that the media activity status can be transferred to the other device. The other device may continue playing the media content in an active window of the other device, according to the transferred media activity status. One or more of any of the foregoing software sensors may sense commands, or machine state, or may be of a trap type for manipulating data and making operations on known variables.
The process 300 may begin with detecting 302 presence of one device or the other device, or proximity of the one device relative to the other device. In one embodiment, the presence or proximity can be detected using one or more suitable sensors of the plurality of sensors 150. The process 300 may continue with recognizing 304 a desire to transfer status (e.g., media activity status) from one device to the other device, at least in part based on proximity of the two devices, or on presence of one device or the other device. A transmission handshake (or a wireless transmission handshake) may be initiated between one device and the other device, upon recognizing the desire to transfer status.
The process may continue with transferring 306 the status (e.g., media activity status) from the first device to the second device. The process 300 can then end. The status may be transferred using wireless communication. For example, near field communication electronics, Bluetooth™ electronics or WiFi networking electronics may be used.
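The steps of process 300 may be sketched, for illustration only, as follows. The numeric distance model, the threshold value, and the simplified handshake are assumptions of this sketch, not part of the disclosure.

```python
# Minimal sketch of the status-transfer process (steps 302-306).
# The proximity model and handshake are illustrative assumptions.
PROXIMITY_THRESHOLD_M = 2.0   # assumed distance for "proximate"

def detect_proximity(distance_m):
    # step 302: detect presence/proximity via a suitable sensor
    return distance_m <= PROXIMITY_THRESHOLD_M

def handshake(initiator, responder):
    # wireless transmission handshake between the two devices,
    # modeled here as both sides signaling readiness
    return initiator.get("ready", False) and responder.get("ready", False)

def transfer_status(status, distance_m, initiator, responder):
    # steps 302-306: detect, recognize the desire to transfer,
    # handshake, then transfer the status to the other device
    if not detect_proximity(distance_m):
        return None
    if not handshake(initiator, responder):
        return None
    return dict(status)   # step 306: status conveyed wirelessly

status = {"content": "movie1", "position_s": 42.0}
result = transfer_status(status, 1.5, {"ready": True}, {"ready": True})
```

When the devices are not proximate, or the handshake fails, no status is transferred, matching the conditional flow of process 300.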
For example, the process 400 may be employed by displaying 401 media content of a media activity on a touch screen display of a handheld multifunction device. The process 400 may continue with controlling 403 media activity operation through a user interface of the handheld multifunction device. The process 400 may continue with sensing 405 a predefined gesture of a user on the touch screen display of the handheld multifunction device. The process 400 may continue with displaying 407 the media content on a remote display of a remote device according to the transferred media activity status, in response to the predefined gesture. For example, the remote device may be the non-handheld base computing unit. The process 400 may continue with controlling 409 media activity operation on the remote device, through a user interface of the remote device.
As another example, the process 400 may be employed by displaying 401 media content of a media activity on a remote display of a remote device. The process 400 may continue with controlling 403 media activity operation through a user interface of the remote device. The process 400 may continue with sensing 405 a predefined gesture of a user on a touch screen display of a handheld multifunction device. The process 400 may continue with displaying 407 the media content on the touch screen display of the handheld multifunction device according to the transferred media activity status, in response to the predefined gesture. The process 400 may continue with controlling 409 media activity operation on the handheld multifunction device, through the user interface of the handheld multifunction device.
The process 500 may begin with detecting 502 presence or proximity. For example, detecting 502 presence of one device or the other device, or proximity of the one device relative to the other device, can use one or more suitable sensors of the plurality of sensors 150. The process 500 may continue with detecting 504 a predefined gesture of a user. The process 500 may continue with recognizing 506 a desire to display content on the other device, at least in part based on the predefined gesture and on the presence or proximity. The process 500 may continue with displaying 508 the media content on the other device. After displaying 508 the media content, the process 500 can end.
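For illustration only, the decision flow of process 500, in which display on the other device requires both a predefined gesture and presence or proximity, may be sketched as follows. The boolean detection model and the list standing in for the other device's display are assumptions of this sketch.

```python
# Minimal sketch of process 500 (steps 502-508): media content is shown
# on the other device only when both detections succeed. Illustrative.
def process_500(media_content, presence_or_proximity, predefined_gesture):
    """Steps 502-506: recognize a desire to display content on the other
    device based on both the gesture and the presence/proximity;
    step 508: display the media content on the other device."""
    other_device_screen = []
    if presence_or_proximity and predefined_gesture:
        other_device_screen.append(media_content)   # step 508
    return other_device_screen

shown = process_500("movie1", presence_or_proximity=True,
                    predefined_gesture=True)
not_shown = process_500("movie1", presence_or_proximity=True,
                        predefined_gesture=False)
```

Requiring both conditions together reflects step 506, in which the desire to display content on the other device is recognized at least in part from the predefined gesture and from the presence or proximity.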
In an alternative embodiment of the process 500 for displaying media content, the handheld multifunction device may be the other device, and the non-handheld base computing unit may be the one device. In this embodiment, the media content may be displayed initially in an active display window of the non-handheld base computing unit, so that the media content can be displayed subsequently on the handheld multifunction device, as the other device.
For example, the process 600 may be employed by displaying 601 media content of a media activity on a touch screen display of a handheld multifunction device. The process 600 may continue with controlling 603 media activity operation through a user interface of the handheld multifunction device. The process 600 may continue with sensing 605 presence of the handheld multifunction device or a remote device (such as a non-handheld base computing unit), or proximity of the handheld multifunction device relative to the remote device. The process 600 may continue with sensing 607 a predefined gesture of a user on the touch screen display of the handheld multifunction device. The process 600 may continue with displaying 609 the media content on the remote display of the remote device, in response to the predefined gesture and to the presence or proximity. The process 600 may continue with controlling 611 media activity operation on the remote device, through a user interface of the remote device. Thereafter the process 600 can end.
As another example, the process 600 may be employed by displaying 601 media content of a media activity on a remote display of a remote device. The process 600 may continue with controlling 603 media activity operation through the user interface of the remote device. The process 600 may continue with sensing 605 presence of a handheld multifunction device or the remote device (such as the non-handheld base computing unit), or proximity of the handheld multifunction device relative to the non-handheld base computing unit. The process 600 may continue with sensing 607 a predefined gesture of a user on the touch screen display of the handheld multifunction device. The process 600 may continue with displaying 609 the media content on the touch screen display of the handheld multifunction device, in response to the predefined gesture and to the presence or proximity. The process 600 may continue with controlling 611 media activity operation on the handheld multifunction device, through the user interface of the handheld multifunction device. Thereafter, the process 600 can end.
One or more sensors 750 may sense presence of the handheld multifunction device 710, or may sense proximity of the handheld multifunction device 710 relative to the remote device 730. Although one or more of the sensors 750 are shown in
As shown in
As the handheld multifunction device 710 may be moved by a user through alternative positions, from the distal positions to the proximate position, the handheld multifunction device 710 may cross a preselected presence or proximity threshold of a presence or proximity recognition component of control logic. Upon crossing such presence or proximity threshold, the presence or proximity recognition component of the control logic may detect the presence or proximity.
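The threshold-crossing behavior described above may be sketched, purely for illustration, as follows. The distance samples and threshold value are assumptions of this sketch, not part of the disclosure.

```python
# Illustrative sketch: detecting when the handheld device, moving from
# distal positions to a proximate position, crosses a preselected
# presence or proximity threshold. Values here are assumptions.
THRESHOLD_M = 1.0   # preselected proximity threshold

def detect_crossing(samples_m):
    """Return the index of the first position at which the device
    crosses the threshold, or None if it never becomes proximate."""
    for i, distance in enumerate(samples_m):
        if distance <= THRESHOLD_M:
            return i   # threshold crossed: presence/proximity detected
    return None

# device moved by the user from distal positions toward the base unit
path = [3.2, 2.4, 1.6, 0.8, 0.5]
crossed_at = detect_crossing(path)
```

Upon the crossing being detected, the control logic's presence or proximity recognition component could then proceed, for example by notifying the user or initiating the status transfer.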
A user interface may comprise a notification for notifying the user upon the handheld multifunction device crossing the presence or proximity threshold. Further, the user interface may comprise a notification for notifying the user upon the control logic transferring media activity status. For example, the notification can be visual (e.g., displayed notification) or audio (e.g., sound notification).
As another example, the user interface may comprise a haptic notification for notifying the user. More particularly, a haptic device may be disposed in or on the handheld multifunction device 710 (or in or on the housing of the handheld multifunction device 710). The haptic device may be in operative communication with, and activated by, the user interface, so that the user's hand (shown holding the handheld multifunction device 710 in
The proximate position of the handheld multifunction device 710 may be understood as proximate relative to the non-handheld base computing unit 730. Accordingly, the one or more sensors 750 may comprise a proximity sensor for sensing proximity of the handheld multifunction device 710 and the non-handheld base unit 730. Similarly, it should be understood that although the one or more sensors 750 may be broadly referenced herein, proximity may be particularly sensed by one or more of near field communication electronics, piconet (e.g., Bluetooth&#8482;) electronics, an optical device, a camera (such as a webcam, a digital camera or a digital video camera), a touch screen display, an accelerometer, or a wireless transmitter and/or receiver. Notwithstanding the foregoing description of functionality for sensing presence or proximity, it should be understood that the foregoing may convey, transfer or distribute media activity status and/or media content.
In response to the one or more sensors 750 and the presence or proximate position of the handheld multifunction device 710 relative to the non-handheld base computing unit 730, one or more presence or proximity recognition components of one or more control logics may detect the presence or proximity of the handheld multifunction device 710 and/or the non-handheld base computing unit 730. Upon detecting the presence or proximity of the handheld multifunction device 710 and/or the non-handheld base computing unit 730, the media activity status can be transferred.
For example, the first device shown
Further, in
As the user's thumb moves through alternative positions of the predefined swiping touch gesture, from the distal position to the proximate position on the touch screen display 812, the predefined swiping touch gesture may be sensed by touch sensing components of the touch screen display 812, and may substantially match a predefined swiping gesture data template of a gesture recognition component of control logic. Upon substantially matching the predefined swiping gesture data template, the gesture recognition component of the control logic may detect the predefined swiping touch gesture.
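The template matching described above may be sketched, for illustration only, as a point-by-point comparison of a sensed touch trajectory against a predefined gesture data template. The template representation, the normalized coordinates, and the tolerance are assumptions of this sketch.

```python
# Hypothetical sketch: matching a sensed touch trajectory against a
# predefined swiping-gesture data template. Representation is assumed.
def substantially_matches(trajectory, template, tolerance=0.2):
    """Compare a sensed sequence of (x, y) touch points against a
    template of the same length; every point must fall within the
    tolerance for the gesture to substantially match."""
    if len(trajectory) != len(template):
        return False
    for (x, y), (tx, ty) in zip(trajectory, template):
        if abs(x - tx) > tolerance or abs(y - ty) > tolerance:
            return False
    return True

# predefined swiping gesture: thumb moves from a distal position to a
# proximate position across the touch screen (normalized coordinates)
swipe_template = [(0.9, 0.5), (0.6, 0.5), (0.3, 0.5), (0.1, 0.5)]
sensed = [(0.88, 0.52), (0.61, 0.49), (0.32, 0.5), (0.12, 0.51)]
detected = substantially_matches(sensed, swipe_template)
```

Upon a substantial match, the gesture recognition component of the control logic would detect the predefined swiping touch gesture; a trajectory in the opposite direction, for example, would not match this template.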
The second device shown in
Operation of the media activity (e.g., playing the video) may be controlled through the user interface of the remote device 830.
As another example, operation as just discussed may be reversed with respect to the handheld multifunction device 810 and the remote device 830. Specifically, media content 834 of a media activity may be displayed initially in an active window on the remote display 832 of the remote device 830. Operation of the media activity (e.g., playing the video) on the remote device 830 may be controlled through the user interface of the remote device 830. The media content 814 may be displayed subsequently on the touch screen display 812 of the handheld multifunction device 810, according to the transferred media activity status, and in response to sensing and detecting the user's predefined gesture on the touch screen display 812. Operation of the media activity (e.g., playing the video) on the handheld multifunction device 810 may be controlled through the user interface of the handheld multifunction device 810. Alternatively, media activity operation on the other device could be remotely controlled from the device.
As the user's thumb moves through alternative positions of the predefined flicking touch gesture, from the contracted position to the extended position on the touch screen display, the predefined flicking touch gesture may be sensed by touch sensing components of the touch screen display, and may substantially match a predefined flicking gesture data template of a gesture recognition component of control logic. Upon substantially matching the predefined flicking gesture data template, the gesture recognition component of the control logic may detect the predefined flicking touch gesture.
As the user's thumb and forefinger move through alternative positions of the predefined multipoint touch gesture, from distal spread positions to the proximate pinching position on the touch screen display, the predefined multipoint touch gesture may be sensed by touch sensing components of the touch screen display, and may substantially match a predefined multipoint gesture data template of the gesture recognition component of the control logic. Upon substantially matching the predefined multipoint gesture data template, the gesture recognition component of the control logic may detect the predefined multipoint touch gesture.
As the user moves the handheld multifunction device through alternative positions of the predefined shaking gesture to the resting position, the predefined shaking gesture may be sensed by the gesture sensor (for example one or more accelerometers), and may substantially match a predefined shaking gesture data template of a gesture recognition component of control logic. Upon substantially matching the predefined shaking gesture data template, the gesture recognition component of the control logic may detect the predefined shaking gesture.
As the user moves the handheld multifunction device through alternative positions of the predefined rolling gesture to the rotated position, the predefined rolling gesture may be sensed by the gesture sensor (for example one or more accelerometers), and may substantially match a predefined rolling gesture data template of a gesture recognition component of control logic. Upon substantially matching the predefined rolling gesture data template, the gesture recognition component of the control logic may detect the predefined rolling gesture.
As the user moves the handheld multifunction device through alternative positions of the predefined throwing gesture to the extended position, the predefined throwing gesture may be sensed by the gesture sensor (for example one or more accelerometers), and may substantially match a predefined throwing gesture data template of a gesture recognition component of control logic. Upon substantially matching the predefined throwing gesture data template, the gesture recognition component of the control logic may detect the predefined throwing gesture.
As the user moves the handheld multifunction device through alternative positions of the predefined tap gesture to the impact position, the predefined tap gesture may be sensed by a gesture sensor (for example one or more accelerometers), and may substantially match a predefined tap gesture data template of a gesture recognition component of control logic. Upon substantially matching the predefined tap gesture data template, the gesture recognition component of the control logic may detect the predefined tap gesture. In one embodiment, because there is an impact, either or both of a gesture sensor at the handheld multifunction device and a gesture sensor at the remote device can sense the tap gesture. The tap gesture can also serve to identify the other device. Still further, the tap gesture can authorize a wireless data exchange therebetween.
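The dual-sided sensing of the tap gesture may be sketched, for illustration only, as follows. The accelerometer magnitudes and the impact threshold are assumptions of this sketch, not part of the disclosure.

```python
# Illustrative sketch: detecting a predefined tap gesture as an impact
# spike in accelerometer samples, at either or both devices. The sample
# values and the threshold are assumptions for illustration.
IMPACT_THRESHOLD_G = 2.5   # assumed magnitude indicating an impact

def detect_tap(accel_samples_g):
    """Return True if any accelerometer magnitude sample exceeds the
    impact threshold, indicating the tap gesture's impact position."""
    return any(sample >= IMPACT_THRESHOLD_G for sample in accel_samples_g)

# because there is a physical impact, a gesture sensor at each device
# may sense the same tap at the same moment
handheld_samples = [1.0, 1.1, 3.4, 1.2]   # spike at the moment of impact
remote_samples = [1.0, 1.0, 2.9, 1.1]
tap_seen = detect_tap(handheld_samples) and detect_tap(remote_samples)
```

Detecting the same impact on both devices could, for example, serve to identify the other device and authorize a wireless data exchange between the two.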
As shown in
One or more sensors 1550 may sense presence of the handheld multifunction device 1510, may sense presence of the second device 1530, or may sense proximity of the handheld multifunction device 1510 relative to the second device 1530. As the handheld multifunction device 1510 may be moved by a user through alternative positions, from the distal positions to the proximate position, the handheld multifunction device 1510 may cross a preselected presence or proximity threshold of a presence or proximity recognition component of control logic. Upon crossing such presence or proximity threshold, the presence or proximity recognition component of the control logic may detect the presence or proximity. As shown in
The second user interface may substantially depict the first device (the handheld multifunction device) in the active window 1634 on the second display 1632. Substantially contemporaneous with the transfer of media content from the first device 1610 to the second device 1630, the second user interface may depict animation, for example an animated whirling vortex, which is shown in
As shown in
The first user interface may comprise media content shown as selected by a user in a menu display of the first device. For example, as shown in
One or more software sensors may sense media content &#8220;movie1&#8221; selected by the user in the menu displayed on the first device. Control logic may be configured for transferring to the second device 1630 the media content &#8220;movie1&#8221;, which is shown in
The first user interface may comprise media content shown as a recently viewed file in a listing display of the first device. For example, as shown in
One or more software sensors may sense media content of the recently viewed file &#8220;movie1&#8221;. The control logic may be configured for transferring to the second device 1630 the media content &#8220;movie1&#8221;, which is shown in
The invention is preferably implemented by software, but can also be implemented in hardware or a combination of hardware and software. The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
The advantages of the invention are numerous. Different aspects, embodiments or implementations may yield one or more of the following advantages. One advantage of the invention is that transitioning a media activity, such as presentation of media content, from one device to a different device may be perceived by a user as convenient, intuitive or user-friendly. Another advantage of the invention may be automatic transfer of media activity status from one device to a different device. More particularly, another advantage of the invention may be automatic transfer of status of progress of one device in playing media content, so that a different device may play the media content according to such progress. Still another advantage of the invention may be automatic media content distribution.
The many features and advantages of the present invention are apparent from the written description and, thus, it is intended by the appended claims to cover all such features and advantages of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, the invention should not be limited to the exact construction and operation as illustrated and described. Hence, all suitable modifications and equivalents may be resorted to as falling within the scope of the invention.