Present media systems are incapable of providing, or at least of conveniently providing, for user-selection of objects in a still image (e.g., a photograph). Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of ordinary skill in the art through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
Various aspects of the present invention provide a system and method for providing information of selectable objects in a still image and/or data stream, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims. These and other advantages, aspects and novel features of the present invention, as well as details of illustrative aspects thereof, will be more fully understood from the following description and drawings.
The following discussion will refer to various communication modules, components or circuits. Such modules, components or circuits may generally comprise hardware and/or a combination of hardware and software (e.g., including firmware). Such modules may also, for example, comprise a computer readable medium (e.g., a non-transitory medium) comprising instructions (e.g., software instructions) that, when executed by a processor, cause the processor to perform various functional aspects of the present invention. Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of particular hardware and/or software implementations of a module, component or circuit unless explicitly claimed as such. For example and without limitation, various aspects of the present invention may be implemented by one or more processors (e.g., a microprocessor, digital signal processor, baseband processor, microcontroller, etc.) executing software instructions (e.g., stored in volatile and/or non-volatile memory). Also for example, various aspects of the present invention may be implemented by an application-specific integrated circuit (“ASIC”) and/or other hardware components.
Additionally, the following discussion will refer to various media system modules (e.g., image presentation system modules, personal electronic device modules, computer system modules, camera modules, television modules, television receiver modules, television controller modules, modules of a user's local media system, modules of a geographically distributed media system, etc.). It should be noted that the following discussion of such various modules is segmented into such modules for the sake of illustrative clarity. However, in actual implementation, the boundaries between various modules may be blurred. For example, any or all of the functional modules discussed herein may share various hardware and/or software components. For example, any or all of the functional modules discussed herein may be implemented wholly or in-part by a shared processor executing software instructions. Additionally, various software sub-modules that may be executed by one or more processors may be shared between various software modules. Accordingly, the scope of various aspects of the present invention should not be limited by arbitrary boundaries between various hardware and/or software components, unless explicitly claimed.
The following discussion may also refer to communication networks and various aspects thereof. For the following discussion, a communication network is generally the communication infrastructure through which a communication device (e.g., a portable communication device, personal computer device, media presentation system, image presentation system, camera, media server, image server, television, television control device, television provider, television programming provider, television receiver, video and/or image recording device, etc.) may communicate with other systems. For example and without limitation, a communication network may comprise a cable and/or satellite television communication network, a cellular communication network, a wireless metropolitan area network (WMAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), a general data communication network (e.g., the Internet), any home or premises communication network, etc. A particular communication network may, for example, generally have a corresponding communication protocol according to which a communication device may communicate with the communication network. Unless so claimed, the scope of various aspects of the present invention should not be limited by characteristics of a particular type of communication network.
The following discussion may at times refer to an on-screen pointing location. Such a pointing location refers to a location on a video screen (e.g., a computer display, a display of a portable electronic device, a digital photograph display, a television display, a primary television screen, a secondary television screen, etc.) to which a user (either directly or with a pointing device) is pointing. Such a pointing location is to be distinguished from other types of on-screen location identification, such as, for example, using arrow keys and/or a mouse to move a cursor or to traverse blocks (e.g., on an on-screen program guide) without pointing. Various aspects of the present invention, while referring to on-screen pointing location, are also readily extensible to such other forms of on-screen location identification.
Additionally, the following discussion will at times refer to television programming. Such television programming may, for example, communicate still images. Such television programming generally includes various types of television programming (e.g., television programs, news programs, sports programs, music television, movies, television series programs and/or associated advertisements, educational programs, live or recorded television programming, broadcast/multicast/unicast television programming, etc.). Such television programming may, for example, comprise real-time television broadcast programming (or multicast or unicast television programming) and/or user-stored television programming that is stored in a user device (e.g., a VCR, PVR, etc.). Such television programming video content is to be distinguished from other non-programming video content that may be displayed on a television screen (e.g., an electronic program guide, user interface menu, a television set-up menu, a typical web page, a graphical video game, etc.).
The following discussion will at times refer to still images. Such still images may, for example, comprise pictures. For example, such still images may correspond to still photographs (e.g., taken with a digital camera), scanned images created by a scanner, facsimile images, etc. Such still images may, for example, be represented in a data file (e.g., a JPEG file, a bitmap, a TIFF file, etc.), or other data structure, and may be communicated in one or more streams of data.
Also, the following discussion will at times refer to user-selectable objects in a still image. Such user-selectable objects include both animate (i.e., living) and inanimate (i.e., non-living) objects. Such objects may, for example, comprise characteristics of any of a variety of objects present in still images. Such objects may, for example and without limitation, comprise inanimate objects, such as consumer good objects (e.g., clothing, automobiles, shoes, jewelry, furniture, food, beverages, appliances, electronics, toys, artwork, cosmetics, recreational vehicles, sports equipment, safety equipment, computer equipment, communication devices, books, etc.), premises objects (e.g., business locations, stores, hotels, signs, doors, buildings, landmarks, historical sites, entertainment venues, hospitals, government buildings, etc.), objects related to services (e.g., objects related to transportation, objects related to emergency services, objects related to general government services, objects related to entertainment services, objects related to food and/or drink services, etc.), objects related to location (e.g., parks, landmarks, streets, signs, road signs, etc.), etc. Such objects may, for example, comprise animate objects, such as people (e.g., actors/actresses, athletes, musicians, salespeople, commentators, reporters, analysts, hosts/hostesses, entertainers, etc.), animals (e.g., pets, zoo animals, wild animals, etc.) and plants (e.g., flowers, trees, shrubs, fruits, vegetables, cacti, etc.).
Turning first to FIG. 1, there is shown an exemplary media system 100 in accordance with various aspects of the present invention.
The media information provider 110 may, for example, provide information related to a still image (e.g., information describing or otherwise related to user-selectable objects in still images, etc.). As will be discussed below in more detail, the media information provider 110 may operate to create and/or communicate a still image (e.g., a still image data set, still image data stream, etc.) that includes embedded information of user-selectable objects in the still image. For example and without limitation, such a media information provider 110 may operate to receive a completed initial still image data set (e.g., a data file or other bounded data structure, a data stream, etc.), for example via a communication network and/or on a physical medium, and embed information of user-selectable objects in the completed initial still image data set. Also for example, such a media information provider 110 may operate to form an original still image data set (e.g., a data file or other bounded data structure, etc.) and embed information of user-selectable objects in the original still image data set during such formation (e.g., in the studio, on an enterprise computing system, on a personal computer, etc.).
Note that the media information provider 110 may be remote from a user's local image presentation system (e.g., located at a premises different from the user's premises) or may be local to the user's local image presentation system (e.g., a personal media player, a digital photo presentation system, a DVR, a personal computing device, a personal electronic device, a personal cellular telephone, a personal digital assistant, a camera, a moving picture camera, an image recorder, a data server, an image and/or television receiver, a television, etc.).
The media information provider 110 may alternatively, for example, operate to form and/or communicate a user-selectable object data set that includes information of user-selectable objects in a still image. Such a user-selectable object data set for a still image may, for example, be independent of a data set that generally represents the still image (e.g., generally represents the still image without information of user-selectable objects in such still image). For example and without limitation, such a media information provider 110 may operate to receive a completed still image data set (e.g., a data file or other finite group of data, a data stream, etc.), for example via a communication network and/or on a physical medium, and form the user-selectable object data set independent of the completed still image data set. Also for example, such a media information provider 110 may operate to form both an original still image data set and form the corresponding user-selectable object data set.
The exemplary media system 100 may also include a third party image information provider 120. Such a provider may, for example, provide information related to a still image. Such information may, for example, comprise information describing user-selectable objects in still images, media guide information, etc. As will be discussed below in more detail, such a third party image information provider (e.g., a party that may be independent of a still image source, media network operator, etc.) may operate to create a still image (or still image data set, still image data stream, etc.) that includes embedded information of user-selectable objects in the still image. For example and without limitation, such a third party image information provider 120 may operate to receive an initial completed still image data set (e.g., a data file or other bounded data structure, a data stream, etc.), for example via a communication network and/or on a physical medium, and embed information of user-selectable objects in the initial completed still image data set.
The exemplary media system 100 may include one or more communication networks (e.g., the communication network(s) 130). The exemplary communication network 130 may comprise characteristics of any of a variety of types of communication networks over which still image and/or information related to still images (e.g., information related to user-selectable objects in still images) may be communicated. For example and without limitation, the communication network 130 may comprise characteristics of any one or more of: a cable television network, a satellite television network, a telecommunication network, the Internet, a local area network (LAN), a personal area network (PAN), a metropolitan area network (MAN), any of a variety of different types of home networks, etc.
The exemplary media system 100 may include a first media presentation device 140. Such a first media presentation device 140 may, for example, comprise networking capability enabling such media presentation device 140 to communicate directly with the communication network(s) 130. For example, the first media presentation device 140 may comprise one or more embedded media receivers or transceivers (e.g., a cable television transceiver, satellite television transceiver, Internet modem, wired and/or wireless LAN transceiver, wireless PAN transceiver, etc.). Also for example, the first media presentation device 140 may comprise one or more recording devices (e.g., for recording and/or playing back media content, still images, etc.). The first media presentation device 140 may, for example, operate to (which includes “operate when enabled to”) perform any or all of the functionality discussed herein. The first media presentation device 140 may, for example, operate to receive and process still image information (e.g., via a communication network, stored on a physical medium or computer readable medium (e.g., a non-transitory computer readable medium), etc.), where such still image information comprises embedded information of user-selectable objects. The first media presentation device 140 may also, for example, operate to receive and process information of a still image and information of user-selectable objects in the still image, where such user-selectable object information and such image information are communicated independently (e.g., received in independent data files, received in independent data streams, etc.).
The exemplary media system 100 may include a first media controller 160. Such a first media controller 160 may, for example, operate to (e.g., which may include “operate when enabled to”) control operation of the first media presentation device 140. The first media controller 160 may comprise characteristics of any of a variety of media presentation controlling devices. For example and without limitation, the first media controller 160 may comprise characteristics of a dedicated media center control device, a dedicated image presentation device controller, a dedicated television controller, a universal remote control, a cellular telephone or personal computing device with media presentation control capability, etc.
The first media controller 160 may, for example, transmit signals directly to the first media presentation device 140 to control operation of the first media presentation device 140. The first media controller 160 may also, for example, operate to transmit signals (e.g., via the communication network(s) 130) to the media information provider 110 and/or the third party image information provider 120 to control image information (or information related to an image) being provided to the first media presentation device 140 or other device with image presentation capability, or to conduct other transactions (e.g., business transactions, etc.).
As will be discussed in more detail later, the first media controller 160 may operate to communicate screen (or display) pointing information with the first media presentation device 140 and/or other devices. Also, as will be discussed in more detail later, various aspects of the present invention include a user pointing to a location on a display (e.g., pointing to an animate or inanimate user-selectable object presented in an image on the display). In such a scenario, the user may perform such pointing in any of a variety of manners. One of such exemplary manners includes pointing with a user device. The first media controller 160 provides a non-limiting example of a device that a user may utilize to point to an on-screen location.
Additionally, for example in a scenario in which the first media controller 160 comprises an on-board display, the first media controller 160 may operate to receive and process still image information (e.g., via a communication network, stored on a physical medium or computer readable medium (e.g., a non-transitory computer readable medium), etc.), where such still image information comprises embedded information of user-selectable objects. As another example, in such a scenario, the first media controller 160 may operate to receive and process still image information and information of user-selectable objects in the still image (e.g., via one or more communication networks, stored on one or more physical media or computer readable media (e.g., non-transitory computer readable media), etc.), where such still image information and user-selectable object information are communicated independently.
As will be mentioned throughout the following discussion, various aspects of the invention will be performed by one or more devices, components and/or modules of a user's local media presentation system. The first media presentation device 140 and first media controller 160 provide a non-limiting example of a user's local media presentation system. Such a user's local media presentation system, for example, generally refers to the media-related devices that are local to the media presentation system currently being utilized by the user. For example, when a user is utilizing a media presentation system located at the user's home, the user's local media presentation system generally refers to the media-related devices that make up the user's home media presentation system. Also for example, when a user is utilizing a media presentation system at a premises away from the user's home (e.g., at another home, at a hotel, at an office, etc.), the user's local media presentation system generally refers to the media-related devices that make up the premises media presentation system. Such a user's local media presentation system does not, for example, comprise media network infrastructure devices that are generally outside of the user's current premises (e.g., Internet nodes, cable and/or satellite head-end apparatus, cable and/or satellite intermediate communication network nodes, etc.) and/or media source devices that are generally managed by media enterprises and generally exist outside of the user's premises. Such entities, which may be communicatively coupled to the user's local media presentation system, may be considered to be entities remote from the user's local media presentation system (or “remote entities”).
The exemplary media system 100 may also include a media (e.g., still image) receiver 151. The media receiver 151 may, for example, operate to (e.g., which may include “operate when enabled to”) provide a communication link between a media presentation device and/or media controller and a communication network and/or information provider. For example, the media receiver 151 may operate to provide a communication link between the second media presentation device 141 and the communication network(s) 130, or between the second media presentation device 141 and the media information provider 110 (and/or third party image information provider 120) via the communication network(s) 130.
The media receiver 151 may comprise characteristics of any of a variety of types of media receivers. For example and without limitation, the media receiver 151 may comprise characteristics of a cable television receiver, a satellite television receiver, a still image receiver, a personal computer, a still picture (or still image) camera, a moving picture camera, etc. Also for example, the media receiver 151 may comprise a data communication network modem for data network communications (e.g., with the Internet, a LAN, PAN, MAN, telecommunication network, etc.). The media receiver 151 may also, for example, comprise recording capability (e.g., still image recording and playback, etc.).
Additionally, for example in a scenario in which the media receiver 151 comprises an on-board display and/or provides still image information to a display (or media presentation device) communicatively coupled thereto, the media receiver 151 may operate to receive and process still image information (e.g., via a communication network, stored on a physical medium or computer readable medium (e.g., a non-transitory computer readable medium), etc.), where such still image information comprises embedded information of user-selectable objects. As another example, in such a scenario, the media receiver 151 may operate to receive and process still image information and information of user-selectable objects in the still image (e.g., via one or more communication networks, stored on one or more physical media or computer readable media (e.g., non-transitory computer readable media), etc.), where such still image information and user-selectable object information are communicated independently.
The exemplary media system 100 may include a second media controller 161. Such a second media controller 161 may, for example, operate to (e.g., which may include “operate when enabled to”) control operation of the second media presentation device 141 and the media receiver 151. The second media controller 161 may comprise characteristics of any of a variety of media presentation controlling devices. For example and without limitation, the second media controller 161 may comprise characteristics of a dedicated media center control device, a dedicated image presentation device controller, a dedicated television controller, a universal remote control, a cellular telephone or personal computing device with media presentation control capability, etc.
The second media controller 161 may, for example, operate to transmit signals directly to the second media presentation device 141 to control operation of the second media presentation device 141. The second media controller 161 may, for example, operate to transmit signals directly to the media receiver 151 to control operation of the media receiver 151. The second media controller 161 may additionally, for example, operate to transmit signals (e.g., via the media receiver 151 and the communication network(s) 130) to the media information provider 110 and/or the third party image information provider 120 to control image information (or information related to an image) being provided to the media receiver 151, or to conduct other transactions (e.g., business transactions, etc.).
As will be discussed in more detail later, various aspects of the present invention include a user selecting a user-selectable object in an image. Such selection may, for example, comprise the user pointing to a location on a display (e.g., pointing to an animate or inanimate object presented in an image on the display). In such a scenario, the user may perform such pointing in any of a variety of manners. One of such exemplary manners includes pointing with a user device. The second media controller 161 provides one non-limiting example of a device that a user may utilize to point to an on-screen location. Also, in a scenario in which the second media controller 161 comprises a touch screen, a user may touch a location of such touch screen to point to an on-screen location (e.g., to select a user-selectable object presented in an image presented on the touch screen).
As will be mentioned throughout the following discussion, and as mentioned previously in the discussion of the first media presentation device 140 and first media controller 160, various aspects of the invention will be performed by one or more devices, components and/or modules of a user's local media system. The second media presentation device 141, media receiver 151 and second media controller 161 provide another non-limiting example of a user's local media system.
Additionally, for example in a scenario in which the second media controller 161 comprises an on-board display, the second media controller 161 may operate to receive and process still image information (e.g., via a communication network, stored on a physical medium or computer readable medium (e.g., a non-transitory computer readable medium), etc.), where such still image information comprises embedded information of user-selectable objects. As another example, in such a scenario, the second media controller 161 may operate to receive and process still image information and information of user-selectable objects in the still image (e.g., via one or more communication networks, stored on one or more physical media or computer readable media (e.g., non-transitory computer readable media), etc.), where such still image information and user-selectable object information are communicated independently.
The exemplary media system 100 was presented to provide a non-limiting illustrative foundation for the discussion of various aspects of the present invention. Thus, the scope of various aspects of the present invention should not be limited by any characteristics of the exemplary media system 100 unless explicitly claimed.
The exemplary method 200 may, for example, begin executing at step 205. The exemplary method 200 may begin executing in response to any of a variety of causes and/or conditions, non-limiting examples of which will now be provided. For example, the exemplary method 200 may begin executing in response to a user command to begin (e.g., a user at a media source, a user at a media production studio, a user at a media distribution enterprise, etc.), in response to still image information and/or information of user-selectable objects in a still image arriving at a system entity implementing the method 200, in response to an electronic request communicated from an external entity to a system entity implementing the method 200, in response to a timer, in response to a request from an end user and/or a component of a user's local media system for a still image including information of user-selectable objects, in response to a request from a user for a still image where such user is associated in a database with still images comprising user-selectable objects, upon reset and/or power-up of a system component implementing the exemplary method 200, in response to identification of a user and/or user equipment for which object selection capability is to be provided, in response to user payment of a fee, etc.
The exemplary method 200 may, for example at step 210, comprise receiving image information (e.g., picture information) for a still image. Various examples of such still images were provided above. Note that, depending on the particular implementation, such still image information may, for example, be received with or without information describing user-selectable objects in such a still image.
Step 210 may comprise receiving the still image information from any of a variety of sources, non-limiting examples of which will now be provided. For example and without limitation, step 210 may comprise receiving the still image information from a still image broadcasting company, from a data streaming company, from a still image studio, from a still image database or server, from a camera or other still image recording device, from a scanner, from a facsimile machine, from an Internet still image provider, etc.
Step 210 may comprise receiving the still image information via any of a variety of types of communication networks, non-limiting examples of which were provided above. Such networks may, for example, comprise any of a variety of general data communication networks (e.g., the Internet, a local area network, a personal area network, a metropolitan area network, etc.). Such networks may, for example, comprise a wireless television network (e.g., terrestrial and/or satellite) and/or cable television network. Such networks may, for example, comprise a local wired network, point-to-point communication link between two devices, etc.
Step 210 may comprise receiving the still image information from any of a variety of types of hard media (e.g., optical storage media, magnetic storage media, etc.). Such hard media may, for example, comprise characteristics of optical storage media (e.g., compact disc, digital versatile disc, Blu-ray®, laser disc, etc.), magnetic storage media (e.g., hard disc, diskette, magnetic tape, etc.), and computer memory devices (e.g., a non-transitory computer readable medium, flash memory, one-time-programmable memory, read-only memory, random access memory, thumb drive, etc.). Such memory may, for example, be a temporary and/or permanent component of the system entity implementing the method 200. For example, in a scenario including the utilization of such hard media, step 210 may comprise receiving the still image information from such a device and/or from a reader of such a device (e.g., directly via an end-to-end conductor or via a communication network).
In an exemplary scenario, step 210 may comprise receiving a completed still image data set (e.g., a complete picture data set) for a still image, the completed still image data set formatted for communicating the still image without information describing user-selectable objects in the still image. For example, the received completed still image data set may be in conformance with a still image standard (e.g., JPEG, TIFF, GIF, BMP, etc.). For example, such a data set may be a data file (or set of logically linked data files) formatted in a JPEG or PDF format for normal presentation on a user's local image presentation system. Such a data set of a still image, when received at step 210, might not have information of user-selectable objects in the still image. Such information of user-selectable objects may then, for example, be added, as will be explained below.
In another exemplary scenario, step 210 may comprise receiving still image information (e.g., picture information) for the still image prior to the still image information being formatted into a completed still image data set for communicating the still image. In an exemplary implementation, step 210 may comprise receiving still image information (e.g., a bitmap, partially encoded still image information, etc.) that will be formatted in accordance with a still image standard, but which has not yet been so formatted. Such a data set of a still image, when received at step 210, might not have information of user-selectable objects in the still image. Such information of user-selectable objects may then, for example, be added, as will be explained below.
In yet another exemplary scenario, step 210 may comprise receiving a completed still image data set (e.g., a complete picture data set) for the still image, the completed still image data set formatted for communicating the still image with information describing user-selectable objects in the still image. For example, the received completed still image data set may be in conformance with a still image standard (e.g., JPEG, TIFF, GIF, etc.), or a variant thereof, that specifically accommodates information of user-selectable objects in the still image. Also for example, the received completed still image (or picture) data set may be in conformance with a still image standard (e.g., JPEG et al., TIFF, GIF, JBIG et al., PNG, AGP, AI, ANI, BMP, DNG, DCS, DCR, ECW, EMF, ICO, PDF, etc.), or a variant thereof, that while not specifically accommodating information of user-selectable objects in the still image, allows for the incorporation of such information in unassigned data fields. For example, such a data set may be a data file (or set of logically linked data files) formatted in a JPEG format for normal presentation on a user's local image presentation system. Such a data set of a still image, when received at step 210, might comprise information of user-selectable objects in the still image. Such information of user-selectable objects may then, for example, be deleted, modified and/or appended, as will be explained below.
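As a concrete, hedged illustration of this scenario, the following Python sketch scans a received JPEG data set for embedded user-selectable object information carried in an application (APPn) segment. The choice of the APP11 marker and the "UOBJ" tag is an illustrative assumption; the JPEG standard provides APPn segments for application-specific data but does not itself define a segment for user-selectable object information.

```python
import struct

def find_object_segment(jpeg_bytes):
    """Scan a JPEG byte stream for an assumed 'UOBJ' APP11 segment."""
    if jpeg_bytes[:2] != b"\xff\xd8":                  # SOI marker
        raise ValueError("not a JPEG stream")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:                      # malformed or scan data reached
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                             # SOS: entropy-coded data follows
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and payload[:4] == b"UOBJ":  # APP11 + assumed tag
            return payload[4:]                         # embedded object information
        i += 2 + length
    return None                                        # no embedded object info found
```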
Step 210 may, for example, comprise receiving the still image information in digital and/or analog signals. Though the examples provided above generally concerned the receipt of digital data, such examples are readily extendible to the receipt of analog still image information.
In general, step 210 may comprise receiving still image information for a still image. Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of any particular type of still image information or by any particular manner of receiving still image information unless explicitly claimed.
The exemplary method 200 may, at step 220, comprise receiving object information corresponding to a user-selectable object in the still image. Many non-limiting examples of receiving such object information will now be provided.
Step 220 may comprise receiving the user-selectable object information from any of a variety of sources, non-limiting examples of which will now be provided. For example and without limitation, step 220 may comprise receiving the user-selectable object information from a media (or image) broadcasting company, from a media (or image) streaming company, from a media (or image) studio, from a still image database or server, from an advertising company, from a commercial enterprise associated with a user-selectable object in a still image, from a person or organization associated with a user-selectable object in a still image, from an Internet still image provider, from a third party still image information source, from an end user's process executing on an end user's personal computer, etc.
Step 220 may comprise receiving the user-selectable object information from a plurality of independent sources. For example, in an exemplary scenario in which a still image includes user-selectable objects corresponding to a plurality of respective interested parties (e.g., respective product sponsors, respective leagues or other associations, respective people, etc.), step 220 may comprise receiving the user-selectable object information from each of such respective interested parties. For example, step 220 may comprise receiving user-selectable object information corresponding to a user-selectable consumer good in a still image from a provider of such consumer good, receiving user-selectable object information corresponding to an entertainer in the still image from the entertainer's management company, receiving user-selectable object information corresponding to a user-selectable historical landmark in the still image from a society associated with the historical landmark, receiving user-selectable object information corresponding to a user-selectable object in the still image associated with a service from a provider of such service, etc. In such a multiple-source scenario, step 220 may comprise aggregating the user-selectable object information received from the plurality of sources (e.g., into a single user-selectable object data set) for ultimate combination of such user-selectable object information with received still image information.
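By way of a hedged sketch of such aggregation, the following Python fragment merges user-selectable object records received from a plurality of independent sources into a single user-selectable object data set. The record layout (an "object_id" key plus descriptive fields) is an illustrative assumption rather than a prescribed format.

```python
def aggregate_object_info(submissions):
    """Merge per-source object records into one data set keyed by object id."""
    combined = {}
    for source, records in submissions:
        for record in records:
            entry = combined.setdefault(record["object_id"], {"sources": []})
            entry["sources"].append(source)
            entry.update({k: v for k, v in record.items() if k != "object_id"})
    return combined

# Example: a consumer-good provider and a management company each describe a
# different user-selectable object appearing in the same still image.
combined_set = aggregate_object_info([
    ("acme_shoes",  [{"object_id": "shoe_01",   "info_url": "http://example.com/shoe"}]),
    ("talent_mgmt", [{"object_id": "singer_01", "bio": "..."}]),
])
```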
Step 220 may, for example, comprise receiving the user-selectable object information from a same source as that from which the still image information was received at step 210 or may comprise receiving the user-selectable object information from a different source. For example and without limitation, step 220 may comprise receiving the user-selectable object information from an advertising company, while step 210 comprises receiving the still image information from a still image studio. In another example, step 220 may comprise receiving the user-selectable object information from a commercial enterprise associated with a consumer good object presented in the still image, while step 210 comprises receiving the still image information from an image server of a sports network.
In yet another example, step 220 may comprise receiving the user-selectable object information directly from a computer process that generates such information. For example, an operator may display a still image on an operator station and utilize graphical tools (e.g., boxes or other polygons, edge detection routines, etc.) to define a user-selectable object in the still image. Such a computer process may then output information describing the object in the still image. Step 220 may comprise receiving the information output from such process.
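Purely as an illustrative assumption of what such a process might output, the following sketch shows object information for a single user-selectable object: a polygon outlining the object in still-image pixel coordinates, together with descriptive and action information. All field names and values here are hypothetical.

```python
import json

object_info = {
    "object_id": "landmark_01",
    "label": "historic lighthouse",
    "boundary": {                       # polygon in still-image pixel coordinates
        "type": "polygon",
        "points": [[412, 88], [501, 90], [498, 240], [409, 236]],
    },
    "action": {"type": "retrieve", "url": "http://example.com/lighthouse"},
}
print(json.dumps(object_info, indent=2))
```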
Step 220 may comprise receiving the user-selectable object information via any of a variety of types of communication networks, many examples of which were provided previously. Such networks may, for example, comprise any of a variety of general data communication networks (e.g., the Internet, a local area network, a personal area network, a metropolitan area network, etc.). Such networks may, for example, comprise a media network (e.g., a terrestrial and/or satellite media network).
Step 220 may, for example, comprise receiving the user-selectable object information via a same communication network as that via which the still image information was received at step 210 or may comprise receiving the user-selectable object information via a different communication network. For example and without limitation, step 220 may comprise receiving the user-selectable object information via a general data communication network (e.g., the Internet), while step 210 comprises receiving the still image information via a television network. In another example, step 220 may comprise receiving the user-selectable object information via a general data network, while step 210 comprises receiving the still image information from a computer readable medium (e.g., a non-transitory computer readable medium).
Step 220 may comprise receiving the user-selectable object information from any of a variety of types of hard media (e.g., optical storage media, magnetic storage media, etc.). Such hard media may, for example, comprise characteristics of optical storage media (e.g., compact disc, digital versatile disc, Blu-ray®, laser disc, etc.), magnetic storage media (e.g., hard disc, diskette, magnetic tape, etc.), and computer memory devices (e.g., a non-transitory computer readable medium, flash memory, one-time-programmable memory, read-only memory, random access memory, thumb drive, etc.). Such memory may, for example, be a temporary and/or permanent component of the system entity implementing the method 200. For example, in a scenario including the utilization of such hard media, step 220 may comprise receiving the user-selectable object information from such a device and/or from a reader of such a device (e.g., directly via an end-to-end conductor or via a communication network).
The object information corresponding to one or more user-selectable objects that is received at step 220 may comprise any of a variety of characteristics, non-limiting examples of which will now be provided.
For example, such user-selectable object information may comprise information describing and/or defining the user-selectable object that is shown in the still image. Such information may, for example, be processed by a recipient of such information to identify an object that is being selected by a user. Such information may, for example, comprise information describing boundaries associated with a user-selectable object in the still image (e.g., actual object boundaries (e.g., an object outline), areas generally coinciding with a user-selectable object (e.g., a description of one or more geometric shapes that generally correspond to a user-selectable object), selection areas that when selected indicate user-selection of a user-selectable object (e.g., a superset and/or subset of a user-selectable object in the still image), etc.). Such information may, for example, describe and/or define the user-selectable object in a still image coordinate system.
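For example, a recipient of such boundary information might resolve a user's on-screen pointing location (translated into still-image coordinates) to a user-selectable object with a standard point-in-polygon (ray casting) test. The following sketch assumes the hypothetical polygon layout shown earlier.

```python
def point_in_polygon(x, y, points):
    """Ray-casting test: True if (x, y) falls inside the polygon."""
    inside = False
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def resolve_selection(x, y, objects):
    """Return the first object whose boundary contains the pointed location."""
    for obj in objects:
        if point_in_polygon(x, y, obj["boundary"]["points"]):
            return obj
    return None
```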
Many examples of such object description information are provided in a variety of related U.S. Patent Applications. For example, as mentioned previously, U.S. patent application Ser. No. 12/774,380, filed May 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21037US02; U.S. patent application Ser. No. 12/850,832, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A DISTRIBUTED SYSTEM FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21038US02; U.S. patent application Ser. No. 12/850,866, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION RECEIVER FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21039US02; U.S. patent application Ser. No. 12/850,911, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION CONTROLLER FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21040US02; U.S. patent application Ser. No. 12/850,945, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION CONTROLLER FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21041US02; U.S. patent application Ser. No. 12/851,036, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION SYSTEM FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21051US02; and U.S. patent application Ser. No. 12/851,075, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A PARALLEL TELEVISION SYSTEM FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21052US02, which are hereby incorporated herein by reference in their entirety, provide many examples of information describing (or otherwise related to) user-selectable objects in television programming, which may also, for example, apply herein to user-selectable objects in still images.
Also for example, such user-selectable object information may comprise information describing the object, where such information may be presented to the user upon user-selection of a user-selectable object. For example, such object information may comprise information describing physical characteristics of a user-selectable object, background information, historical information, general information of interest, location information, financial information, travel information, commerce information, personal information, etc.
Additionally for example, such user-selectable object information may comprise information describing and/or defining actions that may be taken upon user-selection of a user-selectable object. Non-limiting examples of such actions and/or of related information corresponding to a respective user-selectable object will now be presented.
For example, such user-selectable object information may comprise information describing one or more manners of determining information to present to the user (e.g., retrieving such information from a known location, conducting a search for such information, etc.), of establishing a communication session by which a user may interact with networked entities associated with a user-selected object, of interacting with a user regarding display of a user-selected object and/or associated information, etc.
For example, such user-selectable object information may comprise information describing one or more manners of obtaining one or more sets of information, where such information may then, for example, be presented to the user. For example, such information may comprise a memory address (or data storage address) and/or a communication network address (e.g., an address of a networked data server, a URL, etc.), where such address may correspond to a location at which information corresponding to the identified object may be obtained. Such information may, for example, comprise a network address of a component with which a communication session may be initiated and/or conducted (e.g., to obtain information regarding the user-selected object, to interact with the user regarding the selected object, etc.).
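A minimal sketch, assuming the object record carries a URL-style network address as just described, of obtaining the corresponding information for presentation; the field names are the hypothetical ones used in the earlier sketches.

```python
from urllib.request import urlopen

def fetch_object_details(obj):
    """Retrieve presentable information from the object's stated address."""
    url = obj["action"]["url"]          # e.g., address of a networked data server
    with urlopen(url) as response:
        return response.read()
```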
In an exemplary scenario in which the user-selectable object information comprises information to present to a user upon user-selection of a selectable object in a still image, such information may comprise any of a variety of different types of information related to the user-selected object. For example and without limitation, such information may comprise information describing the user-selectable object (e.g., information describing aspects of the object, history of the object, design of the object, source of the object, price of the object, critiques of the object, information provided by commercial enterprises producing and/or providing such object, etc.), information indicating to the user how the user may obtain the selected object, information indicating how the user may utilize the selected object, etc. The information may, for example, comprise information of one or more non-commercial organizations associated with, and/or having information pertaining to, the identified user-selected object (e.g., non-profit and/or government organization contact information, web site address information, etc.).
In another exemplary scenario, the information corresponding to a user-selectable object in the still image may comprise information related to conducting a search for information corresponding to the user-selectable object. Such information may, for example, comprise network search terms that may be utilized in a search engine to search for information corresponding to the user-selected object. Such information may also comprise information describing the network boundaries of such a search, for example, identifying particular search networks, particular servers, particular addresses, particular databases, etc.
In an exemplary scenario, the information corresponding to a user-selectable object may describe a manner in which a system is to interact with a user to more clearly identify information desired by the user. For example, such information may comprise information specifying user interaction that should take place when an amount of information available and corresponding to a user-selectable object exceeds a particular threshold. Such user interaction may, for example, help to reduce the amount of information that may ultimately be presented to the user. For example, such information may comprise information describing a user interface comprising providing a list (or menu) of types of information available to the user and soliciting information from the user regarding the selection of one or more of the listed types of information.
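A minimal sketch of such threshold-driven interaction, assuming a flat list of information items tagged by type; the threshold value and the prompt mechanism are illustrative assumptions.

```python
MAX_ITEMS = 5  # assumed presentation threshold

def select_information(available_items, prompt_user):
    """Present all items when few; otherwise solicit a type choice first."""
    if len(available_items) <= MAX_ITEMS:
        return available_items
    types = sorted({item["type"] for item in available_items})
    chosen_type = prompt_user(types)    # e.g., display an on-screen list (menu)
    return [item for item in available_items if item["type"] == chosen_type]
```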
In yet another exemplary scenario, in which an action associated with a user-selectable object comprises the establishment and/or management of a communication session between the user and one or more networked entities, the user-selectable object information may comprise information describing the manner in which a communication session may be established and/or managed.
In still another exemplary scenario, in which an action associated with a user-selectable object comprises providing a user interface by which a user may initiate and perform a commercial transaction regarding a user-selectable object, the user-selectable object information may comprise information describing the manner in which the commercial transaction is to be performed (e.g., order forms, financial information exchange, order tracking, etc.).
As shown above, various user-selectable objects (or types of objects) may, for example, be associated with any of a variety of respective actions that may be taken upon selection of a respective user-selectable object by a user. Such actions (e.g., information retrieval, information searching, communication session management, commercial transaction management, etc.) may, for example, be included in a table or other data structure indexed by the identity of a respective user-selectable object.
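A minimal sketch of such a table, indexed by the identity of a respective user-selectable object and dispatching to the kinds of actions named above; the object identities and handler stubs are the hypothetical ones from the earlier sketches.

```python
def retrieve_info(obj):      return ("retrieve", obj)      # information retrieval
def run_search(obj):         return ("search", obj)        # information searching
def open_session(obj):       return ("session", obj)       # communication session
def start_transaction(obj):  return ("transaction", obj)   # commercial transaction

ACTION_TABLE = {
    "shoe_01":     start_transaction,   # consumer good object
    "singer_01":   open_session,        # entertainer object
    "landmark_01": retrieve_info,       # landmark object
}

def on_object_selected(object_id, obj):
    """Look up and invoke the action associated with the selected object."""
    handler = ACTION_TABLE.get(object_id, run_search)      # default: search
    return handler(obj)
```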
Other non-limiting examples of object information corresponding to user-selectable objects in a still image may comprise: athlete information (e.g., statistics, personal information, professional information, history, etc.), entertainer information (e.g., personal information, discography and/or filmography information, information of related organizations, fan club information, photograph and/or video information, etc.), landmark information (e.g., historical information, visitation information, location information, mapping information, photo album information, visitation diary, charitable donation information, etc.), political figure information (e.g., party affiliation, stances on particular issues, history, financial information, voting record, attendance record, etc.), information regarding general types of objects (e.g., information describing actions to take upon user-selection of a person object, of a consumer good object, of a landmark object, etc.) and/or specific objects (e.g., information describing actions to take when a particular person object is selected, when a particular consumer good object is selected, when a particular landmark object is selected, etc.).
For additional non-limiting examples of actions that may be performed related to user-selectable objects (e.g., in still images as well as in television programming), and related user-selectable object information that may be combined with still image information, the reader is directed to U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A DISTRIBUTED SYSTEM FOR RESPONDING TO USER-SELECTION OF AN OBJECT IN A TELEVISION PROGRAM”, Attorney Docket No. 21045US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A LOCAL TELEVISION SYSTEM FOR RESPONDING TO USER-SELECTION OF AN OBJECT IN A TELEVISION PROGRAM”, Attorney Docket No. 21046US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION SYSTEM FOR RESPONDING TO USER-SELECTION OF AN OBJECT IN A TELEVISION PROGRAM BASED ON USER LOCATION”, Attorney Docket No. 21047US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION SYSTEM FOR PRESENTING INFORMATION ASSOCIATED WITH A USER-SELECTED OBJECT IN A TELEVISION PROGRAM”, Attorney Docket No. 21048US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION SYSTEM FOR PRESENTING INFORMATION ASSOCIATED WITH A USER-SELECTED OBJECT IN A TELEVISION PROGRAM”, Attorney Docket No. 21049US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION SYSTEM FOR RESPONDING TO USER-SELECTION OF AN OBJECT IN A TELEVISION PROGRAM UTILIZING AN ALTERNATIVE COMMUNICATION NETWORK”, Attorney Docket No. 21050US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION FOR PROVIDING ADVERTISING INFORMATION ASSOCIATED WITH A USER-SELECTED OBJECT IN A TELEVISION PROGRAM”, Attorney Docket No. 21053US02; U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION FOR PROVIDING INFORMATION ASSOCIATED WITH A USER-SELECTED PERSON IN A TELEVISION PROGRAM”, Attorney Docket No. 21054US02; and U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD IN A TELEVISION FOR PROVIDING INFORMATION ASSOCIATED WITH A USER-SELECTED INFORMATION ELEMENT IN A TELEVISION PROGRAM”, Attorney Docket No. 21055US02. The entire contents of each of such applications are hereby incorporated herein by reference in their entirety.
In general, the above-mentioned types of information corresponding to user-selectable objects in a still image may be general to all eventual viewers (or recipients) of the still image, but may also be customized to a particular target user and/or end user. For example, such information may be customized to a particular user (e.g., based on income level, demographics, age, employment status and/or type, education level and/or type, family characteristics, religion, purchasing history, neighborhood characteristics, home characteristics, health characteristics, etc.). For example, such information may also be customized to a particular geographical location or region.
In general, step 220 may comprise receiving object information corresponding to a user-selectable object in the still image. Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of any particular type of such user-selectable object information or by any particular manner of receiving such user-selectable object information unless explicitly claimed.
The exemplary method 200 may, at step 230, comprise combining the received still image information (e.g., as received at step 210) and the received user-selectable object information (e.g., as received at step 220) in a combined data set. Many non-limiting examples of such combining will now be provided.
As mentioned previously, step 210 may comprise receiving still image information (e.g., a still image data set) for a still image (e.g., a photographic image) by, at least in part, receiving a completed still image data set for the still image (e.g., formatted in accordance with a still image communication and/or compression standard), where the completed still image data set is formatted for communicating (or storing) the still image without information describing user-selectable objects in the still image. In such an exemplary scenario, step 230 may comprise combining the received still image information and the received user-selectable object information by, at least in part, inserting the received user-selectable object information in the completed still image data set to create a combined data set comprising the received still image data set and the received user-selectable object information.
For example, in an exemplary scenario in which the received completed still image data set, as received, is formatted in accordance with a still image standard (e.g., a JPEG standard), step 230 may comprise inserting the received user-selectable object information in data fields of the completed still image data set that are not assigned by the still image standard for any specific type of information (e.g., inserting such information into unassigned data fields and/or metadata fields provided by the still image standard, adding new data fields to the still image standard, etc.).
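Continuing the assumed APP11/"UOBJ" convention from the reader sketch presented earlier, a minimal sketch of such inserting might place the received user-selectable object information into an application segment, which the JPEG standard leaves available for application-specific data, immediately after the SOI marker of the completed still image data set.

```python
import struct

def embed_object_segment(jpeg_bytes, object_info):
    """Insert a 'UOBJ' APP11 segment immediately after the SOI marker."""
    if jpeg_bytes[:2] != b"\xff\xd8":                  # SOI marker
        raise ValueError("not a JPEG stream")
    payload = b"UOBJ" + object_info
    if len(payload) + 2 > 0xFFFF:                      # length field is 16 bits
        raise ValueError("object information too large for one segment")
    segment = b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```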
Such inserting may, for example, comprise inserting the received user-selectable object information in data fields of the completed still image data set that are interleaved with data fields carrying still image data. For example, such inserting may be performed in accordance with a format alternating still image data and user-selectable object information (or data) on a pixel-by-pixel basis (e.g., pixel 1 still image data, pixel 1 user-selectable object information, pixel 2 still image data, pixel 2 user-selectable object information, etc.), by groups of pixels (e.g., pixels 1 through N still image data, pixels 1 through N user-selectable object information, pixels N+1 through 2N still image data, and so on), by lines of pixels, by blocks of pixels, etc. Also for example, utilizing pixel, coordinate or other spatial information, user-selectable object information need not be strictly placed with the still image data for the still image in which the user-selectable object appears. For example, information of user-selectable objects in a still image and/or portion thereof may be communicated before and/or after the image data set for the entire still image is communicated.
Also for example, in another exemplary scenario in which the received completed still image data set (e.g., a picture data set), as received, is formatted in accordance with a still image standard that specifically assigns data fields to information of user-selectable objects, step 230 may comprise inserting the received user-selectable object information in the data fields of the completed still image data set that are specifically assigned by the still image standard to contain information of user-selectable objects.
Also as mentioned previously, step 210 may comprise receiving still image information (e.g., a still image data set) for a still image (e.g., a photographic image) by, at least in part, receiving still image information for the still image prior to the still image information being formatted into a completed still image data set for communicating (or storing) the still image. For example, such a scenario may comprise receiving information describing the still image that has yet to be formatted into a data set that conforms to a particular still image standard (e.g., bitmap information, DCT information, etc., which has yet to be placed into a self-contained JPEG data set for communicating and/or storing the still image). In such an exemplary scenario, step 230 may comprise combining the received still image information and the received user-selectable object information into a completed still image data set that is formatted for communicating and/or storing the still image with information describing user-selectable objects in the still image (e.g., into a single cohesive data set, for example, a single data file or other data structure, into a plurality of logically linked data files or other data structures, etc.).
In an exemplary scenario, such a completed still image data set may be formatted in accordance with a still image standard that specifically assigns respective data fields (or elements) to information describing the still image and to user-selectable object information. In another exemplary scenario, such a completed still image data set may be formatted in accordance with a still image standard that specifically assigns data fields to information describing a still image, but does not specifically assign data fields to user-selectable object information (e.g., utilizing general-purpose unassigned data fields, adding new data fields to the standard, etc.).
Also as mentioned previously, step 210 may comprise receiving still image information for a still image by, at least in part, receiving an initial combined still image data set that comprises initial still image information and initial user-selectable object information corresponding to user-selectable objects in the still image. For example, prior to being received, the received initial combined still image data set may have already been formed into a single cohesive data set that comprises the still image information for the still image and information of user-selectable objects in the still image.
In such an exemplary scenario, step 230 may comprise modifying the initial user-selectable object information of the initial combined still image data set in accordance with the received user-selectable object information (e.g., as received at step 220). Such modifying may, for example and without limitation, comprise adding the received object information to the initial object information in the initial combined still image data set (e.g., in unused unassigned data fields and/or in unused data fields that have been specifically assigned to contain user-selectable object information, etc.).
Also such modifying may comprise changing at least a portion of the initial object information of the initial combined still image data set in accordance with the received user-selectable object information (e.g., changing information defining a user-selectable object in a presented still image, changing information about a user-selectable object to be presented to a user, changing information regarding any action that may be performed upon user-selection of a user-selectable object, etc.). Additionally, such modifying may comprise deleting at least a portion of the initial object information in accordance with the received user-selectable object information (e.g., in a scenario in which the received user-selectable object information includes a command or directive to remove a portion or all information corresponding to a particular user-selectable object).
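As a non-limiting illustration of such adding, changing and deleting, the following sketch applies received modification directives to initial user-selectable object information; the directive vocabulary and field layout are assumptions made for illustration.

```python
def apply_object_directives(initial: dict[str, dict],
                            directives: list[dict]) -> dict[str, dict]:
    """Modify initial user-selectable object information in accordance
    with received directives, where each directive carries an operation
    ("add", "change" or "delete"), an object identifier and, where
    applicable, the new field values.
    """
    merged = {obj_id: dict(fields) for obj_id, fields in initial.items()}
    for directive in directives:
        obj_id = directive["object_id"]
        if directive["op"] == "add":
            merged[obj_id] = dict(directive["fields"])       # new object entry
        elif directive["op"] == "change":
            merged.setdefault(obj_id, {}).update(directive["fields"])
        elif directive["op"] == "delete":
            merged.pop(obj_id, None)                         # remove object entry
    return merged
```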
In the previously provided examples of combining the received still image information and the received user-selectable object information, step 230 may comprise performing such operations automatically (i.e., without real-time interaction with a user while such operations are being performed) or with user interaction. For example, the received still image information and the received user-selectable object information may each be uniquely identified to assist in merging such information. For example, step 230 may comprise analyzing such respective unique identifications to determine the still image data set in which the user-selectable object information is to be inserted. For example, the user-selectable object information for a particular user-selectable object may comprise information identifying the specific still image in which the user-selectable object appears. Such information may be utilized at step 230 to determine the appropriate data set (e.g., still image data file or other bounded data set) in which to place the user-selectable object information.
In another example, step 230 may comprise presenting an operator with a view of the still image and a view of a user-selectable object in such still image for which information is being added to a combined data set. Step 230 may then comprise interacting with the operator to obtain permission and/or directions for combining the still image and user-selectable object information.
Note that step 230 may comprise encrypting the user-selectable object information or otherwise restricting access to such information. For example, in a scenario in which access to such information is provided on a subscription basis, in a scenario in which providers of such information desire to protect such information from undesirable access and/or manipulation, etc., such information protection may be beneficial.
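As a non-limiting illustration of such information protection, the following sketch encrypts only the user-selectable object information (leaving the still image data in the clear) utilizing the Fernet recipe of the Python cryptography package; key distribution (e.g., on a subscription basis) is outside the scope of the sketch.

```python
from cryptography.fernet import Fernet

def protect_object_info(object_info: bytes, key: bytes) -> bytes:
    """Encrypt user-selectable object information so that only
    authorized recipients holding the key can recover it."""
    return Fernet(key).encrypt(object_info)

# Example usage: the key would be distributed to subscribers separately.
key = Fernet.generate_key()
token = protect_object_info(b"object information payload", key)
assert Fernet(key).decrypt(token) == b"object information payload"
```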
In general, step 230 may comprise combining the received still image information (e.g., as received at step 210) and the received user-selectable object information (e.g., as received at step 220) in a combined data set. Accordingly, the scope of various aspects of the present invention should not be limited by any particular manner of performing such combining and/or any particular format in which such a combined data set may be placed unless specifically claimed.
The exemplary method 200 may, at step 240, comprise communicating the combined data set(s) (e.g., as formed at step 230) to one or more recipient systems or devices. Such communication may comprise characteristics of any of a variety of types of communication, non-limiting examples of which will now be presented.
Step 240 may, for example, comprise communicating the combined data set(s) via a communication network (e.g., a television communication network, a telecommunication network, a general data communication network (e.g., the Internet, a LAN, a PAN, etc.), etc.). Many non-limiting examples of such communication networks were provided previously. Step 240 may, for example, comprise broadcasting, multi-casting and/or uni-casting the combined data set over one or more communication networks. Step 240 may also, for example, comprise communicating the combined data set(s) to another system and/or device via a direct conductive path (e.g., via a wire, circuit board trace, conductive trace on a die, etc.).
Additionally for example, step 240 may comprise storing the combined data set(s) on a computer readable medium (e.g., a DVD, a CD, a Blu-ray® disc, a laser disc, a magnetic tape, a hard drive, a diskette, etc.). Such a computer readable medium may then, for example, be shipped to a distributor and/or ultimate recipient of the computer readable medium. Further for example, step 240 may comprise storing the combined data set(s) in a volatile and/or non-volatile memory device (e.g., a flash memory device, a one-time-programmable memory device, an EEPROM, a RAM, etc.).
Further for example, step 240 may comprise storing (or causing or otherwise participating in the storage of) the combined data set(s) in a media system component (e.g., a component or device of the user's local media (or still image presentation) system, a component or device of a media (or still image) provider, and/or a component or device of any still image information source). For example and without limitation, step 240 may comprise storing the combined data set(s), or otherwise participating in the storage of the combined data set(s), in a component of the user's local media system (e.g., in an image presentation device, a digital video recorder, a media receiver, a media player, a media system controller, a personal communication device, a local networked database, a local networked personal computer, etc.).
Step 240 may, for example, comprise communicating the combined data set in serial fashion. For example, step 240 may comprise communicating the combined data set (comprising interleaved still image information and user-selectable object information) in a single data stream (e.g., via a general data network, via a television or other media network, stored on a hard medium, for example a non-transitory computer-readable medium, in such serial fashion, etc.). Also for example, step 240 may comprise communicating the combined data set in parallel data streams, each of which comprises interleaved still image information and user-selectable object information (e.g., as opposed to separate distinct respective data streams for each of still image information and user-selectable object information).
In general, step 240 may comprise communicating the combined data set(s) (e.g., as formed at step 230) to one or more recipient systems or devices (e.g., an end user or associated system, media (or image) provider or associated system, an advertiser or associated system, a media (or image) producer or associated system, a media (or image) database, a media (or image) server, etc.). Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of any particular manner of performing such communicating or by any particular recipient of such communication unless explicitly claimed.
The exemplary method 200 may, for example at step 295, comprise performing continued operations. Step 295 may comprise performing any of a variety of continued operations, non-limiting examples of which will be presented below. For example, step 295 may comprise returning execution flow to any of the previously discussed method steps. For example, step 295 may comprise returning execution flow of the exemplary method 200 to step 220 for receiving additional user-selectable object information to combine with still image information. Also for example, step 295 may comprise returning execution flow of the exemplary method 200 to step 210 for receiving additional still image information and user-selectable object information to combine with such received still image information. Additionally for example, step 295 may comprise returning execution flow of the exemplary method 200 to step 240 for additional communication of the combined information to additional recipients.
In general, step 295 may comprise performing continued operations (e.g., performing additional operations corresponding to combining still image information and information of user-selectable objects in such still images, etc.). Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of any particular type of continued processing unless explicitly claimed.
Turning next to
The exemplary method 300 may, for example, begin executing at step 305. The exemplary method 300 may begin executing in response to any of a variety of causes or conditions. Step 305 may, for example, share any or all characteristics with step 205 of the exemplary method 200 illustrated in
The exemplary method 300 may, for example at step 310, comprise receiving image information for a still image. Step 310 may, for example, share any or all characteristics with step 210 of the exemplary method 200 illustrated in
For example, step 310 may comprise, for example at sub-step 312, receiving a completed still image data set for the still image, the completed still image data set formatted for communicating and/or storing the still image without information describing user-selectable objects in the still image. Alternatively for example, step 310 may comprise, for example at sub-step 314, receiving still image information for the still image prior to the still image information being formatted into a completed still image data set for communicating and/or storing the still image. Alternatively for example, step 310 may comprise, for example at sub-step 316, receiving a completed still image data set for the still image, the completed still image data set formatted for communicating and/or storing the still image with information describing user-selectable objects in the still image.
The exemplary method 300 may, for example at step 320, comprise receiving object information corresponding to a user-selectable object in the still image. Step 320 may, for example, share any or all characteristics with step 220 of the exemplary method 200 illustrated in
For example, step 320 may comprise, for example at sub-step 322, receiving user-selectable object information comprising information describing and/or defining the user-selectable object that is shown in the still image (e.g., object dimension information, object movement information, etc.). Also for example, step 320 may comprise, for example at sub-step 324, receiving user-selectable object information comprising information regarding the user-selectable object that may be presented to the user upon user-selection of such object in a still image.
Additionally for example, step 320 may comprise, for example at sub-step 326, receiving user-selectable object information comprising information describing and/or defining actions that may be taken upon user-selection of a user-selectable object (e.g., retrieving and/or obtaining and/or searching for information about a user-selectable object, information specifying a manner in which a system is to interact with a user regarding a user-selected object, establishing and/or maintaining communication sessions, information describing the manner in which a commercial transaction is to be performed, etc.).
The exemplary method 300 may, for example at step 330, comprise combining the received still image information (e.g., as received at step 310) and the received user-selectable object information (e.g., as received at step 320) in a combined data set. Step 330 may, for example, share any or all characteristics with step 230 of the exemplary method 200 illustrated in
For example, step 330 may comprise, for example at sub-step 332, inserting the received user-selectable object information in a completed still image data set that was received at step 310 (e.g., inserting such user-selectable object information in fields of the still image data set that are specified by a standard for carrying such user-selectable object information, inserting such user-selectable object information in fields of the still image data set that are not specifically allocated for a particular type of data, etc.).
Also for example, step 330 may comprise, for example at sub-step 334, combining received still image data and received user-selectable object information into a completed still image data set that is formatted for communicating the still image with information describing user-selectable objects in the still image. Additionally for example, step 330 may comprise, for example at sub-step 336, modifying initial user-selectable object information of an initial combined still image data set in accordance with received user-selectable object information.
The exemplary method 300 may, for example at step 340, comprise communicating the combined data set(s) (e.g., as formed at step 330) to one or more recipient systems or devices. Step 340 may, for example, share any or all characteristics with step 240 of the exemplary method 200 illustrated in
For example, step 340 may comprise, for example at sub-step 342, communicating the combined data set(s) via a communication network (e.g., any of a variety of communication networks discussed herein, etc.). Also for example, step 340 may comprise, for example, at sub-step 344, communicating the combined data set(s) by storing the combined data set(s) on a non-transitory computer readable medium and/or by transmitting the combined data set(s) to another device or system to perform such storage. Additionally for example, step 340 may comprise, for example, at sub-step 346, communicating the combined data set in a single serial stream (e.g., comprising interleaved still image data and user-selectable object information). Further for example, step 340 may comprise, for example, at sub-step 348, communicating the combined data set in a plurality of parallel serial streams (e.g., each of such streams comprising interleaved still image data and user-selectable object information).
The exemplary method 300 may, for example at step 395, comprise performing continued operations. Step 395 may, for example, share any or all characteristics with step 295 of the exemplary method 200 illustrated in
As discussed previously with regard to
The exemplary method 400 may, for example, begin executing at step 405. The exemplary method 400 may begin executing in response to any of a variety of causes and/or conditions. Step 405 may, for example, share any or all characteristics with steps 205 and 305 of the exemplary methods 200 and 300 illustrated in
The exemplary method 400 may, for example at step 410, comprise receiving image information for a still image. Step 410 may, for example, share any or all characteristics with steps 210 and 310 of the exemplary methods 200 and 300 illustrated in
The exemplary method 400 may, for example at step 420, comprise determining user-selectable object information corresponding to one or more user-selectable objects in a still image. Step 420 may, for example, share any or all characteristics with steps 220 and 320 of the exemplary methods 200 and 300 illustrated in
For example, step 420 may comprise receiving the user-selectable object information from any of a variety of sources, non-limiting examples of which were provided previously (e.g., in the discussion of step 220 and elsewhere herein). The object information corresponding to one or more user-selectable objects that is determined at step 420 (e.g., developed by and received from a local process and/or received from an external source) may comprise any of a variety of characteristics; numerous examples of such object information were provided previously (e.g., in the discussion of step 220 and elsewhere herein).
The exemplary method 400 may, at step 430, comprise forming a user-selectable object data set comprising the determined user-selectable object information (e.g., as determined at step 420), where the user-selectable object data set is independent of a still image data set (e.g., as received at step 410) generally representative of the still image. Step 430 may comprise performing such data set formation in any of a variety of manners, non-limiting examples of which will now be presented.
For example, step 430 may comprise forming the user-selectable object data set (e.g., a data file or other data structure, a logical grouping of data, etc.) in a manner that is spatially synchronized with a still image (or a still image data set representative of a still image).
For example, in an exemplary scenario in which a still image data set is parsed into blocks (e.g., groups of pixels), step 430 may comprise forming the user-selectable object data set by, at least in part, parsing the user-selectable object information in a manner that logically mirrors the still image data set blocks. For example, in a scenario where a user-selectable object appears in block N of a still image, the user-selectable object information describing the user-selectable object may be placed in a corresponding block (e.g., Nth block, data segment, etc.) of the user-selectable object data set. In such a scenario, the user-selectable object data set might include null (or no) information in blocks corresponding to still image blocks that do not include any user-selectable objects. For example, the user-selectable object data set need not include information for block P if corresponding block P of the still image does not include any user-selectable objects.
As another example, in an exemplary scenario in which a still image data set is parsed into blocks, step 430 may comprise forming the user-selectable object data set by, at least in part, including information indicating the blocks of the still image in which the user-selectable object appears (e.g., along with the dimensions of the user-selectable object and/or other spatially descriptive information). For example, in an exemplary scenario in which a user-selectable object appears in blocks A-B of a still image, step 430 may comprise incorporating information into the user-selectable object data set that indicates the user-selectable object appears in blocks A-B of the still image, along with information describing the dimensions and/or locations of the user-selectable object in such blocks of the still image.
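For purposes of illustration only, the following sketch shows one possible form of such a spatially synchronized user-selectable object data set, in which each object record identifies the still image blocks in which the object appears along with its dimensions; the record layout and names are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectRecord:
    object_id: str
    blocks: list[int]                        # still image blocks containing the object
    bounding_box: tuple[int, int, int, int]  # x, y, width, height in the still image

@dataclass
class UserSelectableObjectDataSet:
    image_id: str                            # correlates this set to a still image data set
    records: list[ObjectRecord] = field(default_factory=list)

    def objects_in_block(self, block: int) -> list[ObjectRecord]:
        """Spatial lookup: user-selectable objects appearing in a given block."""
        return [r for r in self.records if block in r.blocks]
```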
Note that in an exemplary scenario in which the user-selectable object data set includes information that spatially synchronizes the user-selectable object data set to the still image data set, not all information of the user-selectable object data set need be so synchronized. For example, information corresponding to user-selectable objects that is not spatially-specific may be included in the user-selectable object data set in an unsynchronized (or asynchronous) manner. In an exemplary scenario, information describing user-selectable objects (or selectable regions thereof) as such user-selectable objects appear in a presented still image may be spatially-synchronized (e.g., block-synchronized) to the still image data set, while information to be presented to the user upon user-selection of such user-selectable objects and/or information describing any action to take upon user-selection of such user-selectable objects may be included in the user-selectable object data set in an unsynchronized manner (e.g., in a data structure (or sub-data structure) that is indexed by object identity to retrieve such information).
Though the above examples were directed to spatially-based synchronization of the user-selectable object data set to the still image (e.g., a corresponding still image data set), other synchronization information may also be utilized. For example, in an exemplary scenario in which a still image is presented for a particular time window, the user-selectable object data set may comprise time synchronization information indicating that such user-selectable object data set corresponds to the particular time window. Also for example, step 430 may comprise incorporating data markers into the user-selectable object data set that correspond to respective markers in a still image data set. Additionally for example, step 430 may comprise incorporating data pointers into the user-selectable object data set that point to respective absolute and/or relative locations within a still image data set.
The above examples generally apply to information describing the presence of user-selectable objects in the still image. As discussed previously, the user-selectable object information may also comprise information to be provided to the user upon selection of a user-selectable object, information describing communication sessions and/or other actions that may be performed upon selection of the user-selectable object, etc. Note that in particular exemplary scenarios, such information may be incorporated into the user-selectable object data set at step 430. For example, step 430 may comprise incorporating such user-selectable object information into the user-selectable object data set in a manner that provides for indexing such information by object identity. For example, such information need only be incorporated into the user-selectable object data set one time (e.g., positioned in the user-selectable object data set such that a recipient of the user-selectable object data set will have received such information prior to user selection of the user-selectable object corresponding to such information). For example, in an exemplary scenario involving a user-selectable consumer good in a still image, step 430 may comprise forming the user-selectable object data set such that, when communicated to a user's local media (or image) presentation system, information of actions to perform upon user selection of the consumer good in the still image will have been received by the user's local media (or image) presentation system prior to the user's first opportunity to select the consumer good in the still image.
As discussed above, the user-selectable object data set formed at step 430 may comprise characteristics of different types of data sets (or structures). For example, step 430 may comprise forming a data file that comprises the user-selectable object information. Such a user-selectable object data file may, for example, comprise metadata that correlates the user-selectable object data file to one or more corresponding still image data files that are utilized to communicate the general still image (e.g., without user-selectable object information).
Step 430 may also, for example, comprise forming an array of the user-selectable object information. Such an array may, for example, comprise an array of records associated with respective user-selectable objects in a still image and may be indexed and/or sorted by object identification. Similarly, step 430 may comprise forming a linked list of respective data records corresponding to user-selectable objects in the still image. Such a linked list may, for example, be a multi-dimensional linked list with user-selectable objects in a first dimension and respective records associated with different types of information associated with a particular user-selectable object in a second dimension.
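As a non-limiting illustration of such an object-identity-indexed arrangement, the following sketch stores, for each user-selectable object (a first dimension), a list of typed information records (a second dimension); the names and record types are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectInfoRecord:
    info_type: str   # e.g., "description", "presentation" or "action"
    payload: dict    # the information itself

@dataclass
class UserSelectableObjectEntry:
    object_id: str
    records: list[ObjectInfoRecord] = field(default_factory=list)  # second dimension

# First dimension: entries indexed (and retrievable) by object identity.
object_table: dict[str, UserSelectableObjectEntry] = {}

def lookup(object_id: str, info_type: str) -> list[dict]:
    """Retrieve all records of a given type for a user-selected object."""
    entry = object_table.get(object_id)
    return [] if entry is None else [
        r.payload for r in entry.records if r.info_type == info_type
    ]
```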
As mentioned above, the user-selectable object data set may be independent of one or more still image data sets generally representative of the still image. Such an implementation advantageously provides for independent formation and maintenance of the user-selectable object data set that corresponds to the still image. For example, in such an implementation, a data set (e.g., a still image data file, JPEG file, etc.) for a still image may be developed (e.g., by an image studio) for communication of the still image to all users, while a data set for user-selectable objects in the still image may be developed independently (e.g., by an advertising company, by a sponsor, by a network operator, by one or more components of a user's local media system, etc.). In such a scenario, the user-selectable object data set may be developed and/or changed without impacting the still image data set. Also, in such a scenario, as mentioned above, user-selectable object information may be customized to a user or group of users. In such a scenario, a plurality of different user-selectable object data sets may be developed that each correspond to the same still image data set. For example, step 430 may comprise forming a first user-selectable object data set for a New York audience or recipient of a still image, and forming a second user-selectable object data set for a Los Angeles audience or recipient of the still image, without necessitating modification of the still image data set, which communicates the still image in the same manner to each of the New York and Los Angeles audiences or recipients.
In general, step 430 may comprise forming a user-selectable object data set comprising the determined user-selectable object information (e.g., as determined at step 420), where the user-selectable object data set is independent of a still image data set generally representative of the still image. Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of particular types of user-selectable object data, characteristics of particular types of user-selectable object data sets, and/or characteristics of any particular manner of forming user-selectable object data sets unless explicitly claimed.
The exemplary method 400 may, at step 440, comprise communicating the formed user-selectable object data set (e.g., as formed at step 430) to one or more recipients. Step 440 may comprise performing such communicating in any of a variety of manners, non-limiting examples of which will now be provided.
For example, step 440 may comprise communicating the user-selectable object data set in one or more data streams (which may be called “user-selectable object data streams” herein) independent of one or more still image data streams that generally communicate the still image (i.e., that generally communicate the still image data set). Note that, while such still image data set generally need not comprise information of user-selectable objects therein, such information may be present. For example, the user-selectable object data set may comprise information of user-selectable objects in the still image that supplement (e.g., append and/or amend) information of user-selectable objects that might be present in the still image data set.
Step 440 may, for example, comprise communicating the user-selectable object data set time-synchronized to communication of the still image data set. For example, even in a scenario in which the user-selectable object data set is independent of the general still image data set, step 440 may still time-synchronize communication of the user-selectable object data set with communication of the general still image data set.
For example, in such an exemplary scenario, step 440 may comprise communicating the user-selectable object data concurrently (e.g., simultaneously and/or pseudo-simultaneously in a time-sharing manner) with communication of the still image data set that generally communicates the still image. For example, such concurrent communication may comprise communicating at least a portion of the user-selectable object data set and at least a portion of the still image data set in a time-multiplexed manner (e.g., via a shared communication channel (e.g., a frequency channel, a code channel, a time/frequency channel, etc.)). Also for example, such concurrent communication may comprise communicating the user-selectable object data set in parallel with communication of the still image data set (e.g., on separate respective sets of one or more parallel communication channels).
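As a non-limiting illustration of such time-multiplexed concurrent communication, the following sketch alternates chunks of the two independent data sets over a single shared channel, tagging each chunk so that a recipient can demultiplex them; the tagging scheme is an assumption made for illustration.

```python
import itertools
from typing import Iterable, Iterator

_SENTINEL = object()

def time_multiplex(image_chunks: Iterable[bytes],
                   object_chunks: Iterable[bytes]) -> Iterator[tuple[str, bytes]]:
    """Alternate chunks of the still image data set and the independent
    user-selectable object data set over one shared channel, tagging
    each chunk so the receiver can separate the two data sets."""
    for img, obj in itertools.zip_longest(image_chunks, object_chunks,
                                          fillvalue=_SENTINEL):
        if img is not _SENTINEL:
            yield ("image", img)
        if obj is not _SENTINEL:
            yield ("object", obj)
```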
Also for example, step 440 may comprise communicating the user-selectable object data set via at least one communication channel that is different from one or more communication channels over which the still image data set is communicated. For example, even in a scenario in which the user-selectable object data set and the still image data set are communicated over at least one shared communication channel, step 440 may comprise communicating the user-selectable object data set in at least one communication channel that is different from the communication channel(s) over which the still image data set is communicated.
Step 440 may, for example, comprise communicating the user-selectable object data set over a first communication network that is different from a second communication network over which the still image data set is communicated. As a non-limiting example, step 440 may comprise communicating the user-selectable object data set over a first communication network (e.g., a first general data communication network), where the still image data set is communicated over a second communication network (e.g., a second general data communication network).
Step 440 may, for example, comprise communicating the user-selectable object data set over a first type of communication network that is different from a second type of communication network over which the still image data set is communicated. As a non-limiting example, step 440 may comprise communicating the user-selectable object data set over a first general data communication network, where the still image data set is communicated over a television communication network (e.g., a cable television network, a satellite television network, etc.).
Also for example, step 440 may comprise communicating the user-selectable object data set utilizing a first communication protocol that is different from a second communication protocol that is utilized to communicate the still image data set. For example, step 440 may comprise communicating the user-selectable data set utilizing TCP/IP, while the general still image data set is communicated utilizing a cable television protocol.
Also for example, step 440 may comprise communicating the user-selectable object data set to a first set of one or more user local media (or image) presentation systems, where the first set is a subset of a second set of user local media (or image) presentation systems to which the still image data set is communicated. For example, step 440 may comprise multicasting the user-selectable object data set to a multicast group, where the still image data set is broadcast to a superset of the multicast group. Also for example, step 440 may comprise unicasting the user-selectable object data set to a single user local media (or image) presentation system, where the still image data set is broadcast or multicast to a superset including that single system.
Additionally for example, step 440 may comprise communicating the user-selectable object data set to a first set of one or more components of a user's local media (or image) presentation system, where at least a portion of such first set is different from a second set of one or more components of the user's local media (or image) presentation system to which the still image data set is communicated. For example, in a non-limiting exemplary scenario in which the still image data set is being communicated to a media receiver and a media controller of a user's local media system, step 440 may comprise communicating the user-selectable object data set to the media controller and not to the media receiver.
Step 440 may comprise communicating the user-selectable object data set with or without regard for the timing of the communication of the still image (e.g., the still image data set) to which the user-selectable object data set corresponds. For example, step 440 may comprise communicating the user-selectable object data set whenever the still image data set is communicated. Also for example, step 440 may comprise communicating the entire user-selectable object data set before the still image data set is communicated. In such a scenario, the recipient of the communicated user-selectable object data set may be assured of having received such data set prior to receipt of the still image to which the user-selectable object data set corresponds.
Though the previous examples generally concerned step 440 communicating the user-selectable object data set via a communication network to one or more destination systems, step 440 may also comprise communicating the user-selectable object data set to a storage device where the user-selectable object data set is stored in a storage medium, for example optical storage media (e.g., compact disc, digital versatile disc, Blu-ray® disc, laser disc, etc.), magnetic storage media (e.g., hard disc, diskette, magnetic tape, etc.), a computer memory device (e.g., a non-transitory computer-readable medium, flash memory, one-time-programmable memory, read-only memory, random access memory, thumb drive, etc.), etc. Such memory may, for example, be a temporary and/or permanent component of the system entity implementing the method 400.
In such a scenario, step 440 may comprise communicating the user-selectable object data set to a storage device where the user-selectable object data set is stored in a same storage medium as a medium on which the still image data set is stored. For example, the user-selectable object data set may be stored in one or more data structures that are independent of one or more data structures in which the still image data set is stored (e.g., stored in one or more separate data files).
Also, in such a scenario, step 440 may comprise communicating the user-selectable object data set to one or more devices of the user's local media system (e.g., a media receiver, a digital video recorder, a media presentation device, a media controller, a personal computer, etc.) and/or one or more devices of a media source system and/or one or more devices of a media distribution system for storage in such device(s).
In general, step 440 may comprise communicating the formed user-selectable object data set (e.g., as formed at step 430) to one or more recipients (e.g., an end user or associated system, still image provider or associated system, an advertiser or associated system, a still image producer or associated system, a still image database, a still image server, etc.). Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of any particular manner of performing such communicating or by any particular recipient of such communication unless explicitly claimed.
The exemplary method 400 may, for example at step 495, comprise performing continued operations. Step 495 may comprise performing any of a variety of continued operations, non-limiting examples of which will be presented below. For example, step 495 may comprise returning execution flow to any of the previously discussed method steps. For example, step 495 may comprise returning execution flow of the exemplary method 400 to step 420 for receiving additional user-selectable object information to form into an independent user-selectable object data set and communicate. Additionally for example, step 495 may comprise returning execution flow of the exemplary method 400 to step 440 for additional communication of the user-selectable object data set (e.g., to additional recipients).
In general, step 495 may comprise performing continued operations (e.g., performing additional operations corresponding to forming and/or communicating user-selectable object data sets related to user-selectable objects in a still image). Accordingly, the scope of various aspects of the present invention should not be limited by characteristics of any particular type of continued processing unless explicitly claimed.
Turning next to
The exemplary media system 500 includes a first communication interface module 510. The first communication interface module 510 may, for example, operate to communicate over any of a variety of communication media and utilizing any of a variety of communication protocols. For example, though the first communication interface module 510 is illustrated coupled to a wireless RF antenna via a wireless port 512, the wireless medium is merely illustrative and non-limiting. The first communication interface module 510 may, for example, operate to communicate with one or more communication networks (e.g., cable television networks, satellite television networks, telecommunication networks, general data communication networks, the Internet, local area networks, personal area networks, metropolitan area networks, etc.) via which still image-related information (e.g., still image information, information of user-selectable objects in a still image, still image information with and without embedded information of user-selectable objects) and/or other data is communicated. Also for example, the first communication interface module 510 may operate to communicate with local sources of still image-related content or other data (e.g., disc drives, computer-readable medium readers, video or image recorders, cameras, computers, receivers, personal electronic devices, cellular telephones, personal digital assistants, personal media players, etc.). Additionally, for example, the first communication interface module 510 may operate to communicate with a remote controller (e.g., directly or via one or more intermediate communication networks).
The exemplary media system 500 includes a second communication interface module 520. The second communication interface module 520 may, for example, operate to communicate over any of a variety of communication media and utilizing any of a variety of communication protocols. For example, the second communication interface module 520 may communicate via a wireless RF communication port 522 and antenna, or may communicate via a non-tethered optical communication port 524 (e.g., utilizing laser diodes, photodiodes, etc.). Also for example, the second communication interface module 520 may communicate via a tethered optical communication port 526 (e.g., utilizing a fiber optic cable), or may communicate via a wired communication port 528 (e.g., utilizing coaxial cable, twisted pair, HDMI cable, Ethernet cable, any of a variety of wired component and/or composite video connections, etc.). The second communication interface module 520 may, for example, operate to communicate with one or more communication networks (e.g., cable television networks, satellite television networks, telecommunication networks, general data communication networks, the Internet, local area networks, personal area networks, metropolitan area networks, etc.) via which still image-related information (e.g., still image information, information of user-selectable objects in a still image, image information with and without embedded information of user-selectable objects) and/or other data is communicated. Also for example, the second communication module 520 may operate to communicate with local sources of still image-related information (e.g., disc drives, computer-readable medium readers, video or image recorders, cameras, computers, receivers, personal electronic devices, cellular telephones, personal digital assistants, personal media players, etc.). Additionally, for example, the second communication module 520 may operate to communicate with a remote controller (e.g., directly or via one or more intervening communication networks).
The exemplary media system 500 may also comprise additional communication interface modules, which are not illustrated (some of which may also be shown in
The exemplary media system 500 may also comprise a communication module 530. The communication module 530 may, for example, operate to control and/or coordinate operation of the first communication interface module 510 and the second communication interface module 520 (and/or additional communication interface modules as needed). The communication module 530 may, for example, provide a convenient communication interface by which other components of the media system 500 may utilize the first 510 and second 520 communication interface modules. Additionally, for example, in an exemplary scenario where a plurality of communication interface modules are sharing a medium and/or network, the communication module 530 may coordinate communications to reduce collisions and/or other interference between the communication interface modules.
The exemplary media system 500 may additionally comprise one or more user interface modules 540. The user interface module 540 may generally operate to provide user interface functionality to a user of the media system 500. For example, and without limitation, the user interface module 540 may operate to provide for user control of any or all standard media system commands (e.g., channel control, volume control, on/off, screen settings, input selection, etc.). The user interface module 540 may, for example, receive and/or respond to user commands utilizing user interface features disposed on the media system 500 (e.g., buttons, etc.) and may also utilize the communication module 530 (and/or the first 510 and second 520 communication interface modules) to communicate with other systems and/or components thereof (e.g., a media system controller, for example a dedicated media system remote control, a universal remote control, a cellular telephone, personal computing device, gaming controller, etc.) regarding still image-related information, user interaction that occurs during the formation of combined data set(s), etc. In various exemplary scenarios, the user interface module(s) 540 may operate to utilize the optional display 570 to communicate with a user regarding user-selectable object information and/or to present still image information to a user.
The user interface module 540 may also comprise one or more sensor modules that operate to interface with and/or control operation of any of a variety of sensors that may be utilized during the formation of the combined data set(s). For example, the one or more sensor modules may be utilized to ascertain an on-screen pointing location, which may, for example, be utilized to input and/or receive user-selectable object information (e.g., to indicate and/or define user-selectable objects in a still image). For example and without limitation, the user interface module 540 (or sensor module(s) thereof) may operate to receive signals associated with respective sensors (e.g., raw or processed signals directly from the sensors, through intermediate devices, via the communication interface modules 510, 520, etc.). Also for example, in scenarios in which such sensors are active sensors (as opposed to purely passive sensors), the user interface module 540 (or sensor module(s) thereof) may operate to control the transmission of signals (e.g., RF signals, optical signals, acoustic signals, etc.) from such sensors. Additionally, the user interface module 540 may perform any of a variety of still image output functions (e.g., presenting still image information to a user, presenting user-selectable object information to a user, providing visual feedback to a user regarding an identified user-selected object in a presented still image, etc.).
The exemplary media system 500 may comprise one or more processors 550. The processor 550 may, for example, comprise a general purpose processor, digital signal processor, application-specific processor, microcontroller, microprocessor, etc. For example, the processor 550 may operate in accordance with software (or firmware) instructions. As mentioned previously, any or all functionality discussed herein may be performed by a processor executing instructions. For example, though various modules are illustrated as separate blocks or modules in
The exemplary media system 500 may comprise one or more memories 560. As discussed above, various aspects may be performed by one or more processors executing instructions. Such instructions may, for example, be stored in the one or more memories 560. Such memory 560 may, for example, comprise characteristics of any of a variety of types of memory. For example and without limitation, such memory 560 may comprise one or more memory chips (e.g., ROM, RAM, EPROM, EEPROM, flash memory, one-time-programmable (OTP) memory, etc.), hard drive memory, CD memory, DVD memory, etc.
The exemplary media system 500 may comprise one or more modules 552 (e.g., still image information receiving module(s)) that operate to receive still image information for a still image. Such one or more modules 552 may, for example, operate to utilize the communication module 530 (e.g., and at least one of the communication interface modules 510, 520) and/or the user interface module(s) 540 to receive such still image information. For example, such one or more modules 552 may operate to perform step 210 of the exemplary method 200 discussed previously and/or step 310 of the exemplary method 300 discussed previously.
The exemplary media system 500 may comprise one or more module(s) 554 (e.g., user-selectable object information receiving module(s)) that operate to receive object information corresponding to one or more user-selectable objects in a still image. Such one or more modules 554 may, for example, operate to utilize the communication module 530 (e.g., and at least one of the communication interface modules 510, 520) and/or the user interface module(s) 540 to receive such still image user-selectable object information. For example, such one or more modules 554 may operate to perform step 220 of the exemplary method 200 discussed previously and/or step 320 of the exemplary method 300 discussed previously.
The exemplary media system 500 may comprise one or more modules 556 (e.g., still image and user-selectable object information combining module(s)) that operate to combine received still image information (e.g., as received by the module(s) 552) and received user-selectable object information (e.g., as received by the module(s) 554) into a combined data set. Such one or more modules 556 may, for example, operate to receive still image information from the module(s) 552, receive user-selectable object information from the module(s) 554, combine such received still image information and user-selectable object information into a combined data set, and output such combined data set. Such one or more modules 556 may operate to perform step 230 of the exemplary method 200 discussed previously and/or step 330 of the exemplary method 300 discussed previously.
The exemplary media system 500 may comprise one or more modules 558 (e.g., combined data set communication module(s)) that operate to communicate the combined data set to at least one recipient system and/or device. For example, such module(s) 558 may operate to utilize the communication module(s) 530 (and, for example, one or both of the first communication interface module(s) 510 and the second communication interface module(s) 520) to communicate the combined data set. Also for example, such module(s) 558 may operate to communicate the combined data set to one or more system devices that store the combined data set on a physical medium (e.g., a non-transitory computer-readable medium). Such one or more modules 558 may operate to perform step 240 of the exemplary method 200 discussed previously and/or step 340 of the exemplary method 300 discussed previously.
Though not illustrated, the exemplary media system 500 may, for example, comprise one or more modules that operate to perform any or all of the processing discussed previously with regard to the exemplary method 400. Such modules (e.g., as with the one or more modules 552, 554, 556 and 558) may be implemented by the processor(s) 550 executing instructions stored in the memory 560. Such module(s) may, for example, comprise one or more image receiving module(s) that operate to perform the still image receiving functionality discussed previously with regard to step 410. Such module(s) may also, for example, comprise one or more user-selectable object information determining module(s) that operate to perform the information determining functionality discussed previously with regard to step 420. Such module(s) may additionally, for example, comprise one or more user-selectable object data set formation module(s) that operate to perform the data set formation functionality discussed previously with regard to step 430. Such module(s) may further, for example, comprise one or more user-selectable object data set communication module(s) that operate to perform the communication functionality discussed previously with regard to step 440.
Also, though not illustrated, the exemplary media system 500 may, for example, comprise one or more modules that operate to perform any or all of the continued processing discussed previously with regard to step 295 of the exemplary method 200, step 395 of the exemplary method 300, and step 495 of the exemplary method 400. Such modules (e.g., as with the one or more modules 552, 554, 556 and 558) may be implemented by the processor(s) 550 executing instructions stored in the memory 560.
Turning next to
For example, the media system 600 comprises a processor 630. Such a processor 630 may, for example, share any or all characteristics with the processor 550 discussed with regard to
Also for example, the media system 600 may comprise any of a variety of user interface module(s) 650. Such user interface module(s) 650 may, for example, share any or all characteristics with the user interface module(s) 540 discussed previously with regard to
The exemplary media system 600 may also, for example, comprise any of a variety of communication modules (605, 606, and 610). Such communication module(s) may, for example, share any or all characteristics with the communication interface module(s) 510, 520 discussed previously with regard to
The exemplary media system 600 may also comprise any of a variety of signal processing module(s) 690. Such signal processing module(s) 690 may share any or all characteristics with modules of the exemplary media system 500 that perform signal processing. Such signal processing module(s) 690 may, for example, be utilized to assist in processing various types of information discussed previously (e.g., with regard to sensor processing, position determination, video processing, image processing, audio processing, general user interface information data processing, etc.). For example and without limitation, the signal processing module(s) 690 may comprise: video/graphics processing modules (e.g., MPEG-2, MPEG-4, H.263, H.264, JPEG, TIFF, 3-D, 2-D, MDDI, etc.); audio processing modules (e.g., MP3, AAC, MIDI, QCELP, AMR, CMX, etc.); and/or tactile processing modules (e.g., keypad I/O, touch screen processing, motor control, etc.).
In summary, various aspects of the present invention provide a system and method for providing information of selectable objects in a still image and/or data stream. While the invention has been described with reference to certain aspects and embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
This patent application is related to and claims priority from provisional patent application Ser. No. 61/242,234 filed Sep. 14, 2009, and titled “TELEVISION SYSTEM,” the contents of which are hereby incorporated herein by reference in their entirety. This patent application is also related to U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD FOR PROVIDING INFORMATION OF SELECTABLE OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21042US02; and U.S. patent application Ser. No. ______, filed concurrently herewith, titled “SYSTEM AND METHOD FOR PROVIDING INFORMATION OF SELECTABLE OBJECTS IN A TELEVISION PROGRAM IN AN INFORMATION STREAM INDEPENDENT OF THE TELEVISION PROGRAM”, Attorney Docket No. 21043US02. This patent application is further related to U.S. patent application Ser. No. 12/774,380, filed May 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21037US02; U.S. patent application Ser. No. 12/850,832, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A DISTRIBUTED SYSTEM FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21038US02; U.S. patent application Ser. No. 12/850,866, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION RECEIVER FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21039US02; U.S. patent application Ser. No. 12/850,911, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION CONTROLLER FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21040US02; U.S. patent application Ser. No. 12/850,945, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION CONTROLLER FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21041US02; U.S. patent application Ser. No. 12/851,036, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A TELEVISION SYSTEM FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21051US02; U.S. patent application Ser. No. 12/851,075, filed Aug. 5, 2010, titled “SYSTEM AND METHOD IN A PARALLEL TELEVISION SYSTEM FOR PROVIDING USER-SELECTION OF OBJECTS IN A TELEVISION PROGRAM”, Attorney Docket No. 21052US02. The contents of each of the above-mentioned applications are hereby incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
61/242,234 | Sep. 14, 2009 | US