This disclosure relates generally to sending notifications corresponding to information in data feeds to user devices.
Electronic displays, for example in public areas, are useable to display useful information to people who are nearby. In an airport, electronic displays are used to provide information about arrivals, departures, baggage claim, security checkpoints, etc. In shopping centers, electronic displays are used to provide information about the stores inside the shopping center, events that will be occurring at the shopping center, etc. Currently, a person near an electronic display can photograph the electronic display, and thereby capture the information currently displayed on the electronic display.
The content on such an electronic display can be provided in various ways such as by physical media that is accessed by a computer system coupled to the electronic display. Content can also be provided to an electronic display by a server.
In various embodiments, a user uses a camera on a user device to capture a camera image of a display image that was visible on a screen. The user device then sends this camera image to a server computer system. In some embodiments, the user annotates the camera image before it is sent. The server computer system stores a plurality of display generation objects, one of which was used to generate the display image that was shown on the screen, and receives the camera image from the user device. In various embodiments, the server computer system compares the camera image received from the user device to a plurality of stored display images (e.g., display images generated by the server computer system, display images generated by display computer systems and sent to the server computer system) to identify a display image that matches the camera image. Having identified the particular display image, the server computer system identifies the particular display generation object used to generate the particular display image, and selects one or more data feeds to which to subscribe the user device based on the particular display generation object (and the annotation information, if any). The server computer system sends the user device one or more notifications corresponding to the selected data feeds.
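The server-side flow summarized above can be sketched as follows. This is an illustrative sketch only: every function and field name here (`handle_camera_image`, `similarity`, `feeds`, `region`, etc.) is a hypothetical assumption for exposition, not part of the disclosure, and the similarity measure is a trivial placeholder.

```python
# Illustrative sketch of the server-side flow: match a received camera image
# to a stored display image, find the display generation object that produced
# it, and select data feeds. All names here are hypothetical.

def handle_camera_image(camera_image, stored_display_images,
                        display_generation_objects, annotation=None):
    # 1. Compare the camera image against every stored display image.
    best_id, best_score = None, 0.0
    for image_id, display_image in stored_display_images.items():
        score = similarity(camera_image, display_image)
        if score > best_score:
            best_id, best_score = image_id, score
    if best_id is None:
        return []
    # 2. Identify the display generation object used to generate that image.
    gen_object = display_generation_objects[best_id]
    # 3. Select data feeds based on the object (and annotation, if provided).
    feeds = gen_object["feeds"]
    if annotation is not None:
        feeds = [f for f in feeds if f["region"] == annotation]
    return [f["name"] for f in feeds]

def similarity(a, b):
    # Placeholder similarity: fraction of positions whose symbols match.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))
```

A real implementation would replace `similarity` with robust image matching; the placeholder exists only so the control flow above is executable.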
This disclosure includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations. This formulation—[entity] configured to [perform one or more tasks]—is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure can be said to be “configured to” perform some task even if the structure is not currently being operated. A “computer system configured to receive a camera image” is intended to cover, for example, a computer system that has circuitry that performs this function during operation, even if the computer system in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible. Thus, the “configured to” construct is not used herein to refer to a software entity such as an application programming interface (API).
The term “configured to” is not intended to mean “configurable to.” An unprogrammed FPGA, for example, would not be considered to be “configured to” perform some specific function, although it may be “configurable to” perform that function and may be “configured to” perform the function after programming.
Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Accordingly, none of the claims in this application as filed are intended to be interpreted as having means-plus-function elements. Should Applicant wish to invoke Section 112(f) during prosecution, it will recite claim elements using the “means for” [performing a function] construct.
As used herein, the terms “first,” “second,” etc. are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless specifically stated. For example, references to “first” and “second” display computer systems would not imply an ordering between the two unless otherwise stated.
As used herein, the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect a determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase “determine A based on B.” This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase “based on” is thus synonymous with the phrase “based at least in part on.”
As used herein, the word “module” refers to structure that stores or executes a set of operations. A module refers to hardware that implements the set of operations, or to a memory storing program instructions that, when executed by one or more processors of a computer system, cause the computer system to perform the set of operations. A module may thus include an application-specific integrated circuit implementing the operations, a memory storing program instructions and one or more processors executing those instructions, or a combination of both.
Referring now to
User device 110 is any of a number of computing devices including but not limited to a cellular phone, a smartphone, a tablet computer, or a laptop computer. In various embodiments, user device 110 includes user interface 112 and camera 114. In various embodiments, user device 110 is remote from server computer system 120 and the various display computer systems 130, although as discussed herein, in various instances user device 110 is physically proximate to one or more screens 132 such that camera 114 is useable to capture a camera image 116 that includes at least part of a particular display image 134A generated by a particular one of the plurality of display generation objects 124. In various embodiments, camera 114 is any of a number of devices useable to capture visual information in any of a number of formats and resolutions. In various embodiments, camera 114 includes one or more lenses, one or more optical sensors, and circuitry configured to take input from an optical sensor and produce camera image 116. In various embodiments, user interface 112 receives annotation information indicating a subportion of the camera image 116 (discussed in further detail in reference to
Server computer system 120 is one or more computer systems that communicate with user device 110 and various display computer systems 130 via network 140 as discussed herein. In various embodiments, server computer system 120 is remote from user device 110 and the display computer systems 130. Server computer system 120 may be implemented on a single computer system or a cloud of computer systems working in concert. As discussed in further detail in reference to
As discussed in further detail herein, server computer system 120 is configured to receive camera image 116, which includes at least part of a particular display image 134 generated by a particular one of the plurality of display generation objects 124. In various embodiments, server computer system 120 is configured to receive annotation information indicating a subportion of the camera image 116. In various embodiments, such annotation information is sent with camera image 116 (e.g., by being drawn on top of an image captured by camera 114 of user device 110) or is sent separately from camera image 116 (e.g., in one or more files sent from user device 110 to server computer system 120). In various embodiments, server computer system 120 is configured to determine that the particular display image 134 included in camera image 116 corresponds to a particular display generation object 124 (e.g., the particular display generation object 124 that was executed to generate the particular display image 134). In various embodiments, server computer system 120 is configured to subscribe user device 110 to one or more data feeds 128. In various embodiments, the data feeds 128 are selected based on the particular display generation object 124 (i.e., the display generation object corresponding to the display image 134 included in camera image 116). In some of such embodiments, this selection is also based on the subportion of camera image 116 indicated by the annotation information. In various embodiments, server computer system 120 is configured to send one or more notifications 118 corresponding to the one or more selected data feeds 128 to user device 110.
The one or more display computer systems 130 are computer systems that communicate with server computer system 120 via network 140 as discussed herein. In various embodiments, there are a plurality of display computer systems 130, shown in
In various embodiments, network 140 includes one or more computer networks and allows the various components of computer system 100 to communicate with one another. In various embodiments, network 140 includes any number of wired and/or wireless transmission mediums. In various embodiments, network 140 includes the Internet. As discussed herein, in various embodiments, user device 110 sends camera image 116 to server computer system 120 and receives notification(s) 118 from server computer system 120 via network 140. Moreover, server computer system 120 and the various display computer systems 130 are able to communicate via network 140, and in various embodiments send messages including display generation objects 124, information from data feeds 128, and/or display images 134 as discussed herein.
In various embodiments, computer system 100 is operable to enable a user to utilize their user device 110 to capture a camera image 116 of a screen 132 and get their user device 110 subscribed to data feeds 128. The subscribed user device 110 can then receive notifications 118 corresponding to information on the screen 132. As an example, using the techniques discussed herein, a user in an airport is able to capture a camera image 116 of a screen 132 in the departures area of the airport and receive notifications 118 corresponding to plane departures (e.g., flight delays, gate changes). In another example, a user is able to capture a camera image 116 of a screen 132 in a shopping center and receive notifications 118 about the shopping center (e.g., a map of the shopping center, a directory of stores) and/or notifications 118 about goings-on outside of the shopping center (e.g., a trailer for a movie advertised on screen 132, a link to a website advertised on screen 132).
Referring now to
As discussed in reference to
For example, in some embodiments where a screen 132 is located in the ticketing area of an airport, the particular display image 134 for that particular screen 132 includes information about the departures of various planes. In such an instance, the display generation object 124 includes pointers to one or more data feeds 128 that provide the information about the various planes (e.g., carrier, flight number, scheduled departure time), one or more data feeds about conditions at the airport (e.g., estimated departure time of delayed planes, gate information for the planes), and a schema to arrange the information (e.g., with a heading reading “Departures” and rows of information for each plane). In another example, in embodiments where a particular screen 132 is located in a shopping center, the particular display image 134 includes advertisements for retailers in the shopping center and entertainment feeds for customers (e.g., a sports ticker, a trailer for a movie). In such an example, the display generation object 124 includes pointers to the various data feeds 128 (e.g., a mall advertisement data feed 128, a sports data feed 128, and a movie trailer data feed 128) and a schema to control where information from the data feeds will be displayed in display image 134.
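A display generation object of the kind described above — pointers to data feeds plus a schema that arranges their information — might plausibly be represented as follows. The field names (`object_id`, `feeds`, `schema`, `row_key`, etc.) are illustrative assumptions for exposition, not part of the disclosure.

```python
# Hypothetical representation of a display generation object for an airport
# departures screen: pointers to data feeds plus a schema arranging them.
# All field names are illustrative assumptions.

departures_object = {
    "object_id": "airport-ticketing-01",
    "feeds": [
        {"feed_id": "flight-schedule", "fields": ["carrier", "flight", "scheduled"]},
        {"feed_id": "airport-conditions", "fields": ["estimated", "gate"]},
    ],
    "schema": {
        "heading": "Departures",   # heading shown at the top of the screen
        "layout": "rows",          # one row of merged feed data per flight
        "row_key": "flight",       # feed records are joined on the flight number
    },
}

def feeds_referenced(gen_object):
    """Return the data feed ids a display generation object points to."""
    return [feed["feed_id"] for feed in gen_object["feeds"]]
```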
In some embodiments discussed herein, server computer system 120 uses the display generation objects 124 to generate the respective display images 134 (e.g., with display image generator module 230) and sends them to their respective display computer systems 130 for display. In other embodiments, server computer system 120 sends one or more particular display generation objects 124 to one or more particular remote display computer systems 130, and these display generation objects 124 are useable by the remote display computer system 130 to cause the particular display image 134 to be displayed on one or more particular screens 132. In some of such embodiments, one or more of the display computer systems 130 receive information from one or more data feeds 128 directly. In some of such embodiments, the display generation objects 124 may be modified using the display computer system 130 to, for example, add additional information (e.g., advertising, additional data feeds 128) or to reorganize the schema.
As discussed in reference to
In various embodiments, server computer system 120 includes storage 200. In various embodiments, storage 200, storage 122 and/or storage 126 are implemented separately (e.g., on separate storage systems, on the same storage system but logically separated) or may be implemented together with the same storage system using any type of storage medium (e.g., one or more hard drives, solid state storage). In various embodiments, server computer system 120 stores one or more display images 134 in storage 200. In some embodiments, one or more of the display images 134 in storage 200 had been generated by a display computer system 130 and received by server computer system 120 (e.g., via network 140). In some embodiments, one or more of the display images 134 in storage 200 had been generated by server computer system 120 using display image generation module 230.
In various embodiments, server computer system 120 includes display image generation module 230. In such embodiments, display image generation module 230 is configured to generate respective display image files 134 corresponding to ones of the plurality of display generation objects 124. In various embodiments, display image generation module 230 accesses information from one or more data feeds 128 as indicated by a particular display generation object 124 and assembles such information into the corresponding particular display image 134 using the schema included in the particular display generation object 124. In various embodiments, server computer system 120 sends these generated display image files 134 to various display computer systems 130 for display on respective screens 132. In various embodiments, server computer system 120 uses display image generation module 230 to regularly generate display images 134 periodically or when updated information is received from data feeds 128. In such embodiments, server computer system 120 uses display image generation module 230 to generate revised display images 134 that reflect more up-to-date information from data feeds 128. In embodiments where server computer system 120 generates the display images 134 for display computer systems 130, these revised display images 134 are in turn sent to their respective display computer systems 130.
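The assembly step performed by a display image generation module — pulling records from each referenced feed and arranging them per the object's schema — can be sketched as below. The join-on-a-row-key behavior and all names are illustrative assumptions, not the disclosure's required implementation.

```python
# Sketch of assembling feed data according to the schema in a display
# generation object. Assumes each feed record carries the schema's row key
# (e.g., a flight number) so records can be joined into rows. Names are
# illustrative.

def generate_display_rows(gen_object, feed_data):
    """Join the latest records from each referenced feed on the schema's
    row key; each row merges the fields that each feed contributes."""
    row_key = gen_object["schema"]["row_key"]
    rows = {}
    for feed in gen_object["feeds"]:
        for record in feed_data[feed["feed_id"]]:
            row = rows.setdefault(record[row_key], {})
            for field in feed["fields"]:
                row[field] = record[field]
    heading = gen_object["schema"]["heading"]
    return heading, [rows[key] for key in sorted(rows)]
```

Rendering the returned heading and rows into pixels (and regenerating on a timer or when a feed updates) is left out of the sketch.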
Image recognition module 210 is configured to determine that a particular display image 134 included in the camera image 116 corresponds to a particular display generation object 124. In some embodiments, determining that the particular display image included in the camera image corresponds to the particular display generation object includes comparing camera image 116 to one or more display images 134 (e.g., copies generated by server computer system 120, copies generated by a display computer system 130 and sent to server computer system 120) in storage 200 to find a match. As used herein, a “match” includes an identical match where camera image 116 is identical to a particular display image 134 and an approximate match in which camera image 116 is sufficiently similar (e.g., above a positive match threshold) to a particular display image 134 when compared. In some embodiments, image recognition module 210 uses “fuzzy matching” techniques that determine which portions of camera image 116 match one or more display images 134 (e.g., the portion of camera image 116 that shows a display image 134) and which portions of camera image 116 do not match (e.g., the area around screen 132, obstructions in front of screen 132 or glare on screen 132, portions of display image 134 that have been changed locally such as locally-inserted advertisements). In such embodiments, image recognition module 210 is configured to identify the display image 134 having the highest approximate match score or match percentage for camera image 116. If the match score/percentage is above a threshold value (e.g., an 85% match, although any other threshold can be used), image recognition module 210 determines that the picture of screen 132 in camera image 116 matches a particular display image 134, and image recognition module 210 identifies the particular display generation object 124 that was used to generate that particular display image 134.
In some embodiments, if the match score/percentage is below the threshold value for a positive match but above a lower threshold (e.g., a 50% match, although any other threshold can be used), server computer system 120 is configured to send a message to user device 110 with one or more candidate display images 134 and to receive a selection by the user of the display image 134 that the user captured in camera image 116.
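The two-threshold decision just described — accept automatically above the positive-match threshold, ask the user between the two thresholds, otherwise report no match — can be sketched as follows. The threshold values mirror the examples above (85% and 50%), and all names are illustrative.

```python
# Sketch of the two-threshold matching logic: scores at or above the
# positive-match threshold are accepted automatically; scores between the
# two thresholds produce a list of candidates for the user to choose from.
# Thresholds and names are illustrative.

POSITIVE_MATCH = 0.85
CANDIDATE_FLOOR = 0.50

def classify_matches(scores):
    """scores maps display image ids to match scores in [0, 1]. Returns
    ("match", best_id), ("ask_user", candidate_ids), or ("no_match", None)."""
    best_id = max(scores, key=scores.get) if scores else None
    if best_id is not None and scores[best_id] >= POSITIVE_MATCH:
        return ("match", best_id)
    candidates = [i for i, s in scores.items() if s >= CANDIDATE_FLOOR]
    if candidates:
        # Best candidates first, so the user sees the likeliest match on top.
        return ("ask_user", sorted(candidates, key=scores.get, reverse=True))
    return ("no_match", None)
```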
In some embodiments, in addition to comparing the visual information of camera image 116 to a plurality of display images 134, image recognition module 210 is configured to determine that the particular display image 134 included in camera image 116 corresponds to the particular display generation object 124 based on metadata associated with camera image 116. In some embodiments, image recognition module 210 is configured to use a time and date at which the camera image 116 was captured included in metadata to match camera image 116 to a display image 134 (e.g., by matching camera image 116 with a particular display image 134 that was being displayed on a screen 132 approximately when the camera image 116 was captured). In some embodiments, image recognition module 210 is configured to use a location at which camera image 116 was captured included in the metadata to match camera image 116 to a display image 134 (e.g., by matching camera image 116 with a particular display image 134 that was being displayed on a screen 132 nearby the location where camera image 116 was captured).
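The metadata-based narrowing described above — keeping only display images that were on screen near the capture time, on a screen near the capture location — might look like the following sketch. The tolerance values and all field names are illustrative assumptions; a deployment would tune them and use proper geographic distance.

```python
# Sketch of narrowing candidate display images using camera-image metadata
# (capture time and location). Times are in seconds, locations are planar
# (x, y) coordinates in meters; names and tolerances are illustrative.

def filter_by_metadata(candidates, capture_time, capture_location,
                       max_time_skew=120.0, max_distance=50.0):
    """Keep display images shown close to the capture time on a screen
    close to the capture location."""
    kept = []
    for image in candidates:
        time_ok = abs(image["shown_at"] - capture_time) <= max_time_skew
        dx = image["screen_location"][0] - capture_location[0]
        dy = image["screen_location"][1] - capture_location[1]
        location_ok = (dx * dx + dy * dy) ** 0.5 <= max_distance
        if time_ok and location_ok:
            kept.append(image["image_id"])
    return kept
```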
User notification module 220 subscribes the remote user device 110 to one or more selected data feeds 128 and sends notifications to the user device 110 corresponding to the selected data feeds 128. In various embodiments, the data feeds 128 are selected based on the particular display generation object 124 (e.g., determined by image recognition module 210) and, in some embodiments, annotation information (e.g., the annotation information 308 discussed in reference to
In various embodiments, notifications 118 are any of a number of electronic messages sent from server computer system 120 to user device 110. For example, notifications 118 include but are not limited to push notifications sent to an application installed on user device 110, SMS or MMS messages sent to user device 110, emails sent to an email account associated with a user of user device 110, etc. In some embodiments, separate notifications 118 are sent for each selected data feed 128, but in other embodiments a notification 118 may include information from a plurality of selected data feeds 128. In various embodiments, notifications 118 include: (i) information from the one or more selected data feeds 128 that was being displayed at the time camera image 116 was captured (for example, flight departure information as shown on a screen 132 in the departures area of an airport), (ii) information from the one or more selected data feeds 128 displayed after the time camera image 116 was captured (for example, updated flight departure information that was shown on the same screen 132 after the user captured camera image 116), or both (i) and (ii). In some embodiments, notifications include information from the one or more selected data feeds 128 that is related to information that was displayed but is not itself displayed.
In such embodiments, such information might include (but is not limited to) information presented at a higher level of detail (e.g., notification 118 includes a play-by-play for a sporting event whereas display image 134 included only the score of the sporting event), information presented in a different format (e.g., notification 118 includes a video of a movie trailer whereas display image 134 included only the movie's poster; notification 118 includes an audio version of text appearing on display image 134), information translated into a different language (e.g., notification 118 includes English whereas the display image 134 includes Japanese but not English).
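The per-feed versus combined dispatch options mentioned above can be sketched as a small helper. The message format and names are illustrative assumptions; an actual system would emit push notifications, SMS/MMS, or email rather than plain strings.

```python
# Sketch of assembling notification payloads from selected data feeds:
# either one notification per feed, or one combined notification carrying
# information from several feeds. Structure and names are illustrative.

def build_notifications(feed_updates, combine=False):
    """feed_updates maps feed names to their latest update text."""
    if combine:
        body = "; ".join(f"{name}: {text}"
                         for name, text in sorted(feed_updates.items()))
        return [body]
    return [f"{name}: {text}" for name, text in sorted(feed_updates.items())]
```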
In various embodiments, user notification module 220 is configured to send notifications 118 for a limited amount of time. This amount of time may be limited, for example, by a period of time set by a user of user device 110 (e.g., in a profile associated with the user in which the user has indicated that he or she wants to receive notifications 118 for two hours after camera image 116 is captured). In various embodiments where notifications 118 are sent for a predefined period of time, the start of such a predefined period of time may be defined by metadata associated with the camera image that indicates a time and date at which the camera image was captured. In such embodiments, user notification module 220 determines, based on the metadata, the time and date at which camera image 116 was captured, sends notifications 118 for a predefined period of time (e.g., two hours, although any time period may be used) after the time and date at which camera image 116 was captured, and ceases to send notifications 118 after the predefined period of time. In various embodiments, notifications 118 may be halted based on receiving a command from user device 110 to cease sending notifications 118.
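The time-limited notification window just described — starting at the capture time taken from the camera image's metadata, ending after a predefined period or on an explicit stop command — can be sketched as follows. The class and method names are illustrative assumptions.

```python
# Sketch of a time-limited notification window: the window opens at the
# capture time from the camera image's metadata and closes after a
# predefined period or when the user device sends a stop command.
# Names and the default period are illustrative.

class NotificationWindow:
    def __init__(self, capture_time, period=7200.0):  # default: two hours
        self.capture_time = capture_time
        self.period = period
        self.stopped = False

    def stop(self):
        """Handle a 'cease notifications' command from the user device."""
        self.stopped = True

    def should_notify(self, now):
        """Notify only inside the window, and only if not stopped."""
        if self.stopped:
            return False
        return self.capture_time <= now <= self.capture_time + self.period
```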
In various embodiments, server computer system 120 includes one or more communications modules 240 configured to send and receive messages from other computer systems such as user device 110, display computer system 130, and/or data feed server 250. Various communications modules 240 are configured to communicate via network 140 (e.g., over the Internet, over a closed network, over a cellular network).
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Using the techniques discussed herein, a user is able to capture a camera image 116 of a screen 132, send the camera image 116 to a server computer system 120, and become subscribed to notifications 118 related to information that is visible on screen 132. For example, a user in the departure area of an airport is able to capture a camera image 116 of a screen 132 in the departure area and annotate the image 116 to indicate about which visual area(s) the user would like to receive notifications 118. The user's user device 110 then sends the camera image 116 (and any annotation information) to server computer system 120. Server computer system 120 receives the camera image 116 and annotation information, determines which particular display image 134 was visible on screen 132 when the camera image 116 was captured, determines which display generation object 124 was used to generate the display image 134 shown on the screen in the departures area, and subscribes the user device 110 to one or more data feeds 128 corresponding to the display generation object 124 (and as indicated by the annotation information). The server computer system 120 then sends the user notifications 118 corresponding to these selected data feeds 128 (e.g., notification 118 relating to planes that are departing that day).
Moreover, using the techniques disclosed herein, a user is able to capture a camera image 116 of an image on a screen 132 or even not on a screen (e.g., on a poster) and receive notifications 118 corresponding to the image in various embodiments. In some of such embodiments, the user is able to annotate camera image 116 to indicate a subportion of the image. In such embodiments, the image was not generated using a display generation object 124, but server computer system 120 is configured to identify the image (e.g., with an image matching library, with a neural network) and send notifications 118 to user device 110 including additional information about what is depicted in the image. For example, in some embodiments, the user captures a camera image 116 of a map and circles Jamaica on the camera image 116. This camera image 116 and annotation information is sent to server computer system 120, which determines that camera image 116 depicts a map in which Jamaica has been circled. After identifying Jamaica in the camera image 116, server computer system 120 accesses one or more data feeds 128 relating to Jamaica (e.g., a first data feed 128 of weather in Jamaica, a second data feed 128 of foreign exchange rates for Jamaican dollars, etc.) and sends one or more notifications 118 to user device 110 corresponding to information from these data feeds 128.
Turning now to
Processor subsystem 660 may include one or more processors or processing units. In various embodiments of computer system 600, multiple instances of processor subsystem 660 may be coupled to an interconnect. In various embodiments, processor subsystem 660 (or each processor unit within 660) may contain a cache or other form of on-board memory.
System memory 620 is usable to store program instructions executable by processor subsystem 660 to cause system 600 to perform various operations described herein. System memory 620 may be implemented using different physical memory media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM—SRAM, EDO RAM, SDRAM, DDR SDRAM, RAMBUS RAM, etc.), read only memory (PROM, EEPROM, etc.), and so on. Memory in computer system 600 is not limited to primary storage such as memory 620. Rather, computer system 600 may also include other forms of storage such as cache memory in processor subsystem 660 and secondary storage on I/O devices 650 (e.g., a hard drive, storage array, etc.). In some embodiments, these other forms of storage may also store program instructions executable by processor subsystem 660.
I/O interfaces 640 may be any of various types of interfaces configured to couple to and communicate with other devices, according to various embodiments. In one embodiment, I/O interface 640 is a bridge chip (e.g., Southbridge) from a front-side to one or more back-side buses. I/O interfaces 640 may be coupled to one or more I/O devices 650 via one or more corresponding buses or other interfaces. Examples of I/O devices 650 include storage devices (hard drive, optical drive, removable flash drive, storage array, SAN, or their associated controller), network interface devices (e.g., to a local or wide-area network), or other devices (e.g., graphics, user interface devices, etc.). In one embodiment, computer system 600 is coupled to a network via a network interface device 650 (e.g., configured to communicate over WiFi, Bluetooth, Ethernet, etc.).
Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure.
The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.