Querying and indexing media content, such as video and audio, is a time-consuming, largely manual process. Such media querying and indexing is currently done via manual inspection and review of the media content as one traverses it forward and backward in time. In addition, manual annotation of points in time (e.g., embedding text descriptions of scenes within a video) is sometimes performed to enable faster and more descriptive querying. This is adequate for applications where the annotations will not change over time (e.g., for a movie or recorded stage production) or where the annotations mainly pertain to descriptions or interpretations of the actual media.
However, querying and indexing of unscripted media, such as sports media, or other media that is naturally the subject of viewer opinion (or sentiment), has additional requirements. In particular, consumers also have a desire to view segments of sports media that are associated with or connected to, or that elicit, result in, cause, or generate, crowd (or general viewing public) reaction, response, opinion, or emotion, especially since sports content viewing is often a communal event. Additionally, such crowd-related data may change over time or may vary based on type of event, sport, action, location, or other factors or variables.
Accordingly, it would be desirable to have a method and system that provides indexing and querying of sports (or similar) media and provides the ability to identify segments of media that are of interest to sports fans (or the general public) within a larger sporting (or other) event, and present it to a user in view of the aforementioned factors or requirements.
As discussed in more detail below, the present disclosure is directed to methods and systems for identifying segments of audiovisual (AV) media (by time and location) associated with unscripted (or unplanned or unpredictable) “anomalous” events (or “AEs” or “mini-events”) within a larger sporting event (such as a football game or other sporting event), that elicit a social media response (or reaction), by a given segment of the population, and providing a selectable graphic visualization (or graphic user interface or GUI) showing the level of interest and the sentiment (e.g., positive, negative, neutral) for each Anomalous Event (AE). The GUI may also show the position (or location) on the court or field of play (as actions during sporting events occur at specific locations on the court or field of play), as opposed to movies and similar scripted media, which are not associated with physical locations in the real world. The present disclosure also provides methods and systems for associating spatiotemporal anomalous events identified in AV media events with social media content and signals (e.g., sentiment), and provides a spatiotemporal visualization on an analogous playing field graphic, e.g., as an augmented reality.
For example, an amazing catch made by a football player on a given sports team, is an unplanned, anomalous event (AE) within the football game that a sports fan may like to know happened, or know more details about, and may also like to know the extent to which others had opinions, reactions or responses to the catch.
The visualization or GUI may also provide the user with the ability to view details about each AE, such as when it occurred and how long each AE segment lasted, and the ability to view the AE video segment. Users may also select and adjust how they view the AE in the GUI (e.g., the format of the display) and the time window within a sporting event to view the AEs.
Feedback may also be provided to adjust or re-tune the logic that identifies the AEs, and the system provides the ability for users to set up and receive alerts when AEs occur that meet certain user-selectable or autonomously-determined criteria.
The AV (audio/visual) media sources 16 provide digital source media data for a given event (streamed live or pre-recorded), e.g., a sporting event (or other event that has unplanned or unpredictable anomalous events occur), for analysis by the AE & SMR Logic 12 and ultimately for viewing on the display 38 of the user device 34 by the user 40, as discussed herein. The AV media sources 16 may include, for example, one or more video cameras, audio/visual players, production or master control centers/rooms, playback servers, media routers and the like.
The user device 34 may be a computer-based device, which may interact with the user 40. The user device 34 may be a smartphone, a tablet, a smart TV, a laptop, cable set-top box, or the like. The device 34 may also include the AE App 36 loaded thereon, for providing a desired graphic user interface or GUI or visualization (as described herein) for display on the user device 34. The AE App 36 runs on, and interacts with, a local operating system (not shown) running on the computer (or processor) within the user device 34, and may also receive inputs from the user 40, and may provide audio and video content to audio speakers/headphones (not shown) and the visual display 38 of the user device 34. The user 40 may interact with the user device 34 using the display 38 (or other input devices/accessories such as a keyboard, mouse, or the like) and may provide input data to the device 34 to control the operation of the AE software application running on the user device (as discussed further herein).
The display 38 also interacts with the local operating system on the device 34 and any hardware or software applications, video and audio drivers, interfaces, and the like, needed to view or listen to the desired media and display the appropriate graphic user interface (GUI) for the AE App, such as the AE visualization, and to view or listen to an AE segment on the user device 34.
The AE & SMR Processing Logic 12 identifies segments of media (by time and location) associated with the “Anomalous Events” (AEs) within a sporting (or other) event AV media stream or file, and stores the AE details onto the Anomalous Event Server (AE Server) 24, as described further herein. The logic 12 also searches (or watches or scans) various social media sources (e.g., online social networks, blogs, news feeds, webpage content, and the like) and identifies social media responses (SMRs) (including readers' comments from various online sources) associated with each of the AE's, determines the sentiment of the SMRs, and stores the SMRs on the Social Media Response (SMR) Server 26, as described further herein. The logic 12 may also use information about the user 40 stored in the User Attributes Server 28 (or the user device 34 or otherwise) to update the AE & SMR Processing Logic 12, and the logic 12 may also provide alerts to the user device 34 when an AE has occurred (or is occurring) based on user settings and predictive logic, as described further herein. The AE App 36 running on the user device 34 provides a graphic user interface (GUI) visualization of the AEs on the display 38 based on the information in the AE Server and SMR servers and based on inputs and options settings from the user 40, as described herein. The logic and processes of the system 10 described herein may analyze the game or event in realtime, e.g., using live AV media data (such as a live video stream, or video that is being played as if it were live), and provide the associated realtime AE information, or may analyze the game or event after it has happened or been broadcast, e.g., using pre-recorded AV media data input. Also, the system 10 may analyze a single game or event or a plurality of games occurring at the same time, or simultaneously analyze some live games and some games that are pre-recorded. 
It may also analyze an entire season (or group of games) for a given sport or event type, or may analyze games from multiple different sports. The user may select the types of sports and games the system 10 analyzes and provides AE information for, as discussed more hereinafter.
Referring to
Referring to
The Language Processing Logic 304 analyzes the AV media data on the line 17 and the influencer data on the line 19, and compares it against Generic AE Language data on the line 9, to provide the status (e.g., presence or absence) of respective AE factors associated with the language (or words or key words) used by the commentators in the AV media data content and used by the influencers (e.g., professional sports writers, sports personalities, sports bloggers, and the like) in the influencer data content that may be indicative of an AE. Other language sources may be used if desired.
Referring to
To the right of the column 408 are: column 410 for Team/Player(s) involved with the AE; column 412 for Date/Time when the AE occurred; column 414 for Game Clock time and Quarter (or Half, Period, Inning, Set, or the like, depending on the game/event) when the AE occurred; column 416 for AE Length (or time duration of the AE); column 418 for the AE Factor(s), e.g., the number of AE Factors present in the assessment of whether an event is sufficient to be identified as an AE, as discussed further herein; column 420 for Location, e.g., where the AE occurred on the court or field of play; column 422 for Video (or Audio) clip link, which provides an address or pointer to the AV media segment content for the associated AE; and column 424 for the total number of Social Media Responses (SMRs) that aligned with this AE.
Referring to
Referring to
Next, a block 504 analyzes the AV media data for Anomalous Events (AEs) by performing the AE Identification Logic 300 (
If the result of the block 506 is YES, an AE has been identified in the event/game from the AV media data, and a block 508 extracts the relevant corresponding AV media segment (AE segment) from the AV media data corresponding to the AE and stores it on the AE Server 24 (
Referring to
In other embodiments, the logic 12 may receive signals directly from an audio sound mixer (or sound board) located at the event/game or at the studio producing or broadcasting the event. In that case, if an audio engineer or other sound mixer worker boosts (or increases) the audio signal for the crowd noise (e.g., a crowd noise audio channel on the mixer) in response to, or anticipation of, an important event during the game, the logic 12 (e.g., blocks 602-606) may detect the increase in the audio signal and use the increase as an indicator that the crowd noise is at an anomalous level and set the crowd noise factor to Yes. Any other technique for using or detecting crowd noise levels for identifying exciting events may be used if desired.
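The crowd-noise detection described above may be sketched, under simplifying assumptions, as a comparison of short-window audio energy against a running baseline; the window size and boost ratio below are illustrative values, not parameters from the present disclosure:

```python
import math

def crowd_noise_factor(samples, window=1000, boost_ratio=2.0):
    """Flag anomalous crowd noise: return True when a window's RMS level
    exceeds boost_ratio times the average RMS of the preceding windows."""
    def rms(chunk):
        return math.sqrt(sum(x * x for x in chunk) / len(chunk))

    baseline_sum, n = 0.0, 0
    for start in range(0, len(samples) - window + 1, window):
        level = rms(samples[start:start + window])
        if n > 0 and level > boost_ratio * (baseline_sum / n):
            return True  # Crowd Noise Factor set to Yes
        baseline_sum += level
        n += 1
    return False
```

A boosted crowd-noise channel from a sound mixer could be fed to the same test in place of raw audio samples.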
After performing block 606, or if the result of block 604 is NO, a block 608 performs analysis on the AV media data for commentator voice intonation. Next, a block 610 determines if the commentator voice intonation is indicative of an AE occurring, such as loud volume, excited tone, and the like. If YES, a Commentator Voice Factor is set to Yes at a block 612. Two example techniques for detecting emotion in a speaker's voice that may be used with the present disclosure are described in: Published US Patent Application US 2002/0194002 A1, to Petrushin, entitled "Detecting Emotions Using Voice Signal Analysis" and U.S. Pat. No. 6,151,571 A, to Petrushin, entitled "System, Method And Article Of Manufacture For Detecting Emotion In Voice Signals Through Analysis Of A Plurality Of Voice Signal Parameters," which are incorporated herein by reference to the extent needed to understand the present invention. Any other technique for detecting emotion in a speaker's voice may be used if desired.
After performing block 612, or if the result of Block 610 is NO, block 614 performs analysis on the AV media data for AE Attributes, such as a football play formation not often used by one of the teams in a given game context, or a hot batter coming to the plate with the bases loaded, a large score disparity after a team scores, recent fervor of a given athlete in the news that makes a good or bad play, or any other situations that may be likely to produce an AE that is of interest to a large number of individuals (or may be a precursor to an AE).
For example, the AE attributes may be used within the context of a football game, using the formation of players on a field, in combination with the context of the game. More specifically, the recognition of a “non-kicking” formation for an offense facing “fourth down and seven” near the end of a game may be a notable event. Similar scenarios can be identified in different sports. The use of player location data (e.g., available from electronic player tracking devices or optical object tracking or the like) in combination with visual pattern recognition and detection techniques can be used to implement the prior example.
The pattern recognition component (recognizing play formations) may be implemented using neural networks for pattern recognition, such as that described in C. Bishop, “Pattern Recognition And Machine Learning”. Other techniques may be used for visual pattern recognition, such as that described in I. Atmosukarto, et al., “Automatic Recognition Of Offensive Field Formation In American Football Plays”, 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Any other technique may be used to perform pattern recognition to identify AE attributes.
Once a given formation is recognized (and hence labeled) as an AE, it may be saved for future use (e.g., in the AE Server), to identify the next time an AE attribute occurs.
Machine learning techniques, such as Support Vector Machines (SVMs), may be used to learn over time the patterns, formations, plays, or the like, that are AEs. For example, a One-Class Support Vector Machine (OCSVM) may be used to identify AEs. In that case, the OCSVM "trains" (or "learns") using data belonging to only one "class." In the above example, the "class" is the data representing the majority of football play formations observed in a given context (or stage) of a game. If the resulting classifier does not recognize the combination of play formation and context as being part of the "normal" class, then it may be judged as an "outlier," and thus may be identified as an AE Attribute.
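A minimal sketch of the one-class approach, using scikit-learn's OneClassSVM; the two-dimensional feature vectors and parameter values are illustrative assumptions, not data or settings from the present disclosure:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train only on "normal" play-formation feature vectors (a single class);
# a formation/context combination outside the learned region is an outlier.
rng = np.random.default_rng(0)
normal_formations = rng.normal(loc=0.0, scale=0.5, size=(200, 2))

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_formations)

# 1 = recognized as the normal class, -1 = outlier (candidate AE Attribute)
labels = clf.predict(np.array([[0.0, 0.0], [10.0, 10.0]]))
```

The `nu` parameter bounds the fraction of training formations treated as outliers; in practice the feature vectors would encode player positions and game context rather than synthetic points.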
Alternatively, certain AE Attributes may be “precursors” to an AE occurring (i.e., an AE Precursor), which can be used to identify the high likelihood (e.g., greater than 75% chance) that an AE is about to occur during or shortly after the AE attribute occurs. For example, in a baseball game, recognizing when a highly skilled team member is up to bat when the bases are loaded, may be an AE Precursor. Multiple techniques can be used to provide such analysis. One technique is the use of “association rule mining,” which identifies the strength of relationships among variables in a given dataset.
More specifically, let "I" define a set of binary attributes including but not limited to "player X to bat," "bases loaded," and "high crowd reaction," and let D define a set of transactions, each comprising a subset of "I." Rules may be defined in the form: X implies Y, where X and Y are subsets of I and X and Y are mutually exclusive. Given the above explanation, the rule of interest may be "{player x to bat, bases loaded} implies {high crowd reaction}," where "high crowd reaction" represents an AE. A basic objective may be to check whether the previous rule has high "confidence" and high "lift," or more broadly to identify all rules of high confidence and high lift that involve "high crowd reaction" as the "consequent" (or consequence) of the rule (i.e., appearing on the right-hand side of the implication of the rule). Requiring high lift supports the removal of candidate rules where the sets involved are statistically independent of each other (lift near one), and hence no real "rule" exists. Hence, "association rule mining" may help identify AE precursors that most likely lead to an AE.
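The support, confidence, and lift computations underlying such rule mining can be sketched as follows; the toy transactions are invented for illustration:

```python
def rule_stats(transactions, antecedent, consequent):
    """Support, confidence, and lift for the rule: antecedent -> consequent.
    Lift near 1 indicates the two sides are independent (no real rule)."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a
    lift = confidence / (c / n)
    return support, confidence, lift

transactions = [
    {"player x to bat", "bases loaded", "high crowd reaction"},
    {"player x to bat", "bases loaded", "high crowd reaction"},
    {"player x to bat", "bases loaded", "high crowd reaction"},
    {"player x to bat", "bases loaded"},
    {"high crowd reaction"},
    {"walk"}, {"strikeout"}, {"single"}, {"double"}, {"steal"},
]
support, confidence, lift = rule_stats(
    transactions,
    antecedent={"player x to bat", "bases loaded"},
    consequent={"high crowd reaction"},
)
# confidence = 0.75 and lift = 1.875 for this toy dataset, so the rule is
# both reliable and far from independence, marking a candidate AE Precursor.
```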
Next, a block 616 determines if the anomalous event AE Attribute is indicative of an AE or an AE about to occur (or AE Precursor). If NO, the process exits. If YES, an Anomalous Event Attribute Factor is set to Yes at a block 618 and the process exits.
The crowd noise levels, vocal intonation, and attributes/precursors may be pre-defined (or pre-set or pre-stored) to compare the AV media against and to identify certain predetermined conditions or scenarios, such as when a known commentator always says the same thing (a “signature” phrase or remark) when an AE occurs, e.g., John Sterling for the New York Yankees, says: “It is high, it is far, it is gone”, for every home run hit by the Yankees. Alternatively, the crowd noise levels, vocal intonation, and attributes/precursors may be learned over time by the logic of the present disclosure, e.g., using machine learning techniques, such as support vector machines (SVMs), neural networks, computer vision algorithms, as described herein, or any other machine learning techniques that perform the functions of the present disclosure.
Referring to
After the language analysis has been performed at the block 702, a block 704 determines whether commentator AE language/words indicative of an AE have been detected. If YES, a Commentator Language Factor is set to Yes at a block 706. After performing block 706, or if the result of block 704 is NO, a block 708 performs language analysis on the Influencer Sources data for key words, phrases, or the like, from influential sources, such as sports writers, sports personalities, bloggers, or other sports authorities, that may be blogging, tweeting, or otherwise commenting on events that occurred at the game (either in realtime during the game or shortly after the game). Next, a block 710 determines if the language of the Influencer Sources data is indicative of an AE occurring, similar to that described above for blocks 702-704 for the commentator language analysis. If NO, the process exits. If YES, an Influencer Language Factor is set to Yes at a block 712, and the process exits.
Referring to
For example, an AE identification model may be used to determine if an AE has occurred, the model having model parameters, such as that shown below in Equation 1 (Eq. 1) which is a linear weighted AE Factor Sum equation, where A, B, C, D, and E are the weightings (or weighting coefficients) for the respective factors: Crowd Noise Factor (CNF), Commentator Voice Factor (CVF), AE Attribute Factor (AAF), Commentator Language Factor (CLF), and Influencer Language Factor (ILF). The weighting coefficients A-E may be adjusted (or tuned) as the logic learns (e.g., through machine learning or other techniques) which AE Factors are useful in determining when an AE has occurred, as discussed more hereinafter with the Adjustment and Alert Logic 204 (
AE Factor Sum = A*CNF + B*CVF + C*AAF + D*CLF + E*ILF (Eq. 1)
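A minimal sketch of Eq. 1; the weighting coefficients A-E and the detection threshold below are illustrative assumptions, not values from the present disclosure:

```python
def ae_factor_sum(factors, weights):
    """Eq. 1: AE Factor Sum = A*CNF + B*CVF + C*AAF + D*CLF + E*ILF."""
    return sum(weights[name] * factors[name] for name in weights)

# Illustrative (assumed) weightings A-E and detection threshold:
weights = {"CNF": 2.0, "CVF": 1.5, "AAF": 1.0, "CLF": 1.0, "ILF": 0.5}
factors = {"CNF": 1, "CVF": 1, "AAF": 0, "CLF": 1, "ILF": 0}  # Yes=1, No=0
score = ae_factor_sum(factors, weights)  # 2.0 + 1.5 + 1.0 = 4.5
is_ae = score >= 3.0  # threshold is an assumption
```

The weights would be retuned over time by the Adjustment and Alert Logic as the system learns which AE Factors best predict events of interest.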
Also, in some embodiments, the AE's may be determined (or detected or identified) from the AV media data using various other AE identification models, such as machine learning models. In that case, when identifying and extracting AV media segments for AE's, one or more machine learning models may be used, which may have various model parameters. For example, a Support Vector Machine (SVM) may be used as a machine learning tool having model parameters, e.g., “c” and “gamma” of a SVM kernel, which can be adjusted (or tuned) based on the performance of the SVM classifier, which is related to the “quality” of the output AE media segment (e.g., how closely the detected AE relates to or resonates with viewers), as determined by the amount or level of social media responses (SMRs) that are aligned with (or related to) the AE, as discussed hereinafter with
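Tuning the SVM parameters "c" and "gamma" by classifier performance may be sketched as a cross-validated grid search with scikit-learn; the synthetic data stands in for AE-segment feature vectors and is purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for AE-segment feature vectors, labeled by whether the
# extracted segment "resonated" with viewers (i.e., drew aligned SMRs).
X, y = make_classification(n_samples=200, n_features=6, random_state=0)

# Cross-validated search over the SVM kernel parameters C and gamma.
search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]},
    cv=3,
)
search.fit(X, y)
best = search.best_params_  # tuned values of C and gamma
```

The tuned parameters, and the SVM model that uses them, could then be stored on the AE Server for the next round of AE extraction.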
If block 806 determines that an AE occurred, then a block 808 sets an Identified Anomalous Event to YES, which may be used (as discussed with
Referring to
If the result of block 902 is NO, there is no AE location tracking data available, and block 906 performs image processing on images or video in the AV media data, or performs audio processing of the AV media data, to identify the AE sports object location corresponding to the AE and the process exits. For example, in that case, the logic may use the time when the AE occurred (from the AE detection logic), and identify from the image processing or audio processing where the sports object was located at the time the AE was identified. If there are multiple possible answers, the logic determines the most likely result, similar to that described above, by using the AE type/action together with the image or audio processing results, to identify the most likely location of the AE.
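Selecting the tracked location nearest in time to the identified AE can be sketched as follows; the (time, x, y) trajectory sample format is an assumption for illustration:

```python
def ae_location(trajectory, ae_time):
    """Return the tracked (x, y) position whose timestamp is closest to the
    time at which the AE was identified; trajectory is [(time, x, y), ...]."""
    t, x, y = min(trajectory, key=lambda p: abs(p[0] - ae_time))
    return (x, y)

# Hypothetical ball trajectory samples and an AE detected at t = 1.2 s:
track = [(0.0, 0.0, 0.0), (1.0, 5.0, 2.0), (2.0, 10.0, 4.0)]
where = ae_location(track, ae_time=1.2)
```

The same lookup applies whether the trajectory comes from sensor data, image processing, or audio-assisted tracking.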
An example of a technique for tracking athlete movement using sensor data that may be used with the present disclosure is described in H. Gandhi et al., “Real-time tracking of game assets in American football for automated camera selection and motion capture”, 2010, 8th Conference of the International Sports Engineering Association. Another example for tracking objects such as people in video data may be used such as that described in S. Weng, “Video object tracking using adaptive Kalman filter”, 2006 Journal of Visual Communication and Image Representation, vol. 17, no. 6, and T. Zhang, “Robust multi-object tracking via cross-domain contextual information for sports video analysis”, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing.
If video-based tracking is used, then the location of the AE may be constrained to a limited range or a specific player by selecting specific tracked objects or object trajectories to assign to the AE. For example, facial or jersey number/name recognition techniques (based on, e.g., neural networks or the like) can be used to label the objects tracked in the video. Any other type of recognition technique may be used if desired. In addition, data from commentator audio can be used to assist in locating the AE. For example, names of players from voice commentary can be used to select a player already labeled from the video and thereby select specific locations to map to the AE. Since the output of the aforementioned location analysis routines may be trajectories, specific locations can be chosen using commentator audio. For example, if the commentator mentions “player x scores a touchdown,” then the location of player x closest to the touchdown region can be used as the location. As another example, if the commentator mentions with excitement that “an incredible catch” was made, then the location mapped to when the location of the ball meets the location of the receiver can be associated with the AE. The AE location results data of blocks 904 or 906 (i.e., the location of the ball or player associated with the AE) may be used by the AE Identification & Location logic process 500 (
Referring to
Referring to
The SMR User Attributes information/data in column 1112 of
If there are multiple topics in a given SMR post that are likely associated with different AEs, e.g., if an SMR user is commenting on two different AEs in a single post, the logics 1000, 1200 may separate the two comments/responses from the single post and create additional SMR entries for each comment/response. In some embodiments, the alignment logic (
Referring to
The Social Media Filter Logic 1000 identifies social media content relevant to the game/event in general, not necessarily AE's within the game. Thus, the logic 1000 can be viewed as a “pre-filter” of the social media content to identify items that are relevant to the general game/event being analyzed that may have an AE occur therein. Also, in some embodiments, the Social Media Filter, Alignment & Sentiment Logic 202 (or one or more of the logics 1000, 1004, 1008 therein) may run concurrently (or in parallel) with the AE Identification & Location Logic 200. Further, in some embodiments, instead of performing the Logic 1000 as a pre-filter to find potential SMRs, the system 10 may use the Logic 1000 to identify actual SMRs after an AE has been identified. In that case, the Logic 1000 may use a narrower time filter to identify SMRs related to (or commenting on) specific AEs (instead of the entire game/event), and the Alignment Logic 1004 would not be performed. In that case, the Sentiment Logic 1008 may be performed as part of the identification process or after the SMRs have been identified. In addition, in some embodiments, the Sentiment Logic 1008 may be performed before the Alignment Logic 1004. In that case, each of the SMR entries in the SMR Listing Table 1100 would have a sentiment indicated.
When the Social Media Filter Logic 1000 has populated the SMR Listing table 1100 (or a predetermined portion thereof), it provides a signal on a line 1002 to perform the Alignment Logic 1004. In some embodiments, the Alignment Logic may run continuously or be initiated when an AE is identified in the AE Listing table 400. The Alignment Logic 1004 obtains data on a line 1009 from the SMR Listing table 1100 on the SMR Server 26 and data on the line 214 from the AE Listing table 400 (
Referring to
Next, a block 1206 determines whether the Social Media topic was found for a particular social media response content (e.g., a tweet or comment online). If a topic was not able to be found, a block 1208 uses generic social media topic identification data (e.g., content model and topic data) to attempt to identify the topic of the social media post. For example, in one embodiment, a generic approach would be to collect social media content samples (e.g., tweets, replies) from a social media account (or source) known to be associated with sports, or a specific sport. An example includes a social media account for the sports section of a news organization. Named entities (e.g., names, locations) can be removed so that only more generic sports terms remain. Using the resulting processed data set, a text classification approach employing combinations of techniques such as term frequency-inverse document frequency (tf-idf), latent semantic indexing (LSI), and support vector machines (SVMs) may be used to model and automatically recognize word usage indicative of a given sport, similar to that discussed herein with
After performing block 1208 or if the result of block 1206 was YES, a block 1210 determines if the topic of the social media post is sufficient to link the post to the sporting event or game. For example, if the content topic matches one or more of the game metadata topics (e.g., same sport, same team, same player, same location, or the like), the logic may conclude that is sufficient. The amount of the matching or correlation between the identified content topic and the event metadata that is sufficient may be pre-defined (or pre-set or pre-stored) as a default or may be learned over time by the logic, using machine learning such as that described herein, or other learning techniques.
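The tf-idf/SVM text classification approach described above may be sketched as follows; the tiny labeled corpus of generic sport terms (named entities removed) is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented training samples standing in for collected social media content,
# labeled by sport after named entities have been stripped.
posts = [
    "touchdown pass fourth down field goal",
    "quarterback sack interception end zone",
    "home run bases loaded strikeout inning",
    "pitcher walk double play ninth inning",
]
sports = ["football", "football", "baseball", "baseball"]

# tf-idf features feeding a linear SVM text classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(posts, sports)
topic = clf.predict(["what a catch in the end zone"])[0]
```

A real system would train on far larger corpora and could add an LSI dimensionality-reduction stage between the vectorizer and the classifier.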
If the result of block 1210 is NO, a block 1212 classifies the content of the social media post based on the SMR user's social media attributes, such as an indication on the user's Facebook® page of his/her favorite sports team or favorite player(s), e.g., see the discussion herein with
If the result of block 1214 is NO, then the topic of the social media post being analyzed could not be identified sufficient to link it to the sporting event being analyzed for AEs, and the process 1200 exits and that social media post is not added to the SMR Response table 1100 (
The Social Media Filter Logic 1000 may use natural language processing (e.g., speech tagging or other processing), vector space modeling (VSM), and document similarity analysis, e.g., using term frequency-inverse document frequency transformation and cosine similarity measurement, which may be similar to that discussed with the Sentiment Logic 1008 and the process 1500 (
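The tf-idf and cosine-similarity document comparison mentioned above may be sketched as follows; the example documents are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "amazing touchdown catch by the receiver",  # event/game description
    "that catch was an amazing touchdown",      # candidate social media post
    "traffic on the highway this morning",      # unrelated post
]
tfidf = TfidfVectorizer().fit_transform(docs)

# Cosine similarity of each candidate post to the event description (row 0);
# the post sharing terms with the event scores higher than the unrelated one.
sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
```

A threshold on the similarity score would then decide whether a post passes the pre-filter for the game or event being analyzed.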
Referring to
If the result of block 1306 is NO, the process Exits. If YES, a block 1308 determines whether any features of the SMR match similar features of the AE, e.g., same sport, same team, same players, AE action matches SMR topic, or the like. If NO, the process exits. If there is some amount of matching, a block 1310 determines if the matching is sufficient to align the SMR to the AE being reviewed. For example, if at least one feature (e.g., topic, attribute, or the like) of the SMR content matches similar features of the AE, e.g., same sport, same team, same players, AE action matches SMR topic, or the like, the logic may conclude that there is sufficient alignment between the SMR and the AE to determine that the SMR is commenting on the AE. The amount of the matching or correlation between the SMR features and the AE features that is sufficient to conclude alignment may be pre-defined (or pre-set or pre-stored) as a default or may be learned over time by the logic, using machine learning such as that described herein, or other learning techniques.
If the result of block 1310 is NO, there is not sufficient matching to align the SMR with the AE and the process exits. If YES, there is sufficient matching (or alignment) between the SMR and the AE, and the SMR table 1100 (
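The feature-matching test of blocks 1308-1310 may be sketched as a set overlap compared against a settable threshold; the feature names below are illustrative assumptions:

```python
def aligned(ae_features, smr_features, min_matches=1):
    """Declare alignment when the SMR shares at least min_matches features
    (e.g., sport, team, player, action/topic) with the AE."""
    matches = ae_features & smr_features
    return len(matches) >= min_matches, matches

ae = {"football", "team a", "player x", "catch"}
ok, shared = aligned(ae, {"football", "player x", "catch"})  # aligned
miss, _ = aligned(ae, {"baseball", "home run"})              # not aligned
```

The `min_matches` threshold corresponds to the sufficiency test of block 1310 and could be a pre-set default or learned over time.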
More specifically,
Referring to
Referring to
Next, a block 1506 updates the SMR Listing table 1100 (
The results of the sentiment logic process of
Referring to
Next, a block 1706 assigns a rank to each AE based on the AE performance results from block 1704. Next, a block 1708 assigns weights to corresponding models and parameters used for identifying each AE. Next, a block 1710 updates the models and parameters of the AE Identification & Location Logic 200 (
In some embodiments, the weightings that are adjusted (or retuned) by the adjustment logic may be a simple linear weighting, as shown in Eq. 1, as discussed above. Also, in some embodiments, AE's are determined by machine learning models. In that case, when extracting AV media segments for AE's, one or more machine learning models may be used, which may have different parameters. For example, a Support Vector Machine (SVM) may be used as a machine learning tool having parameters “c” and “gamma” of the SVM kernel, which can be “tuned” (or adjusted) based on the performance of the SVM classifier, which is directly related to the “quality” of the output AE media segment. These parameters, and the corresponding SVM model that is used for extracting an AE segment, may be stored on the AE server. The retuning or adjusting of the parameters of the AE Identification and Location Logic 200 may be done globally (or generally) for all AE App users (e.g., such that it applies as default parameters for all AEs), or may be done as targeted or personalized for a given user or user group having some common characteristics or attributes. Any other technique for adjusting the logic 200 or for adjusting the parameters, algorithms, factors, weightings, or the like identifying or locating the AE in the AV Media, may be used if desired.
In some embodiments, the adjustment logic 204 may also provide an adjustment on a line 218 (Adj2) (
Referring to
Referring to
More specifically, the User Attributes Listing table 1900 of
Referring again to
In some embodiments, the block 1802 may review certain “features” or attributes (e.g., age group, gender, time/day, physical location (home/office or realtime GPS location), hobbies, and the like) of the SMR audience that provided a significant number of responses (measured over many AE's or many games) and correlate it to the features/attributes of the AE's (e.g., from the AE Listing table 400) they were commenting about, e.g., sport, team, players, AE Type/Action, game situation/circumstance, or the like. For example, over time, it may be determined by the logic herein that certain types of audiences or individuals (e.g., age and gender demographics) exhibit sentiment about AEs involving certain entities (e.g., team, player) and at certain times (e.g., evenings, weekends), and at certain locations (e.g., at home, at a game). Once this has been determined, it can be used to send AE alerts for a limited set of AEs to specific audiences or individuals and under certain contexts, when those conditions are met.
In some embodiments, logistic regression techniques may be used to determine the probability that an individual will like (or respond to or express sentiment about) a given AE (having certain features or attributes as discussed above), and can, therefore, be used to determine which AE Alerts (if any) to send to which users when an AE happens and the conditions are met to do so.
More specifically, the logic can review the SMR Listing table and determine the features or attributes of the AEs that elicit a significant social media response (SMR), and then determine if there is a select group of the SMR users (or social media audience or individual source) attributes that apply. For example, the features/attributes of a game (or event) may be: the Sport (e.g., football, soccer, tennis, basketball, hockey, baseball, track & field, auto racing, horse racing, bowling, sailing, or any other sport), AE Action/Type (e.g., score, block, sack, turnover, etc.), Game Situation/Circumstance (e.g., close score, overtime, deciding game, etc.). Also, the features/attributes of the SMR user (or social media audience) may be: age group (e.g., 18-24, 21-24, etc.), physical location (e.g., Boston, Chicago, Miami, Bristol, N.Y., etc.), and hobbies (e.g., sport-type, video games, travel, etc.), which in this case may be used for labeling the types of audience members, but not for driving logistic regression modeling. Certain features may be selected as “control” features. For example, for SMR user (audience) features/attributes, the logistic regression logic may solve for updating preferences for AE features/attributes only for 18-24 year olds (the control), in which case “hobbies” may represent the unique types of audience members.
More specifically, the following shows one example of a logistic regression modeling and estimation approach. The logistic regression modeling problem may be phrased as calculating the probability that a user (or type of user) expresses strong (positive or negative) sentiment on a given AE. For simplicity, we only consider individual users in the following discussion. The logistic regression model (a generalized linear model) may be expressed using the following Equation 2 (Eq. 2):
P(ci|xi) = [logit^−1(α + β^T xi)]^ci · [1 − logit^−1(α + β^T xi)]^(1−ci) Eq. 2
where ci ∈ {0,1}, with ci=0 meaning no sentiment was expressed for the AE and ci=1 being the opposite, and xi is the vector of features or attributes of a social media audience member (or SMR user), i.e., the AE's that this SMR user has commented on in the past. Because ci can only be 1 or 0, the above equation mathematically reduces to the following linear function, shown as Equation 3 (Eq. 3), which may be used for the logistic regression model for the logic described herein:
logit(P(ci=1|xi)) = α + β^T xi Eq. 3
The values for α and β in Eq. 3 above may be estimated by solving Eq. 3 with a maximum likelihood estimation (MLE) and convex optimization algorithm. The MLE function can be defined as follows, shown as Equation 4 (Eq. 4):
L(Θ|X1,X2, . . . ,Xn)=P(X|Θ)=P(X1|Θ) . . . P(Xn|Θ) Eq. 4
where the Xi, i=1, . . . , n, are independent and represent the n users, and Θ denotes the pair {α,β}. From Eq. 4, we can mathematically obtain the following MLE function shown as Equation 5 (Eq. 5) below, which can be solved or optimized, under reasonable conditions (i.e., the variables are not linearly dependent), using techniques such as Newton's method or stochastic gradient descent, such as is described in Ryaben'kii, Victor S.; Tsynkov, Semyon V., A Theoretical Introduction to Numerical Analysis, 2006.
ΘMLE = arg maxΘ Π(i=1 to n) pi^ci (1 − pi)^(1−ci), where pi = logit^−1(α + β^T xi) Eq. 5
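A minimal sketch of the estimation in Eqs. 3-5, using plain gradient ascent on the log-likelihood as a simple stand-in for the Newton or stochastic-gradient solvers mentioned above. The single scalar feature and the toy data are hypothetical; a real implementation would use the full AE feature vectors.

```python
import math

def logit_inv(z):
    """Inverse logit (sigmoid): maps the linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, cs, lr=0.5, steps=2000):
    """Estimate alpha and (scalar) beta of Eq. 3 by maximizing the
    likelihood of Eq. 5 with plain gradient ascent."""
    alpha, beta = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, c in zip(xs, cs):
            p = logit_inv(alpha + beta * x)
            ga += (c - p)          # d log L / d alpha
            gb += (c - p) * x      # d log L / d beta
        alpha += lr * ga / n
        beta += lr * gb / n
    return alpha, beta

# Toy data: x is a single AE feature value, c = 1 if the user expressed
# strong sentiment about that AE, 0 otherwise.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
cs = [0, 0, 0, 1, 1, 1]
alpha, beta = fit_logistic(xs, cs)
# Probability (per Eq. 2) that this user reacts to a new AE with x = 2.0:
p = logit_inv(alpha + beta * 2.0)
```

The fitted probability could then gate whether an AE Alert is sent to this user, as described above.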
If block 1802 determines that the AE features/attributes match the appropriate amount of SMR User features/attributes, as described herein, a block 1804 sends an AE alert message indicating that an AE has occurred. The AE alert message may be sent directly to the User Device 34 (e.g., text message or SMS or the like) or to a personal online account of the user, e.g., email or the like. In other embodiments, the AE alert message may be sent or posted via social media, such as Facebook, Twitter, Instagram or other online social media outlet, and may be sent to certain social media groups or news feeds (e.g., sports or team or player related social media groups, or the like).
The graphical format and content of the AE alert may be pre-defined, such as a pop-up box having text or graphics, such as: “An AE has occurred. Click this alert box to get more information,” or it may specify the game, such as: “An AE has occurred in the football game: Team A vs. Team C. Click this alert box to get more information.” In some embodiments, if the user clicks on the alert box, the AE App is launched and the user can explore the AE in more detail, e.g., with the AE GUIs discussed herein. Any other format and content for the AE alerts may be used if desired. The content and format of the AE alert may also be set by the user if desired.
After performing the block 1804, or if the result of block 1801 or 1802 is NO, a block 1806 determines whether the AE matches (or substantially matches) an AE that the user has indicated he/she would like to see more of in the AE Likes column 1916 of the User Attributes Listing table 1900 (
Referring to
At the bottom of the GUI is an AE Time Selector 2014 which allows the user to select a time window 2016 during the game to view the AE's, having a begin slider arrow 2018 and an end slider arrow 2020 to set the time window 2016. In this case, it is shown as being set to view AE's that occur during the first quarter Q1 of the game. Note the time window 2016 setting shown in this example is different from what would be set to show AE1-AE4 in
To the left of the AE Time Selector 2014 is an SMR data collection sample window control graphic 2034 (i.e., the SMR sample window or SMR detection window). As discussed herein, the logics 1200, 1300 (
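The SMR sample (detection) window described above can be sketched as a simple timestamp filter over collected social media responses. The post records, timestamps, and default window length below are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical timestamped SMR posts and an AE occurrence time.
posts = [
    {"time": datetime(2018, 1, 1, 20, 5), "text": "what a catch!"},
    {"time": datetime(2018, 1, 1, 20, 30), "text": "still can't believe it"},
    {"time": datetime(2018, 1, 2, 9, 0), "text": "about last night..."},
]
ae_time = datetime(2018, 1, 1, 20, 4)

def smr_in_window(posts, ae_time, window=timedelta(hours=1)):
    """Keep only posts that fall inside the SMR sample (detection)
    window starting at the AE occurrence time."""
    return [p for p in posts if ae_time <= p["time"] <= ae_time + window]

hits = smr_in_window(posts, ae_time)  # the first two posts fall in the window
```

Widening the window (e.g., to days) would behave like moving the sample window control graphic 2034 described above, capturing later responses at the cost of looser attribution to the AE.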
The AE location information for where the AE occurred on the actual playing field, may be obtained from the AE Listing table 400 (
Also, if the user taps (or otherwise interacts with) an AE pylon graphic, e.g., pylon 2004, a pop-up window 2022 may appear providing an option to view the AE video segment, full screen or in a window on the screen, and the pop-up may show an image from the video and have a clickable link to view the video segment. In some embodiments, another pop-up window 2024 may also appear and provide selectable options to indicate AE user feedback, such as if the AE App user likes this type of AE and would want to see more like it, or does not like it or would not like to see an AE like this again. Also, in some embodiments, a pop-up window 2030 may also appear (with the other windows 2022,2024) showing the sentiment allocation (positive, negative, neutral) for that particular AE, as shown for the AE pylon 2004, having 75% positive (black) and 25% negative (white). Sentiment allocation windows 2032,3034, are shown for the pylons 2008,2010, respectively, which would appear when selected, as described above, or otherwise as described herein or as set by user settings.
The GUI 2000 may also provide an AE Type selector 2026, e.g., scores, catches, fumbles, turnovers, or any other AE type, which allows the user to select or filter the AE types/actions that the user would like to see, and the AE App will only display AE's of the types selected. In addition, the GUI 2000 may also provide a section 2028 of the display showing game metadata, such as the teams playing, start/stop date/time, location of game, current game clock, current score, and the like. Also, if the user touches (for a touch-screen application) the pylon twice in rapid succession (or double clicks on it), a pop-up window 2056 may appear showing the AE Details for that particular AE, such as AE Type, Duration, Location, date/time, team/player. Other AE Details may be displayed in the AE Details window 2056, such as the data shown in the AE Listing table 400 (
Referring to
Also, in some embodiments, AE pylons 2064-2070 (corresponding to the AE pylons 2004-2010 of
Referring to
Referring to
Referring to
In particular, referring to
A checkbox 2320 is also provided to indicate that the shading of the AE pylon in the AE App GUI is indicative of SMR Social Media Source allocation. When the box 2320 is checked, a series of options 2324 may appear to select the desired shape of the pylon, e.g., cylinder, square cylinder, and cone. Other shapes may be used for the pylons if desired. In addition, a series of options 2326 also appear for AE pylon color for each social media source, e.g., Facebook, Twitter, Instagram, Google+, and the like. For each social media source option, there is a drop-down menu with a list of colors to select from. For example,
If both the boxes 2308 and 2320 are checked, the GUI may show two pylons at each AE location (see
A checkbox 2330 is also provided to turn on (or off) the SMR posts screen section 2040 (
In addition, the user may set the SMR detection or sample window, which sets how long (minutes, hours, days) the Social Media Filter Logic 1000 (
Also, a box 2360 is provided to select an AE filter to be used to filter out, or to select, certain desired types of AE results. If the box 2360 is selected, a series of options 2362 are provided, such as Same Age Group (to provide AEs only from the same age group as the user), Same Fan Base (to provide AEs only from users with the same favorite teams), and Same Location (to provide AEs only from users in the same location as the user, e.g., MN).
The User Attributes section 2304 allows the user to provide information about himself/herself which is placed in the User Attributes Listing table 1900 (
Next, a series of checkboxes and corresponding dropdown menus 2374 allow the user to select Teams, Players, General AE Types, and Game Situations that the user is interested in. For example, the user may select two teams, e.g., Team A and Team B, and two players, e.g., Player A and Player B, and General AE Type (applicable to all selected sports), e.g., Fights, Catches, Drops, Fumbles, and Game Situations (fixed or variable), e.g., Playoff, Close Game, Last 2 min. Also, there may be checkboxes 2380 that allow the user to provide more information about himself/herself, such as Age Group, Location/Home Address (City and State), and Hobbies (e.g., Fishing). Other user attributes may be provided if desired, or as needed or discussed herein to perform the functions of the present disclosure.
The Alert Settings section 2306 allows the user to set-up alerts for certain types of AEs. In particular, a checkbox 2390 is provided to turn on (or off) AE alerts. Also, a checkbox 2392 is provided to turn on (or off) autonomous alerts that are automatically generated by the logic of the present disclosure based on information about the user, as discussed herein. A checkbox 2394 is provided to allow the user to select when an AE occurs that the user may be interested in, based on certain criteria. When the box 2394 is selected, a series of options 2396 are provided that allow the user to select when an AE occurs involving certain ones of the users preferences, e.g., Sports, Teams, Players, AE Types, and Game situations. Other preferences may be used if desired.
Similarly, a checkbox 2398 is provided to allow the user to select when an AE occurs that the user may be interested in, based on certain exclusionary criteria, i.e., do not send me alerts when certain criteria are met. When the box 2398 is selected, a series of options 2396 are provided that allow the user to select which items (or other information), when they occur, will NOT trigger an AE alert to the user, e.g., Sports, Teams, Players, AE Types, and Game situations (i.e., the negative preferences, or exclusionary criteria). Other negative preferences or exclusionary criteria may be used if desired. Other alert settings may be used if desired, or as needed or discussed herein to perform the functions of the present disclosure.
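The inclusion and exclusion alert criteria described above can be sketched as a two-stage check: apply the positive preferences first, then the exclusionary criteria. The preference fields and values below are illustrative assumptions.

```python
# Hypothetical preference structure mirroring the Alert Settings above:
# alert when the AE matches any inclusion criterion and no exclusion
# criterion.
include = {"teams": {"Team A"}, "ae_types": {"score", "fumble"}}
exclude = {"teams": {"Team C"}, "ae_types": set()}

def should_alert(ae, include, exclude):
    """Apply positive preferences, then exclusionary (negative) criteria."""
    wanted = ae["team"] in include["teams"] or ae["type"] in include["ae_types"]
    blocked = ae["team"] in exclude["teams"] or ae["type"] in exclude["ae_types"]
    return wanted and not blocked

should_alert({"team": "Team A", "type": "score"}, include, exclude)   # True
should_alert({"team": "Team C", "type": "score"}, include, exclude)   # False: excluded team wins
```

Evaluating exclusions after inclusions matches the “do not send me alerts when certain criteria are met” behavior: an excluded team or AE type suppresses the alert even when another preference matches.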
Referring to
Next, a block 2408 determines whether an AE Alert has been received. If YES, a block 2410 generates a pop-up message on the user device 34 display indicating an AE has occurred and the user can then go to the AE GUI screen 2000 (
Some of the AE App settings data (
Referring to
Also, the computer-based user devices 34 may also communicate with separate computer servers via the network 60 for the AE Server 24, Social Media Response (SMR) Server 26, and the User Attribute Server 28. The servers 24,26,28 may be any type of computer server with the necessary software or hardware (including storage capability) for performing the functions described herein. Also, the servers 24,26,28 (or the functions performed thereby) may be located, individually or collectively, in a separate server on the network 60, or may be located, in whole or in part, within one (or more) of the User Devices 34 on the network 60. In addition, the Social Media Sources 20 and the Influencer Sources 18 (shown collectively as numeral 64), and the Language Sources 8 and the Topic & Sentiment Sources 22 (shown collectively as numeral 66), and the Location Tracking Sources 14, as well as the SMR Server 26, the AE Server 24 and the User Attributes Server 28, may each communicate via the network 60 with the AE & SMR Processing Logic 12, and with each other or any other network-enabled devices or logics as needed, to provide the functions described herein. Similarly, the User Devices 34 may each also communicate via the network 60 with the Servers 24,26,28 and the AE & SMR Processing Logic 12, and any other network-enabled devices or logics necessary to perform the functions described herein.
Portions of the present disclosure shown herein as being implemented outside the user device 34 may be implemented within the user device 34 by adding software or logic to the user devices, such as adding logic to the AE App software 36 or installing new/additional application software, firmware or hardware to perform some of the functions described herein, such as some or all of the AE & SMR Processing Logic 12, or other functions, logics, or processes described herein. Similarly, some or all of the AE & SMR Processing Logic 12 of the present disclosure may be implemented by software in one or more of the Anomalous Events (AE) Server 24, the User Attributes Server 28, or the Social Media Response (SMR) Server 26, to perform the functions described herein, such as some or all of the AE & SMR Processing Logic 12, or some or all of the functions performed by the AE App software 36 in the user device 34.
The system, computers, servers, devices and the like described herein have the necessary electronics, computer processing power, interfaces, memory, hardware, software, firmware, logic/state machines, databases, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces, to provide the functions or achieve the results described herein. Except as otherwise explicitly or implicitly indicated herein, process or method steps described herein may be implemented within software modules (or computer programs) executed on one or more general purpose computers. Specially designed hardware may alternatively be used to perform certain operations. Accordingly, any of the methods described herein may be performed by hardware, software, or any combination of these approaches. In addition, a computer-readable storage medium may store thereon instructions that when executed by a machine (such as a computer) result in performance according to any of the embodiments described herein.
In addition, computers or computer-based devices described herein may include any number of computing devices capable of performing the functions described herein, including but not limited to: tablets, laptop computers, desktop computers, smartphones, smart TVs, set-top boxes, e-readers/players, and the like.
Although the disclosure has been described herein using exemplary techniques, algorithms, or processes for implementing the present disclosure, it should be understood by those skilled in the art that other techniques, algorithms and processes or other combinations and sequences of the techniques, algorithms and processes described herein may be used or performed that achieve the same function(s) and result(s) described herein and which are included within the scope of the present disclosure.
Any process descriptions, steps, or blocks in the process or logic flow diagrams provided herein indicate one potential implementation and do not imply a fixed order. Alternate implementations, in which functions or steps may be deleted or performed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, are included within the scope of the preferred embodiments of the systems and methods described herein, as would be understood by those reasonably skilled in the art.
It should be understood that, unless otherwise explicitly or implicitly indicated herein, any of the features, characteristics, alternatives or modifications described regarding a particular embodiment herein may also be applied, used, or incorporated with any other embodiment described herein. Also, the drawings herein are not drawn to scale, unless indicated otherwise.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, but do not require, certain features, elements, or steps. Thus, such conditional language is not generally intended to imply that features, elements, or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, or steps are included or are to be performed in any particular embodiment.
Although the invention has been described and illustrated with respect to exemplary embodiments thereof, the foregoing and various other additions and omissions may be made therein and thereto without departing from the spirit and scope of the present disclosure.
Number | Name | Date | Kind |
---|---|---|---|
5354063 | Curchod | Oct 1994 | A |
5729471 | Jain | Mar 1998 | A |
5860862 | Junkin | Jan 1999 | A |
5993314 | Dannenberg | Nov 1999 | A |
6151571 | Petrushin | Nov 2000 | A |
6707487 | Aman | Mar 2004 | B1 |
6976031 | Toupal et al. | Dec 2005 | B1 |
7667596 | Ozdemir | Feb 2010 | B2 |
7865887 | Kaiser | Jan 2011 | B2 |
7899611 | Downs | Mar 2011 | B2 |
8073190 | Gloudemans | Dec 2011 | B2 |
8224766 | Skibiski | Jul 2012 | B2 |
8391825 | Arseneau | Mar 2013 | B2 |
8516374 | Fleischman et al. | Aug 2013 | B2 |
8600984 | Fleischman | Dec 2013 | B2 |
8702504 | Hughes et al. | Apr 2014 | B1 |
8775431 | Jason | Jul 2014 | B2 |
8943534 | Reisner | Jan 2015 | B2 |
9077744 | Beutel | Jul 2015 | B2 |
9094615 | Aman | Jul 2015 | B2 |
9160692 | Socolof | Oct 2015 | B2 |
9266017 | Parker | Feb 2016 | B1 |
9363441 | Crookham | Jun 2016 | B2 |
9378240 | Jason | Jun 2016 | B2 |
9390501 | Marty | Jul 2016 | B2 |
9671940 | Malik | Jun 2017 | B1 |
9674435 | Monari | Jun 2017 | B1 |
9778830 | Dubin | Oct 2017 | B1 |
9912424 | Sheppard | Mar 2018 | B2 |
9965683 | Verdejo | May 2018 | B2 |
10133818 | Fleischman | Nov 2018 | B2 |
10555023 | McCarthy | Feb 2020 | B1 |
10574601 | Shioya | Feb 2020 | B2 |
20010048484 | Tamir | Dec 2001 | A1 |
20020194002 | Petrushin | Dec 2002 | A1 |
20050286774 | Porikli | Dec 2005 | A1 |
20060106743 | Horvitz | May 2006 | A1 |
20060277481 | Forstall | Dec 2006 | A1 |
20070118909 | Hertzog | May 2007 | A1 |
20070300157 | Clausi | Dec 2007 | A1 |
20080244453 | Cafer | Oct 2008 | A1 |
20090067719 | Sridhar et al. | Mar 2009 | A1 |
20090091583 | McCoy | Apr 2009 | A1 |
20090144122 | Ginsberg | Jun 2009 | A1 |
20090164904 | Horowitz | Jun 2009 | A1 |
20090264190 | Davis | Oct 2009 | A1 |
20090271821 | Zalewski | Oct 2009 | A1 |
20090287694 | McGowan | Nov 2009 | A1 |
20100030350 | House | Feb 2010 | A1 |
20110013087 | House | Jan 2011 | A1 |
20110154200 | Davis | Jun 2011 | A1 |
20110225519 | Goldman | Sep 2011 | A1 |
20110244954 | Goldman | Oct 2011 | A1 |
20120123854 | Anderson et al. | May 2012 | A1 |
20120151043 | Venkataraman | Jun 2012 | A1 |
20120166955 | Bender | Jun 2012 | A1 |
20120215903 | Fleischman | Aug 2012 | A1 |
20120291059 | Roberts | Nov 2012 | A1 |
20130086501 | Chow | Apr 2013 | A1 |
20130095909 | O'Dea | Apr 2013 | A1 |
20130226758 | Reitan | Aug 2013 | A1 |
20130238658 | Burris | Sep 2013 | A1 |
20130268620 | Osminer | Oct 2013 | A1 |
20130271602 | Bentley | Oct 2013 | A1 |
20140052785 | Sirpal | Feb 2014 | A1 |
20140081636 | Erhart | Mar 2014 | A1 |
20140129559 | Estes | May 2014 | A1 |
20140143043 | Wickramasuriya | May 2014 | A1 |
20140376876 | Bentley | Dec 2014 | A1 |
20150070506 | Chattopadhyay | Mar 2015 | A1 |
20150131845 | Forouhar | May 2015 | A1 |
20150148129 | Austerlade | May 2015 | A1 |
20150237464 | Shumaker | Aug 2015 | A1 |
20150244969 | Fisher | Aug 2015 | A1 |
20150248817 | Steir | Sep 2015 | A1 |
20150248917 | Chang | Sep 2015 | A1 |
20150318945 | Abdelmonem | Nov 2015 | A1 |
20150348070 | Boettcher | Dec 2015 | A1 |
20150358680 | Feldstein | Dec 2015 | A1 |
20150375117 | Thompson | Dec 2015 | A1 |
20150382076 | Davisson | Dec 2015 | A1 |
20160034712 | Patton | Feb 2016 | A1 |
20160105733 | Packard | Apr 2016 | A1 |
20160110083 | Kranendonk | Apr 2016 | A1 |
20160132754 | Akhbardeh | May 2016 | A1 |
20160320951 | Ernst | Nov 2016 | A1 |
20160342685 | Basu | Nov 2016 | A1 |
20160359993 | Hendrickson | Dec 2016 | A1 |
20170064240 | Mangat | Mar 2017 | A1 |
20170255696 | Pulitzer | Sep 2017 | A1 |
20170300755 | Bose | Oct 2017 | A1 |
20180060439 | Kula | Mar 2018 | A1 |
20180082120 | Verdejo | Mar 2018 | A1 |
20180082122 | Verdejo | Mar 2018 | A1 |
20180189691 | Oehrle | Jul 2018 | A1 |
20180336575 | Hwang | Nov 2018 | A1 |
20190205652 | Ray | Jul 2019 | A1 |
20190299057 | Vollbrecht | Oct 2019 | A1 |
Number | Date | Country |
---|---|---|
2 487 636 | Aug 2012 | EP |
2016166764 | Oct 2016 | WO |
Entry |
---|
Wang et al., “Anomaly Detection Through Enhanced Sentiment Analysis on Social Media Data,” IEEE 2014 6th International Conference on Cloud Computing (Year: 2014). |
“Bottari: Location based Social Media Analysis with Semantic Web,” Celino, Irene (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.225.8097&rep1&type=pdf). |
“Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank,” Socher, Richard (http://nlp.stanford.edu/˜socherr/EMNLP2013_RNTN.pdf). |
“Detection of Abrupt Spectral Changes Using Support Vector Machines: An Application to Audio Signal Segmentation,” Davy, Manuel (http://www.mee.tcd.ie/˜moumir/articles/davy_1733.pdf). |
“Automatic Recognition of Offensive Team Formation in American Football Plays,” Atmosukarto, Indriyati (http://vision.ai.illinois.edu/publications/atmosukarto_cvsports13.pdf). |
“Video Object Tracking Using Adaptive Kalman Filter,” Weng, Shiuh-Ku (http://dl.acm.org/citation.cfm?id=1223208). |
Real-Time Tracking of Game Assets in American Football for Automated Camera Selection and Motion Capture, Gandhi, Heer (http://www.sciencedirect.com/science/article/pii/S1877705810003036). |
Sharghi, Aidean, et al. “Query-Focused Extractive Video Summarization,” Center for Research in Computer Vision at UCF, 2016, pp. 1-18. |
Kim, Gunhee, et al. “Joint Summarization of Large-Scale Collections of Web Images and Videos for Storyline Reconstruction,” 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 4225-4232. |
Gupta, Abhinav, et al. “Understanding Videos, Constructing Plots, Learning a Visually Grounded Storyline Model from Annotated Videos,” 2009, pp. 1-8. |
Raventos, A, et al. “Automatic Summarization of Soccer Highlights Using Audio-Visual Descriptors,” 2015, pp. 1-19. |
Tjondronegoro, Dian, et al. “Sports Video Summarization Using Highlights and Play-Breaks,” 2003, pp. 1-8. |
Chauhan, D, et al. “Automatic Summarization of Basketball Sport Video,”—2016 2nd International Conference on Next Generation Computing Technologies (NGCT-2016) Dehradun, India Oct. 14-16, 2016, pp. 670-673. |
Nichols, Jeffrey, et al. “Summarizing Sporting Events Using Twitter,” IBM Research, San Jose, CA, 2012, pp. 1-10. |
Fan, Y.C., et al. “A Framework for Extracting Sports Video Highlights Using Social Media,” Department of Computer Science, National Chung Hsing University, Taichung, Taiwan, Springer International Publishing Switzerland 2015, pp. 670-677. |
Agresti, Alan; Categorical Data Analysis, Second Edition—2002, pp. 436-454, Canada. |
SAS/STAT(R) 9.22 User's Guide—Overview: PHREG Procedure https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_phreg_sect001.htm. |
Andersen, P.K., et al. “Cox's Regression Model for Counting Processes: A Large Sample Study,” The Annals of Statistics, 1982, vol. 10, pp. 1100-1120. |
Powell, Teresa M., et al. “Your ‘Survival’ Guide to Using Time-Dependent Covariates,” SAS Global Forum 2012, pp. 168-177. |
Wackersreuther, Bianca, et al. “Frequent Subgraph Discovery in Dynamic Networks”, 2010, pp. 155-162, Washington DC. |
Cook, et al. “Substructure Discovery Using Minimum Description Length and Background Knowledge,” Journal of Artificial Intelligence Research (JAIR), 1 (1994) pp. 231-255. |
Kazantseva, Anna, et al. Linear Text Segmentation Using Affinity Propagation, Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pp. 284-293, Edinburgh, Scotland, UK, Jul. 27-31, 2011. Association for Computational Linguistics. |
Choi, Freddy, “Advances in Domain Independent Linear Text Segmentation,” Artificial Intelligence Group, Department of Computer Science, University of Manchester, England, 2000, pp. 26-33. |
Lamprier, S., et al, “SegGen: A Genetic Algorithm for Linear Text Segmentation,” France, 2017, pp. 1647-1652. |
Hearst, Marti, “TextTiling: Segmenting Text into Multi-Paragraph Subtopic Passages,” 1997, Association for Computational Linguistics, pp. 33-64. |
Beeferman, Doug, et al. “Statistical Models for Text Segmentation,” School of Computer Science, PA, 1999, pp. 1-36. |
Atmosukarto, Indriyati, “Automatic Recognition of Offensive Team Formation in American Football Plays” (Year: 2013). |
Number | Date | Country | |
---|---|---|---|
20180095652 A1 | Apr 2018 | US |