Embodiments of the present invention relate generally to natural language generation technologies and, more particularly, relate to a method, apparatus, and computer program product for motion detection.
Advances in computer processor speeds and other performance characteristics have occurred at a rapid pace in recent history, to the point that many human behaviors can now be thoroughly mimicked by machines. However, it has become apparent that current technology is insufficient for replication of certain activities. For example, the human brain tends to be quite adept at extracting data and drawing inferences and conclusions from complex sets of data. These inferences and conclusions may be used to describe the data in a way that allows another human to easily understand important events that occur in the data set. One such task that employs these reasoning faculties is the use of language to describe events in a concise, natural manner.
In an effort to enable computers and other machines to communicate data in a similar manner to human beings, example embodiments of the invention relate to Natural Language Generation (NLG) systems. These NLG systems function to parse data sets and to identify features within the data set for communication to users, customers, other computer systems, or the like by expressing the features in a linguistic format. In some examples, an NLG system is configured to transform raw input data that is expressed in a non-linguistic format into a format that can be expressed linguistically, such as through the use of natural language. For example, raw input data may take the form of a value of a stock market index over time and, as such, the raw input data may include data that is suggestive of a time, a duration, a value and/or the like. Therefore, an NLG system may be configured to input the raw input data and output text that linguistically describes the value of the stock market index. For example, “securities markets rose steadily through most of the morning, before sliding downhill late in the day.”
Data that is input into an NLG system may be provided in, for example, a recurrent formal structure. The recurrent formal structure may comprise a plurality of individual fields and defined relationships between the plurality of individual fields. For example, the input data may be contained in a spreadsheet or database, presented in a tabulated log message or other defined structure, encoded in a ‘knowledge representation’ such as the resource description framework (RDF) triples that make up the Semantic Web and/or the like. In some examples, the data may include numerical content, symbolic content or the like. Symbolic content may include, but is not limited to, alphanumeric and other non-numeric character sequences in any character encoding, used to represent arbitrary elements of information. In some examples, the output of the NLG system is text in a natural language (e.g. English, Japanese or Swahili), but may also be in the form of synthesized speech.
In some examples, an NLG system may be configured to linguistically express a certain type of data. For example, the NLG system may be configured to describe sports statistics, financial data, weather data, or the like using terminology and linguistic expressions appropriate for the data set. Different terminology, phraseology, idioms, and the like may be used to describe different types of phenomena, and different data domains may require different analysis techniques for efficient generation of linguistic output. For example, an analysis operation for a set of sports data to generate a game recap may require different data analysis techniques than analysis of weather data to generate a weather forecast.
In some examples, input data may not be provided in a format that is readily usable for generation of natural language. In many cases, the NLG system may not be aware of how to extract relevant data from input sources that a human user can readily process. For example, it may be more straightforward for an NLG system to create a baseball game recap from a set of box score data than from a video replay of the game. In order to allow the NLG system to create the natural language recap, the data must be presented in a format that allows the NLG system to identify important relationships and relevant details among the data. One use case that presents such a challenge is a set of data related to object position over time. When presented with a set of raw image data describing the location of objects, current NLG systems are unable to detect relevant features of the location data that might be obvious to a human viewer.
Some example embodiments of a computer system may relate to detection of motion among a given set of data. Example embodiments may provide for identification of attributes of interest spatially located within a sequence of spatial data frames. The attributes of interest may be clustered and examined across frames of the spatial data to detect motion vectors. The system may then derive information about these clustered attributes of interest and their motion over time and identify moving and/or static objects, and the moving and/or static objects may be used to generate natural language messages describing the motion of the attributes of interest. For example, weather data may be provided as a set of precipitation data, where the data corresponds to a series of snapshots of precipitation recorded or predicted at a set of locations in a geographical region at a particular set of times. Example NLG systems may analyse the precipitation data to identify weather fronts or other features relevant to creating a weather report. Other example uses include description of oil spills, cellular growth (e.g., tumor progression), atmospheric conditions (e.g., the size of a hole in the ozone layer), or any other implementation where it may be desirable to detect motion vectors in a sequence of spatial data frames.
Methods, apparatuses, and computer program products are described herein that are configured to detect motion. Embodiments of the invention may provide a method for detecting motion. The method may include determining the location of one or more clusters in a sequence of spatial data frames at two or more of a plurality of time values. The sequence of spatial data frames may define one or more locations of the one or more clusters at the plurality of time values. The method may further include determining that a first cluster of the one or more clusters in a first of the two or more time values corresponds to a second cluster of the one or more clusters in a second of the two or more time values. The method may also include determining at least one motion vector between the first cluster and the second cluster, and determining, using a processor, a moving object based on information comprising the at least one motion vector.
Embodiments may further include an apparatus configured to detect motion. The apparatus may include a memory coupled to at least one processor. The processor may be configured to determine the location of one or more clusters in a sequence of spatial data frames at two or more of a plurality of time values. The sequence of spatial data frames may define one or more locations of the one or more clusters at the plurality of time values. The processor may be further configured to determine that a first cluster of the one or more clusters in a first of the two or more time values corresponds to a second cluster of the one or more clusters in a second of the two or more time values, determine at least one motion vector between the first cluster and the second cluster, and determine a moving object based on information comprising the at least one motion vector.
Yet further embodiments may provide a computer readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform a method for detecting motion. The instructions may configure the processor to determine the location of one or more clusters in a sequence of spatial data frames at two or more of a plurality of time values. The sequence of spatial data frames may define one or more locations of the one or more clusters at the plurality of time values. The instructions may further configure the processor to determine that a first cluster of the one or more clusters in a first of the two or more time values corresponds to a second cluster of the one or more clusters in a second of the two or more time values, determine at least one motion vector between the first cluster and the second cluster, and determine a moving object based on information comprising the at least one motion vector.
Having thus described embodiments of the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
Example embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments are shown. Indeed, the embodiments may take many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. The terms “data,” “content,” “information,” and similar terms may be used interchangeably, according to some example embodiments, to refer to data capable of being transmitted, received, operated on, and/or stored. Moreover, the term “exemplary”, as may be used herein, is not provided to convey any qualitative assessment, but instead merely to convey an illustration of an example. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.
One of the primary factors that users generally consider in the analysis of spatio-temporal data is the concept of motion. The relative position of attributes of interest over time can help a user determine where the attributes of interest have been, where they are going, how fast they will get there, and other relevant data. From biology to chemistry to physics to meteorology, the movement of attributes of interest within a system can provide valuable information about the system and the attributes of interest within it.
In order to assist a user with interpretation of spatial data over time, a set of text describing the motion of attributes of interest within the system may be generated by a motion detection and analysis system as described herein. For example, precipitation data for a weather system may be analyzed to identify the movement of the precipitation system, and a weather forecast may be generated based on the movement information. Although example embodiments are described with respect to meteorological applications, the systems, apparatuses, methods, and computer program products described herein could be equally applicable to analysis and text generation for any sequence of spatial data frames.
A motion detection and analysis system provided according to embodiments of the invention may be operable to identify motion among a sequence of spatial data frames, to determine one or more motion vectors of attributes of interest within the spatial data, and to identify motion vector types and other information about the attributes of interest based on the determined motion vectors.
Spatial data may be mapped onto or otherwise represented in terms of a geometric grid, such that one or more attributes of interest may be identified or otherwise captured as they move across multiple frames of the geometric grid. In the present context, the term “frame” refers to a representation of the geometric grid at a particular time reference. Each frame may include a collection of locations that are uniquely addressable via an indexing system (e.g., a coordinate grid, with the location of attributes of interest represented by values associated with particular coordinates). The term “attribute of interest” may be used to describe data that indicates the presence or lack of a data item at a particular location. Frames may depict attributes of interest as binary values (e.g., either an attribute of interest is or is not present at a particular coordinate set), or as real-number values (e.g., attributes of interest are represented by real number values at particular coordinates). In the case of real-number value representations, the real number may refer to an amount or other variable associated with the attribute of interest. For example, precipitation data may be associated with real-number values describing the amount of precipitation at the particular location.
The term “cluster” may refer to a collection of locations of attributes of interest in a frame that form a larger identifiable entity. For example, attributes of interest at adjacent locations in a single frame may be combined to form a single cluster. Although clusters may be contiguous, this is not necessarily the case, as clusters may be determined based on a particular threshold proximity between attributes of interest (e.g., within 2 units), or based on common proximate movement vectors (e.g., two attributes of interest that are within a threshold number of units of each other across two or more frames). Although the term “cluster” as used in the instant examples is described as relating to a plurality of attributes of interest, the term may also be understood to relate to a single attribute of interest (e.g., a point location).
The term “motion vector” may refer to a set of values that describe a transition between two clusters in successive frames. The motion vector may include a direction of the transition, a speed of the transition, and a domain-dependent label describing the transition type (e.g., spreading or receding, for a precipitation system, metastasizing for a tumor, etc.). The direction of the transition may be specified by a cardinal direction (e.g., North, South, East, West), as an angle in degrees or radians, or by any other method of expressing direction. The speed may be expressed as an integer or real-number representing a magnitude of the vector.
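Purely by way of illustration, the sketch below shows one possible in-memory representation of the frame, cluster, and motion vector concepts defined above, assuming Python dataclasses and a coordinate-indexed grid; the class and field names are illustrative assumptions rather than elements of any described embodiment.

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

Coordinate = Tuple[int, int]

@dataclass
class Frame:
    """A snapshot of the geometric grid at a particular time reference."""
    time: float
    # Locations holding an attribute of interest, mapped to a real-number
    # value (use 1.0 throughout for purely binary frames).
    values: Dict[Coordinate, float] = field(default_factory=dict)

@dataclass
class Cluster:
    """A collection of attribute-of-interest locations forming one entity."""
    frame_time: float
    locations: Set[Coordinate] = field(default_factory=set)

    @property
    def size(self) -> int:
        return len(self.locations)

@dataclass
class MotionVector:
    """A transition between two corresponding clusters in successive frames."""
    direction_degrees: float  # e.g. 0 = east, 90 = north
    speed: float              # grid units per unit time
    transition_type: str      # domain-dependent label, e.g. "spreading"
```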
The term “moving object” in the context of objects detected in a sequence of spatial data frames may refer to a cluster that persists across a sequence of one or more frames. Although the objects are generally described as being detected in a “sequence” of spatial data frames, alternative embodiments may exist where analyzed frames are not presented in a linear sequence or as successive frames. For example, every other frame of a sequence of frames may be analyzed to reduce the amount of processing resources required to review the data, or the first and last frames may be analyzed with selected frames in between. As such, the term “sequence of spatial data frames” should be understood to also refer to these alternative embodiments where the spatial data frames are not successive. The moving object may be related to a sequence of one or more motion vectors derived from the spatial data frames. The cluster associated with the moving object may change location, shape, and/or size from one frame to the next. If a cluster persists across multiple frames with no change, then the cluster may be identified as a “static object” instead of as a moving object.
Moving objects and static objects may be characterized as “domain events” and “domain states”, respectively, by attaching domain-specific cluster motion types (e.g., “spreading”, “receding”, or “a band of precipitation” for a weather domain; “gridlock”, “stop-and-go”, or “congested” for a traffic domain; or the like) to the respective object. Cluster motion types may be assigned to moving and static objects based on the cluster motion types of the motion vectors associated with the objects. Where the constituent motion vectors do not correspond to a simple domain event, the domain event may be classified as a “hybrid movement.” In order for computed domain events and states to be expressed linguistically by an NLG system, the domain-specific cluster motion types may be analyzed to ensure that they fit into a language-friendly ontology of domain events and states. As such, the computed domain events and states may be identified as linguistically describable using words and phrases from the sublanguage used in a specific domain (e.g., a sublanguage for weather reports). This process is particularly relevant to the field of natural language generation, as other techniques for identifying the motion of objects are not concerned with linguistic expression of that motion. For example, a robot may be fitted with a computer vision module to drive a vehicle in real-world traffic. Such a robot might compute motion events and states that need not be describable in language, since its objective is to drive the vehicle rather than to describe the other moving vehicles in linguistic terms.
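As a hedged illustration of how domain-specific labels might be attached to computed motion types, the sketch below uses a simple per-domain lookup table; the generic motion type keys and the label_domain_event function are hypothetical, while the domain vocabularies are drawn from the examples above.

```python
# Hypothetical per-domain lookup of linguistically describable labels for
# generic cluster motion types; the vocabularies follow the examples above
# and are not an exhaustive ontology of domain events and states.
DOMAIN_MOTION_LABELS = {
    "weather": {
        "expanding": "spreading",
        "contracting": "receding",
        "uniform": "a band of precipitation",
    },
    "traffic": {
        "static": "gridlock",
        "intermittent": "stop-and-go",
        "contracting": "congested",
    },
}

def label_domain_event(domain: str, generic_motion_type: str) -> str:
    """Return a domain-specific label, falling back to a hybrid movement."""
    return DOMAIN_MOTION_LABELS.get(domain, {}).get(
        generic_motion_type, "hybrid movement")

print(label_domain_event("weather", "expanding"))  # spreading
```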
In some example embodiments a sequence of spatial data frames 102 includes spatial data received from an external source, such as from one or more sensors or remote computers. For example, the sequence of spatial data frames 102 may include information that describes the position of one or more attributes of interest over time (e.g., over a plurality of frames). The sequence of spatial data frames 102 includes data that may be used in the motion detection and analysis environment 100 to detect motion. Example sources of spatial data frames might include weather data (e.g., a weather radar display, numerical weather prediction data from atmospheric simulation models, etc.), traffic data (e.g., areas of automobile congestion on a street map), scientific data (e.g., growth of cells in a petri dish), medical data (e.g., analysis of an electrocardiograph wave form), or network data (e.g., a representation of bandwidth in a network).
The sequence of spatial data frames 102 may in some example embodiments be received via data communication with one or more sensors, monitoring systems, storage devices, computing nodes, and/or the like. In examples in which the sequence of spatial data frames 102 is received from a monitoring system or a sensor, the sequence of spatial data frames 102 may be provided in a format that includes and/or is representable by one or more images.
The sequence of spatial data frames 102 may include data such as, but not limited to, data that indicates variation across location (e.g. rainfall in different regions), or spatial-temporal data that combines both time series data and spatial data (e.g. rainfall across time in different geographical output areas). The data contained or otherwise made accessible by the sequence of spatial data frames 102 may be provided in the form of numeric values (e.g., coordinate values) for specific parameters across time and space, but the raw input data may also contain alphanumeric symbols, such as the RDF notation used in the semantic web, or as the content of database fields. The data may be received from a plurality of sources and, as such, data received from each source, sub source or data that is otherwise related may be grouped into or otherwise referred to as the sequence of spatial data frames 102. An example of the sequence of spatial data frames 102 is described further below with respect to
The data analysis system 104 may identify motion of one or more attributes of interest in the sequence of spatial data frames 102 to detect moving and static objects. The data analysis system 104 may perform a frame-by-frame analysis of a series of attributes of interest (e.g., spatial data provided over time as one or more frames). The terms “frame” and “frames” are used to describe sets of data that share a particular temporal characteristic; it should be readily understood that the term is not intended to apply only to frames as known in video or other moving spatial data formats (e.g., GIF, PNG, etc.), but rather to any set of data formatted according to a temporal characteristic such that motion characteristics may be identified over a period of time. The frames could also be sequenced based on any indexing scheme, not necessarily time. The data analysis system 104 may identify a series of clusters in the sequence of spatial data frames 102, and derive motion vectors. The data analysis system 104 may further give domain-specific labels to moving and static objects identified from a sequence of one or more motion vectors to assist a natural language generation system 106 with linguistically describing the moving and static objects. For example, moving objects that move according to a certain motion vector in a weather domain may be associated with different labels than moving objects that move according to the same motion vector in a medical domain, as terminology for describing weather moving objects may not be appropriate for describing medical objects (e.g., cells). Example methods for identifying motion vectors from the spatial data as may be employed by the data analysis system 104 are described further below with respect to
The moving and static objects detected by the data analysis system 104 may be used by a natural language generation system 106 to generate one or more messages describing the objects and motion vectors. Messages are language independent data structures that correspond to informational elements in a text and/or collect together underlying data, referred to as slots, arguments or features, which can be presented within a fragment of natural language such as a phrase or sentence. A message typically corresponds to a fact about the underlying data (for example, the existence of some observed event) that could be expressed via a simple sentence (although it may ultimately be realized by some other linguistic means). One such natural language generation system is described in Building Natural Language Generation Systems by Ehud Reiter and Robert Dale, Cambridge University Press (2000), which is incorporated by reference in its entirety herein.
In the example embodiment shown, computing system 200 comprises a computer memory (“memory”) 201, a display 202, one or more processors 203, input/output devices 204 (e.g., keyboard, mouse, CRT or LCD display, touch screen, gesture sensing device and/or the like), other computer-readable media 205, and communications interface 206. The processor 203 may, for example, be embodied as various means including one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA), or some combination thereof. Accordingly, although illustrated in
The data analysis system 104, and/or the natural language generation system 106 are shown residing in memory 201. The memory 201 may comprise, for example, transitory and/or non-transitory memory, such as volatile memory, non-volatile memory, or some combination thereof. Although illustrated in
In other embodiments, some portion of the contents, some or all of the components of the data analysis system 104, and/or the natural language generation system 106 may be stored on and/or transmitted over the other computer-readable media 205. The components of the data analysis system 104, and/or the natural language generation system 106 preferably execute on one or more processors 203 and are configured to generate natural language describing moving and/or static objects derived from spatial data, as described herein.
Alternatively or additionally, other code or programs 230 (e.g., an administrative interface, a Web server, and the like) and potentially other data repositories, such as data repository 240, also reside in the memory 201, and preferably execute on one or more processors 203. Of note, one or more of the components in
The data analysis system 104, and/or the natural language generation system 106 are further configured to provide functions such as those described with reference to
In an example embodiment, components/modules of the data analysis system 104, and/or the natural language generation system 106 are implemented using standard programming techniques. For example, the data analysis system 104 and/or the natural language generation system 106 may be implemented as a “native” executable running on the processor 203, along with one or more static or dynamic libraries. In other embodiments, the data analysis system 104, and/or the natural language generation system 106 may be implemented as instructions processed by a virtual machine that executes as one of the other programs 230. In general, a range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented (e.g., Java, C++, C#, Visual Basic.NET, Smalltalk, and the like), functional (e.g., ML, Lisp, Scheme, and the like), procedural (e.g., C, Pascal, Ada, Modula, and the like), scripting (e.g., Perl, Ruby, Python, JavaScript, VBScript, and the like), and declarative (e.g., SQL, Prolog, and the like).
The embodiments described above may also use synchronous or asynchronous client-server computing techniques. Also, the various components may be implemented using more monolithic programming techniques, for example, as an executable running on a single processor computer system, or alternatively decomposed using a variety of structuring techniques, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more processors. Some embodiments may execute concurrently and asynchronously, and communicate using message passing techniques. Equivalent synchronous embodiments are also supported. Also, other functions could be implemented and/or performed by each component/module, and in different orders, and by different components/modules, yet still achieve the described functions.
In addition, programming interfaces to the data stored as part of the data analysis system 104 and/or the natural language generation system 106 can be made available through mechanisms such as application programming interfaces (APIs) (e.g., C, C++, C#, and Java); libraries for accessing files, databases, or other data repositories; scripting languages such as XML; or Web servers, FTP servers, or other types of servers providing access to stored data. The sequence of spatial data frames 102 and the domain model 108 may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques. Alternatively or additionally, the sequence of spatial data frames 102 and the domain model 108 may be local data stores but may also be configured to access data from one or more remote sources.
Different configurations and locations of programs and data are contemplated for use with techniques described herein. A variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner including but not limited to TCP/IP sockets, RPC, RMI, HTTP, Web Services (XML-RPC, JAX-RPC, SOAP, and the like). Other variations are possible. Also, other functionality could be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions described herein.
Furthermore, in some embodiments, some or all of the components of the sequence of spatial data frames 102, the data analysis system 104, and/or the natural language generation system 106 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to one or more ASICs, standard integrated circuits, controllers executing appropriate instructions, and including microcontrollers and/or embedded controllers, FPGAs, complex programmable logic devices (“CPLDs”), and the like. Some or all of the system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium so as to enable or configure the computer-readable medium and/or one or more associated computing systems or devices to execute or otherwise use or provide the contents to perform at least some of the described techniques. Some or all of the system components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations.
At time T1, the depicted frame includes clusters defined by attributes of interest at locations (1,1) and (2,1). At time T2, the clusters (and thus their component attributes of interest) move to locations (2,2), (3,2), and (4,2). As can be readily discerned from the example, clusters may change in size and shape in addition to changing location. At time T3, the clusters move to locations (3,3) and (4,3), to locations (3,4) and (4,4) at time T4, and to location (4,5) at time T5, after which the clusters leave the frame. The instant set of frames depicts attributes of interest in a binary manner (e.g., a given coordinate location either has an attribute of interest or it does not) but, as described above, a frame may also have real-number values at various locations. For example, a given location might be associated with values ranging from 0 to 1, from 0 to 10, from 0 to 100, or any other range. In some embodiments, each location may be associated with a value, and a filter may be provided to detect attributes of interest and/or clusters (e.g., a Kalman filter). Additionally or alternatively, one or more thresholds may be used to detect attributes of interest, such that attributes of interest are indicated at locations where the value exceeds a particular threshold.
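The following sketch illustrates threshold-based detection of attributes of interest from a real-number-valued frame, assuming the frame is held as a NumPy array; the threshold value and function name are illustrative assumptions.

```python
import numpy as np

def detect_attributes_of_interest(frame_values, threshold=0.5):
    """Return the (row, col) locations whose value exceeds the threshold.

    frame_values is a 2-D array of real-number values (e.g. precipitation
    amounts); the threshold is an illustrative parameter only.
    """
    rows, cols = np.nonzero(frame_values > threshold)
    return set(zip(rows.tolist(), cols.tolist()))

# Example: a 5x5 frame with two locations above the threshold.
frame = np.zeros((5, 5))
frame[1, 1] = 0.9
frame[2, 1] = 0.7
print(detect_attributes_of_interest(frame))  # {(1, 1), (2, 1)}
```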
Although the instant example depicts a square grid covering the entire frame, embodiments may also relate to irregular frame sizes. For example, when preparing a weather report, a sequence of spatial data frames may correspond to a geographical area with a grid defined only for locations that have dry land, or various other limitations may be imposed upon the grid based on the desired output information. By constraining the grid in this manner, less area is analyzed, thus potentially improving the speed and efficiency of the motion detection operations.
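A minimal sketch of such a grid constraint is shown below, assuming a boolean mask that marks the locations to be analyzed (e.g., dry land); the mask construction and function name are illustrative assumptions.

```python
import numpy as np

def apply_region_mask(frame_values, region_mask):
    """Zero out locations outside the region of interest (e.g. sea areas
    when only dry land is relevant), so later detection and clustering
    steps ignore them."""
    return np.where(region_mask, frame_values, 0.0)

# Example: analyse only the left half of a 4x4 frame.
values = np.random.rand(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
masked = apply_region_mask(values, mask)
```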
The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowcharts' block(s). As such, the operations of
Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.
In some example embodiments, certain ones of the operations herein may be modified or further amplified as described below. Moreover, in some embodiments additional optional operations may also be included. It should be appreciated that each of the modifications, optional additions or amplifications described herein may be included with the operations herein either alone or in combination with any others among the features described herein.
At action 402, the method 400 receives a sequence of spatial data frames. As described above with respect to
At action 404, moving and/or static objects are identified within the spatial data. The moving and/or static objects may be identified by detecting attributes of interest in each frame, clustering the attributes of interest, and observing the movement of the clusters across the frames. Example methods for performing the process of detecting moving and/or static objects are described further below with respect to
At action 406, one or more of the identified moving and/or static objects may optionally be filtered out of the data. For example, static objects (e.g., objects with no associated movement vectors) may be identified as irrelevant to the motion detection process, or moving and/or static objects below a certain size or clusters below a certain density may be removed from consideration in the motion detection process. Criteria other than the motion vectors of the objects may also be employed for removing moving and/or static objects. For example, in the case of weather data, objects that are not near populated areas in a geographical weather map may be removed from consideration.
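By way of illustration, the sketch below filters detected objects using the example criteria above, assuming a minimal object record with size, density, and motion vector fields; the field names and thresholds are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    # Hypothetical minimal object record; the field names are illustrative.
    size: int                 # number of locations in the cluster
    density: float            # proportion of occupied locations in its extent
    motion_vectors: List[Tuple[float, float]] = field(default_factory=list)

def filter_objects(objects, min_size=3, min_density=0.0, keep_static=False):
    """Drop objects unlikely to be relevant to the motion description."""
    kept = []
    for obj in objects:
        if not keep_static and not obj.motion_vectors:
            continue  # static object with no associated movement vectors
        if obj.size < min_size or obj.density < min_density:
            continue  # too small or too sparse to describe
        kept.append(obj)
    return kept

# Example: the static object and the two-cell object are filtered out.
objects = [DetectedObject(10, 0.8, [(45.0, 1.4)]),
           DetectedObject(12, 0.9, []),
           DetectedObject(2, 0.5, [(90.0, 1.0)])]
print(len(filter_objects(objects)))  # 1
```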
At action 408, messages may be created for the remaining moving and/or static objects based on the detected movement vectors and cluster motion types for the clusters associated with the objects. These messages may include conceptual labels that are applied to the identified moving and/or static objects so that they may be appropriately described at a later time. The messages may be created using a domain model that maps identified movement vectors and cluster motion types to terms and messages related to the domain model. These messages may be used in the generation of natural language describing the moving and/or static objects, such as by a natural language generation system 106 as described with respect to
At action 502, clusters are identified for a given frame of the spatial data. Clusters may be identified by detecting attribute-of-interest locations that are contiguous or otherwise share relevant values. An example of a method 600 for detecting clusters is described below with respect to
At action 504, a determination is made as to whether all frames of the sequence of spatial data frames have been analyzed for detection of clusters. If all of the frames have been analyzed, the method proceeds to action 506. Otherwise, the method proceeds to action 505 and the next frame is selected for analysis. The method then returns to action 502 to identify clusters in the next unanalyzed frame.
At action 506, chains of corresponding clusters across the sequence of frames are identified. Cluster chains may be identified by comparing the location of clusters in sequential frames. For example, the data structure storing the location of each cluster in each frame may be compared with preceding and succeeding frames to identify similar clusters based on the size, shape, and location of the clusters. An example of a method 700 for identifying cluster chains is described further below with respect to
At action 508, motion vectors are determined for each pair of successive frames that include cluster chains. These motion vectors may be identified by comparing the location, size, and shape of clusters across the frames in which the cluster chain is present. Where the cluster shapes vary randomly (e.g., precipitation shapes change across frames randomly), cluster comparisons may be based on location and size. Although the frames being compared are described as successive, the frames are not necessarily adjacent. For example, one or more frames may be skipped to conserve processor resources. In other examples, only a portion of the available frames may be analyzed; for example, although frames at 60-second intervals may be available, the method may extract only frames at 30-minute intervals for analysis. An example of a method 800 for determining motion vectors for a cluster chain is described further below with respect to
At action 510, a cluster motion type is determined based on the motion vectors determined at action 508. The cluster motion type may refer to a domain-specific value that linguistically describes the motion of the associated object or cluster of objects. This cluster motion type may be used by other processes, such as a natural language generation system. Although the instant example is related to natural language generation, a cluster motion type associated with a moving object could also be used for any other purpose for which defining an object's motion might be useful. According to embodiments of the present invention, cluster motion types may be determined to be relevant for natural language generation. An example of a method 900 for determining a cluster motion type is described further below with respect to
At action 512, a determination is made as to whether additional cluster chains remain for analysis. If additional cluster chains remain, the method proceeds to action 514 to select the next cluster chain for analysis. If no cluster chains remain, the method 500 ends.
At action 602, a frame of spatial data is selected. As described above, the frame may include a set of attributes of interest at a particular point in time. As described with respect to
At action 604, areas with attributes of interest that share particular values are identified. These points may include contiguous locations (e.g., attributes of interest located at adjacent coordinate locations), or attributes of interest that are within a particular distance of each other. The particular values may be any value that is common to the attributes of interest. For example, in a weather map, attributes of interest with similar precipitation values may be identified.
At action 606, certain clusters may be removed or otherwise ignored from the detection algorithm. For example, clusters that are smaller than a particular size may be removed from consideration.
At action 608, clusters may be merged with one another. For example, clusters may be merged within certain ranges of proximity. The method ends after clusters have been merged, with output of the location of clusters for the analyzed frame. The method 600 may be repeated until clusters are identified for all frames of the sequence of spatial data frames.
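One possible realization of actions 602 through 608 is sketched below, assuming attributes of interest are given as a set of grid coordinates; the proximity rule, minimum size, and merge distance are illustrative parameters rather than prescribed values.

```python
from collections import deque

def find_clusters(points, min_size=2, merge_distance=0):
    """Group nearby attribute-of-interest locations into clusters.

    `points` is a set of (row, col) coordinates. Locations are grouped when
    they are within merge_distance + 1 grid units on both axes, so
    merge_distance=0 clusters only adjacent (including diagonal) locations.
    Clusters smaller than min_size are discarded, mirroring action 606.
    """
    remaining = set(points)
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster = {seed}
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            near = {p for p in remaining
                    if abs(p[0] - r) <= merge_distance + 1
                    and abs(p[1] - c) <= merge_distance + 1}
            remaining -= near
            cluster |= near
            queue.extend(near)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters

# Example: the two adjacent points form one cluster; the isolated point is dropped.
print(find_clusters({(1, 1), (2, 1), (8, 8)}))
```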
At action 702, a frame sequence is selected. The frame sequence may include two or more successive frames in a sequence of spatial data frames. For example, the frame sequence may be two sequential frames in the sequence of spatial data frames. In some embodiments, the method 700 is applied to pairs of frames iteratively to identify possible chains across each pair of frames. Alternatively, frames may be analyzed in a recursive manner, with individual frames being analyzed to detect links to object chains identified in other frame pairs. As yet another alternative, frames may be analyzed in groups, and groups analyzed in sequence (e.g., analyzing frame 1 and frame 2 in a sequence together, analyzing frame 3 and frame 4 in a sequence together, and then analyzing the results of frame 1 and 2 with the results of frame 3 and 4 in a sequence).
At action 704, possible chains of clusters are identified across the selected frame sequence. Cluster chains may be identified by comparing the locations of clusters in each frame, and attempting to determine which clusters in the first frame correspond to clusters in the second frame. These similar clusters may be determined by a cluster similarity score for cluster pairs.
At action 706, clusters may be scored for similarity by assigning values for size, shape, and location. For example, clusters that have a similar size, shape, and location may receive a higher score than clusters that deviate in one or more of these characteristics. In some embodiments, clusters may change in shape and size from frame to frame; precipitation clusters, for example, often do so. In some embodiments, cluster similarity may be a function of certain domain values. For example, weather system clusters may exhibit different behavior from traffic patterns, as weather clusters (e.g., weather systems) operate according to different movement constraints than traffic objects (e.g., cars). As such, different functions and algorithms may be employed for computation of a similarity score as determined by a domain model, such as the domain model 108 described with respect to
At action 708, chains may be established by selecting clusters to maximize the similarity score. Various methods of selecting these clusters for linking may be employed. For example, clusters may be selected such that individual pairs of clusters have the highest similarity score, or clusters may be selected for pairing such that a frame similarity score for the entire sequence of frames is maximized. Identified chains may be stored for use in other operations as part of the motion detection process. As described above, the method 700 may be repeated across the frames of data until all frames have been processed by the chain detection method. After initial processing of two or more frames, the method 700 may also be performed on output data, in order to identify chains that span across previously analyzed frames into newly selected frames (e.g. recursive processing). Chains may be stored in a data structure indicating the cluster locations and the frames in which the cluster is present.
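As an illustration of actions 704 through 708, the sketch below scores cluster pairs by size and centroid distance and then greedily links the best-scoring pairs; the scoring function and the min_score cutoff are illustrative assumptions, and a domain model could substitute its own similarity function as described above.

```python
import math

def cluster_centroid(locations):
    """Unweighted centroid of a set of (row, col) locations."""
    n = len(locations)
    return (sum(r for r, _ in locations) / n,
            sum(c for _, c in locations) / n)

def similarity_score(cluster_a, cluster_b):
    """Higher scores indicate more similar clusters. The equal weighting of
    size difference and centroid distance is illustrative; a domain model
    could substitute its own function."""
    size_diff = abs(len(cluster_a) - len(cluster_b))
    distance = math.dist(cluster_centroid(cluster_a),
                         cluster_centroid(cluster_b))
    return 1.0 / (1.0 + size_diff + distance)

def link_clusters(frame1_clusters, frame2_clusters, min_score=0.1):
    """Greedily pair clusters across two frames by descending similarity."""
    pairs = sorted(((similarity_score(a, b), i, j)
                    for i, a in enumerate(frame1_clusters)
                    for j, b in enumerate(frame2_clusters)), reverse=True)
    used_i, used_j, links = set(), set(), []
    for score, i, j in pairs:
        if score < min_score or i in used_i or j in used_j:
            continue
        used_i.add(i)
        used_j.add(j)
        links.append((i, j, score))
    return links
```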
At action 802, two or more clusters that form a chain across two or more frames are selected. The method 800 may be employed on each set of cluster chains as identified by a method such as the method 700 described above. The method 800 may analyze individual frame pairs of a chain (e.g., pairs of frames from the start frame to the finishing frame), or from the start of the chain to the end (e.g., the first frame and the last frame), or for any other selection of frames of a given cluster chain.
At action 804, the content of the cluster in the first analyzed frame corresponding with the selected cluster chain (e.g., the locations of the cluster in the coordinate system, or the values associated with the locations in the case of clusters defined by real-number values) is compared with the content of the cluster in the second analyzed frame corresponding to the selected cluster chain. The content of the cluster may also include the relative locations of points in the cluster, the density of the cluster, or any other features which may be specified by a domain model established for analysis of the cluster. For example, cluster contents may be compared by creating a list of points at which the cluster exists that were not present in a previous frame, creating a list of points at which the cluster does not exist that were present in the previous frame, and using the two lists to determine a transition type. For example, if the number of points at which the cluster has appeared is less than 20% of the number of points at which the cluster has disappeared and the number of points at which the cluster has disappeared is greater than zero, then the moving object may be identified as “Clearing.” If the number of points at which the cluster has disappeared is less than 20% of the number of points at which the cluster has appeared and the number of points at which the cluster has appeared is greater than zero, the moving object may be identified as “Spreading.”
At action 806, based on the comparison, a transition type is determined between the two clusters. This transition type may be a generic transition, or it may be a domain-specific transition determined using a domain model. For example, a weather system might be assigned a transition type of “spreading” or “clearing” based on the content of the cluster.
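The sketch below illustrates the appeared/disappeared comparison of action 804 and the example 20% rule for assigning a transition type at action 806; the ratio_threshold parameter and the fallback "Other" label are illustrative assumptions.

```python
def transition_type(previous_locations, current_locations,
                    ratio_threshold=0.2):
    """Classify a cluster transition from its appeared/disappeared points.

    Implements the example rule above: if the newly appeared points number
    fewer than 20% of the disappeared points, the object is "Clearing"; if
    the disappeared points number fewer than 20% of the appeared points, it
    is "Spreading"; other cases are left unclassified here.
    """
    appeared = current_locations - previous_locations
    disappeared = previous_locations - current_locations
    if disappeared and len(appeared) < ratio_threshold * len(disappeared):
        return "Clearing"
    if appeared and len(disappeared) < ratio_threshold * len(appeared):
        return "Spreading"
    return "Other"

# Example: the cluster gains four points and loses none, so it is Spreading.
print(transition_type({(1, 1)}, {(1, 1), (2, 1), (2, 2), (3, 1), (3, 2)}))
```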
At action 808, a point location, such as a centroid, may be computed for each cluster in the selected chain. The centroid may be used to establish a central location for all of the points in the cluster. In some embodiments, the centroid may also take into account individual data values of points in the cluster, such as to identify a center of gravity based not only on the locations of the points, but also on their weights. Alternatively, values other than the centroid may be employed, such as a leading edge, a circumcenter, a barycenter, or the like.
At action 810, the speed and direction of the cluster from the first frame to the second frame are computed using the point locations determined at action 808. The speed of the cluster may be determined by comparing a time value associated with the first frame to a time value associated with the second frame, and computing the distance between the two points, such that the rate equals the distance divided by the change in time. The direction may be determined from the change in coordinates of the points. The method 800 may be repeated across each set of frames across the chain, and the determined motion vectors may be associated with the object.
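A minimal sketch of actions 808 and 810 follows, computing a weighted centroid for a cluster and then the speed and direction between two centroids; the degree convention (0 = east, 90 = north) and the assumption that grid rows increase downwards are illustrative choices.

```python
import math

def weighted_centroid(values):
    """Centre of gravity of a cluster given {(row, col): value} entries."""
    total = sum(values.values())
    row = sum(r * v for (r, _), v in values.items()) / total
    col = sum(c * v for (_, c), v in values.items()) / total
    return (row, col)

def speed_and_direction(centroid_t1, centroid_t2, time_t1, time_t2):
    """Speed (distance over elapsed time) and direction in degrees, with
    0 pointing east and 90 pointing north."""
    dr = centroid_t2[0] - centroid_t1[0]
    dc = centroid_t2[1] - centroid_t1[1]
    speed = math.hypot(dr, dc) / (time_t2 - time_t1)
    # Grid rows are assumed to increase downwards, so negate dr for north-up.
    direction = math.degrees(math.atan2(-dr, dc)) % 360
    return speed, direction

# Example: a cluster moving one unit north and one unit east per time step.
print(speed_and_direction((3.0, 2.0), (2.0, 3.0), 1.0, 2.0))  # (~1.41, 45.0)
```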
At action 902, a sequence of motion vectors is selected. These motion vectors may represent the output of a method for identifying the motion of a cluster over time, such as the method 800. The method 900 may be performed repeatedly on each cluster chain to identify one or more cluster motion types for each cluster of attributes of interest identified in the spatial data. In some embodiments, the method 900 may be performed multiple times on a single cluster, if the cluster is associated with multiple sequences of motion vectors.
At action 904, motion vectors that fall within an acceptable range of variation are identified. In some embodiments, motion vectors may be combined or smoothed. For example, two identical motion vectors across three frames may be combined into a single motion vector associated with all three frames, or multiple similar but not identical motion vectors may be averaged to reduce computational complexity. A given cluster chain may be associated with multiple motion vectors if the object changes direction over time.
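The following sketch illustrates one way motion vectors might be combined or smoothed as described at action 904, assuming each vector is reduced to a (direction, speed) pair; the tolerance values are illustrative, and angle wrap-around at 0/360 degrees is ignored for brevity.

```python
def smooth_motion_vectors(vectors, direction_tolerance=15.0,
                          speed_tolerance=0.5):
    """Collapse runs of near-identical motion vectors into averaged vectors.

    `vectors` is a list of (direction_degrees, speed) pairs in frame order.
    Angle wrap-around at 0/360 degrees is ignored for brevity.
    """
    if not vectors:
        return []
    runs = [[vectors[0][0], vectors[0][1], 1]]  # [direction, speed, count]
    for direction, speed in vectors[1:]:
        d, s, n = runs[-1]
        if (abs(direction - d) <= direction_tolerance
                and abs(speed - s) <= speed_tolerance):
            # Extend the current run with a running average.
            runs[-1] = [(d * n + direction) / (n + 1),
                        (s * n + speed) / (n + 1), n + 1]
        else:
            runs.append([direction, speed, 1])
    return [(d, s) for d, s, _ in runs]

# Example: three nearly identical vectors collapse into one averaged vector.
print(smooth_motion_vectors([(44.0, 1.4), (46.0, 1.5), (45.0, 1.4), (180.0, 2.0)]))
```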
At action 906, the identified sequences of motion vectors are used to determine one or more cluster motion types for the associated clusters. The cluster motion types may be derived from the transition types of individual motion vectors associated with the clusters. The cluster motion types may be specified by a domain model, such that motion in a particular direction (e.g., north, south, east, west), or motion relative to a particular point (e.g., away from the centroid of the cluster), is associated with a particular type value. Other characteristics that have been determined for the particular cluster may also be considered when determining a cluster motion type. Some clusters may be associated with multiple sequences of motion vectors that indicate a change in direction over time, or multiple sequences of motion vectors may be used to indicate changes to the size, shape, and/or density of the cluster. For example, in the context of precipitation, sequences of motion vectors that lead away from the centroid of the cluster along with a reduction in the density of the cluster may be classified as “dispersing.” In some embodiments, individual transition types of the motion vectors may also be used to derive a cluster motion type. For example, if a majority of the motion vectors associated with the cluster chain are described as “Spreading”, then the cluster motion type may be identified as “Spreading” as well. As another example, a precipitation object may move from a first region to a second region and increase in area, resulting in a classification of “spreading” over the second region. As yet another example, a precipitation object might move in a uniform direction without changing in size or shape, and be classified as a “moving band” of precipitation. The relationship between individual motion vector types and the cluster motion type as a whole may be many-to-one or one-to-one. For example, many motion vectors, in aggregate, may be used to define a cluster motion type. Alternatively or additionally, a single repeating motion vector type may define the cluster motion type. Objects may also be classified as “static” (e.g., no associated motion vector) or “hybrid” (e.g., a more erratic or chaotic sequence of motion, with motion vectors in different directions), based on the analyzed motion vectors.
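As an illustration of action 906, the sketch below derives a single cluster motion type from the per-vector transition types using the majority rule described above, with "static" and "hybrid" fallbacks; the direction_changes parameter and the majority cutoff are illustrative assumptions.

```python
from collections import Counter

def cluster_motion_type(transition_types, direction_changes=0):
    """Derive a single cluster motion type from per-vector transition types.

    No vectors yields a static object; frequent direction changes yield a
    hybrid movement; otherwise the majority transition type (e.g.
    "Spreading") is inherited by the cluster, as described above.
    """
    if not transition_types:
        return "static"
    if direction_changes > 1:
        return "hybrid"
    label, count = Counter(transition_types).most_common(1)[0]
    return label if count > len(transition_types) / 2 else "hybrid"

print(cluster_motion_type(["Spreading", "Spreading", "Other"]))  # Spreading
```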
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
The present application is a continuation of U.S. application Ser. No. 14/650,763, titled “METHOD AND APPARATUS FOR MOTION DETECTION,” filed Jun. 9, 2015, which is a National Phase Entry of International Application No. PCT/IB2012/057773, titled “METHOD AND APPARATUS FOR MOTION DETECTION,” filed Dec. 27, 2012, the contents of which are hereby incorporated by reference herein in their entirety.
20140100901 | Haine et al. | Apr 2014 | A1 |
20140100923 | Strezo et al. | Apr 2014 | A1 |
20140143720 | Dimarco et al. | May 2014 | A1 |
20140149107 | Schilder | May 2014 | A1 |
20140164303 | Bagchi et al. | Jun 2014 | A1 |
20140164304 | Bagchi et al. | Jun 2014 | A1 |
20140188477 | Zhang | Jul 2014 | A1 |
20140201126 | Zadeh et al. | Jul 2014 | A1 |
20140278358 | Byron et al. | Sep 2014 | A1 |
20140281935 | Byron et al. | Sep 2014 | A1 |
20140281951 | Megiddo et al. | Sep 2014 | A1 |
20140297268 | Govrin et al. | Oct 2014 | A1 |
20140300684 | Fagadar-Cosma et al. | Oct 2014 | A1 |
20140316768 | Khandekar | Oct 2014 | A1 |
20140375466 | Reiter | Dec 2014 | A1 |
20140379322 | Koutrika et al. | Dec 2014 | A1 |
20140379378 | Cohen-Solal et al. | Dec 2014 | A1 |
20150006437 | Byron et al. | Jan 2015 | A1 |
20150032443 | Karov et al. | Jan 2015 | A1 |
20150081299 | Jasinschi et al. | Mar 2015 | A1 |
20150081307 | Cederstrom et al. | Mar 2015 | A1 |
20150081321 | Jain | Mar 2015 | A1 |
20150095015 | Lani et al. | Apr 2015 | A1 |
20150106307 | Antebi et al. | Apr 2015 | A1 |
20150142418 | Byron et al. | May 2015 | A1 |
20150142421 | Buurman et al. | May 2015 | A1 |
20150154359 | Harris et al. | Jun 2015 | A1 |
20150163358 | Klemm et al. | Jun 2015 | A1 |
20150169522 | Logan et al. | Jun 2015 | A1 |
20150169548 | Reiter | Jun 2015 | A1 |
20150169659 | Lee et al. | Jun 2015 | A1 |
20150169720 | Byron et al. | Jun 2015 | A1 |
20150169737 | Byron et al. | Jun 2015 | A1 |
20150179082 | Byron et al. | Jun 2015 | A1 |
20150227508 | Howald et al. | Aug 2015 | A1 |
20150242384 | Reiter | Aug 2015 | A1 |
20150261744 | Suenbuel et al. | Sep 2015 | A1 |
20150261836 | Madhani et al. | Sep 2015 | A1 |
20150279348 | Cao et al. | Oct 2015 | A1 |
20150310013 | Allen et al. | Oct 2015 | A1 |
20150310112 | Allen et al. | Oct 2015 | A1 |
20150310861 | Waltermann et al. | Oct 2015 | A1 |
20150324343 | Carter et al. | Nov 2015 | A1 |
20150324351 | Sripada et al. | Nov 2015 | A1 |
20150324374 | Sripada et al. | Nov 2015 | A1 |
20150324413 | Gubin et al. | Nov 2015 | A1 |
20150325000 | Sripada | Nov 2015 | A1 |
20150326622 | Carter et al. | Nov 2015 | A1 |
20150331845 | Guggilla et al. | Nov 2015 | A1 |
20150331846 | Guggilla et al. | Nov 2015 | A1 |
20150332670 | Akbacak et al. | Nov 2015 | A1 |
20150356127 | Pierre et al. | Dec 2015 | A1 |
20150363363 | Bohra et al. | Dec 2015 | A1 |
20150363364 | Sripada | Dec 2015 | A1 |
20150363382 | Bohra et al. | Dec 2015 | A1 |
20150363390 | Mungi et al. | Dec 2015 | A1 |
20150363391 | Mungi et al. | Dec 2015 | A1 |
20150371651 | Aharoni et al. | Dec 2015 | A1 |
20160019200 | Allen | Jan 2016 | A1 |
20160027125 | Bryce | Jan 2016 | A1 |
20160055150 | Bird et al. | Feb 2016 | A1 |
20160132489 | Reiter | May 2016 | A1 |
20160140090 | Dale et al. | May 2016 | A1 |
20160328385 | Reiter | Nov 2016 | A1 |
20170018107 | Reiter | Jan 2017 | A1 |
20180349361 | Sripada | Dec 2018 | A1 |
20190035232 | Reiter | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
2011247830 | Dec 2011 | AU |
2011253627 | Dec 2011 | AU |
2013201755 | Sep 2013 | AU |
2013338351 | May 2015 | AU |
2577721 | Mar 2006 | CA |
2826116 | Mar 2006 | CA |
103999081 | Aug 2014 | CN |
104182059 | Dec 2014 | CN |
104881320 | Sep 2015 | CN |
1336955 | May 2006 | EP |
2707809 | Mar 2014 | EP |
2750759 | Jul 2014 | EP |
2849103 | Mar 2015 | EP |
2518192 | Mar 2015 | GB |
61-221873 | Oct 1986 | JP |
2004-21791 | Jan 2004 | JP |
2014165766 | Sep 2014 | JP |
WO 2000074394 | Dec 2000 | WO |
WO 2002031628 | Apr 2002 | WO |
WO 2002073449 | Sep 2002 | WO |
WO 2002073531 | Sep 2002 | WO |
WO 2002031628 | Oct 2002 | WO |
WO 2006010044 | Jan 2006 | WO |
WO 2007041221 | Apr 2007 | WO |
WO 2009014465 | Jan 2009 | WO |
WO 2010049925 | May 2010 | WO |
WO 2010051404 | May 2010 | WO |
WO 2012071571 | May 2012 | WO |
WO 2013009613 | Jan 2013 | WO |
WO 2013042115 | Mar 2013 | WO |
WO 2013042116 | Mar 2013 | WO |
WO 2013177280 | Nov 2013 | WO |
WO 2014035402 | Mar 2014 | WO |
WO 2014098560 | Jun 2014 | WO |
WO 2014102568 | Jul 2014 | WO |
WO 2014140977 | Sep 2014 | WO |
WO 2014187076 | Nov 2014 | WO |
WO 2015028844 | Mar 2015 | WO |
WO 2015113301 | Aug 2015 | WO |
WO 2015148278 | Oct 2015 | WO |
WO 2015159133 | Oct 2015 | WO |
WO 2015164253 | Oct 2015 | WO |
WO 2015175338 | Nov 2015 | WO |
WO 2016004266 | Jan 2016 | WO |
Entry |
---|
Notice of Allowance for U.S. Appl. No. 15/188,423 dated Dec. 28, 2018. |
Notice of Allowance for U.S. Appl. No. 16/009,006 dated Jul. 31, 2019. |
Office Action for U.S. Appl. No. 15/074,425 dated Nov. 27, 2018. |
Office Action for U.S. Appl. No. 15/188,423 dated Oct. 30, 2018. |
Office Action for U.S. Appl. No. 16/009,006 dated Dec. 3, 2018. |
Alawneh et al., “Pattern Recognition Techniques Applied to the Abstraction of Traces of Inter-Process Communication,” 2011 15th European Conference on Software Maintenance and Reengineering (CSMR), IEEE, pp. 211-220, (2011). |
Andre et al., “From Visual Data to Multimedia Presentations,” Grounding Representations: Integration of Sensory Information in Natural Language Processing, Artificial Intelligence and Neural Networks, IEE Colloquium on, pp. 1-3, (1995). |
Andre et al., “Natural Language Access to Visual Data: Dealing with Space and Movement,” Report 63, German Research Center for Artificial Intelligence (DFKI) SFB 314, Project VITRA, pp. 1-21, (1989). |
Barzilay et al.; “Aggregation via Set Partitioning for Natural Language Generation”, Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL; pp. 359-366; (2006). |
Bhoedjang et al., “Optimizing Distributed Data Structures Using Application-Specific Network Interface Software,” Proceedings of the 1998 International Conference on Parallel Processing, IEEE, pp. 485-492, (1998). |
Cappozzo et al., “Surface-Marker Cluster Design Criteria for 3-D Bone Movement Reconstruction,” IEEE Transactions on Biomedical Engineering, 44(12):1165-1174, (1997). |
Dalianis et al.; “Aggregation in Natural Language Generation;” Trends in Natural Language Generation, an Artificial Intelligence Perspective; pp. 88-105; (1993). |
Dragon et al., “Multi-Scale Clustering of Frame-to-Frame Correspondences for Motion Segmentation,” Computer Vision ECCV, Springer Berlin Heidelberg, pp. 445-458, (2012). |
Gatt et al., “From Data to Text in the Neonatal Intensive Care Unit: Using NLG Technology for Decision Support and Information Management,” AI Communications, pp. 153-186, (2009). |
Gorelov et al., “Search Optimization in Semistructured Databases Using Hierarchy of Document Schemas,” Programming and Computer Software, 31(6):321-331, (2005). |
Herzog et al., “Combining Alternatives in the Multimedia Presentation of Decision Support Information for Real-Time Control,” IFIP, 15 pages, (1998). |
Kojima et al., “Generating Natural Language Description of Human Behavior from Video Images,” IEEE, pp. 728-731, (2000). |
Kottke et al., “Motion Estimation via Cluster Matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(11):1128-1132, (1994). |
Kukich, “Knowledge-Based Report Generation: A Knowledge-Engineering Approach to Natural Language Report Generation,” Dissertation to the Interdisciplinary Department of Information Science, University of Pittsburgh, 260 pages, (1983). |
Leonov et al., “Construction of an Optimal Relational Schema for Storing XML Documents in an RDBMS Without Using DTD/XML Schema,” Programming and Computer Software, 30(6):323-336, (2004). |
Perry et al., “Automatic Realignment of Data Structures to Improve MPI Performance,” 2010 Ninth International Conference on Networks (ICN), IEEE, pp. 42-47, (2010). |
Quinlan, “Induction of Decision Trees,” Machine Learning, Kluwer Academic Publishers, 1(1):81-106, (1986). |
Radev et al., “Generating Natural Language Summaries from Multiple On-Line Sources,” Computational Linguistics, 24(3):469-500, (1998). |
Reiter et al., “Building Applied Natural Language Generation Systems,” Natural Language Engineering 1 (1), 31 pages, (1995). |
Reiter et al.; “Studies in Natural Language Processing—Building Natural Language Generation Systems,” Cambridge University Press, (2000). |
Reiter, “An Architecture for Data-to-Text Systems,” Proceedings of ENLG-2007, pp. 97-104, (2007). |
Shaw, “Clause Aggregation Using Linguistic Knowledge;” Proceedings of IWNLG, pp. 138-147, (1998). Retrieved from <http://acl.ldc.upenn.edu/W/W98/W98-1415.pdf>. |
Spillner et al., “Algorithms for Dispersed Processing,” 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC), IEEE, pp. 914-921, (2014). |
Voelz et al., “Rocco: A RoboCup Soccer Commentator System,” German Research Center for Artificial Intelligence DFKI GmbH, 11 pages, (1999). |
Yu et al., “Choosing the Content of Textual Summaries of Large Time-Series Data Sets,” Natural Language Engineering, 13:1-28, (2007). |
International Preliminary Report on Patentability for Application No. PCT/IB2012/056513 dated May 19, 2015. |
International Preliminary Report on Patentability for Application No. PCT/IB2012/056514 dated May 19, 2015. |
International Preliminary Report on Patentability for Application No. PCT/IB2012/057773 dated Jun. 30, 2015. |
International Preliminary Report on Patentability for Application No. PCT/IB2012/057774 dated Jun. 30, 2015. |
International Preliminary Report on Patentability for Application No. PCT/IB2013/050375 dated Jul. 21, 2015. |
International Preliminary Report on Patentability for Application No. PCT/IB2013/058131 dated May 5, 2015. |
International Preliminary Report on Patentability for Application No. PCT/IB2014/060846 dated Oct. 18, 2016. |
International Preliminary Report on Patentability for Application No. PCT/US2012/053115 dated Mar. 3, 2015. |
International Preliminary Report on Patentability for Application No. PCT/US2012/053127 dated Mar. 3, 2015. |
International Preliminary Report on Patentability for Application No. PCT/US2012/053128 dated Mar. 3, 2015. |
International Preliminary Report on Patentability for Application No. PCT/US2012/053156 dated Mar. 3, 2015. |
International Preliminary Report on Patentability for Application No. PCT/US2012/053183 dated Mar. 3, 2015. |
International Preliminary Report on Patentability for Application No. PCT/US2012/061051 dated Mar. 3, 2015. |
International Preliminary Report on Patentability for Application No. PCT/US2012/063343 dated May 5, 2015. |
International Search Report and Written Opinion for Application No. PCT/IB2012/056513 dated Jun. 26, 2013. |
International Search Report and Written Opinion for Application No. PCT/IB2012/056514 dated Jun. 26, 2013. |
International Search Report and Written Opinion for Application No. PCT/IB2012/057773 dated Jul. 1, 2013. |
International Search Report and Written Opinion for Application No. PCT/IB2012/057774 dated Sep. 20, 2013. |
International Search Report and Written Opinion for Application No. PCT/IB2013/050375 dated May 7, 2013. |
International Search Report and Written Opinion for Application No. PCT/IB2013/058131 dated Jul. 3, 2014. |
International Search Report and Written Opinion for Application No. PCT/IB2014/060846 dated Feb. 4, 2015. |
International Search Report and Written Opinion for Application No. PCT/US2012/053115 dated Jul. 24, 2013. |
International Search Report and Written Opinion for Application No. PCT/US2012/053127 dated Jul. 24, 2013. |
International Search Report and Written Opinion for Application No. PCT/US2012/053128 dated Jun. 27, 2013. |
International Search Report and Written Opinion for Application No. PCT/US2012/053156 dated Sep. 26, 2013. |
International Search Report and Written Opinion for Application No. PCT/US2012/053183 dated Jun. 4, 2013. |
International Search Report and Written Opinion for Application No. PCT/US2012/061051 dated Jul. 24, 2013. |
International Search Report and Written Opinion for Application No. PCT/US2012/063343 dated Jan. 15, 2014. |
Notice of Allowance for U.S. Appl. No. 14/023,023 dated Apr. 11, 2014. |
Notice of Allowance for U.S. Appl. No. 14/023,056 dated Apr. 29, 2014. |
Notice of Allowance for U.S. Appl. No. 14/311,806 dated Dec. 28, 2016. |
Notice of Allowance for U.S. Appl. No. 14/311,998 dated Dec. 22, 2015. |
Notice of Allowance for U.S. Appl. No. 14/311,998 dated Jan. 21, 2016. |
Notice of Allowance for U.S. Appl. No. 14/634,035 dated Mar. 30, 2016. |
Notice of Allowance for U.S. Appl. No. 14/650,763 dated Jan. 30, 2018. |
Notice of Allowance for U.S. Appl. No. 14/650,763 dated Jun. 26, 2018. |
Notice of Allowance for U.S. Appl. No. 14/650,777 dated Jan. 30, 2018. |
Notice of Allowance for U.S. Appl. No. 15/421,921 dated Mar. 14, 2018. |
Office Action for U.S. Appl. No. 14/023,023 dated Mar. 4, 2014. |
Office Action for U.S. Appl. No. 14/023,056 dated Nov. 21, 2013. |
Office Action for U.S. Appl. No. 14/311,806 dated Jun. 10, 2016. |
Office Action for U.S. Appl. No. 14/311,998 dated Feb. 20, 2015. |
Office Action for U.S. Appl. No. 14/311,998 dated Oct. 7, 2015. |
Office Action for U.S. Appl. No. 14/634,035 dated Aug. 28, 2015. |
Office Action for U.S. Appl. No. 14/634,035 dated Dec. 10, 2015. |
Office Action for U.S. Appl. No. 14/634,035 dated Mar. 30, 2016. |
Office Action for U.S. Appl. No. 14/650,763 dated Dec. 16, 2016. |
Office Action for U.S. Appl. No. 14/650,763 dated Sep. 8, 2017. |
Office Action for U.S. Appl. No. 14/650,777 dated Mar. 6, 2017. |
Office Action for U.S. Appl. No. 14/650,777 dated Sep. 7, 2016. |
Office Action for U.S. Appl. No. 15/074,425 dated Feb. 26, 2018. |
Office Action for U.S. Appl. No. 15/074,425 dated May 10, 2017. |
Office Action for U.S. Appl. No. 15/188,423 dated Jul. 20, 2018. |
Office Action for U.S. Appl. No. 15/188,423 dated Oct. 23, 2017. |
Office Action for U.S. Appl. No. 15/421,921 dated Sep. 27, 2017. |
Statement in accordance with the Notice from the European Patent Office dated Oct. 1, 2007 concerning business methods (OJ EPO Nov. 2007, 592-593), XP002456414, 1 page. |
U.S. Appl. No. 12/779,636; entitled “System and Method for Using Data to Automatically Generate a Narrative Story” filed May 13, 2010. |
U.S. Appl. No. 13/186,308; entitled “Method and Apparatus for Triggering the Automatic Generation of Narratives” filed Jul. 19, 2011. |
U.S. Appl. No. 13/186,329; entitled “Method and Apparatus for Triggering the Automatic Generation of Narratives” filed Jul. 19, 2011. |
U.S. Appl. No. 13/186,337; entitled “Method and Apparatus for Triggering the Automatic Generation of Narratives” filed Jul. 19, 2011. |
U.S. Appl. No. 13/186,346; entitled “Method and Apparatus for Triggering the Automatic Generation of Narratives” filed Jul. 19, 2011. |
U.S. Appl. No. 13/464,635; entitled “Use of Tools and Abstraction in a Configurable and Portable System for Generating Narratives” filed May 4, 2012. |
U.S. Appl. No. 13/464,675; entitled “Configurable and Portable System for Generating Narratives” filed May 4, 2012. |
U.S. Appl. No. 13/464,716; entitled “Configurable and Portable System for Generating Narratives” filed May 4, 2012. |
U.S. Appl. No. 14/023,023; entitled “Method and Apparatus for Alert Validation;” filed Sep. 10, 2013. |
U.S. Appl. No. 14/023,056; entitled “Method and Apparatus for Situational Analysis Text Generation;” filed Sep. 10, 2013. |
U.S. Appl. No. 14/027,684; entitled “Method, Apparatus, and Computer Program Product for User-Directed Reporting;” filed Sep. 16, 2013. |
U.S. Appl. No. 14/027,775; entitled “Method and Apparatus for Interactive Reports;” filed Sep. 16, 2013. |
U.S. Appl. No. 14/311,806; entitled Method and Apparatus for Alert Validation; In re: Reiter, filed Jun. 23, 2014. |
U.S. Appl. No. 14/311,998, entitled Method and Apparatus for Situational Analysis Text Generation; In re: Reiter; filed Jun. 23, 2014. |
U.S. Appl. No. 14/634,035, entitled Method and Apparatus for Annotating a Graphical Output; In re: Reiter; filed Feb. 27, 2015. |
U.S. Appl. No. 14/650,763; entitled “Method and Apparatus for Motion Detection;” filed Jun. 9, 2015. |
U.S. Appl. No. 14/914,461, filed Feb. 25, 2016; In re: Reiter et al., entitled Text Generation From Correlated Alerts. |
U.S. Appl. No. 15/022,420, filed Mar. 16, 2016; In re: Mahamood, entitled Method and Apparatus for Document Planning. |
U.S. Appl. No. 15/074,425, filed Mar. 18, 2016; In re: Reiter, entitled Method and Apparatus for Situational Analysis Text Generation. |
U.S. Appl. No. 15/093,337, filed Apr. 7, 2016; In re: Reiter, entitled Method and Apparatus for Referring Expression Generation. |
U.S. Appl. No. 15/093,365, filed Apr. 7, 2016; In re: Logan et al., entitled Method and Apparatus for Updating a Previously Generated Text. |
U.S. Appl. No. 15/188,423, filed Jun. 21, 2016; In re: Reiter, entitled Method and Apparatus for Annotating a Graphical Output. |
U.S. Appl. No. 15/421,921, filed Feb. 1, 2017; In re: Reiter, entitled Method and Apparatus for Alert Validation. |
Number | Date | Country
---|---|---
20190197697 A1 | Jun 2019 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | 14650763 | | US
Child | 16142445 | | US