The technical field relates to video identification methods, and to identification systems and storage media thereof, and more particularly to a method for identifying extension messages of a video, and to an identification system and storage media thereof.
Advertising is a form of marketing communication that employs an openly sponsored message to promote or sell a product or service. On-line advertising on a computer network (e.g., the Internet) has become highly competitive in recent years. Specifically, in addition to advertising on a website through text and/or pictures, an advertiser or advertising agency (referred to as the advertiser hereinafter) may also use videos to promote or sell a product or service.
Prior to publishing an advertisement, an advertiser may hire staff to study the content of a video and determine whether it is appropriate to insert an advertisement, making sure the advertisement is related to the content of the video in order to increase its effectiveness among general consumers. However, visually identifying the content of a video by a human takes many labor hours and is therefore cost prohibitive. Consequently, automatic identification technologies that can automatically identify features (e.g., constituent colors, persons, objects, etc.) of a video have been developed and are commercially available. These automatic identification technologies are able to determine the category of advertisement to be inserted in the video based on the identified features.
However, conventional automatic identification technologies can only identify significant features of a video, which are then matched with the significant features of an advertisement, and fail to identify abstract messages such as emotions, states, conditions, and extended messages of the video (e.g., identifying both “Trump” and “President of the US” when a video shows “Trump”). Therefore, an advertiser using conventional automatic identification technologies may lose many advertising opportunities because valuable messages within a video cannot be identified.
Further, conventional automatic identification technologies cannot correct erroneously identified significant features of a video, which may lead to publishing a product or service in the wrong context and negatively affect the audience's perception of the product or service advertised in the shot. As a result, a great amount of money spent on the advertisement is wasted while its effectiveness remains poor.
For example, conventional automatic identification technologies may determine that a luggage advertisement is appropriate to be inserted in a video because a piece of luggage is identified within the video, so an advertisement promoting the sale of luggage is shown in the shot. However, if the scene of the video is actually a kitchen, the irrelevance between the video and the advertising material fails to create a connection between the audience and the product, and the purpose of promoting the sale of luggage among general consumers is not achieved.
Thus, there is a need for improvements in how a computer or artificial intelligence can interpret images and videos in a manner that is like, or closer to, that of a human.
One of the objectives of the invention is to provide a method for identifying extension messages of a video by identifying significant features of the video, so that the extension messages can be inferred from the identified significant features to describe the content of the video. Thus, the content of the video can be interpreted, as a human would, based on the significant features and the extension messages.
One embodiment of the present invention is directed to a method for identifying extension messages of a video, comprising the steps of: (a) providing a video; (b) converting content of the video into a content list including a plurality of descriptor lists, each of the descriptor lists recording a time interval and a raw descriptor for describing a feature presented in the video at the time interval; (c) providing a descriptor semantic model (DSM) including a plurality of node descriptors and a plurality of directed edges, wherein each node descriptor corresponds to a predetermined feature, and the directed edges define relation strengths among the node descriptors; (d) importing one of the descriptor lists of the content list into the DSM, wherein the node descriptors include the raw descriptors; (e) inferring an inferred descriptor from the node descriptors following step (d), the inferred descriptor having a relation with the raw descriptors; and (f) adding the inferred descriptor to the imported descriptor list to update the descriptor list.
Another embodiment of the present invention is directed to a system for identifying extension messages of a video, comprising: a video conversion module for selecting a video and converting content of the selected video into a content list, wherein the content list includes a plurality of descriptor lists, each descriptor list recording a time interval and a raw descriptor for describing a feature of the video presented in the time interval; a descriptor relation learning module for training and creating a descriptor semantic model (DSM) by using a plurality of datasets, wherein the DSM includes a plurality of node descriptors corresponding to a plurality of predetermined features respectively, and a plurality of directed edges, each defining a relational strength between two of the node descriptors; and an inference module for importing one of the descriptor lists of the content list into the DSM, wherein the node descriptors include the raw descriptors, the inference module obtains an inferred descriptor related to the raw descriptors from the node descriptors, and adds the inferred descriptor to the imported descriptor list for updating the descriptor list.
Another embodiment of the present invention is directed to a non-transitory storage media for storing a program which, when executed by a processing unit, performs operations comprising: providing a video; converting content of the video into a content list including a plurality of descriptor lists, each descriptor list recording a time interval and a raw descriptor for describing a feature presented in the video at the time interval; providing a descriptor semantic model (DSM) including a plurality of node descriptors and a plurality of directed edges, wherein each node descriptor corresponds to a predetermined feature, and the directed edges define relation strengths among the node descriptors; importing one of the descriptor lists of the content list into the DSM, wherein the node descriptors include the raw descriptors; inferring an inferred descriptor from the node descriptors, the inferred descriptor having a relation with the raw descriptors; refining the raw descriptors based on the directed edges corresponding to the raw descriptors in the DSM for converting the raw descriptors into a plurality of refined descriptors, wherein a number of the refined descriptors is equal to or less than a number of the raw descriptors; and updating the descriptor list based on the inferred descriptor and the refined descriptors.
The invention has the following advantages and benefits in comparison with the conventional art: content of the video shown in the shot can be interpreted correctly based on the significant features and the extension messages identified by computer vision. A shot of a video having the highest relational index with an advertisement can be selected for insertion, thereby increasing the effectiveness of the advertisement, and there is no restriction on the format of the advertisement. Moreover, one or more significant features detected by the identification system of the invention can be refined to correct erroneous features, thereby greatly increasing detection accuracy.
The above and other objectives, features and advantages of the invention will become apparent from the following detailed description taken with the accompanying drawings.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings.
A system for identifying extension messages of a video is disclosed by the invention (called the identification system hereinafter). The identification system can analyze an imported video to identify significant features of the video, and can further identify abstract and extension messages of the video. Consequently, when a shot of a video is analyzed for inserting an advertisement, both the significant features and the extension messages are provided for the analysis, so that accuracy is greatly improved. To help persons of ordinary skill in the art understand the invention, a descriptor (or a tag) will be used hereinafter to represent a significant feature, although the invention is not limited to such representation.
Referring to
In the identification system 1 of the embodiment, a descriptor semantic model (DSM) 120 is trained in the offline section and is regularly updated (as discussed later). A user is not allowed to communicate with the offline section. Through the online section, the identification system 1 receives or selects a video 2 and an advertisement (not shown) to be analyzed for the user. Thus, the identification system 1 can determine which shot of the video 2 is appropriate for the advertisement by matching the significant and abstract features of the advertisement with the significant and abstract features of the shot, or determine whether an advertisement is appropriate for a specific shot of the video 2. In other embodiments, the identification system 1 may not be divided into online and offline sections; all modules are then in the online section, so the DSM 120 is updated online.
It is noted that in one embodiment as shown in
The data collection module 11 is adapted to access the Internet for collecting public data from a plurality of datasets 3. Specifically, a dataset 3 may be an encyclopedia, a textbook, information from Wikipedia, network news, or network commentaries such as opinions on YouTube or Facebook, which are updated over time. Data stored in the dataset 3 can be, but is not limited to, text, pictures, video and audio.
The data collection module 11 collects updated data from the datasets 3 in real time, or collects updated data from the datasets 3 by using a crawler to access the Internet regularly. Further, data from the datasets 3 is inputted to the descriptor relation learning module 12, which in turn analyzes the data to train and output the DSM 120.
The descriptor relation learning module 12 uses data inputted from the datasets 3 to train the DSM 120. In one embodiment, the descriptor relation learning module 12 analyzes the inputted datasets 3 by using deep learning or artificial intelligence (AI) so as to obtain the relations among features (such as the above texts, pictures and videos) and descriptors. Further, the descriptor relation learning module 12 obtains the core meaning of the descriptors, and uses Hidden Markov Model algorithms to train the DSM 120. The purpose of obtaining the core meaning is to make the descriptors more consistent and to reduce data redundancy. The descriptor relation learning module 12 may simplify terms and replace a plural form of a term with its singular form. For example, the words “happy” and “happiness” are both considered as “happy”, and the words “book” and “books” are both considered as “book” in the semantic space.
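A minimal sketch of this core-meaning normalization is given below, assuming a simple lookup table and a naive plural rule; the function and table names are illustrative assumptions and not the patented implementation.

```python
# Illustrative sketch only: map descriptor variants such as "books"/"book"
# or "happiness"/"happy" to a single core form in the semantic space.
# The lookup table and the crude plural rule are assumptions.
CORE_FORMS = {"happiness": "happy", "joyful": "happy"}  # hypothetical lookup

def normalize_descriptor(term: str) -> str:
    term = term.lower().strip()
    term = CORE_FORMS.get(term, term)
    # crude singularization: strip a trailing "s" from regular plurals
    if term.endswith("s") and not term.endswith("ss"):
        term = term[:-1]
    return term

assert normalize_descriptor("Books") == "book"
assert normalize_descriptor("happiness") == "happy"
```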
Specifically, the DSM 120 is comprised of a plurality of node descriptors such as node descriptors 61, and a plurality of directed edges such as directed edges 62 of
In one embodiment, the number of the node descriptors 61 is in the thousands, tens of thousands, or more. The node descriptors 61 comprise various features including, but not limited to, persons (e.g., Donald Trump and Michael Jordan), objects (e.g., cars, tables, cats, and dogs), actions (e.g., eating, drinking, lying, and running), emotions (e.g., happy and angry), mental states (e.g., easy, tense, and opposing), and titles (e.g., president and manager). Each of the directed edges 62 defines a relational strength between two node descriptors, i.e., two features, such as a relational strength between Donald Trump and president, or a relational strength between eating and happy.
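One way such a model could be held in memory is as a weighted directed graph, as in the sketch below; the class and field names are assumptions made for illustration rather than a description of the DSM 120 itself.

```python
# Illustrative DSM-like structure: node descriptors and directed edges
# carrying a relational strength in [0, 1].  Names are assumptions.
from collections import defaultdict

class DescriptorSemanticModel:
    def __init__(self):
        # edges[a][b] = relational strength of descriptor a toward descriptor b
        self.edges = defaultdict(dict)

    def add_edge(self, src: str, dst: str, strength: float) -> None:
        self.edges[src][dst] = strength

    def strength(self, src: str, dst: str) -> float:
        return self.edges[src].get(dst, 0.0)

dsm = DescriptorSemanticModel()
dsm.add_edge("donald trump", "president", 0.95)
dsm.add_edge("michael jordan", "president", 0.05)
dsm.add_edge("eating", "happy", 0.6)
```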
The video conversion module 13 functions to receive one of a plurality of videos 2 or select one of the videos 2 for analysis. Content of the received or selected video 2 is converted into a content list by the video conversion module 13. In the invention, the identification system 1 determines whether an advertisement is related to the content of the video 2 based on the content list.
Referring to
Specifically, the time intervals 51 do not overlap with one another. As shown in
The significant features that the video conversion module 13 can identify include, but are not limited to, faces, images, text, audio, actions, objects and scenes.
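A minimal sketch of the records implied above follows: each descriptor list pairs a time interval with the raw descriptors identified in that shot, and the content list collects one entry per shot. The field names and example values are assumptions for illustration.

```python
# Sketch of the descriptor list / content list records; field names assumed.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DescriptorList:
    interval: Tuple[float, float]          # (start_second, end_second)
    raw_descriptors: List[str] = field(default_factory=list)

@dataclass
class ContentList:
    video_id: str
    shots: List[DescriptorList] = field(default_factory=list)

content = ContentList("video-2", [
    DescriptorList((0.0, 3.0), ["kitchen", "pan", "bottle"]),
    DescriptorList((3.0, 6.0), ["dog", "cat", "pet"]),
])
```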
However, the video conversion module 13 cannot identify extension messages of the video 2. For example, the video conversion module 13 cannot obtain a descriptor representing “US President” after identifying a descriptor representing “Trump”. In a further example, the video conversion module 13 cannot obtain a descriptor representing “dangerous” or “urgent” after identifying a descriptor representing “a man pointing a gun toward another person”.
As described above, for further identifying the extension messages of the video 2, the identification system 1 of the invention provides the inference module 14 and the DSM 120 that is trained either online or offline.
After the video conversion module 13 finishes analysis, the inference module 14 imports one or all descriptor lists 5 of the content list 4 of the video 2 into the DSM 120. For the sake of simplicity, an example of the inference module 14 importing one descriptor list 5 of the content list 4 into the DSM 120 will be discussed in detail.
In the embodiment, the number of the node descriptors 61 in the DSM 120 is enormous. The node descriptors 61 include all raw descriptors 52 recorded in the imported descriptor lists 5. In the invention, the inference module 14 obtains one or more inferred descriptors related to the raw descriptors 52 from the node descriptors 61 in the DSM 120. The inferred descriptors are added to the descriptor lists 5 for updating the descriptor lists 5. Thus, the identification system 1 can increase the number of descriptors in the descriptor lists 5 and the descriptors are used for reference and analysis purposes.
Specifically, the inference module 14 obtains one or more of the node descriptors 61 related to the raw descriptors 52 based on the directed edges 62 related to the raw descriptors 52, and takes the obtained node descriptors 61 as the inferred descriptors. Generally speaking, the features to which the inferred descriptors correspond are extension messages (e.g., descriptors representing “US President”, “dangerous”, and “urgent” as described above) that cannot be identified by the video conversion module 13.
In one embodiment, the inference module 14 calculates an index (i.e., a relational index) of each raw descriptor 52 with respect to other node descriptors 61 based on the directed edges 62 related to the raw descriptors 52. One or more node descriptors having the highest relational index is (are) taken as the inferred descriptor(s). In the invention, the relational index represents the probability that a node descriptor B exists when a raw descriptor A is present. Hence, the higher the relational index, the higher the probability that the inference module 14 takes the node descriptor B as an inferred descriptor. In the embodiment, if the number of node descriptors 61 related to each raw descriptor 52 is large (e.g., 5,000), the inference module 14 takes a plurality of (e.g., five or ten) node descriptors 61 having the highest relational indexes as the inferred descriptors.
In another embodiment, the inference module 14 calculates an index (i.e., a relational index) of each raw descriptor 52 with respect to other node descriptors 61 based on the directed edges 62 related to the raw descriptors 52. One or more node descriptors having a relational index higher than a threshold value is (are) taken as the inferred descriptor(s). For example, if the number of node descriptors 61 related to each raw descriptor 52 is large and the threshold value is 0.8, the inference module 14 takes the node descriptors 61 having a relational index higher than 0.8 as the inferred descriptors.
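Both selection rules (top-k and threshold) could be sketched as follows, continuing the illustrative DSM structure above. Aggregating by taking the maximum edge strength over the raw descriptors is an assumption of this sketch; the description only requires that the index reflect the directed-edge strengths.

```python
# Hedged sketch of the inference step: score node descriptors reachable from
# the raw descriptors and keep either the top-k or those above a threshold.
def infer_descriptors(dsm, raw_descriptors, top_k=None, threshold=None):
    scores = {}
    for raw in raw_descriptors:
        for node, strength in dsm.edges.get(raw, {}).items():
            if node in raw_descriptors:
                continue
            scores[node] = max(scores.get(node, 0.0), strength)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if threshold is not None:
        return [node for node, s in ranked if s > threshold]
    return [node for node, _ in ranked[:top_k or 5]]

# e.g. infer_descriptors(dsm, ["donald trump"], threshold=0.8) -> ["president"]
```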
After using the inference module 14 and the DSM 120, the identification system 1 of the invention can further identify the extension messages of the video 2 and generate the inferred descriptors, which are in turn added to the descriptor lists 5 to increase the number of descriptors in the descriptor lists 5. For example, descriptors representing “dog”, “cat” and “pet” are identified in a scene/shot. The identification system 1 infers descriptors representing “pet food”, “lovely”, “fur” and “vacuum cleaner” by means of the inference module 14 and the DSM 120. In such a manner, when a video publisher needs to find additional kinds of advertisements that are related to the content of the video, or an advertiser needs to find which video is suitable for inserting an advertisement with specific content, a more accurate analysis can be obtained and the number of suitable advertisements that can be inserted is increased.
It is noted that after the inference module 14 updates the descriptor lists 5, the identification system 1 of the invention may import the updated descriptor lists 5 into the DSM 120 again so as to find further inferred descriptors and update the descriptor lists 5, until the content of the descriptor lists 5 no longer changes. Thus, it is possible to ensure the relationship between the obtained inferred descriptors and the raw descriptors 52.
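This repeated re-import could take the shape of a simple fixed-point loop, sketched below using the illustrative `infer_descriptors` helper above; the iteration cap is an added safety assumption, not part of the description.

```python
# Sketch of re-importing until the descriptor list no longer changes.
def expand_until_stable(dsm, descriptor_list, max_rounds=10):
    descriptors = set(descriptor_list)
    for _ in range(max_rounds):
        inferred = set(infer_descriptors(dsm, descriptors, threshold=0.8))
        if inferred <= descriptors:          # nothing new; content unchanged
            break
        descriptors |= inferred
    return sorted(descriptors)
```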
In the invention, the video conversion module 13 identifies the video 2 to obtain significant features of the video 2 and generates the raw descriptors 52 by using conventional identification technologies such as a Convolutional Neural Network (CNN). However, the accuracy of the conventional identification technologies is not 100%. Thus, the raw descriptors 52 may erroneously represent wrong features; for example, a refrigerator may be erroneously identified as a piece of luggage. To solve this problem by correcting or eliminating the erroneous descriptors, the identification system 1 of the invention further comprises the refinement module 15.
The refinement module 15 imports a descriptor list 5 of the content list 4 into the DSM 120 and refines the plurality of raw descriptors 52 based on the directed edges 62 in the DSM 120 that correspond to the raw descriptors 52. As a result, some of the raw descriptors 52 are converted into refined descriptors, and in turn the refinement module 15 updates the raw descriptors 52 of the descriptor list 5 based on the refined descriptors. In one embodiment, the number of the refined descriptors is equal to or less than the number of the raw descriptors 52 in the descriptor list 5 before the updating.
In the invention, the refinement module 15 determines the relations among the raw descriptors 52 of the descriptor lists 5 based on the DSM 120. If the relations between a specific raw descriptor 52 and the other raw descriptors 52 are too low, the refinement module 15 determines that the specific raw descriptor 52 is erroneous. The erroneous descriptor is either corrected into a refined descriptor or eliminated.
For example, if a descriptor list 5 includes raw descriptors 52 representing “luggage”, “kitchen”, “pan”, “bottle” and “water tank”, the refinement module 15 determines that the relations between the raw descriptor 52 representing “luggage” and the other raw descriptors 52 are too low based on the directed edges 62 corresponding to the raw descriptor 52 representing “luggage”, and in turn the raw descriptor 52 representing “luggage” is eliminated. Further, the refinement module 15 may determine that the relations between a node descriptor 61 representing “refrigerator” (e.g., an inferred descriptor) and the other raw descriptors 52 are very high, determine that the video conversion module 13 erroneously identified “refrigerator” as “luggage”, and subsequently correct the descriptor into one representing “refrigerator”. The preceding example only describes a preferred embodiment of the invention, and the invention is not limited to the example set forth above.
In one embodiment, the refinement module 15 calculates an index (i.e., a relational index) among the raw descriptors 52 based on the directed edges 62 related to the raw descriptors 52, and one or more raw descriptors 52 having the highest relational index is (are) taken as the refined descriptor(s). In another embodiment, one or more raw descriptors 52 having a relational index higher than a threshold value is (are) taken as the refined descriptor(s) by the refinement module 15. In either case, the descriptor lists 5 are updated accordingly.
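A hedged sketch of the threshold-based refinement follows, reusing the illustrative DSM structure above. Scoring each raw descriptor by the average of its mutual edge strengths with the others is an assumption of this sketch; any relational index over the directed edges would serve.

```python
# Illustrative refinement: drop (or replace) raw descriptors that relate
# weakly to the rest of the descriptor list.
def refine_descriptors(dsm, raw_descriptors, threshold=0.2):
    refined = []
    for d in raw_descriptors:
        others = [o for o in raw_descriptors if o != d]
        if not others:
            refined.append(d)
            continue
        index = sum(max(dsm.strength(d, o), dsm.strength(o, d))
                    for o in others) / len(others)
        if index >= threshold:
            refined.append(d)      # keep well-related descriptors
        # else: "d" is treated as erroneous and eliminated (or corrected by
        # substituting a node descriptor with a higher relational index)
    return refined

# e.g. with suitable edges, refine_descriptors(dsm, ["luggage", "kitchen",
# "pan", "bottle"]) would drop "luggage" when its kitchen-related edges are weak.
```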
It is noted that the inference module 14 and the refinement module 15 may be enabled simultaneously to generate the inferred descriptor(s) and the refined descriptor(s). In other words, the inferred descriptor(s) and the refined descriptor(s) may be generated simultaneously rather than in a fixed sequence.
Specifically, the inference module 14 may fetch a plurality of inferred descriptors related to the raw descriptors 52 from a plurality of node descriptors 61 in the DSM 120 prior to the generation of refined descriptors. Alternatively, the inference module 14 may fetch a plurality of inferred descriptors related to the refined raw descriptors from a plurality of node descriptors 61 after the generation of refined descriptors. Further, the refinement module 15 may refine a plurality of raw descriptors 52 based on the raw descriptors 52 and related directed edges 62 prior to the generation of inferred descriptors. Alternatively, the refinement module 15 may refine a plurality of raw descriptors 52 and inferred descriptors based on the raw descriptors 52, the inferred descriptors and related directed edges 62 after the generation of inferred descriptors.
As described above, the video conversion module 13 converts the video 2 into the content list 4 by using conventional identification technologies such as a CNN. In the invention, after the refinement module 15 generates the refined descriptors (i.e., amends or eliminates the raw descriptors 52), the identification system 1 makes use of the refined descriptors to train the CNN online or offline. In such a manner, the longer the identification system 1 is used, the more accurate the identification of the video conversion module 13 becomes, and fewer raw descriptors are erroneously identified.
Referring to
As illustrated in
Next, the descriptor relation learning module 12 provides the trained DSM 120 (step S14), in which the DSM 120 comprises a plurality of node descriptors 61 and a plurality of directed edges 62. As described above, each of the node descriptors 61 corresponds to a predetermined feature and the directed edges 62 correspond to relational strengths among the node descriptors 61.
Next, the identification system 1 imports at least one descriptor list 5 of the content list 4 into the DSM 120 (step S16) in which the plurality of node descriptors 61 include all raw descriptors 52 recorded in the at least one descriptor list 5 imported by the identification system 1.
Next, the inference module 14 fetches a plurality of inferred descriptors related to the raw descriptors 52 from the node descriptors 61 (step S18), and updates the imported descriptor lists 5 based on the inferred descriptors.
Further, if the identification system 1 has the refinement module 15, the refinement module 15 refines the plurality of raw descriptors 52 based on the directed edges 62 in the DSM 120 related to the plurality of raw descriptors 52 so as to convert the raw descriptors 52 into a plurality of refined descriptors (step S20), and the refinement module 15 may update the imported descriptor lists 5 based on the refined descriptors.
Specifically, the sequence of performing steps S18 and S20 is not fixed; that is, the identification system 1 may selectively perform step S18 (or step S20), or perform steps S18 and S20 simultaneously. Further, after performing steps S18 and S20, the identification system 1 updates the descriptor lists 5 by adding the inferred descriptors to the imported descriptor lists 5 and updating the plurality of raw descriptors 52 in the imported descriptor lists 5 based on the refined descriptors (step S22).
Specifically, in one embodiment, the identification system 1 repeatedly performs steps S18 to S22 to continue generating inferred descriptors and refined descriptors and updating the descriptor lists 5 until the content of the descriptor lists 5 no longer changes. Therefore, it is possible to ensure the relationship between the inferred descriptors and the raw descriptors 52 as well as to improve the accuracy of the raw descriptors 52.
In step S18, the inference module 14 calculates an index (i.e., a relational index) of the raw descriptors 52 with respect to other node descriptors 61 based on the directed edges 62 related to the raw descriptors 52. One or more node descriptors 61 having the highest relational index may be taken as the inferred descriptor(s). Alternatively, one or more node descriptors with a relational index higher than a threshold value may be taken as the inferred descriptor(s). Further, in step S20, the refinement module 15 calculates a relational index representing the relation among the raw descriptors 52 based on the directed edges 62 related to the raw descriptors 52. The refinement module 15 may take one or more raw descriptors 52 having the highest relational index as the refined descriptor(s). Alternatively, the refinement module 15 may take one or more raw descriptors 52 having a relational index higher than a threshold value as the refined descriptor(s).
It is noted that in step S18, the inference module 14 may fetch a plurality of inferred descriptors related to the raw descriptors 52 from the node descriptors 61. Alternatively, the inference module 14 may fetch a plurality of inferred descriptors related to the refined descriptors from the node descriptors 61. In step S20, the refinement module 15 may refine a plurality of raw descriptors 52 based on the raw descriptors 52 and related directed edges 62. Alternatively, the refinement module 15 may refine a plurality of raw descriptors 52 based on the raw descriptors 52, the inferred descriptors and related directed edges 62.
After step S22, the identification system 1 further determines whether the content list 4 has been completely identified (step S24). Specifically, in step S16, the identification system 1 imports only one descriptor list 5 of the content list 4 into the DSM 120, and steps S18 to S22 are performed to identify the imported descriptor list 5. In response to a determination in step S24 that the content list 4 has not been completely identified, the identification system 1 returns to step S16 and imports the next descriptor list 5 of the content list 4 into the DSM 120. Steps S18 to S22 are repeated until all descriptor lists 5 of the content list 4 have been identified and updated.
In other embodiments, however, the identification system 1 may import all descriptor lists 5 of the content list 4 into the DSM 120 in step S16 and identify and update those descriptor lists 5 at the same time. In one such embodiment, step S24 is omitted.
In response to a determination in step S24 that the content list 4 has been completely identified, the identification system 1 outputs the updated descriptor lists 5 (step S26). Therefore, when the identification system 1 analyzes the content of each shot of the video 2 to determine what kind of advertisement is appropriate to be inserted, or analyzes a specific advertisement to determine which shot of the video 2 is appropriate for the specific advertisement, the updated content list 4 can be used for the analysis. The updated content list 4 has more accurate descriptors (e.g., the refined descriptors) as well as descriptors (e.g., the inferred descriptors) carrying subtle, abstract and extended information. Thus, the identification system 1 can obtain a more accurate analysis result by using the identification method of the invention.
Referring to
The identification system 1 can understand the relation between a descriptor A and a descriptor B in view of the DSM 120. In other words, by referring to the DSM 120, the identification system 1 knows the probability that the descriptor B exists given that the descriptor A exists, and the probability that the descriptor A exists given that the descriptor B exists. It is noted that the relational strength from descriptor A to descriptor B may differ from the relational strength from descriptor B to descriptor A.
For example, a relational strength between the descriptor “Michael Jordan” and the descriptor “President” is 0.05 because there is a news report that Michael Jordan met with US President. This means when the descriptor “Michael Jordan” exists, the probability of the co-existence of the descriptor “President” is very low. In another example, a relational strength between the descriptor “Donald Trump” and the descriptor “President” is 0.95 because the incumbent President of the United States is Donald Trump. This means when the descriptor “Donald Trump” exists, the probability of the co-existence of the descriptor “President” is very high.
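Continuing the illustrative DSM sketch above, the two directions are stored separately, so the strength from A to B can differ from the strength from B to A; the reverse value used here is an assumption added only to show the asymmetry.

```python
# The directed edges are stored per direction, so the relational strength
# from "donald trump" to "president" can differ from the reverse direction.
dsm.add_edge("president", "donald trump", 0.6)    # assumed reverse strength
print(dsm.strength("donald trump", "president"))  # 0.95
print(dsm.strength("president", "donald trump"))  # 0.6
```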
Referring to
First, as shown in
Next, as shown in
Next, as shown in
Next, as shown in
After finishing the above actions, the identification system 1 adds the generated inferred descriptors 73 to the descriptor list 5 and updates the raw descriptors 71 of the descriptor list 5 based on the refined descriptors 72. Thus, when the identification system 1 analyzes the video 2 based on the updated descriptor list 5, a more accurate analysis can be obtained.
Referring to
Specifically, when the identification system 1 receives or selects a video 2, the video conversion module 13 divides the video 2 into a plurality of shots (step S30). More specifically, the video conversion module 13 divides the video 2 based on a predetermined time unit. In the embodiment, the time unit is (but is not limited to) the time interval 51 shown in
In a first embodiment, the video conversion module 13 may divide the video 2 into a plurality of shots according to a predetermined time length (e.g., 3 seconds, 10 seconds, etc.), with each divided shot having the same time length corresponding to the predetermined time length.
In a second preferred embodiment, the video conversion module 13 can detect scene changes of the video 2 and divide the video 2 into a plurality of shots based on the scene changes (i.e., each shot corresponds to a scene of the video 2). A detailed description of scene-change detection is omitted herein for the sake of brevity because such technologies are well known in the art.
In a third preferred embodiment, the video conversion module 13 may divide the video 2 into a plurality of shots frame by frame (i.e., the time length of each shot corresponds to a single frame). The three embodiments above are non-limiting examples showing that there is no restriction on how the video conversion module 13 of the invention divides the video 2 into time segments.
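The first strategy (fixed time length) could be sketched as below; scene-change or frame-by-frame division would simply produce different boundaries. The helper name and the example values are assumptions.

```python
# Sketch of fixed-length shot division (step S30, first embodiment).
def divide_fixed_length(video_duration: float, shot_length: float = 3.0):
    """Return (start, end) intervals covering the whole video."""
    shots, start = [], 0.0
    while start < video_duration:
        end = min(start + shot_length, video_duration)
        shots.append((start, end))
        start = end
    return shots

print(divide_fixed_length(10.0))  # [(0.0, 3.0), (3.0, 6.0), (6.0, 9.0), (9.0, 10.0)]
```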
After step S30, the video conversion module 13 further analyzes one of the shots to identify one or more features of the shot (step S32). In turn, a raw descriptor 52 corresponding to each of the one or more features is created (step S34); for example, the video conversion module 13 creates ten raw descriptors 52 if there are ten features within the shot.
Subsequently, the video conversion module 13 creates a descriptor list 5 based on the raw descriptors 52 of the shot and the time interval 51 corresponding to the shot (step S36).
As shown in
Subsequently, the video conversion module 13 determines whether all shots of the video 2 have been analyzed (step S38). If not, the flowchart returns to step S32 to analyze the next shot of the video 2 in order to create a descriptor list 5 of the next shot.
In another embodiment, the video conversion module 13 analyzes all shots of the video 2 simultaneously to create descriptor lists 5 of all shots. In the embodiment, step S38 is omitted.
If the video conversion module 13 determines that the shots of the video 2 have been completely analyzed, the video conversion module 13 creates a content list 4 of the video 2 based on all created descriptor lists 5 (step S40). Then, a conversion of the content of the video 2 is finished.
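Steps S30 to S40 could be composed as in the sketch below, reusing the illustrative `DescriptorList`, `ContentList` and `divide_fixed_length` helpers above. The `identify_features` callable stands in for the CNN-based identifier and is an assumption of this sketch.

```python
# Sketch of steps S30-S40: divide the video, identify features per shot,
# wrap them in descriptor lists, and assemble the content list.
def build_content_list(video_id, video_duration, identify_features,
                       shot_length=3.0):
    shots = divide_fixed_length(video_duration, shot_length)   # step S30
    descriptor_lists = []
    for interval in shots:                                      # steps S32-S36
        raw = identify_features(video_id, interval)
        descriptor_lists.append(DescriptorList(interval, raw))
    return ContentList(video_id, descriptor_lists)              # step S40
```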
Referring to
As shown in
Subsequently, the refinement module 15 processes the raw descriptors 71 and converts them into a plurality of refined descriptors 72. Further, the refinement module 15 calculates a relational index 720 of each of the refined descriptors 72 based on the directed edge 62 related to each of the raw descriptors 71.
As shown in the embodiment of
It is noted that there are ten refined descriptors 72 in the embodiment of
At the same time, the inference module 14 processes the raw descriptors 71 to obtain a plurality of inferred descriptors 73 having a relation with the raw descriptors 71. Further, the inference module 14 calculates a relational index 730 of each inferred descriptor 73 based on the directed edge 62 related to each raw descriptor 71.
In the embodiment of
It is noted that there are ten inferred descriptors 73 in the embodiment of
Referring to
To perform the above determination, the identification system 1 of the invention further comprises an analysis module 16, which may be a physical unit or a programmed functional module, but is not limited thereto.
Specifically, the identification system 1 selects one of a plurality of videos 2 to be analyzed (step S50). Next, the content list 4 of the selected video 2 is compared with the criteria of multiple ADCs (step S52). In the embodiment, the criteria include related parameters of each ADC, such as product description, type of product, objects presented in the advertisement, audience sex, and audience age, but are not limited thereto.
After step S52, the analysis module 16 calculates a relational index of each shot of the video 2 with each ADC (step S54). Further, the analysis module 16 shows, for each shot, one or more ADCs having the highest relational index, or one or more ADCs having a relational index greater than a threshold (step S56).
For example, if a video 2 is divided into three shots and the analysis module 16 compares the video 2 with three ADCs, the analysis module 16 calculates and obtains three relational indexes for each shot, wherein each of the relational indexes represents the relation between the shot and one of the three ADCs. In the embodiment, the greater the relational index, the more appropriate the shot is for publishing the ADC.
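A hedged sketch of steps S52 to S56 follows, building on the content list structure above. Scoring by descriptor overlap with the ADC criteria keywords is an assumption of this sketch; any relational index over the updated descriptor lists would serve.

```python
# Sketch of steps S52-S56: score every shot against every ADC and report
# the best-matching ADC per shot.
def score_shot_against_adc(shot_descriptors, adc_keywords):
    shot, adc = set(shot_descriptors), set(adc_keywords)
    return len(shot & adc) / len(adc) if adc else 0.0

def best_adc_per_shot(content_list, adcs):
    """adcs: mapping of ADC name -> list of criterion keywords."""
    results = []
    for shot in content_list.shots:
        scored = {name: score_shot_against_adc(shot.raw_descriptors, kw)
                  for name, kw in adcs.items()}
        results.append((shot.interval, max(scored, key=scored.get), scored))
    return results
```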
By taking advantage of the technical solutions illustrated in
Referring to
To perform the aforementioned determination, the identification system 1 of the invention further comprises a recommendation module 17, which may be a physical unit or a programmed functional module, but is not limited thereto.
Specifically, the identification system 1 inputs the criteria of an advertisement to be analyzed (step S60). Next, the criteria are compared with the content list 4 of each of the videos 2 (step S62). In the embodiment, the criteria include related parameters of the analyzed advertisement, such as product description, type of product, objects presented in the advertisement, image properties, audience sex, and audience age, but are not limited thereto.
After step S62, the recommendation module 17 calculates a relational index of the advertisement and each shot of each video 2 (step S64). Further, the recommendation module 17 shows one or more shots having a highest relational index with the advertisement, or one or more shots having a relational index greater than a threshold with the advertisement (step S66).
For example, if a first video is divided into three shots and a second video is divided into five shots, the recommendation module 17 compares the inputted advertisement with each shot of the first video and the second video and calculates eight relational indexes for the advertisement, wherein each of the eight relational indexes represents the relation between the advertisement and one of the eight shots.
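Steps S62 to S66 could be sketched as below, reusing the illustrative `score_shot_against_adc` helper above to rank shots across candidate videos for a single advertisement; the threshold value is an assumption.

```python
# Sketch of steps S62-S66: rank every shot of every candidate video against
# one advertisement's criteria and return the shots above a threshold.
def recommend_shots(ad_keywords, content_lists, threshold=0.5):
    matches = []
    for content in content_lists:
        for shot in content.shots:
            index = score_shot_against_adc(shot.raw_descriptors, ad_keywords)
            if index > threshold:
                matches.append((content.video_id, shot.interval, index))
    return sorted(matches, key=lambda m: m[2], reverse=True)
```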
By taking advantage of the technical solutions of the
Referring to
As shown in
In the embodiment, the input unit 92 receives a plurality of videos 2 to be identified, and in turn the abovementioned descriptor lists 5 and content lists 4 are created and updated. The input unit 92 also receives a plurality of datasets 3 for training the abovementioned DSM 120. In the embodiment, the descriptor lists 5, the content lists 4 and the DSM 120 are stored in the storage media 93, but the invention is not limited thereto.
In the embodiment, the storage media 93 stores a program 930 which has machine codes or program codes executable by the processing unit 91. After the program 930 is run by the processing unit 91, the identification system 9 of the invention performs the following tasks to execute the identification method of the invention: providing a video 2; converting content of the video 2 into a content list 4; providing the DSM 120; importing a descriptor list 5 of the content list 4 into the DSM 120; fetching a plurality of inferred descriptors 73 having a relation with a plurality of raw descriptors 71 from a plurality of node descriptors 61 of the DSM 120; refining the raw descriptors 71 based on a plurality of directed edges 62 in the DSM 120 corresponding to the raw descriptors 71 so as to convert the raw descriptors 71 into a plurality of refined descriptors 72; and updating the descriptor list 5 based on the inferred descriptors 73 and the refined descriptors 72.
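An end-to-end sketch of the operations listed above is given below, composing the earlier illustrative helpers; it is a schematic of the described flow under the stated assumptions, not the stored program 930 itself.

```python
# Schematic composition of the described operations: convert the video,
# import each descriptor list, refine, infer, and update the list.
def identify_extension_messages(video_id, video_duration, identify_features, dsm):
    content = build_content_list(video_id, video_duration, identify_features)
    for shot in content.shots:                      # import each descriptor list
        refined = refine_descriptors(dsm, shot.raw_descriptors)
        inferred = infer_descriptors(dsm, refined, threshold=0.8)
        shot.raw_descriptors = sorted(set(refined) | set(inferred))  # update
    return content
```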
By utilizing the identification systems 1 and 9 of the invention and the identification method thereof, both significant features and extension messages presented in a video can be identified. As a result, the content of each shot of the video can be described correctly.
While the invention has been described in terms of preferred embodiments, those skilled in the art will recognize that the invention can be practiced with modifications within the spirit and scope of the appended claims.
Number | Date | Country | Kind |
---|---|---|---
201710526049.4 | Jun 2017 | CN | national |