1. Field of the Invention
The field of this invention relates generally to a method and algorithm for the indexing, searching, retrieval and recognition of still images, text, audio and video by applying a checksum, or any other means of producing unique values, to sequential blocks across the digital stream.
2. Prior Art
Prior art Bober, U.S. Pat. No. 7,162,105, teaches a method of representing an object appearing in a still or video image, by processing signals corresponding to the image, the method comprises deriving a plurality of numerical values associated with features appearing on the outline of an object starting from an arbitrary point on the outline and applying a predetermined ordering to the values to arrive at a representation of the outline. It further teaches a method of searching for an object in a still or video image by processing signals corresponding to images, the method comprises inputting a query in the form of a two-dimensional outline, deriving a descriptor of the outline, obtaining a descriptor of objects in stored images derived and comparing the query descriptor with each descriptor for a stored object, and selecting and displaying at least one result corresponding to an image containing an object for which the comparison indicates a degree of similarity between the query and said object.
Although Bober '105 teaches a method for indexing, searching and retrieving images from a database based on their outlines, it is complex and prone to inaccuracies, for the simple fact that computers do not do well at recognizing data based on appearance, even when complex mathematical formulas are used. Computers, on the other hand, do extremely well at dealing with numerical representations that correlate to the actual underlying values, the images' contours in this case. Bober '105 fails, however, to offer a solution for recognizing images and videos that is easy to implement and inexpensive, and that does not require a great deal of expertise or complexity.
It is the intent of the present invention to offer a highly accurate solution for the indexing, searching, recognition and retrieval of still images and videos that is easy and inexpensive to implement.
It is the objective of the present invention to offer a highly accurate solution for the indexing, searching, recognition and retrieval of still images, text, digital audio and videos that is easy and inexpensive to implement. This is accomplished by partitioning the image into smaller partitions and then applying a checksum across each partition of the digital stream, thus producing an individual value for each section for indexing, searching and retrieval, and also by manipulating the image so as to produce values that correlate to close matches of the image's sections in the storage medium.
In one preferred embodiment of this invention a digital stream (text, image, audio, video, etc.) is partitioned into one or more partitions, each partition is summed (checksum) and the resulting checksum value is used for the indexing of the pertaining digital stream, thus, enabling the summed partitions to be used as an easy and fast means for searching and retrieving the digital stream.
In one other preferred embodiment of this invention a user will be allowed to provide at least one item of information for a digital stream, either as it is displayed or by entering the information in a provided text box, for the purpose of associating related content with two or more parts of the said digital stream. The related information can be based on the digital stream partitions' values, portions of the digital stream with regard to time, user-provided categorization values related to portions of the digital stream, words related to said portions, etc.
In yet another preferred embodiment of this invention, means are provided for using the x-y axis ratios of an image's contours to search other image contours based on their respective x-y axis ratio values.
Still another preferred embodiment of this invention will offer means for relating content to a digital stream based on user-supplied information covering part or the whole of the digital stream. Such an offering will enable other related content (advertising) to be associated with the user-provided digital stream.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention:
a illustrates computing device hardware for executing software instructions along with Internet connecting devices.
a illustrates the smaller image of
b illustrates the larger image of
a illustrates a table representing the contours ratio for the images of
In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein in connection with one embodiment may be implemented within other embodiments without departing from the spirit and scope of the invention. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled. In the drawings, like numerals refer to the same or similar functionality throughout the several views.
As will be appreciated by those of skill in the art, the present invention may be embodied as a method or a computer program product. Accordingly, the present invention may take the form of an entirely software embodiment or an embodiment combining software and hardware. Furthermore, the present invention may take the form of a computer program product on a computer-readable storage medium having computer-readable program code means embodied in the medium. Any computer-readable medium may be utilized, including but not limited to: hard disks, CD-ROMs, optical storage devices, or magnetic devices.
Also, any reference to the name of a product or company is for the purpose of clarifying our discussion, and such names are registered to their respective owners.
In a preferred embodiment of this invention, a method, apparatus and algorithm (henceforth called algorithm) will be presented for subdividing a still image, digital audio or video (henceforth called images or digital stream, used here interchangeably) into smaller segments and applying a checksum algorithm (or any other means of producing unique values for each partition) to each partitioned segment of the digital stream, so as to produce distinct values for indexing each part of the specific digital stream section. As well, the following will be presented: means for selecting desired segments of the digital stream for input-searching; means for navigating the image within its spectrum so as to produce differing input-search values; means for changing its orientation so as to skew it and then select part of it, thus producing differing input-search values; means for changing the dimensions (enlarging/reducing) of a selected area of the image to produce differing input-search values; means for changing the orientation within selected areas so as to produce differing input-search values; and means for relating content to a digital-content stream.
A checksum algorithm is an algorithm used to produce a mathematical sum representing a section of data: a data file, string, data packet, digital stream, etc. In our case, images and digital audio (the digital stream) are partitioned, the checksum is applied to each partitioned area, and the resulting value is placed into a database as a means of indexing the image it represents. An image can be partitioned into a single partition, that is, only one value will be produced for the complete image. Alternatively, it can be partitioned into two or more partitions; the more it is partitioned, the more values the partitioning process will produce, and the more values, the more resolution of the image will be indexed, thus allowing better searching of images at the database level.
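The partition-and-checksum indexing just described can be sketched in a few lines. The grid layout, the choice of Adler-32 as the summing means, and the function name below are illustrative assumptions, not the invention's required form:

```python
import zlib

def partition_checksums(pixels, width, height, nx, ny):
    """Split a row-major grayscale image into an nx-by-ny grid and return
    {(col, row): checksum} for each partition (layout is an assumption)."""
    pw, ph = width // nx, height // ny   # partition width and height
    sums = {}
    for row in range(ny):
        for col in range(nx):
            block = bytearray()
            for y in range(row * ph, (row + 1) * ph):
                start = y * width + col * pw
                block += pixels[start:start + pw]
            # each partition gets a compact value usable as an index key
            sums[(col, row)] = zlib.adler32(bytes(block))
    return sums

# A tiny 4x4 "image" split into a 2x2 grid yields four index values.
img = bytes(range(16))
index = partition_checksums(img, 4, 4, 2, 2)
```

Each resulting value could then be stored in the database together with the image's identity and the partition's x-y position.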
Before we proceed any further, let's give an example of a checksum for purposes of clarity. We'll use the Adler-32 sum of the ASCII string “HELLO”, which would be calculated as follows:
ASCII Code    String A          String B
H = 72        1 + 72 = 73       0 + 73 = 73
E = 69        73 + 69 = 142     73 + 142 = 215
L = 76        142 + 76 = 218    215 + 218 = 433
L = 76        218 + 76 = 294    433 + 294 = 727
O = 79        294 + 79 = 373    727 + 373 = 1100
String Checksum=3731100 (the values 373 and 1100)=>HEX=38EE9C
Each byte is represented as a value by a computer; in our example the bytes are letters of the Latin alphabet, and they are represented by values from a character table called ASCII (American Standard Code for Information Interchange). Each alphabet is represented by such a table, with a distinct value for each character of the represented alphabet. HEX (hexadecimal) values are a way of expressing values in a 16-value range format, using 0-9 (for values 0-9) and A-F (for values 10-15).
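The running sums in the table above can be reproduced mechanically. This sketch follows the scheme as described, with String A starting at 1 and String B at 0; full Adler-32 also reduces each sum modulo 65521, a step that never triggers for a string this short:

```python
def adler_sums(text):
    """Running sums as in the table: String A starts at 1, String B at 0."""
    a, b = 1, 0
    for ch in text:
        a += ord(ch)   # String A column
        b += a         # String B column
    return a, b

a, b = adler_sums("HELLO")
# a is 373 and b is 1100; concatenated as 3731100, i.e. hex 38EE9C
```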
As we now turn to
Not all of the values for the image on the plane 100-A are illustrated on the table 100-B, because that would create a very long table. Table 100-B represents only the x-y axes for nine partitions of the image on plane 100-A, and they are: x=7 & y=7 (row 1-118) for partition 106; x=7 & y=8 (row 2-118) for partition 107; x=7 & y=9 (row 3-118) for partition 108; x=8 & y=7 (row 4-118) for partition 111; x=8 & y=8 (row 5-118) for partition 110; x=8 & y=9 (row 6-118) for partition 109; x=9 & y=7 (row 7-118) for partition 112; x=9 & y=8 (row 8-118) for partition 113; x=9 & y=9 (row 9-118) for partition 114. The partitions are illustrated by a bold square 105 around the image on the image plane 100-A.
As we now turn to
After power-up the CPU 102a will read the programming code from the ROM 104a and start processing it, and it will load an Operating System (OS) 116a from the storage device 106a into the Random Access Memory (RAM) 112a. The OS 116a will load software applications 118a into the RAM 112a as needed, and as applications 118a are executed their interaction will be presented to the user on the display 110a (which can be part of the device or attached thereto). As needed, the OS 116a will receive input from other devices that are interfaced with the device 100a through its Input Output (IO) port 108a; these devices can be, but are not limited to: mouse, keyboard, touch screen, etc. It will send output to other interfacing devices as well, such as, but not limited to: screen, printer, audio card, video card, etc.
When the device 100a communicates with other devices attached thereto it will use the Network Interface 114a. The database 120a can be integrated as part of device 100a, or it can reside at another location and be attached to the device 100a through the network interface 114a. In case it resides at a location other than device 100a, the computing device handling the database 120a will have functionality similar to that of device 100a. As well, the Internet devices doing all the communication between client 128a and server 124b through the Internet connection 128a will have device circuitry similar to that of the device 100a.
There are different reasons for partitioning an image into a smaller or greater number of partitions. Let's say that we know exactly the image we want to retrieve from the database. Let's further assume that this particular image is part of a movie clip, and the movie cannot be played to a specific audience; in this situation, the movie (a sequence of images, i.e., photographs) will simply be saved based on a single value for each image composing the movie. The same can be said for still images (photographs) that need to be restricted, and a single-partition process can be used as well.
There are still other situations in which the image doesn't need to be partitioned into many partitions; for instance, if a movie is to be blocked from a movie-sharing site, a few partitions can accomplish the task. As aforementioned, the more partitions an image has, the more resolution can be retained for indexing and searching the image, and by using other techniques, such as skewing the image, changing its dimensions, changing its color range, etc., the more accurate the search and retrieval of the image will be. In the case of a movie clip it may be necessary to partition the clip only at every other frame, without having all frames (images) partitioned.
The same mode used when partitioning an image for its indexation must be used when performing a search as well. Let's say that an image is changed to its grayscale values, partitioned, then saved. To be able to find the image, the same steps must be taken with the input image, that is, change it to grayscale, then select areas of the partitioned image and initiate the search. If an image is partitioned into four partitions and its color range is the grayscale range, the same needs to be done to the image that is used as input for the search: it must be converted to its grayscale range and partitioned into four partitions, and then the section(s) that will be used for the search must be selected.
Once sections of the input image are selected, the algorithm will produce the value(s) for the partition(s) and, lastly, initiate the search. Converting color images to their grayscale values is a good way of producing more accurate searches, since a color in one image may have different contrast in an otherwise identical image, and by graying the image those inconsistencies will be reduced or removed. Also, if the image is a high-resolution one, it can be converted to a lower resolution, for example from 2-byte (65536 colors) to 1-byte (256 colors) values. If the image has any active filter, layers, special effects, etc., and they are left on the image when it is indexed, the same must be present on the input image as well. It is a good idea, though not a prerequisite, to remove any of these special parameters, place the image in memory, partition it, then have its partitioned areas summed and saved; the same must be done to the input images as well. These changes are done for the indexing of the image only; the image itself will be saved as is, without applying any rules, that is, it will be saved in its original format.
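As one illustration of the grayscale-conversion step, a simple integer luma formula can collapse the color-contrast differences that break exact checksum matches. The weights used are a common choice, not one mandated by this disclosure:

```python
def to_grayscale(rgb_pixels):
    """Collapse (r, g, b) triples to one 8-bit luminance byte per pixel,
    using common integer luma weights (one choice among several)."""
    return bytes((299 * r + 587 * g + 114 * b) // 1000
                 for r, g, b in rgb_pixels)

# White and black map to the extremes; equal channels map to themselves.
gray = to_grayscale([(255, 255, 255), (0, 0, 0), (100, 100, 100)])
```

The grayed bytes, rather than the raw color bytes, would then be what gets partitioned and summed.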
The algorithm of this invention can be used in any conceivable way. The same image can be partitioned and saved in many different forms. For instance, one copy can be in its original color values; in another, its colors can be masked so as to have only their green, blue or red equivalents; in yet another, a filter can be applied to produce its black-and-white equivalent, its gray equivalent, and so on. The image can be saved in many different formats and in any number of partitions as well. The only requirement is that the formats used for its indexing be used for its retrieval as well. The algorithm can be programmed to pass the input image directly to the database housing the stored images, and the database can be programmed to apply all the rules to the image and return the closest matches to the client computer.
As we proceed and turn to
Let's keep
The algorithm will produce the same values as was originally produced and saved (
After the user finishes selecting the partitions of the input image 800-A, the algorithm will produce the aforementioned values, which are the illustrated results of table 800-B. Once he/she initiates some kind of query request, the query request will be sent to the database storing the images and their respective indexations, and the database will match the input values against the database table 100-B of
There are moments when an image has a different size than the image used for the input values; in this case, after selecting the area of the image to produce the input values, the image can be resized to produce different input values, and with each resizing the algorithm can produce a new search. As aforementioned, all of these interactions can be done at the client computer and passed to the database, or the algorithm can be implemented at the database level and the client computer will pass the image with its selected partitions. The whole image can be resized as well, instead of just the selected input areas.
As we turn to
As aforementioned, the higher degree an image is partitioned when it is indexed, the more resolution of it will be saved, thus providing more relevant values for locating more details of stored images at the database. As we now turn to
As we've aforementioned, before an image is used for input, rules are applied to it (resized, skewed, reshaped, filtered, etc.), as to produce various input values. Also, before the image is indexed its settings can be changed to its equivalent gray-scale, contour equivalent, (distinct RGB values) green colors only, blue colors only, red colors only, black and white, etc., and the same applied to the input image as to produce various matching values for the searching underlay algorithm. As we saw in
It is possible, however, to match images with the same or similar shapes but differing sizes without doing all of the resizing and reshaping previously described (applying rules). As we now turn to
Let's take two suppositions: 1) The first one, the smaller image 1502 is used as the source for the input and the larger image 1504 as the search target. Now the smaller image's 1502 partitions values will be matched to some partition values of the larger image 1504, since the larger image 1504 has more partitions than the smaller one 1502. If the number of matched values equal a specified threshold of the smaller image 1502, let's say that 90% of the input values of the smaller image 1502 are matched against the indexed values of the larger image 1504, then the larger image 1504 is a close match to the smaller image 1502. 2) The second one is true when the larger image 1504 is used for the input values and the smaller one 1502 is the search target. In this instance, the opposite will happen, that is, 10% of the values of the larger image 1504 are matched against the smaller image 1502 and once the algorithm compares the percentage-threshold of the matched values of the smaller image 1502, and 90% of its values were matched against the search, the same is true as for the smaller image 1502, and it is a close match to the larger image 1504.
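The two suppositions can be sketched as a single matching function. The 90%/10% figures below mirror the example, and the order-insensitive, set-based counting of matches is an assumption about how the comparison is performed:

```python
def match_fraction(input_values, indexed_values):
    """Fraction of the input image's partition checksums that are found
    among the indexed image's checksums (order-insensitive sketch)."""
    indexed = set(indexed_values)
    hits = sum(1 for v in input_values if v in indexed)
    return hits / len(input_values)

# Hypothetical checksums: the small image's 10 values all occur in the
# large image's 100 values.
small = ["A1", "B2", "C3", "D4", "E5", "F6", "G7", "H8", "I9", "J0"]
large = small + ["K1"] * 90

# Supposition 1: small image as input -> 100% of its values match.
# Supposition 2: large image as input -> only 10% of its values match,
# yet those hits cover the small image, so it is still a close match.
```

A threshold such as 0.9 on the input side (supposition 1) or on the target side (supposition 2) would then decide whether the pair counts as a close match.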
As we now turn our attention to
There is at least one more way of checking image correlations based on how close their contours are. Let's now turn to
The relationship column 1608-a is related to relationship 1608 on image plane 1600-A, as illustrated by the arrow line 1608-c. Column 1604-a is related to the x-axis 1604 of image plane 1600-A (arrow line 1604-c) and column 1602-a is related to the y-axis 1602 of the image plane 1600-A (arrow 1602-c). Column 1616 has the values for the partitions of the input image 1610 of the image plane 1600-A (table 1600-B) and of the indexed image 1606 of the image plane 1600-A (table 1600-C). This column is of importance to our discussion, so let's focus our attention on it. Since the algorithm will produce values from the input image 1610, and the same values are indexed for the saved images (1606 in this case), their values have to be the same.
Let's review a couple of relationships 1608 between the top image 1610 and bottom image 1606 of the image plane 1600-A. Let's take relationship #1 (#1 inside the circle). It is represented by the relationship #1 1608-a in both tables, 1600-B and 1600-C. As we look at column 1611 at row #1 (1600-B) and row #1 (1600-C) we see this relationship (#1 inside a circle for both rows), and as we analyze the values for both rows, they are both “1010AB”. One more: let's review relationship #2, which is shown in column 1611 at rows #9 (1600-B) and #5 (1600-C), with the same value of “1206AB” in both tables.
Let's review one x-y value. Let's take row #1 of column 1611 (table 1600-B); it has “10” for the x-axis 1604-a and “10” for the y-axis 1602-a. If we follow the x-axis 1604 to the 10th column and up the y-axis 1602 to the 10th row, we see that there is a selected partition of the image, and it is relationship #1 for image 1610. The same explanation applies to both images and their respective x-y axes, whose values are represented in their respective tables (image 1610 in table 1600-B and image 1606 in table 1600-C). The values used are fictitious and do not necessarily represent actual values for the respective partitions. They are used as is for the sake of simplicity and are not in any way intended to obscure this invention.
It is now clear that the algorithm can locate images of different sizes as per the teaching of
Now, the objective is to check each image contours for their appearances and similarities. Let's keep
Next is the x2 column 1710-x2 representing the x2-axis of 1700A (
The percentage is obtained by subtracting the value of column 1706-xy3 from the value of column 1710-xy2, multiplying the result by 100, then dividing the result of the multiplication by the value of column 1706-xy3. It can be done in other ways as well, as long as a percentage between the two values is produced. The same explanation applies to all boxes of
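The percentage calculation just described can be written directly; the function name is ours, and the sample ratios are illustrative:

```python
def ratio_difference_percent(xy2, xy3):
    """Percent difference between an input ratio (xy2) and an indexed
    ratio (xy3): subtract, multiply by 100, divide by the indexed value."""
    return (xy2 - xy3) * 100 / xy3

# A ratio of 12 against an indexed ratio of 8 differs by 50 percent;
# identical ratios differ by 0 percent.
```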
Before proceeding any further, let's review boxes 1700C and 1700D of
Back to
As aforementioned, rules can be set for the search and retrieval of images as it was illustrated by
For instance, if the first rule says that only images having 80% of their values corresponding with a target image, and 80% resemblance between the two images, will be returned, then we're certain that images not bearing any similarities will be left out of the list of images. Values for
As we aforementioned, the algorithm can locate images based on their contours, appearances, likeness, etc.; we've also mentioned that the rules of the algorithm can be set in a way so as to produce differing values for images before indexing and saving them. As we turn now to
In the case that an image has just two distinct colors, the algorithm will check for color changes; in the case of black-and-white, when it changes from black to white or from white to black. Whenever that happens, the algorithm will simply record the y-axis value and all of the x-axis values that take part in the ratio calculation and in the particular color change. If the first color change is from white to black for the y-axis, the same is true for the x-axis. Now if we look at
As we turn our attention to table 1800B, it illustrates the image ID 1818, the row order 1816, the x-axis 1810, the y-axis 1812, and the x-y ratio for both columns 1814. The x-column 1810 and y-column 1812 values are represented by the x-y axes 1806 of the drawing 1800A; they do not represent actual values and are approximate for the sake of simplicity. The y-axis 1812 has the value of “5” for all rows 1816, and for the 1st row 1816 the x-axis 1810 has the value of “4”; once the value of the x-axis 1810 is divided by the y-axis 1812, the result is the value of “0.8” 1814 on the 1st row 1816 of the “xy” column 1814, and it is the ratio between the two values. The x-y axis values for table 1800B are taken from the x-y axes 1806, starting with the top box 1807 and proceeding to the left all the way down to the last one. The y-axis coordinate value of 1807 is recorded for all x-axis coordinate values to the left of 1806, then the values (x-y) for each row are divided to produce the ratio between the two; so, each x-axis coordinate value will be used in calculating the ratio between each pixel location of the contour (x-axis) and the fixed y-axis pixel location of 1807. Once again, the values are fictitious and used here for illustration purposes, and are not intended to obscure the meanings of this invention.
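The x-y ratio column of table 1800B can be sketched as follows, assuming (as in the table) a fixed y-coordinate for the first color change and fictitious x-coordinates:

```python
def contour_ratios(y_fixed, x_coords):
    """x-y ratio column of the table: each contour x-coordinate divided
    by the fixed y-coordinate of the first color change."""
    return [x / y_fixed for x in x_coords]

# With y = 5, a contour pixel at x = 4 gives the ratio 0.8 from the text;
# pixels at x = 5 and x = 6 give 1.0 and 1.2.
ratios = contour_ratios(5, [4, 5, 6])
```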
Once a user initiates an input-search for images based on their contours, many values will be retrieved, and they may not necessarily have any relationship with the input images. Once the values are retrieved from the database, they can be grouped by the images' ID 1818 (ascending order), the y-axis 1812 (ascending order), the x-axis 1810 (descending order), and the row order 1816 (ascending order); this is shown on table 1800B of
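That grouping/ordering can be sketched with a compound sort key; representing each retrieved row as a dictionary of the table's columns is our assumption:

```python
def order_results(rows):
    """Order retrieved rows as described: image ID ascending, y-axis
    ascending, x-axis descending, row order ascending."""
    return sorted(rows, key=lambda r: (r["id"], r["y"], -r["x"], r["order"]))

rows = [
    {"id": 2, "y": 1, "x": 3, "order": 1},
    {"id": 1, "y": 2, "x": 5, "order": 2},
    {"id": 1, "y": 2, "x": 9, "order": 1},
]
ordered = order_results(rows)   # image 1 rows first, larger x before smaller
```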
As before, the threshold can be set in any conceivable way and based on the percentage of contour likeness, contour ratios, partition matches, etc. Regarding the contours of
There is still at least one more way of using the partitioning means of the algorithm to accomplish the indexing and retrieval of digital stream and as we'll see shortly, it can be used for the purpose of indexing and search-retrieval of digital audio as well. As we now turn to
Since computers don't understand variations other than those dealing with zeros and ones, before a sine-wave signal (analog format) can be handled by a computer it needs to be translated into a digital format of zeros and ones. There are many electronic circuits used for converting analog signals to digital format, and they are called “analog-to-digital converters” (ADCs). As we turn our attention now to
As aforementioned regarding the partitioning of images, it was presented throughout that each partition of the image is summed as to create unique values representing the digital partition. As we'll see, the same can be accomplished with digital audio. Let's return our attention to
Now, with digital audio, the algorithm will start partitioning the stream once a specified range (threshold) occurs; it can be anywhere in the digital stream, as long as it is specific, and once the threshold occurs the partitioning will begin and proceed thereon. In our illustration it will happen once the values happen to be “0010”, “0100” and “0110”. This is but one way; it tells the algorithm that once the first two values occur, and if the third value happens to be the last one of the three threshold values (the threshold can be set in any way and with any number of individual values), it will start partitioning. The partition can be of any length, and since the partitioning will happen at a precise threshold, it wouldn't matter how the digital-audio stream starts and ends. We've used a four-bit length for our illustration; in reality the minimum is eight bits (one byte), but it can be any number of bytes.
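A minimal sketch of this trigger-then-partition behavior, assuming four-bit words as in the illustration and Adler-32 as the per-partition summing means (both assumptions for illustration only):

```python
import zlib

TRIGGER = ["0010", "0100", "0110"]   # the example threshold sequence

def partition_audio(words, length=4):
    """Scan four-bit sample words; once the trigger sequence appears, cut
    the rest of the stream into fixed-length partitions and checksum each.
    (Real streams would use at least one byte per sample, per the text.)"""
    for i in range(len(words) - len(TRIGGER) + 1):
        if words[i:i + len(TRIGGER)] == TRIGGER:
            start = i + len(TRIGGER)
            chunks = [words[j:j + length]
                      for j in range(start, len(words), length)]
            return [zlib.adler32("".join(c).encode()) for c in chunks]
    return []   # trigger never occurred: nothing is indexed

stream = ["1111", "0010", "0100", "0110", "0001", "0011", "0101", "0111"]
values = partition_audio(stream, length=2)   # two partitions after trigger
```

Because partitioning always begins at the same precise trigger, the same stream indexed and searched under the same rules yields the same values regardless of where the captured stream starts.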
The same process (rules) that is used for partitioning the digital-audio stream for indexing must be used when performing the search as well. In the case of images we see the images on the computer screen, however with a digital audio, the digital-audio envelope is used to perform the partitioning of the audio stream. As we now turn our attention to
As explained throughout this disclosure, filtering can be applied to images before indexing, their color mode changed, their outlines produced, etc., before applying the indexing rules, and there can be one or more partitions; the more partitions, the more detailed the values indexed for the image. If the image requires just a simple mechanism for its retrieval, then fewer partitions are required; on the other hand, if greater detail is part of the requirement, the image can have a greater number of partitions. Images can be partitioned in a plurality of ways as well. For instance, every time a rule is applied (filters, color change, etc.), the partitioning can be applied: a single partition, four partitions, one hundred partitions, etc.
If an image is partitioned into four partitions [1] and then one thousand partitions [2], then when selecting the input image, a quarter of the first indexed partitions can be selected [1], and within the selected quarter the individual partitions for the second indexed partitioning can be selected [2]. This way the algorithm will first apply rules for the first selection and then for the second selection, and by doing it this way a more precise matching can be accomplished. That is, the first search will be performed to seek the quarter of the image's partitions, and the individual partitions from the quarter partitions that were found are searched thereafter. Also, instead of using the checksums of image partitions, the actual byte values of the image can be used for indexing it: if the image has 256 colors, then a one-byte value will be used for each pixel; for 65536 colors, two-byte values will be used for each pixel; all of the byte values can be used for the entire partition as well.
There are many ways of using the partitioning mechanism that we've presented so far for indexing a digital stream (images, digital video and digital audio), and one way it can be used is to index text just like any other type of digital-data stream. Text is usually indexed by having some or all of the page-word content indexed and available for searching based on the words' values or their proximity to each other. There are some instances, however, where a part of a page needs to be indexed and searched without the currently used methodology. For instance, if the page is from a book and a user needs to locate the book in its digital format, and the user knows portions of the book (maybe the user has a photocopy of a page of it, knows it by heart, has a page retrieved from a digital format, etc.), then if the partitioning process is used for indexing the book, all the user will have to do is type a portion of the page; that portion will be converted into its partition value and the book will be found in a fraction of the time it would take using the currently used process.
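A sketch of text indexing by partition value, under the simplifying assumptions that blocks are a fixed size and that the typed portion aligns to a block boundary (a real implementation would have to relax the alignment assumption):

```python
import zlib

def index_text(text, block=16):
    """Checksum fixed-size blocks of a text so a known passage can be
    found by value instead of by word search (block size is assumed)."""
    return {zlib.adler32(text[i:i + block].encode()): i
            for i in range(0, len(text), block)}

book = "It was the best of times, it was the worst of times, " * 4
idx = index_text(book)

# A known 16-character passage resolves directly to its position.
position = idx[zlib.adler32(book[16:32].encode())]
```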
As with the digital stream, the text stream can be indexed in any way and the same process used for indexing it, will be used for searching-retrieving it as well. As we now turn our attention to
So far we've presented ways of using the partition indexing/retrieval mechanism for a digital stream, and as we proceed it will become clear that there are many more ways of using this invention to enhance the way documents of all sorts are indexed and retrieved. As we turn our attention to
As we now focus our attention to
Let's say that the movie clip 2308 is about a trip to New York. A portion of the clip is related to the user's experience arranging the actual trip to New York, another part of the clip shows the night life and entertainment of New York, another shows restaurants where the user dined, and yet another shows the hotel where the user stayed. In the just-described scenario, the user can select “New York” 2322 as the category for the video clip, and as the subcategory 2320 the user selects “Tourism” for image 2300 (row #1, table 2314), “Entertainment” for image 2302 (row #2, table 2314), “Restaurants” for image 2304 (row #3, table 2314) and “Accommodations” for image 2306 (row #4, table 2314). Once again, these selections can be done as the user is viewing the movie clip and the algorithm is parsing it (doing the partitioning); the user can stop the clip at any time and select subcategories and categories, type related words, etc. As the user does the interaction, the information is recorded in a database table or by any other means; in our example it is a database table 2324. This is but one way, illustrated here for the purpose of explaining this invention, and many more modes of use can be implemented without departing from its true spirit. It can be a video stream, images, a text stream, a digital-audio stream, a slide-presentation stream, etc.
As the movie clip plays, a user viewing the clip can decide to interact with the system playing the movie (it can be a computer, television, hand-held device, computer connected to the Internet, etc.), and other related content can be displayed to the user as another movie clip or in any available content format. Let's now say that the user clicks a button, link, or something of that sort while the movie is playing, around the moment the user was preparing for the New York trip; after the user selects a link of some sort, the user will be taken to the related content: the content of row #1 of table 2320, subcategory column 2326, category column 2328 and related content at column 2330 will be presented to the user. It might be related to travel agencies specializing in New York tourism, or other types of information related to the category “tourism” and “New York”, and advertisements of all sorts. However, this is but one way; the collection of links for all related contents of table 2324 for the playing video clip “XYZ” can be displayed along with the content (a video clip in our explanation), and links to related contents can be selected any time the clip is playing, before the clip plays, or after the clip has ended, displayed on the same page, on a separate page, in a popup window, etc.
Let's briefly review table 2320 and its relationship with table 2324: the subcategory relationship 2314 is held in column 2326, the category relationship 2316 in column 2328, the related contents are stored in column 2330, and the row identifiers are in rows column 2325. Table 2320 can have other columns as well, such as a column to store user-provided words, website links directing to other websites/webpages, etc. This is a very simplified way of presenting this invention, and anyone skilled in the art will readily appreciate that there are many more ways of implementing it without departing from its true spirit and scope. It is presented as is for the sake of simplicity and is not intended to obscure the invention and its true meaning.
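As a non-limiting illustration of the tables just reviewed, the following sketch models the segment-to-content mapping as a small SQLite table and looks up related content by playtime. The schema, field names, segment boundaries and sample content strings are assumptions of this sketch, not the actual implementation taught by the invention:

```python
# Hypothetical sketch of tables 2314/2324: one row per clip segment, with
# subcategory (cf. column 2326), category (cf. column 2328) and related
# content (cf. column 2330). Segment times and content strings are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE related_content (
        row_id      INTEGER PRIMARY KEY,  -- cf. rows column 2325
        subcategory TEXT,
        category    TEXT,
        content     TEXT,
        start_sec   INTEGER,              -- segment start within the clip
        end_sec     INTEGER               -- segment end within the clip
    )""")
rows = [
    (1, "Tourism",        "New York", "travel-agency links",   0,  60),
    (2, "Entertainment",  "New York", "night-life guide",     60, 120),
    (3, "Restaurants",    "New York", "dining reviews",      120, 180),
    (4, "Accommodations", "New York", "hotel offers",        180, 240),
]
conn.executemany("INSERT INTO related_content VALUES (?,?,?,?,?,?)", rows)

def content_at(playtime_sec):
    """Return (subcategory, category, content) for the segment now playing."""
    cur = conn.execute(
        "SELECT subcategory, category, content FROM related_content "
        "WHERE ? >= start_sec AND ? < end_sec",
        (playtime_sec, playtime_sec))
    return cur.fetchone()

print(content_at(30))  # ('Tourism', 'New York', 'travel-agency links')
```

When the viewer interacts at a given playtime, `content_at` returns the row whose segment covers that moment, which is one simple way to realize the "click while playing, get related content" behavior described above.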
There is at least one more way of using the above-described means for providing related content to a digital stream (audio, video, books, images, etc.): by having means for a user to provide information regarding the digital stream directly.
Let's continue with the movie clip about New York. The user can stop the movie clip at any time and provide the category, subcategory, related words, etc., and as they are provided they will be saved in a database or by any other means. Instead of playing the movie and stopping it to provide the information, the user can simply provide it based on the timing of the clip's playback. For instance, for the first minute the user may provide a string in a text entry field that says "0:newyork:tourism:xyz" (or any other format), and the string will be parsed into the first row of table 2425; then, once the movie clip is played and the user clicks on some kind of link (as already explained), the related content is presented.
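A minimal sketch of parsing such a user-supplied string follows. The field order (minute, category, subcategory, related words) is an assumption taken from the "0:newyork:tourism:xyz" example; the invention expressly allows any other format:

```python
# Hypothetical parser for the colon-delimited entry format shown above.
# Field order is assumed: minute:category:subcategory:related-words.
def parse_entry(entry):
    # maxsplit=3 keeps any extra colons inside the trailing words field
    minute, category, subcategory, words = entry.split(":", 3)
    return {
        "minute": int(minute),
        "category": category,
        "subcategory": subcategory,
        "related_words": words,
    }

row = parse_entry("0:newyork:tourism:xyz")
# row -> {'minute': 0, 'category': 'newyork',
#         'subcategory': 'tourism', 'related_words': 'xyz'}
```

Each parsed dictionary would then become one row of the table (table 2425 in the example above).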
User-supplied data can also be embedded directly into a digital stream, and the process involves what we've already explained: at any time the digital stream can be stopped, and data related to its contents can be embedded into sections of the digital stream.
As is obvious to those skilled in the art, the present invention can be used on a single device, on multiple devices, on a computer network, over the Internet, etc. As well, an end user can apply rules to the digital stream (digital audio, digital video, text, image, slide presentation, etc.), then upload it to the computer/server that performs its parsing, indexing and saving. The rules can be any of the rules described throughout the specification of this invention. Furthermore, the user may simply provide the related information through some means of supplying information on a webpage, then upload the digital stream and the selected/supplied rules to the server, which will apply the rules thereto.
Also, at the time of upload, the user can select the types of content that will need to be related to the digital stream (category, subcategory, related words, etc.). For instance, the user may select or type timing data relating the digital stream to the related content's type (e.g. "0:001|turism|newyork", "0:02|restaurants|newyork", "0:03|accommodation|newyork", etc.).
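The pipe-delimited timing strings above can be parsed in the same spirit as the earlier colon-delimited form. The following sketch assumes the field order timing|subcategory|category, which is an interpretation of the examples given, not a fixed format of the invention:

```python
# Hypothetical parser for timing strings such as "0:02|restaurants|newyork".
# Assumed field order: timing | subcategory | category.
def parse_timed(specs):
    rows = []
    for item in specs:
        timing, subcategory, category = item.split("|")
        rows.append({
            "timing": timing,          # kept as text; format is user-chosen
            "subcategory": subcategory,
            "category": category,
        })
    return rows

entries = ["0:001|turism|newyork",
           "0:02|restaurants|newyork",
           "0:03|accommodation|newyork"]
parsed = parse_timed(entries)  # three rows, one per timed segment
```

Each resulting row associates a stretch of the stream's playtime with the content types the server should relate to it.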
Once the digital stream is presented to a user (on a computer or website) and the user interacts with it, the content related to the digital stream can be presented to the user in any conceivable way and need not necessarily be playtime related; that is, all related content can be presented at once or as the interaction with the digital stream proceeds, and it can take the form of links to other websites, a portion of the content, the complete content, content shown after the stream has finished playing, etc. As well, the content can be hosted on a content-hosting server over the Internet/network with the user interaction done through a client connection to it, or it can all reside in a single location, without departing from the true spirit, scope and meaning of the present invention.
The partitioning of a digital-content stream can be done on a client computer and its result then uploaded to the server computer, it can be done at the server computer after the stream is received from the client computer, or a combination thereof. As well, the end user at a client computer can select parts of the image as has been taught throughout, and the selection can be sent to the content-hosting computer; this can be done using Java applets, ActiveX, JavaScript on the client computer, etc.
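Client-side partitioning can be sketched as follows. The fixed block size and the choice of CRC-32 are assumptions for illustration; the invention allows a checksum or any other means of producing a value per partition:

```python
# Sketch of client-side partitioning: split a byte stream into fixed-size
# sequential blocks and compute one checksum value per block. The list of
# values is what a client would upload to the content-hosting server.
import zlib

def partition_checksums(data: bytes, block_size: int = 4096):
    """Return one CRC-32 value per sequential block of the stream."""
    return [zlib.crc32(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

stream = bytes(range(256)) * 64        # stand-in for a digital stream
values = partition_checksums(stream)   # one value per block, in stream order
```

Because the computation is deterministic, running the same rule on the server over the same stream yields identical values, which is what makes later matching possible.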
A method and apparatus for indexing, searching and matching still images, text, digital audio and video, where rules are applied before indexing, the streams are then partitioned, and a means for producing an individual value for each partition (a checksum) is applied, with the values saved into an indexed database. The same rules are applied to the input counterpart so as to produce identical values for the selected partitions of the input digital stream, and a search and match is then performed against the partition values stored in the database. Also provided are means to associate content with a content-stream partition based on the partition values, user-supplied descriptive words, timing relative to the length of presentation of the content stream, etc.
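The index-then-match flow summarized above can be sketched end to end. The in-memory index, the block size and CRC-32 are illustrative assumptions; any deterministic partitioning rule and value-producing means would serve:

```python
# Sketch of the core flow: identical partitioning and checksum rules are
# applied on the indexing side and on the query side, and matches are found
# by comparing block values.
import zlib

BLOCK = 1024

def block_values(data: bytes):
    return [zlib.crc32(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

# Indexing side: checksum value -> list of (stream_id, block_number)
index = {}

def index_stream(stream_id, data):
    for n, v in enumerate(block_values(data)):
        index.setdefault(v, []).append((stream_id, n))

# Search side: apply the identical rules, then look each value up.
def match_stream(data):
    hits = {}
    for v in block_values(data):
        # count each stream at most once per query block
        for stream_id in {sid for sid, _ in index.get(v, [])}:
            hits[stream_id] = hits.get(stream_id, 0) + 1
    return hits  # stream_id -> number of matching blocks

index_stream("clip-xyz", b"\x01" * 4096)
print(match_stream(b"\x01" * 2048))  # {'clip-xyz': 2}
```

A query stream that shares blocks with an indexed stream accumulates matches for that stream's identifier, realizing the search-and-match against stored partition values.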
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations could be made herein without departing from the true spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, computer software and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, computer software, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, computer software or steps.
This application is a continuation of U.S. patent application Ser. No. 11/682,316, filed 6 Mar. 2007, which is a continuation-in-part of U.S. patent application Ser. No. 11/669,822, filed 31 Jan. 2007, which are hereby incorporated by reference herein.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11682316 | Mar 2007 | US |
| Child | 14068751 | | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | 11669822 | Jan 2007 | US |
| Child | 11682316 | | US |