The invention generally relates to the field of machine learning, and more particularly to natural language processing using a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based integrated circuit.
An ideogram is a graphic symbol that represents an idea or concept. Some ideograms are comprehensible only by familiarity with prior convention; others convey their meaning through pictorial resemblance to a physical object.
Machine learning is an application of artificial intelligence. In machine learning, a computer or computing device is programmed to think like human beings so that the computer may be taught to learn on its own. The development of neural networks has been key to teaching computers to think and understand the world in the way human beings do.
The task of recognizing movements of a person in a video and/or a series of images has many practical uses, for example, a surveillance camera detecting suspicious activities for security purposes, a video game detecting a player's movements to understand the player's commands, or an autonomous car detecting movements of pedestrians and other vehicles, etc.
Prior art approaches to recognizing motions of an object in a video generally use software algorithms that solve time-series related problems. Such approaches are difficult to implement in a semiconductor chip. Therefore, prior approaches cannot support motion recognition in a local device, or edge computing: data must be sent to a remote server for complicated computations, and the computed results are then sent back to the local device. Such approaches suffer from time delays and data security issues.
It would therefore be desirable to have improved methods of recognizing motions of an object in a video clip or an image sequence that can be performed in a local device.
This section is for the purpose of summarizing some aspects of the invention and briefly introducing some preferred embodiments. Simplifications or omissions in this section, as well as in the abstract and the title herein, may be made to avoid obscuring the purpose of the section. Such simplifications or omissions are not intended to limit the scope of the invention.
Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. According to one aspect, a plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. A particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
According to another aspect of the invention, a 2-D symbol comprises a matrix of N×N pixels of K-bit data representing a “super-character”. The matrix is divided into M×M sub-matrices with each sub-matrix containing (N/M)×(N/M) pixels. K, N and M are positive integers, and N is preferably a multiple of M. Each sub-matrix represents one ideogram defined in an ideogram collection set. “Super-character” represents a meaning formed from a specific combination of a plurality of ideograms. The meaning of the “super-character” is learned by classifying the 2-D symbol via a trained convolutional neural networks model having bi-valued 3×3 filter kernels in a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based integrated circuit.
According to yet another aspect, the trained convolutional neural networks model is achieved with the following operations: (a) obtaining a convolutional neural networks model by training the convolutional neural networks model based on image classification of a labeled dataset, which contains a sufficiently large number of multi-layer 2-D symbols, the convolutional neural networks model including multiple ordered filter groups, each filter in the multiple ordered filter groups containing a standard 3×3 filter kernel; (b) modifying the convolutional neural networks model by converting the respective standard 3×3 filter kernels to corresponding bi-valued 3×3 filter kernels of a currently-processed filter group in the multiple ordered filter groups based on a set of kernel conversion schemes; (c) retraining the modified convolutional neural networks model until a desired convergence criterion is met; and (d) repeating (b)-(c) for another filter group until all multiple ordered filter groups have been converted to the bi-valued 3×3 filter kernels.
The ideogram collection set includes, but is not limited to, pictograms, icons, logos, logosyllabic characters, punctuation marks, numerals, and special characters.
One of the objectives, features and advantages of the invention is to use a CNN based integrated circuit having dedicated built-in logic for performing simultaneous convolutions such that the image processing technique (i.e., convolutional neural networks) for motion recognition is conducted in hardware.
Other objects, features, and advantages of the invention will become apparent upon examining the following detailed description of an embodiment thereof, taken in conjunction with the attached drawings.
These and other features, aspects, and advantages of the invention will be better understood with regard to the following description, appended claims, and accompanying drawings as follows:
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will become obvious to those skilled in the art that the invention may be practiced without these specific details. The descriptions and representations herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, and components have not been described in detail to avoid unnecessarily obscuring aspects of the invention.
Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As used herein, the terms “vertical”, “horizontal”, “diagonal”, “left”, “right”, “top”, “bottom”, “column”, “row”, and “diagonally” are intended to provide relative positions for the purposes of description, and are not intended to designate an absolute frame of reference. Additionally, as used herein, the terms “character” and “script” are used interchangeably.
Embodiments of the invention are discussed herein with reference to the accompanying drawings.
Referring first to the drawings, an example two-dimensional symbol 100 comprising a matrix of N×N pixels of K-bit data is described.
“Super-character” represents at least one meaning each formed with a specific combination of a plurality of ideograms. Since an ideogram can be represented in a certain size matrix of pixels, two-dimensional symbol 100 is divided into M×M sub-matrices. Each of the sub-matrices represents one ideogram, which is defined in an ideogram collection set by humans. “Super-character” contains a minimum of two and a maximum of M×M ideograms. Both N and M are positive integers, and N is preferably a multiple of M.
A first example partition scheme of dividing a two-dimensional symbol into M×M sub-matrices is shown in the drawings.
A second example partition scheme 220 of dividing a two-dimensional symbol into M×M sub-matrices 222 is also shown in the drawings.
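The partitioning lends itself to a direct implementation. The following is a minimal Python sketch of assembling one layer of a two-dimensional symbol; the sizes N = 224 and M = 4, the form_symbol helper and the random stand-in glyphs are illustrative assumptions, and a color symbol would simply use three such layers (red, green and blue).

```python
import numpy as np

N, M = 224, 4                       # illustrative sizes: N is a multiple of M
SUB = N // M                        # each sub-matrix is (N/M) x (N/M) pixels

def form_symbol(ideograms):
    """Place up to M*M rasterized ideograms into one layer of a 2-D symbol."""
    symbol = np.zeros((N, N), dtype=np.uint8)     # unused spaces stay empty
    for k, glyph in enumerate(ideograms[:M * M]):
        r, c = divmod(k, M)                       # row-major placement
        symbol[r*SUB:(r+1)*SUB, c*SUB:(c+1)*SUB] = glyph
    return symbol

# 16 random stand-ins for rasterized ideograms, with 5-bit (K = 5) pixel data.
glyphs = [np.random.randint(0, 32, (SUB, SUB), dtype=np.uint8) for _ in range(16)]
layer = form_symbol(glyphs)                       # one 224 x 224 layer
```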
Only a limited number of features of an ideogram can be represented using a single two-dimensional symbol. For example, features of an ideogram can be black and white when the data of each pixel contains one bit. Features such as grayscale shades can be shown with data in each pixel containing more than one bit.
Additional features are represented using two or more layers of an ideogram. In one embodiment, three respective basic color layers of an ideogram (i.e., red, green and blue) are used collectively for representing different colors in the ideogram. Data in each pixel of the two-dimensional symbol contains a K-bit binary number. K is a positive integer. In one embodiment, K is 5.
In another embodiment, three related ideograms are used for representing other features, such as a dictionary-like definition of a Chinese character shown in the drawings.
The ideogram collection set includes, but is not limited to, pictograms, icons, logos, logosyllabic characters, punctuation marks, numerals, and special characters. Logosyllabic characters may contain one or more of Chinese characters, Japanese characters, Korean characters, etc.
In order to systematically include Chinese characters, a standard Chinese character set (e.g., GB18030) may be used as a start for the ideogram collection set. For including Japanese and Korean characters, CJK Unified Ideographs may be used. Other character sets for logosyllabic characters or scripts may also be used.
A specific combined meaning of ideograms contained in a “super-character” is a result of using image processing techniques in a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based computing system. Image processing techniques include, but are not limited to, convolutional neural networks, recurrent neural networks, etc.
“Super-character” represents a combined meaning of at least two ideograms out of a maximum of M×M ideograms. In one embodiment, a pictogram and a Chinese character are combined to form a specific meaning. In another embodiment, two or more Chinese characters are combined to form a meaning. In yet another embodiment, one Chinese character and a Korean character are combined to form a meaning. There is no restriction as to which two or more ideograms may be combined.
Ideograms contained in a two-dimensional symbol for forming “super-character” can be arbitrarily located. No specific order within the two-dimensional symbol is required. Ideograms can be arranged left to right, right to left, top to bottom, bottom to top, or diagonally.
Using the written Chinese language as an example, combining two or more Chinese characters may result in a “super-character” including, but not limited to, phrases, idioms, proverbs, poems, sentences, paragraphs, written passages, and articles (i.e., written works). In certain instances, the “super-character” may pertain to a particular area of the written Chinese language. The particular area may include, but is not limited to, certain folk stories, historic periods, specific backgrounds, etc.
Referring now to the drawings, an example CNN based computing system is described.
The CNN based computing system 400 may be implemented on integrated circuits as a digital semi-conductor chip (e.g., a silicon substrate) and contains a controller 410, and a plurality of CNN processing units 402a-402b operatively coupled to at least one input/output (I/O) data bus 420. Controller 410 is configured to control various operations of the CNN processing units 402a-402b, which are connected in a loop with a clock-skew circuit.
In one embodiment, each of the CNN processing units 402a-402b is configured for processing imagery data, for example, the two-dimensional symbol 100 described above.
To store an ideogram collection set, one or more storage units operatively coupled to the CNN based computing system 400 are required. Storage units (not shown) can be located either inside or outside the CNN based computing system 400, using well-known techniques.
“Super-character” may contain more than one meaning in certain instances. “Super-character” can tolerate certain errors that can be corrected with error-correction techniques. In other words, the pixels representing ideograms do not have to be exact. The errors may have different causes, for example, data corruption during data retrieval, etc.
In another embodiment, the CNN based computing system is a digital integrated circuit that is extendable and scalable. For example, multiple copies of the digital integrated circuit may be implemented on a single semi-conductor chip, as shown in the drawings.
All of the CNN processing engines are identical. For illustration simplicity, only a few (i.e., CNN processing engines 422a-422h, 432a-432h) are shown in the drawings.
Each CNN processing engine 422a-422h, 432a-432h contains a CNN processing block 424, a first set of memory buffers 426 and a second set of memory buffers 428. The first set of memory buffers 426 is configured for receiving imagery data and for supplying the already received imagery data to the CNN processing block 424. The second set of memory buffers 428 is configured for storing filter coefficients and for supplying the already received filter coefficients to the CNN processing block 424. In general, the number of CNN processing engines on a chip is 2^n, where n is an integer (i.e., 0, 1, 2, 3, . . . ).
The first and second I/O data buses 430a-430b are shown connecting the CNN processing engines 422a-422h, 432a-432h in a sequential scheme. In another embodiment, the at least one I/O data bus may have a different connection scheme to the CNN processing engines to accomplish the same purpose of parallel data input and output for improving performance.
Process 500 starts at action 502 by receiving a string of natural language texts 510 in a first computing system 520 having at least one application module 522 installed thereon. The first computing system 520 can be a general computer capable of converting a string of natural language texts 510 to a multi-layer two-dimensional symbol 531a-531c (i.e., an image contained in a matrix of N×N pixels of data in multiple layers).
Next, at action 504, a multi-layer two-dimensional symbol 531a-531c containing M×M ideograms 532 (e.g., the two-dimensional symbol 100 described above) is formed from the received string with the at least one application module 522 in the first computing system 520.
Finally, at action 506, the meaning of the “super-character” contained in the multi-layer two-dimensional symbol 531a-531c is learned in a second computing system 540 by using an image processing technique 538 to classify the multi-layer two-dimensional symbol 531a-531c, which is formed in the first computing system 520 and transmitted to the second computing system 540. The second computing system 540 is capable of image processing of imagery data such as the multi-layer two-dimensional symbol 531a-531c.
Transmitting the multi-layer 2-D symbol 531a-531c can be performed in many well-known manners, for example, through a wired or wireless network.
In one embodiment, the first computing system 520 and the second computing system 540 are the same computing system (not shown).
In yet another embodiment, the first computing system 520 is a general computing system while the second computing system 540 is a CNN based computing system 400 implemented as integrated circuits on a semi-conductor chip, as shown in the drawings.
The image processing technique 538 includes predefining a set of categories 542 (e.g., “Category-1”, “Category-2”, . . . , “Category-X”). As a result of performing the image processing technique 538, respective probabilities 544 of the predefined categories are determined for associating the meaning of the “super-character” with the most probable category.
In another embodiment, predefined categories contain commands that can activate a sequence of instructions on a smart electronic device (e.g., computing device, smart phone, smart appliance, etc.). For example, a multi-layer two-dimensional symbol is formed from a string of 16 logosyllabic Chinese characters. The “super-character” in the multi-layer 2-D symbol thus contains 16 ideograms in three colors (i.e., red, green and blue). After applying the image processing technique to imagery data of the 2-D symbol, a series of commands for smart electronic devices is obtained by classifying the imagery data with a set of predefined commands. In this particular example, the meaning of the 16 logosyllabic Chinese characters is “open an online map and find the nearest route to fast food”. The series of commands may then be, for example: launch a map application, search for nearby fast food restaurants, select the nearest result, and compute a route to it.
In one embodiment, the image processing technique 538 comprises example convolutional neural networks, described below.
Process 600 starts at action 602 by receiving a string of natural language texts in a computing system having at least one application module installed thereon. An example application module is software that contains instructions for the computing system to perform the actions and decisions set forth in process 600. The string of natural language texts may include, but is not necessarily limited to, logosyllabic characters, numerals, special characters, western languages based on Latin letters, etc. The string of natural language texts can be input to the computing system in various well-known manners, for example, keyboard, mouse, voice-to-text, etc.
Next, at action 604, a size of the received string of natural language texts is determined. Then at decision 610, it is determined whether the size is greater than M×M (i.e., the maximum number of ideograms in the two-dimensional symbol). In one embodiment, M is 4 and M×M is therefore 16. In another embodiment, M is 8 and M×M is then 64.
When decision 610 is true, the received string is too large to fit into the 2-D symbol and must first be reduced in accordance with at least one language text reduction scheme described below.
Process 600 follows the ‘yes’ branch to action 611, in which process 600 attempts to identify an unimportant text in the string according to at least one relevant grammar-based rule. The relevant grammar-based rule is associated with the received string of natural language texts. For example, when the natural language is Chinese, the relevant grammar is the Chinese grammar. Next, at decision 612, it is determined whether an unimportant text has been identified. If ‘yes’, at action 613, the identified unimportant text is deleted from the string, thereby reducing the size of the string by one. At decision 614, it is determined whether the size of the string is equal to M×M. If not, process 600 goes back to repeat the loop of action 611, decision 612, action 613 and decision 614. If decision 614 is true, process 600 ends after performing action 618, in which a multi-layer 2-D symbol is formed by converting the string in its current state (i.e., with one or more unimportant texts possibly deleted).
During the aforementioned loop 611-614, if there is no more unimportant text in the received string, decision 612 becomes ‘no’. Process 600 moves to action 616 to further reduce the size of the string to M×M via a randomized text reduction scheme, which can be truncation or arbitrary selection. At action 618, a multi-layer 2-D symbol is formed by converting the string in its current state. Process 600 ends thereafter.
The randomized text reduction scheme and the aforementioned scheme of deleting unimportant text are referred to as the at least one language text reduction scheme.
Referring back to decision 610, if it is false, process 600 follows the ‘no’ branch to decision 620. If the size of the received string is equal to M×M, decision 620 is true. Process 600 moves to action 622, in which a multi-layer 2-D symbol is formed by converting the received string. Process 600 ends thereafter.
If decision 620 is false (i.e., the size of the received string is less than M×M), process 600 moves to another decision 630, in which it is determined whether a padding operation of the 2-D symbol is desired. If ‘yes’, at action 632, the string is padded with at least one text to increase the size of the string to M×M in accordance with at least one language text increase scheme. In other words, at least one text is added to the string such that the size of the string is equal to M×M. In one embodiment, the language text increase scheme requires that one or more key texts be identified from the received string first; the one or more identified key texts are then repeatedly appended to the received string. In another embodiment, the language text increase scheme requires that one or more texts from the received string be repeatedly appended to the string. Next, action 622 is performed to form a multi-layer 2-D symbol by converting the padded string (i.e., the received string plus at least one additional text). Process 600 ends thereafter.
If decision 630 is false, process 600 ends after performing action 634, in which a multi-layer 2-D symbol is formed by converting the received string, which has a size less than M×M. As a result, the 2-D symbol contains at least one empty space. In one embodiment, the multi-layer two-dimensional symbol 531a-531c contains three layers for red, green and blue hues. Each pixel in each layer of the two-dimensional symbol contains K-bit data. In one embodiment, K equals 8 for supporting true color, which contains 256 shades of red, green and blue. In another embodiment, K equals 5 for a reduced color map having 32 shades of red, green and blue.
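The size-fitting decisions of process 600 can be summarized in a short sketch. The Python below is a minimal illustration under stated assumptions: find_unimportant_text is a naive hypothetical stand-in for the grammar-based rules, the fallback reduction is truncation, and padding repeats texts from the received string.

```python
# Minimal sketch of the size-fitting logic of process 600: reduce or pad a
# list of texts to exactly M x M entries before forming a 2-D symbol.
def find_unimportant_text(texts):
    """Hypothetical stand-in for a grammar-based rule (decision 612)."""
    STOP = {"the", "a", "an", "of"}               # illustrative rule only
    for i, t in enumerate(texts):
        if t.lower() in STOP:
            return i
    return None

def fit_to_symbol(texts, M=4, pad=True):
    cap = M * M                                   # maximum number of ideograms
    while len(texts) > cap:                       # decision 610 is true
        idx = find_unimportant_text(texts)        # action 611
        if idx is None:
            texts = texts[:cap]                   # action 616: truncation
            break
        del texts[idx]                            # action 613
    if pad and texts and len(texts) < cap:        # decision 630 is true
        base, i = list(texts), 0
        while len(texts) < cap:                   # action 632: repeat texts
            texts.append(base[i % len(base)])
            i += 1
    return texts                                  # then convert to a 2-D symbol

tokens = "the quick brown fox jumps over the lazy dog".split()
print(fit_to_symbol(tokens, M=2))                 # exactly 4 texts remain
```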
Based on convolutional neural networks, a multi-layer two-dimensional symbol 711a-711c as input imagery data is processed with convolutions using a first set of filters or weights 720. Since the imagery data of the 2-D symbol 711a-711c is larger than the filters 720, each corresponding overlapped sub-region 715 of the imagery data is processed. After the convolutional results are obtained, activation may be conducted before a first pooling operation 730. In one embodiment, activation is achieved with rectification performed in a rectified linear unit (ReLU). As a result of the first pooling operation 730, the imagery data is reduced to a reduced set of imagery data 731a-731c. For 2×2 pooling, the reduced set of imagery data is reduced by a factor of 4 from the previous set.
The previous convolution-to-pooling procedure is repeated. The reduced set of imagery data 731a-731c is then processed with convolutions using a second set of filters 740. Similarly, each overlapped sub-region 735 is processed. Another activation can be conducted before a second pooling operation. The convolution-to-pooling procedures are repeated for several layers and finally connected to Fully Connected Networks (FCN) 760. In image classification, respective probabilities 544 of predefined categories 542 can be computed in FCN 760.
This repeated convolution-to-pooling procedure is trained using a known dataset or database. For image classification, the dataset contains the predefined categories. A particular set of filters, activation and pooling can be tuned and obtained before being used for classifying imagery data, for example, a specific combination of filter types, number of filters, order of filters, pooling types, and/or when to perform activation. In one embodiment, the imagery data is the multi-layer two-dimensional symbol 711a-711c, which is formed from a string of natural language texts.
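The convolution-activation-pooling flow described above maps directly to a few lines of code. The following minimal Python sketch uses float arithmetic; the zero padding, the random 3×3 kernel and the choice of max pooling are illustrative assumptions rather than requirements of the design.

```python
import numpy as np

def conv3x3(image, kernel, bias=0.0):
    """One 3x3 convolution over a 2-D array, zero-padded to keep H x W."""
    H, W = image.shape
    padded = np.pad(image, 1)
    out = np.zeros((H, W))
    for m in range(H):
        for n in range(W):
            out[m, n] = np.sum(padded[m:m+3, n:n+3] * kernel) + bias
    return out

def relu(x):
    return np.maximum(x, 0.0)                     # activation: clamp negatives

def pool2x2(x):
    """2x2 pooling: a factor-of-4 reduction of the data."""
    H, W = x.shape
    return x.reshape(H//2, 2, W//2, 2).max(axis=(1, 3))

layer = np.random.rand(224, 224)                  # one layer of a 2-D symbol
kernel = np.random.randn(3, 3)
reduced = pool2x2(relu(conv3x3(layer, kernel)))   # 112 x 112 feature map
```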
In one embodiment, the convolutional neural networks are based on the Visual Geometry Group (VGG16) neural network architecture.
More details of a CNN processing engine 802 in a CNN based integrated circuit are described below.
In order to achieve faster computations, a few computational performance improvement techniques have been used and implemented in the CNN processing block 804. In one embodiment, representation of imagery data uses as few bits as practical (e.g., a 5-bit representation). In another embodiment, each filter coefficient is represented as an integer with a radix point. Similarly, the integer representing the filter coefficient uses as few bits as practical (e.g., a 12-bit representation). As a result, 3×3 convolutions can be performed using fixed-point arithmetic for faster computations.
Each 3×3 convolution produces one convolution operations result, Out(m, n), based on the following formula:

Out(m, n) = Σ(1≤i,j≤3) In(m, n, i, j) × C(i, j) + b    (1)

where: m and n are the row and column numbers of the sampling location; In(m, n, i, j) is the 3-pixel by 3-pixel area of imagery data centered at sampling location (m, n); C(i, j) represents one of the nine weight or filter coefficients of the 3×3 filter kernel; and b represents the offset or bias coefficient.
Each CNN processing block 804 produces Z×Z convolution operations results simultaneously, and all CNN processing engines perform simultaneous operations. In one embodiment, the 3×3 weight or filter coefficients are each 12-bit while the offset or bias coefficient is 16-bit or 18-bit.
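A rough software model of Formula (1) under these bit widths is sketched below. The radix-point position (FRAC) and the (Z+2)-pixel by (Z+2)-pixel input arrangement are assumptions made for illustration.

```python
import numpy as np

FRAC = 8                                          # assumed radix-point position

def to_fixed(x, bits):
    """Quantize to a signed fixed-point integer with FRAC fractional bits."""
    q = np.round(np.asarray(x) * (1 << FRAC)).astype(np.int64)
    lim = 1 << (bits - 1)
    return np.clip(q, -lim, lim - 1)              # saturate to the signed range

def conv3x3_fixed(block, kernel, bias):
    """Out(m, n) per Formula (1) over a (Z+2) x (Z+2) block of 5-bit data."""
    C = to_fixed(kernel, 12)                      # 12-bit filter coefficients
    b = to_fixed(bias, 16)                        # 16-bit offset or bias
    Z = block.shape[0] - 2
    out = np.zeros((Z, Z), dtype=np.int64)
    for m in range(Z):
        for n in range(Z):
            acc = np.sum(block[m:m+3, n:n+3].astype(np.int64) * C)
            out[m, n] = acc + b                   # result is scaled by 2**FRAC
    return out

block = np.random.randint(0, 32, (16, 16))        # 5-bit imagery data, Z = 14
result = conv3x3_fixed(block, np.random.randn(3, 3) * 0.1, 0.5)
```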
To perform 3×3 convolutions at each sampling location, the imagery data must be properly arranged in memory.
Imagery data are stored in a first set of memory buffers 806, while filter coefficients are stored in a second set of memory buffers 808. Both imagery data and filter coefficients are fed to the CNN block 804 at each clock of the digital integrated circuit. Filter coefficients (i.e., C(3×3) and b) are fed into the CNN processing block 804 directly from the second set of memory buffers 808. However, imagery data are fed into the CNN processing block 804 via a multiplexer MUX 805 from the first set of memory buffers 806. Multiplexer 805 selects imagery data from the first set of memory buffers based on a clock signal (e.g., pulse 812).
Otherwise, multiplexer MUX 805 selects imagery data from a first neighbor CNN processing engine (from the left side) through the clock-skew circuit 820.
At the same time, a copy of the imagery data fed into the CNN processing block 804 is sent to a second neighbor CNN processing engine (to the right side) through the clock-skew circuit 820.
After 3×3 convolutions for each group of imagery data are performed for a predefined number of filter coefficients, convolution operations results Out(m, n) are sent to the first set of memory buffers via another multiplexer MUX 807 based on another clock signal (e.g., pulse 811). An example clock cycle 810 demonstrates the time relationship between pulse 811 and pulse 812. As shown, pulse 811 is one clock before pulse 812; as a result, the 3×3 convolution operations results are stored into the first set of memory buffers after a particular block of imagery data has been processed by all CNN processing engines through the clock-skew circuit 820.
After the convolution operations result Out(m, n) is obtained from Formula (1), an activation procedure may be performed. Any convolution operations result Out(m, n) less than zero (i.e., a negative value) is set to zero; in other words, only positive output values are kept. For example, a positive output value of 10.5 remains 10.5 while −2.3 becomes 0. Activation introduces non-linearity in the CNN based integrated circuits.
If a 2×2 pooling operation is required, the Z×Z output results are reduced to (Z/2)×(Z/2). In order to store the (Z/2)×(Z/2) output results in corresponding locations in the first set of memory buffers, additional bookkeeping techniques are required to track proper memory addresses such that four (Z/2)×(Z/2) output results can be processed in one CNN processing engine.
To demonstrate a 2×2 pooling operation, consider the Z×Z output results grouped into 2-pixel by 2-pixel blocks, with each block reduced to a single value in the (Z/2)×(Z/2) result.
An input image generally contains a large amount of imagery data. In order to perform image processing operations, an example input image 1400 (e.g., a two-dimensional symbol 100 described above) is partitioned into Z-pixel by Z-pixel blocks.
Although the invention does not require a specific characteristic dimension of an input image, the input image may need to be resized to fit a predefined characteristic dimension for certain image processing procedures. In an embodiment, a square shape of (2^L×Z)-pixel by (2^L×Z)-pixel is required, where L is a positive integer (e.g., 1, 2, 3, 4, etc.). When Z equals 14 and L equals 4, the characteristic dimension is 2^4×14 = 224. In another embodiment, the input image is a rectangular shape with dimensions of (2^I×Z)-pixel by (2^J×Z)-pixel, where I and J are positive integers.
In order to properly perform 3×3 convolutions at pixel locations around the border of a Z-pixel by Z-pixel block, additional imagery data from neighboring blocks are required.
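As a concrete illustration, the (Z+2)-pixel by (Z+2)-pixel region needed to convolve one Z-pixel by Z-pixel block may be gathered as sketched below; treating pixels beyond the image boundary as zero is an assumption, since the border policy is not spelled out here.

```python
import numpy as np

def block_with_border(image, br, bc, Z=14):
    """Return the (Z+2) x (Z+2) region around block (br, bc), border included."""
    padded = np.pad(image, 1)                     # zeros beyond the image edge
    r, c = br * Z, bc * Z                         # top-left pixel of the block
    return padded[r:r+Z+2, c:c+Z+2]

image = np.random.randint(0, 32, (224, 224))      # 16 x 16 blocks of 14 x 14
region = block_with_border(image, 0, 1)           # 16 x 16 region for one block
```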
When more than one CNN processing engine is configured on the integrated circuit, each CNN processing engine is connected to first and second neighbor CNN processing engines via a clock-skew circuit. For illustration simplicity, only the CNN processing block and memory buffers for imagery data are shown. An example clock-skew circuit 1540 for a group of example CNN processing engines is shown in the drawings.
The CNN processing engines are connected via the example clock-skew circuit 1540 to form a loop. In other words, each CNN processing engine sends its own imagery data to a first neighbor and, at the same time, receives a second neighbor's imagery data. Clock-skew circuit 1540 can be achieved in well-known manners; for example, each CNN processing engine is connected with a D flip-flop 1542.
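The loop behavior can be modeled in a few lines. The following minimal Python sketch assumes eight engines and treats each engine's D flip-flop as one slot in a ring that shifts one position per clock.

```python
# Each engine holds one block of imagery data in its D flip-flop; on every
# clock the ring shifts one position, so each block visits every engine.
def rotate_once(blocks):
    return [blocks[-1]] + blocks[:-1]             # one-position ring shift

blocks = [f"block-{i}" for i in range(8)]         # 8 engines, 8 data blocks
for _ in range(len(blocks)):                      # after 8 clocks, back to start
    blocks = rotate_once(blocks)
print(blocks)                                     # original arrangement again
```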
Referring now to the drawings, a flowchart of an example process 1600 of natural language processing using a CNN based integrated circuit is described.
Process 1600 starts at action 1602 by receiving a string of written natural language texts in a computing system (e.g., a computer with multiple processing units). At action 1604, a multi-layer two-dimension (2-D) symbol is formed from the received string according to a set of 2-D symbol creation rules. The 2-D symbol contains a “super-character” representing a meaning formed from a specific combination of a plurality of ideograms contained in the 2-D symbol.
Details of an example multi-layer 2-D symbol 100 are described and shown above.
Next, at action 1606, the meaning of the “super-character” is learned by classifying the 2-D symbol via a trained convolutional neural networks model having bi-valued 3×3 filter kernels in the CNN based integrated circuit.
A trained convolutional neural networks model is achieved with an example set of operations 1700. At action 1702, a convolutional neural networks model is first obtained by training the model based on image classification of a labeled dataset, which contains a sufficiently large number of multi-layer 2-D symbols. The convolutional neural networks model includes multiple ordered filter groups, and each filter in the multiple ordered filter groups contains a standard 3×3 filter kernel.
Then, at action 1704, the convolutional neural networks model is modified by converting respective standard 3×3 filter kernels 1810 to corresponding bi-valued 3×3 filter kernels 1820 of a currently-processed filter group in the multiple ordered filter groups based on a set of kernel conversion schemes. In one embodiment, each of the nine coefficients C(i, j) in the corresponding bi-valued 3×3 filter kernel 1820 is assigned a value ‘A’, which equals the average of the absolute coefficient values of the standard 3×3 filter kernel 1810, multiplied by the sign of the corresponding coefficient, per the following formula:

A = ( Σ(1≤i,j≤3) |C(i, j)| ) / 9, with each bi-valued coefficient set to A × sign(C(i, j))
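This conversion scheme is straightforward to express in code. The sketch below follows the formula above directly; the numeric example kernel is illustrative.

```python
import numpy as np

def to_bivalued(kernel):
    """A = mean absolute value; each coefficient becomes A x sign(C(i, j))."""
    A = np.mean(np.abs(kernel))                   # A = sum(|C(i, j)|) / 9
    return A * np.sign(kernel)                    # a zero coefficient stays 0

std = np.array([[ 0.30, -0.12,  0.05],
                [-0.40,  0.88, -0.07],
                [ 0.15, -0.22,  0.33]])
bi = to_bivalued(std)                             # every entry is +/-0.28
```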
Filter groups are converted one at a time in the order defined in the multiple ordered filter groups. In certain situations, two consecutive filter groups are optionally combined such that the training of the convolutional neural networks model is more efficient.
Next, at action 1706, the modified convolutional neural networks model is retrained until a desired convergence criterion is met. There are a number of well-known convergence criteria including, but not limited to, completing a predefined number of retraining operations, convergence of the accuracy loss due to filter kernel conversion, etc. In one embodiment, all filter groups, including those already converted in previous retraining operations, can be changed or altered for fine tuning. In another embodiment, the already converted filter groups are frozen or unaltered during the retraining operation of the currently-processed filter group.
Process 1700 then moves to decision 1708, where it is determined whether there is another unconverted filter group. If ‘yes’, process 1700 moves back to repeat actions 1704-1706 until all filter groups have been converted; decision 1708 becomes ‘no’ thereafter. At action 1710, coefficients of the bi-valued 3×3 filter kernels in all filter groups are transformed from a floating point number format to a fixed point number format to accommodate the data structure required in the CNN based integrated circuit. Furthermore, the fixed point numbers are implemented as reconfigurable circuits in the CNN based integrated circuit. In one embodiment, the coefficients are implemented using a 12-bit fixed point number format.
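Action 1710 amounts to quantizing each coefficient. The following is a minimal sketch; the split of the 12 bits around the radix point (here, 8 fractional bits) is an illustrative assumption.

```python
import numpy as np

FRAC = 8                                          # assumed fractional bits

def to_fixed12(x):
    """Action 1710 sketch: quantize coefficients to signed 12-bit fixed point."""
    q = np.round(np.asarray(x) * (1 << FRAC)).astype(np.int64)
    return np.clip(q, -(1 << 11), (1 << 11) - 1).astype(np.int16)

def from_fixed12(q):
    return np.asarray(q, dtype=np.float64) / (1 << FRAC)

kernel = 0.28 * np.sign(np.random.randn(3, 3))    # a bi-valued kernel
q = to_fixed12(kernel)                            # entries become +/-72
assert np.allclose(from_fixed12(q), kernel, atol=1 / (1 << FRAC))
```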
Referring now to the drawings, a first example process 2000 of recognizing motions of an object is described.
Process 2000 starts by selecting a plurality of frames out of a video clip or an image sequence of interest at action 2002.
Next, at action 2004, a text category is associated with each of the selected frames by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object of interest. This can be performed in a multiple-processor computing system. The object of interest can be many different things, for example, an animal, an animal body part, a vehicle, etc. Poses of a person (i.e., the object) may include, but are not limited to, standing, squatting, sitting, a specific hand gesture, smiling, crying, eye blinking, etc. Once imagery data is associated with a text category, motions of an object can be derived from a specific sequential combination of text categories.
Next, at action 2006, a “super-character” is formed by embedding respective text categories of the plurality of frames as corresponding ideograms. The “super-character” is a 2-D symbol having multiple ideograms contained therein, as described above.
Finally, at action 2008, a particular motion of the object of interest is recognized by obtaining the meaning of the “super-character” with image classification via a trained convolutional neural networks model for various motions of the object derived from a specific sequential combination of respective text categories. For example, a specific sequential combination of three hand gestures 2201-2203 may be recognized as a particular motion of a hand.
The convolutional neural networks model can be implemented in a semi-conductor chip, as described above.
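End to end, process 2000 may be sketched in a few lines of Python. The pose classifier, the glyph rasterizer and the motion classifier below are toy hypothetical stand-ins for the trained deep-learning and convolutional neural networks models described above.

```python
import numpy as np

N, M = 224, 4
SUB = N // M

def pose_of(frame):                               # toy pose classifier
    return "standing" if frame.mean() > 0.5 else "squatting"

def rasterize(category):                          # toy text-to-ideogram glyph
    rng = np.random.default_rng(sum(map(ord, category)))
    return rng.integers(0, 32, (SUB, SUB), dtype=np.uint8)

def motion_of(symbol):                            # toy super-character classifier
    return "standing-up" if symbol.mean() > 15 else "sitting-down"

def recognize_motion(frames):
    poses = [pose_of(f) for f in frames[:M * M]]  # actions 2002 and 2004
    symbol = np.zeros((N, N), dtype=np.uint8)     # action 2006: super-character
    for k, p in enumerate(poses):
        r, c = divmod(k, M)
        symbol[r*SUB:(r+1)*SUB, c*SUB:(c+1)*SUB] = rasterize(p)
    return motion_of(symbol)                      # action 2008

frames = [np.random.rand(64, 64) for _ in range(16)]
print(recognize_motion(frames))
```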
In another embodiment, a second example process 2020 of recognizing motions of an object is substantially similar to process 2000, except that detailed images of the object, instead of text categories, are embedded as the corresponding ideograms.
Finally, at action 2028, a particular motion of the object of interest is recognized by obtaining the meaning of the “super-character” with image classification via a trained convolutional neural networks model for various motions of the object derived from a specific sequential combination of respective detailed images.
In yet another embodiment, a third example process 2040 of recognizing motions of an object uses reduced-size images of the selected frames as ideograms.
Next, at action 2046, a “super-character” is formed by embedding respective reduced-size images of the plurality of frames as corresponding ideograms. Instead of text categories, reduced-size images are used as ideograms in the third embodiment.
Finally, at action 2048, a particular motion of the object of interest is recognized by obtaining the meaning of the “super-character” with image classification via a trained convolutional neural networks model for various motions of the object derived from a specific sequential combination of respective reduced-size images.
Although the invention has been described with reference to specific embodiments thereof, these embodiments are merely illustrative, and not restrictive of, the invention. Various modifications or changes to the specifically disclosed example embodiments will be suggested to persons skilled in the art. For example, whereas three hand gestures have been described and shown as an object's poses, other poses may be used for demonstrating the invention, for example, the opening and closing of an eye, movements of lips, or motions of a vehicle. Additionally, whereas text categories are shown and described as English language categories, other natural languages may be used for achieving the same, for example, Chinese, Japanese, Korean or another written language. Moreover, whereas the two-dimensional symbol has been described and shown with a specific example of a matrix of 224×224 pixels, other sizes may be used for achieving substantially similar objectives of the invention. Additionally, whereas two example partition schemes have been described and shown, other suitable partition schemes for dividing the two-dimensional symbol may be used for achieving substantially similar objectives of the invention. Moreover, whereas a few example ideograms have been shown and described, other ideograms may be used for achieving substantially similar objectives of the invention. Furthermore, whereas Chinese, Japanese and Korean logosyllabic characters have been described and shown as ideograms, other logosyllabic characters can be represented, for example, Egyptian hieroglyphs, Cuneiform scripts, etc. Finally, whereas one type of bi-valued 3×3 filter kernel has been shown and described, other types may be used for accomplishing substantially similar objectives of the invention. In summary, the scope of the invention should not be restricted to the specific example embodiments disclosed herein, and all modifications that are readily suggested to those of ordinary skill in the art should be included within the spirit and purview of this application and scope of the appended claims.
This application is a continuation application to a co-pending U.S. patent application Ser. No. 15/861,596, filed on Jan. 3, 2018, which is a continuation-in-part (CIP) to a U.S. patent application Ser. No. 15/709,220 for “Natural Language Processing Using A CNN Based Integrated Circuit” filed on Sep. 19, 2017 (now U.S. Pat. No. 10,083,171 issued Sep. 25, 2018), which is a CIP to a U.S. patent application Ser. No. 15/694,711 for “Natural Language Processing Via A Two-dimensional Symbol Having Multiple Ideograms Contained Therein” filed on Sep. 1, 2017 (now U.S. Pat. No. 10,102,453 issued Oct. 16, 2018), which is a CIP to a co-pending U.S. patent application Ser. No. 15/683,723 for “Two-dimensional Symbols For Facilitating Machine Learning Of Combined Meaning Of Multiple Ideograms Contained Therein” filed on Aug. 22, 2017, which claims priority from a U.S. Provisional Patent Application Ser. No. 62/541,081, entitled “Two-dimensional Symbol For Facilitating Machine Learning Of Natural Languages Having Logosyllabic Characters” filed on Aug. 3, 2017. All of which are hereby incorporated by reference in their entirety for all purposes.