Distributed Pattern Processor Package

Abstract
A distributed pattern processor package comprises a plurality of storage-processing units (SPU's). Each of the SPU's comprises at least a non-volatile memory (NVM) array and a pattern-processing circuit. The preferred processor package further comprises at least a memory die and a logic die. The NVM arrays are disposed on the memory die, whereas the pattern-processing circuits are disposed on the logic die. The memory and logic dice are communicatively coupled by a plurality of inter-die connections.
Description
BACKGROUND
1. Technical Field of the Invention

The present invention relates to the field of integrated circuits, and more particularly to a pattern processor.


2. Prior Art

Pattern processing includes pattern matching and pattern recognition, which are the acts of searching a target pattern (i.e. the pattern to be searched) for the presence of the constituents or variants of a search pattern (i.e. the pattern used for searching). The match usually has to be “exact” for pattern matching, whereas it could be “likely to a certain degree” for pattern recognition. As used hereinafter, search patterns and target patterns are collectively referred to as patterns; a pattern database refers to a database containing related patterns. Pattern databases include search-pattern databases (also known as search-pattern libraries) and target-pattern databases.
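The distinction above between an "exact" match and a match "likely to a certain degree" can be sketched in software. The following Python sketch is purely illustrative (the similarity scoring via `difflib.SequenceMatcher` and the 0.8 threshold are assumptions, not part of the invention):

```python
from difflib import SequenceMatcher

def exact_match(search_pattern: str, target: str) -> bool:
    """Pattern matching: the search pattern must appear verbatim in the target."""
    return search_pattern in target

def likely_match(search_pattern: str, target: str, threshold: float = 0.8) -> bool:
    """Pattern recognition: a match 'likely to a certain degree' -- here scored
    with a similarity ratio against every equally sized window of the target."""
    n = len(search_pattern)
    best = max(
        (SequenceMatcher(None, search_pattern, target[i:i + n]).ratio()
         for i in range(max(1, len(target) - n + 1))),
        default=0.0,
    )
    return best >= threshold

# "virus" appears verbatim, so pattern matching succeeds.
assert exact_match("virus", "antivirus scanner")
# "viru5" differs by one character: only the approximate test succeeds.
assert not exact_match("viru5", "antivirus scanner")
assert likely_match("viru5", "antivirus scanner")
```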


Pattern processing has broad applications. Typical pattern processing includes code matching, string matching, speech recognition and image recognition. Code matching is widely used in information security. Its operations include searching for a virus in a network packet or a computer file, or checking whether a network packet or a computer file conforms to a set of rules. String matching, also known as keyword search, is widely used in big-data analytics. Its operations include regular-expression matching. Speech recognition identifies from the audio data the nearest acoustic/language model in an acoustic/language model library. Image recognition identifies from the image data the nearest image model in an image model library.
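Keyword search and regular-expression matching, as used in code matching and big-data analytics, can be sketched as follows. The keyword list and the SQL-like rule are hypothetical examples chosen only for illustration:

```python
import re

# Hypothetical keyword library and rule set for illustration.
keywords = ["overflow", "injection"]
rules = [re.compile(r"SELECT\s+.*\s+FROM", re.IGNORECASE)]  # a regular-expression rule

def scan_text(text: str) -> list:
    """String matching (keyword search) plus regular-expression matching."""
    hits = [kw for kw in keywords if kw in text]          # keyword search
    hits += [r.pattern for r in rules if r.search(text)]  # regex matching
    return hits

# The regex rule fires on an embedded SQL-like fragment.
assert scan_text("a SELECT name FROM users query") == [r"SELECT\s+.*\s+FROM"]
# Plain keyword search fires on a literal substring.
assert scan_text("buffer overflow attack") == ["overflow"]
```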


The pattern database has become large: the search-pattern library (including related search patterns, e.g. a virus library, a keyword library, an acoustic/language model library, an image model library) is already big; while the target-pattern database (including related target patterns, e.g. computer files on a whole disk drive, a big-data database, an audio archive, an image archive) is even bigger. The conventional processor and its associated von Neumann architecture have great difficulty performing fast pattern processing on large pattern databases.


OBJECTS AND ADVANTAGES

It is a principal object of the present invention to improve the speed and efficiency of pattern processing on large pattern databases.


It is a further object of the present invention to enhance information security.


It is a further object of the present invention to improve the speed and efficiency of big-data analytics.


It is a further object of the present invention to improve the speed and efficiency of speech recognition, as well as enable audio search in an audio archive.


It is a further object of the present invention to improve the speed and efficiency of image recognition, as well as enable video search in a video archive.


In accordance with these and other objects of the present invention, the present invention discloses a distributed pattern processor package.


SUMMARY OF THE INVENTION

The present invention discloses a distributed pattern processor package. Its basic functionality is pattern processing. More importantly, the patterns it processes are stored locally. The preferred pattern processor comprises a plurality of storage-processing units (SPU's). Each of the SPU's comprises a pattern-storage circuit including at least a non-volatile memory (NVM) array for permanently storing at least a portion of a pattern and a pattern-processing circuit for performing pattern processing for the pattern. The preferred pattern processor package comprises at least a memory die and a logic die. The NVM arrays are disposed on the memory die, while the pattern-processing circuits are disposed on the logic die. The memory and logic dice are vertically stacked and communicatively coupled by a plurality of inter-die connections.


The type of integration between the pattern-storage die and the pattern-processing die is referred to as 2.5-D integration. The 2.5-D integration offers many advantages over the conventional 2-D integration, where the pattern-storage circuit and the processing circuit are placed side-by-side on the substrate of a processor die.


First, for the 2.5-D integration, the footprint of the SPU is the larger of the footprints of the pattern-storage circuit and the pattern-processing circuit. In contrast, for the 2-D integration, the footprint of a conventional processor is the sum of the footprints of the pattern-storage circuit and the pattern-processing circuit. Hence, the SPU of the present invention is smaller. With a smaller SPU, the preferred pattern processor package comprises a larger number of SPU's, typically on the order of thousands. Because all SPU's can perform pattern processing simultaneously, the preferred distributed pattern processor package supports massive parallelism.


Moreover, for the 2.5-D integration, the pattern-storage circuit is in close proximity to the pattern-processing circuit. Because the micro-bumps, through-silicon vias (TSV's) and vertical interconnect accesses (VIA's) (referring to FIGS. 2B-2D) are short (tens to hundreds of microns) and numerous (e.g. thousands), fast inter-die connections can be achieved. In comparison, for the 2-D integration, the pattern-storage circuit is distant from the pattern-processing circuit. Because the wires coupling them are long (hundreds of microns to millimeters) and few (e.g. 64-bit), it takes a longer time for the pattern-processing circuit to fetch pattern data from the pattern-storage circuit.


A NVM-based pattern processor has substantial advantages over a prior-art RAM-based pattern processor. A non-volatile memory (NVM) does not lose information stored therein when power goes off, whereas a random-access memory (RAM) loses information stored therein when power goes off. For the RAM-based pattern processor, patterns (e.g. rules, keywords) have to be loaded into the RAM before usage. This loading process takes time and therefore, the system boot-up time is long. On the other hand, for the NVM-based pattern processor, because patterns are permanently stored in a same package as the pattern-processing circuit, they do not have to be fetched from an external storage before usage. Patterns (e.g. rules, keywords) can be directly read out from the pattern-storage circuit 170 and used by the pattern-processing circuit 180, both of which are located in the same package. Consequently, the NVM-based pattern processor achieves faster system boot-up.


Accordingly, the present invention discloses a distributed pattern processor package, comprising: an input for transferring a first portion of a first pattern; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a pattern-processing circuit, wherein said NVM array stores at least a second portion of a second pattern, said pattern-processing circuit performs pattern processing for said first and second patterns; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said pattern-processing circuit is disposed on said logic die, said NVM array and said pattern-processing circuit are communicatively coupled by a plurality of inter-die connections.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A is a circuit block diagram of a preferred distributed pattern processor package; FIG. 1B is a circuit block diagram of a preferred storage-processing unit (SPU);



FIGS. 2A-2D are cross-sectional views of four preferred distributed pattern processor packages;



FIGS. 3A-3C are circuit block diagrams of three preferred SPU's;



FIGS. 4A-4C are circuit layout views of three preferred SPU's on the logic die.





It should be noted that all the drawings are schematic and not drawn to scale. Relative dimensions and proportions of parts of the device structures in the figures have been shown exaggerated or reduced in size for the sake of clarity and convenience in the drawings. The same reference symbols are generally used to refer to corresponding or similar features in the different embodiments.


As used hereinafter, the symbol “/” means the relationship of “and” or “or”. The phrase “memory” is used in its broadest sense to mean any semiconductor device, which can store information for short term or long term. The phrase “memory array” is used in its broadest sense to mean a collection of all memory cells sharing at least an address line. The phrase “permanently” is used in its broadest sense to mean long-term data storage. The phrase “communicatively coupled” is used in its broadest sense to mean any coupling whereby electrical signals may be passed from one element to another element. The phrase “pattern” could refer to either a pattern per se, or the data related to a pattern; the present invention does not differentiate them.


DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Those of ordinary skill in the art will realize that the following description of the present invention is illustrative only and is not intended to be in any way limiting. Other embodiments of the invention will readily suggest themselves to such skilled persons from an examination of the within disclosure.


The present invention discloses a distributed pattern processor package. Its basic functionality is pattern processing. More importantly, the patterns it processes are stored locally. The preferred pattern processor comprises a plurality of storage-processing units (SPU's). Each of the SPU's comprises a pattern-storage circuit including at least a memory array for storing at least a portion of a pattern and a pattern-processing circuit for performing pattern processing for the pattern. The preferred pattern processor package comprises at least a pattern-storage die (also known as a memory die) and a pattern-processing die (also known as a logic die). They are vertically stacked and communicatively coupled by a plurality of inter-die connections.


Referring now to FIGS. 1A-1B, an overview of a preferred distributed pattern processor package 100 is disclosed. FIG. 1A is its circuit block diagram. The preferred distributed pattern processor package 100 not only processes patterns, but also stores patterns. It comprises an array with m rows and n columns (m×n) of storage-processing units (SPU's) 100aa-100mn. Using the SPU 100ij as an example, it has an input 110 and an output 120. In general, the preferred distributed pattern processor package 100 comprises thousands of SPU's 100aa-100mn and therefore, supports massive parallelism.
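The m×n SPU organization can be modeled in software: the input is broadcast to every SPU, and each SPU searches only the patterns it stores locally. The following Python sketch is a behavioral model only (the `SPU` class, the thread pool, and the 2×2 array are illustrative assumptions, not the circuit itself):

```python
from concurrent.futures import ThreadPoolExecutor

class SPU:
    """Behavioral model of one storage-processing unit: a local pattern
    store plus a matching routine that works only on local data."""
    def __init__(self, local_patterns):
        self.local_patterns = list(local_patterns)  # held in the local NVM array

    def process(self, target: str):
        # Each SPU searches only the patterns stored in its own memory array.
        return [p for p in self.local_patterns if p in target]

def broadcast_search(spus, target: str):
    """The input is broadcast to all SPU's, which process it in parallel."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: s.process(target), spus))
    return [hit for r in results for hit in r]

# A 2x2 array for illustration (the package would hold thousands of SPU's).
spus = [SPU(["abc"]), SPU(["def"]), SPU(["ghi"]), SPU(["bcd"])]
assert sorted(broadcast_search(spus, "abcdef")) == ["abc", "bcd", "def"]
```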



FIG. 1B is a circuit block diagram of a preferred SPU 100ij. The SPU 100ij comprises a pattern-storage circuit 170 and a pattern-processing circuit 180, which are communicatively coupled by inter-die connections 160. The pattern-storage circuit 170 comprises at least a memory array for storing patterns, whereas the pattern-processing circuit 180 processes these patterns. The memory array 170 is a non-volatile memory (NVM) array. The NVM, also known as read-only memory (ROM), could be a mask-ROM, an OTP, an EPROM, an EEPROM, a flash memory or a 3-D memory (3D-M). Because it is disposed on a different die than the pattern-processing circuit 180, the memory array 170 is drawn with dashed lines.


A NVM-based pattern processor has substantial advantages over a prior-art RAM-based pattern processor. A non-volatile memory (NVM) does not lose information stored therein when power goes off, whereas a random-access memory (RAM) loses information stored therein when power goes off. For the RAM-based pattern processor, patterns (e.g. rules, keywords) have to be loaded into the RAM before usage. This loading process takes time and therefore, the system boot-up time is long. On the other hand, for the NVM-based pattern processor, because patterns are permanently stored in a same package as the pattern-processing circuit, they do not have to be fetched from an external storage before usage. Patterns (e.g. rules, keywords) can be directly read out from the pattern-storage circuit 170 and used by the pattern-processing circuit 180, both of which are located in the same package. Consequently, the NVM-based pattern processor achieves faster system boot-up.


Referring now to FIGS. 2A-2D, four preferred distributed pattern processor packages 100 are shown with focus on the implementations of inter-die connections 160. The preferred distributed pattern processor package 100 comprises at least a memory die 100a (also known as a pattern-storage die) and a logic die 100b (also known as a pattern-processing die), with the memory die 100a comprising the pattern-storage circuit 170 and the logic die 100b comprising the pattern-processing circuits 180.


In FIG. 2A, the memory and logic dice 100a, 100b are vertically stacked, i.e. stacked along the direction perpendicular to the dice 100a, 100b. Both the memory and logic dice 100a, 100b face upward (i.e. along the +z direction). They are communicatively coupled through the bond wires 160w, which realize the inter-die connections 160.


In FIG. 2B, the memory and logic dice 100a, 100b are placed face-to-face, i.e. the memory die 100a faces upward (i.e. along the +z direction), while the logic die 100b is flipped so that it faces downward (i.e. along the −z direction). They are communicatively coupled by the micro-bumps 160x, which realize the inter-die connections 160.


The preferred embodiment of FIG. 2C comprises two memory dice 100a1, 100a2 and a logic die 100b. Each of the memory dice 100a1, 100a2 comprises a plurality of memory arrays 170. The memory dice 100a1, 100a2 are vertically stacked and communicatively coupled by the through-silicon vias (TSV's) 160y. The stack of the memory dice 100a1, 100a2 is communicatively coupled with the logic die 100b by the micro-bumps 160x. The TSV's 160y and the micro-bumps 160x realize the inter-die connections 160.


In FIG. 2D, a first dielectric layer 168a is deposited on top of the memory die 100a and first vias 160za are etched in the first dielectric layer 168a. Then a second dielectric layer 168b is deposited on top of the logic die 100b and second vias 160zb are etched in the second dielectric layer 168b. After flipping the logic die 100b and aligning the first and second vias 160za, 160zb, the memory and logic dice 100a, 100b are bonded. Finally, the memory and logic dice 100a, 100b are communicatively coupled by the contacted first and second vias 160za, 160zb, which realize the inter-die connections 160. Because they can be made with the standard manufacturing process, the first and second vias 160za, 160zb are small and numerous. As a result, the inter-die connections 160 have a large bandwidth. In this preferred embodiment, the first and second vias 160za, 160zb are collectively referred to as vertical interconnect accesses (VIA's).


In the preferred embodiments of FIGS. 2A-2D, the pattern-storage circuit 170 and the pattern-processing circuit 180 are disposed in a same package 100. This type of integration is referred to as 2.5-D integration. The 2.5-D integration offers many advantages over the conventional 2-D integration, where the pattern-storage circuit and the processing circuit are placed side-by-side on a semiconductor substrate.


First, for the 2.5-D integration, the footprint of the SPU 100ij is the larger of the footprints of the pattern-storage circuit 170 and the pattern-processing circuit 180. In contrast, for the 2-D integration, the footprint of a conventional processor is the sum of the footprints of the pattern-storage circuit and the pattern-processing circuit. Hence, the SPU 100ij of the present invention is smaller. With a smaller SPU 100ij, the preferred pattern processor 100 comprises a larger number of SPU's, typically on the order of thousands. Because all SPU's can perform pattern processing simultaneously, the preferred distributed pattern processor package 100 supports massive parallelism.


Moreover, for the 2.5-D integration, the pattern-storage circuit 170 is in close proximity to the pattern-processing circuit 180. Because the micro-bumps, TSV's and VIA's are short (tens to hundreds of microns) and numerous (e.g. thousands), fast inter-die connections 160 can be achieved. In comparison, for the 2-D integration, the pattern-storage circuit is distant from the pattern-processing circuit. Because the wires coupling them are long (hundreds of microns to millimeters) and few (e.g. 64-bit), it takes a longer time for the pattern-processing circuit to fetch pattern data from the pattern-storage circuit.


Referring now to FIGS. 3A-4C, three preferred SPU's 100ij are shown. FIGS. 3A-3C are their circuit block diagrams and FIGS. 4A-4C are their circuit layout views. In these preferred embodiments, a pattern-processing circuit 180ij serves a different number of memory arrays 170ij.


In FIG. 3A, the pattern-processing circuit 180ij serves one memory array 170ij, i.e. it processes the patterns stored in the memory array 170ij. In FIG. 3B, the pattern-processing circuit 180ij serves four memory arrays 170ijA-170ijD, i.e. it processes the patterns stored in the memory arrays 170ijA-170ijD. In FIG. 3C, the pattern-processing circuit 180ij serves eight memory arrays 170ijA-170ijD, 170ijW-170ijZ, i.e. it processes the patterns stored in the memory arrays 170ijA-170ijD, 170ijW-170ijZ. As will become apparent in FIGS. 4A-4C, the more memory arrays it serves, the larger area and the more functionalities the pattern-processing circuit 180ij will have. In FIGS. 3A-4C, because they are located on a different die than the pattern-processing circuit 180ij (referring to FIGS. 2A-2D), the memory arrays 170ij-170ijZ are drawn with dashed lines.



FIGS. 4A-4C disclose the circuit layouts of the logic die 100b, as well as the projections of the memory arrays 170ij-170ijZ (physically located on the memory die 100a) onto the logic die 100b (drawn with dashed lines). The embodiment of FIG. 4A corresponds to that of FIG. 3A. In this preferred embodiment, the pattern-processing circuit 180ij is disposed on the logic die 100b. It is at least partially covered by the memory array 170ij.


In this preferred embodiment, the pitch of the pattern-processing circuit 180ij is equal to the pitch of the memory array 170ij. Because its area is smaller than the footprint of the memory array 170ij, the pattern-processing circuit 180ij has limited functionalities. FIGS. 4B-4C disclose two more complex pattern-processing circuits 180ij.


The embodiment of FIG. 4B corresponds to that of FIG. 3B. In this preferred embodiment, the pattern-processing circuit 180ij is disposed on the logic die 100b. It is at least partially covered by the memory arrays 170ijA-170ijD. Below the four memory arrays 170ijA-170ijD, the pattern-processing circuit 180ij can be laid out freely. Because the pitch of the pattern-processing circuit 180ij is twice the pitch of the memory arrays 170ij, the area of the pattern-processing circuit 180ij is four times the footprint of a single memory array 170ij and therefore, it has more complex functionalities.


The embodiment of FIG. 4C corresponds to that of FIG. 3C. In this preferred embodiment, the pattern-processing circuit 180ij is disposed on the logic die 100b. The memory arrays 170ijA-170ijD, 170ijW-170ijZ are divided into two sets: a first set 170ijSA includes four memory arrays 170ijA-170ijD, and a second set 170ijSB includes four memory arrays 170ijW-170ijZ. Below the four memory arrays 170ijA-170ijD of the first set 170ijSA, a first component 180ijA of the pattern-processing circuit 180ij can be laid out freely. Similarly, below the four memory arrays 170ijW-170ijZ of the second set 170ijSB, a second component 180ijB of the pattern-processing circuit 180ij can be laid out freely. The first and second components 180ijA, 180ijB collectively form the pattern-processing circuit 180ij. The routing channels 182, 184, 186 are formed to provide coupling between the different components 180ijA, 180ijB, or between different pattern-processing circuits. Because the pitch of the pattern-processing circuit 180ij is four times the pitch of the memory arrays 170ij (along the x direction), the area of the pattern-processing circuit 180ij is eight times the footprint of a single memory array 170ij and therefore, it has even more complex functionalities.


The preferred distributed pattern processor package 100 can be either processor-like or storage-like. The processor-like pattern processor 100 acts like a processor package with an embedded search-pattern library. It searches a target pattern from the input 110 against the search-pattern library. To be more specific, the memory array 170 stores at least a portion of the search-pattern library (e.g. a virus library, a keyword library, an acoustic/language model library, an image model library); the input 110 includes a target pattern (e.g. a network packet, a computer file, audio data, or image data); the pattern-processing circuit 180 performs pattern processing on the target pattern with the search pattern. Because a large number of the SPU's 100ij (thousands, referring to FIG. 1A) support massive parallelism and the inter-die connections 160 have a large bandwidth (referring to FIGS. 2B-2D), the preferred processor package with an embedded search-pattern library can achieve fast and efficient search.
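The processor-like mode described above amounts to distributing the search-pattern library across the SPU's and broadcasting each target to all of them. A minimal Python sketch under that reading (the round-robin partitioning policy and the toy three-entry library are illustrative assumptions):

```python
def partition_library(library, num_spus):
    """Distribute a search-pattern library round-robin across SPU stores,
    so each local memory array holds a portion of the library."""
    stores = [[] for _ in range(num_spus)]
    for i, pattern in enumerate(library):
        stores[i % num_spus].append(pattern)
    return stores

def processor_like_search(stores, target):
    """Processor-like mode: the target pattern from the input is checked
    against every SPU's local portion of the library."""
    return [p for store in stores for p in store if p in target]

library = ["EICAR", "worm", "trojan"]   # a toy search-pattern library
stores = partition_library(library, 2)
assert stores == [["EICAR", "trojan"], ["worm"]]
assert processor_like_search(stores, "found a trojan payload") == ["trojan"]
```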


Accordingly, the present invention discloses a processor package with an embedded search-pattern library, comprising: an input for transferring at least a portion of a target pattern; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a pattern-processing circuit, wherein said NVM array stores at least a portion of a search pattern, said pattern-processing circuit performs pattern processing on said target pattern with said search pattern; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said pattern-processing circuit is disposed on said logic die, said NVM array and said pattern-processing circuit are communicatively coupled by a plurality of inter-die connections.


The storage-like pattern processor 100 acts like a storage package with in-situ pattern-processing capabilities. Its primary purpose is to store a target-pattern database, with a secondary purpose of searching the stored target-pattern database for a search pattern from the input 110. To be more specific, a target-pattern database (e.g. computer files on a whole disk drive, a big-data database, an audio archive, an image archive) is stored and distributed in the memory arrays 170; the input 110 includes at least a search pattern (e.g. a virus signature, a keyword, a model); the pattern-processing circuit 180 performs pattern processing on the target pattern with the search pattern. Because a large number of the SPU's 100ij (thousands, referring to FIG. 1A) support massive parallelism and the inter-die connections 160 have a large bandwidth (referring to FIGS. 2B-2D), the preferred storage package can achieve a fast speed and a good efficiency.


Like the flash memory, a large number of the preferred storage packages 100 can be packaged into a storage card (e.g. an SD card, a TF card) or a solid-state drive (i.e. SSD). These storage cards or SSDs can be used to store massive data in the target-pattern database. More importantly, they have in-situ pattern-processing (e.g. searching) capabilities. Because each SPU 100ij has its own pattern-processing circuit 180, it only needs to search the data stored in the local memory array 170 (i.e. in the same SPU 100ij). As a result, no matter how large the capacity of the storage card or the SSD is, the processing time for the whole storage card or the whole SSD is similar to that for a single SPU 100ij. In other words, the search time for a database is independent of its size, mostly within seconds.
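The scaling argument above can be made concrete with a back-of-the-envelope model: each SPU scans only its local shard, so wall-clock time tracks the shard size, not the database size. The throughput figures below are arbitrary placeholders, not measured values:

```python
def parallel_scan_time(total_bytes, num_spus, bytes_per_second_per_spu):
    """All SPU's scan their local shards simultaneously, so the wall-clock
    time is the time for one shard -- not for the whole database."""
    shard = total_bytes / num_spus
    return shard / bytes_per_second_per_spu

def von_neumann_scan_time(total_bytes, bus_bytes_per_second):
    """A conventional processor must first read the whole database over a
    shared bus, so scan time grows linearly with database size."""
    return total_bytes / bus_bytes_per_second

# Doubling the database (and, with it, the number of SPU's holding it)
# leaves the distributed scan time unchanged at one shard's worth of work.
t1 = parallel_scan_time(1_000_000_000, 1000, 1_000_000)
t2 = parallel_scan_time(2_000_000_000, 2000, 1_000_000)
assert t1 == t2 == 1.0
# The von Neumann scan time doubles along with the database.
assert von_neumann_scan_time(2_000_000_000, 100_000_000) == 20.0
```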


In comparison, for the conventional von Neumann architecture, the processor (e.g. CPU) and the storage (e.g. HDD) are physically separated. During search, data need to be read out from the storage first. Because of the limited bandwidth between the CPU and the HDD, the search time for a database is limited by the read-out time of the database. As a result, the search time for the database is proportional to its size. In general, the search time ranges from minutes to hours, or even longer, depending on the size of the database. Clearly, the preferred storage package 100 with in-situ pattern-processing capabilities has great advantages in database search.


When the preferred storage package 100 performs pattern processing for a large database (i.e. target-pattern database), the pattern-processing circuit 180 could just perform partial pattern processing. For example, the pattern-processing circuit 180 only performs a preliminary pattern processing (e.g. code matching, or string matching) on the database. After being filtered by this preliminary pattern-processing step, the remaining data from the database are sent through the output 120 to an external processor (e.g. CPU, GPU) to complete the full pattern processing. Because most data are filtered out by this preliminary pattern-processing step, the data output from the preferred storage package 100 are a small fraction of the whole database. This can substantially alleviate the bandwidth requirement on the output 120.
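The two-stage flow just described — a cheap preliminary filter inside the package, a full check on an external processor — can be sketched as follows. The whole-word check standing in for "full pattern processing" and the toy records are illustrative assumptions:

```python
def preliminary_filter(records, keyword):
    """Stage 1, inside the storage package: cheap string matching keeps only
    candidate records, so the output carries a small fraction of the database."""
    return [r for r in records if keyword in r]

def full_pattern_processing(candidates, keyword):
    """Stage 2, on an external processor (e.g. CPU/GPU): a more expensive
    check -- here, a stand-in requiring the keyword as a whole word."""
    return [r for r in candidates if keyword in r.split()]

database = ["malware sample", "normal log", "malwarelike name", "a malware hit"]
candidates = preliminary_filter(database, "malware")
# The preliminary step already discards most non-matching records.
assert candidates == ["malware sample", "malwarelike name", "a malware hit"]
# The external processor completes the full check on the remainder.
assert full_pattern_processing(candidates, "malware") == ["malware sample", "a malware hit"]
```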


Accordingly, the present invention discloses a storage package with in-situ pattern-processing capabilities, comprising: an input for transferring at least a portion of a search pattern; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a pattern-processing circuit, wherein said NVM array stores at least a portion of a target pattern, said pattern-processing circuit performs pattern processing on said target pattern with said search pattern; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said pattern-processing circuit is disposed on said logic die, said NVM array and said pattern-processing circuit are communicatively coupled by a plurality of inter-die connections.


In the following paragraphs, applications of the preferred distributed pattern processor package 100 are described. The fields of applications include: A) information security; B) big-data analytics; C) speech recognition; and D) image recognition. Examples of the applications include: a) information-security processor; b) anti-virus storage; c) data-analysis processor; d) searchable storage; e) speech-recognition processor; f) searchable audio storage; g) image-recognition processor; h) searchable image storage.


A) Information Security


Information security includes network security and computer security. To enhance network security, network packets need to be scanned for viruses. Similarly, to enhance computer security, computer files (including computer software) need to be scanned for viruses. Generally speaking, viruses (also known as malware) include network viruses, computer viruses, software that violates network rules, documents that violate document rules, and others. During virus scan, a network packet or a computer file is compared against the virus patterns (also known as virus signatures) in a virus library. Once a match is found, the portion of the network packet or the computer file which contains the virus is quarantined or removed.


Nowadays, the virus library has become large: it has reached hundreds of MB. The computer data that require virus scan are even larger, typically on the order of GBs or TBs, or even bigger. Meanwhile, each processor core in a conventional processor can typically check a single virus pattern at a time. With a limited number of cores (e.g. a CPU contains tens of cores; a GPU contains hundreds of cores), the conventional processor can achieve only limited parallelism for virus scan. Furthermore, because the processor is physically separated from the storage in the von Neumann architecture, it takes a long time to fetch new virus patterns. As a result, the conventional processor and its associated architecture have a poor performance for information security.


To enhance information security, the present invention discloses several distributed pattern processor packages 100. Each could be processor-like or storage-like. When processor-like, the preferred distributed pattern processor package 100 is an information-security processor, i.e. a processor for enhancing information security; when storage-like, the preferred distributed pattern processor package 100 is an anti-virus storage, i.e. a storage with in-situ anti-virus capabilities.


a) Information-Security Processor


To enhance information security, the present invention discloses an information-security processor 100. It searches a network packet or a computer file for various virus patterns in a virus library. If there is a match with a virus pattern, the network packet or the computer file contains the virus. The preferred information-security processor 100 can be installed as a standalone processor in a network or a computer; or, integrated into a network processor, a computer processor, or a computer storage.


In the preferred information-security processor 100, the memory arrays 170 in different SPU's 100ij store different virus patterns. In other words, the virus library is stored and distributed in the SPU's 100ij of the preferred information-security processor 100. Once a network packet or a computer file is received at the input 110, at least a portion thereof is sent to all SPU's 100ij. In each SPU 100ij, the pattern-processing circuit 180 compares said portion of data against the virus patterns stored in the local memory array 170. If there is a match with a virus pattern, the network packet or the computer file contains the virus.


The above virus-scan operations are carried out by all SPU's 100ij at the same time. Because it comprises a large number of SPU's 100ij (e.g. thousands), the preferred information-security processor 100 achieves massive parallelism for virus scan. Furthermore, because the inter-die connections 160 are numerous and the pattern-processing circuit 180 is physically close to the memory arrays 170 (compared with the conventional von Neumann architecture), the pattern-processing circuit 180 can easily fetch new virus patterns from the local memory array 170. As a result, the preferred information-security processor 100 can perform fast and efficient virus scan. In this preferred embodiment, the pattern-processing circuit 180 is a code-matching circuit.
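The code-matching behavior of each SPU can be modeled as a byte-signature search over the incoming packet, with the packet broadcast to every SPU's local signature store. The two four-byte signatures below are hypothetical placeholders, not real virus signatures:

```python
def spu_code_match(local_signatures, packet: bytes):
    """Model of one SPU's code-matching circuit: compare the incoming
    packet against the virus signatures in the local memory array."""
    return [sig for sig in local_signatures if sig in packet]

def scan_packet(spu_stores, packet: bytes):
    """The packet from the input is sent to all SPU's; any hit means the
    packet contains a known virus."""
    return [hit for store in spu_stores for hit in spu_code_match(store, packet)]

# Hypothetical two-SPU library of byte signatures for illustration.
stores = [[b"\xde\xad\xbe\xef"], [b"\x90\x90\x90\x90"]]
assert scan_packet(stores, b"header \xde\xad\xbe\xef body") == [b"\xde\xad\xbe\xef"]
assert scan_packet(stores, b"clean payload") == []
```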


Accordingly, the present invention discloses an information-security processor package, comprising: an input for transferring at least a portion of data from at least a network packet or a computer file; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a code-matching circuit, wherein said NVM array stores at least a portion of a virus pattern, said code-matching circuit searches said virus pattern in said portion of data; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said code-matching circuit is disposed on said logic die, said NVM array and said code-matching circuit are communicatively coupled by a plurality of inter-die connections.


b) Anti-Virus Storage


Whenever a new virus is discovered, the whole disk drive (e.g. hard-disk drive, solid-state drive) of the computer needs to be scanned against the new virus. This full-disk scan process is challenging to the conventional von Neumann architecture. Because a disk drive could store massive data, it takes a long time just to read out all data, let alone scan them for viruses. For the conventional von Neumann architecture, the full-disk scan time is proportional to the capacity of the disk drive.


To shorten the full-disk scan time, the present invention discloses an anti-virus storage. Its primary function is computer storage, with in-situ virus-scanning capability as its secondary function. Like flash memory, a large number of the preferred anti-virus storages 100 can be packaged into a storage card or a solid-state drive for storing massive data with in-situ virus-scanning capabilities.


In the preferred anti-virus storage 100, the memory arrays 170 in different SPU's 100ij store different data. In other words, massive computer files are stored and distributed in the SPU's 100ij of the storage card or the solid-state drive. Once a new virus is discovered and a full-disk scan is required, the pattern of the new virus is sent as input 110 to all SPU's 100ij, where the pattern-processing circuit 180 compares the data stored in the local memory array 170 against the new virus pattern.


The above virus-scan operations are carried out by all SPU's 100ij at the same time, and the virus-scan time for each SPU 100ij is similar. Because of the massive parallelism, no matter how large the capacity of the storage card or the solid-state drive is, the virus-scan time for the whole storage card or the whole solid-state drive is more or less a constant, close to the virus-scan time for a single SPU 100ij and generally within seconds. On the other hand, the conventional full-disk scan takes minutes to hours, or even longer. In this preferred embodiment, the pattern-processing circuit 180 is a code-matching circuit.
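The constant scan-time property can be illustrated with a toy timing model. The numbers and the function name below are illustrative assumptions, not measurements of any actual device:

```python
def full_disk_scan_time(capacity_gb, spu_capacity_gb, per_gb_scan_s):
    """Toy timing model (illustrative numbers, not measured values).

    Von Neumann: a single processor streams the whole disk, so scan time
    grows linearly with capacity.  Distributed: every SPU scans its own
    shard simultaneously, so wall-clock time equals the per-SPU time,
    which is independent of total capacity.
    """
    von_neumann = capacity_gb * per_gb_scan_s
    distributed = spu_capacity_gb * per_gb_scan_s  # independent of capacity_gb
    return von_neumann, distributed

for cap in (64, 512, 4096):
    vn, dist = full_disk_scan_time(cap, spu_capacity_gb=0.25, per_gb_scan_s=2.0)
    print(f"{cap:5d} GB  conventional {vn:8.0f} s   distributed {dist:4.1f} s")
```

The distributed column stays flat as capacity grows, which is the claimed advantage over the capacity-proportional conventional scan.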


Accordingly, the present invention discloses an anti-virus storage package, comprising: an input for transferring at least a portion of a virus pattern; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a code-matching circuit, wherein said NVM array stores at least a portion of data from a computer file, said code-matching circuit searches said virus pattern in said portion of data; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said code-matching circuit is disposed on said logic die, said NVM array and said code-matching circuit are communicatively coupled by a plurality of inter-die connections.


B) Big-Data Analytics


Big data is a term for a large collection of data, with a main focus on unstructured and semi-structured data. An important aspect of big-data analytics is keyword search (including string matching, e.g. regular-expression matching). At present, keyword libraries have become large, while big-data databases are even larger. For such large keyword libraries and big-data databases, the conventional processor and its associated architecture can hardly perform fast and efficient keyword search on unstructured or semi-structured data.


To improve the speed and efficiency of big-data analytics, the present invention discloses several distributed pattern processor packages 100. They could be processor-like or storage-like. The processor-like preferred distributed pattern processor package 100 is a data-analysis processor, i.e. a processor for performing analysis on big data; the storage-like preferred distributed pattern processor package 100 is a searchable storage, i.e. a storage with in-situ searching capabilities.


c) Data-Analysis Processor


To perform fast and efficient search on the input data, the present invention discloses a data-analysis processor 100. It searches the input data for the keywords in a keyword library. In the preferred data-analysis processor 100, the memory arrays 170 in different SPU's 100ij store different keywords. In other words, the keyword library is stored and distributed in the SPU's 100ij of the preferred data-analysis processor 100. Once data are received at the input 110, at least a portion thereof is sent to all SPU's 100ij. In each SPU 100ij, the pattern-processing circuit 180 compares said portion of data against the various keywords stored in the local memory array 170.


The above searching operations are carried out by all SPU's 100ij at the same time. Because the preferred data-analysis processor 100 comprises a large number of SPU's 100ij (e.g. thousands), it achieves massive parallelism for keyword search. Furthermore, because the inter-die connections 160 are numerous and the pattern-processing circuit 180 is physically close to the memory arrays 170 (compared with the conventional von Neumann architecture), the pattern-processing circuit 180 can easily fetch keywords from the local memory array 170. As a result, the preferred data-analysis processor 100 can perform fast and efficient search on unstructured data or semi-structured data.


In this preferred embodiment, the pattern-processing circuit 180 is a string-matching circuit. The string-matching circuit could be implemented by a content-addressable memory (CAM) or a comparator including XOR circuits. Alternatively, a keyword can be represented by a regular expression. In this case, the string-matching circuit 180 can be implemented by a finite-state automaton (FSA) circuit.
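As a rough software analog of such an FSA circuit, the sketch below builds the deterministic-automaton transition table for a fixed keyword (the classic KMP construction) and feeds the text through it one character per step, as the hardware would per clock cycle. The function names and alphabet are hypothetical choices for illustration:

```python
def build_dfa(keyword: str, alphabet: str):
    """Build the transition table of a deterministic finite-state
    automaton recognizing `keyword` (software analog of an FSA
    string-matching circuit)."""
    m = len(keyword)
    dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
    dfa[0][keyword[0]] = 1
    x = 0  # restart state: longest proper prefix that is also a suffix
    for state in range(1, m):
        for c in alphabet:
            dfa[state][c] = dfa[x][c]           # mismatch: fall back
        dfa[state][keyword[state]] = state + 1  # match: advance
        x = dfa[x][keyword[state]]
    return dfa

def fsa_match(text: str, keyword: str, alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Feed the text through the automaton one character per step,
    as the hardware circuit would; report whether the accept state
    (full keyword seen) is ever reached."""
    dfa, state = build_dfa(keyword, alphabet), 0
    for c in text:
        state = dfa[state].get(c, 0)
        if state == len(keyword):
            return True
    return False

print(fsa_match("big data analytics", "data"))  # True
```

Because the automaton consumes exactly one symbol per step with no backtracking, it maps naturally onto a fixed-latency hardware pipeline.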


Accordingly, the present invention discloses a data-analysis processor package, comprising: an input for transferring at least a portion of data from a big-data database; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a string-matching circuit, wherein said NVM array stores at least a portion of a keyword, said string-matching circuit searches said keyword in said portion of data; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said string-matching circuit is disposed on said logic die, said NVM array and said string-matching circuit are communicatively coupled by a plurality of inter-die connections.


d) Searchable Storage


Big-data analytics often requires full-database search, i.e. searching a whole big-data database for a keyword. The full-database search is challenging to the conventional von Neumann architecture. Because the big-data database is large, with a capacity of gigabytes to terabytes, or even larger, it takes a long time just to read out all data, let alone analyze them. For the conventional von Neumann architecture, the full-database search time is proportional to the database size.


To improve the speed and efficiency of full-database search, the present invention discloses a searchable storage. Its primary function is database storage, with in-situ searching capability as its secondary function. Like flash memory, a large number of the preferred searchable storages 100 can be packaged into a storage card or a solid-state drive for storing a big-data database with in-situ searching capabilities.


In the preferred searchable storage 100, the memory arrays 170 in different SPU's 100ij store different portions of the big-data database. In other words, the big-data database is stored and distributed in the SPU's 100ij of the storage card or the solid-state drive. During search, a keyword is sent as input 110 to all SPU's 100ij. In each SPU 100ij, the pattern-processing circuit 180 searches the portion of the big-data database stored in the local memory array 170 for the keyword.


The above searching operations are carried out by all SPU's 100ij at the same time, and the keyword-search time for each SPU 100ij is similar. Because of the massive parallelism, no matter how large the capacity of the storage card or the solid-state drive is, the keyword-search time for the whole storage card or the whole solid-state drive is more or less a constant, close to the keyword-search time for a single SPU 100ij and generally within seconds. On the other hand, the conventional full-database search takes minutes to hours, or even longer. In this preferred embodiment, the pattern-processing circuit 180 is a string-matching circuit.
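The storage-like arrangement, in which the database is sharded across SPU's and the keyword is broadcast, can be sketched as follows. The shard contents and the function name are hypothetical; in hardware every shard would be searched simultaneously rather than in a loop:

```python
def distributed_search(shards, keyword: str):
    """Storage-like search: the database is sharded across SPU's and the
    keyword is broadcast; each SPU searches only its local shard, so the
    whole database is covered in one parallel pass.  Returns the indices
    of the shards whose records contain the keyword."""
    return [i for i, shard in enumerate(shards)
            if any(keyword in record for record in shard)]

# Each inner list models one SPU's local NVM array holding records.
shards = [["alpha log", "beta log"], ["gamma event"], ["keyword hit here"]]
print(distributed_search(shards, "keyword"))  # [2]
```

The returned shard indices identify which SPU's (and hence which stored files) matched, the information a host controller would collect from the SPU outputs.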


Accordingly, the present invention discloses a searchable storage package, comprising: an input for transferring at least a portion of a keyword; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a string-matching circuit, wherein said NVM array stores at least a portion of data from a big-data database, said string-matching circuit searches said keyword in said portion of data; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said string-matching circuit is disposed on said logic die, said NVM array and said string-matching circuit are communicatively coupled by a plurality of inter-die connections.


C) Speech Recognition


Speech recognition enables the recognition and translation of spoken language. It is primarily implemented through pattern recognition between audio data and an acoustic/language model library, which contains a plurality of acoustic models or language models. During speech recognition, the pattern-processing circuit 180 performs speech recognition on the user's audio data by finding the nearest acoustic/language model in the acoustic/language model library. Because the conventional processor (e.g. CPU, GPU) has a limited number of cores and the acoustic/language model database is stored externally, the conventional processor and its associated architecture perform poorly in speech recognition.


e) Speech-Recognition Processor


To improve the performance of speech recognition, the present invention discloses a speech-recognition processor 100. In the preferred speech-recognition processor 100, the user's audio data is sent as input 110 to all SPU's 100ij. The memory arrays 170 each store at least a portion of an acoustic/language model. In other words, an acoustic/language model library is stored and distributed in the SPU's 100ij. The pattern-processing circuit 180 performs speech recognition on the audio data from the input 110 with the acoustic/language models stored in the memory arrays 170. In this preferred embodiment, the pattern-processing circuit 180 is a speech-recognition circuit.
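A minimal sketch of the nearest-model search follows, assuming each acoustic/language model is reduced to a feature vector and Euclidean distance stands in for a real acoustic score. Both assumptions, and all names, are for illustration only:

```python
import math

def nearest_model(spu_model_shards, feature_vec):
    """Each SPU's speech-recognition circuit scores the audio feature
    vector against its locally stored models; the global answer is the
    minimum-distance model over all SPU's.  Euclidean distance is a
    stand-in for a real acoustic/language-model score."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = None
    for shard in spu_model_shards:  # each shard is scored in parallel in hardware
        for name, model_vec in shard.items():
            d = dist(model_vec, feature_vec)
            if best is None or d < best[1]:
                best = (name, d)
    return best[0]

# The model library is distributed across SPU's (one dict per SPU).
shards = [{"hello": (1.0, 0.0)}, {"world": (0.0, 1.0)}]
print(nearest_model(shards, (0.9, 0.1)))  # hello
```

Each SPU produces a local best candidate, and a final reduction over the per-SPU minima yields the recognized model; the same structure applies to the image-recognition embodiments below.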


Accordingly, the present invention discloses a speech-recognition processor package, comprising: an input for transferring at least a portion of audio data; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a speech-recognition circuit, wherein said NVM array stores at least a portion of an acoustic/language model, said speech-recognition circuit performs pattern recognition on said portion of audio data with said acoustic/language model; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said speech-recognition circuit is disposed on said logic die, said NVM array and said speech-recognition circuit are communicatively coupled by a plurality of inter-die connections.


f) Searchable Audio Storage


To enable audio search in an audio database (e.g. an audio archive), the present invention discloses a searchable audio storage. In the preferred searchable audio storage 100, an acoustic/language model derived from the audio data to be searched for is sent as input 110 to all SPU's 100ij. The memory arrays 170 store at least a portion of the user's audio database. In other words, the audio database is stored and distributed in the SPU's 100ij of the preferred searchable audio storage 100. The pattern-processing circuit 180 performs speech recognition on the audio data stored in the memory arrays 170 with the acoustic/language model from the input 110. In this preferred embodiment, the pattern-processing circuit 180 is a speech-recognition circuit.


Accordingly, the present invention discloses a searchable audio storage package, comprising: an input for transferring at least a portion of an acoustic/language model; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a speech-recognition circuit, wherein said NVM array stores at least a portion of audio data, said speech-recognition circuit performs pattern recognition on said portion of audio data with said acoustic/language model; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said speech-recognition circuit is disposed on said logic die, said NVM array and said speech-recognition circuit are communicatively coupled by a plurality of inter-die connections.


D) Image Recognition or Search


Image recognition enables the recognition of objects and scenes in images. It is primarily implemented through pattern recognition on image data with an image model, which is part of an image model library. During image recognition, the pattern-processing circuit 180 performs image recognition on the user's image data by finding the nearest image model in the image model library. Because the conventional processor (e.g. CPU, GPU) has a limited number of cores and the image model database is stored externally, the conventional processor and its associated architecture perform poorly in image recognition.


g) Image-Recognition Processor


To improve the performance of image recognition, the present invention discloses an image-recognition processor 100. In the preferred image-recognition processor 100, the user's image data is sent as input 110 to all SPU's 100ij. The memory arrays 170 each store at least a portion of an image model. In other words, an image model library is stored and distributed in the SPU's 100ij. The pattern-processing circuit 180 performs image recognition on the image data from the input 110 with the image models stored in the memory arrays 170. In this preferred embodiment, the pattern-processing circuit 180 is an image-recognition circuit.


Accordingly, the present invention discloses an image-recognition processor package, comprising: an input for transferring at least a portion of image data; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and an image-recognition circuit, wherein said NVM array stores at least a portion of an image model, said image-recognition circuit performs pattern recognition on said portion of image data with said image model; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said image-recognition circuit is disposed on said logic die, said NVM array and said image-recognition circuit are communicatively coupled by a plurality of inter-die connections.


h) Searchable Image Storage


To enable image search in an image database (e.g. an image archive), the present invention discloses a searchable image storage. In the preferred searchable image storage 100, an image model derived from the image data to be searched for is sent as input 110 to all SPU's 100ij. The memory arrays 170 store at least a portion of the user's image database. In other words, the image database is stored and distributed in the SPU's 100ij of the preferred searchable image storage 100. The pattern-processing circuit 180 performs image recognition on the image data stored in the memory arrays 170 with the image model from the input 110. In this preferred embodiment, the pattern-processing circuit 180 is an image-recognition circuit.


Accordingly, the present invention discloses a searchable image storage package, comprising: an input for transferring at least a portion of an image model; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and an image-recognition circuit, wherein said NVM array stores at least a portion of image data, said image-recognition circuit performs pattern recognition on said portion of image data with said image model; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said image-recognition circuit is disposed on said logic die, said NVM array and said image-recognition circuit are communicatively coupled by a plurality of inter-die connections.


While illustrative embodiments have been shown and described, it would be apparent to those skilled in the art that many more modifications than those mentioned above are possible without departing from the inventive concepts set forth herein. The invention, therefore, is not to be limited except in the spirit of the appended claims.

Claims
  • 1. A distributed pattern processor package, comprising: an input for transferring at least a first portion of a first pattern; a plurality of storage-processing units (SPU's) communicatively coupled with said input, each of said SPU's comprising at least a non-volatile memory (NVM) array and a pattern-processing circuit, wherein said NVM array stores at least a second portion of a second pattern, said pattern-processing circuit performs pattern processing for said first and second patterns; at least a memory die and a logic die, wherein said NVM array is disposed on said memory die, said pattern-processing circuit is disposed on said logic die, said NVM array and said pattern-processing circuit are communicatively coupled by a plurality of inter-die connections.
  • 2. The pattern processor package according to claim 1, wherein said NVM array does not lose information stored therein when power goes off.
  • 3. The pattern processor package according to claim 2, wherein said NVM array is a mask-ROM, an OTP, an EPROM, an EEPROM, a flash memory, or a 3-D memory (3D-M).
  • 4. The pattern processor package according to claim 1, wherein said NVM array and said pattern-processing circuit at least partially overlap.
  • 5. The pattern processor package according to claim 1, wherein each NVM array is vertically aligned and communicatively coupled with a pattern-processing circuit.
  • 6. The pattern processor package according to claim 1, wherein each pattern-processing circuit is vertically aligned and communicatively coupled with at least a NVM array.
  • 7. The pattern processor package according to claim 1, wherein the pitch of said pattern-processing circuit is an integer multiple of the pitch of said NVM array.
  • 8. The pattern processor package according to claim 1, wherein said inter-die connections are micro-bumps.
  • 9. The pattern processor package according to claim 1, wherein said inter-die connections are through-silicon vias (TSV's).
  • 10. The pattern processor package according to claim 1, wherein said inter-die connections are vertical interconnect accesses (VIA's).
  • 11. The pattern processor package according to claim 1 being a processor package with an embedded search-pattern library, wherein said first pattern includes a target pattern and said second pattern includes a search pattern.
  • 12. The pattern processor package according to claim 1 being an information-security processor package, wherein said input transfers at least a portion of data from a network packet or a computer file; said NVM array stores at least a portion of a virus pattern; and, said pattern-processing circuit is a code-matching circuit for searching said virus pattern in said portion of data.
  • 13. The pattern processor package according to claim 1 being a data-analysis processor package, wherein said input transfers at least a portion of data from a big-data database; said NVM array stores at least a portion of a keyword; and, said pattern-processing circuit is a string-matching circuit for searching said keyword in said portion of data.
  • 14. The pattern processor package according to claim 1 being a speech-recognition processor package, wherein said input transfers at least a portion of audio data; said NVM array stores at least a portion of an acoustic/language model; and, said pattern-processing circuit is a speech-recognition circuit for performing speech recognition on said portion of audio data with said acoustic/language model.
  • 15. The pattern processor package according to claim 1 being an image-recognition processor package, wherein said input transfers at least a portion of image data; said NVM array stores at least a portion of an image model; and, said pattern-processing circuit is an image-recognition circuit for performing image recognition on said portion of image data with said image model.
  • 16. The pattern processor package according to claim 1 being a storage package with in-situ pattern-processing capabilities, wherein said first pattern is a search pattern and said second pattern is a target pattern.
  • 17. The pattern processor package according to claim 1 being an anti-virus storage package, wherein said input transfers at least a portion of a virus pattern; said NVM array stores at least a portion of data from a computer file; and, said pattern-processing circuit is a code-matching circuit for searching said virus pattern in said portion of data.
  • 18. The pattern processor package according to claim 1 being a searchable storage package, wherein said input transfers at least a portion of a keyword; said NVM array stores at least a portion of data from a big-data database; and, said pattern-processing circuit is a string-matching circuit for searching said keyword in said portion of data.
  • 19. The pattern processor package according to claim 1 being a searchable audio storage package, wherein said input transfers at least a portion of an acoustic/language model; said NVM array stores at least a portion of audio data; and, said pattern-processing circuit is a speech-recognition circuit for performing speech recognition on said portion of audio data with said acoustic/language model.
  • 20. The pattern processor package according to claim 1 being a searchable image storage package, wherein said input transfers at least a portion of an image model; said NVM array stores at least a portion of image data; and, said pattern-processing circuit is an image-recognition circuit for performing image recognition on said portion of image data with said image model.
Priority Claims (5)
Number Date Country Kind
201610127981.5 Mar 2016 CN national
201710122861.0 Mar 2017 CN national
201710130887.X Mar 2017 CN national
201810381860.2 Apr 2018 CN national
201810388096.1 Apr 2018 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of application “Distributed Pattern Processor Comprising Three-Dimensional Memory”, application Ser. No. 15/452,728, filed Mar. 7, 2017, which claims priority from Chinese Patent Application No. 201610127981.5, filed Mar. 7, 2016; Chinese Patent Application No. 201710122861.0, filed Mar. 3, 2017; and Chinese Patent Application No. 201710130887.X, filed Mar. 7, 2017, in the State Intellectual Property Office of the People's Republic of China (CN), the disclosures of which are incorporated herein by reference in their entireties.

Continuation in Parts (1)
Number Date Country
Parent 15452728 Mar 2017 US
Child 16258667 US