Web search based ad services and search engines have become important tools for providing information to users. One factor in attracting users and advertisers is providing relevant information and ads for a given search query. Search relevance may be determined by a ranking function that ranks resultant documents according to their similarities to the input query.
Information retrieval (IR) researchers have studied search relevance for various search engines and tools. Representative methods include Boolean, vector space, probabilistic, and language models. Earlier search engines and tools were based mainly on such IR algorithms, and they incorporate the concept of the ranking function in varying degrees. Many factors may affect the ranking function for search relevance, including page content, title, anchor text, URL, spam, and page freshness. It is extremely difficult to manually tune ranking-function parameters to accommodate these factors for large-scale data sets, such as those that are common in many applications, including World Wide Web (“Web”) applications and speech and image processing. Machine-learning algorithms have therefore been applied to learn complex ranking functions from these large-scale data sets.
Early algorithms for ranking-function learning include polynomial-based regression, genetic programming, RankSVM, and classification-based SVM. However, these algorithms have only been evaluated on small-scale datasets because of their high computational cost. In fact, these traditional machine-learning algorithms operate slowly when processing large-scale data sets; users often wait hours, days, or even weeks for results. This slow computation time may be due, in part, to a typical personal computer (PC) being unable to efficiently exploit the full parallelism available in machine-learning algorithms.
Instruction-level parallelism techniques improve processing time somewhat. More particularly, distributed implementations with process-level parallelism are faster than many PC central processing units (CPUs), which execute instructions in a sequential manner. However, distributed implementations occupy many machines. Additionally, for some algorithms, distributed computing yields poor speed improvement per added processor due to communication cost. A Graphics Processing Unit (GPU)-based accelerator can only accelerate a limited spectrum of machine-learning algorithms because its hardware structure is optimized for graphics applications. Thus, memory access bandwidth, communication cost, and the flexibility and granularity of parallelism remain bottlenecks for these solutions.
An accelerator system and method is provided that, according to one exemplary implementation, utilizes FPGA technology to achieve better parallelism and flexibility. The FPGA-based accelerator uses a PCI controller to communicate with a host CPU. A memory hierarchy composed of embedded Random Access Memory (RAM) in the FPGA, Static Random Access Memory (SRAM) and Synchronous Dynamic Random Access Memory (SDRAM), allows the FPGA assisted accelerator to take advantage of memory locality in algorithms.
According to another exemplary implementation, an FPGA-based accelerator system is combined with a relevance-ranking algorithm, such as the algorithm known as RankBoost, to increase the speed of a training process. Using an approximated RankBoost algorithm reduces the computation and storage scale from O(N²) to O(N). This algorithm can be mapped to the accelerator system to increase the speed of a pure software implementation by approximately 170 times. Several techniques assist in achieving this acceleration. The algorithm and the data structures associated with the FPGA-based accelerator may be organized to enable streaming data access and thus increase the training speed. The data may be compressed so that the system and method remain operable with larger data sets. At least a portion of the approximated RankBoost algorithm may be implemented as a single-instruction, multiple-data (SIMD) architecture with multiple processing engines (PEs) in the FPGA. Thus, large data sets, such as a training set, can be loaded into memories associated with the FPGA to increase the speed of the relevance-ranking algorithm.
By virtue of this system, a user can train a ranking model in much less time and at lower cost, and can therefore try different learning parameters of the algorithm in the same amount of time, or carry out studies that depend on numerous ranking models.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Overview
An FPGA-based accelerator system for machine learning as described and claimed herein accelerates selected algorithms by providing better processing parallelism and memory access. The accelerator system may include an acceleration device, which may include a substrate, such as a Peripheral Component Interconnect (PCI) card, with a Field-Programmable Gate Array (FPGA) and memories acting as caches, e.g., SRAM, SDRAM, and so forth, connected to a computing device. One or more algorithms may be implemented on one or more of the FPGAs with direct parallel architecture and/or pipelined architecture to exploit both application parallelism and direct functional logic implementation. The PCI could also be replaced by other computer buses, including but not limited to PCI-X, PCI-Express, HyperTransport, Universal Serial Bus (USB) and Front-Side Bus (FSB).
A training data set or other data may be loaded onto one or more memories on the accelerator board, or onto embedded memories in the FPGA, to increase memory access bandwidth and data locality. The training data set may comprise information collected from Web searches to assess relevancy, and other characteristics. The system may include or be associated with one or more PCs or other computing devices, each computing device having one or more accelerator cards.
Exemplary System
Accelerator System Architecture
Training data or other data being accessed by the FPGA 106 may be loaded to DDR memory 108, including SRAM 110 or SDRAM 112, on the PCI board 104, or to embedded memories in the FPGA 106, in order to increase memory access bandwidth and data locality. Software loaded on the computer 114 may be capable of programming or re-programming the FPGA 106 at any time during processing.
As shown in FIG. 3, the FPGA 304 may include a PCI local interface 322 for interfacing with the PCI 9054 chip 320. The PCI local interface 322 may also connect to the processing engine (PE) units, e.g., PE0, PE1, . . . , PEn. The PE units implement the computation logic. The FPGA 304 may also have a DDR interface 324 for interfacing with DDR memory 326. The FPGA 304 may additionally have a control unit 328 for controlling the processing units PE0, PE1, . . . , PEn by sending signals to the PE units. The FPGA 304 may also have a memory management unit (MMU) 330 for aligning or managing data for faster processing. The processing engines of the FPGA 304 may provide an output to the PCI local interface 322 for further implementation or use.
Data Organization
According to one example, training data that will be iteratively used may be loaded onto SDRAM onboard an accelerator device, such as accelerator device 301. The training data loaded in the SDRAM may be organized according to its access order in logic associated with the FPGA by a software tool so that the FPGA can fetch data in a so-called, and well-known, “burst” mode, thus enabling high bandwidth access to the data set.
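As a toy illustration of this organization step, the sketch below linearizes the training data in the order the logic is assumed to consume it (here, feature-major order); the actual access order is dictated by the FPGA logic, so this ordering, and the function name, are assumptions for exposition only.

```python
def layout_for_burst(features, docs, n_f):
    """Linearize feature values in their expected access order (sketch).

    features -- dict doc -> list of n_f feature values
    docs     -- documents in the order the logic will visit them
    Returns a flat list suitable for writing to SDRAM sequentially, so
    the FPGA can fetch it in long bursts instead of random accesses.
    """
    flat = []
    for k in range(n_f):          # outer loop: one feature at a time
        for d in docs:            # inner loop: documents in visit order
            flat.append(features[d][k])
    return flat
```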
Randomly used large-scale data structures could be loaded to SRAM onboard the accelerator device, such as accelerator device 301, and associated with an FPGA, such as FPGA 304. According to this implementation, the SRAM may be used as a large low latency cache.
Temporary data structures, such as intermediate variables, parameters, and so forth, and results, e.g., the learned model, could be stored in distributed memory or registers inside the FPGA, which would act as high bandwidth, low latency cache. The data could be utilized without needing to access memory off of the FPGA, which would enhance the access speed of the cache.
Stream Data Processing Architecture
Data Compression/Decompression
Relevance-Ranking Algorithm
A relevance-ranking algorithm may be used to learn the ranking function H by combining a given collection of ranking functions. The relevance-ranking algorithm may be pair-based or document-based. The pseudocode for one such relevance-ranking algorithm is shown below:
Given: initial distribution D over X × X
Initialize: D_1 = D
For t = 1, . . . , T:
(1) Train WeakLearn using distribution D_t.
(2) WeakLearn returns a weak hypothesis h_t.
(3) Choose α_t ∈ R.
(4) Update weights: for each pair (d0, d1):
    D_{t+1}(d0, d1) = D_t(d0, d1) exp(α_t(h_t(d0) − h_t(d1))) / Z_t
where Z_t is the normalization factor:
    Z_t = Σ_{(d0,d1)} D_t(d0, d1) exp(α_t(h_t(d0) − h_t(d1)))
Output the final ranking: H(x) = Σ_{t=1}^{T} α_t h_t(x)
The relevance-ranking algorithm is utilized in an iterative manner. In each round, a procedure named “WeakLearn” is called to select the best “weak ranker” from a large set of candidate weak rankers. The weak ranker has the form ht: X→R and ht(x1)>ht(x0) means that instance x1 is ranked higher than x0 in round t. A distribution Dt over X×X is maintained in the training process. Weight Dt(x0, x1) will be decreased if ht ranks x0 and x1 correctly (ht(x1)>ht(x0)), and increased otherwise. Thus, Dt will tend to concentrate on the pairs that are hard to rank. The final strong ranker H is a weighted sum of the selected weak rankers in each round.
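To make the round structure concrete, the following is a minimal software sketch of this boosting loop; the weak_learn callable, the pair representation, and all names are illustrative assumptions, not the claimed implementation.

```python
import math

def rank_boost(pairs, D, weak_learn, T):
    """Sketch of the iterative boosting loop described above.

    pairs      -- list of (d0, d1) document pairs, d1 rated above d0
    D          -- dict mapping (d0, d1) -> weight (a distribution)
    weak_learn -- callable returning (h_t, alpha_t) for a distribution
    T          -- number of boosting rounds
    """
    rankers = []
    for t in range(T):
        # WeakLearn selects the best weak ranker h_t under D_t
        # and a weight alpha_t for it.
        h, alpha = weak_learn(D)
        rankers.append((alpha, h))

        # Re-weight each pair; correctly ranked pairs (h(d1) > h(d0))
        # lose weight, so D concentrates on hard-to-rank pairs.
        for (d0, d1) in pairs:
            D[(d0, d1)] *= math.exp(alpha * (h(d0) - h(d1)))
        Z = sum(D.values())              # normalization factor Z_t
        for p in pairs:
            D[p] /= Z

    # Final strong ranker: weighted sum of the selected weak rankers.
    return lambda d: sum(a * h(d) for (a, h) in rankers)
```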
The WeakLearn algorithm may be implemented to find the weak ranker with a maximum r(f, θ) by generating a temporary variable π(d) for each document:

    π(d) = Σ_{d′} (D(d′, d) − D(d, d′))

so that, for a weak ranker defined by a feature f and a threshold θ,

    r(f, θ) = Σ_{d: f(d) > θ} π(d).
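A minimal sketch of these two quantities, assuming the pair convention above in which D is keyed by (d0, d1) with d1 the higher-rated document; the function names are illustrative:

```python
def compute_pi(docs, D):
    """pi(d) = sum over d' of (D(d', d) - D(d, d')): each pair weight is
    credited to the higher-rated document and debited from the lower one."""
    pi = {d: 0.0 for d in docs}
    for (d0, d1), w in D.items():
        pi[d1] += w
        pi[d0] -= w
    return pi

def r_value(pi, f, theta):
    """r(f, theta) = sum of pi(d) over documents with f(d) > theta."""
    return sum(v for d, v in pi.items() if f(d) > theta)
```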
To extend the relevance-ranking algorithm to Web relevance ranking, training pairs may be generated and weak rankers may be defined. To generate the training pairs, the instance space for a search engine may be partitioned according to the queries issued by users. For each query q, the returned documents may be given a relevance score from 1 (“poor match”) to 5 (“excellent match”) using a manual or automated process. Unlabeled documents may be given a relevance score of 0. Based on these rating scores (the ground truth), the training pairs for the relevance-ranking algorithm may be generated from the returned documents for each query, as sketched below.
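One plausible reading of this pair-generation step, as a sketch; the data layout and function name are assumptions:

```python
from itertools import combinations

def generate_pairs(rated_docs):
    """Build training pairs from per-query relevance scores (ground truth).

    rated_docs -- dict mapping query -> list of (doc, score), with scores
                  in 0..5 as described above (0 = unlabeled)
    Returns a list of (lower, higher) pairs; pairs are only formed
    between documents returned for the same query.
    """
    pairs = []
    for q, docs in rated_docs.items():
        for (d0, s0), (d1, s1) in combinations(docs, 2):
            if s0 < s1:
                pairs.append((d0, d1))   # d1 rated above d0
            elif s1 < s0:
                pairs.append((d1, d0))
            # equal scores yield no pair
    return pairs
```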
So-called “weak rankers” may be defined as a transformation of a document feature, which is a one-dimensional real-valued number. Document features can be classified into query-dependent features, such as query term frequencies in a document and term proximity, and query-independent features, such as PageRank, and so forth. Thus, the same document may be represented by different feature vectors for different queries because of its query-dependent features.
In keeping with the previous algorithm example, a document may be designated as d(q), a pair as {d1(q), d2(q)}, and d_ij denotes a document returned for query q_i. The kth feature of a document is denoted f_k(d_ij). With these notations, an alternative relevance-ranking algorithm may be implemented as follows. Given:
N_i documents {d_ij | j = 1, . . . , N_i} for each query q_i, where Σ_{i=1}^{Nq} N_i = N_doc.
N_f features {f_k(d_ij) | k = 1, . . . , N_f} for each document d_ij.
N_θ^k candidate thresholds {θ_k^s | s = 1, . . . , N_θ^k} for each feature f_k.
N_pair pairs (d_ij1, d_ij2) generated from the ground-truth ratings {R(q_i, d_ij)}, abbreviated {R_ij}.
Initialize: D_1(d0, d1) = 1/N_pair for each pair
For t = 1, . . . , T:
(1) Train WeakLearn using distribution D_t.
(2) WeakLearn returns a weak hypothesis h_t and a weight α_t.
(3) Update weights: for each pair (d0, d1):
    D_{t+1}(d0, d1) = D_t(d0, d1) exp(α_t(h_t(d0) − h_t(d1))) / Z_t
where Z_t is the normalization factor:
    Z_t = Σ_{(d0,d1)} D_t(d0, d1) exp(α_t(h_t(d0) − h_t(d1)))
Output the final ranker: H(d) = Σ_{t=1}^{T} α_t h_t(d)
For the relevance-ranking algorithms described by example above, WeakLearn may be defined as a routine that uses the N_f document features to form its weak rankers, attempting to find the one with the smallest pair-wise disagreement relative to distribution D over the N_pair document pairs. The weak ranker may be defined by the following relationship:

    h(d) = 1 if f(d) > θ, and h(d) = 0 otherwise.
To find the best h(d), the weak learner checks all possible combinations of feature f and threshold θ. The WeakLearn algorithm may be implemented to ascertain the maximum r(f, θ) by generating the temporary variable π(d) for each document. Intuitively, π contains the information regarding labels and pair weights, and the weak learner then only needs to access π in a document-wise manner for each feature and each threshold, which costs O(N_doc N_f N_θ) in a straightforward implementation. Based on this, an alternative weak learner may be utilized that uses an integral histogram to further reduce the computational complexity to O(N_doc N_f). Because of this relatively low computational complexity, the algorithm may be implemented in both software and hardware, e.g., an accelerator system utilizing an FPGA, as described above.
According to this implementation, r may be calculated in O(N_doc N_f) time in each round using an integral histogram. First, the feature values {f_k(d)} in each dimension of the feature vector (f_1, . . . , f_{N_f}) are quantized into N_bin bins. The boundaries of these bins are the candidate thresholds {θ_k^s | s = 1, . . . , N_θ^k} for the corresponding feature f_k. A histogram of π(d) over these bins is then built for each feature, and integrating the histogram from the highest bin downward yields r(f_k, θ_k^s) for every threshold.
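The following sketch illustrates this integral-histogram weak learner under the assumptions above (features pre-quantized to integer bin indices, thresholds identified with bin boundaries); the names and data layout are illustrative:

```python
def weak_learn_integral(pi, quantized, n_f, n_bins):
    """Integral-histogram WeakLearn sketch, O(Ndoc * Nf) per round.

    pi        -- dict doc -> pi(d), as computed above
    quantized -- dict doc -> list of n_f integer bin indices in 0..n_bins-1
    Returns (best_feature, best_bin, best_r).
    """
    # Build a histogram of pi over the bins, one row per feature.
    hist = [[0.0] * n_bins for _ in range(n_f)]
    for d, bins in quantized.items():
        for k in range(n_f):
            hist[k][bins[k]] += pi[d]

    best = (None, None, float("-inf"))
    for k in range(n_f):
        acc = 0.0
        # Accumulate from the highest bin downward: after adding bin s,
        # acc equals r(f_k, theta) for the threshold at bin s's lower edge.
        for s in range(n_bins - 1, -1, -1):
            acc += hist[k][s]
            if acc > best[2]:
                best = (k, s, acc)
    return best
```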
Exemplary Implementation of Relevance-Ranking Algorithm
Software provided on or to the host computer will send the quantized feature values to a DDR memory through the PCI bus, PCI controller and FPGA. As described above, the data may be organized to enable streaming memory access, which can make full use of DDR memory bandwidth. In each training round, the software will call WeakLearn to compute π(d) for every document, and send π(d) to a First In First Out (FIFO) queue in the FPGA. The control unit (CU) in the FPGA will direct the PE arrays to build histograms and integral histograms, and will then send the results r(f,θ) as output to the FIFO queue. The CU is implemented as a finite state machine (FSM), which halts or resumes the pipeline in PE units according to the status of each FIFO. When the CU indicates that the calculation of r is finished, the software will read back these r values and select the maximum value. Then the software will update the distribution D(d0, d1) over all pairs and begin the next round.
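In software terms, one round of this host/FPGA interaction might look like the sketch below; the accel object and its write_fifo/read_fifo methods are hypothetical stand-ins for the PCI driver interface, not a real device API:

```python
def training_round(accel, docs, D, n_results):
    """One round of the software/hardware split described above (sketch)."""
    # Software side: compute pi(d) from the current pair weights.
    pi = {d: 0.0 for d in docs}
    for (d0, d1), w in D.items():
        pi[d1] += w
        pi[d0] -= w

    # Stream pi(d) to the FPGA's input FIFO; the control unit directs
    # the PE arrays to build histograms and integral histograms.
    accel.write_fifo([pi[d] for d in docs])

    # Read back the r(f, theta) values and select the maximum in software.
    r = accel.read_fifo(n_results)
    best = max(range(n_results), key=lambda i: r[i])

    # Software then updates D(d0, d1) over all pairs (see the boosting
    # loop sketched earlier) and begins the next round.
    return best, r[best]
```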
It is noted that the micro-architecture of the PE supports fully-pipelined operation, which enhances the performance of hardware, particularly with regard to machine learning algorithms, such as a relevance-ranking algorithm.
An example data input 600 into 8 PE arrays with 16 features per PE is illustrated in FIG. 6.
A streaming memory access organization can also be used for the FIFO buffer that provides data from the DDR memory to the group of PE units. The width of the FIFO associated with the PE array may be, for example, 128 bits, which is equivalent to 16 bytes. The data in the FIFO can be arranged as shown in the accompanying figure.
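As an illustration of one way such 128-bit words could be packed, the sketch below assumes one byte per quantized feature value, giving 16 values per word to match the 16-byte FIFO width; the byte-per-value layout is an assumption, not the figure's exact arrangement.

```python
def pack_fifo_words(values):
    """Pack 8-bit values into 128-bit words (16 bytes per word, sketch).

    values -- list of integers in 0..255, e.g., quantized feature bins
    Returns a list of Python ints, each representing one 128-bit word.
    """
    words = []
    for i in range(0, len(values), 16):
        chunk = values[i:i + 16]
        chunk = chunk + [0] * (16 - len(chunk))   # zero-pad the last word
        word = 0
        for j, v in enumerate(chunk):
            word |= (v & 0xFF) << (8 * j)         # little-endian byte order
        words.append(word)
    return words
```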
Although the invention has been described in language specific to structural features and/or methodological steps, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or steps described. Rather, the specific features and steps are disclosed as preferred forms of implementing the claimed invention.
This application is a divisional of prior pending U.S. patent application Ser. No. 11/737,605, filed Apr. 19, 2007, which is herein incorporated by reference in its entirety. Any disclaimer that may have occurred during the prosecution of the above-referenced application(s) is hereby expressly rescinded, and reconsideration of all relevant art is respectfully requested.
| Number | Date | Country |
|---|---|---|
| 20120092040 A1 | Apr 2012 | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | 11737605 | Apr 2007 | US |
| Child | 13335333 | | US |