STRUCTURE AND TRAINING FOR IMAGE CLASSIFICATION

Information

  • Patent Application
  • Publication Number
    20170193328
  • Date Filed
    December 31, 2015
  • Date Published
    July 06, 2017
Abstract
A computer implemented method of training an image classifier, comprising: receiving training images data labeled according to image classes; selecting reference points of the images; and constructing a set of voting convolutional tables and binary features on a patch surrounding each reference point by performing, for each calculation stage: creating a voting table by: creating first candidate binary features; calculating a global loss reduction for each first candidate binary feature; selecting one first candidate binary feature having minimal global loss reduction; and repeating to select stage-size binary features; and performing a tree split using the voting table by: creating second candidate binary features; calculating a combined loss reduction for each stage-split size group of the second candidate binary features; selecting one of the groups having a maximal combined loss reduction; and creating a child-directing table using the selected binary features.
Description
BACKGROUND

Practical object recognition problems often have to be solved under severe computation and time constraints. Some examples of interest are natural user interfaces, automotive active safety, robotic vision or sensing for the Internet of Things (IoT). Often the problem is to obtain high accuracy in real time, on a low power platform, or in a background process that may only utilize a small fraction of the central processing unit (CPU). In other cases the classifier is part of a cascade, or a complex multiple-classifier system. Various architectures have been suggested and/or used to optimize the accuracy-speed trade-off.


SUMMARY

According to some embodiments of the present disclosure, there are provided systems and methods for image classification based on convolutionally-applied sparse feature extraction, using a combination of trees and ferns, and a linear voting layer. An image classifier is trained using training images data and then used for sorting sample images received from a capture device.


When classifying an image, multiple codewords are created for each reference point in the image by concatenation of binary features. A vote is calculated for each codeword for each class, and the class having the highest sum of votes is selected for the image. The creation of the codeword is done by a method combining the ferns and trees structures of calculating binary features, referred to as a ‘long tree’. Calculation is divided into multiple stages, each defining a set of bit functions, a stage size (the number of binary features to be calculated) and a child-directing table. Several binary features are calculated for one stage and then used with the child-directing table as an index for the next stage. When all binary features are calculated, they are combined to form a codeword.


When training the classifier, a method is used that iterates between gradient-based word calculator optimization and global optimization of table weights. The training includes selecting reference points in the training images data, which is labeled according to classes, and constructing binary features on a patch surrounding each reference point. The binary features are constructed by steps of two kinds: fern growing and tree split. At fern growing, candidate binary features are randomly generated, the global loss reduction for each of them is computed (using a linear approximation of the loss), and the best is retained. At tree split, candidate bit functions are again randomly generated, but this time a set of several bit functions is selected as children, together with a child-directing table, so as to give the maximal value of combined loss reduction for this stage.


The described method provides an improved speed to accuracy ratio over existing methods of image classification. The method is most beneficial when implemented in systems requiring accurate results using limited computing resources and/or in a very short time, for example 1-1000 CPU microseconds. Such exemplary systems include, for example, user interfaces that are operated by identifying a user's hand poses and/or motions captured by a camera, automotive active safety that recognizes road objects and motions, and robotic vision that has to classify objects in its surroundings.


Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the disclosure, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Some embodiments of the disclosure are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the disclosure. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the disclosure may be practiced.


In the drawings:



FIGS. 1A and 1B illustrate a flowchart of an exemplary process for training an image classifier and classifying images, according to some embodiments of the present disclosure;



FIG. 2 is a schematic illustration of an exemplary system for classifying images, according to some embodiments of the present disclosure;



FIG. 3 is a schematic illustration of an exemplary system for training an image classifier, according to some embodiments of the present disclosure; and



FIG. 4 includes speed-accuracy trade-off curves for the 4 exemplary datasets, according to some embodiments of the present disclosure.





DETAILED DESCRIPTION

According to some embodiments of the present disclosure, there are provided systems and methods for classifying images under classification time constraints, using a Convolutional Tables Ensemble (CTE) architecture for object category recognition. The architecture is based on convolutionally-applied sparse feature extraction, using a combination of trees or ferns, and a linear voting layer. The combination of tree and fern structures for calculating binary features, also referred to as bit functions, is achieved by splitting the ferns into stages. This is referred to as a ‘long tree’. To train the long tree classifier, the maximal global loss reduction is calculated to select the split and bit functions.


The fastest theoretical classifier possible would be the one concatenating all the pixel values into a single index, and then using this index to access a table listing the labels of all possible images. This is not feasible due to the exponential requirements of memory and training set size, so the presented architecture, also referred to as Convolutional Tables Ensemble (CTE), compromises on this limit idea in two main ways. First, instead of encoding the whole image with a single index, the image is treated as a dense set of patches, where each patch is encoded using the same parameters. This is analogous to a convolutional layer in a convolutional neural network (CNN), where the same set of filters is applied at each image position. Second, instead of describing a patch using a single long index, it is encoded with a set of short indices that are used to access a set of reasonable-size tables. Votes of all tables at all positions are combined linearly to yield the classifier outputs. Variants of this architecture have been used successfully, mainly for classification of depth images.


The idea of applying the same feature extraction on a dense locations grid is very old and influential in vision, and is a key tenet in CNNs, the state-of-the-art in object recognition. It provides a good structural prior in the form of translation invariance. Another advantage lies in enhanced sample size for learning local feature parameters, since these may be trained from (number of training images)×(number of image patches) instances. The presented architectures are not deep in the CNN sense, and correspond to a single convolutional layer, followed by spatial pooling.


The main vessel used for obtaining high classification speed is the utilization of table-based feature extractors, instead of heavier computations such as applying a large set of filters in a convolutional layer. In table-based feature extraction, the patch is characterized using a set of fast bit functions, such as a comparison between two pixels. K bits are extracted and concatenated into a word. This word is then used as an index into a set of weight tables, one per class, and the weights extracted provide the classes support from this word. Support weights are accumulated across many tables and all image positions, and the label is decided according to the highest scoring class.
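To make the table-lookup step concrete, the following Python sketch (an illustration only; the function name and array shapes are assumptions, not taken from the disclosure) packs K bit-function outputs into a word and uses it to index a per-class weight table:

```python
import numpy as np

def table_vote(patch_bits, weight_table):
    """Pack K bits into a word and look up per-class votes.

    patch_bits:   length-K sequence of 0/1 bit-function outputs for one patch
    weight_table: (C, 2**K) weight matrix of one convolutional table
    Returns a (C,) vector of class votes for this patch.
    """
    word = 0
    for bit in patch_bits:          # concatenate the bits into a K-bit index
        word = (word << 1) | int(bit)
    return weight_table[:, word]    # one lookup yields the votes for all classes
```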


The power of this architecture is in the combination of fast-but-rich features with a high capacity classifier. Using K quick bit functions, for example K=12, the representation considers all their 2^K combinations as features. The representation is highly sparse, with 1/2^K of the features active at each position. The classifier is linear, but it operates over numerous highly non-linear features. For M tables, for example 50 tables, and C classes, the number of weights to optimize is M·2^K·C, which may be very high even for modest values of M, K and C. The architecture hence requires a large training set to be used, and it effectively trades training sample size for speed and accuracy.


Pushing the speed-accuracy envelope using this architecture requires making careful structural and algorithmic choices. First, bit functions and image preprocessing should be chosen. Simple functions, suitable for depth images, were tried, and they were extended using gradient- and color-based channels and features. Another type of bit function introduced is spatial bits, stating the rough location of the patch, which enable combining global and local pooling. A second important choice is between conditional computation of bit functions, leading to tree structures, and unconditional computation, as in fern structures. While trees may enable higher accuracy, ferns are better suited for vector processing (such as Streaming SIMD Extensions (SSE) instructions) and thus provide significant speed advantages. The range between these ends was explored empirically using a ‘long tree’ structure, whose configuration enables testing intermediate structures.


Several works have addressed the challenges of learning a tables-based classifier, and they vary in optimization effort from extremely random forests to global optimization of table weights and greedy forward choice of bit functions. The presented approach builds on previous approaches and extends them with new possibilities. The table ensemble is learnt by adding one table at a time, using a framework similar to the ‘anyboost’ algorithm. Training iterates between minimizing a global convex loss, differentiating this loss with regard to examples, and using these gradients to guide construction of the next table. For the global optimization, two main options are presented: a support vector machine (SVM) loss and a softmax loss as commonly used in CNN training. For the optimization of the bit function parameters in a new fern/tree, several options were developed: forward bit selection, iterative bit replacement, and iterative local refinement. In some cases, such as the threshold parameters of certain bits, an algorithm providing the optimal solution is suggested.


Since CTEs may be much faster than CNNs, while the latter excel at accuracy, it is desirable to merge their advantages when possible. In several recent studies, the output of an accurate but computationally expensive classifier is used to train another classifier, with a different and often computationally cheaper architecture. This technique, termed distillation, is used to train a CTE classifier with a CNN teacher, with encouraging results on the MNIST data.


Before explaining at least one embodiment of the exemplary embodiments in detail, it is to be understood that the disclosure is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The disclosure is capable of other embodiments or of being practiced or carried out in various ways.


Referring now to the drawings, FIGS. 1A and 1B illustrate a flowchart of an exemplary process for training an image classifier and classifying images, according to some embodiments of the present disclosure. First, the image classifier is trained using training images data, as shown at 101. Then, the trained classifier is used for sorting sample images received from a capture device, as shown at 102.


Reference is also made to FIG. 2, which is a schematic illustration of an exemplary system for classifying images, according to some embodiments of the present disclosure. An exemplary system 200 includes a capture device 210 for capturing at least one image, one or more hardware processor(s) 220 and a storage medium 230 for storing the code instructions of the classifier. System 200 may be included in one or more computerized devices, for example, a computer, a mobile device, a computerized machine and/or a computerized appliance equipped with and/or attached to capture device 210. Storage medium 230 may include, for example, a digital data storage unit such as a magnetic drive and/or a solid state drive. Storage medium 230 may also be, for example, part of a content delivery network or content distribution network (CDN), which is a large distributed system of servers deployed in multiple data centers across the Internet.


First, the process of classification by the classifier is presented. A convolutional table ensemble is a classifier f: I → {1,...,C}, where I ∈ ℝ^{Sx×Sy×D} and C is the number of classes.


As shown at 103, a sample image is received by processor 220 from capture device 210. Capture device 210 may include, for example, color camera(s), depth camera(s), stereo camera(s), infrared (IR) camera(s), a motion detector, a proximity sensor and/or any other imaging device or combination thereof that captures visual signals.


Optionally, the sample image includes multiple channels. The channels may include, for example, gray scale and color channels, or depth and infra-red (IR) (multiplied by a depth-based mask) for depth images.


Optionally, as shown at 104, the image I may undergo a preparation stage where additional feature channels (‘maps’) may be added to it, so it is transformed to I^e ∈ ℝ^{Sx×Sy×De} with De ≥ D. Exemplary additional channels include:


Gradient-based channels: Two kinds of gradient maps are computed from the original channels. A normalized gradient channel includes the norm of the gradient for each pixel location. In oriented gradient channels, the gradient energy of a pixel is softly quantized into N_O orientation maps.


Integral channels: Integral images of channels from the previous two forms. Applying integral channel bits to these channels allows fast calculation of channel area sums.


Spatial channels: Two channels stating the horizontal and vertical location of a pixel in the image. These channels state the quantized location, using N_H = ⌊log2 Sx⌋ and N_V = ⌊log2 Sy⌋ bits respectively.


Optionally, as shown at 105, after preparation, the channels are smoothed by a convolution with a triangle filter. Spatial channels enable the incorporation of a patch's position in the computed word. They are used with the ‘Get Bit l’ bit function type, with l referring to the higher bits. This effectively puts a spatial grid over the image, thus turning the global summation pooling into local summation using a pyramid-like structure. For example, using two bit functions, checking for the N_H-th horizontal bit and the N_V-th vertical bit, effectively puts a 2×2 grid over the image, where words are summed independently and get different weights for each quarter. Similarly, using 4 spatial bits one gets a 4×4 pyramid, etc. Enforcing a different number of spatial bits in each convolutional table may improve feature diversity and consequently the accuracy.
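A minimal Python sketch of the spatial-bit idea follows; the quantization scheme shown (scaling the pixel coordinate to N_H or N_V bits before taking the highest bits) is an assumption for illustration, as the text above only states the number of bits used:

```python
import math

def spatial_bits(x, y, sx, sy, n_h=1, n_v=1):
    """Return the n_h highest horizontal and n_v highest vertical location bits.

    Taking only the highest bit of each axis splits the image into a 2x2 grid,
    so that words are pooled and weighted separately in each cell; two bits per
    axis give a 4x4 grid, and so on.
    """
    nh = int(math.floor(math.log2(sx)))   # N_H = floor(log2 Sx)
    nv = int(math.floor(math.log2(sy)))   # N_V = floor(log2 Sy)
    qx = (x << nh) // sx                  # assumed quantization to N_H bits
    qy = (y << nv) // sy                  # assumed quantization to N_V bits
    h = [(qx >> (nh - 1 - i)) & 1 for i in range(n_h)]
    v = [(qy >> (nv - 1 - i)) & 1 for i in range(n_v)]
    return h + v
```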


Then, as shown at 106, a codeword is produced for each reference point of the image. The codeword is constructed from binary features, also referred to as bit functions. A bit function is applied to pixels from a patch surrounding the reference point and produces a single bit. Word calculators compute an index descriptor of a patch P ∈ ℝ^{l×l×De} by applying K bit functions, each producing a single bit. Each such function is composed of a simple comparison operation, with a few parameters stating its exact operation. Exemplary bit function forms include:





One pixel: F(P) = σ(P(x,y,d) − t)

Two pixels: F(P) = σ(P(x1,y1,d) − P(x2,y2,d) − t)

Get Bit l: F(P) = (P(0,0,d) >> l) & 1

Integral channel bit: F(P) = σ(P(x1,y1,d) − P(x1,y2,d) − P(x2,y1,d) + P(x2,y2,d) − t)


where σ is the Heaviside step function. The first two bit function types may be applied to any input channel d ∈ {1,...,De}, while the latter two are meaningful only for specific channels.
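The four bit function forms can be sketched in Python as follows; the sketch assumes σ(z) = 1 for z ≥ 0 and an integer-valued channel for the ‘Get Bit l’ form, which are assumptions made for illustration:

```python
def one_pixel(P, x, y, d, t):
    """F(P) = sigma(P(x,y,d) - t): threshold a single pixel value."""
    return int(P[x, y, d] - t >= 0)

def two_pixels(P, x1, y1, x2, y2, d, t):
    """F(P) = sigma(P(x1,y1,d) - P(x2,y2,d) - t): thresholded pixel difference."""
    return int(P[x1, y1, d] - P[x2, y2, d] - t >= 0)

def get_bit_l(P, d, l):
    """Get Bit l: extract the l-th bit of the (integer) spatial channel d."""
    return (int(P[0, 0, d]) >> l) & 1

def integral_channel_bit(P, x1, y1, x2, y2, d, t):
    """Thresholded area sum on an integral-image channel d (four-corner rule)."""
    s = P[x1, y1, d] - P[x1, y2, d] - P[x2, y1, d] + P[x2, y2, d]
    return int(s - t >= 0)
```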


For an image location p = (x, y) ∈ {1,...,Sx}×{1,...,Sy}, its neighborhood is denoted by N(p), and the patch centered at location p is denoted by I_{N(p)}. A word calculator is a feature extractor applied to such a patch and returning a K-bit index, i.e. a function W: ℝ^{l×l×De} → {0,1}^K, where l is the neighborhood size. In a classifier there are M calculators, denoted by B^m = B(•; Θm), where m = 1,...,M and Θm are the calculator parameters. The calculator computes its output by applying K bit-functions to the patch, each producing a single bit.


The structure of the word calculator is a combination of ferns architecture and tree hierarchy architecture. The main design decision in this respect is the choice between ferns and trees. Ferns include only K bit functions, so the number of parameters is relatively small and over-fitting during local optimization is less likely. Trees are a much larger hypothesis family with up to 2K−1 bit functions in a full tree. Thus they are likely to enable higher accuracy, but also be more prone to overfit. This trade-off is explored using a ‘long tree’ structure enabling a gradual interplay between the fern and full tree extremes.


First, as shown at 107, calculation stages are defined, each defining a stage size (the number of bits to include in the tree stage) and a stage split factor determining the number of children of the node. In a long tree, the K bits to compute are divided into N_S stages, with K_s bits computed at stage s = 1,...,N_S, so Σ_{s=1}^{N_S} K_s = K. A tree of depth N_S is built, where a node in stage s contains K_s bit functions computing a K_s-bit word. A node in stage s = 1,...,N_S−1 has q_s children, and it has a child-directing table of size 2^{K_s}, with entries containing child indices in 1,...,q_s.


Then, as shown at 108, the stage size number of binary features is calculated on a patch surrounding the reference point. Computation starts at stage 1, at the root node, and computes the K_s bits of the node.


Then, as shown at 109, all Ks bits are constructed into a word.


Finally, as shown at 110, the produced word is used as an index to the child-directing table, whose output is the index of the child node to descend to. The tree structure is determined by the vectors (K1, . . . , KNS) and (q1, . . . , qNS−1) of stage size and stage split factors respectively.


This is repeated for all calculation stages to produce all K bits.
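The following Python sketch illustrates steps 107-110; the stages/node data layout is a hypothetical representation chosen for illustration, not the disclosure's implementation:

```python
def long_tree_word(patch, stages):
    """Compute a K-bit word by descending a 'long tree'.

    stages[s][node] is a dict with
      'bits'  : list of K_s bit functions (each maps a patch to 0 or 1)
      'table' : child-directing table of size 2**K_s (absent at the last stage)
    Returns the concatenation of all K = sum(K_s) bits as an integer.
    """
    word, node = 0, 0
    for s, level in enumerate(stages):
        stage_word = 0
        for f in level[node]['bits']:          # compute the K_s bits of this node
            b = f(patch)
            stage_word = (stage_word << 1) | b
            word = (word << 1) | b
        if s < len(stages) - 1:                # descend via the child-directing table
            node = level[node]['table'][stage_word]
    return word
```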


When speed is considered, an important point is that ferns may be efficiently implemented using vector operations (like SSE), constructing the word in several locations at the same time. The efficiency arises because computing the same bit function for several contiguous patches involves access to contiguous pixels, which may be done without expensive gather operations. Conversely, for trees, different bit functions are applied at contiguous patches, so the accessed pixels are not contiguous in memory. Therefore, trees may be more accurate, but ferns provide a considerably better accuracy-speed trade-off.


As shown at 111, a vote is calculated for each of the codewords for each of the classes according to a weight matrix. This is done by a convolutional table, which sums the votes of a word calculator over all pixels in an aggregation area. Each word calculator is applied to all locations in an integration area, and each word extracted casts votes for the C output classes. Convolutional table m is hence a triplet (B^m, A_m, W^m), where A_m ⊂ {1,...,Sx}×{1,...,Sy} is the integration area of word calculator B^m and W^m ∈ ℝ^{C×2^K} is its weight matrix. The word calculated at location p is denoted by b_p^m = B(I^e_{N(p)}; Θm). A histogram H^m = (H_0^m,...,H_{2^K−1}^m) is gathered, counting word occurrences; i.e.,






H_b^m = Σ_{p∈A_m} δ(b_p^m − b)  (1)


with δ the discrete delta function. The class support of the convolutional table is the C-element vector W^m (H^m)^t.


As shown at 112, the votes are summed for each of the classes to determine a classification of the image. The ensemble sums the votes of M convolutional tables. The ensemble classification is done by accumulating the class support of all convolutional tables into a linear classifier with a bias term. Let H = [H^1,...,H^M] and W = [W^1,...,W^M] ∈ ℝ^{C×M·2^K}. The classifier's decision is given by






C* = arg max_c (W H^t − T^t)_c  (2)


where T=(T1, . . . , TC) is a vector of class biases. The following algorithm shows the classifier's test time flow:


Classification input: An image I of size Sx×Sy×D, classifier parameters (Θm, A_m, W^{m,c}, T_c), m=1,...,M, c=1,...,C, with A_m ⊂ {1,...,Sx}×{1,...,Sy}, W^{m,c} ∈ ℝ^{2^K}, T_c ∈ ℝ

Classification output: A classifier decision in {1,...,C}

Initialization: For c=1,...,C: Score[c] = −T_c
    • Prepare extended image I^e ∈ ℝ^{Sx×Sy×De}

For all tables m=1,...,M
    • For all pixels p ∈ A_m
        Compute F = B(I^e_{N(p)}; Θm) ∈ {0,1}^K
        For c=1,...,C: Score[c] = Score[c] + W^{m,c}[F]

Return arg max_c Score[c]


Note that the histograms Hm are not accumulated in practice, and instead each word computed directly votes for all classes.
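A direct transcription of this test-time flow into Python might look as follows; extract_patch is a hypothetical helper that crops the l×l×De neighborhood around a pixel, and the data layout is assumed for illustration:

```python
import numpy as np

def cte_classify(image_ext, tables, biases, extract_patch):
    """Test-time flow of a convolutional tables ensemble (see the listing above).

    image_ext: prepared image I^e with all extra channels
    tables:    list of (word_calc, area, W) triplets; word_calc maps a patch to
               a K-bit word, area is a list of (x, y) positions, and W is the
               (C, 2**K) weight matrix of the table
    biases:    (C,) vector T of class biases
    """
    scores = -np.asarray(biases, dtype=float)       # Score[c] = -T_c
    for word_calc, area, W in tables:
        for (x, y) in area:                         # every pixel of the aggregation area
            patch = extract_patch(image_ext, x, y)  # hypothetical patch cropper
            scores += W[:, word_calc(patch)]        # each word votes for all classes
    return int(np.argmax(scores))
```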


Reference is now made to FIG. 3, which is a schematic illustration of an exemplary system for training an image classifier, according to some embodiments of the present disclosure. An exemplary system 300 includes one or more hardware processor(s) 320, as described above, and a storage medium 330, as described above, for storing the code instructions and the training images data.


First, as shown at 113, training images data, labeled according to image classes is received and stored in storage medium 330. The training images data may be received, for example, from standardized image databases, such as MNIST, CIFAR-10, SVHN and/or 3-HANDPOSE.


Then, as shown at 114, multiple reference points of the training images data are selected.


In some previous publications, instances of convolutional tables ensemble were discriminatively optimized for specific tasks and losses, for example hand pose recognition using SVM and/or face alignment using l2 regression. The main idea behind these methods is to iterate between solving a convex problem for a fixed representation, and augmenting the representation based on gradient signals from the obtained solution. The method presented here adapts these ideas to linear M-classification with an arbitrary l2-regularized convex loss function. Assume a labeled training sample with fixed representation {(H_i, y_i)}_{i=1}^N, where H_i ∈ ℝ^{M·2^K} and y_i ∈ {1,...,C}, and denote the c-th row of the weight matrix W by W_c. It is desired to learn a linear classifier of the form C* = arg max_c s^c with s^c = W_c H^t − T_c, by minimizing a sample loss function of the form






L({H_i, y_i}_{i=1}^N) = ½∥W∥² + Λ Σ_{i=1}^N l({s_i^c}_{c=1}^C, y_i)  (3)


with l({s^c}_{c=1}^C, y) a convex function of the s^c. L is strictly convex with a single global minimum, hence solvable using known techniques. Once the problem has been solved for the fixed representation H_i, the representation may be extended by incorporating a new table, effectively adding 2^K new features. In order to choose the new features wisely, it is considered how the loss changes when a new feature f^+ is added to the representation with small class weights, regarded as a small perturbation of the existing model.


The value of a new feature candidate for example i is denoted by f_i^+. After incorporating the new feature, example i's representation changes from H_i to H_i^+ = [H_i, f_i^+], and the weight vectors W_c are augmented to [W_c, w_c^+] with w_c^+ ∈ ℝ. Class scores s_i^c are updated to s_i^{c,+} = W_c^+ (H_i^+)^t − T_c = s_i^c + w_c^+ f_i^+. Finally, the loss is changed to L^+ = L({(H_i^+, y_i)}_{i=1}^N) = ½∥W∥² + ½ Σ_{c=1}^C (w_c^+)² + Λ Σ_{i=1}^N l({s_i^{c,+}}_{c=1}^C, y_i). The new weights vector is denoted W^+ = [w_1^+,...,w_C^+]. Assuming that the new feature is added with small weights, i.e. |w_c^+| ≤ ε for all c, L^+ may be Taylor approximated around W^+ = 0, with the gradient








dL^+/dw_c^+ |_{W^+=0} = [w_c^+ + Σ_{i=1}^N (dl({s_i^{c,+}}, y_i)/ds_i^c) f_i^+] |_{W^+=0} = Σ_{i=1}^N (dl({s_i^c}, y_i)/ds_i^c) f_i^+  (4)







Using the gradient in a Taylor approximation of L+ gives







L^+ = L + W^+ (dL^+/dW^+)^t + O(∥W^+∥²) = L + Σ_{c=1}^C w_c^+ Σ_{i=1}^N (dl({s_i^c}, y_i)/ds_i^c) f_i^+ + O(∥W^+∥²)  (5)








Denote g_i^c = dl({s_i^c}, y_i)/ds_i^c.





For loss minimization, Σ_{c=1}^C w_c^+ Σ_{i=1}^N g_i^c f_i^+ should be minimized over W^+ and f^+. For a fixed f^+, minimizing over W^+ is simple. Denoting R_c(f^+) = Σ_{i=1}^N g_i^c f_i^+, the term Σ_{c=1}^C w_c^+ R_c(f^+) has to be minimized under the constraint |w_c^+| ≤ ε, ∀c. Each term in the sum may be minimized independently to get w_c^{+,opt} = −ε sign(R_c(f^+)), and the value of the minimum is −ε Σ_{c=1}^C |R_c(f^+)|. Hence, for a single feature addition, the score Σ_{c=1}^C |R_c(f^+)| should be maximized.


2^K features are added at once, generated by a new word calculator B. The derivation above may be done for each of them independently, so for the addition of the features {H_b^+}_{b∈{0,1}^K} it is






L^+ ≈ L − ε Σ_{c=1}^C Σ_{b∈{0,1}^K} |R_c(H_b^+)| = L − ε Σ_{c=1}^C Σ_{b∈{0,1}^K} |Σ_{i=1}^N g_i^c Σ_{p∈A^+} δ(b_{i,p}^+ − b)| = L − ε Σ_{c=1}^C Σ_{b∈{0,1}^K} |Σ_{i,p: b_{i,p}^+=b} g_i^c| ≝ L − ε R(B)  (6)


where Equation 1 was used for H_b^+, denoting b_{i,p}^+ = B(I^e_{i,N(p)}). The resulting training algorithm iterates between global classifier optimization and greedy optimization of the next convolutional table by maximizing R(B; Θ). The following listing summarizes the training procedure:


Training input: A labeled training set S = {I_i, y_i}_{i=1}^N, parameters M, K, C, {A_m}_{m=1}^M, convex loss L(S; W, T)

Training output: A classifier (Θm, A_m, W^{m,c}, T_c), m=1,...,M, c=1,...,C

Initialization: g_i^c = 1/|{I_i : y_i^c = 1}| if y_i^c = 1, g_i^c = −1/|{I_i : y_i^c = −1}| if y_i^c = −1
    • Prepare extended images I^e ∈ ℝ^{Sx×Sy×De}

For m=1,...,M
    • Table addition: choose Θm to optimize: Θm = arg max_Θ Σ_c Σ_{b∈{0,1}^K} |Σ_{i,p: b_{i,p}(Θ)=b} g_i^c|
    • Update representation: ∀i=1,...,N, b∈{0,1}^K: H_b^m[I_i] = Σ_p δ(b_{p,i}^m − b), H = [H, H^m]
    • Global optimization: train W, T to optimize: arg min_{W,T} L({H_i, y_i}_{i=1}^N; W, T)
    • If m < M, get loss gradients: g_i^c = dL/ds_i^c
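The iteration between greedy table addition and global convex optimization can be sketched as follows; init_gradients, make_candidate, r_score, word_histograms and solve_global are hypothetical helpers standing in for the steps detailed above, so this is a skeleton rather than a complete implementation:

```python
import numpy as np

def train_cte(images, labels, M, num_candidates,
              init_gradients, make_candidate, r_score, word_histograms, solve_global):
    """Skeleton of the training loop.

    solve_global(H, labels) -> (W, T, grads): solves the convex problem and
    returns the per-example, per-class loss gradients g_i^c.
    """
    H = np.zeros((len(images), 0))        # representation; grows by 2**K columns per table
    grads = init_gradients(labels)        # the +-1/|class| initialization above
    tables = []
    for m in range(M):
        # Table addition: maximize R(B) = sum_c sum_b |sum_{i,p: b_ip=b} g_i^c|
        best = max((make_candidate() for _ in range(num_candidates)),
                   key=lambda B: r_score(B, images, grads))
        tables.append(best)
        H = np.hstack([H, word_histograms(best, images)])   # 2**K new feature columns
        W, T, grads = solve_global(H, labels)               # global convex optimization
    return tables, W, T
```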






Optionally, forward bit function selection is used for the optimization of R(B; Θ). In forward selection, R(B) is optimized by adding one bit after the other. For fern growing there are K such stages. At stage l=1,...,K, {F_j}_{j=1}^{N_c} candidate bit functions are generated, with their type and parameters drawn from a prior distribution. For each j, the current word calculator B is augmented to B^+ = [B, F_j], and the one with the highest score is chosen. However, simple greedy computation of R(B^+) at each stage may not be the best way to optimize R(B), and an auxiliary score which additively normalizes the newly-introduced features does a better job. Denote the patch features of a word calculator B by δ_b(P) = δ(B(P) = b), by δ_{i,p}^b the value of δ_b for pixel p in image i, and by R_c(f) ≝ Σ_{i,p} g_i^c f_{i,p} the score R induced by a patch function f. The addition of a new bit effectively replaces the feature δ_b, for b ∈ {0,...,2^{l−1}−1}, with 2 new features δ_{(b,0)} and δ_{(b,1)}. When the gradients in cell b are not balanced, i.e. Σ_{i,p} δ_{i,p}^b g_i^c = C_0 ≠ 0, as is often the case, a feature δ_{(b,0)} may get a good R_c(δ_{(b,0)}) score of C_0 even when it is constant, or otherwise uninformative. To handle this, a normalized version of the new features is scored, with an average value of 0, which more effectively measures the added information in the new features. The following lemma shows that this is a valid, as well as computationally effective, strategy:


Lemma (1):


Let δ̄_{(b,u)} = δ_{(b,u)} − (#δ_{(b,u)} / #δ_b) δ_b for u = 0,1, where #δ_a = Σ_{i,p} δ_{i,p}^a. The following properties hold:


1. Using δ̄_{(b,1)} and δ_b in a classifier is equivalent to using δ_{(b,0)}, δ_{(b,1)}; i.e., for any weight choice w_0, w_1 there are w_b, w_Δ such that w_0 δ_{(b,0)} + w_1 δ_{(b,1)} = w_b δ_b + w_Δ δ̄_{(b,1)}


2. R_c(δ̄_{(b,0)}) = −R_c(δ̄_{(b,1)})








3. R_c(δ̄_{(b,0)}) = Σ_{i,p} (g_i^c − E[g_i^c | b]) δ_{i,p}^{(b,0)}, with E[g_i^c | b] = (Σ_{i,p} g_i^c δ_{i,p}^b) / #δ_b







Property 1 shows that the δ_b, δ̄_{(b,1)} features may be scored instead of the δ_{(b,u)} features. Since only δ̄_{(b,1)} is affected by the new candidate bit, it is possible to score only those terms when selecting among candidates. Property 3 shows that the gradient may be normalized instead of the feature candidates, which is cheaper (as there are N_c candidates but only a single gradient vector). In summary, the next bit selection is optimized by maximizing






R_Δ([B, F_j]) ≝ Σ_{c=1}^C Σ_{b∈{0,1}^l} |Σ_{i,p: b_{i,p}=(b,1)} (g_i^c − E[g_i^c | b])|  (7)


over the choice of Fj. The calculation requires a single histogram aggregation sweep over all patches (i,p).


Most of the bit functions obtain their bit by comparing an underlying patch measurement to a threshold t. For such functions, the optimal threshold parameter may be found at a small additional computational cost. This is done by sorting the underlying values of F_j and computing the sum over i, p in Equation 7 in the sorted order. This way, a running statistic of the R_Δ score may be maintained, computing the score for all possible thresholds and keeping the best.
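A sketch of this threshold search follows; for brevity it collapses the per-class, per-word structure of R_Δ into a single weight per patch, which is a simplification of the procedure described above:

```python
import numpy as np

def best_threshold(values, weights):
    """Scan all thresholds of a measurement-vs-t bit function in one sorted pass.

    values:  the underlying patch measurements of a candidate bit, one per (i, p)
    weights: the corresponding (normalized) gradient terms
    Returns the threshold maximizing |sum of weights of items with value >= t|.
    """
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    running = np.cumsum(w[::-1])[::-1]   # suffix sums: total weight at or above each cut
    k = int(np.argmax(np.abs(running)))
    return v[k], abs(running[k])
```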


Optionally, for a ‘long tree’ a similar algorithm is employed, with ferns internal to nodes optimized as full ferns, but tree splits requiring special treatment.


As shown at 115, K binary features are constructed on a patch surrounding each reference point, for each calculation stage (node). This is done in two steps: fern growing, as shown at 116, and tree split, as shown at 120.


Fern growing is done, as shown at 116, by iteratively selecting candidate binary features having minimal global loss reduction. A fern with Ks bits is created at stage s. The fern growing is optionally done as shown at 117-119:


First, as shown at 117, multiple candidate binary features are created.


Then, as shown at 118, a global loss reduction is calculated for each of the candidate binary features.


Then, as shown at 119, a binary feature is selected, maximizing the global loss reduction.


Stages 117, 118 and 119 are iterated Ks times to select Ks binary features.


Assuming a node is split in stage s, tree split is performed by selecting a candidate binary features group and creating a child-directing table, as shown at 120. Optionally, the tree split is performed as shown at 121-123:


The current word calculator has already computed an L_s = Σ_{i=1}^s K_i-bit word, of which K_s bits were computed in the current node.


First, as shown at 121, NC candidate binary features are created.


Then, as shown at 122, a combined R loss reduction is calculated for each group of stage-split binary features. A loss reduction matrix M of size 2^{K_s} × N_C is built, stating how each candidate contributes to each word of the tree node. Entry (i,j) includes the loss reduction induced by candidate j on patches mapped to word i.


Then, as shown at 123, one of the groups of binary features, a set of q_s columns S = {i_1,...,i_{q_s}}, is selected which has a maximal value of the combined loss reduction Σ_i max_{j∈S} M(i,j).


Finally, a child-directing table is created using the selected binary features. The corresponding q_s bit functions are taken as the first bit functions of the children, and the child-directing table is built pointing each node word to its best branch.


Since different prefixes of the current calculator B are augmented by different bit functions, the R score has to be decomposed. Denote by a the index set {L_s−K_s+1,...,L_s} of bits computed by the current node, and by b(a) the restriction of a binary word b to the indices a. For a K_s-bit word z ∈ {0,1}^{K_s}, the component of R contributed by words with b(a) = z is defined by






R_{b(a)=z}(B) ≝ Σ_{c=1}^C Σ_{b∈{0,1}^{L_s}, b(a)=z} |Σ_{i,p: b_{i,p}=b} g_i^c|  (8)


A child-directing table and binary features are selected for the children from the candidate binary features, having a maximal value of combined loss reduction. The first bit functions of all the children, as well as the redirection table, are chosen to optimize R. For the tree split, a large set of candidate bits is drawn, and the first bits of the q_s children are chosen by optimizing






Σ_{z∈{0,1}^{K_s}} max_{F_z∈G} R_{b(a)=z}([B, F_z])  (9)


with G the set of chosen bits for the children, and entry z in the redirection table set to the index of the child containing F_z. For this optimization, the score matrix S ∈ ℝ^{2^{K_s}×N_C} is computed, with S(i,j) = R_{b(a)=i}([B, F_j]). Given a choice of G, amounting to a choice of a column subset of S, the optimization over F_z is trivial and the score is easy to compute. Optimization is done over G by exhaustively trying all choices of G with |G| = 2, and then greedily adding columns to G until it contains q_s members.
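The greedy group selection over the score matrix S can be sketched as follows; q is assumed to be at least 2, matching the exhaustive |G| = 2 initialization described above:

```python
import numpy as np

def choose_children(S, q):
    """Choose q columns of S maximizing sum_z max_{j in G} S[z, j].

    S[z, j] = R_{b(a)=z}([B, F_j]): the loss reduction of candidate j on node word z.
    All column pairs are tried for |G| = 2, then columns are added greedily.
    """
    n = S.shape[1]
    score = lambda G: np.maximum.reduce([S[:, j] for j in G]).sum()
    best = list(max(((i, j) for i in range(n) for j in range(i + 1, n)), key=score))
    while len(best) < q:
        rest = (j for j in range(n) if j not in best)
        best.append(max(rest, key=lambda j: score(best + [j])))
    # entry z of the child-directing table points to the child whose bit wins word z
    table = np.argmax(S[:, best], axis=1)
    return best, table
```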


This is optionally repeated for all calculation stages. For each stage, a classifier is trained using the current trees/ferns (the selected binary features), as shown at 124, and a linear approximation of the loss of the current classifier is computed, as shown at 125. Training then iterates back to constructing binary features for another tree/fern.


Optionally, in addition to forward bit selection, an iterative bit replacement is implemented. While the last bit functions in a fern are chosen to complement the previous ones, the bits chosen at the beginning are not optimized to be complementary and may be suboptimal in a long word calculator. The bit replacement algorithm operates after forward bit selection. It runs over all the bit functions several times and attempts to replace each function with several randomly drawn candidates. A replacement step is accepted when it improves the RΔ(B) score.


Optionally, a bit refinement algorithm is implemented. The algorithm attempts to replace a bit function by small perturbations of its parameters, thus effectively implementing a local search. For trees, bit replacement/refinement is done only for bits inside a node, and once a split is made the node parameters are fixed.


Optionally, an SVM-based loss is used for global optimization. In the SVM loss, the sum of C SVM programs is taken, each minimizing a one-versus-all error. Let y_{i,c} = 2δ(y_i = c) − 1 be binary class labels. The loss is






L_SVM = ½∥W∥² + Λ Σ_{c=1}^C Σ_{i=1}^N max(1 − y_{i,c} s_i^c, 0)  (10)


The loss aims for class separation in C independent classifiers. Its advantage lies in the availability of fast and scalable methods for solving large and sparse SVM programs. The loss gradients are g_i^c = −y_{i,c} when example i is a support vector, and 0 otherwise. A first order approximation for min_W L_SVM may be derived for new feature addition, in which the example gradients are −α_i y_{i,c}, with α_i the dual SVM variables at the optimum.


Optionally, a softmax loss is used for global optimization, as is typical in neural network optimization. The softmax loss is:







L_LR = ½∥W∥² − Λ Σ_{i=1}^N log( exp(s_i^{y_i}) / Σ_c exp(s_i^c) )












This loss provides a direct minimization of the M-class error. The gradients are g_i^c = exp(s_i^c)/Σ_{c′} exp(s_i^{c′}) − δ(y_i, c). Conveniently, it may be extended to a distillation loss, which enables guidance of the classifier using an internal representation of a well-trained CNN classifier.


Optionally, each feature column is normalized by the expected count of active examples. Features in a word histogram have significant variance, as some words appear in large quantities in a single image. Without normalization, such words may be arbitrarily preferred due to their lower regularization cost: they may be used with lower weights. Denote the column of a feature across all examples by Col_b^m = (H_{b,1}^m,...,H_{b,N}^m). Normalizing each feature column by the expected count of active examples, L1(Col_b^m)/L0(Col_b^m), may improve accuracy and convergence speed in many cases.
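A minimal sketch of this column normalization, assuming a dense word-histogram matrix H with one row per example:

```python
import numpy as np

def normalize_columns(H):
    """Divide each feature column by its expected count of active examples.

    For a column v the factor is L1(v) / L0(v): the mean count of the word over
    the examples in which it appears at all. Empty columns are left unchanged.
    """
    l1 = np.abs(H).sum(axis=0)                       # L1 norm of each column
    l0 = np.count_nonzero(H, axis=0).astype(float)   # number of active examples
    factor = np.where(l0 > 0, l1 / np.maximum(l0, 1.0), 1.0)
    return H / np.maximum(factor, 1e-12)
```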


Exemplary implementations of embodiments of the present disclosure are now presented, demonstrating the performance gains of the described techniques by comparison with the Discriminative Ferns Ensemble (DFE) method, ablation studies, fern-tree trade-off experiments, and distillation results. Implementations were created and experimented on 4 publicly available object recognition benchmark datasets: MNIST, CIFAR-10, SVHN and 3-HANDPOSE. The first three are standard recognition benchmarks in grayscale (MNIST) or RGB (CIFAR-10, SVHN), with 10 classes each. 3-HANDPOSE is a 4-class dataset, with 3 hand poses and a fourth class of ‘other’, and its images contain depth and IR channels. The image sizes are between 28×28 (MNIST) and 36×36 (3-HANDPOSE). The training set size ranges from 50,000 (CIFAR-10) to 604,000 (SVHN). The training code was written in Matlab, with some routines using code from external packages. The test-time classifier was implemented and optimized in C. For ferns, the classification algorithm was implemented with SSE operations: words are computed for 8 neighboring pixels together, and voting is done for 8 classes at once. For trees, a program generating efficient code for the bit-computation loop of a specific tree was implemented, so the tree parameters are part of the code. This obtained an acceleration factor of 2× over standard C code. The algorithm was also thread-parallelized over the convolutional tables, with a good speed-up of 3.6× obtained from 4 cores. However, single-thread performance is reported and compared, to keep the methodology as simple as possible.


The DFE was suggested for classification of 3-HANDPOSE, and may be seen as a baseline for CTE, which enhances it in many aspects.























Dataset       DFE    CTE base   \Opt TH   \Ftr Norm   \WC opt   \Chnls   \Smooth   \Spatial   \Sp. Enforce
MNIST         0.77   0.45       0.48      0.58        0.48      0.7      0.66      0.51       0.48
CIFAR-10      32.5   20.3       21.3      21.9        21.8      22.0     22.2      21.5       21.0
SVHN          11.9   6.5        7.1       7.1         10.5      13.2     7.2       13.2       7.6
3-HANDPOSE    3.2    2.3        2.1       2.5         3.5       4.4      2.7       2.2        2.2









The first two columns in the above table present errors of DFE and CTE on the 4 datasets, using 50 ferns for MNIST, SVHN and 3-HANDPOSE, and 100 for CIFAR-10. MNIST was trained with a softmax distillation loss (see below for details), and the others with an SVM loss. The aggregation areas {A_m}_{m=1}^M were chosen to be identical for all tables in a classifier, forming a centered square occupying most of the image. To enable the comparison, M-class error rates were extracted from DFE. It may be seen that the CTE base provides significant improvements of 24-45% error reduction over DFE, with 28% obtained for 3-HANDPOSE, where DFE was originally applied. Note that the CTE base is not the best choice for 3-HANDPOSE; with additional parameter tuning, a result of 2% may be obtained with 50 ferns, which is an improvement of 38% over DFE.


The accuracy obtained by a CTE is influenced by many small incremental improvements related to structural and algorithmic variations. Columns 3-9 in the above table show the contribution of some ingredients by removing them from the baseline CTE. For MNIST, where the effects are small due to the low error, results were averaged over 5 experiments varying in their random seed, with a seed-induced std of 0.1%. It may be seen that these ingredients consistently contribute to accuracy for non-depth data.


The trade-off between ferns and trees for MNIST and CIFAR-10 is presented in the following table:


















Dataset     Tree form               Error (%)   Speed (μS)
MNIST       Fern                    0.45        106
            Depth 3, 4-way splits   0.43        398
            Depth 6, 2-way splits   0.39        498
CIFAR-10    Fern                    21.0        800
            Depth 3, 4-way splits   19.7        3111
            Depth 4, 3-way splits   19.3        3544










For MNIST, the results were averaged over 5 experiments, with a seed-induced std of 0.028%. It may be seen that trees provide better accuracy. However, the speed cost of using trees is significant, due to the inability to efficiently vectorize their implementation.


Experiments were made with distillation from a CNN. In such experiments, soft labels are taken from the best CNN model, and a CTE is trained to optimize a convex combination of the standard softmax loss and the Kullback-Leibler distance from the CNN-induced probabilities. This was attempted for MNIST and CIFAR-10 using the best CNN models, providing 0.31% and 8.6% error respectively, as distillation sources. For MNIST, this training methodology proved to be successful. Averaging over 5 seeds, the accuracy of a 50-fern CTE optimized for softmax was 0.66% (std 0.025%) without distillation, and 0.45% (std 0.029%) with distillation. For comparison, an SVM-optimized CTE with the same parameters obtained 0.61% (std 0.04%) error. For CIFAR-10, distillation did not consistently improve the results.
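A sketch of such a distillation objective follows; the convex-combination weight alpha and the exact form of the Kullback-Leibler term are assumptions for illustration, not the disclosure's exact formulation:

```python
import numpy as np

def distillation_loss(scores, labels, teacher_probs, alpha):
    """Convex combination of softmax cross-entropy and KL distance to a teacher.

    scores:        (N, C) class scores s_i^c of the CTE
    labels:        (N,) hard labels
    teacher_probs: (N, C) soft labels produced by the CNN teacher
    alpha:         weight of the hard-label term, in [0, 1]
    """
    z = scores - scores.max(axis=1, keepdims=True)        # stabilized softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    soft = (teacher_probs * (np.log(teacher_probs + 1e-12) - np.log(p + 1e-12))).sum(axis=1).mean()
    return alpha * hard + (1.0 - alpha) * soft
```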


The best accuracy obtainable for a specific speed constraint, and vice versa, is shown by trade-off, or Pareto, curves. Since the design space for variations of CNN and CTE algorithms is huge, and the training time of the algorithms is considerable, it has to be sampled wisely to get a good curve approximation. The presented sampling technique is based on two stages. In stage 1, the most accurate classifiers for CTE and CNN were searched for with loose speed constraints, so even slow classifiers were considered. The few top-accuracy variants of each architecture were then used as baselines and accelerated by systematically varying certain design parameters.


The CNN baseline architectures are variations of Deep-CNiN(l,k), with l=3-4 convolutional layers and k=60-100, implying usage of i·k maps at the i-th layer. Higher l,k values provide better accuracy, but such architectures are much slower than 1 CPU millisecond and so they are outside the current domain of interest. Experiments were made with dropout, parametric ReLU units, affine image transformations, and HSV image transformations. Acceleration of the baseline architectures used three main parameters. The first was reducing the parameter k, controlling the network width. The second was reduction of the number of maps in the output of the NIN layers; this reduces the number of input maps for the next layer, and may dramatically save computation with relatively small loss of accuracy. The third was raising the convolution stride parameter from 1 to 2. For CTEs, the exploration space includes both ferns and trees. The best performing configurations were then accelerated using a single parameter: the number of tables in the ensemble.


Reference is also made to FIG. 4, which includes speed-accuracy trade-off curves for the 4 exemplary datasets, according to some embodiments of the present disclosure. CTE configurations were systematically experimented with to obtain accuracy-speed trade-off graphs for the datasets mentioned. Classification speed in microseconds is displayed along the X-axis in log scale with base 10. For all datasets, there is a high-speed regime where CTEs provide better accuracy than CNNs. Specifically, CTEs are preferable for all datasets when less than 100 microseconds are available for computation. Starting from 1000 microseconds and up, CNNs are usually better, with CTEs still providing comparable accuracy for MNIST and 3-HANDPOSE at the 1-10 milliseconds regime. Viewed conversely, for a wide range of error rates, when an error rate is obtainable by a CTE, it is obtainable with significant speedups over CNNs. Some examples of this phenomenon are given in the following table.















Dataset       Error    CTE (μS)   CNN (μS)   Speedup
MNIST         0.01     4.8        63.9       13.1
CIFAR-10      0.25     168.8      882        5.2
SVHN          0.15     18.4       88         4.7
3-HANDPOSE    0.035    6.3        1250       199.3









Note that while a working point of 0.25 error for CIFAR-10 may seem high, the majority of the one-versus-one errors of such a classifier are lower than 0.05, which may be good enough for many purposes.


It is expected that during the life of a patent maturing from this application many relevant systems and methods for image classification will be developed and the scope of the term image classification is intended to include all such new technologies a priori.


The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”.


The term “consisting of” means “including and limited to”.


The term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only when the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.


As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.


As used herein the term “method” refers to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the chemical, pharmacological, biological, biochemical and medical arts.


According to an aspect of some embodiments of the present invention there is provided a computer implemented method of training an image classifier using training images data, comprising: receiving training images data, the training images data is labeled according to image classes; selecting a plurality of reference points of the training images; and constructing a set of voting convolutional tables and binary features on a patch surrounding each of the plurality of reference points by: creating a voting table by: creating a first plurality of candidate binary features; calculating a global loss reduction for each of the first plurality of candidate binary features; selecting one of the first plurality of candidate binary features having minimal global loss reduction; and repeating the creating, the calculating and the selecting to select stage-size binary features; and performing a tree split using the voting table by: creating a second plurality of candidate binary features; calculating a combined loss reduction for each stage-split size group of the second plurality of candidate binary features; selecting one of the groups of binary features having a maximal value of the combined loss reduction; and creating a child-directing table using the selected binary features; and repeating the creating and the performing for each of a plurality of calculation stages.


Optionally, the method further comprises: repeating the constructing for each of a plurality of trees.


Optionally, the classifier is used for classifying sample image data by: receiving a sample image from a capture device; for each of a plurality of reference points of the image, producing a codeword by: creating the plurality of calculation stages, each defining the stage split, stage size and the child-directing table; calculating the stage size number of binary features on a patch surrounding the reference point, for a first of the plurality of calculation stages; adding all the binary features to a codeword; using the codeword as an index to the child-directing table to produce an index for a next of the plurality of calculation stages; repeating the calculating, the adding and the using for each of the plurality of calculation stages; and calculating a vote of each of the plurality of codewords to each of a plurality of classes according to a weight matrix; and summing the votes for each of the plurality of classes to determine a classification of the image.


More optionally, the method further comprises: repeating the producing for each of a plurality of trees.


More optionally, the image includes a plurality of channels.


More optionally, the method further comprises, after the receiving: preparing the image to include additional channels.


More optionally, the method further comprises, after the receiving: smoothing at least one channel of the image by a convolution with a triangle filter.


Optionally, the method further comprises: replacing at least one of the binary features with one of the candidate binary features that optimizes calculation of the codeword.


Optionally, the method further comprises: replacing at least one of the binary features with a binary feature differing by a small perturbation of parameters.


Optionally, the constructing includes iterating between gradient based word calculator optimization and global optimization of table weights.


More optionally, the global optimization is based on a bit refinement algorithm.


More optionally, the global optimization is based on a support vector machine (SVM) loss.


Optionally, the training images data is an output of a convolutional neural network (CNN) classifier.


According to an aspect of some embodiments of the present invention there is provided a system of training an image classifier using training images data, comprising: a memory storing training images data, the training images data being labeled according to image classes; a code store storing a code; at least one processor coupled to the memory and the code store for executing the stored code, the code comprising: code instructions to receive the training images data; code instructions to select a plurality of reference points of the training images; and code instructions to construct a set of voting convolutional tables and binary features on a patch surrounding each of the plurality of reference points by: creating a voting table by: creating a first plurality of candidate binary features; calculating a global loss reduction for each of the first plurality of candidate binary features; selecting one of the first plurality of candidate binary features having minimal global loss reduction; and repeating the creating, the calculating and the selecting to select stage-size binary features; and performing a tree split using the voting table by: creating a second plurality of candidate binary features; calculating a combined loss reduction for each stage-split size group of the second plurality of candidate binary features; selecting one of the groups of binary features having a maximal value of the combined loss reduction; and creating a child-directing table using the selected binary features; and repeating the creating and the performing for each of a plurality of calculation stages.


Optionally, the system further comprises: code instructions to repeat the code instructions to construct a set of voting convolutional tables and binary features, for each of a plurality of trees.


Optionally, the system further comprises: code instructions to replace at least one of the binary features with one of the candidate binary features that optimizes calculation of the codeword.


Optionally, the system further comprises: code instructions to replace at least one of the binary features with a binary feature differing by a small perturbation of parameters.


Optionally, the code instructions to construct a set of voting convolutional tables and binary features includes iterating between gradient based word calculator optimization and global optimization of table weights.


According to an aspect of some embodiments of the present invention there is provided a software program product for training an image classifier using training images data, comprising: a non-transitory computer readable storage medium; first program instructions for receiving training images data, the training images data being labeled according to image classes; second program instructions for selecting a plurality of reference points of the training images; third program instructions for creating a first plurality of candidate binary features; fourth program instructions for calculating a global loss reduction for each of the plurality of candidate binary features; and fifth program instructions for selecting one of the first plurality of candidate binary features having minimal global loss reduction; sixth program instructions for repeating the third, fourth and fifth program instructions to select stage-size binary features; seventh program instructions for creating a second plurality of candidate binary features; eighth program instructions for calculating a combined loss reduction for each stage-split size group of the second plurality of candidate binary features; ninth program instructions for selecting one of the groups of binary features having a maximal value of combined loss reduction; tenth program instructions for creating a child-directing table using the selected binary features; eleventh program instructions for repeating the third, fourth, fifth, sixth, seventh, eighth, ninth and tenth program instructions for each of the plurality of calculation stages; twelfth program instructions for repeating the second, third, fourth, fifth and sixth program instructions for each of the plurality of reference points; wherein the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh and twelfth program instructions are executed by at least one computerized processor from the non-transitory computer readable storage medium.


Optionally, the software program product further comprises: thirteenth program instructions for repeating the third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh and twelfth program instructions for each of a plurality of trees; wherein the thirteenth program instructions are executed by the at least one computerized processor.


Certain features of the examples described herein, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the examples described herein, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the disclosure. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.

Claims
  • 1. A computer implemented method of training an image classifier using training images data, comprising: receiving training images data, said training images data being labeled according to image classes;selecting a plurality of reference points of said training images data; anditeratively constructing a set of voting convolutional tables and binary features on a patch surrounding each of said plurality of reference points, for each of a plurality of calculation stages, by: creating a voting table to select stage-size binary features by iteratively selecting candidate binary features having minimal global loss reduction; andperforming a tree split using said voting table by selecting a candidate binary features group and creating a child-directing table.
  • 2. The method of claim 1, wherein said creating a voting table includes iteratively: creating a first plurality of candidate binary features; calculating a global loss reduction for each of said first plurality of candidate binary features; and selecting one of said first plurality of candidate binary features having minimal global loss reduction.
  • 3. The method of claim 1, wherein said performing a tree split includes: creating a second plurality of candidate binary features; calculating a combined loss reduction for each stage-split size group of said second plurality of candidate binary features; selecting one of said groups of binary features having a maximal value of said combined loss reduction; and creating a child-directing table using said selected binary features.
  • 4. The method of claim 1, further comprising: iteratively performing said constructing for each of a plurality of trees.
  • 5. The method of claim 1, wherein said classifier is used for classifying sample image data by: receiving a sample image from a capture device; for each of a plurality of reference points of said image, producing a codeword by iteratively: creating said plurality of calculation stages, each defining a stage split, a stage size and a child-directing table; calculating said stage size number of binary features on a patch surrounding said reference point, for a first of said plurality of calculation stages; adding all said binary features to a codeword; and using said codeword as an index to said child-directing table to produce an index for a next of said plurality of calculation stages; calculating a vote of each of said plurality of codewords to each of a plurality of classes according to a weight matrix; and summing said votes for each of said plurality of classes to determine a classification of said image.
  • 6. The method of claim 5, further comprising: repeating said producing for each of a plurality of trees.
  • 7. The method of claim 5, wherein said image includes a plurality of channels.
  • 8. The method of claim 5, further comprising, after said receiving: preparing said image to include additional channels.
  • 9. The method of claim 5, further comprising, after said receiving: smoothing at least one channel of said image by a convolution with a triangle filter.
  • 10. The method of claim 1, further comprising: replacing at least one of said binary features with one of said candidate binary features that optimizes calculation of said codeword.
  • 11. The method of claim 1, further comprising: replacing at least one of said binary features with a binary feature differing by a small perturbation of parameters.
  • 12. The method of claim 1, wherein said constructing includes iterating between gradient based word calculator optimization and global optimization of table weights.
  • 13. The method of claim 12, wherein said global optimization is based on a bit refinement algorithm.
  • 14. The method of claim 12, wherein said global optimization is based on a support vector machine (SVM) loss.
  • 15. The method of claim 1, wherein said training images data is an output of a convolutional neural network (CNN) classifier.
  • 16. A system of training an image classifier using training images data, comprising: a memory storing training images data, said training images data being labeled according to image classes; a code store storing a code; at least one processor coupled to said memory and said code store for executing said stored code, said code comprising: code instructions to receive said training images data; code instructions to select a plurality of reference points of said training images data; and code instructions to iteratively construct a set of voting convolutional tables and binary features on a patch surrounding each of said plurality of reference points, for each of a plurality of calculation stages, by: creating a voting table to select stage-size binary features by iteratively selecting candidate binary features having minimal global loss reduction; and performing a tree split using said voting table by selecting a candidate binary features group and creating a child-directing table.
  • 17. The system of claim 16, wherein said creating a voting table includes iteratively: creating a first plurality of candidate binary features; calculating a global loss reduction for each of said first plurality of candidate binary features; and selecting one of said first plurality of candidate binary features having minimal global loss reduction.
  • 18. The system of claim 16, wherein said performing a tree split includes: creating a second plurality of candidate binary features; calculating a combined loss reduction for each stage-split size group of said second plurality of candidate binary features; selecting one of said groups of binary features having a maximal value of said combined loss reduction; and creating a child-directing table using said selected binary features.
  • 19. The system of claim 16, further comprising: code instructions to repeat said code instructions to iteratively construct a set of voting convolutional tables and binary features for each of a plurality of trees.
  • 20. A software program product for training an image classifier using training images data, comprising: a non-transitory computer readable storage medium; first program instructions for receiving training images data, said training images data being labeled according to image classes; second program instructions for selecting a plurality of reference points of said training images; third program instructions for iteratively constructing a set of voting convolutional tables and binary features on a patch surrounding each of said plurality of reference points, for each of a plurality of calculation stages, by: creating a voting table to select stage-size binary features by iteratively selecting candidate binary features having minimal global loss reduction; and performing a tree split using said voting table by selecting a candidate binary features group and creating a child-directing table; wherein said first, second and third program instructions are executed by at least one computerized processor from said non-transitory computer readable storage medium.
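As a rough illustration of the classification flow recited in claim 5, the sketch below walks the 'long tree': at each calculation stage it evaluates the stage-size bit functions selected for the current child, appends the resulting bits to the codeword, and uses the stage bits as an index into the child-directing table to pick the child for the next stage; finally every codeword votes through a weight matrix and the class with the highest sum of votes is selected. All names (Stage, codeword_for_patch, classify), the per-child layout of the bit functions, and the patch geometry are our assumptions, not the patented implementation.

import numpy as np
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    # One list of stage-size bit functions per child of the previous stage.
    bit_functions: List[List[Callable[[np.ndarray], int]]]
    # Maps the stage bits (an integer of stage-size bits) to a child index.
    child_table: np.ndarray

def codeword_for_patch(patch, stages):
    # Concatenate the binary features of every stage into one codeword.
    child, word = 0, 0
    for stage in stages:
        fns = stage.bit_functions[child]
        bits = 0
        for f in fns:
            bits = (bits << 1) | f(patch)
        word = (word << len(fns)) | bits          # append this stage's bits
        child = int(stage.child_table[bits])      # index for the next stage
    return word

def classify(image, reference_points, stages, weights, patch_size=8):
    # weights: (n_codewords, n_classes) voting matrix learned in training.
    votes = np.zeros(weights.shape[1])
    for y, x in reference_points:
        patch = image[y:y + patch_size, x:x + patch_size]
        votes += weights[codeword_for_patch(patch, stages)]
    return int(np.argmax(votes))  # class with the highest sum of votes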
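Claims 8 and 9 recite optional pre-processing of the received image (additional channels, and smoothing of a channel by convolution with a triangle filter). One way such a smoothing might look is sketched below; the separable row-then-column formulation and the radius parameter are our assumptions.

import numpy as np

def smooth_with_triangle(channel, radius=2):
    # Build a 1-D triangle kernel, e.g. radius=2 -> [1, 2, 3, 2, 1] / 9.
    k = np.concatenate([np.arange(1, radius + 2), np.arange(radius, 0, -1)])
    k = k.astype(float) / k.sum()
    # Apply the kernel separably: first along rows, then along columns.
    rows = np.apply_along_axis(np.convolve, 1, channel, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")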
RELATED APPLICATIONS

This application is related to co-filed, co-pending and co-assigned U.S. patent applications entitled “HAND GESTURE API USING FINITE STATE MACHINE AND GESTURE LANGUAGE DISCRETE VALUES” (Attorney Docket No. 63958), “MULTIMODAL INTERACTION USING A STATE MACHINE AND HAND GESTURES DISCRETE VALUES” (Attorney Docket No. 63959), “RECOGNITION OF HAND POSES BY CLASSIFICATION USING DISCRETE VALUES” (Attorney Docket No. 63960), “TRANSFORM LIGHTWEIGHT SKELETON AND USING INVERSE KINEMATICS TO PRODUCE ARTICULATE SKELETON” (Attorney Docket No. 63961), “TRANSLATION OF GESTURE TO GESTURE CODE DESCRIPTION USING DEPTH CAMERA” (Attorney Docket No. 63966), “GESTURES VISUAL BUILDER TOOL” (Attorney Docket No. 63967), “ELECTRICAL DEVICE FOR HAND GESTURES DETECTION” (Attorney Docket No. 63970) and “DETECTION OF HAND GESTURES USING GESTURE LANGUAGE DISCRETE VALUES” (Attorney Docket No. 63971), the disclosures of which are incorporated herein by reference.