SPARSE BINARY REPRESENTATION FOR SELF SUPERVISED INFORMATION EXTRACTION

Information

  • Publication Number
    20230297815
  • Date Filed
    March 15, 2023
  • Date Published
    September 21, 2023
  • CPC
    • G06N3/0455
    • G06N3/0985
  • International Classifications
    • G06N3/0455
    • G06N3/0985
Abstract
A method for generating a sparse binary representation (SBR) of neural network intermediate features (NNIFs) of a neural network (NN). The method includes (i) feeding the neural network by input information; (ii) neural network processing the input information to provide, at least, the NNIFs; (iii) SBR processing, by a SBR module, the NNIFs, to provide the SBR representation of the NNIFs; and (iv) outputting the SBR representation. The SBR module has undergone a training process that used a loss function that takes into account a sparsity of training process SBR representations.
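As a rough illustration of the pipeline the abstract describes, the following is a minimal sketch in PyTorch: a backbone network is fed input information, intermediate features (NNIFs) are tapped from one of its layers, and an SBR module consisting of a learned encoder followed by a thresholding unit turns them into a sparse binary code. The module and variable names (SBRModule, backbone, threshold) and the choice of a linear encoder with a fixed 0.5 threshold are illustrative assumptions, not details taken from the application.

```python
import torch
import torch.nn as nn

class SBRModule(nn.Module):
    """Illustrative SBR module: a learned encoder followed by a thresholding unit."""
    def __init__(self, feature_dim: int, code_dim: int, threshold: float = 0.5):
        super().__init__()
        self.encoder = nn.Linear(feature_dim, code_dim)
        self.threshold = threshold

    def forward(self, nnifs: torch.Tensor) -> torch.Tensor:
        activations = torch.sigmoid(self.encoder(nnifs))   # encoder output squashed to (0, 1)
        return (activations > self.threshold).float()      # thresholding unit -> sparse binary code

# Hypothetical usage: tap the NNIFs from one layer of a small backbone network.
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
sbr = SBRModule(feature_dim=256, code_dim=64)

x = torch.randn(4, 128)                 # input information fed to the neural network
nnifs = backbone[1](backbone[0](x))     # intermediate features taken after the first two layers
sbr_code = sbr(nnifs)                   # the outputted SBR representation of the NNIFs
```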
Claims
  • 1. A method for generating a sparse binary representation (SBR) of neural network intermediate features (NNIFs) of a neural network (NN), the method comprises: feeding the neural network by input information; neural network processing the input information to provide, at least, the NNIFs; SBR processing, by a SBR module, the NNIFs, to provide the SBR representation of the NNIFs; and outputting the SBR representation; wherein the SBR module has undergone a training process that used a loss function that takes into account a sparsity of training process SBR representations.
  • 2. The method according to claim 1 wherein the SBR module comprises an encoder that is followed by a thresholding unit.
  • 3. The method according to claim 2, comprising training the SBR module.
  • 4. The method according to claim 3, wherein the training comprises performing multiple training iterations; wherein a training iteration comprises: receiving by the SBR module a set of training process NNIFs; generating, by the SBR module, a training process SBR representation; feeding the training process SBR representation to a decoder to provide a set of reconstructed training process NNIFs; applying the loss function to provide a loss function value; wherein the loss function value is based on the sparsity of the training process SBR representation and on an accuracy of the set of reconstructed training process NNIFs.
  • 5. The method according to claim 4 comprising amending the encoder and the decoder based on the loss function value.
  • 6. The method according to claim 4 wherein the generating of the training process SBR representation comprises calculating, by the encoder, a signature of the set of training process NNIFs.
  • 7. The method according to claim 4 comprising evaluating an amount of irrelevant bits within the training process SBR representation.
  • 8. The method according to claim 7, comprising changing at least one hyper parameter and performing additional testing iterations when the amount of irrelevant bits exceeds a threshold.
  • 9. The method according to claim 4 wherein the sparsity of the training process SBR representation is less significant than the accuracy of the set of reconstructed training process NNIFs.
  • 10. The method according to claim 1 wherein the NNIFs are outputted from one or more layers of the NN.
  • 11. The method according to claim 1 wherein the NNIFs are selected based on one or more objects of interest to be represented by the SBR representation of the NNIFs.
  • 12. The method according to claim 1 comprising performing an autonomous driving operation based on the SBR representation of the NNIFs.
  • 13. The method according to claim 1, wherein the SBR module comprises an encoder that is followed by a thresholding unit; wherein the training process comprises performing multiple training iterations; wherein a training iteration comprises: receiving by the SBR module a set of training process NNIFs; generating, by the SBR module, a training process SBR representation; feeding the training process SBR representation to a decoder to provide a set of reconstructed training process NNIFs; applying the loss function to provide a loss function value; wherein the loss function value is based on the sparsity of the training process SBR representation and on an accuracy of the set of reconstructed training process NNIFs.
  • 14. A non-transitory computer readable medium for generating a sparse binary representation (SBR) of neural network intermediate features (NNIFs) of a neural network (NN), the non-transitory computer readable medium stores instructions for: feeding the neural network by input information; neural network processing the input information to provide, at least, the NNIFs; SBR processing, by a SBR module, the NNIFs, to provide the SBR representation of the NNIFs; and outputting the SBR representation; wherein the SBR module has undergone a training process that used a loss function that takes into account a sparsity of training process SBR representations.
  • 15. The non-transitory computer readable medium according to claim 14, wherein the SBR module comprises an encoder that is followed by a thresholding unit.
  • 16. The non-transitory computer readable medium according to claim 15, wherein the training process comprises performing multiple training iterations; wherein a training iteration comprises: receiving by the SBR module a set of training process NNIFs; generating, by the SBR module, a training process SBR representation; feeding the training process SBR representation to a decoder to provide a set of reconstructed training process NNIFs; applying the loss function to provide a loss function value; wherein the loss function value is based on the sparsity of the training process SBR representation and on an accuracy of the set of reconstructed training process NNIFs.
  • 17. The non-transitory computer readable medium according to claim 16, wherein the training process comprises evaluating an amount of irrelevant bits within the training process SBR representation.
  • 18. The non-transitory computer readable medium according to claim 14, wherein the sparsity of the training process SBR representation is less significant than the accuracy of the set of reconstructed training process NNIFs.
  • 19. The non-transitory computer readable medium according to claim 14, wherein the NNIFs are selected based on one or more objects of interest to be represented by the SBR representation of the NNIFs.
  • 20. The non-transitory computer readable medium according to claim 14, that stores instructions for performing an autonomous driving operation based on the SBR representation of the NNIFs.
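The training iteration laid out in claims 2 through 9 (and mirrored in claims 13 and 16) can be sketched as a self-supervised autoencoder-style loop: the encoder and thresholding unit produce a training-process SBR representation, a decoder reconstructs the NNIFs from it, and the loss combines reconstruction accuracy with a less heavily weighted sparsity term, after which the encoder and decoder are amended. The sketch below assumes details the claims do not specify: a straight-through estimator for the non-differentiable thresholding step, an MSE reconstruction loss, an L1-style sparsity penalty, the value of sparsity_weight, and random tensors standing in for real training-process NNIFs.

```python
import torch
import torch.nn as nn

class TrainableSBR(nn.Module):
    """Encoder plus thresholding unit, with a straight-through estimator so gradients
    can flow back through the binarization during training (an assumed detail)."""
    def __init__(self, feature_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Linear(feature_dim, code_dim)

    def forward(self, nnifs: torch.Tensor) -> torch.Tensor:
        soft = torch.sigmoid(self.encoder(nnifs))
        hard = (soft > 0.5).float()
        return hard + soft - soft.detach()   # binary values forward, soft gradients backward

feature_dim, code_dim = 256, 64
sbr = TrainableSBR(feature_dim, code_dim)
decoder = nn.Linear(code_dim, feature_dim)   # reconstructs the NNIFs from the SBR code
optimizer = torch.optim.Adam(list(sbr.parameters()) + list(decoder.parameters()), lr=1e-3)

sparsity_weight = 0.1   # sparsity weighted less than reconstruction accuracy (cf. claim 9)

for step in range(1000):
    nnifs = torch.randn(32, feature_dim)               # stand-in for a set of training process NNIFs
    code = sbr(nnifs)                                   # training process SBR representation
    recon = decoder(code)                               # set of reconstructed training process NNIFs
    recon_loss = nn.functional.mse_loss(recon, nnifs)   # accuracy of the reconstruction
    sparsity_loss = code.abs().mean()                   # fraction of active bits (sparsity term)
    loss = recon_loss + sparsity_weight * sparsity_loss # loss function value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                    # amend the encoder and the decoder
```

In this sketch, monitoring how many code bits never activate (or never vary) across a validation batch would correspond to evaluating the amount of irrelevant bits in claims 7 and 8, after which hyper parameters such as code_dim or sparsity_weight could be changed and the iterations rerun; that monitoring step is not shown here.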
Provisional Applications (1)
Number: 63269449, Date: Mar 2022, Country: US