Information
- Patent Application: 20230296516
- Publication Number: 20230296516
- Date Filed: February 17, 2023
- Date Published: September 21, 2023
Abstract
Artificial intelligence driven signal enhancement of sequencing images enables enhanced sequencing by synthesis that determines a sequence of bases in genetic material with any one or more of: improved performance, improved accuracy, and/or reduced cost. A training set of images taken at unreduced and reduced power levels used to excite fluorescence during sequencing by synthesis is used to train a neural network to enable the neural network to recover enhanced images, as if taken at the unreduced power level, from unenhanced images taken at the reduced power level.
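The recovery step described in the abstract can be sketched minimally as follows. This is an illustration only: a single learned gain stands in for the trained neural network, and the 2:1 power reduction ratio is taken from the claims.

```python
import numpy as np

# Minimal sketch of the recovery described in the abstract: a trained model
# maps an image taken at reduced excitation power to an enhanced image, as if
# taken at the unreduced power level. A single learned gain stands in for the
# neural network; the real method would apply the trained filters.
def enhance(reduced_img, learned_gain):
    return learned_gain * reduced_img

reduced = np.array([[10.0, 20.0], [30.0, 40.0]])   # reduced-power image
enhanced = enhance(reduced, learned_gain=2.0)      # undo a 2:1 power reduction
```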
Claims
- 1. A method of reducing excitation power used to produce fluorescence and collected images during sequencing, the method comprising:
accessing a training set of paired images taken at an unreduced power level and a reduced power level used to excite fluorescence during a sequencing operation;
wherein a power reduction ratio between the unreduced power level, before reduction, and the reduced power level, after reduction, is at least 2 to 1;
training a convolutional neural network comprising a generative adversarial network that has a generator stage and a discriminator stage each updating respective pluralities of parameters during the training, the plurality of parameters of the generator stage enabling substantially recovering enhanced images, as if taken at the unreduced power level, from unenhanced images taken at the reduced power level, after reduction;
whereby trained filters of the convolutional neural network enable adding information to images taken at the reduced power level to enable production of the enhanced images; and
saving the trained filters for use processing collected images from sequencing at the reduced power level.
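The training flow of claim 1 can be illustrated with a toy example. Two simplifications are assumed here: the generator is a single 3 × 3 convolution filter rather than a full network, and plain mean squared error replaces the claimed adversarial (GAN) objective to keep the sketch short. The image sizes, learning rate, and 2:1 degradation model are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, k):
    """Valid-mode 2D correlation with a 3x3 filter."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# Paired training data: unenhanced reduced-power input, unreduced-power target.
target = rng.uniform(0.0, 1.0, size=(16, 16))
inp = target / 2.0 + rng.normal(0.0, 0.01, size=(16, 16))  # 2:1 power reduction

k = rng.normal(0.0, 0.1, size=(3, 3))  # the "trained filter" being learned
lr = 0.5
for _ in range(300):
    err = conv2d(inp, k) - target[1:-1, 1:-1]
    grad = np.zeros_like(k)  # gradient of MSE w.r.t. each filter tap
    for a in range(3):
        for b in range(3):
            grad[a, b] = 2.0 * np.mean(err * inp[a:a + 14, b:b + 14])
    k -= lr * grad

trained_filters = {"all_cycles": k}  # "saving the trained filters"
```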
- 2. The method of claim 1, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at every cycle of the plurality of imaging cycles.
- 3. The method of claim 1, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at less than every cycle of the plurality of imaging cycles.
- 4. The method of claim 1, wherein the generator stage provides candidate enhanced images to the discriminator stage.
- 5. The method of claim 1, wherein the convolutional neural network is a training convolutional neural network, and the generator stage is a training generator stage, and further comprising accessing production images taken at the reduced power level and using information of the trained filters in a production convolutional neural network that has a production generator stage to enhance the production images as if taken at the unreduced power level.
- 6. The method of claim 1, further comprising creating collected images at the unreduced and the reduced power levels by controlling excitation power produced by one or more lasers and used to produce fluorescence.
- 7. The method of claim 1, wherein the reduced power level is controlled by using an acousto-optic modulator positioned in a transmission path between a laser source and a slide on which samples are sequenced.
- 8. The method of claim 1, further comprising creating collected images at the unreduced and the reduced power levels by controlling a number of photons reaching a sensor from a slide on which samples are sequenced, so that samples for the unreduced and the reduced power levels are collected in a single cycle of sequencing by synthesis.
- 9. The method of claim 1, further comprising creating collected images at the unreduced and the reduced power levels by collecting a first sample for the unreduced power level from a slide and synthetically impairing the first sample to produce a second sample.
- 10. The method of claim 1, further comprising base calling from the enhanced images.
- 11. The method of claim 1, wherein the convolutional neural network comprises any combination of any one or more of
one or more 1D convolutional layers, one or more 2D convolutional layers, one or more 3D convolutional layers, one or more 4D convolutional layers, one or more 5D convolutional layers, one or more multi-dimensional convolutional layers, one or more single channel convolutional layers, one or more multi-channel convolutional layers, one or more 1 × 1 convolutional layers, one or more atrous convolutional layers, one or more transpose convolutional layers, one or more depthwise separable convolutional layers, one or more pointwise convolutional layers, one or more group convolutional layers, one or more flattened convolutional layers, one or more spatial convolutional layers, one or more spatially separable convolutional layers, one or more cross-channel convolutional layers, one or more shuffled grouped convolutional layers, one or more pointwise grouped convolutional layers, one or more upsampling layers, one or more downsampling layers, one or more averaging layers, and one or more padding layers.
- 12. The method of claim 1, wherein the training comprises determining one or more loss terms comprising any combination of any one or more of a logistic regression/log loss, a multi-class cross-entropy/softmax loss, a binary cross-entropy loss, a mean squared error loss, a mean absolute error loss, a mean absolute percentage error loss, a mean squared logarithmic error loss, an L1 loss, an L2 loss, a smooth L1 loss, a Huber loss, a patch-based loss, a pixel-based loss, a pixel-wise loss, a single-image loss, an adversarial loss, and a fiducial-based loss.
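Several of the loss terms listed in claim 12 would typically be combined into one weighted training objective. The sketch below combines an L1 term, an L2 term, and a binary cross-entropy adversarial term; the weights and the use of a sigmoid discriminator score are illustrative assumptions, not taken from the claims.

```python
import numpy as np

# Hedged sketch: combining several of the listed loss terms into one
# objective for the generator.
def combined_loss(pred, target, disc_score_on_pred, w_l1=1.0, w_l2=1.0, w_adv=0.1):
    l1 = np.mean(np.abs(pred - target))         # mean absolute error / L1 loss
    l2 = np.mean((pred - target) ** 2)          # mean squared error / L2 loss
    # Adversarial term: the generator wants the discriminator to score its
    # output as "real" (score near 1); binary cross-entropy form.
    adv = -np.log(disc_score_on_pred + 1e-12)
    return w_l1 * l1 + w_l2 * l2 + w_adv * adv
```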
- 13. The method of claim 1, wherein the convolutional neural network is a training convolutional neural network comprised in a training sequencing by synthesis instrument and further comprising training a production convolutional neural network comprised in a production sequencing by synthesis instrument, the training the production convolutional neural network starting with information of the trained filters and updating parameters of the production convolutional neural network based on processing fiducial elements of tuning images obtained via the production sequencing by synthesis instrument and wherein the tuning images are taken at the reduced power level.
- 14. The method of claim 1, wherein the sequencing by synthesis comprises a plurality of cycles, each cycle corresponding to a single base call for each of a plurality of oligos, each cycle occurring one after another sequentially, and the training is performed with respect to a plurality of contiguous non-overlapping ranges of the cycles, resulting in a plurality of trained filters each corresponding to a respective one of the non-overlapping cycle ranges.
- 15. The method of claim 1, further comprising determining image quality of the enhanced images, and responsive to the quality being below a threshold, recapturing one or more of the images taken at the reduced power level using the unreduced power level.
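The recapture logic of claim 15 might look like the following sketch. The variance-based quality metric, the threshold value, and the tile names are assumptions made for illustration; the claim does not specify how quality is scored.

```python
import numpy as np

# Sketch of the quality gate in claim 15: score each enhanced image and flag
# tiles for recapture at the unreduced power level when the score falls
# below a threshold.
def needs_recapture(enhanced_img, threshold=10.0):
    return float(np.var(enhanced_img)) < threshold

flat_tile = np.full((8, 8), 5.0)            # no contrast: low quality
sharp_tile = np.arange(64.0).reshape(8, 8)  # strong signal variation
recapture_list = [name for name, img in [("tile_a", flat_tile), ("tile_b", sharp_tile)]
                  if needs_recapture(img)]
```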
- 16. The method of claim 1, further comprising pretraining the convolutional neural network using pretraining images taken at a power level that is greater than the reduced power level and less than the unreduced power level.
- 17. The method of claim 1, wherein each of the images taken at the reduced power level is produced by capturing multiple images of a same tile with a TDI sub-pixel imager and then processing the multiple images with an AI model to produce the respective image taken at the reduced power level.
- 18. A method of reducing excitation power used to produce fluorescence and collected images during sequencing, the method comprising:
accessing a training set of paired images taken at an unreduced power level and a reduced power level used to excite fluorescence during a sequencing operation;
wherein a power reduction ratio between the unreduced power level, before reduction, and the reduced power level, after reduction, is at least 2 to 1;
training a convolutional neural network that has an encoder stage and a decoder stage each updating respective pluralities of parameters during the training, the respective pluralities of parameters collectively enabling substantially recovering enhanced images, as if taken at the unreduced power level, from unenhanced images taken at the reduced power level, after reduction;
whereby trained filters of the convolutional neural network enable adding information to images taken at the reduced power level to enable production of the enhanced images; and
saving the trained filters for use processing collected images from sequencing by synthesis at the reduced power level.
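The encoder/decoder shape claimed here, with the intermediate representation of claim 22 and the skip connections of claim 21, can be sketched minimally as below. The 2× average-pool downsample, nearest-neighbour upsample, and averaging skip connection are illustrative assumptions standing in for learned layers.

```python
import numpy as np

# Minimal sketch of the claimed encoder/decoder: the encoder downsamples to
# an intermediate representation, the decoder upsamples it back, and a skip
# connection carries encoder detail to the output.
def encoder(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x downsample

def decoder(code, skip=None):
    up = np.repeat(np.repeat(code, 2, axis=0), 2, axis=1)       # 2x upsample
    return up if skip is None else 0.5 * (up + skip)            # skip connection

img = np.arange(16.0).reshape(4, 4)
code = encoder(img)              # intermediate representation passed onward
out = decoder(code, skip=img)    # enhanced output at the input resolution
```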
- 19. The method of claim 18, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at every cycle of the plurality of imaging cycles.
- 20. The method of claim 18, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at less than every cycle of the plurality of imaging cycles.
- 21. The method of claim 18, wherein the convolutional neural network further comprises one or more skip connections between the encoder and decoder stages.
- 22. The method of claim 18, wherein the encoder stage provides an intermediate representation to the decoder stage.
- 23. The method of claim 18, wherein the convolutional neural network is a training convolutional neural network and further comprising accessing production images taken at the reduced power level and using information of the trained filters in a production convolutional neural network to enhance the production images as if taken at the unreduced power level.
- 24. The method of claim 18, further comprising base calling from the enhanced images.
- 25. The method of claim 18, wherein each of the images taken at the reduced power level is produced by capturing multiple images of a same tile with a TDI sub-pixel imager and then processing the multiple images with an AI model to produce the respective image taken at the reduced power level.
- 26. The method of claim 18, wherein the convolutional neural network is a training convolutional neural network comprised in a training sequencing by synthesis instrument and further comprising training a production convolutional neural network comprised in a production sequencing by synthesis instrument, the training the production convolutional neural network starting with information of the trained filters and updating parameters of the production convolutional neural network based on processing fiducial elements of tuning images obtained via the production sequencing by synthesis instrument and wherein the tuning images are taken at the reduced power level.
- 27. The method of claim 18, wherein the sequencing by synthesis comprises a plurality of cycles, each cycle corresponding to a single base call for each of a plurality of oligos, each cycle occurring one after another sequentially, and the training is performed with respect to a plurality of contiguous non-overlapping ranges of the cycles, resulting in a plurality of trained filters each corresponding to a respective one of the non-overlapping cycle ranges.
- 28. The method of claim 18, further comprising determining image quality of the enhanced images, and responsive to the quality being below a threshold, recapturing one or more of the images taken at the reduced power level using the unreduced power level.
- 29. The method of claim 18, further comprising pretraining the convolutional neural network using pretraining images taken at a power level that is greater than the reduced power level and less than the unreduced power level.
- 30. A method of reducing excitation power used to produce fluorescence and collected images during a sequencing operation, the method comprising:
accessing a training set of images taken at an unreduced power level and a reduced power level used to excite fluorescence during sequencing;
wherein a power reduction ratio between the unreduced power level, before reduction, and the reduced power level, after reduction, is at least 2 to 1;
training a convolutional neural network comprising a cycle-consistent generative adversarial network that has first and second generator stages and first and second discriminator stages, each of the generator stages and each of the discriminator stages updating respective pluralities of parameters during the training, the plurality of parameters of the first generator stage enabling substantially recovering enhanced images, as if taken at the unreduced power level, from unenhanced images taken at the reduced power level, after reduction;
whereby trained filters of the convolutional neural network enable adding information to images taken at the reduced power level to enable production of the enhanced images; and
saving the trained filters for use processing collected images from sequencing at the reduced power level.
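The cycle-consistent arrangement claimed here pairs two generators: one maps reduced-power images toward unreduced-power ones, and the second maps back, with training penalising any difference between a round-tripped image and the original. In the sketch below, simple linear gains stand in for the two generator networks; the L1 form of the cycle term is an illustrative choice.

```python
import numpy as np

# Sketch of the cycle-consistency idea behind claim 30.
def G(x, gain=2.0):   # reduced -> "enhanced" (stand-in for the first generator)
    return gain * x

def F(y, gain=0.5):   # unreduced -> reduced (stand-in for the second generator)
    return gain * y

def cycle_loss(x):
    return np.mean(np.abs(F(G(x)) - x))  # L1 cycle-consistency term

x = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy reduced-power image
```

With these ideal stand-in gains the round trip is exact, so the cycle loss is zero; training drives real generators toward this condition.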
- 31. The method of claim 30, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at every cycle of the plurality of imaging cycles.
- 32. The method of claim 30, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at less than every cycle of the plurality of imaging cycles.
- 33. The method of claim 30, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at every cycle of the plurality of imaging cycles.
- 34. The method of claim 30, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at less than every cycle of the plurality of imaging cycles.
- 35. The method of claim 30, further comprising base calling from the enhanced images.
- 36. The method of claim 30, wherein each of the images taken at the reduced power level is produced by capturing multiple images of a same tile with a TDI sub-pixel imager and then processing the multiple images with an AI model to produce the respective image taken at the reduced power level.
- 37. A non-transitory computer readable storage medium impressed with computer program instructions, which, when executed on a processor, implement actions comprising:
accessing a training set of paired images taken at an unreduced power level and a reduced power level used to excite fluorescence during a sequencing operation;
wherein a power reduction ratio between the unreduced power level, before reduction, and the reduced power level, after reduction, is at least 2 to 1;
training a convolutional neural network comprising a generative adversarial network that has a generator stage and a discriminator stage each updating respective pluralities of parameters during the training, the plurality of parameters of the generator stage enabling substantially recovering enhanced images, as if taken at the unreduced power level, from unenhanced images taken at the reduced power level, after reduction;
whereby trained filters of the convolutional neural network enable adding information to images taken at the reduced power level to enable production of the enhanced images; and
saving the trained filters for use processing collected images from sequencing at the reduced power level.
- 38. The non-transitory computer readable storage medium of claim 37, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at every cycle of the plurality of imaging cycles.
- 39. The non-transitory computer readable storage medium of claim 37, wherein the sequencing operation has a plurality of imaging cycles, and wherein the paired images are taken at less than every cycle of the plurality of imaging cycles.
- 40. The non-transitory computer readable storage medium of claim 37, wherein the actions further comprise base calling from the enhanced images.
- 41. The non-transitory computer readable storage medium of claim 37, wherein each of the images taken at the reduced power level is produced by capturing multiple images of a same tile with a TDI sub-pixel imager and then processing the multiple images with an AI model to produce the respective image taken at the reduced power level.
- 42. A method of reducing excitation power used to produce fluorescence and collected images during sequencing, the method comprising:
accessing a training set of paired images, wherein each of a plurality of the paired images comprise an actual image captured at an unreduced power level used to excite fluorescence during sequencing by synthesis and a corresponding synthetic image processed from the actual image to simulate signal data captured at a reduced power level;
training a convolutional neural network comprising a generative adversarial network that has a generator stage and a discriminator stage each updating respective pluralities of parameters during the training, the plurality of parameters of the generator stage enabling substantially recovering enhanced images, as if taken at the unreduced power level, from unenhanced images taken at the reduced power level, after reduction;
whereby trained filters of the convolutional neural network enable adding information to images taken at the reduced power level to enable production of the enhanced images; and
saving the trained filters for use processing collected images from sequencing at the reduced power level.
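The synthetic pairing claimed here processes each actual full-power image into a simulated reduced-power counterpart. The degradation model below, linear signal scaling plus Poisson photon shot noise, is an assumption about how reduced excitation power affects the collected signal; the claim itself does not specify the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the synthetic pairing in claim 42: derive a simulated
# reduced-power image from an actual unreduced-power image.
def synthesize_reduced(actual_img, power_ratio=2.0):
    scaled = actual_img / power_ratio              # less excitation, less signal
    return rng.poisson(scaled).astype(np.float64)  # photon shot noise

actual = rng.uniform(200.0, 2000.0, size=(32, 32))  # stand-in full-power tile
synthetic = synthesize_reduced(actual)
training_pair = (synthetic, actual)                 # (input, target) for training
```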
- 43. The method of claim 42, wherein a power reduction ratio between the unreduced power level and the reduced power level is at least 2 to 1.
- 44. The method of claim 42, wherein the sequencing operation has a plurality of imaging cycles, and wherein paired images are obtained for every cycle of the plurality of imaging cycles.
- 45. The method of claim 42, wherein the sequencing operation has a plurality of imaging cycles, and wherein paired images are obtained for less than every cycle of the plurality of imaging cycles.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 63311427 | Feb 2022 | US |