Star tracker devices and methods utilizing relatively low cost components and vector-based deep learning are provided.
Star trackers continue to play a key role in spacecraft guidance and control systems. A star tracker is fundamentally a camera that images a star field and computes and reports the direction the star tracker boresight is pointing (its attitude). Like all components used in space missions, there is continuous pressure to reduce size, weight, power and cost (SWAP-C) and increase the lifetime of these components without compromising performance. A tracker must be rugged enough to survive the stresses of launch and then function for many years in the extreme temperatures and radiation encountered in the harsh environment of space. Star trackers are typically mounted on the external surface of a spacecraft bus and are not shielded from the environment.
First generation star trackers utilized imaging tube technologies and analog electronics. Charge-coupled devices (CCDs) brought much greater optical sensitivity, and the digital electronics that supplanted the analog circuitry in second-generation trackers enabled more sophisticated algorithms, greatly increasing their performance. CCD sensors, however, require special electronics for controlling and clocking the image sensor, and an external analog-to-digital converter (ADC) to digitize the CCD output signal. Further, a CCD's performance degrades when subjected to the space proton environment, and CCDs are susceptible to transient effects from high energy particles encountered in space.
The advent of CMOS imaging sensors brought the promise of increased radiation hardness of the imager through the use of silicon-on-insulator (SOI) structures to reduce the volume of active silicon in the imaging sensor. CMOS sensors also integrate the clocking and ADC circuitry on the same die, reducing the number of electronic components required and therefore reducing the SWAP of the trackers. However, trackers using earlier CMOS imagers suffered in performance because the sensors were front-side illuminated (FSI), which significantly reduced their sensitivity. The use of micro-lenses partly counteracts the lower sensitivity of FSI CMOS imagers, but reduces the accuracy of the computed stellar image centroids. Also, the first CMOS star tracker sensors used less sophisticated pixel designs and relied on a simple rolling-shutter readout scheme that resulted in a skewed readout time of the imaged stars across the array.
More recently, CMOS sensor vendors have begun producing sophisticated back-side illuminated (BSI) CMOS imaging sensors, which feature fundamentally improved sensitivity. BSI sensor designs render the entire surface of the imaging sensor light sensitive, greatly improving the sensor's quantum efficiency and fill factor while eliminating the need for micro-lenses. Newer sensors use more sophisticated CMOS pixel designs featuring higher transistor counts, pinned photodiodes, and transfer gates to provide 'snapshot' or global shutter readout, in which all pixels in the array integrate signal over the same absolute time period. A modern star tracker can also benefit from continuing advances in electronics integration. However, due to processing limitations and to practical limitations in providing highly precise optics in connection with a large image sensor, CMOS sensor based star trackers have typically been restricted to using a small number of stars imaged onto an area comprising only a portion of the total number of pixels of the sensor for attitude determination.
In a typical implementation of a star tracker incorporating a digital image sensor, the sensor includes an array of pixels that is used to obtain an image from within a field of view of the device defined by the size of the sensor and the associated imaging optics. The relative locations of identified stars within the image, together with the line of sight of the device, enable a relative location of a platform carrying the star tracker device to be determined. However, determining the relative locations of stars with even moderate accuracy (i.e., an attitude uncertainty of ~1 arc-second) has required expensive, low noise detectors and specialized optical components capable of imaging light from stars as precisely focused point sources. Because of the high cost of such components, star trackers have typically been limited to use in systems having large budgets. Accordingly, it would be desirable to provide a star tracker having acceptable accuracy and precision using lower cost components. For example, it would be desirable to provide a star tracker having moderate accuracy that is capable of using commercial, off-the-shelf components.
Embodiments of the present disclosure provide systems and methods that use vector-based deep learning to improve the performance of a star tracker built using commercial off-the-shelf (COTS) components, including a relatively low cost CMOS detector and catalog optics. In addition to decreasing the cost of the required components, this approach enables a large reduction in manufacturing, test, and material costs. In accordance with embodiments of the present disclosure, a neural network, such as a network incorporating Hinton's capsule networks (HCNs) or featuring a coordinate convolution layer, is used to simultaneously classify and localize features within a full frame image of a scene containing hundreds of stars. In accordance with further embodiments of the present disclosure, a neural network is used to classify and localize features that emerge from a defocused scene of stars. The locations of these features in the image are correlated to a particular orientation of the camera or optical system used to collect the image, and are in turn applied to an attitude determination. Because of the additional features in a collected image created by the inclusion of a large number of stars, alone or in combination with the blurring of each light source due to the defocusing, the star tracker can generate an adequate solution in the presence of the noise and distortion created by the associated optics. In particular, each feature loses precision, but more features exist in the scene as imaged that can be utilized in determining the attitude solution, compensating for the flaws that come with the lower quality optics and electronics that can be utilized in embodiments of the present disclosure.
Star tracker systems or devices in accordance with embodiments of the present disclosure can include an optical system that collects light from within a field of view encompassing at least a portion of a scene containing a plurality of stars. The light collected by the optical system is directed to a detector. In accordance with embodiments of the present disclosure, the detector includes an array of pixels defining a focal imaging plane. The optical system and detector can be arranged such that the image information is blurred or defocused as it is received at the focal plane of the detector. The blurring or defocusing can be achieved by physically shifting the components of the optical train such that the collected image data is blurred or defocused. Alternatively or in addition, diffusion elements or the like can be included in the optical system. The star tracker additionally includes a processor and memory. Application programming stored in the memory can be executed by the processor to determine an attitude of the star tracker from collected image information. More particularly, the application programming can implement a vector-based deep learning model incorporating a Hinton's capsule network or a coordinate convolution layer. The deep learning model or network is generally trained to determine the attitude of the star tracker from collected image information. In accordance with at least some embodiments of the present disclosure, the image information processed by the deep learning network includes intentionally defocused or blurred information. Moreover, the application programming can implement a defocusing module. Embodiments of the present disclosure incorporate relatively low cost, commercial off-the-shelf optical system and detector components.
Methods in accordance with embodiments of the present disclosure include training a vector-based deep learning network to determine pose or attitude information based on image data. The image data used to determine the attitude information includes image information collected from a scene containing a plurality of stars. In addition, the image information can include a variety of features, including blur centroids and intersections between blurs within the image data. At least some embodiments of the present disclosure include the intentional creation of a blurred or defocused image for use in attitude determination. The creation of a blurred or defocused image can be achieved through the physical manipulation or configuration of optical components within a star tracker, and/or by processing the collected image data in a blurring module or kernel included in application programming. Execution of the application programming to analyze collected image data and to output an attitude determination can be performed using a graphics processing unit or other processor included in the star tracker. In addition, methods in accordance with embodiments of the present disclosure enable image data to be collected using relatively low cost optical system and detector components.
Additional features and advantages of embodiments of the present disclosure will become more readily apparent from the following discussion, particularly when taken together with the accompanying drawings.
In accordance with embodiments of the present disclosure, the lens assembly or optical system 204 includes components that are available as commercial off-the-shelf (COTS) components, also referred to herein as catalog items. This is in contrast to standard moderate or high accuracy star trackers, which require expensive, custom-fabricated, multi-element optics. As a result, the optical system 204 components can be obtained at a lower cost than those incorporated into standard moderate or high accuracy star trackers. However, the images obtained by the optical system 204 of at least some embodiments of a star tracker 108 as disclosed herein can be blurred or defocused, particularly as compared to those obtained by standard moderate or high accuracy star trackers. Alternatively or in addition, embodiments of the present disclosure can utilize an image sensor or detector 208 that is relatively low cost as compared to the image sensors typically incorporated in a moderate or high accuracy star tracker. For example, a star tracker 108 in accordance with embodiments of the present disclosure can utilize a CMOS image sensor 208 available as a relatively low cost, COTS image sensor. The use of such relatively low cost lens assembly 204 and detector 208 components can result in significant cost savings. The reduced performance of a low-cost lens assembly 204 and/or detector 208 is overcome by utilizing a neural network, which can incorporate vector-based deep learning (DL) and convolution methods, as described elsewhere herein.
The star tracker 108 additionally includes a processor 212, memory 216, data storage 220, and a communications interface 224. The processor 212 can include a general purpose programmable processor, graphics processing unit (GPU), field programmable gate array (FPGA), controller, application specific integrated circuit (ASIC), or other processing device or set of devices capable of executing instructions for operation of the star tracker 108. The instructions executed by the processor 212 can be stored as application programming 228 in the memory 216 and/or data storage 220. As discussed in greater detail elsewhere herein, the application programming 228 is executed by the processor 212 to implement a neural network that identifies features, including stars, the centroids of stars, and/or intersections between overlapping image blurs of stars present in an image or in a defocused image. The neural network can incorporate vector-based DL and convolution processes. Moreover, as discussed in greater detail elsewhere herein, the processes implemented by the application programming 228 can incorporate a defocus module to intentionally create a blurred image. The memory 216 can include one or more volatile or nonvolatile solid-state memory devices, such as, but not limited to, RAM, SDRAM, or the like. The data storage 220 can include one or more mass storage devices, such as, but not limited to, a hard disk drive, optical storage device, solid-state drive, or the like. In addition to providing storage for the application programming 228, the memory 216 and/or the data storage 220 can store intermediate or final data products or other data or reference information, such as, but not limited to, a vector-based deep learning model, navigational information, a star database or catalog, attitude and timing information, and image data. The communications interface 224 can provide determined attitude information to a local or remote output device, storage device, processing system, or the like. Moreover, the communications interface 224 can receive instructions, data, or other information from local or remote sources. As examples, the communications interface 224 can include a radio, optical communication system, serial interface, or the like.
Initially, at step 304, multiple frames of training image data are obtained. Each frame contains a plurality of star points produced by stars 112 within a scene 110. Moreover, different training image data frames can include different sets of star points or images of different stars 112, and/or can be obtained from different perspectives or along different lines of sight 118. The different lines of sight 118 correspond to different attitudes of the camera or optical system used to obtain the images. The multiple training image data frames are then stored in memory 216 or data storage 220 (step 308). In accordance with embodiments of the present disclosure, the entirety of each frame of collected image data is used in implementing the training and attitude determination process. Accordingly, the entire frame of collected image data is stored. The image data can be obtained by the star tracker 108 being trained, by another star tracker, by another optical system, or can include simulated data.
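Where simulated data is used, a training frame can be produced by projecting catalog stars for a chosen attitude onto the focal plane and rendering each as a point spread function (PSF) plus detector noise. The following is a minimal sketch, assuming NumPy; the frame size, PSF width, and noise level are illustrative values rather than parameters from the disclosure:

```python
import numpy as np

def render_training_frame(star_xy, star_flux, shape=(1024, 1024), sigma=1.5,
                          read_noise=5.0, rng=None):
    """Render one synthetic full-frame training image.

    star_xy   -- (N, 2) array of (x, y) pixel positions of the stars 112
                 visible at one simulated attitude / line of sight 118
    star_flux -- (N,) array of per-star signal levels, in counts
    """
    rng = rng if rng is not None else np.random.default_rng()
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    frame = np.zeros(shape)
    for (x, y), flux in zip(star_xy, star_flux):
        # In-focus Gaussian PSF; a defocus step (sketched later) can widen it.
        frame += flux * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    frame += rng.normal(0.0, read_noise, shape)  # additive detector read noise
    return frame
```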
At step 312, the location of each star point or the centroid of a blur 504 associated with a star point within each of the blurred images 500 is determined and stored in memory 216. The location of each star point can be identified using a centroiding method. In addition, step 312 can include determining the location of each point of intersection 508 or overlap between every star point's blur 504 within the image. That information can also be stored in the memory 216.
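As one concrete example of a centroiding method of the kind step 312 can employ, the centroid of a blur 504 can be computed as the intensity-weighted mean position of the pixels in a window around the blur. A minimal sketch, assuming NumPy and that a coarse initial location of the blur is available; this is an illustration, not the specific method of the disclosure:

```python
import numpy as np

def blur_centroid(frame, x0, y0, box=15):
    """Intensity-weighted centroid of a blur 504 near a coarse guess (x0, y0).

    Assumes the window lies within the frame and the blur has signal
    above the local background.
    """
    half = box // 2
    r, c = int(round(y0)), int(round(x0))
    win = frame[r - half:r + half + 1, c - half:c + half + 1]
    win = np.clip(win - np.median(win), 0.0, None)  # crude background removal
    yy, xx = np.mgrid[0:win.shape[0], 0:win.shape[1]]
    total = win.sum()
    cx = (c - half) + (win * xx).sum() / total
    cy = (r - half) + (win * yy).sum() / total
    return cx, cy
```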
At step 316, the stored location and spatial relationship information obtained from each of the full frame images 500, together with the attitude or line of sight 118 associated with each of the images 500, is used to train a vector-based deep learning model implemented by the application programming 228 to identify the visual features 504 and 508 that emerge as a result of heavily blurred stars within an image 500. Moreover, the location and spatial relationship information of the features 504 and 508 in different images 500 encompassing different fields of view 116 can be correlated to different star tracker 108 attitudes. More particularly, embodiments of the present disclosure provide for the training of a deep learning network for enabling the classification and localization of image features. The training can include capturing and processing a plurality of star tracker full-frame images that each contain a plurality of image features created by collecting light from stars 112 in the different fields of view 116 at different attitudes. The processed information can be incorporated into a model or star catalog that is then applied to determine attitude information from collected images as discussed elsewhere herein.
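For illustration only, such a training step can be sketched as supervised regression from a full-frame blurred image 500 to the attitude (represented here as a unit quaternion) at which the frame was collected. The sketch assumes PyTorch, a hypothetical `model` of the kind described (HCN- or coordinate-convolution-based), and a hypothetical `loader` yielding (frame, quaternion) pairs from the stored training data; the quaternion loss shown is one common choice, not necessarily that of the disclosure:

```python
import torch

def train(model, loader, epochs=10, lr=1e-4):
    """Fit `model` to map full-frame images to unit attitude quaternions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for frames, q_true in loader:  # frames: (B, 1, H, W); q_true: (B, 4)
            q_pred = model(frames)
            q_pred = q_pred / q_pred.norm(dim=1, keepdim=True)  # unit norm
            # 1 - |<q_pred, q_true>| is insensitive to the q / -q ambiguity
            loss = (1.0 - (q_pred * q_true).sum(dim=1).abs()).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
```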
In accordance with embodiments of the present disclosure, the deep learning model can implement an HCN.
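For reference, the two characteristic operations of an HCN, as introduced by Sabour, Frosst, and Hinton, are the "squash" nonlinearity, which preserves a capsule vector's direction while mapping its length into [0, 1), and routing-by-agreement between capsule layers. A minimal PyTorch sketch, in which the tensor shapes and iteration count are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule nonlinearity: keeps the vector's direction and maps its
    length into [0, 1) so length can represent feature probability."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing-by-agreement. u_hat: (B, n_in, n_out, d_out) predictions
    made by each lower capsule for each higher capsule's pose vector."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)
    for _ in range(iters):
        c = F.softmax(b, dim=2).unsqueeze(-1)         # coupling coefficients
        v = squash((c * u_hat).sum(dim=1))            # (B, n_out, d_out)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)  # agreement update
    return v
```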
In accordance with other embodiments of the present disclosure, the deep learning model features a network that incorporates a coordinate convolution layer.
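A coordinate convolution layer differs from a standard convolution only in that channels holding each pixel's normalized x and y coordinates are concatenated to the input before convolving, allowing the network to learn position-dependent responses, which localizing features on the focal plane requires. A minimal PyTorch sketch, offered as an illustration rather than the disclosure's specific layer:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Standard convolution with two extra input channels holding the
    normalized x and y coordinate of every pixel."""
    def __init__(self, in_ch, out_ch, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **conv_kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device)
        ys = ys.view(1, 1, h, 1).expand(b, 1, h, w)
        xs = xs.view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

# e.g., a first layer for single-channel star field images:
# layer = CoordConv2d(1, 16, kernel_size=3, padding=1)
```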
In applying either an HCN or a coordinate convolution layer, embodiments of the present disclosure avoid the use of a max pooling layer, which reduces model complexity by taking the maximum (or average) value of clustered convolutional neural network nodes, on the assumption that different internal representations of an image do not change the properties of the image. The use of a max pooling layer is effective at reducing the complexity, and thus increasing the speed, of neural networks. However, because pooling discards the precise spatial relationships between features, convolutional neural networks that incorporate max pooling are prone to misclassifying output results. Accordingly, in applications that require certainty in the relationship between identified objects within an image, such as star tracker applications, max pooling is preferably avoided.
After training of the model implemented by the application programming 228, the star tracker 108 can be deployed (step 320). Deployment of the star tracker 108 can include placing a platform 104 carrying the star tracker 108 at a location or in an area in which operation of the star tracker 108 to determine the attitude and/or location of the star tracker 108 and the associated platform 104 is desired. The star tracker 108 can then be operated to obtain a frame of star tracker image data (step 324). The star tracker image data is defocused image data 500, and contains blurred images 504 of point sources or stars 112 having centroids and forming intersections 508. The blurring or defocusing of an image collected by the star tracker can be the result of the use of relatively low cost optical 204 and/or detector 208 components. Alternatively or in addition, the blurring or defocusing can be the result of the intentional blurring or defocusing of the image data. Intentional blurring or defocusing can be implemented in the hardware of the optical system 204 or the detector 208, or both. For instance, elements within the optical system 204 can be shifted relative to one another or relative to the detector 208, or the detector location can be shifted relative to the optical system 204, such that light collected by the optical system 204 is not focused at the focal plane 209 of the detector. As another example, a diffusion or blurring element can be incorporated into the optical system 204. Intentional blurring or defocusing can also be implemented in software. For example, the application programming 228 can include a blurring or defocus module that creates blurs 504 and intersections 508 between blurs in a processed version of a collected image, to form a blurred or defocused image 500. In addition, embodiments of the present disclosure collect and process image data that encompasses the entirety or at least most of the pixels 210 of a detector 208 having a large number (e.g. at least one million) of pixels.
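As one illustration of such a software defocus module, the collected frame can be convolved with a kernel approximating a heavily defocused PSF. The sketch below, assuming NumPy, uses a uniform disk kernel and FFT-based convolution; the kernel shape and radius are illustrative choices, not parameters from the disclosure:

```python
import numpy as np

def defocus(frame, radius=6.0):
    """Convolve a frame with a uniform disk kernel via the FFT,
    approximating a heavily defocused optical PSF."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    kernel = (((yy - h // 2) ** 2 + (xx - w // 2) ** 2) <= radius ** 2).astype(float)
    kernel /= kernel.sum()  # unit-sum kernel preserves total flux
    # ifftshift moves the kernel's center to the array origin so the
    # circular convolution does not translate the image.
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * np.fft.fft2(np.fft.ifftshift(kernel))))
```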
The frame of blurred or defocused image 500 data is then processed to classify and localize features 504 and 508 within the data (step 328). In accordance with embodiments of the present disclosure, features that are classified and localized can include point sources, such as star 112 points within the image frame, or the centroid of a blurred image 504 of a star point. In addition, features that are classified and localized can include intersections 508 between blurred images 504 of individual stars 112 present in the image data as obtained by the detector 208, and/or as created in a blurring module or function implemented by the application programming 228. Thus, embodiments of the present disclosure enable features in addition to the relative locations of star points 112 to be utilized in determining the attitude or pose of the star tracker 108. In addition to creating and using features within an obtained image that supplement and are in addition to the locations of the point sources themselves, embodiments of the present disclosure utilize the image information obtained from across all or most of the detector 208. Accordingly, the number of features encompassed by the frame of image data that are used to determine the attitude of the star tracker 108 is increased as compared to standard star tracker implementations. This in turn enables a star tracker 108 in accordance with embodiments of the present disclosure to provide attitude determinations with an attitude uncertainty comparable to that of a star tracker using higher cost components.
In accordance with embodiments of the present disclosure, vector-based, convolution-based deep learning techniques are applied to the defocused field to provide attitude determination performance equivalent to that of standard moderate and high accuracy star trackers, even using relatively low cost, low-grade, low precision optical system 204 components, and even using relatively low cost, low-grade, high noise detector 208 components. In particular, embodiments of the present disclosure use trained Hinton's capsule networks (HCNs) to simultaneously classify and localize features that emerge from a defocused scene of stars 112. HCN is a method of DL that incorporates vector-based information (specifically, full pose information) about objects to capture spatial relationships. Pose includes position (x, y, z) and orientation, instantiated in a "Capsule" that encapsulates vector information about the state of the feature being processed through the HCN.
The determined attitude solution can then be provided as an output (step 332). The output can be provided to a navigational module included in the platform 104, to an external vehicle or center, and/or stored for later use. Moreover, the output can include the attitude of the star tracker 108, and/or the location of the star tracker 108 within a reference frame.
At step 336, a determination can be made as to whether the process of obtaining image data and calculating an attitude solution should continue. If the process is to be continued, it can return to step 324, to obtain a new frame of image data. Alternatively, the process can end.
Standard moderate accuracy star trackers (i.e., attitude uncertainty ~1 arc-second) require expensive low noise detectors and custom-fabricated multi-element optics. The conventional approach is an expensive attitude determination instrument that only tracks tens of stars (within hundreds of pixels) on the focal plane, due to the nature of CCD readout and limitations in computational power. Embodiments of the present disclosure utilize commercial off-the-shelf (COTS) detectors (on the order of tens of dollars) and off-the-shelf low-cost optics (on the order of hundreds of dollars). The compromise in detector and optical performance allows a cost reduction by a factor in the hundreds.
To maintain attitude uncertainty in the realm of 1 arc-second, embodiments of the present disclosure increase the available information from tens of stars (within hundreds of pixels) on the focal plane to hundreds of stars across the entire focal plane (utilizing ~1 million pixels). In addition, the blurring and creation of additional image features that can occur as a result of the use of COTS optical system 204 and detector 208 components are used to advantage in making attitude determinations. Commercially available, low cost CMOS detectors readily provide full-frame output (millions of pixels) at frame rates upwards of 10 Hz, thus enabling this approach. To make use of the additional data, a processing strategy is employed that combines image processing and vector-based deep learning (DL) to extract information from the full-frame images. The added information from the larger field of view more than compensates for the lesser performance of the low-cost detector/optic combination.
Embodiments of the present disclosure utilize very low-cost COTS detectors and optics to provide an overall cost reduction by a factor in the hundreds while achieving performance similar to standard medium accuracy star trackers. The lesser performance of the low-cost detectors and optics is overcome by utilizing vector-based DL and convolution methods to process features across the full image frame (all available pixels) instead of only a portion of the image (hundreds of pixels) as in heritage approaches.
Heritage approaches use precise optics, localizing starlight to a few pixels, from which precise attitude information can be derived. In contrast, embodiments of the present disclosure take advantage of image blur and make use of point spread function (PSF) overlap. Information from a scene is extracted from pixel-to-pixel relationships and associated information. Heavy defocus in the optical train, or convolution with a blurring kernel in processing, can yield such intra-pixel information spread, where defocused stars produce annuli that interplay with other annuli, thus increasing the number of pixels that contain useful information. The result obtained using a star tracker 108 in accordance with embodiments of the present disclosure is comparable to the performance of heritage approaches, but without the need for precision optics.
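The annulus interplay can be made concrete: modeling each heavily defocused star as a ring of light, the rings of neighboring stars intersect, and each intersection pixel constrains the relative geometry of the stars beyond what the two centroids alone provide. A small NumPy sketch with illustrative radii and positions:

```python
import numpy as np

def annulus(shape, cx, cy, r_out=10.0, r_in=6.0):
    """Model a heavily defocused star as a ring (annulus) of light."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    return ((r2 <= r_out ** 2) & (r2 >= r_in ** 2)).astype(float)

# Two nearby stars: their annuli intersect, and the intersection pixels
# carry relative-position information in addition to the two centroids.
a = annulus((64, 64), 28, 32)
b = annulus((64, 64), 40, 32)
intersections = np.argwhere((a > 0) & (b > 0))  # pixel coords of overlaps
```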
Embodiments of the present disclosure use vector-based deep learning to process the full-frame star field images. Combined with the additional intra-pixel information provided by the defocus method outlined above, this results in high precision attitude determination. In addition to enabling the use of low cost optics 204 and detectors 208, embodiments of the present disclosure can provide accurate attitude determinations even after the alignment of optical system 204, detector 208 or other star tracker 108 components has been altered due to stresses during the launch or repositioning of a platform 104 carrying the star tracker 108, due to damage or wear after the star tracker 108 has been operationally deployed, or due to any other causes.
A method for very low-cost star trackers built with low-cost, commercially available detectors and optics, while still providing attitude uncertainties comparable with conventional moderate accuracy star trackers (~1 arc-second), is provided. The method uses full-frame images instead of the partial-frame images of conventional methods, enabled by CMOS imagers in place of the traditional CCD. In addition, the method uses heavy defocus to spread useful information from a stellar field over additional pixels, yielding supplementary data beyond star locations alone. Vector-based deep learning methods, such as an HCN or an approach that incorporates a coordinate convolution layer, can be used to solve for attitude based on the described approach.
Embodiments of the present disclosure provide a method of increasing the accuracy of star trackers using a plurality of blurred stars formed in the full-frame images of the star tracker together with vector-based deep learning.
The foregoing discussion of the disclosed systems and methods has been presented for purposes of illustration and description. Further, the description is not intended to limit the disclosed systems and methods to the forms disclosed herein. Consequently, variations and modifications commensurate with the above teachings, within the skill or knowledge of the relevant art, are within the scope of the present disclosure. The embodiments described herein are further intended to explain the best mode presently known of practicing the disclosed systems and methods, and to enable others skilled in the art to utilize the disclosed systems and methods in such or in other embodiments and with various modifications required by the particular application or use. It is intended that the appended claims be construed to include alternative embodiments to the extent permitted by the prior art.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/795,707, filed Jan. 23, 2019, the entire disclosure of which is hereby incorporated herein by reference.