This disclosure relates to test and measurement systems, more particularly to systems and methods for tuning parameters of a device under test, for example an optical transceiver device.
Manufacturers of optical transmitters test their transmitters by tuning them to ensure they operate correctly. Tuning a single transmitter can take up to two hours in the worst case. Tuning typically takes the form of sweeping the tuning parameters, meaning that the transmitter is tuned at each level of each parameter. This may take as few as three to five iterations in the best case and up to 200 iterations in the worst case. Speeding up this process by reducing the number of iterations needed to get the transceiver tuned reduces both the time and, therefore, the expense of manufacturing.
Embodiments of the disclosed apparatus and methods address shortcomings in the prior art.
Currently, optical transceiver tuning does not employ any type of machine learning to decrease the time needed to tune and test the transceivers. The time required to tune and test them increases the overall cost of the transceivers. The use of a machine learning process can decrease the amount of time needed to tune a part, test it, and determine whether it passes or fails. The embodiments here use one or more neural networks to obtain the tuning parameters for optical transceivers, allowing a determination of whether the transceiver operates correctly more quickly than current processes allow.
The test and measurement device 10 may include many different components.
For purposes of this discussion, the term “test and measurement device” as used here includes those embodiments that include an external computing device as part of the test and measurement device. The memory 24 may reside on the test and measurement device or be distributed between the test and measurement device and the computing device. As will be discussed in more detail later, the memory may contain training data and run-time data used by the machine learning system. The test and measurement device may also include a user interface 26 that may comprise one or more of a touch screen, video screen, knobs, buttons, lights, keyboard, mouse, etc.
In the discussion here, the term “reference,” applied to either parameters or waveforms, refers to the parameter sets and waveforms generated by the test automation process and used in both training and run-time operation of the machine learning system. The term “predicted” refers to the output of the machine learning system. The term “optimized” refers to tuning parameters that result from an optimization process, whether from the test automation or from the machine learning system.
The discussion below uses waveforms acquired from the transceiver in the test automation process. The embodiments below use three waveforms for non-linear transceivers. No limitation to any particular number of waveforms is intended, nor should any be implied.
A histogram of the optimal value of each tuning parameter is created across devices, for each temperature. This is then used to develop an average parameter set. One should note that the process does not have to use an average, or mean, set of parameters, but for ease of discussion and understanding this discussion will use one. The user or system could then produce two variations of the mean parameter set, MeanPar, for example by adding to the mean parameters to produce the Delta1Par set, and subtracting from the mean parameters to produce the Delta2Par set.
The MeanPar, Delta1Par, and Delta2Par parameter sets are then used as reference parameters for training and for run-time operation of the system. The objective is to provide three sample points on each parameter's non-linear characteristic curve so that the neural network has enough data to accurately predict the optimal parameter set by looking at the processed waveforms acquired at these reference parameter settings. The number of reference parameter sets may range from 1 to N, where N is limited by compute time and resources. In one example, 200 transceivers each undergo five acquisitions with these three parameter sets, resulting in 15 waveforms for every transceiver and totaling 3000 waveforms for training.
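As a concrete illustration of this step, the sketch below derives a mean reference set and two delta sets from a collection of previously tuned devices. The array shapes, the helper name build_reference_sets, and the fixed offset are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def build_reference_sets(tuned_params, delta):
    """Derive MeanPar, Delta1Par and Delta2Par reference parameter sets.

    tuned_params: array of shape (num_devices, num_params) holding the
    optimal tuning parameters found for previously tuned devices at one
    temperature.  delta: per-parameter offset added to and subtracted
    from the mean.  Shapes and names are hypothetical.
    """
    mean_par = tuned_params.mean(axis=0)   # center of each parameter's histogram
    delta1_par = mean_par + delta          # variation above the mean
    delta2_par = mean_par - delta          # variation below the mean
    return mean_par, delta1_par, delta2_par

# Example: 200 tuned devices, 6 tuning parameters, offsets of 10% of the mean
tuned = np.random.normal(loc=50.0, scale=5.0, size=(200, 6))
mean_par, delta1, delta2 = build_reference_sets(tuned, delta=0.1 * np.abs(tuned).mean(axis=0))
```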
The resulting waveforms are then processed in sequence, represented by switch 36 followed by a digitizer 38. A waveform for each set of reference parameters is stored in memory.
As discussed above, the test automation process 20 uses the “standard” tuning process that is to be replaced with the faster machine learning process of the embodiments. The optimal tuning parameters used in training may undergo normalization or standardization, depending upon the data, at 41. The normalization or standardization rescales the tuning parameter values to fit within a normalized range that the neural network 22 can handle well. In one embodiment, the normalization rescales each tuning parameter to fall within the range 0 to 1, or centers it at zero in a range of −1 to 1. One embodiment of a formula for normalization is y = (x − min)/(max − min), where y is the normalized value and x is the original value. If the system uses a pretrained network that does not have built-in normalization, it would apply this normalization.
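A minimal sketch of this min-max normalization, together with the inverse mapping used to return predicted values to device units, appears below. The function names and the example parameter ranges are illustrative assumptions.

```python
import numpy as np

def normalize(x, lo, hi):
    """Min-max rescaling y = (x - min) / (max - min), giving values in 0..1."""
    return (x - lo) / (hi - lo)

def denormalize(y, lo, hi):
    """Inverse normalization, applied to the network's predicted parameters."""
    return y * (hi - lo) + lo

params = np.array([12.0, 0.35, 87.0])   # hypothetical tuning parameter values
lo = np.array([0.0, 0.0, 0.0])          # per-parameter minimums
hi = np.array([64.0, 1.0, 255.0])       # per-parameter maximums
scaled = normalize(params, lo, hi)      # values in 0..1 for the neural network
restored = denormalize(scaled, lo, hi)  # back to device units
```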
The user algorithm 40 tunes each of the Tx device samples for optimal tuning parameters which are then used to train the network to associate the optimal tuning parameters with a given waveform shape and characteristics such as temperature and scope noise. This association is always done using the same input reference tuning parameters stored in the Tx DUT during waveform acquisition.
The hyperspectral tensor building may use a short pattern tensor building process for machine learning. The builder 42 uses the decoded pattern and the input waveform to search down the record length for pre-specified patterns, which may be defined from a unit interval in the machine learning system or may be hidden from the user, depending on the application. Several different patterns may be used. For optical tuning, the system may use a short pattern of each level, with no transitions over five single unit intervals, so that the machine can view the PAM4 levels. This allows the machine to recognize the Tx level parameter settings more readily.
For recognizing the FFE taps of the Tx parameter settings, impulse response code sequences, i.e., short patterns, are pulled out and placed into the images input to the neural network 22. The term hyperspectral is used in the sense that the image input to the deep learning neural network 22 has three color channels labeled R, G, and B (red, green, and blue). The waveforms making up these hyperspectral images are not actually in these colors; rather, the system uses the three color channels to incorporate the different images created by the tensor builder. The system collects three waveforms, and the image from each of the three waveforms goes into one of the color channels. For example, without limitation, the wfinMean image that results from the MeanPar data set may be on the red channel, wfinDelta1 from the Delta1Par set on the blue channel, and wfinDelta2 from the Delta2Par set on the green channel. No limitation to these channel assignments exists. Additional channels beyond RGB may be incorporated if desired. The pretrained deep learning networks used here have only the three color channels, but more channels could result from a custom deep learning network or from multiple pretrained three-color-channel networks.
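The sketch below shows one way the three waveform images could be packed into the color channels of a single RGB tensor. The image size, data type, and channel assignment are assumptions chosen to match the example above rather than requirements of the disclosure.

```python
import numpy as np

def build_rgb_tensor(img_mean, img_delta1, img_delta2):
    """Combine three single-channel waveform images into one RGB tensor.

    The inputs are 2-D arrays of identical shape built from the waveforms
    acquired with the MeanPar, Delta1Par and Delta2Par reference settings.
    """
    rgb = np.zeros(img_mean.shape + (3,), dtype=np.float32)
    rgb[..., 0] = img_mean     # red   <- wfinMean
    rgb[..., 2] = img_delta1   # blue  <- wfinDelta1
    rgb[..., 1] = img_delta2   # green <- wfinDelta2
    return rgb

# Example: 224x224 images, a size accepted by many pretrained networks
images = [np.random.rand(224, 224) for _ in range(3)]
tensor = build_rgb_tensor(*images)   # shape (224, 224, 3)
```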
As will be discussed in more detail below, the discussion here shows two instances of tensor builders. One creates images used for training the optimal parameter network. The other may be used for training a network that outputs TDECQ (transmitter dispersion eye closure quaternary) and optimized FFE (feed-forward equalization) filter taps. In summary, the machine learning system is trained to receive waveform tensor images and output a set of optimized transmitter parameters, and may also output a set of optimized FFE taps and a TDECQ value.
The process then loops as many times as necessary. For each transceiver, the temperature changes until the transceiver has completed tests for the desired temperature range at 70. The process then moves on to the next transceiver at 72 until a sufficient number have been completed to ensure a high enough level of accuracy.
The tensor builder then produces several outputs and sends them to the neural network, which may comprise multiple sub-networks within the machine learning system. The tensor builder sends a levels tensor to a first neural network, identified as neural network A; an impulse tensor to the transceiver FFE taps neural network B; the impulse tensor, along with the optimized FFE taps array from the TDECQ module, to the optimized FFE taps neural network C; and a combined tensor to the neural network D for the TDECQ results. This continues until the current transceiver has cycled through all of the temperatures at 96, and then repeats for all of the transceivers at 98 until enough have been processed, at which point the training process ends. A determination as to whether enough transceivers have been processed may involve testing the machine learning system to determine its accuracy.
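A hedged sketch of one training association across the four sub-networks follows. The stub model class, the train() method, and the label layout are placeholders standing in for whatever training framework is actually used; only the routing mirrors the description above.

```python
class StubNet:
    """Placeholder for a trainable model; records (input, target) pairs."""
    def __init__(self):
        self.examples = []

    def train(self, tensor, target):
        self.examples.append((tensor, target))

def training_step(nets, level_tensor, impulse_tensor, combined_tensor,
                  optimal_params, optimized_ffe_taps, tdecq_value):
    # Network A associates the levels tensor with levels/gain tuning parameters.
    nets["A"].train(level_tensor, optimal_params["levels_and_gain"])
    # Network B associates the impulse tensor with the transmitter FFE tuning parameters.
    nets["B"].train(impulse_tensor, optimal_params["tx_ffe_taps"])
    # Network C associates the impulse tensor with the optimized FFE taps from the TDECQ module.
    nets["C"].train(impulse_tensor, optimized_ffe_taps)
    # Network D associates the combined tensor with the TDECQ result.
    nets["D"].train(combined_tensor, tdecq_value)

nets = {name: StubNet() for name in "ABCD"}
training_step(nets, level_tensor="levels-image", impulse_tensor="impulse-image",
              combined_tensor="rgb-image",
              optimal_params={"levels_and_gain": [1, 2, 3], "tx_ffe_taps": [0.1, 0.9]},
              optimized_ffe_taps=[-0.05, 1.0, -0.1], tdecq_value=1.4)
```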
Once trained and validated, the machine learning system will operate at run-time.
The process continues similarly to the training cycle by capturing or acquiring the waveform at 116 and performing the TDECQ analysis at 118. At 120, the system checks whether the transceiver has undergone the process for all three of the reference parameter sets. Once all three reference sets have completed, the three waveforms and the decoded pattern are sent to the tensor builder at 122. The tensor builder creates a tensor image for each of the three waveforms and combines them into a single RGB image, as previously discussed. Similar to the training process, the tensor builder sends the levels tensor, the impulse tensor, and the combined tensor image to the neural network at 124. During run-time, however, the neural network(s)/machine learning system produces the optimized FFE taps and TDECQ information, providing a predicted, optimized set of tuning parameters for the transceiver.
After the predictions are made, the APS flag is set to false at 126. When the process returns to 114 to set the transceiver to the appropriate parameters, the appropriate parameters are now the predicted tuning parameters. This also indicates that the system should acquire only one waveform, so only one tensor image is generated and the neural network D produces a single TDECQ value. Because the APS flag remains false at 126, the value is checked at 130 and the transceiver either passes or fails. If it passes, then the next temperature is selected at 128, if any still need to be tested, or the process ends.
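The run-time flow might look roughly like the sketch below. The scope, device, tensor-builder, and model interfaces are hypothetical placeholders; only the overall sequence of acquiring one waveform per reference set, building the tensors, and reading the predictions follows the text.

```python
def predict_tuning(scope, dut, nets, reference_sets, tensor_builder):
    """Acquire waveforms at each reference parameter set and predict tuning.

    scope.acquire(), dut.set_parameters(), tensor_builder() and the
    nets[...].predict() calls are placeholder interfaces, not an API
    defined by the disclosure.
    """
    waveforms = []
    for ref in reference_sets:                 # MeanPar, Delta1Par, Delta2Par
        dut.set_parameters(ref)
        waveforms.append(scope.acquire())
    level_t, impulse_t, combined_t = tensor_builder(waveforms)
    predicted = dict(nets["A"].predict(level_t))     # levels / gain parameters
    predicted.update(nets["B"].predict(impulse_t))   # transmitter FFE tuning parameters
    ffe_taps = nets["C"].predict(impulse_t)          # optimized FFE taps for the TDECQ path
    tdecq = nets["D"].predict(combined_t)            # predicted TDECQ value
    return predicted, ffe_taps, tdecq
```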
In this manner, more than one neural network has been trained within the overall neural network/machine learning system. Two networks, A and B, predict optimal transmitter tuning parameters; network C is used to predict the FFE taps; and network D is used to predict a TDECQ measurement.
The system uses the TDECQ measurement during training as an input to network D so that network D associates an input waveform with a TDECQ value. Similarly, the optimized FFE taps out of the TDECQ module are used during training of network C so that network C associates waveforms with optimized tap values. Networks A and B both provide tuning parameters associated with the three reference waveforms: network A provides levels and gain-type parameters, and network B provides FFE tuning parameters. The transmitter has FFE built into it, so the taps of its FFE filter are themselves tuning parameters.
The optimized FFE taps out of the TDECQ module are different. They are not tuning parameters for the Tx; rather, they represent what one would set an FFE to in order to open the eye as much as possible for the TDECQ measurement. Those FFE taps are applied to the waveform prior to making the TDECQ measurement.
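As an illustration, applying such reference FFE taps to an acquired waveform before the TDECQ measurement could be done with a tap-spaced convolution like the one below; the tap spacing, tap values, and use of numpy.convolve are illustrative assumptions rather than the disclosure's method.

```python
import numpy as np

def apply_ffe(waveform, taps, samples_per_ui=1):
    """Filter the waveform with FFE taps spaced one unit interval apart."""
    kernel = np.zeros((len(taps) - 1) * samples_per_ui + 1)
    kernel[::samples_per_ui] = taps            # place one tap per unit interval
    return np.convolve(waveform, kernel, mode="same")

wf = np.random.randn(4096)                      # stand-in for an acquired waveform
equalized = apply_ffe(wf, taps=[-0.1, 1.0, -0.15], samples_per_ui=8)
```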
While the embodiments described above are discussed using the example application of optical transceiver tuning processes, embodiments of the disclosure may be used in any process involving adjustment of operating parameters of any type of device under test, and may be especially useful to improve processes involving iterative adjustment of operating parameters to achieve a desired performance measurement of the DUT or to achieve a passing test result for the DUT.
Aspects of the disclosure may operate on particularly created hardware, on firmware, on digital signal processors, or on a specially programmed general-purpose computer including a processor operating according to programmed instructions. The terms controller or processor as used herein are intended to include microprocessors, microcomputers, Application Specific Integrated Circuits (ASICs), and dedicated hardware controllers. One or more aspects of the disclosure may be embodied in computer-usable data and computer-executable instructions, such as in one or more program modules, executed by one or more computers (including monitoring modules), or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer-executable instructions may be stored on a non-transitory computer-readable medium such as a hard disk, optical disk, removable storage media, solid state memory, Random Access Memory (RAM), etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various aspects. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field-programmable gate arrays (FPGAs), and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated within the scope of computer-executable instructions and computer-usable data described herein.
The disclosed aspects may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed aspects may also be implemented as instructions carried by or stored on one or more non-transitory computer-readable media, which may be read and executed by one or more processors. Such instructions may be referred to as a computer program product. Computer-readable media, as discussed herein, means any media that can be accessed by a computing device. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.
Computer storage media means any medium that can be used to store computer-readable information. By way of example, and not limitation, computer storage media may include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Video Disc (DVD), or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and any other volatile or nonvolatile, removable or non-removable media implemented in any technology. Computer storage media excludes signals per se and transitory forms of signal transmission.
Communication media means any media that can be used for the communication of computer-readable information. By way of example, and not limitation, communication media may include coaxial cables, fiber-optic cables, air, or any other media suitable for the communication of electrical, optical, Radio Frequency (RF), infrared, acoustic or other types of signals.
Additionally, this written description makes reference to particular features. It is to be understood that the disclosure in this specification includes all possible combinations of those particular features. For example, where a particular feature is disclosed in the context of a particular aspect, that feature can also be used, to the extent possible, in the context of other aspects.
Also, when reference is made in this application to a method having two or more defined steps or operations, the defined steps or operations can be carried out in any order or simultaneously, unless the context excludes those possibilities.
Illustrative examples of the disclosed technologies are provided below. An embodiment of the technologies may include one or more, and any combination of, the examples described below.
Example 1 is a test and measurement system, comprising: a test and measurement device; a connection to allow the test and measurement device to connect to an optical transceiver; and one or more processors, configured to execute code that causes the one or more processors to: set operating parameters for the optical transceiver to reference operating parameters; acquire a waveform from the optical transceiver; repeatedly execute the code to cause the one or more processors to set operating parameters and acquire a waveform, for each of a predetermined number of sets of reference operating parameters; build one or more tensors from the acquired waveforms; send the one or more tensors to a machine learning system to obtain a set of predicted operating parameters; set the operating parameters for the optical transceiver to the predicted operating parameters; and test the optical transceiver using the predicted operating parameters.
Example 2 is the test and measurement system of Example 1, wherein the one or more processors are further configured to cause the one or more processors to repeat the execution of the code for each temperature setting in a range of temperature settings.
Example 3 is the test and measurement system of Examples 1 or 2, wherein the one or more processors are distributed among the test and measurement device, a test automation system and a machine learning system separate from the test and measurement device.
Example 4 is the test and measurement system of any of Examples 1 through 3, wherein the machine learning system comprises a plurality of neural networks, one or more to produce the predicted operating parameters, one to produce a predicted measurement value, and one to produce a set of optimized feed-forward equalization filter tap values.
Example 5 is the test and measurement system of any of Examples 1 through 4, wherein the predetermined number of sets of reference parameters is three, one set of average values for the parameters, one set having values higher than the average values, and one set having values lower than the average values.
Example 6 is the test and measurement system of any of Examples 1 through 5, wherein the one or more processors are further configured to execute code to cause the one or more processors to apply an inverse normalization to the predicted operating parameters.
Example 7 is a method to tune optical transceivers, comprising: connecting the optical transceiver to a test and measurement device; setting operating parameters for tuning the transceiver to reference operating parameters; acquiring a waveform from the optical transceiver; repeating the setting operating parameters and acquiring a waveform, for each of a predetermined number of sets of reference operating parameters; building one or more tensors from the acquired waveforms; sending the one or more tensors to a machine learning system to obtain a set of predicted operating parameters; setting the operating parameters for tuning the optical transceiver to the predicted operating parameters; and validating the predicted operating parameters.
Example 8 is the method of Example 7, further comprising repeating the method for each temperature in a range of temperatures.
Example 9 is the method of Examples 7 or 8, wherein building one or more tensors comprises building a levels tensor, an impulse tensor, and a combined tensor.
Example 10 is the method of any of Examples 7 through 9, wherein building a combined tensor comprises building an image having three or more channels.
Example 11 is the method of Example 10, wherein the image includes a bar graph image representing at least one of a temperature value and a noise value.
Example 12 is the method of any of Examples 7 through 11, further comprising generating reference operating parameters, comprising: tuning a plurality of optical transceivers at a plurality of temperatures; generating a histogram of each tuning parameter, at each temperature, for the plurality of tuned optical transceivers; calculating a mean reference parameter set by averaging values for each parameter in the histograms; increasing values in the mean parameter set to create a first delta reference parameter set; and decreasing values in the mean parameter set to create a second delta reference parameter set.
Example 13 is the method of any of Examples 7 through 12, wherein building one or more tensors comprises building a tensor having three channels, one channel for a mean reference tensor, one channel for a first delta tensor, and one channel for a second delta tensor.
Example 14 is the method of any of Examples 7 through 13, wherein sending the one or more tensors to a machine learning system comprises: sending the one or more tensors to two neural networks in the machine learning system to obtain optimized operating parameters; sending an impulse tensor to a third neural network to obtain optimized feed-forward equalization filter taps; and sending a combined tensor to a fourth neural network to obtain a predicted transmitter and dispersion eye closure penalty quaternary (TDECQ) measurement.
Example 15 is the method of any of Examples 7 through 14, wherein validating the predicted operating parameters comprises measuring the output of the transceiver and determining if the output is within a predetermined range.
Example 16 is the method of any of Examples 7 through 15, further comprising applying an inverse normalization to the predicted operating parameters.
Example 17 is a method of training a machine learning system for tuning optical transceivers, comprising: creating a predetermined number of sets of reference operating parameters; repeating, for each set of reference operating parameters: setting the operating parameters for the optical transceiver to a set of reference operating parameters; acquiring a waveform from the optical transceiver; tuning the optical transceiver to meet a desired output; and performing transmitter dispersion eye closure quaternary (TDECQ) analysis on the waveform; providing results of the analysis and the waveforms to a tensor builder; sending one or more tensors to one or more neural networks with the associated waveforms and reference operating parameters; and repeating the method until a sufficient number of transmitters have been tuned.
Example 18 is the method of Example 17, wherein creating the predetermined number of sets of reference operating parameters comprises: tuning a plurality of optical transceivers at a plurality of temperatures; generating a histogram of each parameter, at each temperature, for the plurality of tuned optical transceivers; calculating a mean reference parameter set by averaging values for each parameter in the histograms; increasing values in the mean parameter set to create a first delta reference parameter set; and decreasing values in the mean parameter set to create a second delta reference parameter set.
Example 19 is the method of Examples 17 or 18, further comprising normalizing the reference parameters.
Example 20 is the method of any of Examples 17 through 19, wherein the one or more neural networks comprise one or more neural networks for levels, one network for providing values for feed-forward equalization filters, and one network for a transmitter dispersion eye closure quaternary value.
All features disclosed in the specification, including the claims, abstract, and drawings, and all the steps in any method or process disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in the specification, including the claims, abstract, and drawings, can be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
Although specific embodiments have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, the invention should not be limited except as by the appended claims.
This disclosure claims benefit of U.S. Provisional Patent Application No. 63/165,698, titled, “OPTICAL TRANSMITTER TUNING USING MACHINE LEARNING,” filed Mar. 24, 2021, and U.S. Provisional Patent Application No. 63/272,998, titled “OPTICAL TRANSMITTER TUNING USING MACHINE LEARNING AND REFERENCE PARAMETERS,” filed Oct. 28, 2021, which are incorporated herein in their entirety.