Applications like machine learning (ML), deep learning (DL), natural language processing (NLP), and machine vision (MV) are becoming more complex over time and being developed to handle more sophisticated tasks. Computing devices, however, have not advanced at a pace where they can effectively handle the needs of these new applications. Without sufficiently advanced computing paradigms, ML, DL, NLP, and MV applications, for example, cannot reach their full potential.
A tensor engine is a specialized processor in an ASIC that can make a computer more effective at handling ML, DL, NLP, and MV applications. The tensor engine is an AI accelerator specifically designed for neural network machine learning, and can be utilized via TensorFlow software, for example. Tensor engines typically implement the linear algebra needed to process an inference using a model in the neural network. In such an implementation, the tensor engine performs the operations that are not handled by a DNN (which handles, for example, convolutions).
Tensor engines usually receive from memory multi-dimensional arrays containing the data on which the linear algebra is to be performed. The tensor engine must execute a nested loop structure to traverse these multi-dimensional arrays. This computation is expensive because it involves pointers, loop variables, and multiplication operations. The number of instructions needed to implement nested loops makes conventional tensor engines inadequate for high-performance computing applications.
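As an illustration of that overhead, the following is a minimal sketch in C of how a conventional engine might walk a three-dimensional C×H×W tensor with nested loops; the function and parameter names and the row-major layout are assumptions made for illustration, not the behavior of any particular engine.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal sketch (hypothetical names): conventional nested-loop traversal of a
 * C x H x W tensor stored row-major. The effective address is recomputed with
 * multiplications and loop-variable bookkeeping at every step. */
void walk_nested(const int8_t *base, size_t C, size_t H, size_t W,
                 void (*consume)(const int8_t *item))
{
    for (size_t c = 0; c < C; c++) {          /* channel loop */
        for (size_t h = 0; h < H; h++) {      /* height loop  */
            for (size_t w = 0; w < W; w++) {  /* width loop   */
                /* two multiplications and two additions per element */
                size_t offset = (c * H + h) * W + w;
                consume(base + offset);
            }
        }
    }
}
```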
One embodiment is a method for processing a tensor. The method comprises obtaining a first register for a number of items in the tensor, obtaining one or more second registers for a number of items in a first and a second axis of the tensor, obtaining a stride in the first and the second axis, obtaining a next item in the tensor using the stride in the first axis and a first offset register, when the first register indicates the tensor has additional items to process and the second registers indicate the next item resides in the first axis, obtaining a next item in the tensor using the stride in the first axis and the second axis, the first offset register, and a second offset register, when the first register indicates the tensor has additional items to process, and the second registers indicate the next item resides in the second axis of the tensor, modifying the first register and one or more of the second registers, and modifying at least one of the first and the second offset registers.
Another embodiment is a load/store unit (LDSU) for a tensor engine. The LDSU comprises a first register for storing values associated with a number of items in a tensor, a plurality of second registers for storing a number of items in a first and a second axis of the tensor, a plurality of offset registers associated with the first and the second axis, a first and a second stride register associated with the first and the second axis, a tensor walking module configured to obtain a next item in the tensor using a first stride register and a first offset register, when the first register indicates the tensor has additional items to process and the second registers indicate the next item resides in the first axis of the tensor, the tensor walking module further configured to obtain a next item in the tensor using the first and the second stride registers, the first offset register, and a second offset register, when the first register indicates the tensor has additional items to process, and the second registers indicate the next item resides in the second axis of the tensor, an iteration tracking module configured to modify the first and the second registers, and a striding module configured to modify at least one of the first offset register or the second offset register.
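To help visualize the register set recited above, the following C struct is a minimal sketch of per-tensor LDSU state for a two-axis tensor. The field names are hypothetical, chosen only to mirror the claim language (a first register counting items in the tensor, second registers counting items per axis, and per-axis stride and offset registers plus a base register); they do not correspond to actual hardware register names.

```c
#include <stdint.h>

/* Hypothetical model of the per-tensor LDSU register file for a two-axis
 * tensor; field names mirror the claim language, not real hardware. */
typedef struct {
    uint32_t  items_remaining; /* first register: items left in the tensor          */
    uint32_t  count[2];        /* second registers: items left in each axis         */
    uint32_t  size[2];         /* per-axis item counts, used to reload count[]      */
    int32_t   stride[2];       /* stride registers for the first and second axis    */
    int32_t   offset[2];       /* offset registers for the first and second axis    */
    uintptr_t base;            /* base register: address of the tensor's first item */
} ldsu_regs_t;
```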
The present application discloses a load/store unit (LDSU) as well as example machine-learning (ML) accelerators that can take advantage of the benefits provided by the LDSU. In some embodiments, the LDSU is configured for operation with a tensor engine. The following description contains specific information pertaining to implementations in the present disclosure. The Figures in the present application and their accompanying Detailed Description are directed to merely example implementations. Unless noted otherwise, like or corresponding elements among the Figures may be indicated by like or corresponding reference numerals. Moreover, the Figures in the present application are generally not to scale and are not intended to correspond to actual relative dimensions.
Referring to
Tensor engine 120 includes register bank 140 and compute elements 170. Compute elements 170 are configured to perform one or more mathematical operations on the data obtained from register bank 140 and optionally write the results back to register bank 140. LDSU 111 includes an access module 130. In operation, the LDSU 111 uses the access module 130 to read the tensor 100 from the memory 150 and to write the tensor 100 to the register bank 140. Alternatively, although not shown explicitly in
LDSU 111 includes a loop tracking module 192 (e.g., an iteration tracking module), an index tracking module 193, an addressing module 194, a walking module 195, a striding module 196, and a layout module 197. The modules 192-197 can be implemented in hardware, software, firmware, or any applicable combination of these elements. The tensor 100 can be obtained by walking through each data element of data type 165 in the tensor 100 using one or more of the modules 192-197. LDSU 111 walks through tensor 100 using a memory 190, which can be loaded in advance of processing tensor 100, either from a compiler, a host, or any applicable form of input capable of setting up memory 190 in advance of execution. Memory 190 can be updated as each item from tensor 100 is accessed by the LDSU 111. In one embodiment, when the LDSU 111 is moved to the next position in tensor 100, an effective address (e.g., in a memory region) for the next item is computed, which can be used by the access module 130 to read the next item from memory 150 or register bank 140.
Memory 190 can include one or more registers. At least some of the registers correspond to a first counter for the number of items in tensor 100 and a second counter for the number of items in each of a plurality of dimensions of tensor 100 (e.g., the size of the arrays for C, H, and W). In one embodiment, the first counter is set to the number of items in tensor 100 and, for each step, the counter is decremented until it reaches zero, at which time the system knows it has reached the end of tensor 100. Other implementations for the first counter are possible as well. The second counter can be set as indices for each dimension of tensor 100, such that for each step the second counter can be used to determine whether the next step in tensor 100 is in the current dimension, or whether the last item in the current dimension has been reached and the next stride is in the next axis of tensor 100 that needs to be traversed. In one embodiment, the value of the first counter can be determined by taking the number of items in each dimension and computing the product of those values.
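For example, a tensor with dimension sizes 4, 8, and 8 would give a first counter of 4 × 8 × 8 = 256. A minimal sketch of that setup step, continuing the hypothetical two-axis register model shown earlier (and therefore restricted to two dimensions), might look like this:

```c
/* Hypothetical setup step: load the counters before walking. The first
 * counter is the product of the per-dimension item counts; the second
 * counters are initialized to the size of each dimension. */
void ldsu_init_counters(ldsu_regs_t *r, uint32_t size0, uint32_t size1)
{
    r->size[0]  = size0;
    r->size[1]  = size1;
    r->count[0] = size0;
    r->count[1] = size1;
    r->items_remaining = size0 * size1;  /* product over all dimensions */
    r->offset[0] = 0;
    r->offset[1] = 0;
}
```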
The loop tracking module 192 can access one or more registers to determine when the end of the tensor has been reached. The index tracking module 193 can access one or more registers for each dimension of the tensor to determine whether the end of the tensor has been reached or whether the current item is the last element in a dimension. After the LDSU 111 moves to the next item, the loop tracking module 192 and the index tracking module 193 update, decrement, increment, and/or otherwise modify the registers.
Addressing module 194 can be used to determine the effective address for the next item in the tensor each time LDSU 111 moves to the next item. In the embodiment where memory 190 has a plurality of registers, the addressing module 194 uses a base register and one or more offset registers to provide the effective address (e.g., in a memory region) to the access module 130. The base register can have a value that corresponds to the memory location (e.g., memory region) where the first bit of the first item in the tensor resides, either in memory 150 or register bank 140.
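In other words, the effective address can be formed by adding the value in the base register to the sum of the current offset values, with no per-step multiplications. A minimal sketch, again using the hypothetical register model from above:

```c
/* Hypothetical: effective address = base register + sum of per-axis offsets.
 * The multiplications implied by array indexing are avoided because the
 * strides are folded into the offsets as the tensor is walked. */
static uintptr_t ldsu_effective_address(const ldsu_regs_t *r)
{
    return r->base + (intptr_t)r->offset[0] + (intptr_t)r->offset[1];
}
```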
Striding module 196 can be used to determine the stride in each of the dimensions of tensor 100. The stride values can be stored in memory 190 in a stride register for each dimension, for example. In one embodiment, a compiler, host, or other process loads the stride registers in advance of processing a tensor. At each step in the processing of the tensor, the striding module 196 uses the appropriate stride registers to update the offset registers so that they correspond to the next position of the LDSU 111.
Walking module 195 can be used to move the LDSU 111 to the next item in tensor 100 so that the access module 130 can obtain (load or store) the next item from either memory 150 or register bank 140. In one embodiment, memory 190 includes a plurality of offset registers, at least one for each dimension of tensor 100. To obtain the next item in tensor 100 and/or to move the LDSU 111 to the next position, the current values in the offset registers are added together. In one embodiment, additional LDSUs 111B and additional tensor engines 120B are used such that each of tensors 102, 104, and 106 has its own LDSU and tensor engine that can operate in parallel with LDSU 111 and tensor engine 120. In one embodiment, an optional layout module 197 is used, which makes the manner and/or order in which tensor walking module 195 walks through tensor 100 configurable. The order can be set at compile time in advance of processing tensor 100, either from a compiler, a host, or any applicable form of input capable of setting up memory 190 and/or providing input and output to the layout module 197. In embodiments where registers are used for each dimension of the tensor, the registers can form a 2-dimensional array in which the layout module 155 selects each row for processing in the order specified by the layout, and the tensor is processed accordingly.
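A minimal sketch of a single walk step, continuing the hypothetical register model above, is shown below. It approximates the behavior described for the walking, striding, and tracking modules using only additions, comparisons, and decrements, with no nested-loop bookkeeping; it is an illustration of the idea, not the hardware's actual logic.

```c
#include <stdbool.h>

/* Hypothetical single walk step over a two-axis tensor. Returns false once
 * every item has been visited. When more items remain in the first axis,
 * only the first offset is advanced by the first stride; otherwise the
 * first axis wraps and the second offset is advanced by the second stride. */
bool ldsu_step(ldsu_regs_t *r)
{
    if (r->items_remaining == 0)
        return false;                      /* end of tensor reached        */
    r->items_remaining--;                  /* first counter is decremented */

    if (r->count[0] > 1) {                 /* next item lies in the first axis */
        r->count[0]--;
        r->offset[0] += r->stride[0];
    } else {                               /* wrap into the second axis */
        r->count[0]  = r->size[0];
        r->offset[0] = 0;
        r->count[1]--;
        r->offset[1] += r->stride[1];
    }
    return true;
}
```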
Using three nested loops to process tensor 210 is inefficient for use in an ML accelerator. The computation to find the effective address, along with pointer arithmetic on array indices, occurs at every step of the loop. The size and number of tensors that are typically processed, coupled with the number of inefficient operations, make the prior art tensor engine of
One example of a compute element 400 is shown in
Referring now to
Activations from an originating node in the ML processor, or from an originating node in another ML processor in the ML accelerator 500, are streamed into a destination node in the ML processor. DNN 106 and tensor engine 120 perform computations on the streamed activations using the weights stored in L2SRAM 114. By pre-loading weights into L2SRAM 114 of each node 504, ML models (also referred to as execution graphs) are pre-loaded in each ML processor of the ML accelerator 500.
In general, a machine learning model is distributed onto one or more nodes where each node might execute several neurons. In the embodiment of
In the implementation of
Referring now to
As will be understood by someone having ordinary skill in the art, the process repeats over an arbitrary height, width, channel, and any additional dimensions of any tensor the system walks. Moreover, the system can support any number of tensors and any arbitrary size for the primitive data elements, from one bit to BFP-32, for example. Furthermore, the registers in memory 190 of LDSU 111 can be laid out, by a compiler, for example, such that the user or the input data can determine the order in which the dimensions are walked. In one embodiment, the height dimension can be walked first, and in another embodiment the channel dimension can be walked first, for example. This could provide advantages and/or optimizations for different types of input data sets when used by a system that takes advantage of a tensor engine with LDSU 111. In one embodiment, a layout module 155 can be used which can receive input from the compiler, a user interface, or other system to enable the rows in memory 190 to be traversed in an arbitrary order. It should also be noted that anywhere the present disclosure describes a tensor being obtained from a memory, various embodiments could also obtain the tensor from a register bank in the tensor engine itself, or elsewhere. Moreover, when an effective address is determined, it can be used to load or store a tensor at the determined address.
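One way to picture this configurability is to treat the per-dimension registers as rows of a small table whose visiting order is fixed at compile time. The sketch below is purely illustrative; the `order` array, the helper function, and the three-dimension limit are assumptions, and a real layout module could arrange the rows in memory 190 however the compiler directs.

```c
#include <stdint.h>

/* Hypothetical layout configuration: order[] selects which dimension the walk
 * treats as innermost (fastest varying). For a C, H, W tensor, {2, 1, 0}
 * walks W fastest, while {0, 1, 2} walks the channel dimension fastest. */
typedef struct {
    uint32_t size[3];    /* items in each dimension (e.g., C, H, W)   */
    int32_t  stride[3];  /* per-dimension strides set by the compiler */
    uint8_t  order[3];   /* traversal order chosen by the layout      */
} layout_cfg_t;

/* Hypothetical: reorder the per-dimension sizes and strides so that the
 * dimension named by order[0] is walked fastest; the reordered values could
 * then be loaded into memory 190 before the walk begins. */
void layout_apply(const layout_cfg_t *cfg,
                  uint32_t size_out[3], int32_t stride_out[3])
{
    for (int i = 0; i < 3; i++) {
        size_out[i]   = cfg->size[cfg->order[i]];
        stride_out[i] = cfg->stride[cfg->order[i]];
    }
}
```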
When there are more items at operation 1406 to obtain, read, write, load, store, and/or otherwise access, the tensor can be walked as follows. The next item is obtained at operation 1408 using the stride in any of the applicable dimensions and any values in the offset registers. One embodiment uses a striding module for each axis of the tensor that is being traversed, which enables the system to update the offset registers every time the LDSU is moved without needing any nested loop operations. At operation 1410, the effective address of the next item is computed. An addressing module can be used to add the value in a base register to the current offset values summed by a tensor walking module 195, for instance. At operation 1412, the next item is read, written, loaded, stored, and/or otherwise accessed in a memory location using the effective address. Thereafter, at operation 1414, the first and the second counters are modified.
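Tying operations 1406 through 1414 together, a software analogue of the walk might look like the following minimal sketch, which reuses the hypothetical helpers from the earlier sketches; the mapping of code lines to operation numbers is illustrative only.

```c
/* Hypothetical end-to-end walk: while items remain (operation 1406), the
 * effective address of the current item is formed from the base and offset
 * registers (operation 1410), the item is accessed (operation 1412), and the
 * offsets and counters are then advanced for the next item (operations 1408
 * and 1414). */
void ldsu_walk_all(ldsu_regs_t *r, void (*access)(uintptr_t ea))
{
    while (r->items_remaining > 0) {
        uintptr_t ea = ldsu_effective_address(r);
        access(ea);
        (void)ldsu_step(r);
    }
}
```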
When there are no more items at operation 1406, the last item in the tensor has been reached. Control can return, at operation 1400, to the main system, ML accelerator, computing device, or other process that called the LDSU functionality and/or otherwise needed to process a tensor. Operation 1400 repeats until the LDSU functionality needs to be called again, at which point operation 1400 becomes true.
Thereafter, or if the current item was not the last item at operation 1510, the next item is obtained using the stride and any existing offsets at operation 1516. At operation 1518, the effective address of the next item is computed. At operation 1520, the next item is read, written, loaded, stored, and/or otherwise accessed to or from a memory location such as a memory or a register bank. At operation 1522, the item counter is modified. At operation 1524, the indices for the current dimensions being traversed are modified. The process repeats at operation 1508 until the last item in the tensor is processed.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The present application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/441,689, filed Jan. 27, 2023, which incorporates by reference, in its entirety, U.S. patent application Ser. No. 17/807,694, entitled MULTI-CHIP ELECTRO-PHOTONIC NETWORK, filed on Jun. 17, 2022.
Number | Name | Date | Kind |
---|---|---|---|
4912706 | Eisenberg et al. | Mar 1990 | A |
4934775 | Koai | Jun 1990 | A |
5457563 | Van Deventer | Oct 1995 | A |
6249621 | Sargent et al. | Jun 2001 | B1 |
6714552 | Cotter | Mar 2004 | B1 |
7034641 | Clarke et al. | Apr 2006 | B1 |
7778501 | Beausoleil et al. | Aug 2010 | B2 |
7889996 | Zheng et al. | Feb 2011 | B2 |
7894699 | Beausoleil | Feb 2011 | B2 |
7961990 | Krishnamoorthy et al. | Jun 2011 | B2 |
8064739 | Binkert et al. | Nov 2011 | B2 |
8213751 | Ho et al. | Jul 2012 | B1 |
8260147 | Scandurra et al. | Sep 2012 | B2 |
8285140 | Mccracken et al. | Oct 2012 | B2 |
8326148 | Bergman et al. | Dec 2012 | B2 |
8340517 | Shacham et al. | Dec 2012 | B2 |
8447146 | Beausoleil et al. | May 2013 | B2 |
8611747 | Wach | Dec 2013 | B1 |
9036482 | Lea | May 2015 | B2 |
9369784 | Zid et al. | Jun 2016 | B2 |
9495295 | Dutt et al. | Nov 2016 | B1 |
9791761 | Li et al. | Sep 2017 | B1 |
9831360 | Knights et al. | Nov 2017 | B2 |
9882655 | Li et al. | Jan 2018 | B2 |
10031287 | Heroux et al. | Jul 2018 | B1 |
10107959 | Heroux et al. | Oct 2018 | B2 |
10117007 | Song et al. | Oct 2018 | B2 |
10185085 | Huangfu et al. | Jan 2019 | B2 |
10225632 | Dupuis et al. | Mar 2019 | B1 |
10250958 | Chen et al. | Apr 2019 | B2 |
10281747 | Padmaraju et al. | May 2019 | B2 |
10365445 | Badihi et al. | Jul 2019 | B2 |
10564512 | Sun et al. | Feb 2020 | B2 |
10598852 | Zhao et al. | Mar 2020 | B1 |
10651933 | Chiang et al. | May 2020 | B1 |
10837827 | Nahmias et al. | Nov 2020 | B2 |
10908369 | Mahdi et al. | Feb 2021 | B1 |
10915297 | Halutz | Feb 2021 | B1 |
10935722 | Li et al. | Mar 2021 | B1 |
10951325 | Rathinasamy et al. | Mar 2021 | B1 |
10962728 | Nelson et al. | Mar 2021 | B2 |
10976491 | Coolbaugh et al. | Apr 2021 | B2 |
11107770 | Ramalingam et al. | Aug 2021 | B1 |
11165509 | Nagarajan et al. | Nov 2021 | B1 |
11165711 | Mehrvar et al. | Nov 2021 | B2 |
11233580 | Meade et al. | Jan 2022 | B2 |
11321092 | Raikin | May 2022 | B1 |
11336376 | Xie | May 2022 | B1 |
11493714 | Mendoza et al. | Nov 2022 | B1 |
11500153 | Meade et al. | Nov 2022 | B2 |
11509397 | Ma et al. | Nov 2022 | B2 |
11769710 | Refai-Ahmed et al. | Sep 2023 | B2 |
11817903 | Pleros et al. | Nov 2023 | B2 |
20040213229 | Chang et al. | Oct 2004 | A1 |
20060159387 | Handelman | Jul 2006 | A1 |
20060204247 | Murphy | Sep 2006 | A1 |
20110206379 | Budd | Aug 2011 | A1 |
20120020663 | McLaren | Jan 2012 | A1 |
20120251116 | Li et al. | Oct 2012 | A1 |
20130275703 | Schenfeld | Oct 2013 | A1 |
20130308942 | Ji et al. | Nov 2013 | A1 |
20150109024 | Abdelfattah et al. | Apr 2015 | A1 |
20150354938 | Mower et al. | Dec 2015 | A1 |
20160116688 | Hochberg et al. | Apr 2016 | A1 |
20160344507 | Marquardt et al. | Nov 2016 | A1 |
20170045697 | Hochberg et al. | Feb 2017 | A1 |
20170194309 | Evans et al. | Jul 2017 | A1 |
20170194310 | Evans et al. | Jul 2017 | A1 |
20170207600 | Klamkin et al. | Jul 2017 | A1 |
20170220352 | Woo | Aug 2017 | A1 |
20180107030 | Morton et al. | Apr 2018 | A1 |
20180260703 | Soljacic et al. | Sep 2018 | A1 |
20190026225 | Gu et al. | Jan 2019 | A1 |
20190049665 | Ma et al. | Feb 2019 | A1 |
20190205737 | Bleiweiss et al. | Jul 2019 | A1 |
20190265408 | Ji et al. | Aug 2019 | A1 |
20190266088 | Kumar | Aug 2019 | A1 |
20190266089 | Kumar | Aug 2019 | A1 |
20190294199 | Carolan et al. | Sep 2019 | A1 |
20190317285 | Liff | Oct 2019 | A1 |
20190317287 | Raghunathan et al. | Oct 2019 | A1 |
20190356394 | Bunandar et al. | Nov 2019 | A1 |
20190372589 | Gould | Dec 2019 | A1 |
20190385997 | Choi et al. | Dec 2019 | A1 |
20200006304 | Chang et al. | Jan 2020 | A1 |
20200125716 | Chittamuru et al. | Apr 2020 | A1 |
20200142441 | Bunandar et al. | May 2020 | A1 |
20200174707 | Johnson | Jun 2020 | A1 |
20200200987 | Kim | Jun 2020 | A1 |
20200213028 | Behringer et al. | Jul 2020 | A1 |
20200250532 | Shen et al. | Aug 2020 | A1 |
20200284981 | Harris et al. | Sep 2020 | A1 |
20200310761 | Rossi | Oct 2020 | A1 |
20200409001 | Liang et al. | Dec 2020 | A1 |
20200410330 | Liu | Dec 2020 | A1 |
20210036783 | Bunandar et al. | Feb 2021 | A1 |
20210064958 | Lin et al. | Mar 2021 | A1 |
20210072784 | Lin et al. | Mar 2021 | A1 |
20210116637 | Li et al. | Apr 2021 | A1 |
20210132309 | Zhang et al. | May 2021 | A1 |
20210132650 | Wenhua et al. | May 2021 | A1 |
20210133547 | Wenhua et al. | May 2021 | A1 |
20210173238 | Hosseinzadeh | Jun 2021 | A1 |
20210257396 | Piggott et al. | Aug 2021 | A1 |
20210271020 | Islam et al. | Sep 2021 | A1 |
20210286129 | Fini et al. | Sep 2021 | A1 |
20210305127 | Refai-Ahmed et al. | Sep 2021 | A1 |
20210406164 | Grymel | Dec 2021 | A1 |
20210409848 | Saunders et al. | Dec 2021 | A1 |
20220003948 | Zhou et al. | Jan 2022 | A1 |
20220004029 | Meng | Jan 2022 | A1 |
20220012578 | Brady et al. | Jan 2022 | A1 |
20220012582 | Pleros et al. | Jan 2022 | A1 |
20220044092 | Pleros et al. | Feb 2022 | A1 |
20220091332 | Yoo et al. | Mar 2022 | A1 |
20220092016 | Kumashikar | Mar 2022 | A1 |
20220171142 | Wright et al. | Jun 2022 | A1 |
20220302033 | Cheah et al. | Sep 2022 | A1 |
20220342164 | Chen et al. | Oct 2022 | A1 |
20220374575 | Ramey et al. | Nov 2022 | A1 |
20220382005 | Rusu | Dec 2022 | A1 |
20230089415 | Zilkie et al. | Mar 2023 | A1 |
20230197699 | Spreitzer et al. | Jun 2023 | A1 |
20230251423 | Perez Lopez et al. | Aug 2023 | A1 |
20230258886 | Liao | Aug 2023 | A1 |
20230282547 | Refai-Ahmed et al. | Sep 2023 | A1 |
20230308188 | Dorta-Quinones | Sep 2023 | A1 |
20230314702 | Yu | Oct 2023 | A1 |
20230376818 | Nowak | Nov 2023 | A1 |
20230393357 | Ranno | Dec 2023 | A1 |
Number | Date | Country |
---|---|---|
2019100030 | Feb 2019 | AU |
2019100679 | Aug 2019 | AU |
2019100750 | Aug 2019 | AU |
102281478 | Dec 2011 | CN |
102333250 | Jan 2012 | CN |
102413039 | Apr 2012 | CN |
102638311 | Aug 2012 | CN |
102645706 | Aug 2012 | CN |
202522621 | Nov 2012 | CN |
103369415 | Oct 2013 | CN |
103442311 | Dec 2013 | CN |
103580890 | Feb 2014 | CN |
104539547 | Apr 2015 | CN |
105451103 | Mar 2016 | CN |
205354341 | Jun 2016 | CN |
105812063 | Jul 2016 | CN |
105847166 | Aug 2016 | CN |
106126471 | Nov 2016 | CN |
106331909 | Jan 2017 | CN |
106407154 | Feb 2017 | CN |
106533993 | Mar 2017 | CN |
106549874 | Mar 2017 | CN |
106796324 | May 2017 | CN |
106888050 | Jun 2017 | CN |
106911521 | Jun 2017 | CN |
106936708 | Jul 2017 | CN |
106936736 | Jul 2017 | CN |
106980160 | Jul 2017 | CN |
107911761 | Apr 2018 | CN |
108599850 | Sep 2018 | CN |
207835452 | Sep 2018 | CN |
108737011 | Nov 2018 | CN |
110266585 | Sep 2019 | CN |
110505021 | Nov 2019 | CN |
111208690 | May 2020 | CN |
111752891 | Oct 2020 | CN |
111770019 | Oct 2020 | CN |
111786911 | Oct 2020 | CN |
3007537 | Dec 2014 | FR |
2223867 | Apr 1990 | GB |
201621017235 | Jul 2016 | IN |
202121008267 | Apr 2021 | IN |
6747660 | Aug 2020 | JP |
2020155112 | Sep 2020 | JP |
101242172 | Mar 2013 | KR |
101382606 | Apr 2014 | KR |
101465420 | Nov 2014 | KR |
101465498 | Nov 2014 | KR |
101541534 | Aug 2015 | KR |
101548695 | Sep 2015 | KR |
101766786 | Aug 2017 | KR |
101766792 | Aug 2017 | KR |
WO2015176289 | Nov 2015 | WO |
WO2020072925 | Apr 2020 | WO |
WO2020102204 | May 2020 | WO |
WO2020191217 | Sep 2020 | WO |
WO2021021787 | Feb 2021 | WO |
WO2022032105 | Feb 2022 | WO |
WO2022133490 | Jun 2022 | WO |
WO2023177417 | Sep 2022 | WO |
WO2022266676 | Dec 2022 | WO |
WO2023177922 | Sep 2023 | WO |
Entry |
---|
Dakkak, A.D. et al., Accelerating Reduction and Scan Using Tensor Core Units, 2019, ACM, pp. 46-57. (Year: 2019). |
U.S. Appl. No. 63/392,475, filed Jul. 26, 2022, Aggarwal et al. |
U.S. Appl. No. 18/076,196, filed Dec. 6, 2022, Aggarwal et al. |
U.S. Appl. No. 18/076,210, filed Dec. 6, 2022, Aggarwal et al. |
U.S. Appl. No. 18/217,898, filed Jul. 3, 2023, Aggarwal et al. |
U.S. Appl. No. 63/437,639, filed Jan. 6, 2023, Plunkett et al. |
U.S. Appl. No. 63/437,641, filed Jan. 6, 2023, Plunkett et al. |
Hendry, G. et al.; “Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing,” SC '10: Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, New Orleans, LA, USA, 2010, pp. 1-12. |
Liu, Jifeng, et al; “Waveguide-integrated, ultralow-energy GeSi electro-absorption modulators”, Nature Photonics, [Online] vol. 2, No. 7, May 30, 2008 (May 30, 2008), pp. 433-437. |
Wu, Longsheng et al.; "Design of a broadband Ge1-xSix electro-absorption modulator based on the Franz-Keldysh effect with thermal tuning", Optics Express, [Online] vol. 28, No. 5, Feb. 27, 2020 (Feb. 27, 2020), p. 7585. |
Zhang, Yulong; "Building blocks of a silicon photonic integrated wavelength division multiplexing transmitter for detector instrumentation", Doktors der Ingenieurwissenschaften (Dr.-Ing.), Dec. 15, 2020 (Dec. 15, 2020), 128 pages. |
U.S. Appl. No. 18/540,579, filed May 1, 2024, Office Action. |
U.S. Appl. No. 17/807,692, filed Jul. 12, 2024, Office Action. |
U.S. Appl. No. 18/407,408, filed Jul. 30, 2024, Notice of Allowance. |
U.S. Appl. No. 18/407,410, filed May 24, 2024, Office Action. |
U.S. Appl. No. 18/407,410, filed Aug. 12, 2024, Notice of Allowance. |
U.S. Appl. No. 17/903,455, filed Jun. 27, 2024, Office Action. |
U.S. Appl. No. 18/590,708, filed Aug. 7, 2024, Notice of Allowance. |
PCT/US2023/015680, Aug. 9, 2024, International Preliminary Report on Patentability. |
10-2023-7007856, Aug. 21, 2024, Foreign Notice of Allowance. |
202180068303.5, Jul. 31, 2024, Foreign Notice of Allowance. |
11202307570T, Apr. 10, 2024, Foreign Notice of Allowance. |
202280020819.7, Apr. 4, 2024, Foreign Office Action. |
202180093875.9, Apr. 12, 2024, Foreign Office Action. |
PCT/US2024/010774, May 3, 2024, International Search Report and Written Opinion. |
EP23220883, May 7, 2024, Extended European Search Report. |
PCT/US2024/013168, May 8, 2024, International Search Report and Written Opinion. |
22826043.6, Jun. 14, 2024, Extended European Search Report. |
21853044.2, Jul. 23, 2024, Extended European Search Report. |
1020237024129, Aug. 2, 2024, Foreign Office Action. |
1020237044346, Aug. 27, 2024, Foreign Office Action. |
U.S. Appl. No. 63/049,928, filed Jul. 9, 2020, Pleros et al. |
U.S. Appl. No. 63/062,163, filed Aug. 6, 2020, Pleros et al. |
U.S. Appl. No. 63/199,286, filed Dec. 17, 2020, Ma et al. |
U.S. Appl. No. 63/199,412, filed Dec. 23, 2022, Ma et al. |
U.S. Appl. No. 63/201,155, filed Apr. 15, 2021, Ma et al. |
U.S. Appl. No. 63/261,974, filed Oct. 1, 2021, Pleros et al. |
U.S. Appl. No. 63/212,353, filed Jun. 18, 2021, Winterbottom et al. |
U.S. Appl. No. 17/807,692, filed Jun. 17, 2022, Winterbottom et al. |
U.S. Appl. No. 17/807,694, filed Jun. 17, 2022, Winterbottom et al. |
U.S. Appl. No. 17/807,698, filed Jun. 17, 2022, Winterbottom et al. |
U.S. Appl. No. 17/807,699, filed Jun. 17, 2022, Winterbottom et al. |
U.S. Appl. No. 17/807,695, filed Jun. 17, 2022, Winterbottom et al. |
U.S. Appl. No. 63/321,453, filed Mar. 18, 2022, Bos et al. |
U.S. Appl. No. 17/903,455, filed Sep. 6, 2022, Lazovsky et al. |
U.S. Appl. No. 18/123,161, filed Mar. 17, 2023, Bos et al. |
U.S. Appl. No. 17/957,731, filed Sep. 30, 2022, Pleros et al. |
U.S. Appl. No. 17/957,812, filed Sep. 30, 2022, Pleros et al. |
U.S. Appl. No. 63/420,323, filed Oct. 28, 2022, Sahni. |
U.S. Appl. No. 18/123,170, filed Mar. 17, 2023, Sahni. |
U.S. Appl. No. 63/420,330, filed Oct. 28, 2022, Sahni et al. |
U.S. Appl. No. 63/428,663, filed Nov. 29, 2022, Sahni et al. |
U.S. Appl. No. 63/441,689, filed Jan. 27, 2023, Winterbottom. |
U.S. Appl. No. 63/579,486, filed Aug. 29, 2023, Aggarwal et al. |
U.S. Appl. No. 63/535,509, filed Aug. 30, 2023, Winterbottom et al. |
U.S. Appl. No. 63/535,511, filed Aug. 30, 2023, Winterbottom et al. |
U.S. Appl. No. 63/535,512, filed Aug. 30, 2023, José Maia da Silva et al. |
U.S. Appl. No. 63/592,509, filed Oct. 23, 2023, Aggarwal et al. |
U.S. Appl. No. 63/592,517, filed Oct. 23, 2023, Winterbottom et al. |
U.S. Appl. No. 18/473,898, filed Sep. 25, 2023, Pleros et al. |
U.S. Appl. No. 18/523,667, filed Nov. 29, 2023, Sahni et al. |
U.S. Appl. No. 18/293,673, filed Jan. 30, 2024, Bos et al. |
U.S. Appl. No. 18/407,408, filed Jan. 8, 2024, Aggarwal. |
U.S. Appl. No. 18/407,410, filed Jan. 8, 2024, Aggarwal. |
U.S. Appl. No. 18/423,210, filed Jan. 25, 2024, Winterbottom. |
U.S. Appl. No. 18/540,579, filed Dec. 14, 2023, Winterbottom et al. |
U.S. Appl. No. 18/590,689, filed Feb. 28, 2024, Winterbottom et al. |
U.S. Appl. No. 18/590,703, filed Feb. 28, 2024, Winterbottom et al. |
U.S. Appl. No. 18/590,708, filed Feb. 28, 2024, Winterbottom et al. |
Ardestani, et al., “Supporting Massive DLRM Inference Through Software Defined Memory”, Nov. 8, 2021; 14 pages. |
Agrawal, Govind; “Chapter 4—Optical Receivers”, Fiber-Optic Communications Systems, John Wiley & Sons, Inc., (2002), pp. 133-182. |
Burgwal, Roel et al; “Using an imperfect photonic network to implement random unitaries,” Opt. Express 25(23), (2017), 28236-28245. |
Capmany, Francoy et al.; "The programmable processor"; Nature Photonics, vol. 10; (2016), 5 pgs. |
Carolan, Jacques et al.; “Universal Linear Optics”; arXiv: 1505.01182v1; (2015); 13 pgs. |
Clements, William et al; “Optimal design for universal multiport interferometers”; Optica; vol. 3, No. 12; (2016), pp. 1460-1465. |
Eltes, Felix et al.; "A BaTiO3-Based Electro-Optic Pockels Modulator Monolithically Integrated on an Advanced Silicon Photonics Platform"; J. Lightwave Technol. vol. 37, No. 5; (2019), pp. 1456-1462. |
Eltes, Felix et al.; "Low-Loss BaTiO3—Si Waveguides for Nonlinear Integrated Photonics"; ACS Photon., vol. 3, No. 9; (2016), pp. 1698-1703. |
Harris, NC et al.; "Efficient, compact and low loss thermo-optic phase shifter in silicon"; Opt. Express, vol. 22, No. 9; (2014), pp. 10487-10493. |
Jiang, W.; "Nonvolatile and ultra-low-loss reconfigurable mode (De)multiplexer/switch using triple-waveguide coupler with Ge2Sb2Se4Te1 phase change material"; Sci. Rep. vol. 8, No. 1; (2018), 12 pages. |
Lambrecht, Joris et al.; "90-Gb/s NRZ Optical Receiver in Silicon Using a Fully Differential Transimpedance Amplifier," Journal of Lightwave Technology, vol. 37, No. 9; (2019); pp. 1964-1973. |
Manolis, A. et al; "Non-volatile integrated photonic memory using GST phase change material on a fully etched Si3N4/SiO2 waveguide"; Conference on Lasers and Electro-optics; OSA Technical Digest, paper STh3R.4; (2020); 2 pages. |
Miller, David A. et al; “Perfect optics with imperfect components”; Optica, vol. 2, No. 8; (2015); pp. 747-750. |
Miller, David A. et al; "Self-Configuring Universal Linear Optical Component"; Photon. Res. 1; [Online]; Retrieved from the internet: URL: https://arxiv.org/ftp/arxiv/papers/1303/1303.4602.pdf; (2013), pp. 1-15. |
Miscuglio, Mario et al.; “Photonic Tensor cores for machine learning”; Applied Physics Reviews, vol. 7, Issue 3; (2020), 16 pages. |
Mourgias-Alexandris, George et al; “An all-optical neuron with sigmoid activation function;” Optics Express, vol. 27, No. 7; (2019), pp. 9620-9630. |
Mourgias-Alexandris, George et al; Neuromorphic Photonics with Coherent Linear Neurons Using Dual-IQ Modulation Cells, Journal of Lightwave Technology, vol. 38, No. 4; Feb. 15, 2020, pp. 811-819. |
Pai, Sunil et al.; “Parallel Programming of an Arbitrary Feedforward Photonic Network”; IEEE Journal of Selected Topics in Quantum Electronics, vol. 26, No. 5; (2020), 13 pages. |
Perez, Daniel et al. "Reconfigurable lattice mesh designs for programmable photonic processors"; Optics Express vol. 24, Issue 11; (2016); pp. 12093-12106. |
Raj, Mayank et al.; "Design of a 50-Gb/s Hybrid Integrated Si-Photonic Optical Link in 16-nm FinFET"; IEEE Journal of Solid-State Circuits, vol. 55, No. 4, Apr. 2020, pp. 1086-1095. |
Reck, M. et al; “Experimental Realization of any Discrete Unitary Operator”; Phys. Rev. Lett. 73; (1994); pp. 58-61. |
Shen, Yichen et al; “Deep learning with coherent nanophotonic circuits”; https://arxiv.org/pdf/1610.02365.pdf; (2016); 8 pages. |
Shi, Bin et al.; Numerical Simulation of an InP Photonic Integrated Cross-Connect for Deep Neural Networks on Chip; Applied Sciences, Jan. 9, 2020, pp. 1-15. |
Shokraneh, Farhad et al; "The diamond mesh, a phase-error- and loss-tolerant field-programmable MZI-based optical processor for optical neural networks"; Opt. Express, vol. 28, No. 16; (2020); pp. 23495-23508. |
Sun, Chen et al; “A 45 nm cmos-soi monolithic photonics platform with bit-statistics-based resonant microring thermal tuning”; IEEE Journal of Solid-State Circuits, vol. 51, No. 4; (2016); 20 pages. |
Tait, Alexander et al; "Broadcast and Weight: an Integrated Network for Scalable Photonic Spike Processing"; Journal of Lightwave Technology, vol. 32, No. 21; (2014); pp. 4029-4041. |
Yang, Lin et al; “On-chip CMOS-compatible optical signal processor”; Opt. Express, vol. 20, No. 12; (2012) pp. 13560-13565. |
Zhuang, L. et al; Programmable photonic signal processor chip for radiofrequency applications; Optica 2; 854-859; (2015); 10 pages. |
U.S. Appl. No. 17/395,849, filed Jan. 5, 2023, Office Action. |
U.S. Appl. No. 17/395,849, filed Jul. 24, 2023, Notice of Allowance. |
U.S. Appl. No. 17/645,001, filed Jul. 20, 2022, Notice of Allowance. |
U.S. Appl. No. 18/540,579, Feb. 14, 2024, Office Action. |
U.S. Appl. No. 17/807,692, Feb. 15, 2024, Restriction Requirement. |
U.S. Appl. No. 18/407,408, filed Mar. 28, 2024, Office Action. |
U.S. Appl. No. 18/407,410, filed Mar. 15, 2024, Restriction Requirement. |
PCT/US2021/044956, Nov. 19, 2021, ISR. |
PCT/US2021/073003, Mar. 22, 2022, ISR. |
PCT/US2022/073039, Sep. 1, 2022, Invitation to Pay Additional Fees. |
PCT/US2022/073039, Dec. 2, 2022, International Search Report and Written Opinion. |
PCT/US2022/042621. Feb. 15, 2023, International Search Report and Written Opinion. |
PCT/US2023/015680, May 23, 2023, Invitation to Pay Additional Fees. |
PCT/US2023/015680, Aug. 23, 2023, International Search Report and Written Opinion. |
20220404544, Jan. 19, 2024, Foreign Office Action. |
202180068303.5, Jan. 20, 2024, Foreign Office Action. |
PCT/US2022/042621, Feb. 26, 2024, International Preliminary Report on Patentability. |
2023-537068, Oct. 1, 2024, Foreign Office Action. |
11202304676X, Oct. 4, 2024, Foreign Notice of Allowance. |
Number | Date | Country | |
---|---|---|---|
20240403046 A1 | Dec 2024 | US |
Number | Date | Country | |
---|---|---|---|
63441689 | Jan 2023 | US |