Examples described herein relate to integrated circuits (ICs), for example an integrated circuit (IC) supporting a coupling with one or more other ICs such that the two or more coupled ICs appear as a single integrated device to a host processor device and its associated operating system.
Depending on the example, a number of transducers may be controlled by a processor (such as a software driver of a processor, or host, running an operating system). Multiple transducers that are connected and that are to be controlled by a driver of a processor may be referred to as “aggregated” transducers, and there can be associated difficulties in controlling such “aggregated” transducers.
These difficulties can include the scenario in which a particular software driver or controlling software only works when the “aggregated” transducers, or their controlling integrated circuits, are identical. For example, a particular driver may only output a type or format of signal that is compatible with one transducer of the “aggregated” transducers and not with the one or more other “aggregated” transducers, such that the input/output signals required by the driver and by the one or more other “aggregated” transducers are incompatible.
The present examples are concerned with ICs that can present themselves, and the respective transducer(s) to which they are connected, to an operating system and its associated host processor device as a single integrated device. By “integrated” in the sense of “a single integrated device”, it is meant that two or more ICs can present themselves as if they were a single IC to software running on a host processor, according to the techniques presented in this disclosure. In other words, the host processor appears, from its perspective, to be coupled to a monolithic integrated device, or monolithic IC, that is made up of a plurality of ICs.
According to an example there is provided a first integrated circuit configured to receive an audio signal and configured to drive an audio transducer based on the received audio signal, the first integrated circuit being configured to transmit a portion of the audio signal to a second integrated circuit.
The first integrated circuit may further comprise a first interface and a processor. The first interface may be configured to receive the audio signal and transmit the audio signal to the processor. The processor may be configured to transmit the portion of the audio signal to the second integrated circuit.
The first integrated circuit may be configured to transmit an echo cancellation signal to an external processor.
The processor may be configured to receive an echo cancellation signal from the second integrated circuit. The first interface may be configured to transmit the echo cancellation signal to the external processor based on the received echo cancellation signal.
The processor may be configured to receive two mono echo cancellation signals and combine these into a stereo echo cancellation signal. The first interface may be configured to transmit the stereo echo cancellation signal to the external processor.
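For illustration only, the combination of two mono echo cancellation signals into a single stereo signal may be sketched in software as follows. The function name and the sample-list representation are assumptions made for this sketch; in practice the operation would be performed by the processor of the integrated circuit on streamed audio frames:

```python
def combine_mono_to_stereo(left_ec, right_ec):
    """Interleave two mono echo-cancellation streams into one stereo stream.

    left_ec and right_ec are equal-length sequences of samples; the result
    alternates left/right samples, the conventional interleaved stereo
    frame layout.
    """
    if len(left_ec) != len(right_ec):
        raise ValueError("mono streams must be the same length")
    stereo = []
    for left_sample, right_sample in zip(left_ec, right_ec):
        stereo.extend((left_sample, right_sample))
    return stereo
```

For example, combining the mono streams `[1, 2]` and `[3, 4]` yields the interleaved stereo stream `[1, 3, 2, 4]`.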
The processor may be configured to generate the echo cancellation signal.
The first integrated circuit may be configured to drive at least one tweeter speaker and/or at least one woofer speaker.
The processor may be configured to split the received audio signal into first and second frequency bands. The first integrated circuit may be configured to drive the audio transducer on the basis of one of the first and second frequency bands. The processor may be configured to transmit the other of the first and second frequency bands to the second integrated circuit.
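As a minimal illustration of such a band split, a first-order complementary crossover can be sketched as follows: the low band is produced by a one-pole low-pass filter and the high band is the residue, so the two bands sum back to the original signal sample-for-sample. The function name and the smoothing coefficient are illustrative assumptions only, not values defined by this disclosure:

```python
def band_split(samples, alpha=0.1):
    """Split samples into complementary low and high frequency bands.

    alpha is the one-pole smoothing coefficient (0 < alpha < 1); a smaller
    alpha gives a lower crossover frequency. low[i] + high[i] == samples[i]
    holds for every sample, so no signal content is lost by the split.
    """
    low, high = [], []
    lp = 0.0
    for x in samples:
        lp += alpha * (x - lp)   # one-pole low-pass state update
        low.append(lp)           # low band drives e.g. a woofer
        high.append(x - lp)      # complementary residue drives e.g. a tweeter
    return low, high
```

The first integrated circuit would then drive its transducer from one band and transmit the other band to the second integrated circuit.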
The first integrated circuit may further comprise a second interface. The first integrated circuit may be configured to transmit a control signal, via the second interface, to the second integrated circuit to control a function of the second integrated circuit.
The first integrated circuit may be configured to receive the control signal from an external processor.
The first integrated circuit may be configured to load and/or manage and/or validate firmware on the second integrated circuit via the second interface.
The first integrated circuit may be configured such that an external processor can load and/or manage and/or validate firmware on the second integrated circuit via the second interface of the first integrated circuit.
The first integrated circuit may be additionally configured to control an audio jack and/or a microphone.
The first integrated circuit may comprise any one or more of:
According to another example there is provided an arrangement comprising:
The processor of the second integrated circuit may be configured to drive a second audio transducer on the basis of the signal received from the processor of the first integrated circuit.
One of the first and second integrated circuits may be configured to drive at least one tweeter speaker and wherein the other of the first and second integrated circuits is configured to drive at least one woofer speaker.
The processor of the first integrated circuit may be configured to separate the received audio signal into a first component having a first frequency and a second component having a second frequency. The first integrated circuit may be configured to drive the first audio transducer on the basis of the first frequency signal component. The processor of the first integrated circuit may be configured to transmit the second frequency component to the processor of the second integrated circuit. The processor of the second integrated circuit may be configured to drive the second audio transducer on the basis of the second frequency signal component.
The processor of the second integrated circuit may be configured to transmit an echo cancellation signal to the processor of the first integrated circuit. The first interface of the first integrated circuit may be configured to transmit the echo cancellation signal to an external processor.
The first integrated circuit may comprise a control interface. The second integrated circuit may comprise a control interface. Wherein:
The arrangement may comprise a third integrated circuit comprising a processor configured to receive the audio signal from the first integrated circuit and configured to drive a third audio transducer on the basis of the signal received from the processor of the first integrated circuit.
The first integrated circuit may be configured to drive a pair of tweeters. Each of the second and third integrated circuits may be configured to drive a woofer.
The processor of the second integrated circuit may be configured to transmit a mono echo cancellation signal to the processor of the first integrated circuit. The processor of the third integrated circuit may be configured to transmit a mono echo cancellation signal to the processor of the first integrated circuit. The processor of the first integrated circuit may be configured to receive the two mono signals from the second and third integrated circuits and combine the received mono signals into a stereo echo cancellation signal, and the first integrated circuit may be configured to transmit the stereo echo cancellation signal to an external processor.
The first integrated circuit may comprise a control interface. The second and third integrated circuits may each comprise a control interface. Wherein:
The processor of the first integrated circuit may be configured to generate and transmit an echo cancellation signal to an external processor.
The arrangement may comprise a third integrated circuit comprising a processor configured to receive an audio signal and configured to drive a third audio transducer on the basis of the signal received from the processor of the first integrated circuit.
Each of the first and third integrated circuits may be configured to drive a woofer. The second integrated circuit may be configured to drive a pair of tweeters.
According to another example there is provided a system comprising the first integrated circuit or the arrangement as described above, further comprising a processor, wherein the processor stores a programmable table that is readable by software, wherein the table comprises an entry that, when read by an operating system, presents at least the first and second integrated circuits as an integrated device to the operating system.
Any one or more of the first, second, or third integrated circuits may comprise an audio codec and/or a digital signal processor.
At least the first integrated circuit and the second integrated circuit may appear as an integrated solution to a processor running an operating system.
The present disclosure may be understood with reference to the accompanying drawings in which:
As used herein the term “driver” will be understood to encompass a hardware driver (e.g. a transducer driver) and/or a software driver (e.g. a device driver). The skilled person will recognise the context from the individual examples as this disclosure relates to hardware and/or software drivers.
The first IC 100 may be considered an interface, buffer, or barrier type of IC (that is, an unhidden, non-masked IC) that is coupled between the host processor/operating system (not illustrated) and the second IC 150, or a plurality of second ICs 150-N, where N is an integer of one (1) or more.
In some examples, as will be described below, the transducer 110 may comprise at least one audio transducer 110. For example, the IC 100 may be configured to drive a single speaker or a plurality of speakers, such as a pair of tweeters or a pair of woofers. As will be described below, the interface IC 100 may be, for example, an audio codec and/or an audio amplifier depending on the application.
Two examples will be discussed in this disclosure. The first example is that the IC 100 may comprise an amplifier. The IC 100 may also comprise a digital signal processor (“DSP”) wherein the combination of the amplifier and the DSP may be considered a ‘smart amplifier’ that is configured to perform an enhancement and/or protection algorithm, for example on an audio signal, and the IC 100 may be configured to drive a transducer 110 on the basis of the processed signal. In this example, the IC 100 may be specifically for the processing of audio and this example IC 100 may be suited for controlling a transducer 110 such as a woofer speaker. In the second example, the IC 100 may comprise a codec. The IC 100 may comprise an analogue-to-digital converter (“ADC”) to receive an input analogue signal, e.g. an input audio signal, and a digital-to-analogue converter (“DAC”) to transmit an output digital signal (e.g. to drive a speaker) and/or may include an embedded processor, such as an integrated DSP or an integrated microcontroller (“MCU”) configured to process control messages and/or enhancement and/or protection algorithms for the IC. The embedded processor may alternatively or additionally provide a simplified control interface to a host (e.g. host processor) and may, for example, translate generic commands into device specific controls. In this example, the interface IC 100 is not only for the purpose of controlling a transducer such as a speaker for example but can also control the programming of the other interfaced or buffered ICs 150-N. In the examples that follow, each type of IC may be used as the first or interface IC 100 in the
The buffer IC 100, indeed any of the ICs discussed herein, depending on the example, may comprise an audio device (e.g. a multifunction audio device) such as an audio processor, smart amplifier and/or audio codec. Such audio devices may comprise a MIPI SoundWire® compliant audio device, and as such, the ICs may have a number of associated functions, each of which may be an SDCA function (SDCA meaning “SoundWire Device Class Audio”). According to the SDCA specification, a block of 64 MBytes of register addresses is allocated to SDCA controls. The 26 LSBs which identify individual controls are set based on the following variables:
The above six (6) described variable parameters are used to build a 32-bit address to access the desired Controls. Because of the address range, paging is required, but the most often used parameter values are placed in the lower 16 bits of the address. This helps to keep the paging registers constant while updating Controls for a specific Device/Function.
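The construction of such a control address from variable fields can be sketched generically as follows. The field ordering and widths used in the example call are illustrative placeholders only; the actual bit layout is defined by the SDCA specification. Packing the most often used fields into the least significant bits keeps the upper (paged) bits stable across successive Control accesses, as described above:

```python
def pack_fields(fields, widths):
    """Pack field values LSB-first into a single integer address.

    fields -- non-negative integer field values, most often used first
    widths -- bit width allotted to each field (illustrative widths only;
              not the layout defined by the SDCA specification)
    """
    addr, shift = 0, 0
    for value, width in zip(fields, widths):
        if not 0 <= value < (1 << width):
            raise ValueError("field value does not fit in its width")
        addr |= value << shift
        shift += width
    if shift > 32:
        raise ValueError("fields exceed the 32-bit address space")
    return addr
```

For instance, `pack_fields([1, 2], [4, 4])` places the value 1 in bits 0-3 and the value 2 in bits 4-7, producing `0x21`.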
For example, where a file download request is used, this may be done according to a method defined by the SDCA specification used for downloading firmware and other device-specific files. Each function may be an audio function for example. Each function may comprise a class-specific entity that describes how software running on an external host processor views signal paths internal to the IC 100 to achieve the desired functionality. In one example, the first or buffer IC 100 may be configured to implement the following four SDCA functions: Simple Amplifier, Simple Microphone, Universal Audio Jack (UAJ), and a Network Digital Audio Interface (NDAI). As will be explained below, the barrier IC 100 may comprise an extension unit for each function, being an element contained in one (or more) SDCA audio functions. Accordingly, the firmware/configuration data may be compatible with the SDCA specification.
The ICs 200 and 250 of
It will be appreciated that the second or interfaced IC 250 could comprise any suitable combination of hardware and/or software and/or firmware and functionality, but that the architecture shown in
The processor 352 of the buffered IC 350 is configured to generate, for example, an echo cancellation signal SEC and transmit that signal to the processor 312 of the first IC 300 via signal path 319b and interface/port 353 of the buffered IC 350 and interface/port 313 of the buffer IC 300. The processor 312 of the interface IC 300 is configured to transmit the echo cancellation signal SEC generated by the second IC 350 to the external processor (not illustrated) via signal paths 319b and 316b and the first interface/port 311 (see 321 and 322). In other words, the first IC 300 is configured to transmit the echo cancellation signal SEC to an external processor via the first interface 311, the signal SEC comprising an echo cancellation signal generated by the second IC 350 (e.g. by the processor 352). The second IC 350 may receive information comprising any audio filter(s) and/or delay parameter(s) of the first IC 300 that are applied to the incoming main render audio signal SIN in order to generate an appropriate echo cancellation signal. The first IC 300 may additionally be configured to process any ultrasonic streams without transmitting any such ultrasonic streams to the second IC 350.
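The delay-matching aspect of generating such an echo cancellation reference can be illustrated with a simple delay line: the buffered IC delays its copy of the render signal by the (known) render-path delay of the first IC so that the reference lines up with the audio actually reproduced. The helper below is a hypothetical sketch under that assumption, not the disclosed implementation:

```python
from collections import deque

def make_delay_line(delay_samples):
    """Return a per-sample delay function of delay_samples samples.

    Stands in for matching the render-path delay (filters, DSP latency)
    of the buffer IC when generating an echo cancellation reference.
    """
    buf = deque([0.0] * delay_samples)  # primed with silence
    def step(x):
        buf.append(x)        # newest render sample in
        return buf.popleft() # delayed reference sample out
    return step
```

For example, a two-sample delay applied to the stream `1, 2, 3, 4` emits `0, 0, 1, 2`.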
The first IC 300 of this example also comprises a second interface 320, which may comprise a serial peripheral interface (“SPI”) (although in other examples the interface may comprise alternate control ports such as I2C). The second interface or port 320 is configured to transmit a control signal SCTL to an interface 351 of the second IC 350. The interface or port 351 of the second IC 350 may also comprise an SPI. The first IC 300 may be configured to transfer firmware to the second IC 350 and/or load firmware into the memory registers (not illustrated) of the second IC 350. For instance, an external processor (not illustrated) may load firmware into the memory registers (not illustrated) of the first IC 300 and also load firmware into the memory registers of the second IC 350 via the interface 320 of the first IC 300 and the interface 351 of the second IC 350. The first buffer IC 300 may be configured to control the second buffered IC 350 in the sense that it can perform firmware signature validation (e.g. it is configured to validate firmware signatures) for the second IC 350. Firmware for the second IC 350 may be loaded by an extension driver, a trusted host, or via a file download to the first IC 300 which then transfers the firmware to the second IC 350 (via the interfaces 320 and 351). In this way, the second IC 350 is effectively embedded in the first IC 300 such that a driver, or any drivers, for the second IC 350 can exist either entirely on the firmware of the first IC 300 (rather than in a host operating system) or the driver can be a legacy driver running on a host operating system acting via the control interface 320 on the first IC 300 (for example a high definition audio (“HDA”) driver may be utilized on the host in examples where the firmware for the first IC 300 is not available).
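As a simplified illustration of such firmware validation, a digest of a received firmware image can be compared against an expected value before the image is transferred over the control interface. A real implementation would typically use public-key signature verification; the SHA-256 digest check below is an illustrative stand-in only, and the function names are assumptions for the sketch:

```python
import hashlib
import hmac

def validate_firmware(image: bytes, expected_digest: bytes) -> bool:
    """Check a firmware image against an expected SHA-256 digest.

    hmac.compare_digest performs a constant-time comparison, avoiding
    timing side channels during the check. Only on success would the
    buffer IC transfer the image to the buffered IC.
    """
    actual = hashlib.sha256(image).digest()
    return hmac.compare_digest(actual, expected_digest)
```

A tampered image produces a different digest and is rejected, so the buffered IC never receives it.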
Each IC 300, 350 of this example is configured to drive a respective transducer 310a, 310b, which may be transducers associated with speakers of the same type or of a different type. For example, the first IC 300 may be configured to drive a tweeter (310a) and the second IC 350 may be configured to drive a woofer (310b). The first IC 300 may be configured to drive a pair of tweeters, in one example and/or the second IC 350 may be configured to drive a pair of woofers, in another example.
The first IC 300 in this example may comprise an audio codec and may present itself (and the second buffered IC 350) to a software driver as an amplifier. The processors 312, 352 may each comprise audio signal processors or DSPs and the processor 352 of the second IC 350, may be configured to handle any channel split and/or delay matching. The IC 350 may comprise an amplifier and may comprise a DSP configured to process an enhancement and/or protection algorithm, for example on the audio signal S1 received via signal path 319a, the IC 350 driving the transducer 310b on the basis of the processed version of the signal S1.
The IC 300 may comprise a codec, e.g. as described with reference to
As for the previous example, the
The arrangement shown in
As for the previous examples, the
Each IC 400, 450 could optionally comprise a general purpose input/output interface or port (“GPIO”) in examples where it is desired for extension drivers to only handle initialisation of the IC 400 and/or the IC 450, without being afforded control of the runtime configuration of the ICs 400, 450. In examples without such a GPIO, an extension driver/driver(s) may handle runtime functions (for example, stream start and stream stop).
According to
A first integrated circuit 500 is an audio codec in this example and is configured to drive two tweeter speakers 510a1 and 510a2. The first IC 500 may be considered as an IC of a first type. The buffer IC 500 comprises a first interface or port 511 which is an audio interface such as SoundWire™ and is configured to receive a main render audio signal SIN, which is configured to be transmitted to a processor 512 and the processor 512 is configured to drive the pair of tweeter speakers 510a1 and 510a2.
The system of
The processor 512 of the first IC 500 is configured to transmit the audio signal SIN, or part/representation thereof, to each of the second and third ICs 550, 560 (see paths labelled 519a).
The IC 500 comprises a control interface 520, which may comprise a serial peripheral interface or port (“SPI”) which can communicate with respective interfaces or ports (e.g. SPIs) 551, 561 of the second and third ICs 550, 560. Via these interfaces 520, 551 and 561, the first IC 500 (or the host processor 580, through the first IC 500) can perform tasks such as configuring the second and third ICs 550, 560 (e.g. loading firmware into the memory spaces or registers of the ICs) as described above with respect to
The processor 512 of the first IC 500 is configured to transmit the main audio signal SIN to respective processors 552, 562 of the second and third ICs 550, 560 via signal paths 516a and 519a. The second and third ICs 550, 560 (e.g. the processors 552, 562 thereof) are configured to perform at least one of: separating the audio signal SIN into appropriate channels for their respective speakers (e.g. separating into appropriate frequency components) and delay matching. As indicated by signal path 516b and 519b each of the second and third ICs 550, 560 (e.g. the processors 552, 562 thereof) are configured to transmit echo cancellation signals SEC1, SEC2 (e.g. left and right channels) back to the processor 512 of the first IC which transmits a stereo echo cancellation signal SEC1+2 back to the processor 580.
In summary, the first IC 500 in this example presents as a 2×2 smart amp to the processor 580 (e.g. to the driver 583). The main audio render SIN according to this architecture is routed to each of the second and third ICs 550, 560, from a processor 512 of the buffer IC 500 to the processors 552, 562 of the buffered ICs 550, 560. Each of the second and third ICs 550, 560 then handles the channel split and delay matching, and returns echo cancellation signals SEC1, SEC2 (e.g. left and right channels) back to the first IC 500 via their processors. This architecture advantageously can be controlled by even a simple driver, without the need for aggregation. The first IC 500 could be configured to perform firmware signature validation (e.g. configured to validate firmware signatures) for one or more of the second and third ICs (through the interfaces 520, 551).
As stated above, the second and third ICs 550 and 560 handle the echo cancellation signals SEC1, SEC2 (e.g. assuming main render is in sync and that the filter and delay parameters of the first IC 500 are knowable). The first IC 500 may be configured to process ultrasonic streams entirely within the first IC 500. Firmware for the second and/or third ICs 550, 560 may be loaded by an extension driver, via a trusted host (secure systems), or via a file download to the first IC 500 which then transfers the firmware to the second and/or the third IC 550, 560. The first IC 500 may be configured to extract tweeter content from reference signals using on-board filters, and in this way eliminate a channel (e.g. an Rx channel).
It will be appreciated that additional ICs of the second type (e.g. additional ICs like 550 and 560 etc.) may be added to the system of
By virtue of this arrangement, an embedded integration for a buffered IC of a second type (such as 550, 560) is achieved, allowing their drivers to exist either entirely on the firmware of the first IC 500 rather than in the host OS (e.g. running on the processor 580), or the driver may be a driver running on the host OS acting via the control interface 520 on the first IC (for example the second and/or third driver, such as a high definition audio driver, may be utilized on the host if the firmware for the first IC 500 is not available).
In operation, a stereo audio stream SIN is transmitted to the first IC 500. The processor 512 of the first IC 500 is configured to separate the audio stream into two sets of frequency components (e.g. band splitting the audio into high/low frequency components). In this example, the low frequency components are transmitted to the second and third ICs 550, 560 for them to drive the woofers 510b1/2 and the first IC 500 drives the tweeters 510a1/2 using the high frequency components. Any control information (such as volume and/or sample rate etc.) that is transmitted from the processor 580 is intercepted by the first IC 500 and sent over the control interface (SPI) 520 to the second and/or third ICs 550, 560 via their respective interfaces or ports 551, 561. These messages may be deconstructed as necessary. Due to this configuration, the arrangement presents itself as a single stereo amplifier to an operating system despite the fact that it is a four-speaker system. This, in turn, means that the driver need only access the controls for a single device/stereo amplifier.
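The interception and fan-out of control messages described above can be sketched as follows, with the buffer IC applying each control locally and mirroring it to the hidden ICs over a modelled control bus, so the host sees a single device. The class and method names are assumptions for this sketch only:

```python
class BufferedIC:
    """Minimal model of a hidden (buffered) IC reachable only via the
    buffer IC's control interface (e.g. SPI)."""
    def __init__(self):
        self.registers = {}

    def write(self, name, value):
        self.registers[name] = value


class BufferIC:
    """Model of the first (buffer) IC: the single endpoint the host sees.

    Control messages from the host (e.g. volume, sample rate) are applied
    locally and then forwarded to every buffered IC so all devices stay
    in step, mirroring the interception described above.
    """
    def __init__(self, buffered_ics):
        self.buffered_ics = buffered_ics  # e.g. the second and third ICs
        self.settings = {}

    def handle_control(self, name, value):
        self.settings[name] = value          # apply on the visible device
        for ic in self.buffered_ics:         # fan out over the control bus
            ic.write(name, value)
```

A host driver issuing one volume write against the single visible endpoint thereby updates all four speakers' amplifiers.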
In an example, the interface 511 comprises one SoundWire port input, for the two channel main render audio signal SIN, and one SoundWire port output for a two-channel echo cancellation signal. The first IC 500 could additionally comprise an ultrasonic render. The IC 500 (e.g. the processor 512 and/or interface 513 thereof) comprises two transmission (Tx) channels for transmitting the main audio render SIN to the second and third ICs 550, 560, and two receive (Rx) channels for receiving the echo cancellation signals SEC1 and SEC2 from the second and third ICs 550, 560. The processor 512 and/or interface 513 could comprise two additional Rx channels for tweeter content. The processors 552, 562 and/or interfaces 553, 563 of the second and third ICs 550, 560 comprise two Rx channels to receive the main audio render SIN from the first IC 500 as a common stream, and one Tx channel each to transmit the echo cancellation signal (but they could comprise an additional Tx channel, for example to transmit tweeter content).
A first IC 600 is configured to drive a woofer speaker 610b1 and is an IC of a first type. A second IC 650 is an audio codec configured to drive a pair of tweeter speakers 663a1, 663a2 and is an IC of a second type. A third IC 660 is also configured to drive a woofer speaker 610b2 and is an IC of the first type. A main audio render signal SIN is transmitted (e.g. from a processor 680) to both the first and third ICs 600, 660, which each comprise respective interfaces 611, 661 (which may each be SoundWire™ interfaces). Each of the first and third ICs 600, 660 is configured to generate a respective echo cancellation signal and transmit its respective echo cancellation signal SEC1, SEC2 back to the host processor 680 (these may respectively comprise left and right channels of an echo cancellation signal). Each of the first and third ICs 600, 660 comprises a respective control module 633, 653 for driving the respective amplifier transducer 610b1, 610b2, wherein the control modules 633 and 653 are respectively controlled by drivers 698 and 697 of the operating system (see SCTL1 and SCTL2).
In the
The second IC 650 comprises control module 673 for driving the tweeter pair 663 that are controlled by extension drivers 695, 696 of the processor 680 (see SCTL3 and SCTL4). As for
Each IC 600, 650, 660 also respectively comprises a general purpose input/output (GPIO) 540, 541, 542.
According to this example, the first and third ICs 600, 660 receive the main audio render SIN and each return an echo reference signal SEC1, SEC2, via their respective interfaces 611, 661 (e.g. SoundWire® interfaces) and pass processed audio (e.g. tweeter audio) to the second IC 650 via their processors 613, 662 (e.g. the paths labelled 619). In other words, the processors 613 and 662 are configured to generate processed audio (e.g. tweeter audio) from the received main audio render SIN. The first and third ICs 600, 660 lack a host control interface, so the writes to the second IC 650 may be handled by the host processor 680. The first and third ICs 600, 660 handle the echo cancellation signals, assuming that the main audio render SIN is in sync and that the filter and delay parameters of the second IC 650 are knowable. The first and third ICs 600, 660 may be configured to pass through ultrasonic streams. A GPIO from one or more of the first and third ICs to the second IC may enable/disable a signal from the first or third ICs to the second IC (meaning that an extension driver/extension driver(s) may only need to handle initialization, and not runtime configuration). Without the GPIO, the extension driver(s) may handle stream start and stream stop.
In an example, the processor of the second IC 650 (and/or an interface thereof) comprises two Rx channels to receive audio (e.g. tweeter audio). The first and third ICs 600, 660 may each comprise one SoundWire® port two-channel input to receive the main render audio SIN as a common stream, and one SoundWire® port output, one channel, for the echo cancellation signal. The processors 613, 662 of the first and third ICs 600, 660 (and/or an interface thereof) each comprise one Tx channel to transmit tweeter content to the second IC 650. The first and third ICs 600, 660 could comprise a SoundWire® port output for a single channel ultrasonic render.
Comparing the
It will be appreciated that a schematic diagram of the type of
Various arrangements are therefore discussed herein where one device may be “hidden behind” another, such that the two devices are connected in such a way that they present themselves as a single device to a processor whose driver is afforded control over the devices.
As discussed above, a first IC may receive an audio signal and transmit this to a second device, which may comprise an IC or another type of device such as a DSP, the second device processing or transmitting the signal in some way. This allows the functionality of the second device to be offloaded and controlled by the firmware of the first IC. In more detail, one of the reasons problems can occur with aggregated transducers driving multiple and different speaker types is that the software driver on the processor is unable to read the device features for the different devices and to know how to combine them such that the processor can control all of the devices. According to the techniques discussed here, multiple devices are effectively combined into one endpoint (the first IC) which is seen by the processor, so the driver reads the features appropriate to the first IC and can control the other (aggregated) devices by virtue of how the subsequent devices are connected to the first IC (as discussed with reference to
With reference again to
Any one or more of the ICs described herein may be configured to perform band split filtering (as described above), may comprise a delay line (e.g. for the time-alignment of audio), configured to perform enhanced processing of audio, and/or may be configured to perform level matching.
The examples described herein overcome the following challenges. The posture of the end-device having the aggregated transducers is supported, particularly when different ICs exhibit different performance characteristics. The signal processing capabilities of any given IC may be concerned with a specific function (e.g. filtering, such as audio filtering; a given IC may be required to produce a tweeter output from a full-range stream, for example). A given IC may only support a given bandwidth render, and there may be sample rate changes on an amplifier path. A given IC may have a different group delay compared to another IC (e.g. due to DSP processing). For one example IC, the main render delay may be a minimum of 32 samples, plus any delay introduced by a signal chain(s). A given IC may have a Serial Peripheral Interface (SPI) master, whereas another given interface IC may have an SPI driven interface and two I2C driven interfaces.
According to this disclosure there is therefore provided an architecture (e.g. an SDCA architecture) enabling a dis-integrated audio implementation to appear as an integrated solution to an operating system. The architecture provides: (i) capability to transfer audio data between a device (such as an SDCA device) and a secondary device (in some examples via an I2S interface); (ii) capability to transfer control data between a device (such as an SDCA device) and a secondary device (e.g. via an SPI); (iii) a re-programmable SDCA implementation enabling the SDCA device to be configured as needed for the overall architecture (e.g. on an ARM M0+ processor); (iv) delayed register read/write for an SDCA device (giving time to communicate with the second device and respond if required). An operating system can therefore aggregate multiple speaker devices into a single speaker endpoint where the solution “looks like” (to an external processor) a stereo pair of amplifiers. This architecture handles the signal splitting and sending to the appropriate amplifiers.
Features of any given aspect or example may be combined with the features of any other aspect or example and the various features described herein may be implemented in any combination in a given example.
The term “node” as used herein shall be understood by those of ordinary skill in the art to include the mechanical and/or electrical connection terms “terminal”, “bond pad”, “pin”, “ball” etc.
The skilled person will recognise that where applicable the above-described apparatus and methods may be embodied as processor control code, for example on a carrier medium such as a disk, CD- or DVD-ROM, programmed memory such as read only memory (Firmware), or on a data carrier such as an optical or electrical signal carrier. For many applications, embodiments of the invention will be implemented on a DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array). Thus, the code may comprise conventional program code or microcode or, for example code for setting up or controlling an ASIC or FPGA. The code may also comprise code for dynamically configuring re-configurable apparatus such as re-programmable logic gate arrays. Similarly, the code may comprise code for a hardware description language such as Verilog™ or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, the code may be distributed between a plurality of coupled components in communication with one another. Where appropriate, the embodiments may also be implemented using code running on a field-(re-)programmable analogue array or similar device in order to configure analogue hardware.
It should be noted that the above-mentioned examples illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word “comprising” does not exclude the presence of elements or steps other than those listed in a claim, “a” or “an” does not exclude a plurality, and a single feature or other unit may fulfil the functions of several units recited in the claims. Any reference signs in the claims shall not be construed so as to limit their scope.
As used herein, when two or more elements are referred to as “coupled” to one another, such term indicates that such two or more elements are in electronic communication or mechanical communication, as applicable, whether connected indirectly or directly, with or without intervening elements.
This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Accordingly, modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the disclosure. For example, the components of the systems and apparatuses may be integrated or separated. Moreover, the operations of the systems and apparatuses disclosed herein may be performed by more, fewer, or other components and the methods described may include more, fewer, or other steps. Additionally, steps may be performed in any suitable order. As used in this document, “each” refers to each member of a set or each member of a subset of a set.
Although exemplary embodiments are illustrated in the figures and described below, the principles of the present disclosure may be implemented using any number of techniques, whether currently known or not. The present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described above.
Unless otherwise specifically noted, articles depicted in the drawings are not necessarily drawn to scale.
All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the disclosure.
Although specific advantages have been enumerated above, various embodiments may include some, none, or all of the enumerated advantages. Additionally, other technical advantages may become readily apparent to one of ordinary skill in the art after review of the foregoing figures and description.
To aid the Patent Office and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants wish to note that they do not intend any of the appended claims or claim elements to invoke 35 U.S.C. § 112(f) unless the words “means for” or “step for” are explicitly used in the particular claim.
Number | Date | Country | Kind
---|---|---|---
2301581.1 | Feb 2023 | GB | national

Number | Date | Country
---|---|---
63393343 | Jul 2022 | US