Field
Embodiments of the present invention are related to momentum transfer communication systems. Specifically, embodiments of the present invention are directed to encoding or decoding a received signal using momentum transfer encoding and/or decoding techniques.
Background
The proliferation of mobile communications platforms is challenging the capacity of networks, largely because of the ever-increasing data rate at each node. This places significant power management demands on power-consuming devices, such as personal computing devices, cellular and WLAN terminals, and any other device that utilizes power or energy stored in a power storage device. The increased data throughput translates to a shorter mean time between battery charging cycles and an increased thermal footprint.
A need exists to address drawbacks in conventional mobile communications platform designs.
Summary
In an embodiment, an apparatus comprises at least one of a single-ended encoding circuit or a differential encoding circuit and a controller configured to control the at least one of the single-ended encoding circuit or the differential encoding circuit to encode a received signal into an encoded signal using momentum transfer encoding techniques.
Further features and advantages of the embodiments disclosed herein, as well as the structure and operation of various embodiments, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to a person skilled in the relevant art based on the teachings contained herein.
Brief Description of the Drawings
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present invention and, together with the description, further serve to explain the embodiments and to enable a person skilled in the relevant art to make and use the invention.
Detailed Description
Embodiments will now be described with reference to the accompanying drawings. In the drawings, generally, like reference numbers indicate identical or functionally similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
Embodiments herein disclose improvements in power efficiency of communications processes, along with a method for efficiency enhancement. Shannon's work is helpful for analyzing information capacity of a communications system, but his formulation does not predict an efficiency relationship suitable for calculating the power consumption of a system, particularly for practical signals which may only approach the capacity limit.
Verification shows that embodiments of the present invention result in enhanced power efficiency. Hardware was constructed to measure the efficiency of a prototypical Gaussian signal prior to efficiency enhancement. After an optimization was performed, the efficiency of the encoding apparatus increased from 3.125% to greater than 86% for a manageable investment of resources. Likewise, several telecommunications standards based waveforms were also tested on the same hardware. The results reveal that the developed physical theories extrapolate in a very accurate manner to an electronics application, predicting the efficiency of single ended and differential encoding circuits before and after optimization.
Glossary
1st Law of Thermodynamics: The first law is often formulated by stating that the change in the internal energy of a closed system is equal to the amount of heat supplied to the system, minus the amount of work done by the system on its surroundings. Other forms of energy (including electrical) may be substituted for heat energy in an extension of the first law formulation. The first law of thermodynamics is an energy conservation law with an implication that energy cannot be created or destroyed. Energy may be transformed or transported but a numerical calculation of the sum total of energy inputs to an isolated process or system will equal the total of the energy stored in the process or system plus the energy output from the process or system. The law of conservation of energy states that the total energy of an isolated system is constant. The first law of thermodynamics is referenced occasionally as simply the first law.
2nd Law of Thermodynamics: The second law is a basic postulate defining the concept of thermodynamic entropy, applicable to any system involving measurable energy transfer (classically heat energy transfer). In statistical mechanics, information entropy is defined from information theory using Shannon's entropy. In the language of statistical mechanics, entropy is a measure of the number of alternative microscopic configurations or states of a system corresponding to a single macroscopic state of the system. One consequence of the second law is that practical physical systems may never achieve 100% thermodynamic efficiency. Also, the entropy of an isolated system will always increase until equilibrium is achieved. The second law of thermodynamics is referred to as simply the second law.
ACPR: Adjacent Channel Power Ratio, usually measured in decibels (dB) as the ratio of an “out of band” power per unit bandwidth to an “in band” signal power per unit bandwidth. This measurement is usually accomplished in the frequency domain. Out of band power is typically unwanted.
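As an illustrative formula (the notation below is an assumption for illustration, not quoted from this disclosure), ACPR may be computed as:

$$\mathrm{ACPR\ (dB)} = 10\log_{10}\!\left(\frac{P_{adj}/B_{adj}}{P_{ch}/B_{ch}}\right)$$

where $P_{adj}$ and $P_{ch}$ are the out of band (adjacent channel) and in band powers, and $B_{adj}$ and $B_{ch}$ are the corresponding measurement bandwidths.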
A.C.: An alternating current which corresponds to a change in the direction of charge transport and/or the electromagnetic fields associated with moving charge through a circuit. One direction of current flow is usually labeled as positive and the opposite direction of current flow is labeled as negative and direction of current flow will change back and forth between positive and negative over time.
Access: Obtain, examine, or retrieve; ability to use; freedom or ability to obtain or make use of something.
Account: Record, summarize; keeping a record of, reporting, or describing the existence of something.
A.C. Coupled: A circuit or system/module is A.C. coupled at its interface to another circuit or system/module if D.C. current cannot pass through the interface but A.C. current or signal or waveform can pass through the interface.
A.C.L.R.: Adjacent channel leakage ratio is a measure of how much signal from a specific channel allocation leaks to an adjacent channel. In this case channel refers to a band of frequencies. Leakage from one band or one channel to another band or channel occurs when signals are processed by nonlinear systems.
A/D: Analog to digital conversion.
Adapt: Modify or adjust or reconstruct for utilization.
Adjust: Alter or change or arrange for a desired result or outcome.
Algorithm: A set of steps that are followed in some sequence to solve a mathematical problem or to complete a process or operation such as (for example) generating signals according to FLUTTER™.
Align: Arrange in a desired formation; adjust a position relative to another object, article or item, or adjust a quality/characteristic of objects, articles or items in a relative sense.
Allocate: Assign, distribute, designate or apportion.
Amplitude: A scalar value which may vary with time. Amplitude can be associated as a value of a function according to its argument relative to the value zero. Amplitude may be used to increase or attenuate the value of a signal by multiplying a constant by the function. A larger constant multiplier increases amplitude while a smaller relative constant decreases amplitude. Amplitude may assume both positive and negative values.
Annihilation of Information: Transfer of information entropy into non-information bearing degrees of freedom no longer accessible to the information bearing degrees of freedom of the system and therefore lost in a practical sense even if an imprint is transferred to the environment through a corresponding increase in thermodynamic entropy.
Apparatus: Any system or systematic organization of activities, algorithms, functions, modules, processes, collectively directed toward a set of goals and/or requirements. An electronic apparatus consists of algorithms, software, functions, modules, and circuits in a suitable combination depending on application which collectively fulfill a requirement. A set of materials or equipment or modules designed for a particular use.
Application Phase Space: Application phase space is a higher level of abstraction than phase space. Application phase space consists of one or more of the attributes of phase space organized at a macroscopic level with modules and functions within the apparatus. Phase space may account for the state of matter at the microscopic (molecular) level but application phase space includes consideration of bulk statistics for the state of matter where the bulks are associated with a module function, or degree of freedom for the apparatus.
Approximate: Almost correct or exact; close in value or amount but not completely precise; nearly correct or exact.
apriori: What can be known based on inference from common knowledge derived through prior experience, observation, characterization and/or measurement. Formed or conceived beforehand; relating to what can be known through an understanding of how certain things work rather than by observation; presupposed by experience. Sometimes separated as a priori.
Articulating: Manipulation of multiple degrees of freedom utilizing multiple facilities of an apparatus in a deliberate fashion to accomplish a function or process.
Associate: To be in relation to another object or thing; linked together in some fashion or degree.
Auto Correlation: Method of comparing a signal or waveform with itself. For example, the time auto correlation function compares a time shifted version of a signal or waveform with itself. The comparison is by means of correlation.
Auto Covariance: Method of comparing a signal or waveform with itself once the average value of the signal/or waveform is removed. For example, a time auto covariance function compares a signal or waveform with a time shifted version of said signal or waveform.
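The comparisons described in the two entries above can be sketched in a few lines of Python; the function names and the sine wave example are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

def time_auto_correlation(x, lag):
    """Correlate a waveform with a time (sample) shifted version of itself."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(x[:len(x) - lag or None] * x[lag:]))

def time_auto_covariance(x, lag):
    """Auto correlation computed after removing the average value of the waveform."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()  # remove the mean before the comparison
    return float(np.mean(x[:len(x) - lag or None] * x[lag:]))

# Example: a 5 Hz sine sampled at 1000 Hz. At zero lag the waveform matches
# itself exactly; at a half-period lag (100 samples) the correlation is negative.
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
print(time_auto_correlation(x, 0), time_auto_covariance(x, 100))
```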
Bandwidth: Frequency span over which a substantial portion of a signal is restricted or distributed according to some desired performance metric. Often a 3 dB power metric is allocated for the upper and lower band (span) edge to facilitate the definition. However, sometimes a differing frequency span vs. power metric, or frequency span vs. phase metric, or frequency span vs. time metric, is allocated/specified. Frequency span may also be referred to on occasion as band, or bandwidth depending on context.
Baseband: Range of frequencies near to and including zero Hz.
Bin: A subset of values or span of values within some range or domain.
Bit: Unit of information measure (binary digit) calculated using numbers with a base 2.
Blended Controls: A set of dynamic distributed control signals generated as part of the FLUTTER™ algorithm, used to program, configure, and dynamically manipulate the information encoding and modulation facilities of a communications apparatus.
Blended Control Function: Set of dynamic and configurable controls which are distributed to an apparatus according to an optimization algorithm which accounts for H(x), the input information entropy, the waveform standard, significant hardware variables and operational parameters. Blended control functions are represented by $\tilde{\mathfrak{I}}\{H(x)_{\nu,i}\}$, where $\nu+i$ is the total number of degrees of freedom for the apparatus which is being controlled. BLENDED CONTROL BY PARKERVISION™ is a registered trademark of ParkerVision, Inc., Jacksonville, Fla.
Branch: A path within a circuit or algorithm or architecture.
Bus: One or more than one interconnecting structure such as wires or signal lines which may interface between circuits or modules and transport digital or analog information or both.
C: An abbreviation for coulomb, which is a quantity of charge.
Calculate: Solve; probe the meaning of something to obtain a general idea about it; to determine by a process. Solve a mathematical problem or equation.
Capacity: The maximum possible rate for information transfer through a communications channel, while maintaining a specified quality metric. Capacity may also be designated (abbreviated) as C, or C with possibly a subscript, depending on context. It should not be confused with Coulomb, a quantity of charge. On occasion capacity is qualified by some restrictive characteristics of the channel.
Cascading: Transferring or representing a quantity or multiple quantities sequentially.
Cascoding: Using a power source connection configuration to increase potential energy.
Causal: A causal system means that a system's output response (as a function of time) cannot precede its input stimulus.
CDF or cdf: Cumulative Distribution Function in probability theory and statistics, the cumulative distribution function (CDF), describes the probability that a real-valued random variable X with a given probability distribution will be found at a value less than or equal to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables. A cdf may be obtained through an integration or accumulation over a relevant pdf domain.
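For a continuous random variable, the relationship between the CDF and the pdf referenced above takes the standard form:

$$F_X(x) = P(X \le x) = \int_{-\infty}^{x} p(u)\,du$$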
Characterization: Describing the qualities or attributes of something. The process of determining the qualities or attributes of an object, or system.
Channel Frequency: The center frequency for a channel. The center frequency for a range or span of frequencies allocated to a channel.
Charge: Fundamental unit in coulombs associated with an electron or proton, approximately $\pm 1.602 \times 10^{-19}$ C, or an integral multiplicity thereof.
Code: A combination of symbols which collectively possess an information entropy.
Communication: Transfer of information through space and time.
Communications Channel: Any path possessing a material and/or spatial quality that facilitates the transport of a signal.
Communications Sink: Targeted load for a communications signal or an apparatus that utilizes a communication signal. Load in this circumstance refers to a termination which consumes the application signal and dissipates energy.
Complex Correlation: The variables which are compared are represented by complex numbers. The resulting metric may have a complex number result.
Complex Number: A number which has two components: a real part and an imaginary part. The imaginary part is usually associated with a multiplicative symbol i (or j) which has the value $\sqrt{-1}$. The numbers are used to represent values on two different number lines and operations or calculations with these numbers require the use of complex arithmetic. Complex arithmetic and the associated numbers are often used in the study of signals, mathematical spaces, physics and many branches of science and engineering.
Complex Signal Envelope: A mathematical description of a signal, x(t), suitable for RF as well as other applications. The various quantities and relationships that follow may be derived from one another using vector analysis and trigonometry as well as complex arithmetic.
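The equations for these quantities are not reproduced in this text. As a standard reconstruction (the symbol choices are assumptions, kept consistent with the Signal Envelope Magnitude and Signal Phase entries), a complex signal envelope may be written:

$$x(t) = \mathrm{Re}\{\tilde{a}(t)\,e^{j\omega_c t}\},\qquad \tilde{a}(t) = a_I(t) + j\,a_Q(t) = a(t)\,e^{j\varphi(t)}$$

where $a(t) = (a_I^2 + a_Q^2)^{1/2}$ is the envelope magnitude, $\varphi(t)$ is the signal phase, and $\omega_c$ is the carrier frequency in radians per second.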
Compositing: The mapping of one or more constituent signals or portions of one or more constituent signals to domains and their subordinate functions and arguments according to a FLUTTER™ algorithm. Blended controls developed in the FLUTTER™ algorithm, regulate the distribution of information to each constituent signal. The composite statistic of the blended controls is determined by an information source with source entropy of H(x), the number of the available degrees of freedom for the apparatus, the efficiency of each degree of freedom, and the corresponding potential to distribute a specific signal rate, as well as information, in each degree of freedom.
Consideration: Use as a factor in making a determination.
Constellation: Set of coordinates in some coordinate system with an associated pattern.
Constellation Point: A single coordinate from a constellation.
Constituent Signal: A signal which is part of a parallel processing path in FLUTTER™ and used to form more complex signals through compositing or other operations.
Coordinate: A value which qualifies and/or quantifies position within a mathematical space. Also may possess the meaning; to manage a process.
Correlation: The measure by which the similarity of two or more variables may be compared. A measure of 1 implies they are equivalent and a measure of 0 implies the variables are completely dissimilar. A measure of (−1) implies the variables are opposite or inverse. Values between (−1) and (+1) other than zero also provide a relative similarity metric.
Covariance: This is a correlation operation between two different random variables for which the random variables have their expected values or average values extracted prior to performing correlation.
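As an illustrative relationship (a standard formula, not quoted from this text), the normalized correlation measure described above is the correlation coefficient obtained by scaling the covariance:

$$\rho_{XY} = \frac{\operatorname{Cov}(X,Y)}{\sigma_X\,\sigma_Y} = \frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X\,\sigma_Y},\qquad -1 \le \rho_{XY} \le 1$$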
Create: To make or produce or cause to exist; to bring about; to bring into existence. Synthesize, generate.
Cross-Correlations: Correlation between two different variables.
Cross-Covariance: Covariance between two different random variables.
Current: The flow of charge per unit time through a circuit.
D2P™: Direct to Power (Direct2Power™) a registered trademark of ParkerVision Inc., corresponding to a proprietary RF modulator and transmitter architecture and modulator device.
D/A: Digital to Analog conversion.
Data Rates: A rate of information flow per unit time.
D.C.: Direct Current, referring to the average transfer of charge per unit time in a specific path through a circuit. This is juxtaposed to an A.C. current, which may alternate directions along the circuit path over time. Generally a specific direction is assigned as being a positive direct current and the opposite direction of current flow through the circuit is negative.
D.C. Coupled: A circuit or system/module is D.C. coupled at its interface to another circuit or system/module if D.C. current or a constant waveform value may pass through the interface.
DCPS: Digitally Controlled Power or Energy Source.
Decoding: Process of extracting information from an encoded signal.
Decoding Time: The time interval to accomplish a portion or all of decoding.
Degrees of Freedom: A subset of some space (for instance phase space) into which energy and/or information can individually or jointly be imparted and extracted according to qualified rules which may determine codependences. Such a space may be multi-dimensional and sponsor multiple degrees of freedom. A single dimension may also support multiple degrees of freedom. Degrees of freedom may possess any dependent relation to one another but are considered to be at least partially independent if they are partially or completely uncorrelated. Degrees of freedom also possess a corresponding realization in the information encoding and modulation functions of a communications apparatus. Different mechanisms for encoding information in the apparatus may be considered as degrees of freedom.
Delta Function: In mathematics, the Dirac delta function, or δ function, is a generalized function, or distribution, on the real number line that is zero everywhere except at the specified argument of the function, with an integral equal to the value one when integrated over the entire real line. A weighted delta function is a delta function multiplied by a constant or variable.
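The sifting property implied by this definition may be written:

$$\int_{-\infty}^{\infty} \delta(x - x_0)\,dx = 1, \qquad \int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,dx = f(x_0)$$

and a weighted delta function $c\,\delta(x - x_0)$ integrates to the weight c.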
Density of States for Phase Space: Function of a set of relevant coordinates of some mathematical, geometrical space such as phase space which may be assigned a unique time and/or probability, and/or probability density. The probability densities may statistically characterize meaningful physical quantities that can be further represented by scalars, vectors and tensors.
Derived: Originating from a source in a manner which may be confirmed by measure, analysis, or inference.
Desired Degree of Freedom: A degree of freedom that is efficiently encoded with information. These degrees of freedom enhance information conservation and are energetically conservative to the greatest practical extent. They are also known as information bearing degrees of freedom. These degrees of freedom may be deliberately controlled or manipulated to affect the causal response of a system through the application of an algorithm or function, such as a blended control function enabled by a FLUTTER™ algorithm.
Dimension: A metric of a mathematical space. A single space may have one or more than one dimension. Often, dimensions are orthogonal. Ordinary space has 3-dimensions; length, width and depth. However, dimensions may include time metrics, code metrics, frequency metrics, phase metrics, space metrics and abstract metrics as well, in any suitable quantity or combination.
Domain: A range of values or functions of values relevant to mathematical or logical operations or calculations. Domains may encompass processes associated with one or more degrees of freedom and one or more dimensions and therefore bound hyper-geometric quantities. Domains may include real and imaginary numbers, and/or any set of logical and mathematical functions and their arguments.
Encoding: Process of imprinting information onto a waveform to create an information bearing function of time.
Encoding Time: Time interval to accomplish a portion or all of encoding.
Energy: Capacity to accomplish work where work is defined as the amount of energy required to move an object or associated physical field (material or virtual) through space and time. Energy may be measured in units of Joules.
Energy Function: Any function that may be evaluated over its arguments to calculate the capacity to accomplish work, based on the function arguments. For instance, energy may be a function of time, frequency, phase, samples, etc. When energy is a function of time it may be referred to as instantaneous power or averaged power depending on the context and distribution of energy vs. some reference time interval. One may interchange the use of the term power and energy given implied or explicit knowledge of some reference interval of time over which the energy is distributed. Energy may be quantified in units of Joules.
Energy Partition: A function of a distinguishable gradient field, with the capacity to accomplish work. Partitions may be specified in terms of functions of energy, functions of power, functions of current, functions of voltage, or some combination of this list.
Energy partitions are distinguished by distinct ranges of variables which define them. For instance, out of i possible energy domains, the kth energy domain may associate with a specific voltage range or current range or energy range or momentum range, etc.
Energy Source or Sources: A device or devices which supplies or supply energy from one or more access nodes of the source or sources to one or more apparatuses. One or more energy sources may supply a single apparatus. One or more energy sources may supply more than one apparatus.
Entropy: Entropy is an uncertainty metric proportional to the logarithm of the number of possible states in which a system may be found according to the probability weight of each state.
{For example: information entropy is the uncertainty of an information source based on all the possible symbols from the source and their respective probabilities.}
{For example: Physical entropy is the uncertainty of the states for a physical system with a number of degrees of freedom. Each degree of freedom may have some probability of energetic excitation.}
Equilibrium: Equilibrium is a state for a system in which entropy is stable, i.e., no longer changing.
Ergodic: Stochastic processes for which statistics derived from time samples of process variables correspond to the statistics of independent ensembles selected from the process. For an ergodic ensemble, the average of a function of the random variables over the ensemble is equal with probability unity to the average over one or more possible time translations of a particular member function of the ensemble, except for a subset of representations of measure zero. Although processes may not be perfectly ergodic, they may be suitably approximated as such under a variety of practical circumstances.
Ether: Electromagnetic transmission medium, usually ideal free space unless otherwise implied. It may be considered as an example of a physical channel.
EVM: Error Vector Magnitude applies to a sampled signal that is described in vector space. The ratio of power in the unwanted variance (or approximated variance) of the signal at the sample time to the root mean squared power expected for a proper signal.
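One common way of computing the ratio described above (an illustrative convention, not quoted from this disclosure) is:

$$\mathrm{EVM_{rms}} = \sqrt{\frac{\tfrac{1}{N}\sum_{k=1}^{N}|e_k|^2}{P_{ref}}}$$

where $e_k$ is the error vector between the k-th measured and ideal signal samples in vector space and $P_{ref}$ is the root mean squared power expected for a proper signal.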
Excited: A stimulated state or evidence of a stimulated state relative to some norm.
Feedback: The direction of signal flow from output to input of a circuit or module or apparatus. Present output values of such architectures or topologies are returned or “fed back” to portions of the circuit or module in a manner to influence future outputs using control loops. Sometimes this may be referred to as closed loop feedback (CLFB) to indicate the presence of a control loop in the architecture.
Feed forward: The direction of signal flow from input to output of a circuit or module or apparatus. Present output values of such architectures or topologies are not returned or “fed back” to portions of the circuit or module in a manner to influence future outputs using control loops. Sometimes this may be referred to as open loop feed forward (OLFF) to indicate the absence of a control loop in the architecture.
FLUTTER™: Algorithm which manages one or more of the degrees of freedom of a system to efficiently distribute energy via blended control functions to functions/modules within a communications apparatus. FLUTTER™ is a registered trademark of ParkerVision, Inc. Jacksonville, Fla.
Frequency: (a) Number of regularly occurring particular distinguishable events per unit time, usually normalized to a per second basis. Number of cycles or completed alternations per unit time of a wave or oscillation, also given in Hertz (Hz) or radians per second (in this case cycles or alternations are considered events). The events may also be samples per unit time, pulses per unit time, etc. An average rate of events per unit time.
(b) In statistics and probability theory the term frequency relates to how often or how likely the occurrence of an event is relative to some total number of possible occurrences. The number of occurrences of a particular value or quality may be counted and compared to some total number to obtain a frequency.
Frequency Span: Range of frequency values. Band of frequency values. Channel.
Function of: $\mathfrak{I}\{\ \}$ or $\tilde{\mathfrak{I}}\{\ \}$ are used to indicate a “function of” the quantity or expression (also known as argument) in the brackets { }. The function may be a combination of mathematical and/or logical operations.
Harmonic: Possessing a repetitive or rhythmic quality, rhythm or frequency which may be assigned units of Hertz (Hz) or radians per second (rad/s) or integral multiples thereof. For instance, a signal with a frequency of $f_c$ possesses a first harmonic of $1f_c$ Hz, a second harmonic of $2f_c$ Hz, a third harmonic of $3f_c$ Hz, and so forth. The frequency $1f_c$ Hz, or simply $f_c$ Hz, is known as the fundamental frequency.
Hyper-Geometric Manifold: Mathematical surface described in a space with 4 or more dimensions. Each dimension may also consist of complex quantities.
Impedance: A measure of the opposition to time varying current flow in a circuit. The impedance is represented by a complex number with a real part or component, also called resistance, and an imaginary part or component, also called reactance. The unit of measure is ohms.
Imprint: The process of replicating information, signals, patterns, or set of objects. A replication of information, signals, patterns, or set of objects.
Information: A message (sequence of symbols) contains a quantity of information determined by the accumulation of the following: the logarithm of a symbol probability multiplied by the negative of the symbol probability, for one or more symbols of the message. In this case symbol refers to some character or representation from a source alphabet which is individually distinguishable and occurs with some probability in the context of the message. Information is therefore a measure of uncertainty in data, a message or the symbols composing the message. The calculation described above is an information entropy measure. The greater the entropy the greater the information content. Information can be assigned the units of bits or nats depending on the base of the logarithm.
In addition, for purposes of this disclosure, information will be associated with physical systems and processes, as an uncertainty of events from some known set of possibilities, which can affect the state of a dynamic system capable of interpreting the events. An event is a physical action or reaction which is instructed or controlled by the symbols from a message.
Information Bearing: Able to support the encoding of information. For example, information bearing degrees of freedom are degrees of freedom which may be encoded with information.
Information Bearing Function: Any set of information samples which may be indexed.
Information Bearing Function of Time: Any waveform, that has been encoded with information and therefore becomes a signal. Related indexed values may be assigned in terms of some variable encoded with information vs. time.
Information Entropy: H(p(x)) is also given the abbreviated notation H(x) and refers to the entropy of a source alphabet with probability density p(x), or the uncertainty associated with the occurrence of symbols (x) from a source alphabet. The metric H(x) may have units of bits or bits per second depending on context but is defined by

$$H(x) = -\sum_{i} p(x_i)\log_b p(x_i)$$

in the case where $p(x_i)$ is a discrete random variable. If p(x) is a continuous random variable then:

$$H(x) = -\int p(x)\log_b p(x)\,dx$$
Using mixed probability densities, mixed random variables, both discrete and continuous entropy functions may apply with a normalized probability space of measure 1. Whenever b=2 the information is measured in bits. If b=e then the information is given in nats. H(x) may often be used to quantify an information source. (On occasion H(x), Hx or its other representations may be referred to as “information”, “information uncertainty” or “uncertainty”. It is understood that a quantity of information, its entropy or uncertainty is inherent in such a shorthand reference.)
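A minimal Python sketch of the discrete entropy calculation defined above; the function name and the binary source example are illustrative assumptions.

```python
import math

def entropy(probabilities, base=2):
    """Discrete information entropy H(x) = -sum(p * log_b(p)).

    Base 2 gives bits; base e gives nats."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

# Example: a fair binary source has H(x) = 1 bit per symbol.
print(entropy([0.5, 0.5]))          # 1.0 bit
print(entropy([0.5, 0.5], math.e))  # ~0.693 nats
```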
Information Stream: A sequence of symbols or samples possessing an information metric. For instance, a code is an example of an information stream. A message is an example of an information stream.
Input Sample: An acquired quantity or value of a signal, waveform or data stream at the input to a function, module, apparatus, or system.
Instantaneous: Done, occurring, or acting without any perceptible duration of time; accomplished without any delay being purposely introduced; occurring or present at a particular instant.
Instantaneous Efficiency: This is a time variant efficiency obtained from the ratio of the instantaneous output power divided by the instantaneous input power of an apparatus, accounting for statistical correlations between input and output. The ratio of output to input powers may be averaged.
Integrate: This term can mean to perform the mathematical operation of integration or to put together some number of constituents or parts to form a whole.
Interface: A place or area where different objects, modules or circuits meet and communicate or interact with each other, or where values, attributes or quantities are exchanged.
Intermodulation Distortion: Distortion arising from nonlinearities of a system. These distortions may corrupt a particular desired signal as it is processed through the system.
Iterative: Involving repetition. Involving repetition while incrementing values, or changing attributes.
kB: (See Boltzmann's Constant)
Line: A geometrical object which exists in two or more dimensions of a referenced coordinate system. A line possesses a continuous specific sequence of coordinates within the reference coordinate system and also possesses a finite derivative at every coordinate (point) along its length. A line may be partially described by its arc length and radius of curvature. The radius of curvature is greater than zero at all points along its length. A curved line may also be described by the tip of a position vector which accesses each point along the line for a prescribed continuous phase function and prescribed continuous magnitude function describing the vector in a desired coordinate system.
Line Segment: A portion of a line with a starting coordinate and an ending coordinate.
Linear: Pertaining to a quality of a system to convey inputs of a system to the output of the system. A linear system obeys the principle of superposition.
Linear Operation: Any operation of a module system or apparatus which obeys the principle of superposition.
LO: Local Oscillator
Logic: A particular mode of reasoning viewed as valid or faulty, a system of rules which are predictable and consistent.
Logic Function: A circuit, module, system or processor which applies some rules of logic to produce an output from one or more inputs.
Macroscopic Degrees of Freedom: The unique portions of application phase space possessing separable probability densities that may be manipulated by unique physical controls derivable from the function $\tilde{\mathfrak{I}}\{H(x)_{\nu,i}\}$.
Magnitude: A numerical quantitative measurement or value proportional to the square root of a squared vector amplitude.
Manifold: A surface in 3 or more dimensions which may be closed.
Manipulate: To move or control; to process using a processing device or algorithm.
Mathematical Description: Set of equations, functions and rules based on principles of mathematics characterizing the object being described.
Message: A sequence of symbols which possess a desired meaning or quantity and quality of information.
Metrics: A standard of measurement; a quantitative standard or representation; a basis for comparing two or more quantities. For example, a quantity or value may be compared to some reference quantity or value.
Microscopic Degrees of Freedom: Microscopic degrees of freedom are spontaneously excited due to undesirable modes within the degrees of freedom. These may include, for example, unwanted Joule heating, microphonics, photon emission, electromagnetic (EM) field emission and a variety of correlated and uncorrelated signal degradations.
MIMO: Multiple input multiple output system architecture.
MISO: Multiple input single output operator.
Mixture: A combination of two or more elements; a portion formed by two or more components or constituents in varying proportions. The mixture may cause the components or constituents to retain their individual properties or change the individual properties of the components or constituents.
Mixed Partition: Partition consisting of scalars, vectors, or tensors with real or imaginary number representation in any combination.
MMSE: Minimum Mean Square Error. Minimizing the quantity $E[(\tilde{X}-X)^2]$, where $\tilde{X}$ is the estimate of X, a random variable. $\tilde{X}$ is usually an observable from measurement or may be derived from an observable measurement, or implied by the assumption of one or more statistics.
Modes: The manner in which energy distributes into degrees of freedom. For instance, kinetic energy may be found in vibrational, rotational and translation forms or modes. Within each of these modes may exist one or more than one degree of freedom. In the case of signals for example, the mode may be frequency, or phase or amplitude, etc. Within each of these signal manifestations or modes may exist one or more than one degree of freedom.
Modify: To change some or all of the parts of something.
Modulation: A change in a waveform, encoded according to information, transforming the waveform to a signal.
Modulation Architecture: A system topology consisting of modules and/or functions which enable modulation.
Modulated Carrier Signal: A sine wave waveform of some physical quantity (such as current or voltage) with changing phase and/or changing amplitude and/or changing frequency where the change in phase and amplitude are in proportion to some information encoded onto the phase and amplitude. In addition, the frequency may also be encoded with information and therefore change as a consequence of modulation.
Module: A processing related entity, either hardware, software, or a combination of hardware and software, or software in execution. For example, a module may be, but is not limited to being, a process running on a processor or microprocessor, an object, an executable, a thread of execution, a program, and/or a computer. One or more modules may reside within a process and/or thread of execution and a module may be localized on one chip or processor and/or distributed between two or more chips or processors. The term “module” also means software code, machine language or assembly language, an electronic medium that may store an algorithm or algorithms or a processing unit that is adapted to execute program code or other stored instructions. A module may also consist of analog, or digital and/or software functions in some combination or separately. For example an operational amplifier may be considered as an analog module.
Multiplicity: The quality or state of being plural or various.
Nat: Unit of information measure calculated using numbers with a natural logarithm base.
Node: A point of analysis, calculation, measure, reference, input or output, related to a procedure, algorithm, schematic, block diagram or other hierarchical object. Objects, functions, circuits or modules attached to a node of a schematic or block diagram access the same signal and/or function of signal common to that node.
Non Central: As pertains to signals or statistical quantities; the signals or statistical quantities are characterized by nonzero mean random processes or random variables.
Non-Excited: The antithesis of excited. (see unexcited)
Non-Linear: Not obeying the principle of superposition. A system or function which does not obey the superposition principle.
Non-Linear Operation: Function of an apparatus, module, or system which does not obey superposition principles for inputs conveyed through the system to the output.
Nyquist Rate: A rate which is 2 times the maximum frequency of a signal to be reproduced by sampling.
Nyquist—Shannon Criteria: Also called the Nyquist-Shannon sampling criteria; requires that the sample rate for reconstructing a signal or acquiring/sampling a signal be at least twice the bandwidth of the signal (usually associated as an implication of Shannon's work). Under certain conditions the requirement may become more restrictive in that the required sample rate may be defined to be twice the frequency of the greatest frequency of the signal being sampled, acquired or reconstructed (usually attributed to Nyquist). At baseband, both interpretations apply equivalently. At pass band it is theoretically conceivable to use the first interpretation, which affords the lowest sample rate.
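As a worked example of the two interpretations: a baseband signal with bandwidth B = 10 MHz requires a sample rate

$$f_s \ge 2B = 20\ \text{Msamples/s},$$

while a pass band signal occupying 90 to 100 MHz (B = 10 MHz, $f_{max}$ = 100 MHz) requires $f_s \ge 2f_{max} = 200$ Msamples/s under the more restrictive interpretation, but conceivably only $f_s \ge 2B = 20$ Msamples/s under the bandwidth interpretation.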
Object: Some thing, function, process, description, characterization or operation. An object may be abstract or material, of mathematical nature, an item or a representation depending on the context of use.
Obtain: To gain or acquire.
“on the fly”: This term refers to a substantially real time operation which implements an operation or process with minimal delay, maintaining a continuous time line for the process or operation. Each step of the operation, or the procedure organizing the operation, responds in a manner substantially unperceived by an observer compared to some acceptable norm.
Operation: Performance of a practical work or of something involving the practical application of principles or processes or procedure; any of various mathematical or logical processes of deriving one entity from others according to a rule. May be executed by one or more processors or processing modules or facilities functioning in concert or independently.
Operational State: Quantities which define or characterize an algorithm, module, system or processor at a specific instant.
Operatively Coupled: Modules or Processors which depend on their mutual interactions.
Optimize: Maximize or Minimize one or more quantities and/or metrics of features subject to a set of constraints.
PAER: Peak to Average Energy Ratio which can be measured in dB if desired. It may also be considered as a statistic or statistical quantity for the purpose of this disclosure. It is obtained by dividing the peak energy for a signal or waveform by its average energy.
PAPR: Peak to Average Power Ratio which can be measured in dB if desired. For instance PAPR is the peak to average power of a signal or waveform determined by dividing the instantaneous peak power excursion for the signal or waveform by its average power value. It may also be considered as a statistic or statistical quantity for the purpose of this disclosure.
PAPRsig: Peak to Average Power Ratio which can be measured in dB if desired. For instance PAPRsig is the peak to average power of a signal determined by dividing the instantaneous peak power excursion for the signal by its average power value. It may also be considered as a statistic or statistical quantity for the purpose of this disclosure.
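The PAPR statistic can be estimated from waveform samples; the following Python sketch is a minimal illustration, and the complex Gaussian test waveform is an assumed example rather than data from this disclosure.

```python
import numpy as np

def papr_db(samples):
    """Peak to average power ratio of a sampled waveform, in dB."""
    power = np.abs(np.asarray(samples)) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Example: a complex Gaussian (noise-like) waveform typically exhibits
# a PAPR on the order of 10 dB for this many samples.
rng = np.random.default_rng(0)
x = rng.normal(size=10000) + 1j * rng.normal(size=10000)
print(f"PAPR = {papr_db(x):.1f} dB")
```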
Parallel Paths: A multiplicity of paths or branches possessing the attribute of a common direction of signal or process flow through a module, circuit, system or algorithm. In a simple case parallel paths may possess a common source terminal or node and a common ending node or terminus. Each path or branch may implement a unique process or similar processes.
Parameter: A value or specification which defines a characteristic of a system, module, apparatus, process, signal or waveform. Parameters may change.
Parsing: The act of dividing, sub dividing, distributing or partitioning.
Partial: Less than the whole.
Partitions: Boundaries within phase space that enclose points, lines, areas and volumes. They may possess physical or abstract description, and relate to physical or abstract quantities. Partitions may overlap one or more other partitions. Partitions may be described using scalars, vectors, tensors, real or imaginary numbers along with boundary constraints. Partitioning is the act of creating partitions.
Pass band: Range of frequencies within a substantially defined range or channel, not possessing DC response or zero Hz frequency content.
Patches: A geometrical structure used as a building block to approximate a surface; a rendering may be formed from one or more patches.
PDF or Probability Distribution: Probability Distribution Function is a mathematical function relating a value from a probability space to another space characterized by random variables.
pdf or Probability Density: Probability Density Function is the probability that a random variable or joint random variables possess versus their argument values. The pdf may be normalized so that accumulation over the probability space yields the corresponding CDF measure.
Phase Space: A conceptual space that may be composed of real physical dimensions as well as abstract mathematical dimensions, and described by the language and methods of physics, probability theory and geometry. In general, the phase space contemplates the state of matter within the phase space boundary, including the momentum and position for material of the apparatus.
Plane: Two dimensional geometrical object which is defined by two straight lines.
Point: One dimensional mathematical or geometrical object, a single coordinate of a coordinate system.
Portion: Less than or equal to the whole.
Possess: To have, or to exhibit the traits of what is possessed.
Power Differential: Comparison of a power level to a reference power level by calculating the difference between the two.
Power Function: Energy function per unit time or the partial derivative of an energy function with respect to time. If the function is averaged it is an average power. If the function is not averaged it may be referred to as an instantaneous power. It has units of energy per unit time and so each coordinate of a power function has an associated energy which occurs at an associated time. A power function does not alter or change the units of its time distributed resource (i.e. energy in Joules).
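In the notation of this entry, the relationships may be summarized as:

$$P(t) = \frac{dE}{dt}, \qquad \bar{P} = \frac{1}{T}\int_{0}^{T} P(t)\,dt$$

where $P(t)$ is the instantaneous power and $\bar{P}$ is the power averaged over a reference time interval T.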
Power Level: A quantity with the metric of Joules per second.
Power Source or Sources: An energy source or sources which is/are described by a power function or power functions. It may possess a single voltage and/or current or multiple voltages and/or currents deliverable to an apparatus or a load. A power source may also be referred to as power supply.
Probability: Frequency of occurrence for some event or events which may be measured or predicted from some inferred statistic.
Processing: The execution of a set of operations to implement a process or procedure.
Processing Paths: Sequential flow of functions, modules, and operations in an apparatus, algorithm, or system to implement a process or procedure.
Provide: Make available, to prepare.
Pseudo-Phase Space: A representation of phase space or application phase space which utilizes variables common to the definition of the apparatus such as voltage, current, signal, complex signal, amplitude, phase, frequency, etc. These variables are used to construct a mathematical space related to the phase space. That is, there is a known correspondence in change for the pseudo-phase space for a change in phase space and vice versa.
Q Components: Quadrature phase of a complex signal also called the complex part of the signal.
Radial Difference: Difference in length along a straight line segment or vector which extends along the radial of a spherical or a cylindrical coordinate system.
Radio Frequency (RF): Typically a rate of oscillation in the range of about 3 kHz to 300 GHz, which corresponds to the frequency of radio waves, and the alternating currents (AC), which carry radio signals. RF usually refers to electrical rather than mechanical oscillations, although mechanical RF systems do exist.
Random: Not deterministic or predictable.
Random Process: An uncountable, infinite, time ordered continuum of statistically independent random variables. A random process may also be approximated as a maximally dense time ordered continuum of substantially statistically independent random variables.
Random Variable: Variable quantity which is non-deterministic, or at least partially so, but may be statistically characterized. Random variables may be real or complex quantities.
Range: A set of values or coordinates from some mathematical space specified by a minimum and a maximum for the set.
Rate: Frequency of an event or action.
Real Component: The real portion/component of a complex number sometimes associated with the in-phase or real portion/component of a signal, current or voltage. Sometimes associated with the resistance portion/component of an impedance.
Related: Pertaining to, associated with.
Reconstituted: A desired result formed from one or more than one operation and multiple contributing portions.
Relaxation Time: A time interval for a process to achieve a relatively stable state or a relative equilibrium compared to some reference event or variable state reference process. For instance, a mug of coffee heated in a microwave eventually cools down to assume a temperature nearly equal to its surroundings. This cooling time is a relaxation time differentiating the heated state of the coffee and the relatively cool state of the coffee.
Rendered: Synthesized, generated or constructed or the result of a process, procedure, algorithm or function.
Rendered Signal: A signal which has been generated as an intermediate result or a final result depending on context. For instance, a desired final RF modulated output can be referred to as a rendered signal.
Rendering Bandwidth: Bandwidth available for generating a signal or waveform.
Rendering Parameters: Parameters which enable the rendering process or procedure.
Representation: A characterization or description for an object or entity. This may be, for example, a mathematical characterization, graphical representation, model, etc.
Rotational Energy: Kinetic energy associated with circular or spherical motions.
Response: Reaction to an action or stimulus.
Sample: An acquired quantity or value. A generated quantity or value.
Sample Functions: Set of functions which consist of arguments to be measured or analyzed or evaluated. For instance, multiple segments of a waveform or signal could be acquired or generated (“sampled”) and the average, power, or correlation to some other waveform, estimated from the sample functions.
Sample Regions: Distinct spans, areas, or volumes of mathematical spaces which can contain, represent and accommodate a coordinate system for locating and quantifying the metrics for samples contained within the region.
Scalar Partition: Any partition consisting of scalar values.
Set: A collection, an aggregate, a class, or a family of any objects.
Signal: An example of an information bearing function of time, also referred to as information bearing energetic function of time and space that enables communication.
Signal Constellation: Set or pattern of signal coordinates in the complex plane with values determined from $a_I(t)$ and $a_Q(t)$ and plotted graphically with $a_I(t)$ versus $a_Q(t)$ or vice versa. It may also apply to a set or pattern of coordinates within a phase space. $a_I(t)$ and $a_Q(t)$ are the in phase and quadrature phase signal amplitudes, respectively, and are functions of time obtained from the complex envelope representation for a signal.
Signal Efficiency: Thermodynamic efficiency of a system accounting only for the desired output average signal power divided by the total input power to the system on the average.
Signal Ensemble: Set of signals or set of signal samples or set of signal sample functions.
Signal Envelope Magnitude: This quantity is obtained from $(a_I^2+a_Q^2)^{1/2}$, where $a_I$ is the in phase component of a complex signal and $a_Q$ is the quadrature phase component of a complex signal. $a_I$ and $a_Q$ may be functions of time.
Signal of Interest: Desired signal. Signal which is the targeted result of some operation, function, module or algorithm.
Signal Phase: The angle of a complex signal or the phase portion of $a(t)e^{-j(\omega_c t + \varphi(t))}$. The phase $\varphi(t)$ may be obtained from

$$\varphi(t) = \tan^{-1}\!\left(\frac{a_Q(t)}{a_I(t)}\right)$$

and the sign function is determined from the signs of $a_Q$, $a_I$ to account for the modulo repetition of $\tan^{-1}(a_Q/a_I)$. $a_I(t)$ and $a_Q(t)$ are the in phase and quadrature phase signal amplitudes, respectively, and are functions of time obtained from the complex envelope representation for a signal.
Signal Partition: A signal or signals may be allocated to separate domains of a FLUTTER™ processing algorithm. Within a domain a signal may possess one or more partitions. The signal partitions are distinct ranges of amplitude, phase, frequency and/or encoded waveform information. The signal partitions are distinguishable by some number of up to and including ν degrees of freedom they associate with where that number is less than or equal to the number of degrees of freedom for a domain or domains to which a signal partition belongs.
Sources: Origination of some quantity such as information, power, energy, voltage or current.
Space: A region characterized by span or volume which may be assigned one or more dimensional attributes. Space may be a physical or mathematical construct or representation. Space possesses a quality of dimension or dimensions with associated number lines or indexing strategies suitable for locating objects assigned to the space and their relative positions, as well as providing a metric for obtaining characteristics of the assigned objects. Space may be otherwise defined by an extent of continuous or discrete coordinates which may be accessed. Space may be homogeneous or nonhomogeneous. A nonhomogeneous space has continuous and discrete coordinate regions or properties, and the rules for calculating metrics within the space change from one domain or region within the space to another. A homogeneous space possesses either a continuum of coordinates or a discrete set of coordinates, and the rules for calculating metrics do not change as a function of location within the space. Space may possess one or more than one dimension.
Spawn: Create, generate, synthesize.
Spectral Distribution: Statistical characterization of a power spectral density.
Spurious Energy: Energy distributed in unwanted degrees of freedom which may be unstable, unpredictable, etc.
Statistic: A measure calculated from sample functions of a random variable.
Statistical Dependence: The degree to which the values of random variables depend on one another or provide information concerning their respective values.
Statistical Parameter: Quantity which affects or perhaps biases a random variable and therefore its statistic.
Statistical Partition: Any partition with mathematical values or structures, i.e., scalars, vectors, tensors, etc., characterized statistically.
Stimulus: An input for a system or apparatus which elicits a response by the system or apparatus.
Storage Module: A module which may store information, data, or sample values for future use or processing.
Subset: A portion of a set. A portion of a set of objects.
Sub-Surfaces: A portion of a larger surface.
Sub-system: A portion of a system at a lower level of hierarchy compared to a system.
Subordinate: A lower ranking of hierarchy or dependent on a higher priority process, module, function or operation.
Substantially: An amount or quantity which reflects acceptable approximation to some limit.
Suitable: Acceptable, desirable, compliant to some requirement, specification, or standard.
Superposition: A principle which may be given a mathematical and systems formulation. For n given inputs $(x_1, x_2, \dots, x_n)$ to a system, the output y of the system may be obtained from either of the following equations if the principle of superposition holds:

$$\mathfrak{I}\{x_1+x_2+\dots+x_n\}=y \quad\text{or}\quad \mathfrak{I}\{x_1\}+\mathfrak{I}\{x_2\}+\dots+\mathfrak{I}\{x_n\}=y$$
That is, the function $\mathfrak{I}\{\ \}$ may be applied to the sum of one or more inputs, or to each input separately and then summed, to obtain an equivalent result in either case. When this condition holds, the operation described by $\mathfrak{I}\{\ \}$, for instance a system description or an equation, is also said to be linear.
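A minimal Python sketch of this superposition test; the example functions are illustrative assumptions.

```python
def superposition_holds(f, inputs, tol=1e-9):
    """Check whether f(x1 + ... + xn) equals f(x1) + ... + f(xn)."""
    return abs(f(sum(inputs)) - sum(f(x) for x in inputs)) < tol

inputs = [1.0, 2.0, 3.0]
print(superposition_holds(lambda x: 5 * x, inputs))   # True: scaling is linear
print(superposition_holds(lambda x: x ** 2, inputs))  # False: squaring is nonlinear
```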
Switch or Switched: A discrete change in a value and/or processing path, depending on context. A change of functions may also be accomplished by switching between functions.
Symbol: A segment of a signal (analog or digital), usually associated with some minimum integer information assignment in bits, or nats.
System Response: A causal reaction of a system to a stimulus.
Tensor: A mathematical object formed from vectors and arrays of values. Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product and linear maps. Vectors and scalars themselves are also tensors. A tensor can be represented as a multi-dimensional array of numerical values.
Tensor Partition: Any partition qualified or characterized by tensors.
Thermal Characteristics: The description or manner in which heat distributes in the various degrees of freedom for an apparatus.
Thermodynamic Efficiency: Usually represented by the symbol $\eta$ or $\tilde{\eta}$ and may be accounted for by application of the 1st and 2nd Laws of Thermodynamics. It may be computed as

$$\eta = \frac{P_{out}}{P_{in}} = \frac{E_{out}}{E_{in}}$$

where $P_{out}$ is the power in a proper signal intended for the communications sink, load or channel, and $P_{in}$ is measured as the power supplied to the communications apparatus while performing its function. Likewise, $E_{out}$ corresponds to the proper energy out of an apparatus intended for the communications sink, load or channel, while $E_{in}$ is the energy supplied to the apparatus.
Thermodynamic Entropy: A probability measure for the distribution of energy amongst one or more degrees of freedom for a system. The greatest entropy for a system occurs at equilibrium by definition. It is often represented with the symbol S. Equilibrium is determined when

$$\frac{\partial S}{\partial t} \to 0$$

where “→” in this case means “tends toward the value of”.
Thermodynamic Entropy Flux: A concept related to the study of transitory and non-equilibrium thermodynamics. In this theory entropy may evolve according to probabilities associated with random processes or deterministic processes based on certain system gradients. After a long period, usually referred to as the relaxation time, the entropy flux dissipates and the final system entropy becomes the approximate equilibrium entropy of classical thermodynamics, or classical statistical physics.
Thermodynamics: A physical science that accounts for variables of state associated with the interaction of energy and matter. It encompasses a body of knowledge based on 4 fundamental laws that explain the transformation, distribution and transport of energy in a general manner.
Transformation: Changing from one form to another.
Transition: Changing between states or conditions.
Translational Energy: Kinetic energy associated with motion along a path or trajectory.
Uncertainty: Lack of knowledge or a metric represented by H(x), also Shannon's uncertainty.
Undesired Degree of Freedom: A subset of degrees of freedom that give rise to system inefficiencies such as energy loss or the non-conservation of energy and/or information loss and non-conservation of information with respect to a defined system boundary. Loss refers to energy that is unusable for its original targeted purpose.
Unexcited State: A state that is not excited compared to some relative norm defining excited. A state that is unexcited is evidence that the state is not stimulated. An indication that a physical state is unexcited is the lack of a quantity of energy in that state compared to some threshold value.
Utilize: Make use of.
Variable: A representation of a quantity that may change.
Variable Energy Source: An energy source which may change values, with or without the assist of auxiliary functions, in a discrete or continuous or hybrid manner.
Variable Power Supply: A power source which may change values, with or without the assist of auxiliary functions, in a discrete or continuous or hybrid manner.
Variance: In probability theory and statistics, variance measures how far a set of numbers is spread out. A variance of zero indicates that all of the values are identical. Variance is always non-negative: a small variance indicates that the data points tend to be very close to the mean (expected value) and hence to each other, while a high variance indicates that the data points are very spread out around the mean and from each other.
The variance of a random variable X is its second central moment, the expected value of the squared deviation from the mean μ=E[X]:
Var(X) = E[(X − μ)²].
This definition encompasses random variables that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:
Var(X)=Cov(X,X).
The variance is also equivalent to the second cumulant of the probability distribution for X. The variance is typically designated as Var(X), σ²_X, or simply σ² (pronounced "sigma squared"). The expression for the variance can be expanded:
Var(X) = E[X²] − (E[X])²
A mnemonic for the above expression is "mean of square minus square of mean".
If the random variable X is continuous with probability density function ƒ(x), then the variance is given by:
Var(X) = σ² = ∫(x − μ)² ƒ(x) dx = ∫x² ƒ(x) dx − μ²
where μ is the expected value,
μ = ∫x ƒ(x) dx
and where the integrals are definite integrals taken for x ranging over the range of the random variable X.
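The following short sketch (Python; the distribution parameters are illustrative assumptions) verifies numerically that the central and expanded forms of the variance agree:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)   # mu = 3, sigma = 2

mu = x.mean()
var_central = np.mean((x - mu) ** 2)        # E[(X - mu)^2]
var_expanded = np.mean(x ** 2) - mu ** 2    # "mean of square minus square of mean"

print(var_central, var_expanded)            # both approach sigma^2 = 4
assert np.isclose(var_central, var_expanded)
```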
Vector Partition: Any partition consisting of or characterized by vector values.
Vibrational Energy: Kinetic energy contained in the motions of matter which rhythmically or randomly vary about some reference origin of a coordinate system.
Voltage: Electrical potential difference, electric tension or electric pressure (measured in units of electric potential: volts, or joules per coulomb) is the electric potential difference between two points, or the difference in electric potential energy of a unit charge transported between two points. Voltage is equal to the work done per unit charge against a static electric field to move the charge between two points in space. A voltage may represent either a source of energy (electromotive force), or lost, used, or stored energy (potential drop). Usually a voltage is measured with respect to some reference point or node in a system, referred to as a system reference voltage or commonly a ground potential. In many systems a ground potential is zero volts though this is not necessarily required.
Voltage Domain: A domain possessing functions of voltage.
Voltage Domain Differential: Differences between voltages within a domain.
Waveform Efficiency: This efficiency is calculated from the average waveform output power of an apparatus divided by its averaged waveform input power.
Work: Energy exchanged between the apparatus and its communications sink, load, or channel as well as its environment, and between functions and modules internal to the apparatus. The energy is exchanged by the motions of charges, molecules, atoms, virtual particles and through electromagnetic fields as well as gradients of temperature. The units of work may be Joules. The evidence of work is measured by a change in energy.
. . . : A symbol (typically 3 dots or more) used occasionally in equations, drawings and text to indicate an extension of a list of items, symbols, functions, objects, values, etc., as required by the context. For example, the notation ν_1, ν_2, . . . ν_n indicates the variable ν_1, the variable ν_2, and all variables up to and including ν_n, where n is a suitable integer appropriate for the context. The sequence of dots may also appear in other orientations such as a vertical column or semicircle configuration.
ν+i: This is the total of the number of desirable degrees of freedom of a FLUTTER™ based system, also known as the blended control Span, composed of some distinct number of degrees of freedom ν and some number of energy partitions i. ν and i are suitable integer values.
ν_i: ν_i is the ith subset of ν degrees of freedom. Each ν_1, ν_2, . . . ν_i of the set may represent a unique number and combination of the ν distinct degrees of freedom. The subscript i indicates an association with the ith energy partition. ν_i is sometimes utilized as a subscript for FLUTTER™ system variables and/or blended control functions.
ν,i: This represents a joint set of values which may be assigned or incremented as required depending on context. The set values ν, i are typically utilized as an index for blended control enumeration. For example, ℑ̃{H(x)_ν,i} has the meaning: the νth, ith function of system information entropy H(x), or some subset of these functions. H(x)_ν,i may represent some portion of the system entropy H(x) depending on the values assumed by ν, i.
x→y; The arrow (→) between two representative symbols or variables means that the value on the left approaches the value on the right, for instance, x→y means x becomes a value substantially the same as y or the variable x is approximately the same as y. In addition, x and y can be equations or logical relationships.
ℑ̃{H(x)_ν,i}: This notation is generally associated with blended controls. It has several related meanings including:
a) A function of the νth, ith Information Entropy Function parsed from H(x).
b) A subset of blended controls for which ν, i may assume appropriate integer values.
c) An expanded set in matrix form
The meaning of ℑ̃{H(x)_ν,i} from the definitions a), b), c) depends on the context of discussion.
± or +/−: The value or symbol or variable following this ± may assume positive or negative values. For instance, +/−V_s means that V_s may be positive or negative.
∓ or −/+: The value or symbol or variable following this ∓ may assume negative or positive values. For instance, −/+V_s means that V_s may be negative or positive.
∫_ll^ul ƒ(x)dx: Integration is a mathematical operation based on the calculus of Newton and Leibniz which obtains the area under the curve of the function ƒ(x) of variable x between the limits ll (a lower limit value) and ul (the upper limit value).
Σ_n x_n: Summation is a mathematical operation which sums together all x_n = x_1, x_2, . . . of a set of values over the index n, which may take on integer values.
⟨ ⟩: The brackets indicate a time domain average of the quantity enclosed by the brackets.
Shannon created the standard by which communications systems are measured. His information capacity theorems are universally recognized and routinely applied by communications systems engineers. Shannon's theorems provide a way of calculating information transfer per unit time for given signal and noise power, yet there is no explicit connection of these concepts to power consumption. This disclosure provides that connection. Power efficiency is an increasingly important topic due to the proliferation of mobile communications and mobile computing. Battery life and heat dissipation versus the bandwidth and quality of service are driving market concerns for mobile communications.
In an embodiment, the preferred power efficiency metric is the thermodynamic efficiency η, defined as the effective power output of a system for a given invested input power. P_e is the effective power delivered by the system and P_w is the waste power, so that efficiency is given by:
η = P_e/(P_e + P_w)
In a communications system, the effective output power is defined as the power delivered to the communications load or sink and exclusively associated with the information bearing content of a signal. The waste energy is associated with non-information bearing degrees of freedom within the communications system which siphon some portion of the available input power. Though Pw can take many intermediate forms of expression, it is ultimately dissipated as heat in the environment.
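A minimal sketch of this bookkeeping follows (Python; the function name and sample wattages are illustrative assumptions), using the definition above with P_in = P_e + P_w by conservation of energy:

```python
def thermodynamic_efficiency(p_effective, p_waste):
    """eta = Pe / (Pe + Pw): effective output power over total input power,
    where the waste power Pw is ultimately dissipated as heat."""
    return p_effective / (p_effective + p_waste)

# Example: 2 W of information-bearing output against 6 W of waste heat.
print(thermodynamic_efficiency(2.0, 6.0))   # 0.25, i.e. 25% efficient
```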
The principles presented herein can be applied to any communications process, whether it be mechanical, electrical, or optical by nature. The classical laws of motion, first two laws of thermodynamics and Shannon's uncertainty function provide common frameworks for analysis and foundation for development of important models.
Shannon's approach is based on a mathematical model rather than physical insight. A particle based model is introduced to emphasize physical principles. At a high level of abstraction, the model retains the classical form used by Shannon, comprising transmitter (Tx), physical transport media and receiver (Rx). Collectively, these elements and supporting functions comprise the extended channel.
Some principles reveal that the nature of the communications process is complementary to Shannon's approach. Momentum is a metric for analyzing the motion of material bodies and particles. The transfer of information using particle based models is accomplished through the exchange of momentum, imprinting the information expressed in the motion of one particle on another.
Momentum transfer principles are presented which can be used to analyze the efficiency of any communications subsystem or extended channel. The principles can be applied to any interface where information is transferred.
The capacity, C, of an extended communications channel which propagates a signal with average power is a function of the channel bandwidth B = 2ƒ_s Hz, where ƒ_s is a Shannon-Nyquist sampling frequency required for signal construction.
In Section 3, ƒs is derived as the frequency of the forces required in an embodiment to impart momentum to a particle to encode it with information. The bandwidth B in a physical system is a direct consequence of the maximum available power Pm, to facilitate particle motion. Pm plays the analogous role in an electronics apparatus when specifying the maximum limit of a power supply with average power Ps.
In Section 5, the efficiency η is studied in detail to establish the power resource required to generate the average signal power.
It is shown in Sections 3 and 5 that the average power supplied to a communications apparatus is ƒ_s⟨ε_in⟩, where ⟨ε_in⟩ is the average energy per sample of a communications process over time. Some of this energy, ε_e, is effectively used to generate and transfer a signal and some is waste, ε_w.
It is clear that for an efficiency of 100 percent a given non-zero and finite capacity in bits per second is attained with the lowest investment of power, ƒ_s⟨ε_in⟩. In some embodiments, η would be fixed for a given C. However, methods such as those introduced in Sections 5, 6 and 7 permit improvement of η subject to an optimization procedure.
It is further shown in Section 5 that the efficiency of an information encoding process can be captured by a simple equation in which k_mod and k_σ are constants of implementation for the encoding apparatus and PAPR is defined as the peak to average power ratio of the encoded signal. For a non-dissipative system the PAPR is defined as:
PAPR = P_peak/⟨P⟩
where P_peak is the peak instantaneous signal power and ⟨P⟩ is the time averaged signal power.
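The sketch below (Python; `papr_db` is our helper name, and the clipping threshold is an illustrative assumption) computes the PAPR of a sampled signal and shows how truncating a unit-variance Gaussian signal at ±4σ yields approximately the 12 dB PAPR prototype discussed later:

```python
import numpy as np

def papr_db(signal):
    """Peak-to-average power ratio of a sampled signal, in dB."""
    power = signal ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)        # prototypical Gaussian signal, sigma = 1
x_clipped = np.clip(x, -4.0, 4.0)     # truncate the tails at +/- 4 sigma

print(papr_db(x))          # unbounded in principle; grows with sample count
print(papr_db(x_clipped))  # ~12 dB, since 10*log10(16) = 12.04 dB
```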
The encoding also applies for decoding of information in a particle based model since imparted momentum is relative.
In some embodiments, communications processes should conserve information with maximum efficiency as a design goal. The fundamental principles which determine conserved momentum exchanges between particles or virtual particles are necessary and sufficient to satisfy the required information theory constraints and derive efficiency optimization relationships. In this manner the macroscopic observable, η which is regarded as a thermodynamic quantity, can be related to microscopic momentum exchanges.
1.1. Capacity and Efficiency
Shannon proved that the capacity of a system is achieved when the signal possesses a Gaussian statistic. However, this poses a dilemma because such signals are not finite in either peak power or duration. In the context of a physical model, the power resource P_m would grow infinitely large and the efficiency of encoding a signal would correspondingly become zero. In addition, the duration of a signal would be infinite as shown in Section 2. These extremes are avoided by utilizing a prototypical Gaussian signal truncated to a 12 dB PAPR which preserves nearly all of the information encoded in the Gaussian signal.
A capacity equation is derived in Section 4 using the physical model developed in Section 3. This capacity equation is called the physical capacity equation and resembles the Shannon-Hartley equation with variations substantiated by physical principles. A notable differentiation is that for a given energy investment the capacity is twice that of the classical capacity equation per encoding dimension because information can be independently encoded in both position and momentum of a particle. Another difference is a modification to avoid an infinite capacity for the condition of zero degrees Kelvin. The quantities ƒs, Pm, and PAPR play a prominent role in the equation along with the random variables, momentum and position.
In Section 5, the efficiency of the capacity based on the prototypical Gaussian signal with a 12 dB PAPR is obtained. This Gaussian signal possesses an entropy defined by Shannon (see Section 2) and Section 10.10 (Appendix J) which is given by approximately ln(√(2πe) σ), where σ is the standard deviation of the Gaussian signal. σ is approximately 1 for the prototypical Gaussian reference signal. The thermodynamic efficiency for encoding this signal is strongly inversely related to the PAPR yet can be improved by using techniques introduced in Sections 6 and 7. It is also shown that PAPR is a nonlinear monotonically increasing parameter of a signal as capacity increases up to the classical Gaussian limit. Thus, efficiency is strongly inversely proportional to capacity. Efficiency enhancement exploits this relationship. The procedures for efficiency enhancement are accompanied with an optimization procedure which is a numerical calculus of variations approach in Section 7.
In high SNR it is possible to estimate the performance bounds of other signals possessing non-Gaussian densities in a comparative manner by defining a normalized entropy ratio H_r which compares the Shannon entropy of a signal of interest to the quantity ln(√(2πe) σ) in such a manner that the ratio H_r ≤ 1.
It is shown in Section 5 that as Hr becomes smaller, the information transfer of a channel becomes smaller but the efficiency can correspondingly increase. This is because the PAPR for such signals correspondingly decreases.
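A brief numerical sketch follows (Python; the ratio form H_r = H_signal/ln(√(2πe)σ) and the uniform-density comparison are our illustrative assumptions consistent with the normalization above):

```python
import numpy as np

def gaussian_entropy(sigma=1.0):
    """Differential entropy ln(sqrt(2*pi*e)*sigma) of a Gaussian, in nats."""
    return np.log(np.sqrt(2.0 * np.pi * np.e) * sigma)

def entropy_ratio(h_signal, sigma=1.0):
    """Normalized entropy ratio Hr <= 1 against the Gaussian reference."""
    return h_signal / gaussian_entropy(sigma)

# A uniform density on [-a, a] with unit variance (a = sqrt(3)) has
# differential entropy ln(2a), lower than the Gaussian's, so Hr < 1.
a = np.sqrt(3.0)
print(entropy_ratio(np.log(2.0 * a)))   # ~0.876
```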
In some embodiments, it is of practical concern to design efficient systems which press ever closer to Shannon's theoretical limit but do not achieve H_r = 1. The methods for efficiency enhancement for the Gaussian prototype signal are shown to also apply to all signals. Thus, even if a signal is inherently more efficient than the Gaussian prototype, the efficiency may still be significantly improved. This improvement can be several fold for complexly encoded signals. This is of particular interest to those engaged in designs which use standards-based signals deployed by the telecommunications industry as well as wireless local area networks (WLAN).
There is a diminishing rate of return for the investment of resources to improve efficiency. This is evident in the theoretical calculations of Section 5 and verified with laboratory hardware in Section 7. Hardware was constructed to measure the efficiency of the prototypical Gaussian signal prior to efficiency enhancement and after an optimization was performed. Likewise, several standards-based waveforms were also tested on the same hardware. The results reveal that the particle based theories extrapolate in a very accurate manner to an electronics application. The theory is not restricted to Gaussian waveforms but enables prediction of the efficiency for any signal before and after optimization.
1.2. Additional Discussion of Communication
Communications is the Transfer of Information Through Space and Time.
It follows, that information transfer is based on physical processes.
In some embodiments, the essential assumptions are: that a transmitter and receiver cannot be collocated in the coordinates of space-time, and that information is transferred between unique coordinates in space-time. Instantaneous action at a distance is not permitted. Also, the discussion is restricted to classical speeds where it is assumed ν/c ≪ 1.
The measure for information is usually defined by Shannon's uncertainty metric H(ρ(x)), discussed in detail in the next section. Shannon's uncertainty function permits maximum deviation of a constituent random variable x, given its describing probability density ρ(x), on a per sample basis without physical restriction or impact. This disclosure introduces these restrictions through the joint entropy H(ρ(q,p)), where q is position and p is momentum. It should be noted that a practical form of the Shannon-Hartley capacity equation requires the insertion of the bandwidth B. The insertion of B limits the rate of change of the random signal x(t) through a Fourier transform. Since x(t) has a limited rate of change, the physical states of encoding evolve to realize full uncertainty over a specified phase space. The more rapid the evolution, the greater the investment of energy per unit time for a moving particle to access the full uncertainty of a phase space based on physical coordinates, q, p.
A Signal Shall be Defined as an Information Bearing Function of Space-Time.
It is assumed that continuous signals can be represented by discrete samples versus time through sampling theorems. In an embodiment, the discrete samples are associated as the position and momentum coordinates of particles comprising the signals.
Shannon proved the following capacity limit (Shannon-Hartley Equation) for information transfer through a bandwidth limited continuous AWGN channel based on mathematical reasoning and geometric arguments.
C = B log₂(1 + ⟨P⟩/N) Equation 2-1
Where C is channel capacity in bits/second, B is bandwidth of the entire channel in Hz, ⟨P⟩ is the average signal power, and N is the average noise power in the channel.
The definition for capacity is based on:
C = lim_{T→∞} (log₂ M)/T Equation 2-2
Where M is the number of unique signal functions or messages per time interval T which can be distinguished within a hyper geometric message space constraining the signal plus additive white Gaussian noise (AWGN). The noise does not remain white due to the influence of B yet does retain its Gaussian statistic. Shannon reasoned that each point in the hyperspace represents a single message signal of duration T and that there is no restriction on the number of such distinguishable points except for the influence of uncorrelated noise sharing the hyperspace.
Continuous waveforms can be precisely reproduced by interpolation of the samples using the Cardinal Series originally introduced by Whittaker and adopted by Shannon. The following series forms the basis for Shannon's sampling theorem:
m(t) = Σ_{n=−∞}^{∞} m_n sinc(2Bt − n),  sinc(x) ≜ sin(πx)/(πx) Equation 2-3
If the samples are enumerated according to the principles of Nyquist and Shannon, equation 2-3 becomes:
m_i(t) = Σ_{n=1}^{2BT} m_n sinc(2Bt − n)
For regular sampling, the time between samples, T_s, is given by the constant 1/(2B) in seconds. This scheme permits faithful reproduction of each m_i(t) message signal with discrete coordinates whose weights are m_n for the nth sample.
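A compact sketch of this interpolation follows (Python; the tone frequency, sample count, and helper name are illustrative assumptions, and the finite series makes the reconstruction approximate):

```python
import numpy as np

def cardinal_interpolate(samples, B, t):
    """Reconstruct m(t) from samples spaced Ts = 1/(2B) apart using the
    cardinal (sinc) series; np.sinc(x) = sin(pi*x)/(pi*x)."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(2.0 * B * t - n))

B = 1.0                                   # bandwidth in Hz
Ts = 1.0 / (2.0 * B)                      # sample spacing in seconds
n = np.arange(64)
m = np.cos(2.0 * np.pi * 0.7 * n * Ts)    # 0.7 Hz tone, inside the band

t = 10.3 * Ts                             # an off-grid time instant
print(cardinal_interpolate(m, B, t))      # approximately cos(2*pi*0.7*t)
print(np.cos(2.0 * np.pi * 0.7 * t))
```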
Thus, Shannon conceives a hyperspace whose coordinates are message signals, statistically independent, and mutually orthogonal over T. He further proves that the magnitude of coordinate radial R⃗_i is given by:
|R⃗_i| = √(2BT⟨P_i⟩) Equation 2-6
Where ⟨P_i⟩ is the average power of the ith message signal over the interval T.
Shannon focused on the conditions where T→∞. This also implies N_s→∞. If all messages permitted in the hyperspace are characterized by statistically independent and identically distributed (iid) random variables (RV) then the expected values of Equation 2-6 are identical. The independently averaged message signal energies are compressed to a relatively thin hyper shell at the nominal radius:
R = √(2BT⟨P⟩)
Having established the geometric view without noise, it is possible to introduce a noise process which possesses a Gaussian statistic. Each of the m_i messages 302, 304, and 306 is corrupted by the noise. The noise on each message is also iid. It is implied that each of the potential m_i messages 302, 304, and 306, or sub sequence of samples hereafter referred to as symbols, are known a priori and thus distinguishable through correlation methods at a receiver. The symbols are known to be from a standard alphabet. In some embodiments, however, the particular transmitted symbol from the alphabet is unknown until detected at the receiver. Hence, each coordinate in the hyper-space possesses an associated function which must be cross-correlated with the incoming messages and the largest correlation is declared as the message which is most likely communicated. Whenever averaged over increasing intervals, the noise waveform tends toward its expected value of zero.
Finally, Shannon argues the requirements for capacity C which guarantees that the adjacent messages or any wrong message within the space will not be interpreted during the decoding process even for the case where the signals are corrupted by AWGN. The remarkable but intuitively satisfying result is that even for the case of AWGN, the perturbations can be averaged out over an interval T→∞ because the expected value of the noise is zero, yet the magnitude of normalized correlation for the message of interest approaches 1. Thus the correlation output is always correctly distinguishable. This infinite interval of averaging would have the effect of removing the cloud of uncertainty 412 around m2 406 in FIG. 4.
The additional geometrical reasoning to support his result comes from the idea that a hyper volume of radius R, which comprises points weighted by signal plus noise energy per unit time (⟨P⟩+N), can contain a number M of distinguishable messages bounded by the ratio of the signal-plus-noise volume to the volume attributable to noise alone:
M ≤ (√(2BT(⟨P⟩+N))/√(2BTN))^{2BT} Equation 2-8
Hence, from Equations 2-8 and 2-2:
C = lim_{T→∞} (log₂ M)/T = B log₂(1 + ⟨P⟩/N)
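The following sketch (Python; the bandwidth and SNR values are illustrative assumptions) evaluates this limit numerically:

```python
import numpy as np

def shannon_capacity(B, snr):
    """Shannon-Hartley capacity in bits/second for bandwidth B in Hz
    and a linear (not dB) signal-to-noise ratio."""
    return B * np.log2(1.0 + snr)

# 1 MHz channel at 20 dB SNR (linear ratio 100): ~6.66 Mbit/s.
print(shannon_capacity(1e6, 10 ** (20 / 10)))
```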
2.1. The Uncertainty Function
Shannon's uncertainty function is given in both discrete and continuous forms:
H(x) = −Σ_l ρ(x_l) ln ρ(x_l) Equation 2-10
H(x) = −∫ ρ(x) ln ρ(x) dx Equation 2-11
ρ(x_l) is the lth probability of discrete samples from a message function in Equation 2-10 and ρ(x) is the probability density of a continuous random variable assigned to a message function in Equation 2-11. Equation 2-11 is also referred to as the differential entropy. The choice of metric depends on the type of analysis and message signal. The cumulative metric considers the entire probability space with a normalized measure of 1. The units are given in nats for the natural logarithm kernel and bits whenever the logarithm is base 2. This uncertainty relationship is the same formula as that for thermodynamic entropy from statistical physics though they are not generally equivalent.
Jaynes and others have pointed out certain challenges concerning the continuous form which shall be avoided. An adjustment to Shannon's continuous form was proposed by Jaynes and is one of the approaches taken in this work; it requires recognition of the limit for discrete probabilities as they become more densely allocated to a particular space. Equations 2-10 and 2-11 are not precisely what is needed moving forward but they provide an essential point of reference for a measure of information. In Shannon's case, x is a nondeterministic variable from some normalized probability space which encodes information. For instance, the random values m from the prior section could be represented by x. The nature of H(ρ(x)) is modified in subsequent discussion to accommodate rules for constraining x according to physical principles. In this context the definition for information is not altered from Shannon's, merely the manner in which the probability space is dynamically derived and defined. Hereafter H(ρ(x)) will be referred to as H(x) on occasion, where the context of the probability density ρ(x) is assumed.
Capacity is defined in terms of maximization of the channel data rate which in turn may be derived from the various uncertainties or Shannon entropies whenever they are assigned a rate in bits or nats per second. Each sample from the message functions, m_i, possesses some uncertainty and therefore information entropy.
Using Shannon's notation, the following relationships illustrate how the capacity is obtained.
H(x) + H_x(y) = H(y) + H_y(x)
H(x) − H_y(x) = H(y) − H_x(y)
R ≜ H(x) − H_y(x) per unit time
C ≜ max{R}
Where H(x) is an uncertainty metric or information entropy of the source in bits, H_x(y) is an uncertainty of the channel output given precise knowledge of the channel input, H(y) is an uncertainty metric for the channel output in bits, H_y(x) is an uncertainty of the input given knowledge of the output observable (this quantity is also called equivocation), and R is a rate of the channel in bits/sec.
It is apparent that rates less than C are possible. Shannon's focus was to obtain C.
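A small numerical sketch of these relationships follows (Python; the binary symmetric channel and crossover probability are illustrative assumptions, not a channel treated in this disclosure):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Binary symmetric channel with crossover probability eps:
# R = H(x) - Hy(x); a uniform input gives equivocation Hy(x) = H(eps),
# and maximizing R over inputs yields C = 1 - H(eps).
eps = 0.1
R = H([0.5, 0.5]) - H([eps, 1.0 - eps])
print(R)   # ~0.531 bits per channel use; the uniform input achieves C here
```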
2.2. Physical Considerations
The prior sections presented the Shannon formulation based on mathematical and geometrical arguments. However, there are some important observations if one acknowledges physical limitations. These observations fall into the following general categories.
(a) An irreducible message error rate floor of zero is possible for the condition of maximum channel capacity only for the case of T→∞.
(b) There is no explicit energy cost for transitioning between samples within a message.
(c) There is no explicit energy cost for transitioning between messages.
(d) Capacities may approach infinity under certain conditions. This is counter to physical limitations since no source can supply infinite rates and no channel can sustain such rates.
(e) The messages m1, m2, . . . mi, may be arbitrarily close to one another within the hyper geometric signal space.
By collapsing the time variable associated with each message in Shannon's Hyper-space, (b) and (c) become obscured. We will expand the time variable. (d) and (e) can be addressed by acknowledging physical limits on the resolution of x(t). We introduce this resolution.
In this section, a physical model for communications is introduced in which particle dynamics are modeled by encoding information in the position and momentum coordinates of a phase space. In an embodiment, the formulation leverages some traditional characteristics of classical phase space inherited from statistical mechanics but also requires the conservation of particle information.
The subsequent discussions suppose that the transmitter, channel, receiver, and environment may be partitioned for analysis purposes and that each may be modeled as occupying some phase space which supports particle motion, as well as exchanged momentum and radiation. The analysis provides a characterization of trajectories of particles and their fluctuations through the phase space. In an embodiment, mean statistics are also necessary to discriminate the fluctuations and calculate average energy requirements. The characteristic intervals of communications processes are typically much shorter than thermal relaxation time constants for the system. This enables the most robust differentiation of information with respect to the environment for a given energy resource. The fundamental nature of communications involves extraction of information through these differentiations.
Section 3 will:
(a) Establish a model comprising a phase space with boundary conditions and a particle which encodes information in discrete samples from a nearly continuous random process.
(b) Obtain equations of motion for a single particle within phase space for item (a);
(c) Discover the nature of forces to move the particle and establish a physical sampling theorem along with the physical description of signal bandwidth;
(d) Derive the interpolation of sampled motion;
(e) Describe the statistic of motion consistent with a maximum uncertainty communications process; and
(f) Discuss the circumstance for physically analytic behavior of the model.
The preliminaries of this section pave the way for obtaining channel capacity in Section 4 and deriving efficiency relations of Section 5. Particular emphasis is applied to items (c) and (e).
3.1. Transmitter
The transmitter generates sequences of states through a phase space for which a particle possesses a coordinate per state as well as specific trajectory between states. Although more than one particle can be modeled, analysis of a single particle will be discussed since the model may be extended by assuming non-interacting particles. The information entropy of the source is assigned a mathematical definition originated by Shannon, a form similar to the entropy function of statistical mechanics. Shannon's entropy is devoid of physical association, and that is its strength as well as limitation. Subsequent models provide a remedy for this omission by assigning a time and energy cost to information encoded by particle motion. Section 8 provides a more detailed investigation of a time evolving uncertainty function.
3.1.1. Phase Space Coordinates, and Uncertainty
The model for the transmitter consists of a hyper spherical phase space in which the information encoding process is related to an uncertainty function of the state of the system:
H = −∫∫_{−∞}^{∞} ρ(q⃗,p⃗) ln ρ(q⃗,p⃗) dq⃗ dp⃗ Equation 3-1
Where q⃗, p⃗ are the vector position, in terms of generalized coordinates, and conjugate momenta of the particle respectively. In the case of a single particle system, one can choose to consider these quantities as an ordinary position and momentum pairing for the majority of subsequent discussion. A specific pair, q⃗(t_l), p⃗(t_l), along with time derivatives q̇⃗(t_l), ṗ⃗(t_l), also defines a state of the system at time t_l. H represents uncertainty or lack of knowledge concerning position of a particle in configuration space and momentum space, or jointly, phase space. Equation 3-1 is the differential form of Shannon's continuous entropy presented in Section 2. If state transitions are statistically independent, then uncertainty is maximized for a given distribution, ρ(q⃗,p⃗).
{q̇⃗, ṗ⃗} appear often in the study of mechanics and shall occasionally be referred to as the coordinate derivatives with respect to time, or conjugate derivative field. {q̇⃗, ṗ⃗} are random variables.
In an embodiment, a transmitter, by practical specification, may be locally confined to a relatively small space within some reference frame even if that frame is in relative motion to the receiver. The dynamics of particles within a constrained volume therefore cause the particles to move in trajectories which can reverse course, or execute other randomized curvilinear maneuvers while navigating through states, such that the boundary of the transmitter phase space is not compromised. If a particle is aggressively accelerated, its inertia resists the change of its future course according to Newton's first law. A particle with significant momentum will require greater energy per unit time for path modification, compared to a relatively slow particle of the same mass which executes the same maneuver through configuration space. The probability of path modification per unit time is a function of the uncertainty H. The greater the uncertainty in instantaneous particle velocity and position, the greater the instantaneous energy requirement becomes to sustain its dynamic range.
3.1.2. Transmitter Phase Space, Boundary Conditions and Metrics
In an embodiment, another model feature is that particle motion may be restricted such that it will not energetically contact the transmitter phase space boundary in a manner changing its momentum. Such contact would alter the uncertainty of the particle in a manner which annihilates information.
An example is that of the Maestro's baton. It moves to and fro rhythmically, with its material points distributing information according to its dynamics. Yet, the motions cannot exist beyond the span of the Maestro's arm or exceed the speeds accommodated by his or her physique and the mass of the baton. In fact, the motions are contrived with these restrictions inherently enforced by physical laws and resource limitations. A velocity of zero is required at the extreme position (phase space boundary) of the Maestro's stroke and the maximum speed of the baton is limited by the rate of available energy per unit time. The essential features of this analogy apply to all communications processes.
Suppose that it is desirable to calculate the maximum possible rate of information encoding within the transmitter where information is related to the uncertainty of position and momentum of a particle. Both velocity and acceleration of the transitions between states should be considered in such a maximization. Speed of the transition is dependent on the rate at which the configuration q and momentum p random variables can change.
The following bound for the motions of ordinary matter, where velocity is well below the speed of light, is deduced from physical principles:
Where νmax and Pmax are the maximum particle velocity and the maximum applied power respectively.
Equation 3-2 provides a regime of interest for engineering applications, where forces and powers are finite for finite space-time transitions. Motions which are spawned by finite powers and forces will be considered as physically analytic.
It is most general to consider a model analyzing the available phase space of a hyper geometric spherical region around a single particle and the energy requirements to support a limiting case for motion. Section 10.1 (Appendix A) supports consideration of the hyper sphere.
The phase space volume 506 accessible to a particle in motion is a function of the maximum acceleration available for the particle to traverse the volume in a specified time, Δt. Maximum acceleration is a function of the available energy resource.
In an embodiment, an accessible particle coordinate at some future Δt must always be less than the physical span of the phase space configuration volume. Considering the transmitter boundary for the moment, the greatest length along a straight Euclidian path that a particle can travel under any condition is simply 2Rs where Rs 508 is the sphere radius.
At least one force, associated with ṗ⃗, is required to move the particle between these limits. However, two forces are used to comply with the boundary conditions while stimulating motion. It is expedient to assign an interval between observations of particle motion at t_{l+1}, t_l and constrain the energy expenditure over Δt = t_{l+1} − t_l. Both starting and stopping the motion of the particle contribute to the allocation of energy. If a constraint is placed on ε̇_k, the rate of kinetic energy expenditure to accelerate the particle, then the corresponding rate must be considered as the limit for decelerating the particle. The proposition is that the maximum constant rate max{ε̇_k} = P_max = P_m bounds acceleration and deceleration of the particle over equivalent portions Δt/2 of the interval Δt, and is to be considered as a physical limiting resource for the apparatus. P_m is regarded as a boundary condition.
Given this formulation, the maximum possible particle kinetic energy must occur for a position near the configuration space center. The prior statements imply that Δt/2 is the shortest time interval possible for an acceleration or deceleration cycle to traverse the sphere. The total transition energy expenditure may be calculated by adding the contributions of the maximum acceleration and deceleration cycles symmetrically:
2∫_{t_l}^{t_l+Δt/2} P_m dt = P_m Δt
Peak velocity versus time is calculated from P_max:
ν⃗(t) = √(2P_m t/m) â_R
Where â_R is the unit radial vector within the hypersphere.
The range, R_s, traveled by the particle in Δt/2 seconds from the boundary edge is:
R_s = ∫_0^{Δt/2} √(2P_m t/m) dt = (2/3)√(2P_m/m) (Δt/2)^{3/2}
The following equation summary and graphics provide the result for the one dimensional case along the xα axis where the maximum power is applied to move the particle from boundary to boundary, along a maximum radial.
Let tl equal zero for the following equations and graphical illustration of a particular maximum velocity trajectory.
The characteristic radius and maximum velocity are solved using proper initial conditions applied to integrals of velocity and acceleration.
Where ν_max is the greatest velocity magnitude along the trajectory, occurring at t = Δt/2.
More detail is provided for the derivation of equations 3-10, 3-11 and 3-12 in Section 10.2 (Appendix B).
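The following sketch (Python; unit mass, P_m = 10 J/s and Δt = 1 s are illustrative assumptions) traces the maximum-velocity trajectory implied by these equations, with constant power P_m accelerating the particle for Δt/2 and an equal expenditure decelerating it over the second half:

```python
import numpy as np

m, P_m, dt = 1.0, 10.0, 1.0
t = np.linspace(0.0, dt, 1001)

v = np.where(t <= dt / 2,
             np.sqrt(2.0 * P_m * t / m),          # acceleration phase
             np.sqrt(2.0 * P_m * (dt - t) / m))   # deceleration phase

v_max = np.sqrt(P_m * dt / m)                     # peak velocity at t = dt/2
R_s = (2.0 / 3.0) * np.sqrt(2.0 * P_m / m) * (dt / 2.0) ** 1.5

step = t[1] - t[0]
print(v.max(), v_max)              # both ~3.162 (sqrt(10))
print(v.sum() * step, 2.0 * R_s)   # distance traversed ~ boundary-to-boundary, 2*Rs
```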
3.1.3. Momentum Probability
Next is a statistical description for velocity trajectories within the boundaries established in the prior section.
The vector {right arrow over (ν)} may be given a Gaussian distribution assignment based on a legacy solution obtained from the calculus of variations. An isoperimetric bound is applied to the uncertainty function. H can be maximized, subject to a simultaneous constraint on the variance of the velocity random variable, resulting in the Gaussian pdf. In this case, the variance of the velocity distribution is proportional to the average kinetic energy of the particle. It follows that this optimization extends to the multi-dimensional Gaussian case. This solution justifies replacement of the uniform distribution assumption often applied to maximize the uncertainty of a similar phase space from statistical mechanics. While the uniform distribution does maximize uncertainty, it comes at a greater energy cost compared to the Gaussian assignment. Hence, a Gaussian velocity distribution emphasizes energetic economy compared to the uniform density function. A derivation justifying the Gaussian assumption is provided in Section 10.1 for reference.
The Gaussian assignment is enigmatic because infinite probability tails for velocity invoke relativity considerations, with c (speed of light) as an absolute asymptotic limit. Therefore, in some embodiments, the value of the peak statistic is limited and approximated on the tail of the pdf to avoid relativistic concerns. The variance or average power can be another important statistic. The peak to average power or peak to average energy ratio of a communications signal can be an especially significant consideration for transmitter efficiency. The analog of this parameter can also be applied to the multidimensional model for the transmitter particle velocity and will be subsequently derived for calculating a peak to average power or peak to average kinetic energy ratio, hereafter PAPR and PAER, respectively.
Whenever ν = 4 or greater for the pdf with variance σ² = 1, the probability values are very small in a relative sense. If ν²/2 is directly proportional to the instantaneous kinetic energy, then a peak velocity excursion of 4 corresponds to an energy peak of 8. For the case of σ² = 1, a range of ν = ±2√2 encompasses the majority (97.5%) of the probability space. Hence, PAER ≥ 4 is a comprehensive domain for the momentum pdf with a normalized variance. In an embodiment, the PAER must always be greater than 1 by design because σ² → 0 as PAER → 1. One can always define a PAER provided σ² ≠ 0. This is a fundamental restriction. As σ² → 0, the pdf becomes a delta function with area 1 by definition. In the case of a zero mean Gaussian RV the average power becomes zero in the limit along with the peak excursions if the PAER approaches a value of 1.
The probability tails beyond the peak excursion values can simply be ignored (truncated) as insignificant or replaced with delta functions of appropriate weight. This approximation will be applied for the remainder of the discussion concerning velocities or momenta of particles. PAER is an important parameter and may be varied to tailor a design. PAER provides a suitable means for estimating the required energy of a communications system over significant dynamic range. It will be convenient to convert back and forth between power and energy from time to time. In general, PAPR is used whenever variance is given in units of Joules per second and PAER is used whenever units in Joules are preferred.
Maximum velocity and acceleration along the radial is bounded. At the volume center the probability model for motion is completely independent of θ,ø in spherical geometry. However, as the particle position coordinate q varies off volume center, the spread of possible velocities must correspondingly be modified. Either the particle must asymptotically halt, move tangentially at the boundary or otherwise maneuver away from the boundary avoiding collision. The angular distribution of the velocity vector changes as a function of offset radial with respect to the sphere center.
Momentum will be represented using orthogonal velocity distributions. This approach follows similar methods originated by Maxwell and Boltzmann. The subsequent analysis focuses on the statistical motion of a single particle in one configuration dimension. As one skilled in the art would appreciate, additional D dimensions are easily accommodated from extension of the 1-D solution.
The configuration coordinate may be identified at the tip of the position vector given an orthonormal basis:
q⃗ = q_1 â_x
Likewise the velocity is given by:
ν⃗ = q̇_1 â_x
Distributions for each orthogonal direction are easily identified from the prior velocity profile calculations, definition of PAER, and Gaussian optimization for velocity distribution due to maximization of momentum uncertainty.
The generalized axes of the D dimensional space will be represented as x_1, x_2, . . . x_D where D can be assigned for a specific discussion. Similarly, unit vectors in the x_α dimension are assigned â_α as the defining unit vector. Velocity and position vectors are given by ν⃗_α and q⃗_α respectively.
The radial velocity ν⃗_r as illustrated is defined by ν⃗_r = ν_α â_α, which is a convenient alignment moving forward. The equations for the peak velocity profile were given previously and are used to calculate the peak velocity versus radial offset coordinate along the x_α axis. PAER may be specified at a desired value, such as 4 (6 dB) for example, and the pseudo Gaussian distribution of the velocities obtained as a function of q_α.
The velocity probability density is written in two forms to illustrate the utility of specifying PAER.
ν⃗_α_peak is the peak velocity profile as a function of q_α, which will occasionally be referred to as ν⃗_p whenever convenient. In some embodiments, PAER is a constant. Therefore σ_ν varies with q_α in proportion to the peak velocity profile, since by the definition of PAER, σ_ν² = ν_p²/PAER.
Each value of q_α along the radial possesses a unique Gaussian random variable for velocity. The graphical illustration of this distribution follows:
In plot 1000, probability is given on the vertical axis. The probability of the vector velocity is maximum for zero velocity on the average at the phase space center, with equal probability of positive and negative velocities at a given q. The sign or direction of the trajectory corresponds to positive or negative velocity in the figure. The velocity must approach zero at the extremes of +/−R_s, the phase space boundary. Correspondingly, the variances of the Gaussian profiles are minimum at the boundaries and maximum at the center.
A cross-sectional view from the perspective of the velocity axis of plot 1000 is Gaussian with variance that changes according to qα. In this case, a PAER of 4 is maintained for all qα coordinates.
Suppose P_m decreases from 10 to 5 J/s. The corresponding scaling of phase space is illustrated in plots 1100 and 1200.
In plots 1100 and 1200, the velocity dynamic range is decreased in comparison to the example shown in plot 1000 by the factor √(P_m_new/P_m_old). R_s, the characteristic accessible radius of the sphere, must correspondingly reduce even though the PAER = 4 is unchanged. Thus, the hyper-sphere volume decreases in both configuration and momentum space.
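A small sketch of these relationships follows (Python; the taper of the peak velocity profile toward the boundary is an assumed illustrative shape, and the constants echo the example above):

```python
import numpy as np

PAER = 4.0                         # fixed peak-to-average energy ratio (6 dB)
v_peak_center = np.sqrt(10.0)      # peak velocity at center for P_m = 10 J/s, m = 1

def sigma_v(q, R_s=1.0):
    """Position-dependent velocity spread: the peak profile is assumed to
    taper to zero at +/-R_s, and sigma follows it with constant PAER."""
    v_peak = v_peak_center * np.sqrt(1.0 - (np.abs(q) / R_s) ** 2)
    return v_peak / np.sqrt(PAER)

print(sigma_v(0.0))     # maximum spread at the volume center
print(sigma_v(0.999))   # variance collapses toward the boundary

# Halving P_m from 10 to 5 J/s scales the velocity dynamic range by:
print(np.sqrt(5.0 / 10.0))   # ~0.707
```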
Now that the momentum conditional pdf is defined for one dimension, the extension to the other dimensions is straightforward given the assumption of orthogonal dimensions and statistically independent distributions. The distribution of interest is 3 dimensional Gaussian. This is similar to the classical Maxwell distribution except for the boundary conditions and the requirement for maintaining vector quantities. The distribution for the multivariate hyper-geometric case may easily be written in terms of the prior single dimensional case.
The multidimensional pdf can be given as:
ρ(ν⃗) = (2π)^{−D/2} |[Λ]|^{−1/2} exp(−½ ν⃗ᵀ[Λ]⁻¹ν⃗) Equation 3-18
The covariance and normalized covariance are also given explicitly for reference:
Γ(α,β) = E[ν_α ν_β],  Γ_norm(α,β) = Γ(α,β)/(σ_α σ_β) Equation 3-19
Γnorm(α,β) is also known as the normalized statistical covariance coefficient. The diagonal of Equation 3-19 will be referred to as the dimensional auto covariance and the off diagonals are dimensional cross-covariance terms. These statistical terms are distinguished from the corresponding forms which are intended for the time analysis of sample functions from an ensemble obtained from a random process. However, a correspondence between the statistical form above and the time domain counterpart is anticipated and discussed in later sections. Discussions proceed contemplating this correspondence.
In an embodiment, [Λ] permits flexibility for determining arbitrarily assigned vectors within the space. Statistically independent vectors are also orthogonal in this particular formulation over suitable intervals of time and space. Equation 3-18 can account for spatial correlations. In the case where state transitions possesses statistically independent origin and terminus, the off diagonal elements, (α≠β), will be zero.
In the Shannon uncertainty view, each statistically independent state is equally probable at a successive infinitesimal instant of time, i.e. (Δt/2)→0. More directly, time is not an explicit consideration of the uncertainty function. As will be shown in Section 8, this cannot be true independent of physical constraints such as Pmax, and Rs. Statistically independent state transitions may only occur more rapidly for greater investments of energy per unit time.
3.1.3.1 Transmitter Configuration Space Statistics
In an embodiment, the configuration space statistic is a probability of a particle occupying coordinates qα. A general technique for obtaining this statistic is part of an overall strategy outlined in the following brief discussion.
A philosophy which has been applied to this point, and will be subsequently advanced, follows:
First, system resources are determined by the maximum rate of energy per unit time limit. This quantity is P_m. P_m limits ṗ⃗, which considers acceleration. Secondly, information is encoded in the momentum of particle motion at a particular spatial location. Momentum is approximately a function of the velocity at non-relativistic speeds, which in turn is an integral with respect to the acceleration. The momentum is constrained by the joint consideration of P_m and maximum information conservation. Finally, the position is an integral with respect to the velocity which makes it a second integral with respect to the force and, in a sense, a subordinate variable of the analysis, though a necessary one.
The hierarchy of inter-dependencies is significant. Fortuitously, momentum couples configuration and force through integrals of motion. Since the momentum is Gaussian distributed it may be posited that the position is also Gaussian. That is, the integral or the derivative of a Gaussian process remains Gaussian.
The specific form of the configuration dependency is reserved for Section 3.1.10.1 where the joint density ρ({right arrow over (q)},{right arrow over (p)}) is fully developed.
3.1.4. Correlation of Motion, and Statistical Independence
Discussions in this section are related to correlation of motion. Since the RVs of interest are statistically independent zero mean Gaussian, they are also uncorrelated over sufficient intervals of time and space.
The mathematical statistical independence is presented here with the appropriate variable representation, preserving space-time indexing. Time indexing tl and tl+τ is retained to acknowledge that the pdfs of interest may not evolve from strictly stationary processes.
ρ(ν_β(t_l+τ)|ν_α(t_l)) is the probability of the ν_β(t_l+τ) velocity vector given the ν_α(t_l) velocity vector. The following discusses the conditions enabling Equation 3-22.
Partial time correlation of Gaussian RVs characterizing physical phenomena is inevitable over relatively short time intervals when the RVs originate from processes subject to regulated energy per unit time. Bandwidth limited AWGN with spectral density N₀ is an excellent example of such a case, where the infinite bandwidth process is characterized by a delta function time auto-correlation and the same strictly filtered process is characterized by a harmonic sinc auto-correlation function with nulls occurring at intervals τ = ±n/(2B), where B is the filtering bandwidth and ±n are non-zero integers.
The nature of correlations at specific instants, or over extended intervals, can provide insight into various aspects of particle motions such as the work to implement those motions and the uncertainty of coordinates along the trajectory.
Λ was introduced to account for the inter-dimensional portions of momentum correlations. Whenever ν_α and ν_β are not simultaneous in time, the desired expressions can be viewed as space and time cross-covariance. This is explicitly written for the lth and (l+1)th time instants in terms of the pdf as:
This form accommodates a process which defines the random variables of interest but is not necessarily stationary. This mixed form is a bridge between the statistical and time domain notations of covariance and correlation. It acknowledges probability densities which may vary as a function of time offset and therefore q, as is the current case of interest.
The time cross correlation of the velocity for τ offset is:
R_{αβ}(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} ν_α(t) ν_β(t+τ) dt Equation 3-26
If α = β then Equation 3-26 corresponds to a time auto-correlation function. This form is suitable for cases where the velocity samples are obtained from a random process with finite average power. Whenever α ≠ β, the vector velocities are uncorrelated because they correspond to orthogonal motions. Arbitrary motion is equally distributed among one or more dimensions over an interval 2T and compared to time shifted trajectories. Then, the resulting time based correlations over sub intervals may range from −1 to 1. In the case of independent Gaussian RV's, Equations 3-25 and 3-26 should approach the same result.
In the most general case the momentum, and therefore the velocity, can be decomposed into D orthogonal components. If such vectors are compared at t=tl and t=tl+τ offsets, then a correlation operation can be decomposed into D kernels of the form given in Equation 3-25 where it is understood that the velocity vectors must permute over all indices of α and β to obtain comprehensive correlation scores. A weighted sum of orthogonal correlation scores determines a final score.
A metric for the velocity function similarity as the correlation space-time offset varies is found from the normalized correlation coefficient, which is the counterpart to the normalized covariance presented earlier. It is evaluated at a time offset.
It is possible to target the space and time features for analysis by suitably selecting the values α,β,τ.
A finite energy, time autocorrelation is also of some value. Sometimes this can be a preferred form instead of the form in Equation 3-26. The energy signal auto and cross correlation can be found from:
R_{αβ}(τ) = ∫_{−∞}^{∞} ν_α(t) ν_β(t+τ) dt
Now the character of the time auto-correlation of the linear momentum over some characteristic time interval, such as Δt=tl−tl+1, is examined. In an embodiment, the correlation must become zero as the offset time (tl+Δt) is approached to obtain statistical independence outside that window. In that case, time domain de-correlation requires that:
p⃗(t−t_l)·p⃗(t−(t_l+Δt)) = 0; t ≥ |(t_l+Δt)| Equation 3-29
Similarly, the forces which impart momentum change must also decouple implying that:
ṗ⃗(t−t_l)·ṗ⃗(t−(t_l+Δt)) = 0; t ≥ |(t_l+Δt)| Equation 3-30
Suppose it is desired to de-correlate the motions of a rapidly moving particle and this operation is compared to the same particle moving at a diminutive relative velocity over an identical trajectory. Greater energy per unit time is helpful to generate the same uncorrelated motions for the fast particle over a common configuration coordinate trajectory. The controlling rate of change in momentum must increase corresponding to an increasing inertial force. Likewise, a proportional oppositional momentum variation is useful to establish equilibrium, thus arresting a particle's progress along some path.
Another consideration is whether or not the particle motion attains and sustains an orthogonal motion or briefly encounters such a circumstance along its path. Both cases are of interest. However, a brief orthogonal transition is sufficient to remove the memory of prior particle momentum altogether if the motions are distributed randomly through space and time.
A basic principle emerges from Equations 3-29 and 3-30. This principle is that successive particle momentum and force states must become individually zero, jointly zero or orthogonal, corresponding to the erasure of momentum memory beyond some characteristic interval Δt, assuming no other particle or boundary interactions.
If a particle stops while releasing all of its kinetic energy, or turns in an orthogonal direction, prior information encoded in its motion is lost. This is because evolving uncertainty is coupled to the particle memory through momentum. Extended single particle de-correlations outside of the interval±Δt, with respect to ν
Autocorrelations will be zero outside of the window (−Δt≤τ≤Δt) for the immediate analysis unless otherwise stated. The reason for this initial analysis restriction is to bound the maximum required energy resource for statistically independent motion beyond a characteristic interval. In other words, there is no information concerning the particle motion outside that interval of time.
The derivative ε̇_k is random up to a limit, P_max. ε̇_k is a function of the derivative field:
ε̇_k = ṗ⃗·q̇⃗ Equation 3-31
This leads to a particular inter-variable cross-correlation expression:
R_{ṗq̇}(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} ṗ⃗(t)·q̇⃗(t+τ) dt
The kernel is a measure of the rate of work accomplished by the particle. It is useful as an instantaneous value or an accumulated average. This equation is identically zero only for the case where ṗ⃗ or q̇⃗ are zero or for the case where the vector components of ṗ⃗, q̇⃗ are mutually orthogonal. If the vector components of ṗ⃗, q̇⃗ are orthogonal for all time, then there is no power consumed in the course of the executed motions. Thus, statistical independence of momentum and force at relatively the same instant in time may be assumed for the case where the instantaneous rate of work is zero. Whenever there is consumption of energy, force and velocity share some common nonzero directional component and will be statistically codependent to some extent. This codependence bridges between randomly distributed coordinates of the phase space at successively fixed time intervals. If we restrict motions to an orthogonal maneuver within the derivative field, we collapse phase space access and uncertainty of motion goes to zero along with the work performed on the particle.
3.1.5. Autocorrelations and Spectra for Independent Maximum Velocity Pulses
At this point it is convenient to introduce the concept of the velocity pulse. Particle memory, due to prior momentum, is erased moving beyond time Δt into the future for this analysis. Conversely, this implies a deterministic component in the momentum during the interval Δt. Such structure, where the interval is defined as beginning with zero momentum in the direction of interest and terminating with zero momentum in that same direction, is referred to as a velocity pulse. For example, the maximum velocity profiles may be distinctly defined as pulses over Δt.
The maximum velocity pulse possesses a time autocorrelation that is analyzed in detail in Section 10.3 (Appendix C). The corresponding normalized autocorrelation is plotted in the referenced graph with Δt=1.
The Fourier transform of a convolution is the product of the individual transforms:
$\Im(g_1 * g_2) = \int_{-\infty}^{\infty}\left\{\int_{-\infty}^{\infty} g_1(t-\lambda)\,g_2(\lambda)\,d\lambda\right\} e^{-i\omega t}\,dt = G_1(\omega)\,G_2(\omega)$  Equation 3-33
The transform of the correlation operation for real functions is given by:
$\Im\{g_1 \star g_2\} = \int_{-\infty}^{\infty}\left\{\int_{-\infty}^{\infty} g_1(t'+\tau)\,g_2(t')\,dt'\right\} e^{-i\omega\tau}\,d\tau$  Equation 3-34
If $(t'-\tau)\rightarrow(t-\lambda)$, then the convolution is identical to the correlation, which is precisely the case for symmetric functions of time. Hence, the Fourier transform of the autocorrelation can be obtained from the square of the Fourier transform of the velocity pulse in this case.
$\Im\left\{\int_{-\infty}^{\infty}\nu(t'+\tau)\,\nu(t')\,dt'\right\} = \int_{-\infty}^{\infty}\left\{\int_{-\infty}^{\infty}\nu(t'+\tau)\,\nu(t')\,dt'\right\} e^{-i\omega\tau}\,d\tau = V(\omega)\,V(\omega)$  Equation 3-35
The maximum velocity pulse functions given above are not specified except at the statistically rare boundary condition extreme. Whenever the transmitter is not pushed to an extreme dynamic range, the pulse function can assume a different form.
According to the Gaussian statistic, the maximum velocity pulse, and therefore its associated autocorrelation illustrated in the referenced figure, occur only rarely.
3.1.6. Characteristic Response
Independent pulses of duration Δt possess a characteristic autocorrelation response. In an embodiment, all spectral calculations based on this fundamental structure will have a main lobe with a frequency span which is at least on the order of or greater than $2(\Delta t)^{-1}$ according to the Fourier transform of the autocorrelation. This can be verified by Gabor's uncertainty relation.
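A short numerical sketch of this property follows, assuming, for illustration only, a rectangular velocity pulse of duration Δt. It checks that the transform of the pulse autocorrelation equals |V(f)|² in magnitude (the Equation 3-35 theme) and that the spectral main lobe spans approximately 2(Δt)⁻¹.

```python
import numpy as np

dt_pulse = 1.0                            # characteristic interval Delta-t (s)
fs_sim = 1000.0                           # simulation sample rate (Hz)
v = np.ones(int(dt_pulse * fs_sim))       # illustrative unit pulse of duration Delta-t

nfft = 1 << 16
V = np.fft.rfft(v, nfft) / fs_sim
psd = np.abs(V) ** 2                      # |V(f)|^2
f = np.fft.rfftfreq(nfft, 1.0 / fs_sim)

# Transform of the autocorrelation equals |V(f)|^2 in magnitude (Eq. 3-35 theme)
r = np.correlate(v, v, mode="full") / fs_sim
R = np.fft.rfft(r, nfft) / fs_sim
print(np.allclose(np.abs(R), psd))        # True

# Locate the first spectral null; the main lobe spans ~2/Delta-t
band = (f > 0.5 / dt_pulse) & (f < 1.5 / dt_pulse)
first_null = f[band][np.argmin(psd[band])]
print(first_null, 2.0 * first_null)       # ~1.0 Hz -> main lobe ~2(Delta-t)^-1
```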
A rectangular velocity pulse can be formed from elementary operations which possess significant intuitive and physical relevance. Any finite rectangular pulse can be modeled with at least two impulses and corresponding integrators.
Supposing that the impulse functions are forces applied to a particle of mass m=1, the particle velocity is obtained by integrating the acceleration due to the force. The result of the given integration is the rectangular velocity pulse versus time. This circumstance places no practical restrictions on the force functions δ(t∓Δt/2), i.e., they are physically non-analytic, yet it corresponds mathematically to Newton's laws of motion.
The result is accurate to within a constant of integration. Only the time-variant portion of the motion can encode information, so the constant of integration is not of immediate interest. Notice further that if the first integral were not opposed by the second, motion would be constant and change in momentum would not be possible after t=−Δt/2; uncertainty of motion would be extinguished after the first action. Thus, two forces are useful to alter the velocity in a prescribed manner to create a pulse of specific duration.
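The following sketch illustrates this construction numerically, assuming a unit mass and unit-momentum impulses approximated by narrow spikes at ∓Δt/2; integrating the resulting acceleration recovers the rectangular velocity pulse.

```python
import numpy as np

fs = 1000.0                          # simulation rate (Hz)
t = np.arange(-1.0, 1.0, 1 / fs)     # time axis; Delta-t = 1 s pulse centered on 0
m = 1.0                              # particle mass (kg), normalized

# Two opposing impulsive forces at -dt/2 and +dt/2, modeled as narrow spikes
force = np.zeros_like(t)
force[np.argmin(np.abs(t + 0.5))] = +1.0 * fs   # unit-momentum impulse (N*s -> N)
force[np.argmin(np.abs(t - 0.5))] = -1.0 * fs

# Newton: integrate acceleration (F/m) to obtain velocity
velocity = np.cumsum(force / m) / fs
# velocity is ~1 m/s between -0.5 s and +0.5 s, and zero elsewhere:
print(velocity[np.abs(t) < 0.4].mean(), velocity[t > 0.6].mean())  # ~1.0, ~0.0
```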
Recall the original maximum velocity pulse with one degree of freedom previously analyzed in detail. In that case at least two distinct forces are also used to create the velocity profile, which ensures statistical independence of motion outside the interval ±Δt/2.
Information is encoded in the pulse amplitude. This level is dependent on the nature of the force over the interval Δt and changes modulo Δt. Regardless of the specific function realized by the velocity pulse, at least two distinct forces permit independence of motion between succeeding pulse intervals. This property is also evident from energy conservation in the case where work is accomplished on the particle since:
$\overline{\dot{\vec{p}}_1\cdot\dot{\vec{q}}_1} = \overline{\dot{\vec{p}}_2\cdot\dot{\vec{q}}_2}, \qquad \Delta t_1 + \Delta t_2 = \Delta t$  Equation 3-37
$\varepsilon_1 = \varepsilon_2$  Equation 3-38
The left hand side of the equation is the average energy ε1 over the interval Δt1, the first half of the pulse. The right hand side is the analogous quantity for the second half of the pulse. If the average rate of work by the particle, $\dot{\vec{p}}_1\cdot\dot{\vec{q}}_1$, increases, then Δt1 may decrease, in turn reducing Δt, the time to uniquely encode an uncorrelated motion spanning the phase space. The total kinetic energy expended for the first half of the pulse is equivalent to the energy expended in the second half given equivalent initial and final velocities. If the initial and final velocities in a particular direction are zero, then the momentum memory for the particle is reset to zero in that direction, and prior encoded information is erased.
This theme is reinforced by $\dot{p}_1(t)$ 2002 and $\dot{p}_2(t)$ 2004, associated with forces F1, F2, illustrating the dynamics of a maximum velocity pulse in the referenced figure.
This is a physical form of a sampling theorem. Whether generating such motions or observing them, $f_{s\_min} = 2(\Delta t)^{-1}$ is a useful consideration for the most extreme trajectory possible, which de-correlates particle motion in the shortest time given the limitation of finite energy per unit time. The justification has been provided for generating motions, but the analogous circumstance concerning observation of motion logically follows. Acquisition of the information encoded in an existing motion through deployment of forces involves extracting momentum in the opposite sense. Encoding changes particle momentum in one direction and decoding extracts this momentum by an opposite relative action. In both cases the momentum imparted or extracted goes to the heart of information transfer and the efficiency concern to be discussed further in Section 5.
Shannon's Sampling Theorem: If a function contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced (2W)−1 seconds apart.
In the same paper, Shannon states, concerning the sample rate: “This is a fact which is common in the communications art.” Furthermore, Shannon credits Whittaker, Nyquist and Gabor.
In the limiting case of a maximum velocity pulse, the pulse is symmetrical. The physical sampling theorem does not require this in general as is evident from the equation for averaged kinetic energy from the first half of a pulse over interval Δt1 versus the second interval Δt2. In the general circumstance, P1≠P2 and Δt1≠Δt2. Thus, the pulse shape restriction is relaxed for the more general case when {P1, P2}<Pm. Since the sampling forces which occur at the rate ƒs are analyzed under the most extreme case, all other momentum exchanges are subordinate. The fastest pulse, the maximum velocity pulse, possesses just enough power Pm to accomplish a comprehensive maneuver over the interval Δt, and this trajectory possesses only one derivative sign change. Slower velocity trajectories may possess multiple derivative sign changes over the characteristic configuration interval 2 Rs but ƒs will be greater than or equal to twice the number of derivative sign changes of the velocity and also be greater than or equal to twice the transition rate between orthogonal dimensions.
In multiple dimensions the force is a diversely oriented vector but possesses these specified sampling qualities when decomposed into orthogonal components, provided the resources spawning forces support the capability of maximum acceleration and deceleration over the interval Δt, even though these extreme forces are seldom required.
The calculations for the maximum work over the interval Δt/2, and for the average kinetic energy limit of velocity pulses in general, are based on the PAER metric and practical design constraints. Equation 3-41 is due to the physical sampling theorem.
Equations 3-39, 3-40 and 3-41 may be combined and rearranged, noting that the average kinetic energy is less than or equal to the maximum kinetic energy. In other words, Pm is a conservative upper bound and a logical design limit to enable conceivable actions. Therefore:
The averaged energy εks is per sample. The total available energy εtot is allocated amongst, say, 2N samples or force applications. The average energy per unique force application is therefore just εtot/2N = εks. This is the quantity that should be used in the denominator of Equation 3-42 to calculate the proper force frequency ƒs. Using Equation 3-42, another form of the physical sampling theorem can be stated which contemplates extended intervals modulo T/2N = Ts:
The physical sampling rate for any communications process is greater than the maximum available power to invest in the process, divided by the average encoded particle kinetic energy per unique force (sample), times the peak to average energy ratio (PAER) for the particle motions over the duration of a signal.
The prior statement is best understood by considering single particle interactions but can be applied to bulk statistics as well. We will interpret ƒs as the number of unique force applications per unit time, and ƒs_min as the number of statistically independent momentum exchanges per unit time. This rate shall also be referred to hereafter as the sampling frequency. Adjacent samples in time can be correlated. If the correlation is due to the limitation Pm, then the system is oversampled whenever more than 2 forces per characteristic interval Δt are deployed. Conversely, if only two forces are deployed per characteristic interval, then it is possible to make them independent (i.e., unique) given an adequate Pm. Therefore, the physical sampling theorem specifies a minimum sampling frequency ƒs_min, as well as an interval of time over which successive samples are deployed to generate or acquire a signal. By doing so, all frequencies of a signal up to the limit B are contemplated. The lowest frequency of the signal is given by T−1.
More samples are useful when they are correlated because they impart or acquire smaller increments of momentum change per sample compared to the circumstance for which a minimum of two samples enable particle dynamics which span the phase space over the interval Δt.
Shannon's sampling theorem as stated is useful but not sufficient because it does not include a duration of time over which samples are deployed to capture both high frequency and low frequency components of a signal over the frequency span B, though his general analysis includes this concept. As Marks points out, Shannon's sampling number is a total of 2BTs samples to characterize a signal. Consider a 1 kg mass which has a peak velocity limit of 1 m/s for a motion which is random and the peak to total average energy ratio for a message is limited to 4 to capture much of the statistically relevant motions (97.5% of the particle velocities for a Gaussian statistic). Let the power source possess a 10 Joule capacity, εtot. If the apparatus power available to the particle has a maximum energy delivery rate limit of Pm equal to 1 joule per second and we wish to distribute the available energy source over 1 million force exchanges spaced equally in time to encode a message, then the frequency of force application is:
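The display equation for this result is not reproduced in this extraction. A minimal sketch of the arithmetic follows, under the assumed grouping ƒs ≥ Pm/(PAER·ε̄ks); this grouping is an interpretation of the theorem statement above, and it is consistent with both worked examples in this section. Variable names are hypothetical.

```python
# Sketch of the example arithmetic. The grouping fs >= Pm / (PAER * eps_bar)
# is an assumed reading of the theorem statement; the display equation is
# not reproduced in this extraction.
Pm = 1.0             # maximum energy delivery rate (J/s), stated in the example
eps_tot = 10.0       # energy source capacity (J)
n_forces = 1_000_000 # force exchanges spaced equally in time
PAER = 4.0           # peak to average energy ratio

eps_bar = eps_tot / n_forces        # 1e-5 J per unique force application
fs = Pm / (PAER * eps_bar)          # 25,000 force applications per second
print(eps_bar, fs)
```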
If ƒs falls below this value, then the necessary maneuvers required to encode information in the particle motion cannot be faithfully executed, thereby eroding access to phase space, which in turn reduces uncertainty of motion and ultimately results in information loss. If ƒs increases above this rate, then information encoding rates can be maintained or increased, trading reduced transmission time against energy expenditure.
Capacity equations can be related to the physical sampling theorem and therefore related to the peak rate of energy expenditure, not just the average. The peak rate is a legitimate design metric, and the ratio of the peak to average is inversely related to efficiency, as will be shown. It is even possible to calculate capacity versus efficiency for non-maximum entropy channels by fairly convenient means, an exercise of considerable challenge according to Shannon. By characterizing sample rate in terms of its physical origin, access to the conceptual utility of other disciplines such as dynamics and thermodynamics can be gained, advancing toward the goal of trading capacity for efficiency.
3.1.7. Sampling Bound Qualification
Shannon's form of the sampling theorem contains a reference to frequency bandwidth limitation, W. It is important to establish a connection with the physical sampling theorem. An intuitive connection can be stated simply by comparing two equations (where W is replaced by B):
B will be adopted as the variable symbolizing Nyquist's bandwidth for the remainder of this disclosure and possesses the same meaning as the variable W used by Shannon. Although the two inequalities in Equation 3-43 appear different, they possess the same units if one regards a force event (i.e., an exchange of force with a particle) as a sample.
The bounds provided for the sampling rate in Equation 3-43 and Shannon's theorem are obtained by two very different strategies. Equation 3-46 is based on physical laws, while Shannon's restatement of the sampling rate proposed by Nyquist and Gabor is of mathematical origin and logic. The conditions under which the inequalities in Equation 3-43 provide the most restrictive interpretation of ƒs are examined. This occurs as both equations in 3-43 approach the same value.
The arrow in the equation indicates “as the quantity on the left approaches the quantity on the right.” We will investigate the circumstance for this to occur. It will be shown that when signal energy, as calculated in a manner consistent with the method employed by Shannon, is equated to the kinetic energy of a particle, the implied relation of Equation 3-44 becomes an equality.
A direct approach can be illustrated from the Fourier transform pair of a sequence of samples from a message ensemble member. This technique depends on the definition for bandwidth. Shannon's definition requires zero energy outside of the frequency spectrum defined by bandwidth B. A parallel to Shannon's proof is provided for reference. Shannon employs a calculation in his proof of the inverse Fourier transform of the band limited spectrum for a sampled function of time, g(t), sampled at discrete instants spaced (2B)−1 apart.
This results in an infinite series expansion over n, the sample number.
Thus, with this treatment the kinetic energy of individual velocity samples for a dynamic particle is equated to the energy of signal samples so that:
When Equation 3-46 is true, then the right hand side of Equation 3-43 has a kinetic energy form and a signal energy form. Shannon's definition for signal energy will be used.
Consider the signal g(t) to be of finite power in a given Shannon bandwidth B:
Shannon requires the frequency span 2B to be a constant spectrum over G(ƒ). Since the approach is to discover how the particle kinetic energy limitations per unit time correspond to Shannon's bandwidth, a constant is substituted for G(ƒ) in Rayleigh's expression to obtain:
$\varepsilon_g = 2B\,\bar{\varepsilon}_g$
Both sides of Equations 3-47 and 3-48 have been multiplied by unit time to obtain energy. $\bar{\varepsilon}_g$ is the average energy per sample.
An alternate form of 3-44 may now be written:
Given that Equation 3-52 is now an equality, 3-44 may be employed as a suitable measure for bandwidth or sampling rate requirements. Thus, for a communications process modeled by particle motion which is peak power limited:
This equation and its variants shall be referred to as the sampled time-energy relationship or the TE relation. The TE relation may be applied for uniformly sampled motions of any statistic. If trajectories are conceived to deploy force rates which exceed ƒs_min, then B can also increase with a corresponding modification in phase space volume. In addition, the factor kp appears in the denominator. This constant accounts for any adjustment to the maximum velocity profile which is assigned to satisfy the momentum space maximum boundary condition. For the case of the nonlinear maximum velocity pulse in the hyper sphere, kp≡1. This is one design extreme. Another design extreme occurs whenever the boundary velocity profile can also be physically analytic under all conditions. Finally, the appearance of the derivatives of the canonical variables, $\dot{\vec{q}},\dot{\vec{p}}$, in the numerator illustrates the direct connection between the particle dynamics within phase space and a sampling theorem. In particular, these variables illustrate the increased work rate for encoding greater amounts of information per unit time. The quantity $\max\{\dot{\vec{q}}\cdot\dot{\vec{p}}\}$ maximizes the rate of change of momentum per unit time over a configuration span.
An example illustrates the utility of Equation 3-53. Suppose a signal of 1 MHz bandwidth must be synthesized. Let the maximum power delivery for the apparatus be set to
Furthermore, the signal of interest is known to possess a 3 dB PAER statistic. From these specifications one calculates that the average energy rate per sample is 2.5e-7 Joules. If the communications apparatus is battery powered with a voltage of 3.3 V at a 1000 mAh rating, then the signal can be sustained for 6.6 hours between recharge cycles of the battery, assuming the communications apparatus is otherwise 100% efficient.
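The battery-life arithmetic of this example can be cross-checked directly; the sketch below uses only the values stated above (a 3 dB PAER corresponds to a power ratio of approximately 2, though only the average power enters the battery calculation).

```python
B = 1e6                  # signal bandwidth (Hz)
fs = 2 * B               # physical sampling rate (samples/s)
eps_sample = 2.5e-7      # average energy rate per sample (J), from the text

battery_J = 3.3 * 1.0 * 3600     # 3.3 V at 1000 mAh -> 3.3 Wh = 11,880 J
p_avg = eps_sample * fs          # 0.5 W average rate of energy expenditure
print(battery_J / p_avg / 3600)  # ~6.6 hours between recharge cycles
```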
3.1.8. Interpolation for Physically Analytic Motion
This section provides a derivation for the interpolation of sampled particle motion. The Cardinal series is derived from a perspective dependent on the limitations of available kinetic energy per unit time and the assumption of LTI operators for reconstructing a general particle trajectory from its impulse sample representation. A portion of the LTI operator is assumed to be inherent in the integrals of motion. Additional sculpting of motion is due to the impulse response of the apparatus. Together, these two effects constitute an aggregate impulse response which determines the form of the characteristic velocity pulse. The cardinal series is considered a sequence of such velocity pulses.
Up to this point, the physically analytic requirement for trajectory has not been strictly enforced at the boundary, as is evident when reviewing the referenced figures.
A remedy is now pursued which ensures that all energy rates and forces are finite.
Suppose that there is a reservoir of potential energy εϕ available for constructing a signal from scratch. At some phase coordinate {q0, p0} at time t0−, the infinitesimal instant of time prior to t0, the quantity of energy allocated for encoding is:
$\varepsilon_\phi(t = t_0^-)$  Equation 3-54
The initial velocity and acceleration are zero and the position is arbitrarily assigned at the center of the configuration space. $\sigma_{k\_tot}^2$ is a variance which accounts for the energy to be distributed into all the degrees of freedom forming the signal. The total energy of the particle is:
$\varepsilon_{tot} = \varepsilon_\phi(t) + \varepsilon_k$
$\varepsilon_k(t = t_0^-) = 0$
$\varepsilon_{dis}(t = t_0^-) = 0$  Equation 3-55
εtot remains constant and εdis(t) accounts for system losses. The focus will be on εk_tot(t), the evolving kinetic energy of the particle, and dissipation will be ignored.
Signal evolution begins through dynamic distribution of εtot, which depletes εϕ on a per sample basis when the motion is not conservative. Particle motion is considered to be physically analytic everywhere, possessing at least two well behaved derivatives, $\dot{q}, \ddot{q}$. Such motions may consist of suitably defined impulsive forces smoothed by the particle-apparatus impulse response.
Allocation of the energy proceeds according to a redistribution into multiple dimensions;
All α = 1, . . . D dimensional degrees of freedom for motion possess the same variance when observed over very long time intervals, and thus the overbars are retained to acknowledge a mean variance. In this case $\sigma_{k\_tot}^2$ is finite for the process and is allocated over a duration T for the signal.
The total available energy may be parsed to 2N samples of a message signal with normalized particle mass (m=1).
The time window T/2 is an integral multiple N of the sample time Ts, and ±NTs = ±T/2 may approach ±∞. The equation illustrates how the kinetic energy εk is reassigned to specific instants in time via the delta function representation. The average energy per sample is simply:
And the average power per sample is given as:
The delta function weighting has a corresponding sifting notation:
$\nu_{\alpha,n}(t - nT_s) = \int_{-\infty}^{+\infty} \nu_{\alpha,n}(t)\,\delta(t - nT_s)\,dt = \nu_\alpha(nT_s)$  Equation 3-60
A sampled velocity signal is also represented by a series of convolutions:
Let $\tilde{\nu}_\alpha(t) = \nu_\alpha(t)\,\delta(t - nT_s) * h_t$ be a discretely encoded and interpolated approximation of a desired velocity for a dynamic particle. Obtaining an interpolation function for reconstitution of να(t) from the discrete representation is useful. It is logical to suppose that the interpolation trajectories will spawn from linear time invariant (LTI) operators, given that the process is physically analytic. An error metric can be minimized to optimize the interpolation:
Minimizing the error variance $\sigma_\varepsilon^2$ implies:
$\nu_\alpha(t) - \nu_\alpha(t)\,\delta(t - nT_s) * h_t = 0$  Equation 3-63
ht may be regarded as a filter impulse response where the associated integral of the time domain convolution operator is inherent in the laws of motion.
A schematic is a convenient way to capture the concept at a high level of abstraction.
An effective LTI impulse response $h_{eff} = 1$ provides the solution which minimizes $\sigma_\varepsilon^2$. $h_t$ can be obtained from recognition that:
Convolution is the flip side of the correlation coin under certain circumstances involving functions which possess symmetry. ht*δ(t−nTs) can be viewed as a particular cross correlation operation when ht is symmetric.
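This equivalence is easy to confirm numerically. The sketch below uses a hypothetical symmetric (Hann-shaped) impulse response and checks that convolution and correlation coincide for it.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(256)       # arbitrary input record
h = np.hanning(31)                 # symmetric impulse response (h reversed equals h)

conv = np.convolve(x, h, mode="same")
corr = np.correlate(x, h, mode="same")   # correlation omits the flip of h
print(np.allclose(conv, corr))           # True: identical when h is symmetric
```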
Correlation functions for the velocity and interpolated reconstructions are constrained by the TE relation. The circumstances for decoupling of velocity samples at the deferred instants t−nTs are discussed in Section 10.5 (Appendix E). The cross correlation of a reference velocity function with an ideal reconstruction at zero time shift results in:
Therefore:
where:
As Section 10.5 (Appendix E) also shows, the values of a correlation function are zero at offsets:
Equations 3-66 through 3-69 are helpful to identify the cardinal series because the correlation function parameters as given are not unique. However, equations 3-66 through 3-69 along with knowledge that the signal is based on a bandwidth limited AWGN process fit the cardinal series profile.
The effective Fourier transform for a sequence of decoupled unit sampled impulse responses may be represented as follows:
The Fourier transform above is thus a series representation for the transform of the constant, unity. The response for Ht(ƒ) is symmetric for positive and negative frequencies. There are 2N such spectrums Ht(ƒ−nfs) due to the recursive phase shifts induced by a multiplicity of delayed samples. The time dependency of the frequency kernel has been supplanted by the preferred TE metric.
Consider the operation:
$\nu_\alpha(t) * h_{eff} = \tilde{\nu}_\alpha(t)$
Then the frequency domain representation is:
$V(f)\,H_{eff}(f) = \tilde{V}(f)$  Equation 3-71
The series expansion for Heff is now tailored to the target signal ν(t). The spectrum of interest is simply:
In this representation V(ƒ) need not be constant over frequency contrary to Shannon's assumption.
It is evident from investigation of the magnitude response of Ht(f−nƒs)*V(ƒ) that Ht(ƒ) must not alter the magnitude response of the velocity spectrum V(ƒ) over the relevant spectral domain, or else encoded information is lost and energy is not conserved. Ht(ƒ) should possess this quality over the spectral range of V(ƒ), but not necessarily beyond it.
The magnitude of the complex exponential function is one. Also, the phase response is linear and repetitive over harmonic spectrums according to the frequency of the complex exponential. This is apparent when examining the spectral components of the original sampled signal.
From examination of LTI systems and the associated impulse response characteristics, V(ƒ−nƒs) possesses even magnitude symmetry and odd phase symmetry, and this fundamental spectrum repeats every ƒs Hz. Thus V0(ƒ) implements the reconstruction strategy, because a single spectral instantiation contains the encoded information (i.e., V0(ƒ)=V1(ƒ)=V2(ƒ)= . . . Vn(ƒ)). Reconstruction of an arbitrary combination of Vn(ƒ) spectrums beyond V0(ƒ) utilizes deployment of increased energy per unit time, violating the Pm constraint of the TE relation. In other words, preservation of an unbounded number of identical spectrums also represents an unsupported and inefficient expansion of phase space (requiring ever increasing power).
From the TE relation, the unambiguous spectral content is limited by $\dot{\varepsilon}_k$ such that:
Thus, the optimal filter impulse response can be obtained from:
where the frequency domain of Ht(ƒ) corresponds to the frequency domain of V0(ƒ) (the 0th image in the infinite series), resulting in:
LL and UL are limits imposed by the allocation of available energy per unit time, i.e. the TE relation. Therefore:
ht is recognized as the unity weighted cardinal series kernel at n=0. This is the LTI operator which is recursively applied at the rate ƒs to obtain an optimal reconstruction of the velocity function να(t) from the discrete samples να(nTs). That is:
The cardinal series is thus obtained:
In D dimensions the velocity is given by:
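Although the display equations are not reproduced in this extraction, the cardinal series named here is the familiar sinc-kernel interpolation. A minimal sketch, assuming unit-interval Gaussian velocity samples, reconstructs ν(t) as Σ ν(nTs)·sinc((t−nTs)/Ts) and confirms exact interpolation at the sample instants.

```python
import numpy as np

rng = np.random.default_rng(0)
Ts = 1.0                              # sample interval
n = np.arange(-64, 65)                # sample indices
v_n = rng.standard_normal(n.size)     # Gaussian velocity samples v(n*Ts)

def cardinal(t):
    """Interpolate v(t) as sum_n v(nTs) * sinc((t - nTs)/Ts)."""
    return np.sum(v_n * np.sinc((t - n * Ts) / Ts))

# Exact at the sample instants, bandlimited (physically analytic) in between
print(all(np.isclose(cardinal(k * Ts), v_n[n == k][0]) for k in (-3, 0, 5)))
print(cardinal(0.5 * Ts))             # an interpolated value between samples
```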
The derivation above is different from Shannon's approach in the following significant way. In contrast with Shannon's approach, general excitations of the system are contemplated herein with arbitrary response spectrums automatically accommodated even when the maximum uncertainty requirement for {right arrow over (q)},{right arrow over (p)} is waived. Therefore, the result here is that the cardinal series is substantiated for all physically analytic motions, not just those which exhibit maximum uncertainty statistics.
By examining multiple derivatives, it can be shown that a cardinal pulse is physically analytic and therefore is a candidate pulse response up to and including phase space boundary conditions.
3.1.8.1. Cardinal Autocorrelation
The autocorrelation of a stationary να(t) process can be obtained from the Wiener-Khinchine theorem as the averaged time correlation for velocity:
When να has maximum uncertainty ($\bar{\nu}_\alpha = 0$) associated with the time domain response at regular intervals NTs, the frequency domain representation of the process is also of maximum entropy form. The greatest possible uncertainty in its spectral expression will be due to uniform distribution. This can be verified through the calculus of variations. The result provides a basis for the discussions of Section 3.1.8 and the autocorrelation in general.
Taking the inverse transform of $|V(f)|^2$ reveals the autocorrelation for the finite power process which has maximum uncertainty in the frequency domain:
$V^2$ is in watts per Hz. Likewise, $\nu^2$ is in watts.
Integration of any member of the cardinal series squared over the time interval ±∞ will result in $\nu_\alpha^2(NT_s)$, a finite energy per sample.
Unique information is obtained by independent observation of random velocity samples at intervals separated by these correlation nulls located at modulo ±NTs time offsets. The cardinal series distributes sampled momentum interference for the duration of a trajectory throughout phase space. Hence, each member of the cardinal series will constructively or destructively interfere with all other members except at intervals deduced from the correlation nulls. Eventually, at ±∞ time offset from a reference sample time, memory of sampled motion dissipates leaving no mutual information between such extremely separated observation points. This is due to the decaying momentum for each member of the cardinal series. Each member function of the cardinal series is instantiated through the allocation of some finite sample energy.
3.1.8.2. Maximum Nonlinear Velocity Pulse Versus Maximum Cardinal Pulse
Two pulses can be considered for boundary conditions. The maximum velocity pulse is not physically analytic but does define an extreme for the calculation of energy requirements per unit time to traverse the phase space. A cardinal pulse can also be used for the extreme if the boundary must be physically analytic as well, though Pm has a different limiting value for the cardinal pulse option. This section discusses the tradeoff between the two pulse types in terms of trajectory, Pm, B, etc.
Comparison of both velocity types is provided in the referenced figure.
This analysis suggests that linear operating ranges can be established within the domain of the nonlinear maximum velocity pulse 2502 or classical cardinal pulse 2504 provided appropriate design margins are regarded.
The maximum velocity pulse in the above figure could be exceeded by the generalized cardinal pulse near the time t=0.5±˜0.07. A design “back off” can be implemented to eliminate this boundary conflict.
Consider sustaining identical span of the phase space for both maximum pulse types, given fixed Δt=2Ts. Solving the position integrals for both pulse types and equating the span covered per characteristic interval results in the following equation (refer to Section 10.6 for additional detail):
νm_card is the cardinal pulse amplitude to maintain a specific configuration space span. The relative velocity increase and peak kinetic energy increase, compared to the nonlinear maximum velocity pulse case, are:
This represents an increase in peak kinetic energy of roughly 1.07 dB. The relative increase for the maximum instantaneous power requirement is larger.
Hence, there is a relative parameter to enhance the peak power source specification by 3.34 dB to maintain a physically analytic boundary condition utilizing the maximum cardinal velocity pulse profile. Another way to consider the result is that one may design an apparatus choosing Pm using the nonlinear maximum velocity pulse equations and then expect perfectly linear trajectories up to ~0.68 νm, where νm is the maximum velocity of the nonlinear maximum velocity pulse. Beyond that point, velocity excursions of the cardinal pulse begin to encounter nonlinearities due to the apparatus power limitations. Alternatively, one may use the appropriate scaling value for kp in the TE relation to guarantee linearity over the entire dynamic range.
In an alternate case, the value Pm = 1 is fixed for both pulse types. In this case there are two separate time intervals permitted to span the same physical space. Let the time interval Tref = 1 apply to the sampling interval for the nonlinear maximum velocity pulse and Ts apply to the sampling interval for the cardinal maximum velocity pulse. Ts may be calculated from (refer to Section 10.6 for additional detail):
Ts≡1.179Tref Equation 3-86
The bandwidth is then approximately 0.848 of the nonlinear maximum velocity case with Tref = 1. Another way to consider the result is that for a given Pm in both cases, a physically analytic bandwidth of 0.848(Tref)−1 is always guaranteed. As a dynamic particle challenges the boundary through greater peak power excursions, violations of the boundary occur and some information will begin to be lost, in concert with undesirable spectral regrowth. In the scenario where Pm = Pmax_card, instantaneous peak power and configuration span are conserved for both pulse types and kp = 1.179 for the TE relation.
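The quoted ratios can be cross-checked with elementary arithmetic. In the sketch below, the 1.13 velocity factor is inferred from the ~1.07 dB peak kinetic energy increase (and from the 1.13 factor cited in Section 3.1.9); treating it as the relative velocity increase is an assumption of this sketch.

```python
import math

# Inferred 1.13 velocity factor -> peak kinetic energy increase in dB
print(10 * math.log10(1.13 ** 2))          # ~1.06 dB, matching the quoted ~1.07 dB

# 3.34 dB peak power enhancement implies a linear range of ~0.68 vm
print(1 / math.sqrt(10 ** (3.34 / 10)))    # ~0.68

# Fixed-Pm case: Ts = 1.179 Tref implies a bandwidth scaling of ~0.848
print(1 / 1.179)                           # ~0.848

# Section 3.1.9 cross-check: (1.13)(1.1074) ~ 1.25
print(1.13 * 1.1074)
```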
The derivative of the maximum cardinal pulse is illustrated in plot 2800 of the referenced figure. The tails of the sinc and its derivative extend in both directions of time to infinity. The sinc pulse will be a focus. Therefore, all extended physically analytic trajectories can be considered as a superposition of suitably weighted sinc-like pulses.
Neither the nonlinear maximum velocity pulse nor the maximum cardinal pulse are required at the phase space boundary. They represent two logical extremes with constraints such as energy expenditure per unit time for the most expedient trajectory to span a space or this property in concert with physically analytic motion. There can be many logical constructions between these extremes which append other practical design considerations.
3.1.9. Statistical Description of the Process
This section establishes a framework for describing the characteristics of the model in terms of a stochastic process. The more detailed discussion leverages certain conditional stationary properties of the model.
There are physical attributes attached to the random variables of interest with a corresponding timeline due to laws of motion. Each configuration coordinate has assigned to it a corresponding probability density for momentum of a particle, $\rho(\vec{p}\,|\,q)$, which is D dimensional Gaussian.
The following discussions assume that the continuous process can be approximated by a sampled process.
Even though the random variables associated with the process are Gaussian, the variance of momentum is dependent on the coordinate in space which in turn is a function of time. This is true whenever the samples of analysis are organized with an ordered time sequence, which is a desirable feature. On the other hand, statistical characterization may not require such organization. However, any statistical formulation which does not preserve time sequences resists spectral analysis.
It is possible to obtain the inverse Fourier transform for the general velocity pulse spectrum if the underlying process is stationary in the strict or wide sense. Such an analysis can prove valuable since working in both the time and frequency domain affords the greatest flexibility for understanding and specifying communications processes. However, sometimes the underlying process can evade fundamental assumptions which facilitate a routine Fourier analysis of the autocorrelation. Such is the case here.
A description is now provided of the stochastic process with an ensemble of functions possessing random values at regular time intervals separated by Ts.
As used herein, a random process refers to an uncountably infinite, time ordered continuum of statistically independent random variables. The following tweak of this definition will be adopted to accommodate physically analytic processes which can adapt to classical or quantum scenarios:
As used herein, a random physical process refers to a time ordered set of statistically independent random variables which are maximally dense over their spatial domains.
In the following text, a time sampled or momentum ensemble view is discussed, as well as a reorganization of the time samples into configuration bins (configuration ensemble). The configuration bins are defined to collect samples which are maximum uncertainty Gaussian distributed for momentum, at respective positions q. Evolving time samples populate these configuration bins at random time intervals, modulo Ts.
A statistical treatment of the motions for particles within the phase space can be given when the ensemble members which are functions of time are sampled from the process. This is the procedure referred to here as a momentum ensemble. Consider the set of k sample functions extracted from the random process (q,p) organized as the following momentum ensemble:
$(q,p) = \{[q(t),p(t)]_1,\,[q(t),p(t)]_2,\,[q(t),p(t)]_3,\;\ldots\;[q(t),p(t)]_k\}$  Equation 3-87
If each sample function is evaluated (discretely sampled) at a certain time, tl, then the collection of instantaneous values from the k sample functions also become random variables. This means that a large number of hypothetical experiments or observations could be performed independently and in parallel, given multiple indistinguishable instantiations of the phase space.
A characterization of an ergodic process provides considerable utility, but it demands a process description which is stationary in the strict sense. The conditional stationary properties assumed by earlier discussions are explored here.
For an ergodic ensemble, the average of a function of the random variables over the ensemble is equal with probability unity to the average over all possible time translations of a particular member function of the ensemble, except for a subset of representations of measure zero.
It is clear from this definition that the process cannot be assumed ergodic from inspection.
The apparatus of each unique phase space (such as those depicted in the referenced figures) is excited by a unique iid Gaussian source.
Each of the unique iid Gaussian sources possesses space-time dependent variances. Each Gaussian RV may not be considered stationary in the usual sense at a specific configuration coordinate q because a particle in motion does not remain at one location. The momentum or velocity samples, at a specific time tl, come from differing configuration locations q1,2 . . . k;l in the separate experiments. The conditional momentum statistic, ρ(p|q), is determined by the frequency of observed sample values over many subsequent random and independent particle trajectory visits to a specific configuration coordinate. It is not obvious that statistics of the ensemble collective predict the time averaged moments of ensemble members when considered in this manner, or vice-versa. A reorganization of the data will, however, confirm that this is the case with certain caveats.
The relevance of organizing the RVs in a particular manner can be illustrated by revisiting the peak momentum profile and considering 3 unique configuration coordinates q1, q2, q3 located on the trajectory of a particle moving along the αth axis in a hyper space. This concept is illustrated in plots 3300 and 3500 of both the maximum nonlinear and the maximum cardinal velocity pulses in the referenced figures.
The extended tail response for the cardinal pulse is also illustrated in plot 3350 and reverberates on the αth axis ad infinitum. In contrast, the maximum velocity pulse profile is extinguished at the phase space boundary at relative times ±Ts corresponding to ±Rs.
Each position q1, q2, q3 has an associated peak momentum on the Gaussian pdf tail, illustrated by the associated pdf profiles of the referenced figure.
Thus, samples at different times which intersect these position coordinates can be collected and organized to characterize the random variables. The collection of samples at a specific configuration coordinate rarely encounters a circumstance where the specific configuration coordinate occupies back to back time samples, because this would imply a nearly stationary particle. Rather, the instants at which the coordinates ql are repeated are separated by random quantities of time samples. Nevertheless, the new collections of samples at each coordinate bin can still be ordered chronologically. These new ensembles possess discontinuous time records even though the time records are sequential and each sample is still independent. Such a collection is suitable for obtaining the frequency of occurrence for specific momenta given a particular configuration coordinate, i.e., a statistical counting with dependency. Each pdf at each coordinate possesses a stationary behavior. In contrast, a continuous time record comprises values each from the collection of such differing Gaussian variables at Ts intervals. Each new RV in the time sampled momentum ensemble view is acquired through a time evolution governed by laws of motion. However, time sampled trajectories from the momentum ensemble do not represent a stationary set of samples because each sample comes from a pdf with a different second moment.
A new configuration bin arrangement for the random process can be written with the following representation (the k-th ensemble member is followed by the set of all k members):
(q,p)k={(q1,[p(tl
Each of the k members of a time continuous momentum ensemble is partitioned into sub-ensembles with i configuration centric members. Each sub-ensemble is time ordered but also time discontinuous. The momenta are statistically characterized by pdfs like the examples in the referenced figures.
(qi,p(tl
There are (i) such sets. While suitable for statistical characterization, such an arrangement is not suitable for time domain analysis of a random process because time continuity is disrupted in this view. Thus, spectral analysis via the W-K theorem is out of the question for these records. The organization illustrated in the referenced figure facilitates this statistical characterization.
The configuration ensemble representation, (q,p) is a very different sample and ensemble organization than the momentum ensemble prescription for the random process given by (q,p). In the momentum ensemble arrangement, each sample function traces the unique trajectory of a particle sequentially through time and therefore provides an intuitive basis for understanding how one might extract encoded information. It is a continuum of coordinates tracing the particle history in space time. Autocorrelations and spectrums can be calculated via the W-K theorem for the momentum ensemble view only if the process is stationary in that view.
A reorganization of time samples into a configuration ensemble for purpose of statistical analysis does not alter the character of the configuration centric RVs. Their moments are constant for each qi. The justification for this stationary behavior in the configuration ensemble view is due to the boundary conditions, specifically:
An overall expected momentum variance can be calculated based on the variances at each configuration coordinate. Probabilities for conditional momenta, given position, will blend in some weighted fashion on the average over many trajectories and time. One can calculate $\sigma_q^2$ accordingly.
3.1.9.1 Momentum Averages
At an arbitrary position, the velocity variance is based on the location of the particle with respect to the phase space boundary. The span of momentum values is determined by the PAERc and Pm parameters at each position, and the span of the configuration domain radius is ±Rs. PAERc is the peak to average energy ratio of the configuration ensemble. PAERp is typically specified for a design or analysis, not PAERc.
If each momentum sample function is of sufficiently long duration, comprising many independent time samples, then particle motions will eventually probe a representative number of states within the space and an appropriate momentum variance can be calculated from a densely populated configuration ensemble with diminishing bias on the alpha axis by averaging all configuration dependent variances. Such a calculation is given by:
The time average on the left is then equated with the statistical quantity on the right. This is a correct calculation even if the velocity variance is not stationary. There is an inconvenience with this calculation, however: one may only possess the velocity $\nu_q = \nu_{max}|_q$ explicitly for trajectories of phase space at boundary conditions. Fortunately, there is an alternative.
A time sampled trajectory from the momentum ensemble is composed of independent Gaussian random variables from the configuration ensemble. Hence, one can calculate an average momentum variance over i members of the configuration ensemble where i is a sufficiently large number and λi is a relative weighting factor for each configuration ensemble member variance
The variance on the left comes from a Gaussian RV because the variances on the right come from independent Gaussian RVs. Therefore, one can specify a desired variance of interest from the peak to average ratio of energy or power directly in the momentum ensemble, along with Pm, as design or analysis criteria. One need not explicitly calculate λi or even specify PAERc from the configuration ensemble, because Equation 3-90 must be true from the properties of Gaussian RVs. Therefore:
Equation 3-91 is the velocity variance per sample for the ζth sample function of the momentum ensemble. Hence, the variables from the configuration ensemble, which are dictated by maximum uncertainty requirements, constrain samples from continuous time domain trajectories of the momentum ensemble to also be Gaussian distributed. The converse is also true. By simply specifying that the time domain sample functions are composed of Gaussian random variables, one has ensured that the uncertainty for any position must be maximum for a given variance.
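Before the formal derivation that follows, the equivalence asserted by Equations 3-90 and 3-91 can be probed with a quick Monte Carlo sketch. The position-dependent variance profile used here is hypothetical, standing in for the boundary-sculpted variances described above; the check is that the momentum-ensemble (time-average) variance equals the λi-weighted average of the configuration-bin variances.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical position-dependent momentum variance (largest at the origin),
# standing in for the boundary-sculpted variances described in the text.
def sigma2(q):
    return 1.0 - 0.8 * np.minimum(q ** 2, 1.0)

q = rng.uniform(-1.0, 1.0, 200_000)                     # coordinates visited
p = rng.standard_normal(q.size) * np.sqrt(sigma2(q))    # conditional Gaussian momenta

# Reorganize time samples into configuration bins and compare the averages
edges = np.linspace(-1.0, 1.0, 41)
idx = np.digitize(q, edges)
bin_var = np.array([p[idx == k].var() for k in range(1, edges.size)])
weight = np.array([(idx == k).mean() for k in range(1, edges.size)])  # lambda_i

# Momentum-ensemble variance equals the weighted configuration-bin average
print(p.var(), np.sum(weight * bin_var))
```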
Equations 3-90 and 3-91 are verified more deliberately in a derivation where each sample function of the momentum ensemble is treated as a unique message sequence and the time ordered message sequence is reordered to configuration bins. In this analysis, each member of the message sequence is a time sample.
A message is defined by a sequence of 2N independent time samples similar to the formulation of chapter 2. The message sequence is then given by:
$m_\zeta(t - lT_s) = \{(q,p)_1,\,(q,p)_2,\;\ldots\;(q,p)_l,\;\ldots\;(q,p)_{2N}\}_\zeta$  Equation 3-92
The message is jointly Gaussian since it is a collection of independent Gaussian RVs. Position and momentum are related through an integral of motion, and therefore q also possesses a Gaussian pdf which can be derived from p.
The statistical average is reviewed and compared to message time averages from the perspective of the process first and second moments. The long term time average is nearly equivalent to the average of the accumulated independent samples, given a suitably large number of samples 2N.
The mean square of the message is likewise approximated by:
A long term time average is approximated by the sum of independent samples. It is reasonable to assume that the variance of each sample contributes to the mean squared result weighted by some number λi where i is a configuration coordinate index. The left hand side of Equation 3-94 is a time average of sample energies over 2N samples and the right hand side is the weighted sum of the variances of the same samples organized into configuration bins.
Each time sample may be mapped to a specific configuration coordinate and momentum coordinate at the lth instant. Each position qi is accompanied by a stationary momentum statistic, ρ(p|qi). The averaged first and second moments for each qi are therefore stationary. This ensures that any linear functional of a set of RVs with these statistics must also be stationary when averaged over long intervals. Thus, long term time averages inherit a global stationary property, as will be shown. The right hand sides of the prior equations are a sum of Gaussian RVs and Gamma RVs, respectively. Therefore, the mean and variance of the sum is the sum of the independent means and variances if the samples are statistically independent. The cumulative result remains Gaussian and Gamma distributed, respectively. This permits relating the time averages and statistical averages of the messages in the following manner:
The right hand sides of these equations are a reordering of the left hand side time samples in a manner which does not alter the overall averages. λi are ultimately determined by the characteristic process pdf and boundary conditions and are related to the relative frequency of time samples near a particular coordinate qi. Whenever the averages are conducted over suitably large i, l the sampled averages are good estimates of a continuum average. Since the right hand side is stationary, then the left hand side is stationary also.
The prior analysis shows that the process appears stationary in the wide sense, or that:
$\overline{\{\vec{p}_\alpha^{\;z}\}}_\zeta = \int_{-\infty}^{\infty}\left\{[\vec{p}_\alpha(q_\alpha)]^z\,\rho(\vec{\nu}_\alpha\,|\,q_\alpha)\right\}_\zeta d\vec{\nu}_\alpha;\quad z = 1, 2$  Equation 3-97
The maximum weighting is at the configuration origin where it is possible to achieve νmax at the apex of the νp profile. The conditional pdf provides a weighting function for this statistic averaged over all possible positions qα. Over an arbitrarily long interval of random motion, all coordinates will be statistically visited. The specific order for probing the coordinates versus time is unimportant because the statistic at each particular configuration coordinate is known to be stationary. The time axis for the momentum ensemble member thus cannot affect the ensemble average or variance per sample.
In summary:
$\tilde{\varepsilon}_k$ may also be calculated for a maximum cardinal pulse boundary condition.
The average energy for the maximum cardinal velocity pulse main lobe is calculated from (ignoring the tails):
The average energy and momentum of all trajectories subordinate to the maximum cardinal pulse bound is therefore
The ratio of the average energy for the trajectories subordinate to the two profiles is approximately 1.1074 when $\nu_{m\_card}^2 = \nu_m^2$. If the two cases are compared with an equivalent Rs design parameter, then the ratio of comparative energies increases to (1.13)(1.1074) ≈ 1.25. This was obtained from Equation 3-103 and Section 3.1.8.2, as well as Sections 10.6 and 10.7.
3.1.10. Configuration Position Coordinate Time Averages
Since the configuration coordinates are related to the momentum by an integral, the position statistic is also zero mean Gaussian with a variance related to the average of the mean square velocity profile.
Because the statistics of a position qi are stationary, the linear function of a particular qi also possesses a stable statistic.
In the prior sections, the Gaussian nature of momentum was presented from the maximum uncertainty requirement of momentum at each phase space coordinate. The position over an interval of time ta−tb is given by:
The momentum pζ(t) can be scaled by a continuous function of time aζ(t), resulting in an effective momentum, $\check{p}_\zeta(t)$. Sample functions of this form produce output RVs which are Gaussian when the kernel pζ(t) is Gaussian. Furthermore, if this is true for each ζ, it can also be shown that
and the output process is also Gaussian when A(t, τ) is a continuous function of both time and τ, an offset time variable. In such cases, the position covariance Kq due to this class of linear transformations can be obtained from:
An alternate form in terms of an effective filter impulse response and input covariance Kp, is given by:
When the covariance in each sample function is unaffected by time axis offset, then h(t)=u(t−ta) is the impulse response from the integral of motion, which leads to:
$\check{K}_p$ includes any time invariant scaling effects due to A(t). $(\sigma_q^2)_s$ is a position variance per sample and Ts is a sample interval. Equation 3-108 is given in meters squared per sample. Alternately, the frequency domain calculation for the covariance is given by:
Sp(ω) is the double sided power spectral density of the momentum and Hp(jω) is the frequency response of the effective filter. For maximum uncertainty conditions, Sp(ω) is a constant power spectral density.
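A short numerical sketch, assuming white (maximum uncertainty) momentum samples and a hypothetical effective impulse response, confirms that the time domain output variance agrees with a frequency domain calculation of the Equation 3-109 type.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_p2 = 1.0                 # white momentum variance per sample (max uncertainty)
h = np.ones(8) / 8.0           # hypothetical effective impulse response h(t)

# Time domain: variance of the filtered (position-like) output
p = rng.standard_normal(1_000_000) * np.sqrt(sigma_p2)
q = np.convolve(p, h, mode="valid")
print(q.var())                 # ~ sigma_p2 * sum(h^2) = 0.125

# Frequency domain counterpart: (1/2pi) * integral of Sp(w)|Hp(jw)|^2 dw,
# evaluated here as the mean of |H|^2 over the discrete band (Sp constant)
nfft = 4096
var_freq = sigma_p2 * np.mean(np.abs(np.fft.fft(h, nfft)) ** 2)
print(var_freq)                # 0.125, matching the time domain result
```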
Finally, the variance of q is also given in terms of the qi variables from the prior section (for large i):
Therefore, if we specify σp2, PAERp, and m, we can calculate σq2. A simulation creating the signals of the referenced figures confirms this calculation.
3.1.10.1 Joint Probability for Momentum and Position
ρ(p|q) is recalled as a point of reference. The multidimensional pdf may be given as (m=1):
$\sigma_\alpha^2$, the velocity variance and diagonal of Λ, is averaged over all probable configurations. Each configuration coordinate possesses a characteristic momentum variance which contributes to that average.
A phase space density of states in terms of configuration position must therefore be scaled according to:
The density along the αth dimension of phase space is obtained from:
$\rho(\nu_\alpha, q_\alpha) = \rho(\nu_\alpha\,|\,q_\alpha)\,\rho(q_\alpha)$  Equation 3-113
Whenever the orthogonal dimensions are also statistically independent, each dimension will have the form illustrated in the referenced figure.
A joint phase space density representation for the continuous RVs can be specified from the following synopsis of equations whenever momentum and position can be decoupled (case m=1).
This joint statistic is also zero mean Gaussian.
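For the decoupled case, the joint statistic can be sampled directly. The sketch below assumes hypothetical values for the position and velocity standard deviations and verifies the zero mean, diagonal-covariance structure implied by Equation 3-113.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_q = 0.5    # assumed position standard deviation
sigma_v = 1.0    # assumed (average) velocity standard deviation

# rho(v, q) = rho(v|q) * rho(q): sample position, then conditional velocity (m = 1)
q = rng.normal(0.0, sigma_q, 100_000)
v = rng.normal(0.0, sigma_v, q.size)   # decoupled case: variance independent of q

samples = np.stack([q, v], axis=1)
print(samples.mean(axis=0))            # ~[0, 0]: the joint statistic is zero mean
print(np.cov(samples.T))               # ~diag(sigma_q^2, sigma_v^2) when decoupled
```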
3.1.11. Statistical Behavior of the Particle Based Communications Process Model
Localized motions in time are correlated over the intervals less than Δt due to the momentum and associated inertia. Eventually, the memory of prior motions is erased by cumulative independent forces as the particle is randomly directed to new coordinates. This erasure requires energy. The evolving coordinates possess both Gaussian momentum and configuration statistics by design and the variance at each configuration coordinate is sculpted to accommodate boundary conditions. The boundary conditions require particle accelerations which may be deduced from the random momenta and finite phase space dimension. If a large number of independent samples are analyzed at a specific configuration coordinate, the momentum variance calculated for that coordinate is stationary for any member of the ensemble. Each configuration coordinate can be analyzed in this manner with its sample values reorganized as a configuration centric ensemble member.
The set of momentum variances from a plurality of configuration coordinates can be averaged. That result is stationary. Yet, the process is not stationary in the strict sense because the momentum statistics are a function of position and therefore fluctuate in time as the history of a single particle evolves sequentially through unique configuration states. The process is technically not stationary in the wide sense because the autocorrelations fluctuate as a function of time origin. The moments of the process are however predictable at each configuration coordinate though the sequence of such coordinates is uncertain.
This process shall be distinguished as an “entropy stable” stationary (ESS) process. The features of an ESS process are:
(a) Autocorrelations possess the same characteristic form at all time offsets but differ in some predictable manner, for instance, variance versus position or parametrically versus time. The uncertainty of these variances can be removed given knowledge of relative configuration offsets compared to an average.
(b) Shannon's entropy over the ensembles is unchanging even though the momentum random variable is not stationary. The momentum does possess a known long term average variance.
(c) The long term time averages are characterized by the corresponding statistical average for a specific RV. The RV statistics (such as momentum) can change as a function of time but will be constant at a particular configuration coordinate.
(d) Time averages and statistical averages for the ensemble members can be globally related by reorganizing samples from the process to favor either the momentum or configuration ensemble views respectively. The statistics are unaltered by such comparative organizations.
(e) The variance of position may not necessarily be obtained through the momentum autocorrelation and system impulse response without further qualification. That is, the configuration variance may not always be calculated by direct application of the W-K theorem and system impulse response.
Items (a) and (b) are of interest because they illustrate that statistical characterizations which are not classically stationary still may possess an information theoretic stability of sorts.
Stability of the uncertainty metric should be the preoccupation and driving principle rather than the legacy quest to establish an ergodic assumption. Information can be lost or annihilated.
Generally, the entropy stable stationary communications process is a collection of individually stationary random variables with differing moments determined by physical boundary conditions, and a time sequence for accessing the RVs which is randomly manifest whenever the process is sequentially sampled at sufficient intervals.
3.2. Comments Concerning Receiver and Channel
For the purposes herein, both the channel and receiver are considered to be linear. Therefore, the signal at the receiver is a replica, or alias, of the transmit signal scaled by some attenuation factor, contaminated by additive white Gaussian noise (AWGN) and perhaps some interference with an arbitrary statistic. The channel conveys motion from the transmitter to the receiver via some momentum exchange whether field or material based.
The extended channel comprises a transmitter, physical transport media, and receiver. The physical transport medium can be modeled as an attenuator without adding other impairments except for AWGN. Although the AWGN contribution can be distributed amongst the transmitter, transport medium and receiver, it is both convenient and sufficient to include its effect in the receiver since the concern is with the capacity of a linear system.
It is useful to connect this idea to the concepts of phase space. One approach is a global phase space model since it is an extension of the current theme and preserves a familiar analysis context.
Channel attenuation is a property of the space between the transmitter and receiver. Attenuation is different for mechanical models, electromagnetic models, etc. There is a preferred consideration for the case of free space and an electromagnetic model where the power radiated in fields follows an inverse square law. Likewise, the momentum transferred with the radiated field is well understood, and this momentum reflects corresponding accelerated motions of the charged particles within the transmitter and receiver phase spaces. This will be revisited in section 5.5.
If one assumes that transmission times are relatively long compared to observation intervals, then average momentum densities at each point in the global phase space will be relatively stationary if the transmit and receive platforms are fixed in terms of relative position. The momentum density is 3-dimensional Gaussian with a spatial profile sculpted in proportion to R^−2, where R is the radius from the transmitter, excluding the near field zone. This follows the same theme as the analysis for the velocity profiles with the exception of the boundary condition. At large distances, the PAPR for the momentum profile is the same as for local fields but the variance converges as R^−2. The pdf for the field momentum in the channel transport medium will be of the following form, a zero-mean Gaussian whose variance scales with distance:

ρ(p) = (2π σ_p² R^−2)^(−1/2) exp(−p²/(2 σ_p² R^−2))
There are two interfaces to consider: transmitter-channel and channel-receiver. Maximum power transfer is assumed at both interfaces. Hence, the effect of loading is that half of the source power is transferred at each interface. Otherwise, the relative statistics for motions of particles and fields through phase space are unaffected except by scale.
Similar analogies can be leveraged for acoustic channels and optical channels. In those cases, momentum may be transferred by material or virtual particles, but the same concepts apply.
The receiver model mimics the transmitter model in many respects. The geometry of phase space for the receiver can be hyper-geometric and spherical as well. The significant differences are:
(a) Relative location of information source and phase space;
(b) The direction of information flow is from the channel which is reversed from the Tx scenario;
(c) The sampling theorem applies in the sense of measuring rather than generating signals; and
(d) There can be significant competitive interfering signals and contamination of motion beyond thermal agitation.
With respect to item (d), the relative power of the desired signal compared to potential interference power which may contaminate the channel can be many orders of magnitude in deficit. The demodulator which decodes the desired signal discriminates encoded information while removing the effects of the often much larger noise and interference, to the greatest extent possible.
Capacity is greatly influenced by the separation R of the information source and the information sink (see Equation 3-117). In an embodiment, the receiver must extract patterns of motion which can survive transfer through large contaminated regions of space (the transport medium) and still recognize those patterns. The sensitivity of this process is remarkable in some cases because the desired signal momenta and associated powers interacting with the particles of the receiver can be on the order of picowatts. This requires very sensitive and linear receiver technology.
The same concepts for communications efficiency apply throughout the extended channel. Similarly, capacity, while independently affected by receiver performance, transmitter performance and extended channel conditions, finds common expression in certain distributed aspects of the capacity equation such as signal power, noise power, observation time, sampling time, etc. A high level analysis of capacity versus efficiency dependent on these common variables is applied to the current particle based model where information is transferred through momentum exchange.
This section discusses the following:
(a) Refining a suitable uncertainty metric for a communications process of the model described in Section 3.
(b) Deriving the physical channel capacity.
An uncertainty associated with coordinates of phase space can be obtained from a density of the phase space states which calculates the probability of particle occupation for position and momentum. Once the uncertainty metric is known, the capacity can be obtained from this metric, the TE relation, and some basic knowledge of the extended channel.
4.1. Uncertainty
Uncertainty is a function of the momentum and configuration coordinates. Thus, formulations from statistical mechanics can be adopted at least in part. However, one of the most powerful assumptions of statistical mechanics is forfeited. A basic postulate of statistical mechanics asserts that all microstates (pairings of {q,p}) of equal energy for a closed system be equally probable. This postulate provides much utility because particles possess equal energy distribution everywhere within a container or restricted phase space under equilibrium conditions. The communications process of Section 3 shows that the average kinetic energy for a particle in motion is a specific function of q due to boundary conditions. Therefore, communications processes require more detailed consideration of the statistics for the particle motion to calculate the uncertainty because they are not in equilibrium.
The uncertainty for a single particle moving in D dimensional continuum is given by:
H_Ω = −∫∫…∫ ρ(q,p)_Ω ln ρ(q,p)_Ω d^D q d^D p Equation 4-1
The joint density ρ(q,p)_Ω was obtained in Section 3. Some attention is afforded to Jaynes' scrutiny of Shannon's differential entropy (Equations 2-11 and 4-1), which was earlier stated by Boltzmann in his discussion of statistical mechanics. The discrete form of Shannon's entropy given in Equation 2-10 cannot be readily transformed to the continuous form in Equation 4-1, which can introduce some ambiguity in the absolute counting of states.
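The counting ambiguity can be made concrete with a minimal numerical sketch (Python, assuming a unit-variance Gaussian): the binned, discrete entropy of the Equation 2-10 form exceeds the differential entropy of Equation 4-1 by approximately −ln Δ, and so diverges as the bin width Δ shrinks.

import numpy as np

# Sketch (values assumed): the discrete entropy of a binned Gaussian grows as
# -ln(delta) beyond the differential entropy, illustrating the state-counting
# ambiguity noted above.
sigma = 1.0
h_diff = np.log(np.sqrt(2 * np.pi * np.e) * sigma)  # Gaussian differential entropy, nats

for delta in (1.0, 0.1, 0.01):
    edges = np.arange(-10 * sigma, 10 * sigma + delta, delta)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdf = np.exp(-centers**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    mass = pdf * delta                      # probability mass per bin
    mass = mass[mass > 0]
    H_discrete = -np.sum(mass * np.log(mass))
    print(delta, H_discrete, h_diff - np.log(delta))  # last two nearly equal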
It is the difference in entropy measures which is at the heart of capacity. This is because capacity is a property of the communication system's ability to both convey and differentiate variations in states rather than evaluate absolute states.
If the mechanisms which encode and decode information possess baseline uncertainties prior to information transfer, then such pre-existing ambiguity cannot contribute to the capacity. Thus, a change in state referred to a baseline state is used as a metric to calculate capacity. This is a kind of information relativity principle in that relative differences of some physical quantity may convey information.
In this section, a lower limit resolution is promoted for the momentum and configuration, based on quantum uncertainty. A discrete resolution is introduced to limit the number of states per trajectory which may be unambiguously observed.
Continuous entropies originate from observables connected to the phase space proper. In this connection the Gaussian distribution explicitly includes the variance of the observable as well as the character of its time evolution. If the discrete random variable is derived by sampling a continuous process then it can logically inherit attributes of the continuous physical process, if it is properly sampled. Conversely, if it is merely a probability measure of events without connection to physics, it may provide an incomplete characterization.
The approach moving forward, adopts the statistical mechanics formulation. The applicable probability density is normalized to a measure of unity while accommodating the quantum uncertainty by setting the granularity of phase space cells for each observable coordinate.
h^D provides a scale according to a phase cell possessing a span on the order of h, Planck's constant, in each of the D dimensions.
The total uncertainty can be calculated from a weighted accumulation of Gaussian random variables. Each variable is associated with a position coordinate qα, and each coordinate possesses a corresponding probability weighting.
The relation between ΔΓ (the number of relevant quantum states within a phase space) in quantum theory and ΔpΔq in the limit of classical theory, where a cell of volume (2πh)^s (s being the number of degrees of freedom of the system) 'corresponds' in phase space to each quantum state, permits the number of states ΔΓ to be written as ΔΓ = ΔpΔq/(2πh)^s. The logarithm of ΔΓ is dimensionless when ΔpΔq is scaled by this denominator, so that changes of entropy in a given process are definite quantities independent of the choice of units.
The single particle uncertainty with a finite phase cell, in 3 dimensions, is:
This entropy is that of a scaled Gaussian multivariate and:
H = H_q + H_p Equation 4-4
Hq, Hp are the uncertainties due to position and momentum respectively which are statistically independent Gaussian RVs. The momentum and position may be encoded independent of one another subject to the boundary conditions.
H_q + H_p = ln(√(2πe))^{2D} + ln(|Λ|^D) Equation 4-5
Λ is the joint covariance matrix (see Section 10.4).
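The additive split of Equation 4-4 can be checked against the standard differential entropy of a multivariate Gaussian, H = ½ ln((2πe)^k |Λ|); a minimal sketch with assumed spreads σ_q and σ_p:

import numpy as np

# Standard k-variate Gaussian entropy H = 0.5*ln((2*pi*e)^k * det(Cov)),
# shown splitting into independent position and momentum terms (Equation 4-4)
# when the covariance is diagonal. Spread values are assumptions.
def gaussian_entropy(cov):
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

sigma_q, sigma_p = 2.0, 0.5
H_total = gaussian_entropy(np.diag([sigma_q**2, sigma_p**2]))  # D = 1: one q, one p
H_q = gaussian_entropy([[sigma_q**2]])
H_p = gaussian_entropy([[sigma_p**2]])
print(H_total, H_q + H_p)  # equal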
The lower limit of this entropy can be calculated by allowing the quantity (σ_q σ_p) to approach the quantum value (σ_q σ_p)_min permitted by the uncertainty principle. The number of single particle degrees of freedom D may be set to one since the entropy is extensible. The limit is achieved for σ_q σ_p → (σ_q σ_p)_min.
Therefore, the minimum entropy is non negative and fixed by a physical constant, assuming the resolution of the phase space cell is subject to the uncertainty principle. This limit is approached whenever the joint particle position and momentum recedes to the quantum “noise floor.” Positive differences from this limit correspond to the uncertainty in motions available to encode information. The limit is also independent of temperature.
4.2. Capacity
Capacity is defined as the maximum transmission rate possible for error free reception. Error free is defined as the ability to resolve position and momentum of a particle. The following analysis is directed to the continuous bandwidth limited AWGN channel without memory. “Without memory” refers to the circumstance where samples of momentum and position from the random communications process can be decoupled and treated as independent quantities at proper sampling time intervals.
The capacity of a system is determined by the ability to generate and discriminate sequences of particle phase space states, and their associated connective motions through an extended channel. Each sequence can be regarded as a unique message similar to the discussion of Section 2. The ability to discriminate one sequence from all others necessarily must contemplate environmental contamination which can alter the intended momentum and position of the particle.
4.2.1. Classical Capacity
A summary of Shannon's solution follows:
Maximization is with respect to the Gaussian pdf ρ(x) given a fixed variance. The channel input and output variables are given by x and y respectively, where y is a contaminated version of x. The scale within the argument of the logarithm is ratio-metric, and therefore the concerns of infinities are dispensed with, but only in the case where thermal noise variance is greater than zero, as will be shown. This form can also be applied to the continuous approximation of the quantized space, or even the quantized space if each volume element is suitably weighted with a Dirac delta function. In the following derivation, differential entropy forms are used and ratios are taken. Ultimately, the quantum uncertainty will also be accounted for through distinct terms to emphasize its limiting impact on capacity.
The mutual information can be defined as:
ρ(x|y) is the probability of x entering the channel given the observation of y at the receiver load. This is the probability kernel of the equivocation Hy (x). The capacity for the discretely sampled continuous AWGN channel:
E is the expectation operator.
Finding the capacity includes weighting all possible mutual information conditions, resulting in an uncertainty relationship. The averaged mutual information of interest can be written as:
E[I(x;y)] = [H(x) − H_y(x)] = [H(y) − H_x(y)] = [H(x) + H(y) − H(x,y)]
The joint density ρ(q,p)Ω developed in the previous sections accounts for this through detailed expansion of covariance as a function of time where all off diagonal terms of the covariance matrix are zero. The pdf for the channel output is given by:
ρ(y) = ρ(q̃,p̃)_Ω
The tilde represents the corrupted observation of the joint position and momentum. The variances introduced by a noise process can be represented by σ_qn² and σ_pn² for the position and momentum coordinates respectively.
Λ_x and Λ_y are the input and output covariance matrices respectively for the samples. Λ_x and Λ_y are N×N in dimension while Λ_x,y is a 2N×2N composite covariance of the N input and output samples. The approach for the single configuration dimension thus mimics Shannon's, where the independent time samples are arranged as a Gaussian multivariate vector of sample dimension N=2BT, sometimes referred to as Shannon's number. The extension of capacity for D configuration dimensions can then be calculated simply by using a multiplicative constant if all D dimensions are independent. The variance terms for the input and output samples are:
The variance terms are segregated because they have different units. Each sample has a unique position and momentum variance. Thus, position and momentum are treated as independent data types. Subsequently the units will be removed through ratios. k_g is a gain constant for the extended channel and may be set to 1 provided the channel noise power terms are accounted for relative to signal power. The elements of the covariance matrices are therefore obtained from the enumeration of (i, j) over N for σ_xi σ_xj and σ_yi σ_yj. The elements for the joint covariance Λ are derived from the composite input-output vector samples. The compact representation for the averaged mutual information from 4-11 then becomes:
Maximization of this quantity yields capacity.
In the case where the process interfering with the input variable x is Gaussian and independent from x, the capacity can be obtained from the alternate version of the mutual information:

C = max {H(y) − H_x(y)}
H_x(y) is the uncertainty in the output sample given that the desired variable x entered the channel. This is simply the uncertainty due to the corrupting noise, or:
Likewise,
Since the corruption consists of N independent samples from the same process, samples possess a statistic with noise variance σn2 and the capacity becomes:
N is not present in the normalized capacity because of the ratio of Equations 4-13 and 4-14. Furthermore, it is assumed that the required variances are calculated over representative time intervals for the process.
The capacity of 4-17 is per unit sample for a one particle system. Capacity rate must consider the minimum sample rate ƒs_min which sets the information rate. This is known from the TE relationship as the minimum number of forces per unit time to encode information.
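A minimal numerical sketch of this step, with assumed variances and an assumed minimum sample rate, evaluates the per-sample capacity and the corresponding rate:

import numpy as np

# Per-sample AWGN capacity and the rate obtained once the minimum force/sample
# frequency from the TE relation is applied. All values are assumptions.
sigma_x2 = 1.0       # signal sample variance
sigma_n2 = 0.01      # corrupting noise variance
fs_min = 2.0e6       # minimum sample (force) rate, Hz

C_sample = 0.5 * np.log(1.0 + sigma_x2 / sigma_n2)  # nats per sample
print(C_sample, fs_min * C_sample, fs_min * C_sample / np.log(2))  # nats, nats/s, bits/s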
Now an appropriate substitution using the results of Section 3 can be made for σx2 and σn2 to realize the capacity for the case of a particle in motion with information determined from independent momentum and position in the αth dimension. Capacity can be organized into configuration and momentum terms.
It is presumed that there will always be some variance due to quantum uncertainty. The variances σ_qΔ² and σ_pΔ² denote the quantum uncertainty contributions for position and momentum respectively. This formulation estimates the maximum entropy of the quantum uncertainty to be based on a Gaussian RV. Therefore the variance of quantum uncertainty may add to the corresponding noise variances for position and momentum.
If |f(q)|² and |g(p)|² are both probability frequency functions and g(p) is the Fourier transform of f(q), then |f(q)|² and |g(p)|² cannot be simultaneously concentrated in q and p.
For the case of information transfer via D independent dimensions, the available energy and information can be distributed amongst these dimensions. When all dimensions have parity, the capacity with a maximum velocity pulse boundary condition (k_p=1) is given by:
where variances from Section 3 have been substituted and are also normalized per unit time.
A multidimensional channel can behave like D independent channels which share the capacity of the composite. Given a fixed amount of energy, the bandwidth per dimension scales as D^−1 and the overall capacity remains constant for the case of independently modulated dimensions. Capacity as given is in units of nats/second but can be converted to bits/second if the logarithm is taken in base 2.
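The dimension-for-sample-rate trade can be sketched numerically; the aggregate capacity is unchanged as D varies because each dimension runs at ƒ_s/D while the per-dimension SNR is held by the equal split of energy and bandwidth. The rates and SNR below are assumptions.

import numpy as np

# Aggregate capacity is invariant as D independent dimensions split the
# effective sample rate fs. Illustrative values only.
fs = 1.0e6       # effective aggregate sample rate, Hz
snr = 100.0      # per-dimension SNR (preserved by the equal energy/bandwidth split)

for D in (1, 2, 4, 8):
    fs_alpha = fs / D                                  # per-dimension sample rate
    C_total = D * fs_alpha * 0.5 * np.log(1.0 + snr)   # nats/second
    print(D, C_total)                                  # constant across D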
The capacity equation may also be written in terms of the original set of hyperspace design parameters (m=1).
This form assumes that D dimensions from the original hyper sphere transmitter are linearly translated through the extended channel. The signal is sampled at an effective rate of ƒs, though each dimension is sampled at the rate ƒs_α=ƒs/D. It should be noted that a reference coordinate system at the receiver can be ambiguous and the aggregate sample rate of ƒs can in general be required to resolve this ambiguity in the absence of additional extended channel knowledge.
can be replaced by the filtered variance of a noisy process with input variance
This was calculated in Section 3 and results in the substitution (for m=1):
After substitution into 4-23 and cancelling the Ts_α2 terms, the capacity equation becomes:
The influence of the TE relation in 4-25 indicates that greater energy rates correspond to larger capacities. The scaling coefficient is the number of statistically independent forces per unit time encoding particle information, while the logarithm kernel reflects the allocated signal momentum squared relative to competing environmental momentum squared.
A similar result can be written for the case with a cardinal velocity pulse boundary condition by appropriate substitutions for the variance in equation 4-23. The proper substitutions from Section 3 are (m=1):
Both position and momentum are regarded as statistically independent and equally important in this capacity formula. This is an intuitively satisfying result since the coordinate pairings (q,p) are equally uncertain, at least to lower bound values just above the quantum noise floor. Although not contemplated by these equations, an upper relativistic bound would also limit the momentum accordingly. The implication of this model is that physical capacity summarized by equation 4-25 is twice that given in the Shannon-Hartley formula.
Quantum uncertainty prevents the argument of the logarithm in equation 4-23 from diverging when environmental thermal agitation is zero, unlike the classical forms of the Shannon-Hartley capacity equation. When the absolute temperature of the system is zero, the capacity is quite large but finite for finite Pm.
Capacity in nats per second and bits per second are plotted in
The capacity for the case of a cardinal velocity pulse boundary condition follows the same form, but the SNR for a given Pm_card must necessarily adjust according to the relationships provided in Equations 4-26, 4-27, and 4-28. There it was illustrated that the energy increase on the average for the cardinal case is approximately 1.967 times that of a maximum nonlinear velocity pulse boundary condition. This factor ignores the precursor and postcursor tails of the maximum cardinal pulse profile. If the tails are considered then the factor is approximately equal to the peak power increase requirement. The peak power increase ratio for the cardinal profile is 2.158. This corresponds to the circumstance where the same R must be spanned in an equivalent time while comparing the impact of the two prototype pulse profiles. Thus, roughly 3 dB more power is required by the cardinal profile to maintain a standard configuration span for a given time interval and capacity comparison.
4.3. Multi-Particle Capacity
Capacity for the multi-particle system is extensible from the single particle case. Comments are now expanded to non-interacting species of particles under the influence of independent forces with multiple internal degrees of freedom.
The form for the uncertainty function is given as a reference for μ species of particle, where the particle clusters might exhibit dynamics governed by μ Gaussian pdfs. Each cluster can comprise one or more particles. A general uncertainty function considers coordinates from all the particle clusters which can contain ν_μ particles per cluster and l_μ states per particle and spatial dimensionality = 1, 2 . . . D. Within each cluster domain, particles can swarm subject to a few constraints. One constraint is that particle collisions are forbidden. The total number of degrees of freedom K can generally be considered as the product Dν_μl_μ, and for a single particle type with one internal state per sample, K = D.
The pdf for this form of uncertainty can be adjusted using the procedures previously discussed.
The normalization integral is integrated over all states within the D dimensional hyper-sphere where the lower and upper limits (ll,ul) are set according to the techniques presented in Section 3.
The capacity for a system with K equivalent degrees of freedom is simply
Energy is equally dispersed amongst all the degrees of freedom in equation 4-30.
Whenever K is not composed of homogeneous degrees of freedom, the form of 4-30 can be adjusted by calculating an effective contribution for each distinct degree of freedom.
The multi-particle impact is an additional consideration which is important to mention at this point. The effect of particle number ν on the momentum and energy of a signal is as important as velocity. Energy and the energy rate of signals are a central theme of legacy theories as well as the theories presented here. Modulation of momentum through velocity is emphasized for the present discussion. However, this presents an obvious challenge in the classical case because of the uncertainty ΔqΔp≥h. At the least, two factors which may accommodate this concern when particles are indistinguishable are (ν!h^{Dν})^{−1} and m, where ν! is the Gibbs correction factor for counting states of indistinguishable particles. Mass m is extensive and therefore may represent a bulk of particles. Such a bulk at a particular velocity will have a greater momentum and kinetic energy as the mass (number of particles) increases. The same is true of charge. A multiplicity of charges in motion will proportionally increase momentum and the energies of interest both in terms of material and electromagnetic quantities. Hence, velocity is not the only means of controlling signal energy. The number of particles can also fluctuate whilst maintaining a particular velocity of the bulk. Such is the case, for instance, where current flow in an electronic circuit is modulated. The fundamental motions of electrons and associated fields can possess characteristic wave speeds in certain media, yet the square of the number of wave packets per interval of time traversing a cross section of the media is a measure of the power in the signal. This means that counting particles and possibly additional particle states is every bit as important as acknowledging their individual momenta. Indeed, the probability density of numbers of particles possessing particular kinetic energies distributed in various degrees of freedom is the comprehensive approach. This requires specific detail of the physical phenomena involved, accompanied by greater analytic complexity.
This section discusses the efficiency of target particle motion within the phase space introduced in Section 3. Though we have a primary interest in Gaussian motion, the derived relationships for efficiency can be applied to any statistic given knowledge of the PAPR for the particle motions. This is a remarkable inherent characteristic of the TE relation.
The 1st Law of thermodynamics accounts for all types of energy conversions as well as exchanges and requires that energy is conserved in processes restricted to some boundary such as a closed system. One can account for energy at a specific time using simple equations such as:
In this representation, energy is effectively utilized, εe, wasted, εw, or potential, εφ. U is defined as the internal system energy. All forms of energy may be included in this accumulation, such as chemical, mechanical, electrical, magnetic, thermal, etc.
δQ is an incremental amount of energy acquired from a source to power an apparatus and δW is an incremental quantity of work accomplished by an apparatus. A change in the total internal energy of a closed system can be given in terms of heat and work as:
ΔU=Q−W
dU=δQ−δW Equation 5-2
This equation is useful for general purpose. dU is an exact differential and is therefore independent of the procedure required for exchange of heat and work between the apparatus and environment.
For a system in isolation, the total energy and internal energy are equivalent. Using this definition enables several interchangeable representations which will be employed from time to time depending on circumstance.
Q − W = ΣΔε_e + ΣΔε_w + ΣΔε_φ

Δε_tot = Q − (W_effective + W_waste)

ε_tot = ε_e + ε_w + ε_φ = ε_φ + ε_k Equation 5-3
ε_k and ε_φ are kinetic and potential energies respectively. One can account for the various quantities using the most convenient formulation to fit the circumstance and a suitable sign convention for the directional flow of work when the energy varies with time. Negative work means that the apparatus accomplishes work on its environment. Positive work means that the environment accomplishes work on the apparatus. Work forms of energy exchange, such as kinetic energy or a charge accelerated by an electric field, can be effective or waste. Thus, the change in total energy of a system can be found from Q, the energy supplied, and W, the work accomplished, with sign conventions determined by the direction of energy and work flow. The accounting of energy exchanged for work in equation 5-3 is a form of the work-energy theorem.
It is also desirable to define energy efficiency consistent with the second law of thermodynamics. The consequence of the second law is that efficiency η≤1, where the equality is never observed in practice. The tendency for waste energy to be translated to heat, with an increase of environmental entropy, is also a consequence of the second law. ε_w reduces to heat by various direct and indirect dissipative mechanisms. Directly dissipative refers to the portion of waste originating from particle motion and described by such phenomena as drag, viscous forces, friction, electrical resistance, etc. Indirectly dissipative, or ancillary dissipative, phenomena in a communications process are defined as those inefficiencies which arise from the necessary time variant potentials synthesizing forces to encode information.
As will be illustrated, momentum exchange between particles of an information encoding mechanism possess overhead as uncertainty of motion increases. The overhead cannot be efficiently recycled and significant momentum must be discarded as a byproduct of encoding. εe is the deliverable portion of energy to a load which evolves through the process of encoding. εw is generated by the absorption of overhead momentum into various degrees of freedom for the system, including modes which increase the molecular kinetic energy of the apparatus constituents. This latter form is generally lost to the environment, eventually as heat.
The equation for energy efficiency can be written as:

η = P_e/P_in

This represents a familiar definition for efficiency often utilized by engineers. In this definition, the output power from an apparatus is compared to the total input power consumed to enable the apparatus function. The proper or effective output power, P_e, is the portion of the output power which is consistent with the defined function of the apparatus and delivered to the load. Usually, one is concerned with the case where P_out = P_e. This definition is important so that waste power is not incidentally included in P_out.
In subsequent discussion the phase space target particle is considered as a load. Its energy consists of εe and εw corresponding to desired and unwanted kinetic energies, respectively. Not only are there imperfections in the target particle motion, but there will be waste associated with the conversion of a potential energy to a dynamic form. This conversion inefficiency may be modeled by delivery particles which carry specified momentum between a power source and the load. Thus, the inefficiencies of encoding particle motion are distributed within the encoding apparatus where ever there is a possibility of momentum exchange between particles.
5.1. Average Thermodynamic Efficiency for a Canonical Model
Consider the basic efficiency definition using several useful forms including the sampled TE relation from Section 3 (eq. 3-42):
In terms of apparatus power transfer from input to output:
εins is defined as the average system input energy per sample, given the force sample frequency ƒs obtained in Section 3. In systems which are 100 percent efficient, the effective maximum power associated with the signal, Pm_e, and maximum power required by the apparatus, Pm, are equivalent. In general though, Pm≥Pm_e or Pm=Pm_e/η, where,
In both 5-5 and 5-6 we recognize that PAPRe is inversely proportional to efficiency.
The phase space model is now extended to facilitate a discussion concerning the nature of momentum exchange which stimulates target particle motion.
The information source possesses a Gaussian statistic of the form introduced in Section 3. It provides instruction to internal mechanisms which convert potential energy to a form suitable to encode the motion of particles in the target phase space. The interaction between the various apparatus segments can be through fields or virtual particles which convey the necessary forces. The energy source for accomplishing this task, illustrated in a separate sub phase space, is characterized by its specific probability density for particle motions within its distinct boundaries. εsrc is used as the resource to power motions of particles comprising the apparatus. A modulator is required which encodes these particles with a specific information bearing momentum profile. As a consequence, delivery particles or fields recursively interact with the target particle imparting impulsive forces at an average rate greater than or equal to ƒs_min. The sculpting rate of the impulse forces may be much greater than the effective sample rate ƒs for detailed models. However, when ƒs is used to characterize the signal samples it is understood that a single equivalent impulse force per sample at the ƒs frequency may be used, provided the TE relation is regarded.
There are two delivery particle streams illustrated in
0 ≤ Δp_mod_a

0 ≥ Δp_mod_b
In the absence of Δp_mod_b the particle accelerates up to a terminal velocity ν_max and can no longer be accelerated whenever p_tar ≥ p_max. p_max is a boundary condition inherited from the phase space model of chapter 3. The finite power resource P_m limits the maximum available momentum, system wide. The finite limit of the velocity due to forward acceleration can be deduced through the difference equation:
where p_tar_l is the target particle momentum at the lth sample and Δp_tar_l is the corresponding momentum change imparted at that sample.
The output momentum at the lth sample is obtained by:
p_tar_l = p_tar_(l−1) + Δp_tar_l Equation 5-9

Equation 5-9 indicates that an impulse momentum weighted by Δp_tar_l updates the accumulated target particle momentum at each sample.
Referring back to
This unit-less control gates effective impulse momentum Δp_src_b through to the branch segment labeled Δp_mod_b such that the gated momentum opposes the target particle motion, causing deceleration. It is a virtually lossless operation analogous to a sluice gate metering water flow supplied by a gravity driven reservoir. Impulse momentum Δp_src_a is formed from the difference of the maximum available momentum p_max and target particle momentum p_tar as indicated by equations 5-7 and 5-8. This is a feedback mechanism built into nature through the laws of motion. This feedback control meters the gating function channeling the resource Δp_src_a to generate Δp_mod_a, which in turn causes forward acceleration. The gating process in the feedback path is virtually 100 percent efficient so that Δp_src_a ≈ Δp_mod_a.
The energy transferred by the two input/delivery particle streams is calculated from the corresponding cumulative kinetic energy differentials over n exchanges.
The time average and statistical average are approximately equal for a sufficiently large n, the number of sample intervals observed for computing the average. The final two lines of eq. 5-10 were obtained by substitution of the relevant pdf definitions for pφ and ptar (see
The effective output power is by definition σ_e², where σ_φ² is the variance of the information momentum pdf of interest. The maximum waveform momentum p_max in equation 5-11 is twice that of the effective signal momentum. Therefore the efficiency is given by:
For large information capacity signals, the efficiency is approximately (2·PAPR_e)^−1. This result can also be deduced by noticing that the total input power to the encoding process is split between delivery particles and the target particle. This power may be calculated by inspecting
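The (2·PAPR_e)^−1 approximation can be checked with a short numerical sketch; here the encoding momentum is assumed to be a Gaussian clipped at four standard deviations to represent the finite P_m:

import numpy as np

# Monte Carlo check of eta ~ 1/(2*PAPR_e) for a clipped-Gaussian momentum.
# The clip level (4 sigma) is an assumption standing in for the finite resource Pm.
rng = np.random.default_rng(0)
p_phi = np.clip(rng.normal(0.0, 1.0, 1_000_000), -4.0, 4.0)

papr_e = np.max(p_phi**2) / np.mean(p_phi**2)   # effective peak-to-average power ratio
print(papr_e, 10 * np.log10(papr_e), 1.0 / (2.0 * papr_e))  # ~16, ~12 dB, ~3%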
Suppose that the model of
Referring back to
This model reflects an increase in efficiency over the apparatus of
5.1.1. Comments Concerning Power Source
The particle motions within the information source are statistically independent from the relative motions of particles in the power source. There is no a priori anticipation of information between the various apparatus functions. A joint pdf captures the basic statistical relationship between the energy source and encoding segment.
ρ_φε = ρ_φ ρ_src Equation 5-14
ρ_φε is the joint probability where the covariance of relative motions is zero in the most logical maximum capacity case. The maximum available power resource may or may not be static, although the static case was considered as canonical for analytical purposes in the prior examples. In those examples the instantaneous maximum available resource is always p_max, a constant. This is not a requirement, merely a convenience. If the power source is derived from some time variant potential then an additional processing consideration is required in the apparatus. Either the time variant potential must be rectified and averaged prior to consumption, or the apparatus must otherwise ensure that a peak energy demand does not exceed the peak available power supply resource at a sampling instant. Given the likely statistical independence between the particle motions in the various apparatus functions, the most practical solution is to utilize an averaged power supply resource. An alternative is to regulate and coordinate the PAPR_e, and hence the information throughput of the apparatus, as the instantaneous available power from a power source fluctuates.
5.1.2. Momentum Conservation and Efficiency
Section 5.1 provided a derivation of average thermodynamic efficiency based on momentum exchange sampled from continuous random variables. This section verifies that idea with a more detailed discussion concerning the nature of a conserved momentum exchange. The quantities here are also regarded as recursive changes in momentum at sampling intervals ƒ_s^−1 = T_s, where samples are obtained from a continuous process. The model is based on the exchange of momentum between delivery particles and a target particle to be encoded with information. The encoding pdf is given by ρ(p_φ), a Gaussian random variable.
The current momentum of a target particle is a sum of prior momentum and some necessary change to encode information. Successive samples are de-correlated according to the principles presented in Section 3. The momentum conservation equation is:
C is a constant. p_i− is the ith particle momentum t_ε seconds just prior to the nth momentum exchange. p_i+ is the ith particle momentum just after the nth momentum exchange.
p_i− = p_i(t − nT_s + t_ε)

p_i+ = p_i(t − nT_s − t_ε)
In the following example only two particles are deployed per exchange. In concept, many particles could be involved.
The conservation equation collated over n exchanges is:
First we examine the case of differential information encoding. The information is encoded in momentum differentials of the target particle rather than absolute quantities.
p_tar+ − p_tar− = Δp_tar
Also it follows that:
p_del− = p_tar+ − p_tar− = Δp_tar
This comes from the fact that particle motions are relative and random with respect to one another, and the exchanging particles possess the same mass. p_del− = p_φ + p̄_del− is exchanged in a set of impulses at the delivery and target particle interface at the sample instants, t = nT_s. p̄_del− is an average overhead momentum for the encoding process. Using the various definitions the conservation equation may be restated as:
p_del+ on the right side of equation 5-17 can be discarded in efficiency calculations since it is a delivery particle recoil momentum and therefore output waste. Now we proceed with the efficiency calculation which utilizes the average energies from the momentum exchanges.
The left hand side of the above equation represents the input energy of delivery particles prior to exchange. The right hand side represents the desired output signal energy associated with a differential encoded target particle. For large n we approximate the sample averages with the time averages so that:
(p_φ)² + 2 p_φ p̄_del− + (p̄_del−)² = (Δp_tar)² Equation 5-18
We can calculate the efficiency along the αth axis from:
We now specify an encoding pdf such that max{p_φ} = p̄_del− (ref.
Now the averaged efficiency over all dimensions may be rewritten as:
λ_α is a probability weighting of the efficiency in the αth dimension. Equation 5-20 is the efficiency of the differentially encoded case. When the PAPR is very large the efficiency may be approximated by (PAPR)^−1.
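A numerical sketch of Equations 5-18 through 5-20, assuming a zero-mean Gaussian encoding momentum clipped at 3.5σ with the delivery momentum pinned to the peak per max{p_φ} = p̄_del−, reproduces the (1+PAPR)^−1 behavior and its large-PAPR limit (PAPR)^−1:

import numpy as np

# With max{p_phi} = p_del and a zero-mean encoding momentum, the cross term of
# Equation 5-18 averages away, leaving eta = sigma^2/(sigma^2 + p_del^2)
# = 1/(1 + PAPR) -> 1/PAPR for large PAPR. The clip level is an assumption.
rng = np.random.default_rng(1)
p_phi = np.clip(rng.normal(0.0, 1.0, 1_000_000), -3.5, 3.5)

p_del = np.max(np.abs(p_phi))               # delivery momentum pinned to the peak
papr = p_del**2 / np.mean(p_phi**2)

input_energy = np.mean((p_phi + p_del)**2)  # delivery-side energy per exchange
output_energy = np.mean(p_phi**2)           # desired differential signal energy
print(output_energy / input_energy, 1.0 / (1.0 + papr), 1.0 / papr)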
Now suppose that we define the encoding to be in terms of absolute momentum values where the target particle momentum average is zero as a result of the symmetry of the delivery particle motions. The momentum exchanges per sample are independent Gaussian RVs, so the two-sample variance forming (Δp_tar)² is twice that of the absolute quantity (p_tar+)². That is,
If the same PAPR is stipulated for the comparison of the differential and absolute encoding techniques, then the average of the delivery particle momentum must scale as √2 and we obtain:
In the most general encoding cases the efficiency may be written as:
σ² is the desired output signal power and k_mod, k_σ are constants which absorb the variation of potential apparatus implementations and contemplate other imperfections as well.
5.1.3. A Theoretical Limit
Suppose that a stream of virtual delivery particles, such as photons, acts upon a material particle. Each delivery particle possesses a constant momentum used to accelerate or decelerate the target particle, and the desired target particle statistic p_φ is created by the accumulation of n impulse exchanges over time interval T_s. The motion of the target particle with statistic p_φ is verified by sampling at intervals of time t − lT_s, where l is a sample index for the target particle signal. Also, we identify the time averages (p_φ)² ≤ (p_del)² and (p_del)² ≤ [max{p_del}]². We further assume that the statistics in each dimension are iid so that efficiency is a constant with respect to α.
Time averages may be defined by the following momentum quantities imparted to the target particle by the delivery particles over n impulse exchanges per sample interval and N samples, where N is a suitably large number:
And finally:
Equation 5-22 presumes that n, the number of delivery particle impulses over the material particle sample time T_s, can be much greater than 1.
When PAPR→1 the efficiency approaches 1. An example of this circumstance is binary antipodal encoding where the momentum encoded for two possible discrete states, or the momentum required to transition between the two states, is equal and opposite in direction and ṗ→∞. This would be a physically non-analytic case.
5.2. Capacity Vs. Efficiency Given Encoding Losses
Encoding losses are losses incurred for particle momentum modulation where the encoding waveform is an information bearing function of time. This may be viewed as a necessary but inefficient activity. If the momentum is perfectly Gaussian then the efficiency tends to zero since the PAPR for the corresponding motion is infinite. However, practical scenarios preclude this extreme case since P_m is limited. Therefore, in practice, some reasonable PAPR can be assigned such that efficiency is moderated yet capacity is not significantly impacted.
A direct relationship between PAPR and capacity can be established from the capacity definition of equation 4-14:

C = max {H(y) − H_x(y)}
As before we shall assume an AWGN which is band limited but we relax the requirement for the nature of ρ(p) such that a Gaussian density for momentum is not required. Also the following capacity discussion is restricted to a consideration of continuous momentum since the capacity obtained from position is extensible. Technically we are considering a qualified capacity or pseudo capacity {tilde over (C)} whenever ρ(p) is not Gaussian, yet ρ(p) is still descriptive of continuous encoding.
We can rewrite equation 5-22 with a change of variables z = p_y/σ_p_y.
For a given value of momentum variance σp
Equation 5-25 confirms that capacity is a monotonically increasing function of PAER without bound.
(√PAER)_y includes the consideration of noise as well as signal. When the noise is AWGN and statistically independent from the signal:

σ_p_y² = σ_p_x² + σ_n²
Thus PAPR_y = P_m/σ_y² is the output peak to average power ratio for a corrupted signal.
PAPRy may be obtained in terms of the effective peak to average ratio for the signal as:
PAPRn is the peak to average power ratio for the noise. PAPRy is of concern for a receiver analysis since the contamination of the desired signal plays a role. In the receiver analysis where the noise or interference is significant, a power source specification Pm must contemplate the extreme fluctuation due to px+pn. The efficiency of the receiver is impacted since the phase space must be expanded to accommodate signal plus noise and interference so that information is not lost as discussed in Section 3. Most often, the efficiency of a communications link is dominated by the transmitter operation. That is, the noise is due to some environmental perturbation added after the target particle has been modulated. We thus proceed with a focus on the transmitter portion of the link.
Whenever the signal density is Gaussian we then have the classical result:
It is possible to compare the pseudo-capacity or information rate of some signaling case to a reference case like the standard Gaussian to obtain an idea of relative performance with respect to throughput for continuously encoded signals.
We now define the relative continuous capacity ratio figure of merit from:
The uncertainty H_y is due to a random signal plus noise. C_G is a reference AWGN channel capacity found in Section 4, and C̃_ρ is the corresponding pseudo-capacity for the density under analysis.
A precise calculation of C_r first involves finding the numerator pdf for the sum of signal plus noise RVs. When the signal and noise are completely independent, the separate pdfs may be convolved to obtain the pdf, ρ_y, of their sum. A generalization of C_r is possible whenever the numerator and denominator noise entropy are identical and the signal of interest is statistically independent from the noise. In this circumstance a capacity ratio bound can be obtained from:
k is a constant and σ_x² is the variance of a signal which is to be compared to the Gaussian standard. k is determined from the entropy ratio H_r of the signal to be compared to the standard entropy, ln(√(2πe) σ_G). Most generally, the value for C_ρ is obtained from an explicit entropy calculation for the density under analysis.
H_r is the relative entropy ratio for an arbitrary random variable compared to the Gaussian case with a fixed variance. A bounded value for H_r can be estimated by assuming that the noise and signal are statistically independent and uncorrelated. It has been established that the reference maximum entropy process is Gaussian, so that for a given variance all other random variables will possess lower relative differential entropies. This means that H_r ≤ 1 for all cases since H_ρ ≤ H_G.
An example illustrates the utility of H_r. H_r for the case when the signal is characterized by a continuous uniform pdf over {−ν_max, ν_max} (m=1) is found as:
The variance of the Gaussian reference signal and the uniformly distributed signal are equated in this example (σ_G² = σ_U² = 1) to obtain a relative result. At large SNR, the capacity ratio can be approximated:
Therefore, the capacity for the band limited AWGN channel when the signal is uniformly distributed and power limited, is approximately 0.876 that of the maximum capacity case whenever the AWGN of the numerator and denominator are not dominant. Section 10.10 provides additional detail concerning the comparison of the Gaussian and continuous uniform density cases.
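The 0.876 figure can be verified directly; assuming unit variance for both densities, the uniform pdf spans ±√3 and the entropy ratio follows immediately:

import numpy as np

# Entropy ratio of a unit-variance uniform density to the unit-variance
# Gaussian reference: ln(2*sqrt(3))/ln(sqrt(2*pi*e)) ~ 0.876.
vmax = np.sqrt(3.0)                                   # uniform support for unit variance
H_uniform = np.log(2.0 * vmax)                        # ~1.2425 nats
H_gauss = np.log(np.sqrt(2.0 * np.pi * np.e))         # ~1.4189 nats
print(H_uniform / H_gauss)                            # ~0.8757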
In general, the relative entropy is calculated from:
ρ(ν) is the pdf for the signal under analysis and ρ_G(ν) is the Gaussian pdf. ν_max is a peak velocity excursion. The denominator term is the familiar Gaussian entropy, ln(√(2πe) σ_G).
This formula may be applied to the case where ρ(ν) for the numerator distribution of a C_r ≈ H_r calculation is based on a family of clipped or truncated Gaussian velocity distributions. η is inversely related to PAPR by some function, as indicated by the two prior examples using particle based models and summarized in equations 5-11 and 5-12. PAPR can be found where ±ν_max indicates the maximum or clipped velocities of each distribution.
Both variance and PAPR can vary in the numerator function compared to the reference Gaussian case of the denominator, though the variance must never be greater than unity when the denominator is based on the classical Gaussian case. In
The results indicate that preserving greater than 99% of the capacity results in efficiencies lower than 15 percent for these particular truncated distribution comparisons. In the cases where the Gaussian distribution is significantly truncated, the momentum variable extremes are not as great and efficiency correspondingly increases. However, the corresponding phase space is eroded for the clipped signal cases thereby reducing uncertainty and thus capacity. A PAPR of 16 (12 dB) preserves nearly all the capacity for the Gaussian case while an efficiency of 40% can be obtained by giving up approximately 30% of the relative capacity.
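The trade can be sketched numerically for a family of truncated unit-σ Gaussians. The truncation points below are assumptions; relative capacity is approximated by the entropy ratio H_r (the large-SNR limit), and the efficiency is bracketed by the (PAPR)^−1 and (2·PAPR)^−1 approximations developed earlier.

import numpy as np

# Sweep of truncation level c (in units of sigma): PAPR, entropy ratio Hr,
# and the two bracketing efficiency approximations. Numerical integration only.
H_gauss = np.log(np.sqrt(2.0 * np.pi * np.e))

for c in (1.5, 2.0, 3.0, 4.0):
    v = np.linspace(-c, c, 200_001)
    pdf = np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)
    pdf = pdf / np.trapz(pdf, v)                # renormalized truncated density
    var = np.trapz(v**2 * pdf, v)
    papr = c**2 / var
    Hr = -np.trapz(pdf * np.log(pdf), v) / H_gauss
    print(c, round(papr, 2), round(Hr, 4), round(1.0 / papr, 3), round(1.0 / (2.0 * papr), 3))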
As another comparison of efficiency, consider
For relatively low PAPR, an investment of energy is more efficiently utilized to generate 1 nat/s of information. However, the total number of nats instantly accessible and associated with the physical encoding of phase space, is also lower for the low PAPR case compared to the circumstance of high PAPR maximum entropy encoding. Another way to state this is: there are fewer nats imparted per momentum exchange for a phase space when the PAPR of particle motion is relatively low. Even though a low PAPR favors efficiency, more particle maneuvers are required to generate the same total information entropy compared to a higher PAPR scenario when the comparison occurs over an equivalent time interval. Message time intervals, efficiency, and information entropy are interdependent.
The TE relation illustrates the energy investment associated with this process as given by eq. 5-5 and modified to include a consideration of capacity. In this case ℑ{C̃} is some function of capacity. The prior analysis indicates the nonlinearly proportional increase of ℑ{C̃} for an increasing PAPR_e. The following TE relation equivalent combines elements of time, energy, and information, where information capacity C̃ is a function of PAPR_e and vice versa. We will refer to this or a substantially similar form (eq. 5-32) as a TEC relation, or time-energy-capacity relation.
If the power resource, sample rate and average energy per momentum exchange for the process are fixed then:
k is a constant. As ℑ{C̃} increases, η decreases. The exact form of ℑ{C̃} depends on the realization of the encoding mechanisms. The ≤ operator accounts for the fact that an implementation can always be made less efficient if the signal of interest is not required to be of maximum entropy character over its span {−p_max, p_max}.
Since ℑ{C̃} is not usually a convenient function, it is often expedient to use one of several techniques for calculating efficiency in terms of capacity. The alternate related metric PAPR_e may be used and then related back to capacity. Numerical techniques may be exploited such as those used to produce the graphics of
The numerical constant in the denominator of the inverse hyperbolic tangent argument is the entropy for a Gaussian distribution with variance of unity. When C_r tends to a value of 1, PAPR_e tends to infinity.
This approximation is now re-examined using the general result extrapolated from equation 5-32, a TEC relation, and some numbers from an example given in section 3.1.6.
For the truncated Gaussian case:
ƒs, εins and Pm
Pm
If we wish a maximum capacity solution then the efficiency tends to zero in equation 5-35, verifying prior calculations. If we would like to preserve 70% of the maximum capacity solution then the efficiency should tend to 40%, confirming the prior calculation. This would require that k≅1.554 for consistency between the formulation of 5-35 and the numerical techniques related to the transcendental graphic procedure leveraging
Alternately, if k=1.554, then the efficiency calculates to 39.98%. This is a good approximation and a verification of consistency between the various theories and techniques developed to this point.
One may choose a variety of ratios and metrics to compare how arbitrary distributions reduce capacity in exchange for efficiency compared to some reference like the Gaussian norm. The curves of
5.3. Capacity Vs. Efficiency Given Directly Dissipative Losses
Directly dissipative losses refer to additional energy expenditures due to drag, viscosity, resistance, etc. These time variant scavenging effects impact the numerator component of the
The relationship between channel capacity and efficiency ηdiss_α can be analyzed by recalling the capacity equations of chapter 4 and substituting the total available energy for supporting particle motion into the numerator portion of
As the average efficiency ηdiss_α reduces, the average signal power Pe must increase to maintain capacity.
5.4. Capacity Vs. Total Efficiency
In this section, both direct and modulation efficiency (η_diss, η_mod) impacts are combined to express a total efficiency. The total efficiency is then η = η_diss η_mod, where η_mod is the efficiency due to modulation loss described in sections 5.1 and 5.2.
One can use the procedure and equations developed in section 5.2 to obtain a modified TEC relation:
The capacity equation 5-36 can be modified to include overall efficiency η=ηdissηmod. The following equation applies only for the case where the signal is nearly Gaussian. As indicated before, this requires maintaining a PAPR of nearly 12 dB with only the extremes of the distribution truncated.
η has a direct influence on the effective signal power, P_e = P_src η_diss_α η_mod_α. When the average signal power output decreases, the channel noise power becomes more significant in the logarithm argument, thereby reducing capacity. For a given noise power, the average power P_e for a signal must increase to improve capacity. In order to attain an adequate value for P_e = P_src η_diss η_mod, P_src must increase.
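A minimal sketch of this dependence, assuming the classical capacity form C = B ln(1 + P_e/N) in nats per second together with illustrative link values, shows the required source power growing as the total efficiency shrinks:

import numpy as np

# Holding capacity fixed, Pe = Psrc*eta must stay constant, so Psrc scales as 1/eta.
# Bandwidth, noise power, and capacity target are assumptions.
B = 1.0e6         # Hz
N = 1.0e-9        # channel noise power, W
C_target = 2.0e6  # nats/s

Pe = N * (np.exp(C_target / B) - 1.0)   # effective signal power for the target capacity
for eta_diss, eta_mod in ((0.9, 0.5), (0.7, 0.3), (0.5, 0.1)):
    eta = eta_diss * eta_mod
    print(eta, Pe / eta)                # required source power, W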
The capacity of equation 5-38 applies only to the maximum entropy process. Arbitrary processes can possess a lower PAPR and therefore higher efficiency, but the capacity equation must be modified by using the approximate relative capacity method of section 5.2 or the explicit calculation of pseudo-capacity for a particular information and noise distribution through extension of the principles from chapter 4.
All members of the capacity curve family can be made identical to the D=1 case if the sample rate ƒ_s_α per sub channel is reduced by the multiplicative factor D^−1. That is, dimensionality may be traded for sample rate to attain a particular value of C and a given η.
5.4.1. Effective Angle for Momentum Exchange
Information can be lost in the process of waveform encoding or decoding unless momentum is conserved during momentum exchange. The capacity equation may be altered to emphasize the effective work based on the angle of time variant linear momentum exchanges.
The subscript “in” refers to the input work rate. cos θ_eff_α controls the efficiency relationship in the second equation. (|ṗ_α||q̇_α|)_in_α cos(θ_eff_α) is the effective work rendered at the target particle. Therefore, η_α = cos θ_eff_α.
cos θ_eff_α must be unity for every momentum exchange to reflect perfect motion and render maximum efficiency. θ_eff_α = (θ_mod_α − θ_diss_α) is composed of a dissipative angle and a modulation angle, relating to the discussion of the prior section. θ provides a means for investigating inefficiencies at the most fundamental scale in multiple dimensions, where angular values may also be decomposed into orthogonal quantities.
For an increasing number of degrees of freedom and dimensionality, the relative angle of particle encoding and interaction is important and provides more opportunity for inefficient momentum exchange. For example, the probability of perfect angular recoil of the encoding process is on the order of (2π)^−D whenever the angular error is uniformly distributed. Even when the error is not uniformly distributed, it tends to be a significant exponential function of the available dimensional degrees of freedom.
Whenever D>1, the angle θ_eff_α can be treated as a scattering angle. This concept is understood in various disciplines of physics where momentum exchanges can be modeled as the interaction of particles or waves. The variation of this scattering angle due to vibrating particles or perturbed waves goes to the heart of efficiency at a fundamental scale. The thermal state of the apparatus is one way to increase θ_diss_α, the unwanted angular uncertainty in θ_eff_α. Interaction between the particles of the apparatus, the environment, and the encoded particles exacerbates inefficiency, evidenced as an inaccurate particle trajectory. Energy is bilaterally transferred at the point of particle interface, as has been noted from examining recoil momentum. Thus, during every targeted non-adiabatic momentum exchange in which some energy is dissipated to the local environment, there is also some tendency to expose the target particle momentum to environmental contamination.
5.5. Momentum Transfer Via an EM Field
The focus of prior discussions has been at the subsystem level, examining the dynamics of particles constrained to a local phase space. However, the discussion of section 3.3 and the implication of Section 4 is that such a model may be expanded across subsystem interfaces. It is not necessary to resolve all of the particulars of the interfaces enabling the extended channel to understand the fundamental mechanisms of efficiency. Wherever momentum is exchanged, the principles previously developed can apply. It is valuable to understand how the momentum can extend beyond boundaries of a particular modeled phase space, particularly for the case of charge-electromagnetic field interaction. Here the discussion is restricted to the case where particles are conserved charges. Specifically, charges in the transmitter phase space do not cross the ether to the receiver or vice-versa yet momentum is transferred by EM fields. This is the case for a radio communications link.
E is the stimulating electric field and H is the stimulating magnetic field. Often an electronic communications application will stimulate charge motion using a time variant scalar potential φ(t) alone so that the magnetic field is zero. In those common cases:
The momentum of the transmitter charge is imparted by a time variant circuit voltage in this circumstance. Since the charge motions involve accelerations, encoded fields radiate with momentum. Radiated fields transfer time variant momentum to charges in the receiver, likewise transferring the information originally encoded in the motion of transmitter charges.
The receiver charge mimics the motion of the transmitter charge at some deferred time.
The equations of motion for the receiver charge are given by:
The Lorentz force, which moves the receiver particle, is a function of the dynamic electric ({right arrow over (E)}) and magnetic ({right arrow over (H)}) field components of the field bridging the channel. These fields can be derived from the potentials which in turn reflect variations associated with the transmitter charge motion. The so called radiation field of the transmitter charge is based on accelerations i.e.
The energy-momentum tensor provides a compact summary of the quantities of interest associated with the momentum flux of the phase space based on the calculations of the conservation equation. The tensor is related to the space-time momentum by:
α,β are the spatial indices of the tensor in three space and the 0th index is reserved for the time components in the first row and column.
The energy density associated with the phase space in joules per unit volume is given by:
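The standard Gaussian-unit field energy density, supplied here where the original equation is omitted, is:

W=(E^2+H^2)/(8π)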
The energy flux density per unit time crossing the differential surface element df (chosen perpendicular to the field flux) is given by the tensor elements T0β multiplied by c, where:
And Poynting's Vector is obtained from:
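In the same units, the familiar cross-product form from standard electrodynamics, supplied where the original equation is omitted, is:

{right arrow over (S)}=(c/4π)({right arrow over (E)}×{right arrow over (H)})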
Maxwell's stress tensor expresses the components of the momentum flux density per unit time passing from the transmitter volume through a surface element of the hyper-sphere:
The second term in the integral equation of
Extended results are commented on below by applying modulation to encode information in the fields.
In an embodiment, a modulated harmonic motion of an electron corresponds to a modulated RF carrier. It can be shown that the modulated harmonic motion produces an approximate transverse electromagnetic plane wave in the far field given by:
In this view, α(t) and ϕ(t) are random variables encoded with information, corresponding to the amplitude and phase of the harmonic field. The momentum of the field changes according to α(t) and ϕ(t) in a correlated manner. Therefore the Ey and Hz field components are also random variables possessing encoded information, from which we may calculate time variant momentum using the integral conservation equation above.
Accelerating charges radiate fields which carry energy away from the charge. This radiating energy depletes the kinetic energy of the charge in motion, a distinct difference compared to the circumstance of matter without charge. The prior comments do not explicitly contemplate the impact of the radiation reaction on efficiency, which may become significant at relativistic speeds.
The field energies calculated by Poynting's vector at the receiver are attenuated by the spherical expansion of the transmitted flux densities as the EM field propagates through space. This attenuation is in proportion to the square of the distance between the transmitter and receiver for free space conditions according to Friis' equation when the separation is on the order of 10 times the wavelength of the RF carrier or greater. Ultimately, the effect of this attenuation is accounted for in the capacity calculations by a reduction in SNR at the receiver.
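For reference, the Friis relation in its common free-space form from standard antenna theory, where Gt and Gr are the antenna gains, d the separation, and λ the carrier wavelength, exhibits the inverse-square dependence noted above:

Pr=Pt·Gt·Gr·(λ/(4πd))^2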
Finally, it is posited that the principles of Section 5.5 are extensible to the general electronics application. Variable momentum is due to the modulation of charge densities and their associated fields, whether it is viewed as simply a bulk phenomenon or as the ensemble of individual scattering events which average to the bulk result. A circuit composed of conductors and semiconductors can be characterized by voltage and current. Voltage is the work per unit charge to convey the charge through a potential field. When multiplied by the charge per unit time conveyed, one can calculate the total work required to move the charge. This is analogous to the prior discussions involving the conjugate derivative field quantities of particles in a model phase space used to calculate the trajectory work rate ({right arrow over ({dot over (p)})}·{right arrow over ({dot over (q)})}), which can be integrated over some characteristic time interval Δt to obtain the total work over that interval.
Section 5 establishes the total efficiency for processing as η=ηdissηmod. ηmod applies for the modulation process wherever there is an associated efficiency for any interface where the momentum of particles must deliberately be altered to support a communications function. For communications this could include encoding, decoding, modulation, demodulation, increasing the power of a signal, etc. This section introduces a method for increasing ηmod while maintaining capacity. The method can apply to cases for which distributions of particle momentum are not necessarily Gaussian. Nevertheless, the Gaussian case is examined, since modern communications signals and standards are ever marching toward this limit.
6.1. Sum of Independent RVs
Consider the comparative case of N summed signal inputs xi to a channel versus some greater integer number ζ of inputs. Suppose that it is desirable to conserve energy in the comparison. The total energy is allocated amongst ζ distributions, with an ith branch efficiency inversely related to the PAPRi of the ith signal.
ηi=(ki·PAPRi+ai)^−1  Equation 6-1
Equation 6-1 is a general form suitable for handling all information encoding circumstances given a suitable choice of ki and ai.
An effective total efficiency can be calculated from the input efficiencies when the densities of xi are independent, beginning from the general form developed in Section 5 where kmod and kσ are constants based on encoder implementation.
Then, eq. 6-2 may be written for the ith branch as:
Defining kmod_i′=λikmod and kσ_i′=λikσ, Equation 6-4 becomes:
Forming a time average of equation 6-5 yields:
Stipulating that:
Equation 6-7 defines λi as a suitable probability measure for the ith branch. Comparing Equations 6-2 and 6-6 yields:
Equation 6-8 requires that the weighting coefficients associated with the ith branch be specified to yield the corresponding composite time average. Equations 6-1 through 6-6 suggest that a particular design PAPR can be achieved using a composite of signals, and the individual branch PAPRi can be lower than the final output which implies that overall efficiency can be improved.
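A brief numerical sketch of this composition follows; every numeric value is an illustrative assumption, and the Equation 6-1 coefficients ki, ai are simply chosen for demonstration:

import numpy as np

# Branch efficiencies per Equation 6-1: eta_i = (k_i*PAPR_i + a_i)**-1.
k_i, a_i = 2.0, 0.0                     # hypothetical encoder constants
papr_i = np.array([1.6, 2.0, 1.6])      # assumed per-branch PAPR_i values
lam_i = np.array([0.25, 0.50, 0.25])    # branch probability measures, summing to 1

eta_i = 1.0 / (k_i * papr_i + a_i)      # per-branch efficiencies
eta_eff = float(np.sum(lam_i * eta_i))  # composite weighted average (cf. eq. 6-6/6-8)
print(eta_i, eta_eff)

A single branch carrying the composite signal would require the full output PAPR, and therefore a lower efficiency than the weighted composite computed here.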
Examination of
In electronics, the analogy is that all the input branches can interact via a circuit summing node through the branch impedances, thus distributing energy from the inputs to all circuit branches, not just the intended output load. Fortunately, there are methods for avoiding these kinds of redistributions.
6.2. Composite Processing
A sampled system provides one means of controlling the signal interactions at the summing node of
For a single dimension D=1, samples for each sub-density ρi occur at noninterfering sampling intervals. Thus, if this scheme is applied to the system 6900 illustrated in
This approach can be extended to each orthogonal dimension for D>1, since orthogonal samples are also physically decoupled. The intersection of the thresholds in multiple dimensions forms hyper-geometric surfaces defining subordinate regions of phase space. In the most general cases, these thresholds can be regarded as the surfaces of manifolds.
Figure 6-2 and equation 6-6 suggest that the optimal efficiency can be calculated from:
The coefficients λi are variables dependent on the total number of domains ζ. The thresholds, {tilde over (η)}ζ, for the domains of each sub-density are varied for the optimization, requiring specific λi. η increases as ζ increases, though there is a diminishing rate of return for practical application. Therefore, a significant design activity is to trade η versus ζ versus cost, size, etc. The trade between efficiency and ζ is addressed in Section 7 along with examples of optimization.
In this Section, some modulator examples are presented to illustrate optimization consistent with the theory presented in prior Sections. Modulators encode information onto an RF signal carrier.
This Section focuses on encoding efficiency. Thus, we are primarily concerned with the efficiency of processing the amplitude of the complex envelope, though the phase modulated carrier case can also be obtained from the analysis.
7.1. Modulator
RF modulation is the process of imparting information uncertainty H(ρ(x)) to the complex envelope of an RF carrier. An RF modulated signal takes the form:
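A common quadrature form for such a signal, supplied here where the original equation is omitted and consistent with the aI(t), aQ(t) mapping described next, is:

x(t)=aI(t)cos(ωct)−aQ(t)sin(ωct)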
Any point in the complex signaling plane can be traversed by the appropriate orthogonal mapping of aI(t) and aQ(t). Alternatively, the magnitude and phase of the complex carrier envelope can be specified, provided the angle φ(t) is resolved modulo π/2. As pointed out in Section 5.5, information modulated onto an RF carrier can propagate through the extended channel via an associated EM field.
Battery operated mobile communications platforms typically possess unipolar energy sources. In such cases, the random variables defining aI(t), aQ(t) are usually characterized by non-central parameters within the modulator segment. Efficiency optimization examples are provided for circuits which encode aI(t) and aQ(t), since extension to carrier modulation is straightforward. One need only understand the optimization of in-phase aI(t) voltage or quadrature-phase aQ(t) voltage encoding, then treat each result as an independent part of a 2D solution.
The following discussion advances efficiency performance for a generic series modulator/encoder configuration. Efficiency analysis of the generic model also rests on common principles applicable to other classes of more complicated modulators.
The series impedance model for the baseband modulator in-phase or quadrature-phase segment of the general complex modulator is provided in
Sections 10.8 and 10.9 derive the thermodynamic efficiency for the type 1 modulator which results in a familiar form for symmetric densities without dissipation:
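The resulting form, restated from the measured comparison discussed below rather than from the omitted derivation, is:

ηmod=(2·PAPR)^−1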
This formula was verified experimentally through the testing of a type 1 modulator.
Several waveforms were tested, including truncated Gaussian waveforms studied in Section 5 as well as 3G and 4G+ standards-based waveforms used by the mobile telecommunications industry. The maximum theoretical bound for ηmod (i.e. ηdiss=1), represented by the upper curve, is based on the theories of this disclosure for the ideal circumstance. The efficiency of the apparatus due to directly dissipative losses was found to be approximately 70%. The locus of test points depicted by the various markers falls nearly exactly on the predicted performance when directly dissipative results are accounted for. For instance, a truncated Gaussian signal (inverted triangle) with a PAPR of 2 (3 dB) was tested with a measured result of ηmod·ηdiss=0.175. Dividing 0.175 by the inherent test fixture losses of 0.7 equates to an ηmod=0.25, in agreement with the theoretical prediction of (2·PAPR)^−1. At the other extreme, an IEEE 802.11a standard waveform based on orthogonal frequency division multiplexed modulation was tested, with the result recorded by data point F. Data point E is representative of the Enhanced Voice Data Only services typical of most code division multiplexed (CDMA) based cell phone technology currently deployed. B and C represent the legacy CDMA cell phone standards. Data points A and D are representative of the modulator efficiency for emerging wideband code division multiplexed (WCDMA) standards. A key point of the results is that the theory of Sections 3 through 5 applies to Gaussian and standards waveforms alike with great accuracy.
7.2. Modulator Efficiency Enhancement for Fixed
An analysis proceeds for a type 1 series modulator with some numerical computations to illustrate the application of principles from Section 5 and a particular example where efficiency is improved.
Voltage domains are related to energy or power domains through a suitable transformation. ρ({hacek over (η)}(a(t))), or simply ρ({hacek over (η)}), can be obtained from the appropriate Jacobian to transform a probability density for a voltage at the modulator load to an efficiency (refer to Section 10.8). {hacek over (η)} is defined as the instantaneous efficiency of the modulator and is directly related to the proper thermodynamic efficiency (refer to Section 10.9).
Let the baseband modulator output voltage probability density, ρ(VL), be given by:
Equation 7-3 depicts an example pdf which is a truncated, non-zero-mean Gaussian. VL corresponds to the statistic of a hypothetical in-phase amplitude or quadrature-phase amplitude of the complex modulation at an output load. The voltage ranges are selected for ease of illustration but may be scaled to any convenient values by renormalizing the random variable.
Average instantaneous waveform efficiency is obtained from:
Sections 10.8 and 10.9 provide a discussion concerning the use of instantaneous efficiency in lieu of thermodynamic efficiency. In this example, the instantaneous efficiency is used to illustrate a particular streamlined procedure to be applied in the optimization of Section 7.3.
ηWF is the total waveform efficiency, where the output power consists of signal power {tilde over (V)}L^2 plus modulator overhead. That is, the RV of interest is VL={tilde over (V)}L+V̄L, the signal plus its offset. This differs from the preferred definition of output efficiency given in Section 5. {tilde over (η)} is the thermodynamic efficiency, based on the proper output power due exclusively to the information-bearing amplitude envelope signal. Optimization of ηWF and {hacek over (η)} also optimizes the thermodynamic efficiency (reference Section 10.8).
Sometimes the optimization procedure favors manipulation of one form of the efficiency over the other depending on the statistic of the output signal.
We also note the supplemental relationships for an example case where the ratio of the conjugate power source impedance to load impedance is Zr=1.
More general cases can also consider any value for the ratio Zr other than 1. Zs has been defined as the power source impedance. The given efficiency calculation adjusts the definition of available input power to the modulator and load by excluding consideration of the dissipative power loss internal to the source. Vs therefore is an open circuit voltage in this analysis. Ultimately then, Zs limits the maximum available power Pmax from the modulator.
Now the waveform efficiency pdf is written.
The Jacobian,
yields:
This efficiency characteristic possesses an η̄wf of approximately 0.347. The PAPRwf is equal to η̄wf^−1, or ˜4.68 dB. Just as the waveform and signal efficiency are related, the associated peak to average power ratios, PAPRwf and PAPRe, are also related by:
The signal peak to average power ratio PAPRe=11.11 for this example.
Two waveform voltage thresholds, which correspond to three momentum domains, are applied using a modified type 1 modulator architecture illustrated in
In this example the baseband modulation apparatus possesses 3 separate voltage sources, Vs1, Vs2, Vs3. These sources are multiplexed at the interface between the corresponding potential boundaries, V1, V2, as the signal requires. An upper potential boundary V3=Vmax represents the maximum voltage swing across the load. There is no attempt to optimally determine values for the signal threshold voltages V1, V2 at this point. The significant voltage ranges defined by {0,V1}, {V1,V2}, {V2,V3} correspond to signal domains within phase space. We regard these domains as momentum domains with corresponding energy domains.
Domains are associated with voltage ranges according to:
Domain 1 if VL<V1
Domain 2 if V1≤VL≤V2
Domain 3 if V2<VL<V3
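A minimal selection-logic sketch in Python follows; the thresholds anticipate the 0.3 V and 0.7 V operating points of the worked example below, and the supply values standing in for Vs1, Vs2, Vs3 are illustrative assumptions:

def select_domain(vl, v1=0.3, v2=0.7):
    """Map an instantaneous load voltage VL to its momentum/energy domain."""
    if vl < v1:
        return 1          # Domain 1: VL < V1
    if vl <= v2:
        return 2          # Domain 2: V1 <= VL <= V2
    return 3              # Domain 3: V2 < VL < V3

supplies = {1: 0.3, 2: 0.7, 3: 2.0}   # hypothetical multiplexed sources, in volts
vs = supplies[select_domain(0.55)]    # a 0.55 V signal draws from the domain 2 source
print(vs)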
Average efficiency for each domain can be obtained from subordinate pdfs parsed from the waveform efficiency of
The calculations of {hacek over (η)}1,2,3 are obtained from:
ηζ=kζ
ζ is a domain increment for the calculations and kζ_norm provides a normalization of each partition domain such that each separate sub pdf possesses a proper probability measure. Thus, the averages of eq. 7-6 are proper averages from three unique pdfs. First we calculate the peak efficiency in domain 1, using a 2V power supply as an illustrative reference for a subsequent comparison.
{hacek over (η)}1peak is the instantaneous peak waveform efficiency possible for the modulator output voltage of 0.3V when the modulator supply is at 2V. {hacek over (η)}1, according to eq. 7-6, calculates to ≈0.131 over the domain where 0≤VL≤0.3V.
Now suppose that this region is operated from a new power source with voltage Vs
{hacek over (η)}1_norm is substantially enhanced because the original peak efficiency of 0.176 is transformed to 100 percent available peak waveform efficiency through the selection of a new voltage source, Vs
In domain 2 we perform similar calculations:
{hacek over (η)}2peak=0.538;{Vs=2V,VL2=0.7V}
Again we use the modified CDF to obtain the un-normalized {hacek over (η)}2≈0.338 first, followed by η2_norm.
Likewise we apply the same procedure for domain 3 and obtain:
{hacek over (η)}3
The corresponding block diagram for an instantiation of this solution becomes that shown in
{hacek over (η)}1=0.744; 9.1% probability weighting
{hacek over (η)}2=0.629; 81.8% probability weighting
{hacek over (η)}3=0.626; 9.1% probability weighting
The final weighted average of this solution, which has not yet been optimized, is given by:
{hacek over (η)}tot=ηsx·[(0.091×0.744)+(0.818×0.629)+(0.091×0.626)]≅ηsx·0.64
As is shown in the next section, the optimal choice of values for V1, V2, can improve on the results of this example, which is already a noticeable improvement over the single domain solution of {hacek over (η)}mod=0.347.
ηsx is the efficiency associated with the switching mechanism, which is a cascade efficiency. Typical switches of moderate to low complexity can attain efficiencies of 0.9. However, as switch complexity increases, ηsx may become a design liability. ηsx is considered a directly dissipative loss and a design tradeoff.
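The weighted average above can be reproduced directly; in the short Python sketch below, the λ and {hacek over (η)} values are copied from the example, while ηsx=0.9 reflects the moderate-complexity switch figure just mentioned:

lam = [0.091, 0.818, 0.091]   # domain occupation probabilities from above
eta = [0.744, 0.629, 0.626]   # per-domain average efficiencies from above
eta_sx = 0.9                  # assumed switch (multiplexer) cascade efficiency
eta_tot = eta_sx * sum(l * e for l, e in zip(lam, eta))
print(round(eta_tot / eta_sx, 3), round(eta_tot, 3))   # ~0.639 and ~0.575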
Voltage is the fundamental quantity from which the energy domains are derived. Preserving the information-to-voltage encoding is equivalent to properly accounting for momentum. This is important because ρ({hacek over (η)}) is otherwise not unique. We could also choose to represent efficiency as an explicit function of momentum as in Section 5, thereby emphasizing a more fundamental view. However, there is no apparent advantage for this simple modulator example. More complex encoder mappings involving large degrees of freedom and dimensionality can benefit from explicitly manipulating the density ρ({hacek over (η)}(p)) at a more fundamental level.
7.3. Optimization for Type 1 Modulator, ζ=3 Case
From the prior example we can obtain an optimization of the form
max {{hacek over (η)}tot}=max {λ1{hacek over (η)}1+λ2{hacek over (η)}2+λ3{hacek over (η)}3}  Equation 7-7
Σλi=1
It is also noted that
{hacek over (η)}1={tilde over (ℑ)}{Vs
The goal is to solve for the best domains by selecting optimum voltages Vs
kζ
λ1={P(0≤VL≤VL1)}
λ2={P(VL1<VL≤VL2)}
λ3={P(VL2<VL≤VL3)}
What must be obtained from the prior equations are VL
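A numerical sketch of such a threshold search is given below. It is illustrative only: it assumes a truncated zero-offset Gaussian ρ(VL), a simplified proxy {hacek over (η)}=VL/Vs within each domain, and each domain supply pinned to the domain's upper boundary; the detailed efficiency form of Section 10.8 would replace the proxy in practice:

import numpy as np

Vmax, sigma = 2.0, 0.5
v = np.linspace(1e-6, Vmax, 2000)       # load-voltage grid on (0, Vmax]
rho = np.exp(-0.5 * (v / sigma) ** 2)   # truncated zero-offset Gaussian shape
rho /= np.trapz(rho, v)                 # normalize to a proper probability measure

def eta_tot(v1, v2):
    """Weighted-average efficiency for 3 domains with supplies at domain tops."""
    total = 0.0
    edges = [0.0, v1, v2, Vmax]
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (v > lo) & (v <= hi)
        if m.sum() > 1:
            # lambda_i * eta_i in one step: integrate (VL/Vs)*rho over the domain,
            # with proxy efficiency VL/Vs and Vs = hi (an assumption)
            total += np.trapz((v[m] / hi) * rho[m], v[m])
    return total

grid = np.linspace(0.1, Vmax - 0.1, 40)
best = max((eta_tot(a, b), a, b) for a in grid for b in grid if b > a)
print("eta_tot=%.3f at V1=%.2f, V2=%.2f" % best)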
7.4. Ideal Modulation Domains
Suppose we wish to ascertain an optimal theoretical solution for both the number of domains and their respective threshold potentials for the case where amplitude is exclusively considered as a function of any statistical distribution ρ(VL). We begin in the familiar way using PAPR and {hacek over (η)} definitions from Section 6.
This defines instantaneous {hacek over (η)} for a single domain. For multiple energy domains, consistent with the 1st Law of Thermodynamics, we may write:
From the 2nd Law of Thermodynamics we know:
λi is the statistical weighting for {hacek over (η)}i over the ith domain so that:
It is apparent that each and every {hacek over (η)}i→1 is required for η to become one. That is, it is impossible to achieve an overall efficiency of {hacek over (η)}→1 unless each and every ith partition is also 100% efficient. Hence,
λi are calculated as the weights for each ith partition such that:
It follows for the continuous analytical density function ρ(VL) that
In order for the prior statements to be consistent we recognize the following for infinitesimal domains:
ΔVLi→VLi−VLi-1→dVL
Δλi→λi−λi-1→dλ
ζ→∞
This means that in order for the Riemann sum to approximately converge to the integral,
λi≈ρ(VLi)dVL
The increments of potentials in the domains must become infinitesimally small such that ζ grows large, even though the sum of all probabilities is bounded by the CDF. Since there are an infinite number of points on a continuous distribution and we are approximating it with a limit of discrete quantities, some care must be exercised to ensure convergence. This is not considered a significant distraction if we assign a resolution to phase space according to the arguments of Section 4.
This analysis implies an architecture consisting of a bank of power sources which in the limit become infinite in number, with the potentials separated by ΔVsi→dVs. A switch can be used to select this large number of separate operating potentials "on the fly". Such a switch cannot be easily constructed. Also, its dissipative efficiency, ηsx, would approach zero, thus defeating a practical optimization. Such an architecture can be emulated by a continuously variable power supply with bandwidth calculated from the TE relation of Section 3. Such a power supply poses a number of competing challenges as well. Fortunately, a continuously variable power source is not required to obtain excellent efficiency increases, as we have shown with a 3 domain solution and will presently revisit for domains of variable number.
7.5. Sufficient Number of Domains, ζ
A finite number of domains will suffice for practical applications. A generalized optimization procedure can then be prescribed for setting domain thresholds.
This optimization procedure is applicable for all forms of ρ(VL), even those with discrete RVs, provided care is exercised in defining the thresholds and domains for the RV. Optimization is best suited to numerical techniques for arbitrary ρ(VL).
7.6. Zero Offset Gaussian Case
A zero offset Gaussian case is reviewed in this section using a direct optimization method to illustrate the contrast compared to the instantaneous efficiency approach. The applicable probability density for the load voltage is illustrated in plot 8100 of
The more explicit form with domain enumeration is given by:
Pe_i, Pin_i are the average effective and input powers, respectively. Section 10.8 provides the detailed form in terms of the numerator RV and denominator RV, which are in the most general case non-central gamma distributed, with domain spans defined as functions ƒ{VT}i, ƒ{VT}i-1 of the threshold voltages.
The general form of the gamma distributed RV in terms of the average ith domain load voltage is:
Since a single subordinate density corresponds to
Table 7-3 and
Experiments were conducted with modulator hardware using 4, 6, and 8 domains with a signal PAPR ˜11.8 dB.
Experiments agree well with the theoretical optimization.
7.7. Results for Standards Based Modulations
The standards-based modulation schemes, used to obtain the efficiency curve of
Section 10.12 provides an additional detailed example of an 802.11a waveform as a consolidation of the various calculations and quantities of interest. In addition, a schematic of the modulation test apparatus is included.
A variety of topics are presented in this Section. The treatments are brief and include some limits on performance for capacity, the relation to Landauer's principle, time variant uncertainty, and Gabor's uncertainty. The diversity of subjects illustrates a wide range of applicability for the disclosed ideas.
8.1. Encoding Rate, Some Limits, and Relation to Landauer's Principle
The capacity rate equation was derived in Section 4 for the D dimensional case:
Consider the circumstance where
A limit of the following form is used to obtain the result of Equation 8-1:
The infinite slew rate capacity C∞ is twice that for the comparative Shannon capacity because both momentum and configuration spaces are considered here. This is the capacity associated with instantaneous access to every unique coordinate of phase space. One can further rearrange the equation for C∞ to obtain the minimum required energy per bit for finite non zero thermal noise where P is the average power per dimension:
No is an approximate equivalent noise power spectral density based on the thermal noise floor, No=2kT°, where T° is a temperature in degrees Kelvin (K°) and Boltzmann's constant k=1.38×10^−23 J/K°. A factor of 2 is included to account for the independent influence of configuration noise and momentum noise. Therefore, the number of joules per bit for D=1 is the familiar classical limit of (0.6931)kT°/2, and the energy per bit to noise density ratio is
This is 3 dB lower than the classical results because we may encode one bit in momentum and one bit in configuration for a single energy investment.
Each message trajectory consisting of a sequence of samples would be infinitely long and therefore require an infinite duration of time to detect at a receiver to reach this performance limit. Moreover the samples of the sequence must be Gaussian distributed.
In the case where the values are binary orthogonal encodings it can be shown that:
Both momentum and configuration are included to obtain the result per dimension. The encoded sequence must be comprised of an infinite sequence of binary orthogonal symbols to achieve this limit, and both configuration and momentum must be used, else the results increase by 3 dB for the given Eb/No.
No as given is an approximation. Over its domain of accuracy the total noise variance may be approximated using:
σn^2=∫0^B No dƒ
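Carrying out the integral for a flat density gives simply σn^2=No·B over the information bandwidth B, an intermediate step added here for clarity.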
A difficulty with this approximation arises from the ultraviolet catastrophe when B approaches ultra-high frequencies. Planck and Einstein resolved this inconsistency using a quantum correction which yields:
is composed of thermal and quantum terms which are plotted separately in plot 8600. The thermal noise with quantum correction has an approximate 3 dB bandwidth of 7.66e12 Hz for the room temperature case and 7.66e10 Hz for the low temperature case. The frequencies at which the quantum uncertainty variance competes with the thermal noise floor are approximately 4.26e12 and 4.26e10 Hz, respectively. The corresponding adjusted values of Pn(ƒ)+hƒ are the suggested values to be used in the capacity equations to calculate noise powers at extreme bandwidths or low temperature. At the crossover points, the total value of
is increased by 3 dB. hƒ is apparently independent of temperature.
An equivalent noise bandwidth principle can be applied to accommodate the quantity Pn(ƒ)+hƒ and calculate an equivalent noise density Ño over the information bandwidth B.
We may combine this density with the TE relation to obtain:
If we consider antipodal binary state encoding, then the energy per sample corresponds to one half the energy per bit. At frequencies where thermal noise is predominant, one can calculate the required energy per bit to encode motion in a particle while overcoming the influence of noise, such that over a suitably long interval of observation a sequence of binary encodings may be correctly distinguished.
The maximum work rate of the particle is therefore bounded by (for thermal noise only):
max {{right arrow over ({dot over (p)})}·{right arrow over ({dot over (q)})}}≤ƒskT°(PAER)ln(2) Equation 8-7
According to Section 5, a maximum theoretical efficiency to generate one bit is bounded by:
A plot 8800 of an example momentum space trajectory depicting a binary encoding situation is illustrated in
Suppose that binary data is encoded in position rather than momentum. This activity is illustrated in a plot 8900 of the velocity versus position plane for a single dimension for the position encoding of ±Rs, the extremes of configuration space shown in
max {{right arrow over ({dot over (p)})}·{right arrow over ({dot over (q)})}}≤ƒsh(PAER)ln(2)  Equation 8-9
Note that PAER may only approach 1 as Δt approaches zero, requiring ƒs→∞. No matter the encoding technique, we cannot escape this requirement. If we construct a binary system which transfers distinguishable data in the presence of thermal noise or quantum noise, independent states require the indicated work rate per transition. As discussed in Section 5, since one cannot predict a future state of a particle, the delivery particle possesses an average recoil momentum during an exchange equal and opposite in a relative sense to that of the target particle encoding the state. This recoil momentum is waste, and ultimately dissipates in the environment according to the second law. According to equation 8-8 (the thermal noise regime), the theoretical efficiency of 1 is achieved when Pm=ƒskT° ln√2, which is equivalent to an energy per sample of
(εk)s=kT° ln√2  Equation 8-10
Likewise for the case where T°→0, one has a minimum energy per sample limited by quantum effects.
(εk)s=hƒs ln√2  Equation 8-11
In general, one can calculate a minimum energy to unambiguously encode a bit of information using a binary antipodal encoding procedure as:
If the binary antipodal requirement is removed in favor of maximum entropy encoding, then:
where Ño is given by equation 8-4.
However, this is for the circumstance of 100% efficiency, i.e. PAER→1. According to principles of Section 3, if the information is encoded in the form of momentum, this information can only be removed by re-setting the momentum to zero. This means that at least the same energy investment is required to reverse an encoded momentum state. Likewise, if the information is recorded in position, then a particle must possess momentum to traverse the distance between the positions. In one direction, for instance moving from −Rs to Rs, a quantity of work is required. Reversing the direction requires at least the same energy. The foregoing discussion reveals a principle that at least Ño ln(2) is required to both encode or erase one bit of binary information. This resembles Landauer's principle, which requires the environmental entropy to rise by a minimum of kT° ln(2) when one bit of information is erased. The important difference here is that the principle applies for the case of generating unique data as well as annihilating data. In addition, the rate at which one requires generation or erasure to occur can affect the minimum requirement via the quantity PAER (ref. eq. 8-7), since transitions are finite in time and energy. Finite transition times correspond to PAER>1. This latter effect is not contemplated by Landauer. Thus efficiency considerations will necessarily raise the Landauer limit under all practical circumstances, because a power source with a maximum power of Pm is required, which ensures a PAER>1. For the model of Section 3 applied to binary encoding where transitions are defined using a maximum velocity profile such as indicated in
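As a quick numeric illustration using the constants given earlier (assuming room temperature; this reproduces the kT° ln(2) figure, not the PAER-adjusted practical bound):

import math

k = 1.38e-23      # Boltzmann's constant, J/K, as given above
T = 300.0         # assumed room temperature, K
e_bit = k * T * math.log(2)                 # minimum J to encode or erase one bit
e_sample = k * T * math.log(math.sqrt(2))   # energy per sample, cf. eq. 8-10
print(e_bit, e_sample)                      # ~2.87e-21 J and ~1.43e-21 J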
8.2. Time Variant Uncertainty
Time sampling of a particle trajectory in momentum space evolves independently from the allocation of dimensional occupation. The dimensional correlations for α≠β will be zero for maximum uncertainty cases of interest. Likewise, the normalized auto-correlation is defined for α=β. It is interesting to interject the dimension of time into the autocorrelation as suggested in eq. 3-26 through 3-28. In doing so we can derive a form of time variant uncertainty.
The density function of interest to be used for the uncertainty calculation may be written explicitly as:
The notation is organized to enumerate the dimensional correlations with α,β and the adjacent time interval correlations with l,{circumflex over (l)}. The time interval is given by:
tl−tl+1=Ts
(tl−t{circumflex over (l)})≤Ts
{right arrow over (p)}Δ={right arrow over (p)}l−{right arrow over (p)}{circumflex over (l)}
σΔ=√{square root over (σl2+σ{circumflex over (l)}2−2γl,{circumflex over (l)}σlσ{circumflex over (l)})}  Equation 8-15
ρ({right arrow over (p)}Δ) represents the probability density for a transition between successive states where each state is represented by a vector. One can calculate the correlation coefficients for the time differential (t{circumflex over (l)}−tl) recalling that the TE relation defines the sampling frequency ƒs.
The uncertainty H(ρ({right arrow over (p)}Δ)) is maximized whenever the information distributed amongst the degrees of freedom is iid Gaussian. It is clear from the explicit form of ρ({right arrow over (p)}Δ) that the origin and the terminus of the velocity transition can be completely unique only under the condition that γl,{circumflex over (l)}=0. This occurs at specific time intervals modulo Ts. Otherwise, there will be mutual information over the interval {l,{circumflex over (l)}}. Elimination of all forms of space-time cross-correlations maximizes H(ρ({right arrow over (p)}Δ)). Given these considerations, the pdf for the state transitions may be factored to a product of terms.
The origin and terminus coordinates are related statistically through the independent sum of their respective variances. An origin for a current trajectory is also a terminus for the prior trajectory.
The particle can therefore acquire any value within the momentum space and simultaneously occupy any conceivable location within the configuration space at the subsequent time offset of Ts. The case where the time differential td=(t{circumflex over (l)}−tl) is less than Ts carries a corresponding temporal reduction of the phase space access, given knowledge of the prior sampling instant. If the phase space accessibility fluctuates as a function of the time differential, then so too must the corresponding uncertainty for ({right arrow over (p)}Δ), at least over a short interval 0≤(t{circumflex over (l)}−tl)≤Ts. The corresponding differential entropy, which incorporates a relative uncertainty metric over the trajectory evolution, is governed by the correlation coefficient γl,{circumflex over (l)}. If the time difference Δt=0, then by definition the differential entropy metric may be normalized to zero plus the quantum uncertainty variance on the order of h. This means that if a current sample coordinate is known, then for zero time lapse it is still known. Adopting this convention, the relative entropy metric over the interval is defined as:
HΔ≡ln(√{square root over ((σ{circumflex over (l)}2+σl2−2γl,{circumflex over (l)}σlσ{circumflex over (l)})2πe+(1+2πeh))}) Equation 8-18
In this simple formula the origin state of the trajectory is considered as the average momentum state, or zero.
When Ts=0, then γl,{circumflex over (l)}=1 and HΔ≥ln(√{square root over ((1+2πeh))}). If γl,{circumflex over (l)}=0,
then HΔ=ln(√{square root over ((σ{circumflex over (l)}2+σl2)2πe+(1+2πeh))}). The plot 9000 of
At a future time differential of Ts, the particle dynamic acquires full probable access to the phase space and entropy is maximized. Once the particle state is identified by some observation procedure then this uncertainty function resets. HΔ is calculated based on an extreme where the origin of the example trajectory is at the center of the phase space. HΔ may fluctuate depending on the origin of the sampled trajectory.
8.3. A Perspective of Gabor's Uncertainty
In Gabor's 1946 paper "Theory of Communication," he rigorously argued the notion that fundamental units, "logons," were a quantum of information based on the reciprocity of time and frequency. Gabor punctuated his paper with the time-frequency uncertainty relation for a complex pulse:
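The relation usually quoted from that paper, stated here in terms of the rms duration Δt and rms bandwidth Δƒ where the original equation is omitted, is:

Δt·Δƒ≥1/2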
This uncertainty is related to the ambiguity involved when observing and measuring a finite function of time such as a pulse. Gabor's pulse was defined over its rms extent, corresponding more or less to energy metrics which can be considered as analogous to the baseband velocity pulse models of Section 3. Gabor ingeniously expanded the finite duration pulse in a complex series of orthogonal functions and calculated the energy of the pulse in both the time and frequency domains. His tool was the Fourier integral. He was interested in complex band pass pulsed functions and determined the envelope of such functions which is compliant with the minimum of the Gabor limit to be a probability amplitude commonly used in quantum mechanics. Gabor's paper was partially inspired by Pauli and reviewed by Max Born prior to publication.
Nyquist had reached a related conclusion in 1924 and 1928 with his now classic works, "Certain Factors Affecting Telegraph Speed" and "Certain Topics in Telegraph Transmission Theory." Nyquist expanded a "DC wave" into a series using Fourier analysis and determined that the number of signal elements required to transmit a signal is twice the number of the sinusoidal components which must be preserved to determine the original DC wave formed by the signal element sequence. This was for the case of a sequence of telegraph pulses forming a message and repeated perpetually. This cyclic arrangement permitted Nyquist to obtain a proper complex Fourier representation without loss of generality, since the message sequence duration could be made very long prior to repetition; an analysis technique later refined by Wiener. Nyquist's analysis concluded that the essential frequency span of the signal is half the rate of the signal elements and inversely related. The signal elements are fine structures in time, samples in a sense, and his frequency span was determined by the largest frequency available in his Fourier expansion.
Gabor was addressing this puzzle with his analysis, pointing out his apparent dissatisfaction with the lack of an intuitive physical origin for the phenomena. He also regarded the analysis of Bennett in a similar manner concerning the time-frequency reciprocity for communications, stating: "Bennett has discussed it very thoroughly by an irreproachable method, but, as is often the case with results obtained by Fourier analysis, the physical origin of the results remains somewhat obscure." Gabor also comments: "In spite of the extreme simplicity of this proof, it leaves a feeling of dissatisfaction. Though the proof (one forwarded in Gabor's 1946 paper) shows clearly that the principle in question is based on a simple mathematical identity, it does not reveal this identity in tangible form."
An explanation is now presented for the time-frequency uncertainty, using a time bandwidth product, based on physical principles expressed through the TE relation and the physical sampling theorem. An instantiation of Gabor's In-phase or Quadrature phase pulse can be accomplished by using two distinct forces per in-phase and quadrature phase pulse according to the physical sampling theorem presented in Section 3. The time span of such forces are separated in time by Ts. The characteristic duration of a pulse event is Δt=2Ts.
From the TE relation, one knows:
{tilde over (B)}, the bandwidth available due to the sample frequency fs, is always greater than or equal to B, the bandwidth available due to an absolute minimum sample frequency fs_min, so that:
Therefore:
This is called a time bandwidth product. If one wishes to increase the observable bandwidth {tilde over (B)}, then Ts_max can be lowered. If a lower bandwidth is required, then Ts_max is increased, where Ts_max is the interval of time required between forces such that the forces may be uncorrelated given some finite Pm.
An example provides a connection between the TE relation, physical sampling theorem and Gabor's uncertainty.
Only two samples are required to create or capture one cycle of the higher frequency sine wave. However, two samples separated in time by Ts cannot create the trajectory of the slower sine wave over its full interval 10Ts. That trajectory is ambiguous without the additional 8 samples, as is evident by comparing frame 2 with frame 1 of the figure. The sampling frequency of fs≈2fc is adequate for both sine waves but in order to resolve the slower sine wave and reconstruct it, the samples must be deployed over the full interval 10Ts. The prior equation may capture this by accounting for the extended interval using a multiplicity of samples.
The slow sine wave case is significantly oversampled so that all frequencies below B1 are accommodated but ambiguities may only be resolved if the sample record is long enough. This is consistent with Gabor's uncertainty relation as well as Nyquist's analysis.
We can address the requirement for an extended time record of samples by returning to the physical sampling theorem and a comparative form of the TE relation. The next equation calculates the time required between independently acting forces for a particle along the trajectory of the slow sine wave:
The result means that effective forces must be deployed with a separation of 5Ts1 to create independent motion for the slower trajectory. Adjacent samples separated by Ts=Ts1 cannot produce independent samples for the slower waveform because they are significantly correlated.
Hence the effective change in momentum {dot over (p)} per sample is lower for the over sampled slow waveform. As a general result, the corresponding work rate is lower for the lower frequency sine wave so that:
Even though 10 forces must be deployed to capture the entire slower sine wave trajectory over its cycle, only pairs taken from subsets of every 5th force can be jointly decoupled.
Gabor's analysis considered the complex envelope modulated onto orthogonal sinusoids. A complex carrier consisting of a cosine and sine has a corresponding TE equation:
The effective samples for in phase and quadrature components occur over a common interval so that the sample frequency doubles yet so does the peak power excursion Pm for the complex signal. This is analogous to the case D=2. Gabor's modulation corresponds to a double side band suppressed carrier scenario. This is the same as specifying pulse functions aI(t), aQ(t) in the complex envelope as zero offset unbiased RV's, where the envelope takes the form:
x(t)=a(t)e^{jωct}
To obtain Gabor's result, the peak power in the baseband pulses expressed by aI(t), aQ(t) will be twice that of the unmodulated carrier. Therefore the TE relation for the complex envelope of x(t) is given by:
This reduces to:
The time bandwidth product now becomes;
A variation in the sample interval for independent forces which create a signal must be countered by an inverse variation in the apparatus bandwidth, or correspondingly the work rate. 2NTs=Δtmax for a sequence of deployed forces creating a signal trajectory; the sequence always extends to a time interval accommodating at least two independent forces for the slowest frequency component of the message. The minimum number of deployed forces occurs for N=1, a single pulse event.
This result is also equivalent to Shannon's number, which is given by N=2BT where 2B=fs_min and T=Δtmax. Care must be exercised using Shannon's number to account for I and Q components.
Communications is the transfer of information through space and time via the encoded motions of particles and corresponding fields. Information is determined by the uncertainty of momentum and position for the dynamic particles over their domain. The rate of encoding information is determined by the available energy per unit time required to accelerate and decelerate the particles over this domain. Only two statistical parameters are required to determine the efficiency of encoding: the average work per deployed force and the maximum required PAPR for the trajectory. This is an extraordinary result, applicable for any momentum pdf.
Bandwidth in the Shannon-Hartley capacity equation is a parameter which limits the rate at which the continuous signal of the AWGN channel can slew. This in turn limits the rate at which information can be encoded. The physical sampling theorem, determined from the laws of motion and suitable boundary conditions, requires that the number of forces per second to encode a particle be given by:
This frequency also limits the slew rate of the encoded particle along its trajectory and determines its bandwidth in a manner analogous to the bandwidth of Shannon according to:
The calculated capacity rate for the joint encoding of momentum and position in D independent dimensions was calculated as:
As this capacity rate increases, the required power source, Psrc, for the encoding apparatus also increases as is evident from the companion equation;
Therefore, increases in the modulation encoding efficiency ηmod can be quite valuable. For instance, in the case of mobile communications platform performance, data rates can be increased, time of operation extended, battery size and cost reduced or some preferred blend of these enhancements. In addition, the thermal footprint of the modulator apparatus may be significantly reduced.
Efficiency of the encoding process is inversely dependent on the ratio of the dot product extreme, max {{right arrow over ({dot over (p)})}·{right arrow over ({dot over (q)})}}=Pm, to an average, {right arrow over ({dot over (p)})}·{right arrow over ({dot over (q)})}=σ2, a ratio also known as PAPR or PAER. The fluctuations about the average represent changes in inertia which require work. Since these fluctuations are random, momentum exchanges required to encode particle motion produce particle recoils which are inefficient. The difference between the instantaneous energy requirement and the maximum resource availability is proportional to the wasted energy of encoding. On average, the wasted energy of recoil grows for large PAPR. This generally results in an encoding efficiency of the form:
Coefficients kenc and kσ depend on apparatus implementation. Several cases were analyzed for an electronic modulator using the theory developed in this work, then tested in experiments. Experiments included theoretical waveforms as well as 3G and 4G standards based waveforms. The theory was verified to be accurate within the degree of measurement resolution, in this case ˜0.7%.
The inefficiency of encoding is regarded as a necessary inefficiency juxtaposed to dissipative inefficiencies such as friction, drag, resistance, etc. Capacity for the AWGN channel is achieved for very large PAPR, resulting in low efficiencies. However, if the encoded particle phase space is divided into multiple domains, then each domain may possess a lower individual PAPR statistic than the case of a single domain phase space with equivalent capacity. The implication is that separate resources can be more efficiently allocated in a distributed manner throughout the phase space. Resources are accessed as the encoded particle traverses a domain boundary. Domain boundaries which are optimized in terms of overall thermodynamic efficiency are not arbitrary. The optimization in the case of a Gaussian information pdf takes the form of a ratio of composited gamma densities:
There is no known closed-form solution to this pdf ratio. A numerical calculus of variations technique was developed to solve for the optimal thresholds {VT}i and {VT}i-1 defining domain boundaries. The ith domain weighting factor λi is a probability of domain occupation, where a domain is defined between thresholds {VT}i and {VT}i-1. In general, the numerator term, corresponding to effective signal energy, is based on a central gamma RV, and the denominator term, corresponding to apparatus input energy, is based on either a non-central or central gamma RV. Another optimization technique was also developed which reduces to an alternate form:
In this case, thresholds are determined in terms of the optimized threshold values for ηi-1, ηi. Although this optimization is in terms of an instantaneous efficiency, it was shown to relate to the thermodynamic efficiency optimum.
Modulation efficiency enhancements were theoretically predicted. Several cases were tested which corroborate the accuracy of the theory. Efficiencies may be drastically improved by dividing a phase space into only a few domains. For instance, dividing the phase space into 8 optimized domains results in an efficiency of 75% and dividing it into 16 domains results in an efficiency of 86.5% for the case of a zero offset Gaussian signal. Excellent efficiencies were observed for experiments using various cell phone and wireless LAN standards as well.
A key principle of this work is that the transfer of information can only be accomplished through momentum exchange. Randomized momentum exchanges are always inefficient because the encoding particle and particle to be encoded are always in relative random motion resulting in wasted recoil momentum which is not conveyed to the channel but rather absorbed by the environment. This raises the local entropy in agreement with the second law of thermodynamics. It was also shown that information cannot be encoded without momentum exchange and information cannot be annihilated without momentum exchange.
10.1 Isoperimetric Bound Applied to Shannon's Uncertainty (Entropy) Function and Related Comments Concerning the Phase Space Hyper Sphere
It is possible to identify the form of probability density function, ρ(x), which maximizes Shannon's continuous uncertainty function for a given variance:
H[ρ(x)]=−∫−∞∞ρ(x)ln ρ(x)dx  Equation A1.1
A formulation from the calculus of variations historically known as Dido's problem can be adapted for the required solution. The classical formulation was used to obtain the form of a fixed perimeter which maximizes the enclosed area. Thus the formulation is often referred to as an isoperimetric solution.
In the case of interest here it is desirable to find a solution given ν, a single particle velocity in the D dimensional hyper space, and a fixed kinetic energy as the resource which can move the particle. Specifically, we wish to obtain a probability density function, ρ(ν1, ν2 . . . νD), which maximizes a D dimensional uncertainty hyperspace for momentum with fixed mass, given the variance of velocity να, where α=1, 2, . . . D.
This problem takes on the following character:
The kernel of the integral in A1.2 shall be referred to as ℑ on occasion in its various streamlined forms.
This D dimensional maximization can be partially resolved by recognizing two simple concepts. First, in the absence of differing constraints for each of the D dimensions, a solution cannot bias the consideration of one dimension over the other. If all dimensions possess equivalent constraints, then their physical metrics as well as any related probability distributions for να will be indistinguishable in form. A lack of dimensional constraints is in fact a constraint by omission.
Second, if the D dimensions are orthogonal, then variation in any one of the να variables is unique amongst all variable variations only if the να are mutually decoupled. It follows that the motions corresponding to να must be dimensionally decoupled to maximize A1.2. Maximizing the number of independent degrees of freedom for the particle is the underlying principle, similar to maximum entropy principles from statistical mechanics.
{ν1, ν2 . . . νD} cannot be deterministic functions of one another, else they share mutual information and the total number of independent degrees of freedom for the set is reduced. Therefore,
ρ(ν1,ν2 . . . νD)=ρ(ν1)ρ(ν2) . . . ρ(νD) Equation A1.3
for a maximization. The να are orthogonal and statistically independent.
This reduces the maximization integral to a streamlined form over some interval a, b:
ℑ=∫abℑ{να,ρ(να),{dot over (ρ)}(να)}dνα
Or more explicitly:
max {ℑ}=max {H[ρ(ν1,ν2, . . . νD)]}=max {−∫(ρ(να))^D ln((ρ(να))^D)dνα}  Equation A1.4
We now define integral constraints. The first constraint is the probability measure.
Since no distinguishing feature has been introduced to differentiate ρ(να) from any joint members of ρ(ν1, ν2 . . . νD), all the integrals of A1.5 are equivalent, which requires simply:
A final constraint is introduced which limits the variance of each member function ρ(να). This variance is proportional to an entropy power and can also be viewed as proportional to an average kinetic energy.
Lagrange's method may be used to determine coefficients λα of the following formulation.
Euler's equation of the following form must be solved:
Since derivative {dot over (ρ)} constraints are absent:
From A1.10:
Since all of the D dimensions are orthogonal with identically applied constraints, D=1 is a suitable solution subset of A1.12. The problem therefore is reduced to solving:
A1.13 can be substituted into A1.7 to obtain:
Rearranging A1.15 gives:
This requires:
And:
It follows from A1.3 that the density function for the D dimensional case is simply:
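The omitted equation presumably takes the standard maximum-entropy product form for zero-mean components:

ρ(ν1,ν2 . . . νD)=Πα(2πσα^2)^−1/2 exp(−να^2/(2σα^2))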
This is the density which maximizes A1.2 subject to a fixed total energy σ2=Σασα2 where the D dimensions are indistinguishable from one another.
ν is Gaussian distributed in a D-dimensional space. This velocity has a maximum uncertainty for a given variance σα2.
Now if the particle is confined to some hyper volume it is useful to know the character of the volume. It was previously deduced that the dimensions are orthogonal. Thus we may represent the velocity as a vector sum of orthogonal velocities.
It was also determined that the ρ(να) have identical forms, i.e. they are iid Gaussian. Now let the maximum velocity νmax_α in each dimension be determined as some multiple kσα on the probability tail of the Gaussian pdf, ignoring the asymptotic portions greater than that peak. Then A1.21 may be written in an alternate form:
A1.21 together with A1.22 define a hyper sphere volume with radius.
k2 is the PAER and σp
The form of the momentum space is a hyper sphere and therefore the physical coordinate space is also a hyper sphere. This follows since position is an integral of velocity. The mean velocity is zero and therefore the average position of the space may be normalized to zero. The position coordinates within the space are Gaussian distributed since the linear function of a Gaussian RV remains Gaussian. Just as the velocity may be truncated to a statistically significant but finite value so too the physical volume containing the particle can be limited to a radius RS. Truncation of the hyper sphere necessarily comes at the price of reducing the uncertainty of the Gaussian distribution pdf in each dimension. Therefore, PAER should be selected to moderate this entropy reduction for this approximation given the application requirements.
The preceding argument justifying the hyper sphere may also be solved using the calculus of variations. Since a hyper sphere may be synthesized as a volume of revolution based on the circle, it possesses the greatest enclosed volume for a given surface. The implication is that a particle may move in the largest possible volume given fixed energy resources when the volume is a hyper sphere. The greater the volume of the space which contains the particle, the more uncertain its random location and if the particle is in motion the more uncertain its velocity. Joint representation of the momentum and position is a hyper spherical phase space.
10.2 Derivation for Maximum Velocity Profile
This Section derives the maximum velocity profile subject to a limit of Pm joules/second available to accelerate a particle from one end of a spherical space to the other where the sphere radius is Rs. Furthermore, it is assumed that the particle can execute the maneuver in Δt seconds but no faster. There is an additional constraint of zero velocity (momentum) at the sphere boundary. The maximum kinetic energy expenditure per unit time is given by:
max {{dot over (ε)}k}=Pm Equation B1.1
The particle's kinetic energy and rate of work is given by:
Since the volume is symmetrical and boundary conditions require |ν|=0 at a distance ±Rs from the sphere center:
Under conditions of maximum acceleration and deceleration the kinetic energy vs. time is a ramp, illustrated as a plot 9300 of kinetic energy versus time for maximum acceleration in
{right arrow over (q)} and {right arrow over ({dot over (q)})} are position and velocity respectively ({right arrow over ({dot over (q)})}={right arrow over (ν)}). Equations B1.5 and B1.6 can be used to obtain peak velocity over the interval Δt.
Equations B1.7 and B1.8 are defined as the peak velocity profile.
Positive and negative velocities may also be defined as those velocities which are associated with motion of the particle in the ±âr direction with respect to the sphere center.
It is possible to have ±νp over the entire domain since ±νp is rectified in the calculation of εk and boundary constraints do not preclude such motions.
Position q may be calculated from these quantities through an integral of motion:
Integration of the opposite velocity yields:
±Rs is the constant of integration in both cases which may be deduced from boundary conditions, or initial and final conditions.
The other peak velocity profile trajectories (from B1.8) yield similar relationships:
where:
The result of B1.10 may be solved for the characteristic radius of the sphere, Rs:
At this point it is possible to parametrically relate velocity and position. This can be accomplished by solving for time in equations B1.10, B1.11 and B1.12 then eliminating the time variable in the q and {dot over (q)} equations.
Equations B1.15 and B1.16 may be substituted into the peak velocity equations B1.7 and B1.8.
Similarly
10.3 Maximum Velocity Pulse Auto Correlation
Consider the piece wise pulse specification:
The auto correlation of this pulse is given by (where we drop vector notations):
$\langle\nu,\nu\rangle=\int_{-\infty}^{\infty}\nu_\alpha(t)\,\nu_\alpha(t+\tau)\,dt$ Equation C1.3
The auto correlation must be solved in segments. Since it is symmetric in time the result for the first half of the correlation response may simply be mirrored for the second half of the solution.
For the first segment of the solution the two pulses overlap with their specific functional domains determined according to their relative variable time offsets. The reference pulse functional description of course does not change but the convolving pulse domain is dynamic.
The first solution then involves solving:
The next segment for evaluation corresponds with the pulse overlap illustrated in plot 9500.
The applicable equation to be solved is:
Equations C1.8 and C1.9 have been multiplied by 2 to account for both regions of overlap.
The last segment of the solution also yields two results. The overlap region is indicated in plot 9600.
The applicable integral is:
The total solution, Equation C1.14, is found from the sum of the segmented solutions, Equations C1.6, C1.8, C1.9, C1.11, and C1.13, combined with their mirror image in time, symmetric about the peak of the autocorrelation.
The terms in C1.14 may therefore be scaled as required to normalize the peak of the autocorrelation, which corresponds to the mean square of the pulse. For instance, the peak energy of the maximum velocity pulse corresponds to a value of Pm/m. Plot 9700 illustrates the normalized result.
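The segmented closed form can be cross-checked numerically. The sketch below, again assuming the normalized values m=1, Pm=0.5, and Ts=1, constructs the maximum velocity pulse and approximates Equation C1.3 with a discrete correlation; the peak, normalized per unit time, reproduces the mean square value Pm/m.

```python
import numpy as np

# Numerical autocorrelation of the maximum velocity pulse (assumed
# normalized values m = 1, P_m = 0.5, T_s = 1). The pulse rises as
# sqrt(2*Pm*t/m) on [0, Ts] and mirrors on [Ts, 2*Ts].
m, Pm, Ts, n = 1.0, 0.5, 1.0, 4000
t = np.linspace(0.0, 2.0 * Ts, n)
v = np.where(t <= Ts,
             np.sqrt(2.0 * Pm * t / m),
             np.sqrt(2.0 * Pm * (2.0 * Ts - t) / m))

dt = t[1] - t[0]
r = np.correlate(v, v, mode="full") * dt        # discrete form of Equation C1.3

print(f"symmetric in time : {np.allclose(r, r[::-1])}")   # mirrored halves
print(f"energy at zero lag: {r.max():.4f}")               # 2*Pm*Ts^2/m = 1.0
print(f"mean square       : {r.max() / (2.0 * Ts):.4f}")  # Pm/m = 0.5
```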
10.4 Differential Entropy Calculation
Shannon's continuous entropy also known as differential entropy may be calculated for the Gaussian multi-variate. The Gaussian multi-variate for the velocity random variable is given as:
D is the dimension of the multi-variate, α and β are enumerated from 1 to D, Λ is a covariance matrix, and $(\nu_\alpha-\bar{\nu}_\alpha)$ is the deviation of each component from its mean.
From Shannon's definition:
$H[\rho(\nu)]=-\int_{-\infty}^{\infty}\rho(\nu)\ln(\rho(\nu))\,d\nu$ Equation D1.2
It is noted that:
Since there are D variables the entropy must be calculated with a D-tuple integral of the form:
$H[\rho(\nu)]=-\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\rho(\nu)\ln(\rho(\nu))\,d\nu_1\cdots d\nu_D$
$\rho(\nu)=\rho(\nu_1,\nu_2,\ldots,\nu_D)$ Equation D1.4
The D=1 case is obtained in Section 10.10. Using the same approach we may extend the result over D dimensions:
D1.5 can be rewritten with a change of variables for the second integral:
The second integral then is simply the expected value for Zα over the D-tuple which is equal to the dimension D divided by 2 for uncorrelated RVs:
The covariance matrix is given by:
$\sigma^2$ is a variance of the random variable and $\Gamma_{\alpha,\beta}$ is a correlation coefficient. The covariance is defined by:
In the case of uncorrelated zero mean Gaussian random variables, $\Gamma_{\alpha,\beta}=0$ for α≠β and 1 otherwise. Thus only the diagonal of D1.8 survives in such a circumstance. The entropy can be streamlined in this particular case to:
Equation D1.12 is the maximum entropy case for the Gaussian multi-variate.
In the case where $\nu_\alpha$ and $\nu_\beta$ are complex quantities, D1.10 will also spawn a complex covariance. In this case the elements of the covariance matrix become:
$\tilde{\Lambda}=E\{(\nu_\alpha-\bar{\nu}_\alpha)(\nu_\beta-\bar{\nu}_\beta)^T\}+E\{(\tilde{\nu}_\alpha-\bar{\tilde{\nu}}_\alpha)(\tilde{\nu}_\beta-\bar{\tilde{\nu}}_\beta)^T\}$
The complex covariance matrix can be used to double the dimensionality of the space because complex components of this vector representation are orthogonal. This form can be useful in the representation of band pass processes where a modulated carrier may be decomposed into sin(x) and cos(x) components.
Hence the uncertainty space can increase by a factor of 2 for the complex process if the variance in real and imaginary components are equal.
10.5 Minimum Mean Square Error (MMSE) and Correlation Function for Velocity Based on Sampled and Interpolated Values
Let $\tilde{\nu}_\alpha(t)=\nu_\alpha(t)\,\delta(t-nT_s)*h_t$ be a discretely encoded approximation of a desired velocity for a dynamic particle. The input samples are zero mean Gaussian distributed and the input process possesses finite power, consistent with a maximum uncertainty signal. The focus is obtaining an expression for the MMSE associated with the reconstitution of $\nu_\alpha(t)$ from a discrete representation. From the MMSE expression one can also infer the form of a correlation function for the velocity. When $\tilde{\nu}_\alpha(t)$ is compared to $\nu_\alpha(t)$, the comparison metric is a cross correlation, which becomes an autocorrelation for $\tilde{\nu}_\alpha(t)=\nu_\alpha(t)$. The inter-sample interpolation trajectories spawn from a linear time invariant (LTI) operator $*h_t$. With this background, a familiar error metric can be minimized to optimize the interpolation, where the energy of each sample is conserved:
Minimizing the error variance σε2 requires solution of:
$\nu_\alpha(t)-\nu_\alpha(t)\,\delta(t-nT_s)*h_t=0$ Equation E1.2
Impulsive forces $\delta(t-nT_s)$ are naturally integrated through Newton's laws to obtain velocity pulses. That analysis may easily be extended to tailor the forces delivered to the particle via an LTI mechanism, where $h_t$ disperses a sequence of forces in the preferable continuous manner. $h_t$ may be regarded as a filter impulse response where the integral of the time domain convolution operator is inherent in the laws of motion.
It is evident that an effective LTI or linear shift invariant (LSI) impulse response $h_{eff}=1$ provides the solution which minimizes $\sigma_\varepsilon^2$.
The expanded error kernel may be compared to a cross correlation where $h_t$ is a portion of the correlation operation. The cross correlation characteristics are gleaned from the expanded error kernel and the cross correlation definition:
$\sigma_\varepsilon(\tau,nT_s)^2=\overline{\nu_\alpha(t+\tau)^2}-2\,\overline{\nu_\alpha(t+\tau)\left[\nu_\alpha(t-nT_s)*h_t\right]}+\overline{\left[\nu_\alpha(t-nT_s)*h_t\right]^2}$ Equation E1.3
$\sigma_\varepsilon(\tau,nT_s)^2=\overline{\nu_\tau^2}-2\left|\gamma_{\tau,nT_s}\right|+\overline{\left[\nu_{nT_s}*h_t\right]^2}$
The notation has been streamlined, dropping the α subscript and adopting a two dimensional variation to allow for sample number and continuously variable time offset. The reference function $\nu_\alpha(t+\tau)$ is continuously variable over the domain while $\nu_\alpha(t-nT_s)*h_t$ is fixed. $\gamma_{\tau,nT_s}$ is the cross correlation term.
The power cross correlation function (m=1) is defined in the usual manner:
The extremes can be obtained by solving:
If the particle velocity is random, zero mean Gaussian, and of finite power, then it is known that $\gamma_{\tau,nT_s}$ satisfies:
Also, the correlation function may vary in the following manner:
This implies that the autocorrelation is zero for $\tau=nT_s\neq 0$, because E1.7 permits only a maximum or minimum value for the magnitude of the correlation coefficients. A local maximum would reflect a slope of zero, not $\pm\sigma_{nT_s}$.
One cannot further resolve the form of the correlation function which minimizes the MMSE without explicitly solving for $h_t$ or injecting additional criteria. This can be accomplished by setting $h_{eff}=1$ in Equation E1.1 and solving for $h_t$. When this additional step is accomplished, the correlation function corresponding to the optimal impulse response LTI operator takes on the form of the sinc function (see Section 3).
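The sinc-kernel conclusion can be illustrated numerically. The sketch below, assuming Ts=1 and zero mean, unit variance Gaussian samples, reconstructs a trajectory with sinc interpolation and confirms that $h_{eff}=1$ at the sample instants, i.e., the interpolant passes through every sample and conserves its energy.

```python
import numpy as np

# Sinc-kernel reconstruction of a sampled Gaussian velocity sequence
# (sketch; assumed T_s = 1 and unit-variance zero-mean samples).
rng = np.random.default_rng(1)
Ts, n = 1.0, 512
k = np.arange(n)
samples = rng.normal(0.0, 1.0, n)               # v[n], the encoded samples

t = np.arange(0, 8 * n) * (Ts / 8.0)            # dense "continuous" time grid
kernel = np.sinc((t[:, None] - k[None, :] * Ts) / Ts)
v_tilde = kernel @ samples                      # v~(t) = sum_n v[n] sinc((t - nTs)/Ts)

err = v_tilde[::8] - samples                    # values at the sample instants t = nTs
print(f"max |error| at t = nTs: {np.abs(err).max():.2e}")  # ~1e-13, i.e. h_eff = 1
```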
10.6 Max Cardinal vs. Max NL Velocity Pulse
This Section provides some support calculations for the comparison of maximum nonlinear and cardinal pulse types.
Let the fundamental cardinal pulse be given by:
The energy of the pulse is proportional to (m=1 unless otherwise indicated):
Then (for νm_card=1):
Pm_card is calculated from:
Now suppose that the prior case is compared to the maximum nonlinear velocity pulse case, where $\nu_m=1$ and $T_s=1$. Then $P_{max}=0.5$ (see Section 10.2).
The ratio of the maximum power requirements is:
This is the ratio when the pulse amplitudes are identical for both cases at time t/Ts=0. The total energies of the pulses are not equal, and the distance a particle travels over a characteristic interval Δt is not the same for both cases. The information at the peak velocity is, however, equivalent. This circumstance may serve as a reference condition for other comparisons.
One can also calculate the required velocity in both cases for which the particle traverses the same distance in the same length of time Δt=2Ts. This is a conservation of configuration space comparison. The two distances are equated by:
$2\int_0^{T_s}\nu_{max}(t)\,dt=2\int_0^{T_s}\nu_{card}(t)\,dt$
The integral on the left is the distance for a nonlinear maximum velocity pulse case and the integral on the right is the maximum cardinal pulse case. Explicitly:
νm_card is to be calculated.
$\tilde{S}_i(T_s)$ is a function of the sine integral, integrated over the range $0\le t\le T_s$, where $T_s=1$.
In terms of νmax:
The power increase at peak velocity for the cardinal pulse compared to the nonlinear maximum velocity pulse is:
This represents an increase of ~1.07 dB at peak velocity.
The Pm increase however is noticeably greater and may be calculated using ratios normalized to the reference case:
Therefore:
And:
This represents an increase of approximately 3.34 dB required for the peak power source enhancement relative to the maximum nonlinear velocity pulse case, to permit a maximum cardinal pulse to span the same physical space in an equivalent time period Δt.
It is possible to calculate the required sample time Ts for both pulse types in the case where the phase space is conserved for both scenarios and Pmax_card=Pm=1. The sample time is assigned the variable Tref for the maximum nonlinear pulse type.
νm_card is first calculated from (refer to reference case):
Therefore:
This corresponds to a bandwidth of $T_s^{-1}$, or ≈0.848 of the reference BW. Therefore, a lower instantaneous power may be traded for a reduction in bandwidth.
The characteristic radius of the cardinal pulse case is calculated from the integration of velocity over the interval Ts:
For the normalized case of $T_s=\pi$, one obtains:
$R_s=(1.85)\,\nu_{max\_card}$
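The numbers quoted in this Section can be reproduced with a short calculation. The sketch below, assuming the normalized reference case (m=1, Pm=0.5, Ts=1, so the nonlinear pulse spans 4/3 in Δt=2Ts), recovers the sine integral factor Si(π)≈1.85 and the ~1.07 dB peak power penalty.

```python
import numpy as np
from scipy.special import sici

# Cardinal vs. maximum nonlinear velocity pulse (sketch; normalized
# reference case m = 1, P_m = 0.5, T_s = 1 assumed).
Si_pi = sici(np.pi)[0]                  # sine integral Si(pi)

dist_nl = 4.0 / 3.0                     # 2 * int_0^1 sqrt(2*Pm*t) dt with Pm = 0.5
dist_card_unit = 2.0 * Si_pi / np.pi    # 2 * int_0^1 sinc(t) dt, unit amplitude

v_m_card = dist_nl / dist_card_unit     # amplitude for equal span in dt = 2*Ts
penalty_db = 10.0 * np.log10(v_m_card ** 2)

print(f"Si(pi)             : {Si_pi:.4f}")          # ~1.8519, the 1.85 factor in Rs
print(f"v_m_card           : {v_m_card:.4f}")       # ~1.131
print(f"peak power penalty : {penalty_db:.2f} dB")  # ~1.07 dB, as stated
```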
10.7 Cardinal TE Relation
The TE relation is examined as it relates to a maximum cardinal pulse. Also, the two pulse energies are compared. Although the two structures are referred to as pulses, they are applied as profiles or boundaries in Section 3, restricting the trajectory of dynamic particles.
The general TE relation is given by:
In the case of the most expedient velocity trajectory to span a space, kp=1. This bound results in a nonlinear equation of motion. Therefore, a physically analytic design will constrain motions to avoid the most extreme trajectory associated with the kp=1 case, or else modify kp.
The nature of the TE relation can be revealed in an alternate form:
Pmax is defined as the maximum instantaneous power of a pulse over the interval Ts. εk_max is the maximum kinetic energy over that same span of time. Then, from Appendix F, the cardinal pulse will have the following values for kp:
Case 1: $(\varepsilon_{k\_max\_card}/\varepsilon_{k\_max})=1,\;(T_{s\_max\_card}/T_{s\_max})=1,\;(R_{s\_max\_card}/R_{s\_max})=1$
The subscript “max_card” refers to the maximum cardinal pulse type and the subscript “max” references the maximum nonlinear pulse type.
The total pulse energies for the two cases above are not equivalent. It should be noted that the energy average for the cardinal pulse is per unit time Ts. The total energies for both pulse types are given by:
If both energies are equated, then:
This reveals a static relation between the two pulse types whenever total energies are equal, which can be restated simply as:
10.8 Relation Between Instantaneous Efficiency and Thermodynamic Efficiency
In this Section, two approaches for efficiency calculations are compared to provide alternatives in algorithm development. Optimization procedures may favor an indirect approach to the maximization of thermodynamic efficiency. In such cases, an instantaneous efficiency metric may provide significant utility. This Section does not address those optimization algorithms.
Thermodynamic efficiency possesses a very particular meaning: it is determined from the ratio of the mean values of two random variables.
Calculation of this efficiency precludes reduction of the power ratio prior to calculating the average. This fact can complicate the calculations in some circumstances. In contrast, consider the case where the ratio of powers is given by:
η and ηinst do not possess the same meaning yet are correlated. It is often useful to reduce ηinst rather than η to obtain an optimization, the former implying the latter.
The proper thermodynamic calculation begins with the ratio of two differing RVs. The numerator is a non-central gamma or chi squared RV for the canonical case, which is obtained from the variable $(\tilde{V}_L-V_L)^2$, where $\tilde{V}_L$ is approximately Gaussian for $\sigma\ll V_s$. The completed transformation is given by:
This can also be obtained from the more general non-central Gamma multivariable sum:
where N=1 in the reduced form, $I_{(N-2)/2}$ is a modified Bessel function of the first kind, and $\sigma^2$ is the variance of the Gaussian RV. The more general result applies to an arbitrary sum of N Gaussian signals with corresponding non-zero means.
The denominator of the thermodynamic efficiency is obtained from the sum of two RV's. One is positive non central Gaussian and the other is identical to ρ(X).
Hence, the proper thermodynamic waveform efficiency is obtained from (where statistical and time averages are equated):
One can work directly with this ratio or time averaged equivalents whenever the process is stationary in the wide sense. Sometimes the statistical ratio presents a formidable numerical challenge, particularly in cases of optimization where calculations must be obtained “on the fly.”
On the other hand, the averaged instantaneous power ratio is (where statistical and time averages are equated):
Now η and ηinst_WF are always obtained from the same fundamental quantities Pout and Pin with similar ratios and therefore are correlated. In fact they are exactly equivalent prior to averaging.
The instantaneous waveform power ratio for a type one electronic information encoder or modulator is given by:
where Zr is the ratio of power source impedance to load impedance. The meaning of this power ratio is an instantaneous measure of work rate at the system load vs. the instantaneous work rate referred to the modulator input. It is evident that the right hand side may reduce whenever the numerator and denominator terms are correlated. This reduction generally affords some numerical processing advantages.
One can verify that the thermodynamic waveform efficiency is always greater than or equal to the instantaneous waveform efficiency for the type 1 modulator.
Likewise:
The numerator and denominator can be divided by the same constant.
This result implies that:
η≥ηinst
always, because:
Whenever the signal component $V_L^2>0$, then $\sigma^2>0$ and the thermodynamic efficiency is the greater of the two quantities.
Optimizing ηinst
This optimization is not arbitrary however and must consider the uncertainty required for a prescribed information throughput which is determined by the uncertainty associated with the random signal.
is therefore moderated by the quantity $\sigma^2$. As $\sigma^2$, the information signal variance, increases, the quantity
must adjust such that the dynamic range of available power resources is not depleted, or the characteristic pdf for the information otherwise altered. In all cases of interest, the maximum dynamic range of available modulation is allocated to the signal. For symmetric signals this implies that
for maximum dynamic range and that the power source impedance is zero. Whenever the source impedance is not zero then the available signal dynamic range reduces along with efficiency.
An example illustrates the two efficiency calculations.
The apparatus comprises the variable impedance (in this case resistance) Re{ZΔ} and the load ZL. One is concerned with the efficiency of this arrangement when the modulation is approximately Gaussian. Zs impacts the efficiency because it reduces the available input power to the modulator at ZΔ. Vs is a measurable quantity whenever the apparatus is disconnected. Likewise, Re{ZΔ} can be deduced from measurements in static conditions before and after the circuit is connected, provided ZL and ZΔ are known. The desired output voltage across the load is obtained by modulating ZΔ with some function of the desired uncertainty H(x). The output VL is offset Gaussian for the case of interest and is given by:
Using the method of instantaneous efficiency, one obtains a continuous pdf for ηinst_WF.
The thermodynamic waveform efficiency is found from:
Thus, the thermodynamic waveform efficiency is greater than the averaged instantaneous waveform efficiency in this example.
η may also be obtained from the statistical ratio:
The denominator pdf for Pin is the difference of the 1 W for Pout and the RV formed by the multiplication VsVL, where VL is non-central Gaussian. The marker is near the theoretical mean of 0.2725. The relative histogram for this RV is shown in plot 10800.
The marker m6 is near the theoretical mean of 0.7275. Calculating the means of these two distributions and taking their ratio yields the thermodynamic waveform efficiency. The proper thermodynamic efficiency must remove the effect of the offset term in the numerator, leaving a numerator dependent on the information bearing portion of the waveform only. Section 10.9 further explores the relationship between η and $\check{\eta}$.
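The ordering η ≥ ηinst can be reproduced with a Monte Carlo sketch. The circuit values below are assumed for illustration only (ideal source with Zs=0, ZL=1 ohm, an offset Gaussian VL) and are not the measured values of this example; the ratio-of-means and mean-of-ratios calculations nevertheless exhibit the inequality directly.

```python
import numpy as np

# Monte Carlo comparison of thermodynamic vs. averaged instantaneous
# waveform efficiency for a type 1 modulator (sketch; assumed values:
# Vs = 1 V, Z_L = 1 ohm, Zs = 0, offset Gaussian V_L held inside {0, Vs}).
rng = np.random.default_rng(2)
Vs, ZL = 1.0, 1.0
VL_bar, sigma = 0.5, 0.12                       # assumed operating point
VL = np.clip(rng.normal(VL_bar, sigma, 1_000_000), 1e-9, Vs)  # floor avoids 0/0

P_out = VL ** 2 / ZL                            # instantaneous load power
P_in = Vs * (VL / ZL)                           # instantaneous source power

eta = P_out.mean() / P_in.mean()                # thermodynamic: ratio of means
eta_inst = (P_out / P_in).mean()                # mean of instantaneous ratios

print(f"eta      (ratio of means): {eta:.4f}")       # ~0.529 for these values
print(f"eta_inst (mean of ratios): {eta_inst:.4f}")  # ~0.500
# eta / eta_inst = E[VL^2] / E[VL]^2 = 1 + (sigma / VL_bar)^2 >= 1 always,
# with equality only when the signal variance sigma^2 is zero.
```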
Certain procedures of optimization involving time averages can favor working with thermodynamic efficiency directly. However, if an optimization is based on statistical analysis, then instantaneous efficiency can be a preferable variable which in turn implies an optimized thermodynamic efficiency under certain conditions.
10.9 Relation Between Waveform Efficiency and Thermodynamic or Signal Efficiency and Instantaneous Waveform Efficiency
This Section provides several comparisons of waveform and signal efficiencies. The comparisons provide a means of conversion between the various forms which can provide some analysis utility.
First, the proper thermodynamic waveform and thermodynamic signal efficiencies are compared for a type one modulator where Zr=1.
ηsig considers only the signal power as a valid output. This is as it should be since DC offsets and other anomalies do not encode information and therefore do not contribute positively to the apparatus deliverable. However, ηWF is related to ηsig and therefore is useful even though it retains the offset. If the maximum available modulation dynamic range is used then maximization of ηWF implies maximization of ηsig.
ηWF, ηsig may also be expressed in terms of the PAPR metric.
In the above equations PAPRwf/sig refers to the peak waveform to average signal power ratio and PAPRwf refers to the peak waveform to average waveform power ratio. These equations apply for PAPRwf>4 when the peak to peak signal dynamic range spans the available modulation range between 0 volts and Vs/2 volts at the load, and Zr=1. The dynamic range is determined by Zr, the ratio of source to load impedance.
Signal based thermodynamic efficiency can be written as:
Therefore, if ηWF and PAPRwf are known, then $\tilde{\eta}$ may be calculated. Also, increasing ηWF increases $\tilde{\eta}$. Under these circumstances, $\tilde{\eta}\le 1/2$.
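The two PAPR metrics can be made concrete with a short sketch. The waveform below is an assumed offset Gaussian example occupying the span {0, Vs/2} (the Zr=1 case); the peak is estimated from the realized samples rather than taken from a closed form.

```python
import numpy as np

# PAPR_wf vs. PAPR_wf/sig for an offset Gaussian load waveform (sketch;
# assumed example: Vs = 1, mean Vs/4, sigma = Vs/16, span {0, Vs/2}).
rng = np.random.default_rng(3)
Vs = 1.0
wf = np.clip(rng.normal(Vs / 4.0, Vs / 16.0, 1_000_000), 0.0, Vs / 2.0)

p_wf = wf ** 2                                  # waveform power, offset retained
p_sig = (wf - wf.mean()) ** 2                   # signal power, offset removed

PAPR_wf = p_wf.max() / p_wf.mean()              # peak wf / average waveform power
PAPR_wf_sig = p_wf.max() / p_sig.mean()         # peak wf / average signal power

print(f"PAPR_wf     : {PAPR_wf:.2f}")
print(f"PAPR_wf/sig : {PAPR_wf_sig:.2f}")
# PAPR_wf/sig exceeds PAPR_wf by the factor 1 + (mean/sigma)^2 here, since
# removing the DC offset leaves only the information-bearing variance.
```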
Now suppose that Zr≈0, corresponding to the most efficient canonical case for a type 1 modulator. In this case, the maximum waveform voltage equals the open circuit source voltage, Vs.
The relevant relationships follow:
$\tilde{\eta}$ above is considered as a canonical case.
General cases where Zr≠0 can be solved using the following equations:
When Zr=1, then:
When Zr=0, then:
ZΔ is a variable impedance which implements the modulation. Its function is illustrated in Section 10.8.
Thermodynamic signal efficiency is similarly determined:
We can confirm the result by testing the cases Zr=0 and Zr=1.
Instantaneous Efficiency
In addition to proper thermodynamic efficiencies, it is possible to compare the instantaneous waveform and thermodynamic signal efficiencies discussed in Section 10.8. The most general form of the instantaneous power ratio is:
This is the instantaneous waveform efficiency, $\eta_{inst\_WF/\sigma^2}$, given a required signal variance.
It is desirable to minimize Zr to maximize efficiency. For the case of a single potential Vs, i.e. the case of a type one modulator, the maximum symmetric signal swing about the average output potential is always $\tilde{V}_m=V_{L\_max}/2=V_L$. Increasing Zr above zero diminishes the signal dynamic range, converting this loss to heat in the power source. The quantity Vs/[2(1+Zr)] is always considered as a necessary modulation overhead for a type 1 modulator.
Increasing VL increases the peak signal swing $\tilde{V}_m$ and therefore always increases the signal variance for a specified PAPR, and hence increases $\eta_{inst\_WF/\sigma^2}$.
VL is defined in terms of impedances and Vs above. From the definition, $0\le\check{\eta}/\sigma^2$; as Zr tends to infinity, $\tilde{\eta}$ tends to zero.
Although the prior discussions focus on symmetric signal distributions (for instance, Gaussian-like), arbitrary distributions may be accommodated by suitable adjustment of the optimal operating mean $\langle V_L\rangle$. In all circumstances, however, the available signal dynamic range must contemplate maximum use of the span {Vs, 0}.
Source Potential Offset Considerations
The prior equations are based on circuits which return currents to a zero voltage ground potential. If this return potential is not zero then the formulas should be adjusted. In all prior equations, one can substitute Vs=Vs1−Vs2 where Vs1, Vs2 are the upper and return supply potentials, respectively. In such cases, the optimal VL is the average of those supplies when the pdf of the signal is symmetric within the span {Vs1,Vs2}. Otherwise, the optimal operational VL is dependent on the mean of the signal pdf over the span {Vs1, Vs2}. The offset does not affect the maximum waveform power, Pm_wf. However, the maximum signal power is dependent on the span {Vs1,Vs2} and the average VL. The signal power is dependent only on σ and any additional requirement to preserve the integrity of the signal pdf.
10.10 Comparison of Gaussian and Continuous Uniform Densities
This Section provides a comparison of the differential entropies for the Gaussian and uniform pdfs. The calculations reinforce the results from Section 10.1, where it is shown that the Gaussian pdf maximizes Shannon's entropy for a given variance σG2. This Section also confirms Section 10.4's calculations for the case D=1. There is a particular variance ratio σu2/σG2 for which, when exceeded, the uniform density possesses an entropy greater than that of the Gaussian; this ratio is calculated. Finally, the PAPR is compared for both cases.
First, a calculation is presented of the entropy of the Gaussian density in a single dimension, D=1.
Applying the following two definite integral formulas obtained from a CRC table of integrals yields:
The final result is
Now the entropy Hu is obtained.
Let the uniform density possess symmetry with respect to x=0, the same axis of symmetry for a zero offset (zero mean) Gaussian density.
The variance is obtained from:
Now one can begin the direct comparison between HG and Hu.
Let σG2=σu2. Then:
$u_l=\sqrt{3}\,\sigma_G$ for $\sigma_G^2=\sigma_u^2$
Therefore:
$H_G=\ln(\sqrt{2\pi e}\,\sigma_G)\cong\ln(4.1327\,\sigma_G)$
$H_u=\ln(2\sqrt{3}\,\sigma_G)\cong\ln(3.4641\,\sigma_G)$
HG is always greater than Hu for a given equivalent variance for the two respective densities.
Considering the circumstance where Hu≥HG and σu2≠σG2:
Therefore, the entropy of a uniformly distributed RV must possess a noticeable increase in variance over that of the Gaussian RV to encode an equivalent amount of information.
It is also instructive to obtain some estimate of the required PAPR for conveying the information in each case. In a strict sense, the Gaussian RV requires an infinite PAPR. However it is also known that a PAPR≥16 is sufficient for all practical communications applications. In the case of a continuously uniformly distributed RV:
Suppose ul is calculated for the case where Hu=HG. Let σG2=1 for the comparison.
ul≅2.066
To obtain the entropy HG the upper limit, ulG, for the Gaussian RV must be at least 4. This means that roughly 4 times the peak power is required to encode information in the Gaussian RV compared to the uniform RV, whenever Hu=HG. Likewise, one can calculate PAPRG/PAPRu≅5.
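The figures quoted in this comparison follow directly from the two entropy expressions. A short numeric check, assuming σG2=1 and the practical 4σ Gaussian peak used above:

```python
import numpy as np

# Gaussian vs. uniform differential entropy comparison (sketch; assumes
# sigma_G^2 = 1 and the practical 4*sigma Gaussian peak noted above).
sigma_G = 1.0
H_G = np.log(np.sqrt(2.0 * np.pi * np.e) * sigma_G)   # ~ ln(4.1327)

u_l = np.exp(H_G) / 2.0                  # solve H_u = ln(2*u_l) = H_G for u_l
print(f"u_l for H_u = H_G : {u_l:.3f}")  # ~2.066, as stated

papr_G = (4.0 * sigma_G) ** 2 / sigma_G ** 2          # practical Gaussian PAPR = 16
papr_u = u_l ** 2 / (u_l ** 2 / 3.0)                  # uniform PAPR = 3, any u_l
print(f"PAPR_G / PAPR_u   : {papr_G / papr_u:.2f}")   # ~5.33, i.e. roughly 5
```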
10.11 Entropy Rate and Work Rate
The reader is referred to Sections 4, 10.1, and 10.4 to supplement the following analysis. Maximizing the transfer of physical forms of information entropy per unit time requires maximization of work. This can be demonstrated for a joint configuration and momentum phase space. The joint entropy is:
Maximum entropy occurs when configuration and momentum are decoupled based on the joint pdf:
It is apparent that the joint entropy is that of a scaled Gaussian multivariate and:
$H=H_q+H_p$ Equation K1.2
Hq and Hp are the uncertainties due to independent configuration (position) and momentum coordinates, respectively. If one wishes to maximize the information transfer per unit time, one needs to ensure the maximum rate of change in the information bearing coordinates {q,p}. When the particle possesses the greatest average kinetic energy, it will traverse greater distances per unit time. Hence, one need only consider the momentum entropy to obtain the maximization sought.
Therefore maximizing K1.3 one can write:
$\max\{e^{2H_p}\}=\max\{(\sqrt{2\pi e})^{2D}|\Lambda_p|\}$ Equation K1.5
Recognizing that (√{square root over (2πe)})2D is constant and that D is represented exponentially in the second term of K1.5, permits a simplification:
$\max\{e^{2H_p}\}\Rightarrow\max\{|\Lambda_p|\}$ Equation K1.6
Suppose that we represent the covariance in terms of the time variant vector $\vec{p}$. K1.6 is further simplified:
$\max\{|\vec{p}\cdot\vec{p}\,|\}=\max\{|\Lambda_p|^{D}\}$ Equation K1.7
We now take the maximization with respect to the equivalent energy and work form where mass is a constant:
$\max\{\dot{\vec{q}}\cdot\dot{\vec{p}}\}=\max\{\dot{\varepsilon}_k\}$ Equation K1.8
Equations K1.8 and K1.7 are equivalent maximizations when time averages are considered. Equation K1.8 converts the kinetic energy inherent in the covariance definition of $\Lambda_p$ to a power. It defines a rate of work which maximizes the rate of change of the information variables $\{\vec{q},\vec{p}\}$. This is confirmed by comparison with a form of the capacity equation given in Section 5:
The variances of Equation K1.9 are per unit time, and the terms $\dot{\vec{p}}_\alpha\cdot\dot{\vec{q}}_{\alpha,eff}$ in Equation K1.10 define an effective work rate in the αth dimension for the encoded particle. Increasing $\dot{\vec{p}}_\alpha\cdot\dot{\vec{q}}_{\alpha,eff}$ increases capacity. Although this argument is specific to the Gaussian RV case, it extends to any RV due to the arguments of Section 5, which establish pseudo capacity as a function of PAPR and entropy ratios compared to the Gaussian case. If one wishes to increase the entropy of any RV, one must increase $P_{max}$ for a given $\dot{\vec{p}}_\alpha\cdot\dot{\vec{q}}_{\alpha,eff}$. Conversely, if a fixed PAPR is specified, increasing $\dot{\vec{p}}_\alpha\cdot\dot{\vec{q}}_{\alpha,eff}$ increases $P_{max}$ by definition, and the phase space volume increases with a corresponding increase in uncertainty.
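The monotone link between work rate and momentum uncertainty can be illustrated with a short sketch; the work rate values and the equal split of kinetic energy across dimensions below are assumptions for illustration only.

```python
import numpy as np

# Momentum entropy vs. average work rate (sketch; assumes iid momentum
# components and kinetic energy split equally across D dimensions over a
# unit time interval, so sigma_p^2 = 2*m*e_k_dot/D).
D, m = 3, 1.0
for work_rate in (0.5, 1.0, 2.0, 4.0):          # assumed e_k per unit time
    var_p = 2.0 * m * work_rate / D             # per-dimension momentum variance
    H_p = 0.5 * D * np.log(2.0 * np.pi * np.e * var_p)
    print(f"work rate {work_rate:4.1f} -> H_p = {H_p:6.3f} nats")
# Each doubling of the work rate adds (D/2)*ln(2) nats of momentum
# uncertainty, consistent with the maximization expressed in K1.8.
```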
10.12 Optimized Efficiency for an 802.11a 16 QAM Case
This Section highlights aspects of the calculations and measurements involved in the optimization of a zero offset implementation of an 802.11a signal possessing a PAPR of ~12 dB. A testing apparatus is illustrated in schematic 11100.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections can set forth one or more but not all example embodiments of the present invention as contemplated by the inventors, and thus, are not intended to limit the present invention and the appended claims in any way.
Embodiments of the present invention have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the relevant art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by a person skilled in the relevant art in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application is a continuation of U.S. Nonprovisional patent application Ser. No. 14/679,928, filed Apr. 6, 2015, titled "An Optimization of Thermodynamic Efficiency vs. Capacity for Communications Systems," which claims the benefit of U.S. Provisional Patent Application No. 61/975,077, filed Apr. 4, 2014, titled "Thermodynamic Efficiency vs. Capacity for a Communications System," U.S. Provisional Patent Application No. 62/016,944, filed Jun. 25, 2014, titled "Momentum Transfer Communication," and U.S. Provisional Patent Application No. 62/115,911, filed Feb. 13, 2015, titled "Optimization of Thermodynamic Efficiency Versus Capacity for Communications Systems," all of which are hereby incorporated herein by reference in their entireties.