ELASTIC TIMESTAMPING

Information

  • Patent Application
  • 20190013954
  • Publication Number
    20190013954
  • Date Filed
    December 21, 2017
  • Date Published
    January 10, 2019
Abstract
A method and system for elastic timestamping for use in computing and networking applications, including telemetry, is disclosed herein. A device that is part of a system may initially generate a variable size timestamp, or elastic n-dimensional timestamp (ENTS), with n time dimension fields for a corresponding event in the system for which timing or temporal order information is needed. The device may select a subset of the n time dimension fields of the ENTS based on a relevant time granularity of the corresponding event to generate a compact ENTS with a reduced size. The device may communicate the compact ENTS for further processing. In an example, the ENTS may be generated for a device-specific action performed to gather telemetry data in response to a telemetry intent received at the device, and the compact ENTS may be communicated with a corresponding telemetry response.
Description
FIELD OF INVENTION

The disclosure relates generally to a system and method for variable size elastic timestamping for use in computing and networking applications.


BACKGROUND

Timestamping, as used in computing and networking systems, involves recording digital timestamp values, which may be used to create a temporal order among a set of events. Timestamping techniques are used in a variety of computing fields, such as network management, computer security, and concurrency control. Timestamping is often used in network telemetry, which involves the use of automated tools and processes designed to collect measurements and other data at points throughout a network, which can then be used for network monitoring and performance analysis. Timestamping for telemetry can be used to verify the freshness of collected telemetry data and the temporal ordering of telemetry data within a flow. For example, when using telemetry to analyze network congestion by looking at queue lengths, the time at which the queue length was measured is needed to correlate the queue length information to the flows present. In another example, for route tracing telemetry applications, the temporal information may be used to analyze bottlenecks in the network for specific flows. Thus, a sense of time, or “time plus context”, provided by timestamping is frequently needed for telemetry data to be useful.


SUMMARY

A method and system for elastic timestamping for use in computing and networking applications, including telemetry, is disclosed herein. A device that is part of a system may initially generate a variable size timestamp, or elastic n-dimensional timestamp (ENTS) (a first timestamp), with n time dimension fields for a corresponding event in the system for which timing or temporal order information is needed. The device may select a subset of the n time dimension fields of the ENTS based on a relevant time granularity of the corresponding event to generate a compact ENTS (a second timestamp) with a reduced size. The device may communicate the compact ENTS for further processing. In an example, the ENTS may be generated for a device-specific action performed to gather telemetry data in response to a telemetry intent received at the device, and the compact ENTS may be communicated with a corresponding telemetry response. In an example, the disclosed elastic timestamping system and method are used in a packet-optical in-band telemetry (POINT) framework designed for gathering multi-layer telemetry data in packet-optical networks.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a high-level diagram of an example packet-optical in-band telemetry (POINT) framework implemented in an example packet-optical network, in accordance with the disclosures herein;



FIG. 2 is a flow diagram of an example telemetry processing procedure that meets data freshness requirements and includes ENTS in the telemetry responses, in accordance with the disclosures herein;



FIG. 3 is a flow diagram of an example elastic timestamping procedure with event freshness for use in a computing or networking system, in accordance with the disclosures herein; and



FIG. 4 is a block diagram of a computing system in which one or more disclosed embodiments may be implemented.





DETAILED DESCRIPTION OF THE EMBODIMENTS

A method and system for elastic timestamping with variable and adjustable size for use in computing and networking systems are disclosed, where an entity or device in the system generates an elastic n-dimensional timestamp (ENTS) for a corresponding event or action in the system for which timing or temporal order information is needed. Each of the n fields or dimensions (time dimension fields) of the ENTS provides a different granularity in time (e.g., hours, minutes, seconds, milliseconds, etc.), and each of the n fields has a length that may be different from that of the other fields. The device selects a subset of the n dimensions of the ENTS to generate a reduced or compact ENTS (i.e., a compact timestamp) for communication (e.g., reporting to a receiving entity for processing, recording to storage, displaying to an operator, etc.) by determining the relevant time granularity for the corresponding event. The device may also communicate an indication of which dimensions are included in and/or excluded from the ENTS. The device may maintain and periodically report a mapping from the ENTS to a local clock value at the device. The device may receive event or data freshness criteria or thresholds, use the ENTS to ensure that reported system events or actions meet those criteria, and drop information for system events that do not.


Examples of the disclosed timestamping system are given herein with reference to telemetry applications; however, any of the disclosed timestamping mechanisms, alone or in combination, may be used in any application that makes use of timestamping, including, but not limited to, any computing or network application that requires record keeping, network administration and management applications, computer security applications, concurrency control applications, and synchronization applications.


Providing temporal information as part of telemetry data may be used to verify freshness of the telemetry information collected and temporal ordering of telemetry data within a flow. For telemetry applications, observation entails the act of actual telemetry data measurement by the hardware (e.g., measuring bit error rate (BER), measuring signal-to-noise ratio (SNR)) in response to a telemetry request, and recording the data involves collecting, logging, and/or displaying the response to the telemetry request. Consequently, there may be a delay between the observation of telemetry data and the recording of telemetry data. In some cases, a timestamp for when observations were made may not be available, and recording of the observation may happen at periodic intervals (e.g., every 15 minutes, once a day, etc.) instead of at the actual time of observation. Thus, the timestamp associated with the collection of telemetry data may not have the fine detail needed to order relevant network-wide events and effectively serve the purpose of telemetry to troubleshoot issues across a network. This problem may be even more pronounced in multi-layer networks comprising both packet and optical devices, where delays between observation and recording of data may be greater.


In an example, data plane timestamping (i.e., where the timestamps are added to the telemetry data and transmitted in packets/frames, for example in the header, across the network) may be used to provide the temporal ordering of events on the same device, where the events include repeated readings of specific telemetry data from the device (e.g., link utilization, power levels, temperature, etc.). When the timestamping information is included with the telemetry data, the collected telemetry data may then be properly interpreted and correlated at a processing device (e.g., at a sink node) and placed in the correct time sequence, even if the data arrives out of order. Telemetry data received without temporal information may be unreliable because the data may arrive out of order relative to how it was originally observed and/or transmitted.


Data plane timestamping may also be used to provide temporal ordering of network events that may occur across different devices, for example when network telemetry is used to analyze the impact of transient fiber failures in an optical or packet-optical network. Different devices in a network may have different clock values, and thus temporal ordering across devices is essential to be able to correlate the pieces of data collected from different network devices. Moreover, in many networks, it may not be realistic to assume that clocks across devices are synchronized. Clock synchronization may be coordinated across different devices but is typically a resource-consuming task. Therefore, temporal ordering is particularly useful when absolute clock values are not consistent.


While data plane timestamping has many practical and essential uses for telemetry applications, it may also be very expensive to implement because it adds large amounts of overhead to the telemetry data. A timestamp will include detailed time information such as year, month, day, hour, minute, second, and fraction of a second. For example, according to the network time protocol (NTP) timestamp format, a timestamp occupies 8 bytes (64 bits) with two 32-bit fields: an integer part of the number of seconds since the base date (i.e., a reference start time for the local clock) and the fractional part of the number of seconds (or nanoseconds field). In some use cases, the timestamp may only be needed at the ingress node and the egress node, for example to measure the overall delay across the network. However, certain network measurements require data and associated timestamp information to be collected at each hop along the network path (e.g., when measuring delay across particular links, or when trying to locate a particular link failure). In these cases, the accumulation of timestamp information for each recorded piece of telemetry data generates a substantial amount of overhead data.
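
To make the fixed overhead concrete, the following minimal sketch packs an NTP-style timestamp into its two 32-bit fields (seconds since the 1900 base date plus a fractional-second field). The helper name and Python implementation are illustrative, not part of the disclosure.

```python
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP base date (1900-01-01) and the Unix epoch

def pack_ntp_timestamp(unix_time: float) -> bytes:
    """Pack a time value into the 8-byte NTP format: 32-bit seconds + 32-bit fraction."""
    ntp_time = unix_time + NTP_EPOCH_OFFSET
    seconds = int(ntp_time)
    fraction = int((ntp_time - seconds) * (1 << 32))  # fractional seconds scaled to 2^32
    return struct.pack("!II", seconds & 0xFFFFFFFF, fraction & 0xFFFFFFFF)

# Every event stamped this way carries a fixed 8 bytes, regardless of the time
# granularity the telemetry application actually needs.
print(len(pack_ntp_timestamp(time.time())))  # -> 8
```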


An existing solution proposes using a differential timestamp, where a time difference or “delta” value from the timestamp of the previous event is communicated to reduce overhead. However, differential timestamps require that state be maintained at various points in the network and are limited in their use to timestamping on a per-flow basis only.


To address the above issues, the disclosed elastic timestamping system uses a timestamping format that can be adjusted in size, in terms of number of fields and/or total bits, to generate a compact timestamp that meets the desired time granularity needed by telemetry applications (or other computing and networking applications) while reducing overhead. The term “elastic” refers to the fact that the timestamp can be dynamically adjusted while still effectively conveying the relevant time information. With the disclosed compact timestamps, only the time dimensions that are most relevant to the telemetry data being collected are to be communicated with a response. The communication of other dimensions (e.g., higher dimensions such as date, day, or hour) may be amortized over several responses and thus not communicated with each response to reduce overhead. For example, if bit errors are being collected over the course of minutes, then there is no need to communicate the millisecond (or microsecond) timestamp values, and/or the date values may be amortized over several responses sent during the course of a particular day.


In an example, a timestamp is generated at a device to record the time at which a telemetry event occurred (e.g., by reading the value of a clock running locally at the device). In accordance with the disclosed variable size elastic timestamping system, the timestamp has an elastic n-dimensional timestamp (ENTS) format with up to n fields: [T1,1 . . . T1,m : T2,1 . . . T2,p : . . . : Tn,1 . . . Tn,q]. Each field may provide a different unit or granularity of time (also referred to as a dimension or time dimension field of the ENTS). For example, for n=4 fields (dimensions), the fields may represent the hours, minutes, seconds, and milliseconds of the timestamp (e.g., HH:MM:SS:MSS). Each field has a respective length in bits, m, p, . . . , q, such that the lengths may or may not be the same (e.g., m≠p≠ . . . ≠q). In an example, the number of fields n and the lengths of the fields m, p, . . . , q for the ENTS may be provided or fixed in advance for the participating devices in the network or configured dynamically during network operation. In an example, each participating device may derive the ENTS from a local clock value. For example, the local clock value may be read from the local or system clock at the device. In another example, the local clock value may be a subset of bits of the local clock.
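
A minimal sketch of such a format follows, assuming the n=4 example layout (HH:MM:SS:MSS) with per-field bit lengths chosen to just cover each dimension's range; the field names, widths, and helper are illustrative assumptions rather than a mandated encoding.

```python
from collections import OrderedDict
import time

# Hypothetical ENTS layout: n = 4 time dimension fields, each with its own bit length.
ENTS_FIELDS = OrderedDict([
    ("hour", 5),          # 0-23 fits in 5 bits
    ("minute", 6),        # 0-59 fits in 6 bits
    ("second", 6),        # 0-59 fits in 6 bits
    ("millisecond", 10),  # 0-999 fits in 10 bits
])

def generate_ents(local_clock=None):
    """Derive all n ENTS dimensions from the device's local clock value."""
    now = time.time() if local_clock is None else local_clock
    t = time.gmtime(now)
    return {
        "hour": t.tm_hour,
        "minute": t.tm_min,
        "second": t.tm_sec,
        "millisecond": int((now % 1) * 1000),
    }
```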


According to the disclosed elastic timestamping system, a selected subset of dimensions (or fields) of the ENTS is communicated for each piece of telemetry data collected, according to the relevant granularity for the time context of the event measured by the telemetry data. For example, a total number of dimensions s&lt;n may be communicated as a compact or compressed ENTS (i.e., compact timestamp), such that the selected dimensions may include one or more ENTS dimensions closest to the granularity relevant to the telemetry observation. In some cases, the receiver that processes the telemetry data (e.g., the sink node) may have additional time information to expand the compact ENTS into an expanded timestamp.


In an example, events that need to be distinguished in time at the millisecond (ms) granularity (e.g., an alarm indication for loss of packet or frame) need not send the hourly values (hourly field) of the ENTS. Similarly, events that need to be distinguished in time at the hourly granularity may not need to send the ms values (ms field) of the ENTS. For example, for telemetry data that gathers average temperature readings at each device averaged over an hour-long time window, the relevant ENTS dimensions are the day and the hour within the day over which the temperature readings were averaged. Second and millisecond information is not needed in this case, and thus the second and millisecond ENTS dimensions may not be communicated with the telemetry data. As an example of overhead savings, an ENTS of 22 bits in length can be used to monitor events at millisecond granularity within an overall period of one hour, removing the higher and lower time dimensions compared to an NTP timestamp of 64 bits and reducing the size (length) in bits of the timestamp by over 65%.
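
The 22-bit figure follows from simple arithmetic: one hour spans 3,600,000 distinct millisecond values, which need 22 bits to enumerate, a reduction of roughly 65.6% relative to a 64-bit NTP timestamp. A quick check:

```python
import math

ms_per_hour = 60 * 60 * 1000                     # 3,600,000 millisecond values in one hour
bits_needed = math.ceil(math.log2(ms_per_hour))  # smallest field width that covers the range
savings = 1 - bits_needed / 64                   # relative to a 64-bit NTP timestamp

print(bits_needed)       # 22
print(f"{savings:.1%}")  # 65.6%
```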


Once the dimensions for the ENTS are selected, then only the selected ENTS dimensions for the corresponding telemetry data are used to generate a compact (compressed) ENTS. The compressed ENTS may be communicated to the responsible entity (e.g., the sink node) for further telemetry processing and analysis. In an example, the device may communicate the selected ENTS dimensions associated with particular telemetry data along with the telemetry data and may insert the selected ENTS dimensions in the packet or frame header carrying the corresponding telemetry intent or may communicate the selected ENTS dimensions with the corresponding telemetry data via another route such as in an out-of-band channel. In another example, an indication (e.g., a few bits) may be included with the selected ENTS dimensions and telemetry data to indicate the presence or absence of particular dimensions in the selected ENTS. For example, the indication may indicate with the bit sequence “0011” that the hour and minute dimensions are absent and the second and millisecond dimensions are present in the ENTS carried in the header.
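
The following is a minimal sketch of how such a compact ENTS and presence indication might be encoded and decoded, reusing the hypothetical 4-dimension layout (ENTS_FIELDS) sketched above; the packing scheme and helper names are assumptions, not the disclosed wire format.

```python
def encode_compact_ents(ents, selected):
    """Return (presence_bits, packed_value, value_bits) for the selected dimensions only."""
    presence, packed, value_bits = "", 0, 0
    for name, width in ENTS_FIELDS.items():
        if name in selected:
            presence += "1"
            packed = (packed << width) | (ents[name] & ((1 << width) - 1))
            value_bits += width
        else:
            presence += "0"  # dimension absent; its value may be amortized over several responses
    return presence, packed, value_bits

def decode_compact_ents(presence, packed):
    """Recover the carried dimensions using the presence indication."""
    carried = [(n, w) for (n, w), bit in zip(ENTS_FIELDS.items(), presence) if bit == "1"]
    out = {}
    for name, width in reversed(carried):
        out[name] = packed & ((1 << width) - 1)
        packed >>= width
    return out

# "0011": hour and minute absent, second and millisecond present in the header.
presence, packed, bits = encode_compact_ents(generate_ents(), ["second", "millisecond"])
print(presence, bits)                         # -> 0011 16
print(decode_compact_ents(presence, packed))  # -> {"second": ..., "millisecond": ...}
```

Under this illustrative layout, such a response carries 4 presence bits plus 16 value bits, compared with 64 bits for a full NTP timestamp.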


In another example, each device may periodically communicate, to the entity that processes the telemetry data (e.g., the sink node), a mapping from its ENTS (or compressed ENTS) to the local clock value at the device. A device's local mapping from ENTS to its local clock value is useful in a network where it cannot be assumed that the clocks across the network devices are synchronized or maintain strict synchronization. Because a packet (data flow) carrying telemetry data will travel across multiple devices, the ENTS values gathered at the devices may be out of sync relative to one another because the respective device clocks are not necessarily synchronized. Thus, the periodic mapping of ENTS to local clock value at each device along a path can be used at the sink node to determine the relative timing and temporal ordering of the events across the multiple devices.
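
A minimal sketch of how a sink might keep and apply such periodic reports, assuming each device reports its current ENTS alongside its raw local clock reading and that the sink notes when each report arrives; the class and method names are illustrative, and the simple offset translation is an assumption for unsynchronized but similarly rated clocks.

```python
import time

class EntsClockMapper:
    """Sink-side bookkeeping of each device's reported ENTS-to-local-clock mapping."""

    def __init__(self):
        self._latest = {}  # device_id -> (ents_sample, reported_local_clock, sink_receive_time)

    def record_mapping(self, device_id, ents_sample, reported_local_clock):
        # Called when a device's periodic ENTS/local-clock mapping report arrives at the sink.
        self._latest[device_id] = (ents_sample, reported_local_clock, time.time())

    def to_sink_time(self, device_id, device_local_clock):
        """Roughly place a device-local clock value on the sink's own timeline."""
        _, ref_clock, ref_sink_time = self._latest[device_id]
        return ref_sink_time + (device_local_clock - ref_clock)
```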


In another example, in accordance with the disclosed elastic timestamping system, mechanisms may be provided to ensure data freshness of the telemetry data. For example, an intermediate device along a network path may be provided with required or desired data freshness criteria for telemetry data, which are used to validate the freshness of collected data before sending it as a response to an intent request. For example, the desired data freshness criteria may be that telemetry data be locally measured within a time frame, for example within the last 10 ms (e.g., when collecting packet loss information). In another example, the data freshness criteria may be that telemetry data be gathered or received within a time period (e.g., the last 15 ms), in cases where the measurement is performed remotely but collected locally at the device (e.g., where measurements are performed at a layer 0 (L0) device but the corresponding data is processed and inserted into the data flow by a layer 1 (L1) device, such as a packet-optical layer boundary device in a packet-optical network). Data freshness criteria may be communicated to a device in a number of ways. For example, the data freshness criteria may be communicated along with the telemetry intent (e.g., in the packet or frame header), or via an out-of-band channel. In another example, the data freshness criteria may be programmed into individual devices. In another example, data freshness may not be stipulated as a requirement; instead, the device may include information regarding the freshness of the telemetry data in the telemetry response.
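
A freshness check of this kind reduces to comparing the age implied by the event's timestamp against the configured threshold; the helper below is a minimal sketch with illustrative values (the 10 ms packet-loss example above), not a disclosed API.

```python
def meets_freshness(event_age_s, threshold_s):
    """The event is considered fresh if its age is within the configured threshold."""
    return event_age_s <= threshold_s

# e.g., locally measured packet-loss data with a 10 ms freshness requirement
print(meets_freshness(event_age_s=0.004, threshold_s=0.010))  # True  -> include in the response
print(meets_freshness(event_age_s=0.025, threshold_s=0.010))  # False -> drop the stale data
```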


In the following, telemetry applications are described in more detail, and examples are given of the disclosed elastic timestamping system and method as used in telemetry applications. Although examples herein refer to telemetry applications, the disclosed elastic timestamping system and methods (including the ENTS format, ENTS dimension selection, and mechanisms for data freshness) may be used in any application that uses timestamping, including, but not limited to, any computing or network application that requires record keeping, network administration and management applications, computer security applications, concurrency control applications, and synchronization applications.


Streaming telemetry mechanisms, such as OpenConfig, are designed to streamline the notification of network state by having the network elements stream the telemetry data up to a central management entity where the data gets stored and processed. While streaming telemetry mechanisms employ extensive offline algorithms to process telemetry data, they are not designed to inherently improve the quality of the data collected.


To improve extensibility and bring flexibility into telemetry data collection, the In-band Network Telemetry (INT) framework was developed for packet networks. INT is implemented in the data plane such that telemetry information is carried in data packets (e.g., in the header of data packets) and can get modified with each hop. The data plane refers to the part of a device's architecture that makes forwarding decisions for incoming packets. For example, routing may be determined by the device using a locally stored table in which the device looks up the destination address of the incoming packet and retrieves the information needed for forwarding.


The INT framework relies on programmable data planes to bring flexibility to telemetry data collection. Devices with programmable data planes include network processors or general-purpose central processing units (CPUs) at the low end, and data path programmable switch chips at the high end. With INT, a source switch (or more generally, a source network device) incorporates an instruction header to collect network state information as a part of the data packet. Intermediate INT-capable switches (devices) interpret the instruction header and collect and insert the desired network state information in the data packet, which eventually reaches a sink switch and can be used as needed to monitor and evaluate the operation of the network. Advantages of INT include real-time telemetry rates, low CPU and operating system (O/S) overhead, and the flexibility to programmatically instrument packets to carry useful telemetry data. The programmable data planes used in INT have been explicitly designed for packet networks; however, extending INT mechanisms into optical networks, where there is no notion of data packets, is far from straightforward due to factors such as layering and the presence of purely analog devices.


The emergence of integrated packet and optical networks, or “packet-optical networks”, such as those interconnecting data centers, presents additional challenges when it comes to network telemetry because of the different types of telemetry data collected in packet versus optical networks. For example, the telemetry data collected in the packet layer of a packet network, such as packet loss and latency on a per-flow basis, cannot be easily attributed to or correlated with data collected in the optical layer of an optical network, such as bit error rates (BERs) and quality factor (Q-factor). Moreover, the optical network lacks the digital constructs used by telemetry solutions such as INT, and the packet layer does not have access to measurements in the optical network. A further challenge occurs in associating packet flow telemetry data with the corresponding data from optical transport network (OTN) layers, which involves piecing together telemetry data from many devices.


Optical parameters may affect traffic flows. For example, if a link experiences degradation in Q-factor without a link failure, operators can use that information to proactively move critical applications away from that particular link. Thus, it is useful for network operators to be able to monitor optical parameters over time for use in routing and other applications.


Thus, the packet-optical in-band telemetry (POINT) framework was developed (as described in U.S. patent application Ser. No. 15/801,526, which is incorporated herein by reference in its entirety) to achieve end-to-end correlation of collected network state data in mixed networks with multiple network layers, such as packet-optical networks.


According to the POINT framework, a source device inserts an intent (POINT intent) for telemetry data collection along with the data flow. The intent communicates the parameters of data collection, such as conditions for data collection, entities being monitored, and the type of data to be collected for that flow. Intermediate devices on that data flow process the high-level intent if it is targeted towards them, translate the intent into a suitable device-specific action for data collection, and execute that action to collect an intent response. At a layer boundary, such as packet to optical, or across optical layers such as a hierarchy of optical data units (ODUs), intermediate devices translate the intent and response using a layer-appropriate mechanism. For example, in the packet network, the intent and response may be encapsulated using IP options or a VXLAN metadata header. At the packet-optical boundary, the intent can be retrieved from the packet header, and translated and encapsulated as ODU layer metadata, which remains accessible to all nodes along the end-to-end path of the ODU.


In another example, the POINT intent can be translated into an appropriate query for telemetry data collection via the management plane of the optical devices. As soon as the response of data collection is ready, it is communicated through the optical network and translated appropriately into a packet or packet header at the packet-optical boundary and forwarded to the sink for analysis. For example, the response communication may be out-of-band using the optical supervisory channel (OSC). The POINT framework also supports adding response metadata for incorporating deployment-specific reliability mechanisms.


Thus, the POINT framework provides hierarchical layering with intent and response translation at each layer boundary, and mapping of the intent to layer-specific data collection mechanism, such that the POINT framework can be deployed across a network layer hierarchy. The POINT framework also provides for fate sharing of telemetry intent and data flow. Telemetry data for a specific data flow can be collected in-band as the data traverses the network layers. By design, intent responses can be out-of-band to accommodate scenarios such as troubleshooting networks when there is no connectivity between the source and the sink. Additionally, intents are high level instructions for data collection and can be mapped to existing data collection mechanisms between two POINT capable intermediate network devices.



FIG. 1 is a high-level diagram of an example POINT framework 100 implemented in an example packet-optical network 102, in accordance with the disclosures herein. The example packet-optical network 102 includes packet devices 110, 112, 114, 116, 118 and 120, and an optical network 104 segment that includes optical devices 122, 124, 126, 128, 130 and 132. The POINT framework 100 can operate over an optical network 104 with Layer 0 (L0) and/or Layer 1 (L1) circuits. The packet devices include a POINT source device 110 and a POINT sink device 120, as well as packet optical gateways (POGs) 114 and 116 located at the interfaces between the packet segments and optical network 104 segment of the packet-optical network 102. The packet devices 110, 120, 114 and 116 can operate at the packet layer, for example at layer 2 (L2)/layer 3 (L3) (e.g., L2 may be a data link layer and L3 may be a network layer, which exist above a physical layer). POGs 114 and 116 are also connected via lower layer devices, such as L1/L0 devices 122, 124, 126, 128, 130 and 132. In the example packet-optical network 102, POGs 114 and 116 and optical devices 126 and 128 are configured as POINT intermediate devices (i.e., devices with digital capability to interpret POINT intent, translate it across layers, and aggregate and propagate the telemetry state information in the packet-optical network 102).


According to the POINT framework 100, telemetry information for a packet-optical traffic flow 105, such as intent or POINT data (e.g., intent and response), in the packet-optical network 102 is gathered in the data plane 140 as part of the information carried in the network 102, as described below. The telemetry plane 160 represents the telemetry information for the packet-optical flow 105 being mapped and correlated across network layers, constructs (e.g., secure network communications (SNC), label-switched path (LSP), or virtual local area network (VLAN)), and devices operating at different layers in the networking stack to give the end user (e.g., at the POINT sink) a unified view of the operation of the entire packet-optical network 102.


In accordance with the disclosed POINT framework 100, a POINT source device 110 may initiate a network telemetry data collection for a packet-optical flow 105 along a packet-optical data path from the source device 110 to a sink device 120. Along the packet-optical data path, POINT intermediate devices, such as POGs 114, 116, and optical devices 126, 128, may interpret the intent, collect the desired telemetry data, and encode it back into the packet (flow) 142, which eventually gets forwarded to the sink device 120. For example, as packet (frame) 142 traverses the packet-optical network 102 across devices and layers (e.g., packet layers L2/L3 and optical layers L1/L0), in the data plane 140 intent information is transferred into other layers, translated into device-specific actions, and responses are collected (e.g., added to POINT data in packet 142) for use at the POINT sink device 120. At the sink device 120, the collected telemetry data for the packet-optical flow 105 (collected from POINT source device 110 to POINT sink device 120) is processed as needed by the intended applications. Examples of telemetry data processing may include triggering a report to a management entity (e.g., using mechanisms like OpenConfig) or archiving collected data in a storage device.



FIG. 2 is a flow diagram of an example telemetry processing procedure 200 that meets data freshness requirements and includes ENTS in the telemetry responses, in accordance with the disclosures herein. The telemetry processing procedure 200 may be performed in an optical layer by an intermediate optical or packet-optical device in a packet-optical network, although the procedure is similar in a packet layer, as performed by a packet device, involving packets instead of ODU frames. The device may read POINT intent from the ODU frame 210, for example from the ODU-L1 POINT-OH 214. A POINT intent may include a corresponding data freshness requirement that indicates a maximum time duration for which telemetry data is valid (e.g., Intent1 has a 1 minute data freshness requirement, and another intent has a 200 ms data freshness requirement).


The received intents may be added to the associative map 224 (also referred to as a mapping table or table) while responses are generated. The device may check whether the response for a particular intent is ready by looking it up in the associative map 224, which keeps a mapping of intents and corresponding responses. At the point in time shown in the example of FIG. 2, Response2 and Response3 are available for corresponding Intent2 and Intent3, whereas a response is not yet available for Intent1. The device may also verify that the ENTS of each response (shown in this example to include three fields: (hour: minute: second)) meets the data freshness requirements for the corresponding intent. If a response is ready and meets the corresponding data freshness requirement, then the intent and response may be added to the ODU frame 210, for example into the ODU-L1 POINT-OH 214. However, if the response is not ready or does not meet the data freshness requirement, the intent may be added to an intent queue 220 for intent translation and processing. Once the corresponding response is determined to be ready (e.g., Response2 for Intent2), it may be added to the associative map 224, and the corresponding intent may be removed from the intent queue 220, translated, processed, and inserted in the associative map 224, to be inserted with the response in an ODU frame 210 (in this case a subsequent ODU frame from the one in which the corresponding intent was received) for downstream forwarding. In an alternate example, the response may be communicated via an out-of-band channel.
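
A minimal sketch of this bookkeeping follows, assuming an associative map keyed by intent, an intent queue of pending work, and a freshness check before a response is written into the outgoing frame; the class, method, and field names are illustrative, and frame formats and intent translation are omitted.

```python
from collections import deque

class PointTelemetryHandler:
    """Illustrative intent/response bookkeeping for the FIG. 2 style procedure."""

    def __init__(self):
        self.associative_map = {}    # intent_id -> (response, response_time_s)
        self.intent_queue = deque()  # intents awaiting translation and processing

    def on_intent(self, intent_id, freshness_s, now_s, outgoing_frame):
        entry = self.associative_map.get(intent_id)
        if entry is not None:
            response, response_time_s = entry
            if now_s - response_time_s <= freshness_s:  # response ready and fresh
                outgoing_frame.append((intent_id, response))
                return
        self.intent_queue.append(intent_id)             # translate/process; respond later

    def on_response_ready(self, intent_id, response, response_time_s):
        self.associative_map[intent_id] = (response, response_time_s)
        if intent_id in self.intent_queue:
            self.intent_queue.remove(intent_id)          # goes out in a subsequent frame
```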


ENTS in telemetry applications may be used to provide an ordering of events on the same device. For example, the ENTS dimensions and the value within each dimension may be used to order events, for example in increasing or decreasing order in time (“dictionary order”). ENTS in telemetry applications may also be used to provide an ordering of network events across different devices. For example, ENTS may be used to provide ordering of telemetry data within a telemetry (e.g., POINT) flow.
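
One way to read “dictionary order” is as a lexicographic comparison of the ENTS dimensions from coarsest to finest; the sketch below assumes the hypothetical ENTS_FIELDS layout from earlier, with absent dimensions defaulting to zero.

```python
def ents_sort_key(ents):
    """Compare ENTS values lexicographically, coarsest dimension first ("dictionary order")."""
    return tuple(ents.get(name, 0) for name in ENTS_FIELDS)

events = [
    {"hour": 10, "minute": 5, "second": 2, "millisecond": 250},
    {"hour": 10, "minute": 5, "second": 2, "millisecond": 40},
]
events.sort(key=ents_sort_key)  # the 40 ms event now precedes the 250 ms event
```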


It may be the case that an ENTS value pertaining to an event is not the exact time value of the event. For example, the event may occur in hardware whereas the recording of the ENTS value may occur in software, such that there may be some delay between the event and the ENTS recording. Thus, in an example, latency bounds for local ENTS values may be generated and provided to the sink node to bound the accuracy of the ENTS value, which may be particularly useful when evaluating data freshness. In another example, ENTS values may be treated as logical clock values. ENTS in telemetry applications may be used to ensure data freshness, for example by communicating data freshness requirements (e.g., with intent data) as described herein.



FIG. 3 is a flow diagram of an example elastic timestamping procedure 300 with event freshness for use in a computing or networking system, in accordance with the disclosures herein. The elastic timestamping procedure 300 may be performed by an entity or device that is part of the computing or network system. At 302, an ENTS is generated for a corresponding event or action in the system for which timing or temporal order information is needed. At 304, the ENTS is evaluated to verify whether the event meets a freshness threshold.


If the event does not meet the freshness threshold, then at 306 the event and corresponding ENTS may be disregarded (e.g., the data discarded). Otherwise, if the event meets the freshness threshold, at 308 a subset of the dimensions of the ENTS is selected to generate a compact ENTS by determining the relevant time granularity for the corresponding event. At 310, the compact ENTS is communicated (e.g., reported to a receiving entity for processing, recorded to storage, displayed to an operator, etc.), thus achieving reduced overhead.
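
Tying the steps together, the procedure of FIG. 3 can be sketched in a few lines by reusing the hypothetical helpers from the earlier examples (generate_ents, meets_freshness, encode_compact_ents); this is an illustrative composition under those assumptions, not the disclosed implementation.

```python
def elastic_timestamping_procedure(event_age_s, freshness_threshold_s, selected_dims, send):
    """Sketch of FIG. 3: generate ENTS (302), check freshness (304/306),
    select dimensions (308), and communicate the compact ENTS (310)."""
    ents = generate_ents()                                       # 302: all n dimensions
    if not meets_freshness(event_age_s, freshness_threshold_s):  # 304
        return None                                              # 306: drop event and ENTS
    compact = encode_compact_ents(ents, selected_dims)           # 308: relevant granularity only
    send(compact)                                                # 310: report/record/display
    return compact

# e.g., a millisecond-granularity event, well within a 10 ms freshness window
elastic_timestamping_procedure(0.002, 0.010, ["second", "millisecond"], print)
```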


In an example, the disclosed elastic timestamping methods and system, and any subset or one or more component(s) thereof, may be implemented using software and/or hardware and may be partially or fully implemented by computing devices, such as the computing device 400 shown in FIG. 4.



FIG. 4 is a block diagram of a computing system 400 in which one or more disclosed embodiments may be implemented. The computing system 400 may include, for example, a computer, a switch, a router, a gaming device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer. The computing device 400 may include a processor 402, a memory 404, a storage 406, one or more input devices 408, and/or one or more output devices 410. The input devices 408 and output devices 410 may be generally referred to as interfaces for the computing device 400. The device 400 may include an input driver 412 and/or an output driver 414. The device 400 may include additional components not shown in FIG. 4.


The processor 402 may include a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core may be a CPU or a GPU. The memory 404 may be located on the same die as the processor 402, or may be located separately from the processor 402. The memory 404 may include a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.


The storage 406 may include a fixed or removable storage, for example, a hard disk drive, a solid state drive, an optical disk, or a flash drive. The input devices 408 may include a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices 410 may include a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).


The input driver 412 may communicate with the processor 402 and the input devices 408, and may permit the processor 402 to receive input from the input devices 408. The output driver 414 may communicate with the processor 402 and the output devices 410, and may permit the processor 402 to send output to the output devices 410. The output driver 414 may include an accelerated processing device (“APD”) 416, which may be coupled to a display device 418. The APD may be configured to accept compute commands and graphics rendering commands from the processor 402, to process those compute and graphics rendering commands, and to provide pixel output to the display device 418 for display.


In an example, with reference to FIG. 1, the POINT source 110, packet devices 112 and 118, optical devices 122-132, and/or POGs 114 and 116 may be implemented, at least in part, with the components of computing device 400 shown in FIG. 4. Similarly, the procedures shown in FIGS. 2 and 3 may be implemented, at least in part, with the components of computing device 400 shown in FIG. 4.


It should be understood that many variations are possible based on the disclosure herein. Although features and elements are described above in particular combinations, each feature or element may be used alone without the other features and elements or in various combinations with or without other features and elements.


The methods and elements disclosed herein may be implemented in/as a general purpose computer, a processor, a processing device, or a processor core. Suitable processing devices include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors may be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer readable media). The results of such processing may be maskworks that are then used in a semiconductor manufacturing process to manufacture a processor which implements aspects of the embodiments.


The methods, flow charts and elements disclosed herein may be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).

Claims
  • 1. A device that is part of a system, the device comprising: a processor coupled to at least one interface; the processor configured to: generate a first timestamp with n time dimension fields for a corresponding event in the system, wherein the first timestamp has an adjustable size; select a subset of the n time dimension fields of the first timestamp based on a relevant time granularity of the corresponding event to generate a second timestamp; and the processor and the at least one interface configured to communicate the second timestamp for further processing.
  • 2. The device of claim 1, wherein the n time dimension fields include at least one of the following time dimensions: date, hour, minute, second, millisecond, microsecond or nanosecond.
  • 3. The device of claim 1, wherein the processor and the at least one interface are further configured to amortize and communicate only once over a plurality of events at least one of the n time dimension fields not in the subset.
  • 4. The device of claim 1, wherein the processor and the at least one interface are configured to communicate the second timestamp by reporting the second timestamp to a receiving entity, recording the second timestamp to storage, or displaying the second timestamp to an operator.
  • 5. The device of claim 1, wherein the corresponding event has a freshness threshold, wherein: on a condition that the first timestamp exceeds the freshness threshold, the processor is further configured to discard the first timestamp and the corresponding event.
  • 6. The device of claim 5, wherein the freshness threshold is provided to the device or is programmed into the device.
  • 7. The device of claim 1, wherein: the processor and the at least one interface are further configured to periodically generate a mapping between values of the first timestamp and local clock values and provide the mapping to a receiving entity for synchronization.
  • 8. The device of claim 1, wherein: the processor and the at least one interface are further configured to receive a packet including at least a header and a payload at a packet layer; the processor is further configured to: read intent information from the header, wherein the intent information indicates a type of telemetry data; translate the intent information to trigger a device-specific action to provide the type of telemetry data, wherein the device-specific action is the corresponding event in the system; execute the device-specific action to generate a response corresponding to the intent; associate the second timestamp and the response with the intent; and encode the second timestamp and the response for downstream data forwarding.
  • 9. The device of claim 8, wherein the processor is configured to encode the second timestamp and the response for downstream forwarding by inserting the second timestamp and the response into the header of the packet with the intent information.
  • 10. The device of claim 8, wherein the processor is further configured to: add the intent information to an associative mapping table; match the second timestamp and the response to the intent information using the associative mapping table; and insert the second timestamp and the response into a header of a subsequent packet.
  • 11. A method performed by a device that is part of a system, the method comprising: generating a first timestamp with n time dimension fields for a corresponding event in the system, wherein the first timestamp has an adjustable size; selecting a subset of the n time dimension fields of the first timestamp based on a relevant time granularity of the corresponding event to generate a second timestamp; and communicating the second timestamp for further processing.
  • 12. The method of claim 11, wherein the n time dimension fields include at least one of the following time dimensions: date, hour, minute, second, millisecond, microsecond or nanosecond.
  • 13. The method of claim 11, further comprising: amortizing and communicating only once over a plurality of events at least one of the n time dimension fields not in the subset.
  • 14. The method of claim 11, wherein the communicating the second timestamp includes at least one of: reporting the second timestamp to a receiving entity, recording the second timestamp to storage, or displaying the second timestamp to an operator.
  • 15. The method of claim 11, wherein the corresponding event has a freshness threshold, further comprising: on a condition that the first timestamp exceeds the freshness threshold, discarding the first timestamp and the corresponding event.
  • 16. The method of claim 15, wherein the freshness threshold is provided to the device or is programmed into the device.
  • 17. The method of claim 11, further comprising: periodically generating a mapping between generated values of the first timestamp and local clock values and providing the mapping to a receiving entity for synchronization.
  • 18. The method of claim 11, further comprising: receiving a packet including at least a header and a payload at a packet layer; reading intent information from the header, wherein the intent information indicates a type of telemetry data; translating the intent information to trigger a device-specific action to provide the type of telemetry data, wherein the device-specific action is the corresponding event in the system; executing the device-specific action to generate a response corresponding to the intent; associating the second timestamp and the response with the intent; and encoding the second timestamp and the response for downstream data forwarding.
  • 19. The method of claim 18, wherein the encoding the second timestamp and the response for downstream forwarding includes inserting the second timestamp and the response into the header of the packet with the intent information.
  • 20. The method of claim 18, further comprising: adding the intent information to an associative mapping table; matching the second timestamp and the response to the intent information using the associative mapping table; and inserting the second timestamp and the response into a header of a subsequent packet.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/528,964, filed Jul. 5, 2017, which is incorporated by reference as if fully set forth herein.

Provisional Applications (1)
Number Date Country
62528964 Jul 2017 US