Message-ID: <3C0D522A.7030204@cs.waikato.ac.nz>
Date: Wed, 05 Dec 2001 11:46:02 +1300
From: Stephen Donnelly <sfd@cs.waikato.ac.nz>
Subject: Re: Timestamping Accuracy
Shawn Ostermann wrote:
>>> Supposing one had modified their computer with a very accurate time
>>> source (say, using a GPS receiver to obtain a stratum 1 clock and had
>>> installed a very accurate crystal oscillator), and had accounted for
>>> all other sources of variance in the computer's internals (interrupts
>>> and so forth), what is the best possible accuracy that could be
>>> obtained using tcptrace on a Gigabit Ethernet segment?
>>>
>>> I suppose the real question is this: at what point in the tcptrace
>>> program is the timestamp applied?
>>>
>
> Well, that depends on where the info came from. Not all of the packet-grabbing
> programs whose output tcptrace can read record timestamps with the same
> accuracy. Netscout, for example, only stores milliseconds. If you're using
> tcpdump's pcap save format, then you get a 32-bit microsecond counter. How
> many of those bits are actually significant is up to the tcpdump/pcap
> implementation that saved the file.
>
> Tcptrace uses microsecond accuracy throughout (by only manipulating Unix
> timevals regardless of where the data came from). In older versions,
> graphing was only done to millisecond accuracy, but that was changed to
> microsecond (6 digits of fractional seconds) throughout.
Right. The Linux kernel, for instance, timestamps packet arrivals in the
interrupt handler with Unix timevals, i.e. 1 microsecond resolution. libpcap
then uses these timestamps. Since the timestamping is done in the NIC device
driver interrupt routine, timestamps are not generated until *after* the
whole packet has been received.
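For anyone wanting to see exactly where that microsecond value surfaces, here
is a minimal sketch using the standard libpcap savefile calls (the trace file
name is made up):

/* Minimal sketch: per-packet timestamps as libpcap hands them out.
 * struct pcap_pkthdr.ts is a plain Unix struct timeval, so 1 us is
 * the best you can hope for from this path.
 * Build with: cc -o ts ts.c -lpcap */
#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline("trace.pcap", errbuf); /* hypothetical file */
    struct pcap_pkthdr *hdr;
    const u_char *data;

    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    while (pcap_next_ex(p, &hdr, &data) == 1) {
        /* tv_usec is at best 1 microsecond; how many of those digits are
         * meaningful depends on the kernel/driver that produced them. */
        printf("%ld.%06ld  %u bytes\n",
               (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec, hdr->caplen);
    }
    pcap_close(p);
    return 0;
}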
One solution is to avoid software timestamping and use hardware with
higher-resolution timestamps, especially at gigabit and higher data rates. My
thesis (currently under examination) is on this topic; the group I worked with
built hardware for this purpose with better than 60 ns (0.06 microsecond)
resolution and GPS synchronisation capability.
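To illustrate the precision question, here is a sketch of converting a
hypothetical 64-bit fixed-point hardware timestamp (upper 32 bits of seconds,
lower 32 bits of binary fraction; the format is assumed for illustration, not
a description of our actual hardware) into a Unix timeval. Everything finer
than a microsecond is simply thrown away in the conversion:

#include <stdint.h>
#include <sys/time.h>

/* Sketch only: map a hypothetical 32.32 fixed-point hardware timestamp
 * onto a struct timeval. The sub-microsecond part of the fraction is
 * discarded, which is where the extra hardware resolution is lost. */
static struct timeval hwts_to_timeval(uint64_t hwts)
{
    struct timeval tv;
    tv.tv_sec  = (time_t)(hwts >> 32);
    tv.tv_usec = (suseconds_t)(((hwts & 0xffffffffULL) * 1000000ULL) >> 32);
    return tv;
}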
>>> Again assuming a GbE network, a minimum size frame (TCP connection packet)
>>> would be 66 bytes (528 bits). At 1e9 bits per second, it takes about
>>> .5 microsecond to transmit the frame. At the opposite end of the
>>> spectrum, a jumbo frame of size 9,000 bytes (72,000 bits) would take
>>> about 72 microseconds. If the timestamp were always applied at the end
>>> of transmission it would be a source of significant variance. However,
>>> if it were always applied at the end of the TCP header it would be
>>> consistent.
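The arithmetic quoted there is easy to check; a trivial sketch (just the
numbers above, nothing measured):

#include <stdio.h>

/* Serialization time of a frame at 1 Gb/s is (bits on the wire) / 1e9 s. */
int main(void)
{
    const double line_rate = 1e9;        /* bits per second */
    const int sizes[] = { 66, 9000 };    /* frame sizes in bytes */

    for (int i = 0; i < 2; i++)
        printf("%5d bytes -> %6.2f microseconds\n",
               sizes[i], sizes[i] * 8.0 / line_rate * 1e6);
    return 0;
}

/* Prints roughly 0.53 us for the 66-byte frame and 72.00 us for the
 * 9000-byte jumbo, matching the figures above. */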
It's also worth noting that the transmission time of a minimum-size gigE frame
is shorter than the resolution of a Unix timeval; that is, two minimum-sized
packets that arrive back to back could receive *identical* timestamps,
potentially confusing or distorting inter-packet-time or bandwidth
calculations.
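As a toy illustration (the values are made up, not from a real trace), a zero
inter-arrival delta makes any instantaneous rate estimate meaningless:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    /* Two back-to-back minimum-size frames stamped with the same timeval. */
    struct timeval a, b;
    a.tv_sec  = b.tv_sec  = 1007510762;
    a.tv_usec = b.tv_usec = 123456;
    long frame_bits = 66 * 8;

    long delta_us = (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);

    if (delta_us == 0)
        printf("zero inter-arrival time: instantaneous rate is undefined\n");
    else
        printf("rate = %.1f Mb/s\n", frame_bits / (double)delta_us);
    return 0;
}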
Stephen.
--
-----------------------------------------------------------------------
Stephen Donnelly (BCMS)                    email: sfd@cs.waikato.ac.nz
WAND Group, Room GG.15                     phone: +64 7 838 4086
Computer Science Department, University of Waikato, New Zealand
-----------------------------------------------------------------------