Observation: Retransmitted data with FIN bit set. (Possible tcptrace bug)

From: Joseph Ishac (jishac@grc.nasa.gov)
Date: 01/23/02

    Date: Wed, 23 Jan 2002 19:27:45 -0500
    From: Joseph Ishac <jishac@grc.nasa.gov>
    Subject: Observation: Retransmitted data with FIN bit set. (Possible tcptrace bug)
    Message-ID: <20020123192745.B2979@sunfire.grc.nasa.gov>

    Shawn,

    I am currently running several network tests in a closed network
    consisting of three machines. The goal is to send a file from one
    machine to another through the third machine, which transparently imposes
    an artificial delay.

    In performing these tests, I've observed a rather strange situation that
    causes tcptrace to report incorrect values in several fields (such as
    "ttl stream length" and "missed data").

    Analysing the connection revealed that there were two FINs. The first FIN
    occurs, as expected, in the final data packet. However, a loss occurs
    within the final window of data, and the resulting retransmission also
    carries a FIN. That retransmission carries the sequence number of the lost
    data, which is lower than the sequence number of the original FIN.
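
    To make the scenario concrete, here is a rough sketch of the tail of the
    connection using made-up relative sequence numbers and 1448-byte segments
    (the real packets are in the attached dump.txt, so treat these numbers as
    illustrative only):

        # Tail of the connection; each entry is (start_seq, length, fin_set),
        # relative sequence numbers (illustrative, not from the actual trace).
        tail = [
            (26064, 1448, False),  # lost in transit
            (27512, 1448, True),   # original FIN, sent with the last 1448 bytes
            (26064, 1448, True),   # retransmission of the lost bytes, FIN set
                                   # again, at a lower sequence number
        ]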

    The attached files should help visualize the connection that is causing
    the problem. Hopefully they will clear up anything I left out in my
    babbling above :).

    So, I am unsure what the correct behavior should be...
    According to RFC 793, the FIN bit represents 'no more data to send', so
    tacking it onto the retransmission does not seem counterintuitive, since
    the sender indeed has no more data to send. However, setting the FIN bit
    on the retransmission isn't strictly necessary either, since the stream is
    reliable and the original FIN will get through eventually anyway. I've
    made a quick pass through RFC 793 and could not find anything that
    indicates which behavior is correct.

    In either case, my guess is that the retransmission with the FIN bit set
    is 'confusing' tcptrace, causing it to report erroneous values. If this
    is the case _and_ the behavior of the connection is valid, then tcptrace
    would likely need to use the FIN packet with the highest sequence number
    (or just ignore FIN packets whose sequence number is lower than that of a
    previously seen FIN) to calculate the appropriate values. However, I have
    not had a chance to look into the tcptrace code to verify either the
    problem or justify the solution.
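
    To sketch what I mean (in Python rather than the actual tcptrace C code,
    which I have not looked at, and again with made-up sequence numbers chosen
    so the data totals 28960 bytes like the real file), the idea is simply to
    never let a FIN move the end of the stream backwards:

        # Made-up segment list; each entry is (start_seq, length, fin_set),
        # relative sequence numbers (illustrative only).
        segments = [
            (26064, 1448, False),  # lost segment
            (27512, 1448, True),   # original FIN, stream ends at 28960
            (26064, 1448, True),   # retransmitted data with FIN, ends at 27512
        ]

        def stream_end_last_fin(segs):
            """Use whichever FIN was seen last (my guess at what goes wrong)."""
            end = None
            for start, length, fin in segs:
                if fin:
                    end = start + length
            return end

        def stream_end_highest_fin(segs):
            """Only let a FIN extend the end of the stream, never shrink it."""
            end = 0
            for start, length, fin in segs:
                if fin:
                    end = max(end, start + length)
            return end

        print(stream_end_last_fin(segments))     # 27512 -- short by 1448 bytes
        print(stream_end_highest_fin(segments))  # 28960 -- the full transfer

    Whether tcptrace actually computes "ttl stream length" this way I do not
    know; the point is only that keeping the maximum FIN sequence number (or
    discarding FINs below it) would be robust against this kind of
    retransmission.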

    Any ideas/thoughts on what the correct behavior should be?

    Thanks,

    Joseph Ishac
    Computer Engineer
    NASA Glenn Research Center
    Email: jishac@grc.nasa.gov

    Cc: tcptrace-bugs@tcptrace.org

    enc. 4 files

    dump.txt : Human-readable tcpdump output for the connection of interest.
    sequence_graph.xpl : Time sequence graph of the connection. (Xplot)
    tcptrace.txt : Broken output of tcptrace.
    tcp-reduce.txt : Broken output of tcp-reduce.

    PS: A few things I forgot to mention:
         - The correct size of the file being transferred is 28960 bytes
         - The machines sending and receiving the file are running OpenBSD 2.9
         - The bridge imposing the delay is running Solaris 8
         - A similar note is being sent out to the folks in charge of tcp-reduce.
