OS tick granularity affecting outgoing TCP messages?  
Hi,

I am developing some test code that needs to send a single TCP message from a thread every 5 ms.  Even in a tight while 
loop with no nanosleep, I am currently only able to send one TCP message from the test thread every 7.5 ms.
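
Measured roughly like this (a minimal sketch, not the exact test code; sktClient and sTotalMsg are set up as in the full 
listing further down):

  // Sketch: timestamp successive send() calls to see the spacing between
  // outgoing messages.
  struct timespec tsPrev, tsNow;
  clock_gettime( CLOCK_MONOTONIC, &tsPrev );
  while( 1 != iShutdown )
  {
    send( sktClient, &sTotalMsg, sizeof( sTotalMsg ), 0 );
    clock_gettime( CLOCK_MONOTONIC, &tsNow );
    double dMs = ( tsNow.tv_sec  - tsPrev.tv_sec  ) * 1e3 +
                 ( tsNow.tv_nsec - tsPrev.tv_nsec ) / 1e6;
    printf( "send-to-send spacing: %.3f ms\n", dMs );    // consistently ~7.5 ms
    tsPrev = tsNow;
  }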

I have seen this time crop up before: when the OS tick time was set to the default, timers and similar events had this 
granularity.  I can already send UDP messages with fine-grained timing control down to hundreds or tens of microseconds, 
having used ClockPeriod to set a 10 µs tick.  The hogs utility shows utilization less than 10% with everything else 
running on the box.
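
For reference, setting the 10 µs tick with ClockPeriod looks roughly like this (a sketch; error handling is trimmed):

  #include <sys/neutrino.h>
  #include <stdio.h>

  // Sketch: request a 10 us OS tick from the kernel (QNX Neutrino ClockPeriod).
  struct _clockperiod cpNew = { 0, 0 };
  cpNew.nsec = 10000;                               // 10 us, expressed in nanoseconds
  if( -1 == ClockPeriod( CLOCK_REALTIME, &cpNew, NULL, 0 ))
    perror( "ClockPeriod" );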

I have disabled Nagle with TCP_NODELAY (that did make a difference: previously messages could be clustered around 
7.5 ms, now I get exactly one message per 7.5 ms).  I have tried both the 'write' and 'send' functions (no difference).

I have elevated the priority of the thread (doubled from the default) and called ClockPeriod within the thread itself 
(which should not be necessary, as far as I understand it).  Nothing else I do seems to have any impact on the minimum 
time.
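
The priority change itself is nothing exotic, roughly the following (a sketch; the exact numbers depend on the default 
priority on the box):

  #include <pthread.h>

  // Sketch: roughly double the sending thread's priority from its current value,
  // keeping the existing scheduling policy.
  struct sched_param spParam;
  int iPolicy;
  pthread_getschedparam( pthread_self(), &iPolicy, &spParam );
  spParam.sched_priority *= 2;
  pthread_setschedparam( pthread_self(), iPolicy, &spParam );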

My theory is that there is a 'tick' inside the TCP stack that is not reduced by a low value for ClockPeriod, but I have 
not found any additional TCP options that appear to control something like that.

Any ideas on what might be happening, and how to avoid it?  Many thanks for any ideas, or additional questions that 
might help narrow down the possibilities.

Best regards,

John

Code
===

The sending code in the thread is pretty straightforward, as shown below.  Note that even when nanosleep is used, no 
times below 7.5 ms are possible.  Two successive calls in the same loop also produce two messages 7.5 ms apart, i.e. 
each call to 'send' or 'write' appears to block for 7.5 ms.

  // If we don't choose to ignore the SIGPIPE signal, we will never get the
  // failure return from the write function (the whole program, not just the
  // thread, will exit).
  signal(SIGPIPE, SIG_IGN);
  
  while( 1 != iShutdown )
  {
    struct sockaddr_in saddrServer;
    memset( &saddrServer, 0, sizeof( saddrServer ));
    saddrServer.sin_family = AF_INET;
    saddrServer.sin_port = htons( APPLANIX_SERVER_PORT );
    saddrServer.sin_addr.s_addr = INADDR_ANY;
    
    int sktServer = -1;
    sktServer = socket(AF_INET, SOCK_STREAM, 0);
    if( 0 > sktServer )
    {
      printf("ERROR opening socket\n");
    }
  
    int iOptval = 1;
    // Allow port re-use so we don't get hung up waiting for ports to be released.
    setsockopt( sktServer, SOL_SOCKET, SO_REUSEPORT, &iOptval, sizeof( iOptval ));
    // Disable the Nagle algorithm on output, otherwise messages will be delayed
    // with spacing other than as specified.
    setsockopt( sktServer, IPPROTO_TCP, TCP_NODELAY, &iOptval, sizeof( iOptval ));
    
    if( 0 > bind( sktServer, (struct sockaddr *) &saddrServer, sizeof( saddrServer )) )
    {
      fprintf( stderr, "Warning: %.40s: %d: Error on binding\n", __func__, __LINE__ );
    }
    
    listen( sktServer, 5 );
    
    struct sockaddr_in saddrClient;
    memset( &saddrClient, 0, sizeof( saddrClient ));
  
    int sktClient = -1;
    unsigned int uiClientAddrLength = sizeof(saddrClient);
    fprintf( stdout, "Info: %.40s: %d: Attempting socket accept\n", __func__, __LINE__ );
    sktClient = accept( sktServer, (struct sockaddr *) &saddrClient, &uiClientAddrLength);
    if( sktClient < 0)
    {
      fprintf( stderr, "Warning: %.40s: %d: Error on accept\n", __func__, __LINE__ );
    }

    while( 1 != iShutdown )
    {
      int n = send( sktClient, &sTotalMsg, sizeof( sTotalMsg ), MSG_DONTROUTE );
      //n = send( sktClient, &sTotalMsg, sizeof( sTotalMsg ), MSG_DONTROUTE );
      //int n = write( sktClient, &sTotalMsg, sizeof( sTotalMsg ) );
      if( 0 > n )
      {
        fprintf( stderr, "Warning: %.40s: %d: Could not write to socket any...
Re: OS tick granularity affecting outgoing TCP messages?  
Yes, there is an internal timer tick in io-pkt that is not affected by ClockPeriod.

I don't understand the need for TCP data to be sent at a particular time.  TCP says nothing about when the data hits 
the wire; it is all about transferring the data reliably.
Re: OS tick granularity affecting outgoing TCP messages?  
Hi Nick,

Thank you for confirming there is something else at play.  That makes sense.

I have to simulate GPS devices that provide TCP data at 200 Hz, and the spacing of the messages is significant (they 
cannot be clumped).  I may have to send up to three or four TCP messages within one 5 ms window.  Ideally the position 
of each message within the window would also be controllable down to, say, 50 µs, but that may not be critical in this 
instance.

I guess that while TCP makes no timing guarantees, in practice, if the rest of the environment is controlled, people can 
usually get away with assuming there will be an 'acceptable' amount of delay and jitter.  I think in this case the 
vendor expects both to be under 100 µs.
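
For what it's worth, one way of pacing the sends that I am aiming for would be something like this (a sketch, assuming 
the per-send stall is solved):

  // Sketch: pace one message per 5 ms against an absolute schedule, so jitter
  // does not accumulate from one send to the next.
  struct timespec tsNext;
  clock_gettime( CLOCK_MONOTONIC, &tsNext );
  while( 1 != iShutdown )
  {
    send( sktClient, &sTotalMsg, sizeof( sTotalMsg ), 0 );
    tsNext.tv_nsec += 5000000;                      // + 5 ms
    if( tsNext.tv_nsec >= 1000000000 )
    {
      tsNext.tv_nsec -= 1000000000;
      tsNext.tv_sec  += 1;
    }
    clock_nanosleep( CLOCK_MONOTONIC, TIMER_ABSTIME, &tsNext, NULL );
  }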

What options do I have to work around the tick time?  

I had wondered about bypassing the stack and using pcap or BPF.  I already use pcap for UDP, and whatever the io-pkt 
tick is must sit at a higher level than where the pcap UDP goes in, since that path is not affected.  So I could send 
and receive raw TCP frames too and manage the TCP session from the application, but that is a huge pain.  Much easier 
would be something I could reconfigure in the driver; second to that would be a custom driver, but I'm guessing the 
tick is in the IP stack rather than the driver, so that might not help.
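
For the bypass idea, the sending side would presumably look much like what I already do for UDP, roughly as below (a 
sketch only; abFrame and uiFrameLen are placeholders for a hand-built Ethernet/IP/TCP frame, and the application would 
have to track sequence numbers and ACKs itself):

  #include <pcap.h>
  #include <stdio.h>

  // Sketch: inject a hand-built frame directly, bypassing the socket layer.
  char szErr[ PCAP_ERRBUF_SIZE ];
  pcap_t *pPcap = pcap_open_live( "wm0", 65535, 0, 0, szErr );    // interface name is just an example
  if( NULL == pPcap )
    fprintf( stderr, "pcap_open_live: %s\n", szErr );
  else if( -1 == pcap_inject( pPcap, abFrame, uiFrameLen ))       // abFrame/uiFrameLen: placeholders
    fprintf( stderr, "pcap_inject: %s\n", pcap_geterr( pPcap ));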

What would you recommend?

Many thanks,

John
Re: OS tick granularity affecting outgoing TCP messages?  
The internal tick isn't 7.5 ms.  You should take a pcap and ensure the TCP window isn't closed.  If it is, you're 
limited by ACKs from the other end.
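
For example, a capture along these lines will show the advertised window ('win') on the segments coming back from the 
receiver; a window of 0 means the receiver has stopped accepting data (interface and port are placeholders):

  tcpdump -i <iface> -w trace.pcap tcp port <port>
  tcpdump -nn -r trace.pcap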
