Hi,
I am developing some test code that needs to send a single TCP message from a thread every 5mS. Even in a tight while
loop with no nanosleep, I am currently only able to send one TCP message from the test thread every 7.5mS.
I have seen this time crop up before: when the OS tick was at its default, timers and similar events had this
granularity. I can already send UDP messages with fine-grained timing control down to hundreds or tens of
microseconds, having used ClockPeriod to set a 10uS tick. The hogs utility shows utilization below 10% with
everything else running on the box.
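(As a sanity check on the granularity the sending thread actually sees, a small portable probe like the one below can be dropped into the thread. clock_getres is standard POSIX; my assumption, which I have not verified, is that on QNX it reflects the current ClockPeriod setting. The name timer_resolution_ns is just for illustration.)

```c
#define _POSIX_C_SOURCE 200809L
#include <time.h>

/* Query the kernel's advertised timer resolution in nanoseconds.
 * Returns -1 if the query itself fails. */
long timer_resolution_ns( void )
{
    struct timespec res;
    if( clock_getres( CLOCK_MONOTONIC, &res ) != 0 )
    {
        return -1;
    }
    return res.tv_sec * 1000000000L + res.tv_nsec;
}
```

Printing the returned value from inside the thread would show whether the thread really runs with the 10uS tick or something coarser.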
I have disabled Nagle with TCP_NODELAY (that did make a difference, since previously messages could be clustered around
7.5mS - now I get exactly one message per 7.5mS). I have tried both 'write' and 'send' functions (no difference).
I have elevated the thread's priority (doubled from the default) and called ClockPeriod from within the thread
itself (which, as far as I understand it, should not be necessary). Nothing else I do seems to have any impact on
the minimum time.
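(For completeness, the priority change was done along these lines. This is a hedged POSIX sketch using pthread_setschedparam rather than my exact code; raise_thread_priority is an illustrative name, and the call can fail if the new priority is out of range for the policy or the thread lacks privilege.)

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <sched.h>

/* Raise (or lower) the calling thread's priority by 'delta' within its
 * current scheduling policy.  Returns 0 on success, -1 on any failure. */
int raise_thread_priority( int delta )
{
    int policy;
    struct sched_param param;

    if( pthread_getschedparam( pthread_self(), &policy, &param ) != 0 )
    {
        return -1;
    }
    param.sched_priority += delta;
    if( pthread_setschedparam( pthread_self(), policy, &param ) != 0 )
    {
        return -1;
    }
    return 0;
}
```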
My theory is that there is a 'tick' inside the TCP stack that is not reduced by a low ClockPeriod value, but I have
not found any TCP option that appears to control anything like that.
Any ideas on what might be happening, and how to avoid it? Many thanks for any ideas, or additional questions that
might help narrow down the possibilities.
Best regards,
John
Code
===
The sending code in the thread is pretty straightforward, as shown below. Note that even when nanosleep is used, no
interval below 7.5mS is possible. Two successive calls in the same loop also produce two messages 7.5mS apart, i.e.
each call to 'send' or 'write' appears to block for 7.5mS.
signal( SIGPIPE, SIG_IGN ); // If we don't ignore SIGPIPE, a failed write kills the whole program (not just the
                            // thread), so we would never see the write function's failure return

while( 1 != iShutdown )
{
    struct sockaddr_in saddrServer;
    memset( &saddrServer, 0, sizeof( saddrServer ));
    saddrServer.sin_family      = AF_INET;
    saddrServer.sin_port        = htons( APPLANIX_SERVER_PORT );
    saddrServer.sin_addr.s_addr = INADDR_ANY;

    int sktServer = socket( AF_INET, SOCK_STREAM, 0 );
    if( 0 > sktServer )
    {
        printf( "ERROR opening socket\n" );
    }

    int iOptval = 1;
    // Allow port re-use so we don't get hung up waiting for ports to be released
    setsockopt( sktServer, SOL_SOCKET, SO_REUSEPORT, &iOptval, sizeof( iOptval ));
    // Disable the Nagle algorithm on output, otherwise messages are delayed with spacing other than as specified
    setsockopt( sktServer, IPPROTO_TCP, TCP_NODELAY, &iOptval, sizeof( iOptval ));

    if( 0 > bind( sktServer, (struct sockaddr *) &saddrServer, sizeof( saddrServer )) )
    {
        fprintf( stderr, "Warning: %.40s: %d: Error on binding\n", __func__, __LINE__ );
    }
    listen( sktServer, 5 );

    struct sockaddr_in saddrClient;
    memset( &saddrClient, 0, sizeof( saddrClient ));
    socklen_t uiClientAddrLength = sizeof( saddrClient );
    fprintf( stdout, "Info: %.40s: %d: Attempting socket accept\n", __func__, __LINE__ );
    int sktClient = accept( sktServer, (struct sockaddr *) &saddrClient, &uiClientAddrLength );
    if( sktClient < 0 )
    {
        fprintf( stderr, "Warning: %.40s: %d: Error on accept\n", __func__, __LINE__ );
    }

    while( 1 != iShutdown )
    {
        int n = send( sktClient, &sTotalMsg, sizeof( sTotalMsg ), MSG_DONTROUTE );
        //n = send( sktClient, &sTotalMsg, sizeof( sTotalMsg ), MSG_DONTROUTE ); // second call in the same loop: messages still 7.5mS apart
        //int n = write( sktClient, &sTotalMsg, sizeof( sTotalMsg ) );           // 'write' behaves the same as 'send'
        if( 0 > n )
        {
            fprintf( stderr, "Warning: %.40s: %d: Could not write to socket any...
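For reference, the 5mS pacing I am ultimately after looks like the sketch below: clock_nanosleep against an absolute deadline that advances by one period per iteration, so timing error does not accumulate. The real send() call is elided, and paced_loop_ms is an illustrative name, not part of my program.

```c
#define _POSIX_C_SOURCE 200809L
#include <time.h>

/* Run 'iterations' loop passes, sleeping until an absolute deadline that
 * advances by 'period_ms' each pass.  Returns the total elapsed time in
 * milliseconds as measured by CLOCK_MONOTONIC. */
long paced_loop_ms( int iterations, long period_ms )
{
    struct timespec start, next, end;

    clock_gettime( CLOCK_MONOTONIC, &start );
    next = start;
    for( int i = 0; i < iterations; i++ )
    {
        /* Advance the absolute deadline by one period, normalizing tv_nsec. */
        next.tv_nsec += period_ms * 1000000L;
        while( next.tv_nsec >= 1000000000L )
        {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        /* The send( sktClient, ... ) call would go here. */
        clock_nanosleep( CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL );
    }
    clock_gettime( CLOCK_MONOTONIC, &end );
    return ( end.tv_sec - start.tv_sec ) * 1000L
         + ( end.tv_nsec - start.tv_nsec ) / 1000000L;
}
```

Even with this pattern, the measured interval never drops below 7.5mS on my box, which is what makes me suspect the stack rather than my pacing.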