How can I send a lot of TCP packets really fast using C?

I have a problem sending TCP packets using C from one PC to another. Here are the steps:
I connect to the other PC as a client.
I use send() to send as many 1024-byte packets as I can; in this test I try to send 100 of them.
My guess is that the NIC tries to fill a TCP packet with my bytes up to 1460, so it breaks one of my packets and sends the rest in another TCP packet.
Then the server only sees some of my packets, say 80 of them. I know it's related to how fast I'm sending, because if I use a small sleep between packets, the NIC doesn't break any of my packets to fill its 1460 bytes. I can also tell because when I send 512-byte packets, the NIC sends two of my packets in one 1460-byte TCP packet (including garbage).
BTW, I'm using Ethereal to trace this and see everything I'm describing. So, my question is: is there any way in C to send my packets so that the NIC doesn't put more than one of them in one TCP packet?
Thanks
----> Update (5 minutes later):
To make this question clearer: I'm sending 3 packets each millisecond. I have to run some tests, so I have to send as fast as possible; a delay probably won't be a solution. Also, I have to use C for evaluation purposes. If I try to send a larger packet (> 1024 bytes), the other side of the application, the server, starts to read garbage from the socket and usually crashes. I'm stuck here!
Thanks

The thing is that TCP conceptually works with streams, not with packets. It generally tries to optimize the stream as much as possible by putting as much data as possible into one packet. The TCP protocol does not care about your separate send() calls; it only cares about what's in its send buffer at the moment, and tries to send that as fast as possible.

You've already mentioned the most common way of "forcing" TCP to put only one send() in a packet: by adding a sleep between the sends (even a few milliseconds should be enough in most cases), you allow TCP to empty the send buffer and send out a packet, then wait for the next send() call that fills up the buffer again.

I would advise you not to tamper with the TCP protocol in any way other than adding sleeps. It does a great job on its own, and doesn't need you to decide how to optimize the streams.

For your situation specifically: is there any specific reason you want one send() per packet? If the reason is that on the receiving end you want to easily recognise the different sends, then what's usually done is adding a terminator at the end of each send (a combination of bytes that signifies the end of the send). That way, the receiver can scan the received data for that terminator.

Can you give a bit more information about why you want this behavior exactly?

Why would you want to control things at the packet level? TCP/IP is well optimized for sending quickly. I wouldn't tamper with adding sleep()s; they're only going to slow things down in the long run.

If you're sending some data, then waiting for a reply, you can sometimes speed things up by, after you've done all the writing, getting any pending data flushed out of the socket. Usually this isn't necessary, as most TCP stacks push the data out quickly on their own.

It's not that I want only one send() per TCP packet. What's happening is this: I send, for example, 100 packets of 512 bytes. Then TCP does something smart to optimize, and sends two of my packets in one TCP packet, so the receiver application reports it received only 50 packets. Using simple math, when I send 256-byte packets, TCP sends 4 of my packets in one TCP packet and the receiver sees only 25. Maybe it's the way the receiver reads the packets; how should the receiver know how many packets TCP is going to send? And I think I shouldn't care how TCP sends the bits, as long as the receiver sees the same 100 packets on its side, which is not happening. Using sleep won't help, because it would slow things down, and I want to go as fast as possible.
---
Can I just call flush() or something like that in C? Does that go right after send()?
Thanks

>> I don't know if maybe it's the way of reading the packet at the receiver.

Yes, the problem is that you think you get packets in your application. Packets are how the TCP protocol sends the data over the network. In your application you see a STREAM of data; the application has NO idea about the packets. recv() just returns whatever data is available on the STREAM. At any given time, the stream might be empty, it might contain one packet's worth of data, it might contain more than one, it might even contain part of one.

So, as I suggested earlier : if on the receiver side you want to count the number of messages that have been sent, you could add a terminator at the end of each message. The receiver can then scan the received data for that terminator. It then knows where the message ends and the next message starts. The number of terminators it finds will be the number of messages that have been sent.

An alternative is to add your own headers to the messages you send. The header will typically contain the length of the body of the message. So you recv() a header, read the body length from it, then recv() a body of that length. Then you can recv() the next header and do the same.

Thanks for your recommendations. Using headers is the best way, I guess, but I have several questions following your response. Should I declare an input buffer of the size of the TCP data field (1460 bytes)?
Sometimes TCP would break my packet and send part of it in another TCP packet, so when I read the buffer I could receive only part of my packet, and I'd have to wait for more of the stream to complete it. By the way, if my packet is broken in two pieces, should I always expect the end of my packet at the beginning of the next chunk? All of this would make me add more code I would rather not add. Is there a shorter way, like the flush (in C) that user grg99 was talking about?

Just a clarification: there is only one stream; all data is sent over that stream to the receiver. Forget about packets: they are an implementation detail and are not visible to your application. Don't fixate on the 1460 value. That's the value used on your system at this moment; it might be different tomorrow, and it will most likely be different on other systems. 1500, for example, is a common value for Ethernet, but on other networks it might be 500 or 5000.

The TCP implementation has an internal buffer that keeps the data it receives until you ask for it with recv().

You can of course use your own buffer in your application (and that's not a bad idea). You recv() data into that buffer until you have received enough (i.e. until you have received the whole message), and then you can work with it.

>> By the way, if my packet is broken in two pieces

As I said: forget the packets; that's all transparent to the application. The application sees a stream of data, and whatever the sender sends over the stream will be received on the other side. If the TCP protocol decides it needs to split up your message, you don't have to worry about that: the message is re-assembled on the receiving end before it's delivered to your application.

>> All of this would make me add more code I would like not to add.

The extra code will make your application more robust, and it's basically needed to make it function correctly. That's a good thing!

infiniti's advice is very sound. You can't rely on packet boundaries: if the data goes through a gateway or router, the packets might get rejuggled again and again. If you need to send meta-information, such as "start of new data record" and "end of data record", then you have to send EXTRA information, usually in the form of headers saying "3030 bytes follow". YOU CAN'T DEPEND ON THE PACKET SIZES OR SHAPES.

Well, according to your posts, your recommendation is to use a message delimiter. Now, before closing this question, I have something on my mind: I'm implementing a P2P emulator, so peers will exchange messages. What do you think would be a good delimiter, i.e. a sequence of bytes I could recognize as the end or start of a message? Or maybe the most common approach is a size field, so I'd know to expect x bytes. But what about the beginning of a message the first time I receive data from the stream? How do I know where a message starts? For example, I have an id field at the start; how do I know I'm reading an id?


In that case you're better off with the header solution I described. The first thing you receive is the length of the first message, then the first message itself, then the length of the second message, the second message itself, etc.

>> For example, I have an id field at the start, how do I know I'm reading an id?
Well, you know that the first thing in the message is the id, so by definition you know what you'll receive first.

Thanks, people. I actually opted for the delimiter. If I go public I'll change it to the size field; it's the better solution. Now the receiver is working better than ever: no matter what I send, it parses the messages very well. I increased the points for this question because I still have some problems related to the original question. =) Now, send() on the sender returns -1 for about 10% of the packets I send. I'm sending 2000 packets in ~3 seconds; ~1880 succeed (and the receiver acknowledges all of them!), but some of the send() calls return -1. The weird thing is that after some unsuccessful send() calls I see successful ones again. Why do you think that is? I'm sending only 60 bytes per call.

Hey, I'm using non-blocking sockets for application reasons. Is it possible that sometimes send() is not able to send as much as I want and returns -1? Would this happen with regular blocking sockets? BTW, the -1 comes with a "bad file" error, like "socket is closed", which makes no sense to me.

Yes, I did. Possibly the reason for that error (-1), which by the way (to answer you and grg99) means "bad file number", is that I'm trying to put too much in the buffer. But I wanted you to also consider whether it could be because it's a non-blocking socket. These sockets don't block in send() and recv() calls. Maybe, if the receiver (which is also a non-blocking socket) is not ready, send() returns -1. Please let me know what you think about this point.
