... is actually pretty good ;-) I wanted to ask about/confirm the following numbers: when sending stuff TO the Falcon, I'm getting about 790-830 KB/s, but when sending FROM the Falcon, it's about 3.7 MB/s. Is this normal? This is my custom code with nearly no hassle around it -- just pure send()/recv() from MiNTLib, with Linux on the other side, again just plain recv()/send() calls.

Also, once, when sending a 15 MB file from the Falcon, I saw two errors like "buf_alloc RX failed, 1718" -- is this OK? (Too bad I forgot to check the file's integrity in that case.)

Btw, I realized there's one design flaw in the Ethernat driver -- unlike the Svethlana, you can, in theory, connect two (three, four, ...) Ethernats to a CT60 at once. But how do you set the MAC address for each of them, or, more generally, will the driver even recognize all of them? Somehow I doubt it, but feel free to prove me wrong ;) I'm going to try the Ethernat+Svethlana combination, this will be fun.

Do you always get those numbers? When I'm sending stuff to the Falcon through the Svethlana I also get an average transfer rate of around 800 KB/s, but only for smaller files, up to about 50 MB. With larger files, like 700 MB, my average transfer drops to 400 KB/s. Did you notice anything like that with your setup?

jury wrote:but only for smaller files, up to about 50 MB. With larger files, like 700 MB, my average transfer drops to 400 KB/s. Did you notice anything like that with your setup?

Haha, "larger" -- 700 MB is a frakking large number for the Atari world ;) I tried it with a 210 MB file: 813 KB/s. I'd say maybe your software tool is the culprit; as I said, I'm using my custom transfer utility, which doesn't do anything other than call the send()/recv() functions, whereas FTP and friends may involve additional protocol logic.

mikro wrote:... is actually pretty good. I wanted to ask about/confirm the following numbers: when sending stuff TO the Falcon, I'm getting about 790-830 KB/s, but when sending FROM the Falcon, it's about 3.7 MB/s. Is this normal? This is my custom code with nearly no hassle around it -- just pure send()/recv() from MiNTLib, with Linux on the other side, again just plain recv()/send() calls.

About the "buf_alloc RX failed" message: sometimes MiNTNet seems to fail to allocate a buffer for incoming packets for no apparent reason. You don't need to worry about it, since TCP's retransmission will fix it. I have not seen any broken files caused by this.

Thank you for those numbers, Evil! So there is some discrepancy between upload and download. I'm wondering whether the culprit is MiNTLib/FreeMiNT or something inside the FPGA/driver code? (Yes, I'm hoping to get 3.5 MB/s both ways ;))

mikro wrote:Thank you for those numbers, Evil! So there is some discrepancy between upload and download. I'm wondering whether the culprit is MiNTLib/FreeMiNT or something inside the FPGA/driver code? (Yes, I'm hoping to get 3.5 MB/s both ways)

I got the impression from the Nature guys that there was something funny in MiNTNet which limits the performance of the Svethlana driver.


Although we would have understood it in a single post, too ;), on second thought I have my doubts -- if MiNTNet/FreeMiNT were the culprit, wouldn't it affect all the network drivers? I mean, when using the good old EtherNEC, I got 300 KB/s both ways. On the other hand, this symptom might be visible only at high speeds (>1 MB/s, let's say). Too bad my EtherNAT doesn't work anymore, so I can't verify this :( From memory I don't recall any discrepancies, but my memory is not exactly the best.


Playing with my modified experimental FireBee driver for EmuTOS, I _think_ I've found that MiNTNet goes wild for some reason if the Ethernet driver allows multicast reception. Most (if not all?) other MiNT Ethernet drivers seem to ignore/drop multicasts (this one does not).

Using iperf, I get 95 Mbps and more (up and down) when connected to a separate "clean" NIC on my Linux machine (otherwise unused, with all services disabled, and set up as a router, thus effectively filtering multicast packets), but only 1-3 Mbps (and lots of retransmissions) in my "multicast-polluted" main network (STP, IPv6 NDP, UPnP) if I connect my FireBee directly to a switch port.

I don't know if this is a general MiNTNet problem or one specific to the FireBee, but I thought I'd mention it since it could provide a hint. Does the Svethlana driver allow multicasts?

I don't know whether multicast support would be a limitation of the Svethlana driver or of the MAC that we use. The MAC is from opencores.org, so it was not developed by us.

Regarding the funny issues that Pep mentions: a long time ago (my memory may be blurry on this), before the SV, when we were developing the Ethernat, we had two LEDs on it which we toggled in the Ethernat driver whenever a TX packet was written to either of the two TX buffers in the LAN91C111 chip. We were trying to see whether MiNTNet tried to burst more than one packet at a time, since we had enabled multiple packets in MiNTNet in the driver startup. But we never saw any activity on one of the LEDs. So we drew the conclusion that MiNTNet never tries to send more than one packet at a time, even though bursting is necessary to get higher speeds when you're not on a low-ping local network.

The Svethlana driver builds on the skeleton of the Ethernat driver and tries to enable multipacket buffer support in MiNTNet too. The Svethlana MAC also currently has only 2 TX and 2 RX buffers, like the Ethernat, so its performance should be close to the Ethernat's. (The difference is that the Svethlana MAC resides in the FPGA and is changeable, so we could add more buffers, or put the buffers in SV RAM and have lots of them -- but that needs another DMA unit in the FPGA.) I think the multipacket support is still not working with MiNTNet+Svethlana, just like it didn't work with the Ethernat. And I think MiNTNet is the culprit.