This is my first time at the Tech Report forums. I have some networking hardware questions, and since I couldn't find a forum dedicated to network hardware, this seemed like the place to ask.

I'm trying to build a new network for my business and I want data transfer rates of 10Gbps. My question is whether I would be able to reach that with plain cable and gigabit Ethernet cards. Any pointer in the right direction would be appreciated.

I moved this topic to the Networking forum so it gets more traffic (pun intended).

10Gbps is certainly a sexy number, but the question is do you have an actual *need* for that kind of bandwidth? Could you give us some more details about the size of the network, what you're transferring, etc? Most servers out there for a typical business can't come close to pushing that kind of data, and 10Gbps is usually reserved for backbone equipment. We'll take it from there!

I recall seeing a setup that had two servers connected via a pair of quad-port gigE cards in each machine, for a total of 8 x 1000Mbps bandwidth between them (running some kind of network bonding software to actually make use of it). Of course cards like that are server-class and close to $500 each so you're looking at almost $2K in network hardware, not counting whatever the software cost (and whatever headaches it might bring with it).
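Rough back-of-the-envelope math on what that bonded setup adds up to (this assumes ideal link aggregation; real bonding software rarely delivers the full aggregate):

```python
# Aggregate bandwidth of two quad-port gigE cards per machine,
# assuming ideal link bonding with zero overhead.
ports = 8                      # 2 cards x 4 ports each
per_port_mbps = 1000           # gigabit Ethernet per port

aggregate_gbps = ports * per_port_mbps / 1000   # total in gigabits/s
aggregate_gigabytes_per_s = aggregate_gbps / 8  # 8 bits per byte

print(aggregate_gbps)             # 8.0 Gbps total
print(aggregate_gigabytes_per_s)  # 1.0 GB/s, still short of a true 10Gbps link
```

Even eight bonded gigabit ports top out below a single real 10Gbps link, before counting bonding overhead.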

So, what kind of data do we transfer? My company makes equipment, space-telescope-type equipment, and when we simulate and test those machines all the raw data dumps have to be transferred over the network. We also transfer very high resolution images, telescope-sized images, plus video/audio over the net, so I do need that bandwidth. Besides, the page here says that with 10GBps I will only get 1Gb/s of transfer rate. Second: I know that kind of setup can't be built with commodity hardware, and I know it will cost money. I have a couple of grand in my pockets, no worry... so, any tips?

I don't understand this. Maybe I am reading this wrong, but 10 GBps is 10 gigabytes per second.... that isn't usually how things are measured for network traffic. If we are talking about 10 gbps, we are looking at 10 gigabits per second, which is 1.25 GB (gigabytes) per second. Do you need 1.25 GBps of throughput?

Besides, the page here says that with 10GBps I will only get 1Gb/s of transfer rate. Second: I know that kind of setup can't be built with commodity hardware, and I know it will cost money. I have a couple of grand in my pockets, no worry... so, any tips?

Firstly, I'm not quite sure what you were reading on that Wikipedia page, but there is a difference between GBps and Gbps. The former is gigaBYTES, the latter gigaBITS; 1 byte equals 8 bits. Network connections are normally measured in bits per second. As for actual speed, a 1Gbps connection between systems will generally transfer 1GB of actual data in ~10 seconds.
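To make the bits-versus-bytes point concrete, here's the arithmetic sketched out (the ~10 second figure assumes roughly 20% protocol/stack overhead, which is an assumption, not a measurement):

```python
# Gbps (gigaBITS per second) vs GB/s (gigaBYTES per second):
# divide by 8 to convert bits to bytes.
link_gbps = 1.0                       # a gigabit Ethernet link
link_gbytes_per_s = link_gbps / 8     # 0.125 GB/s theoretical maximum

file_gbytes = 1.0                     # transfer 1 GB of data
ideal_seconds = file_gbytes / link_gbytes_per_s   # 8 s with zero overhead
with_overhead = ideal_seconds / 0.8               # ~10 s at ~80% efficiency

print(ideal_seconds)         # 8.0
print(round(with_overhead))  # 10
```

So "1Gbps" never means "1 GB every second"; the theoretical best is 125 MB/s, and real transfers land closer to 100 MB/s.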

As for money to burn, a 10Gbit port on a server is about $1000, and a switch port is about $2000; full 10Gbit switches start around the $15000 mark, I believe. Factor that into your costs.

I really think you will have difficulty getting a server to feed a 10Gig link (you'd need fast enough storage, and fast enough interconnects between the storage and networking subsystems), plus, as others have posted, the costs are orders of magnitude over a 1Gig system. I could see maybe putting a 4x1Gig card in the server and getting a switch that supports link bonding for that link, whilst all the clients connect in at 1Gig.

If your data dumps are 600-800MB each (and not multiple dumps of 600-800MB per second), then I would expect a 1Gig setup to take roughly 6-8 seconds to transfer one to a client. Driving multiple clients, I could see the server needing a little more, but not 10Gig.

If your server isn't packed full of RAM to help cache the storage, and the storage is anything less than RAID arrays of SSDs, then I don't think you'll get anywhere near 10Gig out of the server regardless of what speed network card you put in. Similarly, look at the backplane connectivity: are the RAID controller and the network card both plugged into PCIe x16 slots that connect to the same PCIe switch?

I guess one reason that people are sceptical from the start when somebody mentions 10GigE without the why behind it is that they have heard it before. When I worked for a client a few years ago, some marketing guy had written a tech specification for a commercial interactive screen system they wanted to build. Before it went out to the parties bidding for the project, some other people with some sense put the specification before a bunch of technical consultants. We were three consultants, and we tore it to pieces inside of five minutes... but yeah, I guess an interactive "tv" for an infomercial really needed redundant 10GigE interfaces, with no regard for what the distribution layer, let alone the access layer, could actually deliver. The guy had probably heard somewhere that we were building the new network with 10GigE capability and just went with it. That same network also transports surveillance images from a few thousand cameras in real time, though.

In regards to the question: 10GigE is coming down in price, but it's not cheap yet. There are options where you can skimp on the fiber and use copper 10GigE, or SFP+ server cards, and there are several varieties of SFP+ direct-attach cables for 10GigE over copper as well. But an SFP+ server card is still $600 at least, and depending on the topology you want from the network, you might be fine with a single larger datacenter switch with the right cards, or you might need several switches.

Then of course you will need real-time data generation on the workstation to fill it, without going to disk in between, or you won't get much use out of it. On the server side you will need a SAN or storage server with enough spindles to write the 1.25GB/s of data that you generate.
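As a rough sketch of what "enough spindles" means here (the ~150 MB/s per-disk sequential write figure is an assumed value for a fast spinning drive, and RAID parity overhead is ignored):

```python
import math

# How many disks to sustain the write rate of a saturated 10GigE link?
target_gbytes_per_s = 10 / 8     # 10 Gbps = 1.25 GB/s
per_disk_mbytes_per_s = 150      # assumed sequential write rate per spindle

spindles = math.ceil(target_gbytes_per_s * 1000 / per_disk_mbytes_per_s)
print(spindles)  # 9 drives minimum, before any RAID overhead
```

In other words, even under generous assumptions the storage side needs a sizeable array just to keep up with one saturated 10GigE port.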

That said, the cheapest Cisco switches with at least 10GigE uplinks aren't that expensive... they start at $9999 + interfaces, and depending on your length requirements those can be $400, but most options are ~$1500 per interface or so. Other brands do have some cheaper options, but not by much. And if you want more than two 10GigE ports, it will get more expensive.

You gave me a bandwidth estimate when I asked for size. This can be interpreted several ways. I recommend tracing a typical data flow from creation to destination(s), and writing it out.

Sorry for asking, but can you give an example? I'm kinda lost on the technical side here.

You should know about the data that is being generated. Specifically:
- Large file vs. collection of files
- Average/maximum* file size (*not the largest ever, but what would hold true for 95% of instances)
- Frequency of generation

Data flow itself:
- After generation, where is the data located?
- What interacts with this data? (include number of systems)
- Is the data stored on those systems too? Pushed or pulled there?
- Is the data modified on those systems?
- What happens to the data after these systems are finished with it? (deletion/archival/whatever)

Routers, Ethernet cards, and cables: any suggestions, please?

Thanks in advance.

Erick

My advice: hire someone to do this right, assuming you don't already have a competent IT organization. There are a few folks on here who are top-notch networking professionals, and dozens who think they are because they wired their Linux system to a network-attached hard drive. Can you tell the difference? There have been some good follow-up questions asked, but in reality, the answers and information you get here will be worth exactly what you paid for them.

Putting in 10Gb network infrastructure is expensive. I was involved in the roll out of our 10Gb datacenter core at work and to give you a basic example, a 48 port 1Gb switch with two 10Gb uplinks is going to be in the 10k range. It is also technically complex. We have the storage problem. Even high end storage gear has a tough time sustaining 10Gb speeds once you exceed the size of their cache memory. We have the physical layer problem. Are you going to do copper or fiber? Do you know the limitations of each? We have the speed boundary issue. What are you going to do with gear that doesn't or can't support a 10Gb interface?

This is a project that will cost money. If you really need 10Gb gear, it's going to cost a lot of money. If you don't have the expertise in house then it is certainly worth it in terms of project success to engage a professional, either through a vendor or a consultant. Both have their benefits and drawbacks.

My advice: hire someone to do this right, assuming you don't already have a competent IT organization. [...] If you don't have the expertise in house then it is certainly worth it, in terms of project success, to engage a professional, either through a vendor or a consultant. Both have their benefits and drawbacks.

--SS

Same thoughts here. I do know more than simply attaching my Linux box to the network, and doing the same for the others, but I realize the issue here is much bigger. Also, I started to look into this more carefully and noticed that maybe there's no real need for that throughput. We don't have the server architecture needed to handle that flow of data. Actually, if we build a 10Gb network then the bottleneck will be storage, and so many other issues I just didn't see at first.

Yeah, my boss refuses to hire someone to do the job, and I know that would be the better choice. Anyway, thanks to your comments I now have the arguments I need to push back against that position.

In that case my recommendation would be to use gigabit ethernet over copper.

The cards are cheap, the wiring is cheap etc. etc.

Just don't skimp on your switch. If you can get a gigabit switch with channel bonding, you can enable that later on if you start saturating the port to the server and the server has spare capacity left.

Otherwise you may want to go with a dual-network setup, where you have your regular network and a secondary network specifically for your high-bandwidth needs. It would still be a gigabit Ethernet network, but data going over it would not saturate your primary network.

The data capture server would have two network cards, one for each network, so that captured data is still available to everyone else if it is needed. Again, if there is spare capacity left on the server, you could enable channel bonding if needed.

The secondary server could also be used as a mirror of the primary server, for additional redundancy of critical data if the primary server goes down (and it WILL eventually go down).

Otherwise you may want to go with a dual-network setup, where you have your regular network and a secondary network specifically for your high-bandwidth needs. It would still be a gigabit Ethernet network, but data going over it would not saturate your primary network.

This seems like the right setup. I'm planning to split my network in two, between the production/research facilities and the general-usage network.