Database Administrators Stack Exchange is a question and answer site for database professionals who wish to improve their database skills and learn from others in the community.

Normally, what some of us do is just host the database and the web server on one machine. Alternatively, we may dedicate a machine with plenty of RAM to the database, reached remotely over the LAN, and use another machine for the web server.

How does this affect I/O performance under high load on a busy server, and what else needs to be taken into consideration for the database read/write rate? Is using two separate machines efficient, and does it make good use of the resources? Are there better suggestions for a high-performance database given these resources? And what about security?

Edit: 2 servers, each with a 4-thread processor (1 web server and 1 database server), versus 1 server with a 4-thread processor: does it matter (still in terms of performance)? Also, how does this apply to cloud hosting performance? Cloud hosts also share the database among all their cluster nodes to distribute resources and provide high availability, or am I wrong?


I feel something is missing from the last sentence. Anyway, in my practice at least, a development workload rarely reaches the limits of the machine used for development. (If you plan to perform load tests, then this is certainly not the case.) The speed of the LAN isn't that low either, so I don't think you need to worry.
– dezso Dec 18 '12 at 10:02

@dezso So is it better to use 2 machines instead of 1? Sorry for being unclear before; I have edited the question to improve it. I hope it's clear enough. :)
– RiseCakoPlusplus Dec 18 '12 at 10:38

With cloud hosting you want to make sure you get the most bang for your CPU cycles, and base it on the provider's pricing structure. If they charge you more for 2 virtual dual-cores than for one virtual 4-core, then pick the cheaper option. If not, then base it on your workload.
– Ali Razeghi Dec 19 '12 at 9:15

2 Answers

If I understand correctly you are asking:
"Which would be better and faster: running everything on 1 machine and having clients connect directly to the server, removing networking as a potential bottleneck, or having clients access the server from another machine by moving the web services and logins to that other machine?"

In I.T. we usually try to squeeze out all the performance we can for every dollar, so I can see why you would ask this, but it is a dangerous path to entertain.

Firstly, what you're really talking about is resource utilization and where the bottleneck lies. If your bottleneck is in fact network I/O, then this would help performance, but you run a lot of risks, which are outlined below. However, if network I/O really is the bottleneck, you have to fix that: use SET NOCOUNT ON in your code, and verify that the network is stable. Network I/O is rarely the bottleneck compared to disk I/O or RAM.
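As an illustration of the SET NOCOUNT ON advice: it suppresses the "(N rows affected)" message SQL Server otherwise returns after every statement, which saves a network round trip per statement in chatty procedures. A minimal sketch (the procedure and table names here are hypothetical):

```sql
-- Hypothetical example: without SET NOCOUNT ON, every statement in the
-- procedure sends an "(N rows affected)" message back over the network.
CREATE PROCEDURE dbo.UpdateLastSeen
    @UserId INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress per-statement row-count messages

    UPDATE dbo.Users
    SET LastSeen = GETDATE()
    WHERE UserId = @UserId;
END;
```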

What DB engine are you using? If you're using MS SQL Server you could run a simple:

SELECT * FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC;

How high does network I/O show up on the list? Depending on your version you should get around 450-500 rows returned. If network I/O isn't even in the top 50, then you probably shouldn't entertain this idea.
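To zero in on network waits specifically, you can filter the same DMV for SQL Server's network wait type, roughly like this:

```sql
-- How much time has been spent waiting on the network? Note that this
-- wait type also accrues while a slow client consumes a result set.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';
```

Keep in mind that ASYNC_NETWORK_IO also accumulates when a client is slow to fetch its result set, so a high value does not automatically mean the link itself is saturated.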

Having Everyone Directly Connect On 1 Machine And Run DB/Web Services Locally:

In a world where I.T. best practices trump all else, you would without a doubt separate client access from the main machine. There are just way too many things that could go wrong. For example, if you're using SQL Server and SSMS, SSMS itself will easily eat up 500 MB per instance. That means every client who connects and opens SSMS will eat up RAM, so if you have 20 people connecting, you just ate up 10 GB of RAM.

Also, while devs might be great at development, unless you have a strong DevOps environment they cannot be trusted in prod. They are under the gun to get things running fast and will just hack around. When you remove devs from prod, all kinds of mystery issues immediately disappear.

Security would be another risk. In many I.T. environments I've consulted at, security was an afterthought. I wouldn't be surprised if devs and users would have access to install files, create shares, change settings, etc. on your prod server. This is horrible.

And what if they start saving large files on your drives and you run out of disk space?

Having everyone connect from a web server:

In this scenario, users are still connecting to the web server directly, which means they could stop web services; however, those can typically be recovered quickly, and it won't affect the overall health of the DB server. You isolate the DB from everyone, and this is a much safer alternative.

I think this is clear enough for me atm. Thanks! Btw, I'm using Percona.
– RiseCakoPlusplus Dec 18 '12 at 18:27

I am not aware of how to view wait stats with Percona, but given the typical overwhelming reasons not to run everything on one prod box, separating will probably still be best! Please let us know if you have any follow-up questions. Thanks.
– Ali Razeghi Dec 18 '12 at 19:43

I just added 1 more variable into consideration. :)
– RiseCakoPlusplus Dec 18 '12 at 20:38


I do wonder why you don't mention network round-trip times. They can be a bigger problem than link saturation. 1,000 sequential queries locally with a tiny RTT won't cause a problem; do the same over a 100 ms RTT link and you've got a massive problem (100 ms × 1,000 round trips is 100 seconds spent on the wire alone).
– Phil Dec 19 '12 at 1:50

Hi @Phil, thanks for the reply, and good point. I've been doing some work for a CDN and had several network scenarios play out, causing various issues. Pulling 1k queries per second over about 25 ms hasn't been an issue; not using connection pooling was more of one, due to CPU contention. A similar setup with a link between Asia and Los Angeles, doing mirroring at about 450 queries per second over more than 100 ms of latency, has worked out so far too, surprisingly!
– Ali Razeghi Dec 19 '12 at 9:14