Sunday, January 23, 2011

So we have an app that we want to run multiple instances of on Linux. The number of instances should be configurable. We also want a new instance to be booted up whenever one of the instances disappears.

I was looking into C-based programs, shell scripts, Python scripts, etc., but I was wondering what would be the simplest, easiest way to do it. Are there any tools out there? Can one simply use some built-in Linux functionality?
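A minimal sketch of the low-tech shell approach (the function name and the restart bound are illustrative only; in practice a supervisor such as daemontools, runit, or supervisord does exactly this job):

```shell
#!/bin/sh
# respawn: run a command, restarting it every time it exits.
# The "max" restart bound exists only so this sketch terminates;
# a real watchdog would loop forever (while true; do ...; done).
respawn() {
    max=$1; shift
    i=0
    while [ "$i" -lt "$max" ]; do
        "$@"
        echo "command exited with status $?; respawning" >&2
        i=$((i + 1))
    done
}

# e.g. launch N supervised instances in the background:
#   for n in 1 2 3; do respawn 999999 ./myapp "$n" & done
```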

This may be because of the semi-colon issue pointed out by @Dom - in which case the server logs on those two boxes should tell you that. If you're running BIND, use named-checkzone to check the syntax of your zone files.

If you've actually got the right syntax now, but it's still not working, you need to look at the ACLs in your server - make sure that you're actually permitting access to that zone from 0.0.0.0/0 (aka "any").
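For a BIND setup, the check and the ACL look something like this (the zone name and file path are placeholders for your own):

```
// named.conf fragment -- zone name and file path are placeholders
zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
    allow-query { any; };    // i.e. permit queries from 0.0.0.0/0
};
```

and the corresponding syntax check is `named-checkzone example.com /etc/bind/db.example.com`, which will report the line number of any error in the zone file.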

Hey guys,
I have made a website.
But after a few months of running successfully, it is now showing a virus attack.
Now, how do I remove this?
And what can I do to avoid these attacks in the future?
I have put up a screenshot so that you can understand better.

The only satisfactory solution is to reinstall from a backup taken when you knew the machine was clean. If you don't have a backup, wipe it and start again. Properly and fully removing a virus is seldom a simple job, despite the claims made by antivirus software vendors.

I suggest you enlist the services of an experienced system administrator to help you fix the problems you have and to secure the server a lot better than it is now. This is not a job for the inexperienced, unless you want to go through this again... and again...

I have to set up a server that should be as secure as possible. Which security enhancement would you use and why, SELinux, AppArmor or grsecurity? Can you give me some tips, hints, pros/cons for those three?

AFAIK:

SELinux: most powerful but most complex

AppArmor: simpler configuration / management than SELinux

grsecurity: simple configuration due to auto training, more features than just access control

Personally, I would use SELinux because I would end up targeting some flavor of RHEL which has this set up out of the box for the most part. There is also a responsive set of maintainers at Red Hat and a lot of very good documentation out there about configuring SELinux. Useful links below.

Ophidian : I find yum's CLI significantly more intuitive than apt's. SELinux is annoying when you're trying to go your own way with non-stock apps, but I've never had issues with the stock stuff beyond needing to turn on some sebools to enable non-default functionality (e.g. letting httpd PHP scripts connect to the database).
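For reference, that particular boolean is toggled like this on an SELinux-enabled host (requires root; -P makes the change persistent across reboots):

```
# allow httpd (and PHP scripts running under it) to connect to a DB
setsebool -P httpd_can_network_connect_db 1

# inspect the current state of related booleans
getsebool -a | grep httpd_can_network_connect
```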

A "server" to provide what kind of service? To what audience, in what environment? What constitutes "secure" to you in this context? Lots more information would be necessary to provide a useful answer. For instance, a pure IP time-of-day server can be very secure -- all ROM firmware, radio input, self-contained battery power with automatic charging. But that's probably not a useful answer for you.

So, what kind of service? Internet wide, enterprise wide, trusted work team, dedicated point-to-point networking, etc.? Is high availability a need? Reliability? Data Integrity? Access control? Give us some more information about what you want, and recognize that "secure" is a word whose meaning has many dimensions.

I have done a lot of research in this area. I have even exploited AppArmor's rulesets for MySQL. AppArmor is the weakest form of process separation. The property I'm exploiting is that all processes have write privileges to some of the same directories, such as /tmp/. What's nice about AppArmor is that it breaks some exploits without getting in the user's or administrator's way. However, AppArmor has some fundamental flaws that aren't going to be fixed any time soon.

SELinux is very secure; it's also very annoying. Unlike with AppArmor, most legitimate applications will not run until SELinux has been reconfigured. Most often this results in the administrator misconfiguring SELinux or disabling it altogether.

grsecurity is a very large package of tools. The one I like the most is grsecurity's enhanced chroot. This is even more secure than SELinux, although it takes some skill and some time to set up a chroot jail, whereas SELinux and AppArmor "just work".

There is a 4th option, a virtual machine. Vulnerabilities have been found in VM environments that can allow an attacker to "break out". However, a VM provides even greater separation than a chroot, because in a VM you are sharing fewer resources between processes. The resources available to a VM are virtual, and can have little or no overlap with other VMs. This also relates to <buzzword> "cloud computing" </buzzword>. In a cloud environment you could have a very clean separation between your database and web application, which is important for security. It may also be possible that one exploit could own the entire cloud and all VMs running on it.

mathew : RewriteCond %{THE_REQUEST} ^[A-Z]+\ /search\.php\?q=(www\.)?([^/\ ]+)[^\ ]*\ HTTP/
this is for requests for any kind of search domain... but what I don't know is how to convert that to http://www.mydomain.com/domain.com

cpbills : added another potential answer

mathew : nope, it doesn't work

cpbills : does it do /anything/? Can you provide more information as to how it doesn't help? Maybe enable logging for RewriteRules (`RewriteLog file-path` and `RewriteLogLevel 9`; see http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html). You also have to be willing to play with the regular expression in the `RewriteCond` to see if you can at least get it to trigger. I don't know where you got that pattern, so I have no idea if it should work or not; you provided it.
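For what it's worth, a RewriteCond by itself does nothing; it has to sit directly above a RewriteRule. A sketch of a complete pair, assuming %2 (the second capture group of the condition, i.e. the domain after the optional www.) is what you want appended to the target; this is untested against your traffic, so verify it with the rewrite log:

```
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /search\.php\?q=(www\.)?([^/\ ]+)[^\ ]*\ HTTP/
# %2 refers back to the second capture group of the RewriteCond above;
# the trailing ? strips the original query string from the redirect
RewriteRule ^search\.php$ http://www.mydomain.com/%2? [R=301,L]
```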

See "To enable HTTP Compression for Individual Sites and Site Elements" here.

Edit: I misread the details of the question. I am pretty sure I have configured different file extensions for compression on different sites in the past, but I also can't seem to find any definitive answer right now. I'll check when I'm at work tomorrow.

Kev : I read that. That's just the settings to turn on/off compression by site. There's no mention about whether you can customise the file types on a site by site basis.

it depends on whether you're using CentOS (or some other distribution that uses yum) or Debian/Ubuntu, which uses apt and apt-get

those lines, in the /etc/sudoers file, would allow username to run the commands /usr/sbin/apt-get update and /usr/sbin/apt-get install [packagenamex], or /path/to/yum install [packagenamex], as the root user, and they will be prompted for /their/ password, not root's. they will have no other privileged access to the machine.
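The sudoers lines being referred to would look something like the following (edit with visudo; the username and binary paths are illustrative and vary by distribution, so check yours with `which apt-get` or `which yum`):

```
# /etc/sudoers fragment -- username and paths are illustrative
username ALL=(root) /usr/bin/apt-get update, /usr/bin/apt-get install *
# or, on a yum-based distribution:
username ALL=(root) /usr/bin/yum install *
```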

beyond that, most packages can be compiled from source with commands like:

./configure --prefix=/home/username
make
make install

which will install the package to their home directory, usually creating ~/bin, ~/lib, ~/usr, etc. directories.

so maybe ./configure --prefix=/home/username/local or something would be more appropriate.

for setting up apache httpd, to allow each user their own control over their own virtualhost, etc, without running multiple instances, you can add an option to the apache configuration, something like /etc/apache2/apache2.conf, a line that says:

Include /home/*/httpd/user.conf

the configuration file can be named whatever you want, whatever might be more appropriate, but what this tells apache is to look in /home/*/httpd/ (where * is expanded as a glob to whatever subdirectories are under /home) for a file called user.conf, where you can permit your users to add information about VirtualHosts
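A user's own file could then contain something like this (the username, hostname, and paths here are placeholders):

```
# /home/alice/httpd/user.conf -- names and paths are placeholders
<VirtualHost *:80>
    ServerName alice.example.com
    DocumentRoot /home/alice/public_html
    ErrorLog /home/alice/httpd/error.log
</VirtualHost>
```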

a normal user could install or configure apache to run out of their home directory on a non-privileged port, if you wanted to grant them access in that way. a non-priv port being anything over 1024, they would have to add a directive to their personal apache configuration saying something like Listen ip.add.re.ss:8888, starting an apache httpd server running on port 8888

to be sure they cannot browse into your, or anyone else's, home directories, make sure the directories are set chmod 700 or chmod 711 (711 allows the apache httpd process to execute/traverse the directory, to get through to /home/username/public_html if you want to have user dirs in apache). you can test this by doing ls -ld /home/username; it should show:

drwx------ 185 username users 36864 May 18 17:05 /home/username/

for permissions 700, and drwx--x--x for 711. if it shows up as drwxr-xr-x then you will need to run chmod 700 /home/username or chmod 711 /home/username
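A quick sketch of what those two modes look like in practice, using a scratch directory that is safe to create anywhere:

```shell
# Demonstrate the two permission modes on a scratch directory.
mkdir -p /tmp/permdemo

chmod 700 /tmp/permdemo
ls -ld /tmp/permdemo    # first column starts with drwx------

chmod 711 /tmp/permdemo
ls -ld /tmp/permdemo    # first column starts with drwx--x--x
```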

Depends on whether you'll be terminating SSL on the load balancer or web servers...

In general, if your load balancer can handle it, then it's better to do it all there and take the load off the web servers. It also allows quicker deployment of new servers, as it's one less step to worry about.

Having said that, once you have your private key and ssl cert from the provider, you can back these up and use them wherever you like (on LBs or servers), so you won't be tied to one method or the other permanently.

Warner : What? They're asking about the certificate request.

Robbo : Yes, and I added some relevant thoughts around using load balancers for SSL termination and finished by saying what you did.

We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record?

I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it.

Is there any way to make the OS check the rest of the search path after it finds the username?

I would take the large instance because you have more reserve, memory-wise as well as CPU-wise. Also, I have read about the small EC2 instances becoming sluggish. A bit of headroom can't hurt.

There are also additional cores, so the CPU load of running backups might have an even smaller performance impact.

Additionally, you save one instance of Windows Server 2008, which saves the cost and the associated CPU and memory overhead of running the OS twice. I have to admit that I don't know the pricing model of Windows Server 2008 (cost per CPU, thread, socket, or ...).

If you ran into saturation of the large VM, this would have occurred far earlier with the small VMs, as they aren't even half the specs of the big one.

Last but not least, if you really have to launch another instance for backup, you only have to launch one instance.

So with Windows as the OS, I don't really see a benefit from splitting the workload over two small isolated VMs.

devguy : thanks for the comment. It's not really a budget decision, since we already own the licenses, and the two setups differ by about 80-90€/month... so that doesn't have much impact.
I'm mostly concerned about limited scalability options... since I'd probably need to switch to a setup like #1 (but with more powerful machines) to be able to add more front-end servers while keeping a single/shared powerful DB machine.

The two-image setup is probably going to be better for scalability, administration, and general management.

The single image is probably going to be cheaper, especially if you never have to scale this site out much.

Performance will depend largely on your implementation, but will likely be similar on both setups. The single image has more RAM and processing cores; this may be very important to your implementation (or maybe it will make no difference in the slightest).

devguy : I'm interested in this part: "Performance will depend largely on your implementation, but will likely be similar on both setups". How can they be similar? Setup #2 runs both services, but has 4 times the RAM and CPU... and there's no network latency to bring the data from a separate server.
My main concerns are mostly about scalability (how do I attach a new IIS machine from the same image, since it also includes the DB?)

Chris S : The network latency between the two "machines" is likely to pale in comparison to the Internet latency of the client. Unless your implementation is actively accessing more than ~1.5GB of DB data, the DB server's RAM is less important than the underlying disk storage, which is likely to be the same on both. Multiple machines cannot run off the same underlying image; not yet, anyway. If you went with setup #2 and added another IIS VM later, it would have to be different. I'm sorry for being vague, but I don't know the details of the application.

devguy : Thanks Chris for the additional info. That is actually what I'm considering about setup #2.
I have always read, however, that it's best to have all the RAM possible on the DB server (also in some Stack Overflow posts by Jeff), and that's why I thought it would be better to have a large machine with a lot of RAM to run the DB. The IIS would be pretty light anyway.
Sure, there's the problem of scalability this way... but I hope the large machine would suffice for quite some time... until a decent initial success, at least...

deploymonkey : I looked for disk performance on EC2 and found 2 interesting tests. There are people who run benchmarks on EC2. Just have a look.
http://blog.mudy.info/2009/04/disk-io-ec2-vs-mosso-vs-linode/
http://stu.mp/2009/12/disk-io-and-throughput-benchmarks-on-amazons-ec2.html
And regarding the decision of what to opt for, just run a simulation if you can, and you will notice whether disk I/O or CPU is what limits user numbers first. You might even be okay with one smaller instance. It depends on your app.

The two previous answers give some good decision points, but one thing not mentioned is your site availability requirements - if you use either of the architectures you suggested, can you tolerate your site being down while you relaunch a crashed EC2 instance ? (startup times are especially long for Windows instances; I've seen it take up to 30 minutes)

Whichever way you go, I recommend storing your database on a separate Elastic Block Store volume so that you can easily reattach it to a new instance in case of failure. EBS volumes are also easy to back up using the snapshot facility provided by AWS.

devguy : yes, the DB and website files would be on a separate EBS volume... so that I can also start up a new instance, attach it to that EBS volume, and stop the previous instance

As you just mentioned that you can throw money at the problem and that you anticipate some scaling, go with 2 instances. That way you can gain experience with separation of services and have a better starting point for profiling and benchmarking your services.

You might even want to migrate your DB to OSS at a later point, which is easier that way.

(Informative: cloning and duplicating instances in EC2 is possible. This article is for Linux, but maybe it gives you a hint about how to make a running copy of your installs.)

I was once told by a very clever network architect, whom I respect a lot, to keep each machine as simple as possible. Always!

So, I would go for small instances, separately; once they become too small, consider upgrading them or spawning extra instances.

Because you have them split up from the beginning, it's easier to put in extra power where needed instead of paying too much for the wrong setup.

It becomes a bit harder to maintain and back up more images, but you also gain the benefit of more scalability, I think.

We have run a similar setup for quite a few years now, running on VMware, and the SQL Server is separated from the 2 IIS machines.

We even have a secondary SQL Server now, and that's possible because we could also link them for sync purposes.

devguy : very correct about the scalability. I'm just a bit worried about two things:
1) the setup would be more complex, and there's the possibility that it would be more complex for nothing, if we never reach a point where the 2 instances are not enough
2) I'm worried that the total performance of the 2 machines would be noticeably less than the performance of a single large machine. I think the single large machine could last quite a bit longer than the 2 small ones...

Hi! I've upgraded from 9.10 to 10.04, but unfortunately the PHP provided with 10.04 is not yet supported by Zend Optimizer. As far as I understand, I need to somehow replace the PHP 5.3 package provided in 10.04 with the older PHP 5.2 package provided in 9.10. However, I am not sure whether this is the right way to downgrade PHP, and if it is, I don't know how to replace the 10.04 package with the 9.10 package. Could you please help me with that?

We have several websites (with several public IP addresses) running on a web server. In IIS, the IP addresses are internal IP addresses (192.168.xxx.xxx). How do I figure out which public IP address matches which internal IP address? My goal is to change some public IP addresses. The particular web server is running IIS 6 on a Windows 2003 Server. Thanks, in advance, for your help!

You must have a port forwarder or other device routing connections from the external public IP addresses to the internal addresses. Your best bet would be to get access to that device and look at the configuration mapping public to private.

If all the IIS machines run the same web applications, you may have a load balancer handling the connection routing. In that case, your problem is perhaps simpler: you simply have the load balancer listen on the new addresses but continue routing to the same pool of private machines.

Based on your comment, you may need to add a Host Header definition to your IIS configuration. If you have two websites listening on the same port (e.g. 80), you need to tell IIS how to direct traffic to each site. You do this by telling it which host address is handled by each site. (I only have an IIS 5 server to look at, but the settings for this should be similar)

Right click the web site and select Properties.

Select the Web Site tab

Next to the IP Address, click the Advanced button.

In the Multiple Identities for this Web Site, edit the entry and set the Host Address to the host name of your website. For example if you access the web site at http://www.example.com, you would set the Host Address to www.example.com. Save the settings and restart IIS if necessary.

Now the certificate issue is another problem and may actually be the source of the 400 errors. In order to decrypt the request, IIS needs to know the key to use. Since the entire request is encrypted, the only thing it can use to determine which key to use is the port on which the request arrived. If you have more than one SSL/TLS enabled website on the server, you will need to have each one listen on different ports and your firewall will need to know to route the request to that port. This also means your firewall will need to route specific public IP addresses/ports to the specific port for the private IP address.

Charles : Thanks for your good suggestions. We do have a firewall that load balances a couple of websites. This particular website runs on one source webserver and I want to move it to another destination webserver. I used the same IP address as that of another website on the destination webserver, but I got a HTTP 400 error message and the home page wouldn't load. I was using IIS Manager (but it was only showing the internal IP address). I'll check the firewall. Also, the website has a SSL certificate. If there's anything else I should check, please let me know. Thanks to all of you!

David Smith : I updated my answer with more information based on your comment.

Are you doing 1:1 NAT (or it may be called Virtual IPs) in your edge firewall? That would be one way to tell what public IPs map to private IPs.

However, it's likely that you're just using host headers in IIS for each website: run nslookup, enter server 8.8.8.8, and then look up the A record for each domain listed (I'd do the www host as well); the IP(s) that they resolve to will tell you what IPs are being used for your websites.

I put 8.8.8.8 (Google's nameserver) in the nslookup example in case you have split DNS set up internally; this will make sure that you're getting the public IP, not an internal IP.
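From a Windows command prompt, the interactive session looks like this (the domain is a placeholder for each of your own; the A record in each answer is the public IP):

```
C:\> nslookup
> server 8.8.8.8
> www.example.com
> example.com
> exit
```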

I'm a software developer and don't have much experience as a sysadmin. I developed a web app and was considering buying a server and hosting the web app on it.

Is this a huge undertaking for a web developer? What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff? I'm a single user, and not planning to sell the service to others.

Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?

If you are going Linux/Apache and you want to keep it as simple as possible, I'd suggest going with Linode (http://www.linode.com/) and use their Ubuntu images. Linode is a VPS provider and provides a great number of tools to automate things such as backups and let you manipulate your system through an http based console if the need arises. You'll have root access. You won't ever have to deal with anything hardware related and very rarely will you have to deal with anything network related.

I don't think you'll have many problems doubling as a sysadmin if you remember to "automate everything" - I've done this before in a previous life. Learn how to write bash scripts (or scripts for the shell of your choice). Putting on a sysadmin hat as a developer is a very useful exercise. It'll help you both appreciate the work admins do and also tailor your development processes to make life easier on admins.

Depends on your level of experience and how comfortable you are with breaking and fixing things.

What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff?

Package updates (I refuse to endorse source-based distros like Gentoo to anyone who's not already a guru) are easy; securing web apps can be quite difficult, depending on what functionality you're trying to achieve. Web Application Exploits and Defenses is an interesting exercise in teaching developers to write secure applications, once the basics like PHP security and SQL injection are out of the way.

Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?

Ubuntu is fairly newbie-friendly but may be too "simplistic" for some people's tastes. Both the distro and the community have a few strange ways of doing things, but filtered through a reasonable degree of cluefulness you should be able to achieve almost anything.

It's a good idea to try and find a community that you can ask questions of - IRC, and specifically the Freenode network is good for anything open-source related - and forums are good for almost anything, if you can find the right one. Real People Who Know Things are also invaluable when starting out.

The counter, Process(sqlservr)\% Processor Time, is hovering around 300% on one of my database servers. This counter reflects the percent of total time SQL Server spent running on CPU (user mode + privileged mode). The book, SQL Server 2008 Internals and Troubleshooting, says that anything greater than 80% is a problem.

I have two NIC cards in my computer - one is connected to our corporate network and the Internet, the other is connected to a private LAN through a Linksys WRT54G. Both cards use DHCP.

This was never an issue with Windows XP, but with Windows Vista (and Windows 7) the metric for the 0.0.0.0 route is the same (20), and it appears that some network traffic that should go out my main network card is going out my secondary card instead.

The solution to date is to delete the 0.0.0.0 route associated with the second NIC card, but I have to do this several times a day.

Do both cards use the same subnet or something? You say both networks use DHCP; they really should be using different private ranges, otherwise you have a nonsensical network setup. If both cards have addresses in the same subnet, then the machine will correctly assume they are on the same network. If both networks use the same subnet, the solution is to change the subnet one of the routers issues addresses in, rather than butchering your network config. There are hundreds of private subnets to choose from, after all.

You probably have to delete that route several times a day because the DHCP lease on the card that usually has the default route is for about 2 or 3 hours.

I used to have a server with 2 network cards; one had a public IP address that could be contacted from the internet, and the other card was plugged into the internal network. I found that every 2 hours, when the DHCP lease was renewed on the internal network card, it could change the routing I had set up, until I started using "-p" in the route command, which makes the route permanent so you won't lose it, not even after restarting.
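On Windows, the persistent flag goes with the route command itself; the shape of the command is something like the following (the destination, mask, and gateway here are placeholders for your own values):

```
C:\> route -p add 0.0.0.0 mask 0.0.0.0 192.168.1.1
```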

Well, I've had the same sort of problem as described, but on Windows XP, and it was solved by the automatic metric calculation answer. Thanks, Kevin.

I will explain my setup and exactly how I solved it.
My computer is connected to two networks: the first is a wifi card to a router for internet access; the second is a wired network card attached to a hub, a temporary measure put in place so I can configure a NAS box also attached to the hub.

My predicament: not being able to browse the instruction manual online whilst I configure the NAS box through its own web-based interface!
I set up the following subnets

192.168.100.1 - wired to the NAS box through a hub

192.168.200.1 - wifi to the internet through a router

The effect was strange: sometimes I could browse a page, other times it would just time out; clearly internet traffic was getting lost down the wrong subnet.

Here's how to fix it:

Open up a command prompt and type 'route print'. You can then verify the 'metric' for each subnet you're running: look at the lines where the Netmask shows as 0.0.0.0 and take a note of the metrics. The wired network will most likely be '20' and the wireless '25'. Note: the lower value tells your computer to use that subnet over the other, certainly in the case of web browsing.

Go to Start menu > Control Panel > Network Connections > open the Properties of the non-internet network > under the 'General' tab, in the 'This connection uses the following items' list, select 'Internet Protocol (TCP/IP)' and click the 'Properties' button > under the 'General' tab, click the 'Advanced' button > under the 'IP Settings' tab, untick the 'Automatic metric' box and enter into the 'Interface metric:' field the higher of the two values collected from the 'route print' command.

Repeat again for the internet-enabled subnet, but this time entering the lower value. You can then verify the settings by going back and running the 'route print' command again.
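If you prefer the command line over the Connection Properties dialog, the same interface metric can be set with netsh on Vista/7 (the interface name and metric value here are placeholders for your own):

```
C:\> netsh interface ipv4 set interface "Local Area Connection" metric=50
```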

You can either wait for a distro-integrated kernel to include it, wait for someone in the community to build an appropriate kernel package (which probably won't take too long), or patch and build a kernel yourself. Unless you're familiar with the procedures for building a kernel and applying kernel patches (given that there'll likely be significant changes between the Ubuntu-released kernel and the bleeding edge kernel these patches are targeted at), I'd leave it alone and wait for someone else to do it. It won't be a trivial operation.