I just got a server and want to colocate it in a datacenter.
Server details:
HP DL380,
2x Intel Xeon (3.06 GHz/533 MHz FSB, 512 KB L2 cache),
8x fans, rack form factor (2U),
2x 400 W power supplies.
Since the server has two PSUs, can I power up only one of them to reduce the colocation cost? Will the server still run reliably?
The standard colocation packages in my city include 400 W of power by default; each additional 400 W costs roughly another $40-60 per month.
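To put the numbers side by side (my own rough math, assuming the quoted rates hold): a second 400 W feed at $40-60/month works out to $480-720 per year, just for redundancy on a single server.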
Please share suggestions from your experience.

I'd like to install XtraBackup (rpm -i percona-xtrabackup-2.1.9-744.rhel6.x86_64.rpm). During the rpm install it reported missing dependencies:
error: Failed dependencies:
perl(DBD::mysql) is needed by percona-xtrabackup-2.1.9-744.rhel6.x86_64
perl(Time::HiRes) is needed by percona-xtrabackup-2.1.9-744.rhel6.x86_64
Then I ran yum install perl-Time-HiRes and yum install perl-DBD-MySQL. Installing perl-Time-HiRes succeeded, but perl-DBD-MySQL failed with:
Error: file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-libs-5.1.73-3.el6_5.x86_64 conflicts with file from package MySQL-server-5.6.10-1.el6.x86_64.
I also tried installing via CPAN:
yum install cpan
cpan DBI
cpan DBD::mysql
But I still get the same error.
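In case it helps the diagnosis, this is how I'd check which installed package owns the conflicting file and what can satisfy the Perl dependency (standard rpm/yum queries, as far as I know):

rpm -qf /usr/share/mysql/ukrainian/errmsg.sys   # which installed package owns the conflicting file
yum provides 'perl(DBD::mysql)'                 # which packages can provide the missing dependency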
I hope someone can explain what the right fix is to get XtraBackup running against MySQL.

I work with Ubuntu 10.04 every day. Until a few days ago, running sudo apt-get install ... worked fine with no errors, and I was also able to open websites in my browser without a proxy.
Today, however, I get an error: every time I run the command, the connection is redirected to an IP address on my local network. I can see it in the terminal output.
A few days ago I had tried to connect to the internet through that IP using an SSH tunnel, but I've forgotten what I changed, and I can't find my way back.
This is the terminal output:
deo@host:~$ sudo apt-get update
[sudo] password for deo:
Err http://cx.archive.ubuntu.com lucid Release.gpg [
Could not connect to 10.7.7.15:3128 (10.7.7.15). - connect (110: Connection timed out)
Err http://cx.archive.ubuntu.com/ubuntu/ lucid/main Translation-en_US
Unable to connect to 10.7.7.15:3128:
10.7.7.15 is an address on my local network. Can somebody please help me? :)
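For what it's worth, I suspect I configured a proxy somewhere and then forgot about it, so these are the places I'm planning to check (the usual suspects, as far as I know):

grep -ri proxy /etc/apt/          # apt-specific settings such as Acquire::http::Proxy
grep -i proxy /etc/environment    # system-wide http_proxy / https_proxy variables
env | grep -i proxy               # proxy variables in the current shell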

I recently encountered a problem that is giving me a headache and I need help ...
The system consists of two subsystems, A and B, each running in its own standalone Tomcat instance, currently on the same machine. A invokes B's services via Spring HttpInvoker (i.e., over HTTP); B likewise invokes an external system's services via HTTP.
Symptoms:
1. The system starts up and appears to work normally for around 10-15 days.
2. After running for that period, calls start failing with an exception:
org.springframework.remoting.RemoteAccessException: Could not access HTTP invoker remote service at [http://xxx.xxx.xxx.xxx/remoting/call];
the nested exception is
java.net.SocketException: Permission denied: connect
3. Once the exception appears, it occurs on every call, not just occasionally. (It looks like some resource is exhausted, but CPU usage is under 5%, memory under 15%, and network under 5%.)
4. When calls from A to B fail, B's HTTP calls to the external service also fail, with the same exception.
5. Restarting both Tomcat instances makes the whole system work properly again.
Repeating steps 1-5 over and over, I have not found the root cause.
Environment:
Windows Server 2008 R2
Tomcat 7.0.42 x86_64
Oracle JDK 1.7.0_40
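"Permission denied: connect" from java.net.SocketException on Windows makes me wonder about ephemeral port exhaustion or firewall/antivirus interference, so next time it happens I plan to capture something like this (standard Windows commands; purely diagnostic):

netstat -ano | find /c "TIME_WAIT"    (count of sockets stuck in TIME_WAIT)
netstat -ano > sockets.txt            (full socket list for later comparison)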
Any ideas?

I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was a minor issue with me fighting the ZFS driver and the kernel version, but that's worked out now. One pool came online and mounted fine; the other didn't. The main difference between the two pools is that the working one was just a plain pool of disks (video/music storage), while the other was a raidz set (documents, etc.).
I've already attempted exporting and re-importing the pool, to no avail; attempting to import gets me this:
root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
   pool: storage
     id: 15855792916570596778
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://zfsonlinux.org/msg/ZFS-8000-5E
 config:

        storage                                      UNAVAIL  insufficient replicas
          raidz1-0                                   UNAVAIL  insufficient replicas
            ata-SAMSUNG_HD103SJ_S246J90B134910       UNAVAIL
            ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523  UNAVAIL
            ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969  UNAVAIL
The symlinks for those in /dev/disk/by-id also exist:
root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
lrwxrwxrwx 1 root root 9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
lrwxrwxrwx 1 root root 9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
lrwxrwxrwx 1 root root 9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9
Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the three 1 TB drives that were in the raidz array).
I've run zdb -l on each drive, dumped the output to a file, and diffed the files. The only difference among the three drives is in the guid fields (which I assume is expected); the labels are otherwise basically identical, and look as follows:
    version: 5000
    name: 'storage'
    state: 0
    txg: 4
    pool_guid: 15855792916570596778
    hostname: 'kyou'
    top_guid: 1683909657511667860
    guid: 8815283814047599968
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 1683909657511667860
        nparity: 1
        metaslab_array: 33
        metaslab_shift: 34
        ashift: 9
        asize: 3000569954304
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 8815283814047599968
            path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 18036424618735999728
            path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 10307555127976192266
            path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
            whole_disk: 1
            create_txg: 4
    features_for_read:
Stupidly, I do not have a recent backup of this pool. However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double-check).
So, in summary:
I upgraded Ubuntu and lost access to one of my two zpools.
The difference between the pools: the one that came up was a plain JBOD set; the other was raidz.
All drives in the unmountable zpool are marked UNAVAIL, with no notes about corrupted data.
Both pools were created with disks referenced from /dev/disk/by-id/.
The symlinks from /dev/disk/by-id to the various /dev/sd devices appear to be correct.
zdb can read the labels from the drives.
I have already tried exporting and re-importing the pool, and the import still fails.
Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a usable array? Can I run zpool create zraid ... without losing my data? Or is my data simply gone?
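For what it's worth, the next thing I'm tempted to try is a read-only import, on the theory that it can't make things worse (this is my reading of the flags; please correct me if it is unsafe):

zpool import -o readonly=on -f -d /dev/disk/by-id/ storage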

I'm currently running V4.7 and I haven't touched any of the user or share settings, yet I'm periodically losing read/write permission on both my Windows 7 PC and my Android tablet connecting over wireless. Sometimes I can access my shares and see the folder directories, but when attempting to open a folder, Windows denies me access, saying I don't have the proper permission. This is after I have logged in with my main account, which has full read/write access to everything; the same happens on my Android device. It all started when I attempted to delete a large amount of files (8 GB) to make more room, and about halfway through I started getting permission errors.
What could be causing this?
Thanks

I have configured my Linux machines (running CentOS 5.2) to authenticate against a Windows server running Active Directory. I have even enabled winbind offline logon. Everything works as expected; however, I'm also looking to impose a TTL on the winbind authentication cache. So far, all I have found is the snippet below from the Samba documentation:
winbind cache time (G)
This parameter specifies the number of seconds the winbindd(8) daemon will cache user and group information before querying a Windows NT server again.
This does not apply to authentication requests; these are always evaluated in real time unless the winbind offline logon option has been enabled.
Default: winbind cache time = 300
Clearly, the winbind cache time parameter does not control the cache TTL for authentication requests.
Is there any other way I can implement a cache timeout for winbind authentication requests?
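For reference, the relevant part of my smb.conf currently looks roughly like this (values illustrative, not copied verbatim from the machine):

[global]
    security = ads
    winbind offline logon = yes
    winbind cache time = 300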
Thank you

Assume I have around 50 Cisco IE2000 switches connected together, and I want to reconfigure some settings, with the same settings on every switch.
Normally I would open a command-line session via PuTTY and paste the commands, but as the number of switches grows, even this method takes its time.
I am aware of Kiwi CatTools. Unfortunately, it's not free, so I'm wondering whether there are other efficient ways to configure a large number of Cisco switches.
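I could presumably script my current paste-into-PuTTY approach; something along these lines is what I have in mind, assuming SSH is enabled on the switches and a tool like sshpass is acceptable (a rough, untested sketch):

#!/bin/sh
# push the same command file to every switch listed in switches.txt (one IP per line)
while read ip; do
    sshpass -p "$SWITCH_PASSWORD" ssh -o StrictHostKeyChecking=no "admin@$ip" < commands.txt
done < switches.txt

Whether IOS reliably accepts commands piped over SSH this way is exactly the kind of thing I'd like confirmed.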

I currently have Postfix configured so that mail from all users is relayed by the local machine, with the exception of one user whose mail is relayed via Gmail. To that end, I've added the following configuration:
/etc/postfix/main.cf
# default options to allow relay via gmail
smtp_use_tls=yes
smtp_sasl_auth_enable = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
smtp_sasl_security_options = noanonymous
# map the relayhosts according to user
sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_maps
# keep a list of user and passwords
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
/etc/postfix/relayhost_maps
user-one@gmail.com [smtp.gmail.com]:587
/etc/postfix/sasl_passwd
[smtp.gmail.com]:587 user-one@gmail.com:user-one-pass-at-google
I know I can map multiple users to multiple passwords using smtp_sasl_password_maps, but that alone would mean all relaying goes through Gmail, whereas I specifically want everything relayed by the local host except for certain users.
Now I would like to have user-two@gmail.com (etc.) relay via Google with their own respective passwords. Is that possible?
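From my reading of the Postfix docs, smtp_sender_dependent_authentication makes the SASL password lookup happen per sender, which sounds like exactly this; if I understand it correctly, the config would become roughly the following (a sketch based on the documentation, not yet tested):

/etc/postfix/main.cf
smtp_sender_dependent_authentication = yes

/etc/postfix/relayhost_maps
user-one@gmail.com [smtp.gmail.com]:587
user-two@gmail.com [smtp.gmail.com]:587

/etc/postfix/sasl_passwd
user-one@gmail.com user-one@gmail.com:user-one-pass-at-google
user-two@gmail.com user-two@gmail.com:user-two-pass-at-google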

I bought a VPS host that gave me only one IP address, which I used for my first domain name, and it works without any problems.
Now, for my second domain name, I can't use the same IP address, as it points to the first domain name.
So I figured my only option was a GoDaddy-hosted iframe redirection, pointing at a subfolder of my first domain, which has worked so far.
Now I'm trying to load PayPal from <?php header() ?> and I get a permission error because of that iframe:
Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.
How do I avoid the iframe solution for my second domain without messing up my first domain?
Someone whose name I've forgotten once told me that it doesn't matter that you only have one IP address; you can still host multiple websites on it. How is that possible? DNS doesn't seem to work off ports, as far as I know. Yes, I could host multiple websites in different folders, but that's not what I'd call hosting a real website: each one needs to be reachable by its own domain name, so this iframe issue doesn't come up.
My server runs the httpd (Apache) that comes with the CentOS 6 (Linux) operating system.
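From what I've gathered so far, the feature I may be missing is name-based virtual hosting, where Apache chooses the site by the Host header rather than by IP. For the Apache 2.2 that ships with CentOS 6, I believe it would look roughly like this (domain names and paths invented for illustration):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName firstdomain.com
    DocumentRoot /var/www/firstdomain
</VirtualHost>

<VirtualHost *:80>
    ServerName seconddomain.com
    DocumentRoot /var/www/seconddomain
</VirtualHost>

Both domains' DNS A records would then point at the same IP. Is this the right direction?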

I am trying to set up mod_proxy_balancer to route requests to two JBoss 7 servers. For the time being I am testing this setup on my local machine, using the following config in httpd.conf:
ProxyRequests Off
<Proxy *>
Order deny,allow
Deny from all
</Proxy>
ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
<Proxy balancer://mycluster>
BalancerMember http://localhost:8080 route=node1
BalancerMember http://localhost:8081 route=node2
Order allow,deny
Allow from all
</Proxy>
and in the standalone.xml of each JBoss instance I have defined the jvmRoute system property:
<system-properties>
<property name="jvmRoute" value="node1"/>
</system-properties>
At http://localhost/myapp the application is accessible, but the Java session is not built up correctly; consequently, authentication does not work.
The funny thing is that everything works if I turn off one of the JBoss instances.
As I have already tried a couple of settings, I am thankful for any further suggestions.
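One thing I'm not sure about: I've seen hints that JBoss AS7 ignores a jvmRoute system property and instead wants the route configured as the instance-id of the web subsystem, roughly like this (unverified; attribute placement from memory):

<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" instance-id="node1" native="false">

If someone can confirm whether the system property or the instance-id is the right mechanism, that alone would help.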

I am using CruiseControl.NET for continuous integration. Access to the dashboard currently goes through the login plugin, which authenticates and authorizes users by checking them against a set of users saved as an XML file on the CruiseControl.NET server.
Now I need to bring Windows Authentication into the system, so that when the CruiseControl.NET web dashboard is accessed from a client machine (a local machine associated with a common server), the user is authenticated and then authorized to use CruiseControl.NET features based on the authority of the logged-in user.
Kindly guide me on how to proceed; I would appreciate any resources that would help in achieving this.
Thanks.

I'm trying to set up virtual hosts on Mac OS X. I've been modifying httpd.conf and restarting the server, but haven't had any luck getting it to work. Furthermore, I notice that it's not serving files from the DocumentRoot mentioned in httpd.conf (Libraries/WebServer/Documents) but from a different directory (/usr/local/apache2/htdocs), which I don't see mentioned anywhere in httpd.conf. PHP also works even though the "LoadModule php5_module" line is commented out. This makes me think it's using another .conf file. How can I figure out which config is actually being loaded?
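I believe the compiled-in defaults can be inspected with something like the following (from memory, so treat it as a pointer rather than gospel):

httpd -V | grep -i server_config_file    # compiled-in default config path
ps aux | grep httpd                      # check whether the running process was started with -f pointing elsewhere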
Update: I just deleted that httpd.conf, and Apache behaves the same after a restart, so it definitely wasn't using it!

I'm trying to send some data from my web browser to a txt file on another computer.
This works fine:
echo 'Done' | nc -l -k -p 8080 | grep "GET" >> request_data.txt
Now I want to do some further processing before writing the HTTP request data to my txt file (involving regex manipulation). But if I try something like the following, nothing is written to the file:
echo 'Done' | nc -l -k -p 8080 | grep "GET" | grep "HTTP" >> request_data.txt
(for simplicity of explanation I've used another grep instead of, say, awk)
Why does the second grep not get any data from the output of the first grep? I'm guessing piping with netcat works differently from what I've assumed so far. How do I perform a second grep before writing to my txt file?
My debugging so far suggests:
It has nothing to do with stderr vs. stdout.
Parentheses don't help.
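My current suspicion is output buffering: when grep writes to a pipe rather than a terminal, it block-buffers its output, so the second grep may simply never receive a full buffer's worth of data from a trickle of requests. If that's right, GNU grep's --line-buffered flag on the first grep should force a flush per line (untested in this exact pipeline):

echo 'Done' | nc -l -k -p 8080 | grep --line-buffered "GET" | grep "HTTP" >> request_data.txt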

Is there a way to force Microsoft Word 2010 to keep the last line of a paragraph with the next paragraph?
An example of when this is relevant is when starting a block quote; it doesn't look good to have the block quote start at the top of a new page, particularly when it's introduced by a partial sentence, like this:
"Lorem ipsum" is sample text widely used in the publishing industry, as the
text has spacing roughly similar to that of English and therefore looks
"normal" but unintelligible to an English reader's eye, allowing the reader
to focus on design elements. It begins,
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nam
rhoncus laoreet risus, quis congue leo viverra congue.
Suspendisse magna massa, viverra imperdiet est eu, ultrices
volutpat lectus. Sed pulvinar est id risus lobortis venenatis.
There shouldn't be a page break after "begins," because it looks like the sentence ends abruptly.
"Keep lines together" won't work, because by definition we're talking about two paragraphs. "Keep with next" won't work if the first paragraph is larger than a couple of lines, because then you get an awkwardly large space at the bottom of a page. Manual line breaks obviously work, but only when the document is final, which is often less certain than it seems.
I know WordPerfect has a feature called "block protect" that does this, but I have not found even an acceptable substitute in Word. I have played with style separators and hidden paragraph breaks, but to no avail.
I would love a special character, kind of like the nonbreaking space or zero width optional space, that tells Word to move to the next page if the next paragraph would otherwise start the page. A macro would also be great, but I haven't been able to find a starting point (like how to detect where non-manual page breaks fall).
Edit: It looks like "Keep with next" works this way in Word 2013, but I specifically need a fix that works in Word 2010.

I'd like to know whether the following will work.
I have my domain, and I'm serving a web page to the internet with nginx. But if I type my domain on my laptop inside the LAN, I get my modem/router's configuration page; I cannot reach the web server unless I type its IP address. I would like to add a BIND server behind the modem/router (with ports 80 and 5060 forwarded to it). If a request is for www.mydomain.com, BIND should resolve to the nginx IP address so the page is served; if it is a VoIP request, it should go to the VoIP server. I'd also like to reach the website from inside the LAN by typing mydomain.com. Could I do this with this configuration? Do I need anything else?
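What I think I'm describing is split-horizon DNS: an internal BIND that answers with LAN addresses while the outside world keeps resolving the public IP. The internal zone might look roughly like this (addresses invented for illustration):

$TTL 3600
@    IN SOA ns1.mydomain.com. admin.mydomain.com. ( 1 7200 3600 1209600 3600 )
     IN NS  ns1.mydomain.com.
ns1  IN A   192.168.1.2     ; the BIND box itself
@    IN A   192.168.1.10    ; nginx server, LAN address
www  IN A   192.168.1.10
sip  IN A   192.168.1.11    ; VoIP server, LAN address

LAN clients would then need to use this BIND server as their resolver (e.g. handed out via DHCP).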
Thanks in advance!

I have a Motorola SBG6580, which is a modem and a wireless router in one. However, the wireless router part was bad, so I disabled it and got a separate wireless router. I can now get into the configuration pages of both the modem and the router, and I'm confused as to which device needs to be configured for port forwarding. I have a Raspberry Pi that I want to set up as a web server.
Do I configure the router, the modem, or both?
Right now, the SBG6580's first LAN port is connected to the wireless router's WAN port, and the internet is working well. Note that the SBG6580 has only four ports, and I'm assuming they're LAN ports, as they are not labeled.

I used to be able to search for and open items from my Start menu, but recently items have gone "missing".
Example: I used to be able to hit the Start menu, type "iis", and the top item would be IIS Manager, which I could then open and run. Now I get a list of items that are not IIS.
The same is true when typing "servi": previously I would get Services (i.e. open local Services); now it isn't shown.
I've checked the properties under Customize Start Menu for "Search other files and libraries", and it is set to "Search without public folders". Is there something else going on? It seems like something has changed, but I can't determine what, or how to revert to the previous behaviour.

Lately, after six months of use, my AMD FX-8350 CPU has been running at high temperatures, with loud noise coming from the CPU fan (I set the fan curve that way to keep the CPU cooler).
I've decided to replace the stock fan with a water-cooling system to keep my CPU quiet and cool, and to add one or two more case fans.
Here is my case's airflow diagram: http://www.coolermaster.com/microsite/silencio_650/Airflow.html
My configuration now is:
2x 120 mm intake, front (stock with case)
1x 120 mm exhaust, rear (stock with case)
1x stock CPU fan
I'm planning to buy a Corsair Hydro Series H100i (www.corsair.com/en-us/hydro-series-h100i-extreme-performance-liquid-cpu-cooler), place the radiator in the front of my case (intake), and add a 120 mm bottom intake and/or a 140 mm top exhaust fan.
My CPU socket lies near the top of the motherboard.
Is it good practice to have a water-cooling radiator that takes air in?
As you can see, the front of the case is made of aluminum. Can fresh air get in through it?
Does the radiator even fit?
If not, is it wiser to get a Corsair Hydro Series H80i (www.corsair.com/en-us/hydro-series-h80i-high-performance-liquid-cpu-cooler), place the radiator on top of my case (exhaust), keep the front 2x 120 mm stock fans, and add one more as an intake at the bottom?
If you have any other idea let me know.
Thank you.
EDIT: The CPU fan runs at ~3000 RPM and the temperature is around 40-43 °C at idle on the power-save profile.
When the temperature goes over 55 °C while running multiple programs and servers on localhost (Tomcat, WAMP), the fan runs at around 5500 RPM and is loud!
I'm running Windows 8.1.
The CPU is not overclocked.
PS: Due to my reputation I couldn't post all the links that were necessary. I will edit ASAP.

How can I permanently add virtual wireless interfaces to my network configuration with iw?
I created the following interfaces:
iw phy phy0 interface add vwlan0 type station
iw phy phy0 interface add vwlan1 type __ap
The first is configured as a Wi-Fi client connecting to an existing network (wpa_supplicant).
The second is configured as a wireless hotspot (hostapd + dnsmasq).
The setup works, but now I can't quite figure out the best strategy for saving this configuration permanently:
I have made an init script for wpa_supplicant.
I have made an init script for the hotspot.
The virtual adapters' network settings are set in /etc/network/interfaces.
But all of this depends on the wireless interfaces having been created first. What would be the best way to make sure these interfaces are created before the network is brought up and the services are started? One idea I had is sketched below.
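What I'm currently considering is creating the interfaces from /etc/network/interfaces itself via pre-up hooks, roughly like this (a sketch; I haven't verified the ordering is reliable, and the hotspot addressing is invented):

allow-hotplug vwlan0
iface vwlan0 inet dhcp
    pre-up iw phy phy0 interface add vwlan0 type station
    post-down iw dev vwlan0 del

allow-hotplug vwlan1
iface vwlan1 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    pre-up iw phy phy0 interface add vwlan1 type __ap
    post-down iw dev vwlan1 del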
As a bonus, since this wireless adapter is a USB device, would it be possible to have the interfaces created (and the services started) when the adapter is hotplugged? I know you can execute code after a network interface comes up, but the physical wlan0 interface that is hotplugged should never be brought up itself.
The operating system is Raspbian.