tag:blogger.com,1999:blog-41626077042133678922017-07-11T06:59:29.102-04:00User Tolerant LivewareGrumbling about computers.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.comBlogger122125tag:blogger.com,1999:blog-4162607704213367892.post-18949282637415961302017-05-10T15:58:00.002-04:002017-05-10T15:58:42.916-04:00vmware-vdiskmanager and CentOS 6<p>This is how you install vmware-vdiskmanager on CentOS 6. I needed to do this so I could read my old vmware-server vmdk.</p> <p>First go to <a href="https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1023856">this old KB article</a> and download 1023856-vdiskmanager-linux.7.0.1.zip. It's at the bottom, in the Attachments section. Now you do the following little dance:<pre>unzip 1023856-vdiskmanager-linux.7.0.1.zip<br />cp 1023856-vmware-vdiskmanager-linux.7.0.1 /usr/local/sbin/vmware-vdiskmanager<br />chmod +x /usr/local/sbin/vmware-vdiskmanager<br />yum -y install zlib.i686 glibc.i686 openssl098e.i686<br />mkdir -pv /usr/lib/vmware/lib<br />cd /usr/lib/vmware/lib/<br />ln -s /usr/lib/libcrypto.so.0.9.8e<br />ln -s libcrypto.so.0.9.8e libcrypto.so.0.9.8<br />ln -s libcrypto.so.0.9.8e libcrypto.so.0<br />ln -s libcrypto.so.0.9.8e libcrypto.so<br />ln -s /usr/lib/libssl.so.0.9.8e<br />ln -s libssl.so.0.9.8e libssl.so.0.9.8<br />ln -s libssl.so.0.9.8e libssl.so.0<br />ln -s libssl.so.0.9.8e libssl.so<br /></pre> <p>That fucking around in /usr/lib/vmware/lib is because even though VMware claims this is a static binary, it in fact dynamically loads crypto libraries at run time from non-standard places.</p> <p>You can now convert your split vmdk to a single file and mount it:<pre>vmware-vdiskmanager -r sda.vmdk -t 0 sda-single.vmdk<br />modprobe nbd max_part=8<br />qemu-nbd -r --connect=/dev/nbd0 sda-single.vmdk<br />kpartx -a /dev/nbd0<br />vgscan<br />vgchange -a y YOURVG<br />mount -o ro /dev/mapper/YOURVG-YOURLV /mnt</pre> <p>Aren't you glad you created a 
unique VG for each of your VMs? <p>To unmount:<pre>umount /mnt<br />vgchange -a n YOURVG<br />kpartx -d /dev/nbd0<br />qemu-nbd -d /dev/nbd0</pre>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com1tag:blogger.com,1999:blog-4162607704213367892.post-70077514395778174622017-05-01T16:38:00.000-04:002017-05-01T16:38:53.798-04:00daemontools, system V init and mysql<p>This is how you set up a babysitter for a service started with system V init scripts using DJB's daemontools. We can't just put <code>service $service start</code> into a run file, because sys V init scripts start up background daemons. We have to use the daemon's PID file to watch what's going on. <p>First, I create mysql-babysit. I'm using mysql as an example. For other services, adjust <code>$service</code> and <code>$pidfile</code>.<pre># <span class="code">mkdir /var/daemontools/supervised/mysql-babysit</span><br /># <span class="code">cd /var/daemontools/supervised/mysql-babysit</span><br /># <span class="code">cat <<'SH' > mysql-babysit</span><br />#!/bin/bash<br /><br />service=mysql<br /><br />datadir=/var/lib/mysql<br />pidfile=$datadir/$(hostname).pid<br /><br /><br />##################<br />sleepPID=<br />function sig_finish () {<br /> echo $(date) $service "$1"<br /> service $service stop<br /> [[ $sleepPID ]] && kill $sleepPID<br />}<br />trap 'sig_finish TERM' TERM<br />trap 'sig_finish KILL' KILL<br /><br /><br />##################<br />echo $(date) $service start<br /><br />service $service start<br /><br />if [[ -f $pidfile ]] ; then<br /> pid=$(< $pidfile)<br /> if [[ $pid ]] ; then<br /> while grep -q $service /proc/$pid/cmdline 2>/dev/null ; do<br /> sleep 60 & sleepPID=$!<br /> wait $sleepPID<br /> done<br /> echo $(date) $service exited<br /> exit 0<br /> fi<br />fi<br />echo $(date) $service failed to start<br />sleep 5<br />exit 3<br /><span class="code">SH<br /># chmod +x mysql-babysit</span></pre> <p>Next we create and activate the run script:<pre><span class="code">cd 
/var/daemontools/supervised/mysql-babysit<br /># cat <<'SH' >run</span><br />#!/bin/bash<br /><br />exec /var/daemontools/supervised/mysql-babysit/mysql-babysit<br /><span class="code">SH<br /># chmod +x run<br /># chkconfig mysql off<br /># service mysql stop<br /># cd ../../service<br /># ln -s ../supervised/mysql-babysit<br /></span><br /></pre> <p>We can control mysql with <pre>svc -d /var/daemontools/supervised/mysql-babysit # shutdown mysql<br />svc -u /var/daemontools/supervised/mysql-babysit # startup mysql<br />killall mysqld # restart mysql (the babysitter restarts it)<br /></pre>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-83427352688931899552017-04-27T16:12:00.001-04:002017-05-01T16:39:11.499-04:00Creating glue records with bind 9<p>It's not pretty. Basically, you have to create a zone with the exact name of your name servers. Even if one of those name servers is controlled by your ISP. Even if you already have an A record for your local NS.</p> <p>In the following examples, <b>ns1.example.com</b> is your primary name server, <b>sdns1.isp.com</b> is the secondary name server your ISP is letting you use. <p>Add the following to /etc/named.conf: <pre class="code">zone "ns1.<b>example.com</b>" {<br /> type master;<br /> file "master/ns1.<b>example.com</b>.zone";<br />};<br /><br />zone "<b>sdns1.isp.com</b>" {<br /> type master;<br /> file "master/<b>sdns1.isp.com</b>.zone";<br />};<br /></pre> <p>This is master/ns1.<b>example.com</b>.zone: <pre class="code">$TTL 300<br />@ IN SOA ns1.<b>example.com</b>. root.<b>example.com</b>. (<br /> 2017042702 ; yyyymmdd##<br /> 2h ; Refresh<br /> 1h ; Retry<br /> 2W ; Expire<br /> 1h ; Minimum<br /> )<br /> IN NS ns1.<b>example.com</b>.<br /> IN NS <b>sdns1.isp.com</b>.<br /><br />@ IN A 1.2.3.4 ; change this to the real IP<br /></pre> <p>This is master/<b>sdns1.isp.com</b>.zone: <pre class="code">$TTL 300<br />@ IN SOA ns1.example.com. root.example.com. 
(<br /> 2017042702 ; yyyymmdd##<br /> 2h ; Refresh<br /> 1h ; Retry<br /> 2W ; Expire<br /> 1h ; Minimum<br /> )<br /> IN NS <b>ns1.isp.com</b>.<br /> IN NS <b>ns2.isp.com</b>.<br /><br />@ IN A 4.3.2.1 ; change this to the real IP of sdns1.isp.com<br /></pre> <p>Get the real IP of sdns1.isp.com with <pre>host <b>sdns1.isp.com</b><br />sdns1.isp.com has address <b>66.51.199.62</b><br /></pre><p>You can find the NS records for sdns1.isp.com with <pre><span class="code"># dig NS <b>isp.com</b></span><br />; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.el6_9.1 <<>> NS isp.com<br />;; global options: +cmd<br />;; Got answer:<br />;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30596<br />;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2<br /><br />;; QUESTION SECTION:<br />;isp.com. IN NS<br /><br />;; ANSWER SECTION:<br />isp.com. 7200 IN NS <b>ns2.isp.com</b>.<br />isp.com. 7200 IN NS <b>ns1.isp.com</b>.<br /><br />;; ADDITIONAL SECTION:<br />ns2.isp.com. 172799 IN A 66.51.206.98<br />ns1.isp.com. 172799 IN A 66.51.202.50<br /><br />;; Query time: 210 msec<br />;; SERVER: 10.0.0.2#53(10.0.0.2)<br />;; WHEN: Thu Apr 27 16:10:12 2017<br />;; MSG SIZE rcvd: 93<br /></pre>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-45131727290099103302017-03-22T16:27:00.002-04:002017-04-18T20:52:19.842-04:00Wacom Intuos Photo and CentOS 6<p>I've had a Wacom Intuos 5x4 since 1998 or so. But support for the serial protocol it used has disappeared. What's more, my tablet was getting really crusty over nearly 20 years of use. So I went and bought a Wacom Intuos Photo, which has a smaller, wider surface (which is useful given I have 2 screens) but can also use a finger instead of a pen. <p>Of course, the new tablet didn't work out of the box. Well, it nearly worked. <p>Back in the day, I patched wacom_drv for XFree86 to get it working. Things have changed greatly since then. 
On modern Linux, the driver is in the kernel (wacom.ko) which creates an input and event device. X.org then uses HAL to enumerate input devices and HAL also provides hints on how to configure them. The long and short of it is that we no longer need to mess around in xorg.conf when we change hardware. However, it means it gets very hard to debug when one of those layers does something annoying. <p>The easy way to get a Wacom Intuos Photo, Draw or Art to work on CentOS 6 is to install the <a href="http://linuxwacom.sourceforge.net/wiki/index.php/Input-wacom">backports of linuxwacom drivers</a>. If you are running the stock 2.6.32 kernel, everything will Just Work. <p>I'm using <a href="http://elrepo.org/tiki/kernel-ml">elrepo's 4.10 kernel-ml</a>. This gets lm-sensors working for my motherboard and removes an annoying bug with my PS/2 keyboard. <p>In the 4.10 kernel, the wacom driver is recognizing the tablet and doing its job: creating 3 input event IDs, one for the pad, one for the stylus and one for finger touch. However, lshal is rejecting the finger touch. I traced it down to HAL_PROP_BUTTON_TYPE not being set when hald-probe-input is called. This means the stylus automatically works on X.org, but touch doesn't. <p>To get finger touch to work on X.org, I had to force things. First, I needed to create a symlink to the finger event ID using udev, then a partial config file for X.org: <p><b>/usr/local/lib/udev/wacom-type.sh</b> will output a short name for each device it is called on. Make sure this script is executable! 
<pre>#!/bin/bash<br /><br />name=$(cat /sys/$DEVPATH/device/name)<br /># echo "$DEVPATH=$name" >>/tmp/wacom-dev.txt<br /><br />shopt -s nocasematch<br /><br />if [[ $name =~ Finger ]] ; then<br /> echo finger<br />elif [[ $name =~ Pen ]] ; then<br /> echo pen<br />elif [[ $name =~ Pad ]] ; then<br /> echo pad<br />else<br /> echo unknown<br />fi<br /><br />exit 0</pre> <p><b>/etc/udev/rules.d/99-wacom.rules</b> convinces udevd to call the above when the tablet is detected. It also convinces udevd to create a symlink in /dev/input/wacom-finger. Note that I restrict to 056a:033c, which is a Wacom Intuos Draw/Photo/Art small version. You can find the USB ID of your tablet with <b>lsusb</b>. <pre>#<br /># Will create /dev/input/wacom-finger, I hope<br />#<br />ACTION!="add|change", GOTO="my_wacom_end"<br />KERNEL!="event*", GOTO="my_wacom_end"<br /><br />ENV{ID_VENDOR_ID}!="056a", GOTO="my_wacom_end"<br />ENV{ID_MODEL_ID}=="033c", PROGRAM=="/usr/local/lib/udev/wacom-type.sh", SYMLINK+="input/wacom-%c"<br /><br />LABEL="my_wacom_end"<br /></pre> <p>Test the above by doing <b>udevadm control --reload-rules</b>, unplug tablet, wait, plug in tablet, then <b>ls -l /dev/input</b> and you should see: <pre>lrwxrwxrwx 1 root root 7 Mar 22 15:50 wacom-finger -> event11<br />lrwxrwxrwx 1 root root 7 Mar 22 15:50 wacom-pad -> event12<br />lrwxrwxrwx 1 root root 7 Mar 22 15:50 wacom-pen -> event10</pre><p>The numbers after <b>event</b> will change each time you reboot or replug the tablet. <p><b>/etc/X11/xorg.conf.d/wacom.conf</b> will finally convince X.org to use the Wacom finger event id as a touch pad. 
<pre>Section "InputDevice"<br /> Identifier "Finger"<br /> Driver "wacom"<br /> Option "Vendor" "Wacom"<br /> Option "AutoServerLayout" "on"<br /> Option "Type" "touch"<br /> Option "Device" "/dev/input/wacom-finger"<br /> Option "Mode" "Absolute"<br /> Option "Touch" "on"<br /> Option "Gesture" "off"<br /># Option "Tilt" "on"<br /> Option "Threshold" "20"<br /> Option "Suppress" "6"<br /> Option "USB" "On"<br />EndSection<br /></pre> <p>I'd very much like to thank whot and jigpu, who spent an impressive amount of time helping me over IRC. <p>24 hours later, I have <b>found a problem</b> with the approach. If you unplug and replug the tablet, the Finger event ID will change. And while the wacom-finger symlink will be updated, X.org will not know that it's changed and will hold onto the old event ID. This means finger touch will no longer work after replugging the tablet, at least until you restart X.org.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-12864078975188847422017-03-20T16:26:00.000-04:002017-03-20T16:26:54.560-04:00Linux consoleSo I've finally upgraded Corey. By "upgrade", I mean "replaced every last component except the PSU." So basically it's a replacement. <p>This means I'm now running CentOS 6 on my desktop ("So soon!?" shut up). It also means I have to fix all the little annoying things about CentOS 6. One of which is that modern kernels use a framebuffer, which switches the console to illegibly small text. The solution is to put <b>video=640x480</b> or <b>video=800x600</b> on your kernel command line. Ideally in grub.conf.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-22114722693497017922016-06-22T15:33:00.000-04:002017-05-08T17:05:11.535-04:00Minimal Perl<p>Setting up Perl on CentOS 6. 
I'm putting this here so that I can find it easily.</p><pre>yum install perl perl-CPAN<br />cpan<br /># make everything automatic<br /><span class="type">o conf prerequisites_policy follow<br />o conf build_requires_install_policy yes<br />o conf commit<br />q</span><br />cpan local::lib<br />cpan Bundle::CPAN # keep an eye on this because Readline wants you to hit enter<br />cpan App::cpanminus<br /></pre> <p>Now we can install Imager (say)</p><pre>sudo yum install giflib-devel libjpeg-devel libpng-devel libtiff-devel freetype-devel t1lib-devel<br />cpanm Imager</pre>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-61137188531933559542016-06-16T18:44:00.001-04:002016-06-22T15:34:55.851-04:00sqlite3 vs firefox<p>Firefox keeps lots of useful data in sqlite files in the user's profile. Under Linux, you will find these files in <tt>~/.mozilla/firefox/PROFILE_DIR/</tt>. To find PROFILE_DIR, you look in <tt>~/.mozilla/firefox/profiles.ini</tt>. Of interest to me is <a href="http://kb.mozillazine.org/Places.sqlite">places.sqlite</a>, which contains info on all sites visited. To pull a list out, simply do</p><pre> sqlite3 -csv ~/.mozilla/firefox/ucjmuboi.default/places.sqlite \<br /> 'SELECT url,title,visit_count,visit_date/1000000 FROM moz_historyvisits JOIN moz_places ON place_id = moz_places.id'<br /></pre><p>The visit_date is bizarrely in microseconds, so divide by 1,000,000 to get epoch seconds. <p>Of course, the above doesn't work on CentOS 5 or 6. You will get an <b>Error: file is encrypted or is not a database</b> error. CentOS 5 ships with sqlite 3.3.6, CentOS 6 ships with 3.6.20. Firefox uses 3.7 and creates a file that isn't backward compatible. 
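If you want the visit dates human-readable without doing the arithmetic in your head, the same division works in a scrap of shell. This is just a sketch; the sample timestamp is made up and the function name is mine:

```shell
#!/bin/bash
# Turn a Firefox visit_date (microseconds since the epoch) into a
# readable UTC date. The sample value below is hypothetical.
usec_to_date () {
    # Integer-divide by 1,000,000 to get epoch seconds,
    # then let GNU date render it.
    date -u -d "@$(( $1 / 1000000 ))" '+%Y-%m-%d %H:%M:%S'
}

usec_to_date 1466620380000000   # a visit_date as stored in places.sqlite
```

If you'd rather keep it in the query, sqlite3 can do the same conversion inline with `datetime(visit_date/1000000,'unixepoch')`.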
Here is how to install a compatible version: <pre>cd ~/work<br />wget <a href="http://www.sqlite.org/2016/sqlite-autoconf-3130000.tar.gz">http://www.sqlite.org/2016/sqlite-autoconf-3130000.tar.gz</a><br />tar zxvf sqlite-autoconf-3130000.tar.gz<br />cd sqlite-autoconf-3130000<br />./configure --prefix=/opt/sqlite-3130000<br />make all<br />sudo make install<br />sudo ln -s sqlite-3130000 /opt/sqlite<br />sudo bash -c "echo /opt/sqlite/lib > /etc/ld.so.conf.d/sqlite.conf"<br />sudo ldconfig<br /></pre><p>This will install 3.13.0. Make sure to check the <a href="http://www.sqlite.org/download.html">download page</a> for the latest version.</p><p>It should be pointed out that by putting /opt/sqlite/lib into ld.so.conf.d, we are overriding the default system .so. I don't know if this will break anything. I do know that it means that DBD::SQLite and /usr/bin/sqlite3 now use the new .so and this is what I want.</p>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-35586446359839360942016-03-23T13:09:00.001-04:002016-03-23T13:11:31.961-04:00Someone broke the build<p>One can no longer cleanly do <tt>cpan Bundle::CPAN</tt> on a fresh install of CentOS 6. Some dependencies don't install properly. 
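A quick way to see which prerequisites actually made it in is to try loading each one; `perl -MFoo -e1` exits non-zero when Foo can't be found or compiled. A sketch, with an illustrative module list:

```shell
#!/bin/bash
# Report which Perl modules are loadable on this machine.
check_modules () {
    local m
    for m in "$@"; do
        if perl -M"$m" -e1 2>/dev/null; then
            echo "ok $m"
        else
            echo "MISSING $m"
        fi
    done
}

check_modules CPAN::Meta::YAML Test::YAML Compress::Raw::Zlib
```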
I had to do the following:</p><pre>cpan CPAN::Meta::YAML Parse::CPAN::Meta<br />cpan Test::YAML<br />cpan Compress::Raw::Zlib<br />cpan Spiffy Test::Base<br />cpan Module::Metadata CPAN::Meta Perl::OSType version<br />cpan Compress::Raw::Bzip2<br />cpan Sub::Identify<br />cpan SUPER<br />cpan Test::MockModule<br />cpan Bundle::CPAN</pre><p>At least I didn't have to go into /root/.cpan and install things by hand.</p>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-55593474400840295932016-02-29T17:44:00.001-05:002016-02-29T17:44:20.370-05:00Fraud alert<p>If Jonathan Night calls you, leaving a blurry message in an Indian accent claiming you have unethical or illegal activity on your tax return and need to phone him? Yeah, that's fraud.</p><p>A simple <a href="http://lmgtfy.com/?q=613-699-4491">Google search</a> of the phone number will reveal this.</p>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-65036037037548708942016-02-22T15:49:00.000-05:002016-02-22T15:49:36.069-05:00SELinux vs SphinxSE<p>It should be noted that SphinxSE wants to talk to searchd on port 9312. SELinux will prevent this. To enable it:</p><pre>semanage port -a -t mysqld_port_t -p tcp 9312</pre>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-45651704842697990922016-02-16T14:50:00.002-05:002016-02-22T15:49:16.131-05:00SELinux vs mysql<p>I'm a strange kind of fool. I maintain my own mysql packages, which makes installing them annoying because everything wants to pull in mysql-libs from the mainline.</p> <p>I also sometimes want to install mysql in /home/mysql, not /var/lib/mysql as is standard on CentOS. SELinux is set up to prevent just this sort of thing. 
The short version is that everything in /home has the home_root_t security context, which mysqld and mysqld_safe aren't allowed to interact with.</p> <p>The solution is the following:</p><pre># first we are setting up the directory<br /><span class="code">mkdir -p /home/mysql/{InnoDB,etc,log,data,tmp,bin,sbin}<br />mv /etc/my.cnf /home/mysql/etc<br />ln -s /home/mysql/etc/my.cnf /etc<br />for n in /usr/bin/my* ; do ln -s $n /home/mysql/bin ; done<br />for n in /usr/sbin/my* ; do ln -s $n /home/mysql/sbin ; done<br />chmod 1777 /home/mysql/tmp<br />chown mysql:mysql -R /home/mysql<br />joe /home/mysql/etc/my.cnf</span> # change datadir<br /><span class="code">joe /etc/init.d/mysql</span> # change datadir and basedir<br /><br /># now comes the part where we fight with SELinux<br /><span class="code">semanage fcontext -a -t mysqld_db_t "/home/mysql(/.*)?"<br />semanage fcontext -a -t etc_t "/home/mysql/etc(/.*)?"<br />semanage fcontext -a -t bin_t "/home/mysql/bin(/.*)?"<br />semanage fcontext -a -t bin_t "/home/mysql/sbin(/.*)?"<br />semanage fcontext -a -t mysqld_tmp_t "/home/mysql/tmp(/.*)?"<br />semanage fcontext -a -t mysqld_safe_exec_t "/home/mysql/bin/mysqld_safe"<br />restorecon -R -v /home/mysql<br />service mysql start</span><br /></pre><p>But it's still failing, because /home/mysql/bin/mysqld_safe is a symlink. To fix this, I did</p><pre><span class="code">grep mysqld /var/log/audit/audit.log | audit2allow -M "mysqlhome"<br />semodule -i mysqlhome.pp<br />service mysql start</span><br /></pre> <p>Yay! Now it works.</p>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-29843894726755356532016-02-12T15:30:00.001-05:002016-02-22T15:50:38.659-05:00NT_STATUS_ACCESS_DENIED<p>So I'm setting up SAMBA on a new machine. I can connect correctly but dir listings are failing. 
The problem is SELinux, because I tried <code>setenforce 0</code> and it worked.</p><p>So I ask on IRC and find out I need to do the following:</p><pre><span class="code">semodule -BD</span> # turn off ignored AVCs<br /># redo the directory listing in another window<br /><span class="code">semodule -B</span> # turn AVC ignoring back on<br /><span class="code">grep smb /var/log/audit/audit.log | audit2allow</span> # parse those AVCs<br />#============= smbd_t ==============<br /><br />#!!!! This avc can be allowed using one of the these booleans:<br /># samba_export_all_ro, samba_enable_home_dirs, samba_export_all_rw<br />allow smbd_t user_home_t:dir read;<br /><span class="code">setsebool -PV samba_enable_home_dirs 1</span></pre>Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-46327452590932635702015-06-18T20:04:00.001-04:002015-06-19T04:15:51.655-04:00Still more fun with MySQL<p>Are you prepared to go mad? If so, compare these two statements and their results: <pre>mysql> SELECT warehouse.NUM,warehouse.date FROM warehouse JOIN sphinx ON sphinx.id = warehouse.DID WHERE sphinx.query = 'filter=tid,288215463; index=Y2015,YD2015; limit=500; maxmatches=2000; mode=all; offset=0; query=dominique; sort=extended:date desc, sNUM desc' ORDER BY warehouse.date DESC,warehouse.NUM DESC LIMIT 50;<br />Empty set (0.00 sec)<br /><br />mysql> SELECT warehouse.NUM,warehouse.date FROM warehouse JOIN sphinx ON sphinx.id = warehouse.DID WHERE sphinx.query = 'filter=tid,288215463; index=Y2015,YD2015; limit=500; maxmatches=2000; mode=all; offset=0; query=dominique; sort=extended:date desc, sNUM desc';<br />+---------+------------+<br />| NUM | date |<br />+---------+------------+<br />| AT00105 | 2015-06-17 |<br />+---------+------------+<br />1 row in set (0.00 sec)<br /></pre> <p>What's going on is that MySQL is asking searchd (part of Sphinx) to do a full text search on 2 indexes. It then does a join on the results. 
With the ordering, I get zero results. Without the ordering, I get the expected results. <p><b>This shouldn't be happening</b>. This can't be happening. <p>But then I found the answer: The first query in the example above was a cut and paste from the query log on my dev VM. This means that MySQL had already run that query and (more importantly) cached the results. The Sphinx indexes had been updated in the meantime. But searchd can't tell MySQL to invalidate the query cache. <pre>mysql> SELECT SQL_NO_CACHE warehouse.NUM,warehouse.date FROM warehouse JOIN sphinx ON sphinx.id = warehouse.DID WHERE sphinx.query = 'filter=tid,288215463; index=Y2015,YD2015; limit=500; maxmatches=2000; mode=all; offset=0; query=dominique; sort=extended:date desc, sNUM desc' ORDER BY warehouse.date DESC,warehouse.NUM DESC LIMIT 50;<br />+---------+------------+<br />| NUM | date |<br />+---------+------------+<br />| AT00105 | 2015-06-17 |<br />+---------+------------+<br />1 row in set (0.00 sec)<br /></pre><p>Sanity is restored. <p>The long and short of this is to ALWAYS use SQL_NO_CACHE when using the Sphinx plugin.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-60459990263137399872014-08-05T23:44:00.000-04:002015-11-27T14:57:24.776-05:00Do not try this at home<p>What you are about to see is bad and wrong. There are many better and easier ways to do this. I'm documenting it here so you can see how hoary it is. <p>Say you have a system set up with RAID1 on / and /boot, 2 active, 1 spare disks. This system can function even if reduced to a single active disk. Surely one can clone the system just by rebuilding the arrays with new disks? <p>Short answer is "Yes". <p>The longer answer is "No, don't do that. Use <a href="http://clonezilla.org/">Clonezilla</a>." <p>The reason you shouldn't do it this way is that Linux RAID (aka mdadm aka dm) uses UUIDs to identify arrays. 
It also has a field that contains the hostname the array was created on. What's more, LVM has volume group names. These need to be unique if the original and clone arrays are ever going to appear on the same system. Even if you are sure they never will, some admin software you try out some day will put them together. <p>But can't you change the UUIDs, VGs, hostname and so on? Yes you can. I just did it for a client. It was more pain than it was worth. <p>The following walk-through assumes my normal setup: the first partition of each disk is part of a RAID1, 3 active, no spares, that goes on /boot (called md1 or md126). The second partition of each disk is part of a RAID1, 2 active, no spares, that goes on / (called md0 or md127). <p><b>DO NOT TRY THIS AT HOME</b>. I am a professional sysadmin with years of experience fucking up working systems. The following walk-through is provided without any warranty as to applicability or suitability to any sane or useful or safe task. Back up your data. Verify your backup. RAID is not a backup. YMMV. HTH. HAND. <ol><li>Make sure the arrays are fully in sync.</li><li>Do a clean shutdown.</li><li>Make sure you can boot from the first and second drives of the array.</li><li>Remove the first active and the spare drive from the computer. Label them well and set them aside.</li><li>Disconnect the second drive. This will be the first drive of the new clone.</li><li>Boot from a LiveDVD or USB or something. You will need a distro that has mdadm, uuidgen, lvm. I used the CentOS 6.5 LiveDVD.</li><li><pre>telinit 1 # single user mode<br />pstree # make sure nothing unwanted is running<br />killall dhclient # kill everything unwanted. You will need udevd</pre></li><li>Plug the old-second-new-first drive in and wait for things to settle.</li><li><code>cat /proc/mdstat</code> Make sure your arrays are inactive. 
They will have (S) to mean they need to sync with something.</li><li>This is the hairy bit:<pre># get /boot working<br />mdadm --stop /dev/md126<br />mdadm --assemble --update=uuid --uuid=$(uuidgen) /dev/md126 /dev/sda1<br />mdadm --stop /dev/md126<br />mdadm --assemble --update=name --name=$(hostname):1 /dev/md126 /dev/sda1<br />mdadm --stop /dev/md126<br />mdadm --assemble /dev/md126 /dev/sda1 --run<br />tune2fs -U $(uuidgen) /dev/md126<br /># get / working<br />mdadm --stop /dev/md127<br />mdadm --assemble --update=uuid --uuid=$(uuidgen) /dev/md127 /dev/sda2<br />mdadm --stop /dev/md127<br />mdadm --assemble --update=name --name=$(hostname):0 /dev/md127 /dev/sda2<br />mdadm --stop /dev/md127<br />mdadm --assemble /dev/md127 /dev/sda2 --run<br /># activate LVM on /dev/md127<br />vgscan<br /># rename VG<br />vgrename OLDVG NEWVG<br /># mount /<br />vgchange -a y NEWVG<br />tune2fs -U $(uuidgen) /dev/mapper/NEWVG-root<br />mount /dev/mapper/NEWVG-root /mnt<br /># mount /boot<br />mount /dev/md126 /mnt/boot</pre></li><li>Now comes the really annoying part: You have to update /etc/fstab (CentOS 6 has the UUID of the /boot array), /boot/grub/grub.conf (CentOS 6 has the UUID of the / array and the VG of /) and possibly /boot/grub/initramfs-MUTTER.img to use the new UUIDs. The really fun part (for me) is that the LiveDVD doesn't have joe. So I had to write the UUID down on a piece of paper, then write it into grub.conf.<p>You can find the UUID of an array with <pre>mdadm --detail /dev/md0</pre>If you want more flexibility, do <pre>mount --bind /proc /mnt/proc<br />mount --bind /dev /mnt/dev<br />mount --bind /sys /mnt/sys<br />mount --bind /tmp /mnt/tmp<br />chroot /mnt</pre> This will allow you to run <code>mkinitrd</code> if you need to. Note that this assumes your live DVD has a kernel that is compatible with your Linux distro.</li><li>Reboot to the new system. 
Keep your fingers crossed.</li><li>Now you just insert your 2 other disks and run <pre>sfdisk -d /dev/sda | sfdisk /dev/sdb<br />sfdisk -d /dev/sda | sfdisk /dev/sdc<br />mdadm --add /dev/md126 /dev/sdb1<br />mdadm --add /dev/md126 /dev/sdc1<br /># wait until rebuild is finished<br />cat &lt;&lt;GRUB | grub<br />device (hd0) /dev/sdb<br />root (hd0,0)<br />setup (hd0)<br />GRUB<br />cat &lt;&lt;GRUB | grub<br />device (hd0) /dev/sdc<br />root (hd0,0)<br />setup (hd0)<br />GRUB<br />mdadm --add /dev/md127 /dev/sdb2<br />mdadm --add /dev/md127 /dev/sdc2<br /></pre></li><li>Ask yourself - was this really worth it? Wouldn't <a href="http://clonezilla.org/">Clonezilla</a> have been so much easier?</li></ol> <p>That didn't seem too hard, you might be saying. What I'm omitting is that when I booted to the new array, I got a lot of checksum errors and a failed fsck. I did fsck -y /dev/md127 a bunch of times until it came up clean. <p>Also - how do I get my arrays back to md1 and md0? The old method (--update=super-minor) no longer works.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-16736252965334141112014-06-12T20:42:00.000-04:002014-06-12T20:42:48.580-04:00End of an era<p>A hard drive was dying on Billy. <pre>Thu Jun 12 19:15:31 EDT 2014<br />19:15:31 up 1316 days, 19:15, 2 users, load average: 0.97, 0.97, 0.77<br /></pre> <p>But the server is rented. So while I would have tried to do a hot swap, iWeb wisely wanted to do a shutdown. 
<pre>Thu Jun 12 20:36:03 EDT 2014<br />20:36:03 up 8 min, 12 users, load average: 2.79, 1.28, 0.56</pre> <p>Oh well.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-8932225253424807612014-04-30T15:06:00.000-04:002014-04-30T15:06:08.581-04:00Furthermore.<p>A corollary to my <a href="http://utlw.blogspot.ca/2012/06/object-method-may-make-decision-or.html">previous</a> <a href="http://utlw.blogspot.ca/search/label/dictums">dictum</a> that a method may make a decision OR do something is that you want to cut a larger task into smaller pieces. And each piece generally looks like the following:<pre>sub doing_something {<br /> my( $self ) = @_;<br /> $self->prepare_something;<br /> if( $self->is_it_time_to_do_something ) {<br /> $self->before_something;<br /> $self->something;<br /> $self->after_something;<br /> }<br /> $self->unprepare_something;<br />}<br /></pre><p>In the above, <code>something</code> is just the name of the particular small piece of the larger task. The <code>prepare_something/unprepare_something</code> calls are there to avoid all possible side-effects in <code>is_it_time_to_do_something</code>. The <code>before_something/after_something</code> calls are there for things like logging, timing, transactions and other "admin" actions that aren't related to <code>something</code>. <p>I feel like I've been infected by all the Java I did last spring.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-77141339527230967122014-04-09T16:02:00.000-04:002014-04-10T11:26:07.673-04:00Parsing HTTP::Request->content with CGI.pm<p>Back in the mists of time, when the Web was young and unconquered, Lincoln Stein wrote a module for Perl that would allow people to easily deal with parameters handed to a CGI program and to generate HTML. This module eventually grew to include not one but several kitchen sinks. 
It includes its own autoload mechanism, its own file handle class and more. It Just Works when called from FastCGI, Perlex, mod_perl and others. <p>While CGIs have all but disappeared, this module is still very useful for handling all the finicky edge cases of dealing with HTTP request content. But if you write your own web server environment, using CGI.pm to parse the HTTP content can be hard. You basically have to fake it out. <p>This is how you get the params from a GET request. <pre># $req is a HTTP::Request object<br />local $ENV{REQUEST_METHOD} = 'GET';<br />local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";<br />local $ENV{CONTENT_TYPE} = $req->header( 'content-type' );<br />local $ENV{'QUERY_STRING'} = $req->uri->query;<br />my $cgi = CGI->new();<br /><br /># Now use $cgi as you wish</pre> <p>And here we parse the params from a POST request. Note that POST requests can be big. Very big. If you aren't careful, they will fill up your memory. Always check Content-Length before reading in a POST request. In the following code, all the content was written to a file. 
<pre># $req is a HTTP::Request object<br /># $file is a filename that contains the unparsed request content<br />local $ENV{REQUEST_METHOD} = 'POST';<br />local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";<br />local $ENV{CONTENT_TYPE} = $req->header( 'content-type' );<br />local $ENV{CONTENT_LENGTH} = $req->header( 'content-length' );<br />local $CGITempFile::TMPDIRECTORY = "/YOUR/TEMP/DIR/HERE";<br /># CGI->read_from_client reads from STDIN<br />my $keep = IO::File->new( "<&STDIN" ) or die "Unable to reopen STDIN: $!";<br />open STDIN, "<$file" or die "Reopening STDIN failed: $!";<br />my $cgi = CGI->new();<br />open STDIN, "<&".$keep->fileno or die "Unable to reopen $keep: $!";<br />undef $keep;<br />unlink $file;<br /><br /># Now use $cgi as you wish</pre><p>The fun is that CGI will only read POST data from STDIN, so we have to redirect that to our file, saving and restoring the previous STDIN. <p>The above code also works when you are uploading a file with <code>multipart/form-data</code>, which is how I got caught up in all this kerfuffle. <p>It's really too bad that one can't just do <pre>my $cgi = CGI->new( $req );</pre> Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-47630743296336132042014-03-27T13:45:00.001-04:002014-04-09T15:31:26.141-04:00Dave Brubeck Quartet<p>And now for something completely different: Dave Brubeck Quartet's <a href="http://www.youtube.com/watch?v=o2In5a9LDNg">Time Out</a> album is currently on heavy play. I know I'm late to the party. In fact, I was born too late for this party.
But I'm really, really digging the combination of mellow, swing and exotica on this album.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-39987922756804106782014-03-20T13:07:00.001-04:002014-03-20T13:09:14.575-04:00 25,000 Linux/UNIX Servers Infected with Malware<p>And <a href="http://it.slashdot.org/story/14/03/18/2218237/malware-attack-infected-25000-linuxunix-servers">this is a big reason</a> why you pay a real sysadmin to do your system administration. <p>In short, people were installing WordPress badly (friends don't let friends use PHP). They were allowing password-authenticated ssh login over the internet. They were doing <code>chmod 0777 ~apache/html_docs</code>. They were doing other highly unsafe things. <p>If you can't see the problem with these things, then you need to talk to a professional sysadmin.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-84339261395310974052014-03-19T15:56:00.002-04:002014-03-19T15:56:12.442-04:00Remove an LVM volume group from the kernel<p>I FINALLY found it!</p> <p>The problem: You are messing around with loopback files and volume groups. You've removed the loopback, but the VG stays in the kernel's internal list. <code>vgremove</code> is of course NOT how you get rid of it. <p>The sane way of doing this is: <pre>vgchange -a n VolGroup00<br />kpartx -d /dev/nbd0<br />qemu-nbd -d /dev/nbd0<br />vgscan</pre> <p>But maybe you killed qemu-nbd by mistake? Or maybe your partition is long gone. Then you need to use dmsetup to remove all traces: <pre># ls -l /dev/mapper/<br />total 0<br />lrwxrwxrwx. 1 root root 7 Mar 18 22:08 GEORGE2-root -> ../dm-0<br />lrwxrwxrwx. 1 root root 7 Mar 18 22:08 GEORGE2-swap -> ../dm-1<br />lrwxrwxrwx. 1 root root 7 Mar 19 15:44 Test02-LogVol00 -> ../<b>dm-4</b><br />lrwxrwxrwx.
1 root root 7 Mar 19 15:44 Test02-LogVol01 -> ../<b>dm-5</b><br />crw-rw----. 1 root root 10, 58 Mar 18 22:08 control<br /># dmsetup info /dev/<b>dm-4</b><br />Name: <b>Test02</b>-LogVol00<br />State: ACTIVE<br />Read Ahead: 256<br />Tables present: LIVE<br />Open count: 0<br />Event number: 0<br />Major, minor: 253, 4<br />Number of targets: 1<br />UUID: LVM-zo1BvaMXr6TS1knhxoyjhtItHEaIVH4wGJz2s2w8w24za3486Aa9ur0igGMxpLf7<br /># dmsetup remove /dev/<b>dm-4</b><br /># dmsetup remove /dev/<b>dm-5</b><br /># vgscan<br /> Reading all physical volumes. This may take a while...<br /> Found volume group "GEORGE2" using metadata type lvm2</pre> <p>And there was much rejoicing. Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-80706868160410995622014-03-18T23:02:00.000-04:002014-07-30T11:42:14.459-04:00I am INVINCIBLE!<p>Nothing is beyond me. When it comes to computers, I am all conquering.</p> <p>That actually might be an exaggeration. But I just pulled off a stunt that really impressed me.</p> <p>I'm moving all my systems from CentOS 5 to CentOS 6. (<i>Why so soon?</i> Shut up) In the process, I need to move my VMs from VMware Server 1 (<i>Seriously?</i> Shut up) to KVM (libvirt specifically). For the most part, I'm actually starting up whole new VMs and reconfiguring them. But I still might want to look at my old data, fetch old files, and whatnot. This means being able to read VMware's vmdk files. This is "easy": <pre>modprobe nbd max_part=8<br />qemu-nbd -r --connect=/dev/nbd0 /vmware/files/sda.vmdk<br />kpartx -a /dev/nbd0<br />vgscan<br />vgchange -a y VolGroup00<br />mount -o ro /dev/mapper/VolGroup00-LogVol00 /mnt/files</pre> <p>There are 3 complications to this: <p>First, you can't have multiple VGs with the same name active at once. The workaround is to mount only one at a time. You can rename VGs with <code>vgrename</code> but that's a job for another day.
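<p>For the record, a sketch of what that rename would look like. When two VGs share a name, vgrename has to address the inactive duplicate by its UUID rather than its name; the UUID placeholder and the new name below are illustrative, not from a real system:

```
# list names and UUIDs of all visible VGs, duplicates included
vgs -o vg_name,vg_uuid
# rename the inactive duplicate by UUID, then activate it under its new name
vgrename <UUID-of-the-duplicate> oldvm00
vgchange -a y oldvm00
```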
<p>Next off, I chose to have my vmdk split into multiple 2GB files. This makes copying them around so much more fun. But qemu only understands monolithic files, so you need <a href="https://my.vmware.com/web/vmware/details?productId=351&downloadGroup=VDDK550">vmware-vdiskmanager</a> to convert them. Specifically: <pre>vmware-vdiskmanager -r /vmware/files/sda.vmdk -t 0 /vmware/files/single.vmdk</pre> <p>Lastly (and this is the main point of this post) CENTOS 6 DOESN'T SHIP WITH NBD! After WTFing about as hard as I could, I googled around for one. Someone must have needed nbd at some point, surely. The only solution I found was to recompile the kernel from scratch. Which is stupid. As a workaround, I used the <a href="http://elrepo.org/tiki/kernel-lt">kernel-lt</a> from <a href="http://elrepo.org/tiki/tiki-index.php">elrepo</a>. But the real solution would be a kmod. I thought doing a kmod would be hard, so I set aside a few hours. Turns out, it's really easy and I got it right on the first try. <h2 style="font-size: 1.2em;">tl;dr - rpm -ivh <b><a href="http://awale.qc.ca/CentOS/rhel6/x86_64/kmod-nbd-0.0-1.el6.x86_64.rpm">kmod-nbd-0.0-1.el6.x86_64.rpm</a></b></h2> <p>I based my kmod on <a href="http://elrepo.org/tiki/kmod-jfs">kmod-jfs</a> from <a href="http://elrepo.org/tiki/tiki-index.php">elrepo</a>. <ol><li>Install the kmod-jfs SRPM;</li><li>Copy jfs-kmod.spec to nbd-kmod.spec;</li><li>Copy kmodtool-jfs-el6.sh to kmodtool-nbd-el6.sh;</li><li>Edit nbd-kmod.spec. You have to change kmod_name and the %changelog section. You might also want to change kversion to your current kernel (uname -r).
If not, you need to add --define "kversion $(uname -r)" when running rpmbuild;</li><li>Create nbd-0.0.tar.bz2;</li><li>Build, install and test the new module.<pre>rpmbuild -ba nbd-kmod.spec<br />rpm -ivh ~/rpmbuild/RPMS/x86_64/kmod-nbd-0.0-1.el6.x86_64.rpm<br />modprobe nbd<br />ls -l /dev/nbd*</pre><li>FLAWLESS VICTORY!</li></ol> <p>The hard part (of course) was that I wasn't sure what to put in nbd-0.0.tar.bz2. The contents of jfs-0.0.tar.bz2 just look like the files from fs/jfs in the kernel tree with Kconfig and Makefile added on. So I pulled down the kernel SRPM and did an <code>rpmbuild -bp</code> on that (just comment out all the BuildRequires that give you grief. You aren't doing a full build.) Then I poked around for nbd in ~/rpmbuild/BUILD/vanilla-<a href="http://vault.centos.org/6.5/updates/Source/SPackages/kernel-2.6.32-431.5.1.el6.src.rpm">2.6.32-431.5.1.el6</a>/. Turns out there's only nbd.c and nbd.h. So that goes in the pot. I copied over the Makefile from jfs, modifying it slightly because jfs is spread over multiple source files. Kconfig looked like kernel configuration vars. I just copied BLK_DEV_NBD out of vanilla-2.6.32-431.5.1.el6/drivers/block/Kconfig. <p>This entire process took roughly 1 hour and it worked on the first try. Of course, all the magic is in kmodtool-nbd-el6.sh. I was expecting a lot of pain; instead it just worked. I was so surprised I did <code>modprobe -r nbd ; ls -l /dev/nbd*</code> just to make sure I wasn't getting a false positive. Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-42566982610276547282014-03-13T19:31:00.004-04:002014-03-14T10:42:34.356-04:00encfs<p><b>WARNING</b> The encryption of encfs is <a href="https://defuse.ca/audits/encfs.htm">severely broken</a>. Do not rely on it to keep anything secret.</p> <p>So one can layer encfs on top of google-drive-ocamlfuse.</p> <p>Here's how I set it up.
<pre>yum --enablerepo=<a href="https://fedoraproject.org/wiki/EPEL">epel</a> install rlog-devel boost-devel<br />wget <a href="http://encfs.googlecode.com/files/encfs-1.7.4.tgz">http://encfs.googlecode.com/files/encfs-1.7.4.tgz</a><br />tar zxvf encfs-1.7.4.tgz<br />cd encfs-1.7.4<br />./configure --prefix=/opt/encfs-1.7.4 \<br /> --with-boost-serialization=boost_serialization-mt \<br /> --with-boost-filesystem=boost_filesystem-mt<br /><br />make all &amp;&amp; sudo make install<br />sudo sh -c "echo /opt/encfs-1.7.4/lib &gt;/etc/ld.so.conf.d/encfs-1.7.4.conf" <br />sudo ldconfig <br />for n in /opt/encfs-1.7.4/bin/encfs* ; do<br /> sudo ln -s $n /usr/local/bin <br />done <br /><br />encfs ~/<a href="http://utlw.blogspot.ca/2014/03/backups-in-cloud.html">googledrive</a>/Backup/Encoded ~/encfs<br /></pre> <p>And now "all" I have to do is <code>rsync -av /remote/pictures/ ~/encfs/Pictures/ --progress</code>. And wait. A lot, given I'm getting roughly 12.43kB/s through this setup.<br />Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-4651976297504755102014-03-13T17:42:00.000-04:002014-03-13T18:20:57.737-04:00Backups in the cloud.<p>Given that 1TB of backup from Google is now <a href="http://tech.slashdot.org/story/14/03/13/1834208/1gb-of-google-drive-storage-now-costs-only-002-per-month">$10 a month</a>, I had to look into doing cloud backups again. <p>Google doesn't have a native Linux client. So one has to use <a href="https://github.com/astrada/google-drive-ocamlfuse/wiki/Installation">google-drive-ocamlfuse</a>. Installing this on CentOS 6 is <a href="http://xmodulo.com/2013/10/mount-google-drive-linux.html">surprisingly complex</a>. And Google Drive's auth mechanism is based around a web interface. <p>But once I got it working, it Just Worked. Or rather, I could copy small files to ~/googledrive, see them via the web interface, delete them there and they are now missing.
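<p>That copy/see/delete round trip can be scripted as a quick sanity check of the mount. A minimal sketch — MNT defaults to a throwaway temp directory so the script runs anywhere; point it at the real ~/googledrive mount to test the actual drive:

```shell
# round-trip check: write a file through the mount, read it back, clean up.
# MNT defaults to a temp directory; override it with the real FUSE mount point.
MNT=${MNT:-$(mktemp -d)}
f="$MNT/.mount-check-$$"
echo "alive" > "$f"                                 # write through the mount
if grep -q alive "$f"; then status=ok; else status=fail; fi
rm -f "$f"                                          # delete it again
echo "mount check: $status"
```

Run it as <code>MNT=~/googledrive sh mount-check.sh</code> once the drive is mounted; anything other than "mount check: ok" means the FUSE layer is wedged.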
<p>Of course you wouldn't leave unencrypted backups on Google's servers. I futzed around with gpg a bit, but maybe layering encfs on top of google-drive-ocamlfuse would be a better idea. <p>My experimentation was cut short by supper, and by the fact that uploading a 400MB file was very slow :-) (sent 418536429 bytes received 31 bytes 219416.23 bytes/sec aka 1.6 Mbit/s) <p>For the record, here is how I installed google-drive-ocamlfuse:<pre>yum install m4 libcurl-devel fuse-devel sqlite-devel zlib-devel \<br /> libzip-devel openssl-devel<br />curl -kL https://raw.github.com/hcarty/ocamlbrew/master/ocamlbrew-install \<br /> | env OCAMLBREW_FLAGS="-r" bash<br />source /home/fil/ocamlbrew/ocaml-4.01.0/etc/ocamlbrew.bashrc<br />opam init<br />eval `opam config env --root=/home/fil/ocamlbrew/ocaml-4.01.0/.opam`<br />opam install google-drive-ocamlfuse<br />sudo usermod -a -G fuse fil<br />google-drive-ocamlfuse <br />google-drive-ocamlfuse ~/googledrive</pre><p>The second-to-last command will open a browser to get an OAuth token. This means you need htmlview and a valid DISPLAY. The token is only good for 30 days. This is something that needs to be better automated. Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com2tag:blogger.com,1999:blog-4162607704213367892.post-23127110356706241052014-03-12T12:36:00.000-04:002014-03-18T23:03:50.974-04:00Death To Proprietary Drivers<p>I was working on a CentOS 6 install for work and figured "hey, I should upgrade Mustang to the latest version." Normally this means <pre>yum upgrade<br />shutdown -r now</pre> <p>Of course that didn't work; Mustang has an APU and uses a proprietary driver from AMD for X.org. I pretty much never use Mustang's console so I didn't notice this for 2 days, when my wife complained about not being able to watch Lost. <p>After much futzing, I found the error message: <code>symbol lookup error: /usr/lib64/xorg/modules/drivers/fglrx_drv.so: undefined symbol: GlxInitVisuals2D</code>.
This means AMD's driver is doing something stupid. I of course can't compile it nor fix it. I tried to download the <a href="http://support.amd.com/en-us/download/desktop?os=Linux+x86">latest driver</a>, but that refused to install. Curse, swear, google, google, and then I <a href="https://www.centos.org/forums/viewtopic.php?t=3716">found</a> it. <pre>rpm -ivh http://elrepo.org/linux/elrepo/el6/x86_64/RPMS/elrepo-release-6-6.el6.elrepo.noarch.rpm<br />rpm -e fglrx64_p_i_c-12.104-1 --nodeps<br />yum -y install fglrx-x11-drv-32bit fglrx-x11-drv kmod-fglrx<br />aticonfig --initial</pre> <p>First line installs <a href="http://elrepo.org/tiki/tiki-index.php">ELRepo</a>, which you might already have. Second line removes the previous drivers, which conflict with the new ones. The <code>--nodeps</code> is because Adobe really wants OpenGL installed. Third line is the important one: it installs the new drivers and does all the magic to get them working. Yes, the X.org driver needs to install a kernel module. Last line just makes sure that Xorg.conf is set up properly. I'd been playing around in it to try to get it to work. <p>So Death to Proprietary Drivers! And long live the guys at ELRepo!Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0tag:blogger.com,1999:blog-4162607704213367892.post-89763664629343017492014-01-24T13:41:00.000-05:002014-03-17T21:47:13.752-04:00Postfix mail relaying<p>RHEL 6 (and CentOS) have moved from sendmail to postfix. This is for the most part a Good Thing; sendmail was a mess. However it means I have to learn some new stuff. Specifically, how to convince postfix to relay email through my ISP's SMTP server. <p>First, I have a VM that does relaying for all my other computers.
On this VM, I set up postfix to relay all mail: <pre># /etc/postfix/main.cf<br />myorigin = awale.qc.ca<br />relayhost = smtp.cgocable.ca<br />inet_interfaces = all<br />mynetworks = 10.0.0.0/24, 127.0.0.0/8</pre> <p><code>myorigin</code> means user@localhost becomes user@awale.qc.ca. <p><code>relayhost</code> is where email is relayed to. <p><code>inet_interfaces</code> means postfix will listen for SMTP on all the VM's networks (default is only localhost). <p><code>mynetworks</code> means postfix will trust any email coming from a host on my LAN. Yes, this is not very secure. But I trust my LAN implicitly. I have Wifi on a separate subnet, so anything on my LAN will have to be physically connected to my LAN. <p>On other computers/VMs, I just need <code>myorigin</code> and <code>relayhost</code>. This last one points to the postfix VM, not COGECO.Philiphttp://www.blogger.com/profile/00040474015926871369noreply@blogger.com0
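<p>For completeness, a sketch of what that client-side config looks like. The relay hostname below is an assumption — the post doesn't name the relaying VM — so substitute your own:

```
# /etc/postfix/main.cf on every other computer/VM
myorigin = awale.qc.ca
# point at the relaying VM on the LAN, not the ISP;
# "relay.awale.qc.ca" is a placeholder for the postfix VM's hostname.
# The [brackets] tell postfix to skip the MX lookup for this host.
relayhost = [relay.awale.qc.ca]
```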