07/28/13

So, what started as a week-long task to set up a new nagios server at work ended up taking almost a month...because there were many days where I'd only have an hour or less to put into the side task. The other stumbling block was that I had decided the new nagios server configuration files would be managed under subversion, instead of RCS as had been done in the previous two incarnations. New SA's don't seem to understand RCS and that the file is read-only for a reason...and it's not to make them use :w! ... which lately has resulted in the sudden reappearance of monitors for systems that had been shut down long ago.

Though now that I think of it, there used to be a documented procedure for editing zone files (back when it was done directly on the master nameserver and version controlled by RCS). As I recall, it was to perform an rcsdiff, and then use the appropriate workflow to edit the zone file.
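Roughly, that old procedure would have looked something like this; the zone file name and paths are made up for illustration, and the exact steps are from memory:

```shell
# Hypothetical reconstruction of the old RCS zone-edit workflow.
cd /var/named
rcsdiff example.com.zone   # confirm the working file matches the last check-in
co -l example.com.zone     # lock and check out an editable copy
vi example.com.zone        # make the change (and bump the serial)
ci -u example.com.zone     # check in; leave a read-only working copy behind
```

The read-only working copy that ci -u leaves behind is exactly the file the new SA's keep :w!'ing over.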

But, when I took over managing DNS servers, I switched to having cfengine manage them, and the zone files now live under masterfiles, so version control is now done using subversion. I had started butchering the DNS section in the wiki; I probably should see about writing something up on all the not so simple things I've done to DNS since taking it over...like split, stealth, sed processing of the master zone for different views, DNSSEC, the incomplete work to allow an outside secondary to take over as master should we ever get a DR site, and other gotchas, like consistent naming of slave zone files now that they are binary.
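The sed processing of the master zone was along these lines: keep one master copy and derive the per-view zone from it. This is only a sketch under assumed conventions; the comment marker and file names are invented:

```shell
# Build an external-view zone by stripping records tagged as internal-only.
cat > example.com.master <<'EOF'
www      IN A 192.0.2.10
intranet IN A 10.0.0.5 ; internal-only
EOF
sed '/; internal-only/d' example.com.master > example.com.external
cat example.com.external   # only the www record survives
```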

Additionally, work on the nagios at work was hampered by the fact that provisioning for Solaris and legacy systems is CF2, and the new chef-based provisioning is still a work in progress...where I haven't had time to get into any of it yet. So, I had to recreate my CF3 promises for nagios in CF2.

But the Friday before last weekend it finally reached the point where it was ready to go live. Though I've been rolling in other wishlist items and smashing bugs in its configuration, and I still need to decide what the actual procedure will be for delegating sections of nagios to other groups.

One of the things I had done with the new nagios at work was set up PNP4Nagios...as I had done at home. And, while looking to see if I needed to apply performance tweaks to the work nagios, all the pointers were to have mrtg or cacti collect and plot data from nagiostats. Well, a new work cacti is probably not going to happen anytime soon, and the old cacti(s) are struggling to monitor what they have now. (I spent some time a while back trying to tune one of them...but it's probably partly hampered by the fact that its mysql can use double the memory that is allocated to the VM. Though reducing it from running 2 spines of 200 threads each...on the 2 CPU VM...to a single spine with fewer threads has helped. Something like the boost plugin would probably help in this case, but the version of cacti is pre-PIA.) But, it could be a long time before it gets replaced (not sure if an upgrade is possible....) Our old cacti is running on a Dell PowerEdge server that has been out of service for over 6 years...with the cacti instance over 8 years old (Jul 8, 2005)...and the OS is RHEL3.

Anyways, it occurs to me that there should be a way to get PNP4Nagios to generate the graphs, and I search around and find check_nagiostats. Though there's no template for it. Oh, there is a template nagiostats.php; if I create a link for check_nagiostats.php it should get me 'better' graphs. Which is what I have CF2 do at work.
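The link is just a symlink in the PNP4Nagios templates directory, since templates are matched by check command name. The path below is an assumption; adjust for your install:

```shell
# check_nagiostats has no template of its own, so reuse nagiostats.php.
cd /usr/local/pnp4nagios/share/templates
ln -s nagiostats.php check_nagiostats.php
```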

07/14/13

So, recently there was a 'long' 4th of July weekend....on account that I opted to take Friday (the 5th) off as well.

I kind of thought I would tackle a bunch of different projects this weekend, though I've pretty much shelved the idea of re-IP'ing my home network. Perhaps something to do when I get my configuration management better fleshed out.

What I decided was that it looked like there was just one last thing on one of the two Ubuntu servers that I'm retiring. So, I figured I'd quickly move that and then go on to the next thing. In the end, I didn't get it completed until Monday night.

For background, some years back...after my return to IRC, I had initially gone with Chatzilla (being that Firefox was my standard browser), which later moved to xulrunner and Chatzilla so it was independent of my browser. Though it was kind of annoying having it running at work and at home, and somewhat confusing for co-workers that ran text based IRC clients in screen somewhere and ssh'd in, etc. Most people that did this, were doing irssi.

So, I initially built it from source and ran it on my old RedHat 7.3 server, and that was usable. Later I set up an Ubuntu box to replace that server (the hardware had previously been SuSE....acting as an internal router for ivs status tracking....) The setup evolved, in that I would start screen detached from rc.local....which was important since the system would see patches on a regular basis, requiring reboots....which is kind of a reason for switching to FreeBSD.
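The rc.local bit was nothing fancy; roughly something like this, with the user name and session name here being illustrative:

```shell
# Start a detached screen session running irssi as my user at boot.
su - myuser -c '/usr/bin/screen -dmS irc irssi'
```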

Over time, I would make little tweaks here and there, to this irssi setup. Like twirssi, doing ssl, and later bitlbee to integrate Facebook chat (came across some stuff that I should add now...)

And, incorporating other tweaks I come across online when there's some problem that becomes sufficiently bothersome that I want to address it. The one problem I haven't been able to solve is keeping server/system messages confined to the one window. Namely keeping system CRAP going to the system window, and allowing channel CRAP to show up in the channel windows....but instead I'll get system CRAP in whatever channel window is active. Which is annoying because it's usually the work channel. Where it should be just signal and no noise.
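For reference, the knob I keep fiddling with is irssi's per-window level setting; something like the following, though it clearly hasn't fully solved it for me (window numbers and levels here are illustrative):

```
/window 1                  # the status window
/window level +ALL         # let it catch everything, including CRAP
/window 3                  # a channel window
/window level -CRAP        # try to keep server noise out of it
```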

Anyways...

I had started to move things more than a month ago, in that I built irssi and bitlbee (including the cfengine3 promise for it...not really much config-wise for cfengine to manage for irssi...though I envisioned promising that it's running all the time, though irssi has generally been stable everywhere else that I've run it).

But, then I got distracted by other cfengine3 work. Even though things started to get pressing when twirssi stopped working, due to API 1.0 going away...so I had to update Net::Twitter and twirssi. Updating twirssi wasn't that hard to do, but Net::Twitter was a problem, so I opted to remove it and its dependencies and then install it and its dependencies using CPAN.

I also made note to install net/p5-Net-Twitter from ports on dbox.

twirssi seems to be having other issues, which I had intended to investigate...perhaps after I move... But, that was like a month ago....

07/06/13

Ran into a new problem recently....though my need for SSL in squid on ubuntu is deprecated, given that I'm slowly replacing this server with a FreeBSD server.

As a result, I don't pay attention to this ubuntu server as much as I used to, so I've configured unattended-upgrade. It was installed, but it didn't seem to do anything, in that on other servers I'd log in to find that there were lots (40+) of patches available, more than half of them security. But I came across how to configure it to do more than just security patches, including sending me email and, on some systems, automatically rebooting when necessary. (Should've thought to see how unattended-upgrade is configured to do such things in the Ubuntu AMI I have in AWS.)
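The relevant bits live in /etc/apt/apt.conf.d/50unattended-upgrades; roughly like the following, though option names have shifted between releases, so treat this as a sketch (the origin strings and mail address are placeholders):

```
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
        "Ubuntu lucid-updates";
};
Unattended-Upgrade::Mail "root@example.com";
Unattended-Upgrade::Automatic-Reboot "true";
```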

So I got unattended-upgrade configured on this old server (32-bit Ubuntu Server, which I've heard they have a 12.04LTS download for??? They had said they dropped 32-bit server support, so there was a version with 10.04LTS. So I couldn't upgrade and now I'm way past EOL, which is causing problems...probably need to hunt down the landscape and ubuntuone services and nuke them, instead of letting them degrade my server for being EOL.) I've also had to update packages on here from outside sources to keep things running, so I guess I should work harder on abandoning this server.... Where it'll likely get reborn as [yet ]a[nother] FreeBSD server....along with the server that I think I have all the parts collected for, but just need to sit down and put together. It started as a mostly functional pulled 1U server, in need of...well, either new fans or a new case.... I opted for the new case route. It also needed drives and memory. But, as a result of the new case route...aside from case/power supply...it meant I would need to get heatsinks...since the passive ones depended on the 1U case channeling air flow....which would be hard to recreate in the tower case I went with. It's a huge tower case, given that it's an E-ATX motherboard...yet it isn't a full tower (like the formerly Windows machine called TARDIS...someday I'll work on its regeneration....need money to buy all the bits and pieces that'll make that up, which I haven't fully worked out what those will be....or where it'll go, since my dual 23" widescreen FreeBSD desktop has consumed all of the desk that it would've shared....and I'm not really keen on the idea of a KVM for this situation.)

Anyways...every day I get an email from unattended-upgrade for this system.... with:

This is because of that quirk where, even though I rebuilt my version with SSL and kept it at the same version...it wants to install its version to replace mine (of the same version). Which is why I did the hold thing.

I could do the alternative of adding a string to make my version sort ahead of the current one....though I suppose I won't unhold...so that unattended-upgrade won't upgrade should such a thing appear (unlikely, since both the OS and squid are ancient...and there'll be no more updates). But, the intent is to hopefully silence unattended-upgrade in this matter.
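For the record, the 'hold thing' and the version-string alternative look roughly like this; package name aside, these are standard dpkg/apt mechanisms, though apt-mark may not exist on a release this old:

```shell
# Pin my locally rebuilt squid so apt won't replace it with the same version:
echo "squid hold" | dpkg --set-selections
# or, where apt-mark exists:
apt-mark hold squid
# The alternative: rebuild with a version suffix (e.g. append +ssl1) so the
# local package sorts ahead of the archive's and never looks upgradable.
```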

Though I'm kind of surprised it's still doing something....hmmm, guess there was a new security patch to squid 2.7 back on January 29, 2013....that I've been missing (suppose it's already downloaded the update in its 'cache'....or the backend is still there, it's just not getting updates beyond what's there....whatever, I think I'm down to one more service to move off....)

Remove any rules to reinstall CF2 or add cfexecd or cfagent to crontabs

Remove cfexecd from start up

Edit update.cf

Set email options for executor in promises.cf

cf-agent --bootstrap

If all went well, you are now running CFEngine 3.

Bootstrap policy server using:

cf-agent --bootstrap --policy-server

Remove all rules and policies that are capable of activating CFEngine 2 components

Convert cfservd.conf into a server bundle

Place a reference to this in promises.cf

Add converted CFEngine 2 policies or create new CFEngine 3 policies

Done????

Something's missing....where's this interoperability taking place? Does CF3 know how to run CF2 policies? No... where's this 'replace CF2 with CF3 at my pace'? Reads like it's a full in-place replacement of CF2 with CF3....

So I finally raised this on a list...

Answer?!

It's why the CF3 binaries have dashes in the name. So you can drop them into the CF2 working directory.... The trick is editing the exec_command in the executor configuration, that's the command for running the agent; modify it to run both agents (v2 and v3).
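As I understood the answer, the executor change would look something like this in the cf-execd configuration; the paths and the exact chaining are my assumption of the suggestion, not a tested config:

```
body executor control
{
    # Run the old v2 agent, then the v3 agent, from the one executor:
    exec_command => "/var/cfengine/bin/cfagent -q ; /var/cfengine/bin/cf-agent -f /var/cfengine/inputs/promises.cf";
    mailto       => "root@example.com";
}
```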

In retrospect, maybe what I should've done is switch the origin of my sysutils/cfengine to sysutils/cfengine34 when 3.5.0 came out. Since, as I see cfengine-3.4.5 has recently come out, bug fixes to cfengine-3.4.4 were more of what I was after than new features. Though I am intrigued by what 3.5.0 appears to bring, and am considering making use of it...of course, by the time I get to it, 3.5.1 or newer might be out.

OTOH, do I really want to build cfengine-3.4.5 in the semi-usable package management system we use at work for building and maintaining packages for Solaris 9, Solaris 10 SPARC, and Solaris 10 x64? The system builds everything 32-bit, though I'm pretty sure we don't have 32-bit hardware anywhere in the datacenter anymore. Though we still have a few Solaris 10 systems around.

But, this will be a big mess at work....nothing is using 0.9.8y yet (though I've been meaning to build it so I'll be ready when there's a bind-9.9.3-P2...I had started building 9.9.3 when there was a security advisory about a problem introduced in that version...so I'm waiting for the next 'real' security patch to do the upgrade...though maybe I shouldn't, since the intent is for this to be the first 64-bit build....)

06/30/13

This weekend, I decided it was time that I checked on port updates in my /compat/i386 FreeBSD 'system'. Which primarily exists to provide me some ports that don't build on 64-bit, namely emulators/wine-devel and net/nxserver. I don't recall the last time I used nx since I got it working; probably should check whether it is still working (probably okay on my home system, but might be broken on the work one....and might see about setting it up on the other work computer too).

Hmmm, hadn't updated ports since May 5th. Start with working through /usr/ports/UPDATING, and run into a problem with the 20130609 entry, AFFECTS: users of audio/flac and any port that depends on it, in that it thinks perl depends on flac. (Kind of an annoyance I have with dependencies....there can be miles of separation between one port and another port, but everything gets marked as depending on that very bottom port, when it in fact didn't or doesn't.... It was annoying trying to figure out why a port was marked BROKEN / DEPRECATED and not getting any attention except that people should stop using it...when hundreds of ports on my system depend on it. When it turns out that it's one or two ports that had an option set that caused them to depend on it. While the other ports generally don't care what options are enabled in that port, just that the command exists for it...or other reason. Though there are some ports that do care about what options were used, which I had ranted about earlier.) And I ran into Thunderbird also having that dependency, resulting in this kluge patch:

Code

--- Makefile.orig	2013-06-26 06:01:34.000000000 -0500
+++ Makefile	2013-06-27 20:07:04.142845537 -0500
@@ -98,6 +98,8 @@
 .endif
 
 post-patch:
+	@${REINPLACE_CMD} -e '/with SQLITE_SECURE_DELETE/s/_ERROR/_WARN/' \
+		${WRKSRC}/mozilla/configure.in
 	@${REINPLACE_CMD} -e 's|%%LOCALBASE%%|${LOCALBASE}|g' \
 		${WRKSRC}/mail/app/nsMailApp.cpp
 
 .if ${PORT_OPTIONS:MENIGMAIL}

But, I let the portmaster -r flac run anyway, with the suspicion that it would break later, because perl modules that depend on perl (and not flac) wouldn't get picked up as needing to be re-installed or upgraded, due to 20130612: AFFECTS: users of lang/perl* and any port that depends on it. It would instead break the re-install or upgrade of a port somewhere and abort. Which is what I found when I checked on it this morning.

So, I did a portmaster -R -r perl, and noticed that it seemed to include most of the ports that the previous portmaster hadn't done. In fact it included all of them. I also peeked in /usr/local/lib/perl5/5.14.2 and /usr/local/lib/perl5/site_perl/5.14.2 to see what perl modules had gotten missed....mainly the p5-XML-* ones that caused the previous portmaster to abort.

Though I probably should've looked to see if the second portmaster was going to address those, instead of doing them myself while it was waiting for me to confirm proceeding. Because that caused it to abort when re-installing those perl modules (that I had done while it was waiting), but restarting it got things done.

That leaves the latest entry 20130627: AFFECTS: users of ports-mgmt/portmaster, which is just informational and not currently applicable.

Before running into the flac entry, there had been "20130527: AFFECTS: users of lang/ruby18", which was pretty straightforward, since it only exists as a dependency of ports-mgmt/portupgrade, which I seldom use now...but I have other scripts that use binaries that come as part of it (namely portsclean), which I could probably replace with the portmaster way or something else. But, it's not really a priority; plus who knows if I won't decide to go back to using portupgrade...which has options in its pkgtools.conf that I haven't found equivalents for with portmaster, though that isn't currently an issue right now. Except perhaps that I'm holding back on updating to the latest emulators/virtualbox-ose, since I've gotten warnings from various sources to stay away from it.

The other big one is what's the portmaster equivalent to portupgrade's ALT_PKGDEP?

Somehow I expected there would be more than 12 ports wanting either client or server... probably missing the occurrences in multiline RUN_DEPENDS or some other way to specify the dependency. Since pkg_info says there are 103 ports that depend on the client, and two ports that depend on the server (neither being www/owncloud nor mail/roundcube, which are the ports that I'm running on 'zen' using the mysql server). On cbox, there are 73 ports that depend on it; some are obviously true...like net-mgmt/cacti and net-mgmt/cacti-spine, but nothing is depending on the server...though it is obviously being used by cacti. I left dbox with the default databases/mysql55-client...there are 71 ports depending on it.
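Those counts came from the old pkg_* tools; the queries are along these lines, with the package globs assumed to match what's actually installed:

```shell
# Ports that require the installed client/server packages ("Required by:"):
pkg_info -R mysql-client-5.5\*
pkg_info -R mysql-server-5.5\*
# versus grepping the ports tree for declared depends:
grep -l 'mysql' /usr/ports/*/*/Makefile
```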

Meanwhile, I have a postgres server running on zen, which was a dependency of something else that I had since removed....but I haven't stopped or removed postgres yet....

In fact after dealing with ruby, flac and perl....the only ports left to update are:

06/29/13

Sometime around mid-May, I think, I had gotten a plain business size envelope with just the Cox logo in the top left corner and my address on it. No other indication as to what the content was, so basically resembling similar letters pitching new services or such. Though those will probably start appearing again when the students return....

Not sure why, but I opened it for some reason...and eventually found it to be a letter informing me that Cox is about to go SDV and that I would need to obtain some 'free' tuning adapters (where the only 'free' part is no new monthly charge to my cablecard service -- where I only needed 4 tuning adapters to go with the 6 cablecards I'm renting.....the two older TiVo HDs having 2 S-Cards each...had thought about switching both to single M-Cards, but given my experience with a previous 'self-install' M-Card turning into a full-service install charge, because the M-Card was defective...a tech had to come out to make that determination and give me a non-defective one.)

The letter said that I could visit a Cox Home Solutions store or call to have them delivered. After some thought, I decided that I would call and have them deliver 4 tuning adapters, versus having no idea how big they were or when I would make it out that way. Memorial Day weekend was out, as was the first weekend of June for sure. And, the switch over was to be like June 25th.

So, I call and delivery is offered to me, which I confirm is what I want. The agent checks to make sure they do shipments to Manhattan, but then later on it's apparently not possible to deliver. But, after some further investigation by the agent, it seems like it might be possible, and eventually I get a call back saying that they have been shipped and I should get them around Friday (June 7th).

From past experience, items are shipped FedEx Home Delivery, so I had expected to find them sitting there waiting for me when I got home from work...but there wasn't anything. Perhaps Saturday instead....though I didn't intend to wait around until they did show up.... cacti consumed more time than I had intended that Saturday...

The delivery was kind of annoying...in that the delivery person dropped the boxes loudly on my door step with a big whomp...and then it was like he tried to kick in my door before running off.... mainly because it set off my burglar alarm....

Anyways, what I found were 4 huge boxes....big enough to contain DVRs or similar components. Perhaps it's the only size package they have for deliveries. But, after a quick unpack....the still-packaged 4 tuning adapters would fit into one of the boxes, minus the ineffective foam insert...because the insert was sized to protect something bigger than the small boxes that had come instead.

At first it looked like I wasn't getting the complete self-install kit...but later I found that the self-install box contained one coax, splitter and filter...while the tuning adapter box contained the other coax along with usb cable and power wart.

Though I had decided before getting this that I was going to go with a splitter closer to the wall and run a new, longer cable to the tuning adapter by each TiVo.... And, it was my intent for June 8th to be a Target run. It didn't pan out, and it's not something that is sold by the union computer store or the union bookstore...so I ended up ordering cables from Amazon.com.

And, then it was a matter of figuring out when I would get around to setting them up. A number of times where I could've done it went by, but it didn't occur to me that I should do it then. Since the operation would be disruptive to my cable hookup, I needed to avoid primetime. And, I needed to be sufficiently awake and steady too.

So, there's this BOINC project out of Poland called Radioactive@Home, where you have a radiation detector hooked up to a computer taking samples, etc. It's my second BOINC project with a hardware sensor. Though I had signed up for this one first...back on June 16, 2011. QuakeCatcherNetwork had come later, but getting a sensor was quick (though there were delays in getting it working; they had switched to a new sensor that they didn't have Linux drivers for yet...etc., etc.) But, doing Radioactive@Home took longer, as sensors are built in batches; there had been early batches that I missed, and I wasn't all that sure at first if I really wanted to go through the hassle of getting one.

But, then another user announced that he would do a group purchase of 50 or so, which should cut shipping costs quite a bit by having a cheaper large shipment from Poland, plus domestic delivery for the last leg. The way delivery costs go, you can get up to 3 for the delivery charge...though most people only want one....at least initially.

Basically I ordered my first detector around August 2011, and finally received it in March 2012. And, it just runs...though occasionally I'll look to see if anything interesting is recorded (like the interesting trace for around the end of the world....)

Meanwhile, on June 26, 2012 there was an announcement of a new detector...a pretty-looking one. My first sensor was a prototype-style case with rough cutouts, etc. Not really bad looking, but still plain and crude looking. While the announced sensor looked neat, the kind of thing that I might consider putting on my desk at work....

So, there was basically an announcement that there wasn't going to be another bulk US purchase...so after some thought, I decided this new detector was just too pretty to pass up. So, I ordered one mid to late July, 2012. Got confirmation on July 23rd, 2012. 27 Euros for the detector, plus 10 Euros shipping for up to 3 detectors; more than 3, pay for the detectors now and get billed for the actual shipping cost later. Plus, if using PayPal, specify that I'll pay the transaction fees....

In the previous order, it had been requested that we have PayPal funds to pay for the transaction....or use a check. I had tried to keep a float of cash in my PayPal account....but when it finally came time to pay, there wasn't quite enough to do that, so I opted to just mail a check. For this second order, I went with PayPal and had PayPal add the transaction charges to my total.

The first detector cost me $46.25 by personal check. The second detector cost me $47.36 (after conversion and including the transfer charge).... I sent the PayPal money on August 21, 2012.

And, then it was wait and wait and wait. I would check the boards now and then for updates...but it was mostly other people wondering the same thing.

Eventually, I stopped checking in...and kind of forgot all about the sensor. Though I did visit the site briefly to check what platforms the project supported, but didn't linger or read the detector threads. Because when I had originally ordered, I was down to a Solaris 10/x64 workstation, a Windows box, a first gen MacBookPro (32-bit Core Duo), and a dead Linux machine. Eventually, I got a computer to replace the dead Linux box...but I went with FreeBSD instead, and it eventually displaced the Solaris workstation. In February, 2013, while I was working late on my FreeBSD system, I saw the Windows box update itself and reboot, and then it failed to boot. It had killed itself....pretty much the same way my home Windows box had killed itself in an auto-update in February, 2012. I left it off, not sure what I would do with it....I thought about OmniOS or SmartOS...though it was a first gen i7, so no EPT for KVM. Eventually, I decided to install Ubuntu 12.04LTS on it....where it's mainly a backup for when my FreeBSD system crashes.... It's one thing that new Seagate drives only have 1 year warranties...it's another thing that they seem to have trouble lasting that long.....

And, then an iMac 27" appeared on my desk....back when it seemed bleak on getting FreeBSD working as my main workstation....I was talked into getting one. But, FreeBSD remains my main workstation....while there are some things for which the iMac is the only computer I have where they work (like being able to participate in WebEx, Lync, Google Hangouts or Xoom for web conferencing....plus it finally solves having mail stay open while I switch to the appropriate desktop to do whatever....I'm up to 17 now....where there are typically 4 to 12 windows...either of uniform size or variable size, and on some desktops the windows overlap, though that desktop is mainly for tailing logs.... I'm up to 2 full desktops and 2 half desktops for that....) Anyways, I had made a quick visit...because I wondered if Mac OS X was a supported platform (it wasn't) or if anybody was using FreeBSD for this project....didn't get any search hits. And, it seemed unlikely that the hardware part would work through the Linux emulation on FreeBSD (especially the Fedora 10 base, and I'm not sure what the process for converting to the CentOS 6 base is that wouldn't break all the things I'm using Linux emulation for....though it is mostly other BOINC projects). Though doing the search now, I see that a couple days ago the question got raised....with not much luck on having it find the detector ... but ending with a link to a FreeBSD version of the application.... Though since I have a Linux system at my desk (where its primary purpose is to run VBoxHeadless containing Windows 7, for those occasions where I need to use vSphere Center...and passing the time doing BOINC)...I'll just go with running the new detector, should it ever arrive, on that.

06/24/13

So, this morning I was wondering why my nagios was still warning about something that it shouldn't be. I was positive I had changed the warning threshold above where it was. I do an 'svn status' on my work dir: nothing uncommitted. I do an 'svn up' on the cfengine server: no updates. I drill down to the file and it's correct. (Perhaps I need an alias on this side as well...though I usually only use 'cdn' for where my svn work dir is, or on the nagios server....that's because at work this alias is used in association with nagios as well. Work nagios is not yet managed by cfengine, but I was considering it for the new nagios server that I'm trying to set up between fires and stuff at work....except the fact that we're still running cfengine2 is really starting to become a problem......though I wonder if cfengine2 could do it, if it weren't hampered by how the former admin had implemented things....The work cfengine made a mess when using it to set up a new system, because of weird cross interactions between 'promises' and the fact that the promises weren't written in the same sequence they run in; things that probably weren't a problem when cfengine was originally deployed to promise that nothing ever changes....)

Anyways....I finally hunt through the -v output... which is now not much different than debug noise, and nothing like what verbose used to be in 3.4.4.....no more searching for 'E nagios' to find where the start of "BUNDLE nagios" is in the output, and then finding the specific file promise..... What a mess. It's like they don't want you to know what's going wrong....

Turns out I missed some more uses of 'recurse' from cfengine_stdlib.cf, where xdev=true is busted.

It was one of three bugs that I had logged for cfengine 3....#2983. Which was almost immediately flagged as a duplicate of #2965 (3.5.0rc fails to recursively copy files with strange message)...and this morning at 5:03am, my bug was closed as that it indeed seems to be fixed for 3.5.1 (soon...).

Wonder what the definition of soon is....I had a previous problem where cfengine was complaining about a bad regex....when the default for insert_lines is that they are 'literal' strings. Which was making it hard to use cfengine 3.4.x to make edits to my crontab files. After putting up with it for a couple of months, I finally visited the bug tracker and found that it had already been reported and fixed for the next version. But, months and months went by and no new version appeared. Though it does seem to be fixed in 3.5.0.
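The sort of crontab edit that kept tripping those bogus regex complaints is just a stdlib insert_lines promise like this; the file path and cron line are illustrative:

```
bundle agent root_crontab
{
files:
    # insert_lines treats these as literal strings by default,
    # so no regex escaping should be needed:
    "/var/cron/tabs/root"
        edit_line => insert_lines("0 3 * * * /usr/local/sbin/nightly.sh");
}
```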

Anyways, reading #2965 was interesting.... aside from where the dev? spots another bug in the same code and has that pulled in as part of the bug. Also, it was reported against the RC, and made it into the release. Though I had reported a bug against an ubuntu 12.04 beta release....and it persisted into the release version, where they debated fixing it because apparently LTS means don't update anything after its release...(though I thought they had said things like firefox would stay current instead of staying fixed at the version at time of release now...) Plus it seemed I had to keep reminding them that my bug was reported before release, so that should be reason enough to release the fix. I'm pretty sure they did, but I hardly use that ubuntu desktop anymore (or any ubuntu desktop....though I did fire up my laptop yesterday, because there was a new VirtualBox and I hadn't updated the XP VM on there in quite some time....though I've been thinking about whether a FreeBSD laptop is feasible.)

Someone asks whether they have a unit test for this bug. Where the response is that a unit test would need a running server, which they don't have (yet)...how long has cfengine been around for them to not have that? Sure wouldn't want to be somebody who's paying for this.

So does that mean nothing is being tested, and that nobody involved in development uses cfengine? Because this was the kind of bug that pretty much anybody that uses cfengine3 would run into. Considering I only have the 3 systems (zen - policyserver, cbox, dbox) at the moment....

Perhaps I'm jaded by having worked for an Enterprise software company, where we did full builds every week, with full runs of automated and manual QA testing. And, having to create unit tests for less-than-trivial bugs as part of the fix/review-before-closure process. Though from what I'm hearing about Chef...it's worse....

Still haven't decided what I'm going to do with my Linux systems....migrating the files from Orac, if I were to turn it into FreeBSD, is the stumbling block; plus I would lose certain services...some of which might not really be an issue, since it's probably time I make the leap to blu-ray. And, either I get another Roku or figure out how to incorporate the smart side of my TV into my life (probably time to finally upgrade my receiver....purchased October 27th, 1999)....

06/21/13

After I got my new glasses following my eye exam back on February 8th, things were great...but gradually they started getting bad again, which was somewhat distressing. Especially since I had also gotten that call about the results of the Retinal Thickness Analyzer (?) showing some thinning, and that I needed to plan a follow-up test in about 6 months. At first the vision would start getting bad as I was heading home from work, and it was subtle...though it would get pretty bad about mid evening.

But, then it was starting to get pretty bad around mid afternoon at work, so I decided I needed to get my eyes rechecked. In the time between making the appointment and the appointment today, I came to feel that my vision starts getting bad before I even leave for work; sometimes I wonder if I should even bother (though I end up going in anyways and struggle until quitting time...)

There's a slight change in the right eye, and even less of a change in the left eye....reading power another step up....

Seems a really strange jump and I'm not really sure how to explain it. In what feels like an unusual rarity in my life, there hadn't been any medication changes from when I started having problems last fall to now, and no new diagnosis....though the latest problem that I've yet to receive a diagnosis for started mid-October....and it was in November that my vision started getting bad. (Though with LISA and my mom's 70th birthday coming up, and thinking that I wanted to wait until I had FSA money...I had put off getting the eye exam. Plus it wasn't as bad as it has been lately...)

As to why the shift...they had done a cornea surface mapping, as they have done in all previous exams....and in comparison...my cornea surface was much more pointed in February, while my current surface is similar to how it was back in November 2011. Not really sure how that came to be, though. But, the Optometrist said he would order up new lenses and that there would be no charge. Hopefully, they'll be able to get the new lenses pre-cut for my glasses, as these are now my only pair....not having kept any of my old pairs this time around, given the big change. (And not yet having gotten around to getting a second pair, because around when I was starting to think that I should get them...was when my vision started getting bad....)

Not sure where I'm at on getting a second pair now....will probably shoot for getting them before the upcoming NN Conference for sure.

Hopefully its not some kind of periodic variation, possibly similar to the variation in my sleep....which appears to still exist even with my Narcolepsy being pretty well controlled with medication now.

Now, instead of subjecting some poor random forum to a long rambling thought, I will try to consolidate those things into this blog, where they can be more easily ignored while professing to be collected thoughts from my mind.