lazytrap (http://www.lazytrap.com/trapped)

Octoprint Autostart Script
Fri, 08 Feb 2019 (http://www.lazytrap.com/trapped/?p=485)

My goal: add a startup script that adjusts my camera's settings on reboot. I own a Logitech C920 and use it to monitor my prints on an Ender 3 Pro. The C920 has several controls that can be adjusted; the primary one I wanted to change was Auto Focus, which I wanted turned off.

Note: I originally tried restoring the camera control settings using the uvcdynctrl --load=file command, but something about loading the settings this way upon reboot fails to work completely. I found it more reliable to set each control's value individually (as seen below).

The steps:
1. SSH to your Pi running Octoprint.
2. Create a startup script in /etc/init.d; I named mine "camera".
3. Set the script to executable.
4. Edit the script and add uvcdynctrl commands to adjust the camera.
5. Require the local filesystem, networking, and octoprint to start before executing.
6. Add the script to the startup services using update-rc.d.
7. Reboot.
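Steps 2 through 6 can be sketched roughly as follows. The LSB header and the uvcdynctrl control names/values are illustrative assumptions; dial in your own. The script is created in the working directory here so the sequence can be shown end to end, but in practice it lives at /etc/init.d/camera:

```shell
# Create the startup script (the control values below are examples, not my real ones)
cat > camera <<'EOF'
#!/bin/sh
### BEGIN INIT INFO
# Provides:          camera
# Required-Start:    $local_fs $network octoprint
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Restore C920 camera controls on boot
### END INIT INFO
uvcdynctrl -s 'Focus, Auto' 0
uvcdynctrl -s 'Focus (absolute)' 0
EOF

chmod +x camera                       # step 3: make it executable
# sudo mv camera /etc/init.d/camera   # step 2: put it in init.d
# sudo update-rc.d camera defaults    # step 6: register it with the startup services
cat camera
```

The Required-Start line is what handles step 5: it keeps the script from running until the local filesystem, networking, and the octoprint service are up.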

Note: When you get around to adjusting your controls, watch your stream to see the results and dial it in how you desire. Write down your values; you can then use them in an autostart script so the settings are restored upon reboot of the system. See here: Octoprint Autostart Script

CUCM Phone Device Association
http://www.lazytrap.com/trapped/?p=461

Figuring that out isn't too difficult if you just google some examples.

However, authenticating can be a trick, as can figuring out how to authenticate correctly to any phone (assuming you have the rights to ..eh.. give yourself those rights).

Phone "Device Association" is single-user for end users, so you don't do this through your admin or CUCM account. You need to set up an Application User in CUCM, and from there you can select any device for association without messing anything up for the user or his/her phone.

Maybe I'll come back and put some screens/examples of the process here, but this is really just a note-to-self.

Add AD module in Powershell for Windows 7 and Windows 10
Mon, 23 Jul 2018 (http://www.lazytrap.com/trapped/?p=457)

Just some links and walk-throughs I've needed to use several times over the years and always forget.

Site restoration and move
Mon, 21 May 2018 (http://www.lazytrap.com/trapped/?p=399)

The site has slowly been restored from archive.org mirrors and some old DB backups. All file links should still work.

This place is really just a scratch pad for myself: some experiments, art, work stuff, and a place to jot down my solutions to issues I didn't find elsewhere. But many posts became a regularly used resource for people, so I've restored what I could manage & tolerate. The address has been moved from the root directory and now resides @ http://www.lazytrap.com/trapped. I plan to re-use the home directory for something else.

If you found your way here looking for obscure answers to random things I helped with, for things I created, or you're just exploring – welcome (back) to my random corner.

Clearing the Offline cache / Windows CSC directory
Tue, 12 Sep 2017 (http://www.lazytrap.com/trapped/?p=356)

The offline cache/sync can become corrupted or otherwise a horrible mess. Sometimes starting over is the best or only option.

You won't be able to use tab completion in CMD to browse to these directories; you just need to type them out. If you are unsure which directories are in use or where the data is, I suggest downloading the free "TreeSize Free" application, running it as admin, and doing a scan on C:\.

robocopy with /mir /copyall (run elevated) retains ownership and security permissions.. so this is good. (/mir alone only copies data, attributes, and timestamps.)

The csc\vx.x.x\cache directory name may vary.
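A Windows CMD sketch of the copy-out-and-reset sequence. The paths and the CSC version folder are placeholders for your system; FormatDatabase is the registry value that tells Windows to rebuild the offline files database on the next boot:

```batch
:: Run from an elevated CMD prompt. Adjust paths to your environment.
:: 1) Copy the cached files out, keeping security/ownership (/copyall) and using backup mode (/b)
robocopy "C:\Windows\CSC\v2.0.6\namespace" "D:\csc-backup" /mir /copyall /b /r:1 /w:1

:: 2) Flag the offline files database to be re-created (emptied) at next boot
reg add "HKLM\SYSTEM\CurrentControlSet\Services\CSC\Parameters" /v FormatDatabase /t REG_DWORD /d 1 /f

:: 3) Reboot
shutdown /r /t 0
```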

Performed on Win10, Win7

Status Solutions's Sara System + IPSession's IPCelerate – delayed phone/message notifications
Wed, 10 May 2017 (http://www.lazytrap.com/trapped/?p=382)

I currently work for a company that runs Senior Living, MemCare, etc. type facilities. At some of these locations, Sara is used for pendant alerts, roam tags, wander guard, door, pull chain, etc. type alerts and notifications for the residents. One of the main features is a phone messaging system that texts the alerts to staff phones when a pendant is triggered, etc.

Recently these notifications were randomly getting delayed or dropped. After looking everything over, I noticed the IPCelerate transaction log was almost 2 GB, and clearly the DB had never been maintained since its installation (almost 5 years). I was getting ready to go through an annoying (and potentially software-breaking) routine of installing SP1 for 2008, updating .NET, etc., in order to get SSMS installed on a non-SP1 2008 R2 server. However, with some deep googling and some archive.org help, I found this guy:

The only guy on the internet who basically had the same issue was good enough to document it. I'm going to update it here; it's basically correct, but there are some things to fix so someone less experienced can still make it through.

Keep in mind, these notes are about legacy installations. My version of Sara is 4.4, and this information should apply all the way up to 4.7, maybe 4.8 and up, but I do not have that environment so cannot confirm. Our IPCelerate version is up to date (as is JTAPI) with our CUCM version (10.5(2.x)). One difference between my situation and "webmaxtor"'s (I believe) is that we have separate servers: one running IPCelerate and another running Sara. IPCelerate is located @ our DC that has our CUCM server, and the Sara server is installed @ the location.

I’m going to replicate his information below, with my notes added in red.

Before continuing!
* PERFORM A WINDOWS DISK DE-FRAGMENTATION FIRST ON THE DRIVE CONTAINING YOUR DB FILES.
* IF YOU HAVE A SEPARATE DISK FOR YOUR DB, GO AHEAD AND DEFRAG YOUR SYSTEM DRIVE AS WELL.
* WHY? Because if you're in this state with a Sara or IPCelerate system, it's because there has been no maintenance plan on these servers. You want, nay, NEED, those transaction log files externally de-fragmented before you go and try to shrink them.

This is likely dated material, as I haven't had the chance to work with IPCelerate-based solutions in years, but the following just helped me out of a jam. The issue was that communication between a Status Solutions TAP paging interface, the IPCelerate IPSession server, and ultimately Cisco 7925 handsets was delayed.

1. Open up Windows Services and stop the following services:
Apache Tomcat Tomcat5
Nipa
Nipads

Before you can shrink a transaction log, a backup HAS to be performed. This here is a 'fake' backup, as you're directing the output to a null device.
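The replicated write-up jumps from step 1 to step 4, so here is a sketch of the missing steps 2–3 as I'd expect them: open a SQL command line and run the throw-away backup. The database name NIPA comes from the output shown in step 4; the sqlcmd instance/auth flags are assumptions for your environment:

```
C:\> sqlcmd -S localhost -E
1> BACKUP LOG NIPA TO DISK = 'NUL'
2> go
```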

Note: This may take a few minutes before you receive a message similar to this:
Processed 33136 pages for database ‘NIPA’, file ‘NIPA_log’ on file 1.
BACKUP LOG successfully processed 33136 pages in 3.697 seconds (73.423 MB/sec).

4. Once you see this message, type in the following:

1> dbcc loginfo(nipa)
2> go
This will display a long list, and the far left column is where you will either see 0 or 2.
Example of 2: 2 5570560 224526336 200 0 64 1960000000961100001

Example of 0: 0 5570560 224526336 200 0 64 1960000000961100001

"Where you will either see 0 or 2" is just a description; this information isn't needed for a later step. Also, this step accesses the log, which can later prevent you from shrinking it with "all transaction files are in use" type errors. So this step, while useful (it shows how many virtual logs are contained in the single transaction log file, and verifies it is actually there), is out of order when you hit the "repeat" stage. I'll clarify further ahead.

5. After running this, wait about 5 minutes and then type exit to log out of SQL.

Waiting won't really help, so just log out and go to #6. People say to wait when dealing with this because sometimes loginfo is still running in the background and you can't perform the shrink, but just keep going. No need to wait.

6. Go to Windows Services and stop the MSSQLSERVER service.

7. Wait about 2 minutes and then start the MSSQLSERVER service.

8. Repeat steps 2 through 4.

Don't repeat steps 2 through 4; repeat only steps 2 and 3. Running another loginfo will just prevent you from performing the shrink. So go through step 3 and perform the "fake" backup, then move on to step 9.
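Steps 9–11 (the shrink itself and its verification) are also missing from the replica. A hedged sketch of what they'd look like: the logical log file name NIPA_log comes from the backup output in step 3, while the 1 MB target size is an assumption you can adjust. DBCC SQLPERF(LOGSPACE) then reports log size and percent used, so you can confirm the shrink took place:

```
1> USE NIPA
2> go
1> DBCC SHRINKFILE (NIPA_log, 1)
2> go
1> DBCC SQLPERF(LOGSPACE)
2> go
```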

12. Start the following services in this order:
Apache Tomcat Tomcat5
Nipa
Nipads

Summary:
1. Open the SQL command line.
2. Create a "fake backup" to the null device.
3. List the entries in the transaction file using loginfo. Good to keep a record of.
4. Stop/restart the SQL Server service (aka MSSQLSERVER).
5. Repeat steps 2 & 3 (just for verification and to see the difference in processed time/pages).
6. After repeating the fake backup, shrink the transaction log (step 9).
7. Verify the shrink took place.
8. Exit & restart your services in the appropriate order.

Additional notes:
If this is a VM, take a snapshot of your server so you have something to go back to if you mess up.
I rebooted my server after all was said and done.
Check your event viewer for any "new" MSSQL errors. There shouldn't be any.
Check IPCelerate to make sure the services are started. Easiest to just log in to the webpage as admin and check connections.
Etc. Due diligence.

Final notes: I will say I had very little expectation that this would solve our issues (at least entirely). Also, our transaction log was just over 1.4 GB, which isn't insane or unheard of. However, performing this DB maintenance totally did fix the delays we were experiencing, other issues aside.

Also, if you've ever.. EVER.. googled for information regarding the shrinking of a DB transaction log (or a DB, for that matter), then you'll know there are a shit ton of DBA know-it-alls who act like using shrink for anything, ever, is a horrible idea that will make your life hell. I'm here to say that is BULLLLLLLLLLLLLLLLSHIT. It's 100% OK to shrink your transaction log files for small DB systems like this, given you've performed other basic maintenance like disk de-fragmentation. There is very little that can go wrong with shrinking a transaction log file, and if something did go wrong, it's easy to fix. Besides… are you REALLY telling me you're going to migrate and re-build/re-organize your entire DB that's associated with the log… just to clean it up? Are these people out of their minds?

DBs, on the other hand, no matter the size or utilization, I would never shrink… ever… the idea of it makes me wanna shit my pants.

Backups missing during archiving (Storage Manager / ProtectArchiveBackups)
http://www.lazytrap.com/trapped/?p=385

Explanation:

The archiving process creates a list of backups that are to be archived. It can happen that, while the archive system is processing these backups, one of your scheduled backups occurs. If a backup occurs, the old backup is removed and a new one is created. The running archiver will then be unable to find the backup to archive, since its name/location has changed since the initial list was created.

Solution:

The proper solution would be to analyze your schedules and clean them up to prevent this type of overlap.

An additional or alternative solution, if that is unfeasible for whatever reason, is to change Storage Manager / ProtectArchiveBackups from False to True. This setting tells the backup system to retain the backups that are currently being archived.

You could create additional problems by making this configuration change, so monitor it through the week. For example, if you are toeing the line on your storage, limits could now be exceeded because of the extra space used by the retained backup while a new backup and archive run; this could eventually cause backups to fail until corrected. So, again, the best course of action is to correct your scheduling.

_deviceImage-0.iso was not found while deploying OVF (or OVA)
Thu, 12 May 2016 (http://www.lazytrap.com/trapped/?p=403)

A minor issue, but it slowed me down. Seems so simple after the fact. Also, this is sort of a follow-up to the unsupported hardware errors previously covered.

Recently while deploying an OVA that I created and tested on another host (ESXi 5.5), I was presented with the error:

File ds:///vmfs/volumes/52c252-0b-bf765663-bffd-0026b95-d10a0/_deviceImage-0.iso was not found

Expecting that I had left an ISO mounted (even though I had NOT checked the option to "include image files attached to floppy and CD/DVD devices in the OVF package"), I extracted the OVF from the OVA using 7-Zip, opened it up, and took a look. Sure enough, the CD-ROM still had vmware.cdrom.iso defined in it. No idea how it occurred; VMware Tools had been installed for days and the entire system, including the host, had been restarted, etc. Anyway, a quick search showed me to replace the entry with vmware.cdrom.remotepassthrough. Afterwards I had to update the hash in the manifest file, as I previously explained here: "the ovf package requires unsupported hardware".
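The fix can be sketched end to end, including the manifest hash update. The file names appliance.ovf/appliance.mf are hypothetical stand-ins (in practice you'd first extract them from the OVA, e.g. with 7z x appliance.ova), and a single line stands in for the full OVF here so the edit and re-hash can be shown:

```shell
# Stand-in for the OVF with the stale ISO-backed CD-ROM entry
printf '<rasd:ResourceSubType>vmware.cdrom.iso</rasd:ResourceSubType>\n' > appliance.ovf

# Swap the ISO-backed device type for remote passthrough
sed -i 's/vmware.cdrom.iso/vmware.cdrom.remotepassthrough/' appliance.ovf

# Recompute the OVF's SHA1 and rewrite the manifest entry so the package validates again
echo "SHA1(appliance.ovf)= $(sha1sum appliance.ovf | cut -d' ' -f1)" > appliance.mf

cat appliance.mf
```

After rewriting the manifest, re-pack the OVF/VMDK/MF into an OVA (or deploy the OVF directly) and the _deviceImage-0.iso error goes away.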