Guillaume Boudreau
https://www.pommepause.com/
Feed generated: Fri, 13 Oct 2017 15:06:35 GMT

Recording with HDHomeRun without Plex
https://www.pommepause.com/2017/08/Recording-with-HDHomeRun-without-Plex/
Fri, 25 Aug 2017 23:20:09 GMT
I’ve been using the DVR feature of Plex since the first beta version, and it replaced an aging Mac+EyeTV setup I was using before. I bought an HDHomeRun device, and I’ve been pretty happy with the setup since the beginning.

The main issue I have with the DVR feature is the EPG. For one of the channels I record a lot from (Télé-Québec), the EPG data is very bad. It’s missing a lot of shows (it has holes), and shows will sometimes start at a different time in the EPG than in reality, making me miss a few minutes at the beginning or the end of the shows I record, or even complete episodes. I reported the issues to Plex, but there haven’t been any improvements since.

So I went looking for a way to schedule a recording “manually”, i.e. specify a channel and start/stop times, and have Plex record that. I quickly realized it was just not possible. So I started looking outside Plex for a solution.

It appears recording from an HDHomeRun device is quite easy. One simply needs to use curl from the command line (or wget, or anything that can download a file from a URL!), indicating the channel to use, and optionally a duration and transcoding settings, and boom, you receive a transport stream (.ts) from the device.

where X.Y is the channel to record, Z is the duration in seconds, and PROFILE is one of the supported transcoding profiles of your device (native, mobile, heavy, etc.)
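The URL in question follows the HDHomeRun HTTP streaming convention (port 5004, /auto/v&lt;channel&gt;); here is a sketch of how it is assembled (the device IP and channel values are placeholders):

```python
def hdhomerun_url(device_ip, channel, duration=None, profile=None):
    """Build an HDHomeRun streaming URL: X.Y = channel,
    Z = duration in seconds, PROFILE = transcode profile."""
    url = f"http://{device_ip}:5004/auto/v{channel}"
    params = []
    if duration is not None:
        params.append(f"duration={duration}")
    if profile is not None:
        params.append(f"transcode={profile}")
    if params:
        url += "?" + "&".join(params)
    return url

# e.g. curl -o recording.ts "<this URL>"
print(hdhomerun_url("192.168.1.100", "2.1", duration=3600, profile="mobile"))
```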

Being a programmer, and wanting something I could easily use long-term, I took a few hours to throw together a PHP-based app that allows one to schedule recordings, and have them placed wherever they want. It works by specifying the schedule in a .txt file, and can also be used through a very simple web UI, if you have a web server that can serve .php files.

---

Fixing Plex issue with Watch Later YouTube videos
https://www.pommepause.com/2017/07/Fixing-plex-issue-with-watch-later-youtube-videos/
Tue, 18 Jul 2017 22:51:09 GMT
For quite some time now, Plex has been plagued with a bug preventing Google Chrome users from watching YouTube videos they saved into their Watch Later queue. Being a fan of Chrome (I’m not using anything else, really), Plex, and their Watch Later queue, which I populate using both IFTTT (widget: IF new RSS item THEN send email to [my_plexit_email_address]) and their Plex It! bookmarklet, I decided to debug the issue further. Here’s how I found the problem, and a workaround that all end-users can use until the Plex team releases a fix.

Step 1: Check the Plex Media Server (PMS) logs, to see if any errors could point out the problem.

What I learned: The web app sends a GET to /system/proxy, then a POST to /.../YouTube/PlayVideo, then nothing for 20 seconds, then: We didn't receive any data from 127.0.0.1:some_random_port in time, dropping connection. I also found a bunch of data/prefs/cache files with YouTube in their name; deleting all of them didn’t help.

Step 2: Use the Chrome Developer Tools to see HTTP requests being sent, and what might be wrong in there.

What I learned: The GET /system/proxy was returning a response in a few ms, but the POST /.../YouTube/PlayVideo was left hanging; the server was not returning any response to that HTTP request.

Step 3: Use Charles Proxy to compare a working POST /.../YouTube/PlayVideo request, as sent by Safari, versus a non-working request, sent by Chrome.

What I learned: Safari sends HTML text in the body of the POST, while Chrome sends binary data, which I believe is the same HTML, but gzipped. That wouldn’t be an issue for HTTP responses, since all web clients can decompress gzipped content, but the web app sometimes sending text and sometimes sending gzipped data in a request body is most probably where the problem comes from. I doubt PMS expects a gzipped request body, which is why it works when Safari sends text, and fails when Chrome sends gzipped data.

Step 4: Find out where the body of that POST request comes from.

What I learned: The only HTTP request made before trying to play the video is GET /system/proxy. And evidently, the response of that request is what is getting sent to POST /.../YouTube/PlayVideo.

Step 5: Go back to Charles, and again compare a working and a non-working request, this time for GET /system/proxy.

What I learned: The only difference in the HTTP requests was the browser identifiers (User-Agent and what-not), and a very small addition to the Accept-Encoding HTTP header sent by Chrome: gzip, deflate, br, while Safari was only sending gzip, deflate.

Step 6: Use cURL on the command line to try to send a working and a non-working request.

What I learned: After a few tries, I noticed that when sending gzip, deflate, br in the Accept-Encoding header, the returned response was NOT decompressed by cURL. In reality, it was decompressed, but the server was sending back double-compressed data. That was quite evident from the X-Plex-Content-Original-Length header returned: it was almost the same size as X-Plex-Content-Compressed-Length, which was not the case when Accept-Encoding: gzip, deflate was sent instead.

So I found the culprit: Accept-Encoding: br, whatever it is supposed to do, causes GET /system/proxy to return double-compressed data, which the web app JS code received decompressed only once, and thus sent still-compressed to the next HTTP request, POST /.../YouTube/PlayVideo.
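The shape of this bug can be sketched with stdlib gzip (the real server mixed gzip and Brotli; the encodings here are stand-ins for illustration):

```python
import gzip

payload = b"<html>proxied response body</html>"

# The server compresses the response twice (the bug), but the HTTP client
# decompresses it only once, per the Content-Encoding header...
double_compressed = gzip.compress(gzip.compress(payload))
once_decompressed = gzip.decompress(double_compressed)

# ...so what reaches the web app is still compressed data:
print(once_decompressed == payload)                   # False
print(gzip.decompress(once_decompressed) == payload)  # True: a second pass was needed
```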

Final step: Confirm the fix by removing br from the Accept-Encoding header in the GET /system/proxy request, using a Chrome extension that allows me to modify the HTTP headers sent for any HTTP request. I tried a few extensions, and was happy with the options given to me by the Requestly extension. I configured a rule to modify the Accept-Encoding header for requests with path = /system/proxy, like so:

Final Final step: Let the Plex team know what the problem is, and post the workaround for end-users that might not be patient enough.

P.S. Looks like Accept-Encoding: br, AKA Brotli, is a new compression method enabled in Chrome starting in v50. “Advantages of Brotli over gzip: significantly better compression density, comparable decompression speed” (Ref). This would explain why I was never able to gunzip the data Chrome received in the response to GET /system/proxy; it was compressed using a very funkily-named compression method!

---

Replacing mint.com with my own (automated) webapp
https://www.pommepause.com/2017/01/Replacing-mint-com-with-my-own-automated-webapp/
Sun, 29 Jan 2017 23:16:20 GMT
I’ve been using mint.com since 2010. I was happy with the service; it worked pretty well, most of the time. But recently, I decided it just wasn’t worth the risks. So I unlinked all my banking accounts, and then deleted my mint account.

My banking credentials (usernames, passwords, security question answers) are stored only on my (Mac) computer, in Keychain. (Bonus: my hard drive is encrypted, and I use a pretty long passphrase as my encryption/login/Keychain password.)

Daily (at 10am), a Python script runs: it uses those credentials to connect to all my institutions’ websites, and downloads either OFX- or CSV-formatted exports of my recent transactions.

Those transactions are stored in a local SQLite database, and sent to a simple remote web service. (Buzzwords: JSON, API, HTTPS, Bearer authentication token.)
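The upload step might look something like this (the endpoint URL, token, and payload shape are made up for illustration; only the JSON-over-HTTPS-with-Bearer-token pattern comes from the text):

```python
import json
import urllib.request

def build_upload_request(url, token, transactions):
    """Build an HTTPS POST carrying transactions as JSON,
    authenticated with a Bearer token."""
    body = json.dumps({"transactions": transactions}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_upload_request(
    "https://example.com/api/transactions",  # hypothetical endpoint
    "my-secret-token",                        # hypothetical token
    [{"date": "2017-01-29", "amount": -42.50, "name": "GROCERY STORE"}],
)
print(req.get_header("Authorization"))
```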

Newly inserted transactions are post-processed by matching them against a set of rules. This post-processing renames, categorizes and tags known transactions, pretty much the same way mint did. Bonus: I use regular expressions to find known transactions, can rename them using RE capture groups, and can add specific tags to some transactions, none of which mint allowed me to do.
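The rule-based post-processing can be sketched like this (the rules and transaction fields are invented examples; the capture-group renaming is the part described above):

```python
import re

# Each rule: regex pattern, rename template (may use capture-group refs),
# category, and extra tags. These rules are made up for illustration.
RULES = [
    (r"^AMZN Mktp.*", "Amazon", "Shopping", ["online"]),
    (r"^VIREMENT INTERNET - (\w+)", r"Transfer to \1", "Transfers", []),
]

def post_process(transaction):
    """Rename, categorize and tag a transaction using the first matching rule."""
    for pattern, rename, category, tags in RULES:
        match = re.match(pattern, transaction["name"])
        if match:
            transaction["name"] = match.expand(rename)
            transaction["category"] = category
            transaction["tags"] = tags
            break
    return transaction

t = post_process({"name": "VIREMENT INTERNET - Epargne", "category": None, "tags": []})
print(t["name"], "/", t["category"])
```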

Since I already had a lot of data in mint, I imported all of it into my own solution. I re-used most of the tags and categories I already had in mint, but cleaned them up a little. It took me a couple of days just to go through all of those, and make sure they were correctly categorized, tagged, and named. It didn’t help that I had my Paypal account in mint; when the Paypal transactions were not duplicates of other transactions (from my credit card or checking accounts), I had a hard time figuring out whether they used my USD or CAD balance. I even had some transactions that were partially funded from my Paypal balance, and partially paid using my credit card. Those were a mess to sort!

I created a simple web frontend that allows me to browse those transactions, grouped by month. For each month, it lists all the categories of expenses and incomes, and displays those in both a table and a pie chart. Rows (and slices of pies!) can be clicked to see all the transactions that are in the specific category, for the selected month. At that point, I can manually edit each of those transactions, for example to add a comment, or change the tags, category or displayed name of the transaction.

Each year, during tax season, I had to go into mint and look for specific transactions, in order to find all the expenses and income to include in our tax return forms. I now have a simple web page that lists all those transactions in a simple HTML table (I find them by category and/or tags). The result can quite easily be copy-pasted into Excel, in order to validate the transactions, and sum each category for easy inclusion in our tax return forms.

Financial institutions support is pretty bare right now… But help me help you by implementing new institutions!

Some institutions have easy-to-use OFX URLs, which greatly simplify the downloading of transactions; I just used the ofxclient project to fetch those. (For me, only Tangerine was supported by ofxclient.) For other institutions, we need to fake browser access to download OFX or tab-separated data files.

---

Nissan Connect Updated API
https://www.pommepause.com/2016/03/nissan-connect-updated-api/
Fri, 18 Mar 2016 03:55:23 GMT
So Nissan published new versions of their mobile apps for the Nissan LEAF, with upgraded security. A-Good-Thing™.

But in doing so, they added an unnecessary level of complexity to their API: the passwords sent by the mobile apps are now encrypted!

I battled most of the evening trying to find how they encrypt the passwords, to be able to reproduce it, and I was finally able to! [1]

They encrypt the passwords using “Blowfish/ECB/PKCS5Padding”, and the key is a string returned by the API before the login API endpoint is used. That key seems to always be the same for now, but since they return it in an API response, I’m pretty sure the app will use that, and we should too.

I wrote proofs of concept in Java, PHP and JavaScript, and a web service that takes a password as a parameter, and outputs the encrypted password expected by the new Nissan Connect API.

Hooray!

I have now updated my Nissan Connect PHP Class and LEAF one-click mini site with that knowledge.

[1] So how was I able to find that, you ask? Well, it wasn’t easy… I tried to enter many different passwords in the app, and checked what was actually sent to the server by the app, for each password. I noticed the password sent was:

Base-64 encoded;

Variable length, but always a multiple of 8 bytes (8 bytes for passwords of fewer than 8 characters, 16 bytes for passwords of 8 to 15 characters, etc.);

Always encrypted the same way, for all users.

So everything pointed to encryption, using padding, and a constant key (and salt). I tried many combinations manually, but couldn’t find an algorithm that resulted in the same encrypted string.
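That length pattern is exactly what PKCS5 padding produces on an 8-byte block cipher like Blowfish; a quick sketch of the padding alone (the cipher step is omitted here):

```python
def pkcs5_pad(data, block_size=8):
    """PKCS5-pad to a multiple of block_size; a full block of padding
    is added when the input is already aligned."""
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len] * pad_len)

# Padded length always rounds up to the next multiple of 8:
for pw in [b"short", b"exactly8", b"a longer password"]:
    print(len(pw), "->", len(pkcs5_pad(pw)))
```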

After about two hours, I had an idea: take the APK from my rooted Android phone, and look inside it to see if I could find what they used.

I found the login fragment, which referred to a Utilities class that contained all the encryption code. From there, I struggled for a bit, because that class implements both Blowfish and AES encryption, and I tried AES first, and couldn’t make it work. Then I tried Blowfish, and it worked!

---

The Case of the Dying Hard Drive That Flipped Bits
https://www.pommepause.com/2016/02/the-case-of-the-dying-hard-drive-that-flipped-bits/
Mon, 29 Feb 2016 15:23:45 GMT
The symptoms were hard to notice at first: downloaded files would sometimes be corrupted, especially large files; attempts to fix those downloads (using par2) would more often than not fail. Then it became bizarre; calculating the checksum of those files would sometimes, but not always, result in different values.

The last modified date wasn’t changing, so the file I was testing with was not changing either. So why was its MD5 checksum sometimes different? Making it harder to debug was the fact that calculating a specific file’s MD5 over and over always returned the same result. But if I waited a couple of minutes before trying again, then the result would be different!

Bad memory, maybe..? memtest said my four RAM sticks were working fine. So maybe the hard drive or its connection was the issue..? I connected the hard drive using another cable, to another SATA port (on a different controller), but the problem persisted.

Then I had an idea: maybe the problem was happening all the time, but the OS disk cache was preventing me from noticing it, whenever the data was served from cache instead of being read from the disk… So I found how to manually flush the disk cache on Linux (sync ; sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches') and re-ran my MD5 calculation test. Sure enough, the file’s checksum was now wrong almost all the time!

Now that I was able to easily reproduce the problem, I had to find what it was… So I created yet-another-PHP-script [1] that would: 1) read the file into an array of bytes, 2) flush the disk cache, 3) re-read the file into another array, 4) compare both arrays byte-by-byte. The results were quite astonishing: each time a byte differed between the two arrays, one of the bytes was always exactly 0x10 (decimal 16) lower than the other:

Even better, the read errors never occurred at the same position, and they were always 0x10 lower than the correct value.
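The byte-by-byte comparison (step 4 of the script) can be sketched like this (the PHP original is paraphrased in Python; the two "reads" below are synthetic):

```python
def diff_reads(read1, read2):
    """Compare two reads of the same file byte-by-byte and report
    (position, byte_a, byte_b) for every mismatch."""
    return [
        (i, a, b)
        for i, (a, b) in enumerate(zip(read1, read2))
        if a != b
    ]

# Simulated reads: one byte came back exactly 0x10 lower the second time.
good = bytes([0x41, 0x42, 0x43, 0x44])
bad  = bytes([0x41, 0x32, 0x43, 0x44])
for pos, a, b in diff_reads(good, bad):
    print(f"offset {pos}: {a:#04x} vs {b:#04x} (delta {a - b:#04x})")
```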

So now I had a plan: copy the data from that evil drive onto another, working drive, fix it [2] (knowing it was possible, from what I had just discovered), and throw that drive as far as possible from my home server, hopefully soon enough that its bad karma would not infect my other hard drives!

That plan could be summarized like this:

1. Copy the data from the evil drive onto another drive; let’s call it the savior drive.

sudo rsync -av /mnt/evil_drive/* /mnt/savior_drive/

2. Re-execute the rsync, but this time, just to calculate the checksum of the source and target files, and log each file for which the checksums differ.

3. Fix the files on the savior drive by comparing the bytes on there with the bytes on the evil drive, and keeping the largest of the two, when they mismatch.
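The repair in step 3 relies on the observation that corrupted bytes were always exactly 0x10 too low, so keeping the larger byte wherever the two copies disagree restores the file; a sketch (synthetic data, not the author's actual fix_files.php):

```python
def fix_bytes(savior, evil):
    """Merge two copies of a file: where they disagree, keep the larger
    byte, since corrupted bytes were always exactly 0x10 too low."""
    return bytes(max(a, b) for a, b in zip(savior, evil))

savior = bytes([0x41, 0x32, 0x43])  # byte 1 corrupted (0x42 - 0x10)
evil   = bytes([0x41, 0x42, 0x33])  # byte 2 corrupted (0x43 - 0x10)
print(fix_bytes(savior, evil).hex())
```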

sudo php ${HOME}/fix_files.php ${HOME}/files_to_check.txt

A couple of hours later, I had a copy of all the data from that bad, (BAD!) drive somewhere else, and I was (pretty) sure that all that data was OK.

Why did this happen? What caused it? When did it start? I will probably never know. But what I learned is that hard drives can (and will) die in very unusual ways, and for the few souls lucky enough to notice a pattern in the errors that occur, it is possible to not lose any of the data stored on those drives.

---

Buying in USD from Canada
https://www.pommepause.com/2015/12/buying-in-usd-from-canada/
Fri, 25 Dec 2015 21:36:32 GMT
When buying things in the US from Canada, there are often different options to pay: PayPal, credit cards, etc.

In my case, I am (too) often paying for my voip.ms line (DID and usage). Despite being a Canadian company, they only charge in USD, probably because they need to pay their providers in USD, and don’t want to lose money when the exchange rate is abysmal (1.40 CAD = 1 USD right now!)

When paying an invoice in USD from Canada, not all payment methods are equal. Some will charge more to convert currency. PayPal in particular is pretty bad.

So if you often buy in USD from Canada, getting the VISA from Amazon might not be a bad idea. You’ll save (a lot) in the long term.

---

iOS 9 ATS (App Transport Security) exceptions
https://www.pommepause.com/2015/09/ios-9-ats-app-transport-security-exceptions/
Wed, 02 Sep 2015 17:03:37 GMT

App Transport Security is a feature that improves the security of connections between an app and web services. The feature consists of default connection requirements that conform to best practices for secure connections. Apps can override this default behavior and turn off transport security.

Transport security is available on iOS 9.0 or later, and on OS X 10.11 and later. (Source)

What does that mean?

That unless you change something in your iOS app’s plist, your app will not be able to communicate with insecure HTTP servers when it runs on iOS 9.

That’s a good thing really; Apple is trying to force people to update their HTTP servers to use the latest HTTPS protocols & recommendations: TLS 1.2, SHA256 or better, forward secrecy.

OK. A good thing. What’s the problem then?

Where this could cause you issues is if you use third-party services that have not yet adhered to those standards.

They might have a good reason not to (doubtful!), or they are working on implementing them, but are not there yet.

What’s the solution?

Right now, the best workaround is to whitelist all servers on a case-by-case basis, per the service providers’ own recommendations. Thus the need for the list in this repository.

In the ats.plist file in this repo, you’ll find the exceptions that you need to add to your app’s .plist, if you use some of the third-party services that published recommendations regarding ATS.
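For reference, a typical per-domain exception in such a plist looks like this (example.com is a placeholder domain; which exception keys to use depends on each provider's own recommendation):

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>example.com</key>
        <dict>
            <key>NSIncludesSubdomains</key>
            <true/>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```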

How can I help?

If you know of any other services that published recommendations for exceptions that should be used in iOS 9 apps, for their service to work as expected, please fork and create a pull request.

How can I test if my servers will work with ATS enabled, or what exceptions I might need?

nscurl --ats-diagnostics https://api.your-server.com

---

How to extract your TOTP secrets from Authy
https://www.pommepause.com/2014/10/how-to-extract-your-totp-secrets-from-authy/
Sat, 04 Oct 2014 02:03:23 GMT
Maybe you just want to back them up for when something goes wrong, or maybe you want to set up a new two-factor authentication app on a platform that Authy doesn’t support (cough Windows Phone cough). Whatever your reasons, if you want to export your TOTP secret keys from Authy, their apps or support guys won’t be much help to you.

The trick, which I just used to install all my existing TOTP secrets in the Microsoft Authenticator app, is to modify one of their apps for which we have the source, namely their Chrome app, to show us what we want.

So I opened gaedmjdfmmahhbjefcbgaolhhanlaolb/2.2.0_0/js/app.js from my Chrome Extensions folder (~/Library/Application Support/Google/Chrome/Default/Extensions on a Mac, ~/.config/google-chrome/Default/Extensions on Linux) in my favorite text editor (TextMate), and used JavaScript > Reformat Selection to be able to see what was happening in there. I then found that the shared secrets I was after were stored in GoogleAuthApp.decryptedSeed. I was looking for the decrypted version because I didn’t want to have to understand where the encrypted values were stored, and how I could decrypt them myself; their Chrome app could already decrypt them, so all I needed was to add something in there that would somehow output them.

So I added a getter for the shared secret, in GoogleAuthApp (now obfuscated as d), like this:

Then I modified TokensView.prototype.updateTokens to output this info, along with the human-readable name of the entry, in a link to a QR code that I can scan in any TOTP client (compatible with Google Authenticator), like this:

I noticed TokensView.prototype.updateTokens is only called after the tokens expire, so at most 30 seconds after you decrypt your data by entering your password, but eh, good enough.

So, if you want to do that too, you can try to do the changes I detailed above, or just replace your gaedmjdfmmahhbjefcbgaolhhanlaolb/js/app.js with my version (based on version 2.2.0 of their Chrome app), and wait between 0 and 30 seconds to see the links appear:

Click them one by one, scan them with your new client, and voilà! You’re ready to rock two-factor logins on your new device.
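Those links encode standard otpauth:// URIs, which any Google Authenticator-compatible client can import; building one can be sketched like this (the label, secret and issuer below are made-up examples, not real keys):

```python
import urllib.parse

def totp_uri(label, secret, issuer=None):
    """Build an otpauth:// URI for a TOTP secret (base32-encoded),
    as understood by Google Authenticator-compatible apps."""
    uri = f"otpauth://totp/{urllib.parse.quote(label)}?secret={secret}"
    if issuer:
        uri += f"&issuer={urllib.parse.quote(issuer)}"
    return uri

# Secret is a made-up base32 example, not a real key.
print(totp_uri("me@example.com", "JBSWY3DPEHPK3PXP", issuer="ExampleService"))
```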

Of note: CloudFlare doesn’t show a QR code in my screenshot above because it uses “Authy two-factor authentication”, which is not compatible with Google Authenticator. There’s just no point in exporting those out of Authy, since they are not usable anywhere else…

---

RSS-For-Later: Replace Google Reader with Pocket
https://www.pommepause.com/2013/03/rss-for-later-replace-google-reader-with-pocket/
Sat, 16 Mar 2013 14:42:18 GMT
Google Reader is going away later this year. This means those of us using RSS to keep in touch with the world will need to find an alternative to be able to get our fix using our different devices.

I still remember the pre-Google Reader days of RSS, when RSS clients were silos that talked to nobody. This meant that trying to read articles on a PDA (Palm Zire, anyone?) and on a PC forced us to skip a bunch of articles each time we switched from one to the other… Solutions for this problem existed at the time, but were convoluted, and not that pleasant. i.e. I don’t want to go back there!

This morning, I read a post by Ruth John, aka @Rumyra, about how she used IFTTT (If-This-Then-That) to inject the content of an RSS feed into the Pocket read-it-later service. This struck me as a good idea, so I started with a Yahoo Pipe that took my OPML and merged all articles into one feed, and I fed that into IFTTT, choosing Pocket as the target. Sadly, that didn’t work so well; IFTTT has known issues with Yahoo Pipes RSS feeds. Next option: just do it myself!

A couple hours later, I now have a working web-app that I used to import my list of feeds (in OPML format), that I can use to manage my feeds list, and that will automatically post new articles to my Pocket account. It even works with the existing RSS Subscription Extension (by Google), so I can easily subscribe to new feeds too.

1. Enter your email address to create an account. The email address is not really used for anything right now; it’s just a way to link an account to a real person. You will receive a secret URL that you will need to bookmark, as this will be the only way for you to manage your feeds later.

2. Import your OPML (exported from Google Takeout, or any other RSS client), or manually add new feeds.

3. Click “Connect to Pocket” to authorize RSS-For-Later to send new articles to your Pocket account.

There is no step 4. :) Just wait for new articles to start appearing in your Pocket account.

RSS-For-Later is now available on GitHub, for anyone to install on their own server/host, or to improve (submit pull requests).

Suggestions & comments are welcome.

---

How to run a super-fast Android emulator with Intel x86 system images
https://www.pommepause.com/2012/09/how-to-run-a-super-fast-android-emulator-with-intel-x86-system-images/
Wed, 12 Sep 2012 15:44:32 GMT
Note: I did this on my MacBook Pro, and saw a major difference between the x86 emulator and the old ARM emulator. I guess I should thank the CPU my MacBook uses, which supports Intel® HAXM*. If yours doesn’t, you’re out of luck!

* Intel® HAXM requires an Intel® processor with support for Intel® VT-x, Intel® EM64T (Intel® 64), and Execute Disable (XD) Bit functionality.

In Eclipse (or not), run the Android SDK Manager, and install the latest “Intel x86 Atom System Image”. The latest one I see right now is in the Android 4.2.2 (API 17) section. There are also images for 4.1, 4.0 and 2.3, if you want to run those emulators. Also, you might already have it installed.

Next, start the Android Virtual Device (AVD) Manager, and either create a new AVD, or edit an existing one.

Choose a Target for one of the Intel system images you downloaded. The CPU/ABI field will then allow you to select “Intel Atom (x86)”. Yay!

In the Emulation Options section, select the Use Host GPU option.

Step 6 is to marvel at the speed at which your AVDs now run!

---

Flickr interesting or groups photos on your (jailbroken) Apple TV screensaver
https://www.pommepause.com/2012/04/flickr-interesting-or-groups-photos-on-your-jailbroken-apple-tv-screensaver/
Sun, 08 Apr 2012 11:10:34 GMT
Since I replaced my Apple TV 1 with an Apple TV 2, and started using Flickr as the screensaver, I was wondering how I could use group photos or interesting photos from Flickr, instead of just a user’s photos, or the result of a search. Today, I was able to hack it to do what I want!

(Note: You need a jailbroken Apple TV for this to work.)

I was able to find other workarounds, but none seemed to do what I really wanted, or they were simply not good ideas in the long run.

To do what I wanted, I forwarded all requests intended for api.flickr.com, which the Apple TV uses for the Flickr screensaver, to my own server. The simple PHP script I created to receive those requests then checks if the request is a search, and if so, if it contains specific strings, namely group:some group name or explore:. If it does, it will use the Flickr API to do a group search for some group name, and will return the photos of the first group found, or it will return photos from the Explore section, respectively. All other requests are simply forwarded to the Flickr API as-is.
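The routing decision described above can be sketched like this (a hypothetical shell sketch; the real proxy is a PHP script, and the return values here are just labels for the three branches):

```shell
# Decide how to handle an incoming Flickr API search request, based on the
# search text. (Hypothetical sketch; the real script is PHP.)
route_search() {
    case "$1" in
        group:*)   echo "group-search:${1#group:}" ;;  # find the group, return its photos
        explore:*) echo "explore" ;;                   # return photos from Explore
        *)         echo "passthrough" ;;               # forward to the Flickr API as-is
    esac
}
```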

Second bonus: It’s actually quite easy to return pictures from any other website like this, so my script could be easily modified to return photos from a Gallery3 website, or another photo site that has an API, or whatever.

Usage:

To use it yourself, the only thing you need is to force all requests to api.flickr.com to go through my server:
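On a jailbroken Apple TV, that redirection is typically done with a hosts-file entry. A sketch, where the server IP and the local file path are placeholders (on the Apple TV itself the file is /etc/hosts):

```shell
# Hypothetical sketch: point api.flickr.com at the proxy server via a hosts
# entry. SERVER_IP is a placeholder; on the Apple TV the file is /etc/hosts.
SERVER_IP="203.0.113.10"
HOSTS_FILE="${HOSTS_FILE:-./hosts}"
touch "$HOSTS_FILE"
# Append the redirect only if it is not already present (idempotent).
grep -q "api.flickr.com" "$HOSTS_FILE" || \
    echo "$SERVER_IP api.flickr.com" >> "$HOSTS_FILE"
```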

To use a group's photos for your screensaver, enter group:some group name in the search option. You'll want to make sure the some group name you use returns the group you want as the first result in Flickr's groups search.

[Updated] Phone Power in Canada: awesome features set, so-so routing & support
https://www.pommepause.com/2012/03/phone-power-awesome-features-set-so-so-routing-support/
Sat, 31 Mar 2012 14:56:30 GMT
Last year, I took the plunge and switched from a big local telephony provider to a web-based VoIP provider: Phone Power. Their feature set is quite something: free second line, voicemails to email, some free international minutes, etc. But when it comes to routing local calls, they are so-so.

I live in Montreal, Quebec, Canada. Many local agencies & companies have 1-800 numbers that are geolocation-locked; you can’t call those numbers from outside Canada, or outside Quebec (depending). How they detect the origin of the call is not based on the caller ID (the caller’s phone number); it has to do with how the call is routed, i.e. where it’s coming from for real.

Now, the problem is not that Phone Power isn't technically capable of routing those calls correctly, since I have been able to call those numbers on multiple occasions. The problem is that they are unable to keep routing consistent. The result is that those calls will only work sometimes, and will sometimes fail. And, when it's not working, and you call/chat support to fix it, they don't know how to resolve the situation. Sometimes, after 20-30 minutes of back and forth, they are able to make those calls go through. Other times, they can't fix it, and answer that they'll investigate further and contact me later when they find something.

Here’s a timeline of my (10+) contacts with support regarding this:

2011-08-17
Me: I can't connect to the following number: 1-800-361-3977 Would it help if my voip phone number was a canadian phone number maybe ?
Support: Possibly. Likely. Though you'd need to speak with billing for the logistics of that.

2011-09-26
Me: Trying to dial [a 1-866 number], it's telling me that number can't be reached from my calling area.
Support: Basically i tried forcing the call through all of the providers i have available to me and i get the same message from them all. Call the bank and see if they allow inbound calls from outside canada.

2011-09-27
Me: Michael called me today regarding a problem I'm having with 1-866 numbers.
Support (after 1+ hour of back and forth): Can you dial one more time plaese?
Me: yes, it's ringing! Bingo!!
Support: We will be calling you back because we do not want this issue to happen to you or an other customer. I will currently leave it on for now. They might be changing it to test it on our end.

2011-09-28
Support: This is a follow up email regarding the ticket you recently opened with us about dialing toll free numbers in Canada. While we do bill calls to Canada at a domestic rate, the call features recognize Canadian numbers as International in nature (as we are an American based company). So with International dialing blocked, it would not permit you to call such numbers.
Me: […] it seems like it only partially block Canadian numbers… Because I can call local Canadian numbers without a problem with the setting On. It only prevents me from being able to call some Canadian 1-800 numbers…

2011-09-29
Me: Returning the call from Dude. He asked to call back if my 1-866 calls failed again. They do. it worked fine at the end of the chat session on the 27th, but now they are failing again.
Support: Please try calling again and let me know what happens now.
Me: It's working. All of them seems to work again now.
Support: We are still going to need to work on this issue to get it fully resolved. We will contact you once we have an update.

2011-10-14
Me: "The number you have dial has not been recognized."; different error message than earlier today.
Support: [We need to trace the call; can you dial the number again?]
Me: [No, my wife needs the phone now. I'll call back again when the phone is unused.]

2011-11-10
Support: We tried to reach you in regards to the problem or service issue you reported on your Phone Power account, but had no success. If you are still experiencing the problem or issue, please reply to this email, or contact us at 888-607-6937 (option 3) in the next 24 hours.
Me: It's working fine at this time.

2011-12-02
Me: Since I last wrote to you about this on November 16, 2011, when all was working fine, some 1-800 number are not working again.
Support: …
Me: if you can't fix it like that, just make it so all my calls are routed like they were on Nov. 16 and leave it at that.
Support: I have already tested that and it did not work either. I will look into this more and contact you later.

2011-12-17
Me: What's the update on this ? Last contact was on December 5: "I will look into this more and contact you later."
Me: (7 hours later) Official complaint email sent to sales@phonepower.com, with instructions to forward to the appropriate person, since there is no other contact information on their website.
No response whatsoever.

2012-03-30
Support: This is a follow up email regarding the ticket you recently opened with us about your outbound calls to toll free numbers, we attempted to reach you however were not successful. We have removed the test route, please re-test and let us know if problem still exist.
Me: 1-800 / 1-866 calls are failing again.

Update: Someone from PP called last week. He sounded like someone who cares about the image of the company; maybe he was the director of operations or something. Anyway, he told me that my problem should now be resolved for good, and that he was really sorry about the time it took them to resolve this issue. Things are working fine right now. Let’s hope it stays that way.

Update 2: I switched to voip.ms once my contract with Phone Power expired. It probably costs me a few more cents a month, but I'm happy with it so far.

[Updated] How to monitor the Apple Store for available refurbished items using cron
https://www.pommepause.com/2012/03/how-to-monitor-the-apple-store-for-available-refurbished-items-using-cron/
Sat, 31 Mar 2012 14:33:07 GMT
So, you'd like to buy a refurbished product from the Apple Store, but it's currently Out of Stock. And will probably be for a while, and when it's not anymore, the few units available will be gone in minutes. So you need a way to be notified ASAP when it's available, so you can have a chance to order it.

Here’s a simple way using cron.

1. Get the product URL from the Apple Store Refurbished page. You can find it on Google by searching for:

site:store.apple.com country refurbished product name
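The cron job itself was lost from this copy of the post, but it would look roughly like the sketch below. The "unavailable" marker string, the product URL, and the email address are all assumptions you would adapt by inspecting the actual product page:

```shell
# Hypothetical sketch of the availability check. Pick a marker string that
# only appears on the page while the product is out of stock.
check_refurb() {
    # $1 = a file containing the downloaded product page HTML
    if grep -qi "currently unavailable" "$1"; then
        echo "still out of stock"
    else
        echo "AVAILABLE"
    fi
}
# The crontab entry would then be along these lines:
# * * * * * curl -s "$PRODUCT_URL" | grep -qi "currently unavailable" || \
#     echo "In stock: $PRODUCT_URL" | mail -s "Refurb available" you@example.com
```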

Just change the product URL twice in the above, and change the email address, and you’ll receive an email within one minute of your product being available.

If you don't have an always-on Mac/Linux server that can send emails to run this cron on, just send me your email address and the product URL you'd like to monitor. I'll be happy to hook you up.

Update: Just go to arsc.pommepause.com if you'd like to be notified of refurbished Apple products availability. It's a little something I threw together that uses the above technique.

Show/Hide DesktopShelves using a Hot Corner
https://www.pommepause.com/2011/10/showhide-desktopshelves-using-a-hot-corner/
Mon, 03 Oct 2011 14:24:36 GMT
Here’s how I setup a Hot Corner to show or hide my DesktopShelves.

(Note that this trick can also be used to launch any program or AppleScript using a Hot Corner.)

Extract the AppleScript from the above download, and put the .scpt file in ~/Applications (or anywhere really!)

If you’ll be using ActivateDesktopShelves.scpt, open it in AppleScript Editor (just double-click it) and change the 2nd line from the bottom to start your preferred screen saver. The default is “Flurry”, and the list of available screen savers appears just above that line. Save it once you’re done.

If you'll be using ActivateDesktopShelves-iPhotoScreenSaver.scpt instead, there are two additional steps:

Go into System Preferences, and select the iPhoto screen saver. Choose the options you'd like to use.

In Terminal, create a copy of the screen saver preferences file. (Just paste the following 3 lines in Terminal.)
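The three Terminal lines were lost in this copy of the post. A hypothetical reconstruction, assuming macOS conventions (per-host preferences live in ~/Library/Preferences/ByHost/ and the plist name is suffixed with the machine's hardware UUID; both are assumptions):

```shell
# Hypothetical reconstruction of the "three Terminal lines": copy the current
# screen saver prefs plist under an .iphoto name, so it can be swapped in
# later. On macOS the real values would be:
#   prefs_dir = ~/Library/Preferences/ByHost
#   uuid      = ioreg -rd1 -c IOPlatformExpertDevice | grep IOPlatformUUID
backup_prefs() {
    prefs_dir="$1"; uuid="$2"
    cp "$prefs_dir/com.apple.screensaver.$uuid.plist" \
       "$prefs_dir/com.apple.screensaver.$uuid.plist.iphoto"
}
```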

Click Hot Corners… and assign one or more Hot Corners to Start Screen Saver.

The AppleScript will detect if the mouse is in a corner, and if so, will launch DesktopShelves.app, which will in turn display your shelves.

If the mouse isn’t in a corner, then your screen saver will start.

Voilà! A nice way to start any app using a Hot Corner, and a nice way to use DesktopShelves only with your mouse.

Footnotes

[1] MouseLocation is a simple Objective-C executable created using the following code:

Start iPhoto screen saver from AppleScript
https://www.pommepause.com/2011/10/start-iphoto-screen-saver-from-applescript-2/
Mon, 03 Oct 2011 10:02:22 GMT
Starting the screen saver from AppleScript is simple enough:

tell application "System Events" to start current screen saver

Even starting another screen saver than the default from System Preferences is simple, if you want one of the standard screen savers:

tell application "System Events" to tell screen saver "Arabesque" to start

But it becomes much more complicated if you'd like to start the iPhoto screen saver while keeping another as the System Preferences default. Here's how I did it:

Go back into System Preferences, and change the screen saver to the one you'd like to use as your default. Once you did all that, here's how to start the iPhoto screen saver from AppleScript:

line 2: find the machine's UUID; this is needed because the screen saver preferences file contains it in its name;

line 3: the folder where the preferences file is: ~/Library/Preferences/ByHost/;

line 4: the name of the preferences file: com.apple.screensaver.[UUID].plist;

line 5: create a .orig backup of the current preferences file, and then overwrite it with the iPhoto copy that was created manually above;

line 6: start the iPhoto screen saver;

line 7: wait 5 seconds, to allow the iPhoto screen saver some time to launch, then put back the .orig backup we made at line 5, so that your default screen saver will be used when it's time.

Convoluted? Yes. Achieves the desired result? You bet!
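Put together, the swap described line by line above looks like this (a shell sketch of the logic; the original is AppleScript, and the plist path conventions are the same assumptions as before):

```shell
# Shell sketch of the script described above: back up the current screen
# saver prefs, swap in the iPhoto copy, start the saver, wait, then restore.
# $1 = full path to com.apple.screensaver.<UUID>.plist
# On macOS, starting the saver would be: open -a ScreenSaverEngine
swap_and_start() {
    prefs="$1"
    cp "$prefs" "$prefs.orig"       # line 5: back up the current prefs...
    cp "$prefs.iphoto" "$prefs"     # ...and overwrite with the iPhoto copy
    # line 6 would start the screen saver here (omitted in this sketch)
    sleep 1                         # line 7: give it time to launch
    mv "$prefs.orig" "$prefs"       # then restore the default prefs
}
```

(The sketch sleeps 1 second to stay quick; the original waits 5.)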

Has Gmail spam filter become too aggressive?
https://www.pommepause.com/2011/04/has-gmail-spam-filter-become-too-aggressive/
Thu, 07 Apr 2011 14:25:57 GMT
I’m not sure if I’m the only one who noticed (I hope not!), but recently, the Gmail spam filter started marking as spam a lot of messages that were NOT spam.

Here are the ones I found, while looking at only the first two pages of my Spam folder (about two days' worth of spam):

A Logitech.com shipment notification;

My monthly Yak invoice;

My monthly ‘your invoice is ready’ from Citibank;

The OpenDNS newsletter;

Two commit notifications from Google Code;

Three ‘your password has been reset’ emails, from Wordpress.org, and other less known bulletin boards.

This makes me sad for multiple reasons. One is that while I have been able to catch some of them easily enough (the various reset password systems I used did point out that their email could end up in our Spam folders), I just found the others. That means I probably missed at least some other emails.

The second reason I’m sad is that now, I’ll need to go through all those spam messages to find the ones I care about! Not something I expected to do this morning, nor something that is particularly pleasant… Plus, I’ll need to repeat that every day now!

And finally, this makes me sad because I trusted the Gmail team. I understand that spam filtering is not simple, but I would have greatly preferred for them to tweak their algorithm to push the balance in the other direction. It's much easier for us to flag the occasional spam emails that would end up in our inboxes than to have to go through thousands of emails to find important messages!

Let’s hope the Gmail-Spam-Filter team hears this, and works toward a good resolution in a timely fashion.

What about you? How many not-spam messages can you find in your Gmail Spam folder in the next few minutes?

Important note: the applications below are now officially unsupported. I'm not a Videotron client anymore, so it became difficult, and of little interest to me, to continue development on both of those solutions to track your bandwidth quota. If you know any developer who might be interested in continuing development and support, feel free to send them a note. (JavaScript is the main programming language of both applications.)

Would you like to monitor your monthly Videotron Internet quota easily?

Chrome Extension - Get comments in RSS format
https://www.pommepause.com/2011/03/chrome-extension-get-comments-in-rss-format/
Wed, 30 Mar 2011 01:34:50 GMT
So you got a nice Google Chrome extension, right? And people do leave comments / questions / hate mail on the extension page all the time. But the only way for you to get those is to visit that page in your browser… Not cool. Not cool at all, Google!

Wanting to get the comments in Google Reader, I simply looked in the Inspect Element > Network tab, to see what was going on, when I visited the Chrome Store page for my extension. And lo and behold, there’s an AJAX request to fetch the comments, with the results returned as a nice JSON-encoded object!

A couple of LOCs of PHP later, I now have a URL that takes as parameters an extension ID (that 32-character-long string of letters you see in the URL when you visit the extension page; e.g. fnhepcakkcnkaehfhpagimbbkpelkdha) and an optional extension name (to beautify the RSS a little), and gives me an RSS feed of all the comments for that extension.

Greyhole Roadmap - version 0.9
https://www.pommepause.com/2010/12/greyhole-roadmap-version-0-9/
Mon, 13 Dec 2010 02:39:45 GMT
With 0.8 just out the door, I began thinking about 0.9.

That version will focus on fixing Greyhole's greatest bug creator: the rename operation! There have been, and probably always will be, problems with the current implementation Greyhole uses to handle file and directory renames. I have some ideas on how to fix them all, once and for all, but this will need some non-trivial development, clean-up and regression testing. I'm hopeful that re-implementing this part of Greyhole will make it much less bug-prone where renames are concerned. More about this soon (when 0.9 is released, I guess…)

Greyhole 0.8 & Samba module
https://www.pommepause.com/2010/12/greyhole-0-8-samba-module/
Mon, 13 Dec 2010 02:39:25 GMT
I just built and uploaded version 0.8 of Greyhole on Google Code. This version doesn't change much of what is normally visible to the end users (except the regular bug fixes). Instead, 0.8 focused on improving an area of Greyhole that has always been messy: the communication channel between Samba and the Greyhole daemon.

In the past, Greyhole used a log file to log all file operations that happened on Greyhole-enabled Samba shares. At one point, it used its own log file, then switched to using syslog (/var/log/messages usually). In version 0.8, Greyhole now uses spool files to log those operations. This is similar to how email servers (like sendmail and postfix) work. Basically, Samba will create small data files in /var/spool/greyhole/, with filenames being timestamps of when the event happened. When the Greyhole daemon needs something to do, it will look in that directory, and process any files it finds.

All of this serves multiple purposes. One is to simplify Greyhole code. Parsing a log file wasn't that pleasant: we had to remember where we stopped parsing the log the last time we looked at it, and we had to handle log rotation to not miss any operations. All of this is now a thing of the past. We now simply list a directory, and process the files we find there, before deleting them. Quite simple really!
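The daemon's main loop then boils down to something like this (a hypothetical shell sketch; Greyhole itself is PHP):

```shell
# Hypothetical sketch of the spool pattern: process files oldest-first
# (their names are timestamps, so a lexical sort is chronological for
# fixed-width names), then delete each one so it is never processed twice.
process_spool() {
    spool_dir="$1"              # Greyhole uses /var/spool/greyhole/
    for f in $(ls "$spool_dir" | sort); do
        cat "$spool_dir/$f"     # stand-in for "replay this file operation"
        rm "$spool_dir/$f"
    done
}
```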

Another goal we're aiming for with 0.8 is to have the Samba module that Greyhole uses become part of Samba. That would mean that everyone with Samba installed would have at least one of the many blocks required to run Greyhole. That would also mean a lot more visibility for Greyhole than what we have. I'm working with Samba developers to make that a reality, and I expect to get this committed in the Samba mainline repository in the upcoming weeks.

Greyhole new website
https://www.pommepause.com/2010/12/greyhole-new-website/
Mon, 13 Dec 2010 02:38:56 GMT
This week, I created a new website for Greyhole: http://www.greyhole.net

This website centralizes all the information one could want about Greyhole:

Fuel Consumption Tracker
https://www.pommepause.com/2010/06/fuel-consumption-tracker/
Tue, 15 Jun 2010 00:12:29 GMT
I wanted to keep track of fuel consumption (L/100km) for our two vehicles. I wanted to be able to send email to enter data, or use a simple web interface. The email part was important, because I don't have a data plan on my cellphone, so being able to compose and queue an email at the pump, to have it sent automatically when I was later within reach of a known Wifi network, was very nice to have.

Implemented in PHP, the result is not that pretty, but it’s nice enough, and the ease of use allows me to keep it updated without too much hassle.

The webpage has 4 simple text fields to enter those values, and an email just needs to contain the KPL in whitespace-separated format. The webpage has one form per car, and an email just needs to contain the name of the car anywhere in the email (subject or body): “corolla” or “highlander”.

The database is simple enough: date, car ID, KPL (3 fields), and an auto-filled (using MySQL triggers) consumption field. A simple cars domain table links the car ID to its user-readable model, make and year, used to display the reports.
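The auto-filled consumption field is simple arithmetic: liters used divided by kilometers driven since the last fill-up, times 100. A sketch of the calculation the trigger performs (the function and its arguments are hypothetical):

```shell
# L/100km for one fill-up: liters / km * 100.
# (A sketch of the arithmetic the MySQL trigger performs.)
consumption() {
    km="$1"; liters="$2"
    awk -v km="$km" -v l="$liters" 'BEGIN { printf "%.1f", l / km * 100 }'
}
```

For example, 40 L burned over 500 km works out to 8.0 L/100km.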

Throw in some Google Chart Tools to display graphs, and some general statistics (average consumption, total mileage, total fuel bought in $ and L), and you're done!

A nice to have I added later on is a next_service table, which contains a car ID, and a mileage. When data is entered that makes the total mileage of a car reach the mileage indicated in that table, an email is sent to the car owner, to remind him that his next service is due. Not that the dealer won’t remind me anyway, but still…

Here’s what it looks like, for the recent Highlander Hybrid (bought April 2008), and the 2002 Corolla:

The code that makes this work will be open-sourced if anyone is interested.

Or if you’d like to simply use it yourself as-is, I can set you up on my server, no problem. Just poke me.

Hacking Chrome extensions - How I added keyboard shortcuts to 1Password in Chrome
https://www.pommepause.com/2010/05/hacking-crome-extensions-how-i-added-keyboard-shortcuts-to-1password-in-chrome/
Fri, 14 May 2010 13:02:30 GMT
I love 1Password (http://agilewebsolutions.com/products/1Password). It looks good, it's safe, it has a web-accessible UI, it has an iPhone/iPad application…

What I didn't like about it was its Chrome extension, which required me to use the mouse to click the 1Password icon in the toolbar each time I wanted to auto-fill a form with login details!! That was so annoying.

So annoying in fact that I took it upon myself to implement keyboard shortcuts in the 1Password extension. I knew it wouldn't be that hard, since Chrome extensions are basically JavaScript & HTML files.

And it turned out to be pretty easy indeed:

I added an event listener for keyUp in the content script (that’s executed each time a page is loaded):

window.addEventListener("keyup", keyListener, false);

Then in the keyListener function, I simply check for the keyboard shortcuts I want:

That sendRequest line simply calls another JavaScript function, but a function that is defined and executed in the 'background' context (the equivalent of a singleton pattern for Chrome extensions). In the background HTML file, I simply added some code in that function to pop up a small window showing the same popup.html file as when I clicked the 1Password button in the toolbar.

The only thing left was to change the existing functions from popup.html that fetched the available login information, and auto-filled the forms, to use the parent tab, instead of the current tab, when invoked from the popup. And how lucky I was: there was already a null parameter used for the target window in both those functions! I simply changed that parameter to the parent window id, if the popup was invoked from the keyboard, and that's it! :)

I now have a working 1Password extension that I can use without my hands leaving the keyboard.

I created a patch of my changes, and posted it in the 1Password forums, so that the developers could take it and base the official implementation on it.

Restart Chrome. Then try the shortcuts: Ctrl-/ or Ctrl-\

Allowing programs run by regular users to open ports below 1024
https://www.pommepause.com/2010/05/allowing-programs-run-by-regular-users-to-open-ports-below-1024/
Fri, 14 May 2010 12:30:31 GMT
Normally, only the root user is allowed to open ports below 1024. That's why, if you try running an application as a normal (non-root) user, and that application tries to open a port below 1024, you'll get an error (access denied most likely).

If you’re running Fedora (and that would probably work on other distros too), there’s a command you can run, as root, that will allow such programs to open any of those ports, even if they’re run by a regular user.

setcap cap_net_bind_service=ep your-program-name

Example:

setcap cap_net_bind_service=ep /usr/local/bin/znc

You’ll then need to restart the program, if it was already running.

That’s it.

Kudos to stevea, a very prolific poster of FedoraForums.org (he nears 5k posts), for his answer to this question asked by another user.

Network-wide incoming calls notifications using Growl, Boxcar and XBMC
https://www.pommepause.com/2010/05/network-wide-incoming-call-notifications-using-growl-and-xbmc/
Thu, 13 May 2010 00:33:42 GMT
Earlier this week, I stumbled upon an iPhone app that allowed users to receive push notifications on XBMC. When a notification is received in XBMC, it appears in the lower right corner of the screen. Pretty cool.

This made me think it would be nice to see incoming phone calls there.

So I took out the Ovolab Phlink device I had sitting on a shelf, and created a small ‘ring’ script for it. That (Apple)script checks for the caller ID when the phone rings (and for a matching entry in my address book), and if it is available, calls an external PHP script that handles the network-wide notifications.

That script takes these parameters:

the message to send

a title

the list of recipients (computers)

the image to use for the Growl notifications (XBMC notifications don’t show any images, just a title and the message).

So, before calling this PHP script, the ring script will create the message to send, and if there’s a picture for that person in my address book, it will save that picture to a shared directory on the local computer. Remote computers all have that shared directory mounted all the time, so they instantly have access to the caller photo, if any. :)

The notification PHP script then loops over all recipient computers, and depending on what they are, will either:

Call growlnotify remotely using SSH

Make a HTTP call to the remote XBMC process, to send the notification

Make a HTTPS call to the Boxcar API, to send a push notification to iPhone / iPad devices.

This worked really well. But I wanted to go one step further.

On the XBMC instance running on the Mac mini that we use as a home theater, I wanted to pause whatever was playing when the phone rang. Luckily, there’s also an HTTP call available to do that. Sadly, I soon realized that the “Paused” graphic appeared over any notifications! If I paused the video, the notification would simply not be readable.

I fixed that by using Growl on that computer. The Growl notifications appear over everything, and the currently playing video is paused. Hooray!
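The dispatch step described above could be sketched roughly like this. The host names and image path are made up for illustration, and the XBMC HTTP API path is written from memory (verify it against your XBMC version); growlnotify takes the title as a positional argument and the message via `-m`:

```shell
# Build the notification URL for the old XBMC HTTP API (path from memory --
# check against your XBMC version before relying on it).
xbmc_notify_url() {
  host=$1; title=$2; msg=$3
  echo "http://$host:8080/xbmcCmds/xbmcHttp?command=ExecBuiltIn&parameter=Notification($title,$msg)"
}

# Dispatch to every recipient; which branch runs depends on the machine type.
# imac.local, macbook.local and mini.local are hypothetical host names.
notify_all() {
  title=$1; msg=$2; image=$3
  for mac in imac.local macbook.local; do        # Macs running Growl
    ssh "$mac" growlnotify --image "$image" -m "$msg" "$title"
  done
  for htpc in mini.local; do                     # machines running XBMC
    curl -s "$(xbmc_notify_url "$htpc" "$title" "$msg")" >/dev/null
    # pausing playback uses the same API, e.g. .../xbmcHttp?command=Pause
  done
}
```

The Boxcar branch would be one more `curl` call against their HTTPS API, omitted here.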

]]>https://www.pommepause.com/2010/05/network-wide-incoming-call-notifications-using-growl-and-xbmc/#disqus_threadBuilding a hush box to quiet a projectorhttps://www.pommepause.com/2010/05/building-a-hush-box-to-quiet-a-projector/
https://www.pommepause.com/2010/05/building-a-hush-box-to-quiet-a-projector/Wed, 12 May 2010 16:18:13 GMT
A projector and 120” screen sure are nice to watch TV shows and movies, but having them on the 3rd floor of the house makes the projector unhappy.

Being on the ceiling of almost the highest point in the house, during hot summer days, that projector can become quite hot. And when it does, it tries to compensate by fuelling its fans with enough voltage to make them sound like jet engines. (Not PowerMacG5-running-in-single-user-mode jet-engines-loud, but still…)

To try to quiet it down a notch, I built what some people call a hush box. It’s basically an enclosure that you put over your projector to stop the noise.

Evidently, you need to be careful not to make the projector overheat (which will greatly reduce the lamp life). Projectors emit quite a lot of hot air, so having them in a confined space wouldn’t be a good idea without good ventilation.

So here’s how I did it.

I measured the size of the box I’d need to build to hide the projector and its cables. I added 2” on each side, where the air intakes are located on my projector. I added another 2-4” to all dimensions to allow me to place some kind of sound attenuation material inside the box.

The plan was to place a 120mm fan behind the box to push cool air in, and another 120mm fan on the ceiling, pulling the hot air out of the box.

I built the box first, using scraps of wood I had, which I attached together using V metal brackets and small screws.

Next, I tried it over the projector to make sure my measurements were correct, and chose a spot to place the ceiling fan.

I climbed into the attic to install that fan. (That’s always fun, with all the pink insulation material in there…)

I then found a nicely sized piece of glass that I could use to cover the hole I made for the image. I took it out of one of the industrial 500W lights I had around (yes, those are now much more dangerous to use without that piece of glass!).

Looking for something that would attenuate the sound, I found some scrap pieces of carpet lying around. I cut out pieces of the right size, and placed them at the bottom, front and sides of the hush box.

Almost done. I attached the box to the ceiling.

I used a piece of cardboard for the last side, and placed another 120mm fan in there. That will allow me easy access to the projector, if I ever need that.

Here’s the end result:

I’m pretty satisfied with the result. I have since changed the settings of the projector to ‘High Altitude’, which makes the fans always run at their maximum speed. Even with that, the projector is much quieter than it was before. Hooray! :)

]]>https://www.pommepause.com/2010/05/building-a-hush-box-to-quiet-a-projector/#disqus_threadVideos5 - A web application to stream videos to your iPad (and all)https://www.pommepause.com/2010/05/videos5-a-web-application-to-stream-videos-to-your-ipad-and-all/
https://www.pommepause.com/2010/05/videos5-a-web-application-to-stream-videos-to-your-ipad-and-all/Wed, 12 May 2010 15:26:11 GMT
The iPad is now very popular in the house. I seldom get to use it as a recipe book to cook something, as it was intended… It’s either in my oldest’s hands, playing Labyrinth 2 HD, or on my wife’s lap, as she browses Facebook & reads her emails.

But still, sometimes, it’s nice to use it for other things. One such thing would be to stream videos from the Amahi home server sitting in a closet upstairs. One can watch a recorded TV show in bed, or hand the iPad to the big kid to let him watch Cars or Nemo while we’re watching the news, or something non-kid-friendly.

Being of the DIY kind, I made my own web-app to achieve this, using the new HTML5 video tag. Yes, I know, there are existing ‘solutions’ that would allow me to do something very similar, but what’s the fun in that? Plus, by building my own, I’ll be sure the features I need and want will be implemented in a timely fashion!

There are user profiles, which can be password-protected, allowing me to hide inappropriate videos from my children.

It integrates nicely with XBMC, which I was already using for those videos, reusing the same thumbnails and importing ratings.

There’s batch encoding and batch ratings, for movies and TV shows.

There’s an Encode Queue page allowing me to monitor progress of all the queued encodes.

Encoded videos will play nicely on all devices (iPad, iPhone, Apple TV, XBMC).

The home page shown after selecting user profile can be bookmarked (in the browser bookmarks, or the iPad home screen) to allow easy access to that specific profile. Perfect to allow the kids to reach their videos easily.

If you want to try it, I created a pretty thorough README that you can follow.

You’ll need an HTTP server, PHP, MySQL, HandBrake-CLI, mediainfo and mplayer (the command-line version), all of which are pretty easy to obtain for any OS.

Final note: you’ll probably need to manually edit the index.php file to point to the correct paths for some executables. And I doubt it would work in its current state on Windows, though it should be able to, with a couple of minor modifications.

]]>https://www.pommepause.com/2010/05/videos5-a-web-application-to-stream-videos-to-your-ipad-and-all/#disqus_threadGreyhole: How cool is that?https://www.pommepause.com/2010/03/greyhole-how-cool-is-that/
https://www.pommepause.com/2010/03/greyhole-how-cool-is-that/Sat, 27 Mar 2010 15:10:12 GMT
So, I’m now happily using Greyhole. Good for me, you say?

Not long ago, a 1 TB hard drive that was part of my storage pool died (my fault really, handling it while it was powered up). Greyhole handled this beautifully, re-creating duplicate copies of the files that were stored on that drive to continue protecting all my data. But I didn’t have enough free space on the other drives to allow all the duplicates I want to be created.

Perfect timing to test a very nice feature of Greyhole: inclusion of remote hard drives in the storage pool.

I have a 1 TB hard drive attached to my AirPort Extreme router, which I use as my Time Machine backup destination (the AirPort makes it available through AFP and Samba). It had about 600 GB free. Perfect candidate for this.

I simply mounted that drive on my file server, and included it in my Greyhole storage pool. I then launched “greyhole --balance” to force Greyhole to balance the available space evenly across all drives. Files transferred at about 5MB/s from my file server to the remote drive, so I had to wait a couple of hours for the 600GB to get filled.

I now have about 10-12 GB free on all the drives included in my storage pool, and all my files are correctly protected once more.

Further thinking revealed an interesting use of such a remote hard drive in a Greyhole storage pool. Since remote access is much slower than local access, it wouldn’t make much sense to keep a remote drive in my pool forever, since I do care about performance. But for some files, performance is not an issue. For example, for my Photos share, I keep a copy of each file on all available drives in my storage pool (I do care about those files!). A remote drive could be used to store a copy of those files, and nothing else. The trick to achieve this is to simply indicate a very high number as the minimum free space for that drive in the Greyhole configuration.

With such a configuration, the remote drive will only be used as a last-resort choice when Greyhole chooses where a file copy should be kept. And the minimum free space will be ignored in the case of files that need to go on all drives.

What this means is that the remote drive will be used to store a copy of the files in my Photos share, and it will be used to store file copies from other shares only if all other hard drives are filled to capacity. Which is nice. My important files are now backed up remotely (well, the next room is remote to the file server!), plus if all my fast drives get filled, this slower option will be used until I can free up some space (by adding another internal drive, most likely).
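The “very high minimum free space” trick described above might look something like this in the Greyhole configuration file. The option syntax here is from memory and should be checked against your Greyhole version’s sample config; the paths and sizes are made up:

```
# Local, fast drives: a normal minimum free space
storage_pool_directory = /mnt/hdd1/gh, min_free: 10gb
storage_pool_directory = /mnt/hdd2/gh, min_free: 10gb

# Remote drive: a huge min_free, so it is only ever picked as a last resort --
# except for shares configured to keep a copy on ALL drives, which ignore it.
storage_pool_directory = /mnt/airport_disk/gh, min_free: 2000gb
```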

How cool is that? Very cool I think. I don’t know any other pooling / redundancy system that would allow you to do something like that with such ease! :)

I’m glad to be using Greyhole right now. And you? :D

Important note: the software below is now officially unsupported. I’m not a Vidéotron client anymore, so it became difficult, and pointless, for me to continue development of this solution to track your bandwidth quota. If you know any developer who might be interested in continuing development and support, feel free to send them a note. (JavaScript is the main programming language.)

Having received earlier this week a letter from Vidéotron, my ISP, about my account getting capped at 100GB monthly in the upcoming months, I decided I needed an easy way to monitor my monthly bandwidth usage. A Dashboard widget was a good fit.

I downloaded a couple of widget samples from Apple.com, and started a new widget from there.

The end result:

A nice little widget, sitting on my Dashboard, that can tell me how much of my monthly quota I’ve used so far.

Preferences are: your Vidéotron User Key (something that looks like FFFFFF1234567890), and whether you’d like to visualize upload (versus download) using a different color on the graph.

Enjoy, fellow Vidéotron users.

Changelog

1.3.9 - Bugfix: During the 1st day of the month, you could see a warning about reaching ‘infinity’ at the end of the month!

1.3.8 - Improvement: Give a warning when the entered User Key is not the right length (16 characters).

1.3.7 - Bugfix: date conversions were broken!

1.3.6 - Bugfix: Small fix for the ‘now’ arrow in the small UI. Bugfix: finally working on 10.4 and Lion (10.7). Improvement: the new-version-available notice looks better in French and in the small UI.

1.3.5 - Bugfix: Incorrect end date calculation placed the ‘now’ cursor at the wrong position. Bugfix: trying to make it work on 10.4.

1.3.3 - Bugfix: minimal UI was broken since 1.3.0. Also added surplus calculation details in Console, and changed back surplus number in green (was changed to red for no reason).

1.3.2 - Bugfix & improvements: spinning wheel now disappears when it should. Current now appears in the preferences. New versions notifications will appear below the widget.

1.3.1 - Bugfix for Mac OS X 10.6 and lower (1.3.0 only worked on Lion… Sorry!)

1.3.0 - Bugfix & Improvement: Now using the new public Videotron API, instead of data scraping the consumption web page! You’ll need your User Key. You can find it in your Videotron Customer Center, in the User Key tab of the Your Profile page.

1.2.8 - Bugfix: wrong plan would become selected when new plans became available. Improvement: the bandwidth used today is now accounted for. Improvement: better warning text when your limit is busted, including the approx. amount you’ll be charged for the extra.

1.2.7 - Bugfix: typo in JavaScript made the widget unusable on Lion. Fixed. (Thanks Anonymous for the pointer.)

1.2.6 - Bugfix: regression in 1.2.5; couldn’t save preferences!

1.2.5 - Bugfix: Basic (2GB) plan couldn’t be selected.

1.2.4 - Updated Vidéotron logo; bug fixed: accumulated daily surplus and ‘now’ arrow were 1 day off; clarified that the last updated date was in fact “$date @ 23h59”; visual fix when new versions are available.

1.2.3 - Changed text in options, to clarify that the data transfer packages are ‘extras’, and the ‘Plan’ option is what will define your monthly limit.

1.2.2 - Added option to select data transfer packages bought this month. The selected value will be reset when the billing month changes.

1.2.1 - Missing Business plans from 1.2.0; added an option to display upload using a different color on the graph.

1.2.0 - Easier configuration; new version available notification; now displaying numeric deviation from daily limit (surplus) - this was only shown using a small arrow on the graph before; added overcharge ($) you should expect on your invoice, if you’re over your limit.

1.1.6 - Better handling for wrong username; people entering anything else than their VLXXXXXX Vidéotron username will now get a relevant error message.

1.1.5 - Fetch new data less often; was previously checking for new data every 15 minutes when the Dashboard was open; changed that to once a day.

1.1.4 - Added small arrows on meters to show the current date. If the meter is higher than the arrow, it means you’ve transferred too much in regard to the current date versus the date you’re invoiced. Red arrow = bad; green arrow = good.

Update: Greyhole has matured quite a lot since I posted this. You can now learn about Greyhole on the official website: www.greyhole.net.

There. I did it. Greyhole is now available to all avid enthusiasts who would like to try it.

What is it? It’s the Windows Home Server Drive Extender concept, but open source, running on Linux. To quote myself:

Greyhole is an application that uses Samba to create a storage pool of all your available hard drives (whatever their size, however they’re connected), and allows you to create redundant copies of the files you store, in order to prevent data loss when part of your hardware fails.

You can read more about it on the official website: www.greyhole.net

I’m now looking for adventurous souls who would like to battle-test it. I’m sure there are bugs, and probably some of them will delete data they shouldn’t. So I’d like to find those ASAP, before I lose the 5TB of data I have stored on my own Greyhole server.

So if you’re fed up with the Windows 2003 Server overhead you’re running just to get Drive Extender functionality, or if you were waiting for an open source version to use it, or if you’d just like to help me… Start at the link above, and follow the instructions in the INSTALL file.

You’ll need to be somewhat familiar with Linux to be able to install it. Newbies should restrain themselves from using Greyhole until it reaches the 1.0 milestone (or something).

Big red warning: Do NOT store your only copies of important files on Greyhole! It’s not ready for that. It needs to be tested first. I am using it to store 5TB of backups, photos, videos and music here, but I’m telling you that you shouldn’t do that! You’ll cry yourself to death when Greyhole eats all of it, as I’m sure it’s capable of!

That being said, the INSTALL file is pretty detailed, and the configuration file should help you understand what you need to know.

Feel free to discuss here, or open issues in the project page, if you have suggestions, comments, or bug reports.

]]>https://www.pommepause.com/2009/12/greyhole-easily-expandable-redundant-storage-pool-using-samba/#disqus_threadHow I plan to implement my own WHS Drive Extender-like system [UPDATED]https://www.pommepause.com/2009/11/how-i-plan-to-implement-my-own-whs-drive-extender-like-system/
https://www.pommepause.com/2009/11/how-i-plan-to-implement-my-own-whs-drive-extender-like-system/Sun, 29 Nov 2009 12:21:37 GMT
[Update] I did it. Read about it here.

I do like Windows Home Server’s Drive Extender technology. It allows one to aggregate a bunch of various-sized hard disks into one big storage pool. You can then define a number of shares that will share (!) that storage pool dynamically. AND you can define which shares you want duplicated, which basically means that DE (Drive Extender) will make sure that all the files on there are on two different physical hard drives, to safeguard those files against hardware failures.

What I don’t like about WHS is that it comes with a lot of stuff I don’t need or want, like a Windows 2003 kernel, a Windows-only backup system, Video/Music/Photos sharing, etc.

I’ve been searching for quite a while to find a similar system that would fit my needs, and never found one. Then yesterday, I had an idea on how I could implement such a system myself.

Here’s how I’ll try to implement WHS Drive Extender-like functionality. I’m posting this expecting comments and feedback on the choices I made below, so feel free to leave comments about anything. Thanks!

I started with a clean Ubuntu 9.10.

Using the Samba log, and the extd_audit VFS module that comes with Samba, monitor all file activity on the SMB shares: file writes, deletes & renames and directory creates, deletes and renames.

Enable follow symlinks in smb.conf

Each minute (using cron), a parser starts and reads the Samba log (starting where it stopped the last time it ran - saved in a _last_read_ variable in the database). For each extd_audit entry it finds in the log, it inserts a task in the database. When it’s done, it updates the _last_read_ variable.

A task executer runs permanently, and when the I/O on the server is not over a specific threshold, it will start executing the pending tasks. Executing a task means something different, depending on what was logged:

New files: pick X random drives (as defined in the config of the current share), copy the new file to all those drives, then create a symlink to one of those copies in the actual share.

Existing files changed: since a symlink already exists, one copy of the file is already up to date; we just need to find the other copies, and copy the newly changed file over those old files.

Renamed files/directories: (Only the symlink has been renamed.) Find all copies of the file/directory, and rename all of them.

Delete files/directories: (Symlink is already deleted.) Find all copies of the file/directory, and delete all of them.

Executed tasks will be removed from the database (and archived somewhere else).

Every 10 seconds or so, the executer daemon will check again to see if the I/O of the server is too high, and if so, will pause. Obviously, it should ignore its own I/O, since that doesn’t really count as server busyness! This might be a tricky part.

When a hard drive goes missing, a process will walk through the shares, and find all symlinks that point to that drive. All those symlinks will be changed to point to another copy of the file, if one is available. Another copy of the file should also be made on another available hard drive, to keep the files safe. Not sure yet if this process should be manually triggered, or automatically triggered when a drive has been missing for more than X minutes…

So that’s it. I think that pretty much duplicates the only features of WHS I use: files on shares that are important will be available on 2+ physical drives, and the combined storage space of all the hard drives will be available to any share.
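Finding the symlinks affected by a missing drive can be done with GNU find alone. A sketch, assuming made-up paths (shares under a root directory, the dead drive at a known mount point):

```shell
# List every symlink under $1 whose target lives on the missing drive $2.
# Each hit is a file that needs to be re-pointed to a surviving copy.
# Uses GNU find's -lname, which matches on the symlink's target path.
find_orphans() {
  shares_root=$1; missing_drive=$2
  find "$shares_root" -type l -lname "$missing_drive/*"
}

# Usage idea (paths are illustrative):
#   find_orphans /shares /mnt/hdd2
```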

The only downside of all this is that the reported free space of the shares will be the free space on the landing zone, not the actual free space of the complete storage pool. Might find a solution for this at some point… Maybe it’s possible to create a Samba VFS module to handle this..?

[Update] Indeed; you can specify, in smb.conf, an external command that will be used to query the free and total space on the specified share.
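That external command could be as simple as this. Samba’s `dfree command` protocol expects the script to print the total and free space in 1K blocks; the pool member mount points here are made up:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/pool-dfree, wired up in smb.conf with:
#   dfree command = /usr/local/bin/pool-dfree
# Samba expects "<total> <free>" (in 1K blocks) on stdout.

POOL_DIRS="/mnt/hdd1 /mnt/hdd2 /mnt/hdd3"   # made-up pool member mount points

# Sum the size and free space of every pool member, as reported by df.
df -P -k $POOL_DIRS 2>/dev/null | awk 'NR > 1 { total += $2; free += $4 } END { print total+0, free+0 }'
```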

So again, feedback on all this would be welcome.

/me continues implementation now

]]>https://www.pommepause.com/2009/11/how-i-plan-to-implement-my-own-whs-drive-extender-like-system/#disqus_threadFix a suddenly very slow SATA hard drive problemhttps://www.pommepause.com/2009/11/fix-a-suddenly-very-slow-sata-hard-drive-problem/
https://www.pommepause.com/2009/11/fix-a-suddenly-very-slow-sata-hard-drive-problem/Fri, 27 Nov 2009 03:57:30 GMT
So, I was happily copying files around on my Windows Home Server, when I noticed that the speed of the transfers was now 1.5MB/s… Uncool, when copying similar files from the same directories was going at 50-80MB/s minutes earlier.

I tried to think of what I might have changed since it worked fine. I tried resetting the CMOS, and disconnecting all hard drives (IDE, USB, SATA) that were connected, except my primary drive, to no avail. The OS was still very slow to load, and when I did let it load, I still measured 1.5MB/s transfer speeds using HDTune…

I finally found a solution: disconnecting the SATA data cable from the first SATA port on the motherboard (where it had always been connected), and plugging that same cable into the last SATA port. Bingo! Instantly, I was back to transferring files at decent speeds. I don’t know the exact cause of the problem, and I haven’t yet tried that 1st SATA port with other hard drives, to see if the problem is with the port, or the hard drive & port combination. What’s important is that it now works fine!

I have to thank this guy for the idea on how to fix this. I guess I would have tried that at some point, but at least now I know I’m not the only one who had this problem. And I’m reposting the solution here, just to ensure anyone else who faces this particular problem in the future can find the solution faster than I did!

]]>https://www.pommepause.com/2009/11/fix-a-suddenly-very-slow-sata-hard-drive-problem/#disqus_threadRemote-controlled air conditioning using Mac OS X and Shionhttps://www.pommepause.com/2009/07/remote-controlled-air-conditioning-using-mac-os-x-and-shion/
https://www.pommepause.com/2009/07/remote-controlled-air-conditioning-using-mac-os-x-and-shion/Sat, 18 Jul 2009 20:12:19 GMT
My central HVAC system has issues cooling down the 3rd floor of our house during the summer. Even with all the 1st & 2nd floor vents closed, the system, which is in the basement, has difficulty cooling down the 3rd floor. The higher the exterior temperature, the higher the difference in temperature between the 1st and 3rd floors. The average difference is somewhere between 4 and 5°C, and I measured differences as high as 10°C at the beginning of the summer.

That means if I set my (1st floor) thermostat to 23°C, the 3rd floor temperature can reach 33°C. Quite uncomfortable for watching TV or working at our desks / computers. Even more so since I installed a (ceiling-mounted) HD projector to replace our old plasma screen on the 3rd floor, where our home theater is set up. That thing will start making a lot of (fan) noise as the temperature climbs (fixed by another DIY project)… And temperature can climb pretty fast at the highest point of the house!

To solve the problem, I started using my central HVAC system to cool the 1st & 2nd floors only, and I’m using a portable air conditioning unit on the 3rd floor. It’s a 9000 BTU unit I bought a while ago (at Costco), when we lived somewhere else, and it had been sitting in the storage room since we moved. After a couple of tests to make sure it worked fine, and was powerful enough to cool the complete 3rd floor, I installed it ‘permanently’ in the storage room, and attached it using a flexible duct to a hole I made in the storage room wall. I used semi-flexible duct to push the hot air outside. (I didn’t need to cut a hole for that since the storage room is in fact the attic, where there are ventilation holes.)

Here’s that setup in pictures:

To control the AC unit, I use my PowerMac (which is always on), to which a SmartHome PowerLinc USB Controller is connected. I created a small PHP script which runs every minute, determines if the AC should be on or off, and tells Shion to turn it on or off. Shion talks to the USB controller, which in turn talks to the SmartHome ApplianceLinc, which then either activates or deactivates the power outlet. The AC unit’s own switch is always on, so whenever the power outlet is on, the AC unit runs.

My PHP script uses many variables to decide if the AC should be on or off:

Programs: I created programs, and saved them in a database. For example, I have one program which I called “We’re using the 3rd floor when the little ones are asleep”. This program defines the following:

Every day @ 07:00 - 30°C
Every day @ 19:00 - 25°C
Every day @ 23:00 - off
Weekends @ 13:00 - 25°C
Weekends @ 16:00 - 30°C

Basically, I want the temperature to always be kept below 30°C (except during the night), but from 7pm to 11pm every day, and from 1pm to 4pm on weekends, I’d like the temperature to get down to 25°C. This is when we usually use the home theater, since this is when our kids are asleep, and we’re not.

Another program is called “We’re not home”, which I activate when we all leave the house for a couple of days or more (vacation or something). The AC is always off in that program.

The AC unit performance rating: I measured how fast the AC unit is capable of lowering the temperature, and use that number to start the AC early, in order to try to reach the programmed temperature on time. (My current calculated AC performance rating is 0.012°C per minute.) For example, if my program says the temperature should be 25°C at 7pm, the AC will start earlier than that, if necessary, in order to be at 25°C at 7pm. I have implemented a maximum time to start in advance; I won’t allow the AC to turn on more than 3h before a programmed event, even if that means the temperature won’t be reached in time.

Manual override: I can manually override any scheduled program by turning the unit on or off myself. My script will then ‘protect’ my manual setting for 3 hours. At some point, I might change that so that it’s protected until the next programmed event, but I can’t really find a good incentive to implement that at this time.

Temperature thresholds: I selected thresholds to allow the temperature to vary a little above or below the programmed temperature, in order to limit the number of starts / stops of the AC unit. If the programmed temperature is set at 25°C, the unit will work until that temperature is reached, will then stop, and won’t start again until the temperature is 26°C. (My current thresholds are 0°C below, 1°C above.)

My PHP script uses AppleScript (using the osascript command-line executable) to talk to Shion. I had to run the Apache server as myself, instead of the usual nobody/www user, since this is required to be able to interact with any program that I run.
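The decision logic described above (hysteresis thresholds, plus the early-start calculation from the performance rating) can be sketched like this. The real script is PHP; the 0.012°C/min rate, the 0/+1°C thresholds and the 3-hour cap come from the post, while the function names are made up:

```shell
# Hysteresis: keep running until the target is reached; once off, don't
# restart until the temperature is 1°C above the target (thresholds 0 / +1).
# Args: current temp (°C, integer), target temp, current state ("on"/"off").
ac_decision() {
  cur=$1; target=$2; state=$3
  if [ "$state" = "on" ]; then
    if [ "$cur" -gt "$target" ]; then echo on; else echo off; fi
  else
    if [ "$cur" -ge $((target + 1)) ]; then echo on; else echo off; fi
  fi
}

# Minutes of lead time needed to cool from cur to target at the measured
# rate of 0.012°C/min, capped at 3 hours (180 minutes) of early start.
lead_minutes() {
  awk -v cur="$1" -v target="$2" 'BEGIN {
    rate = 0.012; cap = 180
    m = (cur - target) / rate
    if (m < 0) m = 0
    if (m > cap) m = cap
    printf "%d\n", m
  }'
}

# Usage idea:
#   ac_decision 27 25 off   # hot enough -> turn on
#   lead_minutes 26 25      # start ~83 minutes before the programmed event
```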

I made a simple iPhone webpage to display some stats, and allow me to start or stop the AC manually, or change the selected program. I made it accessible from the Internet (running on a non-standard port, and protected using an htaccess password), so that I can control the AC from anywhere.

The same webpage renders quite well in any Safari, so I use the same page on all the computers at home to control the AC when I’m at home, and my iPhone is turned off.

Here’s a screenshot of the iPhone webpage, which looks like a native app once bookmarked on my iPhone home screen!

To measure the temperature, I’m using TEMPer USB devices. I bought two from eBay for a couple of dollars. I found a working command-line Mac (universal) executable that reads the temperature from the TEMPer device, and outputs it to stdout. I read the temperature continually from the device, and save the per-minute average in a database. I can graph the temperature of the 1st and 3rd floors using this data, if I want to. This is what I used to measure the difference in temperatures before I started all this.

If you’d like to see the source of my program.php script, my iPhone webpage, or my database, here’s the very raw package: automate.zip

]]>https://www.pommepause.com/2009/07/remote-controlled-air-conditioning-using-mac-os-x-and-shion/#disqus_threadAuto-start XBMC on Apple TV boothttps://www.pommepause.com/2009/07/auto-start-xbmc-on-apple-tv-boot/
https://www.pommepause.com/2009/07/auto-start-xbmc-on-apple-tv-boot/Fri, 03 Jul 2009 11:15:02 GMT
Update: There is now a much easier way to auto-start XBMC on Apple TV boot. See details here. I’ll leave the post below for posterity…

That’s it. You can then delete the two files left on your Desktop.

TV Forecast widget not working? Here's how to fix it.
https://www.pommepause.com/2009/03/tv-forecast-widget-not-working-heres-how-to-fix-it/
Fri, 06 Mar 2009 04:56:53 GMT
Update: The information below is about TV Forecast version 2.3.5. Since I wrote the post below, Matt Comi released an official fix, namely version 2.4, which now uses TVRage.com as its data source. You should use the official TV Forecast releases (1st link below), instead of trying to hack an old widget’s code like I did below.

TV Forecast is a nice widget that Matt Comi created. You tell it what TV shows you watch, and it will keep track of the upcoming episodes for those shows.

The only downside to that widget is that it data-scrapes TV.com to get its data. Not only is this illegal, but it also tends to break the widget every time TV.com changes its layout in any way.

Getting tired of that, I decided to open the widget’s code, and change the data source to something more stable: TheTVDB.com. They provide a quite stable API to query their database, which is user-maintained. TVRage.com has something similar, but my eyes hurt each time I go on their site, so I picked TheTVDB for now!

Update: Another good reason to pick TheTVDB over TVRage at this time: TVRage XML feeds are currently unavailable.

TheTVDB offers a search service, so changing the search function of the widget was easy enough.

Fetching the next-episode information was trickier. The API doesn’t offer an easy way to do that: it only provides XML data for all the episodes of a show, which is quite a lot to download for a small widget just to know what the next episode is. So I created a middle-man. This data proxy queries the API for an answer, then caches the result for 24 hours (48 hours for series not currently running; 1 week for ended series) before refreshing its information again. That way, each client running the widget can query the data proxy, which can use cached information to return answers very quickly and efficiently.
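The proxy’s cache lifetimes amount to a small lookup keyed on the series’ status; a sketch with hypothetical status labels (the real proxy’s statuses may differ):

```python
# Cache TTLs in hours, keyed by series status (status labels are assumptions).
CACHE_TTL_HOURS = {
    "running": 24,       # currently airing: refresh daily
    "on_hiatus": 48,     # not currently running: refresh every two days
    "ended": 24 * 7,     # ended series: refresh weekly
}

def cache_ttl_seconds(status):
    """How long a cached 'next episode' answer stays valid for this status."""
    return CACHE_TTL_HOURS.get(status, 24) * 3600
```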

Note that the files packaged below are still Matt Comi’s property. I do not claim to own them in any way. I redistribute them here to allow end users to continue using his nice widget until he has time to fix it himself.

So… want to fix your own TV Forecast widget? Here’s how:

Right click ~/Library/Widgets/TV Forecast.wdgt and select Show Package Contents.

This is how it should look before you touch anything:

Copy the files found in this archive one by one into the TV Forecast.wdgt directory. Do not copy the TV folder itself from my archive, or you’ll remove necessary files from the widget; i.e. you need to copy the files inside it, but not the folder itself.

There’s one new file (TvShowParser2.js) that goes in the TV folder, the rest are files I modified, so you need to overwrite the existing files with mine.

Here’s how it should look after copying the new files:

Reload (Cmd-R) the widget, or remove then re-add it to your Dashboard.

Encrypt Gmail offline (Gears) data
https://www.pommepause.com/2009/02/encrypt-gmail-offline-gears-data/
Tue, 03 Feb 2009 23:40:34 GMT
(Note that the following is Mac-related; I only own Mac computers, so I didn’t try to find a solution to this problem on Windows.)

Since Gmail released the Offline feature in Labs, I guess many people have enabled it. I did, as soon as the feature was available in my account. And one of the first things I did after enabling it was to see how I could secure the data it downloads.

Why? Simply because anyone with physical access to your computer can see your Gears database, which, if you enabled Offline mode in Gmail, now includes all your recent emails, including attachments. By encrypting the Gears database, if someone steals your computer, he won’t be able to read it, since it will be protected by a password. And no, just having auto-login disabled doesn’t protect you against this: anyone can insert the Mac OS X install DVD, reset your password, and enter your account. This is why you need either something like FileVault, or encrypted disk images (DMG) for your sensitive data. I use a technique similar to the one described below to secure my important documents, like my tax e-papers, banking info, etc.

I’m not using FileVault; that would have been one way to secure the Gears database, but since I’m using CrashPlan for backups, and prefer to run it all the time rather than only when I’m logged in (being logged in is required for CrashPlan to back up files in a FileVault-ed home directory), I looked for another way.

It seems it’s pretty easy: just create an encrypted DMG with the content of the Gears directory, create a symbolic link to it at the original location, and auto-mount the DMG on login.

Here’s an easy to follow walkthrough of the necessary steps:

1. Start Disk Utility

2. File > New > Disk Image from Folder…

3. Here, you need to find the Gears directory you want to secure. If you use Safari, it’s ~/Library/Application Support/Google/Google Gears for Safari

If you use Firefox, it’s ~/Library/Caches/Firefox/Profiles/something/Google Gears for Firefox

4. Select where you want the DMG to be created. I selected the parent directory (it’s the default). Anywhere is fine.

At the bottom, in Image Format, select read/write. Compressed probably works too, but I doubt you’d really save space; the Gears files are probably compressed enough.

In Encryption, pick anything else than none. I selected 128-bit. Feel free to pick 256-bit if you prefer.

(Close your browser now, to make sure the Gears database is not modified while the DMG gets created. You can re-open it once the DMG is complete.)

Click Save, and select a good password.

If you need to enter a password to login on your computer (i.e. auto-login is disabled), choose the option to save the password in your keychain. Important: Do NOT select this option if you have auto-login enabled. If you do, all this serves no purpose, as anyone who opens your computer will have access to the Gears data!

Side-note: Why is it safe to save the password in your Keychain if you have auto-login disabled? Simply because, unlike your account password, the keychain password can’t be reset. That means if someone steals your computer, he won’t be able to access your keychain, even after resetting the account password using a Mac OS X install DVD.

Side-side-note: Yes, normally, the keychain password follows the account password. So if you change your password by providing your old password (like you usually do), the keychain password will also be changed. But this is not true when you reset your password without providing your old password, because Apple designed the keychain to be secure against such attacks.

Enough babbling; on with the rest of the procedure.

5. Once the DMG is complete, mount it (double-click it).

6. Delete the original Google Gears for … directory.

7. Open Terminal, and enter the following commands (depending on what browser you use):

If you use Safari, execute this:

```
cd 'Library/Application Support/Google/'
ln -s '/Volumes/Google Gears for Safari'
```

If you use Firefox, execute this:

```
cd Library/Caches/Firefox/Profiles/*.default
ln -s '/Volumes/Google Gears for Firefox'
```

8. Open System Preferences > Accounts.

Select the Login Items pane.

Drag-and-drop the DMG from the Finder into Login Items list.

This will auto-mount the DMG when you login, so it’s always available.

And you’re done. Now, the Gears data is on an encrypted disk, which is only available with the password you provided, or with access to your keychain (which requires your account password; hope it’s a good password too, and not your wife’s name!)

If you have suggestions on how to improve this, or how to do the same thing on Windows, feel free to comment below. I’ll be happy to link to other related information.

The principle is easy enough: install the software client (available for Windows, Mac OS X or Linux), change the default selection of files to back up (if you want to), and you’re on your way to never losing a file again!

Where are your backups stored? You have the choice.

One option is to pay CrashPlan developers (the company is called Code 42) a small fee each month for them to store your backup: this is what they call CrashPlan Central. Pricing and details from their FAQ: 50GB of storage for 5$/month, 0.10$ per additional GB.
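At those quoted rates (5$ covers the first 50 GB, then 0.10$ per additional GB), the monthly bill works out as follows; a quick sketch (Python for illustration, function name is my own):

```python
def crashplan_central_monthly_cost(gb_used):
    """Monthly CrashPlan Central cost in USD, per the quoted FAQ rates:
    $5 covers the first 50 GB, then $0.10 per additional GB."""
    base_gb, base_cost, per_extra_gb = 50, 5.00, 0.10
    extra_gb = max(0, gb_used - base_gb)
    return base_cost + extra_gb * per_extra_gb
```

For example, 50 GB costs 5$, 75 GB costs 7.50$, and 100 GB costs 10$.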

The second option is to back up to other computer(s) you own. Just install the client on another computer, re-use the same email address & password you used to create your account on your personal computer, and you’ll be able to select those computers as backup destinations.

The third option is to back up to **friends**. Friends are anyone who installed CrashPlan and allowed you to use their computer as a backup destination. This is probably the best option for most people, especially if you use it in both directions: allow your friends to back up to your computer(s), so that you’ll store their backup on your hard drive, and they’ll store yours on theirs. Everyone wins! Note that backups are **encrypted** before they are sent to remote destinations, so your friends won’t be able to see the data you’re storing on their hard drive. Only you will be able to restore the data to its original usable state.

How do I use CrashPlan

I have a Windows-based PC that I use as my file server at home, so I’m using it as a backup destination.

I was already paying (50$ US per month) for a dedicated Linux server in a data center in California. I’m using this computer as a backup destination too.

I installed CrashPlan on my parents’ computer, and added them as my friends. I allowed them to back up to my home file server.

Costs

To back up to others, you’ll need to pay a one-time fee of 20$ US (or 60$ US for the PRO version, which offers a couple of added features), after the usual 30-day trial.

It costs nothing to run the client in backup destination mode; i.e. if you install CrashPlan on a computer only to use it as a backup destination, it doesn’t cost anything. You only have to buy licenses for computers which you want to backup.

Support

I almost always received answers within 24h to emails sent to Code 42 support. In all cases, they were able to help me, or if I asked for a new feature or fix, it was always implemented or fixed in a subsequent version of the client (which auto-updates itself, by the way). Very good support.

Recommendation

Get it! It just works, and it’s an easy way to back up all your family’s and friends’ data.

I have a Gallery where all our family pictures can be seen. My mom, dad and brother all frequently upload pictures to this site, which makes it a central point for all our family’s digital photography sharing & archiving needs. Encouraged by the fast growth of the Facebook social-networking site, I wanted to be able to easily transfer pictures from my Gallery into Facebook photo albums so that my Facebook friends could see my little guy.

Someone on the Gallery forums had already shown interest in such an application, so I started there and created a new Facebook application that could be used to import pictures. Not that difficult; just a simple PHP application with forms and a session to gather the user’s Gallery URL, and which album and photos he wants to import, then repeated calls to the Facebook API (Platform) to upload each picture one at a time.

Challenges I faced:

Uploading too many pictures at once caused timeouts. Facebook isn’t very patient when waiting for an external application (web page) to complete. I had to upload one picture per page, and repeatedly reload this page until all pictures had been uploaded.

Facebook limits photo albums to 60 pictures max. I had to code something that would create a new album each time the current album was filled with pictures.

eAccelerator (0.9.5) has a known bug where try…catch blocks in PHP code are ignored, and Uncaught Exception errors are generated instead. This caused the code that handled expired sessions to fail, until I disabled the eAccelerator Optimizer (php_flag eaccelerator.optimizer 0 in a .htaccess file).

Watermarked pictures found in a Gallery are fetched differently than normal pictures, so I had to modify the helper script to look for watermarked pictures, and use those if any are found.

Multisite Gallery installations require modifications for the helper script to work correctly.
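Of the challenges above, the 60-picture album limit is the most mechanical: chunk the photo list and number the overflow albums. A sketch (Python for illustration — the actual app was PHP, and the naming scheme is my own invention):

```python
def split_into_albums(photos, album_name, max_per_album=60):
    """Split a photo list into albums of at most max_per_album pictures,
    numbering the overflow albums: "Trip", "Trip (2)", "Trip (3)", ..."""
    albums = []
    for i in range(0, len(photos), max_per_album):
        n = i // max_per_album + 1
        name = album_name if n == 1 else "%s (%d)" % (album_name, n)
        albums.append((name, photos[i:i + max_per_album]))
    return albums
```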

Now when I want to import Gallery pictures into Facebook, I’m just a few clicks away!

Review of Netgear HDX101 Powerline HD Ethernet Adapters
https://www.pommepause.com/2007/06/review-of-netgear-hdx101-powerline-hd-ethernet-adapters/
Fri, 22 Jun 2007 03:00:00 GMT
Why should you not buy the Netgear HDX101 Powerline HD Ethernet Adapters (or any other Powerline network adapters, for that matter)? Offering 200 Mbps of network bandwidth, these adapters look like a very good replacement for a slow wireless network you might have. No long cable required; you simply plug one adapter in room A, another adapter in room B, and attach both to the respective room’s LAN / network appliance to create an instant long-range LAN in your house. Does that look too good to be true? I’d say it is.

I bought those things 2 months ago (the HDXB101 package includes two HDX101 adapters): 160$ US + shipping on buy.com. Since then, I had time to:

Be happy to receive them and thrilled to try them.

Be happy about how easy they were to install.

Be disappointed about the performance I got on the first try (600 KBps).

Lose a considerable amount of time upgrading their firmware, and testing them in different configurations, with different devices.

Lose what seems like a colossal amount of time with Netgear’s technical support, to finally be told I should return the units through RMA to get new ones.

Lose my patience too many times to count while talking to an Indian technical support representative, trying to explain I was told to get an RMA number in online case #XYZ, and trying to spell my address countless times (and he still got it wrong!)

Send back the adapters at my expense.

Open a new technical support case online to give them my complete address in writing, so they would fix what the guy I spoke to wrote in my file!

Wait 3 weeks for nothing to happen, lose patience, and open a new technical support case online asking why I wasn’t getting my new adapters back.

Wait 1 week to receive Netgear’s package.

Be called by DHL support to ask what my real address was, because the Indian technical support guy had written something incomprehensible as the address, even though another technical support technician had assured me that my address had been corrected per my last online tech support case.

Be somewhat happy to finally receive it at work, and impatient to try them out again at home that night, even if I was pretty sure it wouldn’t change anything.

Be very angry when I opened the package at home, and found only one HDX101 adapter in the box they sent me.

Open another tech support case online to tell them they didn’t send me the right product back! I also mentioned they didn’t send it to the right address, and added a reference to the other tech support case I had previously opened for that.

Wait another week to receive Netgear’s package.

Be called by DHL support again to ask me what my real address was, because they sent it once more to the unintelligible address!

Finally receive the second HDX101 adapter.

Test the new adapters once more in many configurations, and with many devices.

Open another tech support online case mentioning all the tests I did and the crappy performance I was getting.

Get a message back saying they would (finally!) escalate my problem to a level 2 engineer.

Receive a nice message from the level 2 engineer, basically telling me that the adapters would never work at 200 Mbps except in lab conditions. Here are parts of his last message to me:

When opting to use powerline networking, it is important to understand that currently, no standards exist regarding data communications over electrical lines. I have the HDXB101 myself, and I do not get the 200 Mbps speed. Most of the time it is between 60 and 90 Mbps, and on rare occasions I will see a little over 100. My brother wanted to test the kit in his own home which was built around 3 years ago. He could not even get a sync between the 2 devices.

It may very well be that your home’s electrical environment is not favorable for powerline networking. It is clear that this is not a hardware problem since replacing the device yielded the same results. At some point there will be universal standards for data communications over electrical wiring. However, until that time, the results will vary from location to location.

Nice!

So bottom line: if you want to try these new things, make sure you buy them from a retailer that will take them back if you’re not satisfied. Or wait for the future to arrive, since this seems to be the best advice Mr. 2nd Level had for me. And if you want good technical support, don’t deal with Netgear!

How to fix non-working Folder Action on an external hard disk
https://www.pommepause.com/2007/06/how-to-fix-non-working-folder-action-on-an-external-hard-disk/
Thu, 21 Jun 2007 03:00:00 GMT
I had a problem with Folder Actions: they always stopped working after a while! Quite annoying. After some digging on Google, I found a post on the MacOSXHints forums where a user mentioned that a FA attached to a folder on an external HDD would always stop working after un-mounting / re-mounting the hard disk; exactly what was happening to me! So I started fiddling with AppleScript, and found a way to detach then re-attach a FA associated with a folder. Next, to automate the execution of this fix, I needed a way to execute it when my external HDD was mounted. Another FA to the rescue, this one attached to the /Volumes folder: each time a new folder appears in /Volumes (like my USB drive, for example), my FA script is executed, and it detaches then re-attaches the original FA script.

```applescript
on adding folder items to target_folder after receiving added_items
	set volumeName to "USBDrive1" -- This is the display name of your external HDD volume; any of them if you have more than one.
	repeat with added_item in added_items
		if the displayed name of (info for added_item) is equal to volumeName then
			volumeMounted()
			exit repeat
		end if
	end repeat
	quit application "System Events"
end adding folder items to

on volumeMounted()
	tell application "System Events"
		-- Re-Attach Folder Action
		attach action to folder "USBDrive1:Music:TV Shows" using "Mac Mini HD:Users:me:Library:Scripts:Folder Action Scripts:Import TV shows into iTunes.scpt"
		-- First parameter is the folder you want to attach to, the second is the scpt file you want to execute as a FA for that folder.
	end tell
end volumeMounted
```

To use this script: attach this FA script to the /Volumes folder and it should fix this problem. To attach a FA to /Volumes, hit Cmd-Shift-G after clicking the + in the folder column of Folder Actions Setup.app, enter "/Volumes", click Go, then Open.

After I finished reading my latest book, which is what I usually do during my daily commute to and from work, I started looking for a way to read my RSS feeds on a Palm device during that time. First, a friend of mine lent me his Palm Zire 31 (thanks!). Next, I started looking for a way to get unread RSS items from Google Reader onto the Palm device. I was pretty happy with Google Reader itself, since I could read news at home or at work (and soon enough, between the two) and not have to deal with duplicates, etc. Using a desktop client would have been much more complicated, especially since I use a Mac at home and a Windows PC at work. So, after looking for a while for a Google Reader API, or some other way to download unread RSS items from Google Reader, I gave up.

I then started to implement my own Google Reader, which I dubbed GRC - Google Reader Clone.

If you think about it, it’s quite simple to implement. You need to save subscriptions (RSS feed URLs, basically) in a DB table, and at regular intervals, download all of those URLs, parse them, and save the results in another DB table. I started with a simple PHP page which did just that, and used a cron job to call that PHP script every 6 hours. That way, all RSS articles are saved in the DB, waiting to be read.

I then added a couple of flags to each RSS article (starred, unread, new) to be able to sort them, show only one type of item, or star / keep unread specific articles; a-la Google Reader indeed.

I then created a pretty simple PHP web page which listed unread article titles, and when I clicked one of them, used AJAX to download the article from the DB and show it in the right-hand panel. Again, very Googlesque!

And finally, I created a PHP web page which would load a predetermined number of unread items, show all of them in a very simple layout, and mark all of them as read automatically. This is the web page I download when I sync my Palm. That way, I have RSS articles on the Palm which I can read whenever, and they are all marked as read in the DB already, so I won’t have to go over them again when reading other articles at home or work.
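The storage side described above is just two tables and a few flags; a minimal sketch using SQLite (table and column names are my own guesses for illustration — the actual GRC code was PHP):

```python
import sqlite3

# Hypothetical schema for a minimal Google Reader clone:
# one table of feed URLs, one table of fetched articles with flags.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE subscriptions (
    id  INTEGER PRIMARY KEY,
    url TEXT NOT NULL UNIQUE
);
CREATE TABLE articles (
    id      INTEGER PRIMARY KEY,
    feed_id INTEGER REFERENCES subscriptions(id),
    title   TEXT,
    content TEXT,
    starred INTEGER DEFAULT 0,  -- flags a-la Google Reader
    unread  INTEGER DEFAULT 1,
    is_new  INTEGER DEFAULT 1
);
""")

def mark_all_read(db):
    """What the PDA page does after serving articles: one statement."""
    db.execute("UPDATE articles SET unread = 0, is_new = 0 WHERE unread = 1")
```

A cron-driven fetcher would INSERT new rows into articles every few hours; the reading pages then just filter on the flags.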

All this left me with a very usable web-based RSS reader, missing a couple of details, all of which I added at some point:

a subscriptions management page

an RSS feed of my starred items to be able to share them with my friends

a link to trigger the download of all RSS feeds manually from the reading page

a search functionality (which was really missing from Google Reader! Not sure if the new version has it - I stopped using Google Reader just before they made their last big update)

a way to lock down the application so that I would be the only one able to use it - I used htaccess IP address Allow rules, mixed with password authentication for when I’m not at home;

keyboard shortcuts to allow easy reading with a minimum of efforts

and a way to mark articles on the PDA for later reviewing. I solved that last problem by writing the article numbers next to their titles on the PDA page. This allowed me to note down article numbers on the PDA while I was travelling; when I got home, I would enter those numbers in a special field on the reading page to review them all. Easy enough.

Blowfish encryption plugin for Colloquy
https://www.pommepause.com/2007/06/blowfish-encryption-plugin-for-colloquy/
Tue, 19 Jun 2007 03:00:00 GMT
Do you want to skip all this text and just get the Colloquy Blowfish Plugin? Then click here.

Lately, the IRC channels where I hang out started using Blowfish encryption for all messages sent to the channel. mIRC and eggdrop support Blowfish pretty easily, but there was nothing for Colloquy (the IRC client I use - Mac OS X only) to encrypt & decrypt Blowfish-encrypted messages.

So I started looking for a way to encrypt & decrypt Blowfish messages using a Colloquy plugin. Colloquy allows all sorts of plugins: AppleScript, Python, Obj-C, etc. I tried the Python template plugin, but it wouldn’t even load in Colloquy, spitting out an error on load. So I downloaded existing plugins to see how they worked. Most of them seem to use the AppleScript API, and it seemed easy enough for my purpose.

Now I needed code to encrypt & decrypt Blowfish messages. Google suggested a Java class that allowed easy encryption & decryption. Adding a simple main method to it, I was able to encrypt & decrypt messages via the command line by passing the key & message as arguments. While this worked just fine, launching the JVM each time I received or sent a message was quite resource-consuming, even with my dual G5.

So I started looking for C or C++ code that would do the same thing. After trying many implementations, I decided to try to compile the eggdrop Blowfish module by itself. I had to change a couple of things to make the blowfish.c file compile alone, but I made it. I also added a new main function to be able to use the encrypt & decrypt functions from the command line, which worked just fine, and was much faster than the previous Java implementation.

Note to Java haters: The Java implementation I used was slower because I had to start the JVM each time I wanted to encrypt or decrypt a message. If I had the JVM already running, with a service listening for encrypt & decrypt commands on a socket or something, it would very probably have been almost as fast as the C implementation I currently use. I just wanted the most minimalist approach I could find, so I opted for the C program.

So now, I was receiving encrypted messages in Colloquy, my AppleScript plugin was called for each message, and I was able to change the received message into its unencrypted form by calling my Blowfish command-line program. Sent messages followed a similar path: Colloquy > AppleScript plugin > command-line encryption > IRC server.

The only problem I had now was special characters. Bold, underline & color IRC characters were not correctly handled by the AppleScript plugin. So I took the HTML-ized version of the message to be sent and translated all the HTML tags I found into IRC characters. For received messages, I did the reverse: I translated IRC characters into HTML tags and passed the new string to Colloquy for display.
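The IRC-to-HTML direction can be sketched as follows; this is an illustrative Python version (the real plugin was AppleScript), covering only the mIRC-style bold (\x02) and underline (\x1f) toggles, not the trickier color codes (\x03):

```python
def irc_to_html(text):
    """Translate IRC bold/underline toggle characters into HTML tags."""
    out, bold, under = [], False, False
    for ch in text:
        if ch == "\x02":                       # bold toggle
            out.append("</b>" if bold else "<b>")
            bold = not bold
        elif ch == "\x1f":                     # underline toggle
            out.append("</u>" if under else "<u>")
            under = not under
        else:
            out.append(ch)
    # Close any formatting left open at end of message.
    if bold:
        out.append("</b>")
    if under:
        out.append("</u>")
    return "".join(out)
```

The HTML-to-IRC direction is the mirror image: replace each tag with the corresponding toggle character.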

Limitations: For sent messages, I only translate specific colors into standard IRC colors; I used the quick picks at the bottom of the “Show Colors” panel in Colloquy. So the colors that will be sent correctly when encrypted are: #FF0000 (red), #44B958 (green), #0013FF (blue), and #EBB51B (orange). All other colors will be replaced with black when sent. Received messages don’t have this limitation, so colors in received messages should pretty much always look OK. Also, my plugin doesn’t support background colors, so messages containing those might look weird when decrypted.

That’s it. I now have working Blowfish in my Colloquy, and so can you.

You can download the latest version of the Colloquy Blowfish Plugin (version 04) for PowerPC or Intel.

Changelog:

Version 02 fixes a small problem with messages containing colors > 9 (IRC color codes 10 to 15); those messages would not be shown in Colloquy because of this problem.

Version 03 allows sending or receiving unencrypted messages in normally-encrypted channels or private messages, for messages that start with +p.

Version 04 fixes a problem with % in sent messages, and adds support for multiple channels using just one plugin.

FrontRow Enabler for 10.4.8
https://www.pommepause.com/2007/06/frontrow-enabler-for-10-4-8/
Mon, 18 Jun 2007 03:00:00 GMT
Do you want to skip all this text and just get the new FrontRow Enabler for 10.4.8? Then click here.

Ok. So I wanted to install FrontRow 1.3 on my old PowerMac G5… Downloaded FrontRow Enabler 1.3, followed the instructions, rebooted… And bam, no more login screen. How fun. Checking the comments on Andrew Escobar’s page, I realized 10.4.8 wasn’t supported just yet. So I used my trusty FireWire cable to fix my PowerMac (see how in the comments of Andrew’s page), and started to look for a way to patch FrontRow Enabler 1.3 to make it compatible with Mac OS X 10.4.8.

Here’s what I did.

Mount Andrew Escobar’s FrontRow Enabler DMG, and open /Volumes/Enabler_1.3/Enabler.app/Contents/Resources/Scripts/main.scpt. This is the script executed when you run Enabler.app.

Use Pacifist to extract /System/Library/PrivateFrameworks/BezelServices.framework/Versions/A/BezelServices and /System/Library/LoginPlugins/BezelServices.loginPlugin/Contents/MacOS/BezelServices from MacOSXUpdCombo10.4.7PPC.pkg.

Those are the two files that FrontRow Enabler is patching when you click the ‘Enable FrontRow’ button; I found this info in main.scpt

```
cd ~/Desktop/10.4.7-BezelServices.framework/Versions/A/
cp BezelServices BezelServices.bak
bspatch BezelServices BezelServices.patched ~/Desktop/frameworkpatch
```

Use hexdump to be able to see the binary data of each file in clear text:

```
cd ~/Desktop/10.4.7-BezelServices.loginPlugin/Contents/MacOS
hexdump BezelServices > BezelServices.hex
hexdump BezelServices.patched > BezelServices.patched.hex
```

```
cd ~/Desktop/10.4.7-BezelServices.framework/Versions/A/
hexdump BezelServices > BezelServices.hex
hexdump BezelServices.patched > BezelServices.patched.hex
```

diff the original and patched files to see what the patches changed:

```
cd ~/Desktop/10.4.7-BezelServices.loginPlugin/Contents/MacOS
diff BezelServices.hex BezelServices.patched.hex
```

```
< 000fb60 4bff fed1 2f83 0000 419e 0020 8001 0058

< 00026f0 4800 01d1 8101 0058 3821 0050 7f83 e378
> 00026f0 4800 01d1 8101 0058 3821 0050 3860 0003
```

Ok, so now I know what bytes needed to be changed in 10.4.7 to enable FrontRow. Now I just need to find the same bytes in the new 10.4.8 files, and create patches for those files.

```
00026f0 4800 01d1 8101 0058 3821 0050 7f83 e378
```

Well, what do you know… Exact match. And at the exact same location (00026f0) as in the 10.4.7 file; I guess this patch doesn’t have to be changed after all.

Close to a solution now; just need to create a new .pluginpatch that will patch the correct byte, and it should work fine.

```
sudo port install bsdiff
cp BezelServices BezelServices.patched
```

Now to patch the correct byte, I need a hex editor. Seems 0xED would do fine.

[Update] Elgato now offers Canadian EPG in EyeTV 3.1 (they’re now using TV Guide as their source for both US and Canada). The first year is free for all users, but starting in 2010, we’ll need to pay 20$ US a year to continue receiving the EPG.__I own a Mac-based DVR for quite some time now; a Miglia EvolutionTV. Since I bought it, I switched from the packaged EvolutionTV software to the more mature EyeTV software package (non free).Recently, EyeTV added a full screen interface which integrates beautifully with Front Row. The actual full screen interface for EyeTV is actually quite the same as the Front Row interface, and one can launch Front Row from a menu item in there.This made using the EyeTV for more than just recording an actual option.The only problem I had left with the EyeTV software was the EPG (Electronic Program Guide); there is no EPG support for canadian users!This was a fact of life I lived with ever since I bought the EvolutionTV hardware, until I gave up waiting for Elgato to release an EPG for Canada.Taking the matter into my own hands, I created a TitanTV (US-only EPG) account, configured EyeTV to download TitanTV data, and sniffed the HTTP packets exchanged between my computer and TitanTV’s server. Luckily, nothing was encrypted. I was able to intercept the provider lookup from a zip code, the channels lineup lookup using the chosen provider, and finally the EPG data download using the chosen lineup.I then added an entry to my own web server in my Mac’s /etc/hosts file, and started to create PHP scripts on my server that would answer EyeTV’s requests for EPG data.Faking the provider list and the channels lineup was easy enough; a simple XML format was used.For the EPG data, I first had to find valid canadian EPG data. A big thanks to Zap2It for that! 
They provide EPG data through a web service (SOAP), free of charge for personal use. :)

I then fiddled with PHP's SOAP capability and Zap2It's web service until I was able to successfully pull the correct data from their server (compressed, to lower the amount of data transferred between the two servers). To make sure I didn't download EPG data from Zap2It for no reason, I added a one-day disk cache (using PHP's serialize/unserialize functions) for this data.

Half the work was done. The only thing left was converting the actual EPG data into the TitanTV XML format. I struggled for some time with timestamp conversion (TitanTV uses a base-time value, and schedule timestamps are relative to that base time) and with program IDs (EyeTV doesn't support alphanumeric IDs, so I had to convert Zap2It IDs into pure numbers), but I was finally able to completely simulate TitanTV data from my own server.
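The caching logic amounts to "refresh at most once a day". A minimal sketch of that idea in shell (the real implementation used PHP's serialize/unserialize; `fetch_epg` here is a stub standing in for the Zap2It SOAP call):

```shell
CACHE=./epg-cache.xml

fetch_epg() {
  # Stub: the real version would call the Zap2It web service over SOAP.
  echo '<epg>fresh data</epg>'
}

# Refresh the cache only if it is missing or older than one day (1440 minutes).
if [ ! -f "$CACHE" ] || [ -n "$(find "$CACHE" -mmin +1440)" ]; then
  fetch_epg > "$CACHE"
fi

cat "$CACHE"
```

The `find … -mmin +1440` test matches the file only when it is more than a day old, so repeated runs within the same day serve the cached copy without touching the upstream server.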

The result:

I’m very satisfied with the results. I can now schedule recordings using valid EPG data for my provider, and browse the currently playing shows easily, all from the full screen interface of EyeTV. :)

[New] I added support for XMLTV as a data source, as an alternative to Zap2It. This should allow people in Europe to use it, with whichever XMLTV grabber fits their needs. For Canadian users, there's really no need to use XMLTV, since XMLTV's North American support comes from Zap2It anyway; it's better to just use the Zap2It web service directly.

]]>https://www.pommepause.com/2007/06/adding-epg-for-canada-to-eyetv-software/#disqus_threadCompiling PHP 5 on Mac OS Xhttps://www.pommepause.com/2007/06/compiling-php-5-on-mac-os-x/
https://www.pommepause.com/2007/06/compiling-php-5-on-mac-os-x/Sat, 16 Jun 2007 03:00:00 GMT
Not that easy…

Note that the following will only work on Tiger, not on Panther or Leopard!

]]>https://www.pommepause.com/2007/06/compiling-php-5-on-mac-os-x/#disqus_threadCanadian Holidays in iCalhttps://www.pommepause.com/2007/06/canadian-holidays-in-ical/
https://www.pommepause.com/2007/06/canadian-holidays-in-ical/Fri, 15 Jun 2007 03:00:00 GMT
I publish and maintain a calendar of all the Canadian holidays. I started it in iCal.app (on my Mac) and was publishing it via WebDAV on my web server, but I have since imported it into Google Calendar, where I now maintain and share it.

]]>https://www.pommepause.com/2007/06/canadian-holidays-in-ical/#disqus_threadActivism: RapidWeaver Contact Form vulnerabilityhttps://www.pommepause.com/2007/06/activism-rapidweaver-contact-form-vulnerability/
https://www.pommepause.com/2007/06/activism-rapidweaver-contact-form-vulnerability/Thu, 14 Jun 2007 03:00:00 GMT
RapidWeaver is a nice piece of software for Mac OS X that allows people with no knowledge of HTML/CSS whatsoever to create very nice websites. (This website was created with RapidWeaver.) It comes with themes, page templates, etc. One of those page templates is a PHP contact form.

It has come to my attention (from the RealmacSoftware support forums) that the PHP code generated by RapidWeaver (version 3.2.1 or earlier) is vulnerable to mail header injection attacks. I created web pages in both French and English on how to temporarily fix this vulnerability until RealmacSoftware releases RapidWeaver 3.5, which is supposed to close this issue.

I also did a quick search on Google to find RW-created contact forms, and tried to exploit them. Each successful exploit was logged, and a warning email was sent to the webmaster with links to the RMS forums and to the above page on how to fix the vulnerability. I received a couple of negative answers about this, but many, many more positive replies, thanking me for the warning or asking me for help implementing the fix.
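For context, mail header injection works by smuggling CR/LF characters into a form field that ends up in an email header, letting an attacker append headers like extra Bcc recipients. The defensive check is simply to reject (or strip) any header value containing those characters; a minimal sketch of that check (in shell here for illustration, while RapidWeaver's generated form is PHP):

```shell
# A form value is only safe to place in a mail header if it
# contains no carriage returns or line feeds.
is_header_safe() {
  clean=$(printf '%s' "$1" | tr -d '\r\n')
  [ "$clean" = "$1" ]
}

is_header_safe 'user@example.com' && echo safe
is_header_safe "$(printf 'a@b.com\nBcc: victim@example.com')" || echo blocked
```

The second call simulates an injection attempt: the embedded newline would otherwise start a new header line in the outgoing email.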

Why did I do this? I feel that when you have the resources and know-how needed to help people and make the Internet a better place, it's never a bad idea to use them and act. That is, when time permits!