Archive for the ‘Cocoa’ Category

Yes, the footer displaying the current mouse-over link in the beta is back.

Brent Simmons, a developer I have never met, has sent me a Christmas gift. As those of you in the beta know, the web view that displays the details for an item is all brand new in the latest version and based on WKWebView. One of the features that got lost in the transition was the mouse-over link being shown in the footer bar.

The method for showing this information exists in WKWebView but is marked with the dreaded underscore prefix, which means it’s for Apple’s use only. I left it as a todo a few weeks back; I figured that with some JavaScript I could get a callback when the mouse entered a link, but I did not want to switch from programming in Objective-C to JavaScript at that exact moment.

Luckily for me, Brent ran into the same issue, and his project being open source and MIT-licensed saved me a lot of time. I was able to piggyback off his code without having to write any JavaScript, simply using his.

For those looking for an Objective-C version, here is mine. I have templates being loaded from many places, including user-generated ones. So instead of embedding the JavaScript in the file, I inject it with the same WKUserContentController that handles the callback.
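Roughly the shape mine takes – a sketch, with the message handler name, the footer method and the JavaScript reconstructed for illustration rather than copied from either project:

```objc
// Sketch: inject the mouse-over JavaScript with a WKUserScript and receive the
// hovered link back through a script message handler.
- (WKWebViewConfiguration *)detailViewConfiguration
{
    NSString *source =
        @"document.addEventListener('mouseover', function (event) {"
         "    var link = event.target.closest('a');"
         "    if (link) { window.webkit.messageHandlers.mouseDidEnterLink.postMessage(link.href); }"
         "}, true);";
    WKUserScript *script = [[WKUserScript alloc] initWithSource:source
                                                  injectionTime:WKUserScriptInjectionTimeAtDocumentEnd
                                               forMainFrameOnly:NO];

    WKUserContentController *controller = [[WKUserContentController alloc] init];
    [controller addUserScript:script];
    [controller addScriptMessageHandler:self name:@"mouseDidEnterLink"];

    WKWebViewConfiguration *configuration = [[WKWebViewConfiguration alloc] init];
    configuration.userContentController = controller;
    return configuration;
}

// WKScriptMessageHandler protocol method: update the footer bar.
- (void)userContentController:(WKUserContentController *)userContentController
      didReceiveScriptMessage:(WKScriptMessage *)message
{
    [self showLinkInFooter:message.body]; // showLinkInFooter: is an assumed method
}
```

Because the script is injected rather than embedded, every template – including user-generated ones – gets the behavior for free.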

Now included in the latest beta for those who mourned the lost functionality.

On a related note, I can’t wait for further improvements on Brent’s latest newsreader project: Evergreen. I’ve been using version 3.1 of NetNewsWire for ages now, as the new versions were never as great as the original. Sadly, feeds are starting to go dead, I think due to some type of HTTPS incompatibility.

With this Christmas gift we are getting closer to that inevitable 6.0 release. But first it’s time to take a little break. Happy holidays everyone!

Although the internet is overflowing with deep data, most of it is trapped in old APIs that make working with the data complicated. Wikipedia, one of the largest repositories of contributed information in several languages, still relies on an old MediaWiki API that is almost useless for any meaningful search. Sites such as DBpedia have sprung up to deal with these shortcomings, trying to wrangle all this information into a standard, meaningful API. But how great would it be if Wikipedia’s developers actually created and maintained a powerful API themselves? It would mean giant leaps in the integration and usefulness of Wikipedia. All of a sudden any app would be able to take advantage of all the crowdsourcing done in Wikipedia in a reliable way.

As much as I wish the above for Wikipedia, this blog post is about developers who have done just that. The Discogs developers have been constantly maintaining their API. They had recently gone down a path I did not approve of, adding limits to the number of images that could be downloaded per account and requiring OAuth verification that was complex and cumbersome to implement. They are now fixing those issues while maintaining backwards compatibility, so the good news is that our search plugin for Discogs will continue to work without an update. We had already used the Google OAuth framework to implement authentication with our own private account, to make it as easy as possible for users. But new developers wishing to integrate Discogs will have a much easier time using a simple key/secret pair. The 1000-image limit has also been removed, and the signed URL is now provided directly in the downloaded information. This is the kind of simplicity one wishes for in all APIs.

I mention Discogs by name only because they are the latest site to add simple new features that make accessing their data easier. But the truth is many developers have been doing an extraordinary job sharing their data via an API. The TMDB API always comes to the top when I am asked about good API design; it has been easy to work with and 100% reliable for DVDpedia. The TVRage and TheTVDB APIs are more complex but work well all the same. In the music world MusicBrainz has been solid. MovieMeter has been moving backwards in features, since maintaining the original API required a lot of work for a single developer. BoardGameGeek has an API embedded into their system that works, but it could do with some 2015 technology improvements. Speaking of improvements, the API that could really do with some love is the Library of Congress’s. Although it now uses the newer SRU standard, it’s still based on a system that is decades old; it seems updating the way libraries share their catalogs is a gigantic undertaking. The above list is not exhaustive, as there are many other search sites with APIs that we are thankful exist: OFDB, Open Library, Google Books, Freebase (bought by Google), Amazon AWS and AbeBooks.

We look forward to a world where standards improve; the JSON format, for example, has been marvelous at making API integration easy and consistent. Some day information on the internet will be truly available to our future computer-program overlords, which will do all the communicating for us and present the information we want on command. Hopefully they will even be able to update themselves as data improves. In the meantime we hope developers continue the hard work of maintaining all these endpoints and adding more where needed.

From time to time we receive requests that are easy to implement but don’t really fit in with the programs because they seem very specific to one user’s needs.

But maybe we’re wrong to think that others wouldn’t enjoy these little fixes too. So I have started a new plugin that serves as a repository of commands created for specific users, to share them with all. You can install the plugin automatically by clicking here, or download it and double-click to install. The installer is specific to DVDpedia, but the plugin will work in the other programs as well: just download the file, change the ending from “.pediaextra_d” to “.pediaextra_b” (Bookpedia), “.pediaextra_c” (CDpedia) or “.pediaextra_g” (Gamepedia) and double-click the file to install.

The plugin is called Title Case after the command that initially started it all and the commands will appear under the menu Movie (Book, Album, Game) > Fixes / Links as well as in the contextual menu for an entry.

Title Case: Replaces the current title of the selection with the properly capitalized version based on John Gruber’s algorithm. I used the Objective-C port kindly created by Marshall Elfstrand (I couldn’t resist a website with such a great name).

Languages and Subtitles Alphabetically: Places the languages and subtitles in DVDpedia in alphabetical order.

Fix Spaces: Turns dashes into spaces and removes double spaces from the title.

Duration to Hours: Changes duration from 123 to 2:03.

Rename Linked File to Title: Updates the name of the linked file to reflect the title. So a file called AAA-1023.mp4 linked to a DVDpedia entry Star Wars: Episode III - Revenge of the Sith will become Star Wars: Episode III - Revenge of the Sith.mp4.

Show in Finder: Reveals the linked file in a Finder window.

Create Cover from File: Replaces the cover image with a screen grab 10 seconds into the linked film.

The source code is clean, and if you’re looking to add a new command to the Pedias this might be the plugin to start from, as it takes care of a lot of the boilerplate code; simply copy and paste one of the existing menu commands.

It might also be useful to rebuild the plugin if you find yourself using a command frequently, as this would allow you to add a keyboard shortcut to the command. For example, to make command-shift-L the keyboard shortcut you would add:
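Assuming the plugin creates its commands as NSMenuItems, the shortcut is set on the item when it is built; the menuItem variable here is illustrative:

```objc
// command-shift-L: an uppercase key equivalent implies the shift key,
// so only the command modifier needs to be set explicitly.
[menuItem setKeyEquivalent:@"L"];
[menuItem setKeyEquivalentModifierMask:NSCommandKeyMask];
```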

Currently the source code is available as a zip archive; in the future it will be up on a version control system so that we can all update it. In the meantime do send any useful improvements for inclusion.

Update: There is a new menu command under Links that creates cover images from linked files that QuickTime can understand. This new command makes the plugin 10.7+ only, as it requires the AVFoundation.framework included in Lion.

For a while now (since OS X 10.7) the EVP_ calls in OpenSSL have been deprecated. But Wolf Rentzsch’s article has recently stirred developers to look for replacements for these calls in CommonDigest. Most developers only use the EVP_ functions to validate a Mac App Store receipt hash, as they appear in the sample code provided by Apple (listing 1-7). It’s easy to replace the 6 calls to EVP with the following CommonDigest code:
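A sketch of the mapping, with the NSData variable names following Apple’s sample rather than taken from it:

```objc
#import <CommonCrypto/CommonDigest.h>

// The EVP calls from Apple's listing map directly onto CommonDigest;
// EVP_MD_CTX_create/destroy are unnecessary since the context lives on the stack.
unsigned char hash[CC_SHA1_DIGEST_LENGTH];
CC_SHA1_CTX context;

CC_SHA1_Init(&context);                                                     // EVP_DigestInit_ex
CC_SHA1_Update(&context, guidData.bytes, (CC_LONG)guidData.length);         // EVP_DigestUpdate
CC_SHA1_Update(&context, opaqueData.bytes, (CC_LONG)opaqueData.length);     // EVP_DigestUpdate
CC_SHA1_Update(&context, bundleIdData.bytes, (CC_LONG)bundleIdData.length); // EVP_DigestUpdate
CC_SHA1_Final(hash, &context);                                              // EVP_DigestFinal_ex
```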

I always use Objective-C where I can, hence my input string is already concatenated and the SHA1 digest result is converted into an NSData so that I can compare it with the stored GUID hash in the receipt directly. Here is the code in case you need to copy and paste:
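Something along these lines – the input and receiptHash variable names are my own, standing in for the concatenated data and the hash read from the receipt:

```objc
#import <CommonCrypto/CommonDigest.h>

// input already holds guid + opaque value + bundle identifier concatenated.
unsigned char digest[CC_SHA1_DIGEST_LENGTH];
CC_SHA1(input.bytes, (CC_LONG)input.length, digest);
NSData *hashData = [NSData dataWithBytes:digest length:CC_SHA1_DIGEST_LENGTH];

// Compare directly against the hash stored in the receipt.
BOOL receiptIsValid = [hashData isEqualToData:receiptHash];
```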

Hope someone finds this useful, as a few questions on Stack Overflow are going unanswered due to their more generic phrasing about replacing EVP calls in general. Most of the EVP algorithms are not replaceable, but if you are using any of the MD or SHA versions then you can use the above solution.

Today Brent Simmons improved the developer pool of reusable code by posting his code for testing whether the value of an object is empty. Most programmers start down this path with NSString, where for most purposes an empty string is the same as nil. From there it grows to include NSNull, which JSON parsers commonly return, as well as other common objects found in collections such as NSNumber and NSDate, and the collections themselves, NSDictionary and NSArray.

If you take a look at his code you will notice it’s a function (same as Wil Shipley’s). Because I write almost exclusively in Objective-C, a function call in my code makes me pause and think. I avoid this by using categories, and get to call methods instead of functions. This also avoids having to write a separately optimized version of the function and remember to use it on NSStrings; with a category you can optimize each individual call for the specific object. So in the hope of improving that shared code pool, below is my implementation. Overwhelmingly (97.63% of the time so far) I want to do something with an object when it’s not empty, so my method is reversed. Also, isNotEmpty sent to a nil object correctly returns NO.
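A sketch of what such categories can look like (reconstructed for illustration, not my 2003 source verbatim):

```objc
@interface NSObject (Empty)
- (BOOL)isNotEmpty;
@end

@implementation NSObject (Empty)
- (BOOL)isNotEmpty
{
    // Default: NSNull is empty, any other object (NSNumber, NSDate, ...) is not.
    return ![self isKindOfClass:[NSNull class]];
}
@end

@implementation NSString (Empty)
- (BOOL)isNotEmpty
{
    return [self length] > 0;
}
@end

@implementation NSArray (Empty)
- (BOOL)isNotEmpty
{
    return [self count] > 0;   // the same pattern applies to NSDictionary
}
@end
```

Since messaging nil returns NO for a BOOL method, `[maybeNil isNotEmpty]` gives the right answer without any extra check.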

I wrote these categories in 2003 and haven’t thought about them since. I have 674 total calls to these methods in my code (which is how I know the exact percentage of how often I use them). Since I recently discovered the easy-to-use BNRTimeBlock for testing performance, I ran the three implementations 10 million times and came up with the following averages after 10 runs:

This shows that at .000000022 seconds per call you should be using these easy-to-read-and-maintain functions and/or methods all over your Objective-C code. Brent’s longer RSIsEmpty performs similarly to Wil’s function, as it includes the necessary call to respondsToSelector:. At these speeds I think the RSStringIsEmpty optimization is unnecessary except in the heaviest of loops.

I do believe my version should be renamed isSet or isFull to avoid using “not” in the method name; but although one learns a lot in 10 years, I also have been using isNotEmpty for 10 years.

Mark Dalrymple wrote an interesting article about the speed difference between the isEqual: and isEqualToString: methods. Choosing between these methods is a small decision programmers make every day. The article should reaffirm readers’ belief that concentrating on readability is more important than performance.

The short of it is that he runs the methods 10 million times to show a speed difference of 3% to 7%: about .0000000005 seconds gained for every call of isEqualToString: over isEqual:. Most of my code would use those methods at most a few tens of times when a user makes a choice in the Pedias, and five billionths of a second is certainly not a difference anyone can notice. In fact the difference in results is low enough that it becomes statistically meaningless; trying to unravel the implications of compiler optimizations as well as the testing environment is more complex and time-consuming than the numbers justify.

However, the article did get me thinking about an old function I optimized a few years back. The Pedias include a bit of code for sorting that ignores the article at the beginning of a title. When sorting a large number of entries this code can be run in the vicinity of a million times, which fits my number one rule about optimization: loops are the place where optimization makes the most difference.

The original code used the standard NSString hasPrefix: method to do the comparison.
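Roughly like this – a sketch of the original approach, with the method name sortableTitle being my label for it here:

```objc
// Original approach: test each English article with hasPrefix: and strip it.
- (NSString *)sortableTitle
{
    if ([self hasPrefix:@"The "]) return [self substringFromIndex:4];
    if ([self hasPrefix:@"An "])  return [self substringFromIndex:3];
    if ([self hasPrefix:@"A "])   return [self substringFromIndex:2];
    return self;
}
```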

On testing collections with 20,000 items the sort felt sluggish, especially when compared to a sort that did not take the article into account, so I knew the article comparison could do with some optimization. I changed the code to do the comparison based on each individual character. (This all started with a much longer version of the code that ignores articles in other languages, which takes into account a lot more than the three possible articles in English.) By looking at the first letter and making chained decisions I thought I would save a lot of time.
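The character-based version looks something like this (again a sketch, with the method name assumed): by branching on the first character, most titles are rejected after a single characterAtIndex: call instead of three full prefix scans.

```objc
- (NSString *)sortableTitle
{
    if ([self length] < 3) return self;
    unichar first = [self characterAtIndex:0];
    if (first == 'T') {
        if ([self length] > 4 &&
            [self characterAtIndex:1] == 'h' &&
            [self characterAtIndex:2] == 'e' &&
            [self characterAtIndex:3] == ' ') {
            return [self substringFromIndex:4];   // "The "
        }
    } else if (first == 'A') {
        unichar second = [self characterAtIndex:1];
        if (second == ' ') {
            return [self substringFromIndex:2];   // "A "
        }
        if (second == 'n' && [self characterAtIndex:2] == ' ') {
            return [self substringFromIndex:3];   // "An "
        }
    }
    return self;
}
```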

With the new code the sorting was blazing fast in testing, and it’s been great for actual users ever since. My interest piqued by Mark Dalrymple’s article, and with his timing code conveniently and thoughtfully posted in a gist, I ran my two methods through it to finally get some raw numbers.

Title “King Kong”:
Time for characterAtIndex: 0.300036
Time for hasPrefix: 3.165767

Title “The Lord of the Rings”:
Time for characterAtIndex: 3.218064
Time for hasPrefix: 4.198877

Right off you can see the new code (which is really 5 years old) is 10x faster than the old version when no article is present. For the string with “The” the code is much slower in both cases. But this can quickly be attributed to the substringFromIndex: method that both versions need in order to return the string to be sorted. Removing this operation, since we are only interested in the comparison timing, gives new numbers.

Title “The Lord of the Rings” with an assignment call instead of substringFromIndex.
Time for characterAtIndex: 0.665993
Time for hasPrefix: 1.568902

As you can see, the difference between them is still about 0.9 seconds, but the time it takes to do 10 million substringFromIndex: calls has been stripped, better reflecting that when the article “The” is present the new call is over 2x as fast. A title with “An” is of course even faster, as the original method does not test just for the presence of “A” first but scans the entire “A ” and “An ” prefixes.

So sometimes it does make sense to write your own versions of the built-in functions and lengthen your code, when you can use internal information in the algorithm to achieve more specific results. However, keep in mind it’s not really worth the time unless you are making millions of calls to that function.

As a side note, the code is faster in 32-bit builds than in their 64-bit counterparts. A meaningless statistic, since the code is so basic that there is no advantage or disadvantage in the code itself that would affect the timing depending on the architecture.

For programmers out there looking to ignore articles when sorting, the above code will do the trick nicely when wrapped in the below category method of NSString and used from an NSSortDescriptor:
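Assuming the character-based comparison lives in an NSString category method – called sortableTitle here for illustration – the sort descriptor side can look like this, reaching the category method through the key path:

```objc
// Sort entries on their title while ignoring leading articles.
NSSortDescriptor *byTitle =
    [NSSortDescriptor sortDescriptorWithKey:@"title.sortableTitle"
                                  ascending:YES
                                   selector:@selector(localizedCaseInsensitiveCompare:)];
NSArray *sortedEntries = [entries sortedArrayUsingDescriptors:@[byTitle]];
```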

I am an avid Google user, and even though the quality of results has suffered in the last few years, I still use Google almost exclusively when researching a topic. When the question is simple, a visit to Wikipedia or Stack Overflow will suffice, but when things get complicated I tend to open several of the search results for browsing. Inevitably the time comes when I realize I want to open a previous result that was prematurely closed but is now key – unfortunately, Google has left my history useless.

I’ll never find that page even though I distinctly remember having “Step by Step” in the title and a favicon that had some sort of diagonal stripes.

Why is my history so useless? Is my computer broken, or has Google broken the internet? Google displays the actual link under each title to let users know what page they will be visiting. The link is also the real href attribute of the anchor, and is displayed in the footer status bar during a mouse-over. So why isn’t it the link I get when I click or copy it? Because when the time comes to use a link, Google uses JavaScript to replace it with a redirect to their own servers in order to keep track of which links users are clicking on (except when the link points to their own Google domain, like images or maps).

As wonderful as that information might be to Google for improving their result rankings and knowing which results I favor most, it’s messing with my history as well as my ability to copy and paste a link (“Copy Link” shown above picks up the modified link) for a blog post or to email to a friend.

There must be a way to get the old direct-to-result Google back. There are no options to disable this behavior that I could find, not even in the name of privacy. After some searching I ran into a Japanese gentleman with the same linking sensibilities as mine who has written a Greasemonkey script that solves the issue. But in this day and age I want a simple Safari extension that I can install and manage natively in Safari. After poking around for such an extension (dear Apple, could we please have search functionality in the extension gallery) I decided to build my own. I know nothing about Safari extensions or programming one, but I am a Mac developer and I know what I want, so I started developing.

One hour and three lines of code later, I present to you Google Direct: a Safari extension that removes the redirects from Google links by stripping the “onmouseover” events Google uses to swap the href for the replacement link.
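The heart of it is little more than this – a reconstruction of the idea rather than the extension’s literal source:

```javascript
// Strip Google's onmouseover handler from links so that clicking or copying
// keeps the original href instead of the tracking redirect.
function makeLinksDirect(doc) {
    var links = doc.querySelectorAll('a');
    for (var i = 0; i < links.length; i++) {
        links[i].removeAttribute('onmouseover');
    }
}

// In the extension's injected script:
if (typeof document !== 'undefined') {
    makeLinksDirect(document);
}
```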

With the Google Direct Safari extension installed I am a happy camper; my history is looking unique and identifiable (the same results, but now with names and favicons).

‘Copy Link’ also gives me the crisp, short, original URL instead of an indecipherable mess:

Are there any downsides? It depends on where you stand. I am indeed depriving Google of knowing which links I followed and of approximating how long I stayed (how long it took me to come back and click on another link in the same result set). On my side of the fence, where I usually stand, it’s an added bonus that I now control a little more of my privacy with my new extension.

P.S. I ended up using Fine Cooking Classic Croissants recipe for making croissants. Now it has an elegant white orange hat and a title that was easy to spot in my history when I went back two days later – after tasting the results – to add it to my permanent bookmarks.

Since the Mac App Store is Intel-only, developers have to submit applications stripped of all PPC code. Slowly more and more developers are building their applications Intel-only, given that the latest versions of OS X, Snow Leopard and soon Lion, are Intel-only. The Pedias rely on a large number of external frameworks, so it was not as simple as changing the build setting in Xcode. While the transition happens, here is an easy command we used to find PPC code in our applications:

find Bookpedia.app -exec file {} \; | grep "ppc"

Of course, run it in Terminal while inside the build directory, and change Bookpedia to your app’s name. Should any frameworks or libraries be reported to contain PPC code, you can strip the PPC slice without rebuilding by using the lipo command.
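For example, assuming the framework binary reports a ppc slice (architecture names vary – ppc, ppc7400 – so check `lipo -info` first; the paths here are illustrative):

```shell
# See which architectures the binary contains.
lipo -info Sparkle.framework/Versions/A/Sparkle
# Strip the ppc slice into a new Intel-only binary.
lipo Sparkle.framework/Versions/A/Sparkle -remove ppc -output Sparkle-intel
```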

I stored these new Intel-only versions of the frameworks to be used by Xcode when building our software for the Mac App Store. Of course there is also the option of stripping the entire app after building with the ditto command (but I prefer to make modifications before Xcode’s final code-signing step, even though Mac App Store submissions are re-signed for any changes during upload).
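The ditto route would look something like this, keeping only the Intel slices of every binary in the bundle (flags as I recall them; the app names are illustrative):

```shell
# Thin the whole built app in one pass; ditto preserves the bundle structure
# and resource forks while dropping architectures not listed.
ditto --rsrc --arch i386 --arch x86_64 Bookpedia.app Bookpedia-intel.app
```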

A lot of developers are in the process of rebuilding their apps for the Mac App Store. One of the guidelines is to not use any update mechanism and to leave updating to the App Store. Like most developers we use the Sparkle framework for our non-App Store updates, and figuring out how to build a version of your application without Sparkle can be frustrating. Some developers have already solved it in clever ways – our favorite being Gus Mueller of Flying Meat, who uses a script to run a find and replace:

Then when my build script is run for App Store builds (-s) it’ll use /usr/bin/sed against Acorn’s project.pbxproj and replace all instances of Sparkle.framework with Noop.framework. Then the automated build takes place, and Noop.framework gets copied in instead of Sparkle.

But we feel there’s still space for another solution on how to conditionally include Sparkle:

1. You should have a configuration for the App Store. It can be created under the project settings’ “Configurations” tab; duplicate your Release configuration for these new settings.

2. Remove Sparkle from being automatically included in all linker calls. To do this, remove it from Targets → Link Binary With Libraries.

3. Under all your regular configurations (Debug, Release, …) add Sparkle to the linker flags. Open the build settings and set “Other Linker Flags” to “-framework Sparkle”.

OTHER_LDFLAGS = -framework Sparkle

4. You now need to remove Sparkle from the Mac App Store application after building it. Create a new custom script build phase to run at the end of your build (Custom Package Processing above) and make the script the following (do make sure that the value of $CONFIGURATION matches the name you gave your new Mac App Store configuration):
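Something along these lines, where “App Store” is the assumed configuration name from step 1:

```shell
# Remove Sparkle from App Store builds only; other configurations keep it.
if [ "$CONFIGURATION" = "App Store" ]; then
    rm -rf "$BUILT_PRODUCTS_DIR/$FULL_PRODUCT_NAME/Contents/Frameworks/Sparkle.framework"
fi
```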

The above takes care of the physical framework and linking, but you still need to remove any code that calls the Sparkle framework. To do so, set a variable in the Mac App Store build that you can use in your code to conditionally include the Sparkle calls.

7. Don’t forget to update your interface if necessary: remove any update menus and check-for-updates preferences, and include the menus programmatically for non-Mac App Store builds in applicationWillFinishLaunching:
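Along these lines, reusing the APPSTORE macro assumed above (the menu title and insertion index are illustrative; SUUpdater and checkForUpdates: are Sparkle’s own API):

```objc
- (void)applicationWillFinishLaunching:(NSNotification *)notification
{
#ifndef APPSTORE
    // Add the update menu item only when Sparkle is linked in.
    NSMenu *appMenu = [[[NSApp mainMenu] itemAtIndex:0] submenu];
    NSMenuItem *updateItem = [[NSMenuItem alloc] initWithTitle:@"Check for Updates…"
                                                        action:@selector(checkForUpdates:)
                                                 keyEquivalent:@""];
    [updateItem setTarget:[SUUpdater sharedUpdater]];
    [appMenu insertItem:updateItem atIndex:1];
    [updateItem release];
#endif
}
```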

The same method can be used for the eSellerate registration engine: the linker flags would be “-lEWS -lValidateUniversal”, and you would remove both the libEWS.a library and the compressed .zip version of the engine. Hoping the above proves useful to a few developers. Code on.

Dan at RowdyPixel.com has created a TVRage plug-in for those with a lot of TV season episodes. The plug-in has been crafted to download information specific to each single episode; in fact, you have to include the season and episode number as part of the search to determine the exact episode. “Burn Notice 02×01” would download information for season two, episode one of Burn Notice. This is incredibly useful for those keeping linked video files for each episode inside DVDpedia.

It’s not just a simple search plug-in; it also includes a level of flexibility unprecedented in any other plug-in. Under the plug-in menu you will find the “TVRage Options…” command, which will let you change a number of settings for how the information is stored, including the order of the title-season-episode information. Head on over to rowdypixel.com to download and install the plug-in and to learn more about it. Please post any feedback in our forum, and do not hesitate to donate to rowdypixel.com if you find the plug-in useful.