Thoughts on PHP, Apple, and whatever else I want to write about


A lot of virtual ink has been spilled the past few weeks writing about the pros and cons of Apple Watch one year into its life. Folks are talking about must-have features for version 2 and of course there’s plenty of debate about whether to classify Apple Watch as a flop or not.

I won't spend time rehashing any of that, since you can read it elsewhere, but I do want to give my impressions of the device and share how I have settled into using it. These habits give some insight, at least anecdotally, into what Apple Watch's actual potential might be for "revolutionizing" its segment.

I still wear my watch every day. My default/favorite watch face is the Modular Multi-Color. For complications on this face I use the date, the time (obviously), my next calendar event, battery percentage, outside temperature, and finally drive time in minutes (via the ETA app).

I also consume a large number of notifications on the watch, including email (both personal and work) as well as text messages. I recently upgraded to the iPhone 6s from the 6 and sold the old phone on eBay; the bid notifications on my wrist were helpful and fun.

The most compelling benefit of the notifications is that I don’t pull my phone from my pocket as often, nor am I constantly checking the phone for activity.

Glances get occasional use. I do check the weather glance a couple of times a week, and perhaps the heart rate just for amusement (more on activity tracking later), but most of the glances go unused.

This leads me to the biggest weak spot on the watch: the apps.

I don't use apps on the watch, pretty much at all. I don't find them in any way compelling, and none of them is more convenient than the same feature/app on the phone. In most cases the apps are more of a pain due to the small screen, the clumsiness of tapping little virtual buttons and manipulating the interface, and limited features.

I will occasionally go into the Weather app from the Weather glance, and I've recently installed the Canary security app (though I have yet to use it). I have a total of three third-party apps installed, almost never used.

I'm not going to make a phone call on my wrist, compose an email on my wrist, or browse my photo library on my wrist. I'm not going to look at a map on my wrist, or check my calendar on my wrist. Though I've only used them a couple of times, the timer and clock apps do at least make sense: the Apple Watch is a timekeeping device, after all. Most of the rest of the apps, however, are gimmicky or have no real use case.

Third-party apps on the watch are almost entirely a joke. Why would I want to look at Salesforce.com data on my wrist? Why would I EVER want to play games on my wrist? Holding your wrist up long enough to engage in a game gets tiring very quickly. Nobody is going to choose that over their phone for gaming.

Finally, I'll talk about the exercise and activity tracker components. I am not a fitness nut per se, but I am an avid pick-up basketball player, typically playing two or three times a week. Though I'm very curious about activity and calories burned during games, I don't wear the watch for basketball because I don't want to 1) get the watch damaged or 2) hurt somebody else who gets hit (e.g. in the nose) by the watch. I do find the activity tracking pretty compelling; I just don't get use out of that feature myself.

It seems to me Apple could not come up with the one compelling use case for the watch, so they crammed in everything to see what would stick. I don’t have a problem with this approach, and we’ve certainly had time to see what has stuck.

Most folks are calling for more independence from the phone via cellular networking in the watch. I disagree. More autonomy is not what is needed, as the watch is never going to be a phone replacement for anyone. Its highest and best use is as a complement to the phone and a first-order member of the Apple ecosystem. Rather than cram in more features, I want to see the watch case slimmed down and more attention given to making Apple Watch the best notifications and activity companion on the market. Trim the default apps down to the most compelling use cases, and make them more compelling.

On the third-party apps side, companies need to think seriously about real use cases for their watch apps, rather than being "on the board" with a watch app just for the sake of having one. The watch is not a phone, so why are we trying to build tiny-sized phone apps for it? Instead, think about the watch and wrist in their own light and how to exploit the location for the most impact on the user's routines.

Finally, a trimmed-down Apple Watch, devoid of gimmickry, needs a lower price point. People perceive value in terms of what they get back for their money. It's OK for the watch to be more focused and do less (this is the Apple way, right?), but the price might need to come down on models other than the Sport in order for the value proposition to work for a larger market. If Apple could trim the Watch down in scope, perhaps production costs would come down too, so a price reduction would not affect margins. I am not a reactionary "Apple must build cheaper stuff to increase its profits" goon; I recognize their strength is at the premium end of the market, where they can vacuum up all the profits in the segment and avoid the commoditization of their products. In this case, however, I think the Apple Watch is simply out of reach for too many otherwise curious buyers. Since it's not a must-have product (let's face it, you more or less have to have a smartphone these days), the potential buyers are more price sensitive. Maybe the recent price drop on the Sport is enough; we'll see.

Bottom line is this: I like my watch and would buy it again, but it's not life-changing and it's not a must-have. Apple has some more work to do here.

Cole Flourney has proposed a great little project on Quirky.com to build an aesthetically pleasing, integrated docking cable adapter, much like the W1PPS solution that failed at Kickstarter. Please check it out and vote for the project!

Problem: New MacBook Pro, two offices, lots of peripherals/monitors/etc. to plug in and disconnect at least twice a day.

I plug four to five different items into my new MacBook Pro with Retina Display every morning when I get to work. All of these get unplugged at the end of the day, or when I leave for my other office on the other side of campus, in which case I'm plugging/unplugging everything four times in a day.

Aside from the tediousness of it, there’s also the issue of keeping the two display adapters in the proper order so my external displays aren’t swapped back and forth every time I plug in.

Check out the pictures below of the 0.1 version of my homemade dock thingy:

The dock thingy inserted into the MacBook Pro

I have some gaps between the adapters I need to seal up, and the MagSafe slips out too easily, but otherwise this is a good version 0.1.

Yes, it's hideous, but it took a total of 5 minutes of work. It turns out the hot water outlet on the office coffeemaker is exactly the right temperature to soften the pellets. Just pour the pellets into a ceramic cup (don't use paper or styrofoam unless you want the plastic to stick to the cup), then fill the cup with hot water. Once the pellets turn clear, they are ready to be formed. Pour the water out of the mug and then extract your mass of plastic.

I “cooked” too much plastic (about half the 6oz container) so I broke the glob in half and put the rest back in the bottle. I rolled the remaining part out with my hands like you would have in elementary school when making a snake with art clay. Then I wrapped it around the cables, being sure to press down around each adapter end and especially in between them. After that’s done, you let it cool until the product turns white again. If you mess up, just re-heat the product and start all over again.

This only took about 1/4 of the 6oz container, so I have plenty of pellets left to make one for my other office and will still have 3oz of Instamorph to piddle around with.

I briefly toyed with iOS app development a couple of years ago (back when you still didn't need to advertise your app to get noticed in the App Store). Having no formal training, and coming from a scripting-language background, trying to learn Objective-C while also absorbing "desktop" or "client" programming paradigms was just too much. I decided that, professionally, my time was better spent becoming a better web developer than trying to strike gold with an iPhone app.

I've used some of these apps. You have too. Typically it's some news outfit that just had to have an app. Never mind that it's just serving up a UIWebView window with their mobile site embedded in it. These companies are still banking on being "found" in the app space.

But I've found myself actually NOT using these apps. I delete them, and I simply bookmark their sites in mobile Safari. It's just easier. And it leads me to this rather obvious conclusion: NOT EVERYTHING HAS TO BE AN APP. NOT EVERYTHING IS IMPROVED BY BEING "NATIVE."

Apps should do something more than just provide content for reading. I use this distinction all the time at work: when educating new audiences about our web apps, I refer to a web application as a "web site that does something." Maybe web sites don't need apps, period. It's redundant, and it creates extra work for developers who must support multiple platforms.

So, FT's news today just brightened my day a bit. We're getting more and more mobile-centric, though I assume that will peak someday in the near future. However, there is, and probably always will be, a significant (even if not dominant) role for the browser-based web experience, particularly in the enterprise. To take it further, some web apps are simply too complicated to offer a full feature set in the mobile space, web or native. That doesn't mean you don't bother with mobile, but it does mean that maybe all this "mobile first" stuff isn't a universal constant.

To get more specific, PHP will continue to have a role to play in the "front end" web, and won't just be relegated to a server-side way to respond to requests for content from native apps.

Over the course of several weeks, I took an existing web app I had started from scratch 5 years ago and refactored it into a full-stack Zend Framework application. This involved several steps: first organizing the code to mirror ZF's project layout, then incorporating certain components, and finally completing the conversion.

The application was simply too complicated to convert in one fell swoop, or to rebuild feature by feature in a fresh ZF project. I was already using components from the ZF library for a lot of things (DB, cache, registry, etc.), so the refactor really consisted of converting from my series of root-level page controllers to ZF's front-controller pattern.

I'll share the code below for the most useful step in the conversion: using the PEAR class naming convention, I was able to create controllers named Application_Controllers_Name. These controllers then extended Zend_Controller_Action (actually a custom abstract controller of my own that extended ZF's). This gets you most of the functionality of the framework without having to convert all the way over to the full ZF stack.

To finish this intermediate step off, you need a way to route requests to the proper controller. I wasn't ready to implement the ZF front-controller pattern, which would have required a bootstrap file that extends Zend_Application_Bootstrap_Bootstrap and ends up calling the dispatcher and router: too complicated for my code at this intermediate stage.

So I wrote the following index.php file that handles the routing itself. You’ll need your own custom bootstrap file.

This is not drop-in-ready code; it is shared to give you an idea of how to route your controllers outside of the full ZF stack. It creates the request and response objects and passes them into the applicable controller; the Index controller is called if no controller is specified.
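Since the original listing didn't survive, here is a hedged reconstruction of the idea as a minimal, self-contained sketch. The real file used ZF's request/response objects (Zend_Controller_Request_Http / Zend_Controller_Response_Http) and the controllers extended Zend_Controller_Action; plain stand-ins are used here so the routing logic is visible on its own, and the function and class names are mine, not the original's.

```php
<?php
// Hypothetical, simplified version of the routing index.php described above.
// Maps index.php?controller=Foo&action=Bar onto Application_Controllers_Foo::barAction().

function route(array $params)
{
    // Fall back to the Index controller / index action when unspecified.
    $controller = isset($params['controller']) ? $params['controller'] : 'Index';
    $action     = isset($params['action'])     ? $params['action']     : 'index';

    // Build the PEAR-style class name: Application_Controllers_Foo.
    // Your autoloader must know the "Application_" prefix for this to resolve.
    $class  = 'Application_Controllers_' . ucfirst(strtolower($controller));
    $method = strtolower($action) . 'Action';

    if (!class_exists($class) || !method_exists($class, $method)) {
        return '404: no route for ' . $class . '::' . $method;
    }

    $instance = new $class();
    return $instance->$method();
}

// Example controller following the naming convention.
class Application_Controllers_Index
{
    public function indexAction()
    {
        return 'rendered index/index';
    }
}

// Dispatch based on the query string.
echo route($_GET);
```

In the real intermediate step, the request and response objects are constructed here and handed to the controller's constructor instead of being omitted as above.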

Be sure to add “Application_” to your autoloader scheme so that when the code constructs the class name, it can find it in your include path.

Also, note that this code doesn't take pretty URLs into account, so you'll have to re-jigger it to your own needs if you're using .htaccess to re-route requests. The code works with index.php?controller=Foo&action=Bar URLs. In fact, my application, being an enterprise web app for one and sitting on a server that doesn't permit URL rewriting, retained the ugly URL syntax, and fortunately I found a ZF router for just such scenarios on Rob Allen's blog from a long time ago. You DON'T need Rob's router for my intermediate code, but if you're not going to convert to pretty URLs then you will.

This code helped me get my application to a point where I could do the final conversion to ZF relatively painlessly. Good luck.

One of the things occupying my time lately has been a marked increase in the need to interact with Oracle databases.

DISCLAIMER: this is a rant and I didn’t stop to research, so I cannot guarantee there are no false assertions below.

Coming from MySQL, I find Oracle infuriating for several reasons.

1. Sequences: Why are sequences such a giant pain? Why can’t I generate sequences on the fly when I designate primary keys? Why does this not work in any way like Autoincrement in MySQL? Get started on that ASAP please!
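For readers coming from MySQL, it may help to see what the workaround looks like. The classic pre-12c pattern is a sequence plus a BEFORE INSERT trigger that fills the key; the object names below (users, users_seq, users_bi) are made up for illustration.

```php
<?php
// What MySQL gives you with one AUTO_INCREMENT keyword takes two extra
// schema objects in (pre-12c) Oracle: a sequence, plus a trigger that
// populates the primary key on insert. Names here are hypothetical.
$ddl = <<<'SQL'
CREATE SEQUENCE users_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER users_bi
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  SELECT users_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
SQL;
```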

2. Why is it so hard to make changes? OK, so I picked the wrong data type for a column, so I need to change it, right? Not so fast. If any row already has data in that column, you get an error when trying to change it. Drop the column? Again, better hope the column is empty. I get that Oracle is trying to enforce data integrity, but this behavior crosses over into frustration.

3. Limit clause (or lack thereof). Why does the BEST relational product not have a limit clause? This makes pagination queries a giant hot mess, and it wastes resources when you know you’re just looking for one row. This seems like a no-brainer. Tell me where I’m wrong.
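To illustrate why pagination is "a giant hot mess": without LIMIT you end up with the classic double-nested ROWNUM subquery, because ROWNUM is assigned before ORDER BY within a single query level. Here is a hedged sketch of a PHP helper that wraps a query that way; the helper name and the table/column names are hypothetical.

```php
<?php
// Emulate MySQL's "LIMIT :offset, :count" with Oracle's ROWNUM using the
// classic nested-subquery idiom. $innerSql should already contain its
// ORDER BY clause; the outer layers only slice the ordered rows.
function oraclePaginate($innerSql, $offset, $count)
{
    $last = $offset + $count;
    return "SELECT * FROM ("
         . "SELECT q.*, ROWNUM AS rn FROM ($innerSql) q WHERE ROWNUM <= $last"
         . ") WHERE rn > $offset";
}

// Hypothetical usage: rows 21-30 of an ordered user list.
echo oraclePaginate('SELECT id, name FROM users ORDER BY name', 20, 10);
```

(Oracle 12c later added `FETCH FIRST n ROWS ONLY`, but at the time of this rant the nested subquery was the standard answer.)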

Is Oracle the best choice for enterprise-class relational data storage? Yes! Does that lessen to any degree the amount of frustration one has trying to develop for the web with Oracle? No!

Sorry for the hyperbolic headline; that's just to get your attention. Now that I have you here, let's have a serious discussion.

The TIOBE Index was just updated with the February programming language rankings, and the PHP community seems to be in a mild panic about dropping two spots (from 3rd to 5th) and falling behind Python (4th) on the list. The Python and Ruby camps are pretty happy with their results, and many seem vindicated, as the results seem to have only amplified the PHP bashing out there.

Being the contrarian that I am, I couldn't just take the TIOBE results at face value. Sure, it shows PHP search traffic decreasing relative to the other terms, but what are we actually measuring here?

Are we measuring the installed base? Are we measuring the number of programmers/users? Are we measuring the number of applications? The answers are no, no and no. TIOBE measures search engine traffic, period. TIOBE’s methodology is limited to the volume of searches for $language . “programming”. For example, they would have pulled search volume for “Java programming”, “PHP programming”, “Ruby programming” and so on.

To be sure, this approach is consistent across the languages, but there’s a significant amount of measurement error here if our goal is to determine the popularity of each language.

I went to Google Trends to look at search results for PHP against Ruby and Python, and yes indeed, there is an alarming drop in PHP search volume going back to 2004 (see below). The Ruby and Python search volume, by comparison, barely registers on the chart, except for the blips in the last few months for Ruby. We'll have to wait and see whether the Ruby spike is a trend or an anomaly, but looking at the historical data, the observable trend is zero growth in Ruby or Python "popularity" as measured by TIOBE's methodology.

Search volume for PHP, Python and Ruby Since 2004

This observation doesn’t change the alarming drop in PHP search volume, but what good is this metric in a vacuum? The following chart shows search volume for Java, C++ and PHP respectively. Notice a trend?

Search volume for Java, C++ and PHP since 2004

All three of these languages have experienced significant drops in search volume since 2004 (that's as far back as Google Trends goes). In fact, Java's decline looks to be twice as bad as PHP's. Where are the Java developers jumping out of windows? Does this mean each of these languages is fatally flawed and on its way out, to be replaced by up-and-comers like Ruby and Python? Of course not. There's a correlation among the drops in these three languages, and I would hypothesize that an external variable is depressing search volume across the board. The alternative explanation is that Java, C++ and PHP are each, by coincidence, experiencing major drops in popularity. I think that's far less likely.

I would also wager that in the case of PHP, the proliferation of frameworks means fewer people are searching Google for "PHP"; instead, we are all busy searching for "CakePHP", "Symfony" or "Zend Framework". Google Trends shows fairly low search volume for these three terms, so this is not the answer, but it could be a small contributing factor.

The real question is what’s going on with search in general? Is overall search traffic down this much? Surely not. Are people getting their programming language knowledge in other ways?

Ultimately, I am reassured as a PHP developer that although something is going on out there, it's not hitting just us. Further, PHP is still in the top 5 on the TIOBE Index, and it's in pretty rare company up there, with three of the five being older, more established "enterprise"-scale/desktop languages. We'll have to watch the TIOBE Index over the next few months to see if the Python spike is a trend or a blip, so I'm not going to spend a lot of time worrying about it right now.

As technology professionals we can’t be afraid of change, so if there is a death knell sounding for PHP, we have to be able to accept it and move on. Having said that, there’s no bell tolling yet… I just don’t see it. PHP’s only recently begun to see serious enterprise adoption, and the trend is accelerating, not decelerating. Even if PHP wasn’t cool anymore it would be a decade before all this enterprise adoption was undone in favor of other platforms.

I'm all ears if you have any explanations, and please try to back them up with data if you can find it.

As my applications have grown in complexity, I’ve followed a path probably quite similar to many of you with respect to .js file maintenance. In the beginning I had one js file to include in the site’s/app’s header, containing just a few basic js functions used across the site.

As my JavaScript codebase grew, I added more .js files, trying to specialize the files and even went through the trouble of including specific files on some pages and not on others. Then I adopted a framework (jQuery in my case) and that is just one more script tag.

At some point I became aware of the minification trend for JS and CSS files, and began looking at how much bandwidth I could save per page load. Using an online minifier, I began minifying each .js file after every modification. This became unmanageable very quickly. I also had to consider the impact of multiple file loads on the browser and how that affects performance.

I decided to find a way to automate this process. I stumbled upon the JSMin PHP class, an implementation of Douglas Crockford's JSMin. My solution would use JSMin, with a wrapper class that reads each .js file, minifies (and compresses if possible), and outputs a single file. More helpful ideas were found in this blog article at verens.com.

What I came up with accomplishes the following:

– Given an array of .js filenames, reads and minifies each, writes to a single new file.

– Reads the modification date of each file; if none is newer than the auto-generated output file, the process is skipped.

This results in an on-the-fly minifier that only runs when JavaScript code has been modified in any one of the original files. This makes code deployment simpler: just sync updated .js files to the appropriate directory.

I've encountered a couple of negatives, both easily mitigated. First, in production the process is slow, sometimes 15 seconds. The first user to hit the site after a new .js file has been uploaded is going to think the server is down. Remedy this by uploading at off-peak times and immediately surfing to the site yourself, saving an unwitting user the 15-second wait. Second, I've experienced an occasional funky file collision which resulted in the minification running on every page load (think 15-second page loads for every page, every time), so when syncing from test to prod I typically delete the generated file from test first, so prod can generate its own clean file.

// create array of .js filenames to be minified
$scripts = array('jquery', 'jquery.colorbox', 'jquery.livequery', 'jquery.tipsy', 'jquery.validate', 'functions', 'menu', 'childtables', 'datepicker');

// call the fetch static method, supplying the source dir, target dir and the scripts array
$scriptfile = App_Minifier::fetch('scripts', 'temp', $scripts);

// put the result in a script tag in your html header

Yes, I realize a static class perhaps wasn't the best choice, but it works and it keeps memory usage to a minimum. I'd probably write it differently today, and may yet refactor it to remove the static.

The output $scriptfile will be a .js filename, generated by hashing the concatenation of all the filenames in the scripts array. This permits different combinations of files to produce different output files, if that’s something you need.
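The full class isn't reproduced here, but a condensed sketch of what fetch() does may help; this is my reconstruction of the behavior described above, not the exact original, and the simple line-stripping stand-in below replaces the real call to JSMin::minify() so the skeleton is self-contained.

```php
<?php
// Sketch of the wrapper: given a source dir, target dir and a list of
// script names, rebuild a combined minified file only when at least one
// source file is newer than the generated output (or none exists yet).
class App_Minifier
{
    public static function fetch($srcDir, $targetDir, array $scripts)
    {
        // Output filename is a hash of the combined script names, so
        // different combinations of files yield different output files.
        $outFile = $targetDir . '/' . md5(implode(',', $scripts)) . '.js';

        $stale = !file_exists($outFile);
        foreach ($scripts as $name) {
            if (!$stale && filemtime("$srcDir/$name.js") > filemtime($outFile)) {
                $stale = true;
            }
        }

        if ($stale) {
            $combined = '';
            foreach ($scripts as $name) {
                // The real version delegates to JSMin::minify() here, and
                // can optionally gzip the result (disabled in my environment).
                $combined .= self::minify(file_get_contents("$srcDir/$name.js")) . "\n";
            }
            file_put_contents($outFile, $combined);
        }

        return $outFile;
    }

    // Trivial stand-in minifier: drops blank lines and // comment lines.
    private static function minify($js)
    {
        $out = array();
        foreach (explode("\n", $js) as $line) {
            $line = trim(preg_replace('~^\s*//.*$~', '', $line));
            if ($line !== '') {
                $out[] = $line;
            }
        }
        return implode("\n", $out);
    }
}
```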

Also note my comment in fetch() about the gzip feature not being used. It caused problems in my particular environment, so I'm not using it, but it may work for some of you, and I'd be eager to hear from you if it does. To enable it, just change line 50 from

$minified = JSMin::minify($contents);

to

$minified = JSMin::minify($code . $contents);

In my specific example I was loading as many as 9 different .js files per page load, totaling 250kb. Now all that JavaScript loads in 1 file measuring 147kb.