Blog

I recently released collective.restapi.navigationtree to the Python Package Index, an add-on extending Plone's REST API with an endpoint that returns the site's navigation tree down to a configurable depth.

Plone has a beautiful RESTful Hypermedia API, but its @navigation endpoint (see docs) is a bit too simplistic. As the vast majority of websites out there feature some form of dropdown in their main navigation menus, going beyond the top-level menu is almost non-negotiable. But @navigation does not offer this ability. What is one to do?

Well, after opening an issue on GitHub, I decided to create a separate add-on as a proof of concept:

Tests are included to make sure it runs on Plone 4.3.latest, 5.0 and 5.1, for both Archetypes and Dexterity.

By default, Plone does not provide dropdown navigation menus. But pretty much every Plone site I have ever worked on has webcouturier.dropdownmenu installed to fill this gap. So I borrowed some of its code to generate the JSON response, and introduced a new endpoint called @navigationtree.

Currently, collective.restapi.navigationtree depends on webcouturier.dropdownmenu (as well as plone.restapi, of course), but my assumption was that if you need the former, you probably already have the latter installed. I also lean on webcouturier.dropdownmenu's configuration, in particular its dropdown_depth parameter. So you will get the same depth of your navigation tree in the JSON response as the site's menu. However, I'm already rethinking this dependency. It would be much cleaner to just add a query parameter to the endpoint to specify the desired tree depth than to rely on an external add-on's configuration. At some point I will release a new version with the dependency on webcouturier.dropdownmenu stripped out.
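To illustrate what a depth limit means in practice, here is a minimal Python sketch, not taken from the add-on itself, of truncating a nested tree to a requested depth. The {'title': ..., 'items': [...]} dict shape is a hypothetical stand-in for the endpoint's JSON:

```python
# Hypothetical sketch: truncate a nested navigation tree to a given depth.
# The dict shape is an assumption for illustration, not the exact JSON
# returned by @navigationtree.

def prune(node, depth):
    """Return a copy of `node`, keeping at most `depth` levels of children."""
    children = node.get('items', [])
    if depth <= 0:
        children = []
    return {
        'title': node['title'],
        'items': [prune(child, depth - 1) for child in children],
    }
```

With a depth query parameter, the endpoint could simply apply this kind of pruning to the full tree before serializing the response.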

Example

A very simple Ansible playbook that allows you to dump the distribution and kernel version of all the hosts in your inventory to a local file.

If you want to quickly find the exact kernel versions of a large number of hosts, Ansible is the perfect tool. It saves you from having to manually log in to each one, run uname -ir, and copy and paste the results into some local file.

I am going to share a little Ansible playbook below, which I came up with just the other day. The impetus came in the form of an announcement from DigitalOcean about the Spectre and Meltdown vulnerabilities.

While tinkering with Ansible, I discovered the hostvars dictionary, an awesome data structure containing every last detail about the operating system of each host in the playbook's inventory. hostvars is populated in the gather_facts step of a playbook execution. There are two items in a host's hostvars data which I needed:

ansible_distribution_version: this contains the version of the host's particular OS distribution. All my hosts are running Ubuntu, and the values for me are 14.04, 15.04 and 16.04.

ansible_kernel: this is the kernel version currently running, e.g. 3.13.0-141-generic.

The playbook contains two hosts sections: one for all, whose only purpose is to go through the gather_facts step, and a second for localhost.

Starting from the bottom, the end result we are going for is to write the distribution version and the kernel version for each host into a local file. We can create a file using the template action and an appropriate jinja2 template. We only want one file, and we want it locally, hence the first reason for this hosts: localhost section. Otherwise, we would create a file on each of the remote hosts in the inventory.

We want our template to render the contents of a dictionary into which we have stored all the version information we have gathered from our hosts. So let's create this dictionary with a set_fact: task. We can use the with_inventory_hostnames iterator, which lets us loop over all the hosts and puts each hostname in the item variable. In this loop, we update the versions dict using the following syntax:

{{ versions | combine( { item: somevalue } ) }}

The python equivalent would be:

versions.update( { item: somevalue } )

or in other words:

versions[item] = somevalue

Remember, item is a hostname, and in place of somevalue we want to put a string containing both the distribution and the kernel version.

If we remember to initialize the versions variable to be an empty dict at the beginning, we have all the pieces we need.
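Assembled, the playbook might look roughly like this. This is a sketch under the assumptions above; the file names versions.j2 and versions.txt are hypothetical:

```yaml
# Play 1: only exists to run gather_facts on every host,
# which populates hostvars.
- hosts: all
  gather_facts: yes

# Play 2: runs locally, collects the facts into one dict,
# and writes them to a single local file.
- hosts: localhost
  connection: local
  vars:
    versions: {}
  tasks:
    - name: Collect distribution and kernel versions from hostvars
      set_fact:
        versions: "{{ versions | combine({ item: hostvars[item].ansible_distribution_version ~ ' ' ~ hostvars[item].ansible_kernel }) }}"
      with_inventory_hostnames: all

    - name: Render the collected versions into a local file
      template:
        src: versions.j2
        dest: ./versions.txt
```

The versions.j2 template can then simply loop over the dict, for example with {% for host, v in versions.items() %}{{ host }}: {{ v }}
{% endfor %}.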

Faceted search, as provided by eea.facetednavigation, offers many advantages over Plone's default search page. Thanks to the Zope Component Architecture, swapping out the default search page for a customized faceted search page is only a few quick steps away, as this Howto demonstrates.

If you are familiar with Plone add-ons and the Zope Component Architecture it all boils down to overriding the @@search browser view. We'll see at the end of this post why this is the case.
Let's look at exactly what needs to be done.

Register the override

Create a file called overrides.zcml in your custom add-on. This file should be in the same folder as the main configure.zcml file in your add-on. Here it is:
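A minimal sketch of what overrides.zcml can contain; the exact permission and attributes may vary with your Plone version:

```xml
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">

  <!-- Override the stock @@search view with our own class -->
  <browser:page
      name="search"
      for="*"
      class=".search.Search"
      permission="zope2.View"
      />

</configure>
```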

When this override is registered (i.e., the next time your site is restarted), the result will be that every time a client requests the browser view @@search we are going to execute our code in .search.Search. So let's write that code now.

Implement our custom browser view

We are going to need a crucial piece of information before we start writing our browser view. The information we need is the name of the text widget on our faceted search page.
We can find it with our browser's developer tools: load your faceted search page, then inspect the text input field. This field has both a name and an id attribute, which should share the same value, a short string likely consisting of one letter and one digit. In this example, the value is c4. In your case, it will likely be different.

Now that we have the text field id, let's create a new file called search.py in the same folder as configure.zcml. (Note: in a typical add-on, you would put this code in the browser folder, or anywhere you like, but let's keep this example as simple as possible.)

Here is the code you need, and remember to use the id you just found instead of c4.
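A sketch of what search.py can contain. In a real add-on the class would subclass Products.Five.browser.BrowserView; the base class is omitted here so the example is self-contained and runnable outside Zope:

```python
# search.py - a sketch; in a real add-on this class would subclass
# Products.Five.browser.BrowserView (omitted here to keep the
# example self-contained).

class Search(object):
    """Redirect requests for @@search to the faceted search page."""

    # 'c4' is the name/id of the faceted text widget found with the
    # browser's developer tools; substitute the id from your own page.
    text_field = 'c4'

    def __init__(self, context, request):
        self.context = context
        self.request = request

    def __call__(self):
        # Retrieve the text submitted by any of Plone's search forms
        text = self.request.form.get('SearchableText', None)
        url = self.context.absolute_url() + '/faceted_search'
        if text:
            # Faceted navigation reads its state from the URL fragment
            url += '#%s=%s' % (self.text_field, text)
        return self.request.response.redirect(url)
```

With the real BrowserView base class in place, a request for @@search?SearchableText=hello gets redirected to faceted_search#c4=hello.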

As mentioned above, the faceted navigation page in this example has faceted_search as its id.

Save your files and restart the site.

Testing

In your browser go to http://yoursite/@@search?SearchableText=hello

(if you are running Plone locally, on port 8080 and your site id is Plone, then use http://localhost:8080/Plone/@@search?SearchableText=hello)

The site should automatically redirect to http://yoursite/faceted_search#c4=hello. Moreover, this should load your faceted search page and the text field should have the word hello in it. If any content on your site contains the word hello, there should also be some search results listed.

How this works

The Default @@search Browser View

We know that @@search is a browser view because of that "double @" prefix, and a quick grep reveals that it is defined in Products/CMFPlone/browser/configure.zcml like this:

The ajax-search view is invoked for live-search, but we are not touching that here. If we look at the Search class in Products/CMFPlone/browser/search.py, we see that it does not have a __call__() method. Thus, it leaves all the rendering to its template as defined in the configure.zcml file above.

When we define our override as described above, we prevent the default template from rendering, allowing our __call__() method to run instead.

Redirect

Our __call__() method does a self.request.response.redirect(...), which allows us to send all searches to our faceted navigation page.

SearchableText

Of course, we also want to tell our faceted navigation page what to search for when we redirect to it. It turns out that all search forms in Plone (be it the default viewlet in the portal header of every page, the search portlet, etc.) submit the text that the user types in the form as a SearchableText query parameter. So this parameter is easy to retrieve from the @@search request before doing the redirect by doing this:

self.request.form.get('SearchableText', None)

Now we want to pass this SearchableText to our faceted navigation page. That's where the c4 field name comes in, which we inspected. Faceted navigation uses URL fragments instead of regular query strings, i.e. it uses the # hashmark to append queries and state to its URL. So
we turn this:

?SearchableText=hello

into this

#c4=hello

Subject

The faceted navigation page I created for one specific project has a checkbox widget for the tags used on the site. (We only used a controlled vocabulary of tags, so that normal editors are not allowed to add tags willy-nilly to their content. Therefore, the number of available choices in the widget is relatively small.)

Now, by default, Plone adds "Filed under:" links at the bottom of each content item that allow the visitor to view the results of a search for all content that has the same tags. Also, it adds the same links to each search result.

It is straightforward to use the same technique as described above to redirect these links to the faceted navigation while pre-selecting the right tag in the tags widget.

If we want to add some static text anywhere on a Faceted Navigation page, eea.facetednavigation allows us to use portlets as widgets in any of the widget containers. Here is how. (Requires ZMI access).

You may want to use an existing portlet, but if you want to create an ad-hoc portlet, you can do it in the ZMI:

Go to portal_skins

Go into the custom folder

Select Page template from the Add dropdown menu, top right

Give it a simple name - I will refer to this name below as your_portlet_id

Over the years, I have been asked a number of times to "fix" the title in the browser window or tab for the homepage. As it turns out, there is a simple solution to this, and it's better than entering a space in the title field.

It's true -- you learn something new every day!

Plone automatically generates the <title> element of every page by concatenating two strings, separated by an em dash:

The value of the Title field of the current page

The value of the Site Title on the /@@site-controlpanel.

Thus, this page for example has the title: A Title for the Homepage — Soliton Consulting

But what if you want some page to just show the site title? Typically, you might want this on the homepage.

Simple! Just give your homepage the same title as the site, and Plone will skip the concatenation business.

I stumbled across this when I went to look at the code, expecting to need to customize it. It's a viewlet after all (plone.htmlhead.title), so that approach is simple enough. No customization needed!

I have been using the Ansible playbook for Plone lately, but I ran into a problem because of its nginx role. Currently, the nginx role is written to disallow access to any URL path that contains /manage_, which is a good idea to prevent direct access to the ZMI. It forces you to use an SSH tunnel when you are making any TTW changes in the ZMI. However, Plomino defines several methods that start with manage_, and they end up getting blocked by nginx with 403 errors. I wanted to preserve the added safety, while not breaking my Plomino apps, so I defined a nested location directive to do that.

Here is the location directive created by the Ansible playbook nginx role:
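The role's directive itself is not reproduced here, but the shape of the fix, with the nested location, can be sketched as follows. The paths and the upstream name are hypothetical; adjust them to your own site and app:

```nginx
# Sketch only: not the exact directive from the Ansible playbook's
# nginx role. Paths and upstream name are hypothetical.
location ~* /manage_ {
    # Re-allow Plomino's manage_* methods, but only under the app's path
    location ~* ^/mysite/my-plomino-app/ {
        proxy_pass http://plone;
    }
    # Everything else matching /manage_ stays blocked (use an SSH tunnel)
    deny all;
}
```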

A simple customization of the simple-todos tutorial app using Google accounts

In Chapter 9 of the Meteor Tutorial you can learn how to add user accounts and login/logout functionality to your sample todo app. Towards the end, the tutorial suggests that the adventurous add the accounts-facebook package to enable Facebook login. I did, verified that I get a Facebook login button, and promptly removed the package (not a Facebook fan here!). Instead, I added the Google accounts package:

> meteor add accounts-google

added oauth at version 1.1.2
added google at version 1.1.2
added oauth2 at version 1.1.1
added accounts-google at version 1.0.2
added accounts-oauth at version 1.1.2
accounts-google: Login service for Google accounts

When you do, you get a nice Google button in the Sign in menu, but it's all red and says "Configure Google Login". In other words, a little setup is needed before you can log in with a Google account. Fortunately, if you click the red button, you get detailed and fairly straightforward instructions for how to do so. In short order, you should have it all working.

Customizing the {{username}}

The tutorial has us identify each task by the username of the account that created the task with the {{username}} template tag. This works fine as long as we use simple username/password authentication, but as soon as we replace or augment it with Google accounts, the template tag is replaced with an empty string.

Since this template tag is in the scope of the task template, which is called in the context of an iteration of the results of a Tasks.find(...), the value of {{username}} comes from the expression Meteor.user().username in:

If you logged in with a Google account, you will see it listed. Note how in the whole json structure of this user object there is no username key. That's why.

However, there are several interesting fields that could be used instead, or for other purposes: name (the full name), email, given_name, family_name, gender, and picture. Let's use given_name as the name to show next to each task. Because of the way the json object for the Google account is nested, this is how we can refer to it:
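The nested path on the user object is services.google.given_name. For illustration, a small helper along these lines (a sketch, not code from the tutorial) could pick the display name, falling back to username for password-based accounts:

```javascript
// Sketch: choose a display name from a Meteor user object.
// Google account data lives under user.services.google, so given_name
// is nested accordingly; plain password accounts still have username.
function displayName(user) {
  if (user && user.services && user.services.google) {
    return user.services.google.given_name;
  }
  return user && user.username;
}
```

In the template helper (or when inserting the task), this value can then be used wherever {{username}} was used before.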

Learn more

The solution to a couple of problems installing the SDKs required to run Meteor as a mobile app

Today I'm skipping ahead to the Running your app on Android or iOS page of the Meteor tutorial. The vast bulk of time required to perform these steps is taken up by downloading the various SDKs that are needed. For this reason, I limited myself to just the Android version, and left the iOS version for another day. Other than that, a couple of very simple commands are all it takes to get our simple-todo Meteor app to run either in an emulator, or directly on a mobile device. And you are not limited to your local server, either - your mobile device app can immediately start talking to the remote server deployed on page 6 of the tutorial. It is quite exhilarating to see your fully functional mobile app launched so quickly!

I encountered a couple of gotchas while running the add-platform android and the run android commands, due to environment variables not being set properly during the installation of the SDKs. My platform is OS X Yosemite (10.10), and the Java environment I installed is the SE Development Kit 8 (jdk-8u-25). This page automatically opened up when I ran the meteor install-sdk android command, and it contained the installation instructions. I was also prompted to install the HAXM emulator acceleration, which I did.

After installing Meteor itself, the Meteor tutorial instructs you to create your first application with the following command:

meteor create simple-todos

The result is an application made up of three files (an html template, a javascript file for the application logic, and an empty css file), plus a folder of "internal Meteor files".

After spending a couple of minutes to see how the javascript file is structured and how it ties into the html template, I got curious about the magic that makes it all work.

The .meteor folder

The first level of the "internal Meteor files" folder looks rather harmless, with ids, lists of packages used, etc. One hint of the submerged portion of the iceberg is given by the versions file, which lists 52 packages or libraries or whatever these things are called in the javascript world.

Initially, that's all you get from running the meteor create simple-todos command. However, things get more interesting when you start the application:

> cd simple-todos
> meteor

When meteor starts, another folder is created, called local, which in turn contains two more folders, build and db. This is where things get interesting. But before diving in, let's see what the application sends to the client.

The client point of view

If you load the application in the browser as instructed by the tutorial and by Meteor itself at the command prompt, by navigating to http://localhost:3000, you can then inspect the resulting page with Firebug or your browser's developer tools. The resulting html closely mirrors the application template, but don't be fooled! Do an old-fashioned "view source" instead, and you'll see something rather different: your browser actually received an html file with a <body> that is completely empty! The <head> on the other hand, loads something like 40 different javascript resources, plus a dictionary of application constants.

What this means is that the page's entire DOM gets generated on the client by scripts. Indeed, at the bottom of the list of the 40 javascript files that are loaded we can see something interesting. The last one, /simple-todos.js, is the same as the one in our project top-level directory, except that before being sent to the client it got wrapped inside a

(function(){ ... })();

The <script> just before that is even more revealing. It's called template.simple-todos.js, and contains:

In other words, our template gets parsed by meteor and compiled into a script, which is sent to the client, and upon execution builds the DOM.

I feel a little uneasy about this. Granted, a DOM inspector (like Firebug) shows me the rendered html, so it should be debuggable just like in the old days, but what if something goes awry in this whole chain?

The install script downloads the meteor bootstrap tarball and extracts it to ~/.meteor-install-tmp.

It moves ~/.meteor-install-tmp/.meteor to ~/.meteor.

It finds the symlink ~/.meteor/meteor, and copies the script scripts/admin/launch-meteor in the same directory to /usr/local/bin/meteor (sudo required).

It prints the helpful message:

To get started fast:

$ meteor create ~/my_cool_app

$ cd ~/my_cool_app

$ meteor

Or see the docs at:

docs.meteor.com

This is the happy path, but of course, the installer also deals with various kinds of error conditions.

The version is set in the variable RELEASE in the script, so I suppose if you want to upgrade to a later version you need to download the script and run it again. I presume the URL in the install command will always point to the latest version.

In a future installment, I will dissect the launch-meteor and the meteor scripts themselves, because they seem to be responsible for downloading all the node and other javascript dependencies. For the time being, I am trying to achieve some kind of isolation by doing all this inside a nodeenv virtual environment.

A useful regex that can be plugged into just about any environment, to detect nearly all major devices known to WURFL

Recently I had a need for a simple way to redirect all requests for a website to a different URL if the request was coming from a mobile device. That was about the extent of it, no mobile framework was required, no special library or API. I was happy to find an open source solution: http://detectmobilebrowsers.com/

This very useful solution basically offers a single regular expression that is capable of detecting 15777 devices and 15606 user agent strings (as of this writing), which encompasses nearly all major devices detected by WURFL. You can download it in 16 different flavors, ranging from Apache to IIS to Nginx rewrite rules, to pretty much any popular web development environment, such as Javascript, Python, Rails, Perl, PHP, ASP, C#, and more.

Android tablets, iPads, Kindle Fires and PlayBooks are not detected by design. To add support for tablets, add |android|ipad|playbook|silk to the first regex.

In my case, I opted for the Apache rewrite condition. In the following example, I added an extra twist: if a mobile browser requests the same "regular" URL within a 24 hour period, the redirect will only happen on the first request. So I set a cookie with an expiration time of 1440 minutes.
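A sketch of that configuration follows. The full detectmobilebrowsers.com pattern is thousands of characters long, so a placeholder stands in for it here, and the domain names are hypothetical:

```apache
RewriteEngine On
# Skip the redirect if our cookie from a previous visit is still present
RewriteCond %{HTTP_COOKIE} !mobile=1 [NC]
# Placeholder: substitute the full regular expression downloaded from
# detectmobilebrowsers.com for MOBILE_REGEX_HERE
RewriteCond %{HTTP_USER_AGENT} "MOBILE_REGEX_HERE" [NC]
# Redirect to the mobile URL and set a cookie valid for 1440 minutes
RewriteRule ^ http://m.example.com%{REQUEST_URI} [CO=mobile:1:.example.com:1440,R=302,L]
```

The CO flag on RewriteRule sets the cookie in the same pass as the redirect, so no extra module or script is needed.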

Thanks to a whole array of excellent Plone add-ons that integrate it with Salesforce, your web content and your constituency database can be seamlessly tied together to simplify your work processes.

If you already have a Plone site and a Salesforce account (or are considering adding one or the other to your IT toolset), it won't be difficult for you to imagine ways in which your organization could become more effective, and its workload eased, if only the two could work together. A few examples:

If the e-newsletter subscription form on your website could directly save to your Salesforce contacts, your staff could do away with manual data entry, such as copying details from a subscription email into a new Salesforce contact.

Suppose your Plone site has a custom content type for directory entries for public display, but your "master" directory is maintained in Salesforce. The two directories could be automatically kept in sync, so any edits to a given record in Salesforce are promptly reflected on the public website. Or you may want the ability to edit a record on either side, and the synchronization to be bi-directional. This way, you would only have to make an update once, and not worry about keeping track of having to repeat the same edits on multiple platforms.

If your Salesforce data is structured using multidimensional custom categories, you might want the same structure to be reflected on your website. Out of the box, Plone can handle a single taxonomy with multiple tags for each content item, but you can have custom types with as many categories as are needed. Managing multiple categorization vocabularies can become a site configuration chore. By synchronizing your Salesforce data with Plone, you can expose a rich and multi-faceted vista on your valuable content to your audience, without any additional editorial intervention.

Soliton Consulting can support you with these use cases, and more.

To showcase some of our experience in this field, please consider the following client solutions:

Web-to-Lead Forms

The Fund for Global Human Rights recently upgraded their multi-lingual website from Plone 2.1.4 to 4.2. At the same time, their E-Newsletter signup form was integrated with their Salesforce account using the well-established Salesforce PFG Adapter. This is a very quick and affordable solution that provides immediate benefits to any organization.

Content Synchronization

Online directories are prime candidates for batched synchronization between a Salesforce database and the content of a Plone site. Think Local Seattle took advantage of this solution for a streamlined workflow.

Configuration of Complex Data

501 Commons has a sophisticated search functionality for their provider directory. Each directory entry is tagged for multiple orthogonal categories, such as: Areas of expertise, Counties served, Experience, Foreign Languages, Communities of color served, Other special populations served, as well as other keywords. The vocabularies, i.e. the sets of all possible values for each of these categories, are fetched dynamically from Salesforce to build the search options on the navigation page. Of course, all the provider directory entries themselves are also synchronized directly from Salesforce. Please see the post Diazo for Web Grafting for other aspects of this interesting project.

Please contact us if you would like more information about integration of Plone and Salesforce.

A report out from PLOG, which took place April 3rd - 7th, 2013, in beautiful Sorrento, Italy.

O' sole, o' mare...! Calling attention to the sunshine and the Gulf of Naples, punctuated with quick arm and hand sweeps, and uttered with the appropriate Neapolitan accent, one is happy to let it all sink in, and finally leave behind the damp, grey weariness of another godforsaken Pacific Northwest winter.

There is no better way for plugging into the Plone community than to show up at any one of the many events happening year-round and worldwide

For the seventh year, a contingent of Plone professionals again converged on the classy Hotel Mediterraneo in Sorrento for the annual Plone Open Garden, and I counted myself among the lucky ones to partake in the five days and four nights of intense, yet relaxed coding, sharing and - most of all - bonding with other members of this extraordinary community. For some years now I had PLOG on my radar, but this year was my first opportunity to experience it first-hand. The superb dining and the impeccable style of the hotel's ambiance and of its garden certainly helped, but the overwhelming feeling I got from all fifty-odd participants was one of delight at being reunited one more time and having a chance to spend a few days together doing what we all love to do.

Coming from all corners of Europe (Finland, The Netherlands, the United Kingdom, France, Slovenia, Catalonia, Germany, Spain, and, of course, Italy) and from as far as Brazil, not to mention yours truly from the United States, for many this was the first chance to meet face to face since the October 2012 Plone Conference in Arnhem. All our electronic communications channels notwithstanding, the Plone community is very much a human community, and humans need personal interactions to reinforce this sense of belonging everybody craves.

Anyone out there wanting to find ways to plug into Plone, or just learn more about it - take note: there is no better way than to show up at any one of the many events happening year-round and worldwide. Without aiming to detract from any of them, in my humble estimation, PLOG tops them all. My heartfelt appreciation goes to the Abstract team who made it all happen, fearlessly led by Maurizio Del Monte.

From the many excellent morning talks in the Speakers' Corner and all the conversations and sprints that happened on this occasion, I got the distinct sense that the energy and momentum behind several strategic directions are significantly increasing: to name a few, the marketing effort and the upcoming plone.com and the Products Party.

Personally, with Asko Soukka's help I learned how to integrate plone.app.robotframework (slides), a terrific testing framework, along with Travis continuous integration tests and Saucelabs into any given add-on, and I integrated robot tests into Plomino. I enjoyed learning about NixOS, plone.app.contenttypes and plone.app.widgets. I also want to re-share a 2007 paper by Jonah Bossewitch: Fabricating Freedom: Free Software Developers at Work and Play. Brought to our attention over dinner and tweeted by Silvio Tomatis, the paper paints a picture of the open source community, and the Plone community in particular, in which many of us will not fail to recognize ourselves.

The Yahoo! Query Language unifies access to a plethora of web services with a simple SQL-like language. Apps run faster, with fewer lines of code, fewer network calls, and eliminating the pain of locating the right URLs and API documentation to access and query each Web service.

Exhibit B: The Console

The large text box at the top is pre-populated with the previous query

Directly under it, click the Test button, and try switching between XML and JSON, as well as between the Formatted or the Tree representations.

The two right tabs allow you to experiment with the two individual data sources that are joined by the query.

Finally, in the text box along the bottom you can find the REST Query I linked above, which returns the raw data.

The Proof

The little weather icon to the left was generated with a small snippet of jQuery utilizing the same query URL from above. Note how no javascript API is loaded from remote sources, and we are combining two different web service data sources in a single AJAX call. Safe and fast.

Piecing it together

Go back to the YQL console, and drill down into the list of Data Tables on the right, until you find weather.forecast. The large text box at the top will be populated with a sample query:

select * from weather.forecast where woeid=2502265

Here, woeid=2502265 represents Sunnyvale, CA.

Next, go back to the Data Tables list, and click on geo.places. This time, the sample query is:

select * from geo.places where text="sfo"

Copy the query, and go back to weather.forecast. Replace the = after woeid with the in operator, and put a pair of parentheses around the 2502265 value. Finally, replace the 2502265 value with the query from geo.places:

select * from weather.forecast where woeid in (select woeid from geo.places where text="arnhem")

That's how easy the console makes it for us to discover how to piece together any web service query we can think of!

Finally, it's just a matter of pulling out the pieces of data from the JSON response with a little bit of jQuery.

Of course, by playing around with the console, or even reading the extensive YQL documentation, we can make the queries much more efficient and optimized, but this is a great start.

If you want to use data from such disparate sources as Zillow, Craigslist, Flickr, Pidgets, bit.ly, Wordpress, Yelp, Facebook, Twitter, YouTube, Answers, and many others, I can't recommend YQL highly enough.

At first glance, these two sites seem to have nothing in common, except for the general topic they seem to present: They come from different organizations, they look very different, and judging by the respective main navigation menus, the rest of the site has very different content. However, if you look just a little more closely you will notice that apart from the header and footer of the two sites, the main page body is actually the same. It works the same in both sites, too: you can click on checkboxes, expand the various filters in the middle column such as the "Counties served", and the results in the right column are dynamically updated accordingly. All the filters and search results are the same, too. (This is an example of a "faceted navigation", which is some interesting functionality in its own right.)

I'll let you in on the secret now: they are actually the same site.

A bit of history: Over a year ago I participated in creating the 501 Commons site, with its faceted navigation directory, by customizing the Plone add-on product eea.facetednavigation, where all the filters and search results are dynamically loaded from Salesforce. Earlier this year, Washington Nonprofits kicked off a project to redesign their old website. As part of this re-vamping, they negotiated with 501 Commons to have the same resource directory embedded within their new site. The redesign of the new Washington Nonprofits site was commissioned to a separate company, but I was pulled in to solve this particular embedding problem. The requirement was that the new washingtonnonprofits.org site would launch with the 501Commons resource directory seamlessly embedded into one of their pages, while leaving all control over the directory itself in the hands of 501 Commons.

Diazo - plastic surgery without the scalpels

With Diazo, all that was needed for this to happen was the HTML and CSS of the new Washington Nonprofits site design, which was available through the browser at a temporary URL (the new site had not launched yet). I never needed access to the source code or any implementation detail of the new Washington Nonprofits site, and it never had to be modified or altered. A subdomain (directory.washingtonnonprofits.org) merely had to be set up and pointed to the server hosting 501commons.org.

Even more remarkably, the implementation on the 501 Commons site did not need to be altered for this to happen, either. Consider that the two sites are built on completely different platforms, hosted in different environments, and managed by independent organizations. 501 Commons is a Plone site with a Salesforce integration, hosted on Soliton Consulting's servers [now moved to a different hosting provider], while Washington Nonprofits is definitely not Plone, and could literally run on any other platform.

Applying a custom graphic design to a website is a process known in the industry as "skinning", or "theming". That is, designers produce the desired look and feel of a site, usually in the form of Photoshop composite files. That design is then converted into HTML and CSS code, and the resulting code is applied to the underlying website platform. In most Content Management Systems, this requires writing code tailored to the very specific implementations of the various parts of the site, e.g. menus, search, sidebars, etc.

Diazo makes it possible to "skin" a site without modifying any of the underlying code. The magic happens in a so-called "rules" file, which is an XML file containing a set of transformation rules. These rules are then translated into XSLT transforms, which are applied on the fly to the HTML dynamically generated by the server. The rules act as go-betweens to modify a static HTML theme file, and place the dynamic content into the theme skeleton. For example, rules can say "drop this element of the theme", or "replace that block of theme with this piece of content", or "insert this piece of content before this block of the theme". XPath or CSS3 selectors are used in the rules to identify elements in the theme or the content. The theme skeleton can thus be completely rearranged by the rules. Of course, the theme refers to the CSS styles, which is where the graphic design takes shape. Please refer to the last section below for some example rules.

Diazo also includes the ability to selectively apply a theme, depending on the URL used in the request to the server. And so it is that the same server, indeed the same Plone site, can serve up two apparently completely different sites. One site is the "original" 501commons.org/directory, which is left untouched by Diazo, and the other has the Diazo skin for directory.washingtonnonprofits.org. (Of course, the former has a skin of its own, but that is a "traditional" skin, deeply ingrained in the code that generates all the site components.)

The reason I called Diazo a new "technology" at the top of this article is that it is completely independent of any web framework. It works on any platform, regardless of whether you use Plone, Drupal, WordPress, Django, Pyramid, Ruby on Rails, or what have you. Of course, it is now part of the Plone core, so Plone makes it particularly easy to adopt, but that does not make it specific to Plone.

New prosthetics with Diazo

Medical science has opened up many new possibilities with artificial limbs, organs, skin transplants, etc. Diazo allows similar advances in web development. No longer do we have to put all our eggs in one monolithic technology basket. It is now very easy to just take one site's skin, and graft it onto a different site. The end result is that the two sites appear to be one and the same, with the capabilities of both. And why limit ourselves to two?

Every web platform has distinct strengths and weaknesses. Blogs, shopping carts, custom data-driven web applications, wikis, issue trackers, forums, ... many platforms have tried to incorporate as many different applications into their core or their set of add-on plugins as possible, often with less than stellar results. It is now easier than ever before to use different solutions and integrate them into one seamless site.

Use a WordPress blog inside your Plone site

Integrate a Trac issue tracker within a Drupal site

Merge a Plone site with a Django application and a separate shopping cart framework

If you have any specific ideas for how Diazo might apply to your situation, please let me know in the comments below!

A few sample rules

The following rule takes the <title> element from the content, and replaces the last 11 characters, i.e. it substitutes "501 Commons" with "Washington Nonprofits" in the title:
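The rule itself is not reproduced in this copy of the post, but it might be sketched along these lines, using Diazo's support for inline XSL in the rules file (the selectors and markup here are my reconstruction, not the original rule):

```xml
<rules xmlns="http://namespaces.plone.org/diazo"
       xmlns:css="http://namespaces.plone.org/diazo/css"
       xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Replace the theme's <title> with the content's <title>,
       swapping the trailing "501 Commons" (11 characters)
       for "Washington Nonprofits". -->
  <replace css:theme="title">
    <title>
      <xsl:value-of
          select="concat(
            substring(/html/head/title, 1,
                      string-length(/html/head/title) - 11),
            'Washington Nonprofits')" />
    </title>
  </replace>

</rules>
```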

I wrote a little Python script to solve a find-and-replace problem. I had a directory tree with several thousand files, about 2000 of which were static HTML (yeah, don't ask... I blame my predecessor for this), each with the typical Google Analytics tracking code, e.g.:

Now my client started worrying about the privacy of their users, and asked me to remove all these snippets. (As an aside: in Europe it will soon become illegal to use Google Analytics without asking visitors for their consent to be tracked.)

I wrote a regex that captures this multi-line snippet easily. In addition, several HTML files also had event handlers calling the GA tracking script, e.g.:
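The original snippets are not preserved in this copy of the post, but the approach can be sketched. A minimal version, assuming the classic asynchronous `_gaq.push` tracking block and `pageTracker` onclick handlers (both hypothetical stand-ins for whatever the files really contained):

```python
import re
from pathlib import Path

# Hypothetical stand-in patterns -- adjust them to whatever GA markup
# your files actually contain.
# re.DOTALL lets "." span newlines, so the whole multi-line block matches.
GA_SNIPPET = re.compile(
    r'<script type="text/javascript">.*?_gaq\.push.*?</script>',
    re.DOTALL,
)
# Inline event handlers such as: onclick="pageTracker._trackPageview('/x');"
GA_HANDLER = re.compile(r'\s+onclick="pageTracker\.[^"]*"')


def strip_ga(root):
    """Remove GA snippets and handlers from every .html file under root."""
    for path in Path(root).rglob("*.html"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        cleaned = GA_HANDLER.sub("", GA_SNIPPET.sub("", text))
        if cleaned != text:
            path.write_text(cleaned, encoding="utf-8")
```

Running `strip_ga("/path/to/htdocs")` rewrites only the files that actually matched, which keeps timestamps on the untouched ones intact.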

It is significant that so many of the dedicated contributors to the Plone core and its ecosystem as a whole felt compelled to weigh in to the discussion. Without exception, all the voices in the discussion melded together to form a decidedly constructive chorus. Clearly, a nerve was struck. If you were not in Munich, I can attest to the fact that this topic had the ability to galvanize every single person who participated in the discussions, no matter their level of experience in Plone development.

As so often happens in a lively debate, minds produce copious amounts of ideas. This, of course, is a good thing. It can be bewildering, too. We are lucky that all of these ideas were not just voiced in fleeting verbal conversations, but that we have them, black on white (or whatever colors you use), in our inboxes and list archive. It would certainly be useful to attempt to synthesize all the viewpoints we have heard so far.

My intention, though, is to go back to the start. It seems to me that there was a point in the Munich open space where the discussion definitely lifted off. The lift-off happened when someone admitted to not knowing, or not being able to remember, how to write the code to do something that should be simple, such as copying a content object. Everyone could relate to that frustration. Everyone. No doubt, it wasn't just about "copying an object".

I think we should not confuse the momentum behind plone.api with wanting to create a "great" API. If we let the conversation go in that direction, everyone is going to produce a different wishlist, and there is no way we can make everyone happy. I'm also not completely on board with the idea of solving the 20% of the use cases that cause 80% of the problems. That sounds too much like a lowest-common-denominator approach, which could end up making everybody unhappy.

The momentum originates from the possibility that, someday, with this API I might actually be able to write (and remember) the simple method call required to do a very simple thing. And so, while I'm proud of the sphinx docs we produced, and of our "document first" approach, perhaps to some extent this approach distracts us from where the energy is, and what we are trying to do.

Instead (or in addition) the energy comes from: "I really hate that to do A I have to use this crappy xyz code!"

So, can we start a collection, a little gallery of horrors? Here is a silly example of what I mean. I'm going to paste a code snippet that I hate, and I'm going to explain what I would like to have instead. After that, people can weigh in on what disadvantages my desired "API" would have, or why it would not work, or how it could be solved better.

Why this sucks

The problem is not TAL, it's the double indirection to a method that I have to call with what looks like a set of positional arguments in an arbitrary order. Could I please just have a global is_manager that I don't need to define? If I set up my own set of custom permissions, I guess I'll be fine doing the python:context.portal_membership.checkPermission('Do something unusual', context). Plone ships with a set of stock permissions besides Manager, so all of those should be available globally. Actually, it would be nice if a global is_mycustomperm could be generated automatically whenever a new permission is defined.

Actually, this example contains two horrors in one. What's with the ${context/portal_url}? I can never remember when I can use portal_url and when I can't. Why context? Why would portal_url depend on it? Subsites don't ship with Plone out of the box.
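The offending snippet is not reproduced in this copy of the post, but the two horrors might be reconstructed along these lines (a hypothetical sketch built from the calls quoted above, not the original code), next to the kind of shorthand being wished for:

```xml
<!-- Hypothetical reconstruction of the offending TAL -->
<a tal:condition="python:context.portal_membership.checkPermission('Manage portal', context)"
   href="${context/portal_url}/@@site-setup">Site setup</a>

<!-- The wished-for version: a predefined is_manager and a context-free portal_url -->
<a tal:condition="is_manager"
   href="${portal_url}/@@site-setup">Site setup</a>
```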

Discussion, pros/cons

Is there a performance penalty to having all the permissions computed for the context at request time?

I don't have a strong preference on how this little gallery of horrors should be implemented. Sphinx might work; Google Moderator, maybe. I'm a fan of wikis: I like how in MediaWiki (e.g. Wikipedia) there is a separation between the content and the discussion about the content (they are on different tabs), and yet there is no barrier to either editing the content or adding to the discussion, and full history is preserved (again with no barrier, no context switch).

It's great that we started writing the documentation for plone.api, and even included examples for each element of it. But somehow divorcing this documentation from the horrors we are trying to fix seems counterproductive.

Of course, the "little gallery of horrors" and the "official" documentation have to be integrated somehow, and this is another problem.

Finally, I think that while it's certainly better to start small than not at all, it should be possible to let plone.api grow over time to cover more than the 20/80 scenario that was proposed.

This presentation attempted (and in my opinion succeeded in) making the case that an entire web application can be built in Javascript. At this point, I am not too hard to convince anymore. As Philipp von Weitershausen demonstrated at the 2011 Plone conference in San Francisco, Javascript is plenty fast, so no concerns there. Daniel also claimed that the error logging problem can be solved with tools that send log events back to the server (I did not write down their names). Javascript quite naturally allows teamwork with a separation of concerns between people working on the templates, the CSS and the scripts. Of course, there is JSON to handle sending data back and forth between client and server. One thing Daniel did not talk about is the server side, and that's about my only complaint. He also touched upon compression of HTML, CSS and Javascript, and mentioned A/B testing for interface design.

The whole presentation was based on the experience Daniel gained rewriting an e-commerce application in Javascript, but there was no demo or details of the project.

The title may seem a bit hyperbolic, but by the time we got to the demo it became clear that it was no exaggeration. Red Turtle pulled off an amazing feat here.

I can't remember if it was the Emilia Romagna region, or the European Union that partially funded this collaborative effort between Red Turtle and two other local companies. The premise was: we use a lot of different tools to fulfill our project management needs, but there isn't a single one that does it all. So we are going to have to build it. But why reinvent the wheel? Just use all the tools we currently use as components of a "mega mashup".

Pyramid for main application, good support for third-party authentication thanks to Velruse. The Pyramid admin UI is the glue that holds everything else together, with one common page frame for Plone, Trac, Google Apps.

Plone for SSO, intranet and knowledge management, easy to integrate with Pyramid and Trac

Trac for bug tracking, flexible reports, supports WSGI, easy to integrate with Pyramid, using a few plugins

Google Apps, oauth, scheduling, document management.

Twitter Bootstrap: CSS framework. This allowed them to build a beautiful UI with progressive enhancement out of the box.

It was amazing to see in the demo that through Pyramid all the components could use each other's data.

I have a few misgivings about this one. For one, the sound volume was so low that I could hardly hear the moderator or the two contenders, and my jetlag-addled mind took that as a cue to seek some sleep whenever it could. For another, I had never really heard of TYPO3 before, and I doubt I will in the US, so that too contributed to my interest being fairly sluggish. On the other hand, the idea was good, and since TYPO3 is a very popular LAMP-based CMS in Germany, everybody else seemed to be really into it. It might be interesting to do a "shootout" between Plone and Drupal in the US. TYPO3 seems to have a pretty powerful backend UI, with what looked to me like a Deco-like drag-and-drop tile-based layout system. Timo scored a point and a round of wild applause when he demoed Diazo to instantly "steal" the TYPO3 skin and apply it to an OOTB Plone site.

Prof. Helmbrecht is the director of ENISA, the European Network and Information Security Agency. And ENISA uses Plone. His talk was pretty interesting from the perspective of how an agency such as ENISA has to look ahead to all kinds of emerging threats. For example, cloud computing: governments may not want to put their sensitive data, or the sensitive data of major national industries, in the cloud if there are no guarantees that the data will not get stored in datacenters outside their borders, especially in a country that could potentially become hostile. But on the other hand, the economies of scale that make cloud computing possible would break down if such restrictions were imposed on it. In the end, any unrealized risks might at some point in the future become reality, as the botnet and Stuxnet cases proved. Then there is social networking, and the risks involved in mobile apps and HTML5.

He also talked about how most of the advisory reports produced by ENISA are put together by teams of independent experts, and some discussion arose at the end around the question of putting a Plone community member on one of those committees. We could certainly contribute a lot, so it seems like a good idea.

I feel it was a very smart strategic move on the part of the Konferenz organizers to invite Prof. Helmbrecht to keynote for us.

Open Spaces

I joined the open space that picked up where yesterday's left off.

In the spirit of making easy things easy, and after realizing, as a group, that there was no consensus or even clarity on how to duplicate a content item, it was decided to start writing a wrapper API that would let us accomplish common tasks with one simple (and easy to remember) method call.

We discussed various approaches. In the end we decided that what would make things easiest would be a PHP-like API for about 20 of the most desired tasks, without worrying that it may not be "pythonic". Treating objects like Python dicts (e.g. a User) would cause significant complications (e.g. a User could be an ACL object, an LDAP user, or a membrane user, each of which has to be treated differently), and we don't want to have to cover all possible cases. We also thought that our API would be split into two sets: one will simply be the "easy" and "recommended" way to do things from now on; the other would only stick around until the wart it works around is fixed for good, and would then be deprecated.

This brings us to CMS licensing costs. These can be modest, or they can add up to millions of dollars, depending on which solution you're looking to buy. Your budget can start at $5,000; $20,000; $50,000; $100,000; or $250,000, just for the license. It is still a common misconception that open source WCM is free. You may not pay for the license, but you get what you pay for.

And then further down it has the comment in the title of the keynote, which Matt used sarcastically: there are only certain types of relationship which you can buy with a wad of cash, and they usually don't last very long... As Plone developers/implementers/... you can pay us for our services, but our relationships are real.

City Tour

As a pre-party extra-curricular activity we were invited to participate in a guided tour of Munich. The title was "The other Munich". Behind the architecture, the hospitality, the art and the celebrations Munich has a completely different historical dimension, which many are only vaguely aware of. Before some city government PR geniuses renamed Munich as the "Metropolis with a Heart", it used to be the "Capital of the Movement". Here is where the NSDAP (aka the Nazi party) was founded, this is where the national socialist movement started its ascent to power. Munich is also known for many courageous acts of resistance, such as the students' "White Rose" and Georg Elser. Resistance came from bourgeois and religious circles, as well, and even from the nobility. Munich is full of buildings, streets and squares that remind us of those times.

I really enjoyed this tour, and want to convey all my appreciation to the organizers!

Oh, and while we were looking at the plaque commemorating the place where the former Gestapo headquarters used to stand, a cab and a van got into a wreck:

Party

It wouldn't be a Plone conference (whether with a C or a K) without a great party. We had the run of the entire Villa Flora restaurant, with a very civilized buffet-style dinner made up of too many great selections to count, and unlimited beverages. A DJ, too. Great fun!

Some other random notes

First of all, one of the participants was a PhD student who is currently doing research on the factors that cause retention or attrition in FLOSS projects. He had no prior connection to the Plone community, and so he didn't know much about it. I thought it was really interesting to have an outsider asking us lots of questions, forcing us to think about who we are, why we do what we do, what caused us to maybe change or stay the same, etc. It's one thing to hear a keynote tell us what a great community we are (which is true), but it's another to be put in a position to articulate it ourselves, trying to be as objective as possible.

Possibly the only snafu of the whole conference was that the wifi in the main auditorium stopped working about halfway through the second day, and never came back up. I know the organizers were very chagrined about it, but were powerless to fix it.

A brief report, my favorite links and keywords after the first day at the Plone Konferenz in München

I have never posted reports from conferences before, so a few words on my intentions: I don't plan to be exhaustive, nor even coherent. I just want to share some notes I took at the various talks I attended today. The very minimum that happens for me when I see an interesting talk is that I get inspired to find out more about a number of concepts the presenter touched upon. That's usually the extent of my notes - leads for later exploration. My notes are generally all in Evernote, perfectly searchable to begin with. By posting them online I get the added benefit of having them indexed by Google...

The slides for most of today's talks are already available online. Just go to the schedule page, click on the talk you are interested in, and look under the presenter's profile box. If there is a line called "Folien" (slides), the link next to it points to the slides. They will all be in German, of course, with a few exceptions.

The MC in the main lecture hall was Philip Bauer, of Starzel.de. To introduce the conference, some Department Chair or other said a few words, of which I only remember these: "I always tell my students that the best programs are written not at the keyboard, but while taking a swim or walking in the park." Philip also thanked him for donating the venue to us for free, which is remarkable.

Big kudos to the conference organizers: from the professionalism and flawlessness of the first day's proceedings one would surmise that these folks are old hands at putting on conferences of this caliber, not that this is the first-ever German Plone Konferenz. Hats off to you!

Liz's talk was truly excellent. The gist of it: she is fed up that coding for Plone is so ridiculously hard, so often. Example: it takes 6 files and 20 lines of code just to add a new stylesheet. Another example:

You might recognize this as the code you need to grab the site portal. How silly is this? Nobody can ever remember it, it's always copied and pasted from somewhere else.
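The slide's snippet is not preserved in this copy of the post; what's being described is presumably something along the lines of the classic incantation (a hypothetical reconstruction, and it only runs inside a Plone instance, so treat it as illustrative pseudocode):

```
# Hypothetical reconstruction -- the canonical "get me the portal" dance
from Products.CMFCore.utils import getToolByName

portal_url_tool = getToolByName(context, 'portal_url')  # look up the tool
portal = portal_url_tool.getPortalObject()              # finally, the site root
```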

It's not just that Plone developers want their lives to be easier. It really is about the success of the platform.

"If you want a platform to be successful, you need massive adoption, and that means you need developers to develop for it. The best way to kill a platform is to make it hard for developers to build on it. Most of the time, this happens because platform companies ... don't know that they have a platform (they think it's an application)." ~ Joel Spolsky

Easy things should be easy. They should be so easy that we don't even have to look at documentation, or find code samples to copy and paste from.

The most-tweeted quote from the presentation was "Plone developers cost much more than the competition because they are highly skilled + scarce" (in which the first tweeter mistakenly wrote "scared" instead of "scarce").

I highly recommend the slides (linked above).

Liz ended by announcing that this will be her crusade for the year, and that she would send an email to plone-developers to solicit input on all the things that frustrate us and that should be improved. She promptly did, and set up a google moderator space to collect input from the community. As of this writing, there are 19 posts already.

This was about multilingual sites. Working in the US, I rarely have the chance to use the many multilingual Plone features and add-ons. In fact, I vividly remember a sprint in Seattle where we tried to do "the right thing" by declaring all the i18n domains and so on, but it would have been immediately obvious to anyone looking at our code that we didn't know what we were doing. We hoped that at least our effort would be appreciated...

slc.xliff + XLIFFMarshall, valentine.linguaflow

slc.linguatools

slc.quickchange

Babel (PyPI): extract_messages, init_catalog, compile_catalog

I liked one feature of slc.linguatools: your content gets a multilingual workflow, so that when one item is edited, all the translations of that item get an info box at the top of the page alerting you that they are out of date.

This was really cool. The only bummer is that the product described is not released, understandably, since it depends on a whole setup external to Plone.

It's about using Plone as a delivery and reporting platform for e-books. The product provides two new content types: an e-book, and an e-book container. You set all the metadata, upload the e-book. Then you decide how many random download codes you want to assign to a given e-book. Now you can print fliers with the URL+code, or stickers, or what not. It even gives you little snippets of HTML that can be embedded on any other website, which generate a download form: it asks for email address, name, and whatever other information is required. The form invokes a view, which in turn wraps up the e-book with whatever DRM "envelope" is associated with the code, and starts downloading it. Then you can get statistics on how many downloads per book, over different time ranges.

Alan joined us live on Skype from Houston, TX. Biggest takeaway for me is that I really want to move my hosting to RelStorage. Also interesting: DemoStorage. This takes an existing FileStorage, network storage (ZEO) or RelStorage as it was at startup time, and does all the writes to RAM, i.e. the persistent storage is not touched.

Lightning Talks

Jan Ulrich Hasecke talked about the German user manual. It's written with Sphinx, so it can produce a nice online version, as well as a great PDF for hard copies. All the images can be replaced, to customize it for a specific deployment.

Daniel Nouri talked some more about Kotti, a lightweight CMS built on Pyramid.

Runtime rules, static is subordinate; don't mess with the framework; keep it simple and pythonic; no fights with storage; use chains and trees as structures.

Jens Klein on YAFOWIL (Yet Another Form Widget Library)

Jens decided he hates form libraries. z3c.form is insane. So he wrote his own with extreme simplicity as his goal. He wants a form to be generated with no python code, or as little as possible. A form is a data structure (trees and chains), so it can be represented either as a dict, or in YAML.
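To illustrate the idea of "a form is a data structure" (this is not YAFOWIL's actual API, just a self-contained sketch of the concept): a form described as a nested dict, rendered by one small generic walker.

```python
# A sketch of "a form is a tree": nested dicts describe the form,
# and a tiny generic renderer walks the tree.
# Hypothetical structure -- not YAFOWIL's real API.
form = {
    "tag": "form",
    "attrs": {"action": "/save", "method": "post"},
    "children": [
        {"tag": "input", "attrs": {"name": "title", "type": "text"}},
        {"tag": "input", "attrs": {"name": "submit", "type": "submit"}},
    ],
}


def render(node):
    """Recursively turn a form-describing dict into an HTML string."""
    attrs = "".join(f' {k}="{v}"' for k, v in node.get("attrs", {}).items())
    children = "".join(render(child) for child in node.get("children", []))
    if children:
        return f'<{node["tag"]}{attrs}>{children}</{node["tag"]}>'
    return f'<{node["tag"]}{attrs} />'


html = render(form)
```

The same tree could just as easily be loaded from a YAML file, which is exactly the point: the form's definition is data, with no Python code per form.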

I arrived in Munich from Seattle on Tuesday afternoon, jetlagged and tired, but otherwise unscathed. The reason this is the first post on this site is that the Plone Konferenz is the first occasion where I will be giving out my new business cards, and there was no website at the domain printed on them until today. So I got busy, set up a brand new Plone 4.2b2 site, and started pulling content together. Better rough than nothing!