Testing Plone projects with Gitlab-CI
http://cosent.nl/en/blog/plone-gitlab-continuous-integration
How to run continuous integration on private Plone projects, for free.
Continuous integration testing is a key enabling practice for agile coding projects.
In the Plone community, it's become best practice to combine a Github code repository
with a Travis-CI testing integration for separate packages.
In addition, the Plone CI team uses Jenkins to test the overall Plone integration.

Both Github and Travis are free for public open source projects, but running private
projects on Travis is an expensive proposition. Jenkins can be hard to set up
and maintain, and the user interface is becoming long in the tooth.

Enter Gitlab and Gitlab-CI. Gitlab shamelessly replicates most of the Github user
experience. But, in contrast to Github, private repositories are free on Gitlab.
In addition, Gitlab-CI offers continuous integration features for free as well.

For a new Quaive customer project, I needed a private repository with continuous
integration testing. Below I'm sharing with you how you can set up your private Plone
projects on Gitlab-CI, too.

If you want to use Gitlab as a second repository and pipeline, in addition to
e.g. an existing Github origin, you can simply add another remote and push there:

git remote add gitlab <url>
git push gitlab master

Docker rules

If you're still afraid of Docker, now is the time to overcome that fear.
The way Gitlab-CI uses docker is awesome and a gamechanger for testing.
Here are the benefits I'm enjoying:

Full isolation between test builds and test runners

We've been wrestling with cross-build pollution on Jenkins,
especially with the Github pull request builder plugin.
No such thing on Gitlab-CI: every test run is done in a new,
pristine Docker container which gets discarded after the run.

Fully primed cache for fastest buildout possible

This was the key thing to figure out. On Travis the buildout is typically
primed by downloading the Plone unified installer, extracting the eggs cache
from the tar ball and using that to speed up the buildout. That still leaves
you downloading the installer, plus any extra eggs you need, on every build.
Not so on Gitlab-CI. I'll explain how to set up your own Docker cache below.

Unlimited, easily scalable test runners

The Gitlab-CI controller is free. The actual builds are executed by runners,
and what is super neat about Gitlab-CI is that your runners can
live anywhere. Even on your laptop. That means you don't have to pay for extra
server capacity in the cloud: you can just use your desktop, or any machine with
spare capacity, to run the tests.

Imagine what this does during a sprint. Normally, CI capacity can quickly become
a bottleneck if lots of developers are pushing a lot of code on the same days.
Because of the scalable nature of Gitlab-CI, every developer can just add a runner
on their laptop. Or you could hire a few cheap virtual servers somewhere for a few days to
temporarily quadruple your testing capacity for the duration of the sprint.

Parallel test runs for faster end results

As a result of all of the above (fast buildouts, scalable runners) it becomes feasible
to split a long-running test suite into several partial runs, which you then execute in parallel.
At Quaive, a full test run with nearly 900 tests takes the better part of an hour to run.
Being able to split that into two shorter runs with faster feedback on failures is a boon.

Prepare a Docker image to use as a base for Gitlab-CI testing

You prepare, just once, a Docker image with all system dependencies and an eggs cache.
That image will be cached locally on your CI runners and used as a base for every test run.

I use a Docker image with a fully primed cache:

The Dockerfile pulls in all system dependencies and runs the buildout once,
on creation of the Docker image.
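As an illustration, such a Dockerfile could look like the sketch below. All package names, paths and the gitlab-ci.cfg file name are hypothetical; adjust them to your own project's dependencies.

```dockerfile
# Hypothetical sketch: install system dependencies for a Plone build,
# then run buildout once so the egg/download caches are baked into the image.
FROM ubuntu:14.04

RUN apt-get update && apt-get install -y \
    python2.7 python2.7-dev build-essential \
    libxml2-dev libxslt1-dev libjpeg-dev zlib1g-dev \
    git wget redis-server xvfb

COPY . /build
WORKDIR /build

# One full buildout run on image creation primes the cache.
RUN python2.7 bootstrap.py -c gitlab-ci.cfg && bin/buildout -c gitlab-ci.cfg
```

Later test runs in containers based on this image then only need to fetch whatever changed since the image was built.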

docker build -t yourorganisation/yourproject .

The buildout pulls in all the needed eggs and download sources and stores them
in a buildout cache. Note that in case of complex buildout inheritance trees the
simplest thing to do is to just list all eggs alphabetically, like I've done here.
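The cache locations involved are standard zc.buildout options; a sketch (the paths are hypothetical):

```ini
[buildout]
extends = buildout.cfg
# Standard buildout cache options: downloaded eggs and source archives
# land here, get baked into the Docker image, and are reused on every run.
eggs-directory = /buildout-cache/eggs
download-cache = /buildout-cache/downloads
```

Any buildout run that extends this configuration, inside the image or in a container derived from it, hits the warm cache instead of the network.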

Once that's done, you create an account on Docker Hub and push your image there:

docker push yourorganisation/yourproject

Note that Quaive is quite a complex code base. For a less complex Plone project you could
prune the list of installed system packages and eggs. YMMV.

Prepare your code for Gitlab-CI testing

You simply add a .gitlab-ci.yml script that configures the test runs.
This is required and the file must have exactly that name.

In addition, it makes sense to add a specialized gitlab.cfg buildout file, but this
is not required and the name is entirely up to you.

For those of you who have worked with .travis.yml before, this will look very similar:

before_script configures the system to be UTF-8 safe, bootstraps the buildout and
then builds out a gitlab-ci.cfg, which is just a normal buildout stripped of
everything not needed in the tests. We then start any required services; in our case
we need redis. An Xvfb virtual framebuffer is needed for the robot tests.
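Put together, a sketch of such a .gitlab-ci.yml could look like this. The image name, job names and the test selection flags are placeholders for your own setup; the split into two jobs illustrates the parallel runs mentioned earlier.

```yaml
# Runs inside the pre-built Docker image, so buildout only fetches deltas.
image: yourorganisation/yourproject

before_script:
  - export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8   # UTF-8 safe environment
  - python bootstrap.py -c gitlab-ci.cfg
  - bin/buildout -c gitlab-ci.cfg
  - service redis-server start                   # required service
  - Xvfb :99 &                                   # framebuffer for robot tests
  - export DISPLAY=:99

# Two jobs picked up by separate runners execute in parallel.
test-part-1:
  script:
    - bin/test -m part1   # hypothetical test module filter

test-part-2:
  script:
    - bin/test -m part2
```

Each job gets its own pristine container, so the two partial runs cannot pollute each other.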

In the last question of the runner registration dialog, you use the name of the Docker image you pushed to Docker Hub before.

Repeat the registration process in case you want to add multiple runners.
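For reference, a sketch of that registration dialog as it went at the time, via the gitlab-ci-multi-runner tool (nowadays renamed gitlab-runner); the prompts are paraphrased and the answers are placeholders:

```shell
$ sudo gitlab-ci-multi-runner register
# coordinator URL:            https://gitlab.com/ci
# runner token:               <from your project's Runners settings page>
# executor:                   docker
# default Docker image:       yourorganisation/yourproject
```

The token ties the runner to your project; the default image is what every build container is started from.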

That concludes the hard part.

Push your work on any branch and check test results

Now, any push to any branch on your Gitlab project will trigger the builds you configured.
You can see them in the sidebar of your project under "builds" and you can follow the console
of the build as it's progressing.

What I like about this build system is the "Retry build" button. No more pleading with Jenkins
in Github comments to try and trigger the pull request builder (which happens to ignore the
trigger phrases you configured because it always only uses its own hardcoded triggers).

Also, you don't need to open a fake pull request just to trigger a build. So annoying to open
a pull request overview on Github and see lots of outdated, broken builds which are not really
pull requests but just a hack to get Jenkins going. No more.

Gotchas

There are two gotchas you need to be aware of here:

Docker may not have been started properly.

There's a known race condition between the docker
service and your firewall which prevents your containers from running properly. That shows up as
"hanging" builds in your build overview. The solution is simple: just once after reboot,
issue sudo /etc/init.d/docker restart and that should fix it.

Branch testing on Gitlab is subtly different from pull request testing on Github.

Gitlab is very straightforward: it tests your branch at the last commit you pushed there.
The downside is that if your branch is behind master, a merge may result in a broken master,
even though the branch itself was green. The way to prevent that is to allow only fast-forward
merges, which you can configure. Personally I have reasons to not always fast-forward. YMMV.

Github on the other hand, tests your pull request after virtually merging to master.
The upside of that is clearly, that a green PR indicates that it can be safely merged.
The downside is more subtle. First off, not many developers are aware of this virtual
merging. So if your branch is behind master, it may break because of a regression which you
cannot reproduce on your branch, since reproducing it needs a merge/rebase with master first.
So you can have an inexplicable failure. The reverse is also possible: you may have a green
test result on your PR but still breakage on master, if some other PR got merged in the meantime
and you did not trigger a new test on the to-be-merged PR.

Conclusion

On Quaive, we now use Jenkins-CI with Github pull request integration, in parallel with the
Gitlab-CI setup described above. This makes it very easy to triangulate whether a test failure
is caused by the code or by the test server. It also provides us with the benefits of both per-branch,
per-push Gitlab-CI testing and the virtual merge pull request testing of Github + Jenkins.

If you're interested in setting this up for your own projects and running into issues, just
start a thread on community.plone.org and ping me (@gyst), or leave a comment below this post.

The layered interaction design is adaptive: users can safely ignore the "social layer" and still have a fully functional user experience that addresses document management and process support.
This enables a gradual evolution of organisational change, without requiring retooling.

Update

There's now a video available where Guido summarizes this talk, and outlines his vision for the future of intranets.

Guido Stevens · intranet, complexity, management, design · 2016-03-04

Plone Intranet: from plan to reality
http://cosent.nl/en/blog/plone-intranet-plan-to-reality
Collaborating with nine companies across five countries, the Plone Intranet Consortium moved from plan to working code within months. How did we do it?

In the fall of 2014 we announced the formation of the Plone Intranet Consortium:
a collaboration between Plone technology companies to develop a Plone based
open source digital workplace platform.

As you can see in the video, for all of us it's very important to be part of the open source community
that is Plone.

At the same time, we use a different process: design driven, which impacts our code structure
and the way integrators can leverage the Plone Intranet platform.

Sharing and re-use

All of our code is open source and available on Github.
In terms of re-use we have a mixed strategy:

First of all it's important to realize we're doing a design-driven
product, not a framework. We have a vision and many components are
closely integrated in the user experience (UX). From a UX perspective, all of Plone
Intranet is an integrated experience. Sure, you can customize that, but
you have to customize holistically. You cannot rip out a single feature
and expect its UX to stand on its own.

In the backend the situation is completely different. All
the constituent packages are separate even if they live in one repo
and one egg. You can install ploneintranet.microblog without installing
the whole ploneintranet stack: i.e. the whole ploneintranet source needs
to be there (at the python level) but you can load only the
ploneintranet.microblog ZCML and GS and you'll be fine. All our packages
have their own test suites which are run independently. Of course you
need activitystream views to display the microblog - and that's frontend
UX and one of the most complex and integrated parts of our stack, with
AJAX injections, mentions, tagging, content mirroring and file preview
generation.
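For example, loading only the microblog component from your own add-on's configure.zcml could look like the sketch below. The include directive is the standard ZCML idiom; whether this suffices for your use case depends on which backend pieces you actually need.

```xml
<!-- Load only ploneintranet.microblog's configuration,
     not the full ploneintranet stack. The whole ploneintranet
     source is still on the python path, but only this package's
     ZCML (and its own dependencies) gets executed. -->
<configure xmlns="http://namespaces.zope.org/zope">
  <include package="ploneintranet.microblog" />
</configure>
```

You would then apply only that package's GenericSetup profile rather than the full ploneintranet profile.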

Another example is search: a completely re-usable backend but you'd
have to provide your own frontend. Our backend is pluggable - we
currently support both ZCatalog and Solr engines and expect to also
support Elasticsearch in the future.
We have documented our
reasons for not reusing collective.solr.

Design and user experience are key

We don't believe that loosely coupled components with
independent developer-generated frontends create a compelling user
experience. Instead of working from the backend towards the frontend, we
work the other way around and focus on creating a fully integrated,
beautiful user experience.

The downside of that is that it becomes more
difficult to reuse components independently. That's a painful choice
because obviously it reduces open source sharing opportunities.
We do open source for a reason, and
you can see much evidence that we care about that in
the level of our
documentation, in our code quality, and in the careful way we've
maintained independent backend packages, including listing component
package dependencies, providing full browser
layer isolation and most recently providing clean uninstallers for all
our packages.

Plone Intranet is a huge investment, and we're donating all our code
to the Plone community. We hope to establish a strong intranet sub-community
while at the same time strengthening the vibrancy of the Plone community
as a whole.

The site offers an online knowledge platform for the entire Dutch environmental movement.
The Change Factory is structured using a classical watch-learn-do approach:

Watch

The Knowledge Base provides searchable background information on various topics.

Learn

The Toolbox offers step-by-step help in organising a new initiative.

Do

The Network connects activists with each other and facilitates learning by sharing practical experiences.

The site is currently only available in Dutch. See the Dutch intro video to get a feel:

Knowledge Management in action

Effective knowledge management and sharing of knowledge results not just from
publishing documents ("explicit knowledge"). Much learning and knowledge sharing
results from the interactions between people, in which they exchange "tacit knowledge" -
stuff you know but didn't know you knew.

The Change Factory is designed to support both aspects, in the form of a knowledge base
with documents (explicit knowledge) on the one hand, and a social network geared towards
conversation and interpersonal contact (implicit knowledge sharing) on the other hand.
A toolbox with learning tools connects both aspects into a learning resource.

As a knowledge platform, the site supports a cycle of knowledge flow and knowledge creation
following the well-known SECI model:

Socialization: sharing knowledge in conversation.

The network promotes direct contact and dialogue between environmental activists,
by not only describing the what of an activity, but also who the organisers are
and presenting their contact info. Additionally, a discussion facility on the network
makes it easy to exchange experiences.

Externalisation: writing down your knowledge.

The network is built around the exchange of "experiences", documenting an action format,
so people can learn from successes in another town and replicate the format.
This helps to articulate tacit knowledge into explicit knowledge.

Combination: searching and finding knowledge.

The searchable knowledge base, organised by theme, facilitates the re-use of knowledge.
Documented action formats in the network all follow the same stepwise model,
making it easy to mix and match steps from various formats in creating your own activity.

Internalization: turning learning into action.

The toolbox with process support documentation helps you assimilate best practices
by bringing them into practice, following a simple four-step plan.
Here, you absorb explicit know-what knowledge and internalize that into tacit know-how.

The combination of these approaches turns The Change Factory into much more than
just a set of documents. The site is a living whole where people communicate and
help each other to become more effective, in facilitating the transition of our society
to a more sustainable world.

Realisation

Following the initial project brief, Cosent performed design research in the form of
a series of interviews with intended users of The Change Factory. These interviews
inquired into the way activists collaborate and communicate in practice,
focusing on what people actually need and how an online platform could contribute
to their success.

What emerged from the research, is that nobody wants more long documents.
Nor was there any need for a marketplace-like exchange of services and support.
Rather the interviewees articulated a need for quick-win snippets of actionable knowledge that can
immediately be put into practice.

Based on the outcomes of this research, we introduced the Network aspect of the platform:
a social network centered on the sharing of successful action formats, structured in a way
that facilitates dialogue, direct contact, and re-use of proven formats across multiple cities.

After articulating this concept into an interaction design and visual design, Cosent built
the site in Plone CMS. A crew of editors seeded the site with initial content.
Recently, the platform was publicly unveiled and immediately attracted scores of active users.

Guido Stevens · knowledge management, design research, plone, sustainability · 2015-04-28

Plone Intranet München sprint report
http://cosent.nl/en/blog/plone-intranet-munchen-sprint
Sprinting in München transformed both the team and the code base of Plone Intranet.
The Plone Intranet project represents a major investment by the companies that together
form the Plone Intranet Consortium. Last week we gathered in München and worked really
hard to push the Mercury milestone we're working on close to an initial release.

Mercury is a complex project, challenging participants out of their comfort zones
in multiple ways:

Developers from 6 different countries are collaborating remotely,
across language barriers and time zones.

People are collaborating not within their "own" home team but across company
boundaries, with people whom they haven't really collaborated with before,
and who have not only a different cultural background but also a different
"company coding culture" background.

The backend architecture is unlike any stack people are used to working with.
Instead of "normal" content types, you're dealing with async and BTrees.

The frontend architecture represents a paradigm shift as well, requiring
a significant change in developer attitude and practices. Many developers are
used to working from the backend forward; we are turning that on its head.
The design is leading and the development flow is from frontend to backend.

So we have a fragmented team tackling a highly challenging project.
The main goal we chose for the sprint, therefore, was not only to produce code
but more importantly to improve team dynamics and increase development velocity.

Monday we started with getting everybody's development environments updated.
Also, Cornelis provided a walkthrough of how our Patternslib-based frontend
works.
Tuesday the marketing team worked hard on positioning and communications,
while the developer teams focused on finishing open work from the previous sprint.
As planned, we used the opportunity to practice the Scrum process in the full team
to maximize the learning payoff for the way we collaborate.
Wednesday we continued with Scrum-driven development.
Wednesday afternoon, after the day's retrospective,
we had a team-wide discussion resulting in a big decision:
to merge all of our packages into a single repository and egg.

The big merge

Mercury consists of 20 different source code packages,
each of which had their own version control and their own build/test tooling.
This has some very painful downsides:

As a developer you need to build 20 separate testing environments.
That's a lot of infrastructure work, not to mention a fiendishly complex Jenkins setup.

When working on a feature, you're either using a different environment than the
tests are run in, or you're using the test environment but are then unable to
see the integrated frontend results of your work.

Most user stories need code changes across multiple packages, resulting in
multiple pull requests that each depend on the other. Impossible to not break
your continuous integration testing that way.

We had no single environment where you could run every test in every package at once.

So we had a fragmented code base which imposed a lot of infrastructure work overhead,
created a lot of confusion and cognitive overhead, actively discouraged adequate testing,
and actively encouraged counterproductive "backend-up" developer practices instead of fostering
a frontend-focused integrative effort.

Of course throwing everything into a big bucket has its downsides as well, which is why
we discussed this for quite some time before taking our decision.

Code re-use

The main consideration is code re-use and open source community dynamics.
Everybody loves to have well-defined, loosely coupled packages that they can
mix and match for their own projects. Creating a single "big black box" ploneintranet
product would appear to be a big step backward for code re-use.

However, the reality we're facing is that the idea of loosely coupled components
is not how the code actually behaves. Sure, our backend is loosely coupled.
But the frontend is a single highly integrated layer. We're building an integrated
web application, not a set of standalone plugins.

We've maintained the componentized approach as long as we could, and it has cost us.
A good example is plonesocial: different packages with well-defined loosely coupled
backend storages. But most of our work is in the frontend and requires you to
switch between at least 3 packages to make a single frontend change.

In addition, these packages are not really pluggable anymore in the way Plone devs are used to.
You need the ploneintranet frontend, you need the ploneintranet application,
to be able to deliver on any of its parts. Keeping something like
plonesocial.activitystream available as a separately installable Plone plugin
is actively harmful in that it sets wrong expectations.
It's not independently re-usable as is, so it should not be advertised as such.

We see different strategies by which Plone integrators can use ploneintranet:

Light customization.

You take the full ploneintranet application and do some cosmetic overrides,
like changing the logo and colours of the visual skin.

Full customization.

You design and develop a new application. This starts with a new or heavily
customized frontend prototype, which you then also implement the backend for.
Technically you either fork and tweak ploneintranet, or you completely build
your own application from scratch, re-using the ploneintranet parts you want
to keep in the backend via library mode re-use, see below.

Library mode cherry-picking.

You have a different use case but would like to be able to leverage parts of
the ploneintranet backend for heavy lifting. Your application has a python
dependency on those parts of ploneintranet you want to re-use: via ZCML
and GenericSetup you only load the cherries you want to pick.

Please keep in mind, that this situation is exactly the same for the companies who
are building ploneintranet. We have those same 3 options. In addition there's a
fourth option:

Extension.

Your client needs features which are not currently in ploneintranet but
are actually generally useful good ideas. You hire the ploneintranet designer
to design these extensions, and work with the ploneintranet consortium
to develop the new features into the backend. You donate this whole effort
to ploneintranet; in return you get reduced maintenance cost and the opportunity
to re-use the ploneintranet application as a whole without having to do a full
customization.

You'll have to join the Plone Intranet Consortium in order to pursue this fourth strategy.
But again, there's no difference for current members: we had to join as well.

To make individual component re-use possible, we've maintained the package separation we already had -
ploneintranet may be one repository, one egg, but it contains as separate python packages
the various functional components: workspace, microblog, document preview, etc.
So we do not subscribe to JBOC: Just a Bunch of Code. We don't throw everything into
one big bucket but are actively investing in maintaining sane functional packages.

A variant of cherry-picking is, to factor out generically re-usable functionality
into a standalone collective product. This will generally only be viable for
backend-only, or at least frontend-light functionality, for the reasons discussed above.
A good example is collective.workspace: the ploneintranet.workspace implementation
is not a fork but an extension of collective.workspace. This connection enables us
to implement all ploneintranet specific functionality in ploneintranet.workspace,
but factor all general improvements out to the collective. That has already been
done and resulted in experimental.securityindexing.

Current status

On Thursday we announced a feature freeze on the whole stack,
worked hard to get all tests to green and then JC performed
the merge of all ploneintranet.* into the new ploneintranet consolidated repo.
Meanwhile Guido prepared the rename of plonesocial.* to ploneintranet.*.
On Friday we merged plonesocial into ploneintranet and spent the
rest of the day in hunting down all test regressions introduced by the merges.
Because we now have a single test runner across all packages, we also
identified and had to fix a number of test isolation problems
we hadn't seen before.

Friday 20:45 all tests were finally green on Jenkins!

We still have to update the documentation to reflect the new consolidated situation.

Results

In terms of team building this sprint has been phenomenal.
We've been sprinting on ploneintranet for five months now, but this was the first
time we were physically co-located and that's really a completely different experience.
We already did a lot of pair programming remotely, but it's better if you are sitting next
to each other and are actually looking at the same screen. Moreover feeling the vibe in
the room is something you cannot replicate remotely. The explosion of energy and
excited talking after we decided to do the consolidation merge was awesome.

On top of that we now have a consolidated build, and I can already feel in my own
development the ease of mind from knowing that the fully integrated development
environment I'm working in is identical to what all my team members are using,
and is what Jenkins is testing. Instead of hunting for branches I can see all
ongoing work across the whole project by simply listing the ploneintranet branches.
Reviewing or rebasing branches is going to be so much easier.

On top of all that we also made significant progress on difficult features like
the document previewing and complex AJAX injections in the social stream.

We started with a fragmented team, working on a fragmented code base.
We now have a cohesive team, working on a unified code base.
I look forward to demoing Mercury in Sorrento in a few weeks.

Guido Stevens · open source, plone · 2015-03-14

Death of Sharepoint triggers search for intranet alternatives
http://cosent.nl/en/blog/death-sharepoint-intranet-alternatives
IntraTeam conference highlights ongoing disruption in the intranet market.

Never mention the word "intranet" on a date, or in any conversation for that matter.
It bores people to death.

Intranets are dying

Throw an intellectual heavyweight like Dave Snowden into that mix,
and he'll happily challenge some of the audience's dearly held beliefs.

"The future is distributed.
I don't believe that, in five years' time, there'll be significant presence
at any conference to do with intranets."

"The intranet is going to die.
We're moving to fully distributed systems.
The sooner you start shifting the better."

Snowden thinks apps are much better suited to playing this new field of distributed information
and knowledge management.

Sharepoint is dying

Microsoft appears to have arrived at the same conclusion as Snowden.

"Office 365 is a collection of small applications, maybe loosely tied together.
Or completely siloed."

And this app based system is what replaces SharePoint, up to now the dominant
software package for building intranets.

"The next version of SharePoint is Office 365, and it is already here.
SharePoint 2015 will just be a fancy service package."

Perttu Tolvanen, an analyst at North Patrol, makes the point that Microsoft's
big money maker is Office, and the whole company strategy is aimed at protecting
that asset. SharePoint used to support Office, but now that Office has moved
to the cloud in the form of Office 365, on-premise SharePoint is no longer needed.

"For Microsoft, it's a supporting business for Office. That's why Microsoft has decided to let go of this business.
SharePoint 2013 is a dying system. ... All the product development focus of Microsoft is in Office 365."

Other analysts at the conference confirmed this conclusion.
It's a public secret anyway.

Let me put that into context. On an intranet conference, you meet someone with the job title
"digital workplace application analyst" - that's a SharePoint application manager if you
talk to them. Or you encounter a "technology independent intranet consultant" - you'll
find out that their bread and butter is as a SharePoint project manager.
It's as if SharePoint and intranet were close synonyms.

SharePoint is a Swiss army knife of a system, strong enough that IBM and Oracle
had given up on the intranet and document management market.

And now the 800-pound gorilla dominating the industry just -poof- disappears!

WCM best positioned to take advantage

On first sight, the Office 365 cloud offering looks similar to the SharePoint on-premise version.
But if you look closer, you'll find that
complex portals or integrated digital workplaces cannot be migrated to the cloud.
You need a complete re-design to bring your existing intranet to Office 365.

"Microsoft doesn't care for the on-premise customers for the next five years.
There will be a lot of customers looking for alternatives to SharePoint,
once they realize that SharePoint 2013 is dying."

Web Content Management systems and portal systems are already better than SharePoint
for building complex systems with multi-language capabilities and integrations with other systems.
The departure of SharePoint from the intranet market will accelerate the search
for alternative solutions. The departure of the
dominant supplier leaves a lot of extra oxygen available for the rest.

Return of the portal

What we're seeing here, is a transition from the outdated intranet concept, to a new
digital workplace paradigm.

Dave Snowden:

"We're moving into a radical new approach to software which is fully distributed.
The only interesting question at the moment is: what is the glue that holds it all together?
That is probably the big strategic area to grasp."

Paradoxically, one of the contenders for the integrative part that brings everything together
is a revival of the portal concept.

The screenshot below is from James Robertson's presentation
and shows how an HR page becomes dramatically more useful by pulling relevant, personalized
information from various back-office systems.

The blurring is a design feature by the way.
There's a button to un-blur your pay info when nobody is watching over your shoulder.

Seeing that screenshot, Kristian Norling tweeted:

And that's something that Perttu Tolvanen also referred to.

This is not your grandfather's portal anymore, though. Snowden again:

"The other point is, things will be loosely coupled.
This, by the way, is where object orientation comes in big time, but true object orientation."

Connecting heterogeneous networks of loosely coupled business objects is core business for web technology.
Open standards, open source approaches are especially well placed, to thrive in such environments.

Guido Stevens · intranet, technology · 2015-03-03

3 signs your communications project is off-track, and how to fix it with personas
http://cosent.nl/en/blog/3-signs-communications-off-track-fix-with-personas
Well-researched audience personas communicate deep insights.

The meeting was a disaster. It was supposed to be a "rubber-stamp" type of event,
just to discuss a few questions, after the proposal and the budget had already been approved.
The design itself had been approved half a year ago.

Unfortunately, some key questions could not be answered.

3 questions to check your communications vision

To validate any communications project, three basic questions are useful:

Who are we serving with this project?

What will be the difference for them, a year from now, if we succeed?

How will we interact with our audience and improve their lives?

Boom. The client has some ideas, just enough to invalidate the current design.
But at the same time, those ideas are vague enough that it is
not possible to articulate design directions, or make decisions.

The standard Dutch reflex to such situations is:
we need to have some more meetings to discuss this
(codespeak for: let's try and negotiate our differences away).

That reflex is wrong. You don't need to negotiate opinions. You need more data.

Personas synthesize research based insights

If you're building web sites, or any service that people interact with,
you need a clear picture of who your intended audience is. You don't
get that picture by sitting at your desk. You need to get out of the
building and interview real, live humans to find out what makes them smile.
To find out how exactly your service can add value to their lives.

Once you've done that research, you can then summarize and synthesize the findings
into audience personas, profiles of fictional people that represent a typical customer.
Personas describe key aspects of a person's life, goals and behaviors.
Below are some example personas, based on Mailchimp user research:

Those personas then drive your design. They enable you to empathize with your audience.
They enable you to create a design that touches people in their hearts.
And they enable you to overcome the sometimes weird ideas of people holding purses,
who happen to like the color "red", by grounding design decisions in a solid,
data-driven understanding of audience needs and preferences.

Buyer personas

Originating in the realm of web design and service design, the use of audience personas
is now also gaining traction in the world of marketing,
in the form of buyer personas.

Instead of making stuff up about your customers,
you can ask them for the five key insights you need to focus your strategy:

Priority initiatives

What business need drives them to search for a solution in your market space?

Success factors

What do buyers expect to achieve by implementing your solution?

Perceived barriers

What were the reasons not to buy your solution? Make sure to also interview non-customers!

Buyer's journey

Who is involved in the decision making process, what resources are trusted?

Decision criteria

Which factors are key in weighing alternative options and making a purchasing decision?

Gaining a deep understanding of these five points is key not only for buyer personas,
but for design personas in general.

If you've articulated these five insights, you'll know the answers to the who, what and how questions.
You'll know who you're serving, what value you're adding, and how this fits into the lives of your audience.
The rest is execution.

The takeaway

Don't be fuzzy.

If you don't know the who, what and how: acknowledge you don't know enough.

Doing the legwork to develop true insights is hard work. But it's actionable, not magic.

Use it or lose it.

Services and businesses that invest in a solid, evidence-based understanding of their audiences' needs will outshine those who design and market based on hunches.

By Guido Stevens · design research · 2015-02-23

Plone Intranet Consortium
http://cosent.nl/en/blog/plone-intranet-consortium
Creating a new open source digital workplace platform.
At the Plone Conference 2014 in Bristol, Guido Stevens presented the Plone Intranet Consortium
to an audience of Plone core developers. The presentation focused on the need for a new business
model for open source development, and the specific traction we're achieving within the Consortium already.

The "spare time" model of doing open source is broken. We need a better model.
In the Plone Intranet Consortium, a dozen Plone companies are jointly investing in
the creation of a high-quality out-of-the-box open source intranet platform based on Plone.
What is different is not just the funding model, but also the design-first approach we're taking.

Project Mercury

The Consortium is now working on the alpha release, codename Mercury, which is scheduled for early 2015.
We'll inform a wider audience via a website and a series of workshops, in the run-up to the Mercury release. Stay tuned!

By Guido Stevens · intranet · open source · plone · 2014-12-04

Social intranet sprint Berlin
http://cosent.nl/en/blog/social-intranet-sprint-berlin
A week of intense collaboration in Berlin has significantly accelerated the development of a new Plone-based social intranet platform.
The Plone social intranet sprint in Berlin at the Humboldt University was attended by about 16 participants across two tracks, plus a social program in the evenings. Apart from all the hard work we also had a lot of fun. Berlin is a vibrant city, and the Ploners who are also Berliners took us to some great restaurants.

strategy track

The strategy track saw enthusiastic responses to a plan for joint investment by ~10 Plone companies into a new Plone intranet software suite. We'll use the coming weeks to solidify the momentum for this initiative and try to convert positive intentions into hard investment commitments.

A key part of the proposed plan is a design-first process to create a compelling user experience, leveraging Plone5-compatible frontend technologies. Netsight, Cosent and Syslab have already made an exploratory first iteration with this design process, focused on re-designing the microblogging and social activity stream interactions in Plonesocial.

coding track

In the coding track, we've taken these new frontend designs and implemented them on top of the existing Plonesocial code base. The result is that we now have a Patternslib-based Plonesocial implementation that can be installed and run in a Plone 4 installation. Because Plone5 Mockup is also Patternslib-based, the work we're doing is forward compatible and will be easily portable to Plone 5.

In addition to implementing the existing Plonesocial features, the sprint also resulted in the integration of plonesocial.messaging (private one-on-one messages) and new "reply" functionality (conversation threading).

The new frontend introduces a host of new features that are not yet provided by the backend and need to be architected and coded: file uploads, URL and file previews, "@mentions", "liking", "favoriting" etc. Also, we're already working on extending the design with a number of subtle but powerful micro-interactions in the form of shortcodes to provide a pluggable linking system.

The current development version of Plonesocial is more advanced than the last released version, but it needs more work before we can make a production-quality release - so come and join us in the next sprint at Plone Conference 2014 in Bristol.

By Guido Stevens · intranet · plonesocial · plone · 2014-09-15

Plone Intranet front end
http://cosent.nl/en/blog/ploneintranet-theme
Front-end development for ploneintranet has started.

While many people are still enjoying the beach, we're already gearing up to accelerate design and development
for Plone Intranet this fall.

Following the Mosaic sprint and a ploneintranet summit in Gatwick, we're now
collaborating with Syslab and Cornelis Kolbach
to turn wireframes and high-fidelity designs of social interactions
into HTML/CSS/Javascript prototypes,
using Patternslib (similar to Mockup).

Please contact us if you're not a member of the Plone Intranet Consortium and would like
to get involved and gain commit access to our development repositories.

By Guido Stevens · intranet · plone · 2014-08-04

Plone Open Garden 2014
http://cosent.nl/en/blog/plog2014
The Plone Open Garden event in Sorrento, Italy, is reliably a highlight of the year to look forward to.
This year's edition was no exception. More than 50 Plonistas (wives, kids, and even one mother-in-law included)
gathered to renew friendships, lounge in the sun, discuss arcane technologies after midnight,
and generally have a great time together. Oh, and we also had technical presentations every morning.

We talked about intranets and ways in which we can jointly strengthen Plone as an intranet platform.
Netsight and Cosent outlined their research and development timeline for the coming year
and worked with other Plone companies to maximize community involvement.

A recurring topic this year was how we can modernize the page layout engine for Plone.
We already have a lot of machinery to manage layouts in the form of portlets, portlet managers, viewlets and
METAL macros. In addition we have the newer blocks and tiles to further complicate the picture.
The discussion oscillated between:

Let's stick with portlets. They are a proven, powerful and widely used technology.

Being technologists, we did not spend much energy on the first two points which are mostly about opinion.
Rather, we focused on the last point which presents technical challenges.
The gist of what we discussed can maybe best be expressed by a story:

An editor opens a page. On the "display" menu she chooses "create new layout". A layout editor opens and lets her place and arrange tiles on the page. For each tile, she defines a policy of when (context, view, ...) and where (priority, position hint) to show this tile. For the layout as a whole, she defines a policy where this layout should be used (context, type, subtree, ...). She checks previews of the layout for various display media (desktop, tablet, mobile), tunes some tile placements and then applies the layout.

This is just one possible scenario and it will likely change. To explore the possibilities
we will get together in Barcelona in the second week of June and sprint to create a proof of concept.

By admin · plone · 2014-04-29

Designing an open source social intranet
http://cosent.nl/en/blog/designing-open-source-social-intranet
Cosent and Netsight are designing an open source social intranet platform.

Netsight have invited Cosent to collaborate on designing
a complete social intranet software suite, to be developed
in collaboration with the open source Plone community.

A "Plone Intranet" summit in the wake of the 2013 Plone conference
listed user experience, that is: design, as the single most important challenge
to tackle if we want to strengthen Plone's attractiveness for
the intranet market.

As everybody knows, design is not a problem one solves in a committee.
We're using a hybrid model of collaboration styles that allows us
to combine the design strengths of a core team with the scaling capabilities
of an open source community.

So, what have we been up to?

Since then, we've been analysing the competition, both
in terms of the user experience their platforms offer but also
in the kind of problems they solve, i.e. what markets they're in.
We see significant market potential for a Plone-based solution.

Additionally, we've analysed dozens of cases studies of award-winning intranet
designs and have clustered hundreds of intranet screenshots to understand
common functional areas, or landing pages, in intranets, mapping those against the model
provided by the Digital Workspace Technology Roadmap.

Last week, Netsight and Cosent have been sprinting to turn the insights gained
from all of that
into actionable designs, that can be used to guide software development.

We selected three types of landing pages in intranets for deeper investigation.
For each of these pages, we brainstormed specific functions that users would want to use
and card-sorted those into families of similar functionality.

We then picked a single landing page to work on and created several epics with short
scenarios about a typical sequence of actions a user would execute to obtain a
specific outcome. For example, one of our epics is:

(Team Member) Wendy receives an email from Peter with a list of questions and data that need to be collated before the next meeting of the project board. She forwards the mail into the intranet, where she flags it as a todo for next week on Project X, tags it as "board meeting", adding a note with some initial ideas and could @marcella maybe share her thoughts on this?

For each epic, we created a diagram that sequenced every function invoked as part of
the scenario, and then expanded each function step into a full-fledged user story. For example,
one of the steps halfway through the above epic is the following user story:

Team Member can mention other Team Members in the note (using '@' syntax).
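To make that user story concrete, here is a minimal sketch of how '@' mentions could be parsed out of a note's text. The helper name and the regex are illustrative assumptions, not Plonesocial's actual implementation; a real backend would still resolve each extracted name against the site's user directory.

```python
import re

# Hypothetical helper, not Plonesocial's actual code: a mention is an "@"
# followed by word characters, but "@" embedded in a word (e.g. an email
# address like mail@example.com) is excluded by the lookbehind.
MENTION_RE = re.compile(r'(?<!\w)@([A-Za-z0-9_]+)')

def extract_mentions(text):
    """Return usernames mentioned in *text*, in order, without duplicates."""
    seen = []
    for name in MENTION_RE.findall(text):
        if name not in seen:
            seen.append(name)
    return seen

print(extract_mentions(
    "could @marcella maybe share her thoughts on this with @peter and @marcella?"
))
# → ['marcella', 'peter']
```

From here, turning each username into a link and notifying the mentioned Team Member are separate backend steps.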

Don't shuffle the stack

Fleshing out those user stories was a lot of work, and involved detailed discussions
about our assumptions and choices regarding security architecture and overall strategy.
This was done by part of the team, while the other half worked on wireframing possible
solutions for the epic. That was a bad idea. They had the same discussions, with different
conclusions.

Moving from epic to wireframing involves jumping a level up the design stack in
the Garrett five-level model of the design process.
When we brought the finished user stories together with the wireframe sketches we had
some major inconsistencies. This appears to confirm the model and indicate that you
need to get your foundations right before moving to more concrete designs.
In this case, you really need to define your scope in detail, before wireframing solutions.

After re-syncing our minds and merging our work, on the final day we ventured
into wireframing territory: not for a whole page, but to explore a set of micro-interactions
that form the core of a cohesive social intranet experience.
We also deepened our understanding of user needs and elaborated
on the personas we're using to drive the design.

All in all, we feel we have not only made significant progress towards valuable design outcomes,
but also have prototyped a repeatable design process that tackles very complex design challenges in a systematic way.

We plan to have another design sprint in a few weeks to prepare for Sorrento, and look forward to sharing our work there.
See you in lovely Italy!

By admin · intranet · design · plone · open source · 2014-03-26

User Centered Re-Design
http://cosent.nl/en/blog/user-centered-redesign
A website redesign can greatly benefit from combining User Centered Design with an analysis of the existing web site.
User Centered Design is a methodology Cosent uses to systematically involve end users in the
design process. It improves the quality of our designs by maximizing our understanding of
the intended users of systems, what their goals are and how the site we're building fits into their lives.

In a recent project, we found that this method is especially powerful when you're redesigning
an existing web site. Users who have actually been using the old site develop strong judgments
on what does, and especially what doesn't work for them.

You'd like to drop dead when you open this site.

Sentiments like these are very helpful when a client needs to be steered away from imposing
bad design choices on a web site. You can show the client how that didn't work in the past.

Combining user centered design with an analysis of the old web site design highlighted some
major problems: inconsistent navigation, too much navigation, too many graphics, a noisy page layout.
It's not that the previous designer did a bad job. When working on the design for the new site,
we found that the design patterns of the old site were a "natural" consequence of the features
requested by the site owner. We could've easily used those same patterns in our new design.

Except, our user research told us forcefully we really needed a different approach.

As a result, we focused on simplifying the navigation and visual layout.
We moved from a complex multi-level categorization to a simple category menu, augmented
with free-style tags which are shown only in the content area, not in the navigation bar.
We consolidated multiple web sites into a single consistent system.
Instead of showing lots of small thumbnail images,
we used varying image sizes to grab attention and structure the page.
Finally, we used accordion in-page navigation to reduce visual clutter while at the same
time enabling web site visitors to quickly orient themselves in the site.

By admin · design research · design · 2014-03-13

The Future of Knowledge Work
http://cosent.nl/en/blog/future-knowledge-work
Free e-book about knowledge work at the intersection of social and knowledge technologies

A new Cosent publication shows how knowledge technologies can be combined with social technologies and legacy applications to optimize knowledge flows, accelerate innovation, improve process efficiencies and engage stakeholders.

By admin · 2013-11-14

PloneSocial Roadmap at Plone Conference 2013: videos + slides
http://cosent.nl/en/blog/plonesocial-ploneconf2013
An overview of the status of PloneSocial and where we're heading. Also introduces a preview of the Digital Workplace Technology Roadmap that will be published soon.

Update: the video registration of this talk is now available on Youtube.