Developing SaaS? Forget Scrum, Check Out Kanban and Similar Approaches

At WebCollage, we release a new version of our SaaS-based solution to our customers every two weeks. We released 23 versions in 2011, and will be releasing the 6th version of our software this upcoming weekend. In other words, we are firm believers in agile development and in its ability to help obtain continuous market feedback (here’s a previous post on this topic).

For various reasons, though, agile development has become somewhat synonymous with one specific approach: Scrum. While acknowledging that Scrum is widely accepted, I previously expressed the opinion that Scrum is perhaps an interesting recipe, but far from the best approach to SaaS agile development (and to web application development in general). I received quite a lot of feedback on that post, some of it contrarian, arguing that Scrum is perhaps a silver bullet after all.

There’s always something to be said for using the most popular approach. As an old IT saying goes, no one ever got fired for buying IBM. In this regard, there are intrinsic advantages to using Scrum, most notably the industry ecosystem: the ability to easily find knowledge, share best practices, etc.

As far as the actual methodology goes, though, there are simply better alternatives for many software development scenarios. Here’s a sketch of how we at WebCollage develop software, and the advantages our approach has over Scrum. It is an adaptation of Kanban/Lean software development.

As mentioned above, our approach is based on software Kanban, so a few words about software Kanban are in order. Oftentimes, software Kanban is marketed as an “evolutionary” approach rather than a “revolutionary” one (a trait attributed to Scrum). A different way to view Kanban, though, is as a set of (agile) principles and practices, stripped of some of Scrum’s “New Age” ideas.

At the end of the day, when it comes to a software development methodology, whether people should have stand-up or sit-down meetings, work in the same office or in separate offices, or hang around for beer after office hours are all nice preferences, but they have nothing to do with software development in particular. Some people-management practices are arguably more successful than others, especially in the 21st century (empowering team members, for example, is pretty much taken for granted). But clearly, software can be developed with multiple approaches to managing people and to facilitating communication.

Either way, below are some of the highlights of our approach.

1. We use a simple (traditional) development “pipeline”

At the end of the day, ever since the “old” waterfall days, most software development goes through a series of steps: some planning (be it MRDs, PRDs and SRSs, or story cards), some development, some testing (or validation), and then some release process.

(One may note that this somewhat high level, not to say naïve, set of steps is as applicable to non-software projects as it is to software.)

Perhaps the biggest change introduced by agile development is the understanding that software can be developed more effectively by shortening each of the steps and executing them in parallel, at least to a certain degree.

In other words, iterative (or agile) development looks closer to the picture below (with more or less overlap between the steps based on the specific circumstances):

With the Kanban approach, each Issue (Feature, Story, …) follows the same “pipeline”, albeit potentially at a different pace. For example, the planning step for Feature A might require a short discussion, while for Feature B it may require a set of team meetings. For Feature C, the planning step might require technical research, while for Feature D the solution may be straightforward. Unlike typical Scrum-based approaches, we at WebCollage are not trying to use the same recipe for all types of Issues (e.g., there’s no fixed-duration Sprint Planning Meeting).

2. We parallelize and visualize

With the Kanban approach, multiple Issues can be, and in fact are, executed in parallel. Traditional project management techniques cannot cope with such a large number of moving parts. Instead, a Kanban-based approach uses a Kanban Board like the one below to track the phase each Issue is in. At any given point in time, there may be (and typically will be) Issues at multiple phases of the pipeline. This approach is sometimes referred to as visual control.

The diagram below shows a very partial list of Issues that were open in our system at a given point in time (for confidentiality reasons, I’ve only included relatively insignificant and technical issues):

The above Kanban Board might be seen by some as the equivalent of a sprint backlog and a task board, using the Scrum jargon.
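Conceptually, such a board is just a grouping of active Issues by pipeline phase. As a minimal sketch (with hypothetical Issue keys and a simplified set of phases, not our actual data), a board can be rendered from a flat list of Issues:

```python
from collections import defaultdict

# Pipeline phases, in order; names follow the post, data is hypothetical.
PHASES = ["Backlog", "In Development", "Development Complete",
          "In Testing", "Testing Complete"]

issues = [
    ("IP-101", "In Development"),
    ("IP-102", "In Testing"),
    ("IP-103", "Backlog"),
    ("IP-104", "In Development"),
]

def board_columns(issues):
    """Group Issues into Kanban-board columns, one per phase."""
    columns = defaultdict(list)
    for key, phase in issues:
        columns[phase].append(key)
    # Preserve the pipeline order, including empty columns.
    return {phase: columns[phase] for phase in PHASES}

for phase, keys in board_columns(issues).items():
    print(f"{phase:22} {', '.join(keys) or '-'}")
```

The “visual control” value comes from exactly this view: anyone can see at a glance where each Issue sits and where work is accumulating.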

3. We have well-defined flow rules

Kanban literature (based on Toyota’s manufacturing process) emphasizes the concept of “pull”, which is rooted in Toyota’s just-in-time approach. At Toyota, when a certain “piece” was needed (for example, a car wheel), it was ordered “just in time” using a paper note, called a kanban in Japanese. Some argue that the software analogy is that when (say) a developer has completed a task, they “pull” the next Issue from a bucket of Issues that are to be developed.

Personally, I don’t feel that the metaphor is critical to success. At the end of the day, one needs to define how Issues flow through the system. Whether a tester “pulls” an Issue or a developer “pushes” it downstream to a tester seems to be a matter of definition, with little practical difference. The important part is to ensure that Issues actually flow through the system and do not pile up in one phase (a concept called Limit the WIP in Kanban jargon).
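To make the WIP-limit idea concrete, here is a minimal sketch: before an Issue enters a phase, check that the phase’s column has room. The limits and board contents below are invented for illustration; they are not our actual numbers.

```python
# Hypothetical per-phase WIP limits; phases without a limit are unbounded.
WIP_LIMITS = {"In Development": 5, "In Testing": 3}

def can_pull(board, phase):
    """Return True if the phase has room for one more Issue."""
    limit = WIP_LIMITS.get(phase)
    return limit is None or len(board.get(phase, [])) < limit

board = {
    "In Development": ["IP-1", "IP-2"],
    "In Testing": ["IP-3", "IP-4", "IP-5"],
}
print(can_pull(board, "In Development"))  # True: 2 of 5 slots used
print(can_pull(board, "In Testing"))      # False: column is full
```

When a column is full, the team’s attention shifts downstream (e.g., developers help clear testing) instead of starting yet another Issue, which is what keeps work flowing rather than piling up.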

We at WebCollage manage the active Issues in a collection called In Play (often referred to as Work in Progress in the Kanban literature). We use the following transition rules to let an Issue flow through the pipeline:

Whenever a developer starts working on an Issue, they transition it to the In Development state. Once they’ve completed development (which includes unit tests) and submitted the code to the mainline, the continuous integration server builds the change and transitions the Issue to a Development Complete status.

Similarly, when a tester starts testing the Issue, they transition it to an In Testing state, and when the Issue is completely verified, it is transitioned to a Testing Complete state.

We normally do not transition Issues backward. When a defect is found, it is opened as a new sub-issue, and is managed through a short (Open–Resolve–Close) cycle. Due to the short time lag between development and the next step of verification, developers can usually fix defects extremely quickly, and we usually don’t leave known defects open for a subsequent iteration.
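The transition rules above can be sketched as a simple forward-only state machine. The state names follow the post; the dict-based Issue record is a hypothetical stand-in for the real issue-tracker object, and treating In Play as the starting status is a simplification.

```python
# Allowed forward transitions through the pipeline.
TRANSITIONS = {
    "In Play": {"In Development"},
    "In Development": {"Development Complete"},
    "Development Complete": {"In Testing"},
    "In Testing": {"Testing Complete"},
    "Testing Complete": {"Released"},
}

def transition(issue, new_state):
    """Advance an Issue along the pipeline. Backward moves are rejected;
    per the rules above, a defect is opened as a new sub-issue instead."""
    if new_state not in TRANSITIONS.get(issue["state"], set()):
        raise ValueError(f"illegal transition {issue['state']} -> {new_state}")
    issue["state"] = new_state
    return issue

issue = {"key": "IP-500", "state": "In Play"}
transition(issue, "In Development")        # developer starts work
transition(issue, "Development Complete")  # CI server, after mainline commit
print(issue["state"])  # Development Complete
```

Encoding the rules this way (rather than as team convention) is what lets the continuous integration server perform some transitions automatically.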

4. We have multiple sources for incoming requests

At WebCollage, new Issues can come from two different sources.

First, there is the well-known Backlog. We manage two levels of backlog Issues. The first is a Wish List, which includes any idea or request that is a candidate for inclusion in the software. In many cases, this includes a request in its raw form (“I need a button that does this and that”), which may transform into a different approach during product design. The second is a true Backlog, which includes items that are candidates for implementation in the near future. Issues are moved from the Wish List to the Backlog, and then, when their time has come, to the In Play collection.

The second source of Issues is a Ticket system. At WebCollage, the development organization receives approximately 2 tickets a day. The Kanban approach lets us handle important tickets as they come in, even if they arrive in the middle of a development iteration. When an Issue is determined to be of high priority or requires very little effort, we put it in a Fast Track, and usually address it either during the current iteration or the next one. Consequently, we can resolve an issue in an average of less than two weeks. With such a turnaround, the need for hot fixes and other exceptional handling is greatly reduced.
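The Fast Track decision described above can be sketched as a small triage rule. The priority scale (P1 = urgent, P2 = not urgent, as in Example 1 below) follows the post, but the effort threshold is an invented illustration, not WebCollage’s actual rule.

```python
def route_ticket(priority, estimated_hours):
    """Send urgent or very cheap tickets to the Fast Track; everything
    else waits in the Backlog for normal ranking.

    The two-hour threshold is hypothetical, chosen only to illustrate
    the 'requires very little effort' branch of the rule."""
    if priority == "P1" or estimated_hours <= 2:
        return "Fast Track"
    return "Backlog"

print(route_ticket("P1", 8))   # Fast Track: urgent, handled this iteration
print(route_ticket("P3", 1))   # Fast Track: trivial effort
print(route_ticket("P3", 8))   # Backlog: ranked with everything else
```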

The ticket workflow we use is the following:

5. We communicate with the larger product team regularly

One omission of most agile development methodologies is the end-to-end communication of new product functionality, including documentation, release, rollout, etc.

At WebCollage, we’ve created dashboards that communicate new functionality to people outside the core development organization (e.g., product marketing, product support, technical services). We classify new functionality (Issues) into New Feature (major impact on customers), Enhancement (tactical change), Bug (resolution of a malfunction), Internal Change (invisible to customers) and Epics (the agile lingo for a mega-change). A sample dashboard is below:

Because the status of each Issue is always up to date in the system, the dashboards always display the latest information. Dashboards are available online for the upcoming version and for the last version; the information is (obviously) available historically as well. We meet weekly to review the previous release and the upcoming one.

Examples: Developing using a flow-based approach

To illustrate how Issues flow through the system, here are three random Issues we addressed in the last release, called 2012.05. One is a Ticket, a malfunction reported by a customer; another is a small Enhancement; the last is a lengthy Feature (a rewrite of a GUI component). For confidentiality reasons, I’ve omitted the details of each Issue.

Example 1: A Customer-Reported Issue (IP-421)

Here is the sequence of events for one Ticket addressed during the iteration, as taken from the Issue management system.

Date

Action

2012-02-09 7:29pm

It is Thursday afternoon. The final release for version 2012.04 has been built and will make it to the market during the upcoming weekend. TKT-267 is opened for a malfunction of the previous version, 2012.03. The ticket is opened with priority P2, which indicates that the issue is not urgent. It may become a roadmap item, and there’s no commitment for a fix date.

2012-02-13 6:51pm

It is now two business days later, Monday evening. Release 2012.04 is already public. Release 2012.05 is already two days into development. The ticket is reviewed by the support team (the delay is acceptable due to the relatively low priority of the original ticket). The Issue seems easy to fix, and seems important because it relates to a feature that was released in the previous iteration. The Ticket is moved to Fast Track. A linked Issue IP-421 (of type Bug) is (automatically) opened.

2012-02-15 5:01pm

Two more days have passed. The Issue is reviewed by the product owner, and is identified to be of a higher priority than originally assessed, because it adversely affects heavy users. It is ranked higher in the system.

2012-02-15 6:20pm

An hour and 20 minutes pass. The Issue is now at the top of the queue, and is picked by a developer. It is marked as In Development.

2012-02-15 9:12pm

3 hours have passed. The Issue has been resolved, and the fix is submitted to the mainline. The continuous integration server marks it as Development Complete.

2012-02-15 11:01pm

An hour and 50 minutes have passed. The continuous integration server has completed building the component, successfully running all tests. It associates the Issue with the newly built binaries.

2012-02-16 11:13am

It is now the following morning, Thursday. The QA lead notices the Issue in the Development Complete step, and assigns it to a specific tester.

2012-02-16 12:00pm

45 minutes have passed. The designated tester moves the Issue to In Testing. It is now Thursday afternoon.

2012-02-19 6:59pm

A weekend has passed. During the weekend another ticket, TKT-278, is opened with the same symptoms. The new ticket is linked to the same Issue.

2012-02-20 12:16pm

It is now Monday. The iteration will be complete by the end of this week. The tester has completed testing and moves the Issue into Testing Complete.

2012-02-23 10:05pm

It is now Thursday night. The version is being released and Issue IP-421 is marked as Released. The two tickets, TKT-267 and TKT-278, are marked as Resolved and the openers are notified of this new status by e-mail. The software will be installed in the production environment shortly.

At no point during the execution of this Issue was it formally estimated (neither in Story Points nor in any other point system). No SRS document or Story Card was produced.

Example 2: A Minor UI Change (IP-412)

Here is the sequence of events for another Issue, a minor UI change:

Date

Action

2012-02-12 4:58pm

It is Sunday, a working day in our R&D facilities in Israel. A new iteration, towards release 2012.05, has just started. The Issue is moved from Backlog to In Play, and named IP-412. It is assigned to a specific developer. Its ranking is increased. This is a small visual enhancement, so the issue is marked as requiring visual design guidance (this is not a phase in the process but an indicator set up for the Issue).

2012-02-14 10:34am

Two days pass. It is Tuesday, three days into the iteration. The designer uploads the revised graphics.

2012-02-15 9:07pm

Another day passes. It is Wednesday night. The developer moves the Issue to In Development.

2012-02-15 9:12pm

5 minutes have passed. The developer has replaced one graphics file with another and submits the change to the mainline. The continuous integration server marks the Issue as Development Complete.

2012-02-15 11:01pm

An hour and 50 minutes have passed. The continuous integration server has completed building the component, successfully running all tests. It associates the Issue with the newly built binaries.

2012-02-16 11:12am

It is the following morning, Thursday. The QA lead notices the Issue in the Development Complete step and assigns it to a specific tester.

2012-02-19 9:37am

A weekend has passed. It is now Sunday, the beginning of the working week in Israel. The tester is starting to test the Issue, marking it as In Testing.

2012-02-19 9:53am

16 minutes have passed. The tester has validated that the change correctly addresses the business need expressed, and transitions the Issue to Testing Complete.

2012-02-23 10:05pm

It is Thursday night. Version 2012.05 is released and Issue IP-412 is marked as Released. The Issue appears in the dashboards as part of release 2012.05, as an Enhancement. The software will be installed in the production environment shortly.

At no point in the execution of this Issue was it discussed by the team (neither in a daily stand-up meeting nor in any other meeting). In fact, any type of team discussion would probably have doubled the overall time spent on this Issue. Not all the people involved work in the same office; in fact, some completed some of the work from home.

Example 3: A Major GUI Component Rewrite (IP-220)

Here is an example of a major GUI component rewrite. The component is presented on most leading retailer sites and used by many millions of shoppers monthly, so it must meet high quality standards. And while this GUI rewrite is itself part of a larger Epic, we could not find a way to break it down into smaller tasks, because we could not afford to release incomplete functionality to the mass market. Similarly, our analysis indicated that splitting this task across multiple developers would not be efficient.

Date

Action

2012-01-11 3:54pm

The preparation of this Issue is completed. The functionality design and the visual design are ready, pending further review and tuning that will occur during development.

2012-01-29 9:07

Almost three weeks have passed. The design was reviewed and discussed. The developer assigned to this Issue has just completed a previous task and is now starting to develop this Issue. The Issue is transitioned to In Development.

2012-02-15 6:24pm

Almost three more weeks have passed. Release 2012.04 went out during that period. The Issue is now submitted to the mainline and marked as Development Complete. It is Wednesday, and we’re not yet sure if the Issue will make it into 2012.05, which is due at the end of the following week.

2012-02-15 8:24pm

Two hours later, the Issue is built by the continuous integration server.

2012-02-16 11:11am

It is the following morning. The Issue is assigned to a specific tester.

2012-02-19 9:29am

A weekend has passed. It is now Sunday, and testing is starting. The Issue is transitioned to In Testing. Release 2012.05 is due in four days.

2012-02-23 12:21pm

Four days have passed. The Release is due later today. There are still open Defects for this Issue. Some defects are due to a faulty third-party component, which requires further communication with the vendor. The Issue will not make it to Release 2012.05. It is moved to Release 2012.06.

2012-03-08 6:10pm

The Issue is officially transitioned to Testing Complete and is ready for release as part of 2012.06. A couple of defects remain; they stay in the In Play collection and will be addressed in the subsequent iteration.

The execution of this Issue was visible on the Kanban board throughout the weeks it was active. Yet, it was not formally estimated, nor was it tracked using any Burn-Down Chart.

So, why not Scrum?

If you’re a Scrum fan, you may be thinking that this can all be done using Scrum. However, the examples above illustrate some of the benefits that a flow-based approach provides over a (more recipe-based) Scrum approach:

We were able to address customer issues (“Tickets”) as they came up, and did not have to wait for the next iteration to keep a Sprint intact. Consequently, we were able to release corrections to a new product feature as soon as possible.

In the past (several years ago), we used rigid iterations, and we found ourselves skipping iterations. A new feature would be released in Iteration N. Iteration N+1 would start, and customer feedback would come in. To keep Iteration N+1 in scope, the feedback was pushed to Iteration N+2. Then Iteration N+3 would start, more feedback would come in, and it would be pushed to Iteration N+4. The end result was that a particular new feature was spread across every other iteration, doubling the time to true completion.

We were easily able to handle tasks that could not be broken down. We did not have to come up with an alternate methodology for an Issue that spanned three iterations.

We were not forced to use a particular office structure (e.g., open space) nor were we forced into any other structured set of meetings. Some regular meetings were held during that time. Some meetings specific to a particular Issue were convened ad-hoc.

We were not forced to assemble a team of generalists. Our system is composed of some 20 unique sub-systems, dealing with Content Editing, Content Serving, Content Processing, Content Feeds, internal SDKs, Analytics, MPN/GTIN Matching, SEO, and more. Our expertise areas range from back-end systems (SQL, NoSQL) to business rule programming and front-end development (JSP, HTML, CSS, JavaScript). Attempting to get everyone to master all systems would be a futile effort. Trying to normalize all Issues onto a universal scale (like Story Points) and distribute them equally among team members would be folly.

Presumably, there are ways to deal with the above issues within the Scrum framework. But little of Scrum remains once you strip it of its New Age ideas (which are a set of behavioral preferences) and adapt its otherwise rigid recipes to address more flexible needs.

So, the bottom line is: if you’re developing a complex customer-facing multi-disciplinary hosted software product, look beyond Scrum. Surely you can fight with your hands tied behind your back (even with your hands chopped off, you can still bite, as you can see in Monty Python’s Black Knight Fight scene). But then, why would you want to?

1) How is ranking done, and based on what information? It seems that multiple people rank items in all 3 examples you gave.
2) The dev queue seems to be ‘highest priority on top’, but how do you handle fixed dev resources that compete for larger items (like your Ex. 3) and smaller items? How do you communicate to your market when a future item will be available?
3) You mention the release dashboard that shows the various things accomplished in a release – are all the other teams, like doc, support and sales, following a Kanban-type process as well? I’m very curious to understand how the business and customer-facing groups cope with the number of releases; do you ever roll up releases for “market launches”? How do you build up sales momentum?

Good post, how do you see product management or the product owner role play out in this scenario?

Great questions. I wish I had perfect answers, but on our end some are still work in progress and/or what I feel are inherent limitations of agile approaches in general.

Here are some thoughts:

1. Regarding ranking, a few guidelines come into play. We have quarterly high-level planning, which brings about the high-level directions and/or main themes we want to tackle. Other than true emergencies (which are VERY rare), we don’t interrupt current tasks, so a person who has picked up an item will continue it all the way through, even if a “higher priority” item has come up. Based on these principles, ranking is determined either when a new high-priority Ticket comes up (which may put it at the top) or on a periodic basis, to reshuffle some of the other items. I feel that as long as one can maintain the “no interrupt” principle (which is analogous to a Sprint, but, very importantly, only applied at the item level), the ranking procedure is not that critical. At the end of the day, the worst-case scenario is that Issue X (which is Rank 1) is executed before Issue Y (which is Rank 2, but should have been Rank 1 in an ideal world). But then, if the Issues are small enough, this has little effect. And, in all fairness, assuming the bigger themes are agreed upon, how confident can one be about micro-ranking anyway?

2. The day-to-day answer is that when a customer commitment is made and/or there is urgency around a bigger feature, it stays at the top of the ranking, or a certain person or team is earmarked to handle it (as part of the day-to-day management of assigning Issues to individuals). The conceptual answer is that there’s an intrinsic tradeoff between flexibility and productivity on the one hand and predictability on the other (e.g., one can set goals at 120% or 80%). We consciously decided on the former; I feel this is an inherent limitation of agile development: if one isn’t willing to commit to the content of each new development up-front (which is at the heart of agile), predictability is limited. But then again, did non-agile approaches ever produce true predictability?

3. We’re definitely not leveraging the same “Here’s the Great New Version 8” type of communication (and sales momentum) we’ve seen in the past. The single most important concept in my mind is what is sometimes called a “Feature Flag” (or Switch). This technique lets us pull the trigger on the actual activation of each new feature based on business need, after the version has been released and deployed. This lets us control the rollout pace (e.g., beta versus full deployment) as well as the communication, internally and externally, on a case-by-case basis. And, while we’ve had cases where activation and communication lingered behind the actual release, we haven’t had to resort to putting features on hold until a full “market launch”. Overall, I feel that when a feature is big enough, it can gain momentum independently of a big market release (and, conversely, that part of the waterfall sales momentum of a “Big New Version 8” is due to the starvation that came in the year after “Big New Version 7”). But, in all honesty, I can’t say we at WebCollage have cracked the nut (or even come close) as to how to optimize communication in an agile environment.
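A minimal feature-flag sketch looks something like the following. The flag names, customer identifiers, and the in-memory dict are all hypothetical; a real system would back this with configuration or a database, but the principle is the same: the code ships in the release, while activation is a separate, per-customer business decision.

```python
# Hypothetical flag registry: a flag is either fully rolled out, or
# enabled only for a set of beta customers.
FLAGS = {
    "new_gui_component": {"enabled": False, "beta_customers": {"acme"}},
}

def is_enabled(flag, customer=None):
    """Decide at runtime whether a deployed feature is active."""
    cfg = FLAGS.get(flag, {})
    if cfg.get("enabled"):
        return True  # fully rolled out to everyone
    # Otherwise, only beta customers see the feature.
    return customer in cfg.get("beta_customers", set())

print(is_enabled("new_gui_component", "acme"))   # True: beta customer
print(is_enabled("new_gui_component", "other"))  # False: not yet activated
```

Flipping `enabled` to `True` is then the “market launch” moment, decoupled from the deployment weekend.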

4. Regarding the role of the product owner or product manager, our environment is a little unique (due to the geographies and people involved), but overall, since we’re using a pretty standard development pipeline, we view the role of product management as pretty “traditional” (e.g., the Pragmatic Marketing methodology). Our product management role (which is nowadays a bit blurry people-wise) is responsible for ranking and communication, in addition to even more traditional roles around requirements and product design, and, of course (overlapping with Product Marketing), the non-development-related roles such as positioning, pricing, sales tools, etc.

Great post Eilon!
So cool how far teams can get with Kanban with such lightweight bootstrapping
What I’m interested in hearing about is how you evolve your system of work over time, and what some triggers for improvement discussions are.
I understand you are not really limiting work in progress explicitly, so I am intrigued to hear whether you are experiencing enough drive to lean up your flow.

Would love to also hear what you see as the next goals for your system and how you are bringing them about.

Again, great case study, looking forward to seeing it in action when I’m in the neighborhood again :-)

Thanks Yuval, and thanks again for dropping by the other day and giving us a head start.

And, yes, we too are interested to see how we can further fine tune the system as we move forward. There are many improvements we’re working on in parallel/circumferential areas (e.g., more automated end-to-end tests, more streamlined deployment, faster builds, configuration management), but also around the core process and its automation.

Eilon,
Great posting! I would greatly appreciate it if you could send me a copy of the mentioned deck as well. Currently researching what to do with a newly formed Cloud Engineering team, and this looks like a very simple, elegant and solid solution.

Hi Eilon,
I just read your great post; better late than never.
As a product manager at Wix.com, we are struggling with best practices and how to reflect them in Jira.
I will be happy to receive the mentioned deck as well.

One issue I’m facing:
Let’s take No. 3 for example. What happens when it is a really big feature in a CI SaaS product that you decide to roll out in phases?
In V1 you will have sub-features A, B, C;
in V2 you will have D, E, F, etc.
And while in the development phase, bugs start to appear; some will be dealt with in V1, and some will wait for V2 (even though they are related to feature A).

I don’t think there’s a really elegant solution to this common scenario. And it becomes worse once it involves rollout (i.e., when does one start telling some/all customers about it, etc.) – though this may be tougher in our environment, because our customers are large enterprises versus Wix’s audience.

Anyway, in our implementation, we use JIRA Epics to manage the “big features” (and, BTW, we will sadly only upgrade to their latest version in a few weeks, when they have supposedly improved how Epics are presented). We then naturally have a link from the Epic to its smaller features. We work quite hard to ensure each such feature is self-contained and meaningful. Defects are normally opened as JIRA sub-tasks (called Defects) of the specific smaller feature, and we make every attempt to fix them within the same iteration. When that is not possible, the issues may be opened as high-level Bugs, associated with the Epic, and solved later, handled just like any other small feature.

With this in mind, if you’re interested, drop me a line and feel free to hop over and I can show you our tool and methodology with live data and discuss further. I would be curious to see what the approach at Wix is.

Thank you for your detailed answer. We will soon run our first sprint or version fully in Jira, and we will see how it goes.
Anyway, two more questions:

1. How do you link/track related issues? Do you use the “component” field, free-text labels, a custom field, issue links or any other way?

2. How do you create a checklist in an issue? It was a very useful feature in Trello that we miss in Jira. We try to use sub-tasks, but I’m not sure that is the best way. We used checklists for very small tasks, like “change border color to blue” or “correct spelling mistake abc -> bbc”.

We mostly use links to achieve this. We use the Component field to track the system component in which an issue is addressed. We use free-text labels to track less structured data, such as interested customers. We did try using free-text labels for tracking Epics, as suggested by Atlassian, but there was no global list of Epics, which we found inconvenient.

We try to create a separate issue (versus sub-tasks) for each change. Since our releases are small, the number of issues is still very manageable. Admittedly the overhead, while small, is still troubling, but I guess that no case management system, and certainly not JIRA (which I don’t view as having a super great UI, to say the least), can compete with the convenience of lightweight task management tools…

Great post, thanks for sharing.
We are working in a similar way (more like Scrum and less like Kanban in the implementation, or the way it looks).

How do you handle assignment of features to versions? How is it tracked/viewed?
We are using Jira as well (on the scrum board) and we use Epics to define the high-level areas of investment, but we found it very hard to connect the content of a sprint to a specific release (we use the Assign to Version field, but this is not done automatically when you pull an item into a sprint).
Also, a specific sprint might include items that are long term (i.e. for a later release, just like in your example) alongside items for the current release.

So to sum up this confused question (sorry…), how is the assignment to a release managed?

We loosely set the Fix Version field for each issue (“story”) at the beginning of each iteration (if we feel it has a good chance of making it into the iteration; we use open iterations, so there’s no “commitment” at this stage). Our build server further assigns this number if and when the feature is committed to the mainline (people may commit sub-tasks along the way). Towards the end of each iteration, we may need to reset the Fix Version field if a certain issue did not make it.
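For what it’s worth, a build server can stamp the Fix Version via JIRA’s REST API. The sketch below follows the REST API v2 “edit issue” endpoint and `fixVersions` field shape; the server URL, token, and issue key are hypothetical, and a real setup would need error handling and the right authentication scheme for its JIRA instance.

```python
import json
import urllib.request

# Hypothetical JIRA instance; substitute your own base URL.
JIRA_URL = "https://jira.example.com"

def fix_version_payload(version_name):
    """Request body that sets an issue's Fix Version to the given release."""
    return {"fields": {"fixVersions": [{"name": version_name}]}}

def set_fix_version(issue_key, version_name, token):
    """PUT the Fix Version onto an issue (JIRA REST API v2 edit-issue)."""
    req = urllib.request.Request(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        data=json.dumps(fix_version_payload(version_name)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

# e.g., from a post-commit CI step:
# set_fix_version("IP-421", "2012.05", token="...")
```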

We also use Epics to assemble finer-grain issues. Our dashboard shows the issues and the Epics associated with each iteration, which allows us to track completion of issues. We often label issues that are meaningless out of context as Internal Change rather than Feature, to get them off the business-facing dashboards.

In retrospect (and we will probably change this in the future), I would have had two different fields, Planned For and Fixed In (or similar), to track each separately. We are also running a JIRA version that’s N-1, so we’re looking to revise this once we upgrade in a few weeks.

While we don’t use Scrum, managing this process may be easier in Scrum given the strict timing of the planning meeting – this may be a good time to set the Fix Version field. But then, this is more of a guess.

Some help :-)
It seems that there are some missing features in Jira that make this process a bit harder than it should be (like a concept of a release in the scrum view, in our case).
We are using Epics as high-level topics (e.g., UI improvement, Quality) as well as for big features, so at any given time we have fewer than 10 Epics around and the list is relatively static. Are you using them for smaller things and managing a rather dynamic list of Epics?
One more question – I assume that you used to have a product roadmap showing a plan for 6-18 months or something like that. My question is: do you have such a roadmap plan now? How is the roadmap built/committed? When working the way you are working (and I think that this is the only way to work…), it is hard to commit to a long-term roadmap, but the business kind of demands it…

We are using Epics mostly to manage “big features” that need to be split across iterations. I think high-level areas (like performance) are normally referred to as Themes in the agile vernacular, though, strangely enough, the JIRA recommendation is to use the same field for both (we don’t use Themes at all). So I guess the bottom line is that we do use a dynamic list of Epics, with only a handful of Epics active at any point in time (the fewer, the better, as it means we were able to split the work better and have fewer balls in the air at any one time).

RE: product roadmaps, take a look at this post, which discusses our approach at length (if I had more time, I would have written it shorter, as the saying (often attributed to Pascal) goes ;-).

This is a useful post. Really, any agile process is better than no process. I don’t have a ton of Kanban experience, but I do have a lot of Scrum experience and, tbh, I don’t see much of what you’ve written as incompatible with Scrum. Isn’t much of this Scrum without the names?

People implement many processes that are sometimes loosely described as Scrum rather than simply agile. At its core, Scrum is very recipe-driven, and is largely about very specific cadences and ceremonies, which have their pros and cons. This post highlights some of the downsides of Scrum’s rigid structure.