Meanwhile it has emerged Salford’s chain will have two tiers of membership: “full integrated members”, and “associates”. Reading between the lines, the former are Salford, and other organisations that need Salford to do much of the heavy lifting in terms of leadership. The latter are trusts like Wrightington, Wigan and Leigh, or Bolton, which merit more autonomy. My colleague Lawrence Dunhill reports: “Sir David [Dalton, Salford chief] hopes that associate members would become full members over time, partly because this is considered potentially more attractive to NHS England.” Wrightington CEO Andrew Foster said there are two views about how the chain should work – a centralised command model and a partnership model. Clearly, the latter is more attractive to trusts like his, which have much to offer on their own terms. Allowing the two tiers of membership is sensible because it enables the chain to encompass trusts like Pennine (or West Herts) that require support, while still remaining attractive to providers that don’t need to join because they are well run and viable on their own.

Finally: two examples of trusts taking on council staff to bring social care workers into the NHS. Surprise, surprise, one is Salford (is Salford the most experimental place in the NHS at the moment?); the other is Wirral Community FT. There is some expectation that delayed transfers are now such a glaring problem for the NHS that providers are going to have to get increasingly involved: these are some early examples of that happening in practice. Again, what is interesting is that we now have two alternative approaches. One is the acute trust – on which the delays have the greatest impact – getting on and intervening directly. The other is to bring social care staff into the community trust, whose work is more closely aligned with the day-to-day business of social care. Expect more of this sort of thing.

There are enough developments that they are worth gathering in one place here. Here are the main things that were new to me:

The new care models team is going to commission an independent evaluation of the vanguard programme, with reports produced every year from 2016-17.

Intriguingly, one of the roles of the external body doing this work will be to advise the new care models team on designing various further evaluation activities and research. And the independent evaluators could “take a more strategic view” of what individual vanguards are doing, through site visits for example. They could also draw conclusions from local monitoring and evaluation, and national monitoring.

It seems to me these tasks describe a pretty significant chunk of the new care models team’s core work. While getting a second opinion is always valuable, is this a recognition that the new care models team lacks the capacity to do it itself? Or even a lack of self-confidence that it will get it right? Alternatively, perhaps it simply demonstrates a desire to focus on all the central team’s other responsibilities, such as producing template contracts or legal guidance, or spreading learning.

Apart from the national in-house NHS England evaluation and the independent evaluation, there will also be local evaluations in each vanguard.

So I make that three layers of evaluation. “Intelligence from the vanguards will be essential to the national evaluation to help us understand the ‘active ingredients’ [a trusty bit of Stevensese, that] that enable success, scale and replicability.”

The new care models team “will contribute significant resource to each vanguard to help with local evaluation”.

So these look like they will be hefty pieces of work. Again – is this a recognition of the limits of how well the vanguards can be understood and assessed from the centre?

The evaluation strategy emphasises the importance of understanding what, specifically, is driving improvement in performance.

So it won’t be enough to say “vanguard X has improved on indicator Y, therefore vanguard X is a successful model”. With its eye firmly on replicating models of care that can be successful everywhere, the new care models team will want to know exactly which interventions are making the difference. It’s good that there is a strong focus on this. But given that the ever-lengthening STP process is the main way new care models are supposed to be adopted nationally, we’re going to need to know what works and why pretty quickly.

That includes taking into account local context, such as relationships, culture and history. It also involves capturing data for groups of the vanguard populations affected by specific interventions – for instance the subset of patients set to benefit from a change to the diabetes pathway, or people being admitted to hospital from care homes. Without this level of detail in reporting, an important change may be missed because it doesn’t affect enough people to move the needle on a whole-population measure.

Core metrics for MCPs and PACS.

We knew roughly what these would be, but they’re set out in full in the new document. Vanguards are getting regular reports against them via a dashboard being sent out from the centre.

Care plan: percentage of patients who helped put their written care plan together

Health and wellbeing metrics:

Quality of life (from the GP patient survey).

It’s interesting that, despite the thorough thinking that’s gone into vanguard evaluation, the core metrics remain familiar indicators that have been in use for a while. Making the best of what you already have is a sensible approach given the timescales and the risk of taking years to design something new that turns out no better than the existing data.

That said, there will also be some “enabler metrics”, measuring whether vanguards have the conditions in place to make progress. These are not listed, but might include “multidisciplinary teams, integrated care records, and a whole population budget”, the document says.