Meta

What was wrong with the modelling done for the CCFAS

By Matt L, on April 9th, 2013

These days, no transport project gets built or policy signed off without first being run through a model. I’m not talking about a scale model but a mathematical computer model designed to estimate how people might use a project, or how much a project or policy will affect the transport system. To do this, these models take historical data like traffic volumes and land use and mix them with assumptions about the future to produce a result. Things have gotten to the point where people won’t make any decisions without running them through a model; after all, if the computer gives the answer, it must be right. Right?

Auckland uses two general types of modelling, described below:

Travel demand models cover the region and are concerned with broad travel patterns and flows. These are usually calibrated on observed data (base year) and are then used to forecast responses to land use and transport changes or interventions.

Operational models usually cover a smaller area, are more detailed, and are used to assess detailed traffic operations on a section, approach, lane or turning movement level. AT operates two general types of operational models, one being flow based (traffic as a “stream”) and the other being micro-simulation (each vehicle or unit is simulated travelling through a network).

Demand models are typically used for long range forecasting whereas operational models range from “now” options to medium range forecasts.

As mentioned in the description of the travel demand models, they are calibrated against a base year. That means data is fed into them and they are tweaked until they deliver the same results as what actually occurred in that base year. Data from subsequent years is then added on top of that. At the highest level we have the Auckland Regional Transport Model (ART3). This looks at travel demand across the entire Auckland region, and this is where the first major problem lies: it was last calibrated against 2006 data, which means it is almost seven years out of date. That might not seem like much, but the last seven years have probably seen more changes in transport behaviour than any time in the prior five decades. Note: the ART3 model is actually controlled by the council, not AT. AT does, however, control a Passenger Transport model (APT), which looks at impacts on PT; this is even worse, with AT saying it was last calibrated against 2001 data.
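
As a toy illustration of what “calibration against a base year” means (the real process tunes many parameters across the whole network, not a single scale factor, and every number here is invented):

```python
# Toy calibration: tune one model parameter so predicted screenline
# counts match observed base-year counts as closely as possible.
observed_2006 = [12_000, 8_500, 14_200]   # invented screenline counts
raw_model     = [10_000, 7_000, 12_500]   # invented uncalibrated output

# Closed-form least-squares fit for a single scale factor.
scale = sum(o * m for o, m in zip(observed_2006, raw_model)) \
        / sum(m * m for m in raw_model)
calibrated = [scale * m for m in raw_model]
```

Once calibrated, the model is run forward with future-year assumptions; if travel behaviour has shifted since the base year, that shift is baked out of every forecast.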

As part of the work before AT starts on a new CRL business case, they have said both models are going to be updated to a 2013 base year. Considering the modelling was also used to inform the massive roadfest that is the Integrated Transport Programme, though, you would have thought it a good idea to update them earlier. A few million spent on updating would likely have had massive implications for the outcomes of both the CCFAS and the ITP.

So how did modelling work for the CCFAS? Well, AT used both travel demand models and more detailed operational models. A diagram of how they interacted is below.

The ART3 model was used to produce initial results based on the employment, population and land use assumptions used in the project (remember, these were agreed to by representatives of all organisations). It then produces data on vehicle and PT demand, which is fed through the APT model. One development that came out of the CCFAS was a new function to address crowding on PT; after all, if people can’t get on a bus, they can’t use it. But here is where, in my opinion, some major flaws start to appear.

The people who were ‘crowded off’ the PT system were then added back into the ART model as unable to use PT. They get added to the number of trips taken by car, and the model then recalculates vehicle travel times with this extra traffic included. As the MoT said in its response to the report, there are no feedback loops to take into account the impact of the changed conditions. In reality, people crowded off PT (and we know from the CCFAS this was affecting the bus network) would look for another mode of travel, change their travel time, or perhaps not travel at all. While undoubtedly some will drive, the impact of them doing so might force someone else to change their mode, perhaps catching a non-crowded rail service instead.
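
To see why the missing loop matters, here is a minimal sketch (all numbers and functional forms are invented for illustration) comparing a one-pass assignment, where everyone crowded off PT is simply added to car traffic, with a closed loop where the resulting congestion feeds back into demand:

```python
BASE_CAR = 9_000      # car trips before any crowding spillover (invented)
CROWDED_OFF = 3_000   # travellers who could not board PT (invented)

def car_time(trips, free_flow=20.0, capacity=10_000):
    """Toy volume-delay curve: minutes grow with the square of congestion."""
    return free_flow * (1 + 2 * (trips / capacity) ** 2)

def forecast(iterations):
    """Assign crowded-off PT users to car, with optional feedback."""
    car = BASE_CAR + CROWDED_OFF                 # one-pass answer
    for _ in range(iterations - 1):
        t = car_time(car)
        # The feedback step the CCFAS process lacked: as car times blow
        # out, some spillover trips retime, change mode or don't happen.
        deterred = min(CROWDED_OFF, CROWDED_OFF * (t - 20.0) / 40.0)
        car = BASE_CAR + CROWDED_OFF - max(0.0, deterred)
    return car

one_pass = forecast(1)       # no feedback loop: 12,000 car trips
closed_loop = forecast(100)  # loop closed: settles materially lower
```

Under these made-up assumptions, the one-pass approach reports every crowded-off traveller as a car trip, while closing the loop settles on a noticeably lower figure, which is the direction of bias the MoT pointed to.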

The traffic results from this recalculated ART model are then fed into a SATURN model, a more detailed operational model, to get more detailed outcomes on the impact of the various options. There were no feedback loops from this stage either, meaning the impacts of the congestion caused by the options were again not fed back through the system.

So in summary, we have a regional transport model last calibrated against 2006 data, feeding into a PT model last calibrated against 2001 data, which simply assumes that anyone who can’t catch a bus because it is full will instead drive on already congested roads. It is these issues that I think led the MoT to conclude that the modelling was likely overestimating demand for private vehicle trips while underestimating demand for PT trips. This is likely why the model suggested that during the morning peak we would have almost 50% more people entering the CBD by private vehicle in 2041 compared to now, even while removing road space for cars over the same period. For reference, the annual screenline survey recorded fewer than 34,000 people entering the CBD by private vehicle in 2012, while the reference case for the CCFAS suggests over 49,000 will do so.

It seems that until AT start really addressing some of these glaring issues, modelling the true impact of the CRL will remain elusive.

interesting post, but with one glaring omission, which is that the models rely not so much on traffic patterns as on an origin/destination travel demand matrix that is reflected in the traffic patterns

ask the simple question: what do the years 2001, 2006 and 2013 represent? the census travel-to-work data provides that O/D matrix, which cannot easily be sourced anywhere else, so recalibrating the model to any year other than a census year is simply wishful thinking

any model of anything is a flawed representation of reality, but their strength is in suggesting what the comparable effects of different options might be. As such, models should only be used as an informative aid to judgement-based decision making, rather than a devolution of the decision to the computer output

That raises an interesting question: why do we rely so heavily on self-reported journey-to-work data from a one-day snapshot? There are studies out of Melbourne claiming that work-related trips account for less than a third of peak-hour travel; if that is true here, then our models are massively under-informed for peak times, let alone the rest of the day.

We know from the old ARA/ARTA days, and the travel surveys they did, that 40%-plus of peak-time travel in Auckland is said to be education related.
If we take that as read for now, then your comment raises the question: if only 30% is work related, what’s the remaining 30%, actually?

Is it that education-related peak trips are far higher than 40%?
– or is there another “great attractor” of traffic, i.e. some kind of traffic “dark matter” that we can’t see, but whose effects we can see wherever we look?

Everything that’s not work or education, I suppose: people going shopping, going for food or drinks, accessing entertainment, visiting friends and family, accessing healthcare, running errands to the bank or drycleaners, going to the gym, who knows? But surely there is a lot more going on than just work or school.


Seems to me that out-of-date models using out-of-date base data are pretty much less than useless.
But while NZTA may criticise AT and AC for their deficiencies, can’t the same criticism be levelled at NZTA, MoT and co. for doing the exact same thing with “their” models, base data and assumptions?
How else can you explain the modelling outputs for the RoNS and second harbour bridge?

So it seems to me a very fundamental flaw exists currently: a nasty set of positive and negative feedback loops.
If PT users are crowded off buses/trains and presumably into cars, as the model assumes, then what happens when the roads get crowded with cars? Does the model then assume people crowded off the roads decide to use PT? But PT is now full and (more importantly) just as slow, and they can’t get on half the time.
So they have to use their cars, but wait, the roads are so full they can’t, so they have to use PT.
And wait, more: the roads are now full of cars, so the buses can’t go anywhere.
And around and around on the modelling roundabout to hell, until the model disappears up its own exhaust pipe in a cloud of exhaust fumes…

All this “failure to predict” reminds me of the modelling they did before and after of how people supposedly respond to fires (like the King’s Cross Tube station fire in the 80s). What they found was that their computer models simplistically treated people as dumb things that couldn’t change their minds once set (well, there are quite a few people around like that, but that’s a different story).
In fact it’s way more dynamic than that, so the models could not explain why the Tube fire was so deadly or how people reacted to it. It wasn’t until they built new models some 20 years AFTER the fire, which took into account the ability of people to behave individually and collectively at once, that they came even close to accurately modelling how people respond to real fires and emergency situations. Which of course means back to the drawing board (or CAD workstation) for the designers of such places, as the received wisdom doesn’t actually hold.
The same has just come out about the Hillsborough stadium disaster in 1989: the models didn’t predict the actual event until they changed the models and the base data/assumptions.

Not that their models weren’t showing this up earlier; it’s just that they put so much faith in the models they were blind to the reality.

So what to replace the models with?

For a start, having data that reflects the latest reality for both PT and roads, and factoring in modal flexibility, travel demand management and, yes, even some teleworking, then running the predictions forwards AND backwards in time.
You’ve got to go back in time and run your model forward with the data to see if the results even match reality. Because if they don’t work for the past, they can’t work for the future any more reliably.
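
The backcast check described here can be sketched in a few lines (all counts and growth rates below are invented; a real validation would compare full model outputs against observed screenline or survey data):

```python
# Invented screenline-style counts for years we have observations for.
observed = {2001: 26_000, 2006: 30_000, 2013: 33_500}

def linear_forecast(base_year, target_year, base_value, growth_per_year):
    """Simplest possible model: straight-line growth from a base year."""
    return base_value + growth_per_year * (target_year - base_year)

# "Calibrate" on 2001-2006, then forecast a year we can already check.
growth = (observed[2006] - observed[2001]) / (2006 - 2001)
predicted_2013 = linear_forecast(2006, 2013, observed[2006], growth)

# Score the backcast: a model that misses the known past by a wide
# margin should not be trusted any further into the future.
error_pct = 100 * (predicted_2013 - observed[2013]) / observed[2013]
```

Even this trivial model can be scored against reality before anyone trusts its 2041 numbers; the point of the comment is that the big models rarely get that treatment.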

And then, of course, let’s stop putting so much faith in models to predict the future accurately.
Models are only as good as the base data they have and the base assumptions they’re built on, and understanding both is key to knowing where a model stops working.

The future isn’t razor sharp like hindsight; it’s cloudy and unpredictable, and things just aren’t as cut and dried as the bean counters want them to be.

But one thing’s for certain: you can only look forward if you also look in the mirror. You can’t tell where you’re going to be if you don’t know where you’ve been and where you are now.

Perhaps it’s just a case of relying far less on models for decision making, particularly at the strategic level, and going back to first principles and even good old fashioned planning. Maybe instead of asking “what would a computerised estimate of human behaviour built around a large series of simplifications, assumptions and seven to twelve year old data say people do?”, perhaps we could just ask “what would we like people to do?”.

I can see a lot more value in tactical models: what does changing this signal phase do to the eastbound queue of cars? What does doubling the frequency of this bus do to interpeak patronage? And so on. I’m not sold on the grand city-wide strategic models; I get the feeling all they tell us about is the assumptions used to build the model and the poverty of the data informing it.

Yes, indeed. Modelled demand for something does not mean we should automatically provide for it, even if the model output looks accurate. It is perfectly possible to model demand for heroin use; it certainly doesn’t mean we should invest in meeting that demand.

I understand the attraction of these tools as they offer the possibility of something approaching objective information instead of just hunches or preferences. But then they still are the product of hunches and preferences in the form of the assumptions on which they are based, hunches and preferences that are now masquerading as fact. Wrong information is potentially more damaging than no information, as it gives the illusion of certainty.

If we all admit that we are discussing what we’d like the city to be, at least a more honest and balanced discussion can take place. We are creating the future now, and we do have choices to make.

Very nice post Matt. I’ve always been of the opinion that these global models project too much in the way of cars and not enough in terms of PT. You do have to wonder why they use such an old model; just the size of the city and the time involved, I guess, that and the fact the city has been reshuffled a few times of late.

I think the general plan involves updates with each new set of census data; that normally means a five-year cycle, but obviously the last census was delayed. But generally I think you are right; it’s probably just a case of time and money. It can’t be cheap or easy to recreate these huge models regularly, and ARTA never had much budget. I know that Sydney rebuilds their PT model annually and inputs travel survey and ticketing data each time, but their PT planning budget is probably ten times what Auckland gets to spend, given it is funded at the state level.

I’ve heard suggestions that HOP data will be very useful for origin-destination inputs into the models. Like any travel survey they are a bit reflexive (i.e. you can only travel where the existing network lets you travel, not necessarily where your actual desire line lies), but there should be a mountain of regularly accessible, detailed information to work with. The really cool thing is it should allow good estimates of the impact of service frequency, travel time, number of connections etc. on patronage, by comparing various routes and trip chains around a dependent variable, meaning we should be able to closely estimate the outcomes of removing connections, speeding up trips, increasing frequency and the like.
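
The basic idea of turning smartcard records into a demand-model input might look something like this (the card IDs and stop names are invented, and real HOP data would need alighting locations inferred where tag-off records are incomplete):

```python
from collections import Counter

# Invented HOP-style records: (card_id, boarding_stop, alighting_stop)
taps = [
    ("c1", "Albany", "Britomart"),
    ("c2", "Albany", "Newmarket"),
    ("c3", "Takapuna", "Britomart"),
    ("c1", "Britomart", "Albany"),   # same card making the return trip
]

# Aggregate the records into an origin-destination matrix,
# the core demand input the comments above are discussing.
od_matrix = Counter((origin, dest) for _, origin, dest in taps)
```

Unlike a one-day census snapshot, this kind of aggregation can be rebuilt continuously as travel patterns change.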

Another critical difference across the Tasman is that their census measures ‘travel for work or study’. Not ours; our census expressly ignores study as a source of travel demand.

Any models or policies that use census data as their main travel demand input will be hopelessly out as a result before they begin, especially so for the active and Transit modes, and especially for places with high concentrations of schools or tertiary institutions.

That is why models never see the 881 bus from the shore to Newmarket being so busy.
I think there are around 7000 students from the shore going to UoA and this service would be how at least half of them would prefer to get in. Such a shame we have been crowded out.

I wouldn’t get too hung up on the limitations of the census “journey to work” question; much more detailed information is collected from household interview surveys (or, in Melbourne, travel diaries), and while I was studying in Christchurch they got us all to fill out a travel survey that could also feed into the modelling process.

So, do I have this right? The models account for the effect of crowding on PT demand, but do not account for the effect of congestion on road travel demand? It is a bit weird that the PT model is evidently more sophisticated than the road transport model.

Not quite correct. The ART3 model has a different level of capacity programmed in for each road, depending on how many lanes wide it is. Intersections also affect capacity and are modelled as well. As a road gets more congested, the ‘travel time’ along it lengthens in the model, and that particular route becomes less attractive.
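
That congestion response is commonly captured with a volume-delay function; the classic example is the Bureau of Public Roads (BPR) curve (whether ART3 uses this exact form is an assumption here; the shape of the relationship is the point):

```python
def bpr_travel_time(volume, capacity, free_flow_minutes,
                    alpha=0.15, beta=4.0):
    """Classic BPR volume-delay function.

    alpha and beta are the conventional BPR defaults; real models
    calibrate them separately for each link type.
    """
    return free_flow_minutes * (1 + alpha * (volume / capacity) ** beta)

# A link at capacity takes 15% longer than free flow; push it to
# 1.5x capacity and the delay balloons, so the assignment step
# starts routing traffic elsewhere.
print(bpr_travel_time(1000, 1000, 10.0))   # 11.5 minutes
print(bpr_travel_time(1500, 1000, 10.0))   # 17.59375 minutes
```

The steep power term is what makes congested routes rapidly less attractive in the assignment, exactly the behaviour described above.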

The important thing to remember is that each model does a different job. ART3 is a high-level regional strategic model designed to take inputs such as dwelling growth, employment growth, petrol prices, changing demographics, parking prices and expressed preferences from a travel survey, put them all together to produce travel demand, and then allocate that demand across the transport network for PT, freight and general vehicles. APT is a more detailed public transport model, which needs to look more closely at issues like bus crowding levels, while other models, like the SATURN model, are even more detailed, analysing things such as the sequencing of traffic signals at individual intersections.

It is important not to think of the models as a “crystal ball” that will tell you the future, but rather as a tool to help answer questions that we might have about what happens if certain inputs are changed but everything else is held constant.

“It is important not to think of the models as a “crystal ball” that will tell you the future, but rather as a tool to help answer questions that we might have about what happens if certain inputs are changed but everything else is held constant.”

Truer words have probably not been spoken on this blog. Models should never, ever be used in a deterministic sense, but only to help understand. The biggest issue with models is that their assumptions and simplifications (their limitations) are not discussed enough. The only thing you know with certainty about a model is that it’s wrong.