A different way of adjudicating...

People take music exams, and the examiner marks each entrant without comparison to the other entrants. And the entrants (on the whole) accept the process and the results. There's consistency between examiners. This is my point - music examiners don't try to rank the entrants. There's no winner, no second place, and so on. Several entrants will get the same mark.

Having done a few brass band contests now, it seems to me that adjudicators feel they have to rank the bands. I think it would be fairer to mark them the way music examiners do, and just award points without forcing the bands into a ranking, which distorts the results. The criteria on which the marks are awarded should be published too. I'm sure I've read other posts complaining that scores in band contests don't tell you very much. You know what 140 in a music exam means. But 185 in a brass band contest?

That is similar to the adjudication used at lots of music festivals. They give out different grades of award, like 'Highly Commended' etc. It would be interesting to see it tried, although I think most banders like having something to win!

Interesting idea, and I kind of know where your thought process is coming from with this. But... surely the difference between an exam and a competition is this: an exam gauges whether a candidate can meet or better a set criterion, whereas a contest ranks the entries in order of merit? So a contest may be of a low standard - perhaps lower than would be considered acceptable (or a "pass") - but the judge (adjudicator) still needs to rank the entries.

Maybe I'm being a little pedantic with the definitions. I can see the argument that it's not healthy to judge a subjective situation like a band contest by "is entry A better than entry B?", but at the end of it all the entries must be ranked in some sort of order. So while ranking maybe shouldn't be the primary factor, surely it has to have some bearing on the final outcome for it to be a contest at all?

I fully agree with your suggestion to publish the judging criteria beforehand. However, having been machine-gunned down here on several occasions for suggesting the very same thing, I reckon you should prepare yourself for the backlash!!

Here's another interesting twist on the 'different adjudication' theme.

OK, this couldn't be used at our incestuous, one-test-piece, 15-band type of contest, but say at an entertainments contest - somehow, if we could get the audience to vote - then this would be interesting. I appreciate there are technicalities involved, but think about it...

How about a long-term contest covering all the bands who play a summer park job? At each event, get the audience to fill in a form etc... average out the number of points awarded each week... maybe it'd work, who knows.
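For what it's worth, the weekly averaging could be as simple as this - a rough Python sketch, where the band names, the scoring scale and the form data are all made up:

```python
from collections import defaultdict

def season_standings(votes):
    """Average audience scores per band across a summer of park jobs.

    votes: (band, score) pairs collected from the audience forms each
    week. A band that plays more weeks gains no advantage, because
    each band is judged on its own average, not its total.
    """
    totals = defaultdict(lambda: [0.0, 0])  # band -> [sum, count]
    for band, score in votes:
        totals[band][0] += score
        totals[band][1] += 1
    return {band: s / n for band, (s, n) in totals.items()}

# Two weeks of (made-up) forms:
# season_standings([("Anytown", 8), ("Anytown", 6), ("Besses", 7)])
# -> {"Anytown": 7.0, "Besses": 7.0}
```

It does nothing about the coach-load-of-supporters problem, of course.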

Our MD tells of a local contest in which one of the sections attracted only one entry, who duly came on and played the testpiece. The adjudicator later pronounced in effect "This might have been a foregone conclusion. On the other hand I believe that a winning performance needs to be first-class, and since I didn't hear a first-class performance in this section the band drawn number one gains second place".

Interesting idea, and I kind of know where your thought process is coming from with this. But... surely the difference between an exam and a competition is this: an exam gauges whether a candidate can meet or better a set criterion, whereas a contest ranks the entries in order of merit?


Don't contests like the areas try to serve the dual purpose of ranking bands and determining whether they meet the criteria to stay in their section (as opposed to being promoted or relegated)?

So wouldn't it make sense for the areas, instead of averaging placings over three years, to average points over three years? Adjudicators could make sure the winner and runner-up have different scores (even if only by half a point) to decide who advances to the finals.
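In case it helps, the arithmetic being proposed is just a rolling mean of points - a quick Python sketch, with invented scores:

```python
def three_year_average(points_by_year):
    """Mean of a band's area points over its last three contests.

    points_by_year: scores in chronological order, most recent last.
    If adjudicators always give the winner and runner-up distinct
    scores (even by half a point), ties in these averages become
    unlikely.
    """
    recent = points_by_year[-3:]
    return sum(recent) / len(recent)
```

So `three_year_average([180, 183, 186])` gives 183.0, and any older fourth result is simply ignored.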

Here's another interesting twist on the 'different adjudication' theme.

OK, this couldn't be used at our incestuous, one-test-piece, 15-band type of contest, but say at an entertainments contest - somehow, if we could get the audience to vote - then this would be interesting. I appreciate there are technicalities involved, but think about it...

How about a long-term contest covering all the bands who play a summer park job? At each event, get the audience to fill in a form etc... average out the number of points awarded each week... maybe it'd work, who knows.


So the band that brings a coach-load of supporters, plays terribly, but still gets top marks from its fan club, trounces all the others.

(surely you've taken part in work-shops and seminars where you've conspired to put in some stupid responses to wreck the system :wink: )

So wouldn't it make sense for the areas, instead of averaging placings over three years, to average points over three years? Adjudicators could make sure the winner and runner-up have different scores (even if only by half a point) to decide who advances to the finals.


Possibly - but only if there was some sort of consistency between the adjudicators' scoring systems over the years. Some of them seem to struggle to get a consistent result over a single contest, let alone over a three-year time span!!

Possibly - but only if there was some sort of consistency between the adjudicators' scoring systems over the years. Some of them seem to struggle to get a consistent result over a single contest, let alone over a three-year time span!!


It seems this sort of consistency is what people want though, no? I'd think it would be much easier to achieve if points were the goal rather than rankings... then (to open another can of worms) you could deal with that other issue of the number of championship section bands in each area... perhaps LSC might only have 5 performances that meet the criteria for a championship section band, etc...

Often I think there is no real ranking and it would be fairer to award points against a predefined set of criteria, and then leave the bands to compare their points afterwards. I'm always surprised to see score sheets from contests where the marks awarded differ by 1 all the way down the list, almost as if the adjudicator had chosen the ranking first because that was what was expected of him and then awarded points afterwards to reflect the ranking. Why aren't there more tied places? I'm puzzled.

I don't think there is a simple solution. Personally I don't agree with the "one contest and that's your lot, mate" approach, as it tends to favour "well-connected" bands who, whilst short of players for the bulk of the year, can pull them in for one contest. I'm not against a band doing so, but I feel that a band's ranking should be a measure of its overall standard, not of how well or how badly it's done in one particular contest.
The three-year system does go some way to alleviating this, as it means that bands which can hang on to a complete line-up for a long period will tend to do OK. But there is always the problem that results are subjective and for the most part based upon the interpretation of a single individual (in L&SC anyway).
A better system would be for a band to be given a ranking based on a set number of contests, with possibly a weighting applied depending upon the number of entrants and (possibly) the region (to prevent bands from one region doing all the "easy" contests). However, as we all know, this is an organisational nightmare, and not all bands can afford to do (or want to do) more than one contest a year.
Sad to say, the current system we have is probably about as good as we're going to get.
With regard to adjudication, I really feel that 2 in a box (preferably 2 in 2 boxes!) is an absolute must, and a clearer definition of the adjudication criteria should be provided.
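To make the weighting idea concrete, here's one possible (entirely hypothetical) way of combining a season's results: weight each contest by the size of its field, so that points won against more bands count for more.

```python
def weighted_season_score(results):
    """Weighted mean of a season's contest scores.

    results: (points, entrants) pairs, one per contest. Weighting by
    the number of entrants is just one plausible choice; a regional
    factor could be folded in the same way.
    """
    total = sum(points * entrants for points, entrants in results)
    field = sum(entrants for _, entrants in results)
    return total / field
```

For example, 180 points in a 10-band field plus 186 in a 20-band field comes out at 184.0 - leaning towards the result from the bigger contest.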

It seems this sort of consistency is what people want though, no? I'd think it would be much easier to achieve if points were the goal rather than rankings... then (to open another can of worms) you could deal with that other issue of the number of championship section bands in each area... perhaps LSC might only have 5 performances that meet the criteria for a championship section band, etc...


Isn't that very difficult - if not impossible - to achieve, though? Unlike an exam, where questions are either right or wrong (and don't forget to show your working!!), a musical performance is a subjective assessment - unless you start counting splits etc. Even musical grades have sections to remove some of the subjectivity - you either get the scales / sight-reading / aural tests etc. right or not. Yes, there are pieces to play, but the assessor is only having to deal with one player playing a piece from a limited choice.

A better system would be for a band to be given a ranking based on a set number of contests, with possibly a weighting applied depending upon the number of entrants and (possibly) the region (to prevent bands from one region doing all the "easy" contests). However, as we all know, this is an organisational nightmare, and not all bands can afford to do (or want to do) more than one contest a year.


Going to the "easy" contests wouldn't matter if scores were based on criteria...

e.g. a score of 180-200 would be a championship section performance regardless of the contest... a score of 160-179 would be a first section performance, etc...
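Taking that literally, the criteria would amount to a simple lookup like this. The 180-200 and 160-179 bands are from the post above; the lower thresholds are my own invention, just to complete the picture:

```python
def section_for_score(score):
    """Map a contest score (out of 200) to a section standard."""
    if score >= 180:
        return "Championship"
    if score >= 160:
        return "First"
    if score >= 140:
        return "Second"   # hypothetical threshold
    if score >= 120:
        return "Third"    # hypothetical threshold
    return "Fourth"       # hypothetical catch-all
```

Under a scheme like this, a 185 would mean the same thing at any contest - which is exactly what the music-exam comparison at the top of the thread was after.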