Posted by Thomas J. West on August 18, 2011 at 9:05 AM
Historically, American scholastic marching band has always been associated with competition. From its earliest days around the turn of the 20th century, youth marching band and drum and bugle corps organizations were sponsored by VFW posts, Boy Scout troops, and church youth programs. Marching bands made their way into school systems primarily as entertainment during football games.
Most of the youth organizations centered around summer competitive shows sponsored by the VFW in which units would start at the goal line (the starting line), march to midfield playing a march, post the colors on the front sideline, perform selections in concert formation, then parade off past the opposite goal line (the finish line). Groups were evaluated primarily on adherence to flag code and the level to which they performed cleanly and precisely. Evaluators typically used a "tick system," keeping a tally of hash marks on their clipboards for every error they saw or heard. The group with the fewest ticks would win.
Over the decades, as marching units became more innovative and artistic in nature, the system of adjudication adjusted to reward creativity and originality while still maintaining precision as a major component. In the past ten years, most competitive circuits have adopted a very effective system of rewarding both creativity and precision across a wide range of groups, from novice units with a small skill set to the most advanced professional-level performers.
It's All In The Language
Judges' sheets vary from circuit to circuit primarily in their language and in how the numbers are managed. Most sheets are divided into two subcaptions, usually displayed on the sheet as the "top box" and "bottom box". Typically, the top box evaluates the content of the show from a design standpoint. It's the "what" in the equation. For example, on an Ensemble Music judge's sheet, the top box evaluates the effectiveness of the musical arrangements being played by the unit from a construction standpoint (Do the arrangements highlight every section of the unit? Do they demonstrate the technical capabilities of the performers? etc.).
The bottom box typically evaluates the level of precision with which the group performs the content from the top box. It's the "how" in the equation. This box is often called "Excellence" and evaluates the technical performance of the unit - how well do they perform the content? For the aforementioned Ensemble Music judge, this box asks questions like "Are the voices of the ensemble equally present in balance? Does the unit perform with consistent intonation across sections? Are rhythmic timing and alignment consistent across the ensemble at all times?"
The language used to describe these two boxes is incredibly important in determining how an adjudicator evaluates the ensemble. Just a small change in the wording can have a tremendous effect on the kind of feedback the group receives.
Numbers Management - How Do They Come Up With Those Scores?
Scoring a competitive music ensemble is a challenging endeavor. Judges refer to the process of scoring units over the course of a competition as "numbers management." Because of the subjective nature of the art form, personal opinion is reflected in a judge's scoring and commentary. However, over the course of competitive music's long history, the system of evaluation has become quite effective at establishing a common level of expectation based on the construction of the judges' sheets and the general quality level of the competitive units in any given circuit of competition.
Scoring is typically divided into either four or five ranges of points, also known as boxes. These point ranges vary from one competition circuit to another. The boxes loosely correspond to the adjectives "Superior" (box 5), "Excellent" (box 4), "Good" (box 3), "Fair" (box 2), and "Poor" (box 1). Traditionally, competitive music judging systems use a 100-point scale to rank and rate competitive units. Very few, if any, judging systems allow a score below 50. Most units typically start the competitive season scoring in the 60-80 point range and finish the season scoring anywhere from the upper 70's to the upper 90's.
Each box carries descriptive phrases that characterize the level of performance in that box. Words like "rarely", "sometimes", "frequently", or "always" are used to quantify the quality of a performance. To score a unit, a judge will reflect on the commentary they provided on the verbal recording and on the written sheet, then decide which of the scoring boxes the unit's performance in their caption falls into. They will then decide where within the spectrum of that box the unit's score should be - are they a box 2 approaching box 3? And so on.
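For readers who think in concrete terms, the box system can be pictured as a simple lookup over the 50-100 scale. This sketch is purely illustrative: the actual box boundaries vary from circuit to circuit, and the even 10-point split below is an assumption, not any circuit's real sheet.

```python
# Hypothetical box boundaries: five equal 10-point boxes over the 50-100 scale.
# Real circuits draw these lines differently; this is only an illustration.
BOXES = [
    (90, 5, "Superior"),
    (80, 4, "Excellent"),
    (70, 3, "Good"),
    (60, 2, "Fair"),
    (50, 1, "Poor"),
]

def box_for_score(score: float) -> tuple[int, str]:
    """Return the (box number, adjective) that a score falls into."""
    for floor, number, adjective in BOXES:
        if score >= floor:
            return number, adjective
    raise ValueError("Scores below 50 are rarely, if ever, awarded.")

print(box_for_score(84.5))  # a typical mid-season score: (4, 'Excellent')
```

A score of 84.5, for example, would land in the upper half of box 4 under these assumed boundaries - the kind of "box 4 approaching box 5" placement described above.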
Ranking And Rating
When it comes to scoring, a judge's job is first to get the competitive units ranked in the correct order based on the performance displayed during this competition only. In general, judges do not compare scores from week to week and contest to contest (though we know that most band/corps directors do). Judges do begin to evaluate a unit as soon as they have wrapped up the scoring and commentary for the unit before, even if the official adjudication period has not yet begun. For example, how a unit comes into the stadium and takes the field already communicates quite a bit about both its level of experience and skill sets and the level of excellence it has achieved so far.
A judge will usually be able to decide within the first few minutes of a show which scoring box this performance belongs in. As the show progresses, the judge may adjust this reaction up or down based on the entire content of the show. When it comes time to write down the number, the judge will decide which part of which box the unit's score belongs in. They will also take into consideration how many more units have yet to compete and will typically leave enough space between the scores they assign to allow them to score another group. The primary goal is to get the ranking order correct, then score the units appropriately based on their performance that day.
Individual judges don't always "get it right" when it comes to ranking and rating, but over the course of an entire competition, a panel of judges typically "gets it right" collectively as the caption scores are combined into the final 100-point total. For championship shows, many competitive circuits will use a double panel, with two judges for each caption, in order to minimize the ability of any one judge to sway a unit's score dramatically.
Judges Are People Too - Suggestions For Critique
Judges, just like umpires in baseball, are often the target of criticism and ire because their personal opinion can dramatically affect the outcome of the event. The majority of adjudicators employed by the various judging circuits are individuals with years of experience in the caption they have been assigned to judge. Whether they are music teachers, professional performers, instructors, or simply life-long enthusiasts of the activity, judges by and large want to see performing units succeed and want to help them improve their performances.
It is common at many competitions to have a post-show meeting with the judges to have direct interaction, commonly referred to as "critique". Here are some suggestions for any band/corps staff going into a critique session:
Competitive music can be exhilarating, frustrating, character-building, and character-damaging. Having been a band director of both competitive and non-competitive bands, and having judged a wide variety of bands, I can honestly say that competition is not for everyone. Band directors should seriously consider whether competition is a good fit for their program, and whether or not the community they work in will be supportive of a competition band.
This article (c) 2011 Thomas J. West. All content on ThomasJWestMusic dot com is licensed under a Creative Commons Attribution-No Derivative Works 3.0 License. Please contact the author before publishing on or off-line.