John Luczak of Education First gives a presentation to the Performance Evaluation Advisory Council last week

Calling it another step to “fix what’s broken in our public schools,” Gov. Dannel P. Malloy on Monday announced which school districts will test a much-debated plan to grade teacher performance that will soon be mandatory for every teacher in the state.

A total of 10 applicants — both individual school districts and consortia of districts — were chosen to pilot the new teacher and principal evaluation system under the state’s recently passed education reform package.

What happens in those districts in the coming school year will be closely watched, since their successes and failures will help shape a new statewide evaluation plan starting in the fall of 2013.

Those ratings will eventually be used to determine if teachers get tenure under the new state law. Teachers judged “ineffective” will get additional support but can be fired if they fail to improve.

“The fact is that many of our state’s schools and most of our teachers are doing a tremendous job,” Malloy said in a statement announcing the pilot. “But without a fair and reliable evaluation system, teachers and administrators are left with no clear indicators of where they are succeeding and where they should improve.”

The announcement comes as a state panel continues under a looming deadline to hammer out new evaluation guidelines that the State Board of Education must approve by July 1.

The Performance Evaluation Advisory Council was still debating the biggest sticking point until last Thursday, when the group finally reached consensus on how much weight standardized test scores should carry in the teacher ratings.

The framework approved by the panel in January called for 45 percent of a teacher’s grade to be based on evidence of student achievement. But a battle erupted last month over how much of that would be based on test scores.

The panel agreed test scores would make up half of the student achievement portion, or 22.5 percent of a teacher’s overall grade, but the dispute centered on what could be included in the remaining 22.5 percent. The panel had specified that the remainder should be based on “other indicators” of student achievement, such as portfolios of students’ work.

Committee members representing school boards and administrators suggested schools should be able to choose to include additional tests as one of the “other indicators.” Leaders of the state’s two teacher unions strongly objected, saying that’s not what the panel approved.

“Why did we say 22.5 percent would be state tests?” panel member Mary Loftus Levine, executive director of the Connecticut Education Association, asked during a meeting Thursday. “If we meant for state tests to be in both (groups) then we would have just said 45 percent would be student tests and other indicators.”

The compromise reached last week — proposed by state Education Commissioner Stefan Pryor — allows a maximum of one additional standardized test and a minimum of one indicator that is not a test in the disputed 22.5 percent. The mix would be determined by mutual consent between the teacher and evaluator.

Some members of the panel — including Loftus Levine and Connecticut Association of Public School Superintendents Executive Director Joseph Cirasuolo — disagreed with the consensus for different reasons but said they would not block it.

“We don’t agree but we can live with it,” Cirasuolo said.

Loftus Levine worried the compromise would result in evaluations that rely too much on tests. She said it is “a lot more work” to use other indicators such as behavior, attendance and portfolios of students’ work than to look at a test score, but it gives a truer picture of a teacher’s abilities.

“I don’t think we should take the easy way out here,” she said.

Cirasuolo wanted to give the pilot districts more leeway to come up with their own formulas.

“I think you need to just let folks go out and set those learning objectives teacher by teacher, school by school, district by district and let’s see what happens,” he said. “If we don’t like what happens, we can always adjust.”

In addition to student performance measures, the evaluation framework calls for 40 percent of a teacher’s grade to come from classroom observations, and the rest from student, parent, and peer feedback.

Pryor said he was looking for a diversity of approaches in the pilot districts, but the state also needs to give direction about the panel’s preferences. He said a key purpose of the pilot is to find out what works best so it can be included when the model is rolled out statewide.

The University of Connecticut’s Neag School of Education has been tapped to study the pilot and report to state lawmakers by October 2013.

Uncertainty over how the new evaluations will play out didn’t stop dozens of districts from applying to take part in the test run.

The State Department of Education received 36 applications for the pilot, and not just from struggling urban districts. Several of the state’s high-performing suburban districts applied, as did some rural districts and a charter school. In a letter inviting superintendents to volunteer, Pryor said the goal was for the pilot to “represent the state’s diverse regions and school systems.”

Districts were chosen based on size, location, designation as rural, suburban or urban, levels of academic performance, and socio-economic and ethnic diversity.

Cirasuolo said districts were enticed by the opportunity for intense state-funded training and the chance to try out the new system before it becomes mandatory.

“You have a chance to have a shakedown cruise,” he said. “You get a chance to try it out before it has any real consequences.”