Commit Message

Hi,

Quite some time back someone pointed out that the ARM backend uses
optimize_size in quite a few places and that backends shouldn't use it
directly in patterns any more. I wrote this patch up a few weeks back;
it has been sitting in one of my trees and has gone through some degree
of testing.

While the ARM backend doesn't support hot/cold partitioning of basic
blocks because of issues with minipool placement, I suspect this is a
good cleanup by itself. The bits I'm not yet convinced about are the
changes from optimize_size to optimize_insn_for_size_p in
thumb_legitimize_address, and I'm looking for some comments there.

There are still other uses of optimize_size; here are some thoughts on
what we should do about them. I will go back and do this when I next
have some free time, but I hope to have the changes in before stage1 is
over if they are deemed useful.
- arm/aout.h (ASM_OUTPUT_ADDR_DIFF_ELT): replace with
  optimize_function_for_size_p ()?
- arm/arm.h (TARGET_USE_MOVT): probably another place that could
  benefit from the change.
- arm/arm.h (CONSTANT_ALIGNMENT): probably should retain optimize_size.
- arm/arm.h (DATA_ALIGNMENT): Likewise.
- arm/arm.h (CASE_VECTOR_PC_RELATIVE): should go hand in glove with the
  addr_diff_elt output.
- arm/coff.h or arm/elf.h (JUMP_TABLES_IN_TEXT_SECTION):
  optimize_function_for_size_p ()?
- arm/arm.c (arm_compute_save_reg_mask): replace optimize_size with
  optimize_function_for_size_p ().
- arm/arm.c (arm_output_epilogue): Likewise.
- arm/arm.c (arm_expand_prologue): Likewise.
- arm/arm.c (thumb1_extra_regs_pushed): optimize_function_for_size_p ().
- arm/arm.c (arm_final_prescan_insn): probably optimize_insn_for_size_p ().
- arm/arm.c (arm_conditional_register_usage): optimize_function_for_size_p ().
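As a sketch of the kind of change being proposed for the function-wide
uses above (illustrative only, not a hunk from the posted patch; the
surrounding conditions in arm.c will differ), a typical change would
look like:

```diff
 /* Illustrative sketch: a function-wide size/speed decision in arm.c.
    optimize_function_for_size_p () takes the function being compiled
    (cfun) and respects per-function profile information, unlike the
    global optimize_size flag.  */
-  if (optimize_size && ...)
+  if (optimize_function_for_size_p (cfun) && ...)
```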
OK for trunk after a bootstrap and test run? Thoughts on what we should
do with the rest of the uses?
cheers
Ramana
* config/arm/arm.md ("*mulsi3_compare0_v6"): Replace optimize_size
with optimize_insn_for_size_p.
("*mulsi_compare0_scratch_v6"): Likewise.
("*mulsi3addsi_compare0_v6"): Likewise.
("casesi"): Likewise.
(dimode_general_splitter): Name the existing splitter and change it
as above.
("bswapsi2"): Likewise.
* config/arm/thumb2.md (t2_muls_peepholes): Likewise.
* config/arm/arm.c (thumb_legitimize_address): Replace optimize_size
with optimize_insn_for_size_p.
(adjacent_mem_locations): Likewise.
(arm_const_double_by_parts): Likewise.
* config/arm/arm.h (FUNCTION_BOUNDARY): Use
optimize_function_for_size_p.
(MODE_BASE_REG_CLASS): Likewise.
* config/arm/constraints.md (constraint "Dc"): Use
optimize_insn_for_size_p.
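For the define_insn changes, the shape of the edit in arm.md is roughly
as follows (illustrative only; the exact target conditions of the
patterns named above may differ from this sketch):

```
;; Illustrative sketch of the condition change in an arm.md pattern;
;; the pattern bodies themselves are unchanged.
;; Before:
;;   "TARGET_32BIT && arm_arch6 && optimize_size"
;; After:
;;   "TARGET_32BIT && arm_arch6 && optimize_insn_for_size_p ()"
```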

Comments

Ramana Radhakrishnan <ramana.radhakrishnan@linaro.org> writes:
> While the ARM backend doesn't support hot cold partitioning of basic blocks
> because of issues with the mini-pool placements, I suspect
> this by itself is a good cleanup. The bits of the use that I'm not
> convinced about
> yet are the changes of optimize_size in thumb_legitimize_address
> to optimize_insn_for_size_p and I'm looking for some comments there.
I'm not sure it's correct for the define_insns either. I think it can
only be called during passes that produce new code, and which explicitly
set the global state appropriately (e.g. expand, split and peephole2).
I might be wrong, but as things stand, I don't think you can guarantee
that the global state describes the insn under test every time that
recog is called.
For existing insns, I think optimize_bb_for_speed_p (BLOCK_FOR_INSN (insn))
is the canonical check.
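In other words (my paraphrase of the suggestion, not code from the
patch), a pass that inspects an already-emitted insn would query that
insn's own basic block rather than relying on the global state that
expand/split/peephole2 set up for recog:

```c
/* Sketch only: decide size vs. speed from the insn's own block.
   optimize_bb_for_speed_p and BLOCK_FOR_INSN are GCC-internal
   (predict.h and rtl.h); this fragment is not standalone code.  */
if (optimize_bb_for_speed_p (BLOCK_FOR_INSN (insn)))
  {
    /* Optimise this insn for speed.  */
  }
else
  {
    /* Optimise this insn for size.  */
  }
```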
Richard