Estimating test execution

Estimation is a tricky subject. It happens at the macro level (projects) and micro level (test cases or sessions). When you're asked to estimate how long it will take to test something, do you have a framework you use to build your estimate? Here's the one I use when estimating how long it might take for me to test a feature:

How long will it take me to wrap my head around what I'm being asked to test (reviewing available documentation, hallway conversations, etc.)?

How long will it take me to draft my initial test charters?

How long will it take me to review those charters with the programmers working on the feature and other testers in the group, and to make any necessary updates?

What environment setup is required to prep my testing? Do I need hardware? Firewall ports opened? Will I need to find or create specialized data? Do I have the tools I need? Will I need to write test code? How long will it take me to get all this stuff pulled together?

How many charters do I think I'll execute before testing for the feature is "done"?

How much time should I build into my estimate for when I'm delayed (code not ready, need a bug fix before I can go further, environment not available, etc.)? This buffer likely depends on the project or my manager's preference.

How much documentation do I need to produce to wrap up my testing, and do I need to estimate that separately? (For example, in a regulated environment I might add time for checking all the boxes.)
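The questions above can be rolled up into a simple sum with a delay buffer on top. Here's a minimal sketch of that arithmetic; the task names, hours, and 25% buffer are hypothetical examples, not recommendations:

```python
# Roll the framework's per-task estimates into one padded total.
# All task names, hours, and the buffer percentage are made-up examples.

def estimate_feature_testing(tasks, delay_buffer_pct):
    """Sum per-task hours, then pad the total for expected delays."""
    base_hours = sum(tasks.values())
    return base_hours * (1 + delay_buffer_pct)

tasks = {
    "wrap my head around the feature": 4,
    "draft initial test charters": 2,
    "review charters and update": 2,
    "environment, data, and tool setup": 8,
    "execute charters (6 sessions x 2h)": 12,
    "wrap-up documentation": 3,
}

total = estimate_feature_testing(tasks, delay_buffer_pct=0.25)
print(f"{total:.1f} hours")  # 31 hours of work, padded 25% for delays
```

Keeping the buffer as a separate multiplier, rather than hiding it inside each task, makes it easy to show a manager exactly how much of the estimate is contingency.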