I didn't just "mention" it; I talked about the behavior of the sum of the
series I'1 = C(P, O, I), I'2 = C(P, O, I + I'1), I'3 = C(P, O, I + I'1 +
I'2), and so on. I don't see any realistic way to get steady progress from
this model. Flat, yes; jumps, yes; but not a constant derivative.
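To make the shape of that series concrete, here is a toy sketch (my own illustration, not a model from this discussion): I iterate the series above with a hypothetical increment function C that returns zero gain below some intelligence threshold and compounding gains above it. The function C, the threshold, and the constants are all invented for illustration; the point is only that such a recurrence naturally produces flat stretches and jumps, not a constant slope.

```python
def C(P, O, I):
    """Hypothetical increment function (pure assumption): below a
    threshold the AI finds no usable self-improvements (flat region);
    above it, each gain enables further gains (jump region)."""
    threshold = 10.0
    if I < threshold:
        return 0.0                         # flat: no improvement found
    return 0.5 * (I - threshold + 1.0)     # jump: compounding returns

def trajectory(I0, steps, P=None, O=None):
    """Accumulate I, I + I'1, I + I'1 + I'2, ... as in the series."""
    total = I0
    out = [total]
    for _ in range(steps):
        total += C(P, O, total)            # I'_{n+1} = C(P, O, running sum)
        out.append(total)
    return out

flat = trajectory(9.0, 5)    # below threshold: stays flat
jump = trajectory(10.0, 5)   # at threshold: successive increments grow
```

Running both trajectories: the first never moves, and in the second each increment is larger than the last, so on any fixed timescale the curve looks flat and then vertical, never like a straight line.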

point the AI can't redesign or optimize itself. I don't believe in slow,
steady improvement. Debugging, yes. But if you're talking about the slow
reworking of code, line by line, you're really talking about a large jump in
slow motion, because the AI is slow. If the AI can rework a line of code well
enough to get improvement without needing to add more intelligence, it's all
part of the same "increment", the same I' or O'. If the partial reworking
adds even more intelligence, then the equation runs even faster.

Final remark: Given the relative computational requirements of consciousness
and algorithmic thinking, given the Principle of Mediocrity, and given the
relative linear speeds and processing power compared to the human brain, I
would find it a remarkable coincidence if a major jump were slowed down
exactly enough to look like slow and steady improvement on the human
timescale, rather than flat or vertical. It might happen, because the human
programmers could be unable to work on things that happened on other scales,
but it wouldn't happen by coincidence.