If a webrtc client can scale the input more efficiently than webrtc itself can, that client may want to observe adapt_reason and, when the adapter downscales due to CPU limitation, downscale the input to match. However, this creates a paradox: once the input is downscaled to match the encode resolution, the adapter no longer needs to adapt, so adapt_reason reads zero even though the system is still CPU-limited.
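For illustration, a minimal sketch of such a client, assuming a hypothetical stats struct and capture interface (AdaptReason, EncoderStats, and CaptureSource are invented names for this sketch, not actual webrtc API):

```cpp
#include <cstdint>

enum AdaptReason : uint32_t {
  kAdaptNone = 0,
  kAdaptCpu = 1 << 0,
  kAdaptBandwidth = 1 << 1,
};

struct EncoderStats {
  uint32_t adapt_reason;  // Bitmask of AdaptReason values.
  int encode_width;
  int encode_height;
};

class CaptureSource {
 public:
  void SetResolution(int width, int height) {
    // A real client would reconfigure the camera or its own scaler here.
    width_ = width;
    height_ = height;
  }

 private:
  int width_ = 1280;
  int height_ = 720;
};

// Called whenever the client polls encoder stats.
void OnStats(const EncoderStats& stats, CaptureSource& source) {
  if (stats.adapt_reason & kAdaptCpu) {
    // The adapter downscaled due to CPU; scale the input to match so the
    // client's (cheaper) scaler does the work instead of webrtc's.
    source.SetResolution(stats.encode_width, stats.encode_height);
  }
}
```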
Instead, adapt_reason should continue to indicate CPU limitation until the CPU monitor determines that webrtc is ready to encode at the previous resolution again.
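A minimal sketch of that latching behavior, assuming a hypothetical CpuMonitor with a ReadyForResolution() query (neither is an actual webrtc interface): the flag is not cleared when the input shrinks to match the encode resolution, only when the pre-adaptation resolution would be sustainable again.

```cpp
#include <cstdint>

class CpuMonitor {
 public:
  // A real monitor would estimate readiness from recent encode-time
  // samples; stubbed here for the sketch.
  bool ReadyForResolution(int /*width*/, int /*height*/) const {
    return ready_;
  }
  void set_ready(bool ready) { ready_ = ready; }

 private:
  bool ready_ = false;
};

class AdaptState {
 public:
  void OnCpuDownscale(int from_width, int from_height) {
    cpu_limited_ = true;
    pre_adapt_width_ = from_width;
    pre_adapt_height_ = from_height;
  }

  // Called periodically, e.g. on each stats interval.
  void Update(const CpuMonitor& monitor) {
    // Do NOT clear the flag just because the input now matches the encode
    // resolution; wait until the previous resolution would be sustainable.
    if (cpu_limited_ &&
        monitor.ReadyForResolution(pre_adapt_width_, pre_adapt_height_)) {
      cpu_limited_ = false;
    }
  }

  uint32_t adapt_reason() const { return cpu_limited_ ? 1u : 0u; }

 private:
  bool cpu_limited_ = false;
  int pre_adapt_width_ = 0;
  int pre_adapt_height_ = 0;
};
```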
This requires some care to avoid "ping-pong" between resolutions: reducing the encode resolution may trigger a chain of events that causes other CPU consumption to decrease, which would otherwise look like headroom to scale back up, only to be consumed again. Perhaps webrtc should record the decrease in CPU utilization associated with a decrease in input resolution, and not set adapt_reason to zero until a similar increase could be tolerated.
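One way to sketch that hysteresis, with invented names and an illustrative 0.85 utilization budget (utilization expressed as a fraction of the CPU budget): record the savings attributed to the downscale and keep reporting CPU limitation until current utilization plus those savings would still fit under the budget.

```cpp
#include <algorithm>

class DownscaleHysteresis {
 public:
  // Called when a CPU-driven downscale takes effect, with utilization
  // measured just before and shortly after the change.
  void OnDownscale(double util_before, double util_after) {
    // Remember the savings attributed to the resolution drop; other work
    // may have shrunk too, so this is an upper bound on what scaling back
    // up would cost again.
    saved_ = std::max(0.0, util_before - util_after);
    latched_ = true;
  }

  // Called on each utilization sample; returns true while adapt_reason
  // should still report CPU limitation.
  bool StillCpuLimited(double current_util) {
    if (latched_ && current_util + saved_ < kUtilThreshold) {
      // A comparable utilization increase would now fit under the budget,
      // so it is safe to stop reporting CPU limitation.
      latched_ = false;
    }
    return latched_;
  }

 private:
  static constexpr double kUtilThreshold = 0.85;
  double saved_ = 0.0;
  bool latched_ = false;
};
```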