I'm working on a new site now, and we have a requirement that all stored procedures for our SSRS reports use this approach: the very last SELECT comes from a predefined #SSRSTable, so the local stored procedures end up very big in terms of line count. The stated rationale is that SSRS will be able to update columns/fields if anything changes in the dataset (?).
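If I understand the convention correctly, a minimal sketch of such a procedure might look like this (the procedure, table, and column names are invented purely for illustration):

```sql
-- Hypothetical example of the convention described above: the procedure
-- stages all of its work into a predefined temp table, and the very last
-- statement is a plain SELECT from that table, which SSRS uses to
-- discover the dataset's fields.
CREATE PROCEDURE dbo.rpt_SalesSummary   -- made-up report procedure name
    @Year int
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #SSRSTable
    (
        Region     nvarchar(50),
        TotalSales money
    );

    INSERT INTO #SSRSTable (Region, TotalSales)
    SELECT r.RegionName,
           SUM(s.Amount)
    FROM dbo.Sales AS s                  -- made-up source tables
    JOIN dbo.Regions AS r
        ON r.RegionID = s.RegionID
    WHERE YEAR(s.SaleDate) = @Year
    GROUP BY r.RegionName;

    -- The mandated final statement: a bare SELECT from #SSRSTable.
    SELECT Region, TotalSales
    FROM #SSRSTable;
END;
```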

I know that isn't true, but perhaps there are other reasons I'm not aware of? (Not counting that the local DBA is paid by the line.) I always try to make my stored procedures compact and optimized in every respect.

I think the merits of this approach depend on the details. Obviously, if you create a temp table, insert data into it via a join, and then query the temp table, this takes more time than performing the query without the temp table. However, does the extra time matter?

I once worked on a long stored procedure that used various persistent tables, and a junior programmer told me I should remove them to improve performance. I replied that since the routine ran in the morning, there was no need to save the extra milliseconds; meanwhile, I had the data in persistent tables in case something went wrong, so I could quickly find the issue. The primary goal of the routine was accuracy, and if there was any problem with my final output, my job could be at risk unless I could figure out the issue ASAP.

Performance isn't everything. It all depends on what the highest priority is for what you are doing.