TotalFunctionalProgramming greatly simplifies type analysis of the language and many other properties (e.g. equality between codata types), and enables a great many optimizations. With divergence out of the picture, one can guarantee that lazy, strict, lenient (parallel), partial, and other evaluation strategies will always reach the same result. Thus, the choice of strategy becomes strictly one of optimization. And because analysis is easier, optimizers are also much easier to verify automatically (e.g. using Coq).

Any terminating function can be written in TotalFunctionalProgramming. If you can bound the number of steps, you could always set an integer to that bound and count down on that integer to prove termination... brute force, kludgy, but it works. The thing is that not every algorithm can be written conveniently or 'elegantly', especially given restrictions common to the design (such as using primitive recursion rather than allowing Coq-powered theorem proofs of termination).
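The counting-down trick above can be sketched in Haskell (all names here are illustrative, not from any particular TFP system). A 'fuel' parameter bounds the recursion: every call recurses on a structurally smaller counter, so termination is syntactically evident even for a function like Collatz iteration, whose unbounded termination is an open problem:

```haskell
-- Fuel-bounded Collatz iteration. Recursion is structural on the
-- first argument (the fuel), so a total checker can accept it.
-- Returns Just (steps taken) if the bound sufficed, Nothing otherwise.
collatzWithin :: Int -> Int -> Maybe Int
collatzWithin 0 _ = Nothing              -- fuel exhausted
collatzWithin _ 1 = Just 0               -- reached 1
collatzWithin fuel n
  | even n    = succ <$> collatzWithin (fuel - 1) (n `div` 2)
  | otherwise = succ <$> collatzWithin (fuel - 1) (3 * n + 1)
```

This is exactly the brute-force kludge described: the caller must supply a bound, and an insufficient bound yields Nothing rather than divergence.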

Fortunately, roughly 75% of everything we ever write in FunctionalProgramming is first- or second-order PrimitiveRecursive, and much of the rest can readily be adapted. Primitive support for transforming natural numbers from the 's(s(s(s(s(0)))))' PeanoArithmetic representation into the 'cons(1 cons(0 cons(1 nil)))' binary representation helps quite a bit in optimizing various mathematical algorithms.
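The two representations mentioned can be sketched as follows (type and function names are assumptions for illustration); the binary form is a little-endian list of bits, so s(s(s(s(s(0))))) becomes cons(1 cons(0 cons(1 nil))):

```haskell
-- Unary (Peano) naturals: 5 is S (S (S (S (S Z))))
data Nat = Z | S Nat

toInt :: Nat -> Integer
toInt Z     = 0
toInt (S n) = 1 + toInt n

-- Convert to a little-endian list of bits: 5 -> [1,0,1]
toBits :: Nat -> [Integer]
toBits n = go (toInt n)
  where
    go 0 = []
    go k = (k `mod` 2) : go (k `div` 2)
```

The point of the primitive conversion is performance: arithmetic on the unary form is linear in the magnitude of the number, while the binary form makes it logarithmic.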

TotalFunctionalProgramming does not preclude CollectionOrientedVerbs or support for 'sets' as a primitive type, so long as those operations are guaranteed to terminate (usually they are, since sets are finite). TotalFunctionalProgramming does support 'infinite' codata, but not the ability to filter it (unless you can somehow prove that there are a finite number of steps between each positive match).
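The map-versus-filter distinction on codata can be made concrete with a small Haskell sketch (the stream type and names are assumed for illustration). A map over a stream is 'productive': it always delivers the next element after a bounded amount of work. A filter is not, because nothing bounds the search for the next matching element:

```haskell
-- An infinite stream: the 'codata' counterpart of lists.
data Stream a = Cons a (Stream a)

-- The stream 0, 1, 2, ...; corecursive and productive.
nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))

-- Productive: one output element per input element, so total.
smap :: (a -> b) -> Stream a -> Stream b
smap f (Cons x xs) = Cons (f x) (smap f xs)

-- Observe a finite prefix of a stream.
stake :: Int -> Stream a -> [a]
stake n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : stake (n - 1) xs

-- A total checker would reject the analogous 'sfilter p', since a
-- predicate matching no element would make it search forever.
```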

Writing something like Ackermann's function does become a bit of a challenge. But while expressing it in TotalFunctionalProgramming is an exercise worth pursuing, proving it correct in a normal FunctionalProgramming language is complex and difficult. It is for tasks like writing parsers that it becomes easy to appreciate knowing the function will terminate.
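The challenge is surmountable: Ackermann's function is not first-order primitive recursive, but it becomes structurally recursive once higher-order recursion is allowed. A Haskell sketch of the standard trick (names assumed):

```haskell
-- Apply f to x, k times; structural recursion on k.
iterN :: Int -> (a -> a) -> a -> a
iterN 0 _ x = x
iterN k f x = iterN (k - 1) f (f x)

-- Ackermann via higher-order recursion: 'ack m' recurses only on
-- the structurally smaller 'm - 1', so a checker that permits
-- higher-order folds over naturals can accept it.
ack :: Int -> Int -> Int
ack 0 = (+ 1)
ack m = \n -> iterN (n + 1) (ack (m - 1)) 1
```

The equation ack (m+1) n = (ack m)^(n+1) 1 is just the usual double recursion unrolled along n, which is why no non-structural call remains.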

As already mentioned there are some efficient algorithms that are not easily expressed with total functions or in a functional style at all. For an example see SameFringeProblem.

How do you justify that example? The HaskellLanguage answer to SameFringeProblem is the cleanest, most easily expressed implementation of all. TotalFunctionalProgramming allows lazy, lenient, strict, etc. evaluation - all lead to the same answer, so more options are available. More generally, I agree with your statement... there are many efficient algorithms that are not expressed efficiently with FunctionalProgramming in general (including TFP), especially those that involve manipulating state and pointers. However, the particular example you provided is not a clear indicator of this at all.
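For reference, the clean lazy Haskell answer alluded to here can be sketched as (the Tree type and names are assumptions):

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Flatten a tree to its left-to-right fringe of leaves.
fringe :: Tree a -> [a]
fringe (Leaf x)   = [x]
fringe (Node l r) = fringe l ++ fringe r

-- Under lazy evaluation the two fringes are produced on demand,
-- so a mismatch is found without materializing either list fully.
sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe t u = fringe t == fringe u
```

All the structural recursion here is over finite data, so the definition is total regardless of evaluation strategy; laziness only changes the cost, not the answer.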

If you're stuck with strict evaluation, then I suppose you'd have difficulty efficiently implementing the SameFringeProblem in TFP, or in a functional style generally. TotalFunctionalProgramming typically allows definition of codata, but a TFP implemented by enforcing that functions be PrimitiveRecursive cannot recurse over the resulting codata, so you must keep one tree in its normal representation and fold over it monadically.
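The monadic fold described here can be sketched in Haskell, using the Maybe monad to thread the remaining fringe of the first tree while folding over the second (all names are assumptions):

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Left-to-right fringe of the first tree, kept as a (possibly lazy) list.
fringe :: Tree a -> [a]
fringe (Leaf x)   = [x]
fringe (Node l r) = fringe l ++ fringe r

-- Fold over the second tree, consuming the fringe as we go;
-- Nothing signals a mismatch, Just xs carries the unconsumed rest.
check :: Eq a => Tree a -> [a] -> Maybe [a]
check (Leaf a) (x:xs) | a == x = Just xs
check (Leaf _) _               = Nothing
check (Node l r) xs            = check l xs >>= check r

-- Equal fringes iff the fold succeeds and consumes everything.
sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe t u = check u (fringe t) == Just []
```

This is a single pass over the second tree with no intermediate tree for it, which is the structure a primitive-recursive TFP would force on you.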