1 Answer

The `glm` function fits models by maximum likelihood (or restricted maximum likelihood). Maximum likelihood does not, in general, minimize the squared error; that criterion defines (ordinary) least squares. The two estimators sometimes coincide (in the linear case with normally distributed error terms, see here), but this does not hold in general. Since the coefficient of determination $R^2$ is defined in terms of ordinary least-squares regression and not maximum likelihood, there is no reason for `glm` to display this measure.
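The Gaussian special case can be sketched numerically: for a linear model with normal errors, maximizing the log-likelihood in the coefficients (for fixed sigma the gradient is proportional to the least-squares gradient) recovers exactly the OLS solution. The tiny data set below is made up for illustration.

```python
# For a Gaussian linear model, the maximum-likelihood estimate of the
# coefficients equals the OLS estimate; we recover the same slope and
# intercept by both routes on made-up data.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 6.8, 9.1]
n = len(x)

# OLS closed form for simple regression
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b_ols = sxy / sxx
a_ols = ybar - b_ols * xbar

# Maximize the Gaussian log-likelihood in (a, b) by gradient ascent;
# for fixed sigma its gradient is proportional to the least-squares
# gradient, so the ascent converges to the OLS coefficients.
a, b = 0.0, 0.0
lr = 0.01
for _ in range(20000):
    ga = sum(yi - (a + b * xi) for xi, yi in zip(x, y))
    gb = sum((yi - (a + b * xi)) * xi for xi, yi in zip(x, y))
    a += lr * ga / n
    b += lr * gb / n

print(abs(a - a_ols) < 1e-6 and abs(b - b_ols) < 1e-6)  # True
```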

PS:
Also note Nick Cox's very valid comment below: $R^2$ may also be well-defined and interesting for GLMs. My personal experience is that (as so often) some people like/accept it, while others do not.

This is a little strong. For example, Zheng, B. and A. Agresti. 2000. Summarizing the predictive power of a generalized linear model. Statistics in Medicine 19: 1771–1781, argue cogently that the square of the correlation between predicted and observed is well-defined and often interesting and useful for GLMs. It's just that some of the interpretation of $R^2$ that goes with regression is irrelevant or inappropriate in a wider context.
– Nick Cox, Aug 30 '16 at 12:02
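The Zheng & Agresti summary amounts to a few lines of arithmetic: square the correlation between the observed responses and the model's fitted values on the response scale. The fitted probabilities below are made-up stand-ins for the output of some binary-response GLM.

```python
import math

def corr(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Hypothetical 0/1 outcomes and fitted probabilities from some GLM
y     = [0, 0, 1, 0, 1, 1, 1, 0, 1, 1]
p_hat = [0.1, 0.3, 0.6, 0.2, 0.8, 0.7, 0.9, 0.4, 0.5, 0.8]

# Zheng & Agresti's measure: squared correlation of observed and predicted
r2 = corr(y, p_hat) ** 2
print(r2)
```

Unlike the OLS $R^2$, this number carries no variance-decomposition interpretation; it is purely a summary of predictive association.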


I also warn against conflating linear regression and OLS; for example, if a regression were calculated by a general maximum-likelihood routine, then $R^2$ wouldn't lose validity or value.
– Nick Cox, Aug 30 '16 at 12:03

Given that GLMs are fit using iteratively reweighted least squares, as in bwlewis.github.io/GLM, what would actually be the objection to calculating a weighted $R^2$ on the GLM link scale, using $1/\mathrm{variance}$ weights (which `glm` gives back in the `weights` slot of a fitted model)?
– Tom Wenseleers, Jun 11 '19 at 13:16
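A minimal sketch of that suggestion, with made-up data: fit a one-predictor logistic GLM by IRLS in plain Python, then compute a weighted $R^2$ on the link (logit) scale using the final working weights $w_i = \mu_i(1-\mu_i)$, i.e. the inverse variances of the working response.

```python
import math

# Made-up binary data (not separable, so the MLE is finite)
x = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
y = [0,   0,   0,   1,   0,   1,   1,   1]

a, b = 0.0, 0.0
for _ in range(25):  # IRLS iterations for logistic regression
    eta = [a + b * xi for xi in x]                       # linear predictor
    mu = [1 / (1 + math.exp(-e)) for e in eta]           # fitted probabilities
    w = [m * (1 - m) for m in mu]                        # working weights
    # working response on the link scale
    z = [e + (yi - m) / wi for e, yi, m, wi in zip(eta, y, mu, w)]
    # weighted least squares of z on (1, x): solve the 2x2 normal equations
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swz = sum(wi * zi for wi, zi in zip(w, z))
    swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
    det = sw * swxx - swx * swx
    a = (swxx * swz - swx * swxz) / det
    b = (sw * swxz - swx * swz) / det

# Weighted R^2 on the link scale: 1 - SSE_w / SST_w for the working response
eta = [a + b * xi for xi in x]
mu = [1 / (1 + math.exp(-e)) for e in eta]
w = [m * (1 - m) for m in mu]
z = [e + (yi - m) / wi for e, yi, m, wi in zip(eta, y, mu, w)]
zbar = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
sse = sum(wi * (zi - e) ** 2 for wi, zi, e in zip(w, z, eta))
sst = sum(wi * (zi - zbar) ** 2 for wi, zi in zip(w, z))
r2_w = 1 - sse / sst
print(r2_w)
```

In R the final working weights are available as `fit$weights` from a fitted `glm` object; the pure-Python IRLS loop above is only a stand-in for that machinery.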