# -*- coding: utf-8 -*-
"""Extension to chisquare goodness-of-fit test

Created on Mon Feb 25 13:46:53 2013

Author: Josef Perktold
License: BSD-3
"""

import numpy as np
from scipy import stats


def chisquare(f_obs, f_exp=None, value=0, ddof=0, return_basic=True):
    '''chisquare goodness-of-fit test

    The null hypothesis is that the distance between the expected distribution
    and the observed frequencies is ``value``. The alternative hypothesis is
    that the distance is larger than ``value``. ``value`` is normalized in
    terms of effect size.

    The standard chisquare test has the null hypothesis that ``value=0``,
    that is, the distributions are the same.

    Notes
    -----
    The case with value greater than zero is similar to an equivalence test,
    in that the exact null hypothesis is replaced by an approximate one.
    However, TOST "reverses" null and alternative hypotheses, while here the
    alternative hypothesis is that the distance (divergence) is larger than a
    threshold.

    References
    ----------
    McLaren, ...
    Drost, ...

    See Also
    --------
    powerdiscrepancy
    scipy.stats.chisquare
    '''
    f_obs = np.asarray(f_obs)
    n_bins = len(f_obs)
    nobs = f_obs.sum(0)
    if f_exp is None:
        # default to the uniform distribution
        f_exp = np.empty(n_bins, float)
        f_exp.fill(nobs / float(n_bins))
    f_exp = np.asarray(f_exp, float)

    chisq = ((f_obs - f_exp)**2 / f_exp).sum(0)
    if value == 0:
        pvalue = stats.chi2.sf(chisq, n_bins - 1 - ddof)
    else:
        # shifted null: noncentral chi2 with noncentrality value**2 * nobs
        pvalue = stats.ncx2.sf(chisq, n_bins - 1 - ddof, value**2 * nobs)

    if return_basic:
        return chisq, pvalue
    else:
        # TODO: replace with TestResults
        return chisq, pvalue


def chisquare_power(effect_size, nobs, n_bins, alpha=0.05, ddof=0):
    '''power of chisquare goodness of fit test

    The effect size is the square root of the chisquare statistic divided
    by nobs.

    Parameters
    ----------
    effect_size : float
        This is the deviation from the Null of the normalized chi_square
        statistic. This follows Cohen's definition (sqrt).
    nobs : int or float
        number of observations
    n_bins : int (or float)
        number of bins, or points in the discrete distribution
    alpha : float in (0, 1)
        significance level of the test, default alpha=0.05
    ddof : int
        adjustment to the degrees of freedom, ``n_bins - 1 - ddof``

    Returns
    -------
    power : float
        power of the test at the given significance level and effect size

    Notes
    -----
    This function also works vectorized if all arguments broadcast.

    This can also be used to calculate the power for the power divergence
    test. However, for the more extreme values of the power divergence
    parameter, this power is not a very good approximation for samples of
    small to medium size (Drost et al. 1989).

    References
    ----------
    Drost, ...

    See Also
    --------
    chisquare_effectsize
    statsmodels.stats.GofChisquarePower
    '''
    crit = stats.chi2.isf(alpha, n_bins - 1 - ddof)
    power = stats.ncx2.sf(crit, n_bins - 1 - ddof, effect_size**2 * nobs)
    return power
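A minimal sketch of what these two functions compute, inlined with `numpy`/`scipy` so it runs standalone; the observed counts and the effect size below are made-up illustration values, not data from the source:

```python
import numpy as np
from scipy import stats

# chisquare with value=0: classical Pearson statistic against a uniform null.
f_obs = np.array([30, 20, 25, 25])
n_bins = len(f_obs)
nobs = f_obs.sum()
f_exp = np.full(n_bins, nobs / n_bins)     # expected counts: 25 per bin
chisq = ((f_obs - f_exp) ** 2 / f_exp).sum()
pvalue = stats.chi2.sf(chisq, n_bins - 1)  # chisq = 2.0 here

# chisquare_power: survival function of a noncentral chi2 with
# noncentrality effect_size**2 * nobs at the central critical value.
effect_size, alpha = 0.3, 0.05
crit = stats.chi2.isf(alpha, n_bins - 1)
power = stats.ncx2.sf(crit, n_bins - 1, effect_size ** 2 * nobs)
```

With `value > 0` in `chisquare`, the central `chi2.sf` call would be replaced by the same `ncx2.sf` pattern shown for the power calculation.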

def chisquare_effectsize(probs0, probs1, correction=None, cohen=True, axis=0):
    '''effect size for a chisquare goodness-of-fit test

    Parameters
    ----------
    probs0 : array_like
        probabilities or cell frequencies under the Null hypothesis
    probs1 : array_like
        probabilities or cell frequencies under the Alternative hypothesis.
        probs0 and probs1 need to have the same length in the ``axis``
        dimension and need to broadcast in the other dimensions.
        Both probs0 and probs1 are normalized to add to one (in the
        ``axis`` dimension).
    correction : None or tuple
        If None, then the effect size is the chisquare statistic divided
        by the number of observations. If the correction is a tuple
        (nobs, df), then the effect size is corrected to have less bias
        and a smaller variance. However, the correction can make the
        effect size negative; in that case, the effect size is set to
        zero. Pederson and Johnson (1990) as referenced in McLaren
        et al. (1994).
    cohen : bool
        If True, then the square root is returned as in the definition of
        the effect size by Cohen (1977). If False, then the original
        effect size is returned.
    axis : int
        If the probability arrays broadcast to more than 1 dimension,
        then this is the axis over which the sums are taken.

    Returns
    -------
    effectsize : float
        effect size of chisquare test
    '''
    probs0 = np.asarray(probs0, float)
    probs1 = np.asarray(probs1, float)
    probs0 = probs0 / probs0.sum(axis)
    probs1 = probs1 / probs1.sum(axis)

    d2 = ((probs1 - probs0)**2 / probs0).sum(axis)

    if correction is not None:
        nobs, df = correction
        diff = ((probs1 - probs0) / probs0).sum(axis)
        # bias correction; clip at zero if the corrected value is negative
        d2 = np.maximum((d2 * nobs - diff - df) / (nobs - 1.), 0)

    if cohen:
        return np.sqrt(d2)
    else:
        return d2
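A small worked example of the uncorrected effect size, i.e. Cohen's w = sqrt(sum((p1 - p0)**2 / p0)); the probability vectors are made-up illustration values:

```python
import numpy as np

# Uncorrected effect size (correction=None), with and without the sqrt.
probs0 = np.array([0.25, 0.25, 0.25, 0.25])  # null: uniform
probs1 = np.array([0.30, 0.20, 0.25, 0.25])  # alternative
probs0 = probs0 / probs0.sum()               # already normalized here,
probs1 = probs1 / probs1.sum()               # shown for completeness
d2 = ((probs1 - probs0) ** 2 / probs0).sum()  # 0.02 for these inputs
w = np.sqrt(d2)                               # Cohen's w
```

Passing `correction=(nobs, df)` would instead shrink `d2` via `(d2 * nobs - diff - df) / (nobs - 1)` and clip the result at zero.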