The likelihood ratio test (LRT) may be used to compare statistically any two nested models defined by the constraints they place on the parameters of psychometric functions (PFs). As such, the LRT provides a flexible method for testing a wide variety of research questions (e.g., are the thresholds and/or slopes in conditions A and B statistically different? Is the lapse rate significantly different from 0?). However, such statistical comparisons between models are valid only insofar as the assumptions made by both models are correct. For example, when testing the equivalence of the threshold parameters between two conditions, both models might assume that the slopes are identical across conditions. Here, using Monte Carlo simulations, the robustness of the LRT to a variety of violations of assumptions is investigated in the context of different possible model comparisons. Results indicate that the statistical p-values associated with model comparisons are robust against many violations of assumptions, even when these violations result in biased parameter estimates. For example, although it is well established that threshold and slope estimates may be seriously biased when the assumed lapse rate differs from the actual (generating) lapse rate, it is shown here that such violations have a negligible effect on statistical decisions regarding the equivalence of either the thresholds or the slopes of PFs across multiple datasets. It is also shown that the inflation of Type I error rates that occurs when violations of assumptions do affect statistical p-values (e.g., violations of assumptions regarding a PF's slope parameter) may be avoided by performing a goodness-of-fit test of the fuller (less constrained) model and rejecting the simpler model only when the fuller model fits well. Also considered are the effects of violations of assumptions on model comparisons made using the AIC and BIC information criteria.
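The nested-model comparison the abstract describes can be sketched briefly. The following illustration (not the paper's code) uses SciPy to test whether two conditions share a threshold: a fuller model with separate thresholds and a common slope is compared against a reduced model with a single shared threshold, and twice the log-likelihood difference is referred to a chi-square distribution. The stimulus levels, trial counts, generating parameters, and the fixed guess and lapse rates are all arbitrary choices made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # illustrative stimulus levels
n = 40                                      # trials per level (assumed)

def pf(x, alpha, beta, gamma=0.5, lam=0.02):
    # Logistic PF with threshold alpha, slope beta, and fixed
    # (assumed-known) guess rate gamma and lapse rate lam.
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

# Simulate two conditions whose thresholds actually differ
kA = rng.binomial(n, pf(x, alpha=-0.5, beta=2.0))
kB = rng.binomial(n, pf(x, alpha=0.5, beta=2.0))

def nll_full(p):
    # Fuller model: separate thresholds, shared slope
    aA, aB, b = p
    pA, pB = pf(x, aA, b), pf(x, aB, b)
    return -(np.sum(kA * np.log(pA) + (n - kA) * np.log(1 - pA)) +
             np.sum(kB * np.log(pB) + (n - kB) * np.log(1 - pB)))

def nll_reduced(p):
    # Reduced model: shared threshold and shared slope
    a, b = p
    return nll_full([a, a, b])

fit_full = minimize(nll_full, [0.0, 0.0, 1.0], method="Nelder-Mead")
fit_red = minimize(nll_reduced, [0.0, 1.0], method="Nelder-Mead")

# LRT statistic; df = difference in number of free parameters (3 - 2 = 1)
lr_stat = 2 * (fit_red.fun - fit_full.fun)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
```

Note that both models here assume equal slopes and fixed guess and lapse rates; the abstract's point is precisely that the validity of the resulting p-value depends on such shared assumptions.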