Abstract: Measurement theory has played a foundational role in educational, psychological, and psychiatric assessment. This talk will introduce various statistical models that have served as key tools in measurement theory. It will provide detailed discussions of existing and new statistical models and the statistical inferences thereof, with a special focus on certain latent class and latent factor models, as well as their extensions for categorical and counting process data. The new developments will be applied to examples in educational assessment and psychological evaluation.

ShinyItemAnalysis covers a broad range of methods and offers data examples, model equations, parameter estimates, interpretation of results, and selected R code, and is thus suitable for teaching psychometric concepts with R. In addition, the application aspires to be a simple tool for the analysis of educational tests and other composite measurements by allowing users to upload and analyze their own data and to automatically generate an analysis report in PDF or HTML.
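The kind of classical item analysis such an application reports can be sketched in a few lines. The following Python sketch is only an illustration of the underlying computations (it is not the package's actual code, which is in R): item difficulty as the proportion of correct answers and discrimination as the corrected item-total correlation.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5 if sxx and syy else 0.0

def item_analysis(responses):
    """Classical item analysis of a 0/1 score matrix (rows = persons,
    columns = items): difficulty = proportion of correct answers,
    discrimination = corrected item-total correlation (item score vs.
    total score with that item excluded)."""
    totals = [sum(row) for row in responses]
    out = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        rest = [t - v for t, v in zip(totals, item)]  # total without item j
        out.append((mean(item), pearson(item, rest)))
    return out
```

Items with very low difficulty or near-zero (or negative) discrimination are the usual candidates for revision in test development.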

The R package difNLR has been developed for the detection of potentially unfair items in educational and psychological testing, i.e., the analysis of so-called differential item functioning (DIF), based on extensions of the logistic regression model. For dichotomous data, six models have been implemented to offer a wide range of proxies to Item Response Theory models. Parameters are obtained by non-linear least squares estimation, and the DIF detection procedure is performed by either an F-test or a likelihood ratio test of a submodel. For unscored data, an analysis of differential distractor functioning (DDF) based on the multinomial regression model is offered to provide a closer look at individual item options (distractors).
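As a rough illustration of the general logistic-regression approach to DIF (a plain logistic model with a uniform group term, not one of the six non-linear models the package actually implements), one can fit nested models with and without the group effect and compare them by a likelihood ratio test. The following self-contained Python sketch uses Newton-Raphson for the maximum likelihood fits:

```python
import math

def solve(A, b):
    """Solve the small linear system A x = b by Gauss-Jordan elimination."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_logistic(X, y, iters=25):
    """Maximum likelihood logistic regression via Newton-Raphson.
    X includes the intercept column; returns (coefficients, deviance)."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        g = [0.0] * p
        H = [[0.0] * p for _ in range(p)]
        for xi, yi in zip(X, y):
            mu = 1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(beta, xi))))
            w = mu * (1.0 - mu)
            for a in range(p):
                g[a] += (yi - mu) * xi[a]
                for c in range(p):
                    H[a][c] += w * xi[a] * xi[c]
        beta = [b + s for b, s in zip(beta, solve(H, g))]
    dev = 0.0
    for xi, yi in zip(X, y):
        mu = 1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(beta, xi))))
        dev -= 2.0 * (yi * math.log(mu) + (1 - yi) * math.log(1.0 - mu))
    return beta, dev

def dif_lr_statistic(score, group, y):
    """Likelihood ratio statistic comparing the model with a uniform-DIF
    group term against the model with the matching score only."""
    X_full = [[1.0, s, g] for s, g in zip(score, group)]
    X_red = [row[:2] for row in X_full]
    _, dev_full = fit_logistic(X_full, y)
    _, dev_red = fit_logistic(X_red, y)
    return dev_red - dev_full  # compare to a chi-squared with 1 df
```

A significant statistic indicates that, at the same ability level, the two groups have different probabilities of answering the item correctly.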

We argue that psychometric analysis should be a routine part of test development in order to gather evidence of the reliability and validity of the measurement. Using the example of an admission test to a medical school, we demonstrate how the presented R packages and Shiny application may provide simple and free tools to routinely analyze tests and to explain advanced psychometric models to students and test developers. Attention will also be paid to the technical details of the automatically generated reports.

Abstract: Understanding how neurons and neuronal assemblies communicate is one of the greatest challenges of modern science. An adequate description and quantification of brain connectivity (i.e., communication between neuronal assemblies) is important not only for understanding the structure and function of brain networks, but also for the diagnosis and treatment of neuropsychiatric diseases, since brain disorders – from schizophrenia to depression to post-traumatic stress disorder – are considered disorders of connectivity. Functional brain networks are derived from multivariate time series of a quantity reflecting the time evolution of brain activity. Modern neuroimaging methods have become popular for inferring functional networks; however, the scalp electroencephalogram (EEG) is probably the most widely available and least expensive non-invasive method to record the brain's electrical activity. We will discuss measures of synchronization and coherence that can be used to infer connectivity patterns from scalp EEG, with special emphasis on measures designed to cope with the effects of conductivity and the reference electrode. Another challenging topic is the detection of cross-frequency interactions, namely phase-amplitude coupling. We will ask whether we can detect cross-frequency interactions from scalp EEG and whether we can identify them as causal, e.g., in the sense that "the phase of slow oscillations determines the amplitude of fast oscillations."
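One widely used quantifier of phase-amplitude coupling is the mean vector length (chosen here purely for illustration; the talk does not commit to this particular measure): given the instantaneous phase of the slow oscillation and the instantaneous amplitude envelope of the fast oscillation, typically extracted via the Hilbert transform, it measures how strongly the amplitude concentrates around particular phases.

```python
import cmath
import math

def mean_vector_length(phase, amplitude):
    """Mean vector length |mean(A_t * exp(i * phi_t))|: close to zero when
    the fast-wave amplitude is unrelated to the slow-wave phase, and larger
    when the amplitude clusters at particular phases."""
    z = sum(a * cmath.exp(1j * p) for p, a in zip(phase, amplitude))
    return abs(z) / len(phase)

# Synthetic check: ten full slow cycles, with and without coupling
phases = [2 * math.pi * t / 200 for t in range(2000)]
coupled = [1.0 + math.cos(p) for p in phases]  # amplitude peaks at phase 0
uncoupled = [1.0 for _ in phases]              # amplitude ignores the phase
```

For the coupled pair the statistic is well above zero, while for the phase-independent amplitude it vanishes; in practice its significance is assessed against surrogate data.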

Abstract: The talk will present two kinds of personal experience. First, personal experience with attempts to apply knowledge and skills from numerical linear algebra (matrix computations), my primary field of research, to selected statistical problems. These experiences were gathered mainly while co-authoring papers on classification with discriminant analysis and on robust multivariate scatter estimation. Second, I will report on personal experience and future work in solving statistical tasks that arise in pharmaceutical research. This second part does not present any original research; its intention is rather to inform the audience about an application area that is perhaps not very well known to statisticians.

Hemodynamic data carry information about both the central and the peripheral arterial system. Photoplethysmography (PPG) is widely used as a simple diagnostic tool. In this contribution we investigate several methods of statistical modelling, including smoothing, orthogonal harmonic regression, parametrization of the PPG pulses, and clustering and classification of different curve shapes in the multivariate parametric space to aid diagnosis. We also investigate an alternative method called baroplethysmography (BPG), which records the blood pressure signal. Both methods are non-invasive and are applied to the patient's finger. We conclude that both measurement methods can be applied simultaneously and that the measured signal may provide information transformed into multiple (typically 10-20) independent parameters, which are used both for long-term stability monitoring of a particular patient and for classifying different patients with respect to their arterial system and possibly related diseases.
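For equidistantly sampled pulses, orthogonal harmonic regression reduces to projecting each pulse onto a few sine-cosine pairs; the resulting Fourier coefficients can then serve as pulse-shape parameters. The following minimal Python sketch illustrates the principle only (it is not the authors' actual pipeline):

```python
import math

def harmonic_fit(y, n_harmonics):
    """Least-squares Fourier coefficients of one equidistantly sampled
    periodic pulse. Because the sampled harmonics are orthogonal on the
    grid, this is plain orthogonal harmonic regression: the mean level a0
    followed by (a_k, b_k) cosine/sine pairs."""
    n = len(y)
    coeffs = [sum(y) / n]  # a0, the mean level
    for k in range(1, n_harmonics + 1):
        ak = 2.0 / n * sum(v * math.cos(2 * math.pi * k * t / n)
                           for t, v in enumerate(y))
        bk = 2.0 / n * sum(v * math.sin(2 * math.pi * k * t / n)
                           for t, v in enumerate(y))
        coeffs.append((ak, bk))
    return coeffs
```

A handful of such coefficients per pulse yields exactly the kind of low-dimensional parametric description in which curve shapes can then be clustered and classified.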

The hazard function is an important tool in survival analysis and reflects the instantaneous probability of failure within the next time instant. The hazard function can depend on covariates such as age, gender, etc. We use kernel smoothing to model the unconditional and conditional hazard function. We also include estimates based on the Cox proportional hazards model smoothed by kernel methods, and attention is paid to a comparison of the two approaches. These methods are applied to real data from the Slovak Arthroplasty Register on artificial hip joint replacements implanted in all 40 orthopedic and traumatology departments in the Slovak Republic (coverage of 99.9%), with a maximum follow-up of twelve years, from Jan 1, 2003 to Dec 31, 2014. The set of 46 859 operations with 1005 implant failures is stratified by type of fixation, diagnosis, and gender. The hazard function conditioned on age in years is calculated for pre-specified data subsets and visualized as color-coded surfaces. These results should lead to an improvement in the quality of care for patients after artificial joint replacements.
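The basic idea behind an unconditional kernel hazard estimate can be sketched as smoothing the Nelson-Aalen increments (deaths over the number at risk) with a kernel of bandwidth h. The Python sketch below, using an Epanechnikov kernel, illustrates the principle only and is not the exact estimator used in the talk:

```python
def epanechnikov(u):
    """Epanechnikov kernel with support (-1, 1)."""
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

def kernel_hazard(times, events, t, bandwidth):
    """Kernel estimate of the hazard at time t: smooth the Nelson-Aalen
    increments d_i / n_i (failures over the number still at risk at each
    distinct failure time) with an Epanechnikov kernel of the given
    bandwidth. events[i] is 1 for a failure, 0 for censoring."""
    failure_times = sorted(set(ti for ti, ei in zip(times, events) if ei))
    total = 0.0
    for ti in failure_times:
        d = sum(1 for tj, ej in zip(times, events) if ej and tj == ti)
        at_risk = sum(1 for tj in times if tj >= ti)
        total += epanechnikov((t - ti) / bandwidth) / bandwidth * (d / at_risk)
    return total
```

Evaluating such an estimate on a grid of times, stratified by fixation, diagnosis, and gender, is what produces the color-coded hazard surfaces mentioned above.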

Inter-rater reliability (IRR), commonly assessed by the intra-class correlation coefficient, is an important statistic describing the extent to which two or more raters are consistent in the measures they assign. In organizational research, the data structure is often hierarchical, and designs deviate substantially from the research ideal of a fully crossed design. Previous research has also shown some evidence of the existence of moderators of IRR in selection instruments. However, the estimation of IRR in more complex multilevel settings accounting for possible moderators of IRR has not been fully addressed. In this work, we use mixed-effects models to estimate IRR for rubrics used to rate teacher applicants to a school district. In this complex multilevel context, we estimate within-school and across-school IRR and test hypotheses about possible moderators of IRR, such as applicant type (internal or external). We also quantify the direct effect of IRR on the predictive power of the selection instrument and its components, and we offer practical applications and policy implications of the methods we employ.
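In the simplest fully balanced case, the intra-class correlation can be computed from a one-way random-effects ANOVA; the mixed-effects models used in the work above generalize this to hierarchical, non-crossed designs with moderators. A minimal sketch of the balanced ICC(1) estimator, shown for orientation only:

```python
def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA on a balanced design:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the number of ratings
    per target. groups is a list of lists, one inner list of ratings per
    rated target (e.g., per applicant)."""
    k = len(groups[0])
    n = len(groups)
    grand = sum(sum(g) for g in groups) / (n * k)
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum(sum((x - sum(g) / k) ** 2 for x in g) for g in groups) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1 mean that almost all rating variance lies between targets rather than between raters; in the unbalanced multilevel setting the same variance components are instead estimated from a fitted mixed-effects model.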

We deal with sequences of observations that are naturally ordered in time and assume various underlying stochastic models. These models are parametric, and some of the parameters are possibly subject to change at some unknown time point. The main goal is to test whether such an unknown change has occurred or not. The core of the change point methods presented here lies in ratio-type statistics based on maxima of cumulative sums.
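The classical building block of such tests is the maximum of standardized cumulative sums; ratio-type statistics replace the variance normalization by a functional of the same partial sums, which avoids direct variance estimation. As an orienting sketch (the classical, non-ratio form only), the max-CUSUM statistic for a change in mean:

```python
def cusum_max_statistic(x):
    """Classical max-CUSUM statistic for a change in mean:
    max_k |S_k - (k/n) * S_n| / (sigma_hat * sqrt(n)),
    where S_k is the k-th partial sum and sigma_hat the sample
    standard deviation. Large values indicate a change."""
    n = len(x)
    total = sum(x)
    mean = total / n
    sigma = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    s = best = 0.0
    for k, v in enumerate(x, start=1):
        s += v
        best = max(best, abs(s - k / n * total))
    return best / (sigma * n ** 0.5)
```

Under the null hypothesis of no change this statistic converges to the supremum of a Brownian bridge, which provides asymptotic critical values.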

Firstly, an overview of the starting points is given. Then we focus on methods for detecting a gradual change in mean. Subsequently, procedures for the detection of an abrupt change in mean are generalized by considering a score function. We explore the possibility of applying bootstrap methods to obtain critical values, while the disturbances of the change point model are considered to be weakly dependent.
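One generic way to obtain bootstrap critical values is to recompute the test statistic on resampled, centered observations and take an empirical quantile. The sketch below uses plain i.i.d. resampling and an unstandardized max-CUSUM purely for simplicity; under the weak dependence considered in the talk, a block bootstrap over blocks of consecutive observations would be used instead:

```python
import random

def max_cusum(x):
    """Unstandardized max-CUSUM: max_k |S_k - (k/n) * S_n| / sqrt(n)."""
    n = len(x)
    total = sum(x)
    s = best = 0.0
    for k, v in enumerate(x, start=1):
        s += v
        best = max(best, abs(s - k / n * total))
    return best / n ** 0.5

def bootstrap_critical_value(x, level=0.95, reps=500, seed=1):
    """Empirical level-quantile of the statistic recomputed on samples
    drawn with replacement from the centered observations. This i.i.d.
    scheme is valid for independent errors only; weakly dependent errors
    call for a block bootstrap."""
    rng = random.Random(seed)
    m = sum(x) / len(x)
    centered = [v - m for v in x]
    stats = sorted(max_cusum([rng.choice(centered) for _ in x])
                   for _ in range(reps))
    return stats[int(level * reps) - 1]
```

The null hypothesis of no change is rejected when the statistic computed on the observed sequence exceeds the bootstrap critical value.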

Procedures for the detection of changes in the parameters of linear regression models are presented as well, and a permutation version of the test is derived. Then, the related problem of testing for a change in an autoregression parameter is studied. Finally, our interest lies in panel data with a moderate or relatively large number of panels, where each panel contains a small number of observations. Asymptotic and bootstrap testing procedures to detect a possible common change in the means of the panels are established.

All the theoretical results are illustrated through simulations. Several practical applications of the developed procedures are presented on real data as well.