The Bank of Korea changed its method of GDP estimation from a fixed-weighted to a chain-weighted measure in 2009. The fixed-weighted method suffered from problems such as substitution bias and the rewriting of economic history. As a result of the change, annual growth rates calculated with the chain-weighted method for 1970 through 2008 turned out to be 0.8%p higher on average than the existing rates. The quarterly average chain-weighted growth rates were 0.19%p higher than the fixed-weighted ones, but the two series moved in the same direction. In this paper we analyze whether the differences in rates between the two calculation methods bring about a difference in the cyclical characteristics of GDP. We conclude that although growth rates differed after the introduction of the chain-weighted method, there was no difference in the cyclical fluctuations.
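
The substitution-bias mechanism can be sketched with a toy two-good economy (invented numbers, not the Bank of Korea's accounts): a fixed-weighted index values quantities at base-year prices, while a chain-weighted index re-prices them each year.

```python
# Toy illustration of fixed-weighted vs chain-weighted real growth for a
# two-good economy. The price of good B falls while its quantity grows
# fast -- the classic source of substitution bias. All numbers invented.

prices = [(1.0, 1.00), (1.0, 0.70), (1.0, 0.50)]   # (p_A, p_B) by year
quants = [(100, 10), (102, 14), (104, 20)]          # (q_A, q_B) by year

def laspeyres_growth(t, base=0):
    """Fixed-weighted: value quantities at base-year prices."""
    pA, pB = prices[base]
    cur = pA * quants[t][0] + pB * quants[t][1]
    prev = pA * quants[t - 1][0] + pB * quants[t - 1][1]
    return cur / prev - 1

def chain_growth(t):
    """Chain-weighted: value quantities at the previous year's prices."""
    pA, pB = prices[t - 1]
    cur = pA * quants[t][0] + pB * quants[t][1]
    prev = pA * quants[t - 1][0] + pB * quants[t - 1][1]
    return cur / prev - 1

for t in (1, 2):
    print(t, round(laspeyres_growth(t), 4), round(chain_growth(t), 4))
```

The two measures agree in the year after the base year and then diverge as relative prices move away from the base-year weights.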

In this paper, we propose a statistic to measure investor sentiment. Asymmetric volatility (the leverage effect), in which volatility responds more strongly to bad news than to good news, is commonly observed in financial time series. In a bubble state, investors tend to speculate continuously on financial instruments out of optimism about the future; subsequently, prices tend to rise abnormally for a long time. Estimators of the transformation parameter and the skewness based on Yeo-Johnson transformed GARCH models are employed to check whether a bubble or abnormality exists. We verify the appropriateness of the proposed interpretation through analyses of the KOSPI and NIKKEI.

This paper examines the relationship between the stock and futures markets in terms of the lead-lag relationship, correlation, and the hedge ratio using wavelet analysis. The basic finding is that the relationship between the two markets depends significantly on the time scale. First, there is a feedback relationship between the stock and futures markets at the long-run scale; however, weaker evidence is observed at shorter-run scales. Second, the wavelet correlation between the two markets increases at longer time scales. Third, the hedge ratio and the effectiveness of hedging strategies increase as the investment horizon lengthens. The results in this paper indicate that the stock and futures series are perfectly correlated in the long run and are tied together over long horizons.
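
The scale-dependence of the hedge ratio can be mimicked with a plain Haar wavelet decomposition of simulated cointegrated spot and futures returns (an illustrative sketch, not the authors' wavelet analysis):

```python
import random

def haar_details(x):
    """Haar DWT: lists of detail coefficients, level 1 (finest) upward."""
    levels, a = [], list(x)
    while len(a) >= 2:
        levels.append([(a[i] - a[i + 1]) / 2 for i in range(0, len(a) - 1, 2)])
        a = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a) - 1, 2)]
    return levels

def hedge_ratio(spot_d, fut_d):
    """Scale-wise minimum-variance hedge ratio cov(s, f) / var(f)."""
    ms, mf = sum(spot_d) / len(spot_d), sum(fut_d) / len(fut_d)
    cov = sum((s - ms) * (f - mf) for s, f in zip(spot_d, fut_d))
    var = sum((f - mf) ** 2 for f in fut_d)
    return cov / var

random.seed(1)
# Cointegrated toy prices: a shared random walk plus stationary noise, so
# spot and futures returns are tied together only over long horizons.
n = 1024
dw = [random.gauss(0, 1) for _ in range(n + 1)]
es = [random.gauss(0, 0.8) for _ in range(n + 1)]
ef = [random.gauss(0, 0.8) for _ in range(n + 1)]
spot_r = [dw[t] + es[t] - es[t - 1] for t in range(1, n + 1)]
fut_r  = [dw[t] + ef[t] - ef[t - 1] for t in range(1, n + 1)]

sd, fd = haar_details(spot_r), haar_details(fut_r)
for lvl in range(5):
    print("level", lvl + 1, "hedge ratio:",
          round(hedge_ratio(sd[lvl], fd[lvl]), 3))
```

The stationary noise dominates the fine scales and washes out at coarse scales, so the estimated hedge ratio rises toward one as the scale lengthens, matching the paper's third finding.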

We compare time series models including seasonal ARIMA, fractional ARIMA, and Holt-Winters models on hourly and daily air passenger data. The performance evaluation shows that the Holt-Winters method outperforms the other models in terms of MAPE.
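
A minimal pure-Python version of additive Holt-Winters with MAPE, run on an artificial series with weekly seasonality (the study's own passenger data and tuning are not reproduced here; the smoothing parameters are illustrative):

```python
def holt_winters_additive(y, m, alpha, beta, gamma, h):
    """Additive Holt-Winters; returns h-step-ahead forecasts."""
    # Initialize level, trend and seasonals from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        last_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    n = len(y)
    return [level + (k + 1) * trend + season[(n + k) % m] for k in range(h)]

def mape(actual, pred):
    """Mean absolute percentage error."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

# Toy daily series (period 7) with trend and seasonality, a stand-in for
# air passenger counts.
series = [100 + 0.5 * t + 10 * [0, 3, 5, 8, 6, 2, -4][t % 7] for t in range(140)]
train, test = series[:126], series[126:]
fc = holt_winters_additive(train, 7, 0.3, 0.05, 0.2, len(test))
print("MAPE:", round(mape(test, fc), 2))
```

On this clean trend-plus-seasonality series the forecasts track the hold-out closely, which is the regime in which Holt-Winters tends to do well.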

The article introduces a method to estimate a cointegrated vector autoregressive model with mixed-frequency data through a state-space representation of its vector error correction model(VECM). The method directly estimates the parameters of the model, in state-space form, using the available data in their mixed-frequency form. It then allows one to compute in-sample smoothed estimates and out-of-sample forecasts at high-frequency intervals using the estimated model. The method is applied to a mixed-frequency data set that consists of quarterly real gross domestic product and three monthly coincident indicators. The results show that the method produces accurate smoothed estimates and forecasts in comparison with a method based on single-frequency data.

In this paper we use KLIPS(Korean Labor and Income Panel Study) data surveyed from 2006(wave 9) to 2009(wave 12). Previous studies are concerned with panel attrition in the early waves, whereas this study classifies the response patterns and investigates factors that influence panel attrition once the panel tends to stabilize. A logit model revealed that panel attrition was influenced by relocation and housing type. In addition, a decision tree showed that panel attrition was affected by monthly living expenses and overall household income.
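
The logit step can be sketched on simulated attrition data (the covariates, coefficients, and sample are invented for illustration; this is not KLIPS and not the study's estimation code):

```python
import math, random

def fit_logit(X, y, lr=1.0, iters=1200):
    """Logistic regression fitted by plain gradient descent (with intercept)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            grad[0] += p - yi
            for j, xj in enumerate(xi, 1):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / len(y) for wj, g in zip(w, grad)]
    return w

random.seed(2)
# Simulated households: x1 = relocated since last wave, x2 = rented housing.
# True attrition model: logit(p) = -1.5 + 1.5*x1 + 0.8*x2 (invented values).
X, y = [], []
for _ in range(600):
    x1, x2 = float(random.random() < 0.3), float(random.random() < 0.4)
    p = 1 / (1 + math.exp(-(-1.5 + 1.5 * x1 + 0.8 * x2)))
    X.append([x1, x2])
    y.append(1.0 if random.random() < p else 0.0)

w = fit_logit(X, y)
print("intercept, relocation, renting:", [round(v, 2) for v in w])
```

The fitted coefficients recover the positive effects of relocation and housing type on attrition, mirroring the direction of the paper's logit findings.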

When modeling event times in biomedical studies, the outcome might be incompletely observed. In this paper, we assume that the outcome is recorded as current status failure time data. Despite a well-developed literature, the routine practical use of many current status data modeling methods remains infrequent due to the lack of specialized statistical software, the difficulty of assessing model goodness-of-fit, and the possible loss of information caused by covariate grouping or discretization. We propose a model based on pseudo-observations that is convenient to implement and allows flexibility in the choice of the outcome. Parameter estimates are obtained from generalized estimating equations. Examples from studies of bile duct hyperplasia and breast cancer, in conjunction with simulated data, illustrate the practical advantages of this model.
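
The idea of pseudo-observations can be illustrated in the simpler right-censored setting: jackknife leave-one-out pseudo-observations for the Kaplan-Meier survival probability at a fixed time (the paper's current status setting swaps in a different base estimator; the data below are invented, not the bile duct or breast cancer studies):

```python
def km_surv(times, events, t):
    """Kaplan-Meier estimate of S(t); event = 1 failure, 0 censored."""
    s = 1.0
    for u in sorted(set(tm for tm, e in zip(times, events) if e and tm <= t)):
        d = sum(1 for tm, e in zip(times, events) if e and tm == u)
        n_risk = sum(1 for tm in times if tm >= u)
        s *= 1 - d / n_risk
    return s

def pseudo_obs(times, events, t):
    """Leave-one-out jackknife pseudo-observations for S(t):
    theta_i = n*S_hat(t) - (n-1)*S_hat^{(-i)}(t)."""
    n = len(times)
    full = km_surv(times, events, t)
    out = []
    for i in range(n):
        loo = km_surv(times[:i] + times[i + 1:], events[:i] + events[i + 1:], t)
        out.append(n * full - (n - 1) * loo)
    return out

times  = [2, 3, 3, 5, 6, 7, 8, 9, 11, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
po = pseudo_obs(times, events, t=6)
print("S(6):", round(km_surv(times, events, 6), 3))
print("mean pseudo-obs:", round(sum(po) / len(po), 3))
```

Each subject, censored or not, gets a complete pseudo-outcome, which is what makes a subsequent GEE regression on covariates convenient.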

Each year, research institutions such as the Korea Employment Information Service(KEIS), a government institution established for the advancement of employment support services, and Job Korea, a popular Korean job website, announce first job waiting times after college graduation. This provides useful information for understanding and resolving youth unemployment problems. However, previous reports treat the waiting time as completely observed, which is not appropriate. This paper proposes a new study of first job waiting times with the time origin set to four months prior to graduation. In Korea, most college students hunt for jobs before graduation; in addition, the full-fledged job markets also open before graduation. In this case the exact waiting time of college graduates can be right-censored. We apply a Cox proportional hazards model to evaluate the associations between first job waiting times and risk factors. A real example is based on the 2008 Graduates Occupational Mobility Survey(GOMS).

Data quality and climate forecasting performance deteriorate when long climate records are contaminated by non-climatic factors such as station relocation or instrument replacement. For a trusted climate forecast, it is necessary to implement data quality control and to test for inhomogeneity. Before the inhomogeneity test, a reference series was created from an index measuring the relationship between the temperature series of the candidate station and those of surrounding stations. In this study, an inhomogeneity test was performed for each season and climatological station on daily mean, daily minimum, and daily maximum temperatures. Comparing two inhomogeneity tests, the traditional SNHT and an adjusted SNHT, we found the adjusted method slightly superior to the traditional one.
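
A sketch of the classic (unadjusted) SNHT statistic on simulated temperatures with an artificial relocation shift; the adjusted variant studied here, and the reference-series construction, are not reproduced:

```python
import random

def snht(x):
    """Classic SNHT: standardize the series, then maximize
    T(k) = k*zbar1^2 + (n-k)*zbar2^2 over candidate break points k.
    A large max(T) suggests a mean shift at the argmax."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    z = [(v - mean) / sd for v in x]
    best_t, best_k = 0.0, 0
    for k in range(1, n):
        z1 = sum(z[:k]) / k
        z2 = sum(z[k:]) / (n - k)
        t = k * z1 * z1 + (n - k) * z2 * z2
        if t > best_t:
            best_t, best_k = t, k
    return best_t, best_k

random.seed(3)
# Simulated daily mean temperatures with a +1.2 degree shift at day 200,
# e.g. from a station relocation (invented data).
temps = [15 + random.gauss(0, 1) + (1.2 if d >= 200 else 0) for d in range(365)]
t_max, k = snht(temps)
print("T_max:", round(t_max, 1), "estimated break at day", k)
```

The statistic peaks near the injected break and far exceeds typical critical values (around 9-10 at the 5% level for series of this length), flagging the series as inhomogeneous.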

The sales volume of men's cosmetics has increased drastically in Korea. In recent years, men's needs for cosmetics have diversified and consumer demand for functional cosmetics has risen greatly. In particular, male consumers have become more interested in essence products, light and concentrated treatments that correct skin problems. This research analyzes consumer preferences for essence-for-men through choice-based conjoint analysis. This approach is adopted because the respondents' task of choosing the most preferred option from several alternatives closely mimics actual marketplace purchasing behavior. A new technique for the construction of choice sets is suggested, based on the balanced incomplete block design, to accommodate a larger number of product profiles. The proposed design for choice sets is balanced and provides a tool to filter contradictory choices. Conjoint analyses are performed to assess the relative importance of attributes and to identify the most preferred profile of essence-for-men with respect to attributes such as emphasized function, price, type of content, and design of container. Some differences appear in the results between age brackets as well as between groups classified by the amount spent on fashion items.
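
One classical balanced incomplete block design of this kind assigns 7 profiles to 7 choice sets of size 3, so that every profile appears equally often and every pair of profiles co-occurs exactly once (an illustrative textbook design, not necessarily the paper's):

```python
from itertools import combinations

# A balanced incomplete block design for 7 product profiles in 7 choice
# sets of size 3 (the Fano plane): each profile appears in 3 sets and
# each pair of profiles appears together in exactly 1 set.
choice_sets = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5),
]

appearances = {p: sum(p in s for s in choice_sets) for p in range(7)}
pair_counts = {pair: sum(set(pair) <= set(s) for s in choice_sets)
               for pair in combinations(range(7), 2)}
print("replications:", set(appearances.values()))          # {3}
print("pairwise co-occurrences:", set(pair_counts.values()))  # {1}
```

The balance of replications and pairwise co-occurrences is exactly what allows attribute effects to be compared fairly across choice sets while keeping each set small enough for respondents.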

A Bayesian multiple change-point model for small data is proposed for multivariate means, extending the univariate case of Cheon and Yu (2012). The proposed model assumes data from a multivariate noncentral t-distribution and conjugate priors for the distributional parameters. We apply a Metropolis-Hastings-within-Gibbs sampling algorithm to the proposed model to detect multiple change-points. The performance of the proposed algorithm is investigated on simulated data and on a real bivariate dataset of Hanwoo fat content.

Persistent homology, a methodology from computational algebraic topology, can be used to capture the topological characteristics of functional data. To visualize these characteristics, a persistence diagram is adopted that plots, against the baseline, the pairs consisting of a local minimum and a local maximum. We use the bottleneck distance to measure the topological distance between two functions; this distance can in turn be fed into multidimensional scaling(MDS), which visualizes the relative positions of the functions based on their pairwise distances. In this study, we use handwriting data, which have functional forms, to obtain persistence diagrams and examine differences between the observations using the bottleneck distance and MDS.
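
Zero-dimensional sublevel-set persistence of a discretized function can be computed with a small union-find sweep (a sketch of the min-max pairing that underlies the persistence diagram; the bottleneck distance and MDS would then operate on the resulting pairs):

```python
def persistence_pairs(f):
    """0-dimensional sublevel-set persistence of a sequence f.
    Each pair is (birth = local-minimum value, death = merge value);
    the global minimum's component never dies and is omitted."""
    n = len(f)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    comp_min = {}            # component root -> its minimum value
    alive = [False] * n
    pairs = []
    for i in sorted(range(n), key=lambda i: f[i]):
        alive[i] = True
        comp_min[i] = f[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and alive[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # Elder rule: the younger component (higher min) dies.
                    old, young = (ri, rj) if comp_min[ri] <= comp_min[rj] else (rj, ri)
                    if comp_min[young] < f[i]:   # skip zero-persistence pairs
                        pairs.append((comp_min[young], f[i]))
                    parent[young] = old
    return sorted(pairs)

# A toy curve with three local minima, standing in for a handwriting stroke.
curve = [2, 0, 3, 1, 4, 0.5, 3]
print(persistence_pairs(curve))
```

Each pair records how deep a local minimum is relative to the level at which its basin merges into an older one; plotting these pairs gives the persistence diagram.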

This paper presents a method for identifying "edge observations" located on a boundary constructed by a truncation variable, as well as outliers, after fitting a multivariate skew t-distribution(MST) to asymmetric data. The detection of edge observations is important in data analysis because it provides information on a certain critical area of the observation space. The proposed method is applied to the Australian Institute of Sport(AIS) dataset, which is well known for the asymmetry of its data space.

Among various biometric recognition systems, we consider statistical fingerprint matching methods using minutiae on fingerprints. We define similarity distance measures based on the coordinates and angles of the minutiae, and suggest a fingerprint recognition model following statistical distributions. We obtain confidence intervals of the similarity distance for the same and for different persons, and optimal thresholds that minimize the two kinds of error rates for the distance distributions. We find that the confidence intervals for the same and for different persons do not overlap and that the optimal threshold lies between them. Hence an alternative statistical matching method can be suggested using the nonoverlapping confidence intervals and the optimal thresholds obtained from the distributions of similarity distances.
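
A toy version of a minutia-based similarity distance and matching score (the coordinates, angle weight, and tolerance are all invented for illustration, not the paper's calibrated values):

```python
import math

def minutia_distance(m1, m2, w_angle=0.5):
    """Distance between two minutiae (x, y, theta): Euclidean position
    distance plus a weighted, wrapped angular difference in radians."""
    dx, dy = m1[0] - m2[0], m1[1] - m2[1]
    dtheta = abs(m1[2] - m2[2]) % (2 * math.pi)
    dtheta = min(dtheta, 2 * math.pi - dtheta)
    return math.hypot(dx, dy) + w_angle * dtheta

def match_score(fp1, fp2, tol=5.0):
    """Fraction of minutiae in fp1 whose nearest minutia in fp2 lies
    within the tolerance -- a simple nearest-neighbor similarity."""
    hits = sum(1 for m in fp1 if min(minutia_distance(m, n) for n in fp2) < tol)
    return hits / len(fp1)

# Toy templates: the same finger captured twice with small positional and
# angular noise, versus an unrelated finger.
finger_a  = [(10, 12, 0.3), (25, 40, 1.1), (33, 18, 2.0), (47, 30, 0.7)]
finger_a2 = [(11, 13, 0.35), (24, 41, 1.0), (34, 17, 2.1), (46, 31, 0.75)]
finger_b  = [(5, 50, 2.5), (60, 10, 0.1), (20, 25, 1.8), (40, 45, 2.9)]

print("same person score:", match_score(finger_a, finger_a2))
print("different person score:", match_score(finger_a, finger_b))
```

The gap between same-person and different-person distances is what makes the nonoverlapping confidence intervals, and a threshold between them, possible.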

The generalized linear mixed model(GLMM) is widely used to fit categorical responses of clustered data. In the numerical approximation of the likelihood function, normality is assumed for the random effects distribution, and commercial statistical packages routinely fit GLMMs under this normality assumption. We may also encounter departures from the distributional assumption on the response variable. It would be interesting to investigate the impact on parameter estimates under misspecification of these distributions; however, there has been limited research on these topics. We study the sensitivity, or robustness, of the maximum likelihood estimators(MLEs) of GLMMs for count data when the true underlying random effects distribution is normal, gamma, exponential, or a mixture of two normal distributions. We also consider the effects on the MLEs when we fit a Poisson-normal GLMM while the outcomes are generated from a negative binomial distribution with overdispersion. Through a small-scale Monte Carlo study we check the empirical coverage probabilities of parameters and the biases of the MLEs.

Nonparametric methods are often used as an alternative to parametric methods to estimate density and regression functions. In this paper we consider improved methods for selecting the Bezier points in Bezier curve smoothing, which has been shown to have the same asymptotic properties as kernel methods. We show that the proposed methods outperform the existing methods through numerical studies.
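
A minimal sketch of Bezier curve smoothing via de Casteljau's algorithm, using binned local means as one naive choice of the Bezier (control) points; the paper proposes improved selection rules, and the data here are simulated:

```python
import math, random

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] by de Casteljau's algorithm."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def bezier_smooth(y, n_ctrl, grid):
    """Smooth y using binned local means as the Bezier points -- one
    simple, naive way to choose the points."""
    m = len(y)
    ctrl = []
    for i in range(n_ctrl):
        lo, hi = i * m // n_ctrl, (i + 1) * m // n_ctrl
        ctrl.append(sum(y[lo:hi]) / (hi - lo))
    return [bezier_point(ctrl, t) for t in grid]

random.seed(4)
xs = [i / 99 for i in range(100)]
noisy = [x * x + random.gauss(0, 0.3) for x in xs]      # true curve: x^2
smooth = bezier_smooth(noisy, n_ctrl=8, grid=xs)

err_raw = sum((v - x * x) ** 2 for v, x in zip(noisy, xs)) / 100
err_smooth = sum((v - x * x) ** 2 for v, x in zip(smooth, xs)) / 100
print("raw MSE:", round(err_raw, 4), "smoothed MSE:", round(err_smooth, 4))
```

Even this naive placement of the Bezier points sharply reduces the error against the true curve; how to place them better is precisely the question the paper studies.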