This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Purpose

We aimed to identify which potential bias factors affected students’ overall course evaluations, and to determine which factors should be considered in the curriculum evaluation systems of medical schools.

Methods

This study analyzed students’ ratings of preclinical courses at the Ajou University School of Medicine. The ratings were provided by 41 first-year and 45 second-year medical students.

Results

Ratings differed significantly by year of study. Second-year students rated learning difficulty, learning amount, student assessment, and teacher preparation significantly higher than first-year students did (p<0.05). Regression analysis revealed that student assessment was the strongest predictor of overall ratings among first-year students, whereas teacher preparation was the strongest predictor among second-year students.

Conclusion

We found significant interactions between year of study and students’ rating results, confirming that the instruction satisfaction factors perceived by medical students differed according to course characteristics. Our results may serve as an important resource for evaluating preclinical curricula.

Introduction

Since 2000, all medical schools in South Korea have conducted student course evaluations to improve the quality of teaching [1]. Most medical schools ask students to rate the quality of their learning experience and use this feedback to improve future courses [2,3,4]. The majority of medical schools use the rating results formatively, for example in curriculum evaluation and in improving their teaching. Furthermore, a number of medical schools use the results administratively, for incentive or promotion purposes [2].

The crucial factor in attaining these purposes is the validity of students’ ratings [5,6]. The widespread use of student ratings of faculty instruction in universities rests on the belief, and on evidence, that students’ ratings of instructors are valid measures capturing, without bias, variables that indicate effective teaching [7]. Many faculty members, however, believe that a number of factors unrelated to teaching effectiveness bias student responses on instruction and course evaluations [8]. By definition, bias in students’ ratings of teaching is a circumstance that unduly influences a teacher’s ratings even though it has nothing to do with the teacher’s teaching effectiveness [9].

Many researchers have examined factors with the potential to bias students’ ratings of their teachers and courses, including (1) course characteristics such as class size, discipline, and difficulty level; (2) student characteristics such as sex, grade point average, and attitude toward the instructor; (3) instructor characteristics such as personality, research productivity, and seductiveness; and (4) the circumstances under which evaluations are made, such as the anonymity of student raters, the purpose of the ratings, and the presence of the instructor during rating [10,11,12]. These factors are not only important in their own right but may also affect students’ ratings of teaching.

Can the results of previous studies be applied to medical schools? The present research started with this question. The findings of these prior reports have mostly been derived from studies unrelated to medical education. Since medical education differs considerably from other educational settings, an analysis of factors influencing overall student ratings with a specific focus on medical education is critical.

Such differences are conceivable given that undergraduate medical curricula differ from other higher education curricula in many respects. For example, many professors and lecturers participate in weekly teams to teach block lectures within a single medical school curriculum [13]. Medical school teaching methods are also markedly diverse: most subjects place priority on laboratory and clinical training, making other educational activities such as conferences and small-group learning difficult to evaluate objectively. Current medical schools also provide teaching and performance examinations that are more diversified than those of nonmedical courses, including modalities such as problem-based learning, team-based learning, case-based learning, the objective structured clinical examination, and the clinical performance examination. Therefore, course evaluations in medical education must incorporate these unique features, and the interpretation of the results should serve as a foundation for improving medical curricula [14].

The present research focuses not on the difference between medical and nonmedical education, but on the structures within the preclinical courses of a single medical academic setting. The course elements considered herein are instruction, educational method, instructor, student, and examination. In other words, disciplines may be divided into basic and clinical courses, educational methods may differ across instructors, the number of lecturers may differ by type of course, and examination procedures may also vary within medical education courses.

In this paper, we assume that such differences exist within the preclinical courses. Our aim is to identify which potential bias factors affected students’ overall course evaluations in preclinical courses. Furthermore, we attempt to determine which factors should be considered when applying a curriculum evaluation system in medical schools.

Subjects and methods

1. Features of preclinical courses of Ajou University School of Medicine

The preclinical courses at the Ajou University School of Medicine (AUSOM) were divided into basic medical science courses and organ-based integrated courses. Table 1 shows course characteristics such as period, discipline, number of teachers, instructional methods, and grading methods for the first- and second-year courses. Students take the basic medical science courses for 21 weeks in their first year and the organ-based integrated courses for 36 weeks until the beginning of the third-year clerkship. In the basic medical science courses, a mean of 5.8 faculty per course participated in team teaching, small-group learning accounted for 9.9% of total lecture time, and students were tested weekly by written examination. In the organ-based integrated courses, by contrast, small-group learning accounted for 17.7% of total lecture time, student assessment comprised not only written examinations but also quizzes, oral presentations, essays, and oral tests, and a mean of 21.2 faculty participated per course.

2. Methodology

This study analyzed students’ ratings of 22 preclinical courses at AUSOM in 2014. All first- and second-year students completed course evaluations using an online system after the final test of each course. The study involved 41 first-year and 45 second-year medical students.

The nine dimensions of the Students’ Evaluation of Educational Quality (SEEQ) were developed by Marsh [7] in 1984, and the Office of Medical Education at AUSOM has developed a similar instrument. The questionnaire items formulated by the Office of Medical Education were learning difficulty, learning amount, learning objectives, learning materials, relevance, student assessment, teacher preparation, and overall satisfaction (Table 2). The course evaluation questionnaire contained eight items, each representing one factor. Each item was rated on a 5-point Likert-type scale (1, strongly disagree; 5, strongly agree). The Cronbach α of the course evaluation was 0.86. Statistical analyses were performed with IBM SPSS ver. 22.0 (IBM Corp., Armonk, USA) using the t-test and multiple linear regression.
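As a minimal sketch of the two analyses named above, the independent-samples t-test and the multiple linear regression could be run as follows in Python with NumPy and SciPy. The scores below are synthetic and for illustration only; the study’s actual data and SPSS procedures are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point Likert scores for one item, matching the
# cohort sizes in the study (41 first-year, 45 second-year students).
year1 = rng.integers(2, 5, size=41).astype(float)
year2 = rng.integers(3, 6, size=45).astype(float)

# Independent-samples t-test comparing the two years on this item.
t_stat, p_value = stats.ttest_ind(year1, year2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Multiple linear regression: predict overall satisfaction from
# three other items (e.g., difficulty, amount, student assessment).
n = 86
X = rng.integers(1, 6, size=(n, 3)).astype(float)
beta_true = np.array([0.2, 0.3, 0.5])           # synthetic effect sizes
y = X @ beta_true + rng.normal(0, 0.5, size=n)  # synthetic satisfaction

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", np.round(coef, 2))
```

The fitted coefficients indicate which items most strongly predict overall satisfaction, which is the role the regression plays in identifying predictors of the overall rating for each year.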

Results

1. Differences between year of study and students’ ratings of course evaluation

Differences between year of study and students’ ratings of course evaluation are shown in Table 3. For learning difficulty, learning amount, student assessment, and teacher preparation, the ratings of second-year students were significantly higher (p<0.05) than those of first-year students.

Discussion

Students’ ratings of instruction in medical education have been designated a basic standard in medical school accreditation; hence, awareness of and interest in the rating process have increased. Many medical schools currently use students’ course evaluations for curriculum evaluation, feedback to course directors, improvement of educational content, and data collection to support faculty appointment and promotion [15].

In the present study, we investigated the potential curriculum-related bias factors influencing students’ overall ratings and attempted to ascertain whether these factors could serve as tools in curriculum evaluation. The results showed that students’ course evaluation ratings differed significantly by year of study. Scores for learning difficulty, learning amount, student assessment, teacher preparation, and satisfaction were higher among second-year students than among first-year students. In other words, the instruction satisfaction scores of second-year students, who took the organ-based integrated courses, were higher than those of first-year students, who took the basic medical science courses. In addition, our analysis of the factors affecting course evaluation ratings for each year demonstrated that student assessment was by far the strongest predictor of overall ratings for first-year students, whereas teacher preparation most strongly influenced the overall ratings of second-year students. Learning difficulty and learning amount significantly influenced the rating results of both years.

In summary, we confirmed that the instruction satisfaction factors perceived by medical students differed by academic year, depending on the characteristics of each course. Such results can also provide instructors with information on the achievement of particular learning outcomes and on the level of satisfaction with, and influence of, various course components, including planning, organization of content, teaching methods, grading practices, and feedback. This information can be used by instructors to enrich and improve their courses.

If this is true, how does elucidating the factors that affect students’ evaluations influence educational restructuring and reform? After the Flexner report of 1910 [16], many educational institutions faced a dilemma over the curriculum for nurturing capable graduates, a goal of many universities [17,18]. For example, there is debate over whether the medical school curriculum should integrate or separate basic science and clinical instruction and, if integrated, in what manner [19,20]. Students who have completed basic science and humanities courses face a tremendous quantity of new learning material upon entering the medical curriculum, which may lead to marked stress and academic failure. To overcome these obstacles, the educational process factors that underlie medical school education must be objectively assessed and ascertained [21].

Based on the results of this study, we suggest two important messages. The first is improvement of the student assessment system. Few schools administer more examinations than medical schools. Irrespective of the type of assessment, medical students experience anxiety, depression, and other negative psychological effects, and this is one of the significant factors in curriculum evaluation. University organizations should place priority not on inter-university competition but on inter-university cooperation, should avoid anxiety-inducing programs, and should develop student assessment systems that are engaging and interesting. In particular, we feel there is a need for close examination of the difficulty and fairness of examinations as perceived by medical students, especially in the basic sciences.

The second is improvement of the team-teaching curriculum. Teachers participate in teams to teach block lectures in the preclinical curriculum, and evaluating the teaching quality of an individual teacher in such multi-teacher contexts is not easy. The number of teachers participating in team teaching should be restricted so as to strengthen the quality of the curriculum. Medical schools differ from other schools in that teachers provide lectures of only 1–2 hours each. While this system may offer the advantages of team teaching, it makes uniformity within one subject and continuity across lecture contents difficult to achieve [13].

Obviously, this study has some limitations. First, the subjects were from a single educational institution with about 40 students enrolled each year. Second, the evaluation covered only the preclinical portion of the curriculum, which may not represent the majority of medical schools in South Korea. Nonetheless, this investigation is a step toward addressing the evaluation of medical school curricula.

In conclusion, we found significant interactions between year of study and students’ rating results. The potential bias factors affecting students’ course evaluations in preclinical courses were student assessment, teacher preparation, learning difficulty, and learning amount, and the instruction satisfaction factors perceived by medical students differed according to course characteristics. Our results provide insight for future research on medical school curriculum evaluation.

Acknowledgments

None.

Notes

Funding

None.

Conflicts of interest

None.

Table 1.

Course Characteristics of First- and Second-Year Curriculum

Variable | First year (n=41) | Second year (n=45)
Period (wk) | 21 | 36
Discipline | Basic medical science | Organ based integration
Mean no. of teachers | 5.8 | 21.2
Small group learning proportion of total instruction (%) | 9.9 | 17.7
Grading method | Written exam every week | Written exam every 2 wk; 10% formative assessment

Table 2.

Contents of Course Evaluation

Factor | Contents
1. Learning difficulty | The course was appropriate for the student’s level
2. Learning amount | The workload of the course was appropriate
3. Learning objectives | Learning objectives were clear
4. Learning material | Learning materials posted on the online curriculum system were useful
5. Relevance | Contents between instructions were organically relevant
6. Student assessment | Student assessment tested contents actually taught
7. Teacher preparation | Lecturers prepared their instructions with care
8. Satisfaction | Overall, this instruction was satisfactory

Table 3.

Difference between Year of Study and Students’ Ratings of Course Evaluation