Local evaluations of clinical training programs in psychology are generally conducted from a formative perspective, necessitating careful examination of training structure, process, and results. To learn about and document the extent of local training evaluation efforts, two national surveys were mailed to directors of psychology training clinics and to directors of university clinical psychology programs; the surveys solicited general program information, information about the nature and frequency of use of various training evaluation methods, subjective assessments of evaluation impact, and perceived obstacles to evaluation. The responses (N=87; N=67) indicated that supervisors' oral evaluations of clinical trainees were the most frequently used and highest rated source of qualitative evidence of training impact on students. The most popular systematic, qualitative sources of evidence of student performance were supervisors' written evaluations (84 percent) and internship supervisors' reports (84 percent). Quantitative sources of evidence of student performance, each used by a minority of respondents, included the rate of student acceptance to first-choice internships (43 percent), knowledge tests (40 percent), and supervisors' quantitative ratings of students (40 percent). Qualitative means of assessing programs were supervisors' written evaluations and American Psychological Association accreditation reports; quantitative evidence of program effectiveness included students' course evaluations and ratings of clinical supervisors. The pattern of results attested to the extensive but highly variable nature of current training evaluation activities. Inadequate resources (time, money, personnel) for conducting meaningful evaluation were perceived as the most serious obstacle by both clinic and program directors. (NRB)