Ø Performance tests that require a person to perform a task (such as teaching a lesson to a class) while observers assess the effectiveness of the performance;

Ø Written tests of ability or knowledge;

Ø Reviews that rely on existing documentation, such as examination of medical and school attendance records;

Ø Analysis of the content of published and unpublished articles and diaries;

Ø Interpretation of the activities of online chat and support groups

Surveys can be used in deciding policy or in planning and evaluating programs and conducting research when the information needed comes directly from people. The data they provide are descriptions of feelings and perceptions, values, habits, and personal background or demographic characteristics such as age, health, education, and income.

Self-administered questionnaires and interviews

All surveys consist of questions and responses. Questions, sometimes referred to as items, include forced-choice (closed) questions and open-ended questions. The selection, wording, and ordering of questions and answers require careful thought and a reasonable command of language.

To get accurate data, you must account for a survey’s:

Ø Sampling and design

The sample is the number and characteristics of people in the survey. The design refers to how often the survey takes place (just once, or cross-sectional; over time, or longitudinal), whether the participants are selected at random or are chosen some other way, and how many separate groups are included.

Example question: Does knowledge about physical fitness change over a 12-month period among graduates of the class of 2015?

Survey method: Online questionnaire

Sample: All 1,000 graduates from State College’s class of 2015

How often survey takes place: Twice—at graduation and 12 months later

How participants are selected: All graduates are eligible

How many groups: Just one—the class of 2015

Design: Longitudinal cohort

Ø Data processing or “management” and analysis

A plan for analyzing the survey's data must be made in advance. The choices include computing percentages, producing averages, comparing groups, looking for relationships, and looking for changes over time.

Ø Pilot testing

A pilot test is a tryout, and its purpose is to help produce a survey form that is usable and that will provide the information needed. All surveys must be pilot tested before being put into practice. Pilot testing helps answer questions such as the following:

Can interviewers follow the interview form easily? Are the spaces on printed surveys large enough for recording responses? Do interviewers know what to do if the computer “freezes” while they are in the midst of a computer-assisted interview? Does the respondent understand how to move back and forward through an online survey? How much time will it take to complete the survey?

Ø Response rate

How high should the response rate be? If it is a large, complex survey, statistical procedures should be used to answer this question. If the survey is relatively simple, you can decide how many people are needed for the results to be believable.

Except when done statistically, the desired response rate tends to be entirely subjective, and the general rule is “higher is better.”

Key survey procedures

Ø Deciding on the type of survey

Ø Selecting the survey’s content and writing questions and trying out the form

Ø Deciding who should participate

Ø Administering the survey (Who should conduct the interview? By when must the online questionnaire be submitted?)

Ø Processing the data (How will data be entered: manually or electronically from survey to database?)

Ø Analyzing and interpreting the results

Ø Reporting the results orally or in writing using text, charts, tables, and graphs.

Criteria for selecting among various survey types

Reliability and validity: obtained by making sure the definitions and models used for selecting questions are grounded in theory or experience.

Usefulness or credibility of results: figuring out the preferences of the people who will use the results.

Costs: referring to the financial burden of developing and administering each type of survey.

The special case of online surveys

Online surveys are self-administered questionnaires. Respondents complete online surveys on laptops, desktops, notebooks, tablets, and cell or mobile phones. Surveyors like online surveys, and respondents are becoming used to them. Surveyors like online surveys because they can easily reach very large numbers of people across the world and because online survey software is accessible and relatively inexpensive.

1. The American Association for Public Opinion Research (AAPOR) has developed a code to describe the obligations that all professionals have to uphold the credibility of survey and public opinion research. Available at: http://www.aapor.org/AAPORKentico/Standards-Ethics.aspx.

2. The Council of American Survey Research Organizations (CASRO) has prepared a Code of Standards and Ethics for Survey Research (http://www.casro.org/).

Key point of Chapter 2

1. To decide on a survey’s content, the attitude, belief, value, or idea being measured has to be defined.

2. The responses to closed questions can take the form of yes-or-no answers, checklists, and rating scales. Checklists give respondents many choices (e.g., “check all that apply”). Rating scales require respondents to rank ideas (e.g., “1 = top, 10 = bottom”) or to rate them in order of intensity (e.g., “1 = definitely agree, 2 = agree, 3 = disagree, 4 = definitely disagree”). Respondents may also be asked to give a number for an answer (e.g., “What is your age?”).

3. Responses to questions produce data that are called categorical (e.g., “yes or no”), ordinal (e.g., “big, bigger, biggest”), and numerical or continuous (e.g., “20, 21, 22 years old”). The surveyor has to consider the type of data the survey produces in order to come up with an appropriate method of analyzing and interpreting its findings.
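The point above can be illustrated with a short sketch. The responses below are hypothetical, but they show how the data type (categorical, ordinal, or numerical) drives the choice of summary statistic:

```python
from statistics import mean, median, mode

# Hypothetical survey responses illustrating the three data types.
yes_no = ["yes", "no", "yes", "yes"]          # categorical
sizes = ["big", "bigger", "biggest", "big"]   # ordinal
ages = [20, 21, 22, 21]                       # numerical

# Categorical data: report counts or the most frequent category.
print(mode(yes_no))  # yes

# Ordinal data: a median is meaningful once the order is coded as numbers.
order = {"big": 1, "bigger": 2, "biggest": 3}
print(median(order[s] for s in sizes))  # 1.5

# Numerical data: arithmetic summaries such as the mean are appropriate.
print(mean(ages))  # 21
```

Note that computing a mean of the ordinal codes would be questionable, because the distances between "big," "bigger," and "biggest" are not defined.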

The content is the message

1. define the terms

Many human attitudes and feelings are subject to a range of definitions. Surveyors should review the research literature to find out what is known and theorized about a concept, or define the concept themselves (though others may not be convinced of its validity).

2. select information needs or hypotheses

3. make sure the information can be obtained

4. do not ask for information unless you can act on it

Once you have selected the content and set the survey’s boundaries, your next task is to actually write the questions. Write more questions than you plan to use because several will probably be rejected as unsuitable. First drafts often have questions for which everyone gives the same answer or no one gives any answer at all. Before deciding on the number and sequence of questions, you must be sure that you cover the complete domain of content you have identified as important to the survey.

Writing questions

1. open-ended and closed questions

An open-ended question asks respondents to answer the question or respond to the statement in their own words. Closed questions force the respondent to choose from preselected alternatives.

The overwhelming majority of surveys rely on multiple-choice questions because they are efficient and are often more reliable than other question types. Their efficiency comes from being easy to use and score.

2. making the decision: open-ended versus closed questions

Choose open-ended questions when you want to give the respondents the opportunity to express opinions in their own words and you have the interest in and resources to interpret the findings. Open-ended questions tend to be relatively easy to compose.

Choose closed questions for their relative ease of scoring, analysis, and interpretation. Closed questions can be difficult to prepare because they require that all respondents interpret them the same way and that all relevant choices are included, mutually exclusive, and sensible.

Techniques for LB (like best)/ LL (like least) survey

Step 1: asking respondents’ opinions: ask respondents to list what is good and what is bad, and always set a limit on the number of responses.

Step 2: coding LB/LL data: create categories based on the review of the responses or based on past experience with similar programs. But try to keep the categories as precise as possible.

Step 3: LB/LL data: count the number of responses for each code.
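Step 3 is a simple tally. The codes below are hypothetical examples of categories that might come out of Step 2, but the counting itself can be sketched in a few lines:

```python
from collections import Counter

# Hypothetical coded LB ("like best") responses: each respondent's answer
# has already been assigned a category code in Step 2.
lb_codes = ["instructor", "materials", "instructor",
            "pace", "instructor", "materials"]

# Step 3: count the number of responses for each code.
counts = Counter(lb_codes)
print(counts.most_common())  # [('instructor', 3), ('materials', 2), ('pace', 1)]
```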

Rules for writing closed survey questions

1. each question should be meaningful to respondents

2. use standard language rules

3. make question concrete

4. avoid biased words and phrases

5. check your own biases

6. use caution when asking for personal information

7. each question should have just one thought

Three types of rating or measurement scales

1. Categorical. These are sometimes called nominal response scales. They require people to affirm or name the groups to which they belong: gender, religious affiliation, school, or college last attended.

2. Ordinal. These scales require that respondents place answers in order of importance. A person’s economic status (high, medium, or low) would provide an ordinal measurement.

3. Numerical scales. Numerical scales take two forms: discrete and continuous. A discrete scale produces a precise number, whereas a continuous scale produces a number that falls on a continuum.

Checklist

1. It provides respondents with a series of answers. They may choose just one or more answers depending on the instructions.

2. It helps remind respondents of some things they might have forgotten.

Practical concerns

Length counts

The length of a survey depends on what you need to know and how many questions are necessary so that the resulting answers are credible. Another consideration is the respondents. How much time do they have available, and will they pay attention to the survey?

Getting the survey in order

All surveys should be preceded by an introduction, and the first set of questions should be related to the topic described in it. Sometimes the answer to one question will affect the answer to another. When this happens, the value of the questionnaire is seriously diminished.

Questionnaire format: aesthetics and other concerns

A questionnaire’s appearance is vitally important. A written self-administered questionnaire that is hard to read can confuse or irritate respondents.

Response format

The general rule is to leave enough space to make the appropriate marks.

Ø Prepare a short, formal explanation to accompany the questionnaire form

Ø Consider sending respondents a summary of the findings

Ø If you ask questions that may be construed as personal—such as gender, race/ethnicity, age, or income—explain why the questions are necessary

Ø Keep the questionnaire procedures simple

Ø Keep questionnaires short

Ø Consider incentives

Ø Be prepared to follow up or send reminders

Interviews

1. finding interviewers: Interviewers should fit in as well as possible with respondents; it is also important that the interviewers be able to speak clearly and understandably.

2. training interviewers: To ensure that all interviewers know what is expected of them and that they ask all the questions in the same way, within the same amount of time.

3. conducting interviews: Make a brief introductory statement; try to impress on the person being interviewed the importance of the interview and of the answers; prepare yourself to be flexible; interview people alone; If using a printed interview survey, be sure to ask questions as they appear in the interview schedule; interviewers should follow all instructions given at the training session and described on the interview form; make certain interviewers understand the informed consent process, and that they adhere to it.

4. monitoring interview quality: Establish a hotline; provide written scripts for the interview; make sure you give out extra copies of all supplementary materials; prepare an easy-to-read handout describing the survey; provide a schedule and calendar so that interviewers can keep track of their progress; consider providing interviewers with visual aids; consider the possibility that some interviewers may need to be retrained and make plans to do so.

One way to make sure that you are using a reliable and valid survey is to rely on one that has been carefully tested by other surveyors and researchers in a scientific study.

Guidelines for finding useable and useful surveys in the research literature

1. select one or more online digital libraries that are relevant to the survey

2. specify the topics or variables the survey needs to cover

3. learn how to search the appropriate electronic libraries

4. make certain that you can administer the survey the way it was intended or that you have the resources to adapt it.

5. check on the characteristics of the survey respondents

6. select the survey that has the best available evidence of reliability and validity

7. try out the survey with a small group of people to see if it is useable and useful

Finding Surveys on the Web

Some organizations offer survey questions free online to the public. Here are just a few examples.

The General Social Survey has been conducted since 1972 by the National Opinion Research Center. It measures attitudes toward social and public policy issues, economic status, political events, work, and family life.

Pew provides access to surveys and survey questions concerning the public on media and journalism, religion and public life, society and the Internet and global attitudes, and Hispanics in America and public opinion.

The U.S. Census

This is a major source of survey questions. Regardless of how large or small a survey is, and even if it takes place outside the United States, this is an excellent site to guide the development and adaptation of survey questions about age, ethnicity, income, education, and occupation. The U.S. Census also provides survey questions that are relevant to criminal justice (http://www.census.gov/govs/cj/). Sample surveys include the annual parole and probation survey and surveys of sexual violence.

The Institute of Education Sciences through the National Center for Education Statistics

This site has an extensive library of surveys. An example of the surveys you will find if you go to the site appears on the next page. As you can see, international surveys are included.

Guidelines for pilot testing

1. try to anticipate the actual circumstances in which the survey will be conducted and make plans to handle them

2. start by trying out selected portions of the survey in an informal fashion

3. choose respondents similar to the ones who will eventually complete the survey

4. enlist as many people in the pilot as seems reasonable without wasting resources

5. for reliability, focus on the clarity of the questions

Some commonly used methods for analyzing survey data

1. descriptive statistics

Ø Frequencies: a computation of how many people fit into a category

Ø Average: the mean, median, mode

The mean: the arithmetic average, computed by summing the units and dividing by the number of units added together.

The median: the point on a scale that has an equal number of scores above and below it. Another way of putting it is that the median is at the 50th percentile.

The mode: a score (or a point on the score scale) that has a higher frequency than other scores in its vicinity. You might use the mode when you suspect that you have a group with widely differing characteristics.

Ø Variation: range, variance, and standard deviation

Ø Correlation and regression
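The descriptive statistics above can be sketched on a small set of hypothetical scores (say, satisfaction rated 1 to 10). Note how the mean, median, and mode can all differ on the same data:

```python
from statistics import mean, median, mode, stdev

# Hypothetical satisfaction scores on a 1-10 scale.
scores = [4, 7, 7, 8, 9, 10, 10, 10]

print(mean(scores))               # 8.125  (arithmetic average)
print(median(scores))             # 8.5    (50th percentile)
print(mode(scores))               # 10     (most frequent score)
print(max(scores) - min(scores))  # 6      (range, a measure of variation)
print(stdev(scores))              # sample standard deviation
```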

2. differences between groups

The cross-tabulation: a method used to describe two variables at the same time. It is often displayed as a table. The table’s name is usually defined by the number of its rows and columns.
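A cross-tabulation is just a count of every (row, column) pair. The groups and answers below are hypothetical, but the sketch shows how a 2-by-2 table is built from raw responses:

```python
from collections import Counter

# Hypothetical paired responses: (group, answer) for each respondent.
responses = [("men", "yes"), ("men", "no"), ("women", "yes"),
             ("women", "yes"), ("men", "yes"), ("women", "no")]

# Count each (row, column) combination to form the table.
table = Counter(responses)
for group in ("men", "women"):
    row = [table[(group, answer)] for answer in ("yes", "no")]
    print(group, row)
# men [2, 1]
# women [2, 1]
```

The table's name follows its shape: two groups by two answers gives a 2-by-2 table.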

3. survey differences: usual methods

Chi-square: It tests the hypothesis that survey data expressed as proportions are equal.

The t Test: It allows you to compare the means of two groups to determine the probability that any differences between them are real and not due to chance.

The Mann-Whitney U Test: Also called the Wilcoxon rank sum test, it enables you to compare two independent groups when the t test cannot be used (for example, because the sample size is too small). This statistical method is a test of the equality of the medians.

Risks and Odds: they are used to describe the likelihood that a particular outcome will occur within a group, or they can be used to compare groups.
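For a 2-by-2 table, the chi-square statistic and the odds ratio can both be computed by hand. The counts below are hypothetical; the chi-square formula used is the standard shortcut for a 2-by-2 table, n(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)):

```python
# Hypothetical 2x2 table (rows = groups, columns = yes/no counts).
a, b = 30, 20  # group 1: yes, no
c, d = 15, 35  # group 2: yes, no

n = a + b + c + d
# Chi-square statistic for a 2x2 table.
chi_square = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# Odds ratio: odds of "yes" in group 1 versus group 2.
odds_ratio = (a / b) / (c / d)

print(round(chi_square, 2))  # 9.09
print(round(odds_ratio, 2))  # 3.5
```

Here 9.09 exceeds 3.84, the critical chi-square value at the .05 level with 1 degree of freedom, so the proportions would be judged unequal; the odds of "yes" are 3.5 times higher in group 1.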

Seven questions to answer before choosing an analysis method

1. How many people are you surveying? Sample size is an important consideration in selecting an analytic strategy.

2. Are you looking for relationships or associations?

3. Will you be comparing groups?

4. Will your survey be conducted once or several times? This question is to find out whether the survey design is cross-sectional or longitudinal.

5. Are the data recorded as numbers and percentages or scores and averages?