
CMAJ. 2008 Jul 29;179(3).

A guide for the design and conduct of self-administered surveys of clinicians


Survey research is an important form of scientific inquiry 1 that merits rigorous design and analysis. 2 The aim of a survey is to gather reliable and unbiased data from a representative sample of respondents. 3 Increasingly, investigators administer questionnaires to clinicians about their knowledge, attitudes and practice 2 , 4 , 5 to generate or refine research questions and to evaluate the impact of clinical research on practice. Questionnaires can be descriptive (reporting factual data) or explanatory (drawing inferences between constructs or concepts) and can explore several constructs at a time. Questionnaires can be informal, conducted as preparatory work for future studies, or formal, with specific objectives and outcomes.

Rigorous questionnaires can be challenging and labour-intensive to develop, test and administer without the help of a systematic approach. 5 In this article, we outline steps to design, develop, test and administer valid questionnaires with minimal bias and optimal response rates. We focus on self-administered postal and electronic surveys of clinicians that are amenable to quantitative analysis. We highlight differences between postal and electronic administration of surveys and review strategies that enhance response rates and reporting transparency. Although intended to assist in the conduct of rigorous self-administered surveys, our article may also help clinicians in the appraisal of published surveys.

Determining the objective

A clear objective is essential for a well-defined survey. Refining initial research objectives requires specification of the topic, respondents, and primary and secondary research questions to be addressed.

Identifying the sampling frame

It is often impractical for investigators to administer their questionnaire to all potential respondents in their target population, because of the size of the target population or the difficulty in identifying possible respondents. 4 Consequently, a sample of the target population is often surveyed. The “sampling frame” is the target population from which the sample will be drawn. 6 The “sampling element” refers to the respondents from whom information is collected and analyzed. 6 The sampling frame should represent the population of interest. To this end, certain sampling techniques (e.g., surveying conference attendees) may limit generalizability compared with others (e.g., surveying licensed members of a profession). Ultimately, the sampling technique will depend on the survey objectives and resources.

Sample selection can be random (probability design) or deliberate (nonprobability design). 6 Probability designs include simple random sampling, systematic random sampling, stratified sampling and cluster sampling.

  • Simple random sampling: Every individual in the population of interest has an equal chance of being included in the sample. Potential respondents are selected at random using various techniques, such as a lottery process (e.g., drawing numbers from a hat) and random-number generator. 7
  • Systematic random sampling: The investigator randomly selects a starting point on a list and then selects individuals systematically at a prespecified sampling interval (e.g., every 25th individual). In systematic random sampling, both the starting point and the sampling interval are determined by the required sample size.
  • Stratified random sampling: Potential respondents are organized into strata, or distinct categories, and randomly sampled using simple or systematic sampling within strata to ensure that specific subgroups of interest are represented. Stratified sampling can be proportionate (sampling the same proportion of cases in each stratum) or disproportionate (sampling fraction varies across strata). 6
  • Cluster sampling: Investigators divide the population into clusters and sample clusters (or individuals within clusters) in a stepwise manner. Clusters should be mutually exclusive and exhaustive and, unlike strata, heterogeneous.

With the exception of cluster sampling, investigators require lists of individuals in the sampling frame, with contact information, to conduct probability sampling. It is important to ensure that each member of the sampling frame can be contacted. Table 1 presents the advantages and disadvantages of different approaches to probability sampling. 8
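The four probability designs above can be sketched in a few lines of code. The following Python fragment is a minimal illustration, not part of the original guide; the function names and the 1000-member sampling frame are hypothetical:

```python
import random

def simple_random_sample(frame, n, seed=0):
    """Every member of the sampling frame has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

def systematic_random_sample(frame, n, seed=0):
    """Random starting point, then every k-th member, where the
    sampling interval k is determined by the required sample size."""
    k = len(frame) // n
    rng = random.Random(seed)
    start = rng.randrange(k)
    return [frame[start + i * k] for i in range(n)]

def stratified_random_sample(strata, fraction, seed=0):
    """Proportionate stratified sampling: the same fraction from each stratum."""
    rng = random.Random(seed)
    sample = []
    for members in strata.values():
        n = max(1, round(fraction * len(members)))
        sample.extend(rng.sample(members, n))
    return sample

# Hypothetical frame of 1000 clinicians, with two strata (600 MDs, 400 RNs)
frame = [f"clinician_{i}" for i in range(1000)]
strata = {"MD": frame[:600], "RN": frame[600:]}
print(len(simple_random_sample(frame, 50)))        # 50
print(len(systematic_random_sample(frame, 50)))    # 50
print(len(stratified_random_sample(strata, 0.05))) # 30 + 20 = 50
```

Note that the systematic design needs only a list and a starting point, whereas the stratified design additionally requires that each member's stratum be known in advance.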

[Table 1: Advantages and disadvantages of different approaches to probability sampling]

A nonprobability sampling design is chosen when investigators cannot estimate the chance of a given individual being included in the sample. Such designs enable investigators to study groups that may be challenging to identify. Nonprobability designs include purposive sampling, quota sampling, chunk sampling and snowball sampling. 6

  • Purposive sampling: Individuals are selected because they meet specific criteria (e.g., they are physiotherapists).
  • Quota sampling: Investigators target a specific number of respondents with particular qualities (e.g., female physicians between the ages of 40 and 60 who are being promoted).
  • Chunk sampling: Individuals are selected based on their availability (e.g., patients in the radiology department's waiting room).
  • Snowball sampling: Investigators identify individuals meeting specific criteria, who in turn identify other potential respondents meeting the same criteria. 6

The extent to which the results of a questionnaire can be generalized from respondents to a target population depends on the extent to which respondents are similar to nonrespondents. It is rarely possible to know whether respondents differ from nonrespondents in important ways (e.g., demographic characteristics, answers) unless additional data are obtained from nonrespondents. The best safeguard against poor generalizability is a high response rate.

Development

Item generation.

The purpose of item generation is to consider all potential items (ideas, concepts) for inclusion in the questionnaire, with the goal of tapping into important domains (categories or themes) suggested by the research question. 9 Items may be generated through literature reviews, in-depth interviews, focus-group sessions, or a combination of these methods with potential respondents or experts. Item generation continues until no new items emerge, often called “sampling to redundancy.” The Delphi process, wherein items are nominated and rated by experts until consensus is achieved, can also be used to generate items. 5 , 10 Following item generation, investigators should define the constructs (ideas, concepts) that they wish to explore, 5 group the generated items into domains and begin formulating questions within the domains.

By creating a “table of specifications,” investigators can ensure that sufficient items have been generated to address the research question and can identify superfluous items. 2 Investigators list research questions on the vertical axis and either the domains of interest or the type of information sought (knowledge, attitudes and practice) on the horizontal axis. Subtopics or concepts can be added within identified domains. 10 This table is revisited as questions are eliminated or altered and to establish validity. 10

Item reduction

In this step, investigators limit the large number of potentially relevant questions within domains to a manageable number without eliminating entire domains or important constructs. The requirement for information must be balanced against the need to minimize respondent burden, since lengthy questionnaires are less likely to be completed. 11 , 12 In general, most research questions are addressed with 25 or fewer items 5 and at least 5 items in each domain. 11

Item reduction is an iterative process that can be achieved using one of several methods, some of which require respondent data. Redundant items can be eliminated in interviews or focus-group sessions with content experts or external appraisers. Participants are asked to evaluate the relative merit of included items by ranking (e.g., ordinal scales) or rating (e.g., Likert scales) items or by providing binary responses (e.g., include/exclude). Alternatively, investigators may reduce items using statistical methods that examine the relation between and among items within domains; this method requires data obtained through pilot testing.

Questionnaire formatting

Question stems.

The question stem is the statement or question to which a response is sought. Each question should focus on a single construct. Question stems should contain fewer than 20 words and be easy to understand and interpret, 5 , 13 nonjudgmental and unbiased. 13 Investigators should phrase questions in a socially and culturally sensitive manner. They should avoid absolute terms (e.g., “always,” “none” or “never”), 11 abbreviations and complex terminology. 2 Investigators should specify the perspective from which questions should be addressed, particularly for questions about attitudes that may elicit different responses depending on how they are worded. 14 The language used influences the response formats used, which may affect the response rate. Demonstrative questions are often followed by binary responses, whereas question stems requesting respondents to rank items or elicit their opinions should adopt a neutral tone. The wording of the question and the order of response categories can influence the responses obtained. 3 , 15 Moreover, the manner in which question stems and responses are synthesized and presented can influence potential respondents' decisions to initiate and complete a questionnaire. 3

Response formats

Response formats provide a framework for answering the question posed. 5 As with question stems, investigators should develop succinct and unbiased response formats, either “open” (free text) or “closed” (structured). Closed response formats include binary (yes/no), nominal, ordinal, and interval and ratio measurements.

  • Nominal responses: This response option consists of a list of mutually exclusive, but unordered, names or labels (e.g., administrators, physicians, nurses) that typically reflect qualitative differences in the construct being measured.
  • Ordinal responses: Although ordinal responses (e.g., Likert scales) imply a ranked order, they do not reflect a quantity or magnitude of the variable of interest. 16 Likert scales can be used to elicit respondents' agreement (ranging from strongly disagree to strongly agree) with a statement.
  • Interval and ratio measurements: These response options depict continuous responses. Both formats demonstrate a constant relation between points. However, only ratio measurements have a true zero and exhibit constant proportionality (proportions of scores reflect the magnitude of the variable of interest).

Collaboration with a biostatistician is helpful during questionnaire development to ensure that data required for analyses are obtained in a usable format.

When deciding on the response options, investigators should consider whether to include indeterminate response options, to avoid “floor and ceiling” effects and to include “other” response options.

  • Indeterminate response options: Although the inclusion of indecisive response options (e.g., “I don't know,” “I have no opinion”) may let respondents “off the hook” too easily, 17 they acknowledge uncertainty. 13 These response options may be suitable when binary responses are sought or when respondent knowledge, as opposed to attitudes or opinions, is being probed. 2
  • Floor and ceiling effects: These effects reflect responses that cluster at the top or bottom of scales. 5 During item reduction, investigators should consider removing questions that demonstrate floor or ceiling effects, or using another response format to increase the range of responses. Providing more response options may increase data dispersion and discrimination among responses. 5 Floor and ceiling effects sometimes remain after response options are modified; in such cases they reflect true respondent views.
  • “Other” response options: Providing an “other” response option or requesting “any other comments” allows for unanticipated answers, alters the power balance between investigators and respondents, 18 and may enhance response rates to self-administered questionnaires. 3 During questionnaire testing, “other” response options can help to identify new issues or elaborate on closed response formats. 18
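As a rough illustration of how floor and ceiling effects might be screened for during item reduction, the sketch below flags any item whose pilot responses cluster at either end of a 5-point scale. The function name, the 50% threshold and the pilot data are all hypothetical:

```python
def floor_ceiling_check(responses, n_options, threshold=0.5):
    """Flag a question if more than `threshold` of responses fall at the
    lowest (floor) or highest (ceiling) point of a 1..n_options scale."""
    n = len(responses)
    floor = responses.count(1) / n
    ceiling = responses.count(n_options) / n
    return {"floor": floor > threshold, "ceiling": ceiling > threshold}

# Hypothetical 5-point Likert responses from 10 pilot respondents
item_a = [5, 5, 4, 5, 5, 5, 3, 5, 5, 5]  # clusters at "strongly agree"
item_b = [2, 3, 4, 1, 3, 2, 4, 5, 3, 2]  # well dispersed
print(floor_ceiling_check(item_a, 5))  # {'floor': False, 'ceiling': True}
print(floor_ceiling_check(item_b, 5))  # {'floor': False, 'ceiling': False}
```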

Questionnaire composition

Cover letter.

The cover letter creates the first impression. The letter should state the objective of the survey and highlight why potential respondents were selected. 19 To enhance credibility, academic investigators should print cover letters on departmental stationery with their signatures. To increase the response rate, investigators should personalize the cover letter to recipients known to them, provide an estimate of the time required to complete the questionnaire and affirm that the recipient's participation is imperative to the success of the survey. 20

Questionnaire

Some investigators recommend highlighting the rationale for the survey directly on the questionnaire. 13 Presenting simple questions or demographic questions first may ease respondents into questionnaire completion. Alternatively, investigators may reserve demographic questions for the end if the questions posed are sensitive. The font style and size should be easy to read (e.g., Arial 10–12 point). The use of bold type, shading and broad lines can help direct respondents' attention and enhance visual appeal. McColl and colleagues 3 highlighted the importance of spatial arrangement, colour, brightness and consistency in the visual presentation of questionnaires.

Questionnaires should fit neatly inside the selected envelope along with the cover letter, a return (stamped or metered) envelope and an incentive, if provided. Longer questionnaires are often formatted into booklets made from larger sheets of paper (28 × 36 cm [11 × 14 in]) printed on both sides, folded in half and either stapled or stitched along the seam. Investigators planning to send reminders to nonrespondents should code questionnaires before administration. “Opt out” responses identify respondents who do not wish to complete the questionnaire or were incorrectly identified and can limit additional correspondence. 2

For Internet-based surveys, questions are presented in a single scrolling page (single-item screen) or on a series of linked pages (multiple-item screens), often with accompanying electronic instructions and links to facilitate questionnaire flow. Although the use of progress indicators can increase questionnaire completion time, multiple-item screens significantly decrease completion time and the number of “uncertain” or “not applicable” responses. 21 Respondents may be more likely to enter invalid responses in long versus short entry boxes, and the use of radio buttons may decrease the likelihood of missing data compared with entry boxes. 21 [Radio buttons, or option buttons, are graphic interface objects used in electronic surveys that allow users to choose only one option from a predefined set of alternatives.]

Questions should be numbered and organized. Every question stem should include a clear request for either single or multiple responses and indicate the desired notation (e.g., check, circle). Response options should appear on separate lines. Tables can be used to present ordinal responses of several constructs within a single question. The organization of the questionnaire should assist respondents' thought processes and facilitate questionnaire flow. 5 Questions can be ordered on the basis of content (e.g., broad questions preceding specific ones), 3 , 13 permutations in content (scenario-based questions) or structure (questions presented within domains or based on the similarity of response formats when a single domain is being explored). 5 Operational definitions are helpful before potentially ambiguous questions, 5 as are clear instructions to skip nonapplicable questions. 17

In a systematic review, Edwards and colleagues 22 identified 292 randomized trials and reviewed the influence of 75 strategies on responses to postal questionnaires. They found that specific formatting strategies (e.g., the use of coloured ink, the placement of more interesting questions first, and shorter length) enhanced response rates (Table 2).

[Table 2: Formatting strategies shown to enhance response rates to postal questionnaires]

Pre-testing

The quality of questionnaire data depends on how well respondents understand the items. Their comprehension may be affected by language skills, education and culture. 5 Pre-testing initiates the process of reviewing and revising questions. Its purpose is to evaluate whether respondents interpret questions in a consistent manner, as intended by the investigator, 23 and to judge the appropriateness of each included question. Investigators ask colleagues who are similar to prospective respondents 14 to evaluate each question through interviews (individual or group) or written feedback. They also ask them to determine a course of action: whether to accept the original question and meaning, to change the question but keep the meaning, to eliminate the question or to write a new question. 24

Pilot testing

During pilot testing, investigators present questions as they will appear in the penultimate draft of the questionnaire to test respondents who are similar to the sampling frame. 24 The purpose is to assess the dynamics of the questionnaire in a semistructured interaction. The respondents are asked to examine the questionnaire with regard to its flow, salience, acceptability and administrative ease, 23 identifying unusual, redundant, irrelevant or poorly worded question stems and responses. They are also asked to record the time required to complete the questionnaire. Pre-testing and pilot testing minimize the chance that respondents will misinterpret questions, fail to recall what is requested or misrepresent their true responses. 23 The information obtained through pre-testing and pilot testing is used to improve the questionnaire.

Following pilot testing, investigators can reduce items further through factor analysis by examining mathematical relations among items and seeing how items cluster into specific domains. 25 Measures of internal consistency (see “Reliability”) can assess the extent to which candidate items are related to selected items and not to other items within a domain. Correlations between 0.70 and 0.90 are optimal; 26 correlations below 0.70 suggest that different concepts are being measured, and those above 0.90 suggest redundant items. 26 At least 5 respondents per candidate item (i.e., 100 respondents for a 20-item questionnaire) are required for factor analysis. Factor analysis can highlight items that require revision or removal from a domain. 26
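The corrected item-total correlation described above can be computed without specialized software (a full factor analysis would require a statistical package). This Python sketch, using hypothetical pilot data, correlates each item with the sum of the other items in its domain and flags correlations above 0.90 as potentially redundant and below 0.70 as potentially measuring a different concept:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(items):
    """For each item, correlate its scores with the sum of all OTHER items."""
    results = {}
    for name, scores in items.items():
        rest = [sum(items[o][i] for o in items if o != name)
                for i in range(len(scores))]
        results[name] = pearson(scores, rest)
    return results

# Hypothetical pilot scores from 5 respondents on a 3-item domain
items = {"q1": [1, 2, 3, 4, 5], "q2": [1, 2, 3, 4, 5], "q3": [2, 2, 3, 4, 4]}
for name, r in corrected_item_total(items).items():
    flag = "redundant?" if r > 0.90 else ("different concept?" if r < 0.70 else "ok")
    print(f"{name}: r = {r:.2f} ({flag})")
# With only 3 near-duplicate items, every r exceeds 0.90, suggesting redundancy.
```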

Clinical sensibility testing

The goals of clinical sensibility testing are to assess the comprehensiveness, clarity and face validity of the questionnaire. The testing addresses important issues such as whether response formats are simple and easily understood, whether any items are inappropriate or redundant or missing, and how likely the questionnaire is to address the survey objective. During clinical sensibility testing, investigators administer a 1-page assessment sheet to respondents with the aforementioned items presented as questions, with either Likert scale (e.g., very unlikely, unlikely, neutral, likely, very likely) or nominal (e.g., yes/no/don't know/unclear) response formats. An example of a clinical sensibility testing tool is shown in Appendix 1 (available at www.cmaj.ca/cgi/content/full/179/3/245/DC1 ). Following pre-testing, pilot testing and clinical sensibility testing, questionnaires may need to be modified to an extent that additional testing is required.

Although some overlap exists among pre-testing, pilot testing and clinical sensibility testing, each is distinct. Pre-testing focuses on the clarity and interpretation of individual questions and ensures that questions meet their intended purpose. Pilot testing focuses on the relevance, flow and arrangement of the questionnaire, in addition to the wording of the questionnaire. Although pilot testing can detect overt problems with the questionnaire, it rarely identifies their origins, which are generally unveiled during pre-testing. 23 Clinical sensibility testing focuses on how well the questionnaire addresses the topic of interest and the survey objective.

Reliability

Ideally, questions discriminate among respondents such that respondents who think similarly about a question choose similar responses, whereas those who think differently choose diverse responses. 5 Reliability assessment is part of rigorous evaluation of a new questionnaire. 27

  • Test–retest reliability: With this method, investigators assess whether the same question posed to the same individuals yields consistent results at different times (typically spanning 2–4 weeks).
  • Interrater reliability: Investigators assess whether different respondents provide similar responses where expected.
  • Internal consistency: Investigators appraise whether different items tapping into the same construct are correlated. 6 Three tests can be used to assess internal consistency: the corrected item-total correlation (assesses the correlation of an item with the sum of all other items), split-half reliability (assesses correlation between scores derived by splitting a set of questions in half) and the α reliability coefficients (derived by determining key dimensions and assessing items that tap into specific dimensions).

The reliability assessment required depends on the objective of the survey and the type of data collected ( Table 3 ). 27

[Table 3: Reliability assessments according to survey objective and type of data collected]
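Of the internal-consistency statistics mentioned above, the α (Cronbach's alpha) coefficient is the most widely reported. A minimal sketch of its computation, using hypothetical pilot data from 5 respondents on a 3-item domain:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    `items` is a list of per-item score lists (same respondents, same order)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 5-point scores: rows are items, columns are respondents
items = [[4, 5, 3, 5, 4],
         [4, 4, 3, 5, 4],
         [5, 5, 3, 4, 4]]
print(round(cronbach_alpha(items), 2))  # 0.84
```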

Validity

There are 4 types of validity that can be assessed in questionnaires: face, content, construct and criterion validity.

  • Face validity: This is the most subjective aspect of validity testing. Experts and sample participants evaluate whether the questionnaire measures what it intends to measure during clinical sensibility testing. 20
  • Content validity: This assessment is best performed by experts (in content or instrument development) who evaluate whether questionnaire content accurately assesses all fundamental aspects of the topic.
  • Construct validity: This is the most abstract validity assessment. It should be evaluated if specific criteria cannot be identified that adequately define the construct being measured. Expert determination of content validity or factor analysis can substantiate that key constructs underpinning the content are included.
  • Criterion validity: In this assessment, responses to survey items are compared to a “gold standard.”

Investigators may engage in one or more assessments of instrument validity depending on current and anticipated uses of the questionnaire. At a minimum, they should assess the questionnaire's face validity.

Administration

Advance notices, for example in professional newsletters or a premailed letter, should announce the impending administration of a questionnaire. 19 Self-administered questionnaires can be distributed by mail or electronically via email or the Internet. The administration technique chosen depends on the amount and type of information desired, the target sample size, investigator time, financial constraints and whether test properties were established. 2 In a survey of orthopedic surgeons, Leece and colleagues 28 compared Internet (n = 221) and postal (n = 221) administration techniques using alternating assignment. Nonrespondents to the mailed questionnaire were sent up to 3 additional copies of the questionnaire; nonrespondents to the Internet questionnaire received up to 3 electronic requests to complete the questionnaire and, if necessary, were mailed a copy of the questionnaire. Compared with the postal arm, Internet recipients had a lower response rate (45% [99/221] v. 58% [128/221]; absolute difference 13%, 95% confidence interval 4%–22%; p < 0.01). Other studies 29 , 30 also showed a lower response rate with electronic than with postal administration techniques, which suggests that a trade-off may exist with electronic administration between cost (less investigator time required for questionnaire administration) and response rate. A systematic review of Internet-based surveys of health professionals identified 17 publications of sampling from e-directories and Web postings or electronic discussion groups; 12 reported variable response rates ranging from 9% to 94%. 31
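The comparison reported by Leece and colleagues can be reproduced from the raw counts with a standard two-proportion (Wald) confidence interval. The helper function below is an illustration, not the authors' own analysis code:

```python
from math import sqrt

def diff_in_proportions(x1, n1, x2, n2, z=1.96):
    """Absolute difference between two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Internet arm: 99/221 responded; postal arm: 128/221 responded
d, lo95, hi95 = diff_in_proportions(99, 221, 128, 221)
print(f"absolute difference {d:.0%} (95% CI {lo95:.0%} to {hi95:.0%})")
# absolute difference 13% (95% CI 4% to 22%)
```

The result matches the 13% difference and 4%–22% interval reported in the article.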

Internet-based surveys pose unique technical challenges and methodologic concerns. 31 Before choosing this administration technique, investigators must have the support of skilled information technologists and the required server space. They must also ensure that potential respondents have access to electronic mail or the Internet. Software is needed for questionnaire development and analysis; alternatively, commercial electronic survey services can be used (Vovici [formerly WebSurveyor], SurveyMonkey and QuestionPro). As with postal surveys, an advance notice by email should closely precede administration of the electronic questionnaire. Potential respondents can be sent an electronic cover letter either with the initial or reminder questionnaires attached or with a link to an Internet-based questionnaire. Alternatively, the cover letter and questionnaire can be posted on the Web. Incentives can also be provided electronically (e.g., online coupons, entry into a lottery).

Response rate and estimation of sample size

High response rates increase the precision of parameter estimates, reduce the risk of selection bias 3 and enhance validity. 28 The lower the response rate, the greater the likelihood that respondents differ from nonrespondents, which casts doubt on whether the results of the questionnaire reflect those of the target population. 5 Investigators may report the actual response rate, which reflects the sampling element (including respondents who provide partially or fully completed questionnaires and opt-out responses), or the analyzable response rate, which reflects information obtained from partially or fully completed questionnaires as a proportion of the sampling frame (all potential respondents contacted). Although response rates of at least 70% are desirable for external validity, 2 , 4 , 5 , 17 response rates between 60% and 70%, and sometimes less than 60% (e.g., for controversial topics), may be acceptable. 17 Mean response rates of 54% 32 to 61% 33 for physicians and 68% 32 for nonphysicians have been reported in recent systematic reviews of postal questionnaires.
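The distinction between the two response rates can be made concrete with a short calculation (the counts below are hypothetical):

```python
def response_rates(contacted, partial, complete, opted_out):
    """Actual rate counts all returns, including opt-outs; analyzable rate
    counts only usable (partially or fully completed) questionnaires.
    Both are expressed over everyone contacted (the sampling frame)."""
    actual = (partial + complete + opted_out) / contacted
    analyzable = (partial + complete) / contacted
    return actual, analyzable

# Hypothetical survey: 400 clinicians contacted, 210 complete returns,
# 20 partial returns, 10 opt-out responses
actual, analyzable = response_rates(contacted=400, partial=20, complete=210, opted_out=10)
print(f"actual {actual:.1%}, analyzable {analyzable:.1%}")
# actual 60.0%, analyzable 57.5%
```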

In another systematic review of response rates to postal questionnaires, Nakash and colleagues 34 identified 15 randomized trials in health care research in patient populations. Similar to Edwards and colleagues, 22 whose systematic review of 292 randomized trials was not limited to medical surveys, Nakash and colleagues found that reminder letters and telephone contact had a favourable impact on response rates (odds ratio [OR] 3.71, 95% CI 2.30–5.97); shorter versus longer questionnaires also had an influence, although to a lesser extent (OR 1.35, 95% CI 1.19–1.54). However, unlike Edwards and colleagues, Nakash and coworkers found no evidence that providing an incentive increased the response rate (OR 1.09, 95% CI 0.94–1.27) (see Appendix 2, available at www.cmaj.ca/cgi/content/full/179/3/245/DC1 ).

Reminders have a powerful and positive influence on response rates. For postal surveys, each additional mailed reminder yields about 30%–50% of the initial responses. 17 If the initial response rate to a questionnaire is 40%, the response rate to a second mailing is anticipated to be between 12% and 20%. In this circumstance, a third mailing would be expected to achieve an overall response rate of 70%. Dillman and colleagues 35 proposed the use of 3 follow-up “waves”: an initial reminder postcard sent 1 week after the initial mailing of the questionnaire to the entire sample, and 2 reminders (a letter plus replacement questionnaire) sent at 3 and 7 weeks to nonrespondents, with the final letter and replacement questionnaire sent by certified mail. As with postal surveys, the use of reminders with electronic surveys of health professionals has been found by several authors 36–38 to increase response rates substantively.
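The arithmetic of reminder waves follows a simple geometric pattern. The sketch below assumes, consistent with the worked example in the text, that each mailing after the first yields about 50% of the previous wave's responses; this model is an illustration, not a prediction:

```python
def projected_overall_rate(initial_rate, wave_yield, mailings):
    """Cumulative response rate when each mailing after the first
    returns `wave_yield` of the previous wave's responses."""
    total, wave = 0.0, initial_rate
    for _ in range(mailings):
        total += wave
        wave *= wave_yield
    return total

# Initial response rate 40%; each reminder yields ~50% of the prior wave:
# 40% + 20% + 10% after three mailings
print(f"{projected_overall_rate(0.40, 0.5, 3):.0%}")  # 70%
```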

The survey objective, hypotheses and design inform the approach to estimating the sample size. Appendix 3 (available at www.cmaj.ca/cgi/content/full/179/3/245/DC1 ) outlines the steps involved in estimating the sample size for descriptive survey designs (synthesizing and reporting factual data with the goal of estimating a parameter) and explanatory or experimental survey designs (drawing inferences between constructs to test a hypothesis). 6 In Appendices 4 and 5 (available at www.cmaj.ca/cgi/content/full/179/3/245/DC1 ), we provide commonly used formulas for estimating sample sizes in descriptive and experimental study designs, respectively. 39
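For a descriptive design estimating a single proportion, the standard sample-size formula is n = z²p(1 − p)/d², where d is the desired margin of error. The sketch below (an illustration; the appendix formulas themselves are not reproduced here) computes the classic worst-case example (p = 0.5, d = ±5%) and then inflates the result for an anticipated 60% response rate:

```python
from math import ceil

def sample_size_proportion(p, margin, z=1.96):
    """Required n for estimating a proportion p within +/- margin at 95% confidence:
    n = z^2 * p * (1 - p) / margin^2, rounded up."""
    return ceil(z * z * p * (1 - p) / margin ** 2)

n = sample_size_proportion(0.5, 0.05)
print(n)                  # 385 analyzable responses required
print(ceil(n / 0.60))     # 642 questionnaires to mail at a 60% response rate
```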

Survey reporting

Complete and transparent reporting is essential for a survey to provide meaningful information for clinicians and researchers. Although infrequently adopted, several recommendations have been published for reporting findings from postal 40–42 and electronic surveys. 43 One set of recommended questions to consider when writing a report of findings from postal surveys appears in Table 4 . 40 Reviews of the quality of survey reports showed that only 51% included a response rate, 44 8%–16% provided access to the questionnaire, 45 , 46 and 67% reported validation of the questions. 45 Only with sufficient detail and transparent reporting of the survey's methods and results can readers appraise the survey's validity.

[Table 4: Recommended questions to consider when writing a report of findings from postal surveys]

In this guide for the design and conduct of self-administered surveys of clinicians' knowledge, attitudes and practice, we have outlined methods to identify the sampling frame, generate items for inclusion in the questionnaire and reduce these items to a manageable list. We have also described how to further test and administer questionnaires, maximize response rates and ensure transparent reporting of results. Using this systematic approach (summarized in Appendix 6, available at www.cmaj.ca/cgi/content/full/179/3/245/DC1), investigators should be able to design and conduct valid, useful surveys, and readers should be better equipped to appraise published surveys.

Supplementary Material

Acknowledgments.

We thank Dr. Donnie Arnold, McMaster University, Hamilton, Ont., for his review of this manuscript.

This article has been peer reviewed.

Contributors: Karen Burns, Neill Adhikari, Maureen Meade, Tasnim Sinuff and Deborah Cook conceived the idea. Karen Burns drafted a template of the guide, reviewed and synthesized pertinent literature and prepared the initial and subsequent drafts of the manuscript. Mark Duffett and Michelle Kho reviewed and synthesized pertinent literature and drafted sections of the manuscript. Maureen Meade and Neill Adhikari contributed to the organization of the guide. Tasnim Sinuff contributed to the organization of the guide and synthesized information in the guide into a summary appendix. Deborah Cook aided in drafting the layout of the guide and provided scientific and methodologic guidance on drafting the guide. All of the authors revised the manuscript critically for important intellectual content and approved the final version submitted for publication.

Karen Burns and Tasnim Sinuff hold a Clinician–Scientist Award from the Canadian Institutes of Health Research (CIHR). Michelle Kho holds a CIHR Fellowship Award (Clinical Research Initiative). Deborah Cook is a Canada Research Chair of the CIHR.

Competing interests: None declared.

Correspondence to: Dr. Deborah J. Cook, Professor, Department of Clinical Epidemiology and Biostatistics, McMaster University, Rm. 2C11, 1200 Main St. W, Hamilton ON L8N 3Z5; fax 905 521-6068; ac.retsamcm@koocbed


Self-Administered Surveys


1.1     Assess the postal system in the study country and use it to develop a timeline for data collection that is realistic given the local context. In a 3MC (multinational, multiregional, and multicultural) survey, postal reliability, cost, possible carriers, and timeliness often differ across countries.

1.2     When designing materials (letters, questionnaires, etc.) that will be mailed to the respondent, assess the following:

1.2.1    Literacy levels among the target population.

1.2.2    Use of languages and/or regional dialects other than the country’s official language(s), and any implications for the feasibility of a self-completed questionnaire. Indeed, there are some languages and dialects that do not have a written form.

1.3     Determine how the data entry of returned mail questionnaires will occur. Data entry can occur manually, but it is more efficient to use optical or intelligent character recognition software, wherein the computer will read and code responses from paper questionnaires.
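However the data entry is performed, keyed or recognized responses should be validated against the questionnaire's code frame before analysis. A minimal sketch, in which the field names and legal codes are hypothetical examples:

```python
# Validate keyed questionnaire data against a code frame
# (the fields and codes below are hypothetical examples).
VALID_CODES = {
    "q1_smoker": {1, 2, 9},          # 1 = yes, 2 = no, 9 = missing
    "q2_age": set(range(18, 100)),   # plausible ages for this sample
}

def validate_record(record):
    """Return the fields whose keyed value falls outside the code frame."""
    return [f for f, codes in VALID_CODES.items()
            if record.get(f) not in codes]

flagged = validate_record({"q1_smoker": 1, "q2_age": 150})
print(flagged)  # ['q2_age']
```

Records with flagged fields can then be routed back for re-keying or manual review.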

1.4     Before mailing out the paper questionnaire, consider sending a well-written advance letter to legitimize the survey and reassure and motivate potential respondents. Most effective is a carefully drafted, simple, short letter.

1.5     Develop a cover letter to include with the paper questionnaire, introducing the research study, explaining the purpose of the survey, and providing instructions on how to complete the instrument and organization contact information for any questions the respondent might have.

1.6     Develop an instrument appropriate for the mode and target population, keeping in mind that there will be no interviewer present to assist with the survey administration.

1.6.1    Assess the literacy of the target population, and adjust the text for comprehension if necessary.

1.6.2    Place instructions clearly next to the survey questions to which they correspond.

1.6.3    Make the layout of the instrument visually appealing and the question order easy to follow. Use visual elements (e.g., brightness, color, shape, position on page) in a consistent way to define the desired path through the questionnaire.

1.6.4    Use skip patterns only when absolutely necessary. Include clear instructions for skip patterns, and reinforce with visual and graphical cues such as boldfacing and arrows.

1.6.5    Limit the number of open-ended questions.

1.6.6    Ask only one question at a time. Combining multiple items into one question places a heavy cognitive burden on respondents and can impact data quality.

1.7     Provide clear instructions for returning the completed survey to the research organization or other point of collection. Adequate postage should be provided on the envelope so as not to incur cost to the respondent.

1.8     Develop a sample management system (and procedures for its execution) to process completed paper questionnaires.

1.9     Institute protocols to protect respondent confidentiality. Research organizations commonly assign each sampled household's questionnaire a unique identification number, used for sample management as questionnaires are mailed back to the office. Because the questionnaire itself carries no identifying information, a paper questionnaire that is lost in the mail or otherwise not returned cannot be linked to the respondent's identity by a third party.
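The identification-number protocol described above can be sketched as a linkage file that never travels with the questionnaire; the household addresses and ID length below are illustrative assumptions:

```python
import secrets

def assign_ids(households):
    """Assign a random, unguessable ID to each sampled household.

    The linkage (ID -> household) stays locked at the research office;
    only the bare IDs are printed on the mailed questionnaires, so a
    questionnaire lost in the mail cannot be tied to a respondent.
    """
    linkage = {}
    for hh in households:
        sid = secrets.token_hex(4)      # 8 hex characters
        while sid in linkage:           # guard against a rare collision
            sid = secrets.token_hex(4)
        linkage[sid] = hh
    labels = list(linkage)              # what actually goes on the forms
    return linkage, labels

linkage, labels = assign_ids(["12 Main St", "34 Oak Ave"])
```

The forms carry only the entries of `labels`; without access to `linkage`, an ID reveals nothing about the respondent.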

1.10   Develop a protocol for addressing nonresponse, including how many attempts to reach respondents by mail and/or other possible methods will be made.

1.1     Because a mail survey is self-administered without an interviewer present, it is crucial that the layout and design of the questionnaire elements are clear and easy to follow and that instructions are visibly marked. Often, the first page of a mail survey contains a lengthy set of instructions, which research suggests respondents generally skip or do not retain when completing the questionnaire. The advice is to place each instruction directly where it is needed.

1.2     A recent mail survey in Siberia, which varied experimental factors across random subgroups of respondents, achieved the greatest response rates when official university letterhead was used in correspondence, when an incentive was offered, and when a larger (vs. a smaller) number of contacts with the respondent was attempted.

1.3     Expected response rates for mail surveys will differ by country; a limited number of studies have examined cross-national differences in response rates.

2.1     Assess each study country's technological infrastructure, including the devices and software prevalent there, to select software appropriate for the development, distribution, and completion of the Web survey.

2.1.1    Assess Internet speed and reliability in the study country and the potential impact on ease of Web survey use by respondents, and design the survey to fit the country’s bandwidth limitations.

2.1.2    Determine which Web browser(s) fully supports the Web-based survey instrument, and communicate this to the respondent. Consider including a link to download a specific browser to facilitate the respondent’s participation in the Web survey.

2.1.3    Consider that respondents will likely use different devices to access the survey, including desktop computers, laptop computers, tablets, smartphones, and other electronic devices. The Web survey should be able to be completed on a Web browser, regardless of the type of device. See Instrument Technical Design for additional information on preparing style sheets appropriate for multiple devices.

2.1.4    Plan for adequate programming and testing time on multiple devices. For example, software that is compatible with Android devices may have glitches in iOS (Apple) devices.

2.2     Determine how respondents will be invited to participate in the Web survey.

2.2.1    Before disseminating the link to the Web-based survey instrument, consider sending a well-written advance letter to legitimize the survey and reassure and motivate potential respondents. Most effective is a carefully drafted, simple, short letter.

2.2.2    Mode of invitation will be limited by the respondent contact information available from the sample frame. For example, a Web survey using a sampling frame consisting solely of email addresses will not be able to send an invitation via postal mail because of the lack of a mailing address.

2.3     Determine how respondents will gain access to the survey. One approach is to provide a PIN that limits access to people in the sample. Another option is to provide each respondent with a unique URL linking to the survey, tied to the respondent's sample ID.
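Either access scheme can be sketched in a few lines. Here each respondent gets a unique, unguessable URL that can still be tied back to the sample record; the base URL is a hypothetical placeholder:

```python
import secrets

BASE_URL = "https://survey.example.org/start"  # hypothetical survey host

def invitation_links(sample_ids):
    """Give each sample ID a unique, unguessable survey URL.

    The random token (not the sample ID itself) appears in the link, so
    only invited respondents can open the survey, while a completed
    questionnaire can still be tied back to its sample record.
    """
    token_to_id = {}
    links = {}
    for sid in sample_ids:
        token = secrets.token_urlsafe(16)
        token_to_id[token] = sid
        links[sid] = f"{BASE_URL}?t={token}"
    return links, token_to_id

links, token_to_id = invitation_links(["R001", "R002"])
```

The `token_to_id` table stays on the server; the emailed invitation contains only the respondent's own link.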

2.4     Develop a concise introduction to be presented at the start of the Web survey, introducing the research study, explaining the purpose of the survey, and providing instructions on how to complete the survey and organization contact information for any questions the respondent might have.

2.5     Develop and test the Web survey, keeping in mind that there will be no interviewer present to assist with the survey administration.

2.5.1    Assess the literacy of the target population, and adjust the text for comprehension if necessary.

2.5.2    The first question should be an item that is likely to be interesting to most respondents and easy to answer.

2.5.3    Place instructions alongside the survey questions to which they correspond.

2.5.4    Make the layout of the instrument visually appealing.

2.5.5    Program any skip patterns used directly into the instrument, relieving the respondent from navigational decisions.

2.5.6    Keep the survey as brief and engaging as possible. The longer the questionnaire and the greater the number of screens, the more likely the respondent will not finish the questionnaire.

2.5.7    Limit the number of open-ended questions.

2.5.8    Ask only one question at a time. Combining multiple items into one question places a heavy cognitive burden on respondents and can impact data quality.

2.5.9    Make prompts, particularly those asking for the respondent to correct an answer, helpful, polite, and encouraging.

2.5.10   Decide whether respondents can navigate backwards to revisit and/or revise previous survey items and responses.

2.5.11   See Instrument Technical Design for additional guidance on the layout and technical design of the Web survey.

2.6     Decide whether respondents will be permitted to complete the questionnaire in more than one session, allowing for the data to be saved in the interim, and program the instrument accordingly.

2.7     Institute protocols to protect respondent confidentiality.

2.7.1    Ensure that electronic transmission of the data from the respondent’s computer to the survey firm collecting the data is secure.

2.8     Select an appropriate electronic sample management system and develop procedures for its execution. If an electronic sample management system is used, coordinating centers  may play a role in monitoring fieldwork. See  Study Design and Organizational Structure  for details.

2.9     Determine which paradata will be collected. Paradata from Web surveys can be used to enhance respondents' experience or to understand more about the respondents and how they interact with the Web survey. See Paradata and Other Auxiliary Data for more information and examples.
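One common kind of paradata is per-question timing. A minimal sketch of a logger that a Web-survey back end might call as the respondent moves between screens (the names and structure are illustrative, not any particular survey platform's API):

```python
import time

class ParadataLogger:
    """Record how long a respondent spends on each question."""

    def __init__(self):
        self.events = []        # list of (question_id, seconds_spent)
        self._current = None
        self._entered = None

    def enter(self, question_id):
        """Call when a question is shown; closes out the previous one."""
        now = time.monotonic()
        if self._current is not None:
            self.events.append((self._current, now - self._entered))
        self._current, self._entered = question_id, now

    def finish(self):
        """Call when the survey ends, to close out the last question."""
        self.enter(None)
        self._current = None

log = ParadataLogger()
log.enter("q1")
log.enter("q2")   # respondent advanced; time on q1 is recorded
log.finish()
print([q for q, _ in log.events])  # ['q1', 'q2']
```

Timings like these can flag questions that respondents find confusing or burdensome.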

2.10   Develop a protocol for addressing nonresponse, including how many attempts to reach respondents by email and/or other possible methods will be made.

2.1     Web surveys are often used in subsequent waves of panel surveys following an interviewer-administered baseline study, and can be a practical and cost-effective mode choice. In such cases, the respondent is already familiar with the study, and strategies to minimize nonresponse can be executed via phone, mail, and even in-person visits because complete contact information is generally available.

2.2     With adequate design, Web surveys can achieve response rates comparable to non-Web surveys.

2.2.1    A randomized telephone-/Web-mode experiment in a Swiss election study found that the use of an incentive in a Web survey produced response rates comparable to those from the telephone survey (which also included incentives). The Web survey was much less costly, even accounting for the cost of incentives, than the telephone survey.

2.2.2    However, like 3MC surveys conducted in other modes, Web surveys can produce different response rates across countries. A comparison of data collected through a Web survey in Italy, France, Turkey, and the U.S. showed that France had the highest overall refusal rate but low item nonresponse among those who did participate; Italy and the U.S. had higher response rates and low item nonresponse; respondents in Turkey had the lowest contact and response rates and the highest item nonresponse for sensitive questions.

2.3     Internet censorship occurs at the national level in at least several non-Western countries, such as China and Iran. If planning a survey in a country where censorship occurs, consider the survey topic and technical programming and determine whether the Web is an acceptable form of data collection for the particular study country.

2.3.1    Censorship by certain governments can impact the types of questions that are permitted on a Web survey questionnaire.

2.3.2    Censorship can impact response rates due to confidentiality and security concerns among respondents.

2.3.3    If the study country engages in censorship, consider the location of the server hosting the survey, and whether the study respondents will be able to access the server in its host country; that is, whether the server website IP address is accessible from the study country.

2.4     Software and website vendors can restrict access by users in other countries. Regardless of any government censorship, verify that respondents in the study country can access the survey.

2.5     Smartphone apps are currently being used for time-use surveys. For example, a research study in the Netherlands is using a smartphone app to collect time-use data in combination with auxiliary data. By requiring respondents to install an app, rather than access a website to complete the survey, researchers can ensure that respondents see the instrument exactly as intended. The app does not need permanent Internet access, as completed survey data are stored and transmitted as Internet access permits.

3.1     Determine which IVR software will be used to carry out the survey, including whether the IVR system will accept incoming telephone calls from respondents and/or initiate outgoing calls to respondents to complete the survey.

3.2     Determine how respondents will be invited to participate in the IVR survey. The mode of invitation will be limited by the respondent contact information available from the sample frame.

3.2.1    If postal addresses are available, respondents can receive an invitation with a telephone number to call to participate.

3.2.2    If email addresses are available, respondents can receive an invitation and telephone number via email.

3.2.3    If only telephone numbers are available, the invitation to complete the IVR survey will occur by telephone.

3.3     If an automated dialing system will be used to initiate contact with the respondent, assess any legal restrictions in place that apply to the use of such systems in the study country.

3.4     Develop a concise introduction to be presented at the start of the IVR survey, introducing the research study, explaining the purpose of the survey, and providing instructions on how to complete the survey and organization contact information for any questions the respondent might have.

3.5     Decide whether to program the IVR system as touchtone, voice input, or a combination of the two.

3.5.1    When deciding on the programming, consider the target population. Studies in rural India and Botswana found that respondents with less education and lower literacy do better with touchtone, and that privacy was also cited as a reason for preferring touchtone.

3.5.2    A study in Pakistan found that a well-designed speech interface was more effective than a touchtone system for respondents regardless of literacy level.

3.6     Devote sufficient time to the development of a high-quality IVR system to maintain respondent interest and continued cooperation.

3.6.1    The IVR system must have a high-quality recording, as the respondent is likely to break off the survey if quality is poor.

3.6.2    Consult published guides to the development of an IVR system and the associated speech characteristics that need consideration.

3.7     Select an appropriate sample management system, and develop procedures for its execution.

3.7.1    If an electronic sample management system is used,  coordinating centers   may play a role in monitoring fieldwork. See  Study Design and Organizational Structure for details.

3.8     Develop a protocol for addressing nonresponse, including how many attempts to reach respondents by telephone and/or other possible methods will be made.

3.1     Consider the voice used for recording.

3.1.1    In a health helpline project in Botswana, researchers employed a well-known local actress for the IVR recording, and users reacted very positively.

3.1.2    Depending on the social context, using an IVR recording of a man for male respondents and a woman for female respondents may elicit more accurate reporting, particularly of sensitive information.

3.2     Researchers in India have developed an innovative approach to the challenge that dialectal variation and multilingualism pose to speech-driven IVR interfaces, an approach applicable to other settings as well. People from specific villages are recorded during interactions, and their speech is semi-automatically integrated into the acoustic models for that village, thus generating the linguistic resources needed for automatic recognition of their speech.

3.3     Consider an alternate mode of first contact to inform respondents of an impending IVR survey, such as SMS or a mailing. In a study in rural Uganda, the IVR survey call was preceded 24 hours in advance by an SMS message about the upcoming call. In a pretest, respondents who did not receive the text were unable to make sense of the later survey call.

3.4     A survey of teachers in Uganda resulted in a number of useful considerations when designing an IVR system to improve response rates and data quality.

3.4.1    The IVR call began with the immediate information that “This is a recorded call from Project X. You are not talking to a real person.”

3.4.2    The IVR call provided very specific instructions about whether to use keypad or to speak.

3.4.3    Respondents were initially confused by the automation of the IVR system. Researchers had better results when using a chime to get respondents’ attention before the automated voice gave instructions.

3.4.4    Leveraging the turn-taking conventions of normal conversation in the IVR system led to more success in eliciting desired user behavior than detailed instructions did.

3.4.5    An IVR system that projected a loud voice, with prompts recorded as though the speaker were compensating for a poor cell connection, resulted in a survey that was easier for respondents to follow.

3.4.6    When producing the IVR recording, use slow speech to elicit slow speech: respondents will emulate the voice, and the resulting data will be easier to understand.

3.4.7    The IVR recording included 3 seconds of silence before the recorded speaker says "thank you" and moves on to the next question, a pause that respondents reportedly received well.

Self-Administered Questionnaire Method: Definition, Advantages, Disadvantages


What is a Self-Administered Questionnaire?

A self-administered questionnaire (also referred to as a mailed questionnaire) is a data collection tool in which written questions are presented that are to be answered by the respondents in written form.

A written questionnaire can be administered in different ways, for example:

  • Sending questionnaires by mail with clear instructions on how to answer the questions and requests for mailed responses;
  • Gathering all or part of the respondents in one place at one time, giving oral or written instructions, and letting the respondents fill out the questionnaires; or
  • Hand-delivering questionnaires to respondents and collecting them later.

Other delivery modalities include computer-delivered and intercept studies.

To reach their respondents, computer-delivered, self-administered questionnaires use organizational intranets, the Internet, or online services. Intercept studies are conducted in person, generally in a public place or at a place of business.

For instance, interviewers might approach patrons leaving a restaurant and ask to interview them about their experiences. Interviewers might ask the questions or simply explain the project and give the questionnaire to the respondents.

The surveys might be completed on paper, on a tablet (iPad, Android, etc.), or on a laptop.

Intercept studies may use a traditional questionnaire or a computerized instrument in a predetermined environment without the interviewer’s assistance.

The questions included in the questionnaire can be either open-ended or closed (with pre-categorized answers).

Electronic Method of Self-Administered Questionnaire Method

In recent times, electronic surveys can be conducted by e-mail or administered on the Internet or the Web (Malhotra, 2007).

E-mail interviews

To conduct an e-mail survey, a list of e-mail addresses is prepared. The survey is posted within the body of the e-mail message. The e-mails are then sent out over the Internet.

E-mail surveys use pure text (ASCII) to represent questionnaires and can be received and responded to by anyone with an e-mail address, whether or not they have access to the Web.

Respondents type the answers to the closed-ended questions at the designated places and click on 'reply.' Responses are then entered into a predesigned sheet and tabulated; note that separate data entry is typically required in such surveys.

Internet Interviews

In contrast to e-mail surveys, Internet or Web surveys use Hypertext Markup Language (HTML), the language of the Web, and are posted on a Web site.

Respondents may be recruited over the Internet from a potential respondent database maintained by the research firm, or they can be recruited by conventional methods (mail, telephone). Respondents are asked to go to a particular web location to complete the survey.

Many times, respondents are not recruited. Rather, they happen to be visiting the Web site where the survey is posted (or other popular Web sites), and they are invited to participate in the survey.

Many national dailies currently conduct opinion polls on issues of national interest using this method.

Advantages of Self-administered Questionnaire

Considerably low cost

Economy is one of the most obvious benefits of a mailed questionnaire. The mail questionnaire does not require a trained staff of interviewers and supervisors; it requires only the cost of planning, sampling, duplicating, mailing, and providing self-addressed envelopes for the returns.

Processing and analysis costs are usually simpler and cheaper than other survey methods.

Ease in locating respondents

Locating respondents in a mailed questionnaire survey is often easier than in interview studies, especially when the survey is conducted with specialized and homogeneous samples.

Saving of time

The mailed questionnaire can be sent to all respondents simultaneously, and most replies will be received within a week or so. It is, however, also true that final returns may take several weeks or longer.

Respondent’s convenience

The respondent can devote more time to the questionnaire than he or she could in an interview study. This convenience may help him or her answer more accurately, and it allows more time to deal with difficult questions.

Greater anonymity

The absence of an interviewer provides the respondent with greater anonymity. This makes him or her more willing to provide socially undesirable answers or answers that violate norms.

Less chance of biasing error

There is no opportunity for the respondent to be biased by the presence of an interviewer, whose personal characteristics and variable skills can otherwise have a biasing effect.

In a face-to-face interview, the respondent may mistrust the interviewer or dodge certain questions or give misleading answers. A mail questionnaire is, in general, free from this error.

Standardized wording

A comparison of respondents’ answers is facilitated by the fact that each respondent is exposed to the same wording.

However, this advantage may be diminished by respondents' varying levels of understanding, which follow from differences in their levels of education.

Ease in securing information

The mail questionnaire allows respondents to consult their records and personal documents, and to confer with colleagues or other people, in order to provide accurate information.

Greater accessibility

Finally, respondents who are widely dispersed geographically can all be reached for the price of a postage stamp, compared with the expensive travel costs of interviewers.

Disadvantages of Self-administered Questionnaire

Limitations of the questionnaire

Only a short and straightforward questionnaire, with few complex, open-ended, screening, or tedious questions, can be used, because respondents must be able to understand it with the help of printed instructions and definitions alone.

Low response rate

The greatest disadvantage of the mailed questionnaire is its low response rate. In contrast, in interview studies the vast majority of interviews are completed, and the reasons for non-response are known.

Mailed studies sometimes receive a response rate as low as 10 percent; 50 percent is considered adequate.

Inflexibility

The answers received in a mailed questionnaire have to be accepted as final, because there is no scope to probe beyond a given answer, to clarify an ambiguous one, or to overcome unwillingness to answer a particular question.

Non-verbal behavior

No interviewer is present to observe non-verbal behavior or to make personal assessments of the respondent’s social class or other pertinent characteristics. A lower-class respondent may pass himself off as upper class in a mailed questionnaire, with no challenge from an interviewer.

Reasons for refusal not known

It is not always possible to determine the characteristics of non-respondents and reasons for refusals.

No control over the sequence

No control can be maintained over the sequence in which the questions are answered.

When the respondent fills in the questionnaire, he or she can see all the questions before answering any one of them, so the different answers cannot be treated as independent. Following the sequence of a questionnaire matters because it helps reduce response bias.

No control over the environment

In interview studies, the interviewer often takes great pains to ensure that a standardized environment exists for every interview.

In a mailed questionnaire study, there is no assurance that the respondent can complete the answers without interference from others. This may also lead to an invasion of privacy.

High item non-response bias

Without supervision, the respondent may leave some unanswered questions while filling in the questionnaire. This is particularly true for sensitive and socially undesirable questions.

Cannot record spontaneous answers

When it is important to secure one person's spontaneous views, uninfluenced by others, this method is inappropriate. Moreover, the respondent has an opportunity to erase a hasty answer that he or she later decides is not diplomatic.

No way to supplement the answers

With a mail questionnaire, there is no opportunity to supplement the respondent’s answers by observational data.

No way to check the correct identity of the respondents

With a mail questionnaire, the investigator cannot be sure that the right person has completed the questionnaire.

Dealing with Non-response in Mail Surveys

Non-response bias may also be reduced by sub-sampling the non-respondents.

Suppose a sample of size 1000 is selected, and a questionnaire is mailed to them. The study’s objective is to ascertain the prevalence of smoking (P) among the respondents. Suppose further that 700 of them fill in and return the same.

Thus the initial number of non-respondents is 300. As a first step, reminders are sent to these 300 non-respondents. Assume now that out of these 300 non-respondents, 100 fill in and return the questionnaire.

There are now 800 respondents from whom responses have been received. The response rate is 0.80, and the non-response rate is 0.20.

The second step calls for selecting 40 of the 200 remaining non-respondents and interviewing them. We assume that it has been possible to elicit responses from all 40 individuals. To estimate P, the following weighted estimator is now used:

P = (800 p1 + 200 p2) / 1000 = 0.80 p1 + 0.20 p2

where p1 is the estimate from the data collected by mail, and p2 is the estimate from the interviewed sub-sample, which stands in for all 200 remaining non-respondents. p1 alone would have been the estimator if no interviews had been carried out.
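The estimate can be checked with simple arithmetic; the smoking prevalences assigned to p1 and p2 below are hypothetical:

```python
def double_sampling_estimate(n1, p1, n2, p2):
    """Weight the mail-based estimate p1 (n1 respondents) and the
    interview-based estimate p2 (standing in for the n2 remaining
    non-respondents) by their shares of the full sample."""
    return (n1 * p1 + n2 * p2) / (n1 + n2)

# 800 mail respondents, 200 remaining non-respondents (40 interviewed).
# Suppose 30% of mail respondents smoke and 45% of interviewed ones do.
p_hat = double_sampling_estimate(800, 0.30, 200, 0.45)
print(round(p_hat, 3))  # 0.33
```

Note how the estimate is pulled above the mail-only figure of 0.30 because the non-respondents, as represented by the interviewed sub-sample, smoke more.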

Improving Response Rates in Mail Survey

We enumerate below a few points that lead to a higher response rate in the mail survey:

Follow-up: Follow-ups and reminders are highly effective in producing good returns. Since each successive follow-up produces additional returns, researchers can achieve a very high total response rate through repeated follow-ups.
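The cumulative effect of repeated follow-ups can be illustrated with simple arithmetic, assuming (hypothetically) that each mailing recovers 40 percent of those who have not yet responded:

```python
def cumulative_response(n, wave_rate, waves):
    """Overall response rate after `waves` mailings, each of which
    recovers `wave_rate` of the remaining non-respondents."""
    responded, remaining = 0.0, float(n)
    for _ in range(waves):
        new = remaining * wave_rate
        responded += new
        remaining -= new
    return responded / n

print(round(cumulative_response(1000, 0.40, 1), 2))  # 0.4  (initial mailing)
print(round(cumulative_response(1000, 0.40, 3), 2))  # 0.78 (two follow-ups)
```

Each wave yields fewer new returns than the last, which is why a small number of well-timed follow-ups captures most of the achievable gain.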

Prior notification

There is evidence that advance notification is effective in increasing response rates. For this purpose, telephoning seems to be the best device.

Return envelopes

Including a self-addressed, stamped envelope encourages respondents to return the questionnaire promptly.

Cash incentives

Some provision of monetary incentives is highly likely to increase the response rate.

Sponsorship

The sponsorship of the mail questionnaire significantly affects respondents, often motivating them to fill it out and return it quickly. Sponsorship signals the study's legitimacy and value. Investigators should therefore include information on sponsorship, usually in the questionnaire's cover letter.

The researcher should also appeal to the respondents' goodwill, asking them to participate by filling out the questionnaire and mailing it back.



How to Conduct Self-Administered and Mail Surveys


  • Linda Bourque - University of California at Los Angeles, USA
  • Eve P Fielder - UCLA, USA
  • Description

"The authors discuss self-administered questionnaires, the content and format of the questionnaire, "user-friendly" questionnaires and response categories, and survey implementation. They offer excellent checklists for deciding whether or not to use a mail questionnaire, for constructing questions and response categories, for minimizing bias, for writing questionnaire specifications, for formatting and finalizing questionnaires, and for motivating respondents and writing cover letters." --Peter Hernon, Graduate School of Library and Information Science, Simmons College

How do you decide whether a self-administered questionnaire is appropriate for your research question? This book provides readers with an answer to this question while giving them all the basic tools needed for conducting a self-administered or mail survey. Updated to include data from the 2000 Census, the authors show how to develop questions and format a user-friendly questionnaire; pretest, pilot test, and revise questionnaires; and write advance and cover letters that help motivate and increase response rates. They describe how to track and time follow-ups to non-respondents; estimate personnel requirements; and determine the costs of a self-administered or mailed survey. They also demonstrate how to process, edit, and code questionnaires; keep records; fully document how the questionnaire was developed and administered; and how the data collected is related to the questionnaire. New to this edition is expanded coverage on Web-based questionnaires, and literacy and language issues.

SAGE, Thousand Oaks, CA (www.sagepub.com)



Self Administered Survey: Types, Uses + [Questionnaire Examples]

busayo.longe

Sometimes, individuals and businesses send out surveys and questionnaires that are designed to be completed without any interference from the researcher. In other words, the researcher doesn’t need to be there when the respondents are filling these questionnaires. These types of surveys are known as self-administered surveys or stand-alone questionnaires. 

When designing a self-administered survey, you should take extra care and ensure that the questions are easy to understand, and the survey has a decent layout. In this article, we will look at some of the most common types of stand-alone questionnaires. 

What is a Self Administered Survey?  

A self-administered survey is a data-collection process in which the researcher is entirely absent while respondents fill out the survey, hence the term self-administered. In other words, the researcher sends the survey to respondents with instructions on how to fill it out, and waits for their responses.

Self-administered surveys typically rely on closed-ended questions, although open-ended questions can be included to let respondents communicate their thoughts with little or no restriction. One of the most common types of self-administered surveys is the mail-in questionnaire. Online questionnaires sent to respondents via email invitations are another example.

Apart from paper and online forms, self-administered surveys can also take the form of group-administered sessions. In this case, the researcher gathers all the participants in a single location, gives them the relevant instructions, and then leaves them to complete the survey within a specified time.

Importance of a Self Administered Survey

  • It reduces the costs of data collection in research. For instance, instead of travelling to meet research subjects or hiring face-to-face interviewers, you can simply send mail-in questionnaires to respondents and gather the data you need.
  • Self-administered surveys are more convenient for participants because they do not have to complete the questionnaires immediately. This can improve your survey participation rates.
  • Because research subjects do not have to complete and submit your questionnaire immediately, they can take their time to think about each question and give their best responses. This helps to improve the validity of your research data.
  • It also reduces research bias, since the researcher has no contact with respondents as they fill out the survey and therefore no opportunity to subtly influence their responses.
  • Survey respondents enjoy better privacy with self-administered questionnaires. The absence of the researcher can make respondents feel at ease and more willing to provide unique and unconventional answers.
  • Using self-administered surveys and questionnaires for data collection allows you to gather data from a large sample spread over different geographical locations.

Types of Self Administered Survey  

1. Written Self-Administered Surveys

  • Mail Questionnaire

A mail questionnaire is a quantitative method of data collection in which the researcher selects a research sample and sends a questionnaire to each participant via the postal service. The questionnaires are sealed in a postage-paid envelope and contain specific instructions on how they should be completed. After completing the questionnaire, the respondent mails it back to the researcher.

Mail questionnaires are one of the oldest types of self-administered surveys. To date, many businesses use this method to collect employee feedback and measure customer satisfaction. Mail surveys allow you to target specific audience segments because you have access to the full name and home address of each member of your target population.

Advantages of Mail Questionnaires  

  • Mail surveys help you to cut down costs for data collection. While you can spend up to $5,000 to administer a medium-scale survey, you’d spend a lot more if you had to conduct the same process via telephone surveys or interviews. 
  • Mail surveys do not need much manpower. In some cases, one person can post the questionnaires to all respondents. 
  • It allows you to accurately target survey respondents and gather valid responses. 

Disadvantages of Mail Surveys  

  • Mail questionnaires have low response rates. Sometimes, only 3–15% of the people who receive these questionnaires actually complete and return them.
  • Poor survey design can affect the entire data collection process.

Hand-delivered Surveys

This type of survey works just like mail questionnaires. However, in this case, the researcher drops off the survey physically and then, goes back to pick up the completed form after a few hours. It is designed to be more convenient for the survey participants. 

Hand-delivered surveys are also called direct questionnaires because the researcher distributes them directly. They often contain close-ended questions with checkboxes that respondents tick to indicate their answers. 

Advantages of Hand-delivered Surveys

  • It is more convenient for respondents, as they do not have to worry about mailing the completed surveys back to you.
  • The researcher can explain the instructions and purpose of the study directly. 

Disadvantages of Hand-delivered Surveys

  • Hand-delivered surveys are time-consuming.
  • You’d incur significant costs, especially for logistics.

2. Electronic Self-administered Surveys 

  • Email Surveys 

An email survey is a quantitative data-collection process conducted via email. Email surveys use plain text (ASCII) to represent the questionnaire in the body of the email, and the survey is then sent to a curated list of email addresses.

Email surveys are mostly made up of closed-ended questions. In the end, participants’ answers are exported to a prepared spreadsheet and tabulated. 
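
The tabulation step can be sketched in a few lines. This is a hypothetical example, assuming the answers to one closed-ended question have already been extracted from the returned emails; the responses listed are invented.

```python
import csv
from collections import Counter
from io import StringIO

# Invented answers to a single closed-ended question from an email survey.
responses = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]

tally = Counter(responses)

# Write the tabulated counts as CSV, a format any spreadsheet can import.
buffer = StringIO()
writer = csv.writer(buffer)
writer.writerow(["answer", "count"])
for answer, count in tally.most_common():
    writer.writerow([answer, count])

print(buffer.getvalue())
```

The same pattern extends to multiple questions by keeping one `Counter` per question.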

Advantages of Email Surveys

  • You can send out email surveys to large numbers of people at the same time. 
  • It is one of the most efficient and reliable methods of collecting data; especially customer feedback. 
  • Email surveys can achieve high response rates because people check and respond to email frequently.
  • Businesses can integrate other communication materials in an email survey. For example, you can include a coupon, discount code or even promotional materials for your brand. 

Disadvantages of Email Surveys

  • It can be difficult to get the email addresses of all the members of your target audience for your email survey. 
  • The responses from email surveys can be unreliable, especially when there are incentives for completing the survey; people may select options at random simply to receive the incentive.

Internet Interviews

An internet interview is a set of questions written in HTML and embedded in a website. Respondents come across these surveys as they interact with the website and can choose whether or not to respond.

There are many data collection tools that you can use to conduct internet interviews for your target audience. Common examples of internet interviews include social media polls and online surveys. 

Advantages of Internet Interviews

  • It is cheaper to create and administer internet interviews than other types of self-administered surveys. You can use free or paid web survey software to build your questionnaire from scratch and share it with your audience.
  • You get real-time responses as survey participants vote on your online poll. 
  • Respondents can answer the questions when they want to. 

Disadvantages of Internet Interviews

  • It can be difficult to target a specific audience for your survey. 
  • Low survey response rates due to poor distribution. 

Examples of Self Administered Survey/Questionnaire  

  • Let’s say you are an e-commerce company that deals in baby clothes. After a customer registers and shops on your website, you can send a customer satisfaction survey via email to know how they feel about the overall shopping experience. You can even add a promo code or exclusive discount offers as incentives. 
  • A home construction company can mail surveys to its customers or send hand-delivered questionnaires to ask for service reviews.
  • To gather public opinion on social issues, you can create an online poll using Formplus and share it with your social media community using our direct social media sharing buttons.

Why Use Formplus to Create a Self Administered Survey?  

  • Email Invitations

Inviting people to fill out your survey via email is one of Formplus’s multiple form-sharing options for questionnaires and surveys. With email invitations, you can invite respondents to take part in your survey by uploading their email addresses.

Sending out email invitations also allows you to track survey responses and prevent multiple form submissions. You’ll know when a submission is pending and when it is completed. Once your survey gets a new response, you can prevent the participant from accessing the survey again. 

  • Offline Forms

With our offline form feature , survey participants can take part in your research, even when they are in remote areas with poor or no internet access. All responses submitted in “offline mode” are automatically updated on the Formplus servers when internet access is restored. 

  • Mobile Forms

Respondents can view, fill and submit your self-administered surveys from their smartphones; without having to pinch out or zoom in on their screens. Formplus forms are mobile-responsive which means you can conveniently interact with surveys and questionnaires on your mobile device. 

  • Conditional Logic

Conditional logic improves the quality of data submitted in your surveys by allowing you to collect only relevant data. It hides and shows form fields to respondents based on their previous answers so they only have to view and complete fields that are relevant to them. 

  • Multiple Form Fields

There are more than 30 form fields in the Formplus builder which means you can collect different types of data in your surveys. From text input fields to e-signatures and even unlimited file uploads, Formplus gives you numerous options to collect data the way you like. 

  • Autoresponders

Once you receive a new submission of your survey or questionnaire, you can automatically send out confirmation emails to respondents. As part of the confirmation email, you can include a copy of the completed survey to eliminate any form of miscommunication. 

  • Form Analytics

The form analytics dashboard in the Formplus builder displays important analytics and metrics for your data collection process. Here, you can access form metrics like the total number of survey responses, the total number of form views, and the geographical locations from where form submissions were made. 

You can also build custom visual reports using the builder’s report summary tool. Simply click on the form field or data category to automatically display your data as custom graphs and charts.
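
Metrics like these feed a simple completion-rate calculation. The figures below are hypothetical, not output from any analytics dashboard.

```python
# Hypothetical form metrics of the kind an analytics dashboard reports.
form_views = 1200
submissions = 420

completion_rate = submissions / form_views
print(f"Completion rate: {completion_rate:.1%}")  # prints "Completion rate: 35.0%"
```

Comparing this ratio across distribution channels shows where drop-off is worst.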

Advantages of Self-administered Surveys

  • Self-administered surveys help you save money on logistics and personnel. For mail surveys, you do not need any trained personnel; all you need is proper planning in terms of sampling, duplicating, mailing, and providing self-addressed envelopes for the returns.
  • It is relatively easy to target respondents with self-administered surveys.
  • Self-administered surveys are designed for convenience. Respondents can fill them out whenever they want to.

Disadvantages of Self-administered Surveys

  • Self-administered surveys have low response rates. For instance, very few people will respond to a mail survey when there is no incentive.
  • The researcher cannot vouch for the validity of the responses from self-administered surveys. In online polls, for example, the researcher has little or no control over who fills out the survey.

Conclusion  

The goals and objectives of your research should determine whether you can opt for self-administered surveys. If you are conducting complex research, you may need to supervise the data-collection process in person; in that case, self-administered surveys may not work.

Self-administered surveys are ideal for situations where there’s an existing relationship between the researcher and the target audience—for example, a fashion store’s customers. In this context, the audience will find it easy to complete the surveys, independent of the researcher. 


Self-Administered Questionnaires: Advantages, Disadvantages, Enhancement, and Analysis

Self-administered questionnaires are a very common and widely used tool for collecting data in research. This article provides the necessary information regarding this data collection tool.


What is the Self-Administered Questionnaire (SAQ)?

A self-administered questionnaire is a type of data collection tool. It is less common than interviewer-administered questionnaires but is becoming widely accepted due to its countless beneficial features.

An SAQ is a method of collecting data without needing an interviewer or surveyor. In this method, written questions are provided to the respondents, who answer them independently. SAQs are also known as mailed questionnaires.

As we know, the cost of an interviewer-administered survey is quite high. Therefore, many researchers now consider SAQs to minimize cost and time. The respondents fill in the answers in written form.

Types of SAQ

Because the SAQ is in written form, it can be administered in different ways. Some of these types are listed below.

Through Mail: in this type, a questionnaire is mailed to the respondents with all the necessary and detailed information. The respondents then fill it out and mail it back to the researcher.

Gathering Group of People: in this type, a group of people from a sample population are gathered in one place. The researcher gives all the necessary information and instructions about how to fill out the questionnaires. After filling in the data, they return the questionnaires to the researcher on the spot.

Hand-Delivered: in this type, the researcher delivers the questionnaires by hand to the respondents, gives them some time to complete them properly, and collects them later.

Computer-Delivered: this data collection is usually performed in offices or organizations. The questionnaires are delivered to the employees’ computer systems through the organization’s intranet.

Intercept Studies: in this method, the questionnaires are distributed at a place where a large group of people from the sample population gathers, for example, offices or shopping malls. Respondents are given some time to complete the questionnaires, and the researcher then collects them. In this case, the researcher first has to explain the purpose of the research to the respondents in order to achieve the desired results.

Email Survey: the questionnaires are sent to the respondents through email. They fill out the questionnaires and email them back to the researcher. The questions can be either open- or closed-ended.

Internet Survey: they are posted on a website. Respondents are asked to visit a specific website to fill out the survey.

The survey can be conducted on a phone, iPad, tablet, computer, etc.

Advantages of Self-Administered Questionnaires

There are many considerable benefits of SAQs. Some of them are described below.

More Budget Friendly

Budget is undoubtedly an important part of every research project. Many research projects fail because of budget constraints, so budget must be considered when planning research.

In SAQs, there is no need to hire supervisors or interviewers, who add considerable cost to the survey process. The remaining costs are those of planning, designing, sampling, implementing, and analyzing, which are much lower than the costs of other survey methods.

More Time Saving

Traditional survey methods and tools are more time-consuming than SAQs. In traditional methods, supervisors or interviewers have to administer the questionnaires in person. With an SAQ, you can mail the questionnaires to all respondents at the same time, and they send them back after recording their responses. In this way, much time and effort can be saved.

Easy Sampling

This research tool makes sampling easier. In most cases, it uses homogeneous samples, so sampling can be done more accurately with less effort and time.

More Convenient for Respondents  

When respondents receive questionnaires on their phone or computer, they find it more convenient to complete them at a time of their own choosing. They can complete the questionnaires in their own space, without the intervention of an interviewer or the need to answer to a supervisor. Moreover, when respondents feel relaxed, the responses recorded tend to be more accurate and usable.

Less Biased Data Collection

When an interviewer is present on the spot, respondents often make biased choices while answering the questions; the techniques interviewers use to elicit responses can themselves bias respondents’ choices.

With an SAQ, however, there is far less chance of such bias, because no one is present to influence or interfere with the responses. More natural responses can be collected, and the data become more reliable and valid.

Wide Range Accessibility

With mailed questionnaires, you can record responses from people worldwide. In this way, you can obtain a large, geographically dispersed sample without travelling to different places; the only cost you have to bear is postage. This data collection method provides wide accessibility to people worldwide.

Disadvantages/ Limitations

Some of the disadvantages or limitations of this data collection tool are listed below.

Questionnaire Design is Limited

One significant drawback of this type of questionnaire is that its design options are limited. Because you send the questionnaire to the respondent with only written instructions, no one is present on the spot to explain it. The questions must therefore be made simple and easy for the respondent to understand.

SAQs are therefore mostly designed with straightforward, short questions and few open-ended ones, to make them easily understandable for respondents.

Response Rate is Low

This is also considered a major drawback because mailed questionnaires receive a very low response rate. In many cases, people tend to ignore unsolicited mail and email; they do not even open the mail to see what is inside. That is why the response rate in some studies remains as low as 10%.
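
A low expected response rate mainly affects how many questionnaires must be mailed to reach a target number of completed returns. As a rough planning sketch, with an illustrative target and the 10% rate mentioned above:

```python
import math

# Illustrative planning figures: desired completed questionnaires and the
# expected response rate for an unsolicited mail survey (10%).
target_completed = 150
expected_response_rate = 0.10

mailings_needed = math.ceil(target_completed / expected_response_rate)
print(mailings_needed)  # prints 1500
```

At a 10% return rate, every completed questionnaire costs ten mailings, which is why follow-ups and incentives matter so much for mail surveys.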

Interviews, by contrast, have a very good response rate, and the reasons for any non-response are usually known.

Results are Inflexible

The answers you get in a mailed questionnaire are totally inflexible. You have to accept the answers as they are written; you cannot probe the respondent further to get better answers. You may also get ambiguous answers to many questions, and these answers make your overall results less reliable and valid.

Respondent’s Behavior is not Known

As there is no interviewer or supervisor present in this type of data collection process, the behavior of the respondents cannot be recorded.

Interviewers are skilled at reading respondents’ body language to assess their social behaviours; this allows better judgments and more realistic research results. With mailed questionnaires, however, the respondents’ behaviors are unknown.

For example, suppose a questionnaire is intended to separate respondents by social class. A lower-class respondent may fill out a questionnaire meant for the upper class, and the results will be biased and unrealistic.

Uncontrolled Environment

An interview is conducted in a controlled environment, so there is minimal outside interference with the respondents and response bias can be minimized.

With mailed surveys, however, the researcher cannot control the environment and cannot know who else may interfere with the respondent while the survey is being completed. Outside interference is a major source of response bias and hinders accurate results. That is why an uncontrolled environment is considered a major drawback of SAQs.

Improperly Filled Data

As no interviewer or supervisor is present on the spot to guide the respondent, there is a strong chance that many questions will not be answered properly. In many cases, respondents do not even bother to answer ambiguous questions. For example, some surveys include socially sensitive questions that respondents will skip or answer inaccurately, producing unreliable results.

This is also one of the reasons why the response rate of SAQs is low as compared to interview surveys.

Unidentified Respondents

This is also a major disadvantage of self-administered questionnaires. You mail the questionnaire to the respondent’s address, expecting that he or she will receive it, fill in the data, and mail it back. However, this is only true in some cases; you cannot verify whether the intended person completed the questionnaire.

The intended respondent may have someone else fill out the questionnaire and mail it back, and there is no way for the researcher to confirm the respondent’s identity. In this way, the accuracy, reliability, and validity of the data and results are highly compromised.

Enhancing SAQ’s Response Rate

As mentioned above, the response rate of mail surveys is relatively low compared with the interview survey method. Here are some useful tips to enhance the response rate of SAQs or mail surveys.

Follow Up

Following up after sending the mail surveys is important in enhancing the response rate. A follow-up reminds the respondent to fill in the data and return the survey on time.

Repeated follow-ups play an important role in enhancing the overall response rate and producing more accurate results.
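
In practice, running repeated follow-ups means tracking who has already returned a questionnaire. A minimal sketch, with invented respondent IDs and return statuses:

```python
# Hypothetical tracking sheet: respondent ID -> whether the questionnaire
# has been returned. IDs and statuses are invented for the example.
returns = {"R001": True, "R002": False, "R003": False, "R004": True, "R005": False}

# Only non-respondents receive the next follow-up reminder.
follow_up_list = sorted(rid for rid, returned in returns.items() if not returned)
print(follow_up_list)  # prints ['R002', 'R003', 'R005']
```

Keeping respondent IDs separate from the questionnaire content also preserves anonymity while still allowing targeted reminders.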

Notifying in Advance

Evidence shows that if you notify respondents in advance that you will send them a survey questionnaire, the response rate improves. You can call or text the respondent before sending the survey. This helps respondents remember to fill in the questionnaire on time and thus increases the response rate.

Mention Sponsors

Mentioning a sponsor is another powerful way to enhance the response rate. Mention the sponsors in the survey’s cover letter; this increases the perceived worth of your survey and, with it, the response rate.

Final Analysis

Self-administered questionnaires have both merits and demerits that must be weighed before choosing this data collection tool.

A survey without an interviewer makes respondents more comfortable and at ease to respond as they wish. Feeling that no one is observing their behavioural responses, they complete the questionnaire more comfortably and without haste.

On the other hand, survey questionnaires are sometimes not returned. People take a long time to complete them or do not bother to mail them back. This lowers the response rate, and the research lags, slows down, or fails.

Moreover, the information provided is sometimes biased or incomplete, resulting in unreliable or invalid results and thus defeating the research’s actual purpose.

However, this method is still widely used by researchers worldwide because of its many benefits, especially savings in time and money. You only need to assess first whether the nature of your research allows the use of this data collection tool.

Mavs Open Press

7.3 Types of surveys

Learning Objectives

  • Define cross-sectional surveys, provide an example of a cross-sectional survey, and outline some of the drawbacks of cross-sectional research
  • Describe the three types of longitudinal surveys
  • Describe retrospective surveys and identify their strengths and weaknesses
  • Discuss the benefits and drawbacks of the various methods of administering surveys

There is immense variety when it comes to surveys. This variety comes both in terms of time —when or with what frequency a survey is administered—and in terms of administration —how a survey is delivered to respondents. In this section, we’ll look at what types of surveys exist when it comes to both time and administration.

In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are those that are administered at just one point in time. These surveys offer researchers a snapshot in time and offer an idea about how things are for the respondents at the particular point in time that the survey is administered.

An example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy, Martos, Boland, & Horvath-Szabo, 2011) of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes impact various aspects of one’s life and health. The researchers found from analysis of their cross-sectional data that anxiety and depression were highest among those who had both strong religious beliefs and some doubts about religion.

Yet another example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman, Pike, & Butler, 2011) of how the perceived publicness of social networking sites influences users’ self-disclosures. These researchers administered an online survey to undergraduate and graduate business students. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between perceived publicness of a social networking site and plans to self-disclose on the site.


One problem with cross-sectional surveys is that the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain stagnant. They change over time. Thus, generalizing from a cross-sectional survey about the way things are can be tricky; perhaps you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long after you administered your survey. For example, think about how Americans might have responded to a survey asking their opinions on terrorism on September 10, 2001. Now imagine how responses to the same set of questions might differ were they administered on September 12, 2001. The point is not that cross-sectional surveys are useless; they have many important uses. But researchers must remember what they have captured by administering a cross-sectional survey—a snapshot of life as it was at the time that the survey was administered.

One way to overcome this sometimes problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys are those that enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys. Retrospective surveys fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey . The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time the researchers gather data, they ask different people from the group they are describing because their concern is the group, not the individual people they survey. Let’s look at an example.

The Monitoring the Future Study is a trend study that describes the substance use of high school students in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year, NIDA distributes surveys to students in high schools around the country to understand how substance use and abuse in that population change over time. Recently, fewer high school students have reported using alcohol in the past month than at any point over the last 20 years. Recent data also reflect an increased use of e-cigarettes and the popularity of e-cigarettes with no nicotine over those with nicotine. These data points help target substance abuse prevention programs toward the issues currently facing the high school population.
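The logic of a trend study (a fresh sample each wave, with the group rather than any individual as the unit of analysis) can be sketched in a few lines of Python. The data below are invented for illustration, not actual Monitoring the Future figures:

```python
from collections import defaultdict

# Hypothetical repeated cross-sections: a different sample of students
# is surveyed each year, so rows share no respondent IDs across years.
responses = [
    # (year, used_alcohol_past_month)
    (2019, True), (2019, False), (2019, False), (2019, True),
    (2020, False), (2020, False), (2020, True), (2020, False),
    (2021, False), (2021, False), (2021, False), (2021, True),
]

totals = defaultdict(int)
users = defaultdict(int)
for year, used in responses:
    totals[year] += 1
    users[year] += used  # True counts as 1

# Prevalence per year: the trend is the unit of analysis, not any person.
trend = {year: users[year] / totals[year] for year in sorted(totals)}
print(trend)  # {2019: 0.5, 2020: 0.25, 2021: 0.25}
```

Because respondents differ from year to year, only the yearly rates can be compared, not individual trajectories.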

Unlike in a trend survey, in a panel survey the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for, say, 5 years in a row. Keeping track of where people live, when they move, and when they die takes resources that researchers often don’t have. When they do, however, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). Contrary to popular beliefs about the impact of work on adolescents’ performance in school and transition to adulthood, work in fact increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.
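By contrast, panel data keep the same respondent IDs across waves, which is what makes within-person comparisons possible. A minimal sketch with invented IDs and values (not actual YDS data):

```python
# Hypothetical panel data: the SAME respondents appear in every wave,
# so each person can be tracked over time by a persistent ID.
waves = {
    # (person_id, wave_year): hours worked per week
    ("p1", 1988): 5, ("p1", 1989): 10,
    ("p2", 1988): 0, ("p2", 1989): 8,
    ("p3", 1988): 12, ("p3", 1989): 12,
}

# Within-person change is only measurable because IDs persist across waves.
change = {
    pid: waves[(pid, 1989)] - waves[(pid, 1988)]
    for pid in {p for p, _ in waves}
}
print(sorted(change.items()))  # [('p1', 5), ('p2', 8), ('p3', 0)]
```

A trend study could report each year's average hours worked, but only a panel design supports the per-person differences computed here.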

Another type of longitudinal survey is a cohort survey. In a cohort survey , the participants have a defining age- or time-based characteristic that the researcher is interested in studying.  Common cohorts that may be of interest to researchers include people of particular generations or those who were born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common.  In a cohort study, the same people don’t necessarily participate from year to year.  But each year, participants must belong to the cohort of interest.

An example of this sort of research can be seen in Christine Percheski’s work (2008) on cohort differences in women’s employment. Percheski compared women’s employment rates across seven different generational cohorts, from Progressives born between 1906 and 1915 to Generation Xers born between 1966 and 1975. She found, among other patterns, that professional women’s labor force participation had increased across all cohorts. She also found that professional women with young children from Generation X had higher labor force participation rates than similar women from previous generations, concluding that mothers do not appear to be opting out of the workforce as some journalists have speculated (Belkin, 2003).
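A cohort study's defining move is assigning each respondent to a cohort by some time-based characteristic such as birth year. A small illustrative sketch; the cutoffs and labels below only loosely echo Percheski's scheme and are not her exact coding:

```python
# Illustrative birth-year cohorts (not Percheski's exact definitions).
COHORTS = [
    (1906, 1915, "Progressives"),
    (1946, 1964, "Baby Boomers"),
    (1966, 1975, "Generation X"),
]

def cohort_of(birth_year):
    """Map a birth year to a named cohort, or 'Other' if outside all ranges."""
    for start, end, label in COHORTS:
        if start <= birth_year <= end:
            return label
    return "Other"

# In a cohort survey, different people may answer each wave,
# but every respondent must belong to a cohort of interest.
sample = [1910, 1950, 1970, 1972, 2000]
print([cohort_of(y) for y in sample])
# ['Progressives', 'Baby Boomers', 'Generation X', 'Generation X', 'Other']
```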

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. This means that if whatever behavior or other phenomenon the researcher is interested in changes, either because of some world event or because people age, the researcher will be able to capture those changes. Table 7.1 summarizes these three types of longitudinal surveys.

Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty. Imagine, for example, that you’re asked in a survey to respond to questions about where, how, and with whom you spent last Valentine’s Day. As last Valentine’s Day can’t have been more than 12 months ago, chances are good that you might be able to respond accurately to any survey questions about it. But now let’s say the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so she asks you to report on where, how, and with whom you spent the preceding six Valentine’s Days. How likely is it that you will remember? Will your responses be as accurate as they might have been had you been asked the question each year over the past 6 years, rather than asked to report on all years today?

In summary, when or with what frequency a survey is administered will determine whether your survey is cross-sectional or longitudinal. While longitudinal surveys are certainly preferable in terms of their ability to track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. As you may have guessed, the issues of time described here are not necessarily unique to survey research. Other methods of data collection can be cross-sectional or longitudinal—these are really matters of all research design. But we’ve placed our discussion of these terms here because they are most commonly used by survey researchers to describe the type of survey administered. Another aspect of survey administration deals with how surveys are administered. We’ll examine that next.

Administration

Surveys vary not just in terms of when they are administered but also in terms of how they are administered.

Self-administered questionnaires

One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which they are asked to respond. Self-administered questionnaires can be delivered in hard copy format, typically via mail, or increasingly more commonly, online. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or via snail mail. Perhaps you’ve taken a survey that was given to you in person; on many college campuses, it is not uncommon for researchers to administer surveys in large social science classes (as you might recall from the discussion in our chapter on sampling). If you are ever asked to complete a survey in a large group setting, it might be interesting to note how your perspective on the survey and its questions could be shaped by the new knowledge you’re gaining about survey research in this chapter.

Researchers may also deliver surveys in person by going door-to-door and either asking people to fill them out right away or making arrangements for the researcher to return to pick up completed surveys. Though the advent of online survey tools has made door-to-door delivery of surveys less common, it still happens on occasion. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.

If you are not able to visit each member of your sample personally to deliver a survey, you might consider sending your survey through the mail. While this mode of delivery may not be ideal (imagine how much less likely you’d probably be to return a survey that didn’t come with the researcher standing on your doorstep waiting to take it from you), sometimes it is the only available or the most practical option. The drawback, as mentioned, is that it can be difficult to convince people to take the time to complete and return a mailed survey.

Often survey researchers who deliver their surveys through the mail may provide some advance notice to respondents about the survey to get people thinking about and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010).  Other helpful tools to increase response rate are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope.

A cover letter, which introduces the study, explains its purpose and sponsorship, and tells respondents why their views are sought, is also typically included with self-administered questionnaires.

Online surveys are becoming increasingly common, no doubt because they are easy to use, relatively cheap, and may be quicker than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, a researcher may subscribe to a service that offers online delivery or use some delivery mechanism that is available for free. Both SurveyMonkey and Qualtrics offer free and paid online survey services. One advantage to using services like these, aside from the advantages of online delivery already mentioned, is that results can be provided to you in formats that are readable by data analysis programs such as SPSS. This saves you, the researcher, the step of having to manually enter data into your analysis program, as you would if you administered your survey in hard copy format.

Many of the suggestions provided for improving the response rate on a hard copy questionnaire apply to online questionnaires as well. One difference of course is that the sort of incentives one can provide in an online format differ from those that can be given in person or sent through the mail. But this doesn’t mean that online survey researchers cannot offer completion incentives to their respondents. Sometimes they provide coupon codes for online retailers or the opportunity to provide contact information to participate in a raffle for a gift card or merchandise.

Online surveys, however, may not be accessible to individuals with limited, unreliable, or no access to the internet or less skill at using a computer. If those issues are common in your target population, online surveys may not work as well for your research study. While online surveys may be faster and cheaper than mailed surveys, mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The choice of which delivery mechanism is best depends on a number of factors, including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.

Sometimes surveys are administered by having a researcher pose questions verbally to respondents rather than having respondents read the questions on their own. Researchers using phone or in-person surveys use an interview schedule, which contains the list of questions and answer options that the researcher will read to respondents. Consistency in the way that questions and answer options are presented is very important with an interview schedule. The aim is to pose every question and answer option in the very same way to every respondent. This is done to minimize interviewer effect, or possible changes in the way an interviewee responds based on how or when questions and answer options are presented by the interviewer. Survey interviews may be recorded, but because questions tend to be closed ended, taking notes during the interview is less disruptive than it can be during a qualitative interview.
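One way to enforce that consistency is to treat the interview schedule as a fixed data structure that is rendered identically for every respondent. A hypothetical sketch (the questions and options here are made up):

```python
# A minimal interview schedule: questions and answer options are stored
# once and rendered the same way for every respondent, which helps keep
# wording consistent and minimize interviewer effects.
SCHEDULE = [
    {
        "question": "In the past month, how often did you exercise?",
        "options": ["Never", "1-3 times", "Weekly", "Daily"],
    },
    {
        "question": "Do you approve or disapprove of remote work?",
        "options": ["Approve", "Disapprove", "No opinion"],
    },
]

def render(item):
    """Produce the exact script the interviewer reads aloud."""
    lines = [item["question"]]
    lines += [f"  {i}. {opt}" for i, opt in enumerate(item["options"], 1)]
    return "\n".join(lines)

# Every respondent hears the exact same wording in the exact same order.
for item in SCHEDULE:
    print(render(item))
```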

Interview schedules are used in phone or in-person surveys; surveys administered this way are also called quantitative interviews. In both cases, researchers pose questions verbally to participants. Phone surveys make it difficult to control the environment in which a person answers your survey. Another challenge comes from the increasing number of people who only have cell phones and do not use landlines (Pew Research, n.d.). Unlike landlines, cell phone numbers are portable across carriers, associated with individuals rather than households, and do not change their first three digits when people move to a new geographical area. To help manage the logistics of phone surveys, computer-assisted telephone interviewing (CATI) programs have been developed. These programs allow an interviewer to enter responses directly into a computer as they are provided, saving hours of time that would otherwise have to be spent entering data into an analysis program by hand.

Quantitative interviews must also be administered in such a way that the researcher asks the same question the same way each time. While questions on hard copy questionnaires may create an impression based on the way they are presented, having a person administer questions introduces a slew of additional variables that might influence a respondent. Even a slight shift in emphasis on a word may bias the respondent to answer differently. Consistency is key with quantitative data collection—and human beings are not necessarily known for their consistency. Quantitative interviews can also help reduce a respondent’s confusion. If a respondent is unsure about the meaning of a question or answer option on a self-administered questionnaire, they probably won’t have the opportunity to get clarification from the researcher. An interview, on the other hand, gives the researcher an opportunity to clarify or explain any items that may be confusing. If a participant asks for clarification, the researcher must use pre-determined responses to make sure each quantitative interview is exactly the same as the others.

In-person surveys are conducted in the same way as phone surveys but must also account for non-verbal expressions and behaviors. In-person surveys have one distinct benefit—they are more difficult to say “no” to. Because the participant is already in the room and sitting across from the researcher, they are less likely to decline than if they clicked “delete” for an emailed online survey or pressed “hang up” during a phone survey. In-person surveys are also much more time-consuming and expensive than mailing questionnaires. Thus, quantitative researchers may opt for self-administered questionnaires over in-person surveys on the grounds that they will be able to reach a large sample at a much lower cost than were they to interact personally with each and every respondent.

Table 7.2 summarizes the various ways to collect survey data.

Key Takeaways

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one time, and longitudinal surveys are administered over time.
  • Retrospective surveys offer some of the benefits of longitudinal research but also come with their own drawbacks.
  • Self-administered questionnaires may be delivered to participants in hard copy form, either in person or via mail, or online.
  • Interview schedules are used in in-person or phone surveys.
  • Each method of survey administration comes with benefits and drawbacks.
  • Cohort survey: a survey describing how people who share a defining age- or time-based characteristic change over time
  • Cross-sectional surveys: surveys that are administered at just one point in time
  • Interview schedule: the list of questions and answer options that a researcher reads aloud to respondents in a phone or in-person survey
  • Longitudinal surveys: surveys that enable a researcher to make observations over an extended period of time
  • Panel survey: a survey describing how people in a specific group change over time, asking the same people each time it is administered
  • Retrospective surveys: surveys that describe changes over time but are administered only once
  • Self-administered questionnaire: a set of written questions that a research participant is asked to complete on their own
  • Trend survey: a survey describing how people in a specific group change over time, asking different people each time it is administered

Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Designing a questionnaire: Send a personal covering letter

  • A F Bissett
  • Department of Public Health Medicine, Grampian Health Board, Aberdeen AB9 1RE.

EDITOR, - In writing about designing a questionnaire D H Stone only briefly mentions a covering letter for postal questionnaires. 1 The importance of such letters has been debated: one experiment found no significant difference in response rate between a sample of people sent a relatively impersonal letter and a sample sent a relatively personal letter. 2 Another study found that the response rate to a questionnaire was significantly higher when the covering letter was written by the patient's general practitioner than when it was written by a doctor in a research unit. 3 It seems sensible to devote attention to the covering letter, making it personal and attractive and stating the purposes and sponsorship of the study and why the respondent's views are particularly sought. Clear instructions and examples should be given, and a statement about confidentiality should be included. 4

Cartwright has written of the problems of writing about the design of questionnaires without stating the obvious, 5 yet poorly designed questionnaires are common, create irritation, and waste resources. Departmental audit of questionnaires can be educational and can be used to set standards and devise a quick checklist against which new questionnaires can be compared and audited.

  • Smith WCS ,
  • Crombie IK ,
  • Campion PD ,
  • Abramson JH
  • Cartwright A


Questionnaire Design | Methods, Question Types & Examples

Published on 6 May 2022 by Pritha Bhandari . Revised on 10 October 2022.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs surveys
  • Questionnaire methods
  • Open-ended vs closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleaning and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalise your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimising these will help you avoid sampling bias .

Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • Cost-effective
  • Easy to administer for small and large groups
  • Anonymous and suitable for sensitive topics

But they may also be:

  • Unsuitable for people with limited literacy or verbal skills
  • Susceptible to nonresponse bias (most people invited may not complete the questionnaire)
  • Biased towards people who volunteer because impersonal survey requests often go ignored

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • Help you ensure the respondents are representative of your target audience
  • Allow clarifications of ambiguous or unclear questions and answers
  • Have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • Costly and time-consuming to perform
  • More difficult to analyse if you have qualitative responses
  • Likely to contain experimenter bias or demand characteristics
  • Likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions, or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalisable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander
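Whether a closed-ended item's categories are truly exhaustive can also be checked once data start arriving: any answer outside the fixed set signals a missing response option. An illustrative sketch using the categories above:

```python
# Illustrative check that answers to a closed-ended item stay within
# its fixed, mutually exclusive category set.
CATEGORIES = {
    "White",
    "Black or African American",
    "American Indian or Alaska Native",
    "Asian",
    "Native Hawaiian or Other Pacific Islander",
}

def invalid_responses(responses):
    """Return answers outside the fixed category set -- a sign the
    response items are not exhaustive for this sample."""
    return [r for r in responses if r not in CATEGORIES]

collected = ["Asian", "White", "Multiracial"]
print(invalid_responses(collected))  # ['Multiracial']
```

An answer like "Multiracial" turning up suggests the item needs an additional category or a partially closed-ended design.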

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert-type questions collect ordinal data using rating scales with five or seven points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.
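Scoring such a composite is straightforward: reverse-keyed items are flipped so that all items point in the same direction, then the item scores are averaged. A sketch with hypothetical item names (q1 to q4) and one assumed reverse-keyed item:

```python
# Sketch of scoring four Likert-type items (1 = strongly disagree ...
# 5 = strongly agree) as a single composite treated as interval data.
# The item names and the reverse-keyed item are made up for illustration.
def composite(answers, reverse_keyed=("q3",), points=5):
    scores = []
    for item, value in answers.items():
        # Flip reverse-keyed items: on a 5-point scale, 1 <-> 5, 2 <-> 4.
        scores.append(points + 1 - value if item in reverse_keyed else value)
    return sum(scores) / len(scores)

respondent = {"q1": 4, "q2": 5, "q3": 2, "q4": 4}
print(composite(respondent))  # (4 + 5 + 4 + 4) / 4 = 4.25
```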

With interval or ratio data, you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer ‘multiracial’ for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle to productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarising responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorise answers, and you may also need to involve other researchers in data analysis for high reliability .
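As a toy illustration of what a coding scheme might look like, the sketch below assigns categories by keyword matching. Real coding schemes are developed iteratively and applied by trained coders, often with inter-rater reliability checks; the codebook here is entirely invented:

```python
# A deliberately simple keyword-based codebook for open-ended answers
# about obstacles to remote-work productivity (hypothetical categories).
CODEBOOK = {
    "time": ["meeting", "interruption", "schedule"],
    "tools": ["software", "laptop", "internet"],
}

def code_answer(text):
    """Assign every matching code; fall back to 'uncoded' if none match."""
    text = text.lower()
    codes = {code for code, words in CODEBOOK.items()
             if any(w in text for w in words)}
    return sorted(codes) or ["uncoded"]

print(code_answer("Too many meetings and a slow internet connection"))
# ['time', 'tools']
```

A scheme like this only surfaces candidate codes; human review is still needed for answers the keywords miss or mislabel.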

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favour flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barrelled questions. Double-barrelled questions ask about more than one item at a time, which can confuse respondents.

Consider, for example: “Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?” This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead – but neither of these options captures their true answer.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing clean drinking water to everyone?

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

Each question can use the same response scale: Strongly Agree / Agree / Undecided / Disagree / Strongly Disagree.

You can organise the questions logically, with a clear progression from simple to complex. Alternatively, you can randomise the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioural or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimise order effects because they can be a source of systematic error or bias in your study.

Randomisation

Randomisation involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomisation, order effects will be minimised in your dataset. But a randomised order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
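In practice, online survey platforms handle randomisation for you, but the idea can be sketched directly: seed a random generator with the respondent's ID so each person gets their own stable shuffled order. The question names below are placeholders:

```python
import random

QUESTIONS = ["q1_commute", "q2_diet", "q3_housing", "q4_income"]

def question_order(respondent_id):
    """Shuffle question order deterministically per respondent, so the
    same person always sees the same (random) order if they resume."""
    order = QUESTIONS[:]
    random.Random(respondent_id).shuffle(order)
    return order

# Different respondents tend to get different orders,
# spreading order effects across the dataset.
print(question_order("r001"))
print(question_order("r002"))
```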

Follow this step-by-step guide to design your questionnaire.

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalise your variables of interest into questionnaire items. Operationalising concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they may become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivised or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomise questions. Randomising questions helps you avoid bias, but interpreting the resulting data may require more complex statistical analysis.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection, and analysis.

You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered.
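To illustrate why pilot studies are usually underpowered, here is a rough normal-approximation power calculation for a two-sided, two-group comparison of means. The effect size of 0.5 and the group sizes are assumptions chosen purely for the sketch:

```python
import math
from statistics import NormalDist

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Normal approximation of power for a two-sided, two-sample
    comparison of means with a standardised effect size (Cohen's d)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)               # e.g. 1.96 for alpha = 0.05
    return z.cdf(effect_size * math.sqrt(n_per_group / 2) - z_crit)

print(round(approx_power(0.5, 15), 2))   # pilot-sized groups: ~0.28
print(round(approx_power(0.5, 64), 2))   # ~0.81, i.e. roughly 80% power
```

With 15 respondents per group, the chance of detecting a medium effect is under 30%, which is why a pilot study informs feasibility rather than hypothesis testing.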

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.
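A minimal sketch of combining item responses into a single attitude score follows; the item names, the five response labels, and the reverse-keyed item are assumptions made for the example:

```python
# Mapping from five-point Likert labels to numeric scores (assumed labels).
RESPONSES = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
             "Agree": 4, "Strongly agree": 5}

def score_likert(answers, reverse_keyed=()):
    """Sum item scores into one attitude score, reversing negatively
    worded items so that higher always means stronger agreement."""
    total = 0
    for item, answer in answers.items():
        value = RESPONSES[answer]
        if item in reverse_keyed:
            value = 6 - value            # flip on a 1-5 scale
        total += value
    return total

one_respondent = {
    "q1": "Agree",             # 4
    "q2": "Strongly agree",    # 5
    "q3": "Neutral",           # 3
    "q4": "Disagree",          # reverse-keyed: 6 - 2 = 4
}
print(score_likert(one_respondent, reverse_keyed={"q4"}))  # → 16
```

Reverse-keying matters whenever some statements are worded negatively; without it, agreement on those items would cancel out agreement on the rest.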

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered. Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.


Bhandari, P. (2022, October 10). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved 27 May 2024, from https://www.scribbr.co.uk/research-methods/questionnaire-design/


IMAGES

  1. Sample Cover Letter For E3 Evaluation Questionnaire Date

    a cover letter is typically used with self administered questionnaires

  2. How to Address a Cover Letter—20+ Examples & 3 Easy Steps

    a cover letter is typically used with self administered questionnaires

  3. Cover Letter Example For A Job Application 89 Cover Letter Samples

    a cover letter is typically used with self administered questionnaires

  4. How to Write a Cover Letter in 2021

    a cover letter is typically used with self administered questionnaires

  5. Cover Letter Heading Spacing

    a cover letter is typically used with self administered questionnaires

  6. questionnaire cover letter sample

    a cover letter is typically used with self administered questionnaires

VIDEO

  1. What Is The Enclosure Of A Cover Letter?

  2. Lecture 7(1) Business Research Methods Survey Research Tools

  3. Applying For Research Jobs and Not Getting Selected? Try These Expert Cover Letter Writing Tips

  4. Survey Instrument

  5. CIVIL SERVICE REVIEWER 2023

  6. Personally Administered Questionnaires Mgt602 Lecture in Hindi Urdu 13

COMMENTS

  1. Chapter 8: Designing the Questionnaire Flashcards

    false (It is difficult to use screening questions in many self-administered questionnaires, except for computer-assisted surveys.) ... A cover letter is typically used with self-administered questionnaires. true. The process of implementing a survey is standardized for both self-administered and interviewer-completed surveys.

  2. MAR3613 exam 2

    T/F: A cover letter is typically used with self-administered questionnaires. True. False. 4 of 20. Term. T/F: Complex formats or designs can ensure reliable and valid results. True. False. 5 of 20. ... T/F: A cover letter is typically used with self-administered questionnaires. Choose matching definition. True. False. Don't know? 4 of 20.

  3. A guide for the design and conduct of self-administered surveys of

    Potential respondents can be sent an electronic cover letter either with the initial or reminder questionnaires attached or with a link to the an Internet-based questionnaire. Alternatively, the cover letter and questionnaire can be posted on the Web. Incentives can also be provided electronically (e.g., online coupons, entry into a lottery).

  4. PDF Cover Letter for a Survey

    Here is a step-by-step method to write a cover letter. Use a new paragraph for each item. Step 1. State the problem that exists, mentioning the group to which the respondent belongs and how the group is affected by the problem. Explain why the respondent's participation is important. Say the study will benefit the group the recipient belongs to.

  5. Self-Administered Surveys

    1.2.2 Use of languages and/or regional dialects other than the country's official language(s), and any implications for the feasibility of a self-completed questionnaire. Indeed, there are some languages and dialects that do not have a written form. 1.3 Determine how the data entry of returned mail questionnaires will occur.

  6. Self-Administered Questionnaire Method: Definition, Advantages

    A self-administered questionnaire is a data collection tool in which written questions are presented to be answered by the respondents in written form. ... Therefore, investigators must include information on sponsorship, usually in the questionnaire's cover letter. Persuasion. The researcher should appeal to the respondents' goodwill ...

  7. PDF Structured Methods: Interviews, Questionnaires and Observation

    An offer of a copy of the final research report can help in some cases. Ensure that the questionnaire can be returned with the minimum of trouble and expense (e.g. by including a reply paid envelope). Keep the questionnaire short and easy to answer. Ensure that you send it to people for whom it is relevant.

  8. Questionnaire Design

    Revised on June 22, 2023. A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information. Questionnaires are commonly used in market research as well as in the social and health sciences.

  9. How to Conduct Self-Administered and Mail Surveys

    This book provides readers with an answer to this question while giving them all the basic tools needed for conducting a self-administered or mail survey. Updated to include data from the 2000 Census, the authors show how to develop questions and format a user-friendly questionnaire; pretest, pilot test, and revise questionnaires; and write ...

  10. Self Administered Survey: Types, Uses + [Questionnaire Examples]

    One of the most common types of self-administered surveys are mail-in questionnaires. Online questionnaires sent out to respondents via email invitations is another example of a self-administered survey. Apart from paper and online forms, self-administered surveys also come in the form of oral tests. In this case, the researcher gathers all the ...

  11. Self- Administered Questionnaires: Advantages, Disadvantages

    This is also a major disadvantage in self-administered questionnaires. You mail the questionnaire to the respondent's address, thinking he will get the mail, fill in the data, and mail it back to you. However, this is only true in some cases. You need to find out whether the data is filled by the intended person or not.

  12. 7.3 Types of surveys

    One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which they are asked to respond. Self-administered questionnaires can be delivered in hard copy format, typically via mail, or increasingly more commonly, online.

  13. How do you administer questionnaires?

    All questions are standardized so that all respondents receive the same questions with identical wording. Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

  14. Designing a questionnaire: Send a personal covering letter

    Designing a... Designing a questionnaire: Send a personal covering letter. Department of Public Health Medicine, Grampian Health Board, Aberdeen AB9 1RE. EDITOR, - In writing about designing a questionnaire D H Stone only briefly mentions a covering letter for postal questionnaires. 1 The importance of such letters has been debated: one ...

  15. MAR3613 exam 2

    Study with Quizlet and memorize flashcards containing terms like T/F: Questions related to income are considered sensitive in nature., T/F: The number of interviewers required for a study is typically mentioned in the cover letter., T/F: Self-administered surveys have very high response rates. and more.

  16. How do you administer questionnaires?

    Questionnaires can be self-administered or researcher-administered. Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording. Researcher-administered questionnaires are interviews that take ...

  17. Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    This structure of the questionnaires was preferred as it made the data collection in self-administered questionnaires responding to possible answers to closed-ended items without requiring assistance.

  18. Enhancing Self-Administered Questionnaire Response Quality Using Code

    While self-administered questionnaires (SaQ) can facilitate convenient and inexpensive data collection from large, diverse, and representative respondent samples, there are numerous concerns over various aspects of the response quality such instruments produce (Barnette, 1999).

  19. Questionnaire Design

    Revised on 10 October 2022. A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information. Questionnaires are commonly used in market research as well as in the social and health sciences.

  20. CH 7 T/S Flashcards

    Study with Quizlet and memorize flashcards containing terms like In any sampling plan, the first task of a researcher is to choose a method of data collection., In the context of the factors that play an important role in determining sample sizes with probability designs, the higher the level of confidence desired, the smaller the sample size needed?, Since quota sampling is a nonprobability ...

  21. A cover letter is typically used with self-administered questionnaires

    Click here 👆 to get an answer to your question ️ A cover letter is typically used with self-administered questionnaires only. a. True b. ... A cover letter is typically used with self-administered questionnaires only. a. True b. False. loading. See answer. loading. plus. Add answer +5 pts. loading. Ask AI. more. Log in to add comment.

  22. MKTG FINAL T/F Flashcards

    Study with Quizlet and memorize flashcards containing terms like Despite advances in communication systems, the Internet, and software, the principles behind designing questionnaires remain essentially unchanged., The first step in the process of designing a questionnaire is to select the appropriate data collection method., Before beginning the process of designing a questionnaire, the ...

  23. Ch. 7 Quiz Questions Flashcards

    Study with Quizlet and memorize flashcards containing terms like A(n) _____ survey is a self-administered questionnaire posted on a website. A. Internet B. kiosk C. e-mail D. electronic, When an interviewer unintentionally and mistakenly checks the wrong response on a checklist during an interview, this is an example of _____. A. auspices bias B. social desirability bias C. interviewer ...