Internal Consistency – The degree to which different questions (for example, in a questionnaire) measure the same problem or subject. The mutual coherence of the questions.
Interrater Reliability – Syn: Interjudge Reliability/Intersubjectivity. See: Interjudge Reliability
Intersubjectivity – Syn: Interjudge Reliability/Interrater Reliability. See: Interjudge Reliability
Method of Item Analysis – A method of measuring the reliability of a test. Its basis is that all test questions are considered separate tests; thus, a test consisting of 45 questions is viewed, theoretically, as 45 separate tests. The mutual relations among the 45 questions are investigated, which permits a calculation of the coherence between them. The result is expressed as a number known as the homogeneity index. See also: Coefficient of item-consistency
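The entry above does not specify how the homogeneity index is computed. As an illustrative sketch only, one simple candidate is the average correlation over all pairs of items; the item scores below are invented.

```python
# Invented item scores: one row per respondent, one column per question
# (1 = correct/agree, 0 = incorrect/disagree).
scores = [
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Treat each question as a separate "test": one column of scores per item.
items = list(zip(*scores))

# One simple homogeneity index: the mean correlation over all item pairs.
pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
homogeneity = sum(pearson(items[i], items[j]) for i, j in pairs) / len(pairs)
print(f"homogeneity index: {homogeneity:.2f}")
```

Note that an item answered identically by everyone has zero variance and would make `pearson` divide by zero; real item-analysis procedures handle such cases explicitly.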
Parallel form Method – A method of determining the reliability of a test or questionnaire. A procedure in which the same persons are studied twice using material considered to be equivalent. Maximum similarity between the two sets of material is sought. The correlation-coefficient obtained in this manner is called the coefficient of equivalence. See also: Correlation-coefficient/Learning effect
Predictive Validity – The adequacy of a test or questionnaire in accurately indicating the future behavior of the person tested, for example, future purchasing behavior.
Reliability – The degree to which measurements can be repeated. A reliable questionnaire is one in which the results remain stable. In general, this reliability increases with an increase in the number of questions.
Reliability Coefficient – A reliability coefficient is a correlation-coefficient. It involves the correlation between, for example, two segments of a test or questionnaire. See also: Correlation-coefficient
Split-half Method – A method of determining the reliability of a measuring instrument. The measuring instrument is split into two parts, for example, the first half and the second half. A theoretical problem that arises in this connection is that, in fact, only the internal coherence between parts is being measured and, quite naturally, such coherence does not necessarily directly indicate reliability in every instance. The correlation-coefficient obtained in this manner is termed the split-half reliability coefficient. See also: Correlation-coefficient
Split-half Reliability Coefficient – The correlation-coefficient between two segments of a test. It is a measure of the coherence between these two halves, expressed as a number. See also: Split-half Method/Correlation-coefficient
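As a minimal sketch of the split-half calculation: assume the test has already been divided into two halves and each person's score on each half is known (the scores below are invented).

```python
# Invented scores of six persons on the two halves of the same test.
first_half = [12, 15, 9, 20, 14, 17]
second_half = [11, 16, 10, 18, 15, 16]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The split-half reliability coefficient is simply the correlation
# between the scores on the two halves.
split_half_r = pearson(first_half, second_half)
print(f"split-half reliability coefficient: {split_half_r:.2f}")
```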
Test Stability – The extent to which a test (value) or questionnaire remains stable and unaltered and is, therefore, not a product of chance (apart from slight fluctuations).
Test-retest Method – A method of determining the reliability of a test or questionnaire. The same group of persons is, after a specific period of time, confronted with the same material. To what extent do the results of the two measurements correspond? The greater the similarity between the two measurements, the greater the reliability of the research project. This method is simple; however, it has a disadvantage: people become familiar with the research material over time. This can be countered to some degree by extending the time lapse between the measurements, although that may in turn cause timing problems. The correlation measure calculated by means of this method is termed the coefficient of stability. See also: Correlation-coefficient/Learning effect
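The coefficient of stability can be sketched the same way: correlate the scores of the same persons on the two administrations (the scores below are invented).

```python
# Invented scores of the same five persons on the same questionnaire,
# administered twice with a time lapse in between.
first_measurement = [30, 25, 41, 35, 28]
second_measurement = [32, 24, 39, 36, 30]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The coefficient of stability: correlation between the two measurements.
stability = pearson(first_measurement, second_measurement)
print(f"coefficient of stability: {stability:.2f}")
```

A value near 1 indicates that the two measurements largely agree; a learning effect between administrations would distort this figure.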
Validity – The extent to which a test or questionnaire meets its intended purpose. For example, a question in connection with buying intentions should measure just buying intentions (and not some other factor). See also: Psychometric/Face validity
Validity Coefficient – Validity expressed in terms of a number. It is the correlation-coefficient between the predictor (for example, a test) and the criterion (for example, the examination grade obtained). The correlation-coefficient indicates the magnitude of this relation by means of a number between −1 and +1. See also: Correlation-coefficient/Validity
Artifacts – The result of a study that is not the effect of what is being measured but that emanates from another source. This source may be situated outside the study or it may be an unknown source from within the research situation itself.
Bias – The distortion of the results of a research project as a result of either systematic or random (coincidental) errors in the design of the sample, the questionnaire, the processing, the analysis, etc. The term bias is also meant to signify the distortion that may occur in the answers of the subjects interviewed as a result of the influence of the interviewer or similar factors.
Contamination – In some cases the researcher becomes so involved in the experimental research that subjectivity colors his or her judgment. For example, sympathy for specific respondents “contaminates” the research results. See also: Subjectivity
Demand Characteristics – Any suggestion in a research survey that may give the respondent an idea of the intention behind the survey. The respondent thereby forms a picture of what is expected of him and will give biased answers, for example, socially desirable answers. See also: Socially desirable answer
Distortion – Systematic interference in an (experimental) research project.
Environmental Factors – Factors that influence research results in some subtle way but of which (usually) no account has been taken. They are often difficult to quantify. For example, the brand awareness of a certain product is measured periodically; suddenly the product is in the news a great deal because, say, it is suspected of containing something likely to cause cancer.
Error – Research errors of any kind resulting from any cause.
Error in Notation – One of the many errors that can be made during a study or experiment. For example: the interviewer accidentally marks down the answer as “yes” when it was in fact “no”.