Social Research Methods/Experiments

From Wikibooks, open books for an open world

The Classical Experiment[edit | edit source]

The most conventional experiment consists of three main parts:

    1) Independent and Dependent Variables
    2) Pre-testing and Post-testing
    3) Experimental and Control Groups

Independent and Dependent Variables

    Independent variable- a variable manipulated by the experimenter, whose presence or degree is expected to produce a change in the
    dependent variable.
    Dependent variable- the variable being studied in the experiment; it is expected to change when the independent variable changes.
  • Thus, a typical experiment will examine the effects of the independent variable on the dependent variable.
  • the independent variable will usually be the "experimental stimulus."
  • it can often be treated as a dichotomous variable
ex. having two attributes
 -present or absent
  • the roles of independent and dependent variable are not fixed.
ex. a variable may be the independent variable in one study but serve as a dependent variable in another experiment.
  • it is very helpful and important to operationally define the independent and dependent variables in your research, and this should be done at the beginning of any experiment.

Pre-testing and Post-testing

    Pre-testing- the initial measurement of a dependent variable among subjects, before exposure to the independent variable.
    Post-testing- the re-measurement of the dependent variable among the same subjects, after they have been exposed to the independent variable.
  • once pre-testing and post-testing are conducted, any difference between the first and final measurements is attributed to the effect of the independent variable.

Subjects tend to undergo behavioral changes as a result of participating in an experiment. Examples of such changes include:

  • The Hawthorne Effect- Subjects perform better when they are in an experiment
  • Demand Characteristics- Subjects try to give the answer that they think is "correct" rather than the honest answer
  • Placebo Effect- Subjects respond to the belief that they are receiving a drug, regardless of whether they actually are or not

Experimental and Control Groups

    Experimental Group- a collection of subjects to whom the independent variable is administered.
    Control Group- a collection of subjects that do not receive the independent variable but should mimic the experimental group.
    The comparison of both groups at the conclusion of the experiment will point out the effects that the independent variable has had.
  • when doing experimental research, it is very important to observe the experimental and control groups very closely.
  • using a control group helps a researcher attribute changes to the independent variable, by making such changes seem more obvious.
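The pre-test/post-test comparison between the two groups can be sketched in a few lines of Python. The scores and group sizes below are invented for illustration only; the logic simply follows the text: compute each group's change between the two measurements, then attribute the difference between those changes to the stimulus.

```python
# Hypothetical pre-test and post-test scores; all numbers are invented.
experimental_pre = [10, 12, 11, 13]
experimental_post = [15, 16, 14, 17]
control_pre = [11, 10, 12, 12]
control_post = [12, 11, 12, 13]

def mean(xs):
    return sum(xs) / len(xs)

# Change within each group between the first and final measurements.
exp_change = mean(experimental_post) - mean(experimental_pre)
ctrl_change = mean(control_post) - mean(control_pre)

# The control group's change reflects outside influences (history, maturation,
# the Hawthorne effect); subtracting it isolates the stimulus's apparent effect.
estimated_effect = exp_change - ctrl_change
```

Here the experimental group improves by 4.0 points while the control group drifts up only 0.75, so 3.25 points of improvement are attributed to the stimulus rather than to outside influences.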

(Insert graph pg 234)

The Double-Blind Experiment

    Double-Blind experiment- an experimental design in which neither the subjects nor the researchers know which group is the
    experimental group and which is the control group.
  • using a double-blind design reduces bias introduced into the results by the researchers' expectations.
ex. If you know which group is the experimental group you may pay more attention to that group, potentially to the extent that you ignore the control group entirely.
    This will cause a problem at the end of the experiment, because you will not be able to witness or analyze the full effects that the independent variable has had.
  • Pygmalion Effect- people perform better when more is expected of them

Selecting Subjects[edit | edit source]

  • For example, college students are frequently used in experiments. While they are a relatively easy group to access and study, one concern is generalizability: college students are not representative of the population at large, so findings based on them may not hold for other groups.
  • Probability sampling, randomization, and matching are methods of attaining comparability between the experimental and control groups.
  • Randomization is the preferred method.
  • However, randomization and matching may be used together.

Probability Sampling

  • one begins with a sampling frame containing the entire population involved in the study. Then the researcher selects two samples that should mirror each other.
  • the degree of resemblance (representativeness) will be a product of the sample size.
  • this type of sampling is seldom used in experiments.

Randomization- a process for selecting people to be in a control or experimental group. Randomization is preferable because it limits the potential bias (systematic error) in the experiment, as it provides an equal likelihood that "good" and "poor" performers will be in the experimental group. However, there is still a chance that more of one category may end up in a given group. The best way to overcome this is a large sample size; hence, randomization is ideal when the population is very big.

-there are several ways of randomly selecting people for a control or experimental group; for example:

 1) Out of a sample of 1600, you can select every 8th person for each group. (selection rate of 1/8)
 2) Out of a sample of 100 you can select every other person to be in each group. (selection rate of 1/2; a higher rate yields a better likelihood of representativeness, and you can afford a higher rate when the sample size is small)
  • Regardless of the way that the researcher decides to place subjects into each group, this process must be done in a fair and equal manner, because each sample will be a reflection of the total population's characteristics.
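A minimal sketch of random assignment in Python, assuming the simplest approach of shuffling the pool and splitting it in half. The subject IDs, the fixed seed, and the 50/50 split are assumptions for illustration, not part of the text.

```python
import random

def randomize(subjects, seed=0):
    """Randomly assign subjects to an experimental and a control group."""
    pool = list(subjects)
    rng = random.Random(seed)  # seeded here only so the sketch is reproducible
    rng.shuffle(pool)          # every subject gets an equal chance at either group
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental, control)

subjects = list(range(100))  # hypothetical subject IDs
experimental, control = randomize(subjects)
```

Because every ordering of the pool is equally likely, "good" and "poor" performers are equally likely to land in either group, which is the point of randomization.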

Matching

    Matching- a process in which subjects are paired based on the similarities of one or more variables. One member of the pair is assigned to the experimental group while the other is assigned to a control group.
  • matching is a way to compare the experimental and control groups.
  • matching is more efficient if a quota matrix is constructed for all of the relevant characteristics.
    Relevant characteristics - attributes that are related to the dependent variable.
  • the overall average description of the experimental and control group should be the same.
  • As a rule of thumb, both the control and experimental groups should have the same ages, gender and racial composition, etc.
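The pairing procedure can be sketched as follows. This is only one simple way to form matched pairs (sort on a relevant characteristic and pair adjacent subjects); the subjects and the choice of age as the relevant characteristic are invented for illustration.

```python
def match_pairs(subjects, key):
    """Sort subjects on a relevant characteristic, pair adjacent ones,
    and send one member of each pair to each group."""
    ordered = sorted(subjects, key=key)
    experimental, control = [], []
    for i in range(0, len(ordered) - 1, 2):
        experimental.append(ordered[i])      # one member of the pair
        control.append(ordered[i + 1])       # its closest match
    return experimental, control

# Hypothetical subjects described only by age (an assumed relevant characteristic).
people = [{"id": n, "age": a} for n, a in enumerate([34, 21, 40, 22, 35, 41])]
exp_group, ctrl_group = match_pairs(people, key=lambda p: p["age"])
```

Each pair differs by only a year of age, so the two groups end up with nearly identical average descriptions on that characteristic.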

Matching or Randomization?

Randomization over Matching

  • a researcher may not know which variables are relevant for the matching process, and in that case should turn to randomization
  • most statistical calculations used in analyzing experimental results assume that subjects were assigned randomly.

Combining Randomization and Matching

  • matching subjects on known relevant variables and then randomly assigning within each matched pair combines the strengths of both methods.
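One common way to combine the two methods is to form matched pairs and then flip a coin within each pair. The sketch below assumes age as the matching variable and uses a fixed seed only for reproducibility; both are illustrative assumptions, not from the text.

```python
import random

def match_then_randomize(subjects, key, seed=0):
    """Pair subjects on a relevant characteristic, then randomly decide
    which member of each pair joins which group."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    ordered = sorted(subjects, key=key)
    experimental, control = [], []
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)                 # the coin flip within the pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

ages = [34, 21, 40, 22, 35, 41]  # hypothetical subjects, identified by age
exp_group, ctrl_group = match_then_randomize(ages, key=lambda a: a)
```

Matching keeps the groups comparable on the known relevant variable, while the within-pair coin flip preserves the random assignment that statistical analyses assume.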

Variations on Experimental Design[edit | edit source]

Pre-experimental Research Designs

  • Donald Campbell and Julian Stanley introduced three "pre-experimental" designs:
  1) One-Shot Case Study
    -a researcher measures a "single group of subjects on a dependent variable following the administration of an independent
     variable" (cite 238)
    -reflects the kind of causal reasoning used in everyday life
  2) One Group Pre-test|Post-test Design
    -adds a pre-test for the experimental group but does not contain a control group.
    -is difficult to analyze because it "suffers from the possibility that some factor other than the independent variable might
     cause a change between the pretest and post-test results." (cite 238)
    -nevertheless provides somewhat better evidence of the variables' influence than the one-shot case study.
  3) Static Group Comparison
    -a design with experimental and control groups but no pretests for either.
    -without pretests, observed differences between the groups cannot be confidently attributed to the independent variable.

Validity Issues in Experimental Research[edit | edit source]

Sources of Internal Invalidity

    Internal Invalidity- the possibility that conclusions drawn from experimental results may not accurately
    reflect what went on in the experiment itself.
  • Campbell, Stanley, and Thomas Cook pointed out several sources of internal invalidity:
  1) History
    -historical events may occur during the experiment and alter the results.
  2) Maturation
    -subjects continually grow and change during the experiment, and those changes can be reflected in the results.
  3) Testing
    -the processes of testing and re-testing may influence people's behavior.
  4) Instrumentation
    -if different ways of measuring the variables are used in the pretest and post-test, how can we be sure the two
     measurements are comparable?
  5) Statistical Regression
    -subjects selected for extreme scores tend to score closer to the mean when re-measured; this drift, called regression
     to the mean, can masquerade as a treatment effect.
  6) Selection Biases
    -Comparisons do not have any meaning unless you have equally and fairly selected individuals for the experimental and control
     group.
  7) Experimental Mortality
    -results may differ because some subjects drop out of the experiment before it is finished.
  8) Causal Time Order
    -to assert that A caused B, we must be able to confirm that A occurred before B
  9) Diffusion or Imitation of Treatments
    -results may differ because of uncontrolled communication between the experimental and control groups, through which the
     treatment spreads.
  10) Compensation
    -people in the control group may be given compensating benefits to make up for being denied the resources the experimental
     group is receiving, which muddies the comparison.
  11) Compensatory Rivalry
    -the control group may work harder to make its results better than the experimental group's results.
  12) Demoralization
    -feelings of deprivation among the control group may lead its members to give up.

Sources of External Invalidity

    External Invalidity- the possibility that conclusions drawn from experimental results may not be generalizable to
    the "real world."
  • Campbell and Stanley identify the interaction of pre-testing with the experimental stimulus as a source of external invalidity, along with designs that guard against it:
  • Solomon Four-Group Design- combines pre-tested and un-pre-tested experimental and control groups, untangling the interactions of pre-testing, post-testing, and the stimulus.
  • Post-test-Only Control Group Design- omits pre-tests entirely, so that testing cannot interact with the stimulus.

Internal validity

– the ability to eliminate alternative explanations of the treatment effect

- a combination of validity and reliability issues

- represents the possibility that the results of an experiment do not accurately reflect what occurred in the experiment

- classical experiments tend to eliminate virtually all threats to internal validity

External validity

- the ability to generalize experimental findings to events and settings outside the experiment itself

- results of the experiment may not translate to the real world

Alternative Experimental Settings[edit | edit source]

Web-Based Experiments

  • nowadays researchers are using the internet for conducting experiments.

Reasons:

   -cheaper
   -less time consuming

Natural Experiments

  • occur in the real world rather than the laboratory; they happen often, but the researcher controls neither the stimulus nor the subjects.

Strengths and Weaknesses of the Experimental Method[edit | edit source]

Weaknesses

  • artificial
  ex. what happens in the experiment may not take place in the real world.

Strengths

  • isolation
  ex. the effect of the independent variable on the dependent variable is isolated from outside influences. This makes changes
      easier to spot and conclusions easier to draw.
  • relative ease of replication
  • scientific rigor

Ethics and Experiments[edit | edit source]

  • experiments involve misleading subjects
  • experiments may potentially cause harm to individuals

Avoid biased Items

One issue with measuring prejudice is the lack of a single correct definition; how prejudice is defined depends on the circumstances. In questionnaires, a biased item is one whose wording encourages the respondent to answer in a particular way. Ex. "Don't you agree with the CEO of Apple..." pushes the respondent toward agreeing with Apple's CEO. Such wording usually inflates support for the position, at the cost of corrupting the results. Wording choices alone can shift support:

More support vs Less support

"Halting rising crime rate" vs "Law enforcement"; "Dealing with drug addiction" vs "Drug rehabilitation"; "Scholarly financial support" vs "Financial aid"

A related source of bias is social desirability: people are prone to give the answer that makes them look good, which can keep participants from revealing their true thoughts, especially when questioned face-to-face. To counter this, a questioner should avoid questions that make the respondent feel embarrassed, inhumane, perverted, stupid, or socially disadvantaged. Hypothetical questions can also elicit biased responses, because the respondent is answering a question that has no direct consequence for him or her. The use of proper or specific names has also been shown to introduce bias, with more positively valued names attributed to men than to women.

Experiments involve taking action and then observing the consequences of that action. Experiments seek to answer the question: how do subjects change as a result of the experimental treatment? They fundamentally address causality; however, establishing causality in experiments is not necessarily an easy process. Experiments are infrequently used in sociology. Within the social sciences, they are more often used in social psychology. They are frequently used in the natural sciences and medicine.

Experiments are typically well-suited for projects that involve limited and well-defined concepts and propositions. They are better suited for explanatory purposes rather than descriptive ones.

The classical experiment design serves as the foundation for all modern experiments. Major components of this method include independent and dependent variables, pre-testing and post-testing, and experimental and control groups.

Validity issues in experimental research - Internal validity refers to the possibility that the conclusions drawn from experiment results may not accurately reflect what went on in the experiment itself. External validity refers to the possibility that conclusions drawn from experimental results may not be generalizable to the "real" world.


Internal validity is the ability to eliminate alternative explanations of the treatment effect. It is really a combination of validity and reliability issues. Internal validity represents the possibility that conclusions drawn from experimental results may not accurately reflect what happened in the experiment itself. Sources include history, maturation, testing, instrumentation, statistical regression, selection bias, experimental mortality, causal time order, diffusion or imitation of treatments, compensation, compensatory rivalry, and demoralization. Classic experimental designs eliminate virtually all threats to internal validity.

External validity is the ability to generalize experimental findings to events and settings outside the experiment itself. It represents the possibility that conclusions drawn from experimental results may not be generalizable to the "real" world. A general problem with experiments is that subjects are rarely recruited using probability sampling techniques. Another problem is the artificiality of the experimental setting. Internal and external validity are both very important when it comes to experiments.

Summary of the Classical Experiment[edit | edit source]

- Basis for all modern experiments

- Major components of classical experiment

  • Independent and dependent variables
  • Pre-testing and post-testing
  • Experimental and control groups

In social research, experiments are a mode of scientific observation. Experiments involve taking action and observing the consequence of that action. Experiments are more appropriate for some topics and research purpose than others. Experiments are especially well suited to research projects involving relatively limited and well defined concepts and propositions. Because experiments focus on determining causation, they are also better suited to explanatory than to descriptive purposes.

The most conventional type of experiment involves three major pairs of components: independent and dependent variables, pretesting and posttesting, and experimental and control groups. An experiment examines the effects of an independent variable on a dependent variable. The independent variable takes the form of an experimental stimulus which is either present or absent. The experimenter compares what happens when the stimulus is present to what happens when it is not. To be used in an experiment, both independent and dependent variables must be operationally defined, and this must be done before the experiment begins.

In experimental design, subjects are measured on a dependent variable (pretesting), exposed to a stimulus representing an independent variable, and then remeasured on the dependent variable (posttesting). Any difference between the first and last measurements on the dependent variable is then attributed to the independent variable. The experimental group is a group of subjects to whom an experimental stimulus is administered. The control group is a group of subjects to whom no experimental stimulus is administered and who should resemble the experimental group in all other respects. The comparison of the control group and the experimental group at the end of the experiment points to the effect of the experimental stimulus.

Pre-testing: the measurement of a dependent variable among subjects

Post-testing: the measurement of a dependent variable among subjects after they have been exposed to an independent variable.

  • Post-testing allows researchers to measure whether the experimental stimulus produced a change

Hawthorne Effect: subjects perform better when in an experiment

Demand Characteristics: subjects try to give the answer they think is correct

Placebo Effect: subjects respond to the belief that they are receiving a drug

Experimental group: a group of subjects to whom the experimental stimulus is administered

Control group: a group of subjects to whom no experimental stimulus is administered but who should resemble the experimental group in all other respects

The Double-Blind Experiment: an experimental design in which neither the subjects nor the experimenters know which is the experimental and which is the control group

Selecting Subjects:

  • Randomization: a technique for assigning experimental subjects to experimental and control groups
  • Matching: the procedure whereby pairs of subjects are matched on the basis of their similarities on one or more variables and one member of the pair is assigned to the experimental group and the other to the control group
  • Quota matrix: a table of subjects' relevant characteristics, used to locate matching pairs so that one member of each pair can be assigned to the experimental group and the other to the control group

Variations on Experimental Design

  • One-shot case study: a single group of subjects is measured on a dependent variable following an experimental stimulus
  • One-group pre-test post-test design: a pre-test is added for the experimental group, but lacks a control group
  • Static-group comparison: includes experimental and control groups, but no pre-test

Validity in Experimental Design

  • Internal Validity: the ability to eliminate alternative explanations of a treatment effect (i.e., a combination of validity and reliability issues). Internal validity represents the possibility that conclusions drawn from experimental results may not accurately reflect what happened in the experiment itself.
  • External Validity: the ability to generalize experimental findings to events and settings outside the experiment itself. External validity represents the possibility that conclusions drawn from experimental results may not be generalizable to the real world.

Strengths of Experimental Method:

  • Isolation of experimental variable's impact over time
  • Replication

Weaknesses of Experimental Method:

  • Artificiality of laboratory setting
  • Cost
  • Ethics


Ethics and Experiments

As with other methods for conducting social research, there are ethical considerations to take into account when creating and carrying out an experiment.

1) Deception should only be used if it is necessary for the purposes of the research; that is, it must be confirmed that there is no way of getting around the use of deception. Additionally, deception should only be used when the potential benefits of the research outweigh the risks of deceiving subjects. Using deception is considered an ethical violation, so its use must be essential, and the research in which it is used must have the potential for valuable, meaningful findings.

2) If the experiment must be intrusive in some way in the participants' lives, precautions should be taken so that they will not be physically or psychologically harmed. The potential value should, again, outweigh the possible risks of such intrusion.