EXPERIMENTAL RESEARCH
Psychological Experiment

We use the term psychological experiment to refer to investigations in which at least one variable is manipulated in order to study cause-and-effect relationships. We will emphasize experimental research in which the researcher manipulates some factors (variables), controls others, and ascertains the effects of the manipulated variable on another variable. In some experiments the researcher may not manipulate a variable physically but can manipulate it through selection, as in the toy selection example described below. The researchers did not inject androgen into the bloodstream of the girls (although such a procedure might be an even more direct test of the hypothesis), but they did select children whom they had good reason to suspect had a high level of androgen in their blood. In this case, one variable was manipulated through selection rather than through the imposition of a factor. It is this search for relationships between specific events and their consequences (cause and effect) that is so characteristic of scientific research.

The tendency for boys to play with dump trucks, tractors, race cars, and Erector sets, and for girls to play with dolls, doll furnishings, and kitchen equipment, has been known for a very long time. What causes this difference in play between the sexes? How might it be studied and analyzed? What would the results of such a study tell us about gender behavior?

These questions and others were posed by Sheri Berenbaum and Melissa Hines, who work as research psychologists in medical settings.* In an article that appeared in Psychological Science (Berenbaum & Hines, 1992), entitled “Early Androgens Are Related to Childhood Sex-Typed Toy Preferences,” they examined the basis of gender-specific preferences for certain types of toys.

It is well known, not only from the psychological literature but also from common knowledge,** that young boys and young girls are encouraged to play with certain toys and are discouraged from playing with others. Boys who play with dolls, for example, soon learn that such behavior may be seen as unacceptable and they may be labeled “sissies,” while girls who eschew kitchen toys for dump trucks may be labeled “tomboys.” Although social learning and social pressure surely influence what toys children play with, might other forces be operating, such as hormones and/or genetics?

To test this idea, Berenbaum and Hines selected girls with a genetic disorder known as congenital adrenal hyperplasia (CAH), a condition that produces high levels of the hormone androgen, normally found in large concentrations in boys. (Boys with similar conditions were also reviewed in this study.) The researchers then evaluated the amount of time girls with CAH spent playing with “boys’ toys” versus the amount of time spent with “girls’ toys” and “neutral toys.” They discovered that girls who had CAH spent more time playing with cars, fire engines, and Lincoln Logs than did girls with similar environmental backgrounds but without CAH. The authors concluded that “early hormone exposure in females has a masculinizing effect on sex-typed toy preferences.” Thus, one more determinant of sex-linked behavior was identified.
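
To make manipulation-through-selection concrete, the sketch below groups children by a pre-existing condition (the selected variable) and compares a hypothetical outcome measure, minutes spent with “boys’ toys,” across groups. Every number here is invented for illustration; this is not the authors’ data or analysis.

```python
# Hypothetical illustration of manipulation through selection: the grouping
# variable (CAH status) is selected, not imposed, and the outcome measure is
# time spent playing with "boys' toys" in minutes. All values are made up.

play_minutes = {
    "CAH": [14.2, 11.5, 16.0, 13.1, 12.8],
    "comparison": [6.3, 7.9, 5.4, 8.1, 6.7],
}

def mean(values):
    return sum(values) / len(values)

for group, minutes in play_minutes.items():
    print(f"{group}: mean = {mean(minutes):.1f} minutes with boys' toys")
```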

Scientific Methodology

Our approach to the use of scientific methodology in the study of psychology is established on two principles. The first is that scientific observations are based on sensory experiences: we see, hear, touch, taste, and smell the world we live in. These observations, which are made under certain defined circumstances, or controlled conditions as they are called in experimental psychology, should correspond to the observations made by another scientist under comparable conditions. This feature of replicating the results of an experiment is called reliability of results and is a major requisite of scientific credibility.

*Dr. Berenbaum is at the University of Chicago Medical School, Department of Psychology and Department of Psychiatry, and Dr. Hines is at the UCLA (University of California at Los Angeles) School of Medicine, Department of Psychiatry and Biobehavioral Sciences.

**Experimental psychologists do not trust common knowledge but seek scientific validation for beliefs.

However, because our sensory systems are limited in capability as well as scope, many signals outside the range of normal sensitivity remain unnoticed, meaning that those things that are detected take on a disproportionately greater significance. We call this the tyranny of the senses, recognizing that it is difficult to consider the importance of some of the real phenomena in the universe that lie outside the range of unassisted human perception.

Consider electromagnetic forces, which are all around us. At one end of the electromagnetic spectrum are cosmic rays, gamma rays, and X rays, and on the other end are radio waves and television waves. In the middle, between about 380 and 760 nanometers, are waves that are detectable by the human eye. For most of human history, the energy that fell within the visual spectrum was considered to be “reality.” The same constricted view of reality applies to all the other senses. Even though we have become aware of the presence of forms of energy that we cannot experience, we continue to emphasize the sensations that we can detect through the ordinary senses.

Some augmentation of the senses has been achieved through the development of technology. Many instruments and techniques in science are designed primarily to make “visible” those things that are “invisible” to the unaided sensory system. These instruments—the microscope, radio telescope, and spectroscope, for example—translate energy outside the normal range of human detection into signals that can be understood by humans. In psychology, many sophisticated instruments have been developed that allow us to see deep within the psyche of a species and reveal secrets of human and animal life that were invisible, and left to conjecture and speculation, only a few years ago. We will encounter many of these techniques in this book, and we remind the reader that if a technique does not yet exist for a topic of interest, there is no prohibition against inventing one.

The second principle upon which scientific methodology relies is that observations from our senses are organized logically into a structure of knowledge. Frequently in experimental psychology these structures of knowledge are called models. Cognitive psychologists may, for example, develop a model of memory based on their observations of two types of memory and the laws that govern their relationship and the storage of information. The structural web that creates models from observations and turns these models into theories is based on the principles of logic, which in the current context has developed out of the rich history of Western empirical thought. It is not our purpose here to deal in detail with this structural web, but rather to discuss some topics in the philosophy of science that relate to experimental psychology. The interested student can find extensive writings on this topic.

Development of Thoughts and Hypotheses in Experimental Psychology

One of the most difficult tasks confronting beginning students in experimental psychology is to organize their thoughts and develop a testable hypothesis for a given topic. For many reasons, this is difficult not only for the novice but also for the seasoned researcher. A major reason for the difficulty is a lack of knowledge. New research ideas rarely, if ever, erupt spontaneously from an intellectual void. Rather, new ideas and hypotheses usually are built on existing knowledge and past research. Therefore, the best advice on how to develop new thoughts and hypotheses is to immerse yourself in the literature of a branch of psychology that holds some real interest for you.* Read, discuss, investigate, and become well versed in the subject matter. But passive knowledge is not enough. As you acquire knowledge about a topic, question its premises, conclusions, and techniques, and relate them to your knowledge of other matters. The development of new ideas in psychology, as well as in other disciplines, rests on the acquisition of the fundamental elements of a subject and flexibility in thinking about them, which allows one to combine and recombine the elements of thought in increasingly novel and meaningful patterns. New ideas are based on old ideas, new inventions on old inventions, and new hypotheses on old hypotheses. Contrary to popular lore and media fiction, scientific advancements often come from small increments of progress rather than from a single brilliant discovery. Of course, we all aspire to that major scientific breakthrough, and your budding enthusiasm for achieving scientific eminence should not be discouraged. But such profound achievements are rare, and while most research projects fall short of seminal programs, they can nonetheless contribute mightily to the overall growth of scientific knowledge.

To illustrate the point of the accumulation of knowledge, consider an innocent question asked a few years ago by a son of one of the authors: “Who invented the automobile?” Trying to be instructive, the author told the boy that in about 1886 Karl Benz invented the automobile. “Wow, he must have been a real genius to figure out the engine, the brakes, the spark plugs, the wheels, and how everything worked together!” “Well, there were others, such as Henry Ford, R. E. Olds, and Daimler, and someone else invented the tires; I think it was Firestone. And then there was even the person who invented the wheel . . .” But then the author experienced a moment of realization. “I think I may have misled you. No one person invented all of the components of the automobile, any more than a single person invented the television, the theories of memory, or the symphony. Many people made significant discoveries that led to the invention of the automobile, and the same is true of scientific discoveries.”

The development of knowledge in psychology progresses along similar lines. Given an inquiring and creative mind, knowledge, resources, flexibility, dedication, and a determined heart, many important scientific truths lie waiting to be uncovered by future scientists. Past discoveries beget future discoveries, past knowledge begets future knowledge, and, indeed, past wisdom may beget future wisdom.

*Recently, the growth of knowledge in nearly every branch of scientific inquiry, including psychology, has been so rapid that it is difficult for students and professional scientists to keep up with current facts and theories in their field. More and more we are seeing scientists use data banks, which store vast amounts of information in computer memories. As a consequence of the explosion of scientific information, a first step in the experimental process is frequently a computer-assisted search of the literature.

Nonexperimental (but Empirical) Research

Rigid thinking and dogmatism have been the two enemies of creativity and flexibility in research. For years experimental psychology in America eschewed research that did not conform to the paradigm in which a variable is introduced (or inferred) and its consequence observed. The traditional experimental design followed the pattern of finding cause-and-effect relationships between antecedent events and their consequences. But there are many interesting psychological issues that do not lend themselves to this neat experimental paradigm, and we need to investigate these issues with reliable measures. Some of these issues include the buying habits of steel workers in Pittsburgh, the difference in the number of bipolar personalities between Miami and Seattle, and the trends in fashion over the past century. These topics, and hundreds of others like them, are interesting, worthwhile, and important to psychologists; they may be investigated scientifically, studied empirically, and can yield reliable data. The task of the researcher is to make decisions and justify them, the first of which is often what type of experiment or study to conduct, given the particular research question. Therefore, it is important that the student of experimental psychology be familiar with a variety of research methods in order to know when it is (and when it is not) appropriate to use an experimental design.

Operational Definitions

Before a researcher proceeds with an experiment he or she usually has conceptual definitions of the variables to be studied—anxiety, intelligence, ego involvement, drives, distributed practice, and reinforcement, for example. But to do creditable research, which not only communicates effectively with one’s audience but also allows others to replicate one’s work, psychologists must operationally define these concepts by specifying precisely how each is manipulated or measured. An operational definition is a statement of the operations necessary to produce and measure a concept. In other words, it defines a concept in terms of how it is measured. Variables differ considerably in the extent to which they can be operationally defined in a precise manner that retains the full meaning of the concept. On one hand, variables such as the spacing of practice, as used in the Lorge experiment, the delay of feedback, as used in More’s (1969) experiment later in this chapter, or the presence of congenital adrenal hyperplasia (CAH) in young girls, as in the Berenbaum and Hines experiment, are fairly easy to operationally define. On the other hand, psychologists also use abstract concepts such as intelligence, hostility, antisocial behavior, or anxiety, which may be somewhat more difficult to operationally define in a manner that includes the full complexity of the concept and has some empirical basis. What exactly do those terms really mean? Good experimental psychologists insist on the “operational definition” of terms: words that describe concepts in psychology need to be tied to objective circumstances.

Anxiety is a good example of such a variable. Almost everyone has some idea of what anxiety is. There are several dictionary definitions of anxiety, most of which agree that it is a complex emotional state with apprehension as its most prominent component. In attempting to operationally define this variable, researchers have used pencil-and-paper tests, a palmar sweat technique, the galvanic skin response, heart rate measures, and eye movement measures. Each of these operational definitions probably measures some part of anxiety, although none of them measures its total complexity. A researcher must choose and develop an operational definition that is suited to the specific research question. It is absolutely necessary that the variables used in research be operationally defined.
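
As a minimal sketch of the idea, the code below ties the concept “anxiety” to two different measurement operations. The scale items, scoring rule, and heart-rate measure are invented for illustration rather than taken from any instrument named in the text; the point is only that each operational definition states exactly how the concept is measured, and each captures only part of it.

```python
# Two hypothetical operational definitions of "anxiety." Neither is a real
# published instrument; each simply specifies a measurement operation.

from typing import List

def anxiety_questionnaire_score(item_ratings: List[int]) -> int:
    """Operationalize anxiety as the total score on a self-report scale
    whose items are each rated from 0 to 4."""
    if any(r < 0 or r > 4 for r in item_ratings):
        raise ValueError("each item must be rated on a 0-4 scale")
    return sum(item_ratings)

def anxiety_mean_heart_rate(bpm_samples: List[float]) -> float:
    """Operationalize anxiety physiologically, as mean heart rate
    (beats per minute) recorded during a stressful task."""
    return sum(bpm_samples) / len(bpm_samples)

print(anxiety_questionnaire_score([3, 2, 4, 1, 2]))   # 12
print(anxiety_mean_heart_rate([88.0, 92.5, 95.0]))    # about 91.8
```

The two functions would often rank the same people differently, which is exactly the point made above: no single operational definition exhausts the concept.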

Experimental and Control Groups

Whereas in many experiments treatment groups are exposed to different levels of the independent variable, on other occasions an experimental group and a control group are used. Although these experiments can be described using our definition of an independent variable, the concepts are discussed here because they present some unique problems in experimental design. The experimental group is the group that receives the experimental treatment—that is, some manipulation by the experimenter. The control group is treated exactly like the experimental group except that the control group does not receive the experimental treatment. The Spallanzani experiment is a good example of this. The group of female dogs receiving the sperm-free filtrate was the experimental group, and the group receiving the normal semen was the control group. In the next example we look at control and experimental groups where the participants are treated differently based on group assignment.
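
One way to picture the relationship between the two groups is simple random assignment, with everything held constant except the treatment itself. The sketch below is hypothetical: the participant IDs and group sizes are invented, and the treatment is left abstract.

```python
# Hypothetical random assignment to an experimental group (receives the
# treatment) and a control group (treated identically except for the
# treatment itself).

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 made-up participant IDs

random.seed(42)              # fixed seed so the example is reproducible
random.shuffle(participants)

experimental_group = participants[:10]  # receives the experimental treatment
control_group = participants[10:]       # receives everything except the treatment

print("experimental:", experimental_group)
print("control:     ", control_group)
```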

Case Study

Blind people are very adept at avoiding obstacles; however, little was known about how they do this. One hypothesis was that blind people develop “facial vision”; that is, they react to air pressure on exposed parts of the skin. A second theory was that avoidance of obstacles comes through the use of auditory cues. Supa, Cotzin, and Dallenbach (1944) set out to test these theories. They had blind people walk around in a large room in which obstacles (screens) had been set up. Two experimental treatments were used. In the first treatment, blind participants wore a felt veil over their faces and gloves on their hands (thus eliminating “skin perception”). In the second treatment, blind participants wore earplugs (thus eliminating auditory cues). A third treatment was the control treatment, in which blind participants walked around the room as they normally would. The results indicated that participants in the control group and in the felt-veil group avoided the obstacles every time, but the participants in the earplug group bumped into the obstacles every time. Based on these results, the authors concluded that the adeptness of the blind in avoiding obstacles is due primarily to their use of auditory cues and not to any facial vision.

The previous experiment is an abbreviated version of a series of experiments on the perception of sighted and blind subjects. In this example, it is somewhat difficult to specify an independent variable. The experiment is most easily described as having two treatment groups—one in which facial vision is eliminated and one in which auditory cues are eliminated. The control group is treated the same as the other treatment groups except that it does not receive the veil or earplug treatment. The control group provides a baseline to help determine whether the treatments improve or hamper the avoidance of obstacles. The dependent variable in this study, the ability to avoid obstacles when a sensory cue is removed, was measured by the number of times the subjects walked into the obstacles.

Sometimes more than one control group is needed. For example, in pharmacology a placebo control group is frequently used. A placebo group is best described as a group that is made to believe it is getting a treatment that will improve its performance or cure some symptom, when in fact no treatment is provided. This type of control group is also used in testing the effectiveness of therapy. Consider the following example drawn from the psychological literature.
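
To show how such a dependent variable might be tabulated, the sketch below counts obstacle collisions per condition across a handful of walks. The trial records are invented placeholders, not the published data of Supa, Cotzin, and Dallenbach (1944).

```python
# Hypothetical trial records: (condition, collided_with_obstacle) pairs.
# The counts below are placeholders, not the published results.

trials = [
    ("control", 0), ("control", 0), ("control", 0),
    ("felt veil + gloves", 0), ("felt veil + gloves", 0), ("felt veil + gloves", 0),
    ("earplugs", 1), ("earplugs", 1), ("earplugs", 1),
]

collisions = {}
walks = {}
for condition, hit in trials:
    collisions[condition] = collisions.get(condition, 0) + hit
    walks[condition] = walks.get(condition, 0) + 1

# The dependent variable: how often participants walked into an obstacle.
for condition in walks:
    rate = collisions[condition] / walks[condition]
    print(f"{condition}: {collisions[condition]} collisions in "
          f"{walks[condition]} walks ({rate:.0%})")
```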

Case Study

Paul (1966) conducted an experiment to test the effectiveness of two types of therapy in treating speech phobia. His subjects were students enrolled in public speaking classes at a large university. Paul took 67 students who had serious performance problems in the course and assigned them to one of four conditions. One group of 15 participants received a form of behavior therapy. A second group of 15 participants received an insight therapy. A third group of 15 participants received placebos in the form of harmless and ineffective pills, and were told that this would cure them of their problems. A fourth group of 22 participants was informed that they would not be given any treatment and simply answered questionnaires that were also given to the other three groups. All participants had to give one speech before the treatment began and one after the treatment had been completed. The dependent variable was the amount of improvement shown by the participants from the first to the second speech, based on ratings made by four clinical psychologists. The four psychologists were not involved in the treatment the participants received, nor did they know which participants were in which treatment group. The results indicated that 100 percent of the behavior therapy participants improved, 60 percent of the insight therapy participants improved, 73 percent of the placebo participants improved, and 32 percent of the no-treatment control participants improved.

The Paul experiment illustrates the need, in some experiments, for different types of control groups. The interpretation of the results would have been quite different if Paul had not used a placebo control group. Without it, insight therapy would have appeared to be effective in improving speech-giving difficulties. With the placebo group included in the design, however, it appears that the insight therapy was ineffective as a therapy and may have acted only as a placebo; in fact, the placebo group’s performance improved more than the insight therapy group’s performance. The experiment also points out the need for a no-treatment control group, as over 30 percent of the subjects in this group improved even though they received no treatment. This group provides a baseline against which to measure the effectiveness of a treatment compared to no treatment at all.
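
The comparison that drives this interpretation is simple arithmetic on the reported improvement rates; a short sketch, using the percentages quoted above, makes the two baselines explicit.

```python
# Improvement rates reported for Paul (1966), expressed as proportions.
improvement = {
    "behavior therapy": 1.00,
    "insight therapy": 0.60,
    "placebo": 0.73,
    "no treatment": 0.32,
}

placebo = improvement["placebo"]
no_treatment = improvement["no treatment"]

# Compare each therapy against the two baselines: the placebo group
# (expectation effects) and the no-treatment group (spontaneous improvement).
for group in ("behavior therapy", "insight therapy"):
    rate = improvement[group]
    print(f"{group}: {rate:.0%} improved, "
          f"{rate - placebo:+.0%} vs. placebo, "
          f"{rate - no_treatment:+.0%} vs. no treatment")
```

Run as written, the sketch shows behavior therapy exceeding both baselines while insight therapy falls below the placebo rate, which is exactly the pattern described above.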

Different types of control groups are used in different areas of research. Researchers who remove a part of the brain of animals and use the animals as an experimental group sometimes use a control group that undergoes all of the surgical procedures except that the brain is left alone. This would control for a factor such as postoperative shock causing the effect found in the experimental group. The point to remember is that the control group must be treated exactly like the experimental group except for the specific experimental treatment.

The Paul experiment also illustrates an important control procedure used to avoid experimenter bias. The psychologists who rated the subjects’ speaking performances were not the same people who treated the subjects in therapy, nor did they know which subjects were in which experimental group. It is reasonable to assume that therapists might be biased when it comes to evaluating the improvement of their own patients. Furthermore, the four judges might prefer a particular therapy, and if they knew which subjects had received this therapy, they might be prone to see more improvement in these subjects than in subjects in the other experimental conditions. Or perhaps the judges would have assumed that the subjects in the no-treatment control group could not have improved and would therefore rate those people’s performances accordingly. Paul controlled for these potential biasing effects by using independent judges and keeping the judges blind as to which experimental group a particular subject belonged to.

The term blind is used in a special sense in experimental research. Single blind usually means that the participants in an experiment are not informed as to which treatment group they are in and might not be informed as to the nature of the experiment. Double blind is frequently used in drug research or any research that involves observers who are judging the performance or progress of the participants in an experiment. Both the judges and the participants are kept blind as to the type of experimental treatment that is being used as well as the type of effect that might be expected. The case study we have selected illustrates the influence of experimenter bias in experimental psychology and the serious implications it can have for psychology and other scientific studies.
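
In practice, blinding is often enforced at the level of the data: raters see only coded participant IDs, and the code key is held by someone not involved in the rating. The sketch below is a hypothetical illustration of that bookkeeping, not a procedure from any study described here.

```python
# Hypothetical double-blind bookkeeping: judges receive only coded IDs,
# so the records they see never reveal group membership; the code key is
# applied only after all ratings have been collected.

import random

assignments = {"P01": "treatment", "P02": "placebo", "P03": "no treatment"}

random.seed(7)  # fixed seed so the example is reproducible
code_numbers = random.sample(range(100, 1000), k=len(assignments))
codes = {pid: f"S{n}" for pid, n in zip(assignments, code_numbers)}

# Judges see only the coded IDs, never the group labels.
print("judges rate:", sorted(codes.values()))

# After ratings are in, the key links codes back to groups for analysis.
decode = {code: assignments[pid] for pid, code in codes.items()}
print("decoding key:", decode)
```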

