
Chapter 2: BEHAVIOURAL LEARNING THEORIES


Upon completion of this chapter, you should be able to:
1. Define behaviourism;
2. Explain classical conditioning;
3. Explain operant conditioning;
4. Give everyday examples of classical conditioning in daily life;
5. Identify the characteristics of Thorndike's theory of learning;
6. Describe the principles of operant conditioning;
7. Discuss the application of operant conditioning in teaching and learning.

CHAPTER OVERVIEW

Chapter 2 examines behavioural theories of learning, the origin of the perspective and its proponents. Behaviourism was proposed by J.B. Watson, who was inspired by the work of Ivan Pavlov, and it dominated psychology until the 1950s. It emphasised the need for the scientific study of learning, focusing on behaviours that were observable. The main proponents of behaviourism are Watson, Thorndike and Skinner, who essentially worked with animals and applied their theories in explaining human behaviour. Behaviourism has had a significant impact on teaching and learning in schools and training organisations, and continues to do so.

This chapter covers the following topics:
2.0 Introduction
2.1 Classical Conditioning
2.2 Classical Conditioning in Daily Life
2.3 Behaviourism
2.4 Watson's Experiment with Little Albert
2.5 Classical Conditioning in the Classroom
2.6 Connectionism: Edward Thorndike
2.7 Implications of Thorndike's Theories
2.8 Operant Conditioning
2.9 Schedules of Reinforcement
2.10 Shaping
2.11 Applying Operant Conditioning in the Classroom
2.12 Summary
Key Terms
References

2.1 CLASSICAL CONDITIONING BY IVAN PAVLOV

Ivan Pavlov was born in Russia and spent most of his time studying physiology (the study of the functions of organisms and their parts, such as the physiology of the liver). He was noted for his work on the physiology of digestion and was awarded the Nobel Prize for work in this area. However, he only became interested in psychology in 1900, at the age of 50. In his classic experiment with dogs, he measured the saliva secreted by the animals when food was given (see Figure 2.1).

Ivan Pavlov (1849-1936)


Figure 2.1: Dog with a tube inserted in its cheek. When the dog salivates, the saliva is collected in the test tube and its quantity is recorded on the rotating drum [Source: Great Experiments in Psychology, p. 5, by H.H. Garrett, 1951. New York: Appleton-Century-Crofts]

Step 1: Before Conditioning

He gave a hungry dog a bowl of food. The dog is hungry, the dog sees the food and the dog salivates.

Food (Unconditioned Stimulus, US) → Salivation (Unconditioned Response, UR)

This is a natural sequence of events: an unconscious, uncontrolled, and unlearned relationship. A stimulus is something that is given to initiate a response. "Unconditioned Stimulus" and "Unconditioned Response" simply mean that the stimulus and the response are naturally connected; they just came that way, hard-wired into the brain of the organism. "Unconditioned" means that this connection was already present in the dog before Pavlov began his experiments. For example, when you see someone eating something sour such as pickled fruit, you tend to swallow your saliva. Thus, an unconditioned stimulus (the pickled fruit) elicits an unconditioned response (swallowing your saliva).

Step 2: During Conditioning

Next, Pavlov presented the hungry dog with food and simultaneously rang a bell, and the dog salivated.

Food (Unconditioned Stimulus, US) + Bell (Conditioned Stimulus, CS) → Salivation (Unconditioned Response, UR)

This action (presenting the food and ringing the bell) was repeated at several meals. Every time the dog saw the food, the dog also heard the bell. "Unconditioned" means unlearned, untaught, pre-existing, already present before we got there. "Conditioned" means the opposite. Pavlov was trying to associate, connect, bond or link something new with the old relationship. He wanted this new thing (the bell) to elicit the same response.

Step 3: After Conditioning

This time Pavlov rang only the bell at mealtime, but he did not show any food. Guess what the dog did. Right.

Bell (Conditioned Stimulus, CS) → Salivation (Conditioned Response, CR)

The bell elicited the same response as the sight of the food did. Over repeated trials, the dog had LEARNED to associate the bell with the food, and the bell acquired the power to produce the same response as the food. In other words, the dog had been conditioned to salivate when hearing the bell.

Conclusion

This is the essence of classical conditioning. You start with two things that are already connected with each other (food and salivation). Then you pair a third thing (the bell) with the unconditioned stimulus (the food) over several trials. Eventually, this third thing may become so strongly associated that it acquires the power to produce the old behaviour. The organism has been conditioned to respond to the third thing or stimulus.

Pavlov extended his experiment by using bells of different tones. Surprisingly, the dog still salivated when it heard tones that were different from, or nearly the same as, the original. In other words, the dog is capable of generalisation: it is able to generalise across different tones. For example, when you are driving and hear the sound of a siren behind you, you immediately move to the side to give way. You do not discriminate whether it is the siren of a fire truck, an ambulance or a police car (which may sound different); you react in the same way. In other words, you have generalised: to any sound of a siren, you respond similarly.

Pavlov also found that when the tone of the bell was close to the sound of the original bell, the dog salivated, but when the tone was very different from the original, the dog salivated less frequently. In other words, the dog is also capable of discrimination: it is able to differentiate among the different tones, responding to one stimulus and not to another.

However, when Pavlov continued ringing the bell over many trials without following it with food, the dog gradually stopped salivating. In other words, extinction took place: the dog no longer salivated after some time because food was not forthcoming.
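The acquisition and extinction pattern described above can be pictured with a small simulation. The following Python sketch is only an illustration of the idea; the learning rate, number of trials and update rule are arbitrary assumptions, not anything Pavlov measured.

```python
# A toy sketch (not from the source) of how an association between a bell (CS)
# and food (US) might strengthen over paired trials and fade during extinction.
# The learning-rate value and trial counts are illustrative assumptions.

def run_trials(n_trials, food_present, strength, rate=0.3):
    """Update association strength for n_trials; food_present says whether
    the bell is followed by food (acquisition) or not (extinction)."""
    history = []
    for _ in range(n_trials):
        if food_present:
            strength += rate * (1.0 - strength)   # grow toward full strength
        else:
            strength -= rate * strength           # decay toward zero
        history.append(round(strength, 2))
    return strength, history

strength = 0.0                                    # before conditioning: the bell alone means nothing
strength, acquisition = run_trials(10, True, strength)
print("Acquisition (bell + food):", acquisition)  # salivation to the bell becomes likely

strength, extinction = run_trials(10, False, strength)
print("Extinction (bell alone):  ", extinction)   # the response gradually disappears
```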

2.2 CLASSICAL CONDITIONING IN DAILY LIFE

The smell of fresh bread baking makes my mouth water. This is probably the result of classical conditioning: in the past, the smell of fresh bread immediately preceded putting a piece in my mouth, which caused salivation. Through the mechanism of classical conditioning, the smell itself comes to elicit salivation.

After the bad car accident Jeffri had last year, he would cringe and break into a sweat at the sound of squealing brakes. This is classical conditioning. The cringing, which is an unconditioned response to pain or fear, was produced by the accident and its accompanying pain. The accident was probably preceded by the sound of squealing brakes, which became a conditioned stimulus for the conditioned response of cringing.

To treat alcoholics, we sometimes put a chemical in their drinks that makes them sick. Eventually, the taste of alcohol becomes aversive. This is classical conditioning: the chemical that makes the drinker sick is paired with the taste of alcohol, so that the alcohol itself becomes the conditioned stimulus for being sick.

Classical conditioning also works in advertising. For example, many product advertisements prominently feature attractive young women. The young women (unconditioned stimulus) naturally elicit a favourable, mildly aroused feeling (unconditioned response) in most men, and through repeated pairing the advertised product itself can become a conditioned stimulus that elicits a similar favourable feeling.
1.2 ACTIVITY

Classical conditioning is a pervasive form of influence in our world. Give examples of classical conditioning in daily life, in the workplace, in child-rearing practices and in the classroom.

2.3 FATHER OF BEHAVIOURISM

John B. Watson was born in 1878 and grew up in South Carolina in the United States. He entered Furman University at the age of 16 and graduated with a master's degree. Later, he studied at the University of Chicago and earned his Ph.D. in psychology in 1903. He began teaching psychology at Johns Hopkins University in 1908. In 1913, he gave a seminal lecture at Columbia University titled "Psychology as the Behaviorist Views It", which essentially detailed the behaviourist position. According to Watson, psychology should be the science of observable behaviour; introspection forms no essential part of its methods. Watson remained at Johns Hopkins University until 1920. He had an affair with Rosalie Rayner, his graduate assistant, divorced his first wife, and was asked by the university to resign his position. Watson later married Rayner and the two remained together until her death in 1935. After leaving his academic

J.B. Watson (1878-1958)

position, Watson began working for an advertising agency, where he remained until he retired in 1945. He spent his last years living a reclusive life on a farm in Connecticut and died in 1958.

Watson, who subscribed to the classical conditioning developed by Ivan Pavlov, was dubbed "the Father of Behaviourism". He believed that human emotion (i.e. fear, rage and love) was the product of both heredity and experience, and that through the conditioning process these three basic emotions become attached to different things for different people. He strongly believed that any human being could be conditioned to do anything regardless of their attitudes, abilities or experiences. His extreme belief is reflected in this famous (or infamous) statement he made in 1926:

"Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select: doctor, lawyer, merchant, chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors" (1926, 10).

Watson set the stage for behaviourism, which soon rose to dominate psychology. While behaviourism began to lose its hold after 1950, many of its concepts and principles are still widely used today. Conditioning and behaviour modification are still widely used in therapy and behavioural training to help clients change problematic behaviours and develop new skills.

2.4 WATSON'S EXPERIMENTS WITH LITTLE ALBERT

To demonstrate how inborn emotional reflexes become conditioned to neutral stimuli, Watson and Rosalie Rayner (1920) performed an experiment on an 11-month-old infant, Albert, adopting Pavlov's approach (discussed earlier). At the beginning of the experiment, the infant was shown a white rat (see Figure 2.2).

Figure 2.2: Albert and the white rat

He reached out and tried to touch the animal. Later, whenever Albert reached out and tried to touch the rat, Watson took a hammer and struck a steel bar behind the infant, making a loud noise. Obviously, Albert got a fright, jumped and fell forward. Again, he tried to touch the rat and the bar was struck, making a loud noise. Albert jumped violently and cried. A week later, when Albert came into contact with the rat, he was more cautious and withdrew his hand. He had developed a strong fear of the rat and

began to cry. He tried to raise himself and crawled away rapidly. Albert had LEARNED to fear the white rat because of its association with the loud noise.

Before Conditioning:
White Rat (Neutral Stimulus) → No Fear
Loud Noise (Unconditioned Stimulus, US) → Fear (Unconditioned Response, UR)

During Conditioning:
White Rat + Loud Noise (US) → Albert cries and avoids touching the rat (UR)

After Conditioning:
White Rat (Conditioned Stimulus, CS) → Fear (Conditioned Response, CR)

It was also shown that Albert's fear generalised to a variety of other objects such as a rabbit, a fur coat, and even a Santa Claus mask. In other words, any object that was furry brought fear to the infant. The experiment by Watson showed that our emotional reactions can be rearranged through classical conditioning. Watson demonstrated that an emotion such as fear could be transferred to an organism that originally did not have such a fear. The finding is significant because it implies that if fears are learned, it should be possible to unlearn or extinguish them. Unfortunately, Watson and Rayner never removed Albert's fears, because his mother removed him from the hospital where the experiment was being conducted shortly after the fear was instilled.

SELF-CHECK 2.1
a) Explain how a behaviour can be conditioned.
b) What is meant by generalisation, discrimination and extinction in classical conditioning?
c) What is behaviourism?

2.5 CLASSICAL CONDITIONING IN THE CLASSROOM

It is the first day of school and suddenly Suzy hears her teacher, Ms. Lim, yell "Keep Quiet!" at the top of her voice. Suzy is startled and terrified and starts to cry. In the next few days, whenever Ms. Lim enters the class, Suzy cries. She has associated the presence of Ms. Lim with fear. In other words, she has been

conditioned to respond by crying whenever she encounters Ms. Lim, even though Ms. Lim has not yelled "Keep Quiet".

Stimulus Generalisation
Suzy has learned to associate fear with Ms. Lim. Could that fear generalise to other teachers? Stimulus generalisation occurs when the organism responds to stimuli that are similar or related. If Suzy cried each time any teacher (other than Ms. Lim) entered the class, then Suzy has generalised. For example, in Watson's experiments, Little Albert avoided anything that was furry, indicating that the child had generalised his fear to stimuli that were similar or related to the white rat.

Stimulus Discrimination
When other teachers enter the class, Suzy does not cry, but when she encounters Ms. Lim she cries. Her classically conditioned response appears to be limited to one stimulus: Ms. Lim. Suzy is showing signs of stimulus discrimination.

Extinction
Suzy has associated Ms. Lim with the yelling of "Keep Quiet", which terrified her. However, if the stimulus (yelling "Keep Quiet") is not applied over a period of time, then the conditioned behaviour (crying) may decay. If Suzy does not hear Ms. Lim yell "Keep Quiet" for some time, it is possible that crying whenever Ms. Lim appears would gradually become extinct.

2.6 CONNECTIONISM - EDWARD L. THORNDIKE

Edward Thorndike (1874-1949) wrote his doctoral thesis, "Animal Intelligence: An Experimental Study of the Associative Processes in Animals", in 1898; it formed the basis for his learning theories. To Thorndike, the most basic form of learning was trial-and-error learning, a conclusion based on his experiments, which involved putting a hungry animal in a puzzle box (see Figure 2.3). The animal (he used cats) would attempt to escape to get at the food outside the box. Pressing on a pedal would enable the animal to escape, but before escaping, the animal would have to engage in a series of complex responses: it would squeeze through an opening and claw at anything it could reach. The animal had to perform in a certain way before it was allowed to leave the box.

Figure 2.3: Thorndike's puzzle box

The animal claws all over the box in an impulsive struggle to get out of the confinement. In the process, it presses the pedal and the door opens. It gets out and eats the food. The same cat was put in the box over and over again. Thorndike noted the

time it took the animal to solve the problem as a function of the number of trials or opportunities. The time it took to solve the problem systematically decreased as the number of trials increased. In other words, the more opportunities the animal had, the faster it solved the problem. The animal had made a connection between the proper response and the food it received (a Stimulus-Response or S-R connection). Based on his experiments, Thorndike concluded that learning is incremental; that is, learning occurs in very small systematic steps rather than in huge jumps. He proposed the following laws of learning:

The Law of Readiness
The law of readiness states that when an organism is ready to act, it will do so, and when it is not ready to act, forcing it to act will be annoying. In other words, when someone is ready to perform an act, doing so is satisfying while not doing so is annoying.

The Law of Exercise
The law of exercise states that the strength of a connection between a stimulus and a response is determined by how often the connection is established. Maintaining the connection between the stimulus and response strengthens it (Law of Use), while the connection is weakened when practice is discontinued (Law of Disuse).

The Law of Effect
The law of effect states that the strength of a connection between a stimulus and a response is influenced by the consequence of the response. If a response is followed by a satisfying state of affairs, the strength of the connection is increased; if a response is followed by an annoying state of affairs, the strength of the connection is decreased.
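Thorndike's puzzle-box finding, that escape time falls across trials because the rewarded response gains connection strength (the law of effect), can be illustrated with a toy simulation. The response names, strength values and numbers below are invented for illustration; this is a sketch of the idea, not Thorndike's procedure or data.

```python
import random

# Toy trial-and-error learning under the law of effect: the cat tries responses
# at random, and the response followed by a satisfying outcome (escape and food)
# gains S-R connection strength, so escape takes fewer attempts on later trials.

responses = ["claw at bars", "squeeze at opening", "press pedal"]  # "press pedal" opens the door
strength = {r: 1.0 for r in responses}        # initial S-R connection strengths

def choose(strengths):
    """Pick a response with probability proportional to its connection strength."""
    total = sum(strengths.values())
    pick = random.uniform(0, total)
    for r, s in strengths.items():
        pick -= s
        if pick <= 0:
            return r
    return r

for trial in range(1, 11):
    attempts = 0
    while True:
        attempts += 1
        response = choose(strength)
        if response == "press pedal":         # satisfying state of affairs
            strength[response] += 1.0         # law of effect: connection strengthened
            break
    print(f"trial {trial}: escaped after {attempts} attempts")
```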

2.7 IMPLICATIONS OF THORNDIKE'S THEORIES

Thorndike developed the idea of connectionism. He believed that the connections formed between a stimulus and a response (S-R) are the essence of intellectual development: people of higher intellect form more bonds between stimuli and responses, and form them more easily, than people of lower ability.

Complex ideas should be broken down into pre-requisite concepts. Positive reinforcement should be applied as these concepts are learned so that they can be applied to more complex, higher-level learning activities.

Transfer of learning:
o The degree of transfer between initial and later learning depends on the match between elements across the two events.
o Transfer depends on the presence of identical elements in the original and new learning situations.
o Transfer is always specific and never general.

o Transfer from one school task to a highly similar task (near transfer), and from school subjects to non-school settings (far transfer), could be facilitated by teaching knowledge and skills in school subjects that have elements identical to activities encountered in the transfer context.

SELF-CHECK 2.2
a) How does Thorndike explain learning?
b) What are the implications of Thorndike's theories on teaching and learning? Give specific examples.


2.8 OPERANT CONDITIONING BY B.F. SKINNER

Burrhus Frederic Skinner (1904-1990) was born in the small Pennsylvania town of Susquehanna. He obtained his master's degree and doctorate in psychology from Harvard University. He taught at the University of Minnesota and in 1945 moved to become the chairman of the psychology department at Indiana University. In 1948, he was invited to teach and do research at Harvard University, where he remained for the rest of his life. He was an active researcher who guided hundreds of doctoral candidates and wrote many books. His most famous book was Walden Two, a fictional account of a community run on his behaviourist principles.

B.F. Skinner made his reputation by testing Watson's and Pavlov's theories in the laboratory. He rejected the notion that organisms are passive and have no control over whether to act or not to act. He developed the theory of operant conditioning, which states that we choose to behave in a certain way because a particular behaviour brings about certain consequences (Skinner, 1950). For example, if your girlfriend gives you a kiss when you give her flowers, you are likely to give her flowers when you want a kiss; you are acting in expectation of a certain reward. However, Skinner did not agree that emotions or feelings play any part in determining behaviour; our behaviour is determined by the pleasant or unpleasant consequences of that behaviour.

SKINNER'S EXPERIMENTS

To demonstrate operant conditioning in the laboratory, a hungry rat was placed in a box like the one shown in Figure 2.4, which is called the Skinner box. Inside the box was a bar connected to a pellet (food) dispenser. Left alone in the box, the rat moves about exploring. At some point in the exploration, it presses the bar and a small food pellet is released (Skinner, 1954). The rat eats and soon presses the bar again. The food reinforces bar-pressing, and the rate of pressing increases dramatically.

(Labelled parts: pellet dispenser, dispenser tube, food cup, electric grid, connection to shock generator)


Figure 2.4: Skinner's box

A behaviour reinforced by a pleasant consequence increases the probability of that behaviour occurring in the future. What happens if the rat is not given any more food pellets? Skinner disconnected the food dispenser, so that when the rat pressed the bar, no food was released. Bar-pressing became less frequent and finally diminished. That is, the operant response undergoes extinction with non-reinforcement, just as in classical conditioning.

A behaviour no longer followed by a pleasant consequence results in a decreased probability of that behaviour occurring in the future.

Next, Skinner reconnected the pellet dispenser, so that pressing the bar again provided the rat with food pellets. The bar-pressing behaviour popped right back; in fact, the rat took less time to press the bar than the first time it was put in the box. The rat had learned that if it pressed the bar, food would be released.

Skinner varied the experiment by linking the release of food pellets with a light. For example, food would only be presented when the bar was pressed while the light was on, but not when the light was off. Guess what happened! The rat only pressed the bar when the light was on. The light served as a discriminative stimulus that controls the response: the rat was able to discriminate between pressing the bar with the light on and pressing the bar with the light off (Huitt and Hummel, 1998).

Based on this experiment, Skinner introduced the word "operant". It simply means that the behaviour operates on the environment: the rat's pressing of the bar produces, or gains access to, the food pellets. In classical conditioning, the animal is passive; it merely waits for stimuli. In operant conditioning, the animal is active; its own behaviour brings on important consequences or results. Thus, operant conditioning increases the likelihood of a response by following its occurrence with a reinforcer.

PRINCIPLES OF OPERANT CONDITIONING

Thus, reinforcement can be defined as any event that increases the probability of a response. Skinner distinguished between positive reinforcement and negative reinforcement, as well as punishment:

o REINFORCEMENT: positive reinforcement and negative reinforcement
o PUNISHMENT

Positive Reinforcement: A positive reinforcer is a stimulus that increases the probability of a particular behaviour occurring in the future. For example, water is a positive reinforcer for getting a thirsty organism to behave in a particular way. The term "reward" is sometimes used as a synonym for positive reinforcement (Huitt and Hummel, 1997).

Examples:
a) Amy completes her homework so that she can watch her favourite programme on TV. There is a high probability that she will always complete her homework (behaviour) so that she can watch TV (reinforcer).
b) Factory workers who are efficient are given bonuses. There is a high probability that factory workers will strive to be more efficient (behaviour) so that they will be given bonuses (reinforcer).

Negative Reinforcement: A negative reinforcer is a stimulus that, when removed, increases the probability of a particular behaviour occurring in the future. Refer to the Skinner box in Figure 2.4: an electric shock was introduced through the grid and the rat jumped around. However, when it pressed the bar, the electric shock was switched off. Guess what happened! The rat pressed the bar (behaviour) more frequently to avoid the pain or discomfort from the electric shock.

Examples:
a) A mother lifts (behaviour) her crying baby because she cannot bear to hear her child cry; the crying stops (removal of the unpleasant stimulus).
b) When you enter a car, you put on the safety belt (behaviour) because you want the sound of the buzzer (the unpleasant stimulus) to stop.

Punishment: Punishment is not the same as negative reinforcement. The objective of negative reinforcement is to increase the probability of a particular behaviour occurring; punishment has the opposite effect, decreasing the probability of a behaviour occurring. For example, if the rat is given an electric shock every

time it presses the bar (behaviour), the frequency of that behaviour will be reduced and finally diminish.

Examples:
a) Farid refuses to help his mother wash the dishes, so he is not allowed to play football.
b) Any student who makes noise in class will have recess time reduced.

SELF-CHECK 2.3
a) What is the difference between positive reinforcement and negative reinforcement?
b) How is negative reinforcement different from punishment?

Reinforcement Theory in the Classroom

Bala interrupts the class; Mrs. Ragu stops the class, tells Bala he is a naughty boy who broke Rule 15 and must now go to the principal's office. Yet Bala keeps interrupting. This is a common problem in many classrooms, and it is why the functional nature of reinforcement theory has to be understood; it explains why the theory sometimes appears to be incorrect. If you have used positive reinforcement (a reward), you must observe its effect. If the consequence increases the behaviour you want to increase, you have introduced positive reinforcement. If the consequence decreases the behaviour you want to decrease, then you have a punishment. Most teachers have had the unfortunate experience of Mrs. Ragu: they persist in giving a consequence intended as punishment, and the child keeps doing the undesirable thing. If the behaviour does not increase or decrease the way you want it to, then you need to rethink your rewards and punishments.

The main point of reinforcement theory is that consequences influence behaviour: rewarding consequences increase behaviour, punishing consequences decrease behaviour, and no consequences extinguish a behaviour. Finally, a consequence is known by its function (how it operates).
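The functional point above can be captured in a few lines of code: classify a consequence by what it actually does to the behaviour's frequency, not by what the teacher intends it to be. The function name and the frequency figures below are hypothetical illustrations.

```python
# A small sketch (hypothetical numbers, not from the source) of classifying a
# consequence by its observed function rather than its intended function.

def classify_consequence(freq_before, freq_after):
    """Compare how often a behaviour occurred before and after a consequence
    was introduced, and name the consequence by its observed effect."""
    if freq_after > freq_before:
        return "reinforcer (the behaviour increased)"
    if freq_after < freq_before:
        return "punisher (the behaviour decreased)"
    return "neutral so far; with no consequence at all, expect extinction"

# Mrs. Ragu intends the scolding as punishment, but if Bala's interruptions
# per week went up, the consequence is functionally acting as a reinforcer.
print(classify_consequence(freq_before=3, freq_after=5))
```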
2.1 ACTIVITY

A five-year-old child throws a temper tantrum in front of his parents. He embarrasses them and they give him rewards such as attention, toys, candy, or whatever. Now when this child goes to school and throws a temper tantrum, he is cruelly disappointed when the teacher scolds and punishes him.
a) Explain the underlying principles of the above event.
b) What do you think the child may learn in the long run?

2.9 SCHEDULES OF REINFORCEMENT

Reinforcement theory was taken a step further by introducing variation into the typical operant conditioning situation (Huitt and Hummel, 1998). What happens when the schedule of reinforcement is varied according to time or frequency? For example, instead of rewarding a particular behaviour every time it occurs, the behaviour is rewarded every 2 minutes; i.e. reinforcement is scheduled or predetermined. Many different reinforcement schedules have been studied, but the most common are as follows (see the sketch after this list):

FIXED RATIO (FR): Reinforcement occurs after a fixed number of responses (behaviours). A ratio of 5:1 means that after every 5 times the response is exhibited, it is reinforced (rewarded) once. For example, the rat gets a food pellet after every 3 bar presses, or every 5, or every 20. It is like the piece-rate method in the clothing industry: you get paid a set amount for every so many shirts.

VARIABLE RATIO (VR): This schedule is similar to the fixed ratio, except that the ratio is not fixed but variable; it changes across responses. For example, you may start by reinforcing every 3rd response, then every 5th response, and so on.

FIXED INTERVAL (FI): Reinforcement (reward) is available only after a specified time. For example, if the interval is fixed at 2 minutes, no reinforcement will occur until 2 minutes have passed; once the interval has elapsed, the first response (behaviour) made is reinforced.

VARIABLE INTERVAL (VI): This schedule is similar to the fixed interval, except that the interval is not fixed but variable; it may change from one reinforcement to the next. For example, you may start by reinforcing the first response after 20 seconds, then the first response after 30 seconds, and so on.
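As a rough illustration of how the four schedules decide when a reinforcer is delivered, here is a small Python sketch. The ratios, intervals and response stream are made-up values and the code is only a sketch of the logic, not a laboratory procedure.

```python
import random

# Illustrative-only rules (assumed numbers) for when reinforcement is delivered
# under the four schedules, given a stream of responses over simulated time.

def fixed_ratio(responses_so_far, ratio=5):
    # Reinforce after every `ratio` responses (e.g. every 5th bar press).
    return responses_so_far % ratio == 0

def variable_ratio(ratio_mean=5):
    # Reinforce on average once per `ratio_mean` responses, but unpredictably.
    return random.random() < 1.0 / ratio_mean

class FixedInterval:
    # Reinforce the first response made after `interval` seconds have elapsed.
    def __init__(self, interval=120):
        self.interval = interval
        self.last_reinforced = 0.0

    def check(self, now):
        if now - self.last_reinforced >= self.interval:
            self.last_reinforced = now
            return True
        return False

class VariableInterval:
    # Like FixedInterval, but the required wait changes after each reinforcement.
    def __init__(self, mean_interval=25):
        self.mean_interval = mean_interval
        self.last_reinforced = 0.0
        self.current_wait = random.uniform(0.5, 1.5) * mean_interval

    def check(self, now):
        if now - self.last_reinforced >= self.current_wait:
            self.last_reinforced = now
            self.current_wait = random.uniform(0.5, 1.5) * self.mean_interval
            return True
        return False

# Example: a rat pressing the bar once per simulated second for one minute.
fi, vi = FixedInterval(interval=10), VariableInterval(mean_interval=10)
for second in range(1, 61):
    presses = second  # one press per second
    rewarded = {
        "FR": fixed_ratio(presses),
        "VR": variable_ratio(),
        "FI": fi.check(second),
        "VI": vi.check(second),
    }
    if any(rewarded.values()):
        print(second, {k: v for k, v in rewarded.items() if v})
```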

2.10 SHAPING BEHAVIOUR

Using a schedule of reinforcement, complex behaviours of various organisms can be shaped. Shaping is a method of successive approximation which involves reinforcing behaviour that is vaguely similar to the behaviour desired (Skinner, 1954). The procedure of shaping involves administering rewards for responses that are not the required terminal response, but that approximate what the experimenter desires. The organism is reinforced every time it makes a move in the desired direction, while earlier, cruder approximations are no longer reinforced, until it has learned the desired response. By reinforcing only successively closer approximations to the desired behaviour, it is possible to train an organism to

engage in behaviour so complex that it would never ordinarily appear in the organism's repertoire.

Shaping a Simple Behaviour: A three-year-old child was afraid to go down a slide. The father picked him up, put him at the end of the slide and asked him if he was okay. He was asked to jump; he did, and was praised by the father. Next, the father picked the child up and put him a foot or so up the slide, asked him if he was okay, and asked him to slide down. He did. So far so good! The father did this again and again, each time moving the child a little further up the slide. Eventually, he put the child at the top of the slide and the child could slide all the way down and jump off.

A great deal of human behaviour is modified directionally, in small steps, by reinforcement. It has often been observed, for example, that as previously reinforcing activities become habitual and less rewarding, they tend to be modified. A motorcyclist derives considerable reinforcement from the sensation of turning a sharp corner at high speed, but eventually the sensation diminishes and the excitement becomes less; and perhaps, as the reinforcement begins to decrease, his speed increases, imperceptibly but progressively. This is a clear illustration of shaping effected through the outcomes of behaviour (Lefrancois, 1982).

In the classroom, peer approval or disapproval, sometimes communicated in a very subtle, nonverbal way, can drastically alter a student's behaviour. The classroom clown would probably not continue to be a clown if no one paid any attention to him. Indeed, he might never have been shaped into a clown had his audience not reinforced him in the first place.
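Shaping by successive approximation can also be sketched as a simple loop: reinforce any response that meets the current criterion, then tighten the criterion toward the terminal behaviour. The numeric target, criterion steps and random variation below are illustrative assumptions only, not a model from the source.

```python
import random

# A minimal sketch of shaping by successive approximation: only responses at
# least as close to the target as the current criterion are reinforced, and
# the criterion tightens after each reinforcement. All numbers are assumptions.

target = 10.0          # the desired terminal response (e.g. the top of the slide)
criterion = 1.0        # initially, even a vague approximation is reinforced
behaviour = 0.0        # the learner's current typical response

for step in range(1, 50):
    # the learner's response varies randomly around its current behaviour
    response = behaviour + random.uniform(-1.0, 2.0)
    if response >= criterion:                     # close enough to the current criterion?
        behaviour = response                      # reinforcement makes this response more likely
        criterion = min(target, criterion + 1.0)  # demand a closer approximation next time
        print(f"step {step}: reinforced at {response:.1f}, new criterion {criterion:.1f}")
    if behaviour >= target:
        print("terminal behaviour reached")
        break
```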
2.2 ACTIVITY

a) Identify the schedule of reinforcement represented by the following examples:
o Joe gets his salary weekly.
o Susie gives Zack a kiss when he rubs her back for an average of 10 minutes.
o Bill continues to play at a gambling machine.
o Rosli gets a bonus after every ten items produced.
b) Give other examples from daily life where schedules of reinforcement have been used to shape or modify behaviour.

2.11 APPLYING OPERANT CONDITIONING IN THE CLASSROOM

Biehler and Snowman (1986), in their book Psychology Applied to Teaching, suggested the following classroom practices based on the principles of operant conditioning:

When students are dealing with factual material, do your best to give FEEDBACK frequently, specifically and quickly.
o After giving a problem, go over the correct answer immediately afterward.
o Have pupils team up and give each other feedback.

o Meet with students in small groups so that you can give each pupil more individual feedback.
o When you assign reading or give a lecture or demonstration, have a short self-corrected quiz or an informal Q&A session immediately afterward.

When older students are dealing with complex and meaningful material, DELAYED FEEDBACK may be more appropriate.
o Hand back and discuss all exams, even though students may have sat for the exam two weeks earlier.
o Give comments on papers written by students, besides a grade or marks.
o After students have submitted an assignment, you could ask them the following: "If you realised after you completed your work that you had made a mistake, make a note of it and mention how you would correct it if you were to do the assignment over again now. Then we can see if your evaluation agrees with mine."

Use SEVERAL KINDS OF REINFORCERS so that each retains its effectiveness.
o When a student gives a correct answer, makes a good point in class discussion or does something helpful, say things like: "Good." "That's right." "Terrific." "Great." "Very interesting point; I hadn't thought of that." "That was a big help."
o Walk over to stand near, and smile encouragingly at, a pupil who seems to be working industriously.

Use awareness of EXTINCTION to reduce the frequency of undesirable forms of behaviour.
o If a student exhibits undesirable behaviour to attract attention, pay no attention and continue with the lesson.
o If a student says something undesirable in class discussion, do not comment, and immediately call on someone else.

Using different SCHEDULES OF REINFORCEMENT, encourage persistent and permanent learning.
o When students first try a new skill or type of learning, praise almost any genuine attempt, even though it may be inaccurate. Then, as they become more skilful, reserve your praise for correct and accurate answers only.
o Avoid a set pattern or predictable way of commenting on student work.
o Make favourable remarks at unpredictable intervals.

Use reinforcement to MOTIVATE students to learn material that is not intrinsically interesting.
o Announce to students that if they complete the boring task, they will be rewarded with something they like to do, e.g. read a book of their choice, work on an art or craft project, or work on homework for another class.

o Make a contract with students on the amount of work to be completed before they are entitled to the reward.
o Withhold reinforcement while calling attention to the rewards that will follow completion of a task. If that does not work, consider the possibility of taking away a privilege or resorting to punishment.

Use the principles of PROGRAMMED INSTRUCTION (a simple sketch follows this list). Skinner argued that in a typical classroom a teacher cannot supply reinforcement quickly enough or often enough, and he recommended the use of teaching machines or programmed instruction.
o State clearly what is to be learned, i.e. the terminal behaviour (e.g. to be able to compare X and Y).
o Break down the facts, concepts and principles and arrange them in a sequence designed to lead the student to the desired end result.
o Write this series of small linear steps, or frames, to maximise the likelihood that students will supply the correct answer for each frame. When students do supply the correct answer for a frame, they are reinforced by discovering they are right and motivated to move on to the next frame.

Use programmed approaches to teaching by describing terminal behaviour, organising what is to be learned, and providing feedback.
o Describe the terminal behaviour using instructional objectives or learning outcomes (e.g. using Bloom's Taxonomy of Objectives as a guide).
o If appropriate, arrange the material to be learned into a series of steps or an outline of points (e.g. when giving a lecture or demonstration, give students an organised list of points to be covered).
o Provide feedback (e.g. quizzes with feedback on correct answers).
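A minimal sketch of a linear programmed-instruction sequence of the kind described above follows: small frames, a learner response, and immediate feedback on each frame. The frames and answers are invented examples, not taken from Skinner or from this module.

```python
# A toy linear programmed-instruction loop: each frame asks for a short answer
# and gives immediate feedback, as described in the list above. The frame
# content below is a made-up illustration.

frames = [
    ("In classical conditioning, the bell that comes to elicit salivation is the ___ stimulus.", "conditioned"),
    ("A stimulus that, when removed, increases a behaviour is a ___ reinforcer.", "negative"),
    ("Reinforcing successively closer approximations to a target behaviour is called ___.", "shaping"),
]

def run_program(frames):
    for i, (prompt, correct) in enumerate(frames, start=1):
        answer = input(f"Frame {i}: {prompt}\nYour answer: ").strip().lower()
        if answer == correct:
            print("Correct!\n")                 # immediate reinforcement
        else:
            print(f"Not quite; the answer is '{correct}'. Review and continue.\n")

if __name__ == "__main__":
    run_program(frames)
```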

2.3 ACTIVITY

Skinner believed that operant conditioning can even be used to teach thinking (by conditioning the student to develop techniques of self-management, for example, paying attention and studying efficiently), to foster creativity (by including greater amounts of behaviour and reinforcing what is original), and to encourage perseverance (by systematically widening the ratios of reinforcement). Discuss.



2.4 ACTIVITY

Read the following situations and state whether they are examples of classical or operant conditioning. Give reasons for your decision.

o In order to punish my cat for sleeping on the sofa, I paired the sound of a clicker with getting squirted with water. Now the sound of the clicker causes the animal to get off the sofa.
o When my son has gone for a week without arguing with his sister, he gets to choose which favourite activity he wants to engage in on Friday night.
o In a weight management class, participants earn points for every healthy meal they eat and every period of exercise they complete. Later these points result in refunds of their class fees.
o When I first start teaching a concept, I'll praise any answer that is close to the right answer.
o Each morning when I switch on the radio, my dogs bark and I give them a slice of bread each. After a while, every time I switch on the radio in the morning, my dogs bark.

SUMMARY

"Unconditioned" means unlearned, untaught, pre-existing, already-presentbefore-we-got-there. "Conditioning" just means the opposite. An organism is capable of generalisation, and able to generalise across different stimuli that are different or nearly the same. The organism is capable of discrimination, and able to differentiate among the different stimuli.

Behaviourism: psychology should not be concerned with the mind or mental processes but should be concerned only with behaviour.
Watson demonstrated that an emotion such as fear could be transferred to an organism that originally did not have such a fear.
Stimulus generalisation occurs when the organism responds to stimuli that are similar or related.
Extinction: a response gradually disappears when the stimulus is not applied over a period of time.
The law of readiness states that when an organism is ready to act, it will do so; when it is not ready to act, forcing it to act will be annoying.
The law of exercise states that the strength of a connection between a stimulus and a response is determined by how often the connection is established.
The law of effect states that the strength of a connection between a stimulus and a response is influenced by the consequence of the response.
A behaviour reinforced by a pleasant consequence increases the probability of that behaviour occurring in the future.
A positive reinforcer is a stimulus that increases the probability of a particular behaviour occurring in the future.
A negative reinforcer is a stimulus that, when removed, increases the probability of a particular behaviour occurring in the future.
Punishment decreases the probability of a behaviour occurring.
Schedule of reinforcement: instead of rewarding a particular behaviour every time it occurs, the behaviour is rewarded according to a predetermined schedule.
Shaping is a method of successive approximation which involves reinforcing behaviour that is vaguely similar to the behaviour desired.

KEY TERMS

Classical conditioning
Discrimination
Extinction
Generalisation
Operant conditioning
Positive reinforcement
Negative reinforcement
Punishment
Schedule of reinforcement
Shaping
Programmed instruction
Feedback
Stimulus generalisation
Terminal behaviour
Connectionism

REFERENCES

Biehler, R. & Snowman, J. (1986). Psychology applied to teaching. Newark: Wadsworth.

Huitt, W., & Hummel, J. (1997). An introduction to operant (instrumental) conditioning. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. http://chiron.valdosta.edu/whuitt/col/behsys/operant.html

Huitt, W., & Hummel, J. (1998). An overview of the behavioral perspective. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. http://chiron.valdosta.edu/whuitt/col/behsys/behsys.html

Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57(4), 193-216.

Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24(2), 86-97.

Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158-177.

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1-14.
