

Main article: Behaviorism

Behaviorism as a theory was most fully developed by B. F. Skinner. It loosely includes the work of such people as Thorndike, Tolman, Guthrie, and Hull. What characterizes
these investigators is their underlying assumptions about the process of learning. In essence, three basic assumptions are held to be true. First, learning is manifested
by a change in behavior. Second, the environment shapes behavior. And third, the principles of contiguity (how close in time two events must be for a bond to be
formed) and reinforcement (any means of increasing the likelihood that an event will be repeated) are central to explaining the learning process. For behaviorism,
learning is the acquisition of new behavior through conditioning.
There are two types of possible conditioning:
1) Classical conditioning, where the behavior becomes a reflex response to a stimulus, as in the case of Pavlov's dogs. Pavlov was interested in studying reflexes when he
saw that the dogs drooled without the proper stimulus. Although no food was in sight, their saliva still dribbled. It turned out that the dogs were reacting to lab coats:
every time the dogs were served food, the person who served the food was wearing a lab coat. Therefore, the dogs reacted as if food was on its way whenever they saw
a lab coat. In a series of experiments, Pavlov then tried to figure out how these phenomena were linked. For example, he struck a bell when the dogs were fed. If the bell
was sounded in close association with their meal, the dogs learned to associate the sound of the bell with food. After a while, at the mere sound of the bell, they
responded by drooling.
2) Operant conditioning where there is reinforcement of the behavior by a reward or a punishment. The theory of operant conditioning was developed by B.F. Skinner
and is known as Radical Behaviorism. The word ‘operant’ refers to the way in which behavior ‘operates on the environment’. Briefly, a behavior may result either in
reinforcement, which increases the likelihood of the behavior recurring, or punishment, which decreases the likelihood of the behavior recurring. It is important to note
that a punisher is not considered to be punishment if it does not result in the reduction of the behavior, and so the terms punishment and reinforcement are determined
by their effects on the behavior. Within this framework, behaviorists are particularly interested in measurable changes in behavior.
Educational approaches such as applied behavior analysis, curriculum based measurement, and direct instruction have emerged from this model.[citation needed]
Cognitivism
Main article: Cognitivism (psychology)
The earliest challenge to the behaviorists came in a publication in 1929 by Bode, a gestalt psychologist. He criticized behaviorists for being too dependent on overt
behavior to explain learning. Gestalt psychologists proposed looking at the patterns rather than isolated events. Gestalt views of learning have been incorporated into
what have come to be labeled cognitive theories. Two key assumptions underlie this cognitive approach: (1) that the memory system is an active organized processor of
information and (2) that prior knowledge plays an important role in learning. Cognitive theories look beyond behavior to explain brain-based learning. Cognitivists
consider how human memory works to promote learning. For example, the physiological processes of sorting and encoding information and events into short term
memory and long term memory are important to educators working under the cognitive theory. The major difference between gestaltists and behaviorists is the locus of
control over the learning activity: for gestaltists, it lies with the individual learner; for behaviorists, it lies with the environment.
Once memory theories like the Atkinson-Shiffrin memory model and Baddeley's working memory model were established as a theoretical framework in cognitive
psychology, new cognitive frameworks of learning began to emerge during the 1970s, 80s, and 90s. Today, researchers are concentrating on topics like cognitive load
and information processing theory. These theories of learning are very useful as they guide instructional design.[citation needed] Aspects of cognitivism can be found in
learning how to learn, social role acquisition, intelligence, learning, and memory as related to age.
Constructivism
Main article: Constructivism (learning theory)
Constructivism views learning as a process in which the learner actively constructs or builds new ideas or concepts based upon current and past knowledge or
experience. In other words, "learning involves constructing one's own knowledge from one's own experiences." Constructivist learning, therefore, is a very personal
endeavor, whereby internalized concepts, rules, and general principles may consequently be applied in a practical real-world context. This is also known as social
constructivism. Social constructivists posit that knowledge is constructed when individuals engage socially in talk and activity about shared
problems or tasks; "learning is seen as the process by which individuals are introduced to a culture by more skilled members" (Driver et al., 1994). Constructivism itself
has many variations, such as active learning, discovery learning, and knowledge building. Regardless of the variety, constructivism promotes a student's free
exploration within a given framework or structure.[citation needed] The teacher acts as a facilitator who encourages students to discover principles for themselves and to
construct knowledge by working to solve realistic problems. Aspects of constructivism can be found in self-directed learning, transformational learning, experiential
learning, situated cognition, and reflective practice.
Informal and post-modern theories
Informal theories of education may attempt to break down the learning process in pursuit of practicality[citation needed]. One of these deals with whether learning should take
place as a building of concepts toward an overall idea, or the understanding of the overall idea with the details filled in later. Critics[citation needed] believe that trying to teach
an overall idea without details (facts) is like trying to build a masonry structure without bricks.
Other concerns are the origins of the drive for learning[citation needed]. Some argue that learning is primarily self-regulated, and that the ideal learning situation is one
dissimilar to the modern classroom[citation needed]. Critics argue that students learning in isolation fail[citation needed].
Classical conditioning (also Pavlovian or respondent conditioning, Pavlovian reinforcement) is a form of associative learning that was first demonstrated by Ivan
Pavlov.[1] The typical procedure for inducing classical conditioning involves presentations of a neutral stimulus along with a stimulus of some significance. The neutral
stimulus could be any event that does not result in an overt behavioral response from the organism under investigation. Pavlov referred to this as a conditioned stimulus
(CS). Conversely, presentation of the significant stimulus necessarily evokes an innate, often reflexive, response. Pavlov called these the unconditioned stimulus (US)
and unconditioned response (UR), respectively. If the CS and the US are repeatedly paired, eventually the two stimuli become associated and the organism begins to
produce a behavioral response to the CS. Pavlov called this the conditioned response (CR).
Popular forms of classical conditioning that are used to study neural structures and functions that underlie learning and memory include fear conditioning, eyeblink
conditioning, and the foot contraction conditioning of Hermissenda crassicornis.
• 1 History
○ 1.1 Pavlov's experiment
• 2 Types
○ 2.1 Forward conditioning
○ 2.2 Delay conditioning
○ 2.3 Trace conditioning
○ 2.4 Simultaneous conditioning
○ 2.5 Backward conditioning
○ 2.6 Temporal conditioning
○ 2.7 Unpaired conditioning
○ 2.8 CS-alone extinction
• 3 Procedure variations
○ 3.1 Classical discrimination/reversal conditioning
○ 3.2 Classical ISI discrimination conditioning
○ 3.3 Latent inhibition conditioning
○ 3.4 Conditioned inhibition conditioning
○ 3.5 Blocking
• 4 Applications
○ 4.1 Little Albert
○ 4.2 Behavioral therapies
• 5 Theories of classical conditioning
• 6 In popular culture
• 7 See also
• 8 References
• 9 Further reading
• 10 External links
History
Pavlov's experiment

One of Pavlov’s dogs with a surgically implanted cannula to measure salivation, Pavlov Museum, 2005
The original and most famous example of classical conditioning involved the salivary conditioning of Pavlov's dogs. During his research on the physiology of digestion
in dogs, Pavlov noticed that, rather than simply salivating in the presence of meat powder (an innate response to food that he called the unconditioned response), the
dogs began to salivate in the presence of the lab technician who normally fed them. Pavlov called these psychic secretions. From this observation he predicted that, if a
particular stimulus in the dog’s surroundings were present when the dog was presented with meat powder, then this stimulus would become associated with food and
cause salivation on its own. In his initial experiment, Pavlov used a metronome to call the dogs to their food and, after a few repetitions, the dogs started to salivate in
response to the metronome. Thus, a neutral stimulus (metronome) became a conditioned stimulus (CS) as a result of consistent pairing with the unconditioned
stimulus (US - meat powder in this example). Pavlov referred to this learned relationship as a conditional reflex (now called conditioned response).
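The acquisition of a conditional reflex through repeated CS-US pairing is often formalized with the Rescorla-Wagner learning rule, a standard model in the literature (it is not introduced in this article, so treat this as a supplementary sketch with illustrative parameter values):

```python
def acquisition(trials, alpha_beta=0.3, lam=1.0):
    """Rescorla-Wagner sketch: the associative strength V of the CS grows
    on each CS-US pairing by alpha_beta * (lam - V), where lam is the
    maximum strength the US supports. The resulting curve is negatively
    accelerated: fast gains early, leveling off near lam."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha_beta * (lam - v)   # prediction-error update
        history.append(v)
    return history

curve = acquisition(10)
print(round(curve[0], 2), round(curve[-1], 2))  # 0.3 0.97
```

After a handful of pairings V is already near its ceiling, which matches the observation that Pavlov's dogs began salivating to the metronome "after a few repetitions".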
Types
Forward conditioning

Diagram representing forward conditioning. The time interval increases from left to right.
During forward conditioning the onset of the CS precedes the onset of the US. Two common forms of forward conditioning are delay and trace conditioning.
Delay conditioning
In delay conditioning the CS is presented and is overlapped by the presentation of the US.
Trace conditioning
During trace conditioning the CS and US do not overlap. Instead, the CS is presented, a period of time is allowed to elapse during which no stimuli are presented, and
then the US is presented. The stimulus-free period is called the trace interval. It may also be called the "conditioning interval".
Simultaneous conditioning
During simultaneous conditioning, the CS and US are presented and terminated at the same time.
Backward conditioning
Backward conditioning occurs when the conditioned stimulus immediately follows the unconditioned stimulus. Unlike in forward conditioning, where the
conditioned stimulus precedes the unconditioned stimulus, the conditioned response here tends to be inhibitory: the conditioned stimulus serves as a signal
that the unconditioned stimulus has ended, rather than as a reliable predictor of its future occurrence.
Temporal conditioning
The US is presented at regularly timed intervals, and CR acquisition is dependent upon correct timing of the interval between US presentations. The background, or
context, can serve as the CS in this example.
Unpaired conditioning
The CS and US are not presented together. Usually they are presented as independent trials that are separated by a variable, or pseudo-random, interval. This procedure
is used to study non-associative behavioral responses, such as sensitization.
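The timing relations that distinguish forward (delay and trace), simultaneous, and backward conditioning can be summarized in a short sketch. The numeric time representation is an assumption for illustration; temporal and unpaired procedures are omitted because they depend on whole-session structure rather than a single trial.

```python
def paradigm(cs_on, cs_off, us_on):
    """Classify a single trial by CS/US onset timing, following the
    definitions above. Times are in arbitrary units."""
    if cs_on < us_on:
        # Forward conditioning: CS onset precedes US onset.
        if cs_off >= us_on:
            return "forward (delay)"   # CS still on at US onset
        return "forward (trace)"       # stimulus-free trace interval
    if cs_on == us_on:
        return "simultaneous"          # CS and US coincide
    return "backward"                  # CS follows the US

print(paradigm(0, 5, 3))   # forward (delay)
print(paradigm(0, 2, 5))   # forward (trace)
print(paradigm(0, 2, 0))   # simultaneous
print(paradigm(4, 6, 0))   # backward
```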
CS-alone extinction
Main article: Extinction (psychology)
The CS is presented in the absence of the US. This procedure is usually done after the CR has been acquired through forward conditioning training. Eventually, the CR
frequency is reduced to pre-training levels.
Procedure variations
In addition to the simple procedures described above, some classical conditioning studies are designed to tap into more complex learning processes. Some common
variations are discussed below.
Classical discrimination/reversal conditioning
In this procedure, two CSs and one US are typically used. The CSs may be the same modality (such as lights of different intensity), or they may be different modalities
(such as auditory CS and visual CS). In this procedure, one of the CSs is designated CS+ and its presentation is always followed by the US. The other CS is designated
CS- and its presentation is never followed by the US. After a number of trials, the organism learns to discriminate CS+ trials and CS- trials such that CRs are only
observed on CS+ trials.
During reversal training, the CS+ and CS- are reversed and subjects learn to suppress responding to the previous CS+ and show CRs to the previous CS-.
Classical ISI discrimination conditioning
This is a discrimination procedure in which two different CSs are used to signal two different interstimulus intervals. For example, a dim light may be presented 30
seconds before a US, while a very bright light is presented 2 minutes before the US. Using this technique, organisms can learn to perform CRs that are appropriately
timed for the two distinct CSs.
Latent inhibition conditioning
In this procedure, a CS is presented several times before paired CS-US training commences. The pre-exposure of the subject to the CS before paired training slows the
rate of CR acquisition relative to organisms that are not CS pre-exposed. Also see Latent inhibition for applications.
Conditioned inhibition conditioning
Three phases of conditioning are typically used:
Phase 1:
A CS (CS+) is paired with a US until asymptotic CR levels are reached.
Phase 2:
CS+/US trials are continued, but interspersed with trials on which the CS+ is presented in compound with a second CS and not followed by the
US (i.e., CS+/CS- trials). Typically, organisms show CRs on CS+/US trials, but suppress responding on CS+/CS- trials.
Phase 3:
In this retention test, the previous CS- is paired with the US. If conditioned inhibition has occurred, the rate of acquisition to
the previous CS- should be impaired relative to organisms that did not experience Phase 2.
Blocking
Main article: Blocking effect
This form of classical conditioning involves two phases.
Phase 1:
A CS (CS1) is paired with a US.
Phase 2:
A compound CS (CS1+CS2) is paired with a US.
A separate test for each CS (CS1 and CS2) is performed. The blocking effect is observed as a lack of conditioned response to
CS2, suggesting that the first phase of training blocked conditioning to the second CS.
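The blocking effect falls out naturally of error-correction models such as the Rescorla-Wagner rule (an association the literature commonly makes, offered here as a supplementary sketch): because CS1 already predicts the US after Phase 1, the prediction error shared by the compound in Phase 2 leaves almost nothing for CS2 to acquire. Parameter values are illustrative.

```python
def blocking(alpha_beta=0.3, lam=1.0, phase1=20, phase2=20):
    """Rescorla-Wagner sketch of blocking. Phase 1: CS1 alone is paired
    with the US. Phase 2: the compound CS1+CS2 is paired with the US.
    Both stimuli update on the shared error lam - (v1 + v2)."""
    v1 = v2 = 0.0
    for _ in range(phase1):              # CS1 -> US
        v1 += alpha_beta * (lam - v1)
    for _ in range(phase2):              # CS1+CS2 -> US
        error = lam - (v1 + v2)
        v1 += alpha_beta * error
        v2 += alpha_beta * error
    return v1, v2

v1, v2 = blocking()
print(round(v1, 2), round(v2, 2))  # CS1 near 1.0, CS2 near 0.0
```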
Applications
Little Albert
Main article: Little Albert experiment
John B. Watson, founder of behaviorism, demonstrated classical conditioning empirically through the Little Albert experiment, in which a child
("Albert") was presented with a white rat (CS). After a control period in which the child reacted normally to the presence of the rat, the experimenters paired the
presence of the rat with a loud, jarring noise caused by clanging two pipes together behind the child's head (US). As the trials progressed, the child began showing signs
of distress at the sight of the rat, even when unaccompanied by the frightening noise. Furthermore, the child demonstrated generalization of stimulus associations, and
showed distress when presented with any white, furry object, even such things as a rabbit, a dog, a fur coat, a Santa Claus mask with hair, and Watson's own head.
Behavioral therapies
Main article: Behaviour therapy
In human psychology, therapies and treatments based on classical conditioning differ from those based on operant conditioning. Therapies associated with classical
conditioning include aversion therapy, flooding, and systematic desensitization.
Classical conditioning therapies are typically short-term, requiring less time with therapists and less effort from patients than humanistic therapies.[citation needed] The therapies
mentioned are designed either to create aversive feelings toward something or to reduce unwanted fear and aversion.
Theories of classical conditioning
There are two competing theories of how classical conditioning works. The first, stimulus-response theory, suggests that an association to the unconditioned stimulus is
made with the conditioned stimulus within the brain, but without involving conscious thought. The second, stimulus-stimulus theory, involves cognitive activity,
in which the conditioned stimulus is associated to the concept of the unconditioned stimulus, a subtle but important distinction.
Stimulus-response theory, referred to as S-R theory, is a theoretical model of behavioral psychology that suggests humans and other animals can learn to associate a
new stimulus, the conditioned stimulus (CS), with a pre-existing stimulus, the unconditioned stimulus (US), and can think, feel, or respond to the CS as if it were
actually the US.
The opposing theory, put forward by cognitive behaviorists, is stimulus-stimulus theory (S-S theory), a theoretical model of classical conditioning which holds that a
cognitive component is required to understand classical conditioning and that stimulus-response theory is an inadequate model. On the S-R account, an animal learns to
associate a conditioned stimulus such as a bell with the impending arrival of food, termed the unconditioned stimulus, resulting in an observable behavior such as
salivation. Stimulus-stimulus theory suggests that instead the animal salivates to the bell because the bell is associated with the concept of food, a fine but important
distinction.
To test these theories, psychologist Robert Rescorla undertook the following experiment.[2] Rats learned to associate a loud noise (the unconditioned stimulus) with a
light (the conditioned stimulus); the rats' response was to freeze and cease movement. What would happen, then, if the rats were habituated to the US? S-R theory
predicts that the rats would continue to respond to the CS, but if S-S theory is correct, they would be habituated to the concept of a loud sound (danger), and so
would not freeze to the CS. The experimental results supported S-S theory: the rats no longer froze when exposed to the signal light.[3] The theory continues to be
applied in everyday settings.
Pavlov, I. P. (1927/1960). Conditioned Reflexes. New York: Dover Publications (the 1960 edition is an unaltered republication of the 1927 translation by Oxford
University Press).
In popular culture
One of the earliest literary references to classical conditioning can be found in the comic novel The Life and Opinions of Tristram Shandy, Gentleman (1759) by
Laurence Sterne. The narrator Tristram Shandy explains[4] how his mother was conditioned by his father's habit of winding up a clock before having sex with his wife:
My father, [...], was, I believe, one of the most regular men in every thing he did [...] [H]e had made it a rule for many years of his life,--on the first Sunday-night of
every month throughout the whole year,--as certain as ever the Sunday-night came,--to wind up a large house-clock, which we had standing on the back-stairs head,
with his own hands:--And being somewhere between fifty and sixty years of age at the time I have been speaking of,--he had likewise gradually brought some other
little family concernments to the same period, in order, as he would often say to my uncle Toby, to get them all out of the way at one time, and be no more plagued and
pestered with them the rest of the month. [...] [F]rom an unhappy association of ideas, which have no connection in nature, it so fell out at length, that my poor mother
could never hear the said clock wound up,--but the thoughts of some other things unavoidably popped into her head--& vice versa:--Which strange combination of
ideas, the sagacious Locke, who certainly understood the nature of these things better than most men, affirms to have produced more wry actions than all other sources
of prejudice whatsoever.
In the U.S. version of The Office, Jim uses classical conditioning to train Dwight to reach out his hand and ask for a mint each time he boots up Windows.
Another example is in the dystopian novel A Clockwork Orange, in which the anti-hero and protagonist, Alex, is given a solution that causes severe nausea and is
forced to watch violent acts. This renders him unable to perform any violent acts without inducing similar nausea.
Operant conditioning is the use of consequences to modify the occurrence and form of behavior. Operant conditioning is distinguished from classical conditioning
(also called respondent conditioning, or Pavlovian conditioning) in that operant conditioning deals with the modification of "voluntary behavior" or operant behavior.
Operant behavior "operates" on the environment and is maintained by its consequences, while classical conditioning deals with the conditioning of respondent
behaviors which are elicited by antecedent conditions. Behaviors conditioned via a classical conditioning procedure are not maintained by consequences.[1] The main
dependent variable is the rate of response that is developed over a period of time. New operant responses can be further developed and shaped by reinforcing close
approximations of the desired response.

It's important to note that organisms are not spoken of as being reinforced, punished, or extinguished; it is the response that is reinforced, punished, or extinguished.
Additionally, reinforcement, punishment, and extinction are not terms whose use is restricted to the laboratory. Naturally occurring consequences can also be said to
reinforce, punish, or extinguish behavior and are not always delivered by people.
• Reinforcement is a consequence that causes a behavior to occur with greater frequency.
• Punishment is a consequence that causes a behavior to occur with less frequency.
• Extinction is the lack of any consequence following a behavior. When a behavior is inconsequential, producing neither
favorable nor unfavorable consequences, it will occur with less frequency. When a previously reinforced behavior is no
longer reinforced with either positive or negative reinforcement, it leads to a decline in the response.
Four contexts of operant conditioning: Here the terms "positive" and "negative" are not used in their popular sense, but rather: "positive" refers to addition, and
"negative" refers to subtraction.
What is added or subtracted may be either reinforcement or punishment. Hence positive punishment is sometimes a confusing term, as it denotes the addition of a
stimulus or an increase in the intensity of a stimulus that is aversive (such as spanking or an electric shock). The four procedures are:
1. Positive reinforcement (Reinforcement) occurs when a behavior (response) is followed by a favorable stimulus (commonly
seen as pleasant) that increases the frequency of that behavior. In the Skinner box experiment, a stimulus such as food or
sugar solution can be delivered when the rat engages in a target behavior, such as pressing a lever.
2. Negative reinforcement (Escape) occurs when a behavior (response) is followed by the removal of an aversive stimulus
(commonly seen as unpleasant) thereby increasing that behavior's frequency. In the Skinner box experiment, negative
reinforcement can be a loud noise continuously sounding inside the rat's cage until it engages in the target behavior, such
as pressing a lever, upon which the loud noise is removed.
3. Positive punishment (Punishment) (also called "Punishment by contingent stimulation") occurs when a behavior (response)
is followed by an aversive stimulus, such as introducing a shock or loud noise, resulting in a decrease in that behavior.
4. Negative punishment (Penalty) (also called "Punishment by contingent withdrawal") occurs when a behavior (response) is
followed by the removal of a favorable stimulus, such as taking away a child's toy following an undesired behavior, resulting
in a decrease in that behavior.
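The four procedures above reduce to a two-by-two table, which can be expressed as a simple lookup (the names and phrasing follow the list above; the key representation is an illustrative choice):

```python
# Key: (is a stimulus added or removed?, is that stimulus favorable or aversive?)
CONTEXTS = {
    ("added",   "favorable"): "positive reinforcement -> behavior increases",
    ("removed", "aversive"):  "negative reinforcement -> behavior increases",
    ("added",   "aversive"):  "positive punishment -> behavior decreases",
    ("removed", "favorable"): "negative punishment -> behavior decreases",
}

# e.g. spanking or an electric shock adds an aversive stimulus:
print(CONTEXTS[("added", "aversive")])   # positive punishment -> behavior decreases
```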
• Avoidance learning is a type of learning in which a certain behavior results in the cessation of an aversive stimulus. For
example, performing the behavior of shielding one's eyes when in the sunlight (or going indoors) will help avoid the aversive
stimulation of having light in one's eyes.
• Extinction occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box
experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever
again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
• Noncontingent reinforcement refers to delivery of reinforcing stimuli regardless of the organism's (aberrant) behavior.
The idea is that the target behavior decreases because it is no longer necessary to receive the reinforcement. This typically
entails time-based delivery of stimuli identified as maintaining aberrant behavior, which serves to decrease the rate of the
target behavior.[2] As no measured behavior is identified as being strengthened, there is controversy surrounding the use of
the term noncontingent "reinforcement".[3]
• 1 Thorndike's law of effect
○ 1.1 Operant Conditioning vs Fixed Action Patterns
○ 1.2 Criticisms
• 2 Biological correlates of operant conditioning
• 3 Factors that alter the effectiveness of consequences
• 4 Operant variability
• 5 Avoidance learning
• 6 Two-process theory of avoidance
• 7 Verbal Behavior
• 8 Four term contingency
• 9 Operant Hoarding
• 10 See also
• 11 References
• 12 External links
Thorndike's law of effect
Main article: Law of effect
Operant conditioning, sometimes called instrumental conditioning or instrumental learning, was first extensively studied by Edward L. Thorndike (1874-1949), who
observed the behavior of cats trying to escape from home-made puzzle boxes.[4] When first constrained in the boxes, the cats took a long time to escape. With
experience, ineffective responses occurred less frequently and successful responses occurred more frequently, enabling the cats to escape in less time over successive
trials. In his Law of Effect, Thorndike theorized that successful responses, those producing satisfying consequences, were "stamped in" by the experience and thus
occurred more frequently. Unsuccessful responses, those producing annoying consequences, were stamped out and subsequently occurred less frequently. In short,
some consequences strengthened behavior and some consequences weakened behavior. Thorndike produced the first known learning curves through this procedure.
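Thorndike's "stamping in" of successful responses can be caricatured in a small deterministic sketch (a hypothetical illustration, not Thorndike's actual procedure): one response among several opens the box, and each escape raises that response's selection weight, so the expected number of attempts per trial falls across trials, producing a learning curve of the kind Thorndike recorded.

```python
def puzzle_box(trials, n_other=9, boost=1.0):
    """Law-of-effect sketch: expected attempts per trial to hit the one
    successful response, whose selection weight w grows ('is stamped in')
    after each escape. All numbers are illustrative."""
    w = 1.0
    curve = []
    for _ in range(trials):
        curve.append((w + n_other) / w)   # expected attempts this trial
        w += boost                        # satisfying consequence stamped in
    return curve

curve = puzzle_box(30)
print(round(curve[0], 1), round(curve[-1], 1))  # 10.0 1.3
```

The curve starts at ten expected attempts and falls toward one, mirroring the cats' declining escape times over successive trials.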
B.F. Skinner (1904-1990) formulated a more detailed analysis of operant conditioning based on reinforcement, punishment, and extinction. Following the ideas of Ernst
Mach, Skinner rejected the mediating structures required by Thorndike's notion of "satisfaction" and constructed a new conceptualization of behavior without such
references. While experimenting with some homemade feeding mechanisms, Skinner invented the operant conditioning chamber, which allowed him to measure rate of response as a
key dependent variable using a cumulative record of lever presses or key pecks.[5]
Operant Conditioning vs Fixed Action Patterns
Skinner's construct of instrumental learning is contrasted with what Nobel Prize winning biologist Konrad Lorenz termed "fixed action patterns," or reflexive,
impulsive, or instinctive behaviors. These behaviors were said by Skinner and others to exist outside the parameters of operant conditioning but were considered
essential to a comprehensive analysis of behavior.
Fixed action patterns have their origin in the genetic makeup of the animal in question. Examples of fixed action patterns include ducklings that will follow any
moving object if they see that object within the period of time when the behavior will be released, or the dance that a bee performs. Characteristics of fixed action
patterns include not needing to be learned or acquired; these behaviors are performed correctly the first time that they are performed.
Within operant conditioning, fixed action patterns can be used as reinforcers for learned behaviors. Often, fixed action patterns such as predatory grabbing in dogs
can be used as a reinforcer. In police and military dog training, the desire to engage in the predatory bite is often used as a reinforcement for successful completion of a
search or an obedience exercise. The amount of desire that a dog might have to engage in the fixed action pattern is also known as "prey drive", although this may well
be a misnomer as there is no quantification of how much a dog wants to engage in the predatory sequence.
Fixed action patterns can also get in the way of successful learning. Breland and Breland note in their paper "The Misbehavior of Organisms" that raccoons cannot
be taught to place an item in a jar, due to the fixed action pattern that is released when they begin to place the item in the jar. When a component of a learned sequence
triggers the beginning of a fixed action pattern, it is difficult and sometimes impossible to interrupt that sequence before it is completed. In this way, fixed action
patterns can defeat attempts to teach raccoons to place items in jars, pigs to fetch (fetching triggers rooting behaviors), or young ducklings to sit and stay.
Criticisms
Thorndike's law of effect specifically requires that a behavior be followed by satisfying consequences for learning to occur. There are, however, cases in which learning
can be shown to occur without good or bad effects following the behavior. For instance, a number of experiments examining the phenomenon of latent learning[6][7][8][9]
showed that a rat need not receive a satisfying reward (food, if hungry; water, if thirsty) in order to learn a maze; the learning becomes apparent immediately after the
desired reward is introduced. However, views claiming such research invalidates theories of operant conditioning are molecular to a fault. If the rat has a history of
"searching behavior" being reinforced in novel environments, the behavior will occur in new environments. This is especially plausible in a species which scavenges for
food and has thus likely inherited a propensity for searching behavior to be sensitive to reinforcement. Behaving during initial extinction trials as the organism had
during reinforcement trials is not proof of latent learning, as behavior is a function of the history of the individual organism and its genetic endowment and is never
controlled by future consequences. That an organism continues to respond during unreinforced trials has been well established in studies of intermittent schedules of
reinforcement.
A different experiment, in humans, showed that "punishing" the correct behavior may actually cause it to be more frequently taken (i.e. stamp it in).[11] Subjects are
given a number of pairs of holes on a large board and required to learn which hole to poke a stylus through for each pair. If the subjects receive an electric shock for
punching the correct hole, they learn which hole is correct more quickly than subjects who receive an electric shock for punching the incorrect hole. This cannot,
however, be accurately described as punishment if it is increasing the probability of the behavior.
Biological correlates of operant conditioning
The first scientific studies identifying neurons that responded in ways that suggested they encode for conditioned stimuli came from work by Rusty Richardson and
Mahlon deLong.[12][13] They showed that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a
conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are equally active for positive and negative reinforcers, and have been
demonstrated to cause plasticity in many cortical regions.[14] Evidence also exists that dopamine is activated at similar times. The dopamine pathways encode positive
reward only, not aversive reinforcement, and they project much more densely onto frontal cortex regions. Cholinergic projections, in contrast, are dense even in the
posterior cortical regions like the primary visual cortex. A study of patients with Parkinson's disease, a condition attributed to the insufficient action of dopamine,
further illustrates the role of dopamine in positive reinforcement.[15] It showed that, while off their medication, patients learned more readily from aversive consequences
than from positive reinforcement. Patients on their medication showed the opposite pattern, with positive reinforcement proving the more effective
form of learning when dopamine activity is high.
[edit] Factors that alter the effectiveness of consequences
When using consequences to modify a response, the effectiveness of a consequence can be increased or decreased by various factors. These factors can apply to either
reinforcing or punishing consequences.
1. Satiation/Deprivation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of
stimulation has been satisfied. Conversely, the effectiveness of a consequence will increase as the individual becomes
deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior. Satiation is
generally only a potential problem with primary reinforcers, those that do not need to be learned, such as food and water.
2. Immediacy: How immediately a consequence follows a response determines the effectiveness of the
consequence. More immediate feedback is more effective than delayed feedback. If someone's license plate is
caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will
not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them
over, then their speeding behavior is more likely to be affected.
3. Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness
upon the response is reduced. But if a consequence follows the response consistently after successive instances, its ability
to modify the response is increased. A consistent schedule of reinforcement leads to faster learning; when the
schedule is variable, learning is slower. Behavior acquired under intermittent
reinforcement is more difficult to extinguish, while behavior acquired under a highly consistent schedule is extinguished more easily.
4. Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the
consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually
large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even to buy
multiple tickets). But if a lottery jackpot is small, the same person might not feel it is worth the effort of driving out and
finding a place to buy a ticket. In this example, it is also useful to note that "effort" is a punishing consequence. How these
opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed
or not.
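The four factors above can be combined in a toy multiplicative model. Everything in this sketch is illustrative: the multiplicative combination, the ten-second immediacy decay constant, and the 0-to-1 scales are assumptions made for the example, not values from the behavioral literature.

```python
import math

def consequence_effectiveness(deprivation, delay_s, contingency, size):
    """Toy estimate of how effective a consequence will be.

    deprivation: 0.0 (fully satiated) .. 1.0 (fully deprived)
    delay_s:     seconds between the response and the consequence
    contingency: 0.0 (never follows the response) .. 1.0 (always follows)
    size:        magnitude of the consequence, in arbitrary units
    """
    immediacy = math.exp(-delay_s / 10.0)  # assumed 10 s decay constant
    return deprivation * immediacy * contingency * size

# A reliable, near-immediate consequence for a deprived individual is strong,
# while the same penalty arriving a week later is nearly inert.
immediate_ticket = consequence_effectiveness(1.0, 5, 1.0, 3.0)
mailed_ticket = consequence_effectiveness(1.0, 7 * 24 * 3600, 1.0, 3.0)
assert immediate_ticket > mailed_ticket
```

The speeding-ticket comparison above simply re-expresses the Immediacy example in the text: a week of delay collapses the immediacy term toward zero.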
Most of these factors exist for biological reasons. The biological purpose of the Principle of Satiation is to maintain the organism's homeostasis. When an organism has
been deprived of sugar, for example, the effectiveness of the taste of sugar as a reinforcer is high. However, as the organism reaches or exceeds its optimum blood-sugar levels, the taste of sugar becomes less effective, perhaps even aversive.
The principles of Immediacy and Contingency exist for neurochemical reasons. When an organism experiences a reinforcing stimulus, dopamine pathways in the brain
are activated. This network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic
neurons."[16] This results in the plasticity of these synapses allowing recently activated synapses to increase their sensitivity to efferent signals, hence increasing the
probability of occurrence for the recent responses preceding the reinforcement. These responses are, statistically, the most likely to have been the behavior responsible
for successfully achieving reinforcement. But when the application of reinforcement is either less immediate or less contingent (less consistent), the ability of dopamine
to act upon the appropriate synapses is reduced.
[edit] Operant variability
Operant variability is what allows a response to adapt to new situations. Operant behavior is distinguished from reflexes in that its response topography (the form of
the response) is subject to slight variations from one performance to another. These slight variations can include small differences in the specific motions involved,
differences in the amount of force applied, and small changes in the timing of the response. If a subject's history of reinforcement is consistent, such variations will
remain stable because the same successful variations are more likely to be reinforced than less successful variations. However, behavioral variability can also be altered
when subjected to certain controlling variables.[17]
An extinction burst will often occur when an extinction procedure has just begun. This consists of a sudden and temporary increase in the response's frequency,
followed by the eventual decline and extinction of the behavior targeted for elimination. Take, as an example, a pigeon that has been reinforced to peck an electronic
button. During its training history, every time the pigeon pecked the button, it received a small amount of bird seed as a reinforcer. So, whenever the bird is
hungry, it will peck the button to receive food. However, if the button is turned off, the hungry pigeon will first try pecking the button just as it has in the past.
When no food is forthcoming, the bird will likely try again, and again, and again. After a period of frantic activity in which its pecking yields no result, the
pigeon's pecking will decrease in frequency.
The evolutionary advantage of this extinction burst is clear. In a natural environment, an animal that persists in a learned behavior despite a lapse in immediate
reinforcement might still produce reinforcing consequences by trying again. Such an animal would be at an advantage over one that gives
up too easily.
Extinction-induced variability serves a similar adaptive role. When extinction begins, and if the environment allows for it, an initial increase in the response rate is not
the only thing that can happen. Imagine a bell curve. The horizontal axis would represent the different variations possible for a given behavior. The vertical axis would
represent the response's probability in a given situation. Response variants in the middle of the bell curve, at its highest point, are the most likely because those
responses, according to the organism's experience, have been the most effective at producing reinforcement. The more extreme forms of the behavior would lie at the
lower ends of the curve, to the left and to the right of the peak, where their probability for expression is low.
A simple example would be a person inside a room opening a door to exit. The response would be the opening of the door, and the reinforcer would be the freedom to
exit. Each time that person opens that same door, they do not open it in exactly the same way. Rather, each time they open the door a little
differently: sometimes with less force, sometimes with more; sometimes with one hand, sometimes with the other; sometimes more quickly, sometimes more
slowly. Because of the physical properties of the door and its handle, there is a certain range of successful responses which are reinforced.
Now imagine in our example that the subject tries to open the door and it won't budge. This is when extinction-induced variability occurs. The bell curve of probable
responses will begin to broaden, with more extreme forms of behavior becoming more likely. The person might now try opening the door with extra force, repeatedly
twist the knob, try to hit the door with their shoulder, maybe even call for help or climb out a window. This is how extinction causes variability in behavior, in the hope
that these new variations might be successful. For this reason, extinction-induced variability is an important part of the operant procedure of shaping.
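The bell-curve account above can be sketched as sampling from a normal distribution whose spread widens under extinction. The specific numbers (the mean of 1.0 and the two standard deviations) are arbitrary illustration, not data from any experiment.

```python
import random

def sample_response(variability):
    """Draw one response variant, e.g. the force applied to a door handle.

    The mean of 1.0 stands for the historically most-reinforced form of the
    response; the standard deviation stands for behavioral variability,
    which broadens when reinforcement stops.
    """
    return random.gauss(1.0, variability)

random.seed(0)
reinforced = [sample_response(0.05) for _ in range(1000)]  # stable reinforcement history
extinction = [sample_response(0.30) for _ in range(1000)]  # extinction-induced variability

def spread(samples):
    return max(samples) - min(samples)

# More extreme variants (shoulder-barging the door, climbing out a window)
# become more likely once the curve broadens.
assert spread(extinction) > spread(reinforced)
```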
[edit] Avoidance learning
Avoidance training is a form of learning based on negative reinforcement: the subject learns that a certain response will result in the termination or prevention of an aversive
stimulus. Two kinds of experimental settings are commonly used: discriminated and free-operant avoidance learning.
Discriminated avoidance learning
In discriminated avoidance learning, a novel stimulus such as a light or a tone is followed by an aversive stimulus such as a
shock (CS-US, similar to classical conditioning). During the first trials, called escape trials, the animal usually experiences
both the CS (conditioned stimulus) and the US (unconditioned stimulus), performing the operant response to terminate the
aversive US. Over time, the animal learns to perform the response already during the presentation of the CS, thus
preventing the aversive US from occurring. Such trials are called avoidance trials.
Free-operant avoidance learning
In this experimental setting, no discrete stimulus is used to signal the occurrence of the aversive stimulus. Rather, the
aversive stimuli (usually shocks) are presented without explicit warning stimuli.
There are two crucial time intervals that determine the rate of avoidance learning. The first is the S-S interval
(shock-shock interval). This is the amount of time that passes between successive presentations of the shock (unless the
operant response is performed). The other one is called the R-S-interval (response-shock-interval) which specifies the length
of the time interval following an operant response during which no shocks will be delivered. Note that each time the
organism performs the operant response, the R-S-interval without shocks begins anew.
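The two intervals can be made concrete with a small simulation of a free-operant (Sidman) avoidance schedule. The function below is a sketch under the simplest reading of the text: shocks recur every S-S interval, and each response postpones the next shock to one R-S interval after that response.

```python
def shock_times(ss_interval, rs_interval, response_times, horizon):
    """Return the times (in seconds) at which shocks are delivered.

    ss_interval:    shock-shock interval (shocks recur at this spacing)
    rs_interval:    response-shock interval (shock-free period after a response)
    response_times: times at which the organism performs the operant response
    horizon:        end of the simulated session
    """
    shocks = []
    next_shock = ss_interval
    for t in sorted(response_times):
        while next_shock <= t:          # shocks that arrived before this response
            shocks.append(next_shock)
            next_shock += ss_interval
        next_shock = t + rs_interval    # the R-S interval begins anew
    while next_shock <= horizon:
        shocks.append(next_shock)
        next_shock += ss_interval
    return shocks

# With no responding, shocks arrive every 5 s; responding at least once per
# R-S interval avoids every shock.
assert shock_times(5, 10, [], 20) == [5, 10, 15, 20]
assert shock_times(5, 10, [3, 12], 20) == []
```

The second assertion illustrates the key property of the schedule: an organism that keeps responding before the R-S interval elapses never receives a shock at all.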
[edit] Two-process theory of avoidance
This theory was originally established to explain learning in discriminated avoidance learning. It assumes that two processes take place. a) Classical conditioning of fear.
During the first trials of the training, the organism experiences both the CS and the aversive US (escape trials). The theory assumes that during those trials classical
conditioning takes place by pairing the CS with the US. Because of the aversive nature of the US, the CS comes to elicit a conditioned emotional reaction (CER):
fear. In classical conditioning, presenting a CS conditioned with an aversive US disrupts the organism's ongoing behavior. b) Reinforcement of the operant response by
fear reduction. Because, during the first process, the CS signaling the aversive US has itself become aversive by eliciting fear in the organism, reducing this unpleasant
emotional reaction serves to motivate the operant response. The organism learns to make the response during the CS, thus terminating the aversive internal reaction
elicited by the CS. An important aspect of this theory is that the term "avoidance" does not really describe what the organism is doing. It does not "avoid" the aversive
US in the sense of anticipating it. Rather, the organism escapes an aversive internal state caused by the CS.
• One of the practical aspects of operant conditioning with relation to animal training is the use of shaping (reinforcing
successive approximations to a target behavior while no longer reinforcing earlier approximations), as well as chaining.
[edit] Verbal Behavior
Main article: Verbal Behavior (book)
In 1957, Skinner published Verbal Behavior, a theoretical extension of the work he had pioneered since 1938. This work extended the theory of operant conditioning to
human behavior previously assigned to the areas of language, linguistics and other fields. Verbal Behavior is the logical extension of Skinner's ideas, in which he
introduced new functional relationship categories such as intraverbals, autoclitics, mands, tacts and the controlling relationship of the audience. All of these
relationships were based on operant conditioning and relied on no new mechanisms despite the introduction of new functional categories.
[edit] Four term contingency
Modern behavior analysis, which is the name of the discipline directly descended from Skinner's work, holds that behavior is explained in four terms: an establishing
operation (EO), a discriminative stimulus (Sd), a response (R), and a reinforcing stimulus (Srein or Sr for reinforcers, sometimes Save for aversive stimuli).[18]
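To make the four terms concrete, they can be written as a simple record. The class and the example values below are hypothetical illustrations, not drawn from the source.

```python
from dataclasses import dataclass

@dataclass
class FourTermContingency:
    """One analyzed unit of behavior: EO -> Sd -> R -> Sr."""
    establishing_operation: str   # EO: makes the consequence valuable
    discriminative_stimulus: str  # Sd: signals that reinforcement is available
    response: str                 # R:  the behavior itself
    reinforcing_stimulus: str     # Sr: the consequence that follows

# A classic (hypothetical) laboratory example:
lever_press = FourTermContingency(
    establishing_operation="food deprivation",
    discriminative_stimulus="light above the lever is on",
    response="lever press",
    reinforcing_stimulus="food pellet delivered",
)
```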
[edit] Operant Hoarding
Operant hoarding is a term referring to the choice made by a rat, on a compound schedule called a multiple schedule, that maximizes its rate of reinforcement in an
operant conditioning context. More specifically, rats were shown to allow food pellets to accumulate in a food tray by continuing to press a lever on a
continuous reinforcement schedule instead of retrieving those pellets. Retrieval of the pellets always initiated a one-minute period of extinction during which no
additional food pellets were available, but those accumulated earlier could be consumed. This finding appears to contradict the usual finding that rats
behave impulsively in situations offering a choice between a smaller food object right away and a larger food object after some delay. See schedules of
reinforcement.[19]

Gestalt psychology
From Wikipedia, the free encyclopedia
Gestalt psychology or gestaltism (German: Gestalt - "form" or "whole") of the Berlin School is a theory of mind and brain positing that the operational principle of the
brain is holistic, parallel, and analog, with self-organizing tendencies, or that the whole is different from the sum of its parts. The Gestalt effect refers to the form-
forming capability of our senses, particularly with respect to the visual recognition of figures and whole forms instead of just a collection of simple lines and curves. In
psychology, gestaltism is often opposed to structuralism and the work of Wundt. Often, the phrase "The whole is greater than the sum of the parts" is used when explaining Gestalt theory.
• 1 Origins
• 2 Theoretical framework and methodology
• 3 Properties
○ 3.1 Emergence
○ 3.2 Reification
○ 3.3 Multistability
○ 3.4 Invariance
• 4 Prägnanz
• 5 Gestalt views in psychology
• 6 Applications in computer science
• 7 Criticism
• 8 See also
• 9 References
• 10 External links
[edit] Origins
The concept of Gestalt was first introduced in contemporary philosophy and psychology by Christian von Ehrenfels (a member of the School of Brentano). The idea of
Gestalt has its roots in theories by Johann Wolfgang von Goethe, Immanuel Kant, and Ernst Mach. Wertheimer's unique contribution was to insist that the "Gestalt" is
perceptually primary, defining the parts of which it was composed, rather than being a secondary quality that emerges from those parts, as von Ehrenfels's earlier
Gestalt-Qualität had been.
Both von Ehrenfels and Edmund Husserl seem to have been inspired by Mach's work Beiträge zur Analyse der Empfindungen (Contributions to the Analysis of the
Sensations, 1886), in formulating their very similar concepts of Gestalt and Figural Moment, respectively.
Early 20th century theorists, such as Kurt Koffka, Max Wertheimer, and Wolfgang Köhler (students of Carl Stumpf) saw objects as perceived within an environment
according to all of their elements taken together as a global construct. This 'gestalt' or 'whole form' approach sought to define principles of perception: seemingly
innate mental laws that determined the way in which objects were perceived.
These laws took several forms, such as the grouping of similar, or proximate, objects together, within this global process. Although Gestalt has been criticized for being
merely descriptive, it has formed the basis of much further research into the perception of patterns and objects ( Carlson et al. 2000), and of research into behavior,
thinking, problem solving and psychopathology.
It should also be emphasized that Gestalt psychology is distinct from Gestalt psychotherapy, although there is a commonality in their names. One has little to do with
the other.
[edit] Theoretical framework and methodology
The investigations developed at the beginning of the 20th century, based on traditional scientific methodology, divided the object of study into a set of elements that
could be analyzed separately with the objective of reducing the complexity of this object. Contrary to this methodology, the school of Gestalt practiced a series of
theoretical and methodological principles that attempted to redefine the approach to psychological research.
The theoretical principles are the following:
• Principle of Totality - The conscious experience must be considered globally (by taking into account all the physical and
mental aspects of the individual simultaneously) because the nature of the mind demands that each component be
considered as part of a system of dynamic relationships.
• Principle of psychophysical isomorphism - A correlation exists between conscious experience and cerebral activity.
Based on the principles above the following methodological principles are defined:
• Phenomenon Experimental Analysis - In relation to the Totality Principle any psychological research should take as a
starting point phenomena and not be solely focused on sensory qualities.
• Biotic Experiment - The School of Gestalt established a need to conduct real experiments which sharply contrasted with
and opposed classic laboratory experiments. This signified experimenting in natural situations, developed in real conditions,
in which it would be possible to reproduce, with higher fidelity, what would be habitual for a subject.
[edit] Properties
The key principles of Gestalt systems are emergence, reification, multistability and invariance.[1]
[edit] Emergence

Emergence is demonstrated by the perception of the Dog Picture, which depicts a Dalmatian dog sniffing the ground in the shade of overhanging trees. The dog is not
recognized by first identifying its parts (feet, ears, nose, tail, etc.), and then inferring the dog from those component parts. Instead, the dog is perceived as a whole, all at
once. However, this is a description of what occurs in vision and not an explanation. Gestalt theory does not explain how the percept of a dog emerges.
[edit] Reification

Reification is the constructive or generative aspect of perception, by which the experienced percept contains more explicit spatial information than the sensory stimulus
on which it is based.
For instance, a triangle will be perceived in picture A, although no triangle has actually been drawn. In pictures B and D the eye will recognize disparate shapes as
"belonging" to a single shape, in C a complete three-dimensional shape is seen, where in actuality no such thing is drawn.
Reification can be explained by progress in the study of illusory contours, which are treated by the visual system as "real" contours.
See also: Reification (fallacy)

[edit] Multistability

The Necker cube and the Rubin vase, two examples of multistability
Multistability (or multistable perception) is the tendency of ambiguous perceptual experiences to pop back and forth unstably between two or more alternative
interpretations. This is seen for example in the Necker cube, and in Rubin's Figure/Vase illusion shown here. Other examples include the 'three-pronged widget' and
artist M. C. Escher's artwork and the appearance of flashing marquee lights moving first one direction and then suddenly the other. Again, Gestalt does not explain how
images appear multistable, only that they do.

[edit] Invariance

Invariance is the property of perception whereby simple geometrical objects are recognized independent of rotation, translation, and scale; as well as several other
variations such as elastic deformations, different lighting, and different component features. For example, the objects in A in the figure are all immediately recognized
as the same basic shape, which are immediately distinguishable from the forms in B. They are even recognized despite perspective and elastic deformations as in C, and
when depicted using different graphic elements as in D. Computational theories of vision, such as those by David Marr, have had more success in explaining how
objects are classified.
Emergence, reification, multistability, and invariance are not separable modules to be modeled individually, but they are different aspects of a single unified dynamic
mechanism.[citation needed]

[edit] Prägnanz
The fundamental principle of gestalt perception is the law of prägnanz (German for "pithiness"), which says that we tend to order our experience in a manner that is
regular, orderly, symmetric, and simple. Gestalt psychologists attempt to discover refinements of the law of prägnanz, and this involves writing down laws that
hypothetically allow us to predict the interpretation of sensation, often called "gestalt laws".[1] These include:

Law of Closure

Law of Similarity

Law of Proximity
• Law of Closure — The mind may experience elements it does not perceive through sensation, in order to complete a regular
figure (that is, to increase regularity).
• Law of Similarity — The mind groups similar elements into collective entities or totalities. This similarity might depend on
relationships of form, color, size, or brightness.
• Law of Proximity — Spatial or temporal proximity of elements may induce the mind to perceive a collective or totality.
• Law of Symmetry (figure-ground relationships) — Symmetrical images are perceived collectively, even in spite of distance.
• Law of Continuity — The mind continues visual, auditory, and kinetic patterns.
• Law of Common Fate — Elements with the same moving direction are perceived as a collective or unit.
[edit] Gestalt views in psychology
Gestalt psychologists find it is important to think of problems as a whole. Max Wertheimer considered thinking to happen in two ways: productive and reproductive.[1]
Productive thinking is solving a problem with insight.
This is a quick, insightful, unplanned response to situations and environmental interaction.
Reproductive thinking is solving a problem with previous experiences and what is already known (1945/1959).
This is a very common kind of thinking. For example, when a person is given several segments of information and deliberately examines the relationships among the
parts, analyzing their purpose, concept, and totality, he or she reaches the "aha!" moment using what is already known. Understanding in this case happens intentionally,
by reproductive thinking.
Another Gestalt psychologist, Perkins, believes insight involves three processes:
1) An unconscious leap in thinking.[1]
2) An increased speed of mental processing.
3) A short-circuiting of normal reasoning.[2]
Views opposing Gestalt psychology include:
1) The Nothing-Special View
2) The Neo-Gestalt View
3) The Three-Process View
Gestalt laws continue to play an important role in current psychological research on vision. For example, the object-based attention hypothesis[3] states that elements in a
visual scene are first grouped according to Gestalt principles; consequently, further attentional resources can be allocated to particular objects.
Gestalt psychology should not be confused with the Gestalt therapy of Fritz Perls, which is only peripherally linked to Gestalt psychology. A strictly Gestalt
psychology-based therapeutic method is Gestalt Theoretical Psychotherapy, developed by the German Gestalt psychologist and psychotherapist Hans-Jürgen Walter.
[edit] Applications in computer science
The Gestalt laws are used in user interface design. The laws of similarity and proximity can, for example, be used as guides for placing radio buttons. They may also be
used in designing computers and software for more intuitive human use. Examples include the design and layout of a desktop's shortcuts in rows and columns. Gestalt
psychology also has applications in computer vision for trying to make computers "see" the same things as humans do.[citation needed]
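As a rough illustration of how a Gestalt law can be operationalized in software, the sketch below groups 2-D interface elements by single-linkage distance, a crude analogue of the law of proximity. The threshold value and the coordinates are arbitrary assumptions for the example.

```python
def group_by_proximity(points, threshold):
    """Group 2-D points whose distance to some member of a group is at most
    `threshold` (single-linkage clustering), a crude analogue of the
    Gestalt law of proximity."""
    groups = []
    for p in points:
        # Find every existing group containing a point near p.
        near = [g for g in groups
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= threshold ** 2
                       for q in g)]
        merged = [p] + [q for g in near for q in g]
        groups = [g for g in groups if g not in near]
        groups.append(merged)
    return groups

# Two visually separate clusters of radio buttons are grouped as two units,
# mirroring how a viewer would perceive them:
buttons = [(0, 0), (0, 1), (0, 2), (10, 0), (10, 1)]
assert len(group_by_proximity(buttons, 1.5)) == 2
```

In interface terms, elements the function places in the same group are the ones a designer would expect users to perceive as belonging together.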
[edit] Criticism
In some scholarly communities, such as cognitive psychology and computational neuroscience, Gestalt theories of perception are criticized for being descriptive rather
than explanatory in nature. For this reason, they are viewed by some as redundant or uninformative. For example, Bruce, Green & Georgeson[4] conclude the following
regarding Gestalt theory's influence on the study of visual perception:
"The physiological theory of the Gestaltists has fallen by the wayside, leaving us with a set of descriptive principles, but
without a model of perceptual processing. Indeed, some of their "laws" of perceptual organisation today sound vague and
inadequate. What is meant by a "good" or "simple" shape, for example?"
[edit] See also

• Gestalt therapy - often mistaken for Gestalt psychology

• Structural information theory
• Rudolf Arnheim
• Wolfgang Metzger
• Kurt Goldstein
• Solomon Asch
• James Tenney
• Graz School
• Important publications in gestalt psychology
• Mereology
• Optical illusion
• Pattern recognition (psychology)
• Pattern recognition (machine learning)
• Notan
• Amodal perception
• Phenomenology
[edit] References
1. ^ a b c Sternberg, Robert, Cognitive Psychology, Third Edition. Thomson Wadsworth, 2003.
2. ^ Langley & associates, 1987; Perkins, 1981; Weisberg, 1986, 1995.
3. ^ Scholl, B. J. (2001). Objects and attention: The state of the art. Cognition, 80(1-2), 1-46.
4. ^ Bruce, V., Green, P. & Georgeson, M. (1996). Visual perception: Physiology, psychology and ecology (3rd ed.). LEA. p. 110.
[edit] External links
• Gestalt Society of Croatia
• International Society for Gestalt Theory and its Applications - GTA
• Art, Design and Gestalt Theory
• Rudolf Arnheim: The Little Owl on the Shoulder of Athene
• Embedded Figures in Art, Architecture and Design
• On Max Wertheimer and Pablo Picasso
• On Esthetics and Gestalt Theory
• The World In Your Head - by Steven Lehar
• Gestalt Isomorphism and the Primacy of Subjective Conscious Experience - by Steven Lehar
• The new gestalt psychology of the 21st century
• The Pennsylvania Gestalt Center
• Nancy Schleich Gestalt Counseling


Atkinson-Shiffrin memory model
Note that in the model's diagram, sensory memory is detached from both other forms of memory, representing its separation from short-term and long-term memory, as its storage is used primarily on a "run time" basis for physical or psychosomatic reference.
The Atkinson-Shiffrin model (also known as the multi-store model, multi-memory model, and modal model) is a psychological model of the structure of memory proposed in 1968 by Richard Atkinson and Richard Shiffrin[1]. It proposes that human memory involves a sequence of three stages:
1. Sensory memory (SM)
2. Short-term memory (STM)
3. Long-term memory (LTM)
• 1 Summary
○ 1.1 Sensory memory
○ 1.2 Short-term memory
○ 1.3 Long-term memory
• 2 Criticisms
○ 2.1 Linearity
○ 2.2 Monolithicity
• 3 Later Developments
• 4 See also
• 5 References
• 6 External links
Summary

The original two-stage version of the Atkinson-Shiffrin memory model, lacking the "sensory memory" stage, which was added at a later point in the research.
The multi-store model of memory is an explanation of how memory processes work. You hear, see, and feel many things, but only a small number are remembered. The
model was first described by Atkinson and Shiffrin in 1968.
Sensory memory
The sense organs have a limited ability to store information about the world, in a fairly unprocessed way, for less than a second. The visual system possesses iconic memory for visual stimuli such as shape, size, colour and location (but not meaning), whereas the auditory system has echoic memory for auditory stimuli. Coltheart et al. (1974) have argued that the momentary freezing of visual input allows us to select which aspects of the input should go on for further memory processing. The existence of sensory memory was experimentally demonstrated by Sperling (1960) using a tachistoscope.
Short-term memory
Information is retained acoustically, long enough to use it, e.g. looking up a telephone number and remembering it long enough to dial it. Peterson and Peterson (1959) demonstrated that STM lasts approximately 15 to 30 seconds unless people rehearse the material, while Miller (1956) found that STM has a limited capacity of around 7 ± 2 'chunks' of information[2]. STM also appears mostly to encode memory acoustically (in terms of sound), as Conrad (1964) demonstrated, but it can also retain visuospatial images. In many cases, however, STM can encode at a semantic level.
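Miller's 7 ± 2 limit applies to chunks rather than raw items, so recoding items into larger chunks increases how much a person can hold. A toy sketch of this idea (the digit string and the fixed-size grouping scheme are illustrative assumptions, not from the source):

```python
# Toy illustration of Miller's "7 +/- 2" chunk limit in short-term memory.
# Grouping raw digits into larger chunks reduces the number of units to hold.

STM_CAPACITY = 7  # nominal capacity in chunks (7 +/- 2)

def chunk(digits, size=4):
    """Recode a digit string into fixed-size chunks (a hypothetical scheme)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "442079460958"          # 12 raw digits: over the 7 +/- 2 limit
chunks = chunk(number)           # ['4420', '7946', '0958']: only 3 chunks

print(len(number), "digits ->", len(chunks), "chunks")
print("fits in STM as raw digits?", len(number) <= STM_CAPACITY + 2)  # False
print("fits in STM as chunks?", len(chunks) <= STM_CAPACITY + 2)      # True
```

The same principle underlies why phone numbers are conventionally printed in groups rather than as an unbroken digit string.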
Long-term memory
LTM provides the lasting retention of information, from minutes to a lifetime. Long-term memory appears to have an almost limitless capacity to retain information, although this capacity has never been measured directly, as doing so would take too long. LTM information seems to be encoded mainly in terms of meaning (semantic memory), as Baddeley has shown, but LTM also retains procedural skills and imagery.
Memory may also pass directly from sensory memory to LTM if it receives immediate attention, e.g. witnessing a fire in your house. This is known as a "flashbulb memory". Another example is that most people can recall what they were doing on 11 September 2001.
If information in LTM is not rehearsed, it can also be forgotten through trace decay.
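The three-store sequence described above can be sketched as a toy simulation. The time constants and transfer rules below are illustrative assumptions drawn loosely from the figures in this article (sensory span under a second, STM span of roughly 15 to 30 seconds, rehearsal as the route into LTM); they are not a formal specification of the model:

```python
# Toy simulation of the Atkinson-Shiffrin three-store sequence.
# Time constants and transfer rules are illustrative, not canonical.

SENSORY_SPAN = 0.5   # seconds; sensory memory fades in under a second
STM_SPAN = 30.0      # seconds; STM lasts roughly 15-30 s without rehearsal

def recall(stimulus, attended, rehearsed, elapsed):
    """Return which store (if any) holds the stimulus after `elapsed` seconds."""
    if not attended:
        # Unattended input decays out of sensory memory almost immediately.
        return "sensory" if elapsed < SENSORY_SPAN else None
    if rehearsed:
        # Rehearsal transfers the trace from STM into long-term memory.
        return "LTM"
    # Attended but unrehearsed input sits in STM until it decays.
    return "STM" if elapsed < STM_SPAN else None

print(recall("phone number", attended=True, rehearsed=False, elapsed=10))  # STM
print(recall("phone number", attended=True, rehearsed=False, elapsed=60))  # None
print(recall("phone number", attended=True, rehearsed=True,  elapsed=60))  # LTM
```

Note that this sketch deliberately reproduces the model's strict linearity (nothing reaches LTM except through STM), which is exactly the property the Criticisms section below takes issue with.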
Criticisms
Linearity
Some argue that the multi-store model is too linear[citation needed], i.e., that it cannot accommodate subdivisions of the STM and LTM memory stores[citation needed]. The concept of the "stream of memory" in this model has been suggested to lack internal consistency[citation needed], as, by definition, the stream of memory is often discarded in favour of newer information, often with little emphasis on the salience of the new information[citation needed]. A supposed example of this was found in the asymptote of control data, which revealed primacy and recency effects (information is recalled better when presented early or late in the test stream) overshadowing the asymptote[citation needed]. This suggests a need to explain decay processes in memory[citation needed]. It has been suggested that the idea of three separate areas for memory storage may emerge from neuronal processes such as rates of firing[citation needed], as well as from the "ionised sodium gate" model of action potentials[citation needed].
In the case of sensory memory, the model, which is psychological, does not provide a ready explanation for the observed asynchronous nature of neural activity occurring between anatomical structures[citation needed]. An example of this is the use of sensory memory to perform physical processes such as motor function, which suggests that once an action is performed, it is remembered for about 3 seconds and then begins a process of rapid decay[citation needed].
Monolithicity
The Atkinson-Shiffrin model distinguishes different forms of memory, but it does not take into account what kind of information is presented[citation needed], nor does it take into account individual differences in subjects' performance, including cognitive ability or previous experience with learning techniques[citation needed].
Whilst case studies of individuals (such as Clive Wearing) have been reported indicating that memory can be severely damaged independently of at least some other cognitive capacities[citation needed], there is less support from case studies of developmental models for the supposed tripartite memory structure[citation needed]. Some have argued that autistic savant performance may violate predictions from the model, based on an ability to recall precise information without the need for rehearsal[citation needed], and without evidence for decay[citation needed].
Later Developments
The advent of the model provided a testable framework for subsequent work, and a strong stimulus for the experimental study of human memory[citation needed]. This work has since led to the model being superseded[citation needed]. Newer models allow for cases where short-term memory is impaired but long-term memory is not (which is impossible in the basic model, as information can only become encoded in long-term memory after passing through the unitary short-term store).
Much recent work has focused on the model proposed by Alan Baddeley, which distinguishes stores for phonological (speech-sound) and visuo-spatial information as well as episodic material, and proposes the existence of central executive processes that access these stores[3].
Atkinson and Shiffrin also refrained from proposing any mechanisms or processes that might be responsible for encoding memories and transferring them between the three systems. The model is a hypothetical layout of the function of memory systems, not in any way representative of a physical or biological basis of memory. Newer models have been created that better account for these other characteristics, and a tremendous body of research on the physical layout of memory systems has emerged[citation needed].
See also
• Clive Wearing
• HM (patient)
• Richard Atkinson
• Richard Shiffrin
References
1. ^ Atkinson, R.C.; Shiffrin, R.M. (1968). "Human memory: A proposed system and its control processes". In Spence, K.W.; Spence, J.T. (eds.), The Psychology of Learning and Motivation (Volume 2). New York: Academic Press. pp. 89–195.
2. ^ Miller, G.A. (1956). "The magical number seven". The Psychological Review 63: 81–97. doi:10.1037/h0043158.
3. ^ Baddeley, A. (2003). "Working memory: looking back and looking forward". Nature Reviews Neuroscience 4 (10): 829–839. Baddeley, A. (April 1994). "The magical number seven: still magic after all these years?". Psychol Rev 101 (2): 353–6. doi:10.1037/0033-295X.101.2.353. PMID
External links
• Science aid: Multi Store Model. An easy-to-understand yet comprehensive article on the model.
• Simply Psychology webpage on the multi-store model.
This page was last modified on 15 November 2009 at 19:11. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of Use for details. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.