PSYC 2500-02 LEARNING: QUIZ 2                                    NAME:                                                                                  

Spring 2017

Read each question and all the alternatives carefully.  Circle the letter of the BEST answer on this sheet, and fill in the corresponding bubble on your bubble sheet.  Focus on what the question asks for; don't just choose an answer that is a true statement on its own.

 

1.      A compound stimulus consisting of a dim light and a loud tone is followed by a shock on a number of trials with a rat. Then, on test trials, the light or the tone is presented alone. What should be observed?

a)      The rat will show an equal fear response to the light and the tone.

b)      The rat will show a greater fear response to the tone.

c)      The rat will show a greater fear response to the light.

d)      The rat will show no fear response; the stimuli cancel each other out.

 

2.      What is being calculated in the Rescorla-Wagner model's equation, ΔVi = Si(Aj - Vsum)?

a)      The salience of the conditioned stimulus.

b)      The change in predictive power of a conditioned stimulus on a certain trial.

c)      The total amount of predictive power of all conditioned stimuli presented on a certain trial.

d)      None of the above.

 

3.      Which of the following rules of the Rescorla-Wagner Model is NOT correct?

a)      if the strength of the US is greater than the strength of expectation, all CSs paired with a US get excitatory conditioning (V increases).

b)      if the strength of the US is less than the strength of expectation, all CSs paired with a US get inhibitory conditioning (V decreases).

c)      more salient (noticeable) CSs will condition more slowly than less salient CSs.

d)      if the strength of the US equals the strength of expectation, no conditioning takes place.

 

4.      Which of the following does the Rescorla-Wagner model of classical conditioning NOT account for?

a)      blocking

b)      overshadowing

c)      spontaneous recovery

d)      over-expectation effect

 

5.      Which of the following does the Rescorla-Wagner Model explain best?

a)      Spontaneous Recovery

b)      Blocking

c)      Latent Inhibition

d)      The CS Pre-exposure Effect

 

6.      Which of the following principles of association is used to explain learning in Thorndike's version of INSTRUMENTAL conditioning, but NOT used to explain learning in Pavlov's (S-R) version of CLASSICAL conditioning?

a)      There is an association between stimulus and response.

b)      The S-R association is learned through experience.

c)      The S-R association is learned due to the utility of the response.

d)      The S-R association is strengthened with repeated trials.

 

7.      According to the stop-action principle, when a reinforcer stops a behavior, it

a)      strengthens the association between the situation and the behavior that occurred at the moment of reinforcement.

b)      elicits a distinctive conditioned response.

c)      strengthens the response due to the repetition of the stimulus being paired with the primary reinforcer.

d)      all of the above

 

8.      An experiment is done in which rats run through a maze and are rewarded with food pellets at the end. This is an example of

a)      classical conditioning because reinforcement depends on a response

b)      operant conditioning because the response is emitted voluntarily

c)      classical conditioning because reinforcement comes regardless of the response

d)      operant conditioning because the response is elicited

 

9.      All of the following are differences between classical and operant conditioning EXCEPT:

a)      Classical conditioning depends on an elicited response whereas operant conditioning depends on an emitted response.

b)      Classical conditioning occurs through contingency and contiguity whereas operant conditioning does not.

c)      Classical conditioning events occur in the order Stimulus-Reinforcement-Response whereas operant conditioning events occur as Stimulus-Response-Reinforcement.

d)      Classical conditioning results in a learned signal between two stimuli whereas operant conditioning results in a learned behavior.

 

10.    According to Skinner, punishment

a)      successfully eliminates the undesirable response.

b)      causes a temporary suppression of responding.

c)      does nothing, because it has no effect on the behavior or response of the animal.

d)      is only effective when it is negative, because positive punishment acts as a motivating factor for the animal to continue performing the behavior.

 


11.    According to Guthrie, punishment will work when

a)      it changes the subject's drive.

b)      it causes the subject to make a new response to the stimulus.

c)      the subject is reinforced.

d)      the punishment gives the subject negative motivation.

 

12.    According to Guthrie, extinction is:

a)      the result of too long an interval between experiments

b)      the result of a failure to protect an established response

c)      due to the animal not receiving something it values or desires

d)      not possible given the number of stimuli connected to the response

 

13.    A discriminative stimulus indicates under what circumstances a response will be reinforced; therefore it

a)      causes the response

b)      weakens the response

c)      does not cause the response

d)      is never present when the reinforcement occurs

 

14.    Clark Hull's 1943 equation for learning was revised in 1952 to add K (incentive motivation). The addition of K was based on the results of the Crespi-Zeaman Effect. Which of the following statements describes this effect accurately?

a)      Changing the number of reinforcements had an unexpected sudden effect on behavior.

b)      Changing the amount of reinforcement had an unexpected sudden effect on behavior.

c)      Changing the amount of reinforcement meant that habit strength still increased, but at a slower rate.

d)      Change in the number of reinforcements had no effect on behavior.

 

15.    Which psychologist's definition of reinforcement is described incorrectly?

a)      Clark Hull: anything that reduces drive

b)      Edward Tolman: motivation for performance

c)      B.F. Skinner: anything that strengthens an S-R connection

d)      Guthrie: a change in the stimulus situation

 

16.    In Clark Hull's Principles of Behavior (1943), the conditioned inhibition (sIr) is

a)      the product of reinforcement

b)      the incentive motivation

c)      the learned tendency not to perform a habit

d)      the random factor in prediction of a behavior strength

 

17.    In Rescorla's experiment involving forward and backward conditioning of dogs, what was found about backward conditioning?

a)      nothing was learned with the backward conditioning procedure

b)      dogs responded in the same manner regardless of the order of CS and US

c)      dogs learned a different relationship under backward conditioning than was learned under forward conditioning

d)      the CS did not predict anything in the backward conditioning procedure

 

18.    In Clark Hull's A Behavior System (1952), extinction would be explained by:

a)      the dissipation of fatigue.

b)      the buildup of reactive inhibition.

c)      the absence of drive.

d)      a large oscillation in the nervous system's threshold for responding.

 

19.    Tolman's latent learning experiment indicated that

a)      Learning does not take place without reinforcement.

b)      Performance is a reliable indicator of the amount of learning that has taken place.

c)      Learning can occur in the absence of reinforcement.

d)      Drive reduction is a key factor in learning.

 

20.    A child is acting out and disobeying his parents at a family gathering by continuously bouncing a ball while dinner is being served. His parents then take his ball away and make him sit alone in the corner of the living room for 5 minutes; this is an example of:

a)      Negative Punishment

b)      Positive Punishment

c)      Negative Reinforcement

d)      Positive Reinforcement