Talk:Operant conditioning
Simplification
What is it that needs simplifying? Operant conditioning is difficult to understand and does not lend itself to simple explanations. — Preceding unsigned comment added by 193.216.61.5 (talk) 09:39, 4 September 2005 (UTC)
- I don't think it's as complicated as phrases like "is the modification of behavior (the actions of animals) brought about by the consequences that follow upon the occurrence of the behavior." would make it seem. Elf | Talk 04:31, 13 September 2005 (UTC)
I don't see what needs simplifying. Maybe it's because of my Psychology background that this page seems crystal clear to me. — Preceding unsigned comment added by 216.232.63.213 (talk) 18:08, 8 November 2005 (UTC)
Negative Reinforcement / Negative Punishment
I believe the article has transposed the definitions for Negative Reinforcement and Negative Punishment. Negative Reinforcement is the removal (negative) of a reinforcing stimulus (such as a child's toy) to discourage a behavior. Negative Punishment is the removal (negative) of a punishing stimulus (such as a loud noise) to encourage a behavior.
I haven't edited the article because I may be missing something. — Preceding unsigned comment added by 155.76.223.253 (talk) 19:51, 10 November 2005 (UTC)
- No, because punishment always discourages a behaviour, and reinforcement always encourages a behaviour. Taking away a toy removes a pleasant stimulus, thus discouraging the behaviour (which makes it punishment). Taking away a loud noise removes an unpleasant stimulus, thus encouraging the behaviour (which makes it reinforcement).
- Even intuitively, it doesn't make sense to punish a child (for example) by removing something unpleasant - "Since you didn't clean up, you don't have to do your homework tomorrow!" =) — Preceding unsigned comment added by 129.128.232.250 (talk) 04:59, 15 February 2006 (UTC)
- Right; I'm going to rephrase it--reread the definitions of terms, because they're not used in exactly the same way that most people use the terms individually in regular conversation; as 129.128.232.250 said:
- Reinforcement is something that causes a behavior to increase in frequency
- Punishment is something that causes a behavior to decrease in frequency
- Positive is simply adding something (note that adding something unpleasant is still adding something; people tend to think of "positive" as meaning "something nice", but in behavior science, that's not what it means)
- Negative is simply removing something (whether pleasant or unpleasant)
- For example, then, "negative punishment" is the REMOVAL of SOMETHING to cause a behavior to DECREASE.
- Elf | Talk 05:52, 15 February 2006 (UTC)
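To make the 2×2 taxonomy above concrete, here is a minimal sketch in Python (the function and its vocabulary are my own illustration, not standard terminology from any library). Reinforcement vs. punishment is read off the observed change in behavior; positive vs. negative is read off whether a stimulus was added or removed; nothing else enters into it:

```python
def classify_contingency(stimulus_change: str, behavior_trend: str) -> str:
    """Name an operant contingency from two observations: whether a
    stimulus was added or removed, and whether the target behavior
    subsequently increased or decreased in frequency."""
    if stimulus_change not in ("added", "removed"):
        raise ValueError("stimulus_change must be 'added' or 'removed'")
    if behavior_trend not in ("increase", "decrease"):
        raise ValueError("behavior_trend must be 'increase' or 'decrease'")
    # Reinforcement/punishment is defined only by the effect on behavior,
    # not by whether the stimulus seems pleasant or unpleasant to us.
    kind = "reinforcement" if behavior_trend == "increase" else "punishment"
    # Positive/negative is defined only by addition vs. removal.
    sign = "positive" if stimulus_change == "added" else "negative"
    return f"{sign} {kind}"

# Examples from this thread:
print(classify_contingency("removed", "decrease"))  # negative punishment (toy taken away)
print(classify_contingency("removed", "increase"))  # negative reinforcement (shock turned off)
```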
- ~ChocoboFreak~ To put it in the simplest way I can, it isn't "Since you didn't clean up, you don't have to do your homework tomorrow!". Think of a rat in a box. You want it to learn to press a button frequently. One way of doing it is to make sure that the rat continuously receives mild electric shocks (the unpleasant stimulus) unless it presses the button. If the rat presses the button, an unpleasant stimulus is taken away (so it is more likely to want to press the button again). To use your homework analogy, it's like letting the child do their homework the next day if they do something good.
- According to operant conditioning, if you want a child to clean their room, you could punish them for having it dirty (Positive Punishment: adding something which is not good for them: say, smacking them/Negative Punishment: taking something away that they like: say, a toy). That's what most parents think about when they think about changing a child's behaviour.
- You could, however, reward them for cleaning it when they clean it (Positive reinforcement: Giving them something to make them more likely to repeat the behaviour: say, giving them money/Negative Reinforcement: taking away something bad when they do something good: in my example of the rat, it's taking away the electric shocks).~ChocoboFreak~ — Preceding unsigned comment added by 211.29.250.241 (talk) 23:31, 24 March 2006 (UTC)
- So, wait. If I give my son a cookie every morning that he doesn't wet his bed, that's positive punishment because a) I'm adding something and b) I'm trying to decrease the bed-wetting behavior? Bloody brilliant! Or is it positive reinforcement because I'm a) adding something and b) trying to increase the likelihood of dry-nights?
- I'm not a behaviorist, but I encounter them regularly enough to have these issues come to my attention, so I did some research. It turns out that Skinner's definition of positive reinforcement contains aspects that modern behaviorists include in negative reinforcement:
- "[If] it’s in our power to create any of the situations which a person likes or to remove any situation he doesn’t like, we can control his behavior. When he behaves as we want him to behave, we simply create a situation he likes, or remove one he doesn’t like. As a result, the probability that he will behave that way again goes up, which is what we want. Technically it’s called ‘positive reinforcement’" (Walden Two, pp. 259-260, emph. added).
- Over the years these definitions were co-opted and changed. Now it seems almost every intro-to-psych textbook has a slightly different take on it. You might want to read Holth's 2005 article in The Behavior Analyst Today entitled Two Definitions of Punishment.
- Rather than debate what means what, the article should at least include a citation to which original text uses the definitions the article gives. And by this, I mean not a textbook, but a journal article by someone in the field that explains it. Jmbrowne (talk) 18:03, 28 October 2009 (UTC)
- Here's a paraphrase of what I remember B. F. Skinner's Science and Human Behavior stating: "the presentation of a negative reinforcer or the removal of a positive are the consequences which we call punishment".
- I'm a novice editor so I won't edit this article unless I see no attempt to represent what B. F. Skinner meant by punishment or an attempt to show that the word has evolved into this new definition. He tried to show that punishment was an ineffective means of conditioning. Punishment is avoided by avoiding the punishing stimulus, not by avoiding the behavior that causes punishment. Skinner was not an advocate of punishment.
- I think it's important to note that the link to punishment is a psychology definition per Wikipedia. Laymen will think of punishment the way Skinner did, and that is why it needs to be clarified. The Wikipedia punishment (psychology) definition should maybe even include the following: a process whereby a response is followed by a negative reinforcer or the removal of a positive reinforcer, which results in a decrease in the probability of the response. ---- Rhetoricmonkey (talk) 17:13, 20 September 2010 (UTC)
"Negative Reenforcement is the removal (negative) of a reenforcing stimulus (such as a child's toy) to discourage a behavior."
- I am sorry, but the last statement is not correct. Negative reinforcement is the removal of an undesirable stimulus following some response. The antecedent must include the existence of the undesirable stimulus. --68.14.27.183 (talk) 02:28, 6 March 2011 (UTC)
"Negative Punishment is the removal (negative) of a punishing stimulus (such as a loud noise) to encourage a behavior."
- I am sorry, but the previous statement is also incorrect. Negative punishment is the removal of an existing desirable stimulus as a consequence to a behavior. I am not really sure why people who do not know this material are considering editing the article. --68.14.27.183 (talk) 02:28, 6 March 2011 (UTC)
As an attorney, my interest is in discipline (help) rather than punishment (harm). I consider the best example of negative punishment to be the suspension of social and economic privileges. This is different from positive punishment, which causes harm and encourages retaliation. In discipline, the suspension of privileges can be restored as milestones are met (negative reinforcement). Both styles of punishment discourage related behavior and both styles of reinforcement encourage and direct desired behavior. --Eugene Patrick Devany-- blogging on Quora — Preceding unsigned comment added by EugenePatrickDevany (talk • contribs) 22:09, 25 July 2022 (UTC)
Suggestions for additions
- Mention the primary and antithetical approach to psychology: Cognition. Cognition and Behaviorism are mutually incompatible. Although the operant conditioning approach works well for some contexts (e.g. animal training), the cognitive approach accomplishes the same, but using a different mechanism.
- Might be worth mentioning the failed application of Operant conditioning to human language learning, and Chomsky's critique (it's already in the behaviorism article).
- Add new section on Animal Training. There is already an entry animal training, but note here the technical issues: note examples of 'positive reinforcement' and 'positive punishment' training techniques. Also note that although 'modern' animal trainers consider themselves to use OC, they often rely on techniques that are not strictly operant conditioning in the original Skinnerian sense, e.g. bridging.
Santaduck 03:13, 20 January 2006 (UTC)
Cat or rat?
At first it said the person worked w/ cats but then it said rats!! Which one is it? — Preceding unsigned comment added by Lilsaalex (talk • contribs) 15:41, 8 February 2006 (UTC)
- ~ChocoboFreak~ Skinner worked with rats, Thorndike worked with cats. It appears to have been fixed. Somebody probably just got the two people mixed up. — Preceding unsigned comment added by 211.29.250.241 (talk) 23:35, 24 March 2006 (UTC)
I'm curious as to what you guys think was the most effective. And if anyone thinks that which animal you use really matters. As humans, we can say we are the dominant species and it trickles down, but when it gets to the lower brain capacity of different animals, do you think it played a big role? Justin.edwards (talk) 02:12, 11 December 2017 (UTC)
Consequences
The consequences link doesn't really make sense. 128.213.28.129 20:43, 15 February 2006 (UTC)
- What? You don't think a parlour game is a crucial piece of operant conditioning? (Removed link.) Elf | Talk 23:38, 15 February 2006 (UTC)
Not clear on prey drive activity not being a reward
New section includes this paragraph:
- "In dog training, the use of the prey drive, particularly in training working dogs, detection dogs, etc., the stimulation of these fixed action patterns, relative to the dog's predatory instincts, are the key to producing very difficult yet consistent behaviors, and in most cases, do not involve operant, classical, or any other kind of conditioning."
So I don't understand what the point is--that allowing the dog to indulge prey drive when they do something correct is NOT a positive reinforcement? It seems to me like it is. Dog does the weave poles really fast, they get the tug toy. Dog doesn't go as fast, dog doesn't get to play tug. How is that not a positive reinforcer? Elf | Talk 00:55, 24 February 2006 (UTC)
- ~ChocoboFreak: It seems like it is a positive reinforcer to me as well. "This is because the prey drive, once started, follows an inevitable sequence: the search, the eye-stalk, the chase, the grab-bite, the kill-bite". According to this, it seems close to being classical conditioning (in a way). — Preceding unsigned comment added by 211.29.250.241 (talk) 23:42, 24 March 2006 (UTC)
The section on prey drive is inconsistent with the rest of the article. Not everyone agrees that tracking or working dogs have to be rewarded every time; this is more the author's bias than fact, especially without seeing any citations. It is stated that prey drive is an example of an exception to operant conditioning. This is conjecture, as again no sources are cited. Giving the toy or throwing the ball is an addition of something the animal wants - therefore it is positive reinforcement. Even though this is not a food reward, it is a conditioned reinforcer. If the animal does something correctly, it is given this reinforcement. We really don't care why the animal wants the reward. The fact that it works for the reward makes it operant conditioning. — Preceding unsigned comment added by 148.168.40.4 (talk) 19:06, 7 July 2006 (UTC)
- 08/28/2006 I believe that the author of the "prey drive" section may be misunderstanding an aspect of the limitations placed on the effectiveness of a reinforcer and is seeing it as a refutation of operant procedures. Within operant conditioning, there are indeed a number of factors that can reduce how effective a reinforcer can be. The factor the author seems to reference is Satiation. Obviously, if the dog's reward is a big meal, this will drastically reduce the effectiveness of reinforcement using treats because their hunger is already satiated. That is why trainers will use a variety of different reinforcers, such as treats, toys, and praise, in order to train their animal. But this does not, as the author claims, constitute a "drawback" of operant conditioning. It's just how reinforcement works. It would not be favorable for evolution to produce a species that can always be reinforced by the same thing to no end. If food were ALWAYS a reinforcer, we'd spend our whole lives at the dinner table and never be able to stop. That is why we have various biological mechanisms that regulate the effectiveness of a reinforcer depending on our bodies' needs. Behaviorists call these "Establishing Operations" and the article would probably benefit from their mention. --Lunar Spectrum — Preceding unsigned comment added by Lunar Spectrum (talk • contribs) 03:03, 30 August 2006 (UTC)
OK, going by a suggestion on the new contributor's question page, I'm going to lay out what I think should be done with this section. The whole "drawbacks and limitations" section needs to be redone. Obviously, Behavior Analysis tends to draw a lot of ire and so the popular insistence for such a section, no matter how badly done, is very strong. However, the opening paragraph on the "drawbacks" section illustrates this problem nicely. A Nobel laureate is cited as stating that operant conditioning doesn't take into account "fixed" reflexes, yet in the very same paragraph we have an explanation (though incomplete) about how operant conditioning isn't supposed to deal with reflexes to begin with because the form of a reflex is, as mentioned, biologically fixed in form, whereas operant behavior is defined as behavior whose form is modifiable by consequences. This demonstrates something that BF Skinner himself noted, that a person's criticism of Behavior Analysis is inversely proportional to how much they actually understand it (a phenomenon that also holds true for other scientific models, like Evolution by Natural Selection). I intend to keep that criticism of the Nobel laureate in the article, but expand upon the paragraph to explain Skinner's rationale for not including reflexes as a form of operant behavior.
Also, the entire "prey drive" portion needs to be removed. In its place would be a listing of factors that alter the effectiveness of consequences, factors such as what I previously mentioned about "satiation." It could look like this:
- Satiation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of stimulation has been satisfied. Inversely, the effectiveness of a consequence will increase as the individual becomes deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior.
- Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness upon the response is reduced. But if a consequence follows the response reliably, its effectiveness is increased. If someone has a habit of getting to work late, but is only occasionally reprimanded for their lateness, the reprimand will not be a very effective punishment.
- Immediacy: After a response, how immediately a consequence is then felt determines the effectiveness of the consequence. If someone's license plate is caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them over, then their speeding behavior is more likely to be affected.
- Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even to buy multiple tickets). But if a lottery jackpot is small, the same person might not feel it worth the effort to drive out and find a place to buy a ticket. In this example, it's also useful to note that "effort" is a punishing consequence. How these opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed or not.
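To illustrate how these four factors might combine, here is a toy numerical sketch (Python). The multiplicative form and the parameter names are my own assumption for illustration, not an established behavioral formula, though the delay term follows the common hyperbolic-discounting form V = A/(1 + kD):

```python
def effective_value(size: float, p_contingent: float, delay: float,
                    satiation: float, k: float = 1.0) -> float:
    """Illustrative model: scale a consequence's nominal size by
    contingency (probability the consequence follows the response),
    immediacy (hyperbolic delay discounting), and satiation
    (0 = fully deprived, 1 = fully satiated)."""
    immediacy = 1.0 / (1.0 + k * delay)   # delayed consequences count for less
    deprivation = 1.0 - satiation         # satiated reinforcers lose value
    return size * p_contingent * immediacy * deprivation

# A large but delayed, unreliable consequence can be weaker than a small,
# immediate, reliable one (the mailed speeding ticket vs. the traffic stop):
print(effective_value(size=10.0, p_contingent=0.2, delay=7.0, satiation=0.0))  # 0.25
print(effective_value(size=1.0, p_contingent=1.0, delay=0.0, satiation=0.0))   # 1.0
```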
I will wait approximately a week (maybe more) for further feedback about my intended alterations. Afterwards, I will see how much of what I have included above I will implement. Lunar Spectrum 05:17, 30 August 2006 (UTC)
Nearly a month has passed and there is no comment about my suggestion. I think I will simply add what I have outlined above in a new section and deal with the prey-drive section some other time. --Lunar Spectrum | Talk 02:05, 26 September 2006 (UTC)
I am not sure if I should have done it or not; perhaps I ought to wait and think and reflect before I edit, but the section that mentioned prey drive was SO far outside of the article that I rewrote it so that it has something to do with the discussion of FAPs versus OC. Whoever wrote the one that I edited out doesn't understand OC or FAPs but would surely like to convince the rest of the world that using prey drive is a valid method of training. At best, it is sloppy terminology that doesn't really belong in any training program that is developed using operant conditioning as its model. So, if I went overboard in my edits, I do apologize; however, the first bit was really, really bad. I am conducting a workshop this weekend on operant conditioning, and I will go back and add references after the workshop is done; sorry but I am swamped at the moment. Suenestnature (talk) 05:24, 2 January 2009 (UTC)
B. F. Skinner
I was reading the section in the article on Thorndike and his theories, and noticed that there was a passing reference to Skinner's research on reinforcement, no doubt to do with the "Skinner Box" experiment. Considering he was one of the greatest researchers in this area of psychology, could a section be added to explain the principles and methodology of the experiment? 58.169.141.5 23:46, 28 April 2006 (UTC) Nick
Negative reinforcement and punishment
For what it's worth, note in passing that Karen Pryor: Don't Shoot the Dog! defines negative reinforcement and punishment differently. To Pryor, the main difference is timing. A negative reinforcement is something disagreeable that the subject can immediately stop by changing his behavior. A punishment is something that happens later that the subject cannot immediately stop by changing his behavior. If Auntie frowns when I put my feet on the coffee table, and stops frowning when I take them off, that is what Pryor calls a negative reinforcement. If I get a bad grade on my report card that reflects all the work I haven't done in class this year, that is what Pryor calls a punishment. Pryor notes that even though punishment is everyone's favorite method of untraining unwanted behavior, it rarely works because the subject usually has difficulty connecting the punishment with the behavior; often, the subject learns to evade punishment instead.
The behaviorist psychologist H. J. Eysenck talks in similar terms in his book Psychology Is About People, Chapter 3. He insists on talking about positive and negative reinforcement instead of reward and punishment, despite the clumsiness of his preferred terms, because with rewards and punishments the timing may make it difficult for the subject to connect the result with the behavior. — Preceding unsigned comment added by 4.232.102.216 (talk) 20:28, 7 May 2006 (UTC)
For what it's worth, it's all too nitpicky. If you want to be pure, non-implicative animal behaviorists, P+, P-, R+, R- is this simple.
- P+: addition of a stimulus to decrease behavior frequency
- P-: removal of a stimulus to decrease behavior frequency
- R+: addition of a stimulus to increase behavior frequency or intensity
- R-: removal of a stimulus to increase behavior frequency or intensity
If a child screams, a parent picks them up, and the child stops screaming, like it or not that is P+. If you expanded the time line and looked at recurring behavior, you might see increases in intensity and frequency and then you understandably label it R+; however, purely analytically, you cannot imply pain and reward into P and R just because we think rats like cheese or dislike tail shocks. We cannot know with 100% certainty the intentions of an animal and their perceptions. We can only observe what causes behavior to go up and down. Dogs and cats sometimes love to be pet; other times it's very punishing to them. R- is a big annoyance for me because people always use examples of physically aversive loud noises, ear pinches, etc., and while it's often the case, we are already analyzing the reinforcing agent due to our own conditioned emotional responses. Let's look at a receptionist at a doctor's office. She puts out candy and people smile more in the office, so she puts candy out every day after that. Now was she reinforced by the increase of smiles (R+) or was she reinforced by the removal of frowns (R-)? It completely depends on the individual's temperament and you would have to ask her: hey, what do you like more, no frowns or smiles? Animals can't speak to us on those terms so we cannot assume what the likely reinforcer is. All punishments and reinforcements have this duality. Did the rat get reinforced by cheese because it likes cheese, or did the rat get reinforced by the loss of hunger? Does a child in timeout curb undesirable behavior because they lost the ability to play with friends? Or do they curb that behavior because they don't like the time-out room or stool? Typically we argue it's the loss of the ability to play that makes a timeout P-; however, the emotional response associated with the time-out room or stool may actually cause the child to respond more strongly to the instruments of the time-out than the loss of opportunity, making it P+.
There are many times with animals that non-physically aversive stimuli are punishing, and non-physically rewarding stimuli are reinforcing, because you cannot dismiss the effect of Pavlov in understanding an animal's conditioned emotional responses to stimuli. Some kids like time-outs, some people enjoy cutting themselves, so cluttering up operant conditioning with words like "rewarding" and "aversive" is anecdotal and non-scientific, as well as making it more confusing to Billy and Susie. Yes, R- is often easily viewed as an escape, but that is not its definition and can cause confusion in the less plastic mind. PB- 11/7/10 11:11PST —Preceding unsigned comment added by 98.247.244.101 (talk) 19:18, 7 November 2010 (UTC)
Extinction, other suggestions
I'm not too sure what goes ineffective when extinction occurs. I assume it's the reward (the pellet)... but then it seems like the behavior became extinct. Regardless, I'm confused and this paragraph ought to be clarified.
Extinction is a related term that occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
I would also explain in the intro that Operant Conditioning is not absolute - it doesn't ensure that the subject will always perform a task (as, I gather, using the prey drive does). That little factoid came out of the blue in that section. — Preceding unsigned comment added by 69.109.181.222 (talk) 09:11, 27 July 2006 (UTC)
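The confusion above is easier to resolve with a toy simulation: what becomes ineffective is the response (it no longer produces the pellet), and what then declines is the response's strength. Below is a minimal sketch (Python) using a simple linear-operator learning rule, an assumption chosen for brevity rather than a model the article commits to:

```python
import random

def simulate(trials_reinforced: int, trials_extinction: int,
             alpha: float = 0.2, seed: int = 0) -> list[float]:
    """Linear-operator rule: the probability of lever pressing moves
    toward 1 after a reinforced press and decays toward 0 once
    pressing stops producing pellets (extinction)."""
    random.seed(seed)
    p = 0.05  # initial probability of pressing the lever
    history = []
    for t in range(trials_reinforced + trials_extinction):
        pressed = random.random() < p
        if pressed:
            reinforced = t < trials_reinforced
            target = 1.0 if reinforced else 0.0
            p += alpha * (target - p)  # strengthen, or extinguish
        history.append(p)
    return history

h = simulate(trials_reinforced=200, trials_extinction=200)
print(f"after acquisition: {h[199]:.2f}, after extinction: {h[-1]:.2f}")
```

Note that extinction here requires the response to keep occurring unreinforced for a while, which matches the observation that the rat only gradually ceases pressing.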
- Whoever wrote the initial bit about prey drive doesn't really understand operant theory at all. Prey drive does not ensure that the learner will always perform a task, even though those who espouse the theory would have you believe this; I have seen some pretty spectacular failures on the part of dogs trained using prey drive. Prey drive is merely a fixed action pattern being used to Premack any other behaviour; that is to say that it is a more highly desired behaviour (because it is more deeply driven perhaps!) that the animal is willing to work to have the opportunity to do.
- Properly applied, operant conditioning can produce very reliable results; that is to say that within the tolerance of a particular biological system, the animal will do whatever works! Suenestnature (talk) 02:52, 3 January 2009 (UTC)
Potential section about biological basis of operant conditioning
In the section "Factors that alter the effectiveness of consequences" I included the mention of how certain factors are the result of biology. For example, I mentioned that the principles of Immediacy and Contingency are the result of the statistical probability of dopamine to modify the appropriate synapses. However, the necessity of an entire section devoted to the biological basis of operant procedures is becoming clear. I used the dopamine reference only to support the section about "Factors that alter the effectiveness of consequences," but already more biological references have been added to that section. They are good references and should be kept, but they should be moved to their own section because they do not contribute anything to the subject of the section they are currently in.
I think that the biological section should be the second section, placed right after the "Reinforcement, punishment, and extinction" section. It would be a good way to structure the article to first have exposition on reinforcement, punishment, and extinction procedures, then have a three-part section immediately following it to explain the neurophysiological effects of reinforcing stimulation, aversive stimulation, and extinction. An alternative to this might be to simply add such a discussion to each of the existing corresponding articles on reinforcement, punishment, and extinction. --Lunar Spectrum | Talk 00:31, 29 September 2006 (UTC)
Merges
Useful info on both articles... Schedule of reinforcement should not be an article. Reinforcement probably shouldn't be - both should redirect here. —The preceding unsigned comment was added by Thuglas (talk • contribs) 05:21, 2 March 2007 (UTC).
- If we do merge both articles, the current article may be too big to read. See article size for more information.--Janarius 14:53, 2 March 2007 (UTC)
I was kinda thinking that, so I put a second link to merge Schedules of reinforcement into Reinforcement. Perhaps a little thing on extrinsic and intrinsic reinforcement and secondary/primary reinforcement could be added. thuglasT|C 17:37, 2 March 2007 (UTC)
- I was thinking the same thing, but we probably need more opinions to decide on that matter. About primary/secondary reinforcement, is it also called primary or unconditioned reinforcer or is it something else?--Janarius 16:28, 3 March 2007 (UTC)
Yeah, I think that would work. Primary means food or something; secondary means money. The differences between extrinsic/intrinsic and primary are very little, but for some reason they remain separate in my mind.
I figure if no one complains in a week or so we should go ahead and be WP:bold. I've posted the link on WP psych. I don't think anyone would disagree with this idea. — Preceding unsigned comment added by Thuglas (talk • contribs) 18:02, 3 March 2007 (UTC)
- As I mentioned at the top of the page (wasn't clear on whether there was a convention of putting more recent talk page content at the top or at the bottom) I think it would be good to merge Schedules of reinforcement into Reinforcement, but not Reinforcement into Operant conditioning. Like someone else mentioned before, Reinforcement merged into OC could be too large and Reinforcement has more than enough content to merit its own article and already stands on its own. The focus of the Operant Conditioning article should be on operant procedures, which use consequences to modify behavior. Reinforcement is only one of a group of different kinds of consequences and is in no way the "be-all end-all" of Operant conditioning and to merge them would seriously disrupt the balance of the OC article in that respect.
- As for primary and secondary reinforcers, the equivalent terms for these are unconditioned and conditioned reinforcers, not intrinsic or extrinsic. An extrinsically reinforced response would be a behavior that is reinforced with externally delivered consequences (which CAN include food or other primary reinforcers, as well as secondary reinforcers), while an intrinsically reinforced response is a behavior that is rewarding in and of itself without the need for delivering a reinforcer, which is called automatic reinforcement (i.e. the performance of the behavior itself is also the reinforcer, like with a "runner's high" or reading for pleasure). Lunar Spectrum | Talk 00:34, 22 March 2007 (UTC)
I think we were on the same page here, but to clarify: I know that secondary/primary reinforcers are not synonymous with intrinsic or extrinsic. I think extrinsic, intrinsic, secondary, and primary reinforcement would all fit into the article. (I haven't looked at it in a while; I just don't like being misunderstood.) thuglasT|C 15:13, 7 August 2007 (UTC)
Regarding article merging
Looking at the articles in question, I think that the Reinforcement article should not be merged into Operant Conditioning. The Reinforcement article has a good level of detail that makes itself stand as an article on its own. Adding to that the proposal to merge Schedules of Reinforcement into Reinforcement, and the amount of redundant content would bog down the entire article. I think that elements of the Schedule of Reinforcement article can be successfully merged into Reinforcement. But Operant Conditioning already does enough of an overview of reinforcement not to warrant Reinforcement being merged into it. That would detract from the broader focus of the Operant Conditioning article, which should be more about the modification of behavior (operant procedures) rather than the details about the tool used to modify behavior (reinforcement). Lunar Spectrum | Talk 01:11, 14 March 2007 (UTC)
merging with Reinforcement
Having a lot of material on reinforcement in the operant conditioning article makes it too large. Reinforcement deserves a separate article from operant conditioning. Rather than a merge from Reinforcement, I suggest that appropriate sections be merged into Reinforcement. The two articles - Schedule of reinforcement and Reinforcement - can be merged together. Kpmiyapuram 12:17, 10 April 2007 (UTC)
- I completely agree and I think the older comments tend to agree along those lines as well. Nobody has really commented on it for a long while though, so I think I'm going to remove the merge tags from the operant conditioning article. I'll leave the merge tags referring to reinforcement and schedules articles though in case anyone wants to go ahead with it. Lunar Spectrum | Talk 02:49, 11 April 2007 (UTC)
Biological correlates of operant conditioning
This section currently appears to have material that fits "biological correlates of classical conditioning" and not those of operant conditioning. Kpmiyapuram 13:51, 11 April 2007 (UTC)
- If you mean the first paragraph of that section, that's just a case of a contributor using "conditioned stimulus" where "conditioned reinforcer" might be more accurate. I've had brief discussions on the matter with that contributor, who comes from the position that the distinctions between classical and operant conditioning aren't as clear-cut as previously thought (ex. he mentioned how unconditioned stimuli tend to also function as primary reinforcers or primary punishers). Although I come from the perspective of maintaining terminological consistency, I thought the point was a fair one so I left some of the language in the article as it was. As for the content in the second paragraph of that section, that's all about neuromodulators like dopamine and acetylcholine that correlate with the modification of the synapse (behavior) upon the delivery of a consequence, very much the domain of operant conditioning. Lunar Spectrum | Talk 06:15, 12 April 2007 (UTC)
- I disagree about neuromodulators. That's bio-/neuro-psychology, not operant conditioning. It shows the current cognitive bias that is trying to move away from the study of behavior, which is what made operant conditioning such a powerful paradigm to begin with. --florkle 23:53, 23 May 2007 (UTC)
- A purely behaviorist/conditioning approach was long ago shown to be flawed as a complete explanation for experimental findings in animals, and even within the realm of operant conditioning, its biological substrate is surely an important topic for discussion. To be sure, it is worth including references from authors with a more historical perspective who were vehement supporters of a behaviorist approach, but this article should also emphasize the modern understanding of operant conditioning, which, from what I've seen, is increasingly couched in terms of neural substrates and even, indeed, cognitive approaches. digfarenough (talk) 19:22, 24 May 2007 (UTC)
Extinction
There is some material on Extinction (psychology) in a separate article, but I see that the current article on operant conditioning discusses it at more length. Perhaps the information could be reorganized or merged. Kpmiyapuram 14:18, 24 April 2007 (UTC)
- Yes, I think that a lot of the information in the extinction section would serve the article on Extinction (psychology) well. I think the reason I wrote it in the operant conditioning article was because it addresses the variable nature of operant behavior, but some reorganizing is definitely in order. Information about extinction bursts would go well in the extinction article. Information about extinction-induced variability can fit in both Extinction (psychology) and Shaping (psychology). Lunar Spectrum | Talk 03:44, 7 May 2007 (UTC)
- After some brief research, I think the section on extinction-induced variability would be better if it were changed to "Operant variability" and it could be a good place to add other information about behavioral variability across various situations, not just during extinction. Will start moving some stuff around and see how it comes out. Lunar Spectrum | Talk 04:44, 7 May 2007 (UTC)
Thorndike
[edit]I don't think it's accurate to relate Thorndike to Operant Conditioning. Skinner's operant was "discovered" by him alone. Thorndike used different terms and explanatory systems. This is very important. Lots of people examined learning in humans and animals before Skinner. None of that was "operant conditioning" because it relied on mediating structures ('expectations', 'drives', etc). The explanatory system is as important as the actual data (perhaps even more so).
Operants were also quantified in the operant chamber - Skinner's invention - which Thorndike did not use.
Moreover, it implies that Skinner's position is just another learning theory, and it is not. This is an attempt to rewrite the dead theories of Thorndike as "operant" theories that have become popular or scientifically validated. Thorndike was important in his little way. Put his theories on his own page, or change the name of the page to "instrumental learning". Operant = Skinner != Thorndike.
(-Florkle!)
—The preceding unsigned comment was added by Florkle (talk • contribs) 07:25, 16 May 2007 (UTC).
I have added a refutation of the Thorndike extension article and cited Chiesa. This whole article is problematic in its treatment of reinforcement theory, which is not very "clean" in its presentation.
Moreover, the digression into the neurochemistry of reinforcement is something that Skinner rejected from 1938 onward, when he dismissed physiological explanations as appealing to a "conceptual nervous system (CNS)".
--Florkle 06:23, 17 May 2007 (UTC)
- (Psst. Normally new comments are added to the bottom of the discussion page--makes it easier to find them.) digfarenough (talk) 13:54, 17 May 2007 (UTC)
new sections
Why are the sections "verbal behavior" and "four term contingency" at the beginning of the article? The latter seems unneeded and the former seems like it should go much later, if at all. And why do we have this paragraph arguing that Skinner's work wasn't based on Thorndike's? Is this information relevant to discussing what operant conditioning is? If anything, I think that should be moved to a separate history section. I'm also surprised reinforcement learning isn't linked in this article, but I'll toss that into the "see also" section now... digfarenough (talk) 13:53, 17 May 2007 (UTC)
- It was there arguing the reverse. --florkle 23:51, 23 May 2007 (UTC)
I also think the new additions disrupt the flow of the article. They certainly might have their place somewhere in it, but right now it seems a bit random. And it also seems that the biological section was moved from 3rd section to, apparently, the very last??? To my thinking, the biology section should be near the beginning since, despite being the most heavily disparaged area of psychology, operant conditioning is more solidly grounded in biology than anything else in the field. So I think having that biological basis close to the top is important for the credibility of the subject matter. I think an appropriate structure to the article would be 1. history 2. basics 3. biological underpinnings 4. plus various other special topics. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
- That is not Skinner's rationale, and it's not behaviorist. Skinner rejected biological justifications - see functional relationship arguments, such as Chiesa's. --florkle 23:51, 23 May 2007 (UTC)
Additionally, I think a special section on verbal behavior would have to clearly explain how an understanding of verbal operants extends from operant conditioning, which it presently does not accomplish. It can be done (I'd have to look over some of my old notes and google for some sources), but as an advanced topic it should go somewhere towards the end. Theoretical extensions of operant conditioning, like Skinner's Verbal Behavior, should not greatly detract from the focus of this particular article: namely, operant conditioning procedures, which are factual experimental findings. And it's certainly not a "theory" of operant conditioning... no more than a physicist would call the laws of kinematics a "theory" of kinematics. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
- Verbal behavior is the extension of operant theory to humans. Why should we care about pigeons and rats if it doesn't generalize? --florkle 23:51, 23 May 2007 (UTC)
And having checked on the article for Verbal Behavior, I'm now concerned about NPOV issues regarding the user who made the recent section changes in the Operant conditioning article. In the talk page for Verbal Behavior he recently states that he has "nuked all references to Chomsky's" review. Now, I may think that Chomsky's review is completely flawed. But for historical reasons, his review is appropriate subject matter for that article. It would be like having a biography on Abraham Lincoln without mentioning John Wilkes Booth. Anyway, I'm restoring the biological section to its original place in the article and moving some other stuff down to the bottom until it can be worked out. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
- Chomsky has been restored & extended. See article. --florkle 23:51, 23 May 2007 (UTC)
It's a complete myth that Skinner rejected biology's role in behavior. It's true that Skinner was opposed to giving explanatory status to unknown mediating constructs. For example, Chomsky coming along and saying "environment can't explain verbal behavior, therefore I will invent an imaginary Language Acquiring Device and claim it exists somewhere in the brain." That is the kind of hypothetical mediationism that Skinner was against, when people pull mediating constructs out of nowhere. There's a recent article explaining Skinner's regard for biology's role in behavior in The Behavior Analyst. Even more recently is a good 2007 article outlining current research about the relationship between biology and the three-term contingency [1]. The simple fact of the matter is that neurology is the hardware of organic "learning machines." To deny that stimuli and responses are transmitted along neurons and modified at the synaptic level would be ridiculous. Consider how over a hundred years ago Darwin had an entirely environmental account of evolution (natural selection). He had no biological mechanism to explain how variation occurred and how traits were passed on. He only knew that it happened, and he had strong evidence for it. Then with the discovery of DNA, Darwin's model of evolutionary change was justified because DNA behaves in exactly the way that Darwin's model predicted. Skinner's behavior analysis is much the same way. His model of learning is being justified by biological findings and biology will ultimately be what redeems behavior analysis as a "hard" science separate from psychology. Furthermore, it's very important to note that Skinner is not the be-all end-all of behavior analysis. To treat it as such is to group it with all the other dead models populating psychology texts. A living and breathing science has the ability to expand and further clarify its subject of study. Lunar Spectrum | Talk 19:44, 26 May 2007 (UTC)
"Role of cognition" article
There's been an interesting addition to the article in the form of a "further reading" section. It's an article that purports that cognition is a mediating influence on behavior under classical and operant conditioning procedures. Of course, the idea that cognition plays a role as a mediator of behavior goes against the radical behaviorist position that cognition is itself a form of behavior subject to the same laws as overt behavior, no more and no less. The authors go on to build a case (one that I don't consider convincing) using past research to support their assertion. For example, they claim that if behavior is affected by consequences, then it must be "goal-oriented" and that "expectancies" must be involved and that, therefore, this means cognition governs behavior. This is a clear example of invoking unseen causal agents. They also cite research on rat maze running whose results they interpret to mean that rats form "cognitive maps" instead of learned responses, such as the case in which a rat has learned to run a maze, then during a new trial when a path is blocked the rat uses a parallel path as an alternative, even though the rat has not learned to use that alternative parallel path. I think this does not exclude, to my satisfaction, the influence of the rat's past acquired history of navigational repertoires upon the behavior seen in the experiment. Another area the authors cite is a 1974 review by William Brewer which investigates the effects of informed consequences upon human behavior. These are cases in which neutral stimuli have acquired reinforcing or punishing functions upon a subject's behavior without any conditioning taking place. All of the Brewer (1974) examples, as far as I can see, can easily be explained by stimulus equivalence, in which new stimulus functions can emerge through membership in an equivalence relation, which is a thoroughly behavior analytic area of research. Understandably, Brewer (1974) couldn't have known about stimulus equivalence as a behavioral explanation for the results he was seeing... but I think more should be expected of the present authors. It goes on to cite Rescorla (1988) which, for all intents and purposes, seems to be based upon a complete misrepresentation of the behavioral account of contingency. He claims that behaviorists view the degree of stimulus control exerted by a CS as determined by the number of CS-US pairings (which is not true of behaviorists) and goes on to state that he has "discovered" that the true relationship is the predictive value of the CS (which is what behaviorists already consider to be true). He states that behaviorists are therefore wrong (according to his understanding of behaviorism) and that there must therefore be some kind of "goal-directed" cognition going on to account for it.
I could really go on and on... and maybe I'm making a mountain out of a mole hill, but I think this reference really doesn't belong here. I guess I could remove it without much fuss, but considering the level of misunderstanding of behavior analysis among cognitivists/constructivists I could easily see how simply removing it might elicit the reaction that I was removing fair criticism of behavior analysis. Maybe if we left the reference in the article, it could instead be a blueprint for elements of conditioning that could be further addressed in the body of the article itself? At the least it would be nice for others to review the reference themselves before having it removed. What do you think? Lunar Spectrum | Talk 04:13, 1 June 2007 (UTC)
- I think the reference should stay. Like you said, it could be a blueprint for other aspects of conditioning. I'm simply saying that because my psych professors strongly emphasize the role of cognition in conditioning and learning.--Janarius 13:46, 1 June 2007 (UTC)
- It gets a little tricky here. We can distinguish "cognition vs. operant conditioning" as well as "cognition vs. all-behavior-is-stimulus-response-learning". The cognitive map study with the three paths you mentioned was an experiment by Tolman, but he had a much simpler experiment that showed that rat behavior is not simply stimulus-response learning. One group of rats was allowed to run a maze with food reward at the end, and the number of "errors" they made (wrong turns on the way to the food) was recorded each day. Another group ran the maze but no food reward was given for a few days. When a food reward was finally introduced, the number of errors the rats made dropped right away. A stimulus-response explanation of the task would require that food reward information propagates backward through the maze, whereas the cognitive map hypothesis suggests that the rats formed a map in their head, so that as soon as the reward was available, spatial information they had already learned could be used to guide behavior. So I think we should keep in mind that even if you think cognition has nothing to do with operant conditioning, there is much evidence that cognition is involved in actual behavior, which implies that stimulus-response learning is not the sole basis for all behavior. (See also Packard and McGaugh 1996, an inactivation study that showed a clear double dissociation between the two ways of guiding behavior). Summary: I'm neither opposing nor supporting that reference; just saying that this is an article on operant conditioning, not all animal behavior. digfarenough (talk) 14:26, 1 June 2007 (UTC)
- Well, the Tolman experiment you describe has the same problems I mentioned above about the effects of previous experience upon maze-running behavior. In the example you mentioned, the experiment assumes that the food reward is the only reinforcer experienced by the rat. The navigational responses of an actual rat in that kind of situation are going to be reinforced for proficiency by naturally occurring consequences if it is allowed to wander according to other motivations, like relieving stimulus deprivation. By comparison, the cognitive mapping model starts to look a lot more like a gray mystery box with the caption, "Something we don't understand happens here, then we see this result in performance. Let's give this mystery box a name and say that it explains the behavior." Also, regarding the Packard & McGaugh reference, the results look interesting, but I have to wonder if it has more to do with the inactivation of learning and cue-sensitivity instead of "place learning" vs. "response learning." From what I understand, current thinking on navigational behavior has been moving away from the cognitive maps model and is moving back in the direction of stimulus discrimination. I'm aware of a relatively recent review of research that also leans in that direction. Of course, that doesn't change the fact that cognitive explanations of behavior are the prevailing view in psychology, and should not be ignored as being part of the debate. It just doesn't sit right with me for it to sit there as if it didn't have the flaws I mentioned before. Then again, we're just talking about a "further reading" section. I originally thought about adding citations for further reading that would balance out the section, but refutation by behaviorists specifically addressing the issues I've outlined has been difficult to find. Anything else I could put there would only serve to stray from the topic of operant conditioning, so I'm not inclined to do that. I'm resigned to having the article reference stay since at least I've had a chance to discuss my concerns about it here in the talk page. Lunar Spectrum | Talk 07:39, 2 June 2007 (UTC)
- I disagree about the Tolman experiment, but it might take a while to explain. My arguments would be heavily based on those given in the first chapter of Eichenbaum and Cohen's book. Briefly, the non-food-reinforced rats didn't decrease their escape latency until food was introduced, so knowledge of food was the only changed factor. "Cognitive map" isn't used a lot these days (that I've seen), as it is loaded by the work of O'Keefe and Nadel's famed book from the '70s, but the idea of cognition being important in navigation and behavior is increasingly common from what I've seen. (Give me a few months and maybe the third chapter in my thesis can convince you!:)). But I don't think cognition is a black box. I could give you a list of references that model this form of navigation as a graph search, either in hippocampus or (more correctly in my opinion) in neocortex. If anything, I think your approach to stimulus-response learning might be vague. If you take a particular model of operant conditioning, such as reinforcement learning, it gives a specific framework in which these ideas can be considered, and it becomes clear quickly that the Tolman results can't be explained (in that framework) as stimulus-response learning. Of course, Tolman was a long time ago and there are many other results since then. Anyhow, I must admit I'm more interested in discussing these ideas themselves than in deciding whether or not any given reference should go in the article. :) digfarenough (talk) 15:13, 2 June 2007 (UTC)
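To make the reinforcement-learning point above concrete, here is a bare tabular TD(0) sketch (Python; my own minimal illustration, not code from any cited study). In a model-free stimulus-response learner, values are only updated by reward-driven errors, so nothing usable is learned while the maze is unrewarded; that is exactly why Tolman's latent-learning result is hard to explain in that framework:

```python
# Tabular TD(0): V(s) += alpha * (r + gamma * V(s') - V(s))
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

V = {s: 0.0 for s in ("start", "middle", "goal")}

# Phase 1: the rat explores the maze with NO food reward (r = 0 everywhere).
for _ in range(100):
    td_update(V, "start", 0.0, "middle")
    td_update(V, "middle", 0.0, "goal")

print(V)  # every value is still 0.0: a pure S-R learner has stored nothing
          # it could exploit the moment food appears, yet Tolman's rats
          # improved immediately -- the signature of latent learning
```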
"Defensive" POV
Hi - I kind of think parts of the article sound very defensive and somebody is getting rather uptight about the Skinner/Thorndike debate. I think credit is less important than making sure the point of the article is clear and explains what the current understanding of operant conditioning IS rather than making the article all messy about who made up what and so on. If I want to know who came up with what I don't think I'd come to Wikipedia to get that info. — Preceding unsigned comment added by 203.173.169.91 (talk) 21:17, 25 June 2007 (UTC)
- Yeah, I agree that the Thorndike section still sounds defensive. I think I will try to drop the parts where it sounds like the author is trying to refute a relationship between Skinner and Thorndike. Besides that, is there something else in the article you're saying sounds defensive? Lunar Spectrum | Talk 02:22, 29 June 2007 (UTC)
Mutual Operant Conditioning
I have never heard of this term until now and its only existence seems to be in Wiki-world forms. I would not be in favor of a link to it on the Operant Conditioning page. (Mcole13 (talk) 17:45, 14 July 2008 (UTC))
- I agree with this. Even if it is valid, I expect it is yet to be studied appropriately anyway. —Preceding unsigned comment added by 134.151.0.40 (talk) 15:20, 9 December 2009 (UTC)
- Agreed. 208.96.153.158 (talk) 14:26, 10 November 2010 (UTC)
Real World Applications
I am interested in cases in which this has been used on humans for psychological treatment. Despite the effectiveness on pigeons and other animals, I find it difficult to imagine with accuracy how operant conditioning could be used for aversion therapy. Links would be ideal. 96.49.141.252 (talk) 06:06, 3 July 2009 (UTC)
Introduction is too technical
The introduction is too technical and focuses mostly on describing what Operant conditioning is *not*, i.e. it is not classical conditioning, instead of on what it *is*. Could someone with the knowledge in the field write a better introduction and push the details clarifying the distinction with classical conditioning to the body of the article? --NavarroJ (talk) 18:09, 3 June 2010 (UTC)
- In addition - it mischaracterizes classical conditioning using a very old and outdated understanding of classical conditioning. See Rescorla 1988, "Pavlovian conditioning: It's not what you think it is." — Preceding unsigned comment added by 24.97.224.6 (talk) 12:45, 24 March 2011 (UTC)
- I definitely agree. It is too list-like and too technical. It's a bit confusing... and has wrong information. Is there anyone with knowledge in this field who can help fix this? —Preceding unsigned comment added by 184.99.105.225 (talk) 00:36, 7 April 2011 (UTC)
"Rewarding", "unpleasant"
A colleague made a valuable correction, but followed it quickly with a mistaken edit that I've reverted.
The language they replaced -- "(commonly seen as pleasant)" and "(commonly seen as unpleasant)" -- is deficient, but the reverted replacement was much worse, for breaking the desirable parallelism, for equating human attitudes to conditioning phenomena, and for ignoring the low correlation of unpleasantness to negative reinforcement (which parallels the low correlation of pleasantness to positive reinforcement). The problems this presents include:
- As Eysenck emphasizes in distinguishing reward from positive reinforcement and punishment from negative reinforcement, timing is a crucial element of the causal relationship.
- Virtually all species with nervous systems exhibit operant conditioning, including planaria and no doubt others so simple that attributing pleasure to them would logically require doing the same for extremely simple robots designed solely to make this point.
The language I've restored can be improved upon, starting by taking this into account:
- The only value of referring here to pleasantness and unpleasantness is to present the reader with the concept that some of the stimuli that will act as positive or negative reinforcers of their own behavior (or of behavior in others, whom they have heard label specific stimuli as pleasant or unpleasant) will correspond to stimuli that have received those labels; the labels are thus about human pleasure rather than the whole realm of operant conditioning.
(In an article on the psychology of conditioning and learning, the whole notion of the relevance of any (un-)pleasantness other than that in humans corrupts the unassailable status of experimental psychology as science, and drags in irrelevant arguments like whether there is such a thing as "what it is like to be a bat".) And the revision I reverted was a step away from, rather than toward, what we need.
--Jerzy•t 05:51, 30 July 2010 (UTC)
I've made a minor edit to try to capture at least some of these concerns about the previous wording. Please edit, rather than revert to the previous version, if unhappy: at the very least the parentheses would need to be removed, because they significantly altered the intended meaning of the sentences. Personally, I think it is important that the layperson be able to understand these basic concepts of reward and punishment, even at the expense of some philosophical precision. Excuse me if I'm not in line with the Wikipedia vision in this view - I make relatively few contributions. But if I notice a section of an article is essentially unreadable I usually try to make some minimum corrections to fix that. Interlope (talk) 00:32, 2 August 2010 (UTC)
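One way to see the parallelism being defended in this section is to state the taxonomy mechanically: the four contingencies are defined only by whether a stimulus is added or removed and whether the behavior becomes more or less frequent, with no appeal to pleasantness at all. A minimal sketch, offered only as an illustration of the standard definitions:

```python
# The four operant contingencies, keyed by (stimulus change, effect on
# behavior frequency). Note that "pleasant" and "unpleasant" appear nowhere.
contingencies = {
    ("added",   "increases"): "positive reinforcement",
    ("removed", "increases"): "negative reinforcement",
    ("added",   "decreases"): "positive punishment",
    ("removed", "decreases"): "negative punishment",
}

print(contingencies[("removed", "increases")])  # negative reinforcement
print(contingencies[("added", "decreases")])    # positive punishment
```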
Citation for Immediacy
The section on immediacy doesn't have a citation, but here's a potential one. I don't know how to put it in myself but here's the link: http://www.sciencedirect.com/science/article/pii/S037663570400169X JDWLB (talk) 13:06, 2 June 2011 (UTC)
Who discovered operant conditioning?
The article currently states that "instrumental learning was first extensively studied by Jerzy Konorski and next by Edward L. Thorndike." But Thorndike published his work on the law of effect in 1905, and Konorski wasn't even born until 1903. Something is amiss... — Preceding unsigned comment added by 24.42.228.249 (talk) 18:13, 2 March 2014 (UTC)
Typing error
At the top of the page, a figure of a tree structure of conditioning is presented. I identified a typing error: "Appetative" should be "Appetitive". See Webster's Collegiate Dictionary. Aartsj (talk) 07:55, 27 July 2014 (UTC)
Animal applications
This article seems all very theoretical and mostly focused on human behavior. What I was trying to look up was a mention of "operant conditioning" as one of the things zoo interns/volunteers are trained in. Google eventually told me that the conditioning is applied to the animals instead of the humans, to teach them to cooperate with routine health care, transfers between enclosures, and the like. Perhaps a new section in the article is called for? 64.93.124.227 (talk) 03:18, 12 March 2015 (UTC)
- That is a very good idea. I can think of several examples: Hippos and elephants are given operant showers, pigs and cattle with electronic feeders, cattle with back scratching brushes.__DrChrissy (talk) 14:49, 12 March 2015 (UTC)
I like the idea as well. It's easy to see that animals all over the world may interact with human training techniques and methods. Humans are a more cognitively advanced 'animal', with analytical abilities they bring to bear on problems, but operant conditioning applies to both animals and human beings. So I think having an animal training section would definitely be a plus! — Preceding unsigned comment added by Huskyqqq (talk • contribs) 06:34, 29 November 2017 (UTC)
Yeah! I was wondering this myself and am curious how it could be developed. The dynamic between an animal and a human is so interesting, especially in a zoo setting. I'm sure a lot of things happen with animals at zoos that people don't even realize are part of operant conditioning. Simply being able to feed an animal can be valuable for figuring out more about how this theory pertains to other animals. Justin.edwards (talk) 02:18, 11 December 2017 (UTC)
Operant Conditioning Chamber "Skinner Box"
I added more information about the operant conditioning chamber to the part on Skinner. I included some of the initial tasks, such as task 1, which is isolating an individual piece of behavior to see how it could be changed. I also mentioned how the variable ratio schedule plays into human gambling problems. I found this important to add, and also very interesting. I am curious to learn more about the effects of the variable ratio schedule in terms of human gambling. Klaska 24 (talk) 05:54, 31 October 2017 (UTC) Kelly (klaska_24)
I find this to be very interesting as well and think you have a great point. I wonder if the ratio used is completely consistent with slot machines. Something even deeper that I think would be fun to research is whether humans would still want to gamble on slot machines if the odds didn't follow a variable ratio schedule, and whether the conditioning of even one win could get them to come back to the casino. Justin.edwards (talk) 03:08, 11 December 2017 (UTC)
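To make the schedule discussion above concrete, here is a minimal sketch contrasting a fixed-ratio schedule with a random-ratio approximation of a variable-ratio schedule, which is the comparison usually drawn to slot machines. The function names and the ratio of 5 are illustrative assumptions, not taken from the article or from any cited study.

```python
import random

def simulate(schedule, n_presses=1000):
    """Count the reinforcers earned over n_presses lever presses."""
    return sum(schedule() for _ in range(n_presses))

# Variable ratio, approximated here as random ratio: each press pays off
# with probability 1/5, so the payoff is unpredictable -- like a slot machine.
def vr5():
    return random.random() < 1 / 5

# Fixed ratio: exactly every 5th press pays off, fully predictable.
press_count = [0]
def fr5():
    press_count[0] += 1
    if press_count[0] == 5:
        press_count[0] = 0
        return True
    return False

print("VR-5 reinforcers:", simulate(vr5))  # around 200, varying run to run
print("FR-5 reinforcers:", simulate(fr5))  # exactly 200 every run
```

Both schedules deliver the same average payoff; the standard finding is that the unpredictable one maintains higher and more extinction-resistant response rates, which is the connection to gambling drawn above.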
Traumatic Bonding
When reinforcement of reward or punishment is delivered more intermittently, the pace at which people change their emotional bonds can be dramatically different, so traumatic bonding is something to really look out for. I feel like this is a sensitive topic, but I want to come up with more examples based on it, since there isn't much there yet. Traumatic bonding could be applied to a lot of cases. — Preceding unsigned comment added by Huskyqqq (talk • contribs) 04:48, 5 December 2017 (UTC)
- I agree. This seems as if it can be detrimental to those affected. I think it can easily be linked to the chains that develop when one is stuck in a situation they cannot break free from. It becomes all they know and can do serious damage. Justin.edwards (talk) 02:07, 11 December 2017 (UTC)
Questions about the Law of Effect
I added some things about the law of effect as it pertains to operant conditioning. I thought it could use an example without having to leave the page. I think it also ties into other sections, so it fits well and keeps the wiki page smooth. Justin.edwards (talk) 03:04, 11 December 2017 (UTC)
Justin, good idea. I am glad that you added an example so that users wouldn't have to be redirected to another page. Klaska 24 (talk) 15:24, 12 December 2017 (UTC) klaska_24
Praise
I added a paragraph to the 'Praise' section discussing studies done on the efficacy of Cognitive-Behavioral therapy and Operant-Behavioral therapy. It was touched on earlier in the section but I thought it would be good to elaborate on it. Klaska 24 (talk) 15:21, 12 December 2017 (UTC) Klaska_24
Merge With Contingency Management article?
According to the introduction, operant conditioning is the same as contingency management, but there is a separate article under that heading and presumably duplication of material. I suggest the two either be combined or the differences be clarified for the reader. I know nothing about the material so can't work on it, but a page I am editing refers to both, which brought me here to understand the difference. It's just confused me more, unfortunately. Hopefully editors here will be able to sort it out. Dakinijones (talk) 23:09, 15 January 2020 (UTC)
- @Dakinijones: Operant conditioning is not "the same as contingency management"; I removed the relevant text from the lead. Thanks for pointing that out. Contingency management is an intentional application. Biogeographist (talk) 03:57, 16 January 2020 (UTC)
Sentence doesn't make sense and is not grammatical
[edit]"In operant conditioning, stimuli present when a behavior that is rewarded or punished controls that behavior."
I cannot parse this sentence. What is the subject of 'controls'? Stimuli present? Then it should be 'control' not 'controls'. Why is there a 'that' after 'behavior'? I can't even tell what the sentence is trying to say. — Preceding unsigned comment added by 86.139.192.79 (talk) 07:20, 27 August 2021 (UTC)
- The sentence is grammatical; it's just hard to understand. The subject of that sentence is "stimuli present when a behavior that is rewarded or punished". I simplified that sentence for readability; I also undid some vandalism in the vicinity of that sentence that had gone unnoticed for weeks.--Megaman en m (talk) 09:55, 27 August 2021 (UTC)
You say the sentence is grammatical but it is not.
'Stimuli' is a plural noun, and therefore requires the 3rd person plural 'control' as in 'they control', as opposed to the 3rd person singular 'controls' as in 'he controls'.
Furthermore, the 'that' should not be there. 'That' sets up 'is rewarded or punished' as a subordinate clause, with the result that 'behavior' should then be the subject of 'controls', which it is not.
The correct grammatical sentence would be:
"In operant conditioning, stimuli present when a behavior is rewarded or punished control that behavior." — Preceding unsigned comment added by 86.139.192.79 (talk) 14:06, 27 August 2021 (UTC)
Mobile experience
The tree at the top of the page does not fit appropriately on mobile. This includes browser and app. Shaunlilan (talk) 05:43, 14 October 2021 (UTC)
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 14 January 2019 and 8 May 2019. Further details are available on the course page. Student editor(s): JasmineHutson21.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 05:44, 17 January 2022 (UTC)
Why no criticism section?
Why no mention of ethical alternatives? Of connections to Nazi practices? Of use by kiddy groomers? Of coercion and battery? Of dehumanisation? Of autistic monolithic opposition to operant conditioning and ABA as unethical conversion 'therapy' delivered by an outright quack cult?
Oh, the cult runs this page?
Sorry, I will go now. 2407:7000:9C65:5E00:EC95:E3EA:83EF:1F8F (talk) 07:34, 26 April 2024 (UTC)