
Wikipedia:Reference desk/Archives/Science/2011 November 6

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 6


speed reading


If I learn speed reading in my mother language, does it work for any other language, say, English? My language is far from similar to English, but I can read very well in English. Speed reading is different, though, so is it possible?--81.31.188.59 (talk) 10:32, 6 November 2011 (UTC)[reply]

If it's simply a technique, then it is probably transferable: speed reading is really just a mixture of techniques for drawing the key ideas from a text while skimming over unnecessary content. Since it is certainly possible to speed read in English, then provided the languages aren't hugely different (say French, Italian, or Spanish, for example), I don't see why it wouldn't transfer if you had strong reading and comprehension skills in both languages. However, if there were big differences between the languages, say an Asian language that doesn't use the Latin alphabet, then this may not be the case. --jjron (talk) 10:57, 6 November 2011 (UTC)[reply]
I'm not convinced that speed-reading isn't mostly a fraudulent marketing device. 66.108.223.179 (talk) 03:05, 7 November 2011 (UTC)[reply]
No it isn't; you can actually test this by scrambling texts so that they are barely readable. What then happens is that the text becomes easier to read in fast reading mode than in normal slow reading mode. Try it on the following text, which I took from a random Wiki article and then randomly permuted the letters in each word, keeping the first and last letters fixed:
"Qitue often trhee is a tie, in wichh csae a semi-final taerebiekr rnoud is need. For exmpale, if six pyarles fiineshd the plinieamrry rudnos with seven poitns and fefetin fenshiid with six potins, the six who fniehisd wtih sveen ptoins amltcaitolauy acvadne to the fnial cotpiemiton. The fetefin with six pnotis mvoe into the semi-fnail rnuod wehre the top fuor are dniemteerd to fill the reimadner of the setas in the fanils. Tihs is done by aniskg every paelyr the same qeuostin at the same time and gviing ecah player tlwvee socedns to wirte dwon the aswenr. Ecah qetsioun is alutmcaotaily reeptead tcwie. Eronvyee reeavls thier aenwsr at the end of the twvlee sdcones and pyrales are etinelamid on a sgnile-etaoiimniln basis. If, unsig the aovbe explame of four oepn setas in the flanis, three is a qiueotsn wehre eghit pleayrs are lfet in the semi-fnail ronud and there paryles get the qteuoisn rgiht, thsoe terhe ancdave to the filans. The oethr fvie who got the qusioetn worng wlil cunoitne with the sgnlie-eiiaolmtnin purcorede to dneitemre wichh ctemopiotr wlil tkae the last open seat in the falins." Count Iblis (talk) 04:47, 7 November 2011 (UTC)[reply]

Intensity of the superposition of two coherent waves


I think I'm missing something here. Say you have two identical, coherent light sources at equal distance from a point P, so that the waves from both sources interfere constructively at P. Let I be the intensity, and A the amplitude, of the wave from either source as measured at P. The amplitude of the superposition of the two waves at P is 2A. Since intensity is proportional to the square of amplitude, the intensity of the superposition of the waves should be 4I. But intensity is a power measure; you shouldn't be able to combine power from two identical sources and get four times as much. Where do I have it wrong? — Preceding unsigned comment added by 173.49.81.140 (talk) 13:03, 6 November 2011 (UTC)[reply]

You don't have it wrong; your logic is correct. That's why constructive interference is different from simply adding intensities. Elsewhere in the interference pattern the two sources will interfere destructively, and on average over the whole area of influence of the sources, energy is conserved. Energy conservation works out globally, not locally. Dauto (talk) 14:02, 6 November 2011 (UTC)[reply]
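A minimal worked form of this, assuming two identical waves of amplitude A (so I is proportional to A squared) meeting with phase difference φ at the point considered:
 % resultant amplitude and intensity at a point with phase difference \varphi
 A_{\mathrm{tot}} = 2A\cos(\varphi/2), \qquad
 I_{\mathrm{tot}} = 4I\cos^2(\varphi/2)
 % averaged over the whole pattern (\varphi uniform over one cycle), \langle\cos^2\rangle = 1/2:
 \langle I_{\mathrm{tot}} \rangle = 2I
So the bright fringes carry 4I, the dark fringes carry zero, and the average over the pattern is the plain sum 2I, as energy conservation requires.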
There are other situations, possibly more intuitive, where similar things happen. For instance, if a mass m initially at rest is accelerated by a force F for a time t, its acceleration will be a = F/m, the displacement will be d = Ft²/(2m), the work will be W = Fd = F²t²/(2m), and the power will be P = W/t = F²t/(2m). Now, if a second force identical to the first happens to be acting simultaneously with the first, effectively doubling the force, the power quadruples. There is no such thing as power superposition, where the power of two forces acting together would be given by the sum of the powers of each one separately. That's just not the way things work. The same thing is true for the superposition of two waves. The powers don't simply add up. The waves interfere, and the resulting power can be anywhere between zero and quadruple the power of one of the sources (if the waves are identical). Dauto (talk) 16:26, 6 November 2011 (UTC)[reply]

Which is more stable, K or K+?


Hey guys, just a senior in high school studying up on chemistry for his finals. There's one small thing bugging me that I just cannot find an answer for.

The statement "K is more stable than K+" : is it true or false? Because if you look at it from enthalpy's point of view, K is more stable than [K+ + e-] because you need to provide it its ionization energy if you want to separate an electron from K. But is the ion itself (just K+, not [K+ + e-]) more unstable than its neutral atom counterpart? Is it even possible to compare its stability without counting the electron? Because the way I see it, the total sum of the enthalpy of K+ and the kinetic energy of the electron is definitely greater that the enthalpy of K, but maybe the enthalpy of the ion itself (without the kinetic energy of the electron) is less than K. Because when you think about it in a simple non-thermodynamic point of view, potassium (or any alkaline metal for that matter) is definitely unstable in its atomic form, and is definitely happier in its ionic form because of the octet rule, so you could say that K+ is more stable than K, right?

Maybe I've got the whole concept of enthalpy wrong, or maybe I'm right, in which case my question above is something worth thinking about. Care to help me out? Thanks. Johnnyboi7 (talk) 15:45, 6 November 2011 (UTC)[reply]

If you just have an isolated atom, K is more stable than K+. A hunk of pure potassium metal in a vacuum will sit there without the atoms decomposing. But if you have a compound such as KCl, a split into K+ and Cl- is more stable than a split into K and Cl. Looie496 (talk) 16:55, 6 November 2011 (UTC)[reply]
To expand on what Looie496 said, and also to answer your question "Is it even possible to compare its stability without counting the electron?", the answer is probably not. In order to consider whether a system is more "stable" with potassium atoms or potassium ions in it, you need to consider what happens to that one electron. If the potential energy of the electron separated from the atom is lower than the potential energy of the electron joined to the atom, then the separated state is more stable; if the inverse is true, then the joined state is more stable. One can invent any number of possible interactions where the side with the potassium atom is more stable; likewise one could come up with situations where the side with the ion is more stable. There's no single answer. --Jayron32 19:00, 6 November 2011 (UTC)[reply]
A common way to compare "stability" is to look at the energy difference between one state and another, under certain conditions. In the gas phase, this energy difference is known as the ionization energy (our article isn't really clear that it applies to gas phase species, but it does). Specifically, it requires about 419 kJ/mol to go from gaseous potassium atoms to gaseous K+ and e- ions. Buddy431 (talk) 19:14, 6 November 2011 (UTC)[reply]
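In equation form, the gas-phase process being compared (a sketch using the value quoted above) is:
 \mathrm{K(g)} \;\longrightarrow\; \mathrm{K^{+}(g)} + \mathrm{e^{-}}, \qquad \Delta H \approx +419~\mathrm{kJ\,mol^{-1}}
The positive sign is why the isolated neutral atom counts as the more stable species in this particular comparison.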
I concur — Preceding unsigned comment added by 203.112.82.1 (talk) 19:55, 6 November 2011 (UTC)[reply]

Genealogy


How can I know who my ancestors are? Were they Kshatriya or something else? To which dynasty did they belong? — Preceding unsigned comment added by Vichu8331 (talkcontribs) 15:49, 6 November 2011 (UTC)[reply]

We have no way of knowing who your ancestors are. Looie496 (talk) 16:49, 6 November 2011 (UTC)[reply]
The only way to know who your ancestors were with any accuracy is to have them documented; that usually means someone wrote down every time someone was born, who their father and mother were, and so on. What culture you live in, and at what point in history, will determine how easy it is to track down your specific ancestors. See Genealogy for methods of tracing your ancestors. --Jayron32 18:54, 6 November 2011 (UTC)[reply]
I added the title "Genealogy" to this question, which was posted without a title. However, perhaps Varna or maybe Caste or Race would have been a more apt title, since I think that's more specifically what the question is about. Kshatriya is a varna, and I'm pretty sure the question about dynasties refers to Lunar Dynasty vs. Solar Dynasty, which are basically racial divisions. So respondents should just respond to the questions per se, and not be influenced by the choice of title. I probably should have added a comment to that effect when I added the title. Red Act (talk) 19:31, 6 November 2011 (UTC)[reply]
I think the standard answer is to start with what you know and work backwards, and I think that will apply in this case too. Write down who your parents are, who their parents were, and if you know it, who their parents were. India does keep birth records, but I have only seen a documentary on the Anglo-Indian community's records so I'm not sure of the extent to which the records are kept and how far they go back. --TammyMoet (talk) 19:42, 6 November 2011 (UTC)[reply]
That wouldn't be Alastair McGowan's appearance on Who do you think you are? by any chance? That dealt with trying to dig up old records, which were perhaps a little more available than one would imagine. Of course, the producers probably threw huge wodges of cash at people to find them. Brammers (talk/c) 09:14, 7 November 2011 (UTC)[reply]
Yes it was, hence my caveat of the community. A researcher who knows their subject is worth more than gold, in my experience. --TammyMoet (talk) 10:36, 7 November 2011 (UTC)[reply]
The professional historians' association's rates aren't recent recommendations (2003), but back-calculating from equivalent wage earners and adding a consultant's mark-up, I'd charge AUD300/hr before GST, with you paying my costs (living expenses away from home based on public-service rates, archival retrieval fees, transport: economy air, first-class rail). The professional association recommends something similar; based on 2003 costs, they're asking for a 5x hourly mark-up. The current hourly rate is around AUD60 (including super) for full-time, non-professorial academic historians in employment. Fifelfoo (talk) 10:46, 7 November 2011 (UTC)[reply]
I double checked the rates; they're 2011 and they're only asking for a 1.5x mark-up for casual consulting. Obviously that's not based on attempting to live off consulting work on a serious basis…and, correspondingly, a contractor would negotiate their professional fee downwards based on a contract with a longer fixed period than hourly engagement. Fifelfoo (talk) 10:50, 7 November 2011 (UTC)[reply]

Note: Modern DNA tests can show with some (?) accuracy where a person's ancestors lived and possibly such things as a common ancestor (Genghis Khan has been the subject of such studies). DNA also shows a fairly large percentage of people with "Neanderthal ancestry", etc. Collect (talk) 13:39, 7 November 2011 (UTC)[reply]

charge


Why does a charge produce an electric field? What is the reason for a charge to interact with another charge? Can't a charge exist alone? — Preceding unsigned comment added by Bhaskarandpm (talkcontribs) 17:28, 6 November 2011 (UTC)[reply]

The electromagnetic force is not limited by distance, only weakened. Dualus (talk) 18:50, 6 November 2011 (UTC)[reply]
(edit conflict) An electric field is a way to model the effect of electric charges on other electric charges. Because electric charges exert an influence on other electric charges, and that influence is related to the relative positions of those two charges, the "electric field" is merely the model which associates the nature of that interaction with various points in space. Fields are powerful tools in physics because they allow one to predict the outcome of various "thought experiments", such as calculating the effect of one charged particle placed into an environment of other charged particles. Your second question is unanswerable. The "reason" implies that there is some greater purpose. The interaction between two charges is inherent in the definition of what a charge is: charge is that property of an electron which makes it attracted to a proton and repelled by another electron. It does not have a reason, excepting that it was how the Universe was organized during its creation. It is just a description of charge. You can't assign "reason" to such a quantity. It is just a description of being, not part of a process. --Jayron32 18:51, 6 November 2011 (UTC)[reply]
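A minimal sketch of that field-as-model idea, in Python (the charge values and positions below are arbitrary illustrative numbers): compute the field a set of point charges produces at a location, then the force the model predicts on a test charge placed there.
 import math

 K_E = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

 def field_at(point, charges):
     # Electric field (Ex, Ey) at `point` due to a list of (q, x, y) point charges.
     ex = ey = 0.0
     px, py = point
     for q, x, y in charges:
         dx, dy = px - x, py - y
         r = math.hypot(dx, dy)
         ex += K_E * q * dx / r**3  # k q / r^2, resolved into x and y components
         ey += K_E * q * dy / r**3
     return ex, ey

 # Two arbitrary charges of +1 nC and -1 nC, and the force on a 1 pC test charge at the origin
 charges = [(1e-9, 0.1, 0.0), (-1e-9, -0.1, 0.0)]
 ex, ey = field_at((0.0, 0.0), charges)
 print((1e-12 * ex, 1e-12 * ey))  # F = qE
The field values exist at every point whether or not a test charge is placed there, which is exactly the "model associated with points in space" described above.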
I concur — Preceding unsigned comment added by 203.112.82.1 (talk) 22:32, 6 November 2011 (UTC)[reply]

Temperature and heat


Assuming that a body does not reflect all of the radiation it receives, does visible light change the temperature of the objects it hits? 77.125.136.181 (talk) —Preceding undated comment added 17:54, 6 November 2011 (UTC).[reply]

Yes. See Absorption (electromagnetic radiation). --Jayron32 18:44, 6 November 2011 (UTC)[reply]
Of course it does. Light is electromagnetic radiation, and thus carries energy. If a certain amount of light is absorbed, the corresponding energy has to go somewhere (see conservation of energy). If your absorbing object is not capable of systematically transforming this energy into another form (like a photodiode or a chloroplast does), it will be transformed into heat.
For the average human this is hard to notice, as we usually do not encounter light containing only visible wavelengths in quantities large enough to make the effect perceptible to human senses. Phebus333 (talk) 03:23, 7 November 2011 (UTC)[reply]
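A rough back-of-the-envelope sketch of that heating, ignoring all heat losses (the numbers below are purely illustrative assumptions):
 def temperature_rise(power_absorbed_w, time_s, mass_kg, specific_heat):
     # Idealised rise with no losses: dT = P * t / (m * c)
     return power_absorbed_w * time_s / (mass_kg * specific_heat)

 # e.g. 1 W of visible light fully absorbed for 60 s by 100 g of water (c = 4186 J/(kg K))
 print(temperature_rise(1.0, 60.0, 0.1, 4186.0))  # about 0.14 K
A fraction of a kelvin after a full minute is indeed well below what skin can reliably notice, consistent with the point above.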
For any warm-blooded animal, the core body temperature isn't likely to vary much, although the skin, hair or fur facing a bright light may get warmer. Thermoregulation will work to keep the core body temperature more or less constant. In a human, that includes shivering, putting on warmer clothes, eating and drinking warm things, etc., when cold, and sweating, taking off clothes, eating and drinking cold things, etc., when hot.
Of course, we also regulate our temperature by moving into and out of the light, as do cats, who are famous for sleeping in a spot of sunlight on the floor, and moving to follow it. StuRat (talk) 16:17, 7 November 2011 (UTC)[reply]
Indeed, sunlight is not all visible, but most of its energy is in the visible frequency range. Glass is not usually transparent in the infrared, and is fairly opaque to UV, so if you use a lens to start a fire, that mostly depends on visible light. --Stephan Schulz (talk) 10:59, 8 November 2011 (UTC)[reply]

A mirror and an image projector


When an image projector projects onto a wall you usually see a picture. But when a projector projects onto a mirror, the mirror reflects the image. So what is the regularity by which the projector works? Exx8 (talk) —Preceding undated comment added 17:59, 6 November 2011 (UTC).[reply]

Your question is unclear. You stated two unrelated facts of optics (1. screens allow us to see real images, and 2. mirrors reflect light) and then you ask for a regularity? What do you have in mind? Dauto (talk) 19:03, 6 November 2011 (UTC)[reply]
I think you are asking about how overhead projectors work, but I'm not sure. Quest09 (talk) 20:49, 6 November 2011 (UTC)[reply]
If you shine a projector ONTO a mirror, I don't think you'll see the image in the mirror, you'll see the reflection of the projector shining AT YOU. The image will be projected onto you and your surroundings. Vespine (talk) 01:55, 7 November 2011 (UTC)[reply]
That is correct. The image from a projector is so bright that your eyes cannot discern an image when seeing the light reflecting from a mirror. But there is a slightly related way to see a reflected image. To avoid glare, projection booths in movie theaters have tilted windows that the projectors shine through. The tilted glass reflects some of the light - but not a lot of it. Because only a small percentage of it is reflected, you can see the movie image on the glass. -- kainaw 14:04, 7 November 2011 (UTC)[reply]
Is that an application of a Brewster angle window that is not mentioned in that article? DMacks (talk) 15:19, 7 November 2011 (UTC)[reply]
No, the idea of the (slightly) tilted projection room window isn't to pass light of a particular polarization, it's to prevent multiple reflections between the window and the optics in the projector. (Oftentimes there are two panes of glass between the projection room and the theater to improve sound isolation; these two panes will be set at slightly different angles to suppress distracting reflections between these layers of glass as well.) The goal isn't to prevent all reflections so much as to try to keep the reflections that do occur from ending up anywhere annoying. TenOfAllTrades(talk) 15:47, 7 November 2011 (UTC)[reply]

What kind of effect?


If you give some innocuous pill to someone with an illness that heals on its own within x days, you'll obtain some success. What do you call this effect? Quest09 (talk) 20:25, 6 November 2011 (UTC)[reply]

placebo effect. --Jayron32 20:29, 6 November 2011 (UTC)[reply]
Yes, but isn't the placebo effect always related to the expectations of the patient? I was thinking about a coincidence effect which is not a placebo effect. Note that here we are treating a condition which will heal on its own, not actually improving anything with the treatment. I suppose there is a name for this fallacy. Quest09 (talk) 20:46, 6 November 2011 (UTC)[reply]
Not sure which way round you mean in "placebo effect isn't always related to the expectations of the patient" – the placebo effect is always related to the expectations of the patient. Returning to the rest of the question as I understand it (it's not clear), I think what you mean is they were going to get better anyway, they take a sugar pill, and they then attribute the success to the pill. It's an extension of the correlation does not imply causation fallacy. Grandiose (me, talk, contribs) 20:55, 6 November 2011 (UTC)[reply]
Yes, I was asking "isn't the placebo effect always related to the expectations of the patient?" What I wanted is something like the correlation does not imply causation fallacy, thanks. Quest09 (talk) 21:07, 6 November 2011 (UTC)[reply]
It's also the expectation of the physician, and other researchers. If the doc knows you've been given some experimental new wonder-drug he may examine you with a different mind-set than if he knows you haven't received any treatment. The doctor's attitude may even 'rub off' on the patient. (This is why the placebo effect can occur even in veterinary drug trials.) APL (talk) 21:12, 6 November 2011 (UTC)[reply]
I'd call it Post hoc fallacy, it's similar to the above but the pill comes before the cure so I think it fits better. Vespine (talk) 22:09, 6 November 2011 (UTC)[reply]
There's an important distinction there that you'll want to be careful of. The placebo effect is only the difference between the group that receives no treatment and the group that gets a sugar pill: the psychologically-driven physiological benefit of a biochemically-irrelevant therapy. Consider patients with the common cold in a clinical trial comparing sugar pill to no treatment, using survival as the measured outcome. (I admit that it would be difficult to secure funding for such a trial.) In the control group, 100% of patients survive, and in the placebo (sugar pill) group, 100% of patients survive. In that case, there was no relevant placebo effect on survival, because we got the same outcome in both groups.
If one of the patients in the sugar pill group nevertheless concluded that he had been saved from a horrible death by the placebo pill, it would be an example (per Vespine) of the post hoc fallacy, which in turn is either a subcase of or a related error to (depending on one's definitions) the correlation/causation fallacy noted by Grandiose. TenOfAllTrades(talk) 22:59, 6 November 2011 (UTC)[reply]
It's an example of regression to the mean. Such a disease/condition is called self-limiting. --Colapeninsula (talk) 12:42, 7 November 2011 (UTC)[reply]
Regression to the mean has nothing whatsoever to do with recovery from a self-limiting illness. Regression to the mean is a statistical artifact not reflective of the underlying properties of the system being examined. The recovery from illness (a fever declining to normal temperature, for example) is a genuine phenomenon that is being measured accurately. TenOfAllTrades(talk) 14:48, 7 November 2011 (UTC)[reply]
While I agree it's not really the answer the OP is after, I'm not sure I'd say it has "nothing" to do with it. I have seen regression towards the mean used to explain why people can be fooled into thinking some treatment they are taking has an effect even when it does not, specifically with regard to a chronic illness like arthritis, rather than a self-limiting illness like a cold. Taking the arthritis example, pain is generally experienced in cycles of good and bad periods, so you'll take the medicine when the pain is at its "worst"; regression towards the mean typically results in the pain eventually lessening, and this is ascribed to whatever treatment was taken. Vespine (talk) 23:21, 7 November 2011 (UTC)[reply]
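A small simulation sketch of that arthritis example (the pain model below is an arbitrary assumption, with no treatment effect built in at all): "treating" only on the worst days makes the next measurement look better purely through regression towards the mean.
 import random

 random.seed(0)
 # Pain scores fluctuate around a fixed mean of 5; nothing here models any treatment effect.
 pain = [random.gauss(5.0, 2.0) for _ in range(10000)]

 # "Treat" only on the worst days (score above 8) and compare with the following day's score.
 worst_days = [i for i in range(len(pain) - 1) if pain[i] > 8.0]
 before = sum(pain[i] for i in worst_days) / len(worst_days)
 after = sum(pain[i + 1] for i in worst_days) / len(worst_days)
 print(before, after)  # "after" falls back towards the mean of 5 even though no treatment acted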

Likelihoods and conditional probabilities, in the context of sensitivity and specificity.


I would appreciate some help in getting the concepts of likelihood and conditional probability straight. I'm posting here and not on the maths desk, because the question relates to fairly elementary statistics applied to science, and because I think I'll have a greater chance of understanding the answers here. I shall begin by presenting what I think I know about the matter first, and then point out what appears to me to be inconsistent usage.

Here's my understanding of a likelihood: A likelihood is the probability of obtaining the data that actually resulted from an experiment, given that some hypothetical statistical model were true. The concept is often used when the parameters of the model can be varied, creating a likelihood function that depends on the parameters, which can be used for obtaining maximum likelihood estimates of the parameters. Thus, a likelihood is a special case of a conditional probability, which matches the pattern "probability of observed data given hypothetical model".

Here's my understanding of sensitivity: It is the conditional probability that a person will test positively, given that he has the condition that is tested for. I would not call this a likelihood, since it does not match the pattern "probability of observed data given hypothetical model".

Here is my understanding of specificity: Specificity is the conditional probability that a person will test negatively, given that he does not have the condition that is tested for. Again, I would not call this a likelihood, since it does not match the pattern "probability of observed data given hypothetical model".

After this long introduction, here is my question. Why do we use the term likelihood ratio for the ratio between sensitivity and (1 - specificity) in diagnostic testing? I've argued above that sensitivity and specificity should not be called likelihoods.

  • Are my definitions above too restrictive or otherwise wrong?
  • Is the nomenclature itself sloppy?
  • Or is there some other explanation?

Thanks in advance, --NorwegianBlue talk 22:14, 6 November 2011 (UTC)[reply]

I'm not at all qualified but until someone else turns up... It sounds to me like the word likelihood is being used with several slightly different meanings. The common meaning of "how likely an event is" can be used to describe sensitivity (what's the likelihood of a true positive) and specificity (what's the likelihood of a true negative), but it also has a specific meaning, as you point out, in terms such as "likelihood ratios" (what's the likelihood someone who tested positive really is positive). Those two are obviously closely related. Our article on Sensitivity and specificity doesn't actually use the word "likelihood" to describe those terms, or even the word "likely", so it's possible that for strict usage those words are avoided to prevent ambiguity. It does use the word "unlikely", however, so I don't think it's "incorrect". Vespine (talk) 01:01, 7 November 2011 (UTC)[reply]
Per our article Likelihood ratios in diagnostic testing, where D+ means you have the disease and D- means you don't:
 sensitivity = P(T+|D+) and specificity = P(T-|D-),
so
 1 - specificity = P(T+|D-),
giving a ratio
 LR+ = P(T+|D+) / P(T+|D-) = sensitivity / (1 - specificity).
This is referred to as the (positive) likelihood ratio, because in statistics, P(T+|D+) is known as the likelihood of a positive condition (D+) given a positive test result (T+). Similarly, P(T+|D-) is known as the likelihood of the negative condition (D-) given a positive test result (T+).
The ratio P(T+|D+) / P(T+|D-) is therefore known as the statistical likelihood ratio given a positive test result.
The likelihood ratio is useful, because it can be used to express a very neat form of Bayes's rule expressed using the odds for D+:
Posterior odds = Prior odds × Likelihood ratio
Or, expanding that out a bit,
 P(D+|T+) / P(D-|T+) = [ P(D+) / P(D-) ] × [ P(T+|D+) / P(T+|D-) ].
The prior odds ratio is the odds of having the disease based just on its prevalence in the relevant population, before any test is made. The likelihood ratio gives the factor that this is multiplied by to give the final odds ratio that takes into account both the background prevalence and the results of the test. Jheald (talk) 17:13, 7 November 2011 (UTC)[reply]
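A small numerical sketch of that odds form (the prevalence, sensitivity and specificity below are made-up illustrative values, not from any real test):
 def post_test_probability(prevalence, sensitivity, specificity):
     # Probability of disease after a positive test, via the odds form of Bayes' rule.
     prior_odds = prevalence / (1.0 - prevalence)
     lr_positive = sensitivity / (1.0 - specificity)  # P(T+|D+) / P(T+|D-)
     posterior_odds = prior_odds * lr_positive
     return posterior_odds / (1.0 + posterior_odds)

 # Illustrative numbers only: 1% prevalence, 90% sensitivity, 95% specificity
 print(post_test_probability(0.01, 0.90, 0.95))  # about 0.15
Even with a fairly good test, the low prior means a positive result only raises the probability of disease to roughly 15% in this example, which is exactly the kind of effect the odds form makes easy to see.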
I should probably add some of the above into our Likelihood ratios in diagnostic testing article. Jheald (talk) 17:27, 7 November 2011 (UTC)[reply]
Thanks! But why is the word "likelihood" used for the conditional probability P(T+|D+)? The term likelihood is usually reserved for the probability of existing data (events that have already occurred), given some model. See the definition at Mathworld - The concept differs from that of a probability in that a probability refers to the occurrence of future events, while a likelihood refers to past events with known outcomes. I don't see how P(T+|D+) fits that definition, and would have been happier with "Probability ratio". --NorwegianBlue talk 21:25, 7 November 2011 (UTC)[reply]
Then I would repeat what I said and say that the definition in Mathworld is a very specific and narrow "statistical" use of the word. Other common definitions I can find don't seem to have this temporal stipulation. For example, according to those sources, valid uses of the word include "what is the likelihood it will rain today?" and "what is the likelihood a candidate will be elected?" Vespine (talk) 23:12, 7 November 2011 (UTC)[reply]
OK. Let me try to answer this with a very rough-and-ready potted history, though my answer may not be entirely NPOV. (You might get some alternate views from the Statistics wikiproject, or if you asked at Reference desk/Maths).
The "classical" view of probability of the 19th century, largely taking its cue from the works of Laplace, used Bayes' theorem to evaluate questions of so-called "inverse probability", ie
where θ is some unknown quantity of interest, which we are trying to infer on the basis of D that is some data which has been observed.
One question such a formulation leads to is how to assign the prior distribution P(θ|I). Laplace's recommendation, in the absence of any other information, was to assign to each possible case an equal possible chance -- i.e. a flat prior, .
However as the 19th century wore on, this recipe came increasingly under pressure, due in part to the highlighting of things like Bertrand's paradox -- a flat prior for P(θ|I) implies a non-flat prior for any nonlinear function f of θ. But what was the justification for giving θ instead of f(θ) the flat prior?
This led to people like John Venn proposing the frequency interpretation of probability -- essentially denying there could be any meaning to probability apart from in the case of the ratios of outcomes of long-run series of repeated trials. In particular, the very idea of probabilities of parameters like θ was rejected, because how could you talk about something that had an actual real fixed value in terms of probability?
But this left the question of how to do inference problems -- a gap that was filled by R.A. Fisher in 1922, who suggested ignoring the P(θ|I) term entirely, and just considering the forward probability P(D|θ) as a function of θ. This could be well defined, and coincided with the most-used results from the classical theory. In the limit of an infinite amount of data, the function would end up sharply peaked at the true unknown value of θ. If the data were not quite infinite, it should still be quite a strong indication of the true value of θ. Now this couldn't be called a "probability", since on the frequency interpretation things like θ couldn't have a probability, so instead he called the function a "likelihood".
Thus it seemed this new more rigorous, more sophisticated interpretation (in truth: a real mind-fuck) had banished the problems of the classical interpretation, while putting on a new rigorous basis its most important results. Thus was born so-called "orthodox statistics", and with the rise of university statistics departments by the 1940s its dominance had become pretty much total, part of the package if you wanted to be a member of the club. A few held out, such as the geophysicist Harold Jeffreys or the economist Keynes, but these were very much onlookers from well outside the tent.
However nothing remains the same for long, and by the late 1940s and 1950s the frequentist orthodoxy had started to be challenged by a small number of discontents who became known as Bayesians, who wanted to see inference done "right" -- i.e. in a way compatible with Bayes' Theorem. Their influence has steadily grown, particularly on the machine learning side, helped by the very close links between Bayes' Theorem and information theory, and by growing computer power and new algorithms in the 1990s which made it possible to realistically estimate full Bayesian posterior probability distributions for really very complicated models. Bayesian methods have also come very much to the fore in fields like medical statistics, where prior probabilities (such as in the form of background disease prevalences) simply have to be taken into account. Nevertheless, most standard university statistics books and courses still tend in the first instance to be primarily focussed around frequentist ways of thinking and the frequentist approach. Jheald (talk) 02:01, 8 November 2011 (UTC)[reply]
Two different meanings, same term, as the article says at the top ("not to be confused with..."). Can't be more long-winded; my iPad sucks for editing Wikipedia. It's been emotional (talk) 01:56, 8 November 2011 (UTC)[reply]
Thanks everyone! Special thanks to User:Jheald, for your thorough replies to my questions. --NorwegianBlue talk 08:07, 9 November 2011 (UTC)[reply]