Talk:Markov chain/Archive 1
This is an archive of past discussions about Markov chain. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 |
Untitled
What is a Higher Order Markov Chain?
External link relating to Google is busted ... no longer works (if ever?).
- Looks like they moved the page... I located it at Mathworks and pasted the new link into the "External links" section. Thanks for noticing, who knows how long it was broken. Happy editing, Wile E. Heresiarch 04:39, 3 Jun 2004 (UTC)
Ceran 00:36, 6 May 2006 (UTC): If a state includes variables that remember the values of other variables within the state at an earlier time, does this violate the Markov property? Could a chain/process that holds such a state still be considered Markovian? What exactly constitutes "previous" and "current" states?
Aperiodic
Did anyone notice that the definition of aperiodic in the text is plain wrong? The condition mentioned implies that the chain is aperiodic, but it is not necessary. — Preceding unsigned comment added by 136.142.151.43 (talk) 21:52, 13 February 2005 (UTC)
Bad didactics
In wrongly writing P(X|Y) instead of P(X=x|Y=y) you don't make things easier to understand, just easier to write down, and no more than that. If that's your purpose, just don't write anything; that would be easiest. Nijdam 22:15, 4 February 2006 (UTC)
Steady state
Quite a nonsense story about "steady state" and "convenient variables".Nijdam 22:37, 23 March 2006 (UTC)
Related questions
Probability in the definition of a state's hitting time
A state's (next) hitting time Ti is defined as . I'm wondering what the difference is between that definition and, say, (i.e. with probabilistic notation, like in the definition of a state's period)? Exaton 11:25, 3 July 2006 (UTC)
- The difference is that the first is a random variable, and the second says something about connectivity between states and about probability distributions. Michael Hardy 14:28, 3 July 2006 (UTC)
Slight omissions
Describe S
The "S" used in the applicable form of the Chapman-Kolmogorov equation, as well as in the one-step evolution of the marginal distribution, appears to me to be the set of sates (the state space). It even seems very clear, now that I have understood ; however, S is not actually defined anywhere, as far as I can see. Might it not be an idea to add the denotation near "state space" in bold type in the definition section (for example) ? Exaton 10:53, 3 July 2006 (UTC)
- Thanks. I've put an S in the definition paragraph (although it's not very prominent and could get lost maybe... not sure where else to put it tho). Also, I added a note about what "gcd" is, for completeness. - grubber 18:16, 3 July 2006 (UTC)
Error(s) in article
Memoryless vs Markov
From the article:
- The Markov property means the system is memoryless, i.e. it does not "remember" the states it was in before, just "knows" its present state, and hence bases its "decision" to which future state it will transit purely on the present, not considering the past.
This definition of "memoryless" seems incorrect to me. In information theory a "memoryless" source is a sequence of independent identically distributed random variables. A Markov chain is not in general memoryless. WikiC 22:19, 25 May 2006 (UTC)
- Do you know the usage at memorylessness? Michael Hardy 17:18, 13 June 2006 (UTC)
I agree with the statements above that Markov chains are not memoryless. Statisticians reserve the term for "truly memoryless" distributions like the Geometric or Exponential. As you gain information by knowing what happened in previous states (i.e. the states are not independent), I believe we should remove the mention of "memoryless" in the article. We should still keep some mention of the special property of Markov chains, called the "Markov property". Do you guys agree? -akshayaj
- The concepts are very much related: in each case we're saying the conditional probability distribution of future states given the present and past states, does not depend on the past states. Michael Hardy 17:21, 13 June 2006 (UTC)
They may be similar concepts ("memoryless" in the exponential vs. the "Markov property" in Markov chains), but using the strict definitions, you cannot use the term "memoryless" for the property of Markov chains. In fact, for Markov chains, none of the states but the last one is needed to determine the probability distribution of the next state. But I repeat, knowing any previous state will add information, and thus change the current state probability. Therefore, Markov probabilities are not strictly memoryless, as they "remember" all previous states just by knowing the last state. Speaking statistically, the previous states are not independent of the current one for Markov chains, as they are for exponential variables. See the wikipedia entry on "memoryless" for a perhaps better explanation -Akshayaj 19:55, 20 June 2006 (UTC)
I'm a bit unclear on the notion of independence, with regard to whether the statements above are correct. I still believe my main point stands, and will try to get an answer to my "independence issue" soon -Akshayaj 19:55, 20 June 2006 (UTC)
- The point is that MCs are memoryless conditional on the most recent state. 128.8.168.148 17:10, 31 August 2006 (UTC)
From the article: "In other words, the past states carry no information about future states." The mutual information between the past and future (excluding - or not - the current state) is nonzero in general. I think this is misleading. I think the better statement would be: "Any information about the future that may be contained in the past is also contained in the current state." -jmahoney@cse.ucdavis.edu
Periodicity
Also this sentence is not quite correct: "A process is periodic if there exists at least one state to which the process will continually return with a fixed time period (greater than one)." Consider the transition matrix:
Isn't a Markov chain with this transition matrix periodic? I don't know how to word that sentence better. See the definition of aperiodicity at http://www.quantlet.com/mdstat/scripts/csa/html/node26.html. WikiC 01:00, 26 May 2006 (UTC)
- That definition is indeed incorrect. I felt the other definitions were much too terse (eg, saying "accessible" without saying what it meant), and a few common terms were absent altogether (eg, transient). I've expanded those definitions quite a bit. Check them out for completeness :) - grubber 02:26, 21 June 2006 (UTC)
Finite vs Discrete
There was a section on "discrete state spaces," but then it described finite state spaces. I've cleaned up that section a little bit. When I have some time, I'm going to remove the integrals earlier in the article and put in some text that reinforces that a MC has a discrete state space. - grubber 19:16, 28 June 2006 (UTC)
Properties of Markov chains
The section right after the intro, "Properties of Markov chains", seems to be saying that
One needs to have this kind of property in order for the claimed formulation of the Chapman-Kolmogorov equation to apply, as well as the discussion of the marginal distribution. However, I was under the impression that, in general, one may have
and still be able to call it a "Markov process", since the next state depends only on the current state (and the current transition probability). Or is it the case that the transition probability must be time-independent, in order for it to be called "Markov"? If the former is true, then it should be stated right up front, at the beginning. If not true, then there are numerous, serious faults and errors in the article ... Sincerely confused, linas 00:42, 29 August 2006 (UTC)
Hmm, it seems inescapable that the former, not the latter, is intended. Is there a name for the latter? In particular, I am interested in studying a system where the transition matrix is finite-dimensional, but it changes over time. It is also the case that for my system, it is not a "second order Markov process" (i.e. repeated concatenations of the transition matrix are not periodic). linas 01:02, 29 August 2006 (UTC)
- The article assumes, but does not state, that the Markov chains are "time invariant". That is the common assumption, since time-varying Markov chains are much more difficult to study. I think it would be useful for the article to mention this property once. - grubber 15:20, 29 August 2006 (UTC)
- Hmm. Well, I know how to solve a certain class of time-varying Markov chains, and surely there are other classes that are also solvable, so I was surprised that this article seemed to confuse the definition of Markov chains in general with stationary Markov chains. linas 15:12, 4 September 2006 (UTC)
Never mind. On closer inspection, it appears that the article does not assume that Markov chains are time invariant. The notation in the "properties" section is marvelously misleading though: it can be misread, which is what I did at first: I took the superscript (n) to mean "raised to the power of", whereas in fact the equations all hold just fine if (n) is interpreted as merely "some label". Silly me. linas 16:01, 4 September 2006 (UTC)
More problems
After going through the article more carefully, it seems that most of the article does not assume that the process is stationary (and I explicitly marked the few places where it does). However, the section on "stationary analysis" is ambiguous. I believe that lots of it can be generalized to the non-stationary case, but would require some care with the notation being used. That section needs work. linas 16:20, 4 September 2006 (UTC)
Definition of "stationary"
The article says: "A stationary Markov chain (a.k.a. time-stationary chain or time-homogenous chain) is a process where one has (*) for all n". My professor uses "stationary" to mean something quite different, namely: for all n, m, x. When I asked him, he was quite sure that his usage was standard. When he wants to express that the transition probabilities are constant with respect to time, i.e. equation (*), he uses "homogenous" or "time-homogenous", but not "stationary". I can imagine that either the article is wrong, or that there is significant variation in terminology within the field - I propose that we either fix the article, or mention the existence of different conventions, respectively. A5 23:36, 19 November 2006 (UTC)
- I've updated the article, with my professor's help. He pointed out that at least some of the external links use his terminology. I hope this is the right thing to do. A5 14:57, 22 November 2006 (UTC)
- It's a good point, and off the top of my head, I know I've heard "time-homogenous" for that property, and I can't recall if I've heard it called "stationary". I'm going to check my Kulkarni book when I get on campus next. For now, I think your edit is fine. - grubber 16:44, 22 November 2006 (UTC)
- I got my Kulkarni book. A DTMC with a stationary distribution π is stationary iff P(Xn=j) = πj. Time-homogeneous is the proper term for independence of n. Thanks for fixing this! - grubber 19:49, 27 November 2006 (UTC)
Recurrence
Sorry, I was just puzzled about the following statement in the section on recurrence:
It can be shown that a state i is recurrent if and only if $\sum_{n=1}^{\infty} p_{ii}^{(n)} = \infty$.
I am puzzled about this. Take for example a simple 2-state Markov chain with $p_{12} = 1$ and $p_{21} = 1$, i.e. the states flip between state 1 and state 2 every period. Now both states 1 and 2 are recurrent, since they are reached every second period. But $p_{11} = p_{22} = 0$ and therefore $\sum_{n} p_{ii}^{n} = 0$, which means by the statement above no state should be recurrent. Did I get something wrong? —The preceding unsigned comment was added by 212.201.78.127 (talk) 09:45, 15 January 2007 (UTC).
- I think I can help with this - the superscript (n) in the original notation represents the number of steps or transitions in the path from i back to i. Thus in your example, again using the original notation, $p_{ii}^{(1)} = 0$ and $p_{ii}^{(2)} = 1$. More generally, $p_{ii}^{(2n)} = 1$ for every n, thus the sum over all n would indeed be infinite. Trog23 07:20, 7 February 2007 (UTC)
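For anyone still unsure about the notation, here is a quick numerical illustration (assuming, as above, the two-state flip chain): the superscript counts steps, so p_ii^(n) is the (i, i) entry of the matrix power P^n, not the scalar p_ii raised to the n-th power.

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # the flip chain: from each state you always move to the other one

# p_ii^(n) is the (i, i) entry of the n-th matrix power, not (p_ii)^n
partial_sum = sum(np.linalg.matrix_power(P, n)[0, 0] for n in range(1, 101))
print(partial_sum)           # 50.0: one unit for every even n up to 100, so the full sum diverges
```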
text manglers
Has anyone ever made a web page showing how simple Markov chains are? Whenever I look them up on the web, I find sigmas and all sorts of stuff I know nothing about, yet I understand Markov chains and can teach any fairly bright person how to create a Markov chain text mangler on paper, without even the aid of a computer program, in 2 hours at most. They're simple. I can think of many, many other frivolous non-text uses for them and suspect they will eventually be the solution to the Turing test, given time and creativity, but they are always presented as something that needs BIG math skills. Any links that explain how to do a text mangler Markov chain would be appreciated. Thaddeus Slamp 06:55, 9 February 2007 (UTC)
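For what it's worth, here is a minimal sketch of such a text mangler (order 1, word level). The file name "source_text.txt" is a placeholder for whatever corpus you feed it:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it somewhere in the corpus."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def mangle(chain, length=50):
    """Generate text by repeatedly sampling a successor of the current word."""
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:                        # dead end: restart from a random word
            word = random.choice(list(chain.keys()))
        else:
            word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = open("source_text.txt").read().split()  # any plain-text file will do
print(mangle(build_chain(corpus)))
```
The frequencies in the follower lists are exactly the transition probabilities of the chain; no sigmas required.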
ergodic definitions
Does anyone else find these two statements in the article to be a bit contradictory? We first have
A finite states irreducible Markov chain is said to be ergodic if its states are periodic.
and then
A state i is said to be ergodic if it is aperiodic and positive recurrent. If all states in a Markov chain are ergodic, then the chain is said to be ergodic.
I assume we want "aperiodic" and not "periodic" in the first statement? --Lavaka 23:22, 30 April 2007 (UTC)
0.7 pass
I have passed this article for Wikipedia 0.7. It meets the Criteria for approval by being a B-Class article of High Importance or Higher. It is of Top importance resulting in the passing of this article. Funpika 22:15, 5 June 2007 (UTC)
Discrete-parameter Markov process vs. discrete-state Markov process
(This is my first Wikipedia discussion/talk page entry. So, please be patient. ;) )
I'm somewhat puzzled by the following statement of the Markov chain page: "In mathematics, a Markov chain, named after Andrey Markov, is a discrete-time stochastic process with the Markov property."
To my best knowledge, I cannot agree here. As far as I know, stochastic processes with a discrete state space are referred to as "chains". On the other hand, we use, e.g., "discrete-time" and "continuous-time" to distinguish between stochastic processes that have a discrete or a continuous parameter, respectively.
See, e.g.,
- G. Bolch et al.: "Queueing Networks and Markov Chains", 2nd Edition, Wiley, 2006, page 53:
[...], we consider Markov processes with discrete state spaces only, that is, Markov chains, [...]
- K.S. Trivedi: "Probability and Statistics with Reliability, Queueing and Computer Science Applications", 2nd Edition, Wiley, 2001, page 296:
Although this definition applies to Markov processes with continuous state space, we will mostly be concerned with discrete-state Markov processes — specifically, Markov chains. We will study both discrete-time and continuous-time Markov chains.
Of course, in Markov chains, state changes (transitions) between the different states happen instantaneously, i.e. at discrete points in time. Still, the transitions may happen at arbitrary points in time (CTMCs); only in DTMCs are these time points fixed.
Thus, I am neither able to comprehend the statement given in the Markov chain article ("[...] a Markov chain [...] is a discrete-time stochastic process") nor the fact that "DTMC" is redirected to "Markov chain" and "CTMC" is redirected to "Continuous-time Markov process". Both DTMCs and CTMCs are Markov chains.
I'm happy to receive any pointer towards work that states that Markov chains always have a discrete parameter/time. MuesLee 13:00, 19 June 2007 (UTC)
- I believe that your understanding is correct, MuesLee, and offer two more references that support your view:
- - J.R. Norris. "Markov Chains", Cambridge University Press, 1997 (ISBN 0-521-63396-6) Introduction: "...the case where the process can assume only a finite or countable number of states, when it is usual to refer to it as a Markov Chain."
- - William J Stewart. "Introduction to the Numerical Solution of Markov Chains", Princeton University Press (ISBN 0-691-03699-3) Page 5: "If the state space of a Markov Process is discrete, the Markov process is referred to as a Markov Chain."
Periods
"A state i has period k if any return to state i must occur in some multiple of k time steps and k is the largest number with this property. For example, if it is only possible to return to state i in an even number of steps, then i is periodic with period 2. Formally, the period of a state is defined as" I'm a bit confused here - shouldn't it be "smallest number with this property"? The example with even numbers suggests that this is the case as well. I won't change it, in case I'm just wrong. 66.216.172.3 18:05, 12 July 2007 (UTC)
- Nevermind, I'm just dense.
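For anyone else who trips over the same wording: "largest number with this property" is just the greatest common divisor of the possible return lengths, which is why the formal definition uses gcd. A tiny illustration (return lengths chosen arbitrarily):

```python
from math import gcd
from functools import reduce

return_lengths = [4, 6, 10]            # hypothetical lengths of possible return paths to state i
period = reduce(gcd, return_lengths)   # the largest k dividing every possible return length
print(period)                          # 2: every return happens after a multiple of 2 steps
```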
Removed (Bad?) link
I removed this external link:
- A Markov text generator generates nonsense in the style of another work, because the probability of spitting out each word depends only on the n words before it
Because it didn't work when I ran it. I moved the source code into this utility and linked to that now: http://www.utilitymill.com/utility/Markov_Chain_Parody_Text_Generator —Preceding unsigned comment added by 69.255.197.177 (talk) 20:44, 2 September 2007 (UTC)
Submitted for Wikipedia:Mathematics Collaboration of the Week
This article needs an update, so I have asked for some help from the collaboration of the week project. From my submission, some todos for this page:
- Clearer introduction for the newcomer
- Motivation for using Markov chains
- Clean-up, reviewing and clarification of the math
- Better distinction between continuous and discrete state spaces
- A clear example of a Markov model
- Conditions for which a unique eigenvector exists, and how to use the Power method to find limit distributions
- Scientific applications should be extended; currently the list is not explicative
- Relation to Hidden Markov models
- Incorporation of Markov process
Anthony Liekens 00:07, 18 September 2005 (UTC)
- Agreed. I've read the article and I still have no idea what a Markov chain is - I got directed here by a Slashdot comment, saying "On Usenet there are enough kooks that a simple Markov chain based text analysing program could pass." (in regard to A.I.). Having read the article, I have no idea what he means. Please clean up this article for readability! 80.6.114.153 11:02, 3 December 2005 (UTC)
- Many of the complex definitions have simpler versions which suffice for the simpler special case of a finite state Markov chain. Often, when people refer to "Markov chains", they actually mean "finite state Markov chains". Perhaps we need to clarify this early in the article, and link to a new page specifically on finite state Markov chains. Jim 14159 11:17, 1 November 2007 (UTC)
Ergodicity hypothesis
We can see the sentence: "Markov chains are related to Brownian motion and the ergodic hypothesis". I think Markov chains aren't always ergodic. If they were, there would only be one final class. Could someone confirm?
This is a very strange sentence. Brownian motion is a continuous time Markov process. It is not ergodic. And, Markov chains are not always ergodic.
Spmeyn (talk) 16:21, 9 December 2007 (UTC)Sean Meyn
You are right, but I don't see a problem. This part of the article explains the history of the study of Markov chains. It says that Markov chains are related to Brownian motion (which is true). They are certainly related, since Brownian motion in the mathematical sense is a continuous-time Markov process. But I think in this case the sentence actually refers to the physical process of Brownian motion (which is also related). Then, the claim that Markov chains are related to the ergodic hypothesis is also correct. They are related, since the question of ergodicity may be stated in the context of Markov chains. It is a hypothesis, so there is no claim that it holds for every Markov chain. I hope this clarifies the issues. Oded (talk) 15:44, 8 May 2008 (UTC)
Suggestion about Examples
For the non-mathematical reader it would be good to point out that the question of whether a physical process is a Markov process or not is not a well posed question. It cannot be answered until we say what information we are willing to include in the "state" variable(s). It would be nice to give an example of a physical process that is Markov using one definition of state and not Markov using another definition of state.
Bad Example: Coin Toss
I think the coin toss is a bad example of a Markov chain. The outcome does not depend on the current state; it is always 50/50 (or x/(1-x) with an unfair coin) regardless of state. The coin toss is at best a degenerate case of a Markov chain.
A first-in-first-out (FIFO) queue could be a better example: the state is the length of the queue. In a given time interval, a new person (or thing) arrives at the queue with probability p, and a person (or thing) is served and leaves the queue with probability q. Then, if the current state is i, the probabilities of going to states i-1, i and i+1 depend only on the current state and the input probabilities p and q. —Preceding unsigned comment added by 71.163.214.29 (talk) 04:43, 2 May 2008 (UTC)
I could not agree more with this comment. A coin toss is not a Markov process! It's an iid process. I will fix this in a few days. Sunbeam44 (talk) 15:15, 10 May 2008 (UTC)
- Any i.i.d. process is Markov. And a sequence of coin tosses is Markov. But it is not a good illustration of the concept of a Markov process. Oded (talk) 15:36, 10 May 2008 (UTC)
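Regarding the FIFO queue suggested above: to make it concrete, here is a rough sketch of its transition matrix, truncated at a maximum queue length so the state space is finite. The values of p, q and the cutoff are arbitrary, and treating "arrival and departure in the same slot" this way is only one of several reasonable conventions:

```python
import numpy as np

p, q, N = 0.3, 0.4, 10          # arrival prob., service prob., queue cutoff (illustrative values)

P = np.zeros((N + 1, N + 1))    # states 0..N are the possible queue lengths
for i in range(N + 1):
    up = p * (1 - q) if i < N else 0.0    # arrival without a departure
    down = q * (1 - p) if i > 0 else 0.0  # departure without an arrival
    if i < N:
        P[i, i + 1] = up
    if i > 0:
        P[i, i - 1] = down
    P[i, i] = 1.0 - up - down             # otherwise the length stays the same

assert np.allclose(P.sum(axis=1), 1.0)    # each row of a stochastic matrix sums to 1
```
Each row depends only on the current length i, which is exactly the Markov property the example is meant to show.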
MCML - Markov Chain Markup Language
Has anyone seen a definition of this language anywhere? A Google search just found some references to it from scientific papers, but I'm hoping there's an official or semi-official site with the definition and tools to work with it. —Preceding unsigned comment added by Engmark (talk • contribs) 13:53, 11 June 2008 (UTC)
This article needs diagrams
If you're looking for a rigorous definition of something, then all that math notation is great. But when you're new to the concept, it's completely impenetrable. Some diagrams would really help to explain what's going on here. Timrb (talk) 14:47, 17 June 2008 (UTC)
- Like those in Random walk? Perhaps the link to that page should be a bit more prominent and point out that there are some pictures there. Tayste (talk - contrib) 20:34, 17 June 2008 (UTC)
- I don't think we are looking for the diagrams such as in Random walk. What would be appropriate would be the kind of diagrams as in finite state machine (not the one at the top, but further down), but with probability labels on the edges. At least, that would be the kind that would best illustrate the Markov chain concept in the finite setting. Oded (talk) 06:08, 18 June 2008 (UTC)
- Finite state machines are good. There are also some "trellis"-style diagrams which show all the states lined up above each "step" in the FSM (like here), which I think are pretty handy for visualization, though the diagram example shown could still use some cleanup and clarification.Timrb (talk) 13:17, 20 June 2008 (UTC)
Just added a fomula, need some technical feedback on it
I just put in a formula for in the "Markov chains with a finite state space" section. The equation works whenever it is defined (I am pretty sure), but I lack the technical knowledge to provide all the proper caveats about its use.
I also thought about adding to the article a description of the most general method that can be used to find a stationary distribution, but again I am not familiar with the proper technical terms that should be used and all the special cases that should be taken into account.
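On the "most general method" point: for a finite state space, one common approach is simply to solve the balance equations πP = π together with the normalisation Σi πi = 1. A rough sketch (the matrix is an arbitrary example, and uniqueness needs the usual irreducibility caveats):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])               # example transition matrix

n = P.shape[0]
# Solve pi (P - I) = 0 together with sum(pi) = 1 as one least-squares system:
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                                 # approximately [0.8333, 0.1667] for this matrix
```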
"An irreducible chain has a stationary distribution if and only if all of its states are positive-recurrent" If I understood well the definitions of the words, this seems not to be true (if stationary means time-stationary).. ? —Preceding unsigned comment added by 195.113.30.252 (talk) 17:21, 6 October 2008 (UTC)
Steady-state analysis and limiting distributions
In the "Steady-state analysis and limiting distributions" section it might be appropriate to add one sentence of discussion concerning the periodic case (similarly to the reducible case). Randomblue (talk) 04:07, 22 March 2009 (UTC)
- Ok, I tried something out, please check. Randomblue (talk) 04:47, 22 March 2009 (UTC)
Limit of P^k
Section "Markov chains with a finite state space" states: Since P is a stochastic matrix, $\lim_{k\to\infty}\mathbf{P}^k$ always exists.
This is only true for ergodic Markov chains. --Borishollas (talk) 12:32, 24 November 2008 (UTC)
It is clearly not true in general -- take P=[0 1;1 0] (you always switch states). The limit does not exist. 128.143.1.62 (talk) 13:56, 13 April 2009 (UTC)
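A short numerical check of that point, using the switching matrix from the comment above:

```python
import numpy as np

P = np.array([[0, 1],
              [1, 0]])                    # the switching chain: always move to the other state

for k in (1, 2, 3, 4):
    print(np.linalg.matrix_power(P, k))   # alternates between P and the identity, so P^k has no limit
```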
Clarification requests
In the explanation of additive Markov chains of order m
There needs to be a "function exists such that" or at least something that clarifies what this function is.
On a side note, is P(X=x) or Pr(X=x) the standard terminology for the probability that event X will have a value of x? Ryan Brady (talk) 20:36, 11 June 2009 (UTC)
- "Event" is the wrong word here. "Random variable" can be used. The whole expression "X = x" is an "event". Both notations are fairly standard. Michael Hardy (talk) 21:40, 11 June 2009 (UTC)
- That definition of "additive Markov chain" is badly written in a number of respects. I'll have to look further to figure out what it should say instead. Michael Hardy (talk) 21:52, 11 June 2009 (UTC)
Under Properties of Markov chains
What is S? Ryan Brady (talk) 20:38, 11 June 2009 (UTC)
- It's the state space of the Markov chain. Michael Hardy (talk) 21:38, 11 June 2009 (UTC)
Graphical example (I've got one)
I think a graphical example of a finite chain wouldn't hurt. I just drew one for my thesis but it's in PNG (transparent though), not SVG. I thought I'd put it here in case anyone is feeling inspired to add it to the article. If no one volunteers, I might do it in a few days/weeks when my head's cooled off! Uploaded to Imageshack.
The corresponding transition matrix:
0 0.2 0.7 0 0 0 0 0 0.2 0.5 0 0 0 0.1 0.9 0 0 0 0 0 0 0 0 0 0
193.136.205.152 (talk) 01:34, 8 July 2009 (UTC)
Properties,Reducibility: When is a "Communicating class" not closed
In the section "properties", subsection "reducibilities", it says: "A communicating class is closed if the probability of leaving the class is zero". If it is possible to get from state A to a state B that is not in the class, then state B is accessible, and so isn't state B in the communicating class? How can an equivalence class not be closed? ( Martin | talk • contribs 07:41, 4 August 2009 (UTC))
dead link
removed 'Dead link' remark in 'External Links' section, because the link worked on August 7 '08 in the line
90.152.240.206 (talk) 10:56, 7 August 2008 (UTC) xymx
just clicked it, seems to work just fine! —Preceding unsigned comment added by 78.49.82.139 (talk) 20:30, 7 August 2009 (UTC)
Inconsistency in intro and formal definition
In the formal definition it is stated: "...given the present state, the future and past states are independent", but in the introduction: "...means that future states depend only on the present state...". So which one is it? Do future states depend on the present state or not?
GA Fantastic (talk) 12:39, 2 September 2009 (UTC)
- Both are true, and in fact equivalent. There is no contradiction here. But do not forget that it is not about functional dependence; it is probabilistic. Future states depend on the present state in the sense that their probabilities depend. And when you know the present state, past states cannot help in predicting the future, just because given the present, the future and past are independent. Boris Tsirelson (talk) 18:56, 12 September 2009 (UTC)
Biological applications
Markov models are also used to understand evolution. Mutation accumulation in a population follows a Markov chain. This has been used to model the development of drug resistance in HIV, etc. See http://biomet.oxfordjournals.org/cgi/reprint/96/3/645 —Preceding unsigned comment added by 129.215.149.99 (talk) 16:57, 14 October 2009 (UTC)
Error in "Markov chains with a finite state space"
There is an error in this part. It says "Since P is a stochastic matrix, $\lim_{k\to\infty}\mathbf{P}^k$ always exists". Now each row of P sums to one and all elements are non-negative; still, take this example:
There is clearly no limit if you multiply this matrix by itself repeatedly. This statement seems very dangerous to me, since people could take it from here without thinking it through. 85.178.217.33 (talk) 20:55, 23 October 2009 (UTC)
- I've deleted the erroneous statement. Michael Hardy (talk) 21:59, 23 October 2009 (UTC)
Markov Chains and Stochastic
Right now, the introduction says:
"Being a stochastic process means that all state transitions are probabilistic (determined by random chance and thus unpredictable in detail, though likely predictable in its statistical properties). At each step the system may change its state from the current state to another state (or remain in the same state) according to a probability distribution."
Are not all Markov chains stochastic? If not, then that should be mentioned, as otherwise the above looks like a contradiction. Unless I'm misunderstanding the definition of "unpredictable in detail".
Pgn674 (talk) 22:37, 21 November 2009 (UTC)
Problems with random walk example
While the random walk is often used as an example of a Markov chain, it is confusing to use as a primary example. A recent wording of the random walk example on this page: "An example of a Markov chain is a random walk on the number line which starts at zero and transitions +1 or −1 with equal probability at each step. The position reached in the next transitions only depends on the present position and not on the way this present position is reached."
Problems with this example (under this wording or any other) include...
...it brings into the picture an extraneous property--namely that of the SUM of the random variable sequence in question (the total of all the moves up to "now"; e.g., the sum of elements in the set {1,1,-1,1,1,1,-1})--an extraneous property that appears nowhere in the definition of a Markov chain
...the probabilities in this example's random variable do NOT depend on the previous state, where by the state, I am referring to the value of the latest coin toss--0 or 1 (for those who will say, "But it's technically STILL a Markov chain."--yes it is, but should not a primary example aim to avoid confusion and be 'typical' of the category under definition?--as a PRIMARY example it risks only generating confusion)
...the example risks confusion between 'positions' (partial sums of a RV chain) and states/transitions--what is the state? the current position OR the value that got you there (the value, +1 or -1, of the coin toss)--shouldn't the state here, to agree with its definition in this context, refer to the coin toss result and not the position?
I suppose the appeal of the RW example is that it looks like a chain of sorts, but as a primary example it only served to send me wandering around the Web for a better explanation.
One problem the example can create is a confusion as to whether Markov chains refer to "the next STATE only depending on the current state" or "the PROBABILITY DISTRIBUTION (for the next state) only depending on the current state"--it is in fact the latter (from Wikipedia's own definition of the Markov property: "the conditional probability distribution of future states of the process depend only upon the present state; that is, given the present, the future does not depend on the past")--if we confuse state with position (the sum of the sequence of random variable values) in the example, we can lose sight of the fact it is the conditional probability that underlies the definition
(However to play devil's advocate and add to the confusion, in defense of this example I do see random walks (including those of the coin-toss type) touted as examples of Markov chains (?))
68.197.205.56 (talk) 22:28, 4 March 2010 (UTC)
- I have edited the text slightly to deal with the point about probabilities (in regard to the random walk example) at least. For the rest ... random walks are examples of Markov Chains (which the text says) but not all Markov chains are random walks (and the text doesn't say that they are). The article has (immediately following, and later on) other examples of Markov chains, so there is really no way the random walk example should cause confusion. You need only look at the next paragraph for a different example. Melcombe (talk) 17:19, 5 March 2010 (UTC)
- For the random walk example, the states are the positions. Suppose you are at state 5. The probability distribution for the transitions would be 0.5 for 5->4, 0.5 for 5->6, and 0 for everything else. After the transition, the next transition will depend only on whether you're at 4 or 6. The fact that you were once at 5 will be "forgotten." The coin toss is kind of irrelevant except to describe the setting. It could also be a die roll and the same concept would apply with different numbers. I agree the current wording is confusing because +1 and -1 are only properties of the transitions, not the transitions themselves. Maghnus (talk) 13:39, 11 March 2010 (UTC)
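To put that description in concrete terms (the state is the position, and the transition law depends only on the current position), a minimal sketch:

```python
import random

def random_walk(steps=20, start=0):
    state = start                      # the state *is* the current position on the number line
    path = [state]
    for _ in range(steps):
        # the transition distribution depends only on `state`: 0.5 to state-1, 0.5 to state+1
        state += random.choice([-1, +1])
        path.append(state)
    return path

print(random_walk())
```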
Unsuitable formula
I removed the premature formula from the introduction. It is in a cryptic shorthand form, confusing a lot of readers, and it doesn't serve any purpose there. Nijdam (talk) 21:46, 26 August 2010 (UTC)
Joint probability distribution for a reversible Markov chain
Uniform distribution not required
I reverted edits that indicated that
applies only for uniform distributions. The proof that it works generally is pretty short. The detailed balance equation is that
equals the same expression with i and j reversed. But
so it follows immediately that Since the left-hand side of this last string of equalities equals the same expression with i and j reversed, the same must be true with the right-hand side of this last string of equalities. Quantling (talk) 15:26, 14 September 2010 (UTC)
- Fair call. I had misread it. Jheald (talk) 16:25, 14 September 2010 (UTC)
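For readers following this thread without the rendered formulas: the detailed balance condition referred to above is πi pij = πj pji, and it is easy to check numerically that it holds for a reversible chain whose stationary distribution is not uniform, e.g. a small birth-death chain (numbers below are arbitrary):

```python
import numpy as np

P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.6, 0.4]])          # a birth-death chain, hence reversible

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

balance = pi[:, None] * P                # entry (i, j) is pi_i * p_ij
print(pi)                                # clearly not uniform
print(np.allclose(balance, balance.T))   # True: pi_i p_ij = pi_j p_ji for all i, j
```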
Stationary process not required
I reverted a sentence that indicated that the result required a process to be stationary. While reversibility does require that there be a stationary distribution, it does not also require that there be a stationary process. In fact, many MCMC algorithms rotate through a set of transition matrices—so by definition the process is not stationary—but the matrices are chosen to have a shared stationary distribution, so all works out in the end. Quantling (talk) 15:26, 14 September 2010 (UTC)
Biological Examples
The Leslie matrix is a weak example; it's not even a true Markov chain. Markov chains are widely used in biology/bioinformatics, so it's not hard to replace it with much more interesting examples (DNA nucleotide substitution models, the Wright-Fisher genetic drift model, motif finding, ion channel/protein conformation transitions... and so on). —Preceding unsigned comment added by 79.169.3.230 (talk) 21:46, 8 January 2011 (UTC)
Elaboration of the lead image
The lead image needs an explanation of what is going on in it, considering that it is the introductory image. What is it showing? For someone who has only recently been introduced to the subject, this is quite hard, if not impossible, to see. --mgarde (talk) 15:46, 24 January 2011 (UTC)
Finite state space citation
The method to compute the stationary distribution outlined in the Finite state space section lacks a reference.
This page provides a more detailed description of that approach. —Preceding unsigned comment added by 147.162.3.236 (talk) 14:38, 26 January 2010 (UTC)
- I've found these references that could be the references needed, as they are used in other papers to cite the same method. But I have no access to them; can anybody check if they are correct?
- Pyke, R. (1961a). Markov renewal processes: definitions and preliminary properties. Annals of Mathematical Statistics 32, 1231-42.
- Pyke, R. (1961b), Markov renewal properties with finitely many states. Annals of Mathematical Statistics 32, 1243-59. Conjugado (talk) 17:41, 22 February 2011 (UTC)
Reversed Markov Process is a Markov Process too - Should we mention it?
The Markov property holds for a reversed Markov process too (a reversed Markov process is a Markov process too). As reversible Markov chains are described here (a much stronger concept, if I understand correctly), we should consider stating this simpler fact as well.
[Either here or in Markov_property with possible reference to it].
I would like to hear your opinion. —Preceding unsigned comment added by Sirovsky (talk • contribs) 11:27, 10 May 2011 (UTC)
- If you think you can improve the article, then go ahead. ylloh (talk) 11:41, 10 May 2011 (UTC)
a simple proof
- Simple proof:
- To warm up, imagine a closed loop of tube holding 100 L of water, flowing at a speed of 1 meter per second. One meter of the tube is in your house and the rest is outside. If the whole tube is 5 meters long, then 1/5 of the water is indoors, which is 20 L, and it takes 5 seconds for the water to cycle back. If the tube is 10 meters long, then 1/10 of the water is indoors, which is 10 L, and it takes 10 seconds for the water to cycle back.
- Now imagine there are two separate tubes, each forming a loop like the one above. They are 5 and 10 meters long, and each has one meter indoors. However, the combined amount of water in the two tubes is only 100 L. Now suppose we can only observe the water inside the house, and we find that, say, 30% of the indoor water is in the 1-meter part of the 5-meter tube and 70% of the indoor water is in the 1-meter part of the 10-meter tube. Then what is the amount of water inside the house?
- Let us denote the answer by x. Notice that the amount of water in the 5-meter tube is 5 times that in the 1-meter part of it which lies indoors, and that 1-meter part holds 30%*x of water. Hence the first tube holds 30%*x*5 of water. For the same reason, the 10-meter tube holds 70%*x*10 of water. And the total amount of water is 100 L = 30%*x*5 + 70%*x*10. Thus x = 100 L / (30%*5 + 70%*10) = 11.7647059 liters of water is inside the house.
- We can conclude that the proportion of the water that is indoors is 1/E[the time it takes for the water to cycle back], in this case 1/(30%*5 + 70%*10) = 11.76%. q.e.d.
I came up with this simple argument, but it was undone with the reason "not a "proof" of anything, may be an example but is misplaced and WP:NOR".
I knew it is not a complete proof, but I believe the concept is right and it helped me understand the result more deeply. So are there any adjustments that could be made to improve the quality of this? Maybe I should change "simple proof" into "the idea behind this"? Is the reason there is no proof in the first place that it would be too long or too mathematical? If so, why not an "idea behind the proof"? — Preceding unsigned comment added by Guanhomer (talk • contribs) 13:47, 15 July 2011 (UTC)
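For what it is worth, the identity the tube picture is aiming at, namely that the stationary probability of a state equals 1/E[return time to that state] (Kac's formula), can also be checked numerically, which may be a gentler way to convey the intuition than calling it a proof. A rough sketch with an arbitrary three-state chain:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])          # an arbitrary irreducible three-state chain

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# estimate the mean return time to state 0 by simulation
rng = np.random.default_rng(0)
state, steps_since_visit, return_times = 0, 0, []
for _ in range(200_000):
    state = rng.choice(3, p=P[state])
    steps_since_visit += 1
    if state == 0:
        return_times.append(steps_since_visit)
        steps_since_visit = 0

print(pi[0], 1.0 / np.mean(return_times))   # the two numbers agree: pi_0 = 1 / E[return time to 0]
```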
Steganography using Markov chains
A while back I (Shannon Larratt) created a proof of concept showing how Markov Chains could be used very effectively for steganography. Here is the explanation and some sample software: http://www.zentastic.com/blog/2005/03/03/now-im-definitely-getting-arrested/
If anyone feels this is relevant, do feel free to add it. I don't want to add my own stuff as that's tacky and I do my best to avoid it!!!Snowrail (talk) 05:26, 16 July 2012 (UTC)
Social Sciences
The section on social sciences seems pretty unnecessary. It's just a bunch of buzzwords and half baked personal theories that are (very) loosely related to Markov chains. It is by no means a serious application. It is just jargon at best and simply wrong at worst. It's also unreferenced. 203.167.251.186 (talk) 01:57, 24 May 2010 (UTC)
- I hope I've now addressed this issue. Yaniv256 (talk) 02:52, 29 July 2012 (UTC)
Markov perfect equilibrium
I think we need a section linking to the game theory concept of Markov perfect equilibrium. I don't want to put something half-baked on the page so I thought to work on it here inside a box and get approval first. Feel free to edit inside the box and post above it. Yaniv256 (talk) 03:27, 29 July 2012 (UTC)
I now see that the stronger link is to the older concept of stochastic games, so this is now the main article. Yaniv256 (talk) 04:37, 29 July 2012 (UTC)
Game theory
- Main article: stochastic games
Stochastic games generalize both Markov decision processes and repeated games. A Markov perfect equilibrium is a refinement of the concept of sub-game perfect Nash equilibrium to this environment.
Endless applications list
The applications list is far too long and also inaccurate in the following sense. In many cases the application results not from Markov chains per se but from an additional construct placed on top of Markov chains, such as a Markov decision process or Markov perfect equilibrium. I don't have a clear idea what we should do about it, but I'll post again if I do. Yaniv256 (talk) 19:06, 29 July 2012 (UTC)
Doesn't teach
I apologize, but this page fails to teach anything to anyone who isn't already well versed in the subject. Perhaps a wonderful reference, but not very educational, even more so for the layman. How about breaking this down into simpler terms and more concrete examples? --Anonymous 68.97.215.103 03:25, 19 July 2006 (UTC)
- This isn't wiki-learn-ia, it is an encyclopedia. If you are looking for an article to teach you about Markov chains, you came to the wrong place. Try Wikibooks or Wikiversity. 130.126.108.104 15:24, 21 September 2007 (UTC)
- Please see WP:MTAA. Also, the corresponding article on MathWorld is much easier to understand, and it is also intended to be an encyclopedic reference, like Wikipedia. [1] 86.140.80.60 (talk) 16:42, 21 April 2009 (UTC)
- So then math articles in wikipedia are written exclusively for god? what's wrong with some pedagogy? There are few enough opportunities to explain... 24.10.97.80 (talk) 12:24, 5 November 2012 (UTC)
The article at minimum needs more of the jargon/terminology to be turned into hyperlinks to articles that explain what they mean. At present, I can't understand the steady state section that I wanted to understand more about because of such terms. And an article that only tells knowledgeable people what they already know does not serve to inform anyone. 08:18, 30 September 2007 (UTC)
Archive
Would anyone else like to see this talk page get shortened a little? Yaniv256 (talk) 19:20, 29 July 2012 (UTC)
YES! Maybe then we can work to polish the article. Mike409 (talk) 01:20, 14 March 2013 (UTC)
Is a simple random walk irreducible?
Is a simple random walk irreducible? Two states communicate if they can reach each other in a finite number of steps. But is that finite number allowed to be different for each pair of states? These two sentences give subtly different definitions of an irreducible chain. A simple random walk is an example of this ambiguity. Which is correct?
- There is a finite n such that, for any pair of states, they can reach each other within n steps. Here n is a property of the chain alone.
- For any pair of states i and j, there is a finite nij such that they can reach each other within nij steps.
Aaron McDaid (talk - contribs) 14:27, 13 June 2013 (UTC)
- A simple random walk is indeed irreducible, the n need not be the same for each pair of states, so your second statement with nij for each pair i and j is the correct one. Gareth Jones (talk) 14:37, 13 June 2013 (UTC)
- Thanks. Edited article accordingly. On second thoughts, I shouldn't be surprised. This is necessary in order for communication to be an equivalence relation. Aaron McDaid (talk - contribs) 14:51, 13 June 2013 (UTC)
Power
Might be a stupid question...but how do you calculate the "power" of the matrix in order to calculate the n-step transition matrix?
- See matrix multiplication for the answer to the question above. (That article could use some polishing, BTW, but the answer is there.) Michael Hardy 01:49, 30 Sep 2004 (UTC)
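In code terms, it is just repeated matrix multiplication; for instance with NumPy (the matrix is a made-up example):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])                # one-step transition matrix

P3 = np.linalg.matrix_power(P, 3)         # the same as P @ P @ P: the 3-step transition matrix
print(P3[0, 1])                           # probability of being in state 1 three steps after starting in state 0
```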
What is meant by a _sequence_ of random variables? A r.v. is a fn. from the probability space to some measurable space, so we can think of assigning a 'number' to each elementary event in the probability space - is the sequence here a series of values from a single r.v. generated by a series of elementary events occurring, or is it actually a sequence of different functions (which would be the literal interpretation of "a sequence of random variables")? --SgtThroat 10:37, 10 Dec 2004 (UTC)
- Having read around, I'm pretty sure that the sequence consists of a sequence of functions. Also these need to have the same domain and codomain (?). Then each point in the sample space would correspond to a sequence x_1,x_2...x_N of values of these functions. The set of all points then corresponds to the set of all sequences. Let's assume for simplicity that each point in the underlying probability space has equal probability - then the transition probability P(x_n|x_n-1) would be obtained by taking all the sequences in which X_n-1=x_n-1 and calculating the fraction of them that have X_n=x_n. Similarly P(x_n|x_n-1,x_n-2) would be obtained by calculating the same fraction among all those that have X_n-1=x_n-1 and X_n-2=x_n-2. The Markov property is the statement that these are equal for all x_n (i.e. the conditional distributions are the same) and all n. Can anybody confirm/correct this understanding? --SgtThroat 12:04, 10 Dec 2004 (UTC)
- One data point: like SgtThroat, I found the formulation "A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that, given the present state, the future and past states are independent" bewildering. By a state, a reader who doesn't know where this is going may guess that a set of values for all the members of that sequence is meant, instead of exactly one member of the sequence. The informal description might be easier to follow if it clearly related the terms state and variable. For example: "A Markov chain is a sequence of random variables (or states) X1, X2, X3, ... with the Markov property, namely that, given the present state (the value of any variable Xn), the future and past states (the values of the variables before and after it in the sequence) are independent." I won't make this change, but suggest it to the ear of someone with better sensitivity to the mathematics and good mathematical usage. C.M.Sperberg-McQueen (talk) 01:40, 14 September 2014 (UTC)
In mathematics, a sequence has a first element, a second one, a third one, and so on, so a sequence of random variables has a first random variable, a second random variable, etc. And since a r.v. is a function whose domain is a probability space, a sequence of random variables is a sequence of such functions. Your remarks about "elementary" events assume a discrete probability space, and that assumption is not warranted. Definitely all of the r.v.'s in this sequence have the same domain; that is true of any stochastic process.
- each point in the sample space would correspond to a sequence x_1,x_2...x_N of values of these functions
Correct.
- Lets assume for simplicity that each point in the underlying probability space has equal probability
Again, you're assuming discreteness without warrant. More later .... Michael Hardy 23:02, 10 Dec 2004 (UTC)
To continue:
It seems your difficulty stems mainly from the unwarranted assumption of discreteness. Consider the "uniform" probability distribution on the interval [0, 1]. The probability that a random variable with this distribution falls in, for example, the interval [0.15, 0.23] is just the length of that interval: 0.23 − 0.15 = 0.08, and similarly for any other subinterval of [0, 1]. This is a continuous, rather than discrete, probability distribution. Nota bene: the probability assigned to each one-point set is zero! Thus, knowing the probability assigned to each one-point set does not tell you what the probability distribution is. Similarly, the familiar bell-shaped curve introduced in even the most elementary statistics courses for non-mathematically inclined people is a continuous distribution, so we're not talking about anything the least bit esoteric here.
Now consider an infinite sequence of independent fair-coin tosses. What's the probability of getting one particular infinite sequence of heads and tails? It's zero, no matter which sequence you pick. So this is also not a discrete distribution. Michael Hardy 01:01, 11 Dec 2004 (UTC)
Hi Michael. Yes, I agree that the assumption of a discrete probability space is rather limited. I was trying to get the discrete case clear before moving on to trying to understand the continuous one, although I neglected to say so. I think I now have a much better understanding of some of this material thanks to your comments and to some additional reading (in particular the entry on random vector, which I hadn't read when I wrote the above). I think we ought to clarify that the sequence of random variables is a sequence of functions from some probability space onto some domain, both of which are the same for all elements of the sequence. It could use a link to the entry on random vector - although a sequence is not quite the same as a vector, the components of a random vector form a sequence with the required properties, and there is much useful content on that page to clarify this. SgtThroat 12:21, 12 Dec 2004 (UTC)
Doubtful claim about Markov chains in finance
The first financial model to use a Markov chain was from Prasad et al. in 1974.
Seems doubtful. No one ever compared the stock market to a random walk before 1974?
A person can hardly mention probability and the stock market in the same breath without at least suggesting that the stock market might be a martingale (after paying the bank), might be a Markov process, but in any case is a stochastic process.
I'm sure this notion was not invented by Prasad et al. !
Compare to:
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare.
By 1917, more practical application of his [Markov's] work was made by Erlang to obtain formulas for call loss and waiting time in telephone networks.
Yet finance models had to wait until a moment of insight in 1974? Not likely.
178.38.83.46 (talk) 16:08, 8 March 2015 (UTC)
Misclassification as Russia article
Sorry, but why is this a Russia article, with 12 links to Russia lists on the talk page? Because Newton was French?
178.38.83.46 (talk) 16:11, 8 March 2015 (UTC)
Markov CPU attempts
the article doesn't mention "tentative" Markov CPUs — Preceding unsigned comment added by 2.84.219.136 (talk) 20:27, 16 May 2015 (UTC)
Too technical above the fold
Reading this as a lay person, I didn't understand almost all of what was being said here until I got to the example of the random walk. The non-technical language needs to be moved further up the intro, and the technical maths stuff needs moving lower down (including being removed completely from above the table of contents). The lead needs to be a non-technical introduction accessible to people who know nothing about the subject. Thryduulf (talk) 21:52, 19 June 2015 (UTC)
Bad formatting, perhaps?
some of the equations go off the right-hand side on Safari; dunno if this is a problem with other browsers too, but it's kinda annoying...problem is, my math is not hardcore enough for me to fix this myself--help? — Preceding unsigned comment added by M1ss1ontomars2k4 (talk • contribs) 06:07, 19 April 2006 (UTC)
I am not having this problem with internet explorer or firefox. Sounds like a mac error. —Preceding unsigned comment added by 71.83.122.32 (talk) 18:24, 22 September 2007 (UTC)
Dr. Koop's comment on this article
Dr. Koop has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:
This looks like an excellent article in general. I can only comment on the economics and finance section, which is a bit weaker. The article correctly cites the James D. Hamilton paper as being seminal, but the application in that paper is a macroeconomic application about parameter switches over the business cycle and not a financial application involving volatility switches (although Markov switching models have been used for the latter). Subsequent to Hamilton, there have been literally thousands of applications of econometric models which involve Markov processes. The remainder of this section of the article just lists a couple of randomly chosen examples (and not particularly prominent or important examples at that).
We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.
Dr. Koop has published scholarly research which seems to be relevant to this Wikipedia article:
- Reference : Markus Jochmann & Gary Koop & Roberto Leon-Gonzalez & Rodney Strachan, 2009. "Stochastic Search Variable Selection in Vector Error Correction Models with an Application to a Model of the UK Macroeconomy," Working Papers 0919, University of Strathclyde Business School, Department of Economics.
ExpertIdeas (talk) 11:32, 18 July 2015 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to one external link on Markov chain. Please take a moment to review my edit. If necessary, add {{cbignore}}
after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}}
to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20120713235933/http://www.csl.sony.fr:80/%7Epachet/ to http://www.csl.sony.fr/~pachet/
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —cyberbot IITalk to my owner:Online 16:53, 28 August 2015 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to one external link on Markov chain. Please take a moment to review my edit. If necessary, add {{cbignore}}
after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}}
to keep me off the page altogether. I made the following changes:
- Added archive http://web.archive.org/web/20101206043430/http://www.fieralingue.it:80/modules.php?name=Content&pa=list_pages_categories&cid=111 to http://www.fieralingue.it/modules.php?name=Content&pa=list_pages_categories&cid=111
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}
).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —cyberbot II (Talk to my owner: Online) 09:51, 5 March 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on Markov chain. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Corrected formatting/usage for http://www.csl.sony.fr/~pachet/
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}
).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —cyberbot II (Talk to my owner: Online) 15:02, 12 April 2016 (UTC)
Dr. Zhang's comment on this article
Dr. Zhang has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:
It is appropriate.
We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.
Dr. Zhang has published scholarly research which seems to be relevant to this Wikipedia article:
- Reference : Tingting Cheng & Jiti Gao & Xibin Zhang, 2015. "Bayesian Bandwidth Estimation In Nonparametric Time-Varying Coefficient Models," Monash Econometrics and Business Statistics Working Papers 3/15, Monash University, Department of Econometrics and Business Statistics.
ExpertIdeasBot (talk) 16:24, 19 May 2016 (UTC)
Please translate the Math to Code
Please translate all the meaningless math mumbo jumbo into Python code. Markov chains are used in programming. Make this article accessible to the people who will use it. Philo.phineas (talk) 01:09, 17 June 2016 (UTC)
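As a rough illustration of what such code might look like (a minimal sketch with invented state names and transition probabilities, not code taken from the article):
<syntaxhighlight lang="python">
import random

# Hypothetical two-state chain; the states and probabilities are
# invented purely for this illustration.
transitions = {
    "bull": {"bull": 0.9, "bear": 0.1},
    "bear": {"bull": 0.5, "bear": 0.5},
}

def step(state):
    # Draw the next state from the current state's transition row.
    next_states = list(transitions[state])
    weights = [transitions[state][s] for s in next_states]
    return random.choices(next_states, weights=weights)[0]

def simulate(start, n_steps):
    # Return a sample path of length n_steps + 1 starting from `start`.
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path

print(simulate("bull", 10))
</syntaxhighlight>
The Markov property shows up in the fact that step() looks only at the current state, never at the rest of the path.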
Markov process versus Markov chain
I think this article should be the Markov process article. Unless somebody can find me a source saying otherwise, I believe that "Markov chain" now refers to a specific type of Markov process, though the terminology varies. I completely rewrote the lead section to reflect this, and added examples and some history. A lot of it came from the article on stochastic processes, so feel free to reword it, but of course keep the relevant references. Improbable keeler (talk) 15:00, 7 January 2017 (UTC)
- I agree, a Markov chain is a specific type of Markov process, so it would make sense to rename the article that way (even though "Markov chain" is the more popular term; this is what led me to merge Markov process into Markov chain in the first place, but the other way around probably makes more sense). As for your rewrite of the lead section, it is indeed much more comprehensive now, but perhaps a bit harder to follow for people with limited knowledge of statistics and mathematics. I will try to simplify the first few paragraphs so that one does not have to read everything to get an idea of what a Markov process is. 7804j (talk) 11:04, 6 February 2017 (UTC)
Citations for 'memorylessness'
I've only seen the word "memorylessness" used for random variables, and there are only two distributions for which it applies (i.e. the geometric and exponential distributions). Can somebody give me at least two references or citations (preferably books, not lecture notes) where the word "memorylessness" is used to describe Markov processes and chains? Otherwise, it should be removed. Improbable keeler (talk) 15:00, 7 January 2017 (UTC)
- I disagree on that one. Memorylessness indeed usually refers to the geometric and exponential distributions (as explained on the page Memorylessness), but many authors and lecturers also describe Markov processes as "memoryless" (e.g., https://www.cs.princeton.edu/courses/archive/fall05/cos521/markov.pdf, http://onlinelibrary.wiley.com/doi/10.1002/9780470465394.app2/pdf). For the sake of completeness, I think it is important to note somewhere in the article that Markov processes are sometimes described as memoryless (both properties are written out below for reference). 7804j (talk) 12:30, 6 February 2017 (UTC)
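For reference, the two properties being contrasted in this thread, stated in standard textbook form rather than quoted from the article: an exponentially distributed random variable T is memoryless in the sense that
<math>\Pr(T > s + t \mid T > s) = \Pr(T > t) \quad \text{for all } s, t \ge 0</math>
(the geometric distribution satisfies the discrete analogue), while the Markov property only says that the future depends on the past through the present state:
<math>\Pr(X_{n+1} = x \mid X_n = x_n, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0) = \Pr(X_{n+1} = x \mid X_n = x_n).</math>
The second condition does not make the X_n independent of one another, which is the distinction being drawn above.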
Requested move 6 February 2017
- The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review. No further edits should be made to this section.
The result of the move request was: not moved. (non-admin closure) TonyBallioni (talk) 15:07, 14 February 2017 (UTC)
Markov chain → Markov process – Markov chain is a subcategory of Markov process. When I merged both pages a few months ago, I made the arbitrary decision to use "Markov chain" rather than "Markov process", but it would make more sense for the page to have the more general of the two names. Currently, this page discusses Markov processes as well, even though they are not Markov chains per se. I am not able to proceed with a manual move because the target page already exists (an empty page with a redirect). 7804j (talk) 11:26, 6 February 2017 (UTC)
- Leaning oppose. Nice work expanding the article, but I suspect most casual users have never heard of "Markov process" (well, I haven't, at least), and a quick check of Google Scholar has more than twice the number of hits for "Markov chain." It's a little odd to have the article be about the broader but more obscure topic while the title refers to the well-known use case, but I suspect it isn't THAT bad, as long as the lede is clear; maybe move a brief summary of the distinction into the first two sentences. SnowFire (talk) 22:21, 9 February 2017 (UTC)
- Oppose – "Markov chain" is the longstanding most recognizable term of art. "Markov process" appropriately redirects here; the opposite, while technically correct, would look awkward. Incidentally, does the lead of this article really to say "process" no less than 35 times? Some copyediting may be in order… — JFG talk 08:49, 14 February 2017 (UTC)
- The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.
Merged Markov process and Markov chain into this page
As had been discussed for years on different talk pages, I have merged Markov process and Continuous-time Markov chain into Markov chain, as they essentially discuss the same idea. I have tried to keep all the content from the three pages and to reconcile it as well as possible. There is still a lot of cleaning up to do: possibly redundant sections, or things that could be reworded to fit the new structure more accurately. It would be useful to have an expert's eye on the topic to make this page more consistent.
I haven't moved the content from the talk pages of the two articles merged. There might be some useful information over there. 7804j (talk) 21:58, 12 September 2016 (UTC)
- After the merge, the article is now tagged for splitting without any proposal on the talk page about how to split it. If no proposal is made, I suggest removing the tag, since the desirability of splitting isn't self-evident, to say the least.--Pere prlpz (talk) 09:13, 28 May 2017 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 3 external links on Markov chain. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20150212212546/http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D&arnumber=4045263&userType=inst&denyReason=-133&arnumber=4045263&productsMatched=null&userType=inst to http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D&arnumber=4045263&userType=inst&denyReason=-133&arnumber=4045263&productsMatched=null&userType=inst
- Added archive https://web.archive.org/web/20100619010320/https://netfiles.uiuc.edu/meyn/www/spm_files/book.html to https://netfiles.uiuc.edu/meyn/www/spm_files/book.html
- Added archive https://web.archive.org/web/20100619011046/https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html to https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —InternetArchiveBot (Report bug) 19:00, 3 June 2017 (UTC)
Absorbing states
A very good and clear article for someone who learned the subject but forgot it... I missed a reference to absorbing states. The following link helped me with the subject (and gives a simple example of a Markov chain):
http://people.hofstra.edu/faculty/Stefan_Waner/Realworld/Summary8.html — Preceding unsigned comment added by 128.139.226.37 (talk) 11:25, 19 July 2006 (UTC)
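For readers who land here, a quick illustration of the missing notion (an invented example, not taken from the linked page): a state i is absorbing if <math>p_{ii} = 1</math>, so once the chain enters it, it can never leave. For instance, in the transition matrix
<math>P = \begin{pmatrix} 1 & 0 & 0 \\ 0.3 & 0.4 & 0.3 \\ 0 & 0 & 1 \end{pmatrix}</math>
states 1 and 3 are absorbing, while state 2 is transient: a chain starting there is eventually absorbed into state 1 or state 3 with probability one.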
Matrix Multiplication
Hello there,
Thanks for putting together the awesome material about Markov chains. I have a problem with some of the calculations in the text, specifically the financial market example introduced in the [formal definition] section. It seems to me that P^3 in the expression x^(n+3) = x^(n) P^3 is a matrix multiplication, but when I enter the expression in R, it is certainly not treated as matrix multiplication. Any insight into what calculation is used in the text?
Thanks, — Preceding unsigned comment added by Gismatthew (talk • contribs) 21:40, 1 August 2017 (UTC)
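A likely explanation, sketched here rather than asserted (the matrix below is invented and is not the article's financial-market example): in R, <code>P^3</code> raises each entry of the matrix to the third power elementwise, whereas the expression in the article means the matrix power P %*% P %*% P. In Python/NumPy the intended computation would look like this:
<syntaxhighlight lang="python">
import numpy as np

# Invented 2x2 transition matrix and initial distribution, purely for
# illustration; the article's example uses different values.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
x = np.array([0.2, 0.8])  # row vector: distribution over states at step n

P3 = np.linalg.matrix_power(P, 3)  # true matrix power P @ P @ P, not elementwise
x_n3 = x @ P3                      # x^(n+3) = x^(n) P^3

print(P3)
print(x_n3)
</syntaxhighlight>
Comparing the elementwise result of <code>P^3</code> in R with the matrix power above should reproduce exactly the discrepancy described in the question.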