Talk:Information gain (decision tree)

Untitled

Drawbacks: if we don't want the credit card number to show up in the decision tree, we would simply not include it in the input attributes. Thus I think this is not a good example. Nulli 08:33, 13 March 2006 (UTC)

I don't see why a credit card number would be used to "describe" a customer in the first place; it is useful for identifying customers but holds no use in describing them. If we were putting customers into a decision tree from a long list of customers and their attributes, including credit card numbers would be analogous to including the list ID number, which would obviously be stupid. I think maybe this is what the author of that example is trying to say, i.e. care must be taken as to which attributes to include.
But I agree it is not a good example at all. There are also several other disadvantages of decision trees which are not included. I'll try to improve the article. Canderra 20:54, 21 May 2006 (UTC)


Information gain, mutual information, KL divergence

I'm confused about this. Information gain and relative entropy/KL divergence are not the same thing, assuming the common version of information gain used in decision trees. Information gain is mutual information, which is a special case of KL divergence. Both this page and the KL divergence page appear to make this mistake -- is there a reason for this, or should I fix it? nparikh 21:57, 21 October 2006 (UTC)
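For reference, the standard identity behind this point (textbook definitions, not taken from the article): mutual information is the KL divergence between the joint distribution and the product of its marginals, which is why it is a special case of KL divergence.

    % Mutual information as a special case of KL divergence
    I(X;Y) = D_{KL}\bigl( p(x,y) \,\|\, p(x)\,p(y) \bigr)
           = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}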

Historically, the term Information Gain was introduced by Rényi as a more intuitive synonym for KL divergence. Information gain can be used in connection with any conditioning step that causes you to move from a distribution Q to a better distribution P. If the conditioning happens to be based on learning the value of a particular variable, then, as you say, the Information Gain is equal to the mutual information. But the term Information Gain is not restricted to this case. Jheald 10:14, 23 October 2006 (UTC)
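To spell out that connection (a standard manipulation, written here in generic notation): if the conditioning step is "learn that V = v", the gain for that one observation is the KL divergence from the prior to the posterior, and its expectation over v is exactly the mutual information.

    % Expected KL-style information gain over the learned variable V
    \sum_v p(v) \, D_{KL}\bigl( p(x \mid v) \,\|\, p(x) \bigr)
      = \sum_{v,x} p(x,v) \log \frac{p(x \mid v)}{p(x)}
      = I(X;V)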
Then this page (and perhaps the entire machine learning community) appears to be using the term incorrectly, and furthermore this page is internally inconsistent. The definition given in the section labeled "Formal definition" defines the term to mean the specific case in which the conditioning is based on learning the value of a particular variable. However, the definition given at the top of the page makes it synonymous with Kullback–Leibler divergence. The "Formal definition" version is essentially mutual information, which makes information gain a function of random variables; the definition as a synonym of Kullback–Leibler divergence makes it a function of probability distributions. These cannot both be right, so the page is internally inconsistent. Since the larger machine learning community seems to use the term differently from the way it was originally defined, I recommend keeping both definitions on the page, with citations to external sources, and a clear note to the effect that different communities use the term in different, incompatible ways. Bayle Shanks (talk) 06:18, 21 August 2010 (UTC)
Certainty has a probability distribution too -- it's just a very sharp spike. The point I was making above is that the way IG is used on this page is compatible with the more general understanding of the term as a synonym for KL divergence. Jheald (talk) 11:16, 21 August 2010 (UTC)
I'm interested to know more about the second paragraph, which states that "In particular, the information gain ... is the Kullback–Leibler divergence" under specific conditions. Are there any proofs or references that can be cited for this? Cortisa (talk) 22:14, 16 November 2012 (CET)
See Blachman (1968), "The amount of information that y gives about X". He shows that KL divergence and entropy difference are not the same in general. The Wikipedia page is in error in suggesting that they are equivalent. I'm not sure how to proceed, since there seems to be some disagreement in the research community about what the term information gain really means. Canjo (talk) 02:41, 25 November 2019 (UTC)
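A quick numerical illustration of this distinction (a sketch with made-up numbers, not taken from Blachman's paper): for one particular observation y, the entropy difference H(X) - H(X|y) can be negative, while the KL divergence D_KL(p(x|y) || p(x)) is always non-negative, so the two cannot be the same quantity in general.

    import numpy as np

    def entropy(p):
        # Shannon entropy in bits, skipping zero-probability outcomes
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def kl(p, q):
        # Kullback-Leibler divergence D(p || q) in bits
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    prior = [0.8, 0.2]      # p(x) before observing y
    posterior = [0.5, 0.5]  # p(x | y) after one particular observation y

    print(entropy(prior) - entropy(posterior))  # ~ -0.278 bits: entropy went UP
    print(kl(posterior, prior))                 # ~ +0.322 bits: still positive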

Definition

The two expressions describe the same set, don't they? Then they should also be written identically; otherwise it might confuse people. 84.57.82.107 08:49, 5 April 2007 (UTC)

Bullet

What is the definition of the bullet symbol? Is it the standard multiplication symbol, like an asterisk? (EasyWalker) —Preceding unsigned comment added by 80.99.49.64 (talk) 21:11, 13 January 2009 (UTC)

Notation

Please change the notation for "examples" (Ex) to something different. The current notation is confusing because it looks like the expectation of x. —Preceding unsigned comment added by 213.155.151.233 (talk) 14:48, 22 January 2009 (UTC)

General and formal definitions disagree

First, the notation is a little bad. The second equation for IG,

    IG(T,a) = H(T) - \sum_{v} \frac{|\{x \in T : x_a = v\}|}{|T|} \, H(\{x \in T : x_a = v\}),

conflates the definition of x: x is already defined above as the attributes which make up T, and T is composed of attribute–output tuples (x, y).

Also, one only sums over all v ∈ vals(a) when branching along all possible values of that attribute. It may turn out that vals(a) = {v1, v2, v3}, but a branch occurs between {v1} and {v2, v3}. Thus I recommend making it clear that v ∈ vals(a). This produces the equation:

    IG(T,a) = H(T) - \sum_{v \in vals(a)} \frac{|T_v|}{|T|} H(T_v),    where T_v = {(x, y) ∈ T : x_a = v}.

The general and formal definitions appear to disagree. The general definition is:

    IG(T,a) = H(T) - H(T|a);

approximating H(T|a) by the entropy of a single branch gives:

    IG(T,a) ≈ H(T) - H(T_v).

Furthermore, summing over all branches produces:

    IG(T,a) = H(T) - \sum_{v \in vals(a)} H(T_v),

which still is not the equation given in the formal definition, but feels correct to me.

While I am confused about what the correct form actually is, I see how to recover what is currently in the formal definition if we normalize (I am not even sure it is correct to normalize) by the number of instances (aka samples) in each branch:

    IG(T,a) = H(T) - \sum_{v \in vals(a)} \frac{|T_v|}{|T|} H(T_v).

However, I repeat that normalizing here feels artificial. The information gain ratio normalizes another way (actually by dividing by something similar to the second term).

So I wonder: does

    H(T|a) = \sum_{v \in vals(a)} H(T_v),

or does

    H(T|a) = \sum_{v \in vals(a)} \frac{|T_v|}{|T|} H(T_v)?

The second one looks correct to me, so I suggest the formal equation should use:

    IG(T,a) = H(T) - \sum_{v \in vals(a)} \frac{|T_v|}{|T|} H(T_v).

Mouse7mouse9 23:47, 20 November 2014 (UTC). Edit: forgot to sign. Edit 2: minor technical correction to grammar: T is not a set of attributes and an output, but rather is composed of attribute–output tuples.

Mouse7mouse9 00:12, 21 November 2014 (UTC), here. I think I was an idiot. The more I look at it, the more correct the additional normalization term appears to be. It means the information gain per branch is weighted by the number of instances in each branch, which actually makes a lot of sense. I still think the definition should be much more explicit and use v ∈ vals(a) in the summation.

Would it be too much to add a cleaned-up version of the above to derive the form in the formal definition?
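For what it's worth, here is a small self-contained sketch (toy data of my own, not from the article) of the weighted form: each branch entropy is weighted by |T_v|/|T|, which is just the empirical estimate of p(v) in H(T|a) = \sum_v p(v) H(T_v), so the normalization is exactly what makes the formal definition agree with the general one.

    from collections import Counter
    import math

    def entropy(labels):
        # Shannon entropy (bits) of the empirical label distribution
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, attr, label="y"):
        # IG(T, a) = H(T) - sum over v in vals(a) of (|T_v| / |T|) * H(T_v)
        n = len(rows)
        total = entropy([r[label] for r in rows])
        remainder = 0.0
        for v in set(r[attr] for r in rows):                  # v in vals(a)
            branch = [r[label] for r in rows if r[attr] == v]
            remainder += (len(branch) / n) * entropy(branch)  # weight |T_v|/|T|
        return total - remainder

    # toy T: each row is an attribute-output tuple, stored as a dict
    data = [
        {"outlook": "sunny", "y": "no"},
        {"outlook": "sunny", "y": "no"},
        {"outlook": "rain",  "y": "yes"},
        {"outlook": "rain",  "y": "yes"},
        {"outlook": "rain",  "y": "no"},
    ]
    print(information_gain(data, "outlook"))  # ~0.420 bits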

Possible AI generation: "Another Take on Information Gain, with Example"

The style of the prose is odd. It looks off compared to the rest of the article. It also has that 'helpful' vibe LLMs tend to use. Consider removing this section. AnderGapoh (talk) 07:31, 20 October 2024 (UTC)

Entropy-illustration.png is WRONG; it does not match the image in the cited source

In the cited source, the image is this:

https://static.wixstatic.com/media/02b811_38e88f427f934b198464455088022da8~mv2.png/v1/fill/w_773,h_284,al_c,lg_1,q_85,enc_auto/02b811_38e88f427f934b198464455088022da8~mv2.png

The (possibly AI-generated) Entropy-illustration.png, by contrast, is wrong: its left part is actually more pure than the central one. 2001:818:D85C:3F00:213C:561:3129:2B5 (talk) 21:18, 6 December 2024 (UTC)
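As a sanity check on the terminology (with hypothetical class proportions, since the disputed image's actual contents are the point in question): a purer node, i.e. one with more skewed class proportions, must have lower entropy.

    import math

    def binary_entropy(p):
        # entropy in bits of a two-class node with class-1 proportion p
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    print(binary_entropy(0.9))  # purer node  -> ~0.469 bits (lower)
    print(binary_entropy(0.6))  # more mixed  -> ~0.971 bits (higher)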