
Talk:Vapnik–Chervonenkis dimension


A classification


"A classification model f with some parameter vector is said to shatter a set of data points if, for all assignments of labels to those points, there exists a such that the model f makes no errors when evaluating that set of data points."

To me, it seems that this is saying that f shatters X (with |X|=n) if and only if f has zero classification error (for some θ) for every possible labeling of X.

"VC dimension of a model f is h' where h' is the maximum h such that some data point set of cardinality h can be shattered by f."

To me, it seems like this is saying f shatters X if and only if there is SOME labeling of X for which f has no classification error. So, which is correct here? From the given example, I'd say the first definition is correct.

"For example, consider a straight line as the classification model: the model used by a perceptron. The line should separate positive data points from negative data points. There exist sets of 3 points that can indeed be shattered using this model (any 3 points that are not collinear can be shattered)."

Wait. So the article's example of 3 points and a linear classifier fails on some labelings when the points are collinear (+ - +, for example). So it can't be the first definition above. But this means that definition 2 must be right, and therefore the linear classifier must have infinite VC dimension, since I can construct X of arbitrarily large size, as long as I label all the points on one side of the classifier with one label, and the other side with the other. (Not to mention, this is very confusing when the article says that NO labeling of a 4-point set can be separated.)
Please, end the madness! Rocketman768 (talk) 13:38, 3 August 2010 (UTC)
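For anyone else who hits this wall: the quantifiers are "there exists a set of points such that for ALL labelings of it, some parameter vector works"; you fix the points once, then must realize every labeling. A brute-force check of the perceptron example makes this concrete (a sketch only: the helper names separable and shatters are mine, and it assumes NumPy and SciPy are available):

  import itertools
  import numpy as np
  from scipy.optimize import linprog

  def separable(points, labels):
      # Can some line sign(w.x + b) realize this +/-1 labeling?
      # LP feasibility: find (w, b) with y_i * (w.x_i + b) >= 1 for all i.
      pts = np.asarray(points, dtype=float)
      y = np.asarray(labels, dtype=float)
      n, d = pts.shape
      A_ub = -y[:, None] * np.hstack([pts, np.ones((n, 1))])
      res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                    bounds=[(None, None)] * (d + 1))
      return res.success

  def shatters(points):
      # Shattering demands that EVERY labeling of this one fixed set be separable.
      return all(separable(points, labs)
                 for labs in itertools.product([-1, 1], repeat=len(points)))

  print(shatters([(0, 0), (1, 0), (0, 1)]))  # True: non-collinear triple
  print(shatters([(0, 0), (1, 0), (2, 0)]))  # False: (+, -, +) has no separating line

The "put all the points on one side" construction only produces one labeling per set, so it never exhibits a set all of whose 2^n labelings are realized; that is why the VC dimension of lines stays 3 rather than going to infinity.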

"But one can expect that the classifier will make errors on other points, because it is too wiggly." Did I stumble onto Simple English Wikipedia? 24.14.64.107 (talk) 01:51, 16 December 2012 (UTC)[reply]

Strange


This article seems strange to me, just as it does to Alexander.fairley.

The example should be, as far as I can tell:

"For example, consider a straight line as the classification model: the model used by a perceptron. The line should separate positive data points from negative data points. All sets of 3 points that can be shattered using this model. However, some sets of 4 points cannot be shattered. Thus, the VC dimension of this particular classifier is 3. It is important to remember that one can choose the arrangement of points, but then cannot change it as the labels on the points are considered. Note, only 3 of the possible label assignments are shown for the 3 points." --Kgeza7 (talk) 09:36, 7 March 2009 (UTC)[reply]

From the article: VC dimension of a model f is the maximum h such that some data point set of cardinality h can be shattered by f. I believe this should actually say VC dimension of a model f is the maximum h such that all data point sets of cardinality h can be shattered by f. If we take the first definition of VC dimension, the VC dimension of oriented lines in the plane would have to be infinite, since one can create a data set of any cardinality such that all {-1} points are on one side of a line and all {+1} points are on the other. I'll be changing this shortly barring objection. Alexander.fairley 16:56, 12 November 2007 (UTC)

Okay, after a little further review of Chris Burges's tutorial on SVMs, I believe the way that this should actually be stated is: The VC dimension of a set of functions f is the maximum h such that for each possible division of a data point set of cardinality h into two categories, there exists a way of arranging the data points in the domain of f such that there is some specific f which correctly classifies the data points. Alexander.fairley 17:16, 12 November 2007 (UTC)

Alright, so the main difficulty I'm having with this point is this: if the definition is as I state above, which is what would seem to be the case based on all the definitions in the article and the discussion on this page, then the VC dimension of hyperplanes should be infinite, since we can always put those labeled one way on one side of the plane, and those labeled the other way on the other.
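For later readers: the difficulty in this thread is quantifier order. Written out explicitly (with $\mathcal{H}$ as my notation for the set of classifiers), the definition reads:

$$\operatorname{VCdim}(\mathcal{H}) = \max\bigl\{\, h : \exists\, S,\ |S| = h,\ \forall\, y \in \{-1,+1\}^S\ \exists\, f \in \mathcal{H} \text{ with } f|_S = y \,\bigr\}$$

The set $S$ is fixed before the labelings are quantified over. Placing the positively labeled points on one side of a hyperplane handles only that single labeling, so it does not show that any large set is shattered, and the VC dimension of hyperplanes stays finite.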


Yes, but what is it?

"The VC dimension of a model f is the maximum number of points that can be shattered by f." Should this be the maximum number of points that f is _guaranteed_ to shatter? I think a graph (or two) might prove very useful! Kevin Saff

Not sure if I understand your question. Are you asking whether the VC dimension is the maximum number of points N that can be shattered, regardless of the position of the N points? I believe the answer is yes. --hike395
Yes, that's what I meant. I think the new wording is more precise. Kevin Saff

Hi, I edited this most recently, and only thought about justifying myself here after the fact. There are definitely two different concepts: the maximum size at which one can shatter any set of that size, and the maximum size at which one can shatter some set of that size. It is my understanding that the latter is the right definition. However, my work in this area was long ago, and in probability, not pattern recognition; so argue with me if you think I'm wrong. Mark Durst 21:30, 20 May 2004 (UTC)

I believe it is the cardinality of the largest set that can be shattered by f. I think it is correct and understandable now -- hike395 04:04, 22 May 2004 (UTC)
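A concrete way to see that the two readings disagree, using the article's own perceptron example (standard facts, sketched here):

$$\max\{h : \text{some set of size } h \text{ is shattered}\} = 3, \qquad \max\{h : \text{every set of size } h \text{ is shattered}\} = 2$$

The first holds because every non-collinear triple is shattered; the second because collinear triples admit the labeling $(+,-,+)$ that no half-plane realizes, while every pair of distinct points is shattered. The article's stated value of 3 is therefore only consistent with the "some set" reading.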

someone should merge shatter into this article


Currently shatter is kind of a trivial dicdef, almost devoid of context. I'm a set theorist and AFAIK the term isn't used (or at least not much) in set theory; Google comes up with VC theory. Please just merge it into the def of VC dimension and then change shatter to a redirect here. Unless of course the notion is used somewhere in VC theory other than in the def of VC dimension; in that case, maybe merge to the VC theory page and redirect it there. --Trovatore 06:08, 5 October 2005 (UTC)

Anon comment


The page has been updated somewhat based upon this earlier comment: I hope this gets posted; I have had difficulty editing Wikipedia pages on mathematics. About shattering, VC-classes, VC dimension.

1. The concept of shattering is important, and indeed should be a separate article. In fact, although this article is an excellent starting point, it should be generalized and expanded.

2. Moreover, there should be articles entitled "VC classes" or "Vapnik–Chervonenkis classes" and also an article on "empirical processes". Please let me explain. Empirical processes are very important in probability and statistics; the theory of empirical processes is what we humans really use every day to form realistic and valid probabilistic judgements.

3. Shattering and Vapnik–Chervonenkis classes have become very important not only in computer learning theory, but also, in the past 25 years, they have become a vital mathematical tool in empirical processes, which is a mathematical construct so important that it requires its own article.

4. This article about shattering is quite good, but is a special case and needs to be generalized. It involves elements of a fixed set. We need to generalize this to the concept of collections of sets (also called classes of sets). A necessary preface, to be rigorous and precise: in set theory, one often cannot discuss "sets of sets", that is, sets whose elements are sets themselves, since this leads to logical inconsistencies and paradoxes (this topic should be an article too; it is an intriguing concept and related to shattering). Therefore, mathematicians speak and write of "classes of sets" or "collections of sets", not "sets of sets".

5. The author made no error! The author did not discuss collections of sets; the article is good, but it should be generalized and illustrated with examples.

6. The generalization: the article on shattering should define and discuss shattering by a class (collection) of sets, not just by elements of a set H. For example, the class of all discs in the plane (2-space) cannot shatter every finite set F with 4 or more points, yet the class of all convex sets in the plane can shatter any finite set of points in convex position, no matter how large its cardinality -- just "connect the dots".

7. Also, the article on shattering might be more helpful to a reader if the mathematical language is augmented with some explanation in English (or any other non-technical language) and if it would include examples.

8. I would do this myself, but I have tried to edit pages on mathematics in the Wikipedia, and have had no success; I do not know how to edit Wikipedia pages. I wish I did know how to do so. In fact, I hope I have been able to edit this page successfully.

9. See Wenocur, R.S. and Dudley, R.M., "Some special Vapnik–Chervonenkis classes" (1981) for more information on all this.

the preceding unsigned comment is by 151.197.61.27 (talk • contribs) in multiple edits ending 14:05, 15 December 2005
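To make the quantities in point 6 explicit (standard values, stated here as a sketch):

$$\operatorname{VCdim}(\text{discs in } \mathbb{R}^2) = 3, \qquad \operatorname{VCdim}(\text{convex sets in } \mathbb{R}^2) = \infty$$

The second holds because, for points placed on a circle, the convex hull of any subset contains no other point of the set, so every labeling is realized; this is the "connect the dots" argument above.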

With your elaborations, shatter may indeed deserve an article, but it needs a better name. Verbs make poor article titles; somebody needs to think of a noun. Perhaps Shattering (VC theory)? That's the best I can come up with off the top of my head, but if the concept is used outside of VC theory, then maybe something slightly more general.
I don't want this to sound aggressive, but I'm afraid your remarks about sets of sets are just wrong. Set theorists (I am one) talk about sets of sets all the time; they aren't the cause of the Russell paradox. The cause of the Russell paradox is the idea that every property should have an extension. However there's nothing wrong with discussing "classes" or "collections" of sets if it makes it easier for the reader to keep track of the types. So I left the language as you had it, but removed the false statement about Russell. --Trovatore 19:28, 15 December 2005 (UTC)

I am so very new (just a few days now) to editing Wikipedia, that I did not even realize that I should have a user name, which I have now. Thank you for your comments to improve articles. You are certainly correct about not using a verb as a title. I was not the original author of the article, so whoever wrote it first chose that title. Now, I have created an article entitled "shattering", with the term in the gerund, i.e. noun form, not as the present participle of a verb.

The article does indeed deserve to be its own entry. This is analogous to the entry "cumulative distribution function". One might argue that such an article should be merged into the article on probability theory, since cumulative distribution functions are primarily a subtopic of probability theory. Yet it works out well when such an important topic or sub-topic stands alone in its own article.

We did indeed need to refer to "collections of all sets" in some work on shattering, which did lead to a paradox when we needed the "set of all sets". If I elaborate on that, and mention the paradox, would that be all right? Also, we needed to appeal to the Axiom of Choice for an example of shattering involving sets that depended on well-ordering the reals and, actually, n-space and spaces with infinite dimension, too. We had been working in the field of empirical processes, an area of probability and statistics that warrants its own article; I am surprised that Wikipedia does not cover the topic already. Probability theory has its foundations in set theory as well as in measure theory, as I am sure you know, and referring to results in set theory and in measure theory, too, often becomes unavoidable.

I genuinely appreciate your helping me improve my entries in Wikipedia, and for understanding that the concept of shattering has applications beyond VC-theory, so that entitling it with the term VC-theory would be inappropriate since that would limit its scope. It would be even more inappropriate than, for example, renaming the article on cumulative distribution function as "cumulative distribution function (probability theory)". I plan to keep improving the article on "shattering" to illustrate its importance and connection with VC-theory, but also with areas more general than and different from VC-theory.

Sorry for the long message, but I have some other questions. Please forgive my being so new at editing Wikipedia. What is the mark-up language for math? I have been only guessing at how to use it. Is it some form of TeX that I do not recognize? Is there documentation or a user's guide? My results have not been pretty; for example, a capital letter C within the text looks too much like a lower-case c when writing an equation. Also, I cannot get rid of the article named "shatter"; I tried to do so, now that there is the article entitled "shattering", but was accused of vandalism. Thanks for all the help in making Wikipedia a better encyclopedia. My user name is MathStatWoman.

147.32.80.13...


...has made two edits to the article that describe VC dimension as being the maximum cardinality *plus one*. This is inconsistent with what my lecture notes say - anybody got a definitive reference? Ubermammal (talk) 11:21, 3 June 2008 (UTC)

I've read through the notes in the first link in the article and they agree - it's maximum cardinality, not plus one. Ubermammal (talk) 11:19, 8 June 2008 (UTC)
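Possibly the source of the anon's "plus one": for linear classifiers in $\mathbb{R}^d$ the value depends on whether a bias term is allowed (standard facts, offered here only as a guess at the confusion):

$$\operatorname{VCdim}\bigl(\{x \mapsto \operatorname{sign}(w \cdot x)\}\bigr) = d, \qquad \operatorname{VCdim}\bigl(\{x \mapsto \operatorname{sign}(w \cdot x + b)\}\bigr) = d + 1$$

But that "+1" belongs to the particular hypothesis class, not to the definition of VC dimension itself; the definition is the maximum cardinality, with no "plus one".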

Why isn't the citation for NN good enough?


The citation for the VC dimension of NNs is given as:

Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning – from Theory to Algorithms. Cambridge University Press. ISBN 9781107057135.

Why isn't this citation "good enough"? DMH43 (talk) 20:55, 31 December 2023 (UTC)

In statistical learning theory


Is the $N = \Theta\left(\frac{D + \ln \frac{1}{\delta}}{\epsilon^2}\right)$ bound correct? I skimmed the referenced book, and I think it claims that this is only an upper bound on the number of samples.

If it is true, it might help to add some details. I think it's saying that, for every set of binary functions with VC dimension D, there exists a distribution for which empirical risk minimization requires that number of samples. But I'm not sure. — Preceding unsigned comment added by Mesterharm (talk • contribs) 15:38, 10 October 2024 (UTC)
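If I am reading Shalev-Shwartz and Ben-David correctly, the $\Theta$ is justified by the quantitative version of the fundamental theorem of statistical learning, which gives matching upper and lower bounds on the agnostic-PAC sample complexity (sketched from memory, with $C_1, C_2$ universal constants):

$$C_1 \frac{D + \ln(1/\delta)}{\epsilon^2} \;\le\; m_{\mathcal{H}}(\epsilon, \delta) \;\le\; C_2 \frac{D + \ln(1/\delta)}{\epsilon^2}$$

The upper bound says empirical risk minimization succeeds with that many samples for every distribution; the lower bound says that for any learner some distribution forces that many. So your reading of the lower-bound direction is right, and stating both directions in the article would clarify why $\Theta$ rather than $O$ appears.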