Talk:Tensor product/Archive 2
This is an archive of past discussions about Tensor product. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Left exact?
The section on abstract construction states that tensor product is left exact. That seems a bit strange. The functor "- \otimes_R N" is _right_ exact (indeed, if I have a surjection M -> M' then the comment on generating sets implies that M \otimes_R N surjects onto M' \otimes_R N). The fact that this functor is not in general exact might imply what we want here, though it might be easier to see it directly: Z/2Z \otimes_Z Z/3Z = 0.
- Correct: see Tor functor for confirmation. Charles Matthews 21:00, 13 October 2005 (UTC)
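For later readers of this archive, here is the computation behind that example, written out (a sketch using only the bilinearity relations; a and b denote arbitrary classes in Z/2Z and Z/3Z):

```latex
a \otimes b = 3(a \otimes b) - 2(a \otimes b)
            = a \otimes (3b) - (2a) \otimes b   % bilinearity
            = a \otimes 0 - 0 \otimes b         % 3b = 0 in Z/3Z, 2a = 0 in Z/2Z
            = 0,
% so every simple tensor, and hence all of Z/2Z \otimes_Z Z/3Z, vanishes.
```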
Programming
I don't see why we need so much on the programming. For array programming, OK: these things are arrays, though not just arrays. But why does this page have to teach lower-level stuff like filling up arrays with numbers from other arrays? Charles Matthews 09:25, 25 October 2005 (UTC)
- I agree. Anyone who knows C could implement it based on the opening example. The SQL example is interesting because it relates OUTER JOIN with the tensor product, but it doesn't need two examples. The article could also do with mention of the tensor product of functions. —BenFrantzDale 12:48, 25 October 2005 (UTC)
I'm pulling one of the two SQL examples. RaulMiller 13:01, 25 October 2005 (UTC)
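Since C was mentioned above as the obvious implementation language, here is a minimal sketch of the kind of array-filling code being discussed (my own illustration, not the article's former example; names and layout are arbitrary):

```c
/* Kronecker (outer) product of an m-by-n matrix A and a p-by-q matrix B,
 * all stored in row-major order: (A kron B)[i*p+k][j*q+l] = A[i][j] * B[k][l]. */
#include <stdio.h>

static void kron(const double *A, int m, int n,
                 const double *B, int p, int q,
                 double *C)                     /* C is (m*p) x (n*q) */
{
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < p; ++k)
                for (int l = 0; l < q; ++l)
                    C[(i * p + k) * (n * q) + (j * q + l)] =
                        A[i * n + j] * B[k * q + l];
}

int main(void)
{
    const double A[4] = {1, 2, 3, 4};           /* 2x2 */
    const double B[4] = {0, 1, 1, 0};           /* 2x2 */
    double C[16];                               /* 4x4 result */

    kron(A, 2, 2, B, 2, 2, C);
    for (int r = 0; r < 4; ++r) {
        for (int c = 0; c < 4; ++c)
            printf("%6.1f", C[r * 4 + c]);
        printf("\n");
    }
    return 0;
}
```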
Why B times A?
It is unclear why you write
rather than
- ...
Paolo.dL 12:02, 26 June 2007 (UTC)
Tensor product of elements of Hilbert spaces
In the definition
I wondered what could be meant by φ⊗ψ for elements φ and ψ in a (general) Hilbert space. Maybe it should be mentioned that the above line defines both an inner and a tensor product (of elements in a Hilbert space). —Preceding unsigned comment added by Paux (talk • contribs) 08:20, 24 September 2007 (UTC)
Main example is misleading
It looks too much like matrix multiplication. I'm gonna go ahead and replace it with one of the examples from Kronecker product in a few hours/days if there are no complaints. —Preceding unsigned comment added by Thric3 (talk • contribs) 09:04, 7 February 2008 (UTC)
- k edited. -Thric3 (talk) —Preceding comment was added at 06:16, 8 February 2008 (UTC)
Most general bilinear operation
The lede says the tensor product is always the most general bilinear operation. While I think I know what is meant by that (something close to being a universal object), it is a fairly cryptic formulation, and I think it is more likely to be confusing than helpful at this point. Besides, the word "product" mostly refers to the result of an operation, rather than to the operation itself. I'm tempted to take the phrase out, does anyone object? – Marc van Leeuwen (talk) 10:02, 24 April 2008 (UTC)
- well, the result of the operation is the universal object, so the operation itself is the most general one... Mct mht (talk) 08:29, 25 April 2008 (UTC)
Splitting the Hilbert spaces section
I propose splitting the section on the tensor product of Hilbert spaces, and merging it with the existing article Tensor product of Hilbert spaces. Currently the main article is shorter than the section here. The other option is to delete the main article (Tensor product of Hilbert spaces) and merge its contents here. Either way, the way the content fork is currently set up is not a good idea. Are there any objections or suggestions? Silly rabbit (talk) 15:55, 8 January 2008 (UTC)
- Question for editors: Should this be merged with Topological tensor product (currently Tensor product of Hilbert spaces redirects there), or should a new article be created? silly rabbit (talk) 19:47, 25 April 2008 (UTC)
Please do not use physics slang in math articles
This article has obviously been written by physicists and should be rewritten by a mathematician. It is generally not a good idea to use slang such as "degree of freedom" in a math article. —Preceding unsigned comment added by 213.39.148.211 (talk) 10:39, 9 July 2008 (UTC)
- I disagree if the paragraph is intended to convey the basic idea so that anyone can understand it. Nearly all of the remaining parts of the article, however, are quite mathematically precise. siℓℓy rabbit (talk) 12:08, 9 July 2008 (UTC)
- I agree with the point, but not with the example. I am a pure mathematician, and I use the phrase "degrees of freedom". For example a straight line has two degrees of freedom, a circle has three, and a conic has five. Besides, our statistician friends use it very often too. Dharma6662000 (talk) 17:05, 12 August 2008 (UTC)
- ...our statistician friends use it very often too: Remind me never to come to one of your parties. </joking> siℓℓy rabbit (talk) 17:27, 12 August 2008 (UTC)
Earlier thread
In the first example shown at the top I think there might be a mistake. It looks as though you take the tensor product of two rank two tensors only to get a matrix, i.e. the end result appears to be another rank two tensor rather than the rank 4 tensor it should be. There might be some confusion between the rank of a tensor and the rank of a matrix. Possibly a more knowledgeable mathematician should rewrite this. —Preceding unsigned comment added by 128.104.2.98 (talk) 01:22, 27 October 2008 (UTC)
On a different point, the top of the article mentions "degrees of freedom"... what is that?!?
Can a function of 7 real variables have 12 degrees of freedom? How, and why, please explain this mystery!
No. A function of 7 real variables has at most 7 degrees of freedom. If, by "function of 7 variables", you mean z = f(x1,x2,x3,x4,x5,x6,x7), and not 0 (or constant) = f(x1,x2,x3,x4,x5,x6,x7) or x7 = f(x1,x2,x3,x4,x5,x6), which have a maximum of 6 degrees of freedom. If x1 and x2 are necessarily correlated then the function really has one less degree of freedom than the number of variables. The number of degrees of freedom is another name for the number of "linearly independent" variables. Kevin Baastalk: new 03:30, July 31, 2005 (UTC)
Fair enough... why then does the article state "...resultant dimension = 12" when the displayed tensor product has 7 variables (see http://wiki.riteme.site/wiki/Tensor_product ; the left hand side has variables a1,a2,a3,b1,b2,b3,b4) ?!?
- It lies in a space of arrays having 12 components. Charles Matthews 07:17, 2 August 2005 (UTC)
Something needs to be adjusted... as things stand (i.e., as the page is actually written):
12 = dimension of ambient space = resultant dimension = count of degrees of freedom = number of variables = 7
- The page is OK, really. The question of what the image of the tensor product is, which is what this discussion is circling round, is really something different (comes under Segre embedding, for example) and non-linear. Sticking with the linear theory, there are 12 dimensions in the 'array' space, into which 7 linear dimensions are mapped. Since the image is not closed under addition, there is no 'paradox'. Charles Matthews 15:28, 2 August 2005 (UTC)
Connect the dots ... on one side is the number 12, and on the other side is the number 7... I don't think 12 = 7, so which equality in the chain is wrong?!?
- The third = sign is questionable; there are constraints. Charles Matthews 21:03, 13 October 2005 (UTC)
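To make the constraint point concrete, here is a small worked version (my own illustration, with a having components a1, a2, a3 and b having components b1, ..., b4 as in the example discussed above):

```latex
% The 12 components of the array a \otimes b:
(a \otimes b)_{ij} = a_i b_j, \qquad i = 1,2,3,\ j = 1,2,3,4 \quad (12 \text{ numbers}),
% but they are not independent: every 2x2 minor vanishes, e.g.
(a_1 b_1)(a_2 b_2) - (a_1 b_2)(a_2 b_1) = 0.
% So the image of (a,b) \mapsto a \otimes b is a non-linear, 6-dimensional subset
% (7 parameters minus 1 for the rescaling a \mapsto ta, b \mapsto b/t) of the
% 12-dimensional array space; a general element of that space is a sum of such products.
```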
Tensor product of multilinear maps
I think the author of the section "Tensor product of multilinear maps" meant to say "Tensor product of multilinear forms", since if the maps take values somewhere else than in the underlying ring or field, the product on the r.h.s. is a priori not defined (or in turn assumes the tensor product of the two "arrival spaces" ?). — MFH:Talk 13:55, 12 May 2009 (UTC)
ToDo: topology
It would be nice to have a section devoted to the projective topology for locally convex spaces. The article on initial topology gives a one-sentence nod to this concept -- expanding that sentence into a section there would seem to clutter things up. Similarly adding a section here would also seem to result in clutter. What to do? linas (talk) 04:06, 8 June 2009 (UTC)
- Never mind, I just discovered Topological tensor product. Which seems to have numerous faults, e.g. it doesn't manage to ever wikilink to [[initial topology]]. Oh well. linas (talk) 04:10, 8 June 2009 (UTC)
Split?
It seems a good idea to split the material on Array programming languages off, or to merge it with another article. Are there any volunteers who know about such things? I have some rather limited exposure, and would prefer it if an array programming enthusiast would do it. siℓℓy rabbit (talk) 22:39, 6 August 2008 (UTC)
- I don't think it's necessary. There are already too many tensor articles, and information relevant to array programming can be found in abundance elsewhere. The article should merely mention some methods or algorithms for working with and implementing specific aspects of tensors that aren't likely to be standard to the language. LokiClock (talk) 10:41, 26 September 2009 (UTC)
Tensor product of vector spaces
I'm not sure about the formulation "take the vector space generated by V x W and factor out the subspace generated by the following relations". Maybe it's just that the word "generated" is ambiguous, but when I think of V x W as a vector space, I usually have an addition like (v1,w1)+(v2,w2)=(v1+v2,w1+w2) in mind. In this space, you surely can't factor out a subspace to get the tensor product, which is in general bigger than V x W. Should the addition in this case be defined in a different way? I think this should be mentioned. SaschaR 15:28, 4 June 2006 (UTC)
- It means the vector space with basis V x W. I've modified the article to state this explicitly. --Zundark 16:40, 4 June 2006 (UTC)
- Hm. Let V = W = R^2. Now V x W, as a vector space, can be isomorphically represented as R^4 with the usual basis { (1,0,0,0), (0,1,0,0), ...}.
- The representation by the Kronecker product sends all these base vectors to 0. Thus the "factoring out" cannot be the vector-space (homomorphic) factoring but only a factoring of sets (the grouping of elements within the same set-theoretic equivalence relation).
- Or have I misunderstood the term "with basis V x W"? 84.160.205.130 07:20, 24 October 2006 (UTC)
- I think you've misunderstood it. V x W = R^2 x R^2 is the basis of the vector space, and the elements of the vector space are formal linear combinations of basis elements. You should ignore the structure on R^2 when forming this vector space - the basis elements are all linearly independent, by definition. --Zundark 08:24, 24 October 2006 (UTC)
- So the whole of V x W is a basis where all elements (uncountably many) are all linearly independent by definition? That is to say, the set V x W really is an index set for the base vectors of the pre-factored vector space? If so, wouldn't it be clearer to start with the space Hom(VxW, R)? 84.160.205.130 18:18, 24 October 2006 (UTC)
- Yes, you can think of V x W as an index set for the basis vectors. In fact, I would define the vector space as \bigoplus_{s \in V \times W} K_s, where each K_s is a copy of the base field. This is the set of functions with finite support from V x W to the base field (with the obvious operations). It's not the same as Hom(VxW, R). --Zundark 19:52, 24 October 2006 (UTC)
- I see Hom(VxW, K) is obviously not the same as that direct sum. Further, for finite dimensions, Hom(VxW, K) has the same dimension as VxW and V ⊗ W. So, on the way to V ⊗ W, there is nothing much left to be factored out. Trivially, Hom(VxW, K) is isomorphic to V ⊗ W the way any two vector spaces with the same dimensions are isomorphic.
- Now all useful information for a human brain is packed into the way in which those isomorphisms can be chosen.
- Let v_1 ... v_N be a basis of V and w_1 ... w_M be a basis of W. Would be a basis of in a canonical way? (Note the braces to mark a single element of VxW).
- 84.160.237.162 20:32, 26 October 2006 (UTC)
- If you are considering V x W as a vector space, then its dimension is the sum of the dimensions of V and W. But the dimension of V ⊗ W is the product of the dimensions of V and W. So they are not usually isomorphic when the dimensions are finite. --Zundark 08:51, 27 October 2006 (UTC)
- Ouch! Shame on me! I should have seen that. 84.160.238.49 18:33, 27 October 2006 (UTC)
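For anyone reading this archive later, the dimension counts at issue (standard facts, stated here only for concreteness):

```latex
\dim(V \times W) = \dim V + \dim W, \qquad \dim(V \otimes W) = (\dim V)(\dim W).
% E.g. with V = \mathbb{R}^2 and W = \mathbb{R}^3: V \times W \cong \mathbb{R}^5
% as a vector space, while V \otimes W \cong \mathbb{R}^6. By contrast, the free
% vector space F(V \times W) has one basis vector for every pair (v, w), so here
% it is infinite-dimensional; the tensor product is a quotient of that huge space.
```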
Universal Property
Regarding the universal property of the tensor product of vector spaces, I am of the opinion that in the described category, the tensor product is an initial object rather than a terminal object. Please respond to this. Quiet photon (talk) 14:36, 23 February 2010 (UTC)
- Yeah, it's an initial object. I've corrected it. RobHar (talk) 14:56, 23 February 2010 (UTC)
Gowers article
How to lose your fear of tensor products by Tim Gowers. Might be suitable for an extlink. 67.122.211.208 (talk) 21:28, 2 August 2010 (UTC)
Tensor product of two tensors?
I'm a bit dubious about this line:
In the traditions I'm familiar with, dimensions are represented as vectors, and \cdot represents the inner product between two vectors. However, the dimension of a tensor product is the concatenation of the dimensions of its arguments. I think either (a) better notation should be used, or (b) an explicitly labeled link to a description of this notation should be used. RaulMiller 13:01, 25 October 2005 (UTC)
No, dimensions are numbers and this is just the dot standing for ordinary product of integers. Charles Matthews 13:08, 25 October 2005 (UTC)
Ok, then this is an ambiguity. A tensor of rank 5 could be said to have five numbers describing its dimension -- perhaps <2,3,5,7,11> or it could be said to have a single number describing its dimension -- perhaps the product of (2)(3)(5)(7)(11). I don't want to belabor this point, but I think the entry could use some phrase indicating the latter usage of the term. (I'll update the page if I think up something good before someone else does.) RaulMiller 15:53, 25 October 2005 (UTC)
Right, I see that the initial example was in unhelpful notation and I've clarified that. I've taken out the \cdot also. Charles Matthews 16:37, 25 October 2005 (UTC)
In the "Tensor product of vector spaces" section the author mentions "set builder notation", but this does not really define the "N" and "K" symbols he then uses. In the "set builder notation" article there is a bold "N", but not the hand-tooled "N" used here, and "K" is variously K, k, ks, and otherwise. So, why not define these symbols within the article? At this point, there are essentially arbitrary and undefined symbols. —Preceding unsigned comment added by 70.66.1.110 (talk) 07:08, 21 September 2010 (UTC)
tensor product of vector spaces
The article described the free vector space over V×W in terms of a countable sum over elements of the base field K. That didn't make any sense in the usual case where K is uncountable, e.g. K is the reals or the complex numbers. I tried rewriting it but may have messed it up, so review would be appreciated. It's also possible that it was right before, and I misunderstood it in some dumb way. If that happened, please feel free to revert. 66.127.52.47 (talk) 10:21, 11 April 2010 (UTC)
- I've reverted, since it was correct as it was. Even if K is uncountable, elements of a vector space over K are linear combinations of only finitely many basis elements. --Zundark (talk) 12:03, 11 April 2010 (UTC)
- Thanks, it's still pretty confusing. The space of functions R→R is certainly infinite dimensional and its elements are infinite sums. So I'm missing something. 66.127.52.47 (talk) 18:26, 11 April 2010 (UTC)
- Well, as far as I know it's impossible to specify a basis of R^R, so I can't show you how each element is a finite linear combination of basis elements - but obviously it is, because, by definition, a basis must span the space. I'm not really sure what you're missing here. Perhaps you think that the set of constant functions R → R is a basis, but it isn't. --Zundark (talk) 18:48, 11 April 2010 (UTC)
- The basis of R^R would be the functions x ↦ δ_{αx} (one for each real α), where δ is the Kronecker delta. It's certainly not generally true in linear algebra that elements of a vector space are linear combinations of only finitely many basis elements. Think of the familiar case of Fourier series. Are we talking about some special case where we only use finite combinations? Some more exposition in the article would be helpful if this is so. (Or are you saying "finite" where you mean "countable"? The summation over N is already infinite.) I do have some uncertainty over whether a summation over an uncountable index set necessarily makes sense, but in this case it seems to. 66.127.52.47 (talk) 19:16, 11 April 2010 (UTC)
- The Kronecker delta functions are not even functions R→R so they most certainly do not form a basis of the space of functions. And when you talk about Fourier series, you are getting confused with the notion of orthonormal basis of a Hilbert space. A basis of a vector space is by definition a subset of that vector space such that any element of the vector space can be written uniquely as a finite linear combination of the vectors in the basis. By the axiom of choice, every (non-zero) vector space has a basis, though you may not be able to actually write down a specific example (this is what sucks about using the axiom of choice). It is true that in the theory of Hilbert spaces, an orthonormal basis (which allows infinite sums) is sometimes simply referred to as a "basis", but it is a different notion from that under discussion here. Without a topology on the vector space, limiting processes such as infinite sums do not make sense. RobHar (talk) 21:00, 11 April 2010 (UTC)
- When I say finite I mean finite, not countable. (There is no summation over N in the part of the article in question - all the sums are from 1 to n, and n is finite.) As RobHar says, what is confusing you here is that in functional analysis it's common to use a different concept of basis: an orthonormal basis (or, more generally, a Schauder basis). In such contexts, a basis in the linear algebra sense is often called a Hamel basis. The section of the article in question is purely algebraic, so "basis" implicitly means "Hamel basis" - it can't mean anything else. --Zundark (talk) 21:32, 11 April 2010 (UTC)
Thanks. I don't have any textbooks here that I can check, but the idea that a linear combination has to be a finite sum does seem to be supported by other Wikipedia articles. That completely surprises me and I wonder how I managed to pass any classes in these topics without ever encountering the issue. The Kronecker delta is a function of two variables, so for a fixed real α, the mapping x ↦ δ_{αx} is a function from the reals to the reals, if x ranges over the reals. It can also be written as the indicator function 1_{{α}}, so maybe I should have written that instead. 66.127.52.47 (talk) 21:58, 11 April 2010 (UTC)
- In linear algebra classes, one normally only deals with finite-dimensional vector spaces, so all bases are finite and this issue does not arise. Infinite-dimensional vector spaces one deals with in college courses are usually tacitly assumed to be Hilbert spaces and the word basis is tacitly assumed to mean "orthonormal basis of a Hilbert space". Re the Kronecker delta: I assumed you simply meant the Dirac delta since it is what is normally used when the arguments are real (as opposed to integers), but I see that you really meant an indicator function of a singleton, which I do suppose one could call a Kronecker delta, though I think that would cause confusion, as it has done here. RobHar (talk) 22:26, 11 April 2010 (UTC)
- Thanks. It never occurred to me that the orthonormal bases from Fourier analysis weren't also bases as in linear algebra. I do remember that my linear algebra class talked about function spaces, so maybe that's where I got the idea from. The formula in the article makes more sense now: n on the left side of the set builder is bound on the right side, so it's not an infinite sum. My class described tensors as multilinear forms but I'd never seen a presentation of the tensor product in terms of a free vector space like in this article. It might be more intuitive (though less formal) to write the set as .
Infinite dimensional vector spaces also come up in Galois theory, e.g. the algebraic closure of Q is an infinite degree field extension of Q, that can be seen as a vector space over Q. If (convergent) infinite sums in that space were allowed, then the algebraic closure would presumably just be , so the requirement of finite sums makes sense now that I think of it that way. (Added:) The Schauder basis article is helpful. Thanks again for the links. 66.127.52.47 (talk) 01:44, 12 April 2010 (UTC)
In the absence of a topology, one can't even begin to define countable sums in the vector space. Algebraically, all you have is the binary operation of addition which, by induction, will only give you finite sums. I encountered this phenomenon the other way around :) A 'basis' generating the space with countable sums was strange to me! Rschwieb (talk) 19:20, 18 February 2011 (UTC)
"Outer square"
In general the tensor product is u ⊗ v, but in a great many applications (such as computing the covariance matrix), one computes the outer/tensor product of the same vector with itself, v ⊗ v. I've called this the "outer square" since it always seems to come up when generalizing a squared term in a scalar equation, but am wondering: is there a real word for this? —Ben FrantzDale (talk) 14:50, 17 June 2011 (UTC)
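For concreteness, this is the situation being described (my own illustration, using column vectors in R^n and matrix notation):

```latex
% The "outer square" of a column vector v, written as a matrix:
v \otimes v \;\cong\; v v^{\mathsf{T}} =
\begin{pmatrix} v_1 v_1 & v_1 v_2 & \cdots \\ v_2 v_1 & v_2 v_2 & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix},
% e.g. a covariance matrix is an average of such outer squares of centered samples:
\Sigma = \frac{1}{N} \sum_{k=1}^{N} (x_k - \bar{x})(x_k - \bar{x})^{\mathsf{T}}.
```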
Main example incorrect?
Presuming that the main example is meant to represent the tensor product of tensor a with tensor b I believe that in the definition the b component should be transposed, as otherwise if one reduces a and b to rank 1 tensors by removing the second column of both (that is changing them to 1 by 2 vectors) the result is a 1 by 4 vector not a 2 by 2 2nd rank tensor as it should be, that is the formula should read:
rather than
as it does currently --Physdragon (talk) 15:52, 1 March 2009 (UTC)
- Actually having said that I'm not sure if my suggested revision is correct either since that is just a 4 by 4 Rank 2 tensor, the tensor product of two rank two tensors (that is they are represented in component representation using two indices) should give a rank 4 tensor (that is one with 4 indices). So it should be
- --Physdragon (talk) 14:14, 2 March 2009 (UTC)
- The example is a bit confusing/poorly explained. You are thinking Cartesian product; but the tensor product is *not* the cartesian product... you should be thinking Kronecker product instead. It is 4 by 4 rank two because the tensor space has a 4-dimensional basis: Right? linas (talk) 03:58, 8 June 2009 (UTC)
- I agree that the example is confusing, one of the primary problems being that you are quite correct in saying that it has a 4 dimensional basis, but this means that the tensor, if one wanted to represent it properly in the matrix-like form, should be a 4-dimensional array of values with each 'side' of the array being two values wide. Quite obviously, however, it is not possible to represent this physically, never mind on the two dimensions of a wikipedia page, and exactly how one would go about attempting to represent it in two dimensions I'm not sure. Nonetheless I believe the elements of the tensor are now correct at least, whereas I'm fairly certain they weren't before, even though I'm not sure how to represent it. To be honest, given its potential for confusion I'm not sure whether the graphical example should even be included; something akin to:
- Would probably have less potential for confusion, but I felt that removing the example in its entirety was rather a major edit I wasn't comfortable making without agreement from other editors. Physdragon (talk) 14:52, 16 June 2009 (UTC)
- I just had a little look at the French version of the tensor product page and the initial example with the tensor product of 2 vectors there seems much better constructed and more informative, as and when I have time I shall probably replace the main example here with an Anglicised version of the French one. Should anyone else feel like doing this before I get around to it, feel free. Physdragon (talk) 17:55, 16 June 2009 (UTC)
- The problem is that the Kronecker product and the tensor product are not the same thing. The tensor product is more like the Cartesian product (although the Cartesian product is not a real product between tensors; that's bad nomenclature. Better would be to call the tensor product the direct product, if you don't want to use the word tensor.) Physdragon is completely right, and I agree that the entire Kronecker product section should be removed from this page, except perhaps it should be linked as an example of a different operation between tensors (or actually only matrices and vectors, I believe). I discussed this on the Kronecker product talk page as well. 129.32.11.206 (talk) 18:23, 15 October 2012 (UTC)
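To pin down the distinction the thread keeps circling, here is one way to state it (my own summary, with A and B denoting 2×2 matrices):

```latex
% Tensor product of two 2x2 matrices, written with four indices:
(A \otimes B)_{ijkl} = A_{ij} B_{kl}, \qquad i,j,k,l \in \{1,2\} \quad (16 \text{ components}),
% while the Kronecker product arranges exactly the same 16 numbers as a 4x4 matrix:
(A \otimes_{\mathrm{Kron}} B)_{(i-1)2+k,\ (j-1)2+l} = A_{ij} B_{kl}.
% So the two agree up to a reshaping of indices; neither is the Cartesian product.
```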
A free Abelian group is not a free group
In the section "Tensor products of modules over a ring" we write that the tensor product is a quotient of a free abelian group and something else. This is not right, it is a quotient of the free group over i.e. and something else. A free abelian group is rarely a free group (see for example http://wiki.riteme.site/wiki/Free_abelian_group#Terminology). If no one objects, I will correct the definition. --Larsborn (talk) 18:09, 11 February 2013 (UTC)
Definition of e
I don't understand the definition of the set of elements e_{(v,w)}. What is e_{(v,w)}? — Preceding unsigned comment added by Henriqueroscoe (talk • contribs) 17:15, 19 June 2013 (UTC)
- I am working on this article right now, so this will hopefully be improved soon. Anyway, for any set S, the vector space F(S) is the vector space whose elements are formal linear sums of elements of S (with coefficients in K). If you prefer, you can also think of F(S) as the vector space of functions from S to K. In this parlance, e_s is the function that assigns 1 to s and 0 to any other element of S. Now, apply this to S = V x W. Jakob.scholbach (talk) 19:12, 19 June 2013 (UTC)
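A tiny illustration of that description (my own example, not taken from the article): take S = {a, b} and K = R.

```latex
% Elements of F(S) are formal K-linear combinations of elements of S, e.g.
3 e_a - 2 e_b \in F(S),
% which, viewed as a function S \to K, is the map a \mapsto 3, \ b \mapsto -2;
% the basis vector e_s itself is the function sending s to 1 and every other element to 0.
```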
Presentation of a module
In the "Tensor products of modules over a ring: Computing the tensor product" subsection there is an incorrect link for the presentation of a module pointing here. There seems to be no article about it. Furthermore the relations given () make no sense to me. Shouldn't the sum be over a finite subset of I? Is J a set indexing the set of relations? — Preceding unsigned comment added by 195.134.119.23 (talk) 00:51, 12 August 2013 (UTC)
- It does look strange. I think what is intended is to point to Representation (mathematics). — Quondum 01:45, 12 August 2013 (UTC)
- I think it refers to something analogous to the Presentation of a group. 195.134.119.23 (talk) 07:48, 12 August 2013 (UTC)
- Considered as a presentation of a group, the scalar multiplication would presumably not be permissible in the relations. And however you look at it, the summation over the scalars a_{ji} can be done before multiplication (ignoring for now the problem of a potentially infinite number of terms). Considered in the case of a vector space, the summation over i makes sense: this would decouple the choice of basis from periodicities in the vector space (e.g. a toroidal space or a space over a finite field). — Quondum 19:58, 12 August 2013 (UTC)
- I meant analogous to the presentation of a group in the sense that the module is described as a quotient of a free module over a module generated by the relations. The multiplication of ring elements with module elements is well defined. About the indexing we agree that it should be over i in a (finite?) subset of I. My trouble is what J is, as it is used in the construction that follows. 195.134.119.23 (talk) 01:39, 13 August 2013 (UTC)
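For later readers, here is the standard notion the thread appears to be after, in the usual notation (a sketch; I indexes generators and J indexes relations, matching the question above, and φ is my name for the relation map):

```latex
% A presentation of an R-module M is an exact sequence
R^{(J)} \xrightarrow{\ \varphi\ } R^{(I)} \longrightarrow M \longrightarrow 0,
% i.e. M is generated by elements (x_i)_{i \in I} subject to the relations
\sum_{i \in I} a_{ji} x_i = 0 \qquad (j \in J),
% where for each j only finitely many a_{ji} are nonzero, so each sum is finite.
```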
Link tensor product defined by the universal property and bifunctor in the def of monoidal category
??? — Preceding unsigned comment added by Noix07 (talk • contribs) 17:09, 4 April 2014 (UTC)
Infinite tensor products
Is there an agreed on definition of infinite tensor products (in a category)? VictorPorton (talk) 21:08, 24 December 2014 (UTC)
We need an article on this topic.
Quotient space definition of N seems wrong
I was quite confused by the "definition" part of this article the first time I read it, but on re-reading after a few weeks of mind-simmering, it is somewhat more clear. But the definition of the set N to be used as the equivalence class of the zero vector seems wrong. In particular, it only includes formal sums of 3 vectors from F(VxW) when the zero equivalence class is *much* bigger. As an example, (v1,w1) + (v2,w2) - (v1+v2,w1+w2) is a member of the equivalence class of (0,0), but not specified as member of N. Perhaps some sort of 'closure over these operations' is needed as well? Or just remove that paragraph completely? 159.153.4.50 (talk) 03:59, 25 April 2015 (UTC)
- The equivalence relations in N imply the relation (v1+v2,w1+w2) ~ (v1,w1+w2) + (v2,w1+w2) ~ (v1,w1) + (v1,w2) + (v2,w1) + (v2,w2). If the equivalence that you suggest is valid, this would imply that (v1,w2) + (v2,w1) ~ (0,0), or in effect that an arbitrary, independent pair is in the equivalence class (0,0). —Quondum 05:33, 25 April 2015 (UTC)
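Rendered in tensor notation, the computation in the reply above reads:

```latex
(v_1 + v_2) \otimes (w_1 + w_2)
  = v_1 \otimes w_1 + v_1 \otimes w_2 + v_2 \otimes w_1 + v_2 \otimes w_2,
% so the proposed element (v_1,w_1) + (v_2,w_2) - (v_1+v_2,\,w_1+w_2) maps to
% -\,v_1 \otimes w_2 - v_2 \otimes w_1 in the quotient, which is not zero in general,
% and hence that element does not belong to the class of (0,0).
```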
Problem with 1/(1-x) infinite sum?
From the "Prerequisite:_the_free_vector_space" section: "... and 1/(1 − x) is a formal sum \sum_{k=0}^\infty x_k with no restrictions on values of x ..." I have a couple of problems with this. First, calling the infinite sum "1/(1-x)" without justification seems wrong. Second, the sum is an infinite sum, and that does not seem appropriate here, where we are considering only finite formal linear combinations when constructing the free vector space. Would someone with more expertise than I check this? — Preceding unsigned comment added by 66.188.89.180 (talk) 13:43, 11 June 2015 (UTC)
Is N really a subspace?
The following part of "Definition" claims N is a subspace of F(V × W):
- In short, the tensor product V ⊗ W is defined as the quotient space F(V × W)/N, where N is the subspace of F(V × W) consisting of the equivalence class of the zero element. This expresses the equivalence relations described above:
But I don't think it is. For example, it doesn't contain the zero vector. So should N instead be defined as the span of the above? --178.7.221.35 (talk) 15:54, 14 June 2015 (UTC)
- It is badly formulated. N (which is spanned by the above vectors (and contains the zero vector, e.g. just take v1, w1, c = 0 in the last relation)) is mapped to the zero vector in the quotient. A reference could be John M. Lee's Introduction to Smooth Manifolds. YohanN7 (talk) 16:07, 14 June 2015 (UTC)
- In other words, taking the quotient, thus mapping N to zero, forces
- to hold in the quotient, which is what we want. YohanN7 (talk) 16:13, 14 June 2015 (UTC)
I don't see that it is badly formulated. Bourbaki uses this approach. The stated elements do include the zero vector. —Quondum 16:28, 14 June 2015 (UTC)
- OTOH, Bourbaki does say "generated by elements of the form [...]". So the comment about needing to say "spanned by" should be looked into. —Quondum 16:32, 14 June 2015 (UTC)
- Wouldn't taking v1, w1, c = 0 in the last relation result in n = -1·(0,0), which is a nonzero vector in F(V × W) since it's a formal sum with a nonzero coefficient in front of (0,0)? That we are going to take the quotient later does not affect whether N, as a subset of F(V × W), is a subspace (edit: but taking the quotient will of course only be possible if it's in fact a subspace). --178.7.221.35 (talk) 17:17, 14 June 2015 (UTC)
- Ah, it is indeed possible to get the zero vector, for example with v1, w1 = 0, c = 1 in the last relation. But I still think N is not a subspace, since every element of N has at most 3 nonzero coefficients, but once you take the span there is no such restriction.--178.7.221.35 (talk) 17:26, 14 June 2015 (UTC)
- What may initially trick the eye is that both F(V × W) and N are infinite-dimensional. Each element, like (1, 7.72634) ∈ F(V × W), spans its own private dimension.
Problem with explanatory passage
The section Prerequisite: the free vector space begins as follows:
"The definition of ⊗ requires the notion of the free vector space F(S) on some set S. The elements of the vector space F(S) are formal sums of elements of S with coefficients in a given field K. A formal sum is an expression written in the form of a sum in which no actual arithmetic operations can be carried out. For example 2a + 3b is a formal sum, and 1/1 − x is a formal sum Σ∞
k=0 xk with no restrictions on values of x (versus the usual case where |x| < 1 must hold for a geometric series to converge), since no value substitution will actually be performed. For the set of all formal sums of elements of S with coefficients in K to be a vector space, we need to define addition and scalar multiplication. A formal sum is order-independent (i.e. the addition is commutative and associative)."
But the correct definition of tensor product does not involve any formal sums that are infinite. So the example of the power series is just going to confuse people.Daqu (talk) 18:00, 17 June 2015 (UTC)
- Absolutely. Jakob.scholbach (talk) 18:52, 17 June 2015 (UTC)
- It should be "finite formal sum"; in any case, the section is poorly written. One unfortunate problem is this: it is the easiest to define a (finite) formal sum as an element of a free vector space. I have rewritten the section so to avoid a "formal sum" language altogether. -- Taku (talk) 23:01, 26 August 2015 (UTC)
Definition of tensor product of two vector spaces is incomplete
As mentioned above in the section "Is N really a subspace?", the definition given for tensor product of two vector spaces is wrong, essentially on account of not allowing for linear combinations. In the definition of the space N, one would need to take a span of the given set. In the definition of the defining equivalence relations, you need an additional equivalence to the effect
Otherwise, e.g. we do not get
I am not sure of the most efficient way to encode this postulate, so I will not make the change myself, but I am flagging this section as inaccurate. --2607:F6D0:CED:5B2:94B0:4FF4:F7CC:BE0C (talk) 20:01, 6 May 2016 (UTC)
- Yes, I think this is correct. What one needs to do is form the quotient of the linear space generated by those relations. This will require a bit of care to do properly. Sławomir Biały (talk) 12:37, 7 May 2016 (UTC)
- Lee (Introduction to Smooth Manifolds) phrases it as the subspace spanned by the set of vectors
Deleted post by me here, written when in a state of confusion. YohanN7 (talk) 14:25, 18 May 2016 (UTC)
- It actually seems right. Let
- and
- and
- where span is linear span. (This is pretty much a free vector space on with operations inherited for free from F(V × W).) If
- then each n_i should be mapped to zero in the quotient and hence their sum. That N is a subspace is now obvious from the definition of span. Thus put
- I find this description easier than involving equivalence relations and I think this is what Lee means (what else could he mean by "spanned by"?), so it can be referenced. YohanN7 (talk) 12:46, 19 May 2016 (UTC)
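Since the formulas in the posts above did not survive in this archive, here is the kind of span description being referred to (a sketch consistent with the discussion, not a verbatim quote of Lee):

```latex
% N is the subspace of F(V \times W) spanned by all vectors of the forms
(v_1 + v_2,\, w) - (v_1, w) - (v_2, w), \qquad (v,\, w_1 + w_2) - (v, w_1) - (v, w_2),
c\,(v, w) - (c v,\, w), \qquad c\,(v, w) - (v,\, c w),
% and the tensor product is the quotient
V \otimes W \;=\; F(V \times W) / N, \qquad v \otimes w := (v, w) + N.
```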
Is the word "freest" used in any reliable sources?
The word "freest" was first added to the article in this edit. No source was cited. I can find no evidence that mathematicians ever use the word "freest" in the definition of Tensor product. --50.53.61.217 (talk) 14:57, 12 December 2016 (UTC)
- Have you tried Googling to see if the word freest is actually used, with this meaning, in mathematics? See, for instance, this. Sławomir Biały (talk) 15:31, 12 December 2016 (UTC)
- None of these algebra texts in the References section use the word "freest": Bourbaki, Grillet, Halmos, Hungerford, Lang, Mac Lane & Birkhoff. Thanks for adding Eisenbud, but his use of the word "freest" seems to be non-standard. --50.53.61.217 (talk) 17:23, 12 December 2016 (UTC)
- No, it's a standard adjective in mathematics, being used in a completely standard way here. See, for example, Topics in random matrix theory, by Terence Tao, for another use of the adjective which is consistent with this article's use. A more detailed explanation of the meaning of the term, as it is widely used in mathematics, appears in William Massey, A basic course in algebraic topology, p. 63. Sławomir Biały (talk) 18:17, 12 December 2016 (UTC)
- If "freest" is so standard, there should be an article explaining it: Freest (mathematics). Notably, the article on free vector spaces to which you linked in this edit does not use the word "freest". For comparison, there is an article on maximal element and a redirect from minimal element. --50.53.61.217 (talk) 19:14, 12 December 2016 (UTC)
- Freest is derived from the standard rules of English from the word "free". It means "most free". It is not some obscure term of art. Free, in turn, means "without constraint". It is true that this has a precise technical meaning in mathematics, discussed at free object (and other related articles). But I don't see how having a separate freest (mathematics) is likely to clarify confusion if readers are not willing to consider the meanings of common English words. And I think we would be hard-pressed to write such an article from sources. Sławomir Biały (talk) 22:30, 12 December 2016 (UTC)
There's an obvious and flawless way to handle issues of this sort. When a notion is introduced, use the most common term and add some or-clauses if other reasonably common terms exist. YohanN7 (talk) 11:42, 13 December 2016 (UTC)
- It's not necessarily ideal to use only the most common descriptions of things. (And then there would be the usual question about what is the most common description of the tensor product: using free vector spaces, using a universal property, using duality to bilinear forms?) I think "freest" captures the intuition of the tensor product succinctly, since the article currently overemphasizes the nuts and bolts. Sławomir Biały (talk) 11:49, 13 December 2016 (UTC)
- Sure, but that wasn't what I meant. I meant something like "...freest or most general...". Or "...most general or freest..." for those who prefer to have it in that order. But the order is another discussion. YohanN7 (talk) 12:01, 13 December 2016 (UTC)
- Ok, thanks. I've added it. Sławomir Biały (talk) 12:03, 13 December 2016 (UTC)
"generalises the outer product"?
The lead states:
- "... the tensor product of two vector spaces V and W is itself a vector space, together with an operation of bilinear composition denoted by from ordered pairs in the Cartesian product into , in a way that generalizes the outer product."
Surely the outer product is the operation being referred to (confusingly also often called the tensor product)? There is no generalization of the outer product as an operation. —Quondum 16:08, 6 January 2017 (UTC)
- I usually think of the outer product as defined for coordinate vectors in R^n, and the tensor product as the generalization for arbitrary pairs of vector spaces. Sławomir Biały (talk) 17:52, 6 January 2017 (UTC)
- Ah, okay, makes sense. Just like the dot product is essentially defined on coordinate vectors (despite being used in other senses), in contrast to terms like inner product. Perhaps we should simply emphasize which of the two meanings of tensor product is meant when used, such as by referring to the tensor product of vectors (or tensors) or the tensor product operator when the bilinear operator is meant. The article Tensor product should then also clearly define and distinguish both in the lead (currently it simply avoids using the term tensor product for the operation, even though it refers to it). I've tweaked Outer product to be a little clearer in this sense by my understanding; feel free to change/revert. —Quondum 18:33, 6 January 2017 (UTC)
Definitions confounded by construction
The definition used in this article badly confuses the construction of a tensor product with the definition of tensor product. For an excellent intrinsic definition of the tensor product see the one on PlanetMath for example. In the current state the definition mixes a method of building a tensor product with an abstract definition of the tensor product. — Preceding unsigned comment added by 139.48.54.241 (talk) 16:04, 24 August 2016 (UTC)
- I partly agree that the focus of this article is not ideal. However, I do not think that the planetmath is really much better, with the over-reliance on the universal property. Is there no way to make something reasonably explicit that is also satisfactory as a definition (even an intuitive one)? Sławomir Biały (talk) 17:34, 24 August 2016 (UTC)
- The "definition" also badly confuses 'definition' and 'motivation'. Mixing in motivational remarks into the text of a definition shows negligence for the aesthetics and rigor of precise mathematical language. — Preceding unsigned comment added by 93.207.197.239 (talk) 17:24, 6 September 2016 (UTC)
Can't we define the tensor product the same way polynomials are handled? Quoting from there
A polynomial is an expression that can be built from constants and symbols called indeterminates or variables by means of addition, multiplication and exponentiation to a non-negative integer power. Two such expressions that may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication are considered as defining the same polynomial.
If it were done like in this article, we'd have first built a free ring involving finitely-supported sequences of coefficients, and then built a big equivalence relation for "can be algebraically manipulated into one another", and then quotiented out by the equivalence class of 0. This is crazy, but it's what this article does for tensors! Patterning a definition after the above could give
An element of the tensor product V ⊗ W is an expression that can be built from vectors in V and vectors in W by vector addition, subtraction, scalar multiplication, and application of a formal variable F representing a multilinear map whose domain is V × W. Two such expressions that may be transformed, one to the other, by applying the usual properties of linear algebra and multilinear maps are considered as defining the same element.
which should immediately be followed by a simple example IMO (ex: 2 F(v,w), F(v,w) + F(v,w), F(v+v,w), and F(v,2 w) are all elements of the tensor product and are equal).
This would easily segue into the universal property, because the induced map V ⊗ W → Z is just substitution of h for the formal variable F.
If we have to have a formalish definition (again, polynomial doesn't have one; even polynomial ring doesn't go to the extent we do here) couldn't it come after general motivational remarks like this? 64.92.17.6 (talk) 16:28, 13 May 2017 (UTC)
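For reference, the universal property being alluded to, in the usual notation (h a bilinear map, as in the comment above; the tilde is my notation for the induced map):

```latex
% Given any bilinear map h : V \times W \to Z there is a unique linear map
% \tilde{h} : V \otimes W \to Z with
\tilde{h}(v \otimes w) = h(v, w) \quad \text{for all } v \in V,\ w \in W,
% i.e. h factors through \otimes; in the "formal variable" picture, \tilde{h}
% is exactly "substitute h for F".
```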
"Quick Sense"
I object to including this because the reader is left wondering what "subject to" means, and a definition for arbitrary modules is far less elementary than for vector spaces. By the way, header titles are not capitalized, per WP:MOSHEADER. @Gedt11: I also object to it on the grounds that anyone wanting a quick definition should get a rigorous one as a quotient module - other readers will need more introduction. Plus it's more confusing to readers because a tensor product is more than an abelian group. R has to act on it in a suitable way as well.--Jasper Deng (talk) 18:18, 18 December 2017 (UTC)
The symbol e
In the first section, it is stated that "each notation stands for the sum " but the quantity e_i is not explained. All the above complaints about illegitimate simplifications should IMO be put on hold until basic concepts such as this are adequately explained. I think it is fair to guess that e is meant to denote the basis for the vector space. It is not too much to ask that symbols be explained. A simple wikilink to a page that explains the concept would suffice. If I have got this right I can edit it in myself, but I am not knowledgeable in this field. Wdanbae (talk) 07:07, 14 February 2019 (UTC)
Tone
The tone of this article is inappropriate; see MOS:MATH#TONE
-- Jgranata13 (talk) 19:03, 19 March 2019 (UTC)
Reorganization Suggestions
The current definition of a tensor product is just the construction — instead of this, the definition should be given by the universal property, and then from the definition of a tensor product, a vector space representing it should be constructed. Check out definition 3.1 in http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf — Preceding unsigned comment added by Username6330 (talk • contribs) 02:05, 20 December 2017 (UTC)
- @Username6330: So, the idea is that in Wikipedia we prefer to give a concrete down-to-earth definition first even if it is not the correct one from theorists' point of view. We in fact give the universal property def in tensor product of modules since the target audience of the latter needs to see the correct definition first. — Taku (talk) 00:51, 30 December 2017 (UTC)
- (irony = on) Agreed. The idea of Wikipedia in many cases unfortunately is to first give a wrong description so that people get an incorrect idea, and only then, once they have understood the wrong definition, to provide the correct one, with the effect that they then no longer understand the correct concept (irony = off). It would be much better to give a motivation (and to make it clear that this is only a motivation) and then show how the precise definition evolves from that motivation. All good books do so. Just Wikipedia fails to do so in many places. :-/ — Preceding unsigned comment added by 217.95.169.8 (talk) 15:32, 13 January 2019 (UTC)
- In what fashion, exactly, is the constructive definition "wrong", however? What counterexample would you pose against it? mike4ty4 (talk) 09:00, 19 May 2019 (UTC)
Confusing choice of notation
The section Relation to dual space contains this passage
"an isomorphism can be defined by , when acting on pure tensors
- "
The use of both and in this passage is very confusing, and particularly when, immediately afterward, that same notation with subscripts has a different meaning (elements that are dual to each other).
Just in case I'm wrong about this, that should be understood as further reason to conclude that this passage is confusing! 50.205.142.35 (talk) 02:37, 27 November 2019 (UTC)
- I modified this section. Let me know if you think it's an improvement. Will Orrick (talk) 14:12, 6 February 2020 (UTC)